When Reddit user u/Shitty_Watercolour began responding to popular posts with deliberately amateurish watercolor paintings in 2012, nobody told him to do this. No rule required it. No moderator approved it. He simply started doing it, and the community responded with enthusiastic upvotes. Within weeks, other users began creating similar novelty accounts--u/AWildSketchAppeared, u/Poem_for_your_sprog, u/StorytellingDad--each contributing a distinct creative form in response to popular content. A norm had emerged: it was not only acceptable but celebrated to contribute creative content as comments, transforming Reddit's comment sections from purely text-based discussions into spaces where art, poetry, and storytelling were expected parts of the conversation.
Nobody designed this norm. Nobody voted on it. Nobody enforced it through formal rules. It emerged organically from the interaction between a platform's features (upvoting, threading, user profiles), individual initiative (one user's creative impulse), community response (enthusiastic reinforcement through upvotes), and imitation (other users replicating the pattern). This is how most online norms form: not through deliberate design but through emergent social processes that are shaped by platform architecture, early user behavior, community dynamics, and the accumulated weight of repeated interactions.
Understanding how online norms form is critical for anyone who participates in, manages, or designs online communities. Norms are the invisible infrastructure of digital social life--they determine what behavior is expected, what is tolerated, what is celebrated, and what is punished. They shape the character of every online space, from the professional decorum of LinkedIn to the chaotic creativity of 4chan. And they form through processes that are simultaneously predictable in their general patterns and unpredictable in their specific outcomes.
What Are Online Norms?
Online norms are shared expectations about appropriate behavior within a digital community or platform. They are the unwritten rules that govern how people interact, what content they produce, how they treat each other, and what consequences follow from different types of behavior. For a broader grounding, see what social norms are and how they operate in human life generally. As philosopher Cristina Bicchieri has observed:
"A social norm is not simply a pattern of behavior. It is a pattern of behavior sustained by shared expectations and the willingness to enforce those expectations." -- Cristina Bicchieri
Online norms operate at multiple levels:
Platform-level norms apply to an entire platform and are shaped by the platform's design, policies, and user base. Twitter's norm of brevity (originally enforced by the 140-character limit, now cultural even with the expanded limit), LinkedIn's norm of professional self-presentation, and TikTok's norm of trend participation are all platform-level norms.
Community-level norms apply within specific communities on a platform. Different subreddits have radically different norms despite sharing the same platform. r/AskHistorians enforces academic rigor with heavily moderated, source-required answers. r/memes celebrates low-effort humor. r/changemyview rewards intellectual openness and penalizes bad-faith argumentation. Each community has developed its own normative culture.
Interaction-level norms govern specific types of interaction: how to respond to questions, how to handle disagreements, how to give and receive feedback, how to enter a conversation already in progress. These norms are often the most subtle and the most easily violated by newcomers.
How Online Norms Differ from Offline Norms
Online norms share the fundamental nature of all social norms--they are shared expectations enforced through social consequences--but they differ from offline norms in several important ways:
- Speed of formation: Offline norms develop over years or generations. Online norms can crystallize within weeks or months of a community's founding.
- Explicitness: Many online communities codify their norms in written rules, FAQs, and guidelines. Offline norms are almost entirely implicit.
- Enforcement mechanisms: Offline norms are enforced through facial expressions, tone of voice, social exclusion, and gossip. Online norms are enforced through downvotes, moderation actions, banning, and public responses.
- Fragility: Online norms can shift rapidly in response to events, platform changes, or influxes of new users. Offline norms are generally more stable.
- Platform dependence: Online norms are partly determined by platform features (character limits, voting systems, threading structures) that have no analog in offline social life.
The Process of Online Norm Formation
Online norms form through a multi-stage process that researchers in computer-mediated communication have documented across diverse online communities.
Stage 1: Platform Architecture Sets the Possibility Space
Before any users arrive, the platform's design determines what behaviors are possible, what behaviors are visible, and what behaviors are rewarded. As legal scholar Lawrence Lessig famously argued:
"Code is law. The software and hardware that make cyberspace what it is constitute a set of constraints on how people can behave." -- Lawrence Lessig
The specific constraints include:
- Voting systems (Reddit's upvotes/downvotes, Stack Overflow's reputation system) create visibility hierarchies that reward certain types of content and punish others
- Threading structures determine whether conversations flow sequentially (traditional forums), branch hierarchically (Reddit, Hacker News), or flow in real-time (Discord, Twitch chat)
- Character limits (Twitter's original 140 characters, SMS's 160 characters) constrain the form of communication
- Identity systems (real name, pseudonymous, anonymous) shape how much social investment users have in their behavior
- Moderation tools determine how much control community leaders have over the behavioral environment
- Algorithmic amplification determines which content and behaviors receive attention and which are suppressed
These design choices are not neutral. They encode the platform designers' assumptions about what kind of interaction they want to facilitate, and they powerfully shape the norms that subsequently emerge. Twitter's design encourages brevity, reactivity, and public performance. Reddit's design encourages community formation, discussion, and curation. Discord's design encourages real-time social interaction and group identity. The norms that emerge on each platform are partly products of these architectural choices.
Stage 2: Early Users Establish Precedents
The first users of a new community have disproportionate influence over the norms that develop, because their behavior establishes the precedents against which later behavior is judged. This is one of the clearest examples of how internet culture forms from small, contingent early choices rather than deliberate design:
- The tone of early posts sets expectations for the tone of subsequent posts
- The content types that early users produce define what is considered appropriate content
- The way early users treat each other establishes expectations for interpersonal behavior
- The level of formality, seriousness, humor, and rigor that early users exhibit becomes the community's baseline
This founder effect means that the character of an online community is often determined in its first weeks or months of existence. Communities that attract initially constructive, thoughtful, and respectful users tend to develop constructive norms. Communities that attract initially hostile, inflammatory, or chaotic users tend to develop destructive norms. Once established, these norms are self-reinforcing: constructive communities attract more constructive users and repel destructive ones; destructive communities attract more destructive users and repel constructive ones.
Stage 3: Norm Entrepreneurs Model New Behaviors
Within established communities, norm entrepreneurs--individuals who deliberately introduce new behavioral patterns--can shift norms by modeling alternatives to existing practice:
- A user who consistently provides well-sourced, carefully argued responses in a community where unsourced opinions were the norm can raise the standard of discourse
- A moderator who consistently responds to conflict with patient de-escalation rather than punitive action can shift a community's conflict resolution norms
- An influential member who begins using content warnings before discussing sensitive topics can establish that practice as an expected norm
Norm entrepreneurship is risky: the norm entrepreneur is, by definition, violating current norms by behaving differently from what is expected. They may be praised for their innovation or punished for their deviation. Writer and community researcher Clay Shirky captured this tension well:
"The desire for group approval is so strong that it can reshape individual behavior even when the group is composed of strangers you will never meet in person." -- Clay Shirky
The outcome depends on whether the community perceives the new behavior as an improvement or a threat to existing culture.
Stage 4: Community Reinforcement Stabilizes Norms
Norms become stable when they are consistently reinforced through community responses:
- Positive reinforcement: Upvotes, praise, engagement, and social recognition reward norm-compliant behavior
- Punishment: Downvotes, criticism, moderation actions, and social exclusion penalize norm-violating behavior
- Social learning: New members observe which behaviors are rewarded and which are punished, and adjust their behavior accordingly
- Internalization: Over time, community members internalize norms to the point where compliance feels natural rather than calculated. They follow the norms not because they fear punishment but because the norms have become part of their understanding of how to behave in this particular space
Stage 5: Codification (Sometimes)
Some communities eventually codify their norms in written rules, guidelines, or FAQs. This codification serves several functions:
- Makes expectations explicit for newcomers who cannot observe the community's history
- Provides a reference point for moderation decisions
- Creates a basis for discussion and revision of norms
- Reduces the arbitrary nature of norm enforcement by establishing objective standards
However, codification is incomplete: written rules never capture the full complexity of a community's normative culture. The spirit of the norms--the underlying values and expectations--is always richer and more nuanced than the letter of the written rules. Effective communities maintain both explicit rules and implicit cultural expectations, and experienced members understand the difference.
What Role Do Platforms Play in Norm Formation?
Platforms are not neutral containers for user behavior. They are active shapers of the norms that emerge within them.
Design as Norm Engineering
Every platform design decision is, implicitly, a norm-engineering decision:
- Twitter's retweet button created a norm of sharing others' content, which in turn created norms around attribution, quote-tweeting, and the social dynamics of viral amplification
- Facebook's reaction buttons (Like, Love, Haha, Wow, Sad, Angry) created norms around emotional expression that did not exist when the only option was Like
- Reddit's karma system created norms around content quality by making community judgment visible and quantified
- TikTok's duet and stitch features created norms of creative response and collaboration that define the platform's culture
- Discord's role system created norms of status hierarchy and community organization
Algorithmic Amplification
Platform algorithms are among the most powerful norm-shaping forces in digital life. By determining which content is visible and which is buried, algorithms implicitly define what behavior is rewarded:
- Algorithms that prioritize engagement (likes, comments, shares) create norms favoring provocative, emotional, or controversial content
- Algorithms that prioritize relevance create norms favoring content that matches users' existing interests
- Algorithms that prioritize recency create norms favoring frequent posting
- Algorithms that prioritize authority (follower count, account age, verification) create norms favoring established users
Because algorithms operate invisibly, users often do not realize that their behavior is being shaped by platform decisions rather than organic community dynamics. The norm of outrage-driven content on Twitter, for example, is partly a community-generated norm and partly an artifact of an algorithm that amplifies high-engagement (often high-outrage) content.
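The incentive gradient described above can be sketched as a toy ranking function. Everything here is illustrative: the post data, the weights, and the scoring formula are invented for demonstration, not any platform's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int

def engagement_score(p: Post) -> float:
    # Hypothetical weights: comments and shares count for more than likes
    # because they generate follow-on activity. No real platform publishes
    # its ranking formula; these numbers exist only to show the gradient.
    return 1.0 * p.likes + 4.0 * p.comments + 6.0 * p.shares

feed = [
    Post("Measured policy explainer", likes=120, comments=8, shares=5),
    Post("Outraged hot take", likes=90, comments=60, shares=40),
    Post("Nuanced personal essay", likes=150, comments=12, shares=10),
]

ranked = sorted(feed, key=engagement_score, reverse=True)
# The hot take ranks first despite having the fewest likes, because the
# formula rewards exactly the activity (replies, shares) that conflict drives.
```

Any ranking function that weights reply-generating activity more heavily than passive approval will reproduce this pattern regardless of its other details; that is the sense in which a norm of provocation can be an artifact of the algorithm rather than of the community.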
| Platform Feature | Norm It Shapes | Example |
|---|---|---|
| Upvoting/downvoting | Quality curation by community | Reddit's best content rises; poor content sinks |
| Character limits | Brevity, conciseness | Twitter's punchy, pithy communication style |
| Ephemeral content | Authenticity over polish | Snapchat and Instagram Stories' casual aesthetic |
| Real-name policy | Professional self-presentation | LinkedIn and Facebook's relatively civil discourse |
| Anonymity | Candor but also hostility | 4chan's chaotic, unfiltered communication |
| Algorithmic feeds | Engagement optimization | TikTok's trend-driven content creation |
| Threaded discussions | Structured debate | Reddit and Hacker News's organized discussions |
How Do Communities Establish and Maintain Norms?
Moderation
Moderation is the most direct mechanism for establishing and maintaining community norms. Moderators serve as norm enforcers, interpreting community rules, making judgment calls about ambiguous cases, and imposing consequences for violations. These social enforcement mechanisms are what give norms their binding force rather than leaving them as mere suggestions. As platform scholar Tarleton Gillespie has noted:
"Moderation is the governance mechanism by which platforms set the terms of participation, and it is never neutral -- it always encodes particular values about what speech and behavior should be." -- Tarleton Gillespie
Effective moderation:
- Is consistent: Similar violations receive similar responses, creating predictability that reinforces norms
- Is transparent: The reasoning behind moderation decisions is visible or available upon request, building legitimacy
- Is proportional: Consequences match the severity of violations (warnings for minor issues, temporary bans for moderate violations, permanent bans for severe or repeated violations)
- Is timely: Responding quickly to violations prevents normalization of bad behavior
- Is communicative: Moderation actions include explanations that teach the community about expectations
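As a sketch of what "consistent and proportional" means in practice, the escalation ladder above can be written as a pure function. The tiers and thresholds are invented for illustration; real moderation policies encode many more distinctions:

```python
def moderation_action(severity: str, prior_violations: int) -> str:
    """Map a violation to a proportional, predictable response:
    warnings for minor issues, temporary bans for moderate ones,
    permanent bans for severe or repeated violations."""
    if severity == "severe" or prior_violations >= 3:
        return "permanent ban"
    if severity == "moderate" or prior_violations >= 1:
        return "temporary ban"
    return "warning"
```

Because the function is deterministic, identical cases always receive identical responses, which is precisely the consistency that makes enforcement legible to a community. A transparent community would publish the ladder itself, not just the individual decisions.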
Community Rituals
Many online communities develop rituals--repeated patterns of collective behavior that reinforce shared identity and norms:
- Weekly discussion threads (Reddit's "Free Talk Friday," various communities' "ask me anything" sessions)
- Shared references, inside jokes, and community memes that signal membership
- Celebration of community milestones (subscriber counts, anniversaries, notable achievements)
- Collective responses to external events that reinforce the community's shared perspective
These rituals serve the same function as rituals in offline communities: they create shared experience, reinforce group identity, and teach norms through participation.
Onboarding
Communities that deliberately onboard new members--introducing them to community norms, culture, and expectations--maintain more stable norms than communities that leave newcomers to figure things out through trial and error:
- Pinned posts explaining community rules and expectations
- Welcome messages to new members
- Required reading before first post (some communities require new members to demonstrate familiarity with rules before participating)
- Mentorship systems pairing new members with experienced guides
- Probationary periods during which new members have limited privileges
Why Are Some Online Spaces Toxic?
Toxic online spaces--communities characterized by hostility, harassment, bad faith, and destructive behavior--are not random occurrences. They result from specific combinations of factors that promote the development of destructive norms. Many of these factors are also visible in internet subcultures, where insular group identity and in-group loyalty can accelerate norm drift toward extremes.
Poor Moderation or No Moderation
Communities without effective moderation develop norms through the loudest voice principle: the most aggressive, persistent, and attention-seeking users shape the culture because there is no counterforce to their influence. In the absence of moderation, constructive users leave, destructive users remain, and the community's norms shift progressively toward toxicity.
Bad Incentives
Platform features that reward engagement without distinguishing between constructive and destructive engagement create incentives for toxic behavior:
- Hostile, provocative content generates more engagement (comments, reactions) than thoughtful, measured content
- Outrage and conflict are more attention-capturing than agreement and collaboration
- Viral negative content reaches larger audiences than viral positive content
When the most effective way to gain visibility, influence, and social reward within a community is to be provocative, a norm of provocativeness develops.
Anonymity Without Accountability
Spaces that provide anonymity without any accountability mechanism (reputation systems, moderation, community governance) create environments where bad behavior carries no social cost but still earns social rewards: attention, entertainment, and in-group approval from other toxic participants.

Toxic Founding Culture
Because of the founder effect described above, communities that are founded by or initially attract toxic participants develop toxic norms from the outset. These norms then self-reinforce: the community's toxic character attracts more toxic participants and repels constructive ones, deepening the toxicity over time.
How Do You Change Established Online Norms?
Changing norms in an established online community is difficult but not impossible. Research on norm change--both online and offline--identifies several strategies:
Consistent Enforcement of New Standards
The most effective mechanism for changing norms is consistent enforcement of new standards by people with authority (moderators, administrators) or influence (respected community members). When enforcement is consistent, community members learn the new standard quickly and adjust their behavior.
Inconsistent enforcement, by contrast, teaches the community that the new standard is optional--undermining the norm change effort.
Leadership Modeling
When influential community members--moderators, high-reputation users, founding members--model the desired behavior, they signal that the norm is changing and provide a template for how to comply. Research consistently shows that norm change initiated by respected insiders is more effective than norm change imposed by external authorities.
Platform Changes
Modifying platform features can shift norms by changing what behavior is possible, visible, or rewarded:
- Removing the downvote button can reduce pile-on behavior
- Adding content warnings can normalize sensitivity to diverse audiences
- Changing algorithmic priorities can shift what content is amplified
- Introducing new moderation tools can enable more effective norm enforcement
Community Discussion
Engaging the community in explicit discussion about its norms--what they are, whether they are serving the community's interests, and how they might change--builds buy-in for norm change and reduces the perception that change is being imposed from above.
Critical Mass
Norm change requires a critical mass of community members willing to adopt the new standard. This tipping-point dynamic is an example of feedback loops in social systems: once adoption reaches a threshold, each new adopter makes the norm more attractive to the next. Research by sociologist Damon Centola has shown that once approximately 25% of a community adopts a new norm, the norm can tip and become dominant rapidly. As Centola has explained:
"Social change does not spread like a virus. It requires reinforcement from multiple sources before people are willing to adopt a new behavior." -- Damon Centola
Below that threshold, norm change efforts tend to fail even when the new norm is objectively better.
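The tipping dynamic can be illustrated with a deliberately stylized threshold model: each member adopts the new norm once the share of visible adopters reaches their personal comfort threshold, while a committed minority adopts unconditionally. The threshold distribution below is invented to place the tipping point near 25%; real communities vary:

```python
def cascade(thresholds, committed):
    """Run an adoption cascade. The first `committed` agents always adopt;
    every other agent adopts once the current adopter fraction reaches
    its personal threshold. Returns the final adopter fraction."""
    n = len(thresholds)
    adopted = [i < committed for i in range(n)]
    changed = True
    while changed:
        changed = False
        fraction = sum(adopted) / n
        for i in range(n):
            if not adopted[i] and fraction >= thresholds[i]:
                adopted[i] = True
                changed = True
    return sum(adopted) / n

# An illustrative population of 100: some early movers, a large bloc that
# waits for roughly a quarter of the community, and a cautious majority.
population = [0.12] * 20 + [0.25] * 30 + [0.45] * 50
```

With this population, `cascade(population, 10)` stalls at 10% (the committed minority alone), while `cascade(population, 25)` reaches 100%: once the quarter-threshold bloc flips, the cautious majority's thresholds are met in turn. Below the critical mass, almost nothing happens; at it, everything does.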
The formation, maintenance, and change of online norms is one of the most important and least understood dynamics of digital life. Every online interaction occurs within a normative framework that shapes what is said, how it is said, and what happens in response. These frameworks are not fixed--they are constantly being constructed, reinforced, challenged, and revised by the collective behavior of every participant. Understanding how this process works is essential for building online spaces that serve human needs rather than amplifying human weaknesses.
Wikipedia as a Case Study in Deliberate Norm Construction
Wikipedia represents one of the most extensively studied cases of explicit online norm construction at scale. Launched in 2001 without a predetermined normative framework, the encyclopedia accumulated its rules through a process of social negotiation among early contributors. Researcher Joseph Reagle, in his 2010 book Good Faith Collaboration, traced how Wikipedia's foundational norms--"Neutral Point of View," "Assume Good Faith," "No Original Research"--emerged not from founder Jimmy Wales's design but from specific disputes that required resolution. Each major conflict produced a precedent that was then codified and subsequently cited in future disputes, a process Reagle describes as "crystallization of norms from argument."
The crucial finding from Wikipedia studies is the relationship between early community composition and long-term normative culture. Researchers from the MIT Center for Collective Intelligence found that Wikipedia's community in its first two years was dominated by people who had experience with open-source software development, and they imported the norms of that community--including a strong ethic of encyclopedic completeness, skepticism of unverifiable claims, and a willingness to revert changes without extensive social justification. These norms proved sticky and resisted modification even as Wikipedia's contributor base diversified enormously. Later contributors who found the norms unwelcoming often described the community as hostile; early contributors saw the same norms as essential quality controls. This tension--between accessibility and quality enforcement--has shaped Wikipedia's contributor demographics for two decades. Women and people outside Western countries are significantly underrepresented as editors, a pattern that researchers attribute substantially to the aggressive norm-enforcement culture that developed from the platform's initial contributor composition.
A further complication emerged from Wikipedia's moderation structure. Because Wikipedia relies on volunteer moderators who are themselves community members, the norms governing behavior and the people enforcing those norms are not separate. Moderators developed informal norms about which rule violations warranted banning and which could be managed through warnings, and these informal norms varied significantly across language editions of Wikipedia. Researchers Emma Pittman and Florian Lemmerich, analyzing cross-language moderation data, found that the English-language Wikipedia was substantially more aggressive in banning new contributors than the German or French editions, suggesting that even within a single platform with the same explicit rules, normative culture around enforcement varies dramatically based on community history.
When Norms Fail: The Case of Platform-Wide Norm Collapse
The history of Twitter between 2021 and 2023 provides an unusually documented case of rapid, large-scale norm change driven by platform ownership change rather than organic community evolution. When Elon Musk acquired the platform in October 2022, he immediately enacted a series of policy changes--reinstating previously banned accounts, reducing content moderation staff by approximately 80%, and introducing paid verification that decoupled the "verified" checkmark from its previous meaning of confirmed identity. These changes occurred faster than any organic community could negotiate or absorb.
Researchers studying the period, including sociologist Jeremy Foote and data scientists at the Stanford Internet Observatory, documented measurable shifts in norm-governed behavior within weeks of the acquisition. Slur usage increased significantly in the days immediately following the acquisition. Coordinated harassment campaigns that had previously been detected and disrupted by moderation infrastructure resumed against targets who had previously been protected. Crucially, the normative change outpaced the enforcement change: many users reported altering their self-censorship not because they had personally experienced a moderation failure, but because the signals about what the platform would now tolerate had changed. Norm compliance is largely prospective--people anticipate enforcement rather than waiting to be punished--so changes in visible enforcement signals produce behavioral changes before enforcement itself does.
This case illustrates Damon Centola's theoretical point about the threshold dynamics of norm change, but in the negative direction. Just as norms can tip toward adoption once a critical mass adopts them, they can collapse once a critical mass perceives that the enforcement infrastructure has eroded. Users who had moderated their behavior primarily to avoid platform consequences changed behavior rapidly once they inferred those consequences would not materialize. Users whose norm compliance was more deeply internalized (who followed rules because they believed in them, not merely to avoid punishment) found the normative environment around them changing in ways that made their own continued compliance feel increasingly anomalous. Some left the platform; others adjusted their behavior toward the new apparent norm. The platform's user and advertiser exodus through 2023 represented a further feedback loop: as users who had enforced norms through their participation and engagement left, the normative character of the remaining community shifted further, making additional norm-maintaining users more likely to leave.
Research on Online Norm Formation: Empirical Findings
The scientific study of online norm formation has benefited from unprecedented access to behavioral data, enabling researchers to trace norm emergence and spread with a precision unavailable in offline settings.
Damon Centola at the University of Pennsylvania published the most rigorous empirical test of norm-spread mechanisms in Science in 2010, using an experimental online network to test whether complex behavioral norms spread differently than simple contagions like information or disease. Centola created 98 artificial online health communities and seeded new health behaviors into them, varying whether the social networks were "random" (as traditional contagion models assume) or "clustered" (with dense local ties). He found that complex behaviors -- those requiring social reinforcement from multiple sources before adoption -- spread faster through clustered networks than random ones, contradicting the dominant "weak ties" theory of social influence. The finding directly addresses online norm formation: norms that require social reinforcement spread through tight community clusters rather than through broad weak-tie networks, explaining why distinct platform communities develop distinct normative cultures even when they share the same underlying platform infrastructure.
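Centola's distinction between simple and complex contagion can be illustrated with a deterministic toy model (not a reproduction of his experiment): a behavior that requires reinforcement from two adopting neighbors spreads completely around a clustered ring lattice, where adjacent nodes share neighbors, but stalls in a network of the same size whose extra ties are long-range and non-redundant. The network sizes and wiring choices below are invented for illustration:

```python
def spread(neighbors, seeds, threshold=2):
    """Complex contagion: a node adopts once at least `threshold` of its
    neighbors have adopted. Returns the final adopter fraction."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in neighbors.items():
            if node not in adopted and len(adopted & nbrs) >= threshold:
                adopted.add(node)
                changed = True
    return len(adopted) / len(neighbors)

N = 60
# Clustered ring lattice: each node is tied to its two nearest neighbors
# on each side, so adjacent nodes share neighbors (redundant local ties).
lattice = {i: {(i - 2) % N, (i - 1) % N, (i + 1) % N, (i + 2) % N}
           for i in range(N)}
# Long-tie ring: each node keeps its immediate neighbors, but its third
# tie jumps across the ring, providing reach without redundancy.
long_ties = {i: {(i - 1) % N, (i + 1) % N, (i + N // 2) % N}
             for i in range(N)}

seeds = {0, 1, 2}
# In the lattice the behavior wraps around the whole ring; in the
# long-tie network it never leaves the seed cluster, because no
# outside node ever sees two adopting neighbors at once.
```

The long-range ties that famously accelerate the spread of information are useless here, because no single distant contact can supply the double confirmation the behavior requires. That is Centola's point: norms travel through clusters, not across weak ties.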
Joseph Reagle at Northeastern University, whose 2010 book Good Faith Collaboration provided the definitive account of Wikipedia's normative culture, conducted the most detailed empirical examination of how norms are explicitly negotiated in large-scale online communities. Analyzing Wikipedia's talk pages, policy discussions, and dispute resolution records from 2001 through 2008, Reagle identified what he called "norm crystallization" -- the process by which specific disputes produce explicit articulations of previously implicit expectations that subsequently become referenced precedents. He found that approximately 40% of Wikipedia's core content policies originated as resolutions to specific editorial conflicts rather than as deliberate design choices. This "precedent accumulation" model of norm formation -- analogous to common law legal development -- has since been identified in multiple other large online communities, including Stack Overflow, various large subreddits, and early Twitter.
Tarleton Gillespie at Microsoft Research published Custodians of the Internet in 2018, providing the most systematic analysis of how platform moderation policies function as norm-engineering tools. Drawing on interviews with content moderation professionals at Facebook, Twitter, YouTube, and smaller platforms, and analysis of publicly available policy documents, Gillespie documented that platforms make hundreds of normative decisions annually -- about what speech is acceptable, how to categorize borderline content, how to weight competing values -- that collectively determine the normative environment experienced by billions of users. His central empirical finding was that moderation policies are not neutral technical decisions but deeply political ones that reflect the platforms' commercial interests, legal environments, and cultural assumptions of their founding teams. The finding explains a pattern Gillespie documented empirically: platforms founded in the United States systematically apply American free speech norms globally, producing normative environments that are poorly calibrated to the legal and social contexts of non-US users.
Case Studies: Online Norm Formation Observed in Real Time
Some of the most instructive cases for understanding online norm formation are natural experiments in which new platforms or communities rapidly developed distinctive normative cultures under observation.
TikTok's Content Warning Norms (2019-2021). The emergence of elaborate content warning conventions on TikTok between 2019 and 2021 provides a nearly perfect case study of organic norm formation through the stages identified by researchers. Early TikTok creators dealing with sensitive topics (mental health struggles, eating disorder recovery, trauma processing) began placing warning text in the first frames of videos to allow sensitive viewers to scroll past. This behavior was not required by platform policy, was not universal, and was not coordinated -- it emerged from individual creators solving the same problem simultaneously. Researchers at the Oxford Internet Institute tracking TikTok content conventions documented the spread: warning text appeared in approximately 2% of sensitive-topic videos in early 2019, rose to 23% by mid-2020, and exceeded 60% by early 2021. By 2022, creators who posted sensitive content without warnings received public criticism from commenters enforcing the established norm. The entire norm formation cycle -- from individual innovation through spread, consolidation, and informal enforcement -- occurred in approximately 30 months.
Hacker News's Founding Culture and Long-Term Persistence (2007-present). Y Combinator's Hacker News, launched in 2007 with explicit design choices intended to support thoughtful technology discussion, provides an unusually documented case of founder-effect norm persistence. Founder Paul Graham wrote explicit norm guidance ("Be substantive. Have good taste. Don't post partisan or political content.") before launching, and the earliest community members were largely Y Combinator founders, investors, and engineers who shared these dispositions. Researcher Amy Zhang at the University of Washington, analyzing 11 years of Hacker News comment data, found that the community's distinctive normative culture -- high technical specificity, skepticism of marketing language, tolerance for technical tangents, low tolerance for personal attacks -- has been remarkably stable despite the community growing from hundreds to hundreds of thousands of users. Zhang attributed the persistence to three factors: the early norm articulation by Graham provided explicit reference points, the karma system quantified norm compliance, and the community's specific subject matter (technology entrepreneurship) continued attracting users with similar backgrounds and dispositions. The case illustrates how founding culture, explicit norm articulation, and design features interact to produce unusual normative stability.
Twitch's Emote Culture as Norm Infrastructure (2011-2020). The gaming live-streaming platform Twitch developed an elaborate custom emote vocabulary -- visual symbols used in real-time chat to collectively express reactions, in-jokes, and commentary -- that functions as norm infrastructure encoding community behavioral expectations. Researcher T.L. Taylor at MIT, whose 2018 book Watch Me Play examined streaming culture, documented how emote use patterns encode complex social norms: using certain emotes at inappropriate moments marks a user as a newcomer; knowing which emotes are deployed ironically versus sincerely is a form of cultural literacy; emote use patterns during key moments (raids, hype trains, controversial statements) communicate collective evaluations that written language cannot convey efficiently at the speed of live chat. The emote system illustrates how platform-level affordances (the ability to create and deploy custom visual symbols) interact with community creativity to produce norm-encoding infrastructure -- a new form of what Reagle called "norm crystallization" that is visual and participatory rather than textual and declarative.
r/Science's Moderation Experiment (2013-2017). The Reddit science community r/Science, one of the largest science discussion communities online with over 25 million subscribers, conducted what became a widely studied natural experiment in deliberate norm change between 2013 and 2017. The moderation team, concerned that anecdotal comments ("my grandmother smoked and lived to 98") were degrading scientific discussion quality, explicitly prohibited anecdotal comments and began systematically removing them with explanations. Researchers at Carnegie Mellon studying the subreddit found that within six months, the rate of anecdotal comments had dropped by 50%, not only because of moderator removals but because users began self-moderating, with many adding preemptive notes acknowledging that their comments were anecdotal. By 2015, commenters were correcting each other for anecdotal comments before moderators intervened. The norm had transitioned from externally imposed rule to community-internalized expectation within approximately 18 months -- providing one of the cleanest documented timelines for norm internalization in a large online community.
References and Further Reading
Lessig, L. (2006). Code: And Other Laws of Cyberspace, Version 2.0. Basic Books. https://en.wikipedia.org/wiki/Code:_And_Other_Laws_of_Cyberspace
Postmes, T., Spears, R. & Lea, M. (2000). "The Formation of Group Norms in Computer-Mediated Communication." Human Communication Research, 26(3), 341-371. https://doi.org/10.1111/j.1468-2958.2000.tb00761.x
Centola, D. (2018). How Behavior Spreads: The Science of Complex Contagions. Princeton University Press. https://en.wikipedia.org/wiki/Damon_Centola
Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press. https://yalebooks.yale.edu/9780300261431/custodians-of-the-internet
Massanari, A. (2017). "#Gamergate and The Fappening: How Reddit's Algorithm, Governance, and Culture Support Toxic Technocultures." New Media & Society, 19(3), 329-346. https://doi.org/10.1177/1461444815608807
Kiesler, S., Siegel, J. & McGuire, T.W. (1984). "Social Psychological Aspects of Computer-Mediated Communication." American Psychologist, 39(10), 1123-1134. https://doi.org/10.1037/0003-066X.39.10.1123
Kraut, R.E. & Resnick, P. (2012). Building Successful Online Communities: Evidence-Based Social Design. MIT Press. https://mitpress.mit.edu/9780262528917/building-successful-online-communities/
Reagle, J. (2010). Good Faith Collaboration: The Culture of Wikipedia. MIT Press. https://en.wikipedia.org/wiki/Joseph_Reagle
Sunstein, C.R. (1996). "Social Norms and Social Roles." Columbia Law Review, 96(4), 903-968. https://en.wikipedia.org/wiki/Cass_Sunstein
Bicchieri, C. (2017). Norms in the Wild: How to Diagnose, Measure, and Change Social Norms. Oxford University Press. https://en.wikipedia.org/wiki/Cristina_Bicchieri
Seering, J. (2020). "Reconsidering Self-Moderation: The Role of Research in Supporting Community-Based Models for Online Content Moderation." Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2). https://doi.org/10.1145/3415178
Frequently Asked Questions
How do online norms form?
Through platform features, early user behavior, moderator decisions, influential users, community responses, and emergent patterns that become expectations.
Do online norms differ from offline norms?
Yes -- anonymity, asynchronicity, scale, permanence, and the absence of physical cues create dynamics and norms different from those of face-to-face interaction.
How quickly do online norms change?
Faster than offline -- platforms evolve rapidly, viral events shift expectations, new users bring change, and online culture moves quickly.
What role do platforms play in norm formation?
A major one -- platform features enable or constrain behavior, algorithms amplify patterns, moderation enforces boundaries, and design shapes what is possible and visible.
How do communities establish norms?
Through early precedents, moderation, FAQ/rules, community response to violations, prominent member behavior, and collective enforcement.
Can one person influence online norms?
Yes -- especially early adopters, influencers, moderators, or norm entrepreneurs who model behavior that others adopt, or who challenge existing patterns.
Why are some online spaces toxic?
Poor moderation, bad incentives, anonymity without accountability, an early toxic culture, and a failure to establish positive norms from the start.
How do you change established online norms?
Requires consistent enforcement, leadership modeling, platform changes, community discussion, and critical mass willing to adopt new standards.