In the spring of 1951, at Swarthmore College in Pennsylvania, Solomon Asch began what would become one of the most frequently cited and most disturbing experiments in the history of social psychology. Asch recruited male undergraduates and told them they were participating in a study of visual perception. They were seated around a table with six or seven other participants and shown a series of cards. Each card displayed a single vertical line on the left and three vertical lines of differing lengths on the right. The task was straightforward to the point of triviality: identify which of the three lines on the right matched the standard line on the left. The correct answer was never genuinely ambiguous. The differences between lines were large enough — sometimes inches — that under ordinary conditions, subjects answered correctly more than 99 percent of the time.
What subjects did not know was that everyone else around the table was a confederate, an actor briefed to give the same wrong answer on certain predetermined trials. When the confederates unanimously declared that an obviously shorter line was the matching line, what would the naive subject do?
The results, published in Psychological Monographs in 1956 after earlier reports beginning in 1951, were stark. Across multiple studies involving 123 naive subjects, each completing 18 trials of which 12 were critical, 37 percent of critical-trial responses were conforming — subjects gave the wrong answer because everyone else had. When Asch examined individual subjects rather than individual trials, the picture was equally striking: 75 percent of all subjects conformed at least once. Only 25 percent of subjects maintained independence across every single trial. The experiment had set out to study perception. What it revealed, instead, was the extraordinary power of social consensus to override private judgment, even when the private judgment was objectively correct.
Asch's experiments gave scientific form to a tendency that had long been recognized in politics, economics, and everyday life under various names: mob mentality, herd behavior, mass hysteria, social contagion. Today, the umbrella term most widely used in popular discourse is the bandwagon effect — the phenomenon by which people adopt beliefs, behaviors, products, or preferences not because of independent evaluation but because they observe, or believe, that others have already done so. The name itself is telling. In nineteenth-century American politics, a literal bandwagon — a wagon carrying a brass band — would lead a parade. Jumping on the bandwagon meant attaching yourself to whatever or whoever appeared to be winning. The phrase carries an implied premise: success attracts more success, popularity begets more popularity, and the crowd's apparent choices function as a signal that overrides individual reasoning.
This article traces the intellectual lineage of that idea, examines the cognitive science that explains it, and considers the cases — more numerous than critics of conformity tend to acknowledge — in which following the crowd is not a failure of reason but an expression of it.
"That we have found the tendency to conformity in our society so strong that reasonably intelligent and well-meaning young people are willing to call white black is a matter of concern." — Solomon Asch, "Opinions and Social Pressure," 1955
Intellectual Lineage
The academic study of collective behavior predates Asch by several decades. The first rigorous experimental approach to the problem of social influence on individual judgment came from the Gestalt-influenced social psychology of the interwar years. Muzafer Sherif, working at Columbia University in the 1930s, used a perceptual phenomenon called the autokinetic effect to study norm formation in groups. The autokinetic effect is a well-documented optical illusion: a stationary point of light viewed in a completely darkened room appears to move. Because there is no fixed reference point, individuals develop wildly different estimates of how far the light has traveled — estimates ranging, in Sherif's studies, from one to ten inches.
Sherif's critical insight, reported in his 1936 book The Psychology of Social Norms, was what happened when subjects made their estimates not in isolation but in groups. Individual estimates, which varied substantially when made alone, converged rapidly when subjects reported them aloud in the presence of others. Over successive sessions, the group arrived at a shared norm — a common estimate — without any explicit discussion, negotiation, or instruction to agree. More remarkably, when individuals who had formed group norms were later tested alone, they continued to use the group's estimate rather than returning to their original private judgment. Sherif interpreted this as evidence that social reality fills the vacuum left by physical ambiguity: when the external world provides no reliable anchor for perception, other people's judgments function as the anchor.
This was the first experimental demonstration of what would later be called informational social influence — the use of others' behavior as evidence about an uncertain world. But Sherif's situation was one of genuine ambiguity. Asch's contribution fifteen years later was to show that social pressure could override judgment even when the situation was not ambiguous at all, when private perception was clear and correct. This second mode of influence — conformity driven not by uncertainty but by the social costs of appearing deviant — would be formally labeled normative social influence by Morton Deutsch and Harold Gerard in their landmark 1955 paper, "A Study of Normative and Informational Social Influences Upon Individual Judgment," published in the Journal of Abnormal and Social Psychology.
Deutsch and Gerard's distinction between informational and normative influence became the theoretical spine of social influence research for the next half-century. Informational influence explains why a new investor in an unfamiliar market might follow the trades of experienced investors: others' behavior carries genuine information. Normative influence explains why a juror might publicly agree with a verdict they privately doubt: the social cost of dissent exceeds the psychological cost of capitulation. The distinction maps roughly, though imperfectly, onto the psychological distinction between private acceptance (genuine belief change) and public compliance (surface agreement without inner conviction).
The economic formalization of these ideas came in 1992, when Sushil Bikhchandani, David Hirshleifer, and Ivo Welch published "A Theory of Fads, Fashion, Custom, and Cultural Change as Informational Cascades" in the Journal of Political Economy. Their model provided a precise account of how rational actors — people genuinely trying to maximize utility by using available information — could nonetheless produce irrational collective outcomes. The central concept, the informational cascade, describes a situation in which it becomes individually rational to ignore one's private information and instead imitate others, because the aggregate information implied by others' choices outweighs one's own private signal. Once enough people have made a particular choice, a rational observer — even one with private information pointing in the opposite direction — may find it optimal to follow the crowd. The cascade is self-reinforcing and can become self-perpetuating even when the original information triggering it was wrong, weak, or purely coincidental.
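The cascade logic can be made concrete with a short simulation. The sketch below is a simplified variant in the spirit of the Bikhchandani-Hirshleifer-Welch model, not their exact formulation: a binary state, symmetric private signals, and ties broken by following one's own signal, with all names and parameters illustrative.

```python
import random

def run_cascade(true_state=1, p=0.7, n_agents=30, seed=0):
    """Simulate a sequence of Bayesian agents choosing +1 or -1.

    Each agent receives a private signal matching true_state with
    probability p, observes all earlier choices, and acts on the net
    count of signals it can infer from informative predecessors.
    Once that net count reaches +/-2, it is rational to ignore the
    private signal entirely: a cascade has begun.
    """
    rng = random.Random(seed)
    d = 0  # net count of private signals inferred from informative choices
    choices = []
    for _ in range(n_agents):
        s = true_state if rng.random() < p else -true_state  # private signal
        if d >= 2:
            c = 1            # up-cascade: imitate regardless of s
        elif d <= -2:
            c = -1           # down-cascade
        else:
            total = d + s
            c = s if total == 0 else (1 if total > 0 else -1)
            d += c           # outside a cascade, the choice reveals the signal
        choices.append(c)
    return choices
```

With p well below 1, some random seeds produce a cascade on the wrong answer: if two net wrong signals happen to arrive early, every subsequent rational agent imitates them, which is exactly the fragility the model is meant to expose.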
Robert Cialdini, whose work on persuasion and compliance appeared in his 1984 book Influence: The Psychology of Persuasion, named the relevant principle social proof: people look to the behavior of others to determine correct behavior, particularly in conditions of uncertainty. Cialdini documented this principle across domains ranging from laugh tracks in television (which genuinely increase audience laughter) to tip jars seeded with bills (which increase tipping rates), and articulated it as one of the fundamental shortcuts of human social cognition. His framework was explicitly prescriptive as well as descriptive — social proof, he argued, is a heuristic of genuine value that becomes maladaptive primarily when it is manipulated or applied in conditions where others' choices do not, in fact, carry reliable information.
What the Research Shows
The empirical study of bandwagon effects has expanded well beyond the laboratory into natural experiments, computational social science, and field studies. Several findings stand out for both their methodological rigor and the magnitude of the effects they reveal.
Asch's original work has been replicated in dozens of countries and modified to test the boundary conditions of conformity. A 1996 meta-analysis by Rod Bond and Peter B. Smith, published in Psychological Bulletin, examined 133 studies from 17 countries spanning the period 1952 to 1994. Bond and Smith found that conformity rates varied significantly across cultures — they were lower in individualistic societies and higher in collectivist ones — but that the effect persisted across virtually every cultural context tested. The United States, despite its cultural mythology of independence, showed conformity rates in the midrange compared to other nations. The meta-analysis also confirmed Asch's finding that group size mattered only up to a point: conformity rose sharply as groups increased from one to three confederates and then plateaued. A unanimously wrong group of three produced nearly as much conformity as a unanimously wrong group of nine.
The most consequential study of the bandwagon effect in cultural markets came from Matthew Salganik, Peter Sheridan Dodds, and Duncan Watts, published in Science in 2006. The researchers created a simulated music market called "Music Lab" in which approximately 14,000 participants could listen to and download songs by unknown bands. Participants were randomly assigned to one of two conditions: an independent condition, in which they could see no information about others' choices, or a social influence condition, in which download counts for each song were displayed. The social influence condition was further subdivided into eight separate "worlds," each starting from zero and evolving independently.
The results were striking on two dimensions. First, social influence dramatically increased inequality in market outcomes: popular songs in the social influence condition became much more popular, and unpopular songs became much less popular, relative to the independent condition — which produced a more moderate distribution. Second, and more troubling for theories of cultural meritocracy, the social influence worlds showed marked unpredictability: songs that ranked near the top in one world sometimes ranked near the bottom in another, despite the fact that all participants across all conditions were drawn from the same population and listened to the same songs. One song in particular oscillated between first place and eighth place across different worlds. The researchers concluded that early random fluctuations in download counts — essentially noise — could initiate cascades that bore little relationship to underlying quality as measured by the independent condition. Social influence did not simply amplify underlying preferences; it actively reshaped them.
Jan Lorenz and colleagues published a complementary study in 2011 in the Proceedings of the National Academy of Sciences, examining how social influence affects the accuracy of crowd estimates. When participants were shown others' estimates before making their own, two effects emerged: the crowd's mean estimate moved closer to the correct answer (some truth-tracking), but the diversity of estimates collapsed dramatically, and the probability of herding on a wrong answer increased. The crowd became simultaneously more uniform and less reliable as an aggregate — the conditions under which "wisdom of crowds" benefits apply were undermined precisely by the social exposure that is ubiquitous in real-world information environments.
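The mechanism behind that collapse can be illustrated with a toy averaging model. This is a hypothetical DeGroot-style update, not Lorenz et al.'s actual experimental protocol: each round of social exposure, every estimate moves a fraction alpha toward the current group mean.

```python
import statistics

def influence_round(estimates, alpha=0.5):
    """One round of social exposure: each estimate moves a fraction
    alpha of the way toward the current group mean."""
    m = statistics.mean(estimates)
    return [e + alpha * (m - e) for e in estimates]
```

The group mean is preserved exactly while the spread shrinks by a factor of (1 - alpha) per round. So if four people guess 40, 55, 90, and 140 for a quantity whose true value is 100, repeated rounds herd everyone toward 81.25: the crowd grows more unanimous without growing more accurate.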
Research on financial markets has consistently found evidence of herding behavior that cannot be explained by fundamentals alone. A series of studies by Sias (2004, Review of Financial Studies) documented that institutional investors — sophisticated, professional, highly incentivized actors — exhibited significant herding: their trading decisions were positively correlated with each other's prior decisions even after controlling for public information. This does not prove irrationality; it is consistent with informational cascades, with reputational incentives, and with the rational inference that other institutions have conducted research worth imitating. But it confirms that social influence operates in even the most information-dense and incentive-laden environments.
The Cognitive Science
Several overlapping mechanisms explain why social information is so powerful an influence on individual judgment, and why the bandwagon effect can operate even when people are aware of it.
The most direct account is the informational one. Bounded rationality — the idea, developed by Herbert Simon in the 1950s and formalized in subsequent decades — holds that human decision-making is constrained by limited time, limited information-processing capacity, and limited cognitive resources. In a world where gathering and evaluating all relevant information is prohibitively costly, heuristics serve as rational shortcuts. Using others' behavior as a proxy for their accumulated private information is not inherently foolish; it is a reasonable response to a real epistemic problem. The question is when the heuristic leads reliably toward good outcomes and when it leads away from them.
Daniel Kahneman's dual-process framework, developed with Amos Tversky over several decades and synthesized in Kahneman's 2011 book Thinking, Fast and Slow, provides a complementary account. System 1 thinking — fast, automatic, associative — is highly sensitive to social signals; the sight of a crowd queuing outside a restaurant triggers an automatic inference of quality. System 2 thinking — slow, deliberate, effortful — can interrogate that inference, but doing so requires motivation and cognitive resources, and it faces the additional burden that the System 1 inference has already shaped what information gets retrieved from memory and how it gets weighted.
Work by Nathaniel Daw and colleagues, using computational neuroscience methods, has shown that the brain's reward prediction systems treat social information similarly to environmental feedback. Observing another person succeed at a choice updates the observer's value estimate for that choice, much as experiencing a reward directly would. This "social value learning" proceeds through mechanisms overlapping with those used for individual reinforcement learning — suggesting that sensitivity to social information is not a cognitive anomaly or a bug but a deeply embedded feature of human neural architecture.
The social comparison process, formalized by Leon Festinger in 1954 in the Human Relations journal, adds a motivational dimension. People have a basic drive to evaluate their opinions and abilities; when objective standards are unavailable, they compare themselves to others. The pressure to align with social consensus arises partly from this comparison drive — deviation from the group implies that one's own judgment may be wrong, which is uncomfortable in ways that produce corrective cognitive pressure toward conformity.
Related Concepts Compared
The bandwagon effect is frequently conflated with neighboring phenomena. The distinctions are worth preserving because they imply different causal mechanisms and different interventions.
| Concept | Core Mechanism | Key Difference from Bandwagon Effect |
|---|---|---|
| Bandwagon Effect | Adopting beliefs or behaviors because others appear to have adopted them; popularity as a signal of quality or correctness | The reference class is perceived aggregate popularity |
| Informational Cascade | Rationally ignoring private information when the inferred information in others' choices outweighs it | Explicitly rational within the model; no normative pressure required |
| Groupthink | Suppression of dissent within a cohesive group due to desire for harmony; illusion of unanimity | Operates within deliberating groups, not anonymous markets or diffuse populations |
| Social Proof (Cialdini) | Using others' behavior as a shortcut to determine appropriate behavior in uncertain situations | Broader principle of which bandwagon is a specific application; more prescriptively neutral |
| Mere Exposure Effect | Increased preference for stimuli through repeated exposure; familiarity breeds liking | Not driven by others' choices but by personal exposure frequency |
| Availability Cascade | A belief becomes more credible the more it is publicly repeated, independent of its evidential basis | Operates through media repetition and public discourse, not direct observation of peers' choices |
| Herd Behavior | Correlated action among agents who observe each other's choices; common in financial markets | Often used specifically for financial contexts; can result from both rational and irrational sources |
Four Case Studies
Case Study 1: The Asch Conformity Experiments (1951-1956)
The experiments themselves constitute the foundational case study. Asch did not simply record that conformity occurred; he systematically varied the conditions to identify what made it more and less likely. When Asch gave the naive subject a single ally — a confederate who gave the correct answer — conformity rates fell precipitously, from 37 percent of critical trials to roughly 5 percent. The ally did not need to be accurate: even a confederate who gave a different wrong answer (one that nonetheless deviated from the majority) substantially reduced conformity. What mattered was not having the right answer validated but having one's willingness to dissent legitimized by another deviant. Unanimity was the critical variable.
Asch also varied group size, finding that conformity increased as the group grew from one to two to three members but showed little additional increase beyond three. The implication was that the social pressure was not simply a function of the quantity of information being communicated by the group — if it were, more group members should produce monotonically increasing conformity — but of the perception of unanimity itself. A unanimous group of three was nearly as coercive as a unanimous group of nine.
Post-experiment interviews revealed that subjects experienced the conformity pressure differently. Some came to genuinely doubt their own perception — they privately believed the majority must be seeing something they had missed. Others privately maintained their correct judgment but conformed publicly to avoid appearing deviant. Still others reported a kind of perceptual distortion, as though the majority's answer had retroactively altered what they saw. Asch recognized that these represented the informational and normative modes of influence, though Deutsch and Gerard would make the distinction more precise four years later.
Case Study 2: The Music Lab and the Arbitrariness of Cultural Markets (Salganik, Dodds, and Watts, 2006)
The Music Lab study is significant not because it shows that people are influenced by download counts — that much was predictable — but because of what it reveals about the arbitrariness of cultural success under social influence. In the independent condition, where participants had no information about others' choices, song quality — defined as the aggregate preference of the independent population — was a reasonably reliable predictor of eventual popularity. Songs that the independent listeners preferred tended to accumulate more downloads. The market was not perfectly meritocratic, but quality left a detectable signal.
In the social influence condition, this quality signal was swamped by early stochastic variation. Whichever songs happened to accumulate slightly more downloads in the early moments of each "world" — through chance, through order effects, through whatever trivial difference in initial exposure occurred — those songs gained a visibility advantage that compounded over time. Songs of objectively lower quality, as judged by independent listeners, ended up significantly more popular than objectively superior songs in multiple worlds. The researchers were not making a normative argument about taste — musical quality is contested — but they were making a structural argument: when social influence is strong, initial random differences are amplified to the point where outcomes in the social condition and outcomes in the independent condition are only weakly correlated.
This has direct implications for understanding why cultural phenomena sometimes feel inexplicable in retrospect. A book, a band, a fashion, a phrase that achieves extraordinary cultural ubiquity may not have done so primarily because of its intrinsic properties. It may have crossed some threshold of early visibility — by accident, by marketing spend, by geographic concentration — and then ridden the cascade. Counterfactual cultural histories, the Salganik et al. study implies, would look quite different.
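The compounding dynamic lends itself to a toy simulation. The sketch below is not Salganik, Dodds, and Watts's design; it is a hypothetical cumulative-advantage market in which each listener downloads one song with probability proportional to the song's quality times a popularity bonus, and every function name and parameter is illustrative.

```python
import random

def simulate_world(quality, n_listeners=2000, social_weight=1.0, seed=0):
    """One market 'world': each listener downloads a single song, chosen
    with probability proportional to quality * (1 + social_weight * downloads)."""
    rng = random.Random(seed)
    downloads = [0] * len(quality)
    for _ in range(n_listeners):
        weights = [q * (1 + social_weight * d) for q, d in zip(quality, downloads)]
        r = rng.random() * sum(weights)
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r < acc:
                downloads[i] += 1
                break
        else:
            downloads[-1] += 1  # guard against floating-point edge cases
    return downloads
```

Run ten equal-quality songs through several seeds (several "worlds") with social_weight at 1.0 and the winning song typically differs from world to world, with the winner's share far above the even split that quality alone predicts; with social_weight at 0.0, download counts hover near that even split.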
Case Study 3: Financial Market Herding — The Dot-Com Bubble and Cryptocurrency Cycles
The dot-com bubble of the late 1990s and the cryptocurrency cycles of the following two decades are both extensively documented illustrations of informational cascade dynamics in asset markets. In both cases, a genuine underlying innovation — the commercial internet; blockchain-based distributed ledgers — created real uncertainty about the future value of related assets. Under genuine uncertainty, others' behavior carries genuine information: if sophisticated investors are buying, perhaps they know something. The informational cascade model predicts that even rational actors would, in such circumstances, follow the crowd.
The problem in both cases was that the cascade detached from the underlying informational reality. By the late 1990s, internet stocks were being purchased not on the basis of anyone's genuine assessment of earning potential but because other people were purchasing them — and the fact that other people were purchasing them was itself taken as evidence of value, a recursive loop that produced valuations untethered from any plausible fundamental analysis. Robert Shiller, in his 2000 book Irrational Exuberance, documented the feedback dynamics in detail: media coverage of rising prices attracted new investors whose purchases drove prices higher, generating more media coverage, attracting more investors. When the cascade reversed — when early investors began exiting, signaling that the upward movement might have ended — the logic operated in reverse with equal force.
Cryptocurrency markets have exhibited this cycle repeatedly and in compressed timeframes. Tokens with no revenue, no customers, and sometimes no functional product have reached billion-dollar valuations during periods of intense social attention, driven by the observation that prices were rising, which attracted participants who drove prices higher, which attracted more participants. Shiller's narrative economics framework, developed in his later work including the 2019 book Narrative Economics, provides a complementary lens: not merely that prices influence purchases, but that certain compelling narratives — about technological disruption, about being early to a paradigm shift, about missing out — travel through social networks and recruit new participants to the cascade.
Case Study 4: Restaurant Queues and the Manufactured Signal of Popularity
A consistent finding in consumer behavior research is that visible queues outside restaurants, hotels, and venues substantially increase the likelihood that passersby will join the queue or seek entry. Field observations by researchers including Nicholas Christakis and James Fowler, synthesized in their 2009 book Connected, document how social behavior propagates through networks via direct observation. But the restaurant queue case is particularly clean because it illustrates the information-theoretic logic explicitly: a long queue provides genuine Bayesian evidence that the establishment is worth patronizing. People who have already arrived, assessed the options, and chosen to wait have exercised a revealed preference. Their willingness to wait is an informative signal about expected quality.
The adaptive logic becomes maladaptive when the signal is manipulated, as it routinely is. Nightclub owners have long understood that an empty interior is commercially fatal and that a manufactured queue outside — created by slowing admission, by holding capacity below what the venue can support — provides the same informational signal as an organic queue, without carrying the same information. Restaurants in competitive urban markets have employed analogous tactics: managing reservation availability to create artificial scarcity that implies high demand. These practices are not merely cynical; they are effective, and their effectiveness demonstrates precisely how much weight consumers place on social proof signals and how little additional processing they typically apply to assess whether the signal is genuine. The heuristic is useful enough that it is widely exploited, and the exploitation persists because the cognitive cost of interrogating every social signal exceeds its expected benefit in most everyday decisions.
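The Bayesian logic of the queue, and the way manufactured queues corrupt it, can be written out directly. The function below is a hypothetical toy model and every parameter is an assumption: each genuinely independent queue member multiplies the prior odds that the venue is good by a fixed likelihood ratio, while herded or staged members are discounted because they carry no information.

```python
def posterior_good(queue_len, prior=0.5, lr_per_person=1.5, independent_frac=1.0):
    """Posterior probability that a venue is good, given a visible queue.

    Each independent queue member multiplies the prior odds by
    lr_per_person (how much likelier a good venue is to attract that
    person than a bad one). Members who joined because of the queue
    itself, or were staged by the owner, carry no information, so only
    a fraction independent_frac of the queue counts as evidence.
    """
    effective_n = queue_len * independent_frac
    odds = (prior / (1 - prior)) * (lr_per_person ** effective_n)
    return odds / (1 + odds)
```

Under these toy numbers, a ten-person queue of independent patrons pushes a 50 percent prior to roughly 98 percent; if only a fifth of the queue joined independently, the same queue supports only about 69 percent; and a fully manufactured queue moves the prior not at all.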
When Conformity Is Adaptive: The Rational Kernel
Any account of the bandwagon effect that presents it purely as an error misunderstands both the phenomenon and the human beings who exhibit it. Social influence on belief and behavior is not a cognitive defect any more than the capacity to feel pain is a defect. Both are evolved responses to real adaptive problems, and both can produce maladaptive outcomes under conditions sufficiently different from those in which they were useful.
The rational case for following the crowd rests on three foundations. First, others' behavior carries genuine information when those others are making decisions in conditions similar to your own, using information you do not have. An investor observing that experienced analysts are consistently shorting a stock would be foolish to ignore that signal entirely. A new arrival in a city who needs a restaurant recommendation does worse by ignoring social proof than by consulting it. The information aggregated in social behavior is real; the question is its reliability and applicability to your specific situation.
Second, following social norms reduces coordination costs. Traffic conventions, professional etiquette, shared standards in technology and language — these are all cases where the value of the convention lies not in its intrinsic superiority but in its being the convention. Deviating from the driving side that everyone else uses is not an expression of intellectual independence but a risk to life. Conformity to arbitrary social norms makes coordination possible and predictable.
Third, social comparison provides useful benchmarking information. Festinger's social comparison theory noted that people are motivated to evaluate their performance against appropriate standards, and when objective standards are unavailable, peers serve that function. This is not a distortion; it is a reasonable use of the information available.
The adaptive case for social influence breaks down in several identifiable circumstances. It breaks down when the information underlying others' choices is itself poor or based on prior social influence rather than independent observation — when the crowd is following the crowd, so that the apparent information in the cascade is illusory. It breaks down when the decision requires information that is idiosyncratic to you — about your own values, specific circumstances, or domain knowledge that the crowd does not share. It breaks down when the costs of conformity error are asymmetric and severe, because the heuristic is calibrated for ordinary conditions where errors are correctable. And it breaks down when the social signal has been manufactured, when the apparent popularity is artificial.
Recognizing these boundary conditions is the practical upshot of understanding the bandwagon effect. The aim is not to dismiss social information but to assess it — to ask who is in the crowd, why they made the choice they made, whether their reasoning context is similar to your own, and whether the signal is organic or manufactured.
References
Asch, S. E. (1956). Studies of independence and conformity: I. A minority of one against a unanimous majority. Psychological Monographs: General and Applied, 70(9), 1-70.
Sherif, M. (1936). The Psychology of Social Norms. Harper & Row.
Deutsch, M., & Gerard, H. B. (1955). A study of normative and informational social influences upon individual judgment. Journal of Abnormal and Social Psychology, 51(3), 629-636.
Bikhchandani, S., Hirshleifer, D., & Welch, I. (1992). A theory of fads, fashion, custom, and cultural change as informational cascades. Journal of Political Economy, 100(5), 992-1026.
Salganik, M. J., Dodds, P. S., & Watts, D. J. (2006). Experimental study of inequality and unpredictability in an artificial cultural market. Science, 311(5762), 854-856.
Bond, R., & Smith, P. B. (1996). Culture and conformity: A meta-analysis of studies using Asch's (1952b, 1956) line judgment task. Psychological Bulletin, 119(1), 111-137.
Lorenz, J., Rauhut, H., Schweitzer, F., & Helbing, D. (2011). How social influence can undermine the wisdom of crowd effect. Proceedings of the National Academy of Sciences, 108(22), 9020-9025.
Sias, R. W. (2004). Institutional herding. Review of Financial Studies, 17(1), 165-206.
Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7(2), 117-140.
Cialdini, R. B. (1984). Influence: The Psychology of Persuasion. William Morrow.
Shiller, R. J. (2000). Irrational Exuberance. Princeton University Press.
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Frequently Asked Questions
What is the bandwagon effect?
The bandwagon effect is the tendency for people to adopt beliefs, behaviors, or preferences because others are doing so — to follow the crowd rather than rely on independent judgment. It operates through two distinct mechanisms: informational social influence (others' choices carry genuine information about what is correct or good) and normative social influence (conformity to avoid social rejection). Solomon Asch's 1951 conformity experiments provided the foundational laboratory demonstration.
What did Asch's conformity experiments find?
Asch placed subjects in groups of 6-8 confederates who unanimously gave obviously wrong answers to simple line-length comparison tasks. In critical trials, 37% of responses conformed to the wrong group answer. 75% of subjects conformed at least once across multiple trials. Only 25% maintained independence throughout. When subjects explained their choices, many described genuinely believing the group was correct — normative and informational pressures were difficult to separate even for the subjects themselves.
What did the Music Lab study show about popularity?
Salganik, Dodds, and Watts's 2006 experiment in Science created an artificial music market with 14,341 participants who rated unknown songs and could optionally download them. In the social condition, participants could see how many times each song had been downloaded. In the independent condition, they could not. Songs in the social condition showed dramatically higher inequality in downloads — early random differences in download counts cascaded into massive popularity differences. Identical songs had wildly different outcomes depending on their arbitrary early download counts.
When is following the crowd rational?
Conformity carries genuine information when others have independent knowledge you lack. Bikhchandani, Hirshleifer and Welch's informational cascade model shows that rational agents should sometimes defer to observed choices of predecessors — if enough people with private information chose X, their aggregate signal may outweigh your own. Restaurant queues, bestseller lists, and professional consensus are often worth heeding precisely because they aggregate dispersed information. The error is treating any social consensus as informative, regardless of whether the consensus-forming process was genuinely independent.
How does the bandwagon effect produce financial bubbles?
In financial markets, the bandwagon effect creates momentum through informational cascades: early price increases signal to observers that insiders know something, drawing in followers whose purchases validate the signal, attracting further followers. Bond and Smith's 1996 meta-analysis confirmed that conformity pressures are robust across contexts; in markets, they amplify trends beyond what fundamentals justify. The dot-com and crypto bubbles both showed the signature pattern: late entrants justified purchases by pointing to prior price appreciation as evidence of value.