During the Second World War, the U.S. military commissioned the Statistical Research Group at Columbia University to analyze damage patterns on aircraft returning from combat missions over Europe. The goal was to determine where bombers needed additional armor to survive enemy fire. Analysts examined hundreds of returning aircraft and carefully mapped the bullet holes: wings and fuselage showed heavy damage; engines and cockpit showed almost none.

The intuitive conclusion was obvious: add armor where the planes were getting hit — the wings and fuselage. The statistician Abraham Wald reached the opposite conclusion.

Wald pointed out that the data contained a catastrophic sampling error. The analysts were studying only the planes that had survived combat and returned to base. The planes that had been shot down — which represented the most important data about fatal vulnerabilities — were not in the sample. The very places with no bullet holes on returning aircraft were almost certainly where downed aircraft had been hit. A plane could absorb wing damage and still fly home. A plane hit in the engine or cockpit almost never did.

The military added armor to the engines and cockpits. Wald's analysis — formalized in the statistical literature as the problem of estimating parameters from truncated samples — saved countless lives by correcting for what he identified as a systematic failure to account for the invisible population of non-survivors.

This story is the clearest possible illustration of survivorship bias: the logical error of drawing conclusions from a sample that consists only of entities that "survived" some selection process, while ignoring the entities that did not survive and are therefore absent from the visible data.

"The most important data you will ever collect is the data you cannot see." — Abraham Wald, attributed in How Not to Be Wrong: The Power of Mathematical Thinking by Jordan Ellenberg (2014)


What Survivorship Bias Is

Survivorship bias is a specific form of selection bias — errors that arise when the sample you study is not representative of the population you want to understand. It occurs when a filtering process determines which entities are visible to you, and you fail to account for the filtering when interpreting what you observe.

The word "survive" in survivorship bias does not necessarily mean literal survival. Any process that causes some entities to remain visible while others disappear creates the conditions for survivorship bias. Companies that stay in business while competitors fail. Books that remain in print while forgotten titles go out of circulation. Investment funds that continue operating while failed funds close. Musicians who achieved fame while equally talented peers stopped performing and faded from view. In every case, examining only the visible survivors produces systematically distorted conclusions about what caused success.

The bias matters because human beings naturally draw lessons from examples, and the examples most readily available are overwhelmingly drawn from the successful end of distributions.

"History is written by the victors. Statistics are compiled by the survivors. In neither case do we hear from those who lost." — Morgan Housel, The Psychology of Money (2020)

We see the successful startups and study their strategies. We read the books of successful authors and learn from their advice. We hear the stories of people who took large risks and were rewarded. We almost never see, study, or learn from the equal or larger population of people who took the same actions and failed.


The Abraham Wald Story in Detail

Wald's contributions during the Second World War extended well beyond the bullet hole problem, but the aircraft armor analysis has become canonical because it illustrates the bias so clearly. The full story, reconstructed from Wald's 1943 work "A Method of Estimating Plane Vulnerability Based on Damage of Survivors" (a series of memoranda later reprinted by the Center for Naval Analyses), involves more sophisticated statistical analysis than the simplified version typically presented.

Wald did not simply argue that missing data from downed planes should be imputed. He developed a formal method for estimating the vulnerability of different parts of a plane given incomplete information about the full population. His model assumed that hits were distributed randomly across the aircraft (an assumption subsequent operational studies suggested was approximately correct), and worked backward from the observed damage patterns on surviving aircraft to estimate what the unobserved damage patterns on downed aircraft must have looked like.

The formal statistical contribution was significant: Wald demonstrated that you could make inferences about a truncated population from observations of the surviving portion, provided you had a model of the generating process. This laid groundwork for survival analysis methods that are now standard in epidemiology, economics, and reliability engineering.
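
Wald's inversion is easy to illustrate with a toy Monte Carlo. The model below is a deliberate simplification of his actual method (one hit per sortie, hits placed uniformly at random, invented lethality numbers), but it reproduces the core logic: the sections with the fewest holes on survivors are the most lethal, and under the uniform-hits assumption the size of the deficit itself estimates lethality.

```python
import random

random.seed(0)

SECTIONS = ["wings", "fuselage", "engine", "cockpit"]
# Per-hit probability that a hit in this section downs the plane.
# These numbers are invented for illustration.
LETHALITY = {"wings": 0.05, "fuselage": 0.10, "engine": 0.85, "cockpit": 0.90}

N = 100_000  # sorties; each plane takes exactly one hit, uniformly placed

survivor_hits = {s: 0 for s in SECTIONS}
for _ in range(N):
    section = random.choice(SECTIONS)
    if random.random() > LETHALITY[section]:  # the plane makes it home
        survivor_hits[section] += 1

# Naive reading: armor the sections with the most holes (wings, fuselage).
# Wald-style inversion: hits were uniform across sections, so a depressed
# hole count on survivors reveals lethality directly.
expected = N / len(SECTIONS)
for s in SECTIONS:
    est = 1 - survivor_hits[s] / expected
    print(f"{s:8s} holes on survivors: {survivor_hits[s]:6d}  "
          f"estimated lethality: {est:.2f}  (true {LETHALITY[s]:.2f})")
```

The naive hole count and the inverted estimate rank the sections in exactly opposite orders, which is the heart of Wald's correction.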

Wald himself was a refugee: driven out of Austria by the anti-Jewish laws that followed the 1938 Anschluss, he escaped to the United States with the help of the Cowles Commission for Research in Economics. He published 23 technical reports for the Statistical Research Group during the war, many of which remained classified for decades. He died in a plane crash in 1950 at the age of 48.


Survivorship Bias in Business and Finance

The Mutual Fund Industry

The mutual fund industry provides one of the cleanest and most consequential illustrations of survivorship bias outside wartime settings. The standard method for evaluating fund performance — comparing a fund's historical returns against a benchmark index — is systematically distorted by survivorship bias because poorly performing funds routinely close and disappear from the datasets used to calculate industry averages.

Mark Carhart, then at the University of Southern California, demonstrated the magnitude of this effect in his 1997 Journal of Finance paper, "On Persistence in Mutual Fund Performance." When Carhart constructed a "survivor-free" dataset that included the performance of funds that had closed, the average performance of actively managed funds versus index benchmarks deteriorated significantly compared to the survivor-biased dataset. The average fund underperformed after accounting for fees, but survivorship bias in the historical data made it appear that a far larger share of funds had beaten the market than actually had.
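
The effect Carhart measured can be sketched in a few lines of simulation. All the numbers below (fund count, return distribution, closure rule) are invented assumptions, not his data; the point is only that averaging over surviving funds alone systematically flatters the industry.

```python
import random

random.seed(1)

N_FUNDS, YEARS = 5_000, 10
BENCHMARK = 0.07  # assumed annual benchmark return

# Each fund's annual return lags the benchmark slightly (fees); a fund that
# posts a disastrous year closes, but its full record is kept so we can
# compute the survivor-free average afterward.
funds = [{"alive": True, "returns": []} for _ in range(N_FUNDS)]
for _ in range(YEARS):
    for f in funds:
        if not f["alive"]:
            continue
        r = random.gauss(BENCHMARK - 0.01, 0.15)
        f["returns"].append(r)
        if r < -0.20:  # closure rule: one very bad year and the fund folds
            f["alive"] = False

def mean_annual(fs):
    rs = [r for f in fs for r in f["returns"]]
    return sum(rs) / len(rs)

survivors = [f for f in funds if f["alive"]]
print(f"funds still open after {YEARS} years: {len(survivors)} of {N_FUNDS}")
print(f"survivor-only average annual return: {mean_annual(survivors):.3f}")
print(f"survivor-free average annual return: {mean_annual(funds):.3f}")
```

The survivor-only average comes out well above the survivor-free one, even though every fund was drawn from the same below-benchmark distribution: the closure rule deleted exactly the worst draws from the visible sample.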

Elroy Dimson, Paul Marsh, and Mike Staunton at London Business School have documented survivorship bias in historical equity market data in their annual Credit Suisse Global Investment Returns Yearbook series. Their analysis shows that studies of long-term stock market returns that use only the markets and indices that still exist today substantially overestimate the returns that historical investors actually experienced, because the markets that collapsed, were nationalized, or simply closed (Russia in 1917, China in 1949, Argentina at multiple points) are absent from the survivorship-biased samples.

The Startup Mythology

Few domains are as saturated with survivorship bias as startup culture. The narrative ecosystem around entrepreneurship — the books, podcasts, keynote speeches, and magazine profiles — is almost entirely constructed from the stories of successful founders. The strategies, habits, and approaches described in these narratives are those of people who succeeded; we have almost no comparable record of the equal or larger population of founders who tried the same strategies and failed.

Nassim Nicholas Taleb in The Black Swan (2007) coined the term "silent evidence" to describe this phenomenon.

"The graveyard of failed businesses is silent. We do not hear from those who tried and failed. We only hear from those who survived — and we mistake their stories for universal lessons." — Nassim Nicholas Taleb, The Black Swan (2007)

The evidence of failures is silent: those who failed are not writing books, not giving TED talks, not being profiled in Fortune. Their absence from the discourse is not random — it is precisely because they failed. Selecting success stories from a distribution and treating them as a representative sample produces systematically misleading lessons about what actually causes success.

The specific danger: survivorship bias in startup mythology encourages aspiring founders to copy the practices of successful companies without recognizing that those practices may have been irrelevant to success, or may even have been obstacles the companies succeeded despite rather than because of. The garage origin story of Apple, the dorm-room founding of Facebook, the informal culture of early Google — these details are memorable because the companies succeeded massively. Thousands of companies were founded in garages and dorm rooms and failed. The garage is not a success factor; it is simply a detail of the successful companies' stories, made memorable by their success.

CB Insights has published analyses of startup postmortems — written accounts by founders of companies that failed explaining what went wrong. These postmortems represent rare explicit visibility into the failed end of the distribution. The most common reasons cited in these analyses (running out of cash, lack of market need, wrong team) are not the stories typically told in entrepreneurship education, which focuses on the characteristics of survivors.

The Self-Help Industry

Michael Shermer, founder of Skeptic magazine and columnist for Scientific American, has extensively documented survivorship bias in self-help literature. Books like In Search of Excellence (Peters and Waterman, 1982) identified the practices of high-performing companies and implied that following those practices would produce excellent performance. A famous follow-up analysis by Michele Clayman in the Financial Analysts Journal (1987) found that the "excellent" companies identified by Peters and Waterman actually underperformed the market over the following five years, while companies identified as "worst" by the book's criteria outperformed.

The explanation: the book identified characteristics of companies that had recently been excellent, but those characteristics were partly the result of favorable conditions that had already passed. The selection was for past performance, and past performance in business (as in investing) does not reliably predict future performance.

The pattern repeats across decades of management literature. Jim Collins' Good to Great (2001) identified eleven companies that had made the transition from good to great performance and extracted the common factors of their leaders, cultures, and strategies. Subsequent analysis found that several of the "great" companies subsequently declined, suggesting that Collins had identified not the causes of sustained greatness but perhaps the correlates of a temporary performance peak.


Survivorship Bias Across Other Domains

Historical Evidence and Textual Survival

The history of ideas is fundamentally shaped by survivorship bias in what texts have survived. Of the ancient Greek dramatists, only the plays of Aeschylus, Sophocles, Euripides, and Aristophanes survive in significant quantity. Dozens of other playwrights whose works were celebrated in their time are known only from brief fragments or references in other surviving texts.

This creates a distorted picture of ancient drama: the standards we use to evaluate ancient Greek theater are derived almost entirely from the works that happened to survive manuscript transmission over two millennia, not from any representative sample of what was actually written and performed. Works that were mediocre but stored in durable form may have survived while excellent works stored on perishable materials did not.

The same bias affects the history of science, philosophy, and literature broadly. The intellectual lineages we can trace are the ones whose documents survived. The intellectual lineages whose documents did not survive are invisible to us.

Medical Research and Publication Bias

Publication bias in academic research is a form of survivorship bias: studies that find statistically significant results are more likely to be published than studies that find null results, creating a skewed published literature that overestimates the prevalence and magnitude of reported effects.

The implications became dramatically visible during the 2010s replication crisis in psychology and medicine. John Ioannidis at Stanford University, in his influential 2005 paper "Why Most Published Research Findings Are False" (PLOS Medicine), modeled the statistical conditions under which the majority of published research findings would fail to replicate. His central argument: when many studies are conducted on the same hypothesis and only positive findings are published, the published literature systematically overstates the evidence in favor of the hypothesis. Survivorship bias in publication creates an illusion of stronger evidence than actually exists.
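
The publication filter Ioannidis describes can be simulated directly. The numbers below (study count, standard error, significance cutoff) are arbitrary assumptions; the true effect is set to exactly zero, so every "significant" finding is a false positive, yet the published record looks like strong evidence.

```python
import random

random.seed(2)

TRUE_EFFECT = 0.0   # the hypothesis is actually false
SE = 0.1            # assumed standard error of each study's estimate
N_STUDIES = 10_000

all_estimates, published = [], []
for _ in range(N_STUDIES):
    est = random.gauss(TRUE_EFFECT, SE)  # one study's estimated effect
    all_estimates.append(est)
    if abs(est) / SE > 1.96:             # "significant" at p < .05, two-sided
        published.append(est)            # only these survive to publication

mean_abs_all = sum(abs(e) for e in all_estimates) / len(all_estimates)
mean_abs_pub = sum(abs(e) for e in published) / len(published)
print(f"studies run: {N_STUDIES}, published: {len(published)}")
print(f"mean |effect| across all studies run: {mean_abs_all:.3f}")
print(f"mean |effect| in the 'literature':    {mean_abs_pub:.3f}")
```

Roughly 5% of these null studies clear the significance bar, and their average reported effect is about three times the average across all studies run; a reader of the literature alone sees only these inflated survivors.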

The Reproducibility Project (Open Science Collaboration, Science, 2015), which attempted to replicate 100 published psychology studies, found that only 36% of the replications produced statistically significant results, compared with 97% of the original studies. Many of the original findings had been selected for publication in part because they showed positive results — survivorship bias at the publication stage had inflated their apparent strength.

Rock Music and Retrospective Canonization

The rock music canon — the artists recognized as "classic," studied in music history, and played on classic rock radio — is heavily shaped by survivorship bias. The artists who achieved massive commercial success and cultural longevity are the ones whose work is most visible today. The equal or larger population of musicians from the 1960s and 1970s who were talented, worked hard, and simply did not achieve commercial success are largely invisible.

This creates a distorted picture of what differentiated the "great" artists from the rest. The differences that distinguished the Beatles, Led Zeppelin, and the Rolling Stones from forgotten contemporaries may have been genuine talent and craft — or may have included substantial luck in timing, geography, industry relationships, and radio play that had little to do with the qualities we now attribute to their success. We study their practices and strategies in retrospect, without being able to compare them against the equivalent practices and strategies of the equally talented artists who did not survive.


Research on Survivorship Bias: Experimental and Field Evidence

Beyond the domain-specific evidence, researchers have studied the cognitive mechanisms underlying survivorship bias directly.

Joshua Klayman at the University of Chicago and Young-Won Ha, in a 1987 paper in Psychological Review, documented the "positive test strategy" — the tendency of people to search for evidence by looking at cases where they expect to find a target rather than cases where they expect not to find it. This bias systematically focuses attention on the surviving and visible cases at the expense of the absent and invisible ones.

Rainer Greifeneder at the University of Basel and colleagues demonstrated in experimental settings (published in Journal of Experimental Social Psychology, 2011) that people significantly overestimate the proportion of survivors in a population when they observe only survivors. When shown only the successful outcomes of a selection process, participants' estimates of the underlying success rate were substantially higher than the actual rate — the invisible failures were essentially not represented in their mental model of the process.

Paul Rozin and Edward Royzman at the University of Pennsylvania documented the "negativity bias" — the greater salience and weight humans assign to negative events compared to positive ones — in a 2001 review in Personality and Social Psychology Review. The negativity bias creates a partial offset to survivorship bias in some contexts: failures, when they are visible, receive disproportionate attention. But in contexts where failures are systematically absent from the data — as with closed mutual funds, failed startups, and downed aircraft — the negativity bias cannot compensate for the structural absence of negative evidence.


Correcting for Survivorship Bias

Seek Out Failure Data Explicitly

The most direct correction is to actively seek data from the failed end of distributions. Before drawing lessons from successful companies, explicitly research companies that tried the same strategies and failed. Before adopting the advice of successful authors or investors, explicitly research whether people who followed similar approaches produced different outcomes.

Failure postmortems — documented accounts of why specific efforts failed — are systematically underrepresented in popular discourse and enormously valuable precisely for that reason. Organizations that practice systematic postmortems of failures, not just victories, build a database of evidence that corrects for the survivorship bias in their success-focused institutional memory.

Ask About the Denominator

When encountering a claim of the form "X% of people who did Y succeeded," ask about the denominator. What was the total population of people who tried Y, including those who failed? Survivorship bias inflates apparent success rates by shrinking the visible denominator to only the survivors.
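
A minimal worked example of the denominator check, with hypothetical numbers: a fund family's brochure shows 10 surviving funds, 9 of which beat their benchmark, but the family also launched 40 funds that it quietly closed.

```python
# All figures are hypothetical, for illustration only.
visible_winners = 9
visible_funds = 10    # funds still shown in the brochure
closed_funds = 40     # quietly liquidated; assume none beat the market

apparent_rate = visible_winners / visible_funds               # survivors only
true_rate = visible_winners / (visible_funds + closed_funds)  # full denominator
print(f"apparent success rate: {apparent_rate:.0%}")  # 90%
print(f"true success rate:     {true_rate:.0%}")      # 18%
```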

Look for the Invisible Population

For any situation involving a visible set of successful outcomes, ask: "What would this look like if I could see the full population, including the failures?" Wald's insight about the bullet holes was precisely this: he reconstructed the invisible population of downed aircraft from the visible population of surviving ones.

For related concepts, see cognitive biases explained, why smart people make bad decisions, and analytical models vs. intuition.


Frequently Asked Questions

What is survivorship bias?

Survivorship bias is the logical error of drawing conclusions from a sample that consists only of entities that 'survived' some selection process, while ignoring the entities that did not survive and are therefore absent from the visible data. The word 'survive' here does not necessarily mean literal survival. Any process that causes some entities to remain visible while others disappear creates the conditions for survivorship bias: companies that stay in business while competitors fail, books that remain in print while forgotten titles go out of circulation, investment funds that continue operating while failed funds close, musicians who achieved fame while equally talented peers faded from view. In every case, examining only the visible survivors produces systematically distorted conclusions about what caused success.

The classic illustration is Abraham Wald's WWII analysis of bullet holes in returning aircraft. The military initially wanted to armor the places where returning planes showed the most damage. Wald pointed out that the data was missing the most important cases: the planes that had been shot down and did not return. The places with no bullet holes on surviving aircraft were precisely where downed aircraft had been hit — a single hit in those locations proved fatal. Adding armor based only on returning planes' damage patterns would have armored the least vulnerable locations. Survivorship bias occurs in any domain where visible examples are systematically drawn from the successful end of a distribution.

Who was Abraham Wald and what did he discover about survivorship bias?

Abraham Wald (1902-1950) was a Romanian-born mathematician who became one of the most important statisticians of the 20th century. Driven out of Austria by the anti-Jewish laws that followed the 1938 Anschluss, he escaped to the United States, where he was recruited by the Cowles Commission for Research in Economics and later by the Statistical Research Group (SRG) at Columbia University during World War II. The SRG was commissioned by the U.S. military to analyze statistical problems of military strategy.

Wald's most famous contribution was his 1943 analysis of aircraft vulnerability, circulated as a series of memoranda titled 'A Method of Estimating Plane Vulnerability Based on Damage of Survivors.' Military analysts had examined returning aircraft and found that bullet holes clustered in the wings and fuselage but were rare on engines and cockpits. The intuitive recommendation was to armor the most-hit locations. Wald recognized the fatal flaw: the sample consisted entirely of planes that had survived combat and returned. Planes that were shot down — the most informative cases for identifying fatal vulnerabilities — were not in the sample. The locations with few bullet holes on returning planes were almost certainly where downed aircraft had been hit. A plane could absorb wing and fuselage damage and still fly home; a plane hit in the engine or cockpit almost never did.

Wald's formal statistical contribution was a method for estimating population parameters from truncated samples — inferring the characteristics of the invisible failures from the observable survivors. The military armored the engines and cockpits. Wald published 23 technical reports for the SRG during the war. He died in a plane crash in 1950 at the age of 48.

How does survivorship bias affect business advice and startup culture?

Startup culture and business advice are among the domains most saturated with survivorship bias. The narrative ecosystem — books, podcasts, keynote speeches, magazine profiles — is almost entirely constructed from the stories of successful founders and companies. The strategies, habits, and approaches described in these narratives are those of people who succeeded. We have almost no comparable record of the equal or larger population of people who tried the same approaches and failed. Nassim Nicholas Taleb coined the term 'silent evidence' to describe the failure data that is structurally absent from popular discourse. Those who failed are not writing bestselling books, not giving TED talks, not being profiled in Fortune. Their absence is not random — it is precisely because they failed. Selecting only success stories from a distribution and treating them as a representative sample produces systematically misleading lessons.

Specific distortions include the following. 'Work 80-hour weeks' is an observation about successful founders who also worked long hours; it ignores the likely larger population of founders who worked equally long hours and failed — long hours may be necessary but not sufficient, or may be irrelevant to success. 'Follow your passion' is an observation about people who found their passion monetizable; it ignores the many people who pursued their passion and achieved neither financial success nor sustainable passion. 'Raise venture capital' is an observation about companies that raised VC and succeeded; it ignores the many VC-funded companies that failed, and the many bootstrapped companies that succeeded.

The corrective: seek failure data explicitly. CB Insights' startup postmortem database, published analyses of failed companies, and research using survivor-free samples provide the invisible half of the distribution.

How does survivorship bias affect investing and mutual fund performance data?

The mutual fund industry provides one of the most precisely documented and consequential applications of survivorship bias. The standard method for evaluating fund performance compares a fund's historical returns against a benchmark index. But the datasets used to calculate industry averages suffer from survivorship bias: poorly performing funds routinely close or are merged into better-performing funds, and their track records disappear from historical databases.

Mark Carhart demonstrated the magnitude of this effect in a 1997 paper in the Journal of Finance. When he constructed a 'survivor-free' dataset including the performance of funds that had closed, the average performance of actively managed funds versus index benchmarks deteriorated significantly compared to the survivor-biased dataset. The average fund underperformed after accounting for fees, but survivorship bias in the historical data made it appear that a far larger share of funds had beaten the market than actually had.

The practical implication: when a mutual fund company advertises that its funds have beaten the market historically, the denominator for that calculation typically excludes the funds the company quietly closed. The track record of the funds that survived selection is not the track record of the average fund that investors might have held.

Elroy Dimson, Paul Marsh, and Mike Staunton at London Business School have documented survivorship bias in long-run stock market return data: studies using only currently existing markets and indices substantially overestimate historical returns, because markets that collapsed (Russia in 1917, China in 1949, Argentina at multiple points) are absent from survivor-biased samples. For individual investors, the lessons are that historical fund performance is a worse predictor of future performance than survivor-biased data suggests, and that comparisons of fund categories should use survivor-free datasets.

What are examples of survivorship bias in everyday life?

Survivorship bias appears in contexts well beyond finance and business.

Education and 'successful people dropped out of college': the story of college dropouts who became billionaires (Gates, Zuckerberg, Dell) is survivorship bias. For every dropout who succeeded, there are many more who dropped out and whose outcomes were worse than if they had completed their degrees. The dropouts who succeeded represent a highly selected sample of people who had specific, fully formed business ideas and access to specific resources — not a representative sample of college dropouts.

'They don't make things like they used to': old appliances and furniture that have survived for decades represent the most durable items from their era. The many poorly made items from the same era broke and were discarded long ago; you never encounter them. The visible sample of old goods is biased toward the most durable.

Buildings and historical architecture: the buildings we still have from the 15th century are the most robust, well-maintained, and fortunate structures from that era. The many more buildings that collapsed, burned, or were torn down are invisible. The visible sample creates an illusion that historical construction was uniformly excellent.

Friends' successes on social media: people post their achievements, promotions, vacations, and happy moments. They rarely post their failures, rejections, and bad days. The visible social media feed is a severely survivorship-biased sample of your friends' lives — filtered for the best moments — and observing only that sample creates inaccurate social comparison.

Ancient wisdom and 'timeless' advice: sayings and proverbs that have survived millennia are those that resonated across many cultures and circumstances. The equal or larger number of aphorisms from the same eras that did not survive are invisible — including many that were popular in their day but contradicted by events.

How does survivorship bias affect scientific research?

Publication bias in academic research is a form of survivorship bias: studies that find statistically significant results are substantially more likely to be published than studies finding null results, creating a skewed published literature that overestimates the prevalence and magnitude of reported effects.

The mechanism: researchers conduct studies, find statistically significant results, and submit them for publication. Journals are more likely to accept significant findings than null findings, and researchers who find null results are less likely to submit them, knowing the low acceptance probability. The result is that the published literature is a survivorship-biased sample of all studies conducted, overrepresenting positive findings.

John Ioannidis at Stanford modeled the statistical conditions under which the majority of published research findings would fail to replicate ('Why Most Published Research Findings Are False,' PLOS Medicine, 2005). His central argument: when many studies test the same hypothesis and only positive findings are published, the published literature systematically overstates the evidence for the hypothesis. The implications became visible during the 2010s replication crisis. The Open Science Collaboration's Reproducibility Project (Science, 2015) attempted to replicate 100 published psychology studies and found that only 36% of the replications produced statistically significant results, compared with 97% of the original studies. Many original findings had been selected for publication partly because they showed positive results — survivorship bias at the publication stage had inflated their apparent strength.

Pre-registration of hypotheses — requiring researchers to specify hypotheses before running experiments — directly combats this form of survivorship bias by preventing post-hoc selection of significant findings from multiple tested hypotheses.

How do you correct for survivorship bias in your own thinking?

Correcting for survivorship bias requires building systematic habits for seeking out the invisible failures.

Ask about the denominator explicitly. When you encounter a success story or a statistic about success rates, ask: what was the total population of people who tried this, including those who failed? 'This strategy worked for 90% of users' is meaningless without knowing how the 10% who did not succeed were treated in the sample.

Seek failure data deliberately. Before drawing lessons from successful examples, explicitly research whether failed examples are available. Startup postmortems, case studies of failed companies, analyses of failed medical treatments, and accounts of failed projects all represent the missing half of the distribution. Academic databases include null-result studies; industry publications sometimes include failure analyses.

Ask 'what would the non-survivors look like?' For any domain where you observe only survivors, ask what the full population would look like if you could see it. This is Wald's technique: reconstruct the invisible cases from the visible ones. If the visible successful entrepreneurs all work 70-hour weeks, what work patterns did unsuccessful entrepreneurs show? (Available data suggests: similar.)

Look for the 'graveyard.' Every domain has a graveyard of non-survivors that is larger than the visible population of successes. Seeking it out explicitly counteracts the natural human tendency to learn from visible examples.

Weight success stories by how representative they are. The more famous and widely discussed a success story, the more likely it represents an unusual outlier rather than a representative case. Jeff Bezos and Elon Musk are famous partly because their outcomes are exceptional; learning from them requires extreme caution about generalization.

What is the relationship between survivorship bias and the self-help industry?

The self-help and business advice industry is structurally prone to producing survivorship-biased lessons because the authors, speakers, and subjects of advice books are almost exclusively drawn from the successful end of distributions. Michael Shermer, founder of Skeptic magazine, has extensively documented this pattern.

Books like In Search of Excellence (Peters and Waterman, 1982) identified the practices of high-performing companies and implied that following those practices would produce excellence. Michele Clayman's follow-up analysis (Financial Analysts Journal, 1987) found that the 'excellent' companies actually underperformed the market in the years following the book's publication. The book had identified companies at the peak of a performance cycle and attributed their success to practices that may have been irrelevant to the peak or already past their effective life.

Jim Collins' Good to Great (2001) identified eleven companies that had made the transition from good to great performance. Subsequent analysis found that several of the 'great' companies later declined or failed: Circuit City went bankrupt in 2009, and Fannie Mae required a government bailout in 2008. Collins had identified correlates of a period of high performance, not causes of sustained excellence. The survivor selection problem: only companies that achieved sustained high performance were selected for the study, and the lessons derived from them were then presented as general prescriptions.

The practical implication for consumers of self-help content: treat all advice derived from success stories as hypothesis rather than proven prescription. Ask what the failure cases looked like, and whether those who failed used the same practices. The absence of that comparison makes any attribution of success to specific practices logically incomplete.

How does survivorship bias affect what we think we know about history?

Survivorship bias shapes historical knowledge in ways that are deeply embedded and rarely examined.

Textual survival: of the ancient Greek dramatists, only plays by Aeschylus, Sophocles, Euripides, and Aristophanes survive in significant quantity. Dozens of other playwrights celebrated in their own time are known only from fragments or references. Our standards for evaluating ancient drama are derived from works that survived manuscript transmission over two millennia — not from a representative sample of what was performed and valued. The works that survived may include both the genuinely best and the works that happened to be stored in the most durable conditions.

Technology and invention: the technologies we know about from ancient history are those whose artifacts survived or whose use was recorded in surviving texts. Vast numbers of innovations and inventions from ancient periods left no durable trace. Our picture of technological development in ancient civilizations is necessarily filtered through what survived to be found.

Scientific ideas: the scientific ideas, theories, and discoveries we know from history are those that were written down, copied, and preserved. Many competing ideas, alternative hypotheses, and scientific dead ends were never recorded or were recorded on perishable materials. The history of science that we teach is a survivor-selected sample that may overrepresent the ideas that happened to lead to the theories that eventually dominated.

The great-person theory of history: historical narratives focus on individuals who achieved prominence, left records, and whose actions had documented effects. The equal or larger number of historical actors who worked hard and had genuine influence but left no surviving record are invisible. This creates an illusion that history is made by exceptional individuals — the visible survivors — rather than by broad social processes in which exceptional individuals participate.