In 2015, a marketing report from Microsoft Canada made a claim that spread through global media with extraordinary speed: humans now have an average attention span of eight seconds, down from twelve seconds in 2000, meaning our attention is shorter than that of a goldfish, commonly credited with a nine-second span. Within weeks the statistic appeared in the New York Times, Time magazine, the BBC, and hundreds of corporate presentations on the challenges of digital communication. It confirmed what many people intuitively felt: that something had gone wrong with human attention in the age of smartphones and social media. It was also, in any scientifically meaningful sense, not true.
The Microsoft report was a market research document, not a peer-reviewed scientific study. It did not define "attention span" in a way that cognitive scientists would recognize, it drew on a non-scientific survey combined with EEG readings of a small and non-representative sample, and it conflated several distinct cognitive phenomena under a single misleading number. Cognitive scientists who examined it noted that "attention span" is not a unified trait that can be measured in seconds across tasks: a person who loses focus during a boring meeting may spend hours absorbed in a novel they find compelling. The goldfish comparison was invented; goldfish do not have a nine-second attention span either, and the concept does not transfer from fish to humans in any scientifically coherent way. Yet the statistic spread because it was quotable, because it confirmed existing anxieties, and because the media environment it described was the same media environment in which it spread — an irony that should have given pause to anyone repeating it.
The genuine research on attention in the digital age is both more nuanced and more troubling than the eight-second myth suggests. Something is changing in how people deploy attention, how frequently they switch tasks, and how comfortably they sustain focus on demanding material. Whether these changes represent permanent damage to cognitive capacity or modifiable behavioral habits is genuinely debated. The researchers who have spent careers studying human attention hold a range of views, from cautious alarm to measured reassurance, and the honest answer — that we know some things with confidence and others remain uncertain — is less satisfying than a statistic about goldfish but more useful.
"It takes an average of 23 minutes and 15 seconds to get back to a task with full focus after an interruption. We found that people switch tasks about every 3 to 5 minutes, often voluntarily. The phone, the app notifications, the email — they have made interruption the default condition of working." -- Gloria Mark, Attention Span (2023)
Key Definitions
Sustained attention: The ability to maintain focus on a task over an extended period. This is what people typically mean when they worry about "attention span." It varies substantially by task type, interest level, and environmental conditions.
Selective attention: The ability to focus on relevant stimuli while filtering out irrelevant ones. Nilli Lavie's cognitive load theory argues that higher cognitive load — filling working memory with demanding tasks — reduces susceptibility to distraction by leaving no processing capacity for distractors.
| Claim | What Research Actually Shows |
|---|---|
| Attention spans have fallen to 8 seconds | No peer-reviewed study supports this figure; originated from a Microsoft report without scientific citation |
| Smartphones directly cause attention decline | Research shows mixed results; problematic use patterns matter more than device ownership |
| Multitasking trains shorter attention | Task-switching costs are well-documented, but causal link to permanent attention reduction is unproven |
| Younger generations have shorter attention | Generational studies show context-dependent differences rather than a universal decline |
| Meditation reverses attention shrinkage | Mindfulness does improve sustained attention, but effect sizes are modest in most meta-analyses |
Attentional control: The executive function capacity to direct and sustain attention voluntarily, override automatic attention captures (like a buzzing phone), and return attention to a task after interruption. This is distinct from raw attentional capacity and more subject to training.
Task-switching cost: The reduction in performance that occurs when switching between tasks rather than focusing on one. Research consistently finds that switching between tasks takes time and produces errors even when each task individually is familiar.
Deep reading: Maryanne Wolf's term for the slow, analytical, reflective engagement with complex text that involves making inferences, empathizing with characters, and integrating new information with existing knowledge. Contrasted with skimming and scanning, which extract surface information without deep processing.
Attention restoration: The Kaplans' concept that directed attentional capacity is a limited resource that becomes fatigued with sustained use and is restored by involuntary attention directed at gentle stimuli, particularly natural environments.
What the Goldfish Myth Got Wrong (And What It Got Right)
The 2015 Microsoft Canada report cited a decline in "average attention span" from twelve seconds in 2000 to eight seconds in 2015, attributing the change to the growth of the internet and smartphone use. The statistic required that "attention span" be a single measurable number for the average human, independent of context, task, or motivation. No such measure exists in cognitive science.
Simon Maybin of the BBC attempted to trace the statistic to its source in 2017 and found that the twelve-second 2000 baseline itself had no identifiable scientific origin. Neither figure came from peer-reviewed research. The Microsoft report itself acknowledged it was not a scientific study but a "consumer insights" document. Cognitive neuroscientist Gemma Briggs of the Open University told the BBC that "it's very hard to pin down what we mean by attention span and it probably doesn't exist in the way that we think it does."
What the report described, if stripped of the misleading framing, was something real but different: changes in engagement patterns online, measured through website analytics. Internet users were browsing more pages for shorter average durations. This measures browsing behavior, not cognitive capacity — and browsing behavior might reflect platform design, abundance of alternatives, or changing content formats rather than a reduction in cognitive ability.
The myth spread because it landed in fertile ground. Many people do feel that their attention is more fragmented than it used to be, that they find it harder to read long books, that they reach for their phone reflexively during any pause. These experiences are real and worth taking seriously. They do not, however, establish that attention capacity has declined. They may reflect changes in habit, environment, and expectation rather than fundamental neurological change.
Gloria Mark and the Interruption Economy
Gloria Mark has been studying attention and interruption in workplace settings at the University of California Irvine since the early 2000s, making her one of the most credible researchers on the practical effects of digital technology on attention behavior. Her findings are sobering even without the goldfish mythologizing.
Mark's foundational finding, reported in multiple studies, is the recovery time after interruption: an average of approximately 23 minutes to return to a task with full cognitive engagement after being interrupted. This is not an estimate of how long it takes to return to looking at the task, but how long to return to the depth of processing that preceded the interruption. Interruptions displace working memory contents, break the cognitive state of flow, and require reconstruction of mental context when returning.
More troubling is her finding about self-interruption. Mark's later research tracked not only external interruptions (colleagues, email alerts, phone calls) but also self-interruption: moments when people shift attention away from a task with no external prompt at all. In her earlier studies in the early 2000s, people switched tasks roughly every three to five minutes, and a substantial share of those switches were self-initiated rather than externally triggered. Her more recent tracking studies, using monitoring software, found that attention durations had shrunk further: the average time spent on any single screen before switching fell from about two and a half minutes in 2004 to under a minute. People have internalized the expectation of interruption and replicate it even in quiet conditions.
Mark's 2023 book "Attention Span" synthesizes this research and adds important nuance: she finds that attentional patterns are responsive to design and context. When organizations implement designated focus time — protected blocks where notifications are disabled — measured attentional patterns improve. When individuals practice deliberate focus, they can reduce self-interruption frequency. This suggests that what has changed is primarily habitual behavior rather than fixed cognitive capacity, which is a more hopeful finding.
Nicholas Carr and the Shaping of Reading
Nicholas Carr's "The Shallows: What the Internet Is Doing to Our Brains" (2010) presented a more alarming case, grounded in neuroscience and technology history. Carr argued that the brain's plasticity — its capacity to reshape neural pathways in response to experience — means that habitual use of the internet is literally rewiring how we think. The internet, he argued, is optimized for distraction: hyperlinks pull attention sideways, notifications interrupt focus, the visual and informational density of web pages rewards rapid scanning rather than sustained reading. If these are the attentional habits we practice for hours daily, Carr argued, they will strengthen the neural circuits for skimming and weaken those for deep linear reading.
Carr wrote personally about noticing that his ability to read long books — to sustain the focused engagement that serious reading requires — had deteriorated after years of heavy internet use. He found this distressing not merely for himself but for what it might mean for the cultural practices that serious reading sustains: critical thinking, empathy, historical imagination, the slow consideration of complex arguments.
"The Shallows" was widely praised and widely criticized. Critics noted that Carr's personal experience, however compellingly described, was not controlled research. They pointed out that moral panics about new media degrading attention and reading have accompanied every significant communication technology: the printing press (too many books!), novels (morally corrupting and addictive), television (passive and mind-numbing), and video games. Each generation has feared the newest medium's effect on attention. The critics have a point: the track record of technology-degrades-mind arguments is mixed at best.
The defense of Carr's central concern is that the internet is different from previous media in specific ways. Television is passive; the internet is interactive. Books and even television create extended narrative engagement; the internet's hyperlink and notification structure specifically fragments attention. The neuroscience of plasticity is real: habits of mind do shape neural organization. Whether internet habits have crossed some threshold of impact on reading and reasoning capacity that previous media did not is genuinely uncertain.
Maryanne Wolf and Deep Reading
Maryanne Wolf, a cognitive neuroscientist and reading development specialist at UCLA, has produced the most technically grounded version of Carr's concern. Her foundational research on how children learn to read — documented in "Proust and the Squid" (2007) — established the neuroscience of reading acquisition: reading is not a natural skill like language comprehension, but a cultural technology that requires extended education to acquire and that shapes the brain as it develops. The brain builds specialized circuits for reading that did not exist in human neurology before writing was invented.
Wolf's concern, articulated in "Reader, Come Home" (2018), is about what she calls the "reading brain" — the neural circuitry that supports deep reading — and whether habitual screen-based reading is eroding it. Deep reading, in Wolf's framework, involves multiple cognitive processes beyond basic text decoding: inferential reasoning (reading between the lines), empathetic imagination, analogical thinking, critical analysis, and background knowledge integration. These processes take time and require sustained engagement; they are incompatible with skimming.
Wolf conducted an informal but telling experiment on herself: after years of screen-heavy reading, she attempted to re-read Hermann Hesse's "The Glass Bead Game" and found she could not sustain the level of engagement the book requires. She had difficulty slowing down to the pace the text demands. This prompted a period of deliberate re-training — extended sessions of deep, slow reading without digital interruption — which she reported as successfully restoring her reading experience.
The pedagogical implication Wolf draws is that deep reading must be explicitly practiced and protected, especially in children whose reading brains are still developing. If schools shift entirely to digital, fragmented reading environments during the formative years of reading acquisition, they may fail to develop the neural substrate for deep reading at all. This concern has led some educational researchers to advocate for paper-based reading instruction in early literacy.
The Multitasking Illusion
One of the most replicated findings in cognitive psychology is that humans cannot truly multitask on cognitive tasks. What we call multitasking is rapid task-switching — alternating between tasks rather than performing them simultaneously — and it carries measurable cognitive costs at each switch.
David Strayer at the University of Utah has studied the impairment effects of divided attention on driving, producing findings that entered the legal and policy mainstream. His research found that conversing on a mobile phone while driving impairs driving performance to a degree statistically comparable to driving with a blood alcohol level of 0.08%, the US legal limit. The impairment comes not from the mechanics of holding a phone but from the cognitive demand of maintaining a conversation, which competes for the same processing capacity as monitoring the road and making driving decisions; for this reason, hands-free operation does not reduce the impairment.
Eyal Ophir, Clifford Nass, and Anthony Wagner at Stanford published a landmark study in 2009 examining what they called "heavy media multitaskers" — people who habitually use multiple media streams simultaneously (browsing while watching television, texting while listening to lectures). They predicted these individuals would show superior multitasking ability. The results were the opposite: heavy media multitaskers performed significantly worse than light multitaskers on tasks measuring filtering of irrelevant information, switching between mental sets, and working memory capacity. The interpretation is that habitual media multitasking trains susceptibility to distraction rather than multitasking skill.
The Strayer and Ophir/Nass/Wagner findings converge on an important point: the cognitive costs of distraction are real and measurable, and habitual exposure to distracted conditions may impair rather than improve attentional performance. This is the legitimate core of concern about digital media effects, grounded in controlled research rather than in surveys and marketing reports.
Matthew Crawford and the Built Environment of Attention
Philosopher Matthew Crawford's "The World Beyond Your Head" (2015) approached the attention problem from a different angle: the designed environment. Crawford argued that attention is increasingly an economic commodity, extracted from individuals by commercial interests for sale to advertisers. The physical and digital environments we inhabit are engineered to capture and hold attention — slot machines, notification systems, social media feeds, cable news — in ways that override voluntary attentional control.
Crawford's argument draws on phenomenology and the philosophy of perception more than experimental psychology, but it intersects with a body of attention research. The concept of "attentional hijacking" — the capture of attention by stimuli designed to trigger automatic responses (novelty, social information, threat signals) — is documented in cognitive science. Smartphones are designed to maximize engagement, using social validation signals (likes, comments, notifications) that trigger dopaminergic responses and create habitual checking behavior. B.J. Fogg at Stanford's Persuasive Technology Lab has spent decades studying and teaching the design techniques that shape behavior; his students have gone on to build some of the most attention-consuming applications in history.
The implication of Crawford's argument is that the attention problem is not primarily individual pathology but a structural feature of a commercial environment that profits from distraction. This reframes the solution: individual practices of attention hygiene (turning off notifications, scheduled focus time, device-free periods) are necessary but incomplete responses to a system that is continuously re-engineered to defeat them.
ADHD, Diagnosis Rates, and Genuine Disorder
Any discussion of attention and its alleged deterioration must distinguish between clinical attentional disorder and normative attentional challenge. ADHD (Attention Deficit Hyperactivity Disorder) diagnosis rates have increased substantially in the United States since the 1990s — from approximately 6% of children in the early 1990s to approximately 10-11% by the 2010s. This increase has fueled debate about whether ADHD is overdiagnosed, whether diagnostic criteria have shifted, or whether there is a genuine increase in the prevalence of attentional pathology.
Russell Barkley, one of the world's leading ADHD researchers, has argued consistently that ADHD is a real and substantially heritable neurological condition, that it has been underdiagnosed historically, and that rising diagnosis rates primarily reflect better identification rather than overdiagnosis. Others have pointed to variations in diagnosis rates across countries — ADHD is diagnosed at much higher rates in the US than in most European countries — as suggesting cultural and systemic factors in diagnosis beyond prevalence alone.
The important point for the attention-technology debate is that ADHD represents a distinct clinical population with different mechanisms than the general attentional challenges associated with digital media use. Most people who find it hard to focus on a long book, or who check their phone frequently, do not have ADHD. Conflating attentional habits shaped by media environment with clinical attentional disorder obscures both phenomena.
Attention Restoration: What Actually Helps
The most practically useful body of attention research may be Stephen and Rachel Kaplan's attention restoration theory, developed over decades at the University of Michigan. The Kaplans distinguish between directed attention — effortful focus on tasks that require inhibition of distractions — and involuntary attention — the effortless engagement captured by inherently interesting stimuli, particularly in natural environments. Their theory proposes that directed attentional capacity is a limited resource that becomes fatigued with sustained use and is restored through involuntary attention to restorative environments.
Research testing attention restoration theory has consistently found that exposure to natural environments — parks, forests, water, even photographs or views of nature — produces measurable recovery in directed attentional capacity compared to urban environments. Studies by Marc Berman and colleagues found that a 50-minute walk in nature improved working memory performance, whereas a 50-minute walk in an urban environment did not. The mechanism is not fully understood but is consistent with the Kaplans' original formulation.
For individuals seeking to maintain attentional capacity, the research supports a practical toolkit: deliberate practice of focused reading and single-task work; scheduled device-free periods; exposure to natural environments; mindfulness practices that train the ability to notice and return attention after distraction; and modification of the notification environment to reduce involuntary interruptions. Adrian Ward and colleagues at the University of Texas, Austin, found in 2017 that the mere presence of a smartphone — face down, silenced, on a desk beside a person working on a cognitive task — reduced performance relative to having the phone in another room. The suppression of the impulse to check it consumes cognitive resources even when not acted on.
What Cannot Yet Be Concluded
Honest accounting of the evidence requires acknowledging significant uncertainties. We do not have long-term longitudinal studies tracking the attentional development of individuals across the transition from pre-smartphone to smartphone environments. We do not know whether the behavioral changes Gloria Mark documents — increased self-interruption, reduced average focus duration — represent permanent changes to cognitive capacity or habits that could be reversed with changed environments and deliberate practice. We do not know whether the reading brain changes Wolf worries about have occurred at population scale, or whether they are reversible through deliberate re-training.
What we know is that habitual practices shape neural organization; that the environments we have designed for ourselves in the digital age are optimized for distraction rather than focus; that task-switching carries cognitive costs; and that many people report subjective experiences of increased difficulty with sustained attention. These are sufficient grounds for taking the concern seriously without accepting the most alarming framings.
The honest position is that something is changing in how human attention is habitually deployed, that this matters for the kinds of thinking and understanding that require sustained focus, and that the changes appear to be addressable through deliberate environmental and behavioral choices — even in the absence of a definitive verdict on their long-term cognitive significance.
Practical Implications
For individuals: Treat attention as a trainable skill and an exhaustible resource. Schedule specific focus periods with notifications disabled. Practice sustained reading without multitasking. Spend time in natural environments. Do not conflate difficulty concentrating with incapacity — attentional habits respond to deliberate practice.
For educators: Protect extended reading time with physical books, especially for younger students developing the reading brain. Teach explicitly about the costs of distraction and the value of sustained attention. Create device-free learning environments for tasks requiring deep engagement.
For designers and policymakers: The commercial attention economy creates externalities — attentional costs — that individual users pay but are not reflected in product design. Regulation of addictive design patterns in platforms used by children has been proposed in multiple jurisdictions.
See also: Why Social Comparison Makes Us Miserable | Behavioral Economics Explained
References
- Mark, G. (2023). Attention Span: A Groundbreaking Way to Restore Balance, Happiness and Productivity. Hanover Square Press.
- Mark, G., Gudith, D., & Klocke, U. (2008). "The Cost of Interrupted Work: More Speed and Stress." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 107-110.
- Carr, N. (2010). The Shallows: What the Internet Is Doing to Our Brains. W.W. Norton.
- Wolf, M. (2018). Reader, Come Home: The Reading Brain in a Digital World. Harper.
- Ophir, E., Nass, C., & Wagner, A. (2009). "Cognitive Control in Media Multitaskers." Proceedings of the National Academy of Sciences, 106(37), 15583-15587.
- Strayer, D., Drews, F., & Crouch, D. (2006). "A Comparison of the Cell Phone Driver and the Drunk Driver." Human Factors, 48(2), 381-391.
- Ward, A., Duke, K., Gneezy, A., & Bos, M. (2017). "Brain Drain: The Mere Presence of One's Own Smartphone Reduces Available Cognitive Capacity." Journal of the Association for Consumer Research, 2(2), 140-154.
- Kaplan, S. (1995). "The Restorative Benefits of Nature: Toward an Integrative Framework." Journal of Environmental Psychology, 15(3), 169-182.
- Crawford, M. (2015). The World Beyond Your Head: On Becoming an Individual in an Age of Distraction. Farrar, Straus and Giroux.
- Lavie, N. (2005). "Distracted and Confused? Selective Attention Under Load." Trends in Cognitive Sciences, 9(2), 75-82.
- Mangen, A., Walgermo, B., & Bronnick, K. (2013). "Reading Linear Texts on Paper Versus Computer Screen: Effects on Reading Comprehension." International Journal of Educational Research, 58, 61-68.
- Berman, M., Jonides, J., & Kaplan, S. (2008). "The Cognitive Benefits of Interacting with Nature." Psychological Science, 19(12), 1207-1212.
Frequently Asked Questions
Is the '8-second attention span' statistic real?
No. The claim that humans now have an average attention span of 8 seconds — shorter than a goldfish — originated in a 2015 Microsoft Canada marketing report and was not peer-reviewed scientific research. The report itself had serious methodological problems: it conflated different measures of attention, used a non-scientific survey, and did not clearly define what 'attention span' meant. Cognitive scientists and attention researchers were quick to point out that 'attention span' is not a single unified cognitive trait that can be measured in seconds in a meaningful way. Attention varies enormously depending on task type, interest level, environmental conditions, and individual differences. The statistic spread rapidly in media because it was quotable and confirmed existing anxieties about technology, not because it was scientifically valid.
How has digital technology changed how we pay attention?
Research does find changes in attention habits, even if not in fundamental attention capacity. Gloria Mark, a professor at the University of California Irvine who has studied workplace attention for over two decades, found that it takes an average of approximately 23 minutes to return to a task with full focus after an interruption. Her research has also tracked how the frequency of self-interruptions — people voluntarily switching tasks — has increased with smartphone and notification culture: in her studies, people switch tasks approximately every three to five minutes on average. Nicholas Carr's 'The Shallows' (2010) argued, drawing on neuroscience, that the internet trains the brain toward skimming and scanning rather than deep linear reading. These are real documented changes in behavior, though whether they represent permanent capacity changes or modifiable habits is debated.
What does research say about multitasking?
The research is clear: humans cannot truly multitask on cognitive tasks. What we call multitasking is rapid task-switching, and it carries significant costs. David Strayer at the University of Utah has studied the cognitive costs of divided attention extensively, finding that driving while talking on a handheld or hands-free mobile phone impairs driving performance to a degree comparable to legal intoxication. Eyal Ophir, Clifford Nass, and Anthony Wagner at Stanford published research (2009) finding that heavy media multitaskers — people who frequently use multiple media simultaneously — performed worse than light multitaskers on a range of cognitive tasks including filtering irrelevant information and task-switching. This was counterintuitive: people who multitask frequently were worse at multitasking than those who didn't. The interpretation is that heavy multitasking trains susceptibility to distraction.
Does reading on screens affect comprehension vs paper?
Research is mixed but generally finds modest advantages for paper in retention and deep comprehension, particularly for complex or long texts. Anne Mangen and colleagues in Norway have conducted several studies finding lower text comprehension and worse spatial reconstruction of narrative for screen readers compared to paper readers. Naomi Baron's survey research found that students themselves reported comprehending more carefully on paper. However, screen reading research is complicated by confounds: reader habits, familiarity with the medium, task instructions, and whether annotation is possible all affect outcomes. The differences found in most studies are modest and may reflect current habits rather than inherent medium effects. As digital annotation tools improve and readers adapt to screens over generations, the gap may narrow.
What is happening to deep reading ability?
Maryanne Wolf, a cognitive neuroscientist at UCLA and author of 'Reader, Come Home' (2018), has raised concern about the impact of screen-based reading habits on the brain circuits that support deep reading — the slow, analytical, empathetic engagement with complex text that she argues is central to critical thinking and moral imagination. Wolf's argument, based on developmental research on reading neuroscience, is that deep reading is not automatic; it requires sustained practice, and the brain circuits it depends on are shaped by the type of reading habitually practiced. If screen habits train skimming and interrupted reading, the neural substrate for deep reading may weaken. This hypothesis is supported by plausible neuroscience but is difficult to test directly, since doing so would require long-term longitudinal studies. Wolf has also written about 're-training' the reading brain through deliberate practice.
Can attention skills be improved?
Yes. Research on attention training provides cautious optimism. Mindfulness meditation, which trains the ability to return attention to a chosen object after distraction, has been studied extensively and shows measurable effects on attentional control in multiple randomized controlled trials — though effect sizes are often modest. Attention restoration theory, developed by Stephen and Rachel Kaplan, finds that exposure to natural environments restores directed attentional capacity after fatigue. Extended periods without digital interruption — designated focus time, device-free periods — are supported by research on the costs of context switching. Reading long-form text, practiced regularly, may maintain or strengthen the neural circuits for sustained attention. The distinction between attentional capacity (how much focused attention is possible) and attentional habits (how frequently attention is deployed on demanding tasks) is important: habits are more modifiable than capacity.
What environments best support sustained attention?
Research converges on several environmental factors. Reduced interruption: even the presence of a smartphone on a desk, face down and silenced, reduces cognitive performance in tasks requiring concentration — a finding by Adrian Ward and colleagues (2017) at the University of Texas, Austin. The act of suppressing the impulse to check the phone consumes cognitive resources. Natural environments: the Kaplans' attention restoration theory finds that natural settings (green spaces, water) restore directed attention more effectively than urban environments. Cal Newport's 'Deep Work' (2016) synthesizes cognitive and computer science research to argue that sustained concentration is an increasingly rare and valuable skill requiring specific practice and environmental conditions. Scheduled, protected focus periods with notifications disabled are the most practically supported intervention.