In 1971, Herbert Simon, the economist who would win the Nobel Memorial Prize in 1978, wrote a sentence that has become one of the most prescient in economics: "A wealth of information creates a poverty of attention." He was writing about organizational decision-making, about the problems that arise when managers are flooded with reports, memos, and data they cannot possibly process. He could not have known he was describing the defining condition of digital life for billions of people fifty years later.

Simon's observation was a throwaway line in a chapter he contributed to the 1971 volume Computers, Communications and the Public Interest. He was not trying to coin a slogan. He was making a structural point about how systems organize around whatever is scarce: if there is more information than anyone can read, the bottleneck in decision-making is not information but attention. Attention becomes the resource that determines outcomes. In a world of information abundance, attention is the scarce good, and whoever controls it controls the levers of power.

What Simon could not anticipate was that within a generation, the competition for human attention would become the primary organizing logic of the most powerful companies in history. By 2023, Google and Meta together took in approximately $350 billion in annual advertising revenue. The mechanism of that revenue is the capture and sale of human attention. Every minute you spend on a platform is a minute of attention that can be packaged, quantified, and auctioned to an advertiser. The implications of that business model for how platforms are designed, what content they amplify, and what they do to the people who use them have become one of the defining questions of the twenty-first century.

"A wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it." — Herbert A. Simon, Computers, Communications and the Public Interest (1971)


Key Definitions

Attention economy: An economic system in which human attention, rather than money or physical goods, is the primary scarce resource being competed for, captured, and exchanged.

Information overload: The condition in which the volume of available information exceeds an individual or system's capacity to process it, identified by Simon as creating attention scarcity.

Scarcity of attention: The economic premise that because human attention is finite and cannot be expanded, it behaves as a scarce resource subject to market competition.

Surveillance capitalism: Shoshana Zuboff's term for the economic logic in which behavioral data is extracted from users, processed into prediction products, and sold to advertisers seeking to influence behavior.

Behavioral modification: The use of interface design, recommendation systems, and targeted content to change users' actual behavior, not merely predict it.

Intermittent reinforcement: A behavioral conditioning schedule (Skinner's variable ratio schedule) in which rewards are delivered unpredictably, producing persistent compulsive behavior; the schedule underlying slot machines and many social media design patterns.

Notification capitalism: The practice of using push notifications as an attention-capture tool, interrupting users with information optimized for re-engagement rather than relevance.

Dark patterns: Interface design choices that manipulate users into actions they would not take with full awareness — deceptive opt-outs, hidden costs, guilt-inducing label text.

Infinite scroll: A design pattern, invented by Aza Raskin, that removes the natural endpoint of a page, keeping users in a continuous feed without a decision point at which they might choose to stop.

Outrage gradient: The empirical finding that content expressing moral outrage spreads further and faster through social networks than content expressing other emotions, creating a systematic bias in algorithmically optimized feeds toward inflammatory material.

Filter bubble: Eli Pariser's term for the personalization effect by which recommendation algorithms show users content predicted to reinforce existing preferences, potentially narrowing exposure to challenging perspectives.

Engagement optimization: The practice of tuning content recommendation and platform design to maximize measurable engagement signals (time-on-platform, click-through rates, shares, comments) regardless of whether that engagement correlates with user well-being.

Zuboff's behavioral futures market: Shoshana Zuboff's concept of the market in which platforms sell prediction products — wagers on future user behavior — to advertisers, created from behavioral surplus data extracted from users.


Simon's Insight and the Making of the Attention Economy

Herbert Simon's 1971 observation was a structural insight, not a prediction. He was describing a general principle of systems: resources organize around scarcity. As technology produces abundance of anything — food, energy, information — the bottleneck shifts to whatever the system still lacks. In 1971, the shift he was identifying was from an information-scarce world to an information-abundant one. In such a world, the limiting resource for decision-making would be the human capacity to attend to information.

Twenty-six years later, a writer named Michael Goldhaber published an essay in Wired magazine that took Simon's structural observation and turned it into a theory of a new kind of economy. The year was 1997, the commercial web was only a few years old, and Goldhaber argued that the internet was not simply a new distribution medium for existing goods. It was creating the conditions for an entirely new economic system in which attention (not money, not physical goods) was the fundamental unit of exchange. Performers, writers, and platform builders were not just selling products; they were accumulating attention as a store of value. Being attended to was itself the good.

Goldhaber's essay was largely ignored at the time. Within a decade it would read like a precise description of what Google, YouTube, Facebook, and Twitter had become.

How the Attention Business Model Works

The business model of advertising-funded platforms is not complicated in principle, though it is complex in operation. Platforms provide services to users for free: search, social networking, email, video, mapping. The cost of these services is covered by selling advertising. Advertisers pay to show their messages to users. The more users a platform has, and the more time those users spend on the platform, the more advertising inventory the platform can sell and the more it can charge for it.

This creates the fundamental incentive structure of the attention economy: platforms maximize revenue by maximizing time-on-platform, regardless of what users are doing during that time or how they feel afterward. The engagement metric — the measure of how much users interact with a platform — functions as a proxy for captured attention. More engagement means more ad revenue. Whether that engagement is positive (people enjoying content) or negative (people outraged by content) is irrelevant to the revenue calculation.
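
The arithmetic of this model is simple enough to sketch. The following toy calculation (all parameter names and numbers are hypothetical, chosen only to illustrate the incentive, not taken from any real platform) shows why time-on-platform is the variable everything else is tuned around: revenue scales linearly with minutes of captured attention, and nothing in the formula distinguishes how those minutes felt.

```python
# Toy model of ad-funded platform revenue. All numbers are hypothetical.
# Revenue = impressions served x price per impression, and impressions
# scale with minutes of attention captured.

def daily_ad_revenue(users: int,
                     minutes_per_user: float,
                     ads_per_minute: float = 0.5,
                     cpm_dollars: float = 8.0) -> float:
    """CPM = cost per 1,000 impressions, the standard ad pricing unit."""
    impressions = users * minutes_per_user * ads_per_minute
    return impressions / 1000 * cpm_dollars

# Doubling time-on-platform doubles revenue. The model has no term for
# user well-being, or for whether the minutes came from joy or outrage.
print(daily_ad_revenue(1_000_000, 30))  # $120,000/day at 30 min/user
print(daily_ad_revenue(1_000_000, 60))  # $240,000/day at 60 min/user
```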

The product, in the famous formulation, is not the service. The product is you. Specifically: the advertisers are the customers, and the attention of users is what is being sold to them. This is not merely a provocative framing — it is an accurate description of the economic transaction. Users are the raw material whose processing generates the valuable output: eyeballs in front of ads.

The perverse incentive structure this creates is well understood. A platform that cares only about engagement has no economic reason to distinguish between engagement driven by joy, curiosity, connection, or habit, and engagement driven by outrage, anxiety, compulsion, or addiction. All engagement looks the same to the revenue model.


The Engineering of Compulsion

Skinner's Variable Ratio Schedule

B.F. Skinner's mid-twentieth century research on operant conditioning identified the variable ratio reinforcement schedule as the most powerful mechanism for producing persistent behavior. In a variable ratio schedule, rewards are delivered after an unpredictable number of responses — sometimes after the first pull of a lever, sometimes after the fiftieth, with no pattern the subject can learn. This unpredictability is precisely what makes the behavior so resistant to extinction. Slot machines use exactly this schedule. So do most social media platforms, whether by design or by structural convergence.

When you pull down to refresh your social feed, sometimes there is something rewarding — a message, a like, an interesting post. Often there is nothing new. You do not know which it will be. The uncertainty is the mechanism. The app has become a slot machine: unpredictable reward delivery producing compulsive checking behavior.
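
The schedule is easy to state formally: each response is rewarded with some fixed probability, so the number of responses between rewards is unpredictable (geometrically distributed) and there is no pattern to learn. A minimal simulation, purely illustrative, makes the point:

```python
import random

def pull_to_refresh(reward_probability: float = 0.1) -> bool:
    """One refresh: a reward arrives with fixed probability, so the gap
    between rewards is unpredictable -- the variable ratio schedule."""
    return random.random() < reward_probability

random.seed(42)  # for a reproducible illustration
gaps, since_last = [], 0
for _ in range(10_000):
    since_last += 1
    if pull_to_refresh():
        gaps.append(since_last)
        since_last = 0

print(f"mean pulls per reward: {sum(gaps) / len(gaps):.1f}")  # about 10
print(f"first ten gaps: {gaps[:10]}")  # no learnable pattern
```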

The Like Button and Infinite Scroll

The like button, introduced by Facebook in 2009, transformed social media by making social approval instantaneously quantifiable and publicly visible. Former Facebook president Sean Parker has said the design was explicitly intended to exploit human psychological vulnerabilities: "How do we consume as much of your time and conscious attention as possible?" The like button created a direct feedback loop between users' posts and their social reward circuits, encouraging more posting, more checking, and more emotional investment in numerical scores of approval.

Aza Raskin, who invented infinite scroll in 2006, described the design intent as purely practical: he wanted a better reading experience than pagination. The result was a mechanism that removed the natural decision point at which a user reaches the bottom of a page and must actively choose to turn to the next one. Infinite scroll converts that active choice into the absence of a reason to stop. Raskin has since publicly apologized, estimating that his invention wastes approximately 200,000 hours of human time per day.

Tristan Harris, who spent years as a design ethicist at Google before resigning to found the Center for Humane Technology, has described the collective effect of these design choices as "a race to the bottom of the brain stem" — a competition among platforms to see which can most effectively access ancient neural reward circuits, bypassing deliberate choice. The implication is not that individual designers are malicious but that a business model rewarding engagement, operating through competitive pressure, systematically selects for designs that exploit psychological vulnerabilities.


Surveillance Capitalism

Shoshana Zuboff's 2019 book The Age of Surveillance Capitalism provides the most comprehensive theoretical account of the economic logic underlying the attention economy. Zuboff argues that Google and Facebook pioneered a new economic logic — distinct from earlier forms of capitalism — in which behavioral data is the key raw material.

The process she describes has several stages. First, behavioral surplus is extracted: data generated as a byproduct of your activity online — what you search for, what you click, how long you hover, who you communicate with, where you go — beyond what is needed to provide you with the service. Second, this surplus is fed into machine learning systems that produce prediction products: assessments of your likely future behavior. Third, these prediction products are sold in behavioral futures markets to advertisers and other customers who want to influence what you do. Fourth — and this is Zuboff's most important and contested claim — the system does not merely predict behavior, it modifies it. Platforms use what they know about you to subtly reshape your choices and actions toward behaviors that generate more data and more purchases.
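
Read as a data pipeline, Zuboff's stages can be sketched in a few lines. Everything below is hypothetical and schematic (every function name and field is invented); it illustrates the logic of her account, not the internals of any real platform.

```python
# Schematic of Zuboff's four stages. All names and fields are invented.

def extract_surplus(events: list[dict], service_fields: set[str]) -> list[dict]:
    """Stage 1: retain the behavioral residue beyond what the service
    itself needs (hover times, scroll depth, location traces, ...)."""
    return [{k: v for k, v in e.items() if k not in service_fields}
            for e in events]

def prediction_product(surplus: list[dict]) -> dict:
    """Stage 2: in Zuboff's account this is large-scale machine learning;
    here, a trivial stand-in that scores purchase propensity."""
    attention = sum(e.get("hover_ms", 0) for e in surplus)
    return {"purchase_propensity": min(1.0, attention / 10_000)}

def behavioral_futures_auction(product: dict, bids: dict[str, float]) -> str:
    """Stage 3: sell the prediction to the highest-bidding advertiser."""
    return max(bids, key=bids.__getitem__)

events = [{"query": "running shoes", "hover_ms": 3_200, "geo": "..."},
          {"query": "marathon plan", "hover_ms": 5_400, "geo": "..."}]
surplus = extract_surplus(events, service_fields={"query"})
winner = behavioral_futures_auction(prediction_product(surplus),
                                    bids={"brand_a": 0.40, "brand_b": 0.55})
# Stage 4 (the contested claim): the winner's content is then placed so
# as to nudge actual behavior, closing the loop prediction -> modification.
```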

The distinction between predicting and modifying behavior is crucial. A traditional advertiser tries to reach people who are already likely to want a product. A surveillance capitalist does not merely identify likely buyers — it works to create them, by presenting information, social cues, and choice environments designed to nudge people toward specific actions.

Zuboff's framework has been criticized for overstating the precision and effectiveness of behavioral modification (most targeted advertising is far less precise than its sellers claim), but her core observation about the economic logic — behavioral surplus as raw material for prediction products — accurately describes what large platforms are doing.


Cognitive and Social Costs

Attention and Cognitive Function

The claim that social media use has reduced the average attention span to eight seconds, shorter than that of a goldfish, circulated widely after a 2015 Microsoft Canada report. The report was methodologically weak, and the claim has been largely debunked by attention researchers. David Strayer, who directs an attention research lab at the University of Utah and has conducted rigorous studies of human attention for decades, finds no evidence of a dramatic reduction in human attention span attributable to technology. What research does find is that chronic multitasking, the pattern of switching between tasks and devices that smartphones enable, is associated with poorer performance on tasks requiring sustained focus, though whether this represents reduced capacity or changed habits is an open question.

Adolescent Mental Health

The most significant and contested evidence of social media harm concerns adolescent mental health. Jean Twenge's 2017 book iGen documented a sharp increase in depression, anxiety, and loneliness among American teenagers beginning around 2012, precisely when smartphone ownership among teens crossed 50%. Twenge argued the correlation was causal. Jonathan Haidt and Twenge extended this analysis in multiple papers and a 2021 article, pointing to cross-national evidence showing parallel trends in countries where social media adoption followed similar trajectories.

Joseph Firth and colleagues' 2019 meta-analysis in World Psychiatry, synthesizing data from 13 longitudinal studies covering 22,702 participants, found consistent associations between social media use and depression and anxiety, particularly in adolescents. However, the effect sizes were modest, causality remained difficult to establish, and significant heterogeneity across studies complicated interpretation.

Andrew Przybylski at the Oxford Internet Institute has repeatedly tested the size of these associations using large pre-registered studies and found the effect sizes to be very small, on the order of 0.05 to 0.15 in correlation terms, comparable in magnitude to the association between wearing glasses and depression. Przybylski and Amy Orben published influential work arguing that the evidence base for a causal link between social media use and mental health harm was insufficient to justify specific policy interventions with confidence.
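
To see how small those correlations are in practical terms, square them: the squared correlation is the share of variance in well-being that social media use statistically accounts for.

$$ r^2 = (0.05)^2 = 0.0025 \qquad \text{to} \qquad r^2 = (0.15)^2 = 0.0225 $$

That is, somewhere between roughly 0.25% and 2.25% of the variation in the outcome, which is why Przybylski and Orben argue these associations cannot carry strong policy conclusions on their own.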

Haidt and Allen's 2020 commentary in Nature, "Scrutinizing the effects of digital technology on mental health," surveyed analyses of the same large datasets that had reached conflicting conclusions and called for better data and more rigorous, open methods. The research community remains genuinely divided, with the weight of evidence suggesting a real but small association between heavy social media use and worse mental health outcomes, particularly for adolescent girls.


The Political Economy of Attention

The Outrage Gradient

Perhaps the most consequential finding for understanding the political effects of the attention economy comes from a 2017 study by William Brady and colleagues in the Proceedings of the National Academy of Sciences. The researchers analyzed more than 560,000 tweets about polarizing political issues (gun control, same-sex marriage, climate change), using machine learning to identify the presence of moral and emotional language. The finding was stark: each additional moral-emotional word in a tweet increased its retweet rate by approximately 20%. Content expressing outrage, indignation, and moral condemnation spread systematically faster and further than content expressing other emotions.
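
Because the reported effect is per word, it compounds multiplicatively: k moral-emotional words scale expected diffusion by roughly 1.2^k. A back-of-envelope sketch (the baseline retweet count is hypothetical):

```python
# Compounding of the ~20%-per-word effect reported by Brady et al. (2017).
# The baseline of 100 retweets is hypothetical.

BOOST_PER_WORD = 1.20  # ~20% higher retweet rate per moral-emotional word

def expected_retweets(baseline: float, moral_emotional_words: int) -> float:
    return baseline * BOOST_PER_WORD ** moral_emotional_words

for k in range(5):
    print(k, round(expected_retweets(100, k), 1))
# 0 100.0 | 1 120.0 | 2 144.0 | 3 172.8 | 4 207.4
# Three or four such words roughly double a message's expected spread.
```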

The implication for platform design is direct. Engagement-optimizing algorithms are structurally biased toward outrage-generating content — not because anyone designed them to produce political conflict, but because outrage drives the engagement metric the algorithm is optimizing for. Anger is the most viral emotion on social platforms, and platforms that optimize for virality structurally amplify anger.
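
The structural nature of the bias is worth making concrete. In the deliberately simplified ranker below (all fields and weights are hypothetical), the objective mentions only predicted engagement; outrage is amplified solely because, empirically, it predicts engagement, which is exactly the mechanism the research describes.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    relevance: float      # topical-quality score, 0..1
    outrage_score: float  # from a text classifier, 0..1

def predicted_engagement(p: Post, outrage_weight: float = 0.8) -> float:
    # A model fit to engagement data learns a positive weight on outrage
    # because outraged users click, share, and comment more.
    return p.relevance + outrage_weight * p.outrage_score

def rank_feed(posts: list[Post]) -> list[Post]:
    # Nothing here inspects *why* a post engages; fury and joy score alike.
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = [Post("measured explainer", relevance=0.9, outrage_score=0.0),
        Post("inflammatory take", relevance=0.4, outrage_score=0.9)]
print([p.text for p in rank_feed(feed)])
# ['inflammatory take', 'measured explainer'] -- outrage outranks quality
```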

Recommendation Systems and Radicalization

Research by Ribeiro and colleagues auditing radicalization pathways on YouTube found evidence consistent with users migrating toward progressively more extreme content over time, a pattern the authors linked to recommendation dynamics that favor more stimulating and sensational material. YouTube has contested both the methodology and the implications of this research, and subsequent studies have produced mixed results.

Bail and colleagues' 2018 study in the Proceedings of the National Academy of Sciences recruited Republican and Democratic Twitter users and assigned them to follow bots that retweeted content from political leaders on the other side. Contrary to the premise that more exposure to opposing views would moderate polarization, participants exposed to opposing-party content became more extreme in their own views; the effect was substantial and statistically significant for Republicans, and smaller, not reaching significance, for Democrats. The backfire effect, increased polarization from cross-partisan exposure, appeared to be real in this experimental setting.

The aggregate picture from this research is of a feedback system: platforms reward engagement, outrage drives engagement, algorithms amplify outrage, users become more polarized, polarized users generate more outrage, and the cycle continues. The system does not manufacture political conflict from scratch, but it selects for its most inflammatory expressions and distributes them at scale.


Resistance and Alternatives

The Digital Wellbeing Debate

The Digital Wellbeing movement, promoted by tech companies including Google and Apple through screen time dashboards built into their operating systems, reflects growing recognition of the problem at the companies most responsible for it. Critics point out that voluntary screen time tools offered by the same companies whose business model depends on maximizing screen time are unlikely to be optimized for user well-being.

Andrew Przybylski's association studies, while cautioning against overstated claims of harm, consistently find small negative associations between heavy social media use and well-being — associations large enough to be worth taking seriously even if not large enough to justify dramatic regulatory responses. His work suggests that moderate use may have neutral or even positive effects on social connection, with harms concentrated in heavy and passive use patterns.

Subscription vs Advertising Models

The most structural alternative to the attention economy is the subscription model, which aligns platform revenue with user satisfaction rather than with time-on-platform. Substack, which allows writers to charge readers directly, and the Financial Times and the Economist, which have built profitable digital businesses on subscription revenue, demonstrate that alternative business models are viable. The limitation is access: subscription models create a two-tier information environment in which quality information is accessible only to those who can afford it, potentially deepening existing inequalities in civic knowledge.

Regulatory Approaches

The EU's Digital Services Act, which came into force in 2022 for very large platforms, represents the most significant regulatory response to date. It requires platforms to provide users with recommendation options not based on behavioral profiling, prohibits targeted advertising to minors, requires transparency about recommendation systems, and restricts certain dark patterns. The US has no equivalent federal legislation, though the FTC and state attorneys general have taken enforcement actions against specific practices.

Whether regulation can meaningfully address the structural incentives of the attention economy — or whether those incentives require more fundamental changes to platform business models — remains an open and consequential question.


References

  • Simon, H.A. (1971). Designing Organizations for an Information-Rich World. In M. Greenberger (Ed.), Computers, Communications and the Public Interest. Johns Hopkins University Press.
  • Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
  • Brady, W.J., Wills, J.A., Jost, J.T., Tucker, J.A., & Van Bavel, J.J. (2017). Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences, 114(28), 7313-7318. doi:10.1073/pnas.1618923114
  • Firth, J., Torous, J., Stubbs, B., Firth, J.A., Steiner, G.Z., Smith, L., ... & Sarris, J. (2019). The "online brain": how the Internet may be changing our cognition. World Psychiatry, 18(2), 119-129. doi:10.1002/wps.20617
  • Haidt, J., & Allen, N.B. (2020). Scrutinizing the effects of digital technology on mental health. Nature, 578(7794), 226-227. doi:10.1038/d41586-020-00296-x
  • Bail, C.A., Argyle, L.P., Brown, T.W., Bumpus, J.P., Chen, H., Hunzaker, M.B.F., ... & Volfovsky, A. (2018). Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences, 115(37), 9216-9221. doi:10.1073/pnas.1804840115
  • Twenge, J.M. (2017). iGen: Why Today's Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy — and Completely Unprepared for Adulthood. Atria Books.

See also: Why political polarization increases, How social media rewires the brain, Why we get bored

Frequently Asked Questions

What is the attention economy and who coined the term?

The attention economy is the idea that human attention is a scarce economic resource that can be captured, quantified, and sold. When information is abundant, the thing that becomes scarce is the attention needed to process it. The phrase is usually attributed to economist Herbert Simon, who wrote in 1971 that 'a wealth of information creates a poverty of attention.' But the term 'attention economy' as a distinct economic system was most clearly articulated by Michael Goldhaber in a 1997 essay in Wired magazine. Goldhaber argued that as the internet made information nearly free and infinitely abundant, the real economy would reorganize around capturing and holding human attention rather than around producing physical goods or even information itself. This turned out to be remarkably prescient. The business models of Google, Facebook, YouTube, TikTok, and virtually every major platform company are built on selling advertisers access to human attention. The product is not the content or the service — both are often provided free of charge. The product is you: specifically, the time and mental engagement you give to a screen. Understanding this basic structure is essential to understanding why these platforms behave the way they do. The design choices that make apps feel compulsive — the notifications, the infinite scroll, the algorithmically optimized content feeds — are not accidents or side effects. They are engineering decisions made in the service of a business model that requires maximizing time-on-platform above all other considerations.

What is surveillance capitalism and how does it differ from ordinary advertising?

Surveillance capitalism is a term coined by Harvard Business School professor Shoshana Zuboff in her 2019 book 'The Age of Surveillance Capitalism.' It refers to a specific economic logic in which companies collect behavioral data — what you click, how long you pause, what you search for, who you communicate with, where you go — and transform it into prediction products that are sold to advertisers. The critical distinction from traditional advertising is what is actually being sold. Traditional advertising sells exposure: show a million people an ad for a car. Surveillance capitalism sells prediction: show this specific person an ad for a car at the precise moment their behavioral signals indicate they are most likely to buy one. But Zuboff argues the system goes further still. The ultimate product is not prediction but behavioral modification — actually changing what people do, not just predicting what they will do. Platforms do not merely observe behavior; they use what they observe to subtly reshape it, nudging users toward behaviors that generate more data and more purchases. This is achieved through feed curation, notification timing, content recommendation, and interface design. Zuboff describes the behavioral data extracted from users as 'behavioral surplus' — data generated as a byproduct of your activity online that you did not consent to provide and receive no compensation for. This surplus is the raw material for a new asset class: behavioral futures markets, in which your predicted future actions are bought and sold before you have taken them.

What is intermittent reinforcement and why do social media apps use it?

Intermittent reinforcement is a behavioral conditioning principle identified by psychologist B.F. Skinner in his work on operant conditioning. Skinner found that behavior is most powerfully maintained not by rewarding it every time it occurs, but by rewarding it unpredictably and intermittently. This is the variable ratio reinforcement schedule, and it is the most effective schedule for producing persistent, compulsive behavior; it is the same schedule that makes slot machines so difficult to stop using. Social media apps exploit this principle throughout their design. When you refresh your feed or open an app, sometimes you get something rewarding (a new message, a compliment, an interesting post) and sometimes you get nothing. The unpredictability of the reward is precisely what makes the behavior so hard to extinguish. The 'pull to refresh' gesture on smartphones has been compared to a slot machine lever by design critics, among them Aza Raskin, who invented infinite scroll and has since publicly apologized for the harm he believes it has caused. Notification systems are similarly designed: the delay between an event and a notification can be tuned to maximize the sense of anticipation and reward. Tristan Harris, a former design ethicist at Google who left to found the Center for Humane Technology, has described this approach as 'a race to the bottom of the brain stem': a competition among apps to see which can most effectively hijack ancient neural reward systems. The concern is not simply that apps are addictive in some casual sense, but that they are deliberately engineered to be so.

Does social media actually damage mental health, and how strong is the evidence?

The evidence is real but more contested than media coverage often suggests. The strongest case for harm comes from research on adolescents, particularly girls. Jean Twenge's 2017 book 'iGen' documented a sharp increase in teen depression and anxiety beginning around 2012, precisely when smartphone adoption among teenagers reached majority levels. She argued this was not coincidental. Jonathan Haidt and others have made similar arguments, pointing to cross-national data showing parallel trends in countries that adopted social media at similar times. A 2019 meta-analysis by Firth and colleagues in World Psychiatry, drawing on 13 studies and covering 22,702 participants, found associations between social media use and depression and anxiety, though with important caveats about causality and heterogeneity. However, researchers such as Andrew Przybylski at the Oxford Internet Institute have argued that effect sizes in these studies are typically very small (comparable to the association between wearing glasses and depression) and that methodological weaknesses make causal conclusions premature. In a 2020 commentary in Nature, Haidt and Allen noted that analyses of the same large data sets had produced inconsistent results and called for better data and open methods. The most defensible current position is that heavy social media use is associated with worse mental health outcomes, particularly for adolescent girls, but the effect sizes are modest, causality is difficult to establish, and individual variation is enormous. The debate is ongoing, and the stakes, for policy and for millions of users, are high.

What is the outrage gradient and how do algorithms amplify political conflict?

The outrage gradient refers to the empirical finding that content expressing moral outrage travels further and faster through social networks than content expressing other emotions. The foundational study was published by William Brady and colleagues in 2017 in the Proceedings of the National Academy of Sciences. Analyzing more than 560,000 tweets about polarizing political issues, Brady et al. found that each moral-emotional word added to a tweet increased its retweet rate by approximately 20%. The implication is that social media platforms, by optimizing for engagement, are structurally biased toward outrage-generating content. This is not a conspiracy; no one sat down and decided to make people angry. It is an emergent property of optimizing for the metric that best serves the advertising business model (engagement) without regard for what kind of engagement is being generated. Research by Ribeiro and colleagues audited radicalization pathways on YouTube and found evidence consistent with users migrating toward progressively more extreme content over time. Bail et al.'s 2018 study in the Proceedings of the National Academy of Sciences assigned Republican and Democratic Twitter users to follow bots promoting opposing viewpoints; rather than moderating views, exposure to the other side made participants more extreme, with the effect clearest among Republicans. The overall picture that emerges from this research is of a feedback loop: platforms reward engagement, outrage drives engagement, recommendation algorithms amplify outrage-producing content, users become more polarized, polarized users produce more outrage, and the cycle continues. The system does not create political conflict, but it selects for its most inflammatory expressions and distributes them at scale.

What are dark patterns and how do they manipulate user behavior?

Dark patterns are interface design choices that are intended to manipulate users into actions they would not take if they fully understood what was happening. The term was coined by UX designer Harry Brignull in 2010. Dark patterns include: 'roach motel' designs that make it easy to sign up for a service but extremely difficult to cancel; 'confirmshaming', which labels the 'no' option with guilt-inducing language ('No thanks, I don't want to save money'); hidden costs that appear only at checkout; misdirection that draws attention away from the action the company wants to hide; and permission dialogs engineered so that users grant access to data they have not consciously agreed to share. In the context of the attention economy, dark patterns serve the goal of maximizing time-on-platform. Autoplay, which loads the next video before the current one has finished, removes the decision point at which a user might choose to stop watching. Infinite scroll removes the natural pause that occurred when users reached the bottom of a page. Default notification settings, which must be actively disabled rather than actively enabled, ensure maximum interruption of users' attention. The EU's Digital Services Act, which came into force in 2022, explicitly prohibits online platforms from designing, organizing, or operating their interfaces in ways that deceive or manipulate users. The California Consumer Privacy Act similarly restricts certain deceptive interface designs. Regulatory attention to dark patterns has accelerated, but enforcement remains challenging.

Are there viable alternatives to the attention economy model?

Several alternatives exist, each with different trade-offs. The subscription model replaces advertising revenue with direct payment from users, removing the incentive to maximize engagement at the cost of well-being. Substack, which allows writers to charge readers directly, represents this approach. The Economist and the Financial Times have built subscription-based digital businesses that are profitable without optimizing for outrage or compulsion. The limitation is that subscription models exclude users who cannot afford to pay, potentially deepening information inequality. Cooperative or public interest models, in which platforms are owned by users or operated as public utilities, have been proposed by a range of scholars and technologists; Tim Berners-Lee's efforts to re-decentralize the web by giving users control of their own data point in a similar direction. The BBC's public broadcasting model (advertising-free, mission-driven) offers a precedent, though translating this to social media at scale is unproven. Regulatory approaches include mandatory data minimization (limiting the behavioral data that can be collected), prohibition of certain algorithmic recommendation practices, transparency requirements for how content is ranked, and interoperability mandates that would allow users to take their social graph to competing platforms, reducing lock-in. Researcher Andrew Przybylski has emphasized that the evidence for harm from existing platforms is not strong enough to justify specific interventions with confidence, and that well-intentioned reforms could have unintended consequences. The Digital Wellbeing movement, promoted by Google and Apple in their own operating systems, offers screen time monitoring and app limits, though critics note that these tools are offered voluntarily by the same companies whose business model depends on maximizing screen time.