The Automation Debate in Context
Every era of technological change produces predictions that machines will put people out of work — and every era has been partially right and substantially wrong. The Luddite rebellion of the 1810s protested textile machinery that genuinely destroyed the livelihoods of skilled weavers. The fears were valid. But employment did not collapse; it transformed. Agriculture employed 90% of the workforce in 1800 and under 2% today in developed countries — not because mass unemployment resulted, but because new forms of work emerged alongside each wave of technological displacement.
This historical context does not mean current automation is harmless or that displacement is always smooth. It means that the question "will automation take my job?" is less useful than "which tasks in my job will automate, which will be augmented, and which new tasks will be created?" The research on these questions is more specific — and more nuanced — than most popular coverage suggests.
The current wave differs from previous ones in at least one important respect: its breadth. Steam power displaced physical labor. The computer displaced clerical labor. Artificial intelligence, and particularly generative AI, is beginning to touch professional cognitive labor — the drafting, the analysis, the communication tasks that were long considered safe. Whether this broadening of automation's reach represents a genuine discontinuity or another iteration of the same historical pattern is the central unresolved question of contemporary labor economics.
What Automation Actually Is
Automation is the use of technology — mechanical, electronic, software, or AI-based — to perform tasks with reduced or eliminated human intervention. The definition spans a vast range: a dishwasher is automation; so is a robotic assembly line; so is an algorithm that approves loan applications; so is a language model generating text.
What distinguishes automation from mere tool use is the degree to which the system can initiate, direct, and complete tasks without human action at each step. A calculator requires a human to press every button; a spreadsheet formula executes automatically when inputs change; an automated accounting system processes transactions according to rules without human involvement in routine cases.
The economic significance of automation comes from two characteristics: scale (automated systems can process far more volume than humans for the same cost) and consistency (they make the same errors repeatedly rather than the varied errors humans make, which is often an improvement for high-volume low-stakes work).
A Taxonomy of Automation Types
Automation is not a monolithic category. Understanding the different types helps clarify which industries, roles, and tasks face meaningful exposure at any given moment.
Mechanical automation uses physical machinery to replace manual labor in repetitive physical tasks. The assembly line introduced by Henry Ford in 1913 is the canonical example. Modern robotics — including collaborative robots ("cobots") that work alongside humans — is the current frontier of this category.
Process automation uses software to execute defined sequences of digital tasks. Rule-based systems, scripts, and robotic process automation (RPA) software all fall here. The common thread is that the logic is explicit: the system does what it is programmed to do, in the order specified, with no interpretation required.
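The defining feature of process automation, explicit logic executed in a fixed order, can be shown in a few lines. This is a minimal sketch, not any particular RPA product: the function name, field names, vendor IDs, and thresholds are all hypothetical, invented for illustration.

```python
# Minimal sketch of rule-based process automation: the logic is explicit
# and hand-written, not learned from data. All names and thresholds here
# are hypothetical.

APPROVED_VENDORS = {"V-100", "V-200"}

def route_invoice(invoice: dict) -> str:
    """Apply fixed rules in a fixed order; no interpretation required."""
    if invoice["amount"] <= 0:
        return "reject"                    # malformed input
    if invoice["vendor_id"] not in APPROVED_VENDORS:
        return "escalate_to_human"         # exceptions go to human review
    if invoice["amount"] < 1_000:
        return "auto_pay"                  # routine case, no human step
    return "manager_approval"              # high value needs sign-off

print(route_invoice({"amount": 250, "vendor_id": "V-100"}))  # auto_pay
```

Note the design choice that recurs in real deployments: anything outside the anticipated rules is routed to a human rather than guessed at, which is the exception-handling discipline discussed later in this article.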
Cognitive automation uses machine learning and AI to handle tasks that require perception, classification, or judgment. Image recognition, natural language processing, and predictive modeling are the core technologies. Unlike process automation, cognitive automation systems are trained on data rather than explicitly programmed — they generalize from examples rather than following rules.
Generative AI automation — the newest category — uses large language models (LLMs) to produce text, code, images, and other content. This is the category that has most significantly shifted the automation debate in recent years, because it encroaches on tasks previously thought to require human creativity and expertise.
Frey and Osborne: What the 47% Study Actually Found
The most cited document in the automation-and-jobs debate is Carl Benedikt Frey and Michael Osborne's 2013 paper "The Future of Employment: How Susceptible Are Jobs to Computerization?" Released as an Oxford Martin School working paper, it estimated that 47% of US occupations were at high risk of automation.
That number has been widely quoted, widely misunderstood, and frequently misrepresented.
What the Study Actually Did
Frey and Osborne began by identifying three engineering bottlenecks that made tasks difficult to automate as of 2013:
- Perception and manipulation: Tasks requiring the recognition of irregular objects, fine motor skills in unstructured physical environments, or creative interpretation of sensory input
- Creative intelligence: Tasks requiring originality, artistic creation, or novel problem formulation
- Social intelligence: Tasks requiring negotiation, persuasion, care, and interpretation of social and emotional signals
They then assessed 702 occupations on their reliance on tasks requiring these capabilities, using O*NET occupational data: a subset of roughly 70 occupations was hand-labeled, and a probabilistic classifier extended those labels to the full set. Occupations that depended heavily on the bottleneck capabilities were classified as low risk; those with little reliance on them were classified as high risk.
The 47% figure refers to occupations classified as "high risk" — but the study explicitly stated this represented potential susceptibility "over some unspecified number of years" and that the timing was uncertain. The study was not predicting 47% unemployment.
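The core intuition of the classification can be caricatured in a few lines: an occupation's strongest bottleneck reliance is what protects it. This is a deliberate simplification of the study's actual method (which trained a probabilistic classifier on O*NET features); the occupations, 0-to-1 scores, and band thresholds below are invented for illustration.

```python
# Caricature of the Frey-Osborne logic: occupations that rely little on
# the three bottleneck capabilities score as high automation risk.
# Scores and thresholds are invented, not from the study.

BOTTLENECKS = ("perception_manipulation", "creative_intelligence",
               "social_intelligence")

occupations = {
    "data_entry_clerk": {"perception_manipulation": 0.1,
                         "creative_intelligence": 0.1,
                         "social_intelligence": 0.2},
    "plumber":          {"perception_manipulation": 0.9,
                         "creative_intelligence": 0.4,
                         "social_intelligence": 0.5},
    "school_teacher":   {"perception_manipulation": 0.3,
                         "creative_intelligence": 0.7,
                         "social_intelligence": 0.9},
}

def risk_band(scores: dict) -> str:
    # The strongest single bottleneck is what shields the occupation.
    reliance = max(scores[b] for b in BOTTLENECKS)
    if reliance < 0.3:
        return "high risk"
    if reliance < 0.7:
        return "medium risk"
    return "low risk"

for name, scores in occupations.items():
    print(name, "->", risk_band(scores))
```

Even this toy version makes the study's key limitation visible: the unit of analysis is the whole occupation, not its constituent tasks, which is exactly the criticism taken up below.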
Criticisms and Limitations
Subsequent economists raised several methodological concerns:
The occupation-vs-task distinction: An occupation might include 80% automatable tasks, but if the remaining 20% require human judgment, eliminating the human role entirely is economically suboptimal. Workers adapt by specializing in the non-automatable components.
Demand effects: Automation typically reduces prices, which increases demand, which partially or fully offsets displacement. A trucking company that automates route planning needs fewer logistics coordinators but more truck drivers to meet increased demand. Technology changes the equilibrium, not just one side of the equation.
New task creation: The Frey-Osborne framework cannot model the creation of new occupations that automation enables. The smartphone created the iOS developer, UX designer, and app product manager — none of which existed before 2007.
Geographic and distributional blindness: Even if aggregate employment holds steady, the workers displaced and the workers created are not the same people in the same places. A factory town where routine manufacturing automates does not automatically generate growth in non-routine cognitive work for those same workers. The macro aggregate can obscure severe local disruption.
McKinsey's Task-Based Analysis
McKinsey Global Institute's 2017 report "A Future That Works: Automation, Employment, and Productivity" took a different methodological approach, decomposing jobs into constituent tasks rather than treating occupations as units.
Their findings:
- Approximately 5% of occupations could be fully automated using currently demonstrated technology
- In about 60% of occupations, at least 30% of constituent tasks could be automated
- Activities technically susceptible to automation represented about half of the hours worked in the global economy
A follow-up McKinsey report in 2023, "The Economic Potential of Generative AI," significantly updated these estimates. The researchers found that generative AI, combined with other existing technologies, could automate work activities accounting for 60 to 70% of employees' time across occupations — up substantially from prior estimates, reflecting the expansion of automation capability into cognitive and language-based tasks.
| Analysis | Headline Figure | What It Actually Measures |
|---|---|---|
| Frey and Osborne (2013) | 47% of US jobs at risk | Occupational susceptibility to computerization |
| McKinsey (2017) | 5% fully automatable now | Full occupational replacement with current technology |
| McKinsey (2017) | 60% with 30%+ automatable tasks | Partial automation transforming job content |
| Arntz et al. / OECD (2016) | 9% at high risk | Stricter definition applied to task-level analysis |
| McKinsey (2023) | 60-70% of work time exposed | Generative AI and cognitive automation combined |
The divergence in estimates largely reflects different methodological choices about what counts as "at risk." All analysts agree that routine tasks are most susceptible; disagreement centers on how many jobs consist predominantly of routine tasks, and on the rate at which technology is advancing.
The Task-Level Analysis: What Actually Automates
The research consensus is clearer at the task level than the occupation level. David Autor's extensive research on labor market polarization identifies a fundamental distinction:
Routine tasks — which follow explicit rules that can be encoded in software — are highly susceptible to automation. These include both:
- Routine cognitive tasks: data processing, bookkeeping, standard calculations, rule-application, document formatting, scheduling
- Routine physical tasks: repetitive manufacturing, assembly, sorting, and precise manipulation in structured, predictable environments
Non-routine tasks are far less susceptible:
- Non-routine cognitive (analytical): complex problem-solving, creative judgment, research, writing for novel contexts, expert diagnosis
- Non-routine cognitive (interpersonal): negotiation, persuasion, teaching, caregiving, managing people through conflict and change
- Non-routine physical: operating in unstructured environments, fine manipulation of irregular objects, skilled trades requiring constant adaptation
Autor's landmark finding is that the US labor market has shown polarization over the past 40 years: employment has grown in both high-wage non-routine cognitive jobs and low-wage non-routine manual jobs, while declining in middle-wage routine jobs. This is the fingerprint of task automation — not mass unemployment, but hollowing out of the middle.
As Autor emphasizes, the jobs at greatest risk are not necessarily the least skilled; they are the most routine, and routineness is a distinct dimension from skill level.
Importantly, Autor's work shows that technological change has been a complement to non-routine cognitive tasks — it increases the productivity and therefore the economic value of workers doing complex analytical and interpersonal work. The story is not simply "automation destroys jobs." It is "automation destroys some tasks, augments others, and creates new demand for activities that complement the technology."
Which Specific Jobs Are Most and Least at Risk
High Risk
Jobs consisting predominantly of routine tasks in structured environments are most vulnerable:
- Data entry clerks and similar transcription-based roles
- Bookkeepers and accounting clerks (though not accountants — complex judgment persists)
- Routine manufacturing operatives performing repetitive assembly in structured facilities
- Toll booth operators and similar transaction-based service roles
- Basic customer service for standard, script-following interactions
- Radiological image analysis (AI now matches or approaches radiologist accuracy on certain narrow diagnostic tasks, though the radiologist role evolves rather than disappears)
- Paralegals and legal clerks involved in document review and contract summarization
- Basic financial analysis that involves pattern recognition in structured data sets
Low Risk
Jobs requiring the bottleneck capabilities Frey and Osborne identified are most resilient:
- Skilled trades (plumbers, electricians, carpenters) — require manipulation in highly unstructured physical environments
- Healthcare practitioners at high judgment levels — diagnosis, treatment planning, surgical adaptation
- Teachers and educators — social intelligence, relationship-building, adaptation to individual learners
- Managers and leaders — interpersonal coordination, motivation, navigating organizational politics
- Social workers and mental health professionals — requiring genuine human presence and relationship
- Skilled caregivers — where human emotional presence is intrinsic to the service value
The Middle Ground: Jobs That Are Transforming
A significant portion of professional roles fall into a third category: jobs that will neither be replaced nor remain unchanged, but that will be substantially transformed. The actual work changes in composition even when the job title and headcount remain stable.
Software engineers are a clear example. Generative AI tools can now write, explain, and debug code with considerable competence. Yet software engineer employment continues to grow. The explanation is that AI handles more of the mechanical code-generation work, while the engineer's time shifts toward system architecture, requirements clarification, code review, and higher-order design decisions. Productivity per engineer increases; the number of engineers organizations want to hire does not decrease proportionately.
The Generative AI Disruption
The emergence of large language models (LLMs) like GPT-4 and its successors has expanded the frontier of automation to include tasks previously considered safe: drafting text, writing code, creating images, answering questions, summarizing documents, and generating routine creative content.
This is a significant departure from the Frey-Osborne framework, which identified creative intelligence as a bottleneck. Generative AI erodes that bottleneck for some types of creative work — particularly work that is routine within its category (generic marketing copy, standard code templates, first-draft summaries).
A 2023 study by Erik Brynjolfsson, Danielle Li, and Lindsey Raymond at Stanford and MIT examined the impact of an AI assistant on customer service agents at a large software firm. They found that access to the AI tool increased worker productivity by an average of 14%, with the largest gains going to the least-experienced workers. The AI effectively compressed the experience curve — newer workers could access the accumulated knowledge patterns of the best performers. This is a cleaner empirical window into augmentation than most macro-level studies provide.
However, the disruption follows a similar pattern to other automation waves: augmentation before replacement. Lawyers use AI to process discovery documents more efficiently; programmers use Copilot to write boilerplate code faster; writers use AI to draft and then revise. Productivity increases; headcount adjusts gradually; the human role shifts toward judgment, curation, and higher-order tasks.
| Occupation | Primary LLM Exposure | Human Advantage |
|---|---|---|
| Paralegal | Document review, contract drafting | Novel legal judgment, client relationship |
| Copywriter | Routine ad copy, social content | Brand voice development, creative strategy |
| Junior programmer | Boilerplate code, bug fixes | Architecture, complex system design |
| Data analyst | Standard report generation | Insight framing, stakeholder communication |
| Customer support agent | Tier 1 FAQ resolution | Complex escalations, emotional situations |
| Medical coder | Routine classification from notes | Ambiguous case judgment, appeals |
| Translator (standard content) | Common language pairs, known formats | Literary translation, specialized domains |
The Historical Precedent: Augmentation as the Modal Outcome
The historical record strongly suggests that augmentation — technology making workers more productive without replacing them — is the modal outcome of automation, even when the technology is genuinely transformative.
When ATMs were introduced in the 1970s, the assumption was that bank teller employment would fall dramatically. Instead, teller employment grew substantially through the 1980s and 1990s. The ATM reduced the cost of running a branch, which led banks to open more branches, which required more tellers — but doing more relationship-intensive work. The ATM automated the cash dispensing; it augmented the overall banking service.
This pattern repeats across industries:
- Checkout scanners reduced the time per customer transaction, leading to more checkout lanes, lower prices, increased shopping volume, and cashier employment that remained stable or grew for decades after the technology was introduced.
- Spreadsheets made individual accountants dramatically more productive. The accounting profession grew. Demand for financial analysis expanded faster than productivity gains reduced headcount.
- Industrial robots reduced the per-unit labor required in manufacturing but also reduced costs and increased quality, expanding the markets for manufactured goods and shifting worker roles toward quality control, machine maintenance, and process supervision.
- CAD software transformed architecture and engineering. Drafting by hand was eliminated as a distinct occupation, but architects and engineers became more productive, and the profession grew.
None of this is automatic or painless. Displaced workers in specific roles face real hardship, particularly those with skills concentrated in the displaced tasks and limited geographic or skills mobility. The aggregate economic outcome and the distributional outcome can diverge substantially. The macroeconomic picture and the experience of a 55-year-old data entry worker in a region with no local reskilling infrastructure are very different realities.
Skills That Persist and Grow in Value
The research consistently identifies categories of skill that have grown in economic value as routine tasks have automated:
Non-routine interpersonal skills: Research by David Deming (2017) found that the share of jobs requiring social skills grew by nearly 12 percentage points between 1980 and 2012, and that wages grew faster in high-social-skill jobs than in low-social-skill jobs even controlling for education. Social skill complements cognitive skill; the combination is the hardest to replicate. Deming's data show that workers who are both cognitively skilled and high in social skill have experienced the strongest wage growth of any category over this period.
Complex problem-solving in ambiguous contexts: The ability to define a problem correctly, not just solve a well-specified one, is highly resistant to automation. This is distinct from applying a known procedure to a known problem type. When the problem structure itself is uncertain — when nobody knows what questions to ask, not just what answers to find — human judgment is essential.
Continuous learning capacity: As the half-life of specific technical skills shortens, the meta-skill of learning new domains quickly becomes increasingly valuable. Workers who can acquire new technical competencies repeatedly are less vulnerable to any single wave of displacement. This is not the same as general intelligence — it is a learned capacity for deliberate practice and knowledge transfer that some workers develop more systematically than others.
Cross-domain integration: Connecting insights from multiple domains — technical, interpersonal, contextual — to make judgments that no narrow specialist can replicate. This is increasingly the value proposition of experienced generalists and of roles like product management, strategy consulting, and general management.
Caregiving and human presence: A growing body of economic research suggests that tasks requiring genuine human physical and emotional presence will remain in high demand as populations age in developed countries. Demand for personal care aides, home health workers, and mental health professionals is projected to grow substantially in coming decades regardless of automation trends in other sectors. The U.S. Bureau of Labor Statistics projects that home health and personal care aides will represent the single largest source of new jobs through 2032.
The Social Skill Premium
One of the most consistent findings in labor economics over the past two decades is the growing premium on social and interpersonal skills. David Deming's research found that since 2000, the only occupations with strongly rising employment and wages are those that require both cognitive skill and social skill — not cognitive skill alone.
This finding has an important implication for automation strategy. Workers who invest only in technical skills may find those skills are augmented or displaced by automation, while workers who combine technical skills with strong interpersonal, communication, and collaborative capacity are more durable. The combination is specifically valuable because it is harder to replicate — social intelligence requires experiential development that does not transfer easily through training data.
The Education and Training Gap
One of the most persistent structural problems in automation-era labor markets is the mismatch between the skills displaced workers have and the skills growing occupations require. This is not simply a skills gap in the general sense — it is a geographic, temporal, and resource gap.
Workers displaced from routine manufacturing in the Midwest face retraining costs in time and money that many cannot bear without income support. The new jobs being created in high-wage non-routine cognitive work are often in different cities, in different sectors, and require training investment that many workers cannot access. Labor economists have documented that geographic mobility among US workers has declined significantly since the 1980s, which means that even when new jobs are created, the workers who need them most may not be positioned to access them.
Policy responses to this gap — retraining programs, portable benefits, wage insurance, place-based economic development — remain contested and inadequately funded in most developed economies. The technological disruption is, on current evidence, outpacing the institutional capacity to manage its distributional consequences.
What Organizations Should Do
The automation question for organizations is not "should we automate?" but "what should we automate, in what sequence, with what investment in transition?"
Effective automation programs:
- Start with tasks, not jobs: Map the constituent tasks of roles before evaluating automation potential
- Identify augmentation opportunities first: Where can technology make your workers more productive before pursuing replacement?
- Plan for reskilling explicitly: Workers whose tasks are automated need support transitioning to non-automated tasks; leaving this to chance produces operational disruption and morale problems
- Build monitoring and exception-handling: Automated processes that fail without human detection create large-scale problems fast
- Invest in change management: Automation programs that do not address worker concerns about job security typically generate resistance that slows or derails deployment
- Measure actual outcomes: Many automation business cases are based on theoretical savings. Measuring actual labor hours recovered and redirected is essential for honest program evaluation
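The first and last items in the list above, mapping constituent tasks and measuring actual outcomes, can be sketched as a simple task inventory. Everything here is hypothetical: the task names, weekly hours, potential scores, and measured recovery are invented to show the shape of the exercise, not a prescribed tool.

```python
# Sketch of "start with tasks, not jobs": inventory a role's tasks with
# weekly hours and an estimated automation potential, rank candidates by
# expected hours saved, then compare against measured recovery.
# All task names, hours, and scores are hypothetical.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    weekly_hours: float
    automation_potential: float   # 0-1, an estimate, not a measurement
    hours_recovered: float = 0.0  # filled in after deployment

role = [
    Task("invoice matching",   10, 0.9),
    Task("vendor negotiation",  6, 0.1),
    Task("monthly reporting",   8, 0.6),
]

# Rank candidates by expected weekly hours saved (potential x hours).
candidates = sorted(role,
                    key=lambda t: t.automation_potential * t.weekly_hours,
                    reverse=True)
for t in candidates:
    expected = t.automation_potential * t.weekly_hours
    print(f"{t.name}: expected {expected:.1f} h/week")

# Honest evaluation records measured recovery, not the business-case figure.
candidates[0].hours_recovered = 7.0   # measured after rollout
print("realized vs expected:",
      candidates[0].hours_recovered, "vs",
      candidates[0].automation_potential * candidates[0].weekly_hours)
```

The point of the comparison on the last lines is the article's "measure actual outcomes" discipline: the business case predicted 9 hours recovered, the deployment delivered 7, and the gap is exactly the information a theoretical-savings estimate hides.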
The Policy Dimension
The automation debate is not only an organizational or individual challenge — it is a policy challenge. The decisions that shape how automation's benefits and costs are distributed are, at their core, political decisions about investment in education, social insurance, and labor market regulation.
Several policy levers have received serious academic and policy attention:
Investment in retraining infrastructure: The U.S. spent approximately 0.1% of GDP on active labor market policies in recent years, compared to 0.3-0.5% in comparable OECD nations. Expanding and improving the efficacy of retraining programs is a prerequisite for managing automation-era displacement humanely.
Portable benefits: Tying health insurance and pension access to employment relationships creates friction that slows labor market adjustment. Portable benefits that workers carry between jobs would reduce the cost of job transition.
Early childhood and K-12 education investment: The skills most resilient to automation — social intelligence, complex problem-solving, learning agility — are developed over years of education and experience. Underinvestment in education reduces the stock of workers who can adapt to automation-era labor markets.
Research and development in human-complementary AI: Current AI investment is heavily concentrated in applications that substitute for human labor. Greater public investment in AI applications that augment human capability could shift the distribution of benefits.
Conclusion
Automation is neither the apocalypse nor a pure windfall. It is a structural transformation of which tasks require human effort, proceeding at different speeds across different domains. The research consensus — from Frey and Osborne's occupational analysis to McKinsey's task-level decomposition to Autor's labor market polarization evidence to Deming's social skill premium research — points to a consistent picture: routine tasks automate, non-routine tasks (particularly interpersonal ones) are resilient, and the aggregate employment effect depends heavily on demand responses and new task creation.
The individuals most at risk are those with skills concentrated in routine tasks and limited access to reskilling resources. The most resilient are those with strong social skills, genuine problem-solving ability in unstructured contexts, and the capacity to keep learning as the technical landscape shifts.
The generative AI wave is expanding automation into creative domains previously considered bottleneck-protected. But the pattern of augmentation before replacement still appears to hold — at least for now. The honest answer to "will automation take my job?" remains: probably not entirely, but certainly in part, and the specific tasks at risk and the specific opportunities created are worth mapping carefully for your own role and industry.
References
- Frey, C. B., and Osborne, M. A. (2013). The Future of Employment: How Susceptible Are Jobs to Computerization? Oxford Martin School Working Paper.
- McKinsey Global Institute. (2017). A Future That Works: Automation, Employment, and Productivity. McKinsey and Company.
- McKinsey Global Institute. (2023). The Economic Potential of Generative AI. McKinsey and Company.
- Autor, D. (2015). Why Are There Still So Many Jobs? The History and Future of Workplace Automation. Journal of Economic Perspectives, 29(3), 3-30.
- Deming, D. J. (2017). The Growing Importance of Social Skills in the Labor Market. Quarterly Journal of Economics, 132(4), 1593-1640.
- Brynjolfsson, E., Li, D., and Raymond, L. (2023). Generative AI at Work. NBER Working Paper No. 31161.
- OECD. (2016). Automation and Independent Work in a Digital Economy. Policy Brief on the Future of Work. OECD Publishing.
- Arntz, M., Gregory, T., and Zierahn, U. (2016). The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis. OECD Social, Employment and Migration Working Papers No. 189.
- Katz, L. F., and Krueger, A. B. (2019). Understanding Trends in Alternative Work Arrangements in the United States. RSF: The Russell Sage Foundation Journal of the Social Sciences, 5(5), 132-146.
- Acemoglu, D., and Restrepo, P. (2019). Automation and New Tasks: How Technology Displaces and Reinstates Labor. Journal of Economic Perspectives, 33(2), 3-30.
- U.S. Bureau of Labor Statistics. (2023). Occupational Outlook Handbook: Home Health and Personal Care Aides. bls.gov.
- World Economic Forum. (2023). The Future of Jobs Report 2023. World Economic Forum, Geneva.
Frequently Asked Questions
What did the Frey and Osborne 2013 study actually find about automation?
Carl Benedikt Frey and Michael Osborne's 2013 Oxford study estimated that 47% of US jobs were at high risk of computerization over an unspecified number of years. Their analysis was based on occupational susceptibility to three bottlenecks that limited automation at the time: perception and manipulation, creative intelligence, and social intelligence. Jobs low on all three dimensions were classified as high risk. The study has been highly influential but widely misunderstood — it assessed task susceptibility, not near-term probability of displacement.
What is the difference between automation and augmentation?
Automation refers to technology replacing human performance of a task entirely. Augmentation refers to technology enhancing human capability without eliminating the human role. Historical evidence suggests augmentation is far more common than replacement across economic transitions — new tools more often change what workers do and how effectively they do it, rather than eliminating their jobs. Most automation studies focus on tasks, not jobs, because jobs rarely consist entirely of automatable tasks.
Which types of tasks are most susceptible to automation?
Research consistently identifies routine tasks — whether cognitive or physical — as most susceptible to automation. Routine cognitive tasks include data processing, standard calculations, rule-based decisions, and document formatting. Routine physical tasks include repetitive manufacturing, sorting, and precise physical manipulation in structured environments. Non-routine tasks, both cognitive (judgment, creativity, novel problem-solving) and interpersonal (caregiving, persuasion, negotiation), have shown significantly greater resistance to automation.
How does McKinsey's analysis differ from Frey and Osborne's?
McKinsey Global Institute's analysis uses a task-based methodology that decomposes jobs into constituent tasks rather than treating jobs as units. Their 2017 report estimated that while 5% of jobs could be fully automated with current technology, up to 60% of jobs contained at least 30% automatable tasks. This approach produces more nuanced predictions — the same job can be partly automated and partly augmented, changing its nature rather than eliminating it. McKinsey's estimates for displacement are significantly lower than Frey and Osborne's headline figure.
What skills are most resilient to automation?
The most automation-resilient skills are those requiring genuine human judgment in unpredictable contexts, interpersonal intelligence (empathy, persuasion, care), physical dexterity in unstructured environments, creativity requiring integration of novel ideas, and complex ethical reasoning. Research also shows that social skills have become increasingly economically valuable over the past 40 years as routine tasks have been automated — the relative premium for non-routine social skills has grown substantially.