Artificial intelligence has shifted from a research curiosity to the central technology investment priority of virtually every large organisation. The economic consequences of that shift are playing out in real time in the labour market: AI-adjacent roles are among the best compensated in technology, and the demand gap between available talent and open positions remains significant despite a surge in AI-related educational programmes. For anyone evaluating where to direct career energy, AI deserves serious attention.
The challenge is that the label 'AI career' obscures an enormous amount of variation. The machine learning engineer building production recommendation systems at a streaming service, the AI safety researcher working on interpretability at an alignment lab, the AI product manager defining the roadmap for an enterprise AI product, and the AI ethicist advising a government on model regulation are all working 'in AI' — with almost entirely different skills, backgrounds, and daily activities. Working out which role actually fits your interests and background is therefore the first and most important step.
This article maps the major AI career paths in detail: what each role does day-to-day, what backgrounds they typically draw from, what skills they require, what they pay, and what realistic transition paths look like for people who are not already working in AI.
"The most important thing to know about AI careers right now is that the field is young enough that career paths are still being written. The people who define what an 'AI ethicist' or 'AI product manager' does are often inventing the role as they go." — Andrew Ng, founder of DeepLearning.AI
Key Definitions
Machine Learning (ML): A subset of artificial intelligence where systems learn from data to improve performance on a task without being explicitly programmed for each scenario.
Large Language Model (LLM): A type of neural network trained on vast quantities of text data, capable of generating, summarising, and reasoning about language. GPT-4, Claude, and Gemini are examples.
Fine-tuning: The process of taking a pre-trained model and training it further on a specific dataset to adapt it for a particular task or domain.
Interpretability (Explainability): The field of AI research focused on understanding why a model makes a particular decision or produces a particular output, as opposed to treating the model as a black box.
AI Alignment: The research problem of ensuring that AI systems pursue goals that are actually aligned with human values and intentions, particularly as systems become more capable.
Role 1: Machine Learning Engineer
What they do: ML engineers build and deploy machine learning systems at scale. Unlike a research scientist who may work on a model for months before it touches production, an ML engineer focuses on getting models into production systems that serve real users. This includes data pipeline construction, model training and evaluation, deployment infrastructure (serving models via APIs), monitoring for performance drift, and A/B testing.
Required background:
- Strong Python programming (numpy, pandas, scikit-learn, PyTorch or TensorFlow)
- Statistical foundations: probability, linear algebra, calculus
- Software engineering skills: version control (git), CI/CD pipelines, containerisation (Docker)
- Cloud platforms (AWS, GCP, or Azure) and ML-specific infrastructure (MLflow, Kubeflow, Vertex AI)
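Of the day-to-day responsibilities listed above, monitoring for performance drift is often the least familiar to newcomers. As a minimal illustrative sketch (plain Python, not any particular monitoring library's API), a drift check might compare summary statistics of live inputs against the training distribution:

```python
import math

def mean_and_std(values):
    """Summary statistics of a numeric feature."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return mean, math.sqrt(var)

def drift_score(train_values, live_values):
    """Shift of the live mean, in units of training standard deviations.

    A toy stand-in for proper drift tests (e.g. Kolmogorov-Smirnov);
    real systems track this per feature, over time, with alert thresholds.
    """
    train_mean, train_std = mean_and_std(train_values)
    live_mean, _ = mean_and_std(live_values)
    return abs(live_mean - train_mean) / train_std

# Illustrative numbers: the live feature has shifted well away from training.
train = [10.0, 11.0, 9.0, 10.5, 9.5]
live = [14.0, 15.0, 13.5, 14.5, 14.0]
print(f"drift score: {drift_score(train, live):.2f}")  # large score -> investigate
```

In production, this kind of check runs continuously so that degradation is caught before it shows up in user-facing metrics.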
Salary (2024):
- US Big Tech (Google, Meta, Microsoft, Apple): $180,000-$350,000 total compensation (base + bonus + RSUs)
- US startups: $120,000-$200,000 cash, significant equity
- UK: £70,000-£150,000
- Non-tech sectors (finance, healthcare, retail): $120,000-$200,000 US; £60,000-£110,000 UK
Transition path: Software engineering with Python skills and some data background is the most direct starting point. Completing courses from fast.ai, DeepLearning.AI, or Hugging Face provides practical foundations. Building 2-3 portfolio projects (demonstrably trained and deployed models, not toy tutorials) on GitHub is the most important step. Target ML engineer roles at companies where the ML function is established rather than nascent — you will learn faster.
Role 2: AI Research Scientist
What they do: Research scientists work on advancing the capabilities of AI systems — developing new model architectures, training techniques, or theoretical frameworks. At industry labs (Google DeepMind, Meta AI, Anthropic, OpenAI), this means working on problems at the frontier of what is technically possible. At university research groups, this typically means publishing peer-reviewed papers and developing the foundational science.
Required background:
- PhD in machine learning, computer science, statistics, or a related field (almost always required)
- Strong publication record and familiarity with the current research literature (arXiv)
- Deep mathematical foundations: linear algebra, probability theory, optimisation, information theory
- Implementation skills to validate research ideas
Salary:
- US frontier AI labs (Anthropic, OpenAI, Google DeepMind): $200,000-$500,000+ total compensation for senior researchers
- University: $100,000-$180,000 for a tenure-track professorship, varying by institution
- Research scientist at non-frontier tech companies: $170,000-$280,000 US
Transition path: The standard path is a PhD in ML or CS, ideally including a research internship at an industry lab during the PhD. Publications at top venues (NeurIPS, ICML, ICLR, CVPR) are the primary currency. For those already in tech without a PhD, a part-time or full-time master's at a research-active university with the goal of producing research is a viable bridge.
Role 3: AI Product Manager
What they do: AI product managers (AI PMs) define the product strategy, roadmap, and success metrics for AI-powered products. They work at the interface between technical teams (ML engineers and researchers), business stakeholders, and users. An AI PM needs to understand enough about ML to ask good questions of engineers, evaluate feasibility claims, and make informed prioritisation decisions — without necessarily being able to build models themselves.
Required background:
- Previous product management or product marketing experience is typically required
- Strong analytical skills and comfort with data
- Understanding of ML capabilities and limitations (what is feasible, what is a research problem)
- Domain expertise is highly valued (an AI PM for healthcare products benefits from healthcare industry knowledge)
Salary (2024):
- US Big Tech (Google, Meta, Microsoft): $200,000-$350,000+ total compensation
- US AI startups: $160,000-$250,000 plus equity
- UK: £90,000-£150,000
Transition path: The most common route is from software product management into AI product management. Building AI product knowledge by studying how AI products work (not just what they do), taking courses in ML fundamentals (sufficient to understand capabilities, not to build them), and seeking positions on AI-focused product teams within your current company is a viable path. Alternatively, technical professionals (ML engineers, data scientists) can transition into PM roles where their technical credibility is a primary differentiator.
Role 4: AI Ethicist
What they do: AI ethicists analyse the social, ethical, and legal implications of AI systems and advise organisations on responsible deployment. This includes assessing bias and fairness in AI outputs, evaluating privacy implications of data practices, developing ethical review frameworks for AI products, engaging with policymakers and regulators, and communicating risks to non-technical stakeholders.
The role is newer and more variable than the others listed here. At some organisations, AI ethicists are primarily researchers publishing academic work. At others, they are embedded in product teams reviewing specific systems. At governments and civil society organisations, they may focus on policy development and regulation.
Required background:
- Strong background in ethics, philosophy, sociology, law, or public policy
- Deep understanding of AI systems (not necessarily technical implementation, but deep familiarity with capabilities and failure modes)
- Research skills and the ability to communicate complex ideas clearly to diverse audiences
Salary:
- US tech companies: $120,000-$220,000
- US government/think tank: $80,000-$140,000
- UK: £60,000-£120,000 in industry; £40,000-£80,000 in academia or civil service
Transition path: Academia (philosophy, sociology, law) with a focus on AI-related research is one route. Policy backgrounds (working in government tech policy, digital rights organisations) provide complementary skills. Many AI ethicists publish independently on platforms like Substack or write for organisations like the AI Now Institute, the Ada Lovelace Institute, or the Future of Life Institute to establish credibility. Demonstrating both technical fluency and ethical rigour is essential.
Role 5: AI Safety Researcher
What they do: AI safety researchers work on ensuring that AI systems — particularly increasingly capable and autonomous systems — behave safely and reliably. The field encompasses:
- Alignment research: Ensuring AI systems pursue the goals humans actually intend, not misaligned proxies
- Interpretability research: Understanding what computations are happening inside neural networks and why they produce specific outputs
- Robustness research: Making AI systems reliable across out-of-distribution inputs and adversarial conditions
- Governance and strategy: Understanding what institutional structures, norms, and regulations would make the development of advanced AI safer
Required background:
- Technical safety research: similar to AI research scientist, with emphasis on mechanistic interpretability, formal verification, or RLHF (Reinforcement Learning from Human Feedback)
- Governance/strategy work: a policy, law, or social science background combined with serious AI technical literacy
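The RLHF mention above rests on reward modelling: a model is trained to score outputs so that human-preferred responses receive higher rewards. The core of the standard Bradley-Terry objective is small enough to sketch in plain Python; the numbers below are illustrative, not from any real model:

```python
import math

def preference_probability(reward_chosen, reward_rejected):
    """Bradley-Terry model: probability the human prefers the 'chosen' response.

    P(chosen > rejected) = sigmoid(r_chosen - r_rejected)
    """
    return 1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected)))

def pairwise_loss(reward_chosen, reward_rejected):
    """Negative log-likelihood of the human's preference.

    Training a reward model means minimising this over labelled pairs,
    which pushes the chosen response's reward above the rejected one's.
    """
    return -math.log(preference_probability(reward_chosen, reward_rejected))

# Equal rewards: the model is indifferent, probability 0.5.
print(preference_probability(1.0, 1.0))  # 0.5
# A clear reward margin means high preference probability and low loss.
print(pairwise_loss(3.0, 0.0))
```

The trained reward model then provides the training signal for reinforcement learning on the policy model — the part of the pipeline where most of the engineering complexity lives.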
Salary:
- AI safety-focused labs (Anthropic, METR (formerly ARC Evals), Redwood Research, the UK AI Safety Institute): $150,000-$400,000+ for senior technical researchers
- Academic safety researchers: variable; university salaries apply
- Policy/governance roles: $90,000-$160,000 at government and think tanks
Transition path: The AI safety field actively recruits people from adjacent areas. MATS (ML Alignment Theory Scholars) and ARENA (Alignment Research Engineer Accelerator) provide structured training programmes. The 80,000 Hours organisation publishes detailed career guides specifically for AI safety. For technical roles, prior ML research or engineering experience is typically expected; demonstrating specific interest in safety problems through independent research or contributions to open safety research is valued.
How to Transition Into AI From a Different Field
From software engineering: The closest transition. Focus on filling the ML-specific gaps: statistics, ML frameworks, and model training. The fast.ai practical deep learning course and Hugging Face tutorials are efficient starting points. Move toward roles where software and ML overlap — MLOps, AI infrastructure, or data engineering roles that bridge into ML.
From data science: Many data scientists are already working with ML in practice. The gap to 'ML engineer' is primarily engineering: learning to build robust production systems, not just notebooks. The gap to 'AI researcher' is mathematical depth and a publication track record.
From consulting or strategy: AI product management or AI ethics/governance are accessible. Build technical fluency by taking structured AI courses (Google's Machine Learning Crash Course, fast.ai, deeplearning.ai) until you can hold substantive conversations with ML teams. Seek engagements where AI is central.
From academia (non-CS fields): Humanities scholars transitioning to AI ethics have a genuine competitive advantage if they also develop technical fluency. Social scientists bring rigorous analytical skills that are genuinely scarce in AI teams. The path usually involves some bridging: publishing at intersections of your field and AI (ethics, social science, legal analysis), then targeting organisations that specifically value that intersection.
Practical Takeaways
The field is moving quickly enough that self-teaching from current online resources is genuinely viable and respected. Build things, document them publicly, and engage with the AI research community through papers, forums, and social media. For technical roles, contributions to open-source projects (Hugging Face, LangChain, Ollama) are visible signals of practical capability. For non-technical roles, demonstrating serious intellectual engagement with AI capability and risk is more important than credentials. Start now — the field rewards early movers and penalises those who wait until the perfect preparation is complete.
References
- Bureau of Labor Statistics, Computer and Information Research Scientists (2023). bls.gov
- Levels.fyi, Machine Learning Engineer Salary Data (2024). levels.fyi
- Andrew Ng, DeepLearning.AI Course Series (2024). deeplearning.ai
- 80,000 Hours, AI Safety Career Guide (2024). 80000hours.org
- Hugging Face, NLP Course and ML Engineering Resources (2024). huggingface.co
- AI Now Institute, Annual Report (2024). ainowinstitute.org
- Anthropic, AI Safety Research Overview (2024). anthropic.com
- Ada Lovelace Institute, AI Ethics Research (2024). adalovelaceinstitute.org
- MATS Programme, ML Alignment Theory Scholars (2024). matsprogram.org
- Google DeepMind, Careers and Research Overview (2024). deepmind.google
- Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. Viking, 2019.
- ARENA, Alignment Research Engineer Accelerator (2024). arena.education
Frequently Asked Questions
What is the best career path to get into AI?
For most people, the fastest route into a well-paying AI role is becoming a machine learning engineer. This requires strong Python programming, statistics, and familiarity with ML frameworks (PyTorch, TensorFlow). A portfolio of projects demonstrating applied ML skills is often more persuasive than a degree alone.
Do you need a PhD to work in AI?
No. Research scientist roles at AI labs (Google DeepMind, OpenAI, Anthropic) typically require PhDs. However, the majority of AI jobs are ML engineering, data science, and AI product management roles that are accessible with a bachelor's or master's degree plus demonstrated practical skills.
What does an ML engineer earn?
ML engineers at major US tech companies earn $150,000-$300,000+ in total compensation (base, bonus, and stock). UK ML engineers at major companies earn £70,000-£150,000. Salaries at startups are lower in cash but may include substantial equity.
What is AI safety research?
AI safety research focuses on ensuring that advanced AI systems behave as intended and do not cause unintended harm at scale. Researchers work on problems including value alignment, interpretability (understanding what models are actually doing), robustness, and preventing dangerous emergent capabilities.
Can a non-technical person work in AI?
Yes. AI product management, AI ethics, AI policy, and AI strategy roles are accessible to people with strong domain knowledge and analytical skills who are not ML practitioners. Understanding AI capabilities and limitations is more important than implementation skills for these roles.