The phrase "first, do no harm" is among the most enduring principles in medicine. It appears, in various forms, in medical traditions from Hippocratic Greece to 19th-century English practice. Its persistence across two and a half millennia suggests something uncomfortable: that harm caused by treatment has always been a concern, and that the medical profession has always known it.

The formal term for this phenomenon is iatrogenesis — harm originating from medical care itself. The word comes from the Greek iatros (physician) and genesis (origin). Understanding iatrogenesis is not an argument against medicine; modern medicine has extended and improved human life in extraordinary ways. It is an argument for epistemic humility, for measuring costs as rigorously as benefits, and for recognizing that the cobra effect — unintended harm from a well-intentioned intervention — is not unique to colonial government policy. It is a recurring pattern in one of humanity's most important institutions.

Ivan Illich and the Limits of Medical Benefit

The most radical critique of iatrogenesis came not from a physician but from a philosopher. Ivan Illich, the Austrian-born social critic, published Medical Nemesis in 1976, arguing that industrialized medicine had become what he called "a major threat to health."

Illich distinguished three layers of iatrogenesis:

Clinical iatrogenesis: The direct physical harm caused by medical treatment — adverse drug reactions, surgical complications, hospital-acquired infections, and the physical damage of overtreatment.

Social iatrogenesis: The process by which medicine medicalizes ordinary human experience — transforming childbirth, grief, aging, unhappiness, and normal variation into medical conditions requiring professional management. In Illich's view, this undermines the social structures and personal resources through which people have historically supported each other through difficult experiences.

Cultural iatrogenesis: The deepest level — the way medicine's dominance erodes the human capacity for self-care, tolerance of pain, and acceptance of death. A society that has delegated its health entirely to professionals loses something irreplaceable.

Illich's argument was controversial when published and remains so. He was accused of romanticizing pre-modern medicine and ignoring real therapeutic gains. But his core observation — that medical intervention creates harms that are often unmeasured, underreported, and invisible against the backdrop of its benefits — is supported by substantial evidence.

The Scale of Clinical Iatrogenesis

How common is medically caused harm? The question is harder to answer than it should be, because healthcare systems have historically been reluctant to systematically measure their own errors.

The landmark 1999 report by the US Institute of Medicine, To Err Is Human, estimated that between 44,000 and 98,000 Americans die annually from preventable medical errors in hospitals — making medical error a leading cause of death, comparable to motor vehicle accidents and breast cancer at the time.

A 2013 analysis in the Journal of Patient Safety revised this estimate dramatically upward, suggesting that errors contribute to between 210,000 and 440,000 patient deaths per year in the United States — a figure that would place preventable medical harm among the top three causes of death in the country.

Estimated annual impact of iatrogenic harm in the United States, by category:

Preventable hospital deaths (IOM 1999 estimate): 44,000-98,000
Revised preventable death estimate (James 2013): 210,000-440,000
Adverse drug reactions requiring hospitalization: 700,000+
Hospital-acquired infections (HAIs): 1.7 million infections, ~99,000 deaths
Unnecessary procedures (e.g., C-sections, back surgery): hundreds of thousands

These numbers should be interpreted with care — defining "preventable" is genuinely difficult in complex clinical settings — but the scale is undeniable.

The Opioid Crisis: Iatrogenesis at Population Scale

No modern example of healthcare-generated harm is more consequential than the opioid crisis. It is, in its origins, a textbook case of the cobra effect: a well-intentioned medical intervention that created a catastrophe far larger than the problem it was designed to solve.

The Origins

In the late 1980s and 1990s, medical opinion shifted decisively toward treating chronic pain more aggressively. Pain was recast as the "fifth vital sign," inadequate pain management was identified as a major quality-of-care problem, and pharmaceutical companies — most notably Purdue Pharma with OxyContin — aggressively marketed long-acting opioids as safe and effective for chronic non-cancer pain.

The claim that opioids carried low addiction risk when used for pain was, at best, based on thin evidence and, at worst, a deliberate misrepresentation. Purdue Pharma's internal documents, revealed through litigation, showed that the company knew its marketing claims were exaggerated and that rates of misuse and addiction were higher than publicly acknowledged.

Physicians — acting in good faith to address genuine patient suffering — prescribed opioids at levels that had never been documented in peacetime medicine. Between 1991 and 2011, opioid prescriptions in the United States nearly tripled.

The Cascade

The iatrogenic cascade unfolded in stages. Patients prescribed opioids developed dependence. Some began obtaining pills outside legitimate prescriptions. When prescribing tightened in the early 2010s, many people dependent on prescription opioids switched to heroin, which was cheaper and more available. When the heroin supply became contaminated with illicitly manufactured fentanyl — a synthetic opioid many times more potent — overdose deaths surged.

The cumulative death toll from opioid overdoses in the United States surpassed 500,000 between 1999 and 2020, according to the Centers for Disease Control and Prevention. At the crisis's peak, more Americans were dying from opioid overdoses each year than died in the entire Vietnam War.

"The opioid epidemic was not the result of patients seeking pleasure or escape. It was the result of a medical system that undertreated pain for decades, then overcorrected with a drug that turned out to be far more addictive and dangerous than its proponents claimed. The harm flowed through the prescription pad." — Dr. Andrew Kolodny, co-director, Opioid Policy Research Collaborative, Brandeis University

The opioid crisis is the cobra effect at population scale: an incentive structure (reimbursement for prescribing, pharmaceutical marketing, cultural pressure to relieve suffering immediately) that created a harm vastly larger than the one it addressed.

Antibiotic Resistance: A Slow-Moving Catastrophe

Antibiotics are among the most transformative medical innovations in history. Infections that killed routinely before the mid-20th century became treatable. Surgical procedures, organ transplants, and chemotherapy — all of which compromise immune defenses — became viable because antibiotics could address opportunistic infections.

The iatrogenic problem: widespread antibiotic use, including for viral infections where antibiotics provide no benefit and for agricultural purposes, has accelerated the evolution of antibiotic-resistant bacteria.

How Resistance Develops

Bacterial resistance is a direct consequence of natural selection under antibiotic pressure. When antibiotics kill susceptible bacteria, any resistant mutants in the population survive and reproduce. The more frequently and broadly antibiotics are used, the stronger the selective pressure for resistance.
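This selection dynamic can be illustrated with a toy simulation. The numbers below (population size, kill rate, mutation rate) are arbitrary illustrative values, not epidemiological parameters; the point is only that repeated antibiotic exposure drives even a vanishingly rare resistant variant toward fixation:

```python
import random

def simulate_resistance(generations=20, pop_size=10_000,
                        mutation_rate=1e-3, kill_rate=0.95):
    """Toy model of selection under antibiotic pressure: each generation,
    the antibiotic kills most susceptible cells, resistant cells survive
    untouched, and the survivors regrow to carrying capacity."""
    resistant = 0                        # start with no resistant cells
    susceptible = pop_size
    for _ in range(generations):
        # Antibiotic exposure: a large fraction of susceptible cells die.
        susceptible = int(susceptible * (1 - kill_rate))
        total = resistant + susceptible
        if total == 0:
            return 0.0                   # population wiped out entirely
        # Survivors regrow to carrying capacity, preserving proportions,
        # so resistant cells gain ground after every exposure.
        frac_resistant = resistant / total
        resistant = int(pop_size * frac_resistant)
        susceptible = pop_size - resistant
        # Rare mutations convert a few susceptible cells to resistant.
        new_mutants = sum(1 for _ in range(susceptible)
                          if random.random() < mutation_rate)
        resistant += new_mutants
        susceptible -= new_mutants
    return resistant / pop_size

random.seed(1)
print(f"resistant fraction after 20 generations: {simulate_resistance():.2f}")
```

Even starting from zero resistant cells, the roughly twenty-fold advantage conferred by each exposure carries the first chance mutant to near-total dominance within a handful of generations — which is why the frequency of use, not just the existence, of antibiotics matters.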

Medical and agricultural overuse of antibiotics has generated resistant strains of nearly every major bacterial pathogen. Methicillin-resistant Staphylococcus aureus (MRSA), carbapenem-resistant Enterobacteriaceae (CRE), and drug-resistant tuberculosis are among the most serious examples.

The World Health Organization estimates that antimicrobial resistance currently causes approximately 700,000 deaths per year globally. A 2014 review commissioned by the UK government projected that, on current trends, antibiotic-resistant infections would kill 10 million people per year by 2050 — more than cancer currently kills.

The cruel irony: antibiotics, given to treat infections, are creating the conditions for future infections that cannot be treated. The treatment produces the future harm.

Overdiagnosis and Overtreatment: When Detecting Is Harming

A quieter form of iatrogenesis has attracted growing attention in medical literature: overdiagnosis — the detection, through screening or testing, of abnormalities that would never have caused harm in the patient's lifetime.

The Thyroid Cancer Epidemic That Wasn't

South Korea provides a striking case study. In 1999, South Korea launched a nationwide cancer screening program; thyroid ultrasound was not officially part of it, but providers routinely offered the scan as an inexpensive add-on. The incidence of diagnosed thyroid cancer increased fifteen-fold over the following decade, making South Korea the world leader in thyroid cancer rates.

But the death rate from thyroid cancer in South Korea remained essentially unchanged throughout this period.

What was happening? Ultrasound technology was detecting tiny thyroid nodules that would never have grown, never have spread, never have caused symptoms, and never have killed anyone. The patients whose cancers were "found" were then treated — thyroidectomy, radioiodine therapy, lifelong surveillance — exposing them to surgical risks, medication side effects, and psychological burden for a condition that posed no actual threat.

Researchers Hyeong Sik Ahn, Hyun Jung Kim, and H. Gilbert Welch published a 2014 analysis in the New England Journal of Medicine estimating that approximately 90% of the additional thyroid cancers diagnosed in South Korea represented overdiagnosis.

Overdiagnosis in Breast Cancer and Prostate Cancer

Similar patterns have been documented in prostate-specific antigen (PSA) testing for prostate cancer and in mammography for breast cancer. Both screening programs can save lives, but they also generate significant rates of overdiagnosis, leading to treatment with serious side effects (incontinence and impotence from prostate surgery, lymphedema and cardiac toxicity from breast cancer treatment) for cancers that might never have progressed.

The challenge of overdiagnosis is that it is, in a fundamental sense, invisible: you cannot know, for an individual patient, whether their detected cancer was destined to progress or not. The harm can only be measured at the population level, by comparing treatment rates with mortality rates.
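The population-level logic reduces to simple arithmetic: excess diagnoses that are not matched by any change in deaths are attributed to overdiagnosis. A minimal sketch, using illustrative rates rather than the actual Korean registry figures, and ignoring complications like lead time and confounding:

```python
def overdiagnosis_fraction(baseline_incidence, screened_incidence,
                           baseline_mortality, screened_mortality):
    """Crude population-level estimate. Rates are per 100,000 person-years.
    Excess cases not matched by a mortality reduction are counted as
    overdiagnosed; this simplification assumes each averted death maps to
    one diagnosis that was genuinely necessary."""
    excess_cases = screened_incidence - baseline_incidence
    deaths_averted = baseline_mortality - screened_mortality
    overdiagnosed = excess_cases - max(deaths_averted, 0)
    return overdiagnosed / excess_cases

# Korea-style pattern: incidence jumps 15-fold, mortality stays flat,
# so all excess diagnoses look like overdiagnosis.
print(overdiagnosis_fraction(baseline_incidence=5, screened_incidence=75,
                             baseline_mortality=0.5, screened_mortality=0.5))
# → 1.0

# A screening program with a real mortality benefit still shows
# substantial overdiagnosis when excess cases outnumber averted deaths.
print(overdiagnosis_fraction(baseline_incidence=60, screened_incidence=100,
                             baseline_mortality=30, screened_mortality=25))
# → 0.875
```

The contrast between the two calls captures the South Korean signature: it is the combination of rising incidence and flat mortality, visible only in aggregate data, that reveals the harm no individual patient can see.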

The Precautionary Principle and Primum Non Nocere

The two ethical principles most relevant to iatrogenesis are related but distinct.

Primum non nocere — "first, do no harm" — is a statement of priority: before helping, make sure you are not hurting. It does not counsel paralysis; it counsels deliberateness. The question "what harm might this treatment cause?" should precede or accompany "what benefit might it provide?"

The precautionary principle goes further: in the face of scientific uncertainty about potential harm, the burden of proof lies with those proposing the intervention to demonstrate safety and benefit, not with those urging caution. This principle underlies much of environmental and pharmaceutical regulation, though its application is contested when it conflicts with urgent needs.

Together, these principles define a posture toward medical intervention that is meaningfully different from the implicit posture of much modern medicine, which tends toward: "If we can detect it, detect it. If we can treat it, treat it."

When Doing Nothing Is Doing Something

Active surveillance — the deliberate decision to monitor a condition rather than treat it immediately — has become an accepted approach for low-risk prostate cancer, small thyroid nodules, and some other conditions. This represents an institutional acknowledgment that watchful waiting is a legitimate medical choice, not a failure to act.

The shift toward active surveillance is, in part, a structural adjustment to the growing evidence of overdiagnosis and overtreatment. It is also psychologically difficult: both patients and physicians feel the pull toward action when a diagnosis exists. Doing nothing feels like abandonment, even when it is the evidence-based choice.

Defensive Medicine: The Cobra Effect of Malpractice Liability

A less-discussed source of iatrogenesis is defensive medicine — the practice of ordering tests, procedures, and consultations not because they are clinically indicated but to reduce liability risk. A physician who orders every possible test cannot be accused of missing a diagnosis. One who exercises clinical judgment — ordering tests selectively — takes on personal risk.

Estimates of the cost of defensive medicine in the United States range from $45 billion to over $200 billion annually (the range reflects significant methodological disagreement). Beyond cost, defensive medicine exposes patients to unnecessary radiation from imaging, false positives that trigger further unnecessary workup, and complications from procedures performed primarily to protect the physician.

The cobra effect is evident: a legal system designed to protect patients from physician negligence has created incentives that generate a different form of harm — not through negligence but through excess. The bounty for avoiding lawsuits is paid partly in unnecessary medical procedures.

What Responsible Medicine Looks Like

None of this is an argument for medical nihilism. Modern medicine delivers extraordinary value. The challenge is to deliver that value while maintaining rigorous accounting of costs.

Evidence-based medicine represents the systematic attempt to ground clinical decisions in high-quality evidence of benefit and harm, rather than tradition, authority, or commercial pressure. Cochrane Reviews and similar evidence syntheses have repeatedly identified medical practices that were widely adopted without sufficient evidence — and that have since been found ineffective or harmful.

Shared decision-making involves presenting patients with genuine information about the probabilities of benefit and harm from treatment options, including the option of no treatment, and respecting patient preferences. It is the opposite of paternalistic medicine and is associated with better-aligned treatment choices and higher patient satisfaction.

Systematic post-market surveillance of drugs and devices, with mandatory adverse event reporting and genuine willingness to modify or withdraw approvals, addresses the gap between clinical trial populations and real-world patients.

Prescribing stewardship for antibiotics — reserving broad-spectrum antibiotics for confirmed bacterial infections, rotating regimens to reduce selection pressure — represents the kind of collective action required to address population-level iatrogenesis.

Conclusion

The cobra effect appears in healthcare as reliably as in colonial governance or financial regulation: a well-intentioned intervention, poorly calibrated to the complexity of the system it operates in, creates harms that rival or exceed the harms it was designed to address.

Iatrogenesis is not an indictment of physicians, who are generally well-intentioned and often working under considerable constraint. It is an indictment of systems — pharmaceutical incentives, liability structures, screening culture, prescribing norms — that push medicine toward doing more without always asking whether more is better.

The ancient injunction remains the clearest guide: first, do no harm. It is not a prohibition on action. It is a demand for honesty about what actions cost.

Frequently Asked Questions

What is iatrogenesis?

Iatrogenesis refers to harm caused by medical examination, treatment, or advice. The term comes from the Greek 'iatros' (physician) and 'genesis' (origin). Iatrogenic harm ranges from individual adverse drug reactions to population-level crises like the opioid epidemic, which was substantially driven by physician prescribing of OxyContin and other opioids in the 1990s and 2000s.

Who introduced the concept of iatrogenesis as a social critique?

Ivan Illich, the Austrian philosopher and social critic, gave iatrogenesis its broadest formulation in his 1976 book 'Medical Nemesis.' Illich argued that modern medicine had become 'a major threat to health' through clinical iatrogenesis (direct harm from treatment), social iatrogenesis (medicalizing normal life), and cultural iatrogenesis (undermining people's capacity to manage their own health).

How does antibiotic resistance represent a form of iatrogenesis?

Antibiotic resistance is an iatrogenic problem at the population level: by using antibiotics to treat individual patients (often for viral infections where they provide no benefit), medicine has created the conditions for bacteria to evolve resistance. The World Health Organization estimates that antimicrobial resistance causes 700,000 deaths annually, a figure projected to rise to 10 million per year by 2050 if current trends continue.

What is overdiagnosis and how does it cause harm?

Overdiagnosis occurs when a condition is detected through screening or testing that would never have caused symptoms or harm in the patient's lifetime. The harms flow from subsequent treatment: surgery, chemotherapy, and radiation for cancers that were never destined to progress. Studies of thyroid cancer screening in South Korea and breast cancer screening in multiple countries show significant rates of overdiagnosis, leading to unnecessary interventions with real side effects.

What is the precautionary principle in medicine?

The medical precautionary principle holds that when an action carries risk of harm, the burden of proof lies with those proposing the action to demonstrate safety and benefit, not with those urging caution. It is related to the ancient maxim 'primum non nocere' (first, do no harm), which frames the physician's primary obligation as avoiding harm rather than merely attempting to benefit. Both principles counsel caution in the face of uncertainty.