In 1962, Thomas Kuhn published "The Structure of Scientific Revolutions" -- a slim book by a physicist-turned-historian of science, originally addressed to specialists in the history and philosophy of science. It became one of the most cited academic books of the 20th century, was read by more non-scientists than any philosophy of science text ever written, and introduced "paradigm shift" into ordinary language, where it quickly became a cliche deployed to describe any significant change in thinking, however modest. Kuhn was not entirely pleased with this reception.

The deeper problem was that critics accused him of relativism -- of saying that science is just another form of culturally situated belief, no more reliable than mythology or folk medicine, that scientific revolutions are social events rather than rational responses to evidence. He spent the rest of his life insisting that this reading was wrong, that incommensurability between paradigms does not imply that all theories are equally good, that the history of science shows genuine progress even if that progress cannot be reduced to a simple accumulation of confirmed facts. Whether he successfully defended himself against the charge remains disputed among philosophers.

The question Kuhn placed at the center of intellectual life -- what makes science special, and how special is it? -- is not a merely academic one. It bears on how we evaluate expert knowledge, how we make public policy decisions about evidence-contested issues, how we understand the relationship between scientific institutions and the rest of society, and whether the extraordinary achievements of modern science are reliable enough to build a civilization on. The question is not fully settled, and the unsettled portions are interesting.

"Science does not rest upon solid bedrock. The bold structure of its theories rises, as it were, above a swamp. It is like a building erected on piles. The piles are driven down from above into the swamp, but not down to any natural or 'given' base; and if we stop driving the piles deeper, it is not because we have reached firm ground. We simply stop when we are satisfied that the piles are firm enough to carry the structure, at least for the time being." -- Karl Popper, The Logic of Scientific Discovery (1959)


Key Definitions

Scientific realism: the view that successful scientific theories give us approximately true descriptions of reality, including unobservable entities like electrons, genes, and black holes.

Anti-realism (instrumentalism): the view that scientific theories are tools for predicting observable phenomena, not descriptions of underlying reality; we need not believe in theoretical entities.

Constructive empiricism: Bas van Fraassen's position that science aims only at empirical adequacy -- accurate prediction of observable phenomena -- and that belief in unobservable entities goes beyond what the evidence warrants.

Underdetermination of theory by evidence: the thesis that any body of evidence is logically compatible with multiple incompatible theories, so evidence alone does not determine which theory we should accept.

Theory-ladenness of observation: the thesis that what we observe depends partly on the theories and concepts we bring to observation; there is no purely neutral observation language.

Pessimistic meta-induction: Larry Laudan's argument that the history of successful-but-false theories gives inductive grounds for expecting our current successful theories to also be false.

Demarcation problem: the philosophical question of how to distinguish science from non-science and pseudoscience.

Duhem-Quine thesis: the thesis that no hypothesis faces evidence in isolation -- it faces evidence only together with a web of auxiliary assumptions -- so no single experiment can definitively refute a theory.

Abduction / inference to the best explanation: the form of inference that selects the hypothesis that would, if true, best explain the observed data; widely regarded as a central logical form of scientific reasoning.

Social epistemology: the study of how social structures, institutions, and relationships affect the production of knowledge.

Values in science: the role of non-epistemic values (political, ethical, aesthetic) in shaping scientific practice and theory choice, alongside epistemic values like simplicity and fit with evidence.

Standpoint epistemology: the view, associated with feminist philosophers, that one's social position shapes one's epistemic access to certain truths, and that marginalized perspectives can provide distinctive epistemic advantages for certain questions.


What Is Science Trying to Do?

Before asking whether science succeeds, we need to ask what it is trying to do. This turns out to be more contested than it might appear.

The naive answer is that science tries to discover the truth about the world. This is scientific realism. The realist holds that the theories of physics, chemistry, and biology describe the world as it actually is, including the entities and processes that we cannot directly observe. Electrons are real. DNA is real. Natural selection is a real process that actually occurred and continues to occur. Theoretical terms do not merely abbreviate patterns in observable data but refer to actual entities that causally structure the world. The best argument for realism is the no-miracles argument: if our best scientific theories were not at least approximately true, the extraordinary predictive and technological success of science would be inexplicable. Quantum electrodynamics predicts the magnetic moment of the electron to eleven decimal places of accuracy. Modern medicine has dramatically reduced child mortality and extended human lifespan. The transistor and the laser, products of quantum mechanical theory, underlie essentially all modern computing and communications technology. What explanation other than approximate truth could account for this kind of success?

The anti-realist positions are motivated by skepticism about whether this inference from success to truth is valid. The instrumentalist argues that we have no good reason to believe in theoretical entities we cannot directly observe, and that all we actually need from science is accurate prediction of observable outcomes. We use the concept of the electron as a computational device for generating predictions; whether anything corresponding to it actually exists is a question we have no means to settle. Bas van Fraassen's constructive empiricism, the most carefully developed contemporary anti-realist position, introduces the concept of empirical adequacy: a theory is empirically adequate if it correctly predicts all observable phenomena. Van Fraassen argues that this is all science needs to aim for, and that belief in the existence of unobservable entities goes beyond what the evidence can support. We should be empiricists about evidence and agnostic about theoretical ontology.

The debate is not merely abstract. It has practical implications for how scientists and the public should relate to theoretical claims. A realist thinks that when scientists say dark matter exists, or that a specific gene causes a disease, or that the early universe was in a state of extremely high density, they are making claims about actual features of the world that are approximately true or false. An instrumentalist or constructive empiricist thinks these claims should be evaluated by their predictive utility, not by metaphysical questions about existence. Most working scientists operate as practical realists, but the philosophical arguments suggest that this confidence requires more justification than is usually provided.


The Demarcation Problem and Popper's Solution

Karl Popper came to the demarcation problem through his early observation of what seemed to him a striking contrast between the epistemic practices of Einstein's general relativity and those of the psychological and social theories that were dominant in Vienna in the 1920s. Einstein's theory made a specific, surprising prediction: that light passing near a massive object like the sun would be deflected by gravity by a specific, calculable angle. The prediction was tested during the solar eclipse of 1919 by Arthur Eddington's expedition to the island of Príncipe (a companion expedition observed from Sobral, Brazil). The results confirmed the prediction. More importantly, the prediction could have come out differently: if Eddington had found no deflection, or deflection of a different magnitude, general relativity would have been refuted.

By contrast, psychoanalysis and Adlerian psychology seemed to Popper to be able to explain any possible human behavior. Whatever a person did, a psychoanalyst could provide a post-hoc explanation in terms of unconscious drives and developmental history. Similarly, Marxist theory of historical development seemed able to accommodate any historical event as an instance of the predicted pattern. Popper concluded that this explanatory flexibility was not a virtue but a defect: a theory that can explain everything predicts nothing and therefore cannot be tested. It is not that these theories are false -- Popper did not claim to know they were false -- but that they are unfalsifiable, and therefore outside the domain of scientific inquiry.

Popper's falsificationism has been enormously influential and captures something important. The rhetorical move of constructing a theory so that every apparent disconfirmation can be reinterpreted as a confirmation -- the "immunization strategy" -- is a genuine pathology that appears in pseudoscientific and fringe theories. The insistence on specifying in advance what would count as evidence against a theory, and on being genuinely open to that evidence, is a sound methodological norm.

But the account faces serious difficulties. The Duhem-Quine thesis shows that falsificationism is logically untenable as a strict criterion: no experiment refutes a theory conclusively, because the inference from "the experiment came out wrong" to "the theory is false" requires all the auxiliary assumptions to be correct, and a rational scientist may have grounds to doubt the auxiliaries rather than the core theory. When the Michelson-Morley experiment in 1887 failed to detect the expected variation in the speed of light due to the Earth's motion through the luminiferous ether, the logical possibilities included both that the ether theory was wrong and that one of the many assumptions about the experimental apparatus and measurement methods was wrong. Lorentz and FitzGerald proposed that matter contracts in the direction of motion, preserving the ether theory while accommodating the result. This was not irrationality; it was the reasonable response of scientists protecting a successful theory while they worked on alternatives.
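The logical point can be stated schematically. A hypothesis H yields a testable prediction O only in conjunction with auxiliary assumptions A_1, ..., A_n about the apparatus, background theory, and initial conditions, so a failed prediction refutes only the conjunction:

```latex
% Modus tollens applies to the whole conjunction, not to H alone:
(H \wedge A_1 \wedge \cdots \wedge A_n) \rightarrow O, \qquad \neg O
\;\;\therefore\;\; \neg (H \wedge A_1 \wedge \cdots \wedge A_n)
% which is logically equivalent to the disjunction
\equiv\; \neg H \,\vee\, \neg A_1 \,\vee\, \cdots \,\vee\, \neg A_n
```

Logic delivers only the disjunction; it does not say which disjunct to reject. Choosing requires judgment about the relative costs of revising each element of the web, which is why revising an auxiliary assumption about the behavior of matter, rather than abandoning the ether hypothesis itself, was a logically available and not obviously irrational move.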


Theory-Ladenness and the Incommensurability of Paradigms

Norwood Russell Hanson argued in "Patterns of Discovery" (1958) that observation in science is not a neutral process of recording sense data but is thoroughly shaped by the theoretical framework the observer brings to experience. His famous illustration: when Tycho Brahe and Johannes Kepler looked at the sun on the horizon, did they see the same thing? Optically, yes. But Brahe believed the sun was moving relative to a fixed Earth; Kepler believed the Earth was moving relative to a fixed sun. The same visual stimulus was interpreted through radically different conceptual frameworks, producing different perceptual reports. Hanson's conclusion was that there is no theory-neutral observation language -- a language of pure sensory report that is available to all observers regardless of their theoretical commitments.

Kuhn developed this insight into his concept of paradigm incommensurability. Normal science proceeds within a paradigm -- a framework of foundational assumptions, exemplary problem solutions, methodological norms, and conceptual structures that defines what questions are worth asking and what counts as a good answer. When a paradigm is overthrown by a scientific revolution, the shift is not merely a change in theory but a change in the standards of evaluation, the meanings of key terms, and the picture of what the world is like at a fundamental level. Newtonian mechanics and Einsteinian relativity both use the term "mass," but the Newtonian concept of mass is invariant across all reference frames while the Einsteinian concept varies with velocity. These are not the same concept expressed in different theories; they are different concepts that happen to share a word. The linguistic and conceptual frameworks are partially incommensurable -- not fully translatable into one another.

The implication Kuhn drew was not that we cannot compare paradigms at all -- he consistently denied that conclusion -- but that the comparison is not straightforwardly a matter of checking both against neutral evidence. It requires a kind of comparative hermeneutics, an understanding of each paradigm from within, that cannot be reduced to an algorithm. Scientific revolutions involve, in part, a change in what counts as an explanation, what questions are legitimate, and what the world is like at its most fundamental level. The comparison of rival paradigms is guided by good epistemic reasons -- accuracy, breadth of explanation, internal consistency, fruitfulness -- but these reasons do not constitute a paradigm-neutral decision procedure.


The Pessimistic Meta-Induction

Perhaps the sharpest argument against scientific realism was developed by Larry Laudan in a 1981 paper, drawing on the history of science. The argument is an induction from the historical record: past science is populated with theories that were highly successful by any reasonable empirical standard -- they made accurate predictions, organized large bodies of data, and guided productive research programs for decades or centuries -- but whose central theoretical posits we now know to be nonexistent.

The caloric theory of heat, dominant in the late 18th and early 19th centuries, successfully predicted a wide range of thermal phenomena and organized the discipline of calorimetry. The caloric -- a weightless fluid posited as the carrier of heat -- does not exist. The phlogiston theory of combustion, developed by Stahl in the early 18th century, organized a large body of chemical observations about combustion, calcination, and reduction. Phlogiston does not exist. Ptolemaic astronomy, with its elaborate system of epicycles, predicted the positions of the planets with considerable accuracy for more than a millennium. The Earth is not at the center of the solar system. The luminiferous ether of 19th-century physics served as the supposed medium for the propagation of electromagnetic waves and was the basis of much productive research. There is no luminiferous ether.

Laudan's inductive argument: if our best past theories were false in their central ontological claims despite being empirically successful, then the realist inference from empirical success to approximate truth is unjustified. We should expect our current successful theories to be equally false in their central ontological claims. The pessimistic induction gives us grounds for anti-realism not despite the success of science but because of it.

Scientific realists have developed responses. The most influential is John Worrall's structural realism, proposed in a 1989 paper in Dialectica. Worrall observed that even in scientific revolutions, the mathematical structure of successful theories is often preserved: Fresnel's wave optics, with its posit of mechanical ether waves, was superseded by Maxwell's electromagnetic theory, which abandoned the mechanical picture but preserved the same mathematical equations for the behavior of light. Structure is retained even when ontology changes. Worrall's proposal is that we should be realists about the mathematical structure of our theories -- what the world is structurally like -- while remaining agnostic about the intrinsic nature of the entities that instantiate this structure. This is a more modest form of realism that is less vulnerable to the pessimistic induction.


Social Epistemology and Values in Science

The recognition that science is practiced by humans in social institutions -- with funding pressures, career incentives, institutional hierarchies, and cultural assumptions -- does not by itself undermine scientific objectivity, but it does require a more sophisticated account of how objectivity is achieved than the naive picture of individual scientists dispassionately following evidence.

Helen Longino's "Science as Social Knowledge" (1990) developed the most careful philosophical account of how social organization affects epistemic outcomes. Longino argued that objectivity is a property of research communities, not of individual scientists, and that it is achieved through social structures that enable genuine critical scrutiny. She identified four criteria for communities that achieve a degree of objectivity: public venues for criticism (journals, conferences, seminars where results and methods can be challenged); uptake of criticism (the community must actually respond to criticism rather than simply ignoring it); publicly acknowledged evidence standards (shared criteria for what counts as adequate support); and equality of intellectual authority (the standing to raise criticism cannot be confined to members of a social elite). Diversity of perspectives is epistemically valuable on Longino's account because homogeneous communities are more likely to share unexamined background assumptions that go unchallenged. A research community all of whose members share the same social background, educational history, and cultural framework will tend to have the same blind spots.

Feminist philosophers of science documented how androcentric assumptions shaped scientific research in ways that produced distorted knowledge. In primatology, the dominant narrative of primate social structure in the mid-20th century focused on male competition and hierarchy as the organizing principle of social life; female behavior was largely ignored. When more female primatologists entered the field in the 1970s and 1980s -- Jane Goodall, Sarah Hrdy, Barbara Smuts -- they brought different observational attention to different behaviors and documented rich patterns of female choice, cooperation, and alliance that had been invisible to the predominantly male prior observers. The resulting science was not just more "politically correct" but more complete and more accurate. This is a case where the social composition of the research community affected the quality of scientific knowledge it produced.

Sandra Harding and Donna Haraway developed standpoint epistemology as a broader theory: the claim that one's social position shapes one's epistemic access to certain features of reality, and that the perspectives of marginalized groups can provide distinctive epistemic advantages for questions about social life, power relations, and their natural consequences. The argument is not that marginalized people are smarter or more virtuous, but that their structural position gives them access to aspects of social reality that are invisible from positions of power. Those who experience subordination have better information about how systems of power actually operate than those for whom those systems are invisible background.


The Science Wars

The science wars of the 1990s brought these issues into public view with unusual acrimony. On one side were scientists, particularly physicists, who were alarmed by what they read in science studies, feminist theory, and postmodern cultural criticism: claims that scientific knowledge is socially constructed, that there is no neutral access to reality, that scientific authority is a form of political power. On the other side were scholars in history, sociology, and cultural studies of science who argued that the internal workings of science -- including its knowledge-production -- are shaped by social interests, cultural assumptions, and institutional power.

The Sokal hoax in 1996 was the most visible battle: physicist Alan Sokal submitted a parody paper to Social Text arguing that quantum gravity has political implications for progressive social theory, and the paper was published without substantive peer review. Sokal revealed the hoax in the magazine Lingua Franca as the issue appeared, arguing that the journal had suspended intellectual standards in favor of ideological agreement. The paper was deliberately absurd in its scientific content: it misused quantum mechanics, chaos theory, and mathematical concepts systematically. That it passed review in a prominent cultural studies journal appeared to confirm scientists' concerns about postmodern science studies.

The strongest version of the constructivist position -- that scientific facts are simply whatever scientific communities agree on, with no independent evidential constraint -- does face serious objections. It cannot explain why scientific theories make accurate predictions about phenomena that had not been observed when the theories were formulated. It cannot explain the convergence of independent lines of evidence on the same conclusions. And it tends to undermine the epistemic basis of its own claims: if there is no objective truth about the natural world, what grounds the claims that science is politically biased or that its knowledge is socially constructed?

But the critics of social constructivism were often too dismissive of its genuine insights. The fact that social factors influence the practice of science is not a sociological speculation but a well-documented empirical reality. Funding structures determine what questions are investigated. Career incentives shape what results get published and how. Cultural assumptions about gender and race demonstrably influenced research design and interpretation in psychology, medicine, and biology for most of the 20th century. These influences can and do lead to errors in scientific knowledge. Acknowledging this does not require abandoning the idea that some scientific claims are better supported by evidence than others; it requires building social structures for science -- including diversity, transparency, and genuine critical culture -- that minimize the distorting effects of these influences.


What Makes Science Work

Despite all the philosophical difficulties -- the underdetermination of theory by evidence, the theory-ladenness of observation, the incommensurability of paradigms, the influence of social factors -- science produces reliable knowledge and extraordinary technological power. This is a fact that any adequate philosophy of science must explain, not explain away.

The consensus answer among contemporary philosophers of science is that science works not through any single method or algorithm but through a particular social organization of inquiry that has proven remarkably effective at identifying and correcting errors. Multiple independent lines of evidence can converge on the same conclusion from different directions: if the reality of the electron is supported by cathode ray experiments, Thomson scattering, atomic spectroscopy, the photoelectric effect, and quantum electrodynamics, the convergence of these independent lines of evidence makes the skeptical position increasingly difficult to maintain. Replication -- the requirement that results be reproducible by independent investigators using independent methods -- provides a powerful error-correction mechanism. Peer review, for all its imperfections, provides some check on obvious errors and allows expert scrutiny of methods and inferences.
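The epistemic force of convergence admits a simple Bayesian gloss: when independent lines of evidence each favor a hypothesis, their likelihood ratios multiply, so even a strongly skeptical prior is quickly overwhelmed. A minimal sketch, with purely illustrative numbers rather than measured values:

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Bayesian update with independent lines of evidence.

    Each likelihood ratio is P(evidence | hypothesis) / P(evidence | not-hypothesis).
    For independent lines of evidence, the ratios simply multiply into the odds.
    """
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# A skeptic starts at 1-to-99 odds that the hypothesis is true (odds = 0.01).
# Five independent lines of evidence, each only modestly favorable
# (likelihood ratio 10), overwhelm that prior:
odds = posterior_odds(0.01, [10, 10, 10, 10, 10])
probability = odds / (1 + odds)
print(round(probability, 4))  # 0.999
```

The point of the sketch is structural, not numerical: no single line of evidence need be decisive, because independence is what does the epistemic work -- which is also why replication by independent investigators using independent methods matters so much.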

Openness to refutation -- the Popperian norm even if not the strict Popperian criterion -- creates a culture in which results that are genuinely false are eventually exposed. The cumulative error-correction of science over time means that although any single study or theory may be wrong, the trajectory of scientific knowledge across generations is toward better models of reality. None of this is magic, and none of it is pure rationality. It is a particular historical achievement: a social institution that, imperfectly and unevenly, has developed norms and practices that produce reliable knowledge at a rate that exceeds any alternative method humanity has tried.

The philosophy of science does not discover this by simply accepting scientists' self-image. It investigates carefully how science actually works -- when it succeeds and when it fails, what the relationship between evidence and theory actually is, and how social and institutional factors facilitate or distort the epistemic process. The result is a more honest and more useful account of science than either the naive picture of pure objective observation or the cynical picture of science as merely another form of power.


Connections

For an analysis of when and why scientific experts disagree, including the difference between genuine scientific uncertainty and manufactured doubt, see why experts disagree. For the practical mechanics of scientific investigation, see how the scientific method works. For a detailed examination of why published research is frequently wrong and what that means for evidence-based knowledge, see why most published research is wrong.


References

Kuhn, T. S. (1962). The Structure of Scientific Revolutions. University of Chicago Press.

Popper, K. R. (1959). The Logic of Scientific Discovery. Hutchinson. (Original German publication 1934.)

van Fraassen, B. C. (1980). The Scientific Image. Oxford University Press.

Longino, H. E. (1990). Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton University Press.

Laudan, L. (1981). A confutation of convergent realism. Philosophy of Science, 48(1), 19-49.

Worrall, J. (1989). Structural realism: The best of both worlds? Dialectica, 43(1-2), 99-124.

Hanson, N. R. (1958). Patterns of Discovery: An Inquiry into the Conceptual Foundations of Science. Cambridge University Press.

Bloor, D. (1976). Knowledge and Social Imagery. Routledge.

Sokal, A., & Bricmont, J. (1997). Fashionable Nonsense: Postmodern Intellectuals' Abuse of Science. Picador.

Gardiner, S. M. (2011). A Perfect Moral Storm: The Ethical Tragedy of Climate Change. Oxford University Press. (Cited for the moral reasoning parallel.)

Harding, S. (1986). The Science Question in Feminism. Cornell University Press.

Frequently Asked Questions

What is scientific realism and why does it matter?

Scientific realism is the view that successful scientific theories give us approximately true descriptions of the world as it actually is, including the unobservable entities those theories posit -- electrons, genes, black holes, quarks -- even though we cannot directly observe these things. The realist argues that the extraordinary predictive success of science would be a miracle if our theories were not at least approximately true: what other explanation could there be for the fact that quantum electrodynamics predicts experimental results to eleven decimal places, or that molecular biology built on the gene concept has produced working drugs and therapies? The alternative to scientific realism is anti-realism in one of several forms. Instrumentalism holds that scientific theories are just tools for predicting observable phenomena, and we need not believe that the unobservable entities they posit actually exist -- we just use them as computational devices. Bas van Fraassen developed the most sophisticated contemporary anti-realist position, constructive empiricism, in 'The Scientific Image' (1980): science aims at empirical adequacy (accurate predictions of observable phenomena) rather than truth, and we should believe only that theories are empirically adequate, not that their theoretical entities are real. The debate matters because it determines what kind of achievement science represents. If realism is right, science is giving us genuine knowledge of an objective reality that exists independently of our minds and our social institutions. If anti-realism is right, science is a very successful technology for prediction and control, but we should be agnostic about whether it tells us what the world is really like. Most working scientists are practical realists, but the philosophical arguments for and against realism remain unresolved.

What is the demarcation problem and how did Popper try to solve it?

The demarcation problem is the question of what distinguishes science from non-science -- and specifically from pseudoscience. The problem was central to Karl Popper's philosophical project, motivated by his observation that psychoanalysis, Adlerian psychology, and Marxist historical theory seemed capable of explaining any possible outcome: they could accommodate confirming evidence and also explain away disconfirming evidence through auxiliary hypotheses. By contrast, Einstein's general theory of relativity made a specific, surprising prediction -- that light would be bent by gravity during a solar eclipse -- that was precisely measurable and could have turned out differently. Popper's solution was falsificationism: a theory is scientific if and only if it makes predictions that could in principle be refuted by observation. Falsifiability, not verifiability, is the criterion of demarcation. A theory that can explain every possible outcome is unfalsifiable and therefore unscientific -- not necessarily false, but outside the domain of empirical inquiry. Popper's criterion is elegant and has been enormously influential, but it faces serious objections. The Duhem-Quine thesis (discussed separately) shows that no single experiment can falsify a theory, because any hypothesis faces evidence only in conjunction with a web of auxiliary assumptions, and a determined defender can always modify the auxiliaries rather than abandon the core theory. More broadly, the history of science contains many examples of scientists holding on to falsified theories for good methodological reasons while looking for a way to resolve the apparent falsification -- and being vindicated. Newtonian mechanics 'predicted' a slightly different orbit for Mercury than was observed; astronomers in the 19th century responded by searching for an undiscovered inner planet (Vulcan) rather than abandoning Newtonian mechanics, which was otherwise so successful. In this case, the anomaly turned out to require general relativity to resolve -- but the methodological instinct to protect a successful theory in the face of recalcitrant data was not irrational.

What is underdetermination and why does it threaten scientific knowledge?

Underdetermination is the philosophical thesis that any body of observational data is logically compatible with infinitely many different theories. Even a very large amount of empirical evidence does not uniquely determine which theory we should accept, because you can always construct alternative theories that are equally consistent with all the observations made so far but make different predictions about unobserved cases. This is not merely a theoretical possibility: the history of science contains genuine cases where multiple incompatible theories were empirically equivalent over long periods.

The threat to scientific knowledge is that if evidence underdetermines theory choice, then the reasons we favor one theory over another must involve factors beyond evidence -- theoretical virtues like simplicity, elegance, coherence with existing knowledge, and fruitfulness in generating new research. But these virtues are not purely logical or empirical criteria: they reflect aesthetic and pragmatic judgments that could vary from one scientific community to another. Why should we trust them as guides to truth?

The most serious form of underdetermination is the Duhem-Quine thesis, developed independently by the French physicist Pierre Duhem and the American philosopher W.V.O. Quine: no hypothesis faces empirical evidence in isolation, but only as part of a web of auxiliary assumptions. When an experiment gives an unexpected result, the logic of the situation tells you only that something in the conjunction of the core hypothesis and its auxiliary assumptions is wrong, not which element must be revised. A committed defender of any hypothesis can therefore always maintain it by revising auxiliary assumptions instead. This makes the naive Popperian picture of science proceeding by straightforward falsification untenable. Real science weighs the costs of revising different elements of the theoretical web, and that weighing requires judgment that goes beyond the logic of falsification.
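The core of the thesis can be made concrete with a toy sketch (the functions and data here are hypothetical, invented purely for illustration): two distinct "theories" that agree on every observation collected so far but disagree about an unobserved case, so the data alone cannot decide between them -- only a theoretical virtue like simplicity favors one over the other.

```python
# Two rival "theories" of how a measured quantity depends on x.

def theory_simple(x):
    # Theory 1: a simple linear law.
    return 2 * x

def theory_baroque(x):
    # Theory 2: agrees with Theory 1 at x = 0, 1, 2, 3 -- every observation
    # made so far -- because the added term vanishes at exactly those points,
    # but it diverges everywhere else.
    return 2 * x + x * (x - 1) * (x - 2) * (x - 3)

observed = [0, 1, 2, 3]  # the evidence gathered to date

# Empirically equivalent on all data collected so far:
assert all(theory_simple(x) == theory_baroque(x) for x in observed)

# Yet they make different predictions about the unobserved case x = 4:
print(theory_simple(4), theory_baroque(4))  # 8 vs 32
```

Since infinitely many such "baroque" alternatives can be constructed for any finite data set, nothing in the observations themselves singles out the linear law; the preference for it rests on simplicity, not on evidence.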

What is Kuhn's concept of a paradigm and what was controversial about it?

Thomas Kuhn introduced the concept of the paradigm in 'The Structure of Scientific Revolutions' (1962) to describe the framework of assumptions, methods, exemplary problem solutions, and values that defines normal scientific practice within a research community during periods of regular, non-revolutionary science. A paradigm is not merely a theory: it includes standard laboratory techniques, criteria for what counts as a good solution to a problem, judgments about which problems are worth investigating, and a collection of exemplary past investigations that serve as models for future work.

Normal science proceeds by working within the paradigm -- extending it, applying it to new cases, resolving minor anomalies -- without questioning its fundamental commitments. When anomalies accumulate beyond what the paradigm can absorb, and when a rival paradigm offers an alternative framework, a scientific revolution occurs: the community shifts its allegiance to the new paradigm.

The controversial element of Kuhn's account was incommensurability: he argued that competing paradigms are in some sense not fully comparable, because they frame problems differently and use terms with different meanings. Newtonian and Einsteinian mechanics both use the term 'mass', but mean different things by it. The shift from one paradigm to another is not simply a matter of rationally evaluating which better fits the data, because the criteria for evaluation are themselves partly paradigm-dependent. Many readers took this to imply that scientific revolutions are non-rational events driven by social, psychological, or rhetorical factors rather than evidence, and that there is no objective fact about which paradigm is better. Kuhn rejected this reading and spent much of the rest of his career arguing that incommensurability does not imply relativism -- that paradigm choices are guided by good epistemic reasons even if those reasons do not reduce to a paradigm-neutral algorithm.

What is the pessimistic meta-induction and does it undermine scientific realism?

The pessimistic meta-induction, sharpened into an argument against scientific realism by Larry Laudan in a 1981 paper, draws on the history of science to challenge the realist inference from predictive success to approximate truth. The argument proceeds as follows: the history of science is populated with highly successful theories -- theories that made accurate predictions, organized large bodies of data, and guided productive research programs -- that we now know to be false in their central ontological claims. The caloric theory of heat was extremely successful in the 18th and early 19th centuries: it predicted heat-transfer phenomena accurately, organized calorimetry, and guided important experimental discoveries. But caloric -- the fluid postulated as the carrier of heat -- does not exist. The phlogiston theory of combustion successfully organized a large body of chemical phenomena before Lavoisier; phlogiston does not exist. Ptolemaic astronomy, with its epicycles, predicted the positions of the planets with considerable accuracy for over a thousand years; the Earth is not at the center of the solar system.

If the most successful past theories were false in their core ontological claims, Laudan argued, we have inductive grounds for expecting that our current successful theories are false in theirs as well. The realist inference from success to approximate truth is therefore unwarranted.

Scientific realists have developed several responses. John Worrall proposed structural realism in 1989 as a compromise: even when science undergoes revolutionary change, the mathematical structure of successful theories is often preserved in their successors. Maxwell's electromagnetic theory and its quantum successors share structural equations even though they offer very different pictures of what electromagnetic phenomena are at the ontological level. Worrall suggests that what science reliably converges on is not the nature of unobservable entities but their structural relations -- a more modest and more defensible realist claim.

How do social and political factors shape science, and does this undermine objectivity?

The sociology of scientific knowledge (SSK) emerged in the 1970s with the program articulated by David Bloor in 'Knowledge and Social Imagery' (1976). Bloor's symmetry principle proposed that the sociology of science should explain why scientists believe what they do using the same kinds of causal factors regardless of whether the beliefs are true or false, successful or unsuccessful. Social interests, cultural assumptions, and institutional power relations shape what questions get investigated, what results get published, what counts as a good explanation, and how evidence gets interpreted.

The program was applied to the hard sciences: Steven Shapin and Simon Schaffer's 'Leviathan and the Air-Pump' (1985) argued that Robert Boyle's experimental program and Thomas Hobbes's opposition to it reflected different visions of the proper relationship between science and political authority, not merely different assessments of experimental evidence. Feminist philosophers of science extended the analysis to gendered assumptions: Helen Longino in 'Science as Social Knowledge' (1990) documented how background assumptions about natural hierarchies and the normativity of stereotypical sex roles shaped research design and interpretation in primatology, endocrinology, and developmental biology. Ruth Hubbard and others showed how the science of female reproductive biology long reflected the assumption that female hormonal cycles were primarily deviations from a male baseline.

The question is whether acknowledging social influence on science undermines scientific objectivity. Longino's answer is no: objectivity is a social achievement, not a property of individual scientists. A scientific community is more or less objective depending on whether its social structures enable genuine critical scrutiny of background assumptions: venues for criticism, uptake of criticism, public standards of evidence, and diversity of perspectives. Diversity of backgrounds and standpoints is an epistemic resource, not a threat to objectivity, because diverse communities are less likely to share unexamined background assumptions.

What were the science wars and what did each side get right?

The science wars were a series of disputes in the 1990s between scientists and social constructivists in science studies about the nature of scientific knowledge. The flash point was the Sokal hoax: in 1996, physicist Alan Sokal submitted a parody paper titled 'Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity' to the cultural studies journal Social Text. The paper was a pastiche of postmodern jargon, misused scientific concepts, and political verbiage with no coherent argument. Social Text published it. Sokal revealed the hoax simultaneously in Lingua Franca, arguing that postmodern science studies had abandoned any standard of intellectual rigor. Sokal and Jean Bricmont followed with 'Fashionable Nonsense' (1997), documenting prominent French intellectuals using scientific concepts -- quantum mechanics, chaos theory, Gödel's incompleteness theorems -- inaccurately and without evident understanding to lend authority to social theory.

The scientists' critique had genuine targets: some science studies writing did appear to embrace a relativism according to which scientific knowledge is merely one culturally situated perspective among many, with no special epistemic status. That view has serious problems: it cannot explain the instrumental success of science, and it undermines the epistemic basis of the political critiques that motivate it (if there is no objective truth, there is no objective truth about environmental harm or medical racism).

The science studies side also had genuine points: science is practiced by humans in social institutions with funding pressures, career incentives, and cultural assumptions, and these factors demonstrably influence what gets studied, what counts as a valid result, and how evidence is interpreted. Both of these things can be true simultaneously, and the resolution of the science wars has been, at its best, a more sophisticated account of how social factors shape scientific practice without making science just another form of ideology.