In 1974, the philosopher Thomas Nagel published an essay titled "What Is It Like to Be a Bat?" in the Philosophical Review. It is one of the most widely cited papers in twentieth-century philosophy. Nagel's question was deceptively simple: could we ever fully understand the subjective experience of a bat? Bats navigate through echolocation -- emitting ultrasonic pulses and constructing a spatial world from the echoes. Even if we understood every detail of bat neurology, Nagel argued, we would still lack something crucial: what it is like, from the inside, to experience the world through sonar. That inner dimension -- the felt quality of experience -- is precisely what no purely physical description seems to capture.

Nagel's essay crystallized a problem that had been lurking in philosophy and science for centuries but had acquired new urgency as neuroscience advanced: the mind-body problem. How does the physical brain give rise to subjective experience? How does the electrochemical activity of neurons produce the redness of red, the painfulness of pain, the taste of coffee? This is not a question about which brain regions activate during different experiences -- neuroscience can answer that. It is a question about why those physical processes are accompanied by any experience at all. The challenge is so profound that philosopher David Chalmers gave it its canonical name: the hard problem of consciousness.

Philosophy of mind is the branch of philosophy that investigates the nature of mental states -- consciousness, perception, belief, desire, intention -- their relationships to each other, and their relationship to the physical world. It draws on neuroscience, cognitive science, and artificial intelligence while asking questions that empirical methods alone cannot answer. Its answers determine what we should say about the moral status of animals, the possibility of artificial consciousness, and the ultimate nature of persons.

"It seems to me that no reason has been given to suppose that it is within our comprehension even to recognize a solution to this problem." -- Thomas Nagel, What Is It Like to Be a Bat? (1974)


Key Definitions

The mind-body problem -- The philosophical question of how mental states and physical states are related. In its classic form: how does the physical brain give rise to subjective experience?

Substance dualism -- Descartes' view that mind and body are distinct substances: res cogitans (thinking, non-spatial substance) and res extensa (extended, physical substance).

Position | Claim about Mind | Key Proponent | Key Problem
Substance dualism | Mind and body are distinct substances | Descartes | Interaction problem: how do they causally affect each other?
Physicalism / Materialism | Mental states are physical brain states | Many contemporary philosophers | Hard problem of consciousness
Functionalism | Mental states are defined by causal roles, not substrate | Hilary Putnam | Qualia: is the right functional organization sufficient for experience?
Eliminative materialism | Folk psychological concepts like "belief" don't refer to real entities | Paul and Patricia Churchland | Counter-intuitive; arguably self-undermining
Phenomenal consciousness (qualia) focus | Some aspects of mind resist physical explanation | Chalmers, Nagel | Explanatory gap between neural events and subjective experience

Physicalism -- The view that everything that exists is physical or constituted by physical facts. Mental states are physical states or are constituted by them.

Functionalism -- The view that mental states are defined by their functional roles -- their causal relations to inputs, outputs, and other mental states -- rather than by their physical substrate.

Qualia -- The felt qualities of conscious experience: what it is like to see red, hear middle C, taste sweetness. Sometimes called phenomenal properties.

The hard problem of consciousness -- David Chalmers's term for the question of why physical processes give rise to subjective experience at all.

The explanatory gap -- Joseph Levine's term (1983) for the conceptual space between physical descriptions of brain processes and phenomenal descriptions of experience that seems to resist closure.

Multiple realizability -- The observation that the same mental state type can be instantiated in physically different systems -- human neurons, octopus neurons, silicon circuits -- motivating functionalism over identity theory.

Eliminative materialism -- The view, associated with Paul Churchland and Patricia Churchland, that folk psychological concepts (belief, desire) are so theoretically confused they will be eliminated rather than reduced to neuroscience.

Panpsychism -- The view that phenomenal consciousness is a fundamental feature of reality, present to some degree in all matter, not reducible to non-phenomenal physical properties.


The Mind-Body Problem: From Descartes to the Present

Substance Dualism and Its Problems

René Descartes gave the mind-body problem its canonical modern form in Meditations on First Philosophy (1641). Through systematic doubt, Descartes arrived at the one certainty he could not undermine: that he was thinking -- cogito ergo sum. The thinking thing (res cogitans) was entirely non-spatial, indivisible, and constituted by thought alone. The body (res extensa) was spatial, divisible, and mechanically governed. Mind and body were distinct substances.

Substance dualism matches a common intuition -- consciousness does seem radically different from any physical thing. But it immediately generates the interaction problem. If mind and body are entirely different substances, how do they causally interact? When I decide to raise my arm (a mental event), my arm rises (a physical event). When a pin pricks my skin (a physical event), I feel pain (a mental event). Causal interaction between radically different substances seems either impossible or incomprehensible.

Descartes proposed the pineal gland as the interface, which merely relocates the mystery. The pineal gland is a physical structure, so the interaction problem recurs at its surface. His successors proposed alternatives -- Malebranche's occasionalism (God coordinates mental and physical events in parallel), Leibniz's pre-established harmony (God pre-set mind and body to run synchronously without interaction) -- that avoided the interaction problem at the cost of theological implausibility.

Modern physics makes substance dualism harder to maintain. Physics describes a causally closed system: every physical event has sufficient physical causes. If mental states are non-physical, they appear causally inert -- epiphenomenal, unable to affect physical events. This implies that beliefs and decisions have no effect on our actions, a conclusion almost everyone rejects.

Type Identity Theory and Multiple Realizability

Mid-twentieth century physicalism took the form of type identity theory, developed by U. T. Place and J. J. C. Smart in the 1950s-1960s. Mental state types are identical to physical brain state types: pain is identical to C-fiber firing. This avoids the interaction problem because mental events are physical events.

Type identity theory was challenged by the multiple realizability argument developed by Hilary Putnam in the 1960s. If pain is identical to C-fiber firing in humans, then a creature without C-fibers -- an octopus, a Martian, a silicon robot -- cannot have pain. But octopuses have nociceptors and exhibit pain behavior; the claim that they lack pain entirely seems implausible. Mental state types cannot be identical to specific human neural types if the same mental states can be realized in systems with entirely different physical organization. This argument motivated functionalism.


Functionalism: The Dominant Position

Functionalism holds that mental states are defined by their functional roles -- the causal relations between sensory inputs, behavioral outputs, and other mental states -- not by physical substrate. Pain is whatever state is caused by tissue damage, causes distress and avoidance behavior, and interacts with other mental states in characteristic ways, regardless of whether it is implemented in neurons or silicon.

Functionalism became the implicit philosophy of cognitive science. It permits the scientific study of mental processes in computational and functional terms without commitment to specific physical realization. It explains multiple realizability: any system with the right functional organization has the relevant mental states, regardless of physical makeup. It is congenial to artificial intelligence research: if minds are functional systems, then silicon minds are possible in principle.
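The substrate-independence at the heart of functionalism can be illustrated with a toy sketch (all class and method names here are invented for illustration, not drawn from any real system): a "pain role" is specified purely by its causal profile, and two physically different systems both count as realizing it.

```python
from typing import Protocol

class PainRole(Protocol):
    """Pain, functionally specified: whatever state plays this causal role."""
    def on_tissue_damage(self, severity: float) -> None: ...
    def avoidance_drive(self) -> float: ...

class NeuralAgent:
    """One realization: a crude stand-in for a biological substrate."""
    def __init__(self) -> None:
        self._nociceptor_activity = 0.0

    def on_tissue_damage(self, severity: float) -> None:
        self._nociceptor_activity += severity

    def avoidance_drive(self) -> float:
        return min(1.0, self._nociceptor_activity)

class SiliconAgent:
    """A physically different realization of the same functional role."""
    def __init__(self) -> None:
        self._damage_log: list[float] = []

    def on_tissue_damage(self, severity: float) -> None:
        self._damage_log.append(severity)

    def avoidance_drive(self) -> float:
        return min(1.0, sum(self._damage_log))

def in_pain(agent: PainRole) -> bool:
    # The functional test inspects only the causal role, never the substrate.
    return agent.avoidance_drive() > 0.5
```

On the functionalist picture, `in_pain` asks everything there is to ask; the objection discussed below is that nothing in the causal profile guarantees that either agent feels anything.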

The most sustained challenge to functionalism from within the physicalist camp comes from Ned Block's distinction between access consciousness and phenomenal consciousness. Access consciousness is information being available for reasoning, verbal report, and behavioral control -- the kind of consciousness functionalism handles well. Phenomenal consciousness is the felt quality of experience -- what it is like to be in a state. Block argues that functionalism may explain access consciousness while leaving phenomenal consciousness completely unexplained.

The Chinese Room

John Searle's Chinese Room argument (1980) challenges functionalism at a basic level. Searle imagines a person in a room with a rulebook for manipulating Chinese symbols. Chinese speakers pass questions through a slot; the person follows rules to produce answers that appear to come from a Chinese speaker. The system passes a behavioral test for understanding Chinese -- but the person inside understands nothing.

Searle argues this shows that syntax (formal symbol manipulation) is not sufficient for semantics (genuine understanding and meaning). A computer executing a program is in the same position: formal operations without understanding. Critics respond with the systems reply: the person does not understand Chinese, but the whole system -- person, rulebook, symbols -- does. Searle disputes this: if the person memorized the rulebook and performed the operations mentally, there would still be no understanding, even though the entire system was located in one head. The force of this reply remains contested; the debate continues to define discussions of AI consciousness.
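The formal structure Searle targets can be made concrete with a deliberately trivial sketch: a lookup table mapping input symbols to output symbols. (The real thought experiment imagines a rulebook rich enough to pass any behavioral test; this toy only exhibits the shape of "syntax without semantics.")

```python
# A toy rulebook: a purely formal mapping from symbol strings to symbol strings.
# Nothing in this program represents meanings; it only matches shapes.
RULEBOOK = {
    "你好吗": "我很好，谢谢",            # "How are you?" -> "I'm fine, thanks"
    "你会说中文吗": "会，我说得很流利",  # "Do you speak Chinese?" -> "Yes, fluently"
}

def chinese_room(symbols: str) -> str:
    # The operator consults the rulebook; no step involves understanding.
    return RULEBOOK.get(symbols, "请再说一遍")  # "Please say that again"
```

Searle's claim is that scaling this table up, or memorizing it, adds computation but not comprehension; the systems reply locates understanding in the whole mapping rather than in the operator.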


The Hard Problem and Qualia

Chalmers's Hard Problem

David Chalmers in "Facing Up to the Problem of Consciousness" (1995) and The Conscious Mind (1996) drew the distinction between easy and hard problems of consciousness. The easy problems include explaining how the brain integrates information, distinguishes sleep from waking, directs attention, and generates verbal reports about internal states. These are functionally specifiable tasks that neuroscience can investigate.

The hard problem is why any physical process is accompanied by subjective experience at all. A complete functional explanation of how the brain processes information about red objects would not explain why there is any felt quality to seeing red. Chalmers introduces the philosophical zombie: a being physically and functionally identical to a normal human but with no inner experience whatsoever. The conceivability of zombies entails, he argues, that consciousness is not logically entailed by physical organization -- physicalist explanations of function therefore fail to explain consciousness.

Joseph Levine had named the explanatory gap in 1983: the conceptual space between brain-state descriptions and phenomenal descriptions that no amount of neuroscientific detail seems to close. Even knowing that pain correlates with C-fiber firing, the question "but why does C-fiber firing feel like that?" seems always to remain.

The Knowledge Argument: Mary's Room

Frank Jackson's knowledge argument (1982) gives the hard problem its most vivid illustration. Mary is a neuroscientist raised in a black-and-white room. She has access to all physical and functional information about human color vision: wavelengths, retinal responses, neural pathways, behavioral consequences. She knows every physical fact about what happens when people see red.

Mary is released and sees red for the first time. Does she learn something new?

Jackson argued: yes. She learns what it is like to see red -- a phenomenal fact she could not have known from comprehensive physical information. Since she already knew all the physical facts, this new knowledge is non-physical. If physicalism holds that physical facts exhaust all facts, physicalism is false.

The ability hypothesis, developed by David Lewis and Lawrence Nemirow, denies that Mary gains new propositional knowledge. She gains new abilities -- the ability to recognize, remember, and imagine red experiences. Ability acquisition differs from learning new facts. The phenomenal concept strategy argues that Mary does gain new knowledge but only by acquiring new phenomenal concepts -- a different way of representing physical facts she already possessed -- not by encountering genuinely non-physical facts.

Daniel Dennett in Consciousness Explained (1991) offers the most radical response: the intuition that Mary learns something new is simply wrong. If you fully accept physicalism, you should conclude she does not genuinely learn a new fact. Dennett argues that qualia as Jackson and Chalmers conceive them -- intrinsic, ineffable, private properties separate from functional properties -- do not exist. What we call qualia are functional and relational properties that can be captured in third-person terms.


Integrated Information Theory and Global Workspace

Tononi's Integrated Information Theory

Giulio Tononi's Integrated Information Theory (IIT), developed from 2004 onward, attempts mathematical precision about consciousness. IIT proposes that consciousness is identical to integrated information -- measured as phi -- which quantifies how much information a system generates as a unified whole beyond the sum of its independent parts.

IIT begins from five phenomenological axioms: experience exists, is structured, is informative, is unified (you cannot divide experience into independent halves), and is definite. From these axioms, IIT derives conditions that any physical substrate of consciousness must satisfy. The cerebral cortex has high phi because of dense recurrent interconnectivity. The cerebellum, despite having four times as many neurons, has low phi because of its modular feedforward architecture -- predicting that cerebellar damage should not impair consciousness, consistent with clinical evidence.

Scott Aaronson raised an influential challenge: certain physically simple networks -- large regular grids of logic gates, for example -- could in principle have very high phi according to IIT's formula, implying they should be highly conscious in counterintuitive ways. Tononi disputes this reading. More fundamentally, IIT predicts that standard digital computers -- which execute serial computations through modular architectures -- have very low phi and therefore very low consciousness, regardless of behavioral sophistication. A system producing compelling text about its inner experiences would, on IIT, have very little inner life.

Global Workspace Theory

Bernard Baars proposed Global Workspace Theory in 1988. Consciousness is the broadcast of information to a global workspace that makes it widely available to different cognitive systems. Information processed locally in specialized modules is unconscious; information that achieves global broadcast becomes conscious. Stanislas Dehaene and Jean-Pierre Changeux developed a neurobiological version -- Global Neuronal Workspace Theory -- identifying the global workspace with long-range corticothalamic connections enabling wide broadcast across cortical areas.
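The broadcast architecture can be sketched in a few lines (the module names and the salience rule are invented for illustration): specialist modules process inputs in parallel, one content wins the competition for workspace access, and only the winner is made globally available.

```python
class Module:
    """A specialist processor: works locally, and sees whatever is broadcast."""
    def __init__(self, modality):
        self.modality = modality
        self.broadcasts_seen = []  # contents that achieved global access

    def bid(self, inputs):
        # Local ('unconscious') processing yields a salience-weighted bid.
        strength = inputs.get(self.modality, 0.0)
        return strength, (self.modality, strength)

def workspace_cycle(modules, inputs, threshold=0.5):
    """One competition-and-broadcast cycle of a toy global workspace."""
    strength, content = max(m.bid(inputs) for m in modules)
    if strength < threshold:
        return None  # nothing ignites; processing stays local and unreportable
    for m in modules:
        m.broadcasts_seen.append(content)  # broadcast: available to all systems
    return content
```

In the neuronal version, the `threshold` loosely corresponds to the nonlinear "ignition" of long-range corticothalamic activity; the criticism discussed below, restated in these terms, is that nothing in this architecture says why a broadcast content should feel like anything.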

Global Workspace Theory handles access consciousness well -- it explains why some information is reportable and available for reasoning -- but has been criticized for not addressing phenomenal consciousness. It explains the functional role of consciousness without explaining why globally broadcast information has phenomenal character.


Eliminative Materialism

The Churchlands' Radical Proposal

While functionalism and physicalist theories of mind attempt to find a place for commonsense mental concepts within a scientifically rigorous framework, eliminative materialism takes a more aggressive stance: folk psychology -- the everyday framework of beliefs, desires, intentions, and propositional attitudes -- is a radically false theory that will not merely be reduced to neuroscience but will be replaced by it entirely.

Paul Churchland's 1981 paper "Eliminative Materialism and the Propositional Attitudes," published in the Journal of Philosophy, made the case by analogy. Scientific history is littered with folk theories that were not reduced to more fundamental science but simply eliminated: phlogiston was not identified with some chemical substance but abandoned when oxygen theory made it unnecessary; vital spirits were not found to be some physical substance but abandoned when biochemistry explained life without them. Churchland argued that folk psychology -- our commonsense theory of mind organized around propositional attitudes -- has been stagnant for thousands of years, fails to explain sleep, learning, mental illness, and the basis of intelligence, and shows the signs of a degenerative research program rather than a progressive one.

Patricia Churchland extended this program in "Neurophilosophy" (1986), arguing that the proper study of mind should proceed from the bottom up, starting with neuroscience and building toward a successor vocabulary that carves mental processes at their real joints rather than at the joints of a prescientific folk taxonomy. Where Freudian theory postulated the id, ego, and superego, and where folk psychology postulates beliefs and desires, a mature neuroscience would postulate entities defined by their actual neural architecture and dynamics.

The most frequently raised objection is the self-refutation problem. Any argument for eliminative materialism must appeal to the very entities it seeks to eliminate: the eliminativist presumably believes that eliminativism is true, desires that others accept it, and intends their arguments to produce rational conviction. If there are no such things as beliefs and desires, then the statement "I believe that eliminative materialism is correct" expresses nothing. Paul Churchland has responded that this objection assumes folk psychology is required for the activity of reasoning, but this is precisely the empirical question at issue -- a neural theory of reasoning might invoke entirely different kinds of states.


Panpsychism and the Contemporary Landscape

An alternative response to the hard problem that has gained renewed philosophical attention is panpsychism: the view that phenomenal consciousness, or at least its basic constituents, is a fundamental and ubiquitous feature of reality rather than something that emerges only in complex nervous systems. Where eliminativism denies the reality of consciousness as ordinarily conceived, panpsychism takes it seriously enough to make it a basic feature of the natural world.

Philip Goff, in "Galileo's Error" (2019), argues that the modern scientific framework systematically excludes consciousness by design. Galileo's methodological achievement -- mathematizing nature by excluding secondary qualities (color, taste, smell) from the domain of physics -- was enormously productive but left consciousness with no place in the resulting picture. Panpsychism, in Goff's version, holds that the fundamental entities described by physics have phenomenal properties, and that consciousness in complex organisms like humans is composed from these more elementary phenomenal properties through combination.

The combination problem is panpsychism's central challenge: how do micro-level phenomenal properties combine to produce the unified, rich, macro-level consciousness characteristic of human experience? It is not obvious why billions of minimally conscious particles should produce a single unified experiential perspective when combined into a brain. Critics argue this problem is at least as intractable as the hard problem that panpsychism was introduced to solve. Defenders argue it is more tractable, because the combination problem at least involves phenomenal entities combining -- rather than requiring a wholly non-phenomenal substrate to somehow generate phenomenal experience.


Philosophy of Mind and Artificial Intelligence

The question of AI consciousness is a direct application of philosophy of mind positions with genuine moral stakes.

If functionalism is correct, consciousness depends on functional organization, not substrate. A system with the right architecture could be conscious regardless of whether it is made of neurons or silicon. Current large language models, despite impressive outputs, likely lack the integrated self-modeling and dynamic temporal organization most functionalists regard as necessary. But the criteria are contested and the field is developing rapidly.

IIT predicts that standard digital computers have very low phi and therefore minimal consciousness regardless of behavioral sophistication. On this view, behavioral output and phenomenal reality can come apart entirely: a system could be behaviorally sophisticated enough to persuade most human observers of its inner life while possessing, on IIT's analysis, essentially no phenomenal consciousness. The implications for how we should evaluate AI behavior claims are practically significant.

Chalmers addressed virtual minds in Reality+ (2022). If consciousness depends on organization rather than biological substrate, virtual entities -- including AI systems with the right organization -- could have genuine mental states and moral status comparable to biological minds. This position follows from Chalmers's broader philosophical commitments about the relationship between physical organization and phenomenal consciousness, though most philosophers do not go this far. Chalmers is careful to frame his conclusions as conditional on contested functionalist premises, not as established results.

The questions are practically urgent. As AI systems demonstrate increasingly sophisticated behavior, and as some researchers claim to observe evidence of distress or preference in large language models, the philosophical frameworks for evaluating those claims become matters of consequence -- not just for academic philosophy but for AI design, regulation, and ethics. The hard problem does not become easier just because the system in question was built by engineers rather than evolved over millennia. If anything, it becomes sharper: we can examine the architecture of a digital system in complete detail, and yet the question of whether there is something it is like to be that system remains just as resistant to the third-person methods of empirical science as it was when Nagel asked it about bats in 1974.



References

  • Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200-219.
  • Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.
  • Chalmers, D. J. (2022). Reality+: Virtual Worlds and the Problems of Philosophy. W. W. Norton.
  • Churchland, P. M. (1981). Eliminative materialism and the propositional attitudes. Journal of Philosophy, 78(2), 67-90.
  • Dehaene, S., Changeux, J.-P., & Naccache, L. (2011). The global neuronal workspace model of conscious access. In S. Dehaene & Y. Christen (Eds.), Characterizing Consciousness: From Cognition to the Clinic? Springer.
  • Dennett, D. C. (1991). Consciousness Explained. Little, Brown.
  • Jackson, F. (1982). Epiphenomenal qualia. Philosophical Quarterly, 32(127), 127-136.
  • Levine, J. (1983). Materialism and qualia: The explanatory gap. Pacific Philosophical Quarterly, 64(4), 354-361.
  • Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83(4), 435-450.
  • Putnam, H. (1967). Psychological predicates. In W. H. Capitan & D. D. Merrill (Eds.), Art, Mind, and Religion. University of Pittsburgh Press.
  • Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-424.
  • Tononi, G. (2004). An information integration theory of consciousness. BMC Neuroscience, 5, 42.
  • Tononi, G., Boly, M., Massimini, M., & Koch, C. (2016). Integrated information theory: From consciousness to its physical substrate. Nature Reviews Neuroscience, 17(7), 450-461.

Frequently Asked Questions

What is the hard problem of consciousness?

The hard problem of consciousness is the question of why there is subjective experience at all -- why physical processes in the brain give rise to the felt quality of experience. The term was coined by philosopher David Chalmers in his 1995 paper "Facing Up to the Problem of Consciousness" and developed in his 1996 book The Conscious Mind.

Chalmers distinguishes the hard problem from what he calls the "easy problems" of consciousness. The easy problems involve explaining cognitive and behavioral functions: how the brain integrates information, how it focuses attention, how it controls behavior, how it differentiates sleep from waking. These problems are "easy" only relative to the hard problem -- they are scientifically difficult but in principle explicable through functional and computational analysis of neural processes.

The hard problem is different in kind. Even if we completely explained how the brain processes sensory information, integrates it, and generates behavioral responses, we would not thereby have explained why this processing is accompanied by subjective experience -- why there is something it is like to be in those states. A complete functional description of how the brain responds to red light does not explain why seeing red feels the way it does.

Thomas Nagel anticipated the problem in his 1974 paper "What Is It Like to Be a Bat?" Nagel argued that even a complete physical description of bat echolocation would leave something out: what it is like, from the inside, to experience the world through sonar. Joseph Levine named this the "explanatory gap" in 1983 -- a conceptual space between physical descriptions of brain states and phenomenal descriptions of experience that no purely physical account seems able to close.

What is functionalism in philosophy of mind?

Functionalism is the view that mental states are defined by their functional roles -- their causal relations to sensory inputs, behavioral outputs, and other mental states -- rather than by their physical substrate. The theory emerged from the work of Hilary Putnam in the 1960s as an alternative to type identity theory (the view that each mental state type is identical to a specific physical brain state type).

The key motivation for functionalism is multiple realizability: the observation that the same mental state can be realized in physically different systems. Pain, for example, might be realized in human neurons, in a very differently structured octopus nervous system, or in principle in a silicon computer. If pain is identical to a specific type of human neural state (C-fiber firing, in the classic example), it seems to follow wrongly that an octopus or a computer cannot have pain. Functionalism avoids this by saying pain is whatever plays the functional role of pain -- whatever is caused by tissue damage, causes distress, motivates avoidance behavior, and so on -- regardless of physical substrate.

Functionalism has been enormously influential partly because it is congenial to cognitive science and to artificial intelligence: if mental states are functional states, then in principle a system with the right functional organization could have mental states regardless of whether it is made of neurons. This makes functionalism the implicit philosophy of most AI consciousness discussions.

The main challenge to functionalism comes from qualia: the felt qualities of experience. Even if we accept that functional organization is sufficient for cognition, it is not obvious that it is sufficient for consciousness. A system could, by hypothesis, perform all the functional operations associated with pain without anything feeling painful. This objection motivates both the philosophical zombie thought experiment and John Searle's Chinese Room argument.

What is the Mary's Room thought experiment and what does it show?

Mary's Room is a thought experiment proposed by philosopher Frank Jackson in his 1982 paper "Epiphenomenal Qualia." Mary is a scientist who has lived her entire life in a black-and-white room. She has access to all the physical information about color vision: the wavelengths of light, the neural responses they trigger, the behavioral dispositions they cause. She knows, in other words, every physical fact about what happens when people see red.

Then Mary leaves the room and sees red for the first time. Does she learn something new?

Jackson argued: yes, she learns what it is like to see red. If she already knew all the physical facts and still learns something new upon seeing red, then the physical facts do not exhaust all the facts about conscious experience. There are non-physical facts about qualia. This conclusion threatens physicalism -- the view that everything that exists is physical or constituted by physical facts.

Physicalists have offered several responses. The ability hypothesis, developed by David Lewis and Lawrence Nemirow, argues that Mary does not gain new propositional knowledge (new facts) when she sees red -- she gains a new ability: the ability to recognize, remember, and imagine red experiences. Ability acquisition is not the same as learning a new fact about the world. A second response, the phenomenal concept strategy, argues that Mary does gain new knowledge but only because she acquires new phenomenal concepts for the same physical facts she already possessed -- not because there are non-physical facts.

Daniel Dennett takes a more radical approach, arguing that our intuition that Mary learns something new is simply a mistake driven by confused intuitions about qualia. In his view, once you fully accept physicalism, you should conclude that Mary does not learn anything genuinely new -- any sense that she does reflects philosophical confusion about the nature of experience.

What is eliminative materialism?

Eliminative materialism is the view, most associated with philosophers Paul Churchland and Patricia Churchland, that our ordinary mental vocabulary -- beliefs, desires, fears, intentions, the concepts of "folk psychology" -- is so theoretically confused that it will ultimately be eliminated and replaced by a mature neuroscience, rather than being reduced to or explained by neuroscience.

Paul Churchland laid out the core argument in "Eliminative Materialism and the Propositional Attitudes" (1981). He draws an analogy with the history of science: we no longer explain fire in terms of phlogiston, heat in terms of caloric fluid, or vital processes in terms of vital spirit. These theoretical posits were not found to be identical to some physical quantity -- they were eliminated from scientific vocabulary when better theories appeared. Churchland argues that folk psychological posits like "beliefs" and "desires" are similarly likely to fail: they are part of a theory (folk psychology) that has been stagnant for millennia, that makes poor predictions, and that may carve nature at entirely the wrong joints from a neuroscientific perspective.

The view is philosophically bold but faces powerful objections, most prominently the self-refutation worry: does Churchland believe eliminative materialism is true? If beliefs are eliminated, the sentence "I believe eliminative materialism is true" becomes literally meaningless. Churchland and his defenders respond that the self-refutation worry misunderstands the claim -- folk psychological concepts are to be replaced over time with better ones, not declared incoherent now.

Eliminative materialism has influenced neurophilosophy -- the research program of bringing philosophical analysis to neuroscience -- but remains a minority position. Most philosophers of mind favor some version of functionalism or non-reductive physicalism.

What is Integrated Information Theory and how does it account for consciousness?

Integrated Information Theory (IIT) was developed by neuroscientist Giulio Tononi, first in a 2004 paper and then refined through a series of increasingly detailed technical formulations. IIT proposes that consciousness is identical to a property called integrated information, denoted by the Greek letter phi. A system has high phi if it generates more information as a unified whole than the sum of its independent parts — if its causal structure is deeply integrated rather than modular.

Tononi argues that IIT is the correct theory because it starts from the phenomenology of consciousness — the intrinsic properties of experience — and derives conditions that any physical system instantiating consciousness must satisfy. Five axioms characterize experience: existence (experience exists), composition (experience is structured), information (each experience is unique), integration (experience is unified — you cannot decompose your visual field into independent halves), and exclusion (each experience is definite, not vague).

From these axioms, IIT derives the claim that the neural correlates of consciousness must be a complex of high causal integration. The cerebral cortex, with its dense recurrent connectivity, has high phi. The cerebellum, despite having four times as many neurons as the cortex, has low phi because of its modular, feedforward architecture — which, IIT predicts, is why cerebellar damage does not impair consciousness.

Computational scientist Scott Aaronson raised an influential challenge: according to IIT's mathematics, certain simple feedforward networks and grids could have extremely high phi, seemingly implying they are conscious. Tononi disputes this reading.
IIT also implies that sufficiently integrated silicon systems could be conscious — and conversely, that certain functional organizations (including standard digital computers with their modular architecture) might have very low phi and therefore little or no consciousness regardless of behavioral sophistication. This puts IIT in tension with functionalism.
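Tononi's actual phi is defined over a system's full cause-effect structure and is notoriously expensive to compute; nothing below is that formalism. But the core intuition — a system is integrated when no cut into parts accounts for all the information the whole carries — can be illustrated with a deliberately simplified proxy. The sketch compares a tiny "integrated" network (each unit is the XOR of the other two) with a "modular" one (each unit only copies itself); the networks and the proxy measure are invented for illustration.

```python
from itertools import product
from math import log2

def mutual_info(pairs):
    """Mutual information (bits) between paired samples, uniformly weighted."""
    n = len(pairs)
    px, py, pxy = {}, {}, {}
    for x, y in pairs:
        px[x] = px.get(x, 0) + 1
        py[y] = py.get(y, 0) + 1
        pxy[(x, y)] = pxy.get((x, y), 0) + 1
    return sum((c / n) * log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def phi_proxy(update, n_units=3):
    """Toy integration measure: whole-system input->output information
    minus the best account achievable after cutting the system in two."""
    states = list(product((0, 1), repeat=n_units))
    pairs = [(s, update(s)) for s in states]  # uniform over all input states
    whole = mutual_info(pairs)
    # Bipartitions of the form {i} vs. the rest (all bipartitions, for 3 units).
    best_parts = max(
        mutual_info([(s[i], y[i]) for s, y in pairs])
        + mutual_info([(tuple(s[j] for j in rest), tuple(y[j] for j in rest))
                       for s, y in pairs])
        for i in range(n_units)
        for rest in ([j for j in range(n_units) if j != i],)
    )
    return whole - best_parts

# Integrated network: every unit is the XOR of the other two.
xor_net = lambda s: (s[1] ^ s[2], s[0] ^ s[2], s[0] ^ s[1])
# Modular network: every unit just copies its own state.
copy_net = lambda s: s

print(phi_proxy(xor_net))   # positive (1.0 bit): every cut loses information
print(phi_proxy(copy_net))  # 0.0: the parts fully account for the whole
```

The copy network scores zero however many units it has, mirroring IIT's verdict on modular architectures: no amount of activity matters if the parts do not constrain one another.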

What does philosophy of mind say about whether AI could be conscious?

The question of AI consciousness depends almost entirely on which philosophy of mind is correct — making it one of the few areas where abstract philosophical disputes have enormous practical stakes.

If functionalism is true, consciousness depends on functional organization, not substrate. A sufficiently complex AI with the right architecture could in principle be conscious, and there is no principled reason a silicon substrate should be disqualified. The key question becomes empirical: does the system have the right functional organization? Some functionalists, like philosopher Daniel Dennett, have argued that current large language models, while impressive, lack the right kind of dynamic integration and self-modeling to be genuinely conscious.

John Searle's Chinese Room argument (1980) is the most famous challenge to functionalist AI consciousness. Imagine a person who does not understand Chinese locked in a room with a rulebook for manipulating Chinese symbols. Chinese speakers pass in questions; the person follows the rules and passes back answers. From the outside, the room appears to understand Chinese — the system passes a behavioral test for understanding — yet the person inside understands nothing. Searle argues this demonstrates that syntax (symbol manipulation) is not sufficient for semantics (meaning and understanding). A computer manipulating symbols has no more understanding than the room — regardless of how sophisticated its outputs. Critics respond with the systems reply: the person does not understand Chinese, but the system as a whole does. Searle disputes this.

IIT predicts that standard digital computers, with their modular feedforward architecture, would have very low or zero phi and therefore little consciousness — regardless of their functional sophistication. A large language model generating conscious-seeming text would, on IIT, not be conscious.

David Chalmers addresses AI consciousness directly in 'Reality+' (2022), arguing that virtual minds — whether in simulations or artificial systems — could be genuinely minded if they have the right functional or physical organization, and that their moral status would be as real as that of biological minds.
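Searle's setup can be caricatured in a few lines of code: a purely syntactic lookup table mapping symbol strings to symbol strings. The rulebook entries below are invented placeholders (Searle's imagined rulebook would be astronomically larger), but they make the point vivid: nothing in the program's operation involves, or requires, the meanings of the symbols.

```python
# A toy 'rulebook': purely syntactic mappings from symbol strings to
# symbol strings. Entries are invented placeholders for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" / "Fine, thanks."
    "天气好吗？": "是的，天气很好。",  # "Nice weather?" / "Yes, very nice."
}

def chinese_room(question: str) -> str:
    # The operator matches symbols against rules; at no point does the
    # procedure consult what the symbols mean.
    return RULEBOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."
```

The room "answers" correctly whenever the rulebook covers the input, yet the function treats the strings as opaque tokens. Searle's claim is that scaling this up changes nothing about the absence of understanding; the systems reply insists that, at sufficient scale, it does.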

What is the mind-body problem and why hasn't it been solved?

The mind-body problem is the question of how mental states and physical states are related. René Descartes gave the problem its canonical modern form in the seventeenth century through substance dualism: his argument that the mind (res cogitans — thinking thing) and body (res extensa — extended thing) are two fundamentally different substances. Minds are non-spatial, non-material thinking substances; bodies are spatial, material substances governed by mechanical laws.

Substance dualism faces the interaction problem: if mind and body are entirely different substances, how do they causally interact? When I decide to raise my arm (a mental event), my arm rises (a physical event). When a pin pricks my skin (a physical event), I feel pain (a mental event). But if minds are non-physical, they cannot be in the right kind of causal contact with physical things — causation between radically different substances seems mysterious. Descartes suggested the pineal gland as the interface, but this merely relocates the mystery without resolving it.

Modern physicalism — the dominant view in philosophy of mind today — attempts to dissolve dualism by holding that mental states are identical to, constituted by, or realized in physical brain states. But physicalism faces the hard problem: the explanatory gap between physical descriptions of brain processes and phenomenal descriptions of experience. Even granting that mental states are physical, we seem unable to explain in physical terms why they feel like anything.

The problem remains unsolved partly because it may require conceptual revisions we have not yet achieved. Some philosophers argue we need a radical expansion of our concept of the physical (panpsychism — the view that phenomenal properties are fundamental features of reality, not reducible to other physical properties). Others argue the explanatory gap is a cognitive illusion — our intuitions about consciousness are systematically misleading.
Neither solution has achieved consensus. The problem persists because it sits at the intersection of empirical science, conceptual analysis, and deep metaphysics in a way that resists resolution by any single method.