In 1956, within the span of a few months, two events occurred that would fundamentally redirect the course of the human sciences. At the Dartmouth Summer Research Project on Artificial Intelligence — the meeting that effectively named and launched the field of AI — Allen Newell and Herbert Simon demonstrated the Logic Theorist, a computer program that could prove mathematical theorems in ways that appeared to mimic human reasoning. Simon had told his students that January that, over the Christmas break, he and Newell had "invented a thinking machine." Then, at the MIT Symposium on Information Theory on September 11, 1956, a thirty-six-year-old psychologist named George Miller presented a paper with an unusual title: "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information." Miller's argument was deceptively simple: working memory — the mind's capacity to hold information actively in awareness — has a definite limit of approximately seven items, plus or minus two. Whether the items were digits, letters, or words, people ran into the same cognitive bottleneck. The paper would become one of the most cited in the history of psychology.

With these events, the cognitive revolution was under way. The mind was being reimagined not as a black box accessible only through observable behavior — the behaviorist orthodoxy that had dominated psychology for forty years — but as an information-processing system whose internal structure could be studied directly. The metaphor was borrowed from the newly available digital computer: the brain was hardware, the mind was software. Mental states were representations; cognition was computation; memory was storage; attention was bandwidth. This was not merely a new set of hypotheses but a new framework for what psychology was about.

The field that emerged from these convergences — eventually named cognitive science — became one of the most consequential intellectual developments of the twentieth century. It changed how we understand language, perception, memory, consciousness, and machine intelligence. It also raised questions that remain, seventy years later, genuinely unresolved: what makes a physical process mental, why physical processes give rise to subjective experience at all, and whether the computational metaphor is ultimately adequate to explain the mind.

"The mind is what the brain does." — Marvin Minsky, The Society of Mind (1986)


Key Definitions

Cognitive science — The interdisciplinary study of mind and intelligence, combining methods and findings from psychology, neuroscience, linguistics, philosophy of mind, computer science, and anthropology. Unified by the goal of explaining cognition — thought, perception, language, memory, and consciousness — at multiple levels of analysis.

The computational theory of mind (CTM) — The hypothesis that mental processes are computations: the brain manipulates representations according to rules, analogously to how a computer manipulates symbols. Associated with functionalism (Hilary Putnam) and the language of thought hypothesis (Jerry Fodor, 1975).

Behaviorism — The school of psychology, dominant from approximately 1920 to 1960, that held psychology should study only observable behavior, not unobservable mental states. B.F. Skinner's radical behaviorism claimed all behavior, including language, is explicable through stimulus-response conditioning and reinforcement histories.

Marr's three levels — David Marr's framework for analyzing cognitive systems at three levels: the computational level (what the system does and why), the algorithmic level (how it does it — what representations and procedures), and the implementational level (how the algorithm is physically realized in brain tissue or silicon).

Working memory — The cognitive system that holds a limited amount of information in active awareness for short periods, supporting ongoing thought and behavior. Distinct from long-term memory. George Miller (1956) estimated its capacity at approximately seven items; later work by Baddeley and Hitch (1974) identified its component structure.

Embodied cognition — The view that cognitive processes are fundamentally shaped by the structure, capabilities, and sensorimotor experience of the physical body, rather than being body-independent computations.

Extended mind — Andy Clark and David Chalmers's (1998) thesis that cognitive processes can extend beyond the skull into the environment: when an external resource (a notebook, a smartphone) functionally substitutes for biological memory, it constitutes part of the cognitive system.

Modularity — Jerry Fodor's (1983) proposal that the mind consists of specialized, encapsulated processing modules — domain-specific systems (face recognition, syntax processing, color perception) that process information automatically and independently of general cognition. Contrasts with non-modular views in which cognition is globally interconnected.

Connectionism — The approach to modeling cognition using artificial neural networks: distributed representations across many processing units, learning through adjusting connection weights, no explicit symbolic rules. Associated with the PDP (Parallel Distributed Processing) group and their 1986 volumes.

The hard problem of consciousness — David Chalmers's (1995) distinction between the "easy problems" of consciousness (explaining how the brain processes information, integrates signals, reports mental states) and the "hard problem" (explaining why there is subjective experience — why processing feels like anything at all).


The Cognitive Revolution: Breaking with Behaviorism

What Behaviorism Got Wrong

Behaviorism's dominance in American psychology through the first half of the twentieth century was not arbitrary. Watson and Skinner offered a genuinely scientific research program: measurable stimuli, measurable responses, reproducible experiments. The objection to mentalistic concepts — intentions, beliefs, representations, images — was methodological. How do you study what you cannot observe?

The problem was that behaviorism's explanatory scope was too narrow to account for much of what minds actually do. The most decisive challenge came from language. In 1959, Noam Chomsky published a devastating review of Skinner's 1957 book "Verbal Behavior," in which Skinner had attempted to explain language acquisition through operant conditioning: children learn words and sentences because correct utterances are reinforced by parental approval and communicative success.

Chomsky's critique was three-pronged. First, children produce sentences they have never heard before — grammatical sentences that no reinforcement history could have shaped. A four-year-old who says "I goed to the store" has never heard that form from anyone; she has extracted a rule (add -ed for past tense) and overapplied it. Second, sentences have deep structure distinct from surface form: "John is easy to please" and "John is eager to please" have identical surface structure but radically different meanings (in the first, someone pleases John; in the second, John pleases someone). Syntax cannot be learned from surface regularities alone. Third, children converge on the same grammatical rules despite hearing an impoverished sample of the language — the "poverty of the stimulus" argument for an innate universal grammar that constrains language learning.
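
The overregularization pattern is easy to state as an explicit rule. As a toy illustration — not a model of actual child grammar, and with an invented miniature lexicon — here is a sketch of a learner that has induced the regular past-tense rule but not yet stored the exceptions:

```python
# Toy illustration of past-tense overregularization: a learner that has
# induced the regular "add -ed" rule but not yet stored the exceptions.
# (Illustrative only; not a model of actual child grammar.)

IRREGULARS = {"go": "went", "eat": "ate", "run": "ran"}  # adult exception lexicon

def child_past_tense(verb: str) -> str:
    """Apply the regular rule to everything -- the overregularization stage."""
    return verb + "ed"

def adult_past_tense(verb: str) -> str:
    """Check stored exceptions first, then fall back to the regular rule."""
    return IRREGULARS.get(verb, verb + "ed")

for verb in ["walk", "jump", "go"]:
    print(verb, "->", child_past_tense(verb), "/", adult_past_tense(verb))
# walk -> walked / walked
# jump -> jumped / jumped
# go   -> goed   / went    <- the rule, overapplied
```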

Chomsky's alternative — transformational generative grammar — proposed that language competence consists of an internal rule system that generates an infinite number of grammatical sentences from a finite set of rules. This was a cognitive, representational account: the mind had internal structure, and that structure was worth studying.
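
The core formal idea — finite rules, unbounded output — can be shown with a toy grammar. A minimal sketch (the grammar and vocabulary are invented for illustration; these are not Chomsky's actual transformational rules):

```python
import random

# A toy context-free grammar: a handful of rules, but because NP can embed
# a relative clause containing another S, it generates unboundedly many
# distinct grammatical strings.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "S"]],   # recursion lives here
    "VP": [["V", "NP"]],
    "N":  [["cat"], ["dog"], ["child"]],
    "V":  [["chased"], ["saw"]],
}

def generate(symbol="S", depth=0, max_depth=4):
    """Expand a symbol by randomly choosing one of its rules."""
    if symbol not in GRAMMAR:
        return [symbol]                          # terminal word
    rules = GRAMMAR[symbol]
    if depth >= max_depth and symbol == "NP":    # cap recursion so sampling halts
        rules = [rules[0]]
    out = []
    for sym in random.choice(rules):
        out.extend(generate(sym, depth + 1, max_depth))
    return out

for _ in range(3):
    print(" ".join(generate()))
# e.g. "the dog that the child saw the cat chased the dog"
```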

Miller, Newell, and Simon: The New Paradigm

Miller's 1956 paper established the concept of chunking: working memory holds seven items, but an "item" can be a single digit, a word, a chess position, or an entire poem if the subject has chunked the lower-level elements into a single retrievable unit. Chase and Simon's 1973 studies of chess masters showed this empirically: masters looking briefly at game positions could reconstruct them from memory with remarkable accuracy, but only for positions from real games, not random piece placements. They were not remembering two dozen individual piece positions; they were recognizing familiar chunks — constellations of pieces with familiar tactical meanings.
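
The arithmetic of chunking is easy to demonstrate. A minimal sketch, with invented digit strings, of why recoding raises effective span:

```python
# Toy illustration of chunking: raw digits exceed a ~7-item span, but a
# subject who recodes them into familiar units holds the same content in
# far fewer slots. (Digit string and chunk inventory invented for the demo.)

SPAN = 7  # Miller's approximate item limit

def fits_in_span(items):
    return len(items) <= SPAN

digits = list("149216202001")        # 12 raw digits: over the limit
chunks = ["1492", "1620", "2001"]    # same content recoded as 3 familiar dates

print(fits_in_span(digits))  # False -- 12 items
print(fits_in_span(chunks))  # True  -- 3 chunks
```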

Newell and Simon's General Problem Solver (1957) and their later work on human problem-solving proposed a production system model: cognition as the application of condition-action rules ("productions") to a problem space. The model made specific, testable predictions about the sequence of steps humans would take solving logical and mathematical problems, and matched human protocols collected by asking subjects to think aloud.
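
A production system is simple to sketch: a working state, a set of condition-action rules, and a loop that fires whichever rule matches until the goal test is satisfied. The toy water-jug rules below are invented for illustration; they are not GPS's actual operators:

```python
# Minimal production-system sketch: condition-action rules applied to a
# state until the goal test is satisfied.

def solve(state, goal, productions, max_steps=20):
    trace = [state]
    for _ in range(max_steps):
        if goal(state):
            return trace
        for condition, action in productions:
            if condition(state):
                state = action(state)
                trace.append(state)
                break
        else:
            return None  # no production matched: impasse
    return None

# State = (litres in 3-litre jug, litres in 4-litre jug); goal: 2 litres in the 4-jug.
productions = [
    # pour the 4-jug into the 3-jug until one is empty or full
    (lambda s: s[1] > 0 and s[0] < 3,
     lambda s: (s[0] + min(s[1], 3 - s[0]), s[1] - min(s[1], 3 - s[0]))),
    # empty the 3-jug when it is full
    (lambda s: s[0] == 3, lambda s: (0, s[1])),
    # refill the 4-jug when it is empty
    (lambda s: s[1] == 0, lambda s: (s[0], 4)),
]

print(solve((0, 0), lambda s: s[1] == 2, productions))
# [(0, 0), (0, 4), (3, 1), (0, 1), (1, 0), (1, 4), (3, 2)]
```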

Together, these developments established a new scientific agenda: mapping the computational processes, representations, and capacities that constitute human cognition.


The Computational Metaphor: Mind as Information Processing

Alan Turing and the Foundations

The intellectual ancestor of the computational theory of mind is Alan Turing's 1950 paper "Computing Machinery and Intelligence," which introduced the imitation game — now called the Turing Test — and asked whether a machine could be built that would be indistinguishable from a human in conversation. Turing was not just asking whether machines could simulate intelligence but whether there was any principled reason to deny that a machine capable of human-level conversational performance was genuinely intelligent.

The philosophical framework that emerged — functionalism, developed most fully by Hilary Putnam in the 1960s — holds that mental states are defined by their functional roles: their causal relations to inputs (stimuli), outputs (behaviors), and other mental states. If pain is whatever plays the pain role — causing avoidance behavior, being caused by tissue damage, generating attention to the damaged region — then pain can in principle be realized by any physical system that plays that role, whether neurons, silicon, or something else entirely. This is multiple realizability: the same mental state can be instantiated in different physical substrates.

Jerry Fodor pushed this into the "language of thought hypothesis" (1975): thought is conducted in an inner mental language (Mentalese) with a combinatorial structure. Mental states are relations to sentences in Mentalese — to believe that it's raining is to have a tokening of the Mentalese sentence "IT IS RAINING" in your belief box. The combinatorial structure explains the productivity of thought (we can think indefinitely many thoughts) and systematicity (whoever can think "the cat chased the dog" can also think "the dog chased the cat").

The Chinese Room: A Challenge to CTM

The most influential philosophical challenge to the computational theory of mind is John Searle's Chinese Room argument (1980). Imagine a person locked in a room with a large set of Chinese symbols and a rulebook that specifies, for any sequence of Chinese symbols passed in through a slot, which sequence to pass back out. To Chinese speakers outside the room, the responses appear to demonstrate perfect understanding of Chinese. But the person inside understands nothing — they are simply manipulating symbols according to rules without any understanding of what the symbols mean.
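
The mechanics of the room are, in effect, a lookup table. A minimal sketch — with made-up placeholder tokens standing in for Chinese strings — of how a system can respond "correctly" while nothing inside it knows what the symbols mean:

```python
# The Chinese Room reduced to its mechanics: a rulebook mapping input symbol
# strings to output symbol strings. Placeholder tokens stand in for Chinese;
# the lookup works identically whatever the symbols "mean".

RULEBOOK = {
    ("SYM_17", "SYM_03"): ("SYM_41", "SYM_08"),
    ("SYM_22",):          ("SYM_17", "SYM_17", "SYM_90"),
}

def room(symbols):
    """Pure syntax: match the input shape, emit the prescribed output shape."""
    return RULEBOOK.get(tuple(symbols), ("SYM_00",))  # default token if no rule

print(room(["SYM_17", "SYM_03"]))   # ('SYM_41', 'SYM_08')
# To observers outside, this may look like fluent conversation; inside,
# there is only pattern matching over uninterpreted tokens.
```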

Searle's argument: this is exactly what a computer does. Syntax — the manipulation of symbols according to formal rules — cannot produce semantics — meaning, understanding, intentionality. A system can process symbols perfectly correctly without any of those symbols meaning anything to it.

The Chinese Room generated enormous philosophical debate and continues to do so. Standard replies include the "systems reply" (the person doesn't understand Chinese, but the system — person plus rulebook — does) and the "robot reply" (a Chinese Room embedded in a robot with sensorimotor connections to the world might acquire genuine understanding). Searle's reply is that none of these replies touch his central point: syntax is not sufficient for semantics.


Multiple Levels of Analysis: Marr's Framework

David Marr's 1982 book "Vision: A Computational Investigation" introduced what remains the most widely used framework for organizing explanations in cognitive science. Marr distinguished three levels at which any information-processing system can be analyzed:

The computational level: What is the system doing, and why? What problem is it solving, what is its goal, and what is the logic of the solution? For vision: the system is computing a representation of the three-dimensional structure of the visible world from two-dimensional retinal images.

The algorithmic level: What representations does the system use, and what procedures operate on them? How does the computation actually proceed step by step? For edge detection: a specific algorithm for identifying discontinuities in intensity gradients in the image.

The implementational level: How is the algorithm physically realized? In what neural hardware, or silicon, does the computation run?
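
To make the algorithmic level concrete, here is one possible procedure for the edge-detection task mentioned above — a finite-difference gradient with a threshold, a simplified stand-in for Marr's actual zero-crossing operator, and only one of many algorithms that could solve the same computational-level problem:

```python
# Algorithmic-level sketch of edge detection on a 1-D intensity profile:
# compute finite-difference gradients and flag discontinuities above a
# threshold. (A simplified stand-in for Marr's zero-crossings of the
# Laplacian-of-Gaussian; the same computational-level task admits many
# algorithms, each implementable in neurons or in silicon.)

def detect_edges(intensity, threshold=30):
    edges = []
    for i in range(1, len(intensity)):
        gradient = intensity[i] - intensity[i - 1]   # local difference
        if abs(gradient) > threshold:
            edges.append(i)                          # edge between i-1 and i
    return edges

profile = [10, 12, 11, 90, 92, 91, 15, 14]   # bright bar on a dark field
print(detect_edges(profile))                  # [3, 6]
```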

Marr's insight was that these levels are largely independent: a given computational-level problem can in principle be solved by many different algorithms, and a given algorithm can in principle be implemented in many different physical substrates. Cognitive scientists need not wait for neuroscience to identify the neural implementation before studying the computational and algorithmic levels. Conversely, knowing the neural implementation tells you little about the algorithm, and knowing the algorithm tells you little about why that algorithm was the one evolution settled on.

This framework is not uncontroversial — embodied cognition researchers argue that the levels are not as separable as Marr supposed, because the algorithm is shaped by the body's sensorimotor capacities — but it remains the dominant framework for organizing cognitive science explanations.


Symbolic AI vs. Connectionism: Two Models of Mind

Good Old-Fashioned AI and Its Limits

The symbolic AI (GOFAI) program, pioneered by Newell, Simon, and Minsky, held that intelligence is the manipulation of explicit symbolic representations according to explicit rules. Expert systems in the 1980s embodied this approach: they represented domain knowledge as explicit if-then rules and applied logical inference to derive conclusions. They worked impressively within narrow domains but failed at the generalization, pattern recognition, and commonsense understanding that come effortlessly to humans.
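
The expert-system pattern is forward chaining over explicit rules. A minimal sketch, with toy rules invented for illustration (not the rule base of any actual system such as MYCIN):

```python
# Minimal forward-chaining sketch of the 1980s expert-system pattern:
# domain knowledge as explicit if-then rules, inference by repeatedly
# firing any rule whose conditions are all satisfied.

RULES = [
    ({"has_fever", "has_rash"},               "possible_measles"),
    ({"possible_measles", "not_vaccinated"},  "recommend_test"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                   # iterate until no rule adds a new fact
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_rash", "not_vaccinated"}, RULES))
# derives 'possible_measles', then 'recommend_test'
```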

Hubert Dreyfus, in "What Computers Can't Do" (1972) and subsequent works, argued that symbolic AI was fundamentally misguided: skilled human expertise — catching a ball, riding a bicycle, recognizing a friend's face — is not the application of explicit rules but tacit know-how that cannot be articulated as rules. Dreyfus drew on Merleau-Ponty's phenomenology to argue that embodied, skilled coping with the world is the basis of human intelligence, not symbol manipulation.

Connectionism and Neural Networks

The 1986 publication of "Parallel Distributed Processing" by Rumelhart, McClelland, and the PDP Research Group offered a radically different model. Rather than storing knowledge in explicit symbolic rules, connectionist networks store it in patterns of connection weights across many simple processing units — loosely analogous to neurons. Learning occurs by adjusting weights in response to errors. The resulting systems exhibit many cognitively plausible properties: pattern completion from partial input, graceful degradation under damage, generalization to novel instances, and emergent categorical representations.
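
Several of these properties show up in even a tiny network. A minimal Hopfield-style sketch — a simpler relative of the PDP models, with patterns invented for the demo — showing pattern completion from partial input:

```python
import numpy as np

# Tiny Hopfield-style network: knowledge lives in a weight matrix, not in
# explicit rules, and a degraded input settles into the nearest stored pattern.

patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1,  1,  1, -1, -1, -1, -1],
])

# Hebbian storage: strengthen weights between co-active units.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, steps=5):
    """Repeatedly update all units toward consistency with the weights."""
    state = state.copy()
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

probe = np.array([1, -1, 1, -1, 1, -1, -1, -1])  # pattern 0 with one unit flipped
print(recall(probe))   # -> [ 1 -1  1 -1  1 -1  1 -1], the stored pattern
```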

Connectionism captured something that GOFAI missed: many cognitive processes feel more like pattern matching than rule application, and the brain's architecture is more neural-network-like than symbolic-program-like. The debate between symbolic and connectionist approaches was not resolved in the 1990s and in some sense remains open: modern deep learning, descended from connectionism, has produced spectacular performance on specific tasks, but questions about whether these systems model human cognition accurately — and whether they understand in any meaningful sense — remain actively contested.


Embodied, Situated, and Extended Cognition

The Body Shapes Thought

Lakoff and Johnson's "Metaphors We Live By" (1980) made a radical claim about the structure of abstract thought: it is organized by conceptual metaphors grounded in bodily experience. The concept of time is spatial (the future is "ahead," the past is "behind," we "look forward" to events). The concept of quantity is vertical (prices go "up," volume goes "down"). The concept of argument is combat ("defending a position," "attacking an argument," "shooting down a proposal"). These are not optional rhetorical decorations but the primary structure of abstract conceptual thought.

Roger Shepard and Jacqueline Metzler's 1971 "Science" paper demonstrated mental rotation: when subjects were asked to judge whether two drawings of three-dimensional objects were the same object in different orientations, reaction time increased linearly with the angular difference between the orientations. The mind appeared to "rotate" mental images at a rate that mirrors physical rotation — a finding suggesting that imagination is grounded in simulated sensorimotor processes, not purely abstract symbolic operations.

The Extended Mind

Andy Clark and David Chalmers's 1998 paper "The Extended Mind" introduced what they called the "parity principle": if a process outside the head plays the same functional role that an internal process would play if it were inside the head, there is no principled basis for excluding it from the mind. They illustrated with Otto, a man with early-stage Alzheimer's disease who carries a notebook in which he records everything he would otherwise forget. When Otto wants to go to the museum, he consults his notebook for the address, just as his friend Inga consults her biological memory. On the parity principle, Otto's notebook is part of his cognitive system — literally part of his mind — in the same sense that Inga's biological memory is part of hers.
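
The parity principle is, at bottom, a claim about functional interchangeability, which software makes vivid. A minimal sketch — class names invented for illustration — in which biological and notebook memory expose the same functional role to the agent that uses them:

```python
# The parity principle as functional interchangeability: two memory stores
# with the same interface, one "inside the head" and one outside. The agent's
# behaviour is identical either way. (Class names invented for illustration.)

class BiologicalMemory:
    def __init__(self):
        self._store = {}
    def remember(self, key, value):
        self._store[key] = value
    def recall(self, key):
        return self._store.get(key)

class Notebook:
    """Otto's notebook: same functional role, different physical substrate."""
    def __init__(self):
        self._pages = []
    def remember(self, key, value):
        self._pages.append((key, value))
    def recall(self, key):
        for k, v in reversed(self._pages):   # most recent entry wins
            if k == key:
                return v
        return None

def go_to_museum(memory):
    """The agent consults whatever memory it is coupled to."""
    return memory.recall("museum address")

inga, otto = BiologicalMemory(), Notebook()
inga.remember("museum address", "53rd Street")
otto.remember("museum address", "53rd Street")
print(go_to_museum(inga) == go_to_museum(otto))   # True: same functional role
```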

The extended mind thesis remains philosophically contested — critics argue that mental states require biological implementation, or that the tight coupling between Otto and his notebook is insufficient to count as cognitive integration — but it has been productive in cognitive science, directing attention to the study of how humans use environmental resources as cognitive scaffolding.


Core Experimental Findings

Working Memory and Its Limits

Miller's 1956 work on working memory capacity has been refined substantially. Baddeley and Hitch's (1974) multicomponent model divided working memory into the phonological loop (holding verbal information), the visuospatial sketchpad (holding visual and spatial information), the central executive (allocating attention between the components), and later the episodic buffer (integrating information across components). Cowan's (2001) revision of the capacity estimate reduced the core working memory limit to approximately four chunks — on this analysis, the seven-plus-or-minus-two figure overestimated raw capacity by conflating it with chunking and rehearsal strategies.

Mental Rotation and Visual Imagery

Shepard and Metzler's mental rotation findings established the study of mental imagery as a legitimate scientific domain. Kosslyn's subsequent work showed that visual mental images are processed in the same brain regions as visual perception, with properties (spatial extent, resolution) that correspond to physical properties of visual scenes — supporting the idea that imagery involves genuine depictive representation rather than purely propositional description.

Heuristics and Biases

Daniel Kahneman and Amos Tversky's research program on heuristics and biases (begun in the early 1970s and summarized in Kahneman's "Thinking, Fast and Slow," 2011) documented systematic departures from rational choice in human judgment. Availability heuristic: events that come easily to mind are judged more probable than events that are harder to recall. Representativeness heuristic: probabilities are judged by similarity to prototypes rather than by base rates. Anchoring: initial numerical anchors systematically pull subsequent estimates toward them. These findings demonstrated that the computational processes underlying human judgment are not optimal probability calculations but fast heuristics that are efficient but systematically biased.
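
Base-rate neglect can be made precise with a worked Bayes calculation. A minimal sketch, with illustrative numbers in the spirit of Kahneman and Tversky's lawyer-engineer problem (the probabilities are invented for the demo):

```python
# Worked example of base-rate neglect. A description "sounds like" an
# engineer (high likelihood), but engineers are rare in the sample
# (low base rate); correct judgment must weigh both.

def posterior(prior, likelihood, likelihood_other):
    """Bayes' rule for a two-hypothesis case."""
    joint = prior * likelihood
    return joint / (joint + (1 - prior) * likelihood_other)

base_rate_engineer = 0.30   # 30 engineers per 100 in the sample
p_desc_if_engineer = 0.90   # description fits the engineer stereotype well
p_desc_if_lawyer   = 0.30   # but could also fit some lawyers

print(round(posterior(base_rate_engineer, p_desc_if_engineer, p_desc_if_lawyer), 2))
# 0.56 -- far below the near-certainty subjects report when they judge by
# similarity to the stereotype and ignore the base rate.
```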


Consciousness as a Cognitive Science Problem

Consciousness remains the deepest unsolved problem in cognitive science. Francis Crick (co-discoverer of DNA's double helix) and Christof Koch began a research program in the 1990s focused on identifying neural correlates of consciousness (NCCs) — the minimal neural events sufficient for a specific conscious percept. Their focus on the binding problem — how the brain integrates distributed processing into a unified conscious experience — directed research attention to synchronous neural oscillations in the gamma band (30-80 Hz) as a potential binding mechanism.

Bernard Baars's Global Workspace Theory (1988) proposes that consciousness corresponds to the broadcasting of information from specialized processors into a "global workspace" accessible to many systems simultaneously — making information globally available rather than keeping it local to a specialized module. Dehaene's subsequent neuroimaging work found evidence for this in the pattern of cortical activation associated with conscious versus unconscious perception.

Giulio Tononi's Integrated Information Theory (IIT) takes a different approach: consciousness is identical to integrated information (phi) — the amount of information generated by a system above and beyond the sum of its parts. High phi means high consciousness; zero phi means no consciousness. The theory makes distinctive predictions and is controversial partly because it implies that some highly integrated artificial systems could be conscious.

David Chalmers's 1995 "hard problem" remains the crux. Even a complete explanation of how the brain integrates information, generates behavior, and reports mental states — the "easy problems" — leaves open the question of why any of this gives rise to subjective experience. This explanatory gap between third-person physical description and first-person phenomenal experience has not been bridged, and there is no consensus on whether it can be.


Cognitive Development: Building the Mind

Jean Piaget's theory of cognitive development proposed that children pass through qualitatively distinct stages — sensorimotor (birth to 2), preoperational (2-7), concrete operational (7-11), formal operational (11+) — in which their cognitive capacities are fundamentally restructured. The theory's core insight — that children are not simply ignorant adults but think differently in kind, not just quantity — was enormously productive. The empirical details have been substantially revised: Piaget systematically underestimated infant and young children's capacities, as later researchers using more sensitive methods demonstrated.

Lev Vygotsky's alternative emphasized the social and cultural dimensions of cognitive development. The zone of proximal development — the space between what a child can do alone and what a child can do with expert guidance — identified the optimal zone for instruction: just beyond current competence, supported by a more capable partner. Language, in Vygotsky's theory, is not just communication but a cognitive tool that restructures thought when internalized.

Theory of mind development — the child's acquisition of the understanding that others have mental states (beliefs, desires, intentions) distinct from their own — has been studied extensively since Wimmer and Perner's (1983) false-belief task. Typically developing children pass this task around age four; autistic children often show delays in theory of mind development, which Frith and Baron-Cohen connected to the social-cognitive difficulties of autism.




References

  • Miller, G. A. (1956). The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. Psychological Review, 63(2), 81–97. https://doi.org/10.1037/h0043158
  • Chomsky, N. (1959). Review of Verbal Behavior by B.F. Skinner. Language, 35(1), 26–58. https://doi.org/10.2307/411334
  • Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433
  • Searle, J. R. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences, 3(3), 417–424. https://doi.org/10.1017/S0140525X00005756
  • Shepard, R. N., & Metzler, J. (1971). Mental Rotation of Three-Dimensional Objects. Science, 171(3972), 701–703. https://doi.org/10.1126/science.171.3972.701
  • Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. Freeman.
  • Clark, A., & Chalmers, D. (1998). The Extended Mind. Analysis, 58(1), 7–19. https://doi.org/10.1093/analys/58.1.7
  • Lakoff, G., & Johnson, M. (1980). Metaphors We Live By. University of Chicago Press.
  • Chalmers, D. J. (1995). Facing Up to the Problem of Consciousness. Journal of Consciousness Studies, 2(3), 200–219.
  • Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124
  • Newell, A., & Simon, H. A. (1976). Computer Science as Empirical Inquiry: Symbols and Search. Communications of the ACM, 19(3), 113–126. https://doi.org/10.1145/360018.360022

Frequently Asked Questions

What is cognitive science?

Cognitive science is the interdisciplinary scientific study of mind and intelligence, drawing together psychology, neuroscience, linguistics, philosophy, computer science, and anthropology. Rather than treating the mind as a single discipline's object of study, cognitive science insists that understanding thought, perception, language, memory, and consciousness requires methods and findings from all of these fields simultaneously. The field coalesced in the mid-1950s, when several converging developments — George Miller's work on the limits of working memory, Noam Chomsky's transformational grammar, Allen Newell and Herbert Simon's computational models of problem-solving, and the emergence of information theory from Claude Shannon's work — collectively shifted the framework for studying the mind from behaviorism (the mind as inaccessible black box, only behavior matters) to cognitive science (the mind as an information-processing system that can be studied directly). The term 'cognitive science' itself gained currency in the 1970s, and the journal Cognitive Science was founded in 1977, formalizing the field. Today the field investigates topics from the neural correlates of consciousness to how children acquire language, from the limits of rational decision-making to the possibility of machine intelligence. It is unified not by a single theory but by a shared commitment to understanding mental processes at multiple levels of analysis — what David Marr called the computational, algorithmic, and implementational levels.

What is the computational theory of mind?

The computational theory of mind (CTM) is the hypothesis that mental processes are forms of computation — that the brain manipulates representations according to rules, much as a computer manipulates symbols according to programs. The idea builds directly on Alan Turing's work: Turing's 1950 paper 'Computing Machinery and Intelligence' introduced the imitation game (now called the Turing Test) and argued that if a machine could converse indistinguishably from a human, there would be no principled basis for denying it intelligence. The broader philosophical move — that minds are what brains do computationally, not what they are made of physically — is called functionalism, associated with Hilary Putnam in the 1960s. If mental states are defined by their functional roles (inputs, outputs, and relations to other states) rather than their physical substrate, then in principle silicon could realize the same mental states as neurons. Jerry Fodor elaborated this into the 'language of thought hypothesis' (1975): thought is conducted in an inner mental language (Mentalese) with a combinatorial structure, and mental states are relations to sentences in this language. The CTM does significant explanatory work — it explains why similar computational processes might occur in different physical systems (multiple realizability), and it makes cognitive science possible as a discipline distinct from neuroscience. But it faces challenges: John Searle's Chinese Room argument (1980) claims that symbol manipulation can never constitute genuine understanding, and embodied cognition researchers argue that thought is not merely computation in a body-independent language but is shaped fundamentally by the structure of the physical organism and its environment.

What did the cognitive revolution replace?

The cognitive revolution replaced behaviorism as the dominant framework in scientific psychology. Behaviorism, most associated with John B. Watson and B.F. Skinner, held that psychology should study only observable behavior — not mental states, representations, or inner processes, which were considered scientifically inaccessible. In Skinner's framework, all behavior, including language, was explicable in terms of stimulus-response associations, reinforcement histories, and conditioning. The cognitive revolution challenged this on multiple fronts. George Miller's 1956 paper 'The Magical Number Seven, Plus or Minus Two' demonstrated that working memory has a definite capacity limit — a finding that implicated an internal cognitive architecture, not just stimulus-response patterns. Noam Chomsky's devastating 1959 review of Skinner's book 'Verbal Behavior' showed that language acquisition cannot be explained by reinforcement: children learn grammatical rules that allow them to produce and understand sentences they have never heard before, a fact (called the 'poverty of the stimulus') that requires positing internal rule-governed representations. Chomsky argued that humans have an innate universal grammar — a built-in linguistic faculty that constrains language learning. Allen Newell and Herbert Simon built computer programs that solved logical proofs and chess problems in ways that modeled human problem-solving, showing that mental processes could be formalized and studied computationally. Together, these developments established that the mind had internal structure worth studying — that representations, rules, and processes were scientifically tractable. The cognitive revolution did not eliminate behaviorism entirely (learning theory remains important) but dethroned it as the organizing framework of the discipline.

What is embodied cognition?

Embodied cognition is the view that mental processes are not merely computations performed by a body-independent brain but are fundamentally shaped by the structure, capabilities, and sensorimotor experience of the physical body. On the standard computational view, the body is hardware that runs cognitive software — the interesting action is in the software. Embodied cognition reverses this: the body's morphology, its motor capabilities, and its perceptual sensitivities co-constitute thought itself. The philosophical roots lie in phenomenology (Merleau-Ponty's argument that perception is always already bodily) and in Hubert Dreyfus's sustained critique of classical AI: Dreyfus argued that the tacit know-how of skilled human action — riding a bicycle, recognizing faces, catching a ball — could not be reduced to explicit rules, and that AI programs that treated the body as irrelevant would always fail at the level of commonsense competence. George Lakoff and Mark Johnson's 'Metaphors We Live By' (1980) argued, on extensive linguistic evidence, that abstract conceptual thought is structured by embodied metaphors: we understand abstract concepts (time, importance, morality, quantity) through mappings from bodily experience ('more is up,' 'argument is war,' 'the future is ahead'). Roger Shepard and Jacqueline Metzler's 1971 mental rotation experiments showed that imagining rotating objects takes time proportional to the angle of rotation — the mind 'rotates' at a rate that mirrors physical rotation, suggesting cognition is grounded in simulated sensorimotor experience. Andy Clark and David Chalmers extended this further with the extended mind thesis (1998): if a notebook or smartphone serves the same functional role as biological memory, there is no principled reason to regard it as outside the mind.

How does cognitive science relate to AI?

Cognitive science and artificial intelligence have had a deeply intertwined relationship since both fields emerged together in the 1950s. Allen Newell and Herbert Simon's General Problem Solver (1957) was simultaneously a model of human problem-solving and an early AI system — the computational model of mind and the AI program were, in their vision, the same project. This approach, called Good Old-Fashioned AI (GOFAI) or symbolic AI, built systems that manipulated symbolic representations according to explicit rules, modeling cognition as rule-governed symbol processing. Connectionism, brought to prominence by the 1986 PDP volumes — neural network models trained on data, capturing learning through adjusting connection weights rather than programming explicit rules — challenged the symbolic approach with a very different model of mind. Connectionists like David Rumelhart and James McClelland argued that many cognitive phenomena (pattern recognition, language acquisition, generalization) emerge naturally from networks of simple units learning from examples, without explicit symbolic rules. Modern deep learning is descended from connectionism and has produced AI systems that outperform humans on specific tasks (image recognition, game playing, certain language tasks) — but whether these systems 'understand' in any cognitively meaningful sense remains contested. Jerry Fodor's modularity thesis holds that perceptual input systems are encapsulated, domain-specific modules — a view that sits more naturally with symbolic AI. Cognitive science informs AI by providing computational accounts of human capabilities; AI in turn serves cognitive science as a tool for testing theories of cognition by building systems that either succeed or fail at modeling human-like performance.

What are the biggest unsolved problems in cognitive science?

The field's deepest unsolved problems cluster around consciousness, the nature of meaning, and the relationship between neural hardware and cognitive software. The 'hard problem of consciousness,' named by David Chalmers in 1995, asks why there is subjective experience at all — why neural processes give rise to the felt quality of experience (the redness of red, the painfulness of pain), rather than just processing information in the dark. 'Easy problems' of consciousness — explaining how the brain integrates information, generates behavior, and reports mental states — are scientifically tractable; the hard problem is not obviously tractable even in principle, because any mechanistic account of neural processing seems to leave open the question of why it feels like anything. Francis Crick and Christof Koch proposed that neural correlates of consciousness (NCCs) — the minimal neural events sufficient for a conscious percept — could be identified empirically; this has produced productive research but not an answer to the hard problem. Competing theories — Global Workspace Theory (Baars, Dehaene), Integrated Information Theory (Tononi), Higher-Order Theories — disagree fundamentally about what kind of neural process is sufficient for consciousness. A second deep problem is how mental representations acquire meaning — what makes the word 'cat' in your head refer to cats rather than something else. A third is the relationship between the explicit, rule-governed aspects of cognition (language syntax, logical reasoning) and the implicit, skilled, embodied aspects (Dreyfus's tacit know-how). The integration of symbolic and connectionist approaches into a unified architecture remains incomplete.