Moral Intuitions vs. Moral Reasoning: The Battle Inside Every Ethical Decision

In 2001, the psychologist Jonathan Haidt presented people with a scenario that has since become one of the most discussed thought experiments in moral psychology. A brother and sister, Julie and Mark, are traveling together in France. One night, they decide to have sex. Julie is on birth control; Mark uses a condom. They enjoy it but decide never to do it again. They keep the experience a secret and feel closer because of it.

Most people react with immediate, visceral disgust. They declare it wrong. But when pressed for reasons, something interesting happens. They cite risks of birth defects--but both used contraception. They mention psychological damage--but the scenario stipulates neither was harmed. They argue about power dynamics--but both consented freely as adults. One by one, their rational justifications get knocked away. Yet almost nobody changes their mind. They still insist it was wrong, even without being able to explain why.

Haidt called this phenomenon moral dumbfounding--the state where people hold strong moral convictions they cannot rationally defend. This single observation opened a fault line in moral philosophy that continues to generate productive tension: Are our moral judgments driven by gut feelings that reasoning merely decorates after the fact? Or does careful rational analysis lead us to genuinely better ethical conclusions?

The relationship between moral intuitions and moral reasoning sits at the center of how humans actually navigate right and wrong. Philosophers since Plato have debated whether reason should govern emotion or whether feeling carries its own moral authority. But only in recent decades has empirical research from psychology, neuroscience, and behavioral economics provided data that reshapes this ancient debate in ways neither side fully anticipated.

Understanding how intuition and reasoning interact matters far beyond academic philosophy. Every time you feel something is wrong before you can explain why, every time you construct a logical argument for a moral position you already hold, every time you override your gut reaction after careful deliberation--you are navigating the tension between these two fundamental sources of moral judgment. The question is not whether both matter. They do. The question is how they interact, when to trust each one, and what happens when they conflict.


What Moral Intuitions Actually Are

Moral intuitions are immediate, automatic evaluative responses--feelings of approval or disapproval, of rightness or wrongness--that arise without deliberate reasoning. They show up fast, feel certain, and carry emotional weight. When you see someone kick a dog and immediately feel that it is wrong, you are experiencing a moral intuition. You did not run through a utilitarian calculus or consult Kant's categorical imperative. The judgment arrived fully formed.

These intuitions have several distinctive features that separate them from reasoned moral conclusions:

Speed: Moral intuitions typically arise within a few hundred milliseconds. Brain imaging studies show that moral evaluations activate emotional processing regions (particularly the ventromedial prefrontal cortex and amygdala) before deliberative reasoning areas fully engage. The philosopher and psychologist Joshua Greene documented this pattern using functional MRI while subjects evaluated moral dilemmas, finding that emotional responses preceded and often shaped deliberate evaluation.

Automaticity: You do not choose to have moral intuitions any more than you choose to feel pain when touching a hot surface. They arise involuntarily, triggered by features of situations that your brain has learned to flag as morally relevant--harm, fairness violations, authority challenges, purity breaches.

Affective charge: Moral intuitions carry emotional signatures--disgust, anger, sympathy, indignation, admiration. These emotions are not incidental decorations on cognitive judgments; they are constitutive of the intuitions themselves. The feeling is the judgment, at least initially.

Confidence without articulation: People experiencing strong moral intuitions typically feel highly confident in their judgments but may struggle to articulate why they feel that way. This disconnect between conviction and justification is exactly what Haidt's dumbfounding research highlighted.

The Evolutionary Roots of Moral Intuition

Moral intuitions did not evolve to help individuals solve abstract philosophical puzzles. They evolved as rapid social coordination mechanisms in species living in cooperative groups. Primatologist Frans de Waal has documented proto-moral responses in non-human primates--capuchin monkeys reject unequal rewards, chimpanzees console victims of aggression, and various species punish free-riders within their social groups. These behaviors suggest that some moral intuitions have deep evolutionary roots predating human language and abstract reasoning entirely.

The evolutionary logic is straightforward. Groups whose members could rapidly detect cheaters, respond to unfairness, empathize with those in distress, and cooperate effectively outcompeted groups without these capacities. Over hundreds of thousands of generations, these rapid evaluative responses became hardwired into human cognitive architecture.

Jonathan Haidt and Craig Joseph proposed that human moral intuitions cluster around several moral foundations that reflect recurrent adaptive challenges:

| Moral Foundation | Intuitive Trigger | Evolutionary Function | Example Intuition |
|---|---|---|---|
| Care/Harm | Suffering, vulnerability | Protecting offspring and allies | Feeling distress when witnessing cruelty |
| Fairness/Cheating | Unequal treatment, free-riding | Maintaining cooperative relationships | Anger at someone cutting in line |
| Loyalty/Betrayal | Group threats, disloyalty | Coalition maintenance | Disapproval of those who abandon their group |
| Authority/Subversion | Disrespect, disobedience | Social hierarchy navigation | Discomfort at open defiance of legitimate leaders |
| Sanctity/Degradation | Contamination, impurity | Pathogen avoidance, sacred boundaries | Disgust at desecration of meaningful symbols |
| Liberty/Oppression | Domination, restriction | Resistance to bullying and tyranny | Outrage at unjust coercion |

Not everyone weights these foundations equally. Research consistently finds that political liberals emphasize care and fairness most heavily, while conservatives distribute concern more evenly across all six foundations. This asymmetry helps explain why moral disagreements across political lines often feel like people are speaking entirely different moral languages--because, in a real sense, they are responding to different intuitive signals.
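
The idea of differential weighting can be made concrete with a toy calculation. The sketch below is not the Moral Foundations Questionnaire or any published scoring model; the foundations follow the table above, but every weight and activation value is invented for illustration:

```python
# Toy illustration: the same intuitive "signals" from a scenario can yield
# different overall moral reactions under different foundation weightings.
# All numbers here are invented for illustration.

FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "sanctity", "liberty"]

# Hypothetical intuitive activations triggered by one scenario
# (e.g., someone publicly defying a community leader), on a 0-1 scale.
scenario = {"care": 0.1, "fairness": 0.2, "loyalty": 0.6,
            "authority": 0.8, "sanctity": 0.3, "liberty": 0.4}

# Two hypothetical weighting profiles, echoing the asymmetry described
# above: one concentrated on care and fairness, one spread evenly.
profiles = {
    "care_fairness_heavy": {"care": 0.9, "fairness": 0.9, "loyalty": 0.2,
                            "authority": 0.2, "sanctity": 0.1, "liberty": 0.7},
    "evenly_weighted":     {"care": 0.6, "fairness": 0.6, "loyalty": 0.6,
                            "authority": 0.6, "sanctity": 0.6, "liberty": 0.6},
}

def moral_reaction(weights, activations):
    """Weighted sum of foundation activations: a crude 'wrongness' signal."""
    return sum(weights[f] * activations[f] for f in FOUNDATIONS)

for name, weights in profiles.items():
    print(f"{name}: wrongness signal = {moral_reaction(weights, scenario):.2f}")
```

For this authority-violation scenario, the evenly weighted profile registers a far stronger reaction (1.44) than the care-and-fairness-heavy profile (0.86): one way to picture two observers "speaking different moral languages" about a single event.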


What Moral Reasoning Actually Involves

Moral reasoning, in contrast to intuition, is deliberate, conscious, effortful thinking about ethical questions. It involves applying principles, considering consequences, weighing competing values, examining evidence, and constructing arguments. When you sit down and carefully think through whether physician-assisted suicide should be permitted, weighing autonomy against sanctity of life, considering slippery slope risks against suffering reduction, examining precedents and principles--you are engaging in moral reasoning.

Philosophical traditions have long treated reasoning as the proper mode of moral thought. Plato's Phaedrus imagines reason as a charioteer struggling to control two horses, the noble horse of spirit and the unruly horse of appetite. Immanuel Kant argued that moral worth comes exclusively from acting on rational duty, not from inclination or feeling. The entire enterprise of moral philosophy, with its systematic frameworks and careful argumentation, implicitly privileges reasoning over intuition.

The Rationalist Model

The traditional rationalist model of moral judgment works like this:

  1. You encounter a moral situation
  2. You reason about it using moral principles
  3. Your reasoning produces a moral judgment
  4. Emotions may accompany the judgment but do not generate it

This model treats moral reasoning as genuinely causative--your conclusions follow from your reasoning process, the way mathematical conclusions follow from axioms and proofs. Emotions are downstream effects, not upstream causes.

Under this model, moral disagreements are fundamentally errors in reasoning. If people just thought more carefully, applied principles more consistently, and avoided logical fallacies, they would converge on correct answers. Moral progress happens through better arguments, clearer thinking, and more rigorous application of principles.

How Reasoning Actually Functions in Moral Life

Moral reasoning in practice involves several distinct cognitive operations:

Principle application: Taking an abstract moral rule (do not lie, minimize suffering, respect autonomy) and applying it to a specific situation. This requires judgment about which principles apply, how they map onto particular facts, and what they demand in context.

Consequence evaluation: Tracing out the likely results of different actions and evaluating those results against moral criteria. This is the core operation of consequentialist thinking, but even non-consequentialists reason about consequences when they matter.

Analogical reasoning: Comparing a novel moral situation to familiar cases where your judgment feels settled. If this case is relevantly similar to that case where the answer was clear, then perhaps the same answer applies here.

Perspective-taking: Deliberately imagining how a situation looks and feels from the standpoint of others involved. This cognitive operation can generate new moral information that intuition alone might miss--especially regarding people whose experiences differ significantly from your own.

Consistency checking: Examining whether your moral judgments across different cases form a coherent set, or whether you are making exceptions and special cases that reveal inconsistency or bias.


Haidt's Social Intuitionist Model: Reasoning as Rationalization

In the same 2001 paper, "The Emotional Dog and Its Rational Tail," Jonathan Haidt proposed the Social Intuitionist Model, which turned the rationalist picture upside down. His central claim: moral intuitions come first, and moral reasoning is primarily post-hoc rationalization.

The model works like this:

  1. You encounter a moral situation
  2. Moral intuitions arise automatically (gut feelings)
  3. You form a moral judgment based on those intuitions
  4. Reasoning kicks in after the judgment to construct justifications
  5. These justifications are aimed at convincing others (and yourself) that the intuitive judgment was right all along

Haidt compared moral reasoning to a press secretary or lawyer: not an impartial judge seeking truth, but an advocate constructing the best possible case for a conclusion already reached. The reasoning is real--it involves genuine cognitive effort, draws on actual principles, and can be logically sophisticated. But its function is advocacy, not inquiry.

The Evidence for Intuitionism

Multiple lines of evidence support the intuitionist picture:

Moral dumbfounding: As in the Julie and Mark scenario, people maintain moral convictions even when every rational justification has been defeated. If reasoning caused the judgment, removing the reasons should remove the judgment. It does not.

Speed of judgment: Moral evaluations occur within 200-300 milliseconds of encountering morally relevant stimuli--far too fast for deliberate reasoning to be the source. Reasoning, which requires working memory, attention, and sequential processing, operates on a timescale of seconds to minutes.

Emotional impairment effects: Patients with damage to the ventromedial prefrontal cortex--a region crucial for integrating emotion into decision-making--show profoundly altered moral judgments. The neuroscientist Antonio Damasio documented patients who could reason perfectly well about moral scenarios in the abstract but made catastrophically poor moral decisions in real life because they lacked the emotional signals that normally guide judgment.

Motivated reasoning: Extensive research documents that people reason toward their preferred conclusions rather than from evidence to conclusions. When people hold a moral conviction, they selectively seek confirming evidence, generate favorable arguments, and scrutinize counterarguments more aggressively than supporting ones. This pattern is exactly what the intuitionist model predicts.

Post-hoc confabulation: In split-brain studies and other experimental paradigms, people construct confident, detailed explanations for judgments and choices that were actually determined by factors they have no conscious access to. The brain is remarkably skilled at generating plausible-sounding reasons for decisions made on other grounds.

The "Elephant and the Rider"

Haidt popularized the metaphor of the elephant and the rider. The elephant is the intuitive, automatic system--massive, powerful, and going where it wants. The rider is conscious reasoning--small, sitting on top, and largely along for the ride. The rider can sometimes influence the elephant's direction through gentle nudging, but cannot overpower it through brute force. When the elephant and rider disagree, the elephant almost always wins. The rider's main skill is generating post-hoc narratives explaining why the elephant's chosen path was exactly right.

This metaphor captures a deeply unsettling possibility for anyone who takes moral reasoning seriously: perhaps the elaborate philosophical arguments that fill ethics journals and classroom discussions are, at their core, sophisticated rationalizations for feelings that were never generated by those arguments in the first place.


The Pushback: Reasoning Fights Back

The intuitionist picture, while empirically powerful, has faced substantial criticism from philosophers and psychologists who argue it overstates the case against reasoning.

Reasoning Can Override Intuition

The philosopher Peter Singer has long argued that careful reasoning can and should override moral intuitions, pointing to historical examples where intuitive moral responses turned out to be deeply wrong:

  • Slavery: For most of human history, the institution of slavery was intuitively acceptable to most people in slaveholding societies. It was reasoned arguments--about human equality, natural rights, the arbitrariness of birth circumstances--that gradually undermined those intuitions and drove abolition.

  • Women's suffrage: The intuition that women were naturally unsuited for political participation felt obvious and correct to generations of people. Reasoned arguments about equal capacity and equal rights eroded that intuition.

  • Homosexuality: Moral disgust toward homosexuality was a powerful and widespread intuition. Philosophical and empirical arguments about harm (or rather, the absence of harm), autonomy, and equal dignity have substantially changed moral intuitions across many societies within decades.

These examples suggest that moral reasoning is not merely a press secretary for fixed intuitions. Reasoning can change intuitions over time, even deeply held ones. The process is slow and effortful, but it is real.

The Dual-Process Alternative

Psychologist Joshua Greene proposed a dual-process theory of moral judgment that offers a more nuanced picture than either pure rationalism or pure intuitionism. Greene argues that moral judgment involves two systems:

System 1 (automatic/intuitive): Fast, effortless, emotionally driven. Produces deontological moral judgments--judgments focused on rights, duties, and rules. When you feel that pushing someone off a bridge is wrong even though it would save five others, that is System 1 responding to the personal, visceral nature of the action.

System 2 (deliberative/reasoning): Slow, effortful, cognitively demanding. Produces consequentialist moral judgments--judgments focused on outcomes. When you calculate that saving five lives at the cost of one produces better overall outcomes, that is System 2 overriding the intuitive response.

Greene's research using brain imaging showed that personal moral dilemmas (like physically pushing someone to their death) activated emotional processing regions more strongly, while impersonal dilemmas (like flipping a switch that redirects a threat) activated cognitive reasoning regions. The "right" answer, Greene suggests, often requires System 2 to override System 1's sometimes misleading emotional signals.

This model preserves a genuine role for reasoning while acknowledging intuitionism's insights about the primacy and power of automatic responses.
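
One way to see what the dual-process claim amounts to is a toy decision procedure. The sketch below is not Greene's model, which is a neuroscientific theory rather than an algorithm; the thresholds, aversion scores, and override rule are assumptions chosen to reproduce the footbridge/switch asymmetry described above:

```python
# Toy dual-process sketch: a fast affective signal produces a default
# judgment; slower deliberation can override it, but only when time
# allows. All numbers and rules are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Dilemma:
    emotional_aversion: float   # System 1 signal, 0-1 (personal force = high)
    net_lives_saved: int        # input to System 2's consequentialist count

def judge(dilemma: Dilemma, seconds_available: float) -> str:
    # System 1: immediate default driven by the affective signal.
    default = "forbidden" if dilemma.emotional_aversion > 0.5 else "permissible"
    # System 2: engages only if there is time to deliberate.
    if seconds_available < 2.0:
        return default                      # no time: intuition stands
    if dilemma.net_lives_saved > 0 and dilemma.emotional_aversion <= 0.9:
        return "permissible"                # deliberation overrides aversion
    return default                          # aversion too strong to override

footbridge = Dilemma(emotional_aversion=0.95, net_lives_saved=4)  # push a man
switch     = Dilemma(emotional_aversion=0.30, net_lives_saved=4)  # flip switch

print(judge(footbridge, seconds_available=30))  # forbidden
print(judge(switch, seconds_available=30))      # permissible
print(judge(switch, seconds_available=1))       # permissible (System 1 default)
```

The toy captures two qualitative predictions of the theory: strong visceral aversion resists override even with ample time (footbridge), and time pressure leaves whatever System 1 delivers in place, consistent with Greene's finding that cognitive load selectively slows consequentialist judgments.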


When to Trust Intuitions, When to Trust Reasoning

Neither moral intuitions nor moral reasoning is uniformly reliable. Each has characteristic strengths and failure modes.

Intuitions Are More Reliable When:

The situation is familiar and well-practiced. Moral intuitions perform best in domains where you have extensive experience. A seasoned emergency room nurse's intuition about patient care reflects years of pattern recognition, not mere feeling. Similarly, your moral intuitions about everyday interpersonal situations--fairness among friends, treatment of family members, basic honesty norms--are calibrated by a lifetime of social experience.

The domain is one where evolutionary calibration applies. Intuitions about harm, basic reciprocity, and care for vulnerable individuals draw on deep evolutionary programming that has been refined over millions of years. When someone is suffering and you feel compelled to help, that intuition embodies functional moral knowledge that philosophical reasoning could not easily reconstruct from scratch.

Time pressure demands rapid response. In emergencies, you cannot deliberate. If a child is drowning, your intuitive impulse to help is more valuable than careful reasoning about the ethics of bystander intervention. Intuitions exist precisely for situations where the cost of a delayed decision is high.

Multiple independent intuitions converge. When your intuitions about fairness, harm, and rights all point in the same direction, the convergence provides evidence that the judgment is tracking something morally real rather than reflecting a single bias.
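
Why convergence carries evidential weight can be unpacked with a toy probability calculation. The reliability figure below is an assumption for illustration, and the argument leans on the independence the paragraph above stipulates:

```python
# Toy calculation: if each intuitive "signal" is modestly reliable and the
# signals err independently, agreement multiplies the evidence.
# The 75% reliability figure is an assumption for illustration.

prior_odds = 1.0          # start undecided: 1:1 odds that the act is wrong
likelihood_ratio = 3.0    # a 75%-reliable signal favors its verdict 3:1

for n in (1, 2, 3):
    posterior_odds = prior_odds * likelihood_ratio ** n
    p = posterior_odds / (1 + posterior_odds)
    print(f"{n} converging intuition(s): P(wrong) = {p:.2f}")
# 1 -> 0.75, 2 -> 0.90, 3 -> 0.96
```

Three agreeing 75%-reliable signals support a verdict far more strongly than any one alone. Conversely, if the supposedly independent intuitions actually share a common bias, the multiplication is illusory, which is why the independence condition matters.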

Reasoning Is More Reliable When:

The situation is novel or unprecedented. Moral intuitions are calibrated for ancestral and familiar environments. When you face genuinely novel moral questions--about artificial intelligence, genetic engineering, obligations to future generations, digital privacy--your intuitions may not have relevant calibration data. Reasoning allows you to extend moral principles into unfamiliar territory.

Intuitions may reflect bias rather than moral truth. In-group favoritism, disgust-based prejudice, status quo bias, and scope insensitivity are all well-documented intuitive tendencies that systematically distort moral judgment. When you suspect your intuitive reaction might reflect mere prejudice or cultural conditioning rather than genuine moral insight, reasoning provides a corrective check.

Multiple intuitions conflict. When your sense of loyalty conflicts with your sense of fairness, or when compassion for an individual conflicts with concern for a larger group, intuition alone cannot adjudicate. You need deliberative reasoning to weigh competing moral considerations and reach a reflective judgment.

The stakes are high and time allows deliberation. For major ethical decisions--policy choices affecting millions, medical ethics decisions, criminal justice determinations--the cost of intuitive error is enormous. These situations warrant the cognitive effort of careful reasoning, even when intuitions feel clear.

Justification to others is required. In democratic societies, moral decisions affecting others need to be justified in terms those others can engage with. "It just feels wrong" is not a publicly defensible basis for policy. Reasoning provides the shared language of justification that cooperative moral life requires.


Reflective Equilibrium: The Integration

The philosopher John Rawls proposed a method called reflective equilibrium that offers the most promising framework for integrating intuitions and reasoning. The method works iteratively:

  1. Start with your moral intuitions about particular cases--specific situations where you have strong feelings about what is right or wrong.

  2. Formulate general principles that explain and systematize those intuitive judgments.

  3. Check the principles against further cases. Do the principles produce judgments that match your intuitions in new situations?

  4. When conflicts arise, adjust. Sometimes you revise a principle to accommodate a strong intuition. Other times you revise an intuition that conflicts with a principle you find compelling on reflection.

  5. Iterate until your principles and intuitions form a coherent, mutually supporting whole.
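
Because the method is an explicit loop of mutual adjustment, it can even be caricatured in code. The sketch below is only a toy: reflective equilibrium is an informal philosophical method, not an algorithm, and every number, update rule, and stopping test here is invented for illustration:

```python
# Toy sketch of reflective equilibrium as mutual adjustment between case
# intuitions and a one-parameter "principle." All values are illustrative.

cases = [
    # [perceived_harm, intuitive_wrongness], both on a 0-1 scale
    [0.9, 0.95],   # gratuitous cruelty: high harm, strong intuition
    [0.1, 0.10],   # harmless everyday act: weak intuition
    [0.2, 0.80],   # taboo but harmless: strong intuition, low harm
]

threshold = 0.5   # principle: "an act is wrong iff harm > threshold"
rate = 0.3

for rounds in range(1, 50):
    total_conflict = 0.0
    for case in cases:
        harm, intuition = case
        verdict = 1.0 if harm > threshold else 0.0
        conflict = intuition - verdict           # mismatch on this case
        total_conflict += abs(conflict)
        # Step 4, "adjust": revise the intuition toward the principle...
        case[1] = intuition - rate * conflict * 0.5
        # ...and revise the principle toward the intuition.
        threshold += rate * conflict * (harm - threshold) * 0.1
    if total_conflict < 0.3:                     # rough coherence: stop
        break

print(f"equilibrium after {rounds} rounds, threshold = {threshold:.2f}")
for harm, intuition in cases:
    print(f"  harm {harm:.1f}: revised intuition {intuition:.2f}")
```

In this run the taboo-but-harmless intuition gives most of the ground to the harm principle, echoing the Julie and Mark discussion, while the principle itself barely moves; with different starting weights the principle would yield instead. That two-way traffic is exactly what step 4 describes.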

This method treats neither intuitions nor reasoning as authoritative on their own. Intuitions provide data--evidence about moral reality that reasoning must take seriously. Principles provide structure--frameworks that can reveal when intuitions are tracking bias rather than truth. Neither is infallible; both are indispensable.

Reflective equilibrium mirrors how thoughtful moral agents actually work through difficult ethical questions. Consider how views on animal welfare have shifted. Many people started with an intuition that animal cruelty is wrong. Philosophical reasoning (notably Peter Singer's Animal Liberation) extended this intuition systematically, arguing that if suffering matters morally, the species of the suffering being is irrelevant. This reasoning challenged other intuitions--that eating meat is perfectly acceptable, that animal experimentation needs no special justification--and gradually, for many people, those intuitions shifted. The process involved constant adjustment between intuitive responses and reasoned principles, with each informing and constraining the other.


Cultural Variation and Universal Foundations

A critical question about moral intuitions concerns their universality: are moral intuitions shared across cultures, or are they culturally constructed?

The answer, supported by extensive cross-cultural research, is both--and recognizing this dual nature is essential for understanding the interplay between intuition and reasoning.

Evidence for Universal Moral Intuitions

Certain moral intuitions appear across virtually all documented human cultures:

Harm aversion: Every known culture regards the deliberate infliction of unnecessary suffering on innocents as wrong. The specific boundaries of "innocent" and "unnecessary" vary enormously, but the core intuition that gratuitous cruelty is bad appears universal.

Reciprocity: The expectation of fair exchange and the negative response to free-riding appear in all human societies, and in rudimentary forms in other primate species. "Do unto others as you would have them do unto you" appears in some form in every major religious and philosophical tradition--not because these traditions copied each other, but because reciprocity intuitions are deeply embedded in human social cognition.

In-group loyalty: Preference for one's own group--family, tribe, nation--appears universally, though its expression varies dramatically.

Respect for certain authorities: While the specific forms of authority that command respect vary culturally, the intuitive tendency to recognize and defer to legitimate authority exists everywhere.

Evidence for Cultural Variation

Despite universal foundations, the expression, emphasis, and application of moral intuitions vary dramatically across cultures:

Richard Shweder's research identified three broad ethics that different cultures emphasize to very different degrees:

  • Ethics of autonomy (dominant in Western, educated, industrialized, rich, democratic societies): Emphasizes individual rights, harm, and fairness. The moral universe centers on persons as autonomous agents.

  • Ethics of community: Emphasizes duty, hierarchy, loyalty, and social role. The moral universe centers on social units--families, communities, nations.

  • Ethics of divinity: Emphasizes purity, sanctity, and spiritual pollution. The moral universe centers on sacred order and spiritual development.

All three ethics exist in all cultures, but their relative weight varies enormously. A moral intuition that feels universal and self-evident within one cultural framework may seem puzzling or even offensive within another. This variation does not mean moral intuitions are arbitrary--the foundations are shared. But the structures built on those foundations differ substantially.

What Cultural Variation Means for the Intuition-Reasoning Debate

Cultural variation in moral intuitions has important implications: if intuitions vary across cultures while the underlying moral reality (if there is one) presumably does not, then at least some intuitions in some cultures must be tracking cultural conditioning rather than moral truth. This gives reasoning a crucial role in distinguishing well-calibrated intuitions from culturally contingent ones.

At the same time, the universality of certain intuitions suggests they are not mere cultural artifacts but reflect something genuine about human moral cognition--perhaps tracking real features of social life that matter for human flourishing.


Moral Intuitions Can Change--And That Matters

One of the most important features of moral intuitions, often overlooked in the debate, is that they are not fixed. Intuitions change over time, both at individual and societal levels.

How Intuitions Change

Exposure to different perspectives: Meeting people from different backgrounds, hearing their stories, and understanding their experiences can shift moral intuitions substantially. The dramatic change in attitudes toward homosexuality in many Western societies correlates strongly with increased visibility and personal acquaintance with gay and lesbian individuals. Direct experience reshapes intuitive responses in ways that abstract argument alone often cannot.

Reasoned argument over time: While Haidt is right that moral reasoning rarely changes intuitions in the moment, sustained exposure to compelling arguments can gradually reshape intuitive responses. Philosophical arguments about animal rights, environmental ethics, and global poverty have measurably shifted moral intuitions in populations exposed to them over years and decades.

Emotional education: Literature, film, journalism, and art can cultivate moral sensitivity by engaging emotional responses to situations one might never personally encounter. Harriet Beecher Stowe's Uncle Tom's Cabin did not change views on slavery through philosophical argument but through emotional engagement that reshaped moral intuitions about the humanity of enslaved people.

Practice and habituation: Aristotle argued that virtues develop through practice--by repeatedly doing courageous things, you become courageous. Modern psychology confirms that behavioral patterns reshape emotional and intuitive responses. Practicing empathic attention to others gradually strengthens empathic intuitions.

The Implications of Mutable Intuitions

If intuitions can change, then the hard opposition between intuition and reasoning softens considerably. Reasoning can gradually reshape the intuitive landscape, even if it cannot instantly override it. And intuitions provide the motivational force that makes reasoned moral conclusions actionable, even if they do not generate those conclusions independently.

The most morally sophisticated individuals are not those who follow intuition blindly or those who reason in purely abstract terms disconnected from feeling. They are people whose intuitions have been educated by reasoning and experience--refined, calibrated, and extended through sustained reflection--while retaining the emotional force and rapid responsiveness that make moral life possible.


Practical Implications: Navigating Intuition and Reasoning in Real Life

Understanding the interaction between moral intuitions and reasoning has practical consequences for how we approach ethical decisions in everyday life and institutional contexts.

Personal Ethical Decision-Making

Treat intuitions as important data, not final verdicts. When you feel strongly that something is right or wrong, take that feeling seriously--it reflects real moral cognition. But also subject it to scrutiny. Ask: Could this intuition be driven by bias? Is it consistent with my other moral commitments? Would I endorse it after careful reflection?

Be suspicious of confident intuitions in unfamiliar domains. Your moral instincts about face-to-face interactions with people you know are well-calibrated by experience. Your intuitions about complex policy questions, distant populations, statistical risks, and long-term consequences are much less reliable. These domains demand more deliberative reasoning and less intuitive confidence.

Watch for motivated reasoning. When you find yourself constructing elaborate arguments for a position you feel strongly about, pause. Are you reasoning toward truth or constructing a case for a predetermined conclusion? The test: genuinely try to argue the opposite position. If you cannot construct a reasonable opposing argument, you may be rationalizing rather than reasoning.

Institutional and Policy Contexts

Design institutions that check intuitive biases. Criminal justice systems, for example, should be designed to counteract well-documented intuitive biases--in-group favoritism, disgust-based prejudice, anchoring effects. Deliberative procedures, diversity of perspectives, and structured decision-making help ensure that institutional moral judgments are not merely aggregated biases.

Preserve space for deliberation on important decisions. The modern attention economy pushes toward rapid, intuitive reactions to moral issues. Social media rewards immediate emotional responses and punishes nuanced deliberation. Creating institutional and cultural space for slower, more careful moral reasoning is essential for sound ethical judgment on complex issues.

Value moral disagreement. If people weight moral foundations differently and bring different intuitive profiles to ethical questions, moral disagreement is not merely error to be corrected but information to be integrated. Productive moral discourse requires engaging seriously with perspectives rooted in different intuitive foundations, not merely dismissing them as irrational.


The Neuroscience of Moral Judgment

Recent neuroscience research has added biological detail to the intuition-reasoning picture. Brain imaging studies reveal that moral judgment involves a distributed network of brain regions, not a single "moral center."

The ventromedial prefrontal cortex (vmPFC) integrates emotional signals into decision-making. Patients with vmPFC damage often know the "right" answer to moral questions in the abstract but cannot feel it--and consequently make poor moral decisions in real life. This suggests that moral reasoning without emotional intuition is practically inert.

The dorsolateral prefrontal cortex (dlPFC) supports deliberative reasoning, cognitive control, and the ability to override intuitive responses. When this region is more active, people are more likely to make utilitarian judgments that override emotional aversion. Interestingly, artificially enhancing dlPFC activity through transcranial stimulation can shift people toward more utilitarian judgments.

The temporoparietal junction (TPJ) supports perspective-taking and mental state attribution--the ability to understand what others think and feel. Disrupting this region's function impairs the ability to consider intent, which shifts moral judgment toward evaluating outcomes alone.

The anterior insula generates feelings of moral disgust and registers social norm violations. Its activation correlates with judgments about purity and sanctity--the moral foundation most tied to visceral intuitive responses.

The neuroscientific picture confirms that moral judgment is not a single cognitive operation but an interaction between multiple brain systems handling emotion, reasoning, perspective-taking, and social evaluation. Neither intuition nor reasoning has sole neurological authority over moral judgment.


Where Philosophy Goes From Here

The empirical research on moral intuitions and reasoning has not settled the philosophical questions--it has deepened them. Several frontier issues remain actively debated:

The reliability question: Even if intuitions come first psychologically, might they still be epistemically reliable? Evolutionary processes that shaped moral intuitions tracked features of social life that genuinely matter. Perhaps intuitions are reliable precisely because they evolved under selection pressures related to genuine moral considerations like harm, fairness, and cooperation--even if the reliability is not perfect.

The authority question: When intuition and reasoning conflict after careful reflection, which should win? There is no meta-principle that resolves this. Sometimes the intuition reveals a genuine moral truth that reasoning has failed to capture. Sometimes reasoning correctly identifies an intuitive error. Distinguishing these cases is itself a matter of judgment that cannot be fully formalized.

The progress question: If moral reasoning can reshape moral intuitions over time, and if some of those changes represent genuine moral progress (the abolition of slavery, the expansion of moral concern to previously excluded groups), then reasoning has a transformative power that pure intuitionism struggles to explain. But identifying which changes represent progress and which represent mere cultural drift remains deeply contested.

The convergence question: Will the interplay of intuition and reasoning eventually lead human moral thinking to converge on shared conclusions? Or is irreducible moral pluralism the permanent human condition? The universality of certain moral foundations suggests some convergence is possible. The depth of cultural variation in their expression suggests complete convergence is unlikely.

What seems increasingly clear is that the opposition between moral intuitions and moral reasoning was always somewhat artificial. Human moral cognition is an integrated system where rapid intuitive responses and slower deliberative processes continuously interact, each shaping and constraining the other. The task is not to choose between them but to understand their respective contributions and cultivate both in the service of living well.


References and Further Reading

  1. Haidt, J. (2001). "The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment." Psychological Review, 108(4), 814-834. https://faculty.virginia.edu/haidtlab/articles/haidt.emotionaldog.manuscript.pdf

  2. Greene, J. D. (2013). Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. Penguin Press. Overview at https://en.wikipedia.org/wiki/Moral_Tribes

  3. Rawls, J. (1971). A Theory of Justice. Harvard University Press. Discussion of reflective equilibrium: https://plato.stanford.edu/entries/reflective-equilibrium/

  4. Damasio, A. R. (1994). Descartes' Error: Emotion, Reason, and the Human Brain. Putnam. Overview at https://en.wikipedia.org/wiki/Descartes%27_Error

  5. Haidt, J. & Joseph, C. (2004). "Intuitive Ethics: How Innately Prepared Intuitions Generate Culturally Variable Virtues." Daedalus, 133(4), 55-66. Moral Foundations Theory overview: https://moralfoundations.org/

  6. De Waal, F. (2006). Primates and Philosophers: How Morality Evolved. Princeton University Press. https://press.princeton.edu/books/paperback/9780691141299/primates-and-philosophers