The Day a Steam Engine Learned to Listen
In 1788, James Watt and his business partner Matthew Boulton faced a problem that had plagued every steam engine operator since the technology emerged: the engines ran away. Load a Watt engine lightly, and it accelerated until it tore itself apart. Load it heavily, and it stalled. The engine had no way of knowing what it was doing.
Watt's solution was mechanical genius disguised as simplicity. He attached two iron balls to a rotating spindle connected to the engine's crankshaft. As the engine sped up, centrifugal force pushed the balls outward and upward. This outward movement mechanically closed the throttle valve, reducing the steam admitted to the engine. As the engine slowed, the balls dropped, opening the valve again. The engine had been given a voice with which to talk to itself.
This device — the centrifugal governor — became the first widely deployed engineered feedback mechanism in industrial history. It did not optimize the engine. It did not make it faster or more powerful. It made the engine stable. The governor introduced what engineers would later formalize as a balancing feedback loop: a signal from output fed back into input, reducing deviation from a desired state. Within two decades, Watt's governor had spread across Britain's textile mills, flour mills, and pumping stations. The Industrial Revolution did not merely run on steam — it ran on feedback.
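Watt's mechanism can be caricatured in a few lines of code. The sketch below is purely illustrative: the constants, the linear "engine" model, and the update rule are invented for clarity, not drawn from any historical engine. What it preserves is the structure of the balancing loop, in which the engine's own output (speed) throttles its input (the valve).

```python
# A minimal sketch of a balancing loop in the spirit of Watt's governor.
# All constants and the linear "engine" model are invented for illustration.

def simulate(governor_gain: float, steps: int = 12, target: float = 100.0) -> list[float]:
    speed, valve, load = 60.0, 0.5, 10.0
    history = []
    for _ in range(steps):
        speed += 40.0 * valve - load          # steam drives the engine; the load slows it
        speed = max(speed, 0.0)
        error = speed - target                # the governor "senses" its own output...
        valve = min(1.0, max(0.0, valve - governor_gain * error))  # ...and throttles the input
        history.append(round(speed, 1))
    return history

print(simulate(governor_gain=0.0))    # no feedback: speed climbs step after step with nothing to stop it
print(simulate(governor_gain=0.02))   # with feedback: speed is pulled back toward the target and hunts around it
```

Notice that the governed run does not lock exactly onto the target; it overshoots and hunts around it, a behavior the later discussion of delays and overcorrection returns to.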
What Watt built by intuition, mathematicians would not fully formalize until Norbert Wiener's work 160 years later. What Wiener formalized, Jay Forrester would weaponize against bad urban policy. What Forrester modeled, Donella Meadows would translate into one of the most important books of the twentieth century. The story of feedback loops is the story of how humanity slowly learned to see the invisible architecture of cause and effect that governs every system we inhabit — from stock markets to ecosystems, from personal habits to organizational cultures.
Defining Feedback: When Output Becomes Input
A feedback loop exists whenever the output of a system influences its own future inputs. This is not metaphor. It is a structural feature of a system — a closed causal chain in which information about system behavior flows back to modify that behavior.
There are two fundamental types.
Reinforcing feedback loops (also called positive feedback, though "positive" here means self-amplifying, not beneficial) occur when a change in a system causes further change in the same direction. More produces more. Less produces less. The loop accelerates whatever direction it is already traveling. A bank paying compound interest creates a reinforcing loop: interest earns interest earns interest. A rumor spreading on social media earns attention, which earns more attention. A startup with network effects attracts users because it has users. Reinforcing loops are engines of growth, collapse, and runaway change.
Balancing feedback loops (also called negative feedback, meaning self-correcting, not destructive) occur when a change in a system triggers a response that pushes back against that change, toward a goal or equilibrium. A thermostat detects temperature above the setpoint and shuts off the furnace. Hunger triggers eating, which reduces hunger. A central bank raises interest rates when inflation rises, cooling economic activity. Balancing loops are the stabilizing architecture of living systems — the mechanisms by which bodies maintain temperature, ecosystems maintain species balance, and organizations maintain operational norms.
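The two characteristic behaviors are easy to reproduce numerically. The toy sketch below is illustrative only: the interest rate, thermostat gain, and starting values are arbitrary, chosen to show the shapes each loop produces rather than to model any real system.

```python
# Toy illustrations of the two loop types; all constants are arbitrary.

def reinforcing(balance: float = 100.0, rate: float = 0.05, years: int = 10) -> list[float]:
    """Compound interest: this year's output (balance) feeds back into next year's input."""
    history = []
    for _ in range(years):
        balance += balance * rate          # more produces more
        history.append(round(balance, 2))
    return history

def balancing(temp: float = 10.0, setpoint: float = 20.0, gain: float = 0.3, hours: int = 10) -> list[float]:
    """Thermostat-style correction: the deviation from the goal drives the response."""
    history = []
    for _ in range(hours):
        temp += gain * (setpoint - temp)   # push back toward the setpoint
        history.append(round(temp, 2))
    return history

print(reinforcing())  # accelerating growth: 105.0, 110.25, 115.76, ...
print(balancing())    # damped approach to the goal: 13.0, 15.1, 16.57, ...
```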
The crucial insight, articulated clearly by systems thinking researchers, is that most real systems contain both types simultaneously, interacting. The human body's glucose regulation involves a balancing loop through insulin — but chronic sugar overconsumption can trigger reinforcing loops of insulin resistance that gradually overwhelm the balancing mechanism. Financial markets contain balancing loops through price discovery — but speculative manias trigger reinforcing loops of momentum that temporarily overwhelm the balancing signal of fundamental value.
Understanding any system requires mapping which loops dominate under which conditions, and how that dominance can shift.
Reinforcing vs. Balancing: A Comparison
| Dimension | Reinforcing (Positive) Feedback | Balancing (Negative) Feedback |
|---|---|---|
| Direction of effect | Amplifies deviation — change produces more change in the same direction | Corrects deviation — change produces response in the opposite direction |
| System tendency | Growth, collapse, or runaway change | Stability, equilibrium, oscillation around a goal |
| Common descriptor | "Vicious cycle" or "virtuous cycle" depending on direction | "Self-correcting" or "goal-seeking" behavior |
| Classic ecological example | Algal bloom: nutrient runoff feeds algae growth; the bloom and its decay deplete oxygen, killing competing species and freeing nutrients that feed further growth | Predator-prey balance: wolves reduce deer population, deer scarcity reduces wolf population, allowing deer recovery |
| Classic economic example | Bank run: fear of insolvency triggers withdrawals, withdrawals cause insolvency, confirming fear | Supply and demand: high prices reduce demand and increase supply until price falls toward equilibrium |
| Classic organizational example | High performance culture attracting talent, talent producing higher performance, reinforcing reputation | Management by exception: problems trigger intervention, intervention resolves problems, reducing need for intervention |
| Classic technological example | Network effects: platform value grows with users, attracting more users (Metcalfe's Law) | Automatic gain control in radio receivers: strong signal reduces amplifier gain to prevent distortion |
| Time behavior | Exponential growth or collapse | Oscillation, damping toward setpoint, or S-curve (growth limited by balancing) |
| Danger | Unbounded acceleration, collapse, or lock-in | Oscillation and overcorrection when delays are present |
Why Humans Cannot See Feedback Loops
The difficulty is not intellectual — it is cognitive and structural. Feedback loops are invisible not because they are exotic but because they violate the basic architecture of how human minds construct causal stories.
Humans are linear thinkers in a nonlinear world. Psychologist Philip Johnson-Laird's research on mental models, developed through the 1980s at Cambridge, showed that humans naturally construct causal chains as sequences: A causes B causes C. This is adequate for simple tools and simple physical processes. It fails entirely when C loops back to modify A. The mind wants a story with a beginning and an end. Feedback loops have neither — they are circles.
Delays destroy the perception of causation. When cause and effect are separated by time — days, months, years — the human brain does not naturally associate them. Economist Herbert Simon's work on bounded rationality established that humans use heuristics that work well in environments with tight feedback (touching a hot stove) and fail in environments with delayed feedback (dietary choices and long-term health). Peter Senge, in The Fifth Discipline (1990), illustrated this with what he called the "beer game" — a supply chain simulation developed at MIT in the 1960s in which participants, playing the roles of retailer, wholesaler, distributor, and factory, inevitably cause massive oscillations in inventory and production. The simulation has been run with thousands of executives and students. The boom-bust cycle almost always emerges, regardless of participant sophistication, because the delays in the supply chain are long enough to prevent participants from perceiving the feedback their own orders are creating.
"The most pernicious aspect of poor systems thinking is that it leads us to fight the same problem over and over again." — Peter Senge, The Fifth Discipline, 1990
We assign blame rather than see structure. When a system misbehaves, the mind's first move is to identify an agent responsible — a bad actor, a stupid decision, an unusual event. This impulse, what social psychologist Lee Ross in 1977 named the "fundamental attribution error," systematically blinds us to structural causes. The 2008 financial crisis was initially attributed to the greed of specific bankers. The systemic feedback loops that made individual greed catastrophic — the way mortgage-backed securities created reinforcing loops between house prices, lending standards, and leverage — took years of post-crisis analysis to document clearly.
Feedback operates across system boundaries we draw arbitrarily. Any feedback loop involves a boundary — the system must "see" its own output. But we rarely draw system boundaries to include the full causal circuit. A company tracks its own sales but not the morale effects of its management practices on those sales. A government tracks GDP but not the environmental depletion that eventually constrains it. Meadows identified this in Thinking in Systems (2008) as a boundary problem: the system boundary we draw determines which feedback loops we can perceive and which we render invisible by exclusion.
Four Historical Case Studies
Case Study 1 — Ecological: The Kaibab Plateau Deer Explosion (1906–1939)
The system. The Kaibab Plateau in northern Arizona contained a population of roughly 4,000 mule deer in 1906, held in balance by predators — primarily mountain lions, wolves, and coyotes — and by hunting by Indigenous communities and settlers.
The feedback loop. The U.S. Forest Service launched a predator elimination campaign beginning in 1906. Between 1906 and 1931, hunters and government agents killed 781 mountain lions, 30 wolves, 4,889 coyotes, and 554 bobcats. The predator-prey balancing feedback loop — which had maintained deer population stability for millennia — was severed. Without the balancing signal, a reinforcing loop emerged: more deer survived to reproduce, producing more deer. The population exploded from 4,000 in 1906 to an estimated 100,000 by 1924.
Why it was missed. The Forest Service's mental model contained only a one-way causal chain: predators kill deer; therefore, eliminating predators saves deer. The reciprocal signal — that deer population controlled predator population, and predator population controlled deer population — was not part of the managerial model.
Consequence. The deer population, having vastly exceeded the plateau's carrying capacity, began dying of starvation. By 1939, the population had collapsed to approximately 10,000 — far below the original 4,000 that had been sustained by the intact balancing system. Aldo Leopold, who had taken part in predator elimination campaigns as a young Forest Service officer in the Southwest, later credited the Kaibab collapse with transforming his understanding of ecological systems. It became a foundational case study in the emerging discipline of wildlife ecology and management.
Case Study 2 — Economic: The 1987 Black Monday Crash and Portfolio Insurance
The system. By October 1987, U.S. equity markets had risen approximately 250% over the previous five years. An estimated $60 to $90 billion in institutional assets were protected by portfolio insurance, developed by finance professors Hayne Leland and Mark Rubinstein at UC Berkeley in 1976. Portfolio insurance worked by automatically selling stock index futures as the market fell.
The feedback loop. On October 19, 1987, the Dow Jones Industrial Average fell 508 points — 22.6%, the largest single-day percentage decline in history. The Brady Commission concluded that portfolio insurance created a catastrophic reinforcing feedback loop. As the market fell, portfolio insurance systems automatically sold futures. This selling pushed prices lower. Lower prices triggered more portfolio insurance selling. Each sale confirmed and amplified the decline. The systems designed to protect individual portfolios from falling markets collectively caused a market collapse.
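The mechanics of that loop can be caricatured in a few lines. Every number below is invented (the initial shock, the selling multiplier, the insured fraction); the point is only the structure the Brady Commission described, in which the hedgers' response to a price fall becomes the cause of the next one.

```python
# Stylized sketch of the 1987 reinforcing loop; parameters are invented for illustration.

def crash_dynamics(insured_fraction: float, days: int = 5) -> list[float]:
    """Price path when hedging programs sell in proportion to the last observed decline."""
    price, shock = 100.0, -3.0            # start from an external 3% drop
    history = []
    for _ in range(days):
        price *= 1 + shock / 100
        # The selling triggered by this decline creates the next day's price impact.
        shock = 2.0 * insured_fraction * shock
        history.append(round(price, 1))
    return history

print(crash_dynamics(insured_fraction=0.1))  # feedback too weak: the shock dies out
print(crash_dynamics(insured_fraction=0.8))  # feedback dominates: the decline feeds on itself
```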
Why it was missed. Portfolio insurance was priced and sold based on historical volatility models that assumed portfolio insurance trades would have no market impact. The models did not include a feedback pathway from portfolio insurance activity to market prices.
Consequence. The Federal Reserve, under Chairman Alan Greenspan, injected liquidity and the market recovered within two years. But the crash permanently ended portfolio insurance as a mass-market product and triggered the introduction of circuit breakers — balancing feedback mechanisms — into U.S. exchanges.
Case Study 3 — Technological: The Therac-25 Radiation Therapy Machine (1985–1987)
The system. The Therac-25 was a computerized radiation therapy machine manufactured by Atomic Energy of Canada Limited, introduced in 1982. It was an advanced version of the Therac-20, which had contained hardware interlocks — physical safety mechanisms that prevented dangerous radiation doses regardless of software behavior.
The feedback loop. The Therac-25 designers removed the hardware interlocks, relying entirely on software safety controls. When software bugs produced race conditions that bypassed the safety checks, the machine delivered radiation doses estimated at 100 times the intended therapeutic dose to at least six patients between 1985 and 1987. Three patients died. The missing feedback loop was the one that mattered most: there was no sensor that measured actual delivered radiation dose and compared it against the intended dose.
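The missing loop can be stated concretely. The sketch below shows the kind of output-to-input check the text describes; the interface names (`read_dose_sensor`, `stop_beam`) and the tolerance are hypothetical, invented for illustration, and are not drawn from any real device's software or from the Leveson and Turner analysis.

```python
# Hypothetical sketch of an output-to-input feedback check: compare the dose actually
# measured against the dose commanded, and interlock on mismatch. All names invented.

def dose_within_tolerance(commanded_cgy: float, measured_cgy: float, tolerance: float = 0.05) -> bool:
    """Return True if the measured dose is within tolerance of the commanded dose."""
    return abs(measured_cgy - commanded_cgy) <= tolerance * commanded_cgy

def delivery_step(commanded_cgy: float, read_dose_sensor, stop_beam) -> bool:
    measured = read_dose_sensor()              # the feedback signal: actual system output
    if not dose_within_tolerance(commanded_cgy, measured):
        stop_beam()                            # corrective action closes the loop
        return False
    return True

if __name__ == "__main__":
    # Stubbed sensor and actuator, for demonstration only.
    print(delivery_step(200.0, read_dose_sensor=lambda: 198.0, stop_beam=lambda: print("beam stopped")))
    print(delivery_step(200.0, read_dose_sensor=lambda: 2400.0, stop_beam=lambda: print("beam stopped")))
```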
Why it was missed. Nancy Leveson and Clark Turner, in their landmark 1993 analysis published in IEEE Computer, identified that the accidents resulted from a systemic assumption: that software could be trusted to provide safety guarantees that hardware had previously provided. The feedback loop between system output (actual radiation dose) and system input (dose delivery command) was severed by design choice.
Consequence. The Therac-25 case became the canonical example in software safety engineering of the dangers of removing physical feedback mechanisms and replacing them with software-only monitoring. It directly shaped the development of IEC 62304, the international standard for medical device software.
Case Study 4 — Organizational: Kodak and the Feedback Loop It Built to Destroy Itself
The system. Eastman Kodak at its peak in 1976 commanded approximately 90% of U.S. film sales and 85% of camera sales. Kodak engineer Steve Sasson invented the digital camera in 1975. The company commissioned internal studies throughout the late 1970s and 1980s on the threat digital imaging posed to its film business.
The feedback loop. Film and development were extraordinarily profitable — operating margins above 70% in some segments. Investment in digital photography would cannibalize this revenue. Executives who promoted digital investment threatened the company's current earnings and therefore their own advancement. The organizational feedback loop — promotions and resources flowing to those who protected the film business — systematically filtered out the people and proposals that might have enabled adaptation.
Why it was missed. The internal studies Kodak commissioned correctly predicted digital photography's eventual dominance. The studies did not model the organizational feedback loop that would prevent the company from acting on that prediction.
Consequence. Kodak filed for bankruptcy in January 2012. Its film revenues had declined from $7.7 billion in 1996 to near zero. The company that invented digital photography did not survive the transition to digital photography.
Feedback Loops in Four Domains
Financial Markets and Bank Runs
Financial markets are among the most feedback-dense environments humans have created. Price is itself a feedback signal — it communicates the aggregate judgment of buyers and sellers about value, which then influences buying and selling behavior, which then influences price.
Bank runs are the classic destabilizing case. The economists Douglas Diamond and Philip Dybvig formalized the mechanism in their 1983 paper in the Journal of Political Economy: a bank is by design illiquid — it takes short-term deposits and makes long-term loans. If depositors believe other depositors will withdraw, it becomes individually rational to withdraw first. This belief, if sufficiently widespread, causes the very insolvency it anticipates. The reinforcing loop is self-fulfilling: fear produces withdrawals, withdrawals produce insolvency, insolvency confirms fear.
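The self-fulfilling structure is simple enough to sketch. The model below is not Diamond and Dybvig's (theirs is a formal model of depositor incentives); it only mimics the reinforcing loop they formalized, with an invented liquidity buffer and fear-response coefficient.

```python
# Stylized bank-run loop: fear feeds on withdrawals. All parameters are invented.

def run_dynamics(initial_fear: float, rounds: int = 12) -> list[float]:
    """Fraction of depositors withdrawing each round."""
    liquid_fraction = 0.3           # share of deposits the bank can pay out quickly
    withdrawing = initial_fear      # share of depositors who withdraw in round 0
    history = []
    for _ in range(rounds):
        # Withdrawals above the liquid buffer raise perceived risk,
        # which raises the next round's withdrawals.
        stress = max(0.0, withdrawing - liquid_fraction)
        withdrawing = min(1.0, withdrawing + 1.5 * stress)
        history.append(round(withdrawing, 2))
    return history

print(run_dynamics(initial_fear=0.2))   # below the buffer: nothing happens
print(run_dynamics(initial_fear=0.35))  # above it: withdrawals cascade toward 1.0
```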
The 2023 collapse of Silicon Valley Bank demonstrated that digital banking and social media had dramatically accelerated the reinforcing loop. Approximately $42 billion was withdrawn in a single day on March 9, 2023 — triggered in part by posts on fintech Twitter discussing SVB's balance sheet vulnerabilities.
Organizational Culture
Culture in organizations operates through feedback loops that are almost entirely invisible to the people inside them. Edgar Schein's model of organizational culture, developed at MIT Sloan over his career and synthesized in Organizational Culture and Leadership (2010), distinguishes between visible artifacts, espoused values, and underlying assumptions. The underlying assumptions perpetuate themselves through feedback: they shape what behaviors are rewarded, rewarded behaviors attract people who display those behaviors, those people reinforce the assumptions.
Attempts to change culture by changing visible behaviors (artifacts) often fail because the feedback loops running through rewards, hiring, and promotion preserve the underlying assumptions. Organizations that want to shift from risk-averse to innovative cultures typically find that after initial surface change, behavior reverts within 18 to 24 months — the balancing loop of cultural immune response has reasserted the prior state.
Personal Habits and Behavior Change
Charles Duhigg's synthesis of habit research in The Power of Habit (2012) describes what neuroscientists at MIT, particularly Ann Graybiel's lab, identified through the 1990s: habits are neurological feedback loops. The habit loop — cue, routine, reward — is a balancing feedback mechanism that reduces cognitive load by automating frequently repeated behavior.
Research by Wendy Wood at the University of Southern California, synthesized in her 2019 book Good Habits, Bad Habits, shows that willpower-based approaches to behavior change have roughly 30% success rates because the cue continues to trigger craving. More effective approaches change the cue (the input to the loop), change the environment to prevent the loop from activating, or substitute a new routine that delivers a similar reward signal.
Technology Platforms
Network effects are reinforcing feedback loops. Metcalfe's Law, proposed by Robert Metcalfe around 1980 in the context of Ethernet networking, holds that the value of a network is proportional to the square of the number of connected users. This is why platform markets tend toward monopoly or duopoly: once a platform achieves critical mass, the reinforcing loop of users attracting users creates a winner-take-most dynamic.
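A toy simulation shows how early, random differences get locked in. Here the probability that a new user joins a platform grows with the square of its existing user count, echoing the Metcalfe-style value assumption; every number is invented and no real platform is being modeled.

```python
import random

# Toy winner-take-most dynamic; purely illustrative, not a calibrated market model.

def adoption_race(new_users: int = 10_000, seed: int = 0) -> tuple[int, int]:
    """Each new user joins platform A with probability proportional to A's value (users squared)."""
    random.seed(seed)
    a, b = 10, 10                            # both platforms start with a handful of users
    for _ in range(new_users):
        p_a = a * a / (a * a + b * b)        # the reinforcing loop: users attract users
        if random.random() < p_a:
            a += 1
        else:
            b += 1
    return a, b

for seed in range(5):
    print(f"seed {seed}:", adoption_race(seed=seed))
```

Run it a few times: which platform wins varies with the random seed, but the final split is rarely close to even. The reinforcing loop turns small early advantages into near-total dominance.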
The attention economy operates on nested reinforcing loops. Engagement metrics provide platform algorithms with feedback that directs content distribution toward the highest-engagement content. High-engagement content tends to be emotionally activating — outrage, fear, novelty. Users exposed to emotionally activating content become habituated, requiring increasingly extreme content to produce engagement. Researchers at New York University's Center for Social Media and Politics documented in 2022 studies using Twitter data that political misinformation spreads significantly faster and further than factual corrections, consistent with this reinforcing loop structure.
This connects directly to second-order effects: the platform's first-order goal (engagement) produces second-order feedback loops that reshape the information environment in ways the platform designers did not intend.
The Intellectual Lineage of Feedback Loop Theory
Norbert Wiener and Cybernetics (1948). Wiener, a mathematician at MIT, coined the term "cybernetics" in his 1948 book Cybernetics: Or Control and Communication in the Animal and the Machine. Drawing on his wartime work developing anti-aircraft gun targeting systems, Wiener proposed a unified science of systems that regulate themselves through feedback. He identified that the same mathematical structure governed thermostats, economies, nervous systems, and social institutions.
"We have decided to call the entire field of control and communication theory, whether in the machine or in the animal, by the name Cybernetics." — Norbert Wiener, Cybernetics, 1948
Jay Forrester and System Dynamics (1961). Forrester, an electrical engineer at MIT, published Industrial Dynamics in 1961, introducing the methodology of system dynamics. His 1971 paper, "Counterintuitive Behavior of Social Systems," published in Technology Review, remains one of the most important documents in systems thinking. Its core claim: interventions that feel correct to human intuition frequently worsen systems by activating balancing loops that push back, or by strengthening the reinforcing loops driving the pathology.
Peter Senge and The Fifth Discipline (1990). Senge published The Fifth Discipline in 1990, making Forrester's system dynamics concepts accessible to business audiences and introducing a framework of "systems archetypes" — recurring feedback loop structures that appear across different organizations and contexts. The book has sold over a million copies and is consistently cited in surveys of the most influential business books of the twentieth century.
Donella Meadows and Thinking in Systems (2008). Meadows, who had been a student of Forrester's and co-authored the landmark 1972 The Limits to Growth report for the Club of Rome, left Thinking in Systems: A Primer in draft at her death in 2001. Edited and published posthumously in 2008, it synthesizes decades of systems thinking into the most accessible and rigorous treatment available.
"The world is a complex, interconnected, finite, ecological — social — psychological — economic system. We treat it as if it were not, as if it were divisible, separable, simple, and infinite." — Donella Meadows, Thinking in Systems, 2008
Research Findings
Sterman (1989) on the beer game. John Sterman at MIT Sloan published a detailed analysis of beer game results in Management Science in 1989, documenting data from 48 trials involving 210 participants. He found that order oscillations were universal — every trial produced the boom-bust pattern — because participants systematically underweighted the supply line of orders already placed but not yet received.
Paich and Sterman (1993) on market growth and collapse. A 1993 study in Management Science had subjects manage a simulated firm in a market with network effects. Subjects consistently overinvested in growth during reinforcing loop phases and underinvested in adapting to market saturation as balancing loops emerged.
Dörner (1996) on complex system management. Dietrich Dörner, a cognitive psychologist at the University of Bamberg, published The Logic of Failure in 1996, based on experiments in which subjects managed simulated complex systems. Subjects with high intelligence and domain knowledge consistently failed. The most common failure modes were: ignoring feedback from uncontrolled variables; oscillating between ignoring a problem and overcorrecting; and accelerating intervention as problems worsened, triggering positive feedback loops of counterproductive action.
Minsky (1992) on financial instability. Economist Hyman Minsky developed the Financial Instability Hypothesis, arguing that financial stability is itself destabilizing through a reinforcing feedback loop. During stable periods, investors grow confident. Confidence reduces caution. Reduced caution increases lending, which increases asset prices, which increases confidence further. The "Minsky moment" — when the structure collapses — produces a reinforcing loop of asset sales, price declines, margin calls, and more asset sales. Minsky's framework, largely ignored during his lifetime, became widely cited after 2008 because it described the mechanism of the financial crisis with notable precision.
When Feedback Loops Fail: The Limits of Self-Correction
Delays. When the delay between action and feedback is long relative to the decision cycle, decision makers typically overshoot the target. A thermostat that takes an hour to register temperature change will produce a house that alternates between overheated and underheated. Central banks that observe inflation with a six-month lag face the same problem — their corrective actions arrive after the situation has changed, frequently overcorrecting.
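The effect of a delay on a balancing loop is easy to demonstrate. The sketch below is in the spirit of the beer game but is not a reproduction of Sterman's experimental setup: the ordering rule, the demand step, and the delays are all invented for illustration.

```python
from collections import deque

def manage_inventory(delay_steps: int, weeks: int = 24) -> list[float]:
    """Balancing loop (restock toward a target) with a shipping delay."""
    target, inventory = 100.0, 100.0
    pipeline = deque([10.0] * delay_steps)     # orders placed but not yet received
    history = []
    for week in range(weeks):
        demand = 10.0 if week < 4 else 15.0    # a one-time step up in demand
        inventory += pipeline.popleft() - demand
        # Naive rule: cover demand plus half the inventory gap, ignoring the orders
        # already in transit (the classic beer-game misperception).
        order = max(0.0, demand + 0.5 * (target - inventory))
        pipeline.append(order)
        history.append(round(inventory, 1))
    return history

print(manage_inventory(delay_steps=1))  # short delay: small dip, smooth recovery
print(manage_inventory(delay_steps=4))  # long delay: overshoot and oscillation
```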
Loop dominance shifts. In systems with both reinforcing and balancing loops, which loop dominates can shift as the system state changes. Exponential population growth eventually runs into resource constraints — a balancing loop that was always present but was dominated by the reinforcing growth loop. The transition between loop dominance is frequently abrupt and produces the S-curves, boom-busts, and collapses that characterize ecological and economic systems.
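The shift in loop dominance is what the familiar logistic equation encodes: a reinforcing term proportional to the population and a balancing term that strengthens as the population approaches capacity. The parameters below are arbitrary; the output traces the S-curve described above.

```python
# Logistic growth: a reinforcing and a balancing loop in one equation. Constants are arbitrary.

def logistic(pop: float = 10.0, rate: float = 0.5, capacity: float = 1000.0, steps: int = 25) -> list[float]:
    history = []
    for _ in range(steps):
        growth = rate * pop * (1 - pop / capacity)
        # Early on (pop << capacity) the reinforcing term dominates: near-exponential growth.
        # Later the balancing term (1 - pop/capacity) dominates: growth flattens into an S-curve.
        pop += growth
        history.append(round(pop, 1))
    return history

print(logistic())
```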
External disturbance exceeding corrective capacity. Balancing feedback loops have finite corrective power. A thermostat-controlled furnace can hold a house at 20°C when the outside temperature is 0°C, but not when it is −40°C. Meadows called the typical response "eroding goals": when balancing loops are overpowered, systems often adjust the goal downward rather than increase corrective effort, producing a slow, invisible degradation.
Signal corruption. Feedback loops require accurate information to function. When the signal is corrupted — delayed, biased, or absent — the loop cannot correct. The Soviet economy's fatal flaw, documented by economist János Kornai in his 1980 book Economics of Shortage, was that price signals — the core feedback mechanism of market economies — were suppressed by central planning. Without price feedback, there was no mechanism for the economy to know what it was doing.
Unintended loop creation. The "cobra effect" — in which a colonial British policy offering bounties for dead cobras in India led to cobra farming and ultimately a larger cobra population — is a class of policy failure in which the corrective intervention creates a reinforcing loop in the opposite direction. This is the same dynamic captured in Goodhart's Law: using a feedback signal as a target corrupts the signal, destroying the feedback loop's function.
Seeing the Loops
The practical challenge of feedback loops is not intellectual acceptance — most educated people, presented with the concept, find it obvious. The challenge is perceptual: training the mind to see circular causality in real time, in the systems you actually inhabit.
Several practices help. Mapping causal loops explicitly — drawing arrows between variables and marking each arrow as positive (same direction) or negative (opposite direction) — forces the mind to complete the circuit rather than trace only the linear chain. System dynamicists call these causal loop diagrams, and the discipline of completing them reveals loops that narrative thinking obscures.
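One part of that discipline is mechanical: list the links in a loop with their polarities and count the negatives, since a loop with an even number of negative links is reinforcing and one with an odd number is balancing. The helper below is a minimal sketch of that bookkeeping; the example loops and variable names are made up.

```python
# Tiny causal-loop helper. A loop is reinforcing if it contains an even number of
# negative (opposite-direction) links, balancing if the count is odd.

def loop_type(links: list[tuple[str, str, int]]) -> str:
    """Each link is (cause, effect, polarity) with polarity +1 or -1."""
    negatives = sum(1 for _, _, polarity in links if polarity < 0)
    return "balancing" if negatives % 2 else "reinforcing"

word_of_mouth = [
    ("customers", "recommendations", +1),
    ("recommendations", "new signups", +1),
    ("new signups", "customers", +1),
]
thermostat = [
    ("room temperature", "gap below setpoint", -1),
    ("gap below setpoint", "furnace output", +1),
    ("furnace output", "room temperature", +1),
]

print(loop_type(word_of_mouth))  # reinforcing
print(loop_type(thermostat))     # balancing
```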
Asking "what does this output signal back to the system?" for every major variable converts linear mental models into circular ones. Where the feedback signal is absent or corrupted, you have identified a structural vulnerability.
Looking for delays is equally important. The question is not just whether there is a feedback loop, but what the delay structure is. Long delays in balancing loops create oscillation. Long delays in reinforcing loops create false security — the loop is running, accumulating momentum, but the effect is not yet visible.
Meadows' final prescription was the most demanding: to cultivate what she called "living in a world of systems" — a fundamental perceptual shift from seeing events to seeing structures, from asking "who caused this?" to asking "what feedback loop produced this?" It requires accepting that in systems with strong feedback, the concept of a single cause is not merely incomplete — it is structurally false.
The centrifugal governor Watt bolted to his steam engine in 1788 did not know it was doing any of this. It simply responded to what the system was doing and pushed back. That is all a feedback loop is: a system listening to itself. The art is in learning to listen along with it.
References
- Wiener, Norbert. Cybernetics: Or Control and Communication in the Animal and the Machine. MIT Press, 1948. https://mitpress.mit.edu/9780262730099/cybernetics/
- Forrester, Jay W. Industrial Dynamics. MIT Press, 1961. https://archive.org/details/industrialdynami00forr
- Forrester, Jay W. "Counterintuitive Behavior of Social Systems." Technology Review, 1971. https://web.mit.edu/sysdyn/sd-intro/D-4468-2.pdf
- Meadows, Donella H. Thinking in Systems: A Primer. Chelsea Green Publishing, 2008. https://www.chelseagreen.com/product/thinking-in-systems/
- Senge, Peter M. The Fifth Discipline. Doubleday, 1990. https://www.penguinrandomhouse.com/books/161734/the-fifth-discipline-by-peter-m-senge/
- Diamond, Douglas W. and Philip H. Dybvig. "Bank Runs, Deposit Insurance, and Liquidity." Journal of Political Economy 91, no. 3 (1983): 401–419. https://www.journals.uchicago.edu/doi/10.1086/261155
- Sterman, John D. "Modeling Managerial Behavior: Misperceptions of Feedback in a Dynamic Decision Making Experiment." Management Science 35, no. 3 (1989): 321–339. https://doi.org/10.1287/mnsc.35.3.321
- Leveson, Nancy G. and Clark S. Turner. "An Investigation of the Therac-25 Accidents." IEEE Computer 26, no. 7 (1993): 18–41. https://ieeexplore.ieee.org/document/274940
- Brady Commission. Report of the Presidential Task Force on Market Mechanisms. U.S. Government Printing Office, 1988. https://archive.org/details/reportofpresiden00unit
- Dörner, Dietrich. The Logic of Failure. Basic Books, 1996. https://www.basicbooks.com/titles/dietrich-dorner/the-logic-of-failure/9780201479485/
- Minsky, Hyman P. "The Financial Instability Hypothesis." Levy Economics Institute Working Paper No. 74, 1992. https://www.levyinstitute.org/pubs/wp74.pdf
- Wood, Wendy. Good Habits, Bad Habits. Farrar, Straus and Giroux, 2019. https://us.macmillan.com/books/9781250159076/goodhabitsbadhabits
- Duhigg, Charles. The Power of Habit. Random House, 2012. https://charlesduhigg.com/the-power-of-habit/
- Kornai, János. Economics of Shortage. North-Holland, 1980. https://www.sciencedirect.com/book/9780444854032/economics-of-shortage
- Paich, Mark and John D. Sterman. "Boom, Bust, and Failures to Learn in Experimental Markets." Management Science 39, no. 12 (1993). https://doi.org/10.1287/mnsc.39.12.1439
- Schein, Edgar H. Organizational Culture and Leadership, 4th ed. Jossey-Bass, 2010. https://www.wiley.com/en-us/Organizational+Culture+and+Leadership%2C+4th+Edition-p-9780470185865
Frequently Asked Questions
What is a feedback loop?
A feedback loop exists when the output of a system influences its own future inputs — a closed causal chain where system behavior feeds back to modify that same behavior.
What is the difference between reinforcing and balancing feedback loops?
Reinforcing (positive) feedback amplifies change in the same direction — more produces more. Balancing (negative) feedback corrects deviation toward a goal — change triggers a response in the opposite direction.
What is an example of a reinforcing feedback loop?
Bank runs: fear of insolvency triggers withdrawals, withdrawals cause insolvency, confirming fear. Or compound interest: interest earns interest earns interest.
What is an example of a balancing feedback loop?
A thermostat: temperature above setpoint shuts off the furnace; temperature below setpoint turns it on. Predator-prey dynamics in ecology work the same way.
Why are feedback loops hard to see?
Humans think in linear cause-effect chains, not circles. Delays separate cause from effect in time, making the connection invisible. We assign blame to agents rather than seeing structural loops.
Who developed feedback loop theory?
Norbert Wiener formalized feedback in cybernetics (1948). Jay Forrester built system dynamics modeling (1961). Peter Senge made it accessible to organizations (1990). Donella Meadows synthesized it in Thinking in Systems (2008).
What causes feedback loops to fail or destabilize?
Delays cause overcorrection and oscillation. Signal corruption prevents loops from functioning. External disturbances can exceed corrective capacity. Interventions sometimes create unintended new reinforcing loops.
How does the 1987 stock market crash relate to feedback loops?
Portfolio insurance automatically sold futures when markets fell. This created a reinforcing loop: falling prices triggered more selling, which drove prices lower still. The $60 to $90 billion in insured portfolios collectively produced the very collapse the strategy was designed to protect against.