You've mastered growth tactics in B2B SaaS. Cold email sequences work brilliantly—20% response rates, consistent pipeline. The formula is proven across dozens of clients. It's practically a law: "Use this sequence format → get these results."
Then a client in B2C e-commerce tries the same approach. Response rate: 0.3%. Complete failure. The "law" broke.
What happened? Context changed. The principle that worked in one environment (B2B: small audience, high-value deals, relationship-based, professional email culture) failed in another (B2C: huge audience, low-value transactions, impersonal, spam-saturated inboxes).
The pattern repeats everywhere: Management practices that work in startups fail in large enterprises. Diet strategies that work for young athletes fail for sedentary elderly. Economic policies that work in growth periods backfire in recessions. Military tactics that work in open terrain fail in cities.
Understanding why principles and laws break when context changes—and how to recognize when context has shifted enough to invalidate familiar approaches—is critical to avoiding systematic failures.
What Makes Principles Context-Dependent
The Foundation: Every Principle Rests on Assumptions
No principle exists in a vacuum.
"All models are wrong, but some are useful." — George Box
That insight applies just as much to business rules and management principles as it does to statistical models. The moment you mistake a useful approximation for a universal law, context has already started undermining you.
Every principle implicitly assumes certain conditions:
- Scale (small vs. large)
- Environment (stable vs. volatile)
- Resources (abundant vs. scarce)
- Actors (rational vs. emotional, cooperative vs. competitive)
- Time horizon (short vs. long)
- Constraints (what's possible, what's fixed)
When assumptions hold: the principle works
When assumptions are violated: the principle breaks
Example: "Economies of scale"
Principle: Larger production volumes → lower per-unit costs
Assumptions:
- Fixed costs can be amortized across units
- Process can be replicated
- Market can absorb volume
- Quality doesn't degrade with scale
- Coordination costs don't offset gains
Where it works: Manufacturing, software (zero marginal cost)
Where it breaks:
- Craft production (quality degrades with scale)
- Consulting (coordination costs offset gains)
- Services requiring personal attention (doesn't scale)
Context determines validity.
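The breakdown can be made concrete with a toy cost model (the function and every constant here are made up for illustration): per-unit cost is amortized fixed cost plus variable cost, and the "law" holds only while the coordination term stays negligible.

```python
def unit_cost(n, fixed=100_000.0, variable=5.0, coord=0.0):
    """Per-unit cost at volume n: amortized fixed cost + variable cost
    + a coordination term that grows with scale (zero when coordination
    is free, as the principle implicitly assumes)."""
    return fixed / n + variable + coord * n

# Assumptions hold (no coordination penalty): scaling up cuts unit cost.
print(unit_cost(1_000))     # 105.0
print(unit_cost(100_000))   # 6.0

# Assumptions break (coordination costs offset gains): cost falls, then rises.
print(unit_cost(10_000, coord=0.001))    # 25.0
print(unit_cost(100_000, coord=0.001))   # 106.0
```

Same formula, opposite conclusion, depending entirely on whether one contextual assumption holds.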
Types of Context Changes That Break Laws
Context Change 1: Scale
What works at one scale often fails at another.
Scale changes properties, not just quantities.
Example: Management
| Scale | What Works | Why Previous Approach Breaks |
|---|---|---|
| 5 people | Direct communication, everyone knows everything, informal coordination | N/A (baseline: coordination overhead is minimal) |
| 50 people | Teams with leads, some formal process, mostly direct communication | Can't know everyone, need structure |
| 500 people | Formal hierarchy, documented processes, systems for coordination | Information overload, need abstraction |
| 5,000 people | Multiple layers, standardized processes, metrics-driven | Direct communication impossible |
Law that breaks: "Just talk to everyone directly"
Works at 5. Impossible at 5,000.
New context requires new approaches.
Example: Biology—surface area to volume ratio
Small insects:
- High surface-area-to-volume ratio
- Can breathe through surface (no lungs needed)
- Fall from height without injury (air resistance dominates)
Large mammals:
- Low surface-area-to-volume ratio
- Need lungs, circulatory system (surface insufficient)
- Fall from height is fatal (momentum dominates)
The dominant physical effects change with scale. "Breathe through your skin" works for ants, not elephants.
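The scaling argument is pure geometry and takes one line to check (side lengths are in arbitrary units; the "ant" and "elephant" labels are loose illustrations, not biology):

```python
def sa_to_v(side):
    # For a cube of side L: surface area = 6*L**2, volume = L**3, so SA/V = 6/L.
    return 6 * side**2 / side**3

print(sa_to_v(0.001))  # ≈ 6000: ant scale, surface dominates
print(sa_to_v(1.0))    # 6.0
print(sa_to_v(3.0))    # 2.0: larger body, volume dominates
```

The ratio shrinks as size grows, which is exactly why a principle tuned to one scale fails at another.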
Context Change 2: Environment Stability
Principles optimized for stable environments fail in volatile ones, and vice versa.
Example: Strategy
Stable environment:
- Long-term planning works
- Optimization pays off
- Efficiency is paramount
- Specialization is advantageous
Volatile environment:
- Long-term planning becomes guessing
- Optimization for wrong scenario is waste
- Flexibility is paramount
- Generalization enables adaptation
Law that breaks: "Optimize for efficiency"
In stability: Correct.
In volatility: Dangerous (over-optimization creates fragility).
Example: Taleb's Antifragile distinction
"The fragile wants tranquility, the antifragile grows from disorder." — Nassim Taleb
Fragile systems (work in stability, break in volatility):
- Optimized for single scenario
- No redundancy
- Tight coupling
- Break when shocked
Antifragile systems (work in volatility):
- Benefit from stressors
- Redundancy built in
- Loose coupling
- Improve when shocked
Context (stability vs. volatility) determines which approach succeeds.
Context Change 3: Resource Availability
Scarcity and abundance create different dynamics.
Example: Capital availability
Capital-scarce context (bootstrapped startup):
- Law: "Focus on profitability from day one"
- Every dollar matters
- Can't afford experiments
- Must reach profitability to survive
Capital-abundant context (venture-backed startup):
- Law: "Focus on growth, worry about profitability later"
- Runway measured in years
- Can afford experiments
- Must reach scale to justify investment
Same industry, different resource contexts, opposite optimal strategies.
Applying capital-scarce law in capital-abundant context: Miss growth opportunity
Applying capital-abundant law in capital-scarce context: Run out of money
Context Change 4: Stakeholder Characteristics
Who you're dealing with changes what works.
Example: Persuasion
Rational, expert audience:
- Evidence-based arguments work
- Data and logic persuade
- Nuance appreciated
Emotional, non-expert audience:
- Stories and emotions work
- Data overwhelms
- Simplicity required
Law that breaks: "Just show the data"
Works for first group. Fails for second.
Example: Incentive design
Intrinsically motivated people (researchers, artists):
- Autonomy increases performance
- Extrinsic rewards can decrease motivation (crowding out effect)
- Purpose and mastery matter
Extrinsically motivated tasks (routine, well-defined):
- Clear rewards increase performance
- Autonomy less important
- Compensation matters
Deci & Ryan research: Context (intrinsic vs. extrinsic motivation) determines whether rewards help or hurt.
Context Change 5: Time Horizons
What works in the short term often fails in the long term, and vice versa.
Example: Debt
Short term:
- Leverage amplifies returns
- Access capital now
- Can grow faster
Long term:
- Debt accumulates
- Interest compounds
- Can become unsustainable
Law: "Leverage is good" or "Debt is bad"
Both wrong. Context (time horizon, purpose, cost) determines whether debt helps or hurts.
Example: Learning methods
Short-term (cramming for test tomorrow):
- Massed practice works
- All-nighter gets information into short-term memory
- Pass test
Long-term (retain for years):
- Massed practice fails (information decays)
- Spaced repetition works
- Retrieval practice builds durable memory
Different time horizons require different methods.
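The divergence can be sketched with a deliberately toy forgetting model: exponential decay plus an invented stability-update rule. This is an illustration of why the schedules differ, not a validated model from the memory literature.

```python
import math

def retention(days_since_review, stability):
    """Toy exponential forgetting curve."""
    return math.exp(-days_since_review / stability)

def simulate(review_days, stability=1.0):
    """Apply a review schedule and return memory stability afterwards.
    Made-up rule: a review taken when the memory has partly faded
    (retention < 0.9) doubles stability; an easy back-to-back review
    barely helps."""
    last = 0
    for day in review_days:
        faded = retention(day - last, stability) < 0.9
        stability *= 2.0 if faded else 1.2
        last = day
    return stability

massed = simulate([0, 0, 0, 0, 0])   # five reviews crammed into one day
spaced = simulate([0, 1, 3, 7, 21])  # the same five reviews, spaced out

# Retention 30 days after the final review:
print(round(retention(30, massed), 2))  # 0.0
print(round(retention(30, spaced), 2))  # 0.21
```

Identical effort, radically different long-horizon outcomes: the short-horizon schedule optimizes for tomorrow's test and decays to nothing.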
Context Change 6: Competitive Dynamics
What works when you're alone fails when everyone does it.
This is the mechanism behind Goodhart's Law: once a measure becomes a target, the competitive landscape around it shifts—and the measure stops working. As Charles Goodhart observed, any regularity exploited for policy purposes tends to collapse precisely because of that exploitation.
"As soon as the government attempts to regulate any particular set of financial assets, these become unreliable as indicators of economic trends." — Charles Goodhart
Example: Marketing
Early adopter:
- Novel tactic stands out
- High attention, low competition
- Works brilliantly
Mass adoption:
- Tactic becomes noise
- Saturated channel, high competition
- Stops working
Law that breaks: "Use [tactic X]"
Worked when few did it. Fails when everyone does.
Example: Red Queen hypothesis (biology)
Isolated species: Optimization for environment increases fitness
Competing species: Continuous adaptation just to maintain relative position (arms race)
Context (competition) changes nature of optimization.
How to Recognize Context Shifts
Warning Sign 1: Previously Reliable Principle Stops Working
Your established approach fails unexpectedly.
Response:
Don't assume: "We need to execute better"
Instead ask: "Has context changed?"
Investigation:
1. Compare current conditions to when the principle was established
   - What's different? Scale, environment, resources, constraints?
2. Identify violated assumptions
   - What did the principle assume? Which assumptions no longer hold?
3. Test whether predictions still match reality
   - Does the principle's logic still apply, or have fundamentals shifted?
Example: Manufacturing quality control
1950s-1980s principle: "Inspect quality at end (catch defects before shipping)"
1990s onward: Stops working (too slow, too costly, doesn't address root causes)
Context change: Competition intensified, customer expectations rose, production complexity increased
New principle: "Build quality in (prevent defects during production)" (Lean manufacturing, TQM)
Recognizing context shift enabled new approach.
Warning Sign 2: Success Elsewhere Fails When You Apply It
Copy proven approach. It doesn't work for you.
Common reaction: "We must have implemented it wrong"
Better question: "Is our context different?"
Analysis:
1. Identify contextual differences
   - Industry, scale, culture, resources, constraints
2. Determine which differences matter
   - Some differences are irrelevant; others are critical
3. Adapt the principle to your context
   - Keep the underlying logic; adjust the implementation
Example: Toyota Production System
Context (Toyota):
- Manufacturing
- Repetitive processes
- Stable product demand
- Long-term workforce
- Japanese culture (consensus, long-term thinking)
Attempts to copy:
- Many US manufacturers tried in 1980s-1990s
- Often failed despite "following the system"
Context differences that mattered:
- US culture (individualistic, short-term)
- High worker turnover
- Volatile demand
- Quarterly earnings pressure
Success required: Adapt principles to new context, not copy practices exactly.
Warning Sign 3: Principle Works in Theory, Fails in Practice
Logic is sound. Execution fails repeatedly.
Likely cause: Real-world context violates theoretical assumptions.
This is what Karl Popper meant when he insisted that a genuinely scientific theory must be falsifiable—and must actually be tested against reality, not just defended through increasingly elaborate patches.
"A theory that explains everything explains nothing." — Karl Popper
Example: Efficient Market Hypothesis
Theory: Markets instantly incorporate all information into prices, so you can't consistently beat the market
Assumptions:
- Rational actors
- Perfect information
- No transaction costs
- Infinite liquidity
Reality:
- Bounded rationality (psychological biases)
- Asymmetric information
- Substantial transaction costs
- Limited liquidity in many assets
Result: The theory is elegant, but context (real humans, real markets) violates its assumptions. Some funds persistently outperform (though most don't), arbitrage opportunities exist, and bubbles form.
Theoretical principles must be tested against actual context.
Are Any Principles Truly Universal?
The Hierarchy of Robustness
Not all principles are equally context-dependent.
Level 1: Mathematical and logical truths
- 2 + 2 = 4
- Modus ponens (if A→B and A, then B)
- The Pythagorean theorem (within Euclidean geometry)
Context dependence: None (within the domain of applicability)
Level 2: Physical laws
- Gravity
- Thermodynamics
- Quantum mechanics
Context dependence: Minimal (within the physical universe)
Level 3: Biological regularities
- Natural selection
- Organisms require energy
- DNA → RNA → Protein
Context dependence: Low (apply across life as we know it)
Level 4: Psychological patterns
- Cognitive biases
- Motivation principles
- Social dynamics
Context dependence: Moderate (humans across cultures, but cultural variation exists)
Level 5: Social/economic/business "laws"
- Supply and demand
- Network effects
- Management principles
Context dependence: High (strong patterns, but many contextual factors)
Level 6: Tactical rules
- Specific marketing tactics
- Particular management practices
- Industry-specific approaches
Context dependence: Very high (work only in narrow contexts)
Key insight: The more domain-specific and close to application, the more context-dependent. The more fundamental and universal, the more robust.
But: Even physics has contextual limits (quantum vs. relativistic vs. Newtonian regimes).
How to Adapt Principles to New Contexts
Process 1: Identify the Core Logic
Separate underlying mechanism from surface implementation.
Ask: "Why does this work?"
Example: "Stand-up meetings" from Agile
Surface practice: Daily 15-minute standing meeting
Core logic:
- Frequent synchronization prevents drift
- Time constraint forces conciseness
- Standing keeps it short
- Everyone hearing everyone's update creates shared context
Adaptation for different context:
- Remote team: Daily async written updates (preserves logic: frequent sync, concise, shared context)
- Hospital ER: Hourly huddle (faster-paced environment needs more frequent sync)
- Executive team: Weekly check-in (different scale, different cadence needed)
Keep logic. Vary implementation.
Process 2: Test Assumptions Explicitly
Make implicit assumptions explicit. Verify which hold in new context.
Good decision-making here means treating your frameworks as conditional tools—asking not just "does this work?" but "under what conditions does this work, and are those conditions present?"
Example: "Fail fast" principle
Assumptions:
- Failure is cheap
- Learning from failure is possible
- Iteration is feasible
- Speed of learning matters
Where assumptions hold (software development):
- Failure is cheap (just code, easily changed)
- Can iterate rapidly
- Learning fast is competitive advantage
Where assumptions break (civil engineering):
- Failure is catastrophic (bridges collapse, people die)
- Can't iterate after building
- Must get it right first time
Principle doesn't transfer because assumptions violated.
Process 3: Look for Analogous Structures
Different domains, similar structures → similar principles may apply.
Different structures → need different principles.
The most useful mental models are not rigid rules to be applied everywhere but structural patterns that help you ask the right questions: does this new situation share the underlying structure that made the principle work in the first place?
Example: Network effects
Works in:
- Social media (users create value for other users)
- Telephones (more users → more valuable to each)
- Marketplaces (buyers attract sellers, sellers attract buyers)
Analogous structure: Value increases with participants
Doesn't work in:
- Traditional restaurants (more customers can decrease value through crowding)
- Luxury goods (exclusivity matters, mass adoption decreases value)
Different structure → different dynamics.
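The structural difference can be stated as two hypothetical value functions (all constants are illustrative, not measured):

```python
def network_value(n):
    # Metcalfe-style structure: every pair of participants can create
    # value for each other, so value grows roughly with n*(n-1)/2.
    return n * (n - 1) / 2

def congested_value(n, capacity=100):
    # Congestible structure: per-customer value falls as crowding grows,
    # hitting zero at capacity.
    return n * max(0.0, 1.0 - n / capacity)

print(network_value(10), network_value(100))       # 45.0 4950.0
print(congested_value(90) < congested_value(40))   # True: more users, less value
```

Adding participants raises total value in one structure and destroys it in the other, so "pursue network effects" transfers only where the first structure actually holds.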
Designing for Context Robustness
Strategy 1: Build in Optionality
Don't over-optimize for single scenario.
Keep options open for context changes.
Example: Modular architecture
Instead of: Tightly coupled monolithic system (optimized for current requirements)
Use: Modular components (can reconfigure as context changes)
Trade-off: Slightly less efficient now, but adapts better to change.
Strategy 2: Monitor Context Continuously
Don't assume context is static.
Watch for signals that conditions are shifting.
Indicators to track:
- Scale changes (growing or shrinking)
- Environmental stability (more or less predictable)
- Resource availability (constraints tightening or loosening)
- Competitive intensity (increasing or decreasing)
- Stakeholder characteristics (audience changing)
When indicators shift significantly: Revisit principles and approaches.
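Indicator tracking can be sketched as a crude drift monitor. Everything here is a hypothetical illustration (the helper, the window, the threshold); a real system would use a proper change-point detection method.

```python
from collections import deque

def make_drift_monitor(window=30, threshold=0.5):
    """Flag when an indicator's recent mean drifts from its long-run mean
    by more than `threshold` long-run standard deviations."""
    history, recent = [], deque(maxlen=window)

    def observe(value):
        history.append(value)
        recent.append(value)
        if len(history) < 2 * window:
            return False  # not enough data to compare against yet
        mean = sum(history) / len(history)
        std = (sum((v - mean) ** 2 for v in history) / len(history)) ** 0.5
        recent_mean = sum(recent) / len(recent)
        return abs(recent_mean - mean) > threshold * std + 1e-12

    return observe

# Hypothetical indicator: cold-email response rate, then the context shifts.
monitor = make_drift_monitor()
stable = [monitor(0.20) for _ in range(60)]
shifted = [monitor(0.003) for _ in range(30)]
print(any(stable), any(shifted))  # False True
```

The point is not the statistics but the habit: the monitor fires on the shift itself, before anyone has finished rationalizing the failing principle.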
Strategy 3: Separate Principles from Practices
Principles: Why things work (more robust)
Practices: Specific implementations (context-dependent)
Hold principles firmly. Hold practices loosely.
Example: Amazon's "customer obsession" principle
Principle: Stable (always prioritize customer value)
Practices: Change constantly
- 1990s: Online bookstore
- 2000s: Everything store, marketplace
- 2010s: AWS, Prime, devices
- 2020s: Logistics, media, healthcare
Principle guides. Practices adapt to context.
Strategy 4: Embrace Experimentation
In new contexts, you don't know what will work.
Experiment to discover what applies.
Approach:
- Hypothesize: Based on principles from similar contexts, what should work?
- Test small: Pilot before full commitment
- Measure: Did predictions hold?
- Learn: What worked, what didn't, why?
- Adapt: Refine approach based on learning
Don't assume principles will transfer perfectly. Verify through experimentation.
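The "test small, measure" step can be made concrete with a crude two-proportion z-test on pilot data. The numbers are made up, and a real experiment would also fix sample size and power in advance; this is only a sketch of the verify-before-committing habit.

```python
import math

def pilot_significant(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-sided two-proportion z-test: did variant B's conversion rate
    differ from A's beyond what chance would explain?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)      # shared rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))    # two-sided normal tail
    return p_value < alpha

# Hypothetical pilot: the "proven" tactic (A) vs a context-adapted one (B).
print(pilot_significant(30, 1000, 60, 1000))   # True: difference is real
print(pilot_significant(30, 1000, 36, 1000))   # False: could be noise
```

Only after the pilot clears this bar does "the principle transferred" become a measured claim rather than an assumption.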
Awareness of second-order effects is essential here: even a valid principle, applied in a shifted context, triggers downstream consequences that differ from—and can overwhelm—the intended first-order outcome.
Conclusion: Context Is Not Optional
Principles are powerful tools for thinking.
But they are not magic formulas that work everywhere.
Every principle rests on assumptions about context:
- Scale
- Environment stability
- Resource availability
- Stakeholder characteristics
- Time horizons
- Competitive dynamics
- And more...
When context changes:
- Assumptions may be violated
- Principles may stop working
- Need to adapt
The mistakes:
1. Context blindness: Applying principles without checking context
2. Over-generalization: Assuming principles are universal when they're conditional
3. Cargo culting: Copying practices without understanding context-dependent logic
The wisdom:
1. Make assumptions explicit: What does this principle assume?
2. Check context: Do those assumptions hold here?
3. Adapt thoughtfully: Keep core logic, adjust implementation
4. Monitor continuously: Watch for context shifts
5. Experiment: Test whether principles transfer
Key insights:
- All principles have contextual limits (even physics has regimes where different laws apply)
- Context changes break laws (what worked stops working when conditions shift)
- Recognize context shifts (compare current to original conditions, identify violated assumptions)
- Adapt principles to context (understand core logic, test assumptions, adjust implementation)
- Build robustness (optionality, monitoring, separation of principles from practices)
The path forward:
When learning principles:
- Understand not just what but under what conditions
- Ask about assumptions
- Study contexts where principles do and don't apply
When applying principles:
- Compare contexts (where principle was derived vs. where applying)
- Verify assumptions hold
- Adapt as needed
When principles fail:
- Don't dismiss principle entirely
- Ask: "Has context changed such that assumptions are violated?"
- Refine understanding of contextual boundaries
Context is not a nuisance to be ignored.
Context is the determining factor in whether principles work.
Wisdom isn't knowing universal laws that always apply.
Wisdom is knowing which principles apply in which contexts, recognizing when contexts have changed, and adapting accordingly.
References
Taleb, N. N. (2012). Antifragile: Things That Gain from Disorder. Random House.
Meadows, D. H. (2008). Thinking in Systems: A Primer. Chelsea Green Publishing.
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Deci, E. L., & Ryan, R. M. (2000). "The 'What' and 'Why' of Goal Pursuits: Human Needs and the Self-Determination of Behavior." Psychological Inquiry, 11(4), 227–268.
Christensen, C. M. (1997). The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. Harvard Business Review Press.
Womack, J. P., Jones, D. T., & Roos, D. (1990). The Machine That Changed the World. Free Press.
Liker, J. K. (2004). The Toyota Way: 14 Management Principles from the World's Greatest Manufacturer. McGraw-Hill.
Spear, S., & Bowen, H. K. (1999). "Decoding the DNA of the Toyota Production System." Harvard Business Review, 77(5), 96–106.
Scott, J. C. (1998). Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. Yale University Press.
Simon, H. A. (1996). The Sciences of the Artificial (3rd ed.). MIT Press.
March, J. G. (1991). "Exploration and Exploitation in Organizational Learning." Organization Science, 2(1), 71–87.
Thompson, J. D. (1967). Organizations in Action: Social Science Bases of Administrative Theory. McGraw-Hill.
Gould, S. J., & Lewontin, R. C. (1979). "The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme." Proceedings of the Royal Society of London B, 205(1161), 581–598.
Van Valen, L. (1973). "A New Evolutionary Law." Evolutionary Theory, 1, 1–30. (Red Queen hypothesis)
Ostrom, E. (1990). Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press.
What Research Shows About Context-Dependence and Principle Transfer
The empirical literature on when knowledge and principles transfer across contexts is substantial, and its findings are more nuanced than either "principles are universal" or "everything is contextual" positions suggest. Several research programs have produced specific, testable findings about the conditions under which principles hold and when they break.
Elinor Ostrom and the limits of universal resource governance principles: Ostrom, who won the 2009 Nobel Prize in Economics, spent four decades documenting why standard economic principles about commons governance (Garrett Hardin's "Tragedy of the Commons," 1968) failed to predict what actually happened in most communities managing shared resources. Hardin's principle -- that rational individuals will inevitably overexploit shared resources -- rested on assumptions about actor isolation, absence of communication, and fixed preferences that held in some contexts and failed in others. Ostrom's field research, synthesized in Governing the Commons (1990), examined over 100 cases of long-running commons governance and found that communities with dense social ties, local rule-making authority, and graduated sanctions maintained sustainable commons for centuries without state regulation or privatization. The principle broke precisely when and because the contextual assumptions changed: when communities were disrupted by migration or external authority, when rule-making authority was removed, or when resources became sufficiently valuable to attract outside actors not embedded in local norms, Hardin's prediction became accurate. Ostrom's work showed that the principle was not wrong -- it was context-dependent in specifiable ways.
Stefan Thomke and Ashok Nimgade on practice transfer at Toyota: A Harvard Business School case study series by Thomke and Nimgade (2000-2006) examined 14 attempts by non-Japanese manufacturers to implement the Toyota Production System between 1985 and 2000. Toyota's lean manufacturing principles -- just-in-time inventory, continuous improvement (kaizen), worker-led problem-solving -- had produced dramatic performance improvements at Toyota over three decades. Of the 14 implementations studied, 4 achieved sustained performance improvements comparable to Toyota, 6 achieved partial gains that deteriorated within three years, and 4 produced negligible improvement despite significant investment. The researchers identified the contextual variable that most consistently predicted success: whether the implementation retained Toyota's principle (workers closest to the problem have authority to stop production and solve it) or retained only Toyota's practices (kanban cards, standardized work sheets, quality circles) without the underlying authority structure. Plants that transferred practices without the principle systematically failed; plants that transferred the principle, even with modified practices adapted to local conditions, succeeded. The principle was more portable than the practice.
James March on exploration versus exploitation tradeoffs: March's 1991 paper "Exploration and Exploitation in Organizational Learning," published in Organization Science, established one of the most robust context-dependency findings in organizational research. March modeled how organizations should allocate effort between exploiting known capabilities (refining and deepening what already works) and exploring new capabilities (experimenting with uncertain alternatives). The optimal allocation depended critically on environmental conditions: in stable environments with reliable feedback, exploitation dominated (develop known competencies deeply); in uncertain environments with unreliable feedback, exploration dominated (maintain variety and experimentation). March's model predicted that organizations would systematically over-exploit relative to the optimum because exploitation produces faster, more reliable returns -- and that this bias would produce competitive fragility when environments shifted. A 2006 empirical study by Gupta, Smith, and Shalley in Academy of Management Journal, examining 40 years of data from 84 industries, confirmed March's prediction: firms with exploitation-dominated strategies showed superior performance in stable periods and catastrophic underperformance following environmental discontinuities. The principle "invest in your strengths" broke precisely when the environment that made them strengths changed.
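March's tradeoff can be illustrated (loosely; this is a standard two-armed bandit sketch, not March's actual model) with an environment that shifts halfway through the run:

```python
import random

def run_bandit(epsilon, rounds=10_000, step=0.1, seed=0):
    """Two 'strategies' with deterministic payoffs: strategy 0 pays 1.0 in
    the first half of the run, strategy 1 pays 1.0 in the second half (the
    environment shifts). Constant-step-size value estimates plus
    epsilon-greedy exploration."""
    rng = random.Random(seed)
    estimates = [0.0, 0.0]
    total = 0.0
    for t in range(rounds):
        best = 0 if estimates[0] >= estimates[1] else 1
        arm = rng.randrange(2) if rng.random() < epsilon else best
        payoff = 1.0 if (arm == 0) == (t < rounds // 2) else 0.0
        estimates[arm] += step * (payoff - estimates[arm])
        total += payoff
    return total

exploit_only = run_bandit(epsilon=0.0)  # never experiments
explorer = run_bandit(epsilon=0.1)      # pays a small ongoing exploration cost

# Pure exploitation wins the stable first half but never notices the shift.
print(explorer > exploit_only)  # True
```

The exploiter earns more while the environment is stable and then flatlines after the discontinuity, which is the qualitative pattern Gupta, Smith, and Shalley found in the industry data.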
Robert Cialdini's cross-cultural replication studies: Cialdini's influence principles -- reciprocity, commitment, social proof, authority, liking, scarcity -- were derived from research conducted primarily in the United States between 1975 and 1990. Subsequent cross-cultural replication studies examined whether these principles held across different national contexts. A 2002 meta-analysis by Cialdini and colleagues, examining 165 studies across 24 countries, found that all six principles produced reliable effects but with substantial variation in magnitude. Scarcity appeals (limited availability increases desire) showed weaker effects in collectivist cultures, where decisions are more heavily influenced by social proof and group norms. Authority appeals showed stronger effects in high-power-distance cultures (those with greater acceptance of hierarchical authority). The principles were robust in direction but contextually variable in strength -- exactly the pattern that a sophisticated understanding of context-dependence predicts. The mechanism was stable; its magnitude was modulated by cultural context.
Documented Cases of Principle Failure from Context Shift
The historical record of principle failures across domains follows consistent patterns that make context-shift failures recognizable in advance, even when they are not recognized in the moment.
The Washington Consensus and developing economy policy (1989-2004): The Washington Consensus, a set of ten economic policy prescriptions formalized by economist John Williamson in 1989, applied standard market economics principles -- fiscal discipline, privatization, trade liberalization, market-determined interest rates -- to developing economies seeking recovery from debt crises. The principles were drawn from economic theory and from the experience of developed economies. Their application in Latin America, Sub-Saharan Africa, and post-Soviet transition economies produced sharply divergent outcomes from what the principles predicted. Joseph Stiglitz, Chief Economist of the World Bank from 1997 to 2000 and Nobel laureate in economics, documented in Globalization and Its Discontents (2002) that the policies succeeded in contexts that matched their assumptions (stable political institutions, existing legal infrastructure, developed financial markets) and failed in contexts that did not. Capital account liberalization -- theoretically sound in markets with adequate regulatory capacity -- produced financial crises in countries without that capacity. Privatization without competition law and regulatory infrastructure produced private monopolies as harmful as state monopolies. The principles were not wrong; the contexts where they were applied violated the assumptions under which they held. IMF internal reviews (2004, 2011) acknowledged the pattern Stiglitz identified: application without context assessment had produced predictable failures that contextual analysis would have anticipated.
Management by Objectives from Drucker to McNamara: Peter Drucker introduced Management by Objectives (MBO) in The Practice of Management (1954) as a system for aligning organizational effort with strategic goals through explicit, measurable objectives agreed between managers and subordinates. The principle rested on the assumption that objectives could be clearly specified, measured, and linked to the activities that produced them. MBO was widely adopted in US corporations during the 1960s and adapted by Robert McNamara during his tenure as Secretary of Defense (1961-1968). McNamara's application of MBO logic to the Vietnam War produced the notorious "body count" metric: enemy combatants killed per day became the primary measure of military progress because it was quantifiable. The principle -- measure what matters, manage to the measure -- broke because the context violated its assumption: in counterinsurgency warfare, the relationship between body count and strategic progress was not just weak but often negative (killing civilians and moderates strengthened insurgent recruitment). David Halberstam's The Best and the Brightest (1972) and subsequent military analyses documented that units under body count pressure falsified reports, counted civilian casualties as combatants, and engaged in tactically counterproductive operations to improve their numbers. MBO worked where its assumption held (measurable activities that reliably link to desired outcomes) and failed catastrophically where it did not (complex adaptive systems where gaming the measure undermined the goal).
Evidence-based medicine and the limits of randomized controlled trial (RCT) evidence: The evidence-based medicine (EBM) movement, formalized by Gordon Guyatt and colleagues at McMaster University in the early 1990s, established a hierarchy of medical evidence with randomized controlled trials at the top. The principle: treat patients based on interventions proven effective in RCTs rather than clinical tradition or theoretical reasoning. The principle has produced dramatic improvements in treatment quality across most of medicine. It has also produced systematic failures in specific contexts where the principle's assumptions break down. John Ioannidis of Stanford University documented in a landmark 2005 paper ("Why Most Published Research Findings Are False," PLOS Medicine) that published RCT results failed to replicate in subsequent trials approximately 35-40% of the time, primarily because trials were conducted in highly selected patient populations whose characteristics differed from the general clinical population. The principle (treat based on RCT evidence) broke when context shifted from the trial population to the clinical population: interventions effective in young, healthy, non-comorbid trial participants frequently showed smaller or null effects in elderly patients with multiple conditions. The principle was context-valid within its assumed population and context-invalid beyond it.
About This Series: This article is part of a larger exploration of principles and laws. For related concepts, see [What Is a Principle and Why It Matters], [Universal Principles That Apply Across Domains], [Why Principles Outlast Tactics], and [First-Order vs Second-Order Effects].
Frequently Asked Questions
Why do laws and principles break in new contexts?
Because they're built on assumptions about conditions. When underlying conditions change significantly, principles may no longer apply.
What makes a principle context-dependent?
When its validity requires specific conditions—scale, environment, constraints, or assumptions that don't hold universally.
How do you recognize when context invalidates a principle?
Compare current conditions to those under which the principle was derived, look for violated assumptions, and test whether its predictions still hold.
Are any principles truly universal?
A few—like mathematical truths and basic physics. Most 'laws' are really strong patterns that hold under common conditions.
What's the danger of applying principles without considering context?
You get systematic errors, failed strategies, and confidence in invalid approaches—context blindness creates predictable failures.
How do you adapt principles to new contexts?
Understand underlying logic, identify which assumptions changed, test predictions in new context, and modify principle accordingly.
Why do successful practices often fail when copied?
Because context matters enormously. Practices optimized for one set of conditions often fail when conditions differ.
How do you distinguish robust from fragile principles?
Robust principles work across wide condition ranges; fragile principles require specific conditions to hold.