Step-by-Step: Building a Mental Model

Every person who has ever tried to understand anything, from a child learning that a stove is hot to an executive trying to understand why market share is declining, has built a mental model. A mental model is an internal representation of how something works: a simplified, personal theory that captures the key elements, relationships, and dynamics of a domain well enough to support understanding, prediction, and decision-making. You use mental models constantly, usually without realizing it. When you predict that your colleague will resist a proposed change because "they always resist change," you are applying a mental model. When you estimate that a software project will take longer than the initial estimate because "projects always take longer than planned," you are applying a mental model. When you decide to take a different route to work because "traffic is always bad on Fridays," you are applying a mental model.

The question is not whether you build mental models. You do, automatically, because the human brain cannot function without them. The question is whether your mental models are accurate, useful, and consciously maintained, or whether they are inaccurate, limiting, and invisible. Most people never examine their mental models explicitly. They operate on assumptions and intuitions that were formed through experience, often years or decades ago, and that may no longer match the reality they are trying to navigate. A manager whose mental model of motivation was formed in the 1980s may still believe that financial incentives are the primary driver of employee performance, unaware of the decades of research showing that intrinsic motivation is more powerful for complex work. An investor whose mental model of markets was formed during a long bull run may systematically underestimate downside risks. A software architect whose mental model of system design was formed before cloud computing may reflexively favor on-premises solutions even when cloud alternatives are clearly superior.

Building mental models deliberately, consciously, and systematically is one of the most powerful cognitive skills a person can develop. It transforms understanding from something that happens to you (passively absorbing information) into something you do (actively constructing frameworks that organize information, reveal patterns, and guide action). This guide provides a step-by-step process for building mental models that are accurate, useful, and updatable, whether you are trying to understand a new domain, improve your performance in a familiar one, or simply make better decisions.


What Exactly Is a Mental Model?

A mental model is an internal representation of how something works. It is a simplified structure that captures the essential elements, relationships, and rules of a domain, omitting details that are irrelevant to the model's purpose while preserving the structure that matters for understanding, prediction, and decision-making.

Mental models are not confined to any single domain. You have mental models for physical systems (how does a car engine work?), social systems (how does office politics work?), economic systems (how do markets respond to interest rate changes?), biological systems (how does the immune system fight infections?), and psychological systems (how do people react to bad news?). You have mental models for specific people (what motivates your boss? what will your partner say about this?), for organizations (how does this company make decisions?), and for yourself (what are my strengths? what triggers my stress?).

What makes mental models so powerful and so dangerous is that they operate largely below conscious awareness. You do not decide to consult your mental model of office politics before navigating a meeting. You just "know" who to talk to, what to say, what topics to avoid, and how to read the room. This fluency is the mental model in action, operating automatically and invisibly to guide your behavior. When the mental model is accurate, this automatic guidance produces good outcomes. When the mental model is inaccurate, perhaps because the organizational dynamics have shifted, or because the model was formed from limited experience, or because the domain has changed in ways the model doesn't capture, this same automatic guidance produces systematic errors.

Mental Models as Simplifications

Every mental model is a simplification. The real world is infinitely complex; a mental model that captured every detail of reality would be as complex as reality itself and therefore useless. The value of a mental model lies precisely in its simplicity: it strips away the irrelevant details and preserves the structural patterns that matter for the model's purpose.

George Box, the statistician, famously said: "All models are wrong, but some are useful." This is the essential truth about mental models. Your mental model of how the economy works is wrong in many details, but if it captures the key relationships well enough to guide reasonable investment decisions, it is useful. Your mental model of how your team functions is wrong in many details, but if it captures the key dynamics well enough to guide effective management decisions, it is useful.

The goal of building a mental model is not to achieve perfect accuracy, which is impossible, but to achieve useful accuracy: accuracy that is sufficient for the decisions and predictions the model needs to support. A pilot's mental model of aerodynamics does not need to include quantum-level physics; it needs to capture the relationships between angle of attack, airspeed, altitude, and lift well enough to fly the airplane safely. A manager's mental model of team dynamics does not need to include every psychological nuance of every team member; it needs to capture the key motivational patterns, communication styles, and conflict dynamics well enough to lead the team effectively.

Mental Models as Hypotheses

A crucial mindset for building effective mental models is to treat them as hypotheses rather than facts. A hypothesis is a provisional explanation that is subject to testing, revision, and replacement as new evidence emerges. Treating your mental model as a hypothesis keeps you open to updating it when reality contradicts it, which is the single most important habit for maintaining model accuracy over time.

The opposite mindset, treating your mental model as settled truth, creates what psychologists call confirmation bias: the tendency to seek, notice, and remember information that confirms the model while ignoring, dismissing, or forgetting information that contradicts it. Confirmation bias is the primary mechanism by which inaccurate mental models persist long after the evidence against them has accumulated. A manager who "knows" that remote workers are less productive will notice every instance of a remote worker missing a deadline while ignoring the many instances of remote workers delivering excellent work, thereby confirming a model that may be completely wrong.


How Do I Identify Core Concepts to Include?

Building a mental model begins with identifying the fundamental elements and relationships that define the domain. This is the most challenging step because it requires distinguishing between what is essential (must be in the model) and what is peripheral (can be safely omitted). Several approaches help with this distinction.

Start with the Most Fundamental Elements

Ask: What are the key entities in this domain? If you are building a mental model of a market, the key entities might include buyers, sellers, products, prices, substitutes, and regulators. If you are building a model of team dynamics, the key entities might include team members, their roles, their motivations, their relationships, and the tasks they are working on. If you are building a model of a disease, the key entities might include the pathogen, the host immune system, transmission mechanisms, symptoms, and treatments.

Map the Relationships

Once you have identified the key entities, ask: How do they interact? Interactions are the relationships, flows, and causal connections between entities. In a market model: buyers and sellers interact through transactions; prices signal the relative scarcity of products; substitutes provide alternatives when prices rise; regulators impose constraints on behavior. In a team dynamics model: team members interact through communication; roles define who is responsible for what; motivations drive engagement and effort; relationships determine trust, conflict, and collaboration patterns.

The relationships are often more important than the entities themselves. Two market models with the same entities but different relationship structures will generate very different predictions. Understanding how elements interact is more valuable than simply knowing what the elements are.

Identify the Governing Principles

Ask: What principles or rules govern behavior in this domain? Principles are the regularities that hold across many specific situations and give the domain its characteristic structure. In economics, supply and demand is a governing principle: when demand exceeds supply, prices rise, which reduces demand and increases supply until equilibrium is reached. In psychology, loss aversion is a governing principle: people feel losses more intensely than equivalent gains, which shapes risk-taking behavior. In ecology, carrying capacity is a governing principle: populations grow until they reach the limits of their resource environment, then stabilize or crash.

Governing principles are the most powerful components of a mental model because they generate predictions across a wide range of specific situations. A person who understands supply and demand can reason about any market, from housing to labor to commodities, even if they have never studied that specific market before. A person who understands loss aversion can predict behavior in negotiations, investments, organizational changes, and personal decisions.
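To make the supply-and-demand example concrete, here is a minimal sketch of the price-adjustment loop it describes. The linear demand and supply curves and all the numbers are illustrative assumptions, not a real market model:

```python
# Toy price-adjustment loop: excess demand pushes the price up,
# excess supply pushes it down, until the two quantities balance.
# The linear curves and coefficients are illustrative assumptions.

def demand(price):
    return max(0.0, 100 - 2 * price)  # quantity demanded falls as price rises

def supply(price):
    return max(0.0, 3 * price)        # quantity supplied rises as price rises

price = 10.0
for _ in range(200):
    excess_demand = demand(price) - supply(price)
    price += 0.05 * excess_demand     # price moves toward balance

# At equilibrium, 100 - 2p = 3p, so p = 20 and quantity = 60.
print(round(price, 2))  # → 20.0
```

From any non-negative starting price, this loop settles at the same equilibrium, which is exactly what the governing principle predicts.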

Look for Patterns That Repeat

One of the most efficient ways to build mental models is to look for patterns that repeat across different domains. Many complex systems share structural patterns that produce similar behavioral dynamics regardless of the specific domain. Reinforcing feedback loops appear in population growth, compound interest, viral marketing, and arms races. Balancing feedback loops appear in thermostats, market equilibrium, homeostasis, and budget controls. Delays between cause and effect appear in drug therapies, economic policies, educational interventions, and organizational changes.

Recognizing these recurring patterns allows you to transfer understanding from domains you know well to domains you are still learning. If you understand reinforcing feedback loops from studying compound interest, you can recognize the same dynamic in viral growth, organizational decline, or reputation effects, even if the specific domain is new to you.
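The two loop patterns reduce to a few lines of arithmetic. This sketch uses made-up numbers (a 5% growth rate for the reinforcing loop, a thermostat-style correction for the balancing loop) purely as illustrations:

```python
# Reinforcing loop: change is proportional to the current value,
# so the value compounds (population growth, compound interest).
def reinforcing(value, rate, steps):
    for _ in range(steps):
        value += rate * value
    return value

# Balancing loop: change is proportional to the gap to a goal,
# so the value settles (thermostat, market equilibrium).
def balancing(value, goal, correction, steps):
    for _ in range(steps):
        value += correction * (goal - value)
    return value

grown = reinforcing(100.0, 0.05, 10)      # compounds past 162
settled = balancing(15.0, 21.0, 0.5, 10)  # closes in on 21.0
print(round(grown, 1), round(settled, 2))  # → 162.9 20.99
```

The same two structures, transplanted into a different domain, produce the same qualitative behavior, which is why recognizing the pattern transfers across domains.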

Identify the Variables That Matter Most

Not all variables in a system are equally important. Some variables have outsized influence on the system's behavior; others are peripheral and can be safely ignored in your model. To identify the most important variables, ask: If I could change only one variable, which change would have the largest impact on the system's behavior? The answer identifies the variable with the most leverage. Ask the same question again, excluding the first variable, to identify the second most important, and so on.

This prioritization is essential because of the constraint on model complexity: a mental model that is too detailed becomes unwieldy and loses its value as a thinking tool. By identifying and including only the most influential variables, you build a model that is both simple enough to use and powerful enough to generate useful predictions.
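The "change one variable" question can be run mechanically as a one-at-a-time sensitivity probe. The toy project-duration model below, with its variables and coefficients, is a hypothetical example, not a real estimation formula:

```python
# One-at-a-time sensitivity probe: nudge each variable by 10% and
# rank the variables by how much the output moves. The model and
# its numbers are illustrative assumptions.

def duration_weeks(v):
    # scope in tasks, throughput in tasks per person-week, rework as a fraction
    return v["scope"] * (1 + v["rework_rate"]) / (v["team_size"] * v["throughput"])

baseline = {"scope": 120.0, "rework_rate": 0.3, "team_size": 5.0, "throughput": 2.0}
base = duration_weeks(baseline)

impact = {}
for name in baseline:
    probe = dict(baseline)
    probe[name] *= 1.10                  # +10% perturbation
    impact[name] = abs(duration_weeks(probe) - base)

ranking = sorted(impact.items(), key=lambda kv: -kv[1])
for name, delta in ranking:
    print(f"{name}: {delta:.2f} weeks")  # highest-leverage variable first
```

In this toy model the output is most sensitive to scope, so scope is where the model deserves the most detail; in a real domain the ranking itself is the finding.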


How Detailed Should My Mental Model Be?

The question of model detail is a constant tension in mental model construction. More detail captures more of reality's nuance but makes the model harder to use. Less detail keeps the model manageable but risks missing important dynamics. The answer depends on your purpose, your expertise level, and the specific questions you need the model to answer.

The Minimum Viable Model

Start with the minimum viable model: the simplest model that can generate useful predictions about the specific questions you need to answer. If you are trying to understand why customer churn is increasing, your initial model might include just four elements: customer satisfaction, service quality, competitor attractiveness, and switching costs. This four-element model is crude, but it may be sufficient to generate the hypothesis "churn is increasing because competitor attractiveness has risen while our service quality has declined," which can then be tested against data.
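Externalizing the four-element model takes only a few lines. The linear form, the weights, and the 0-to-1 input scales below are all illustrative assumptions; their only job is to make the hypothesis explicit enough to test:

```python
# Minimum viable churn model: four elements, one output.
# Weights and 0-to-1 inputs are illustrative assumptions.

def churn_risk(satisfaction, service_quality, competitor_attractiveness, switching_costs):
    return (0.35 * (1 - satisfaction)
            + 0.25 * (1 - service_quality)
            + 0.25 * competitor_attractiveness
            - 0.15 * switching_costs)

# Hypothesis: competitors became more attractive while service quality declined.
last_year = churn_risk(0.8, 0.8, 0.3, 0.5)
this_year = churn_risk(0.8, 0.6, 0.6, 0.5)
print(this_year > last_year)  # → True: the model predicts rising churn
```

A model this crude cannot forecast churn rates, but it is already falsifiable: if quality and competitor data move as hypothesized while churn does not, the model needs revision.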

Add Complexity Only When Needed

The minimum viable model should be your starting point, not your ending point. As you test the model against reality (Step 5 below), you will discover areas where the model's predictions diverge from what actually happens. These divergences are signals that the model needs more detail in those specific areas. If your four-element churn model correctly predicts overall churn rates but fails to predict which customers churn, you may need to add detail about customer segments, usage patterns, or relationship history, but only in the areas where the model's predictions are inadequate.

This strategy of starting simple and adding complexity only where needed produces models that are simultaneously simple (in areas where simplicity is sufficient) and detailed (in areas where detail is required). This heterogeneous level of detail is actually a feature, not a bug: it concentrates your cognitive resources on the aspects of the domain that are most complex, most uncertain, or most relevant to your decisions.

The Working Memory Constraint

There is a practical constraint on model complexity that cognitive science has established clearly: human working memory can hold approximately four to seven chunks of information simultaneously. This means that the core of your mental model, the elements and relationships that you actively use when thinking about the domain, should be simple enough to hold in working memory. If your model has 50 elements, you cannot use it for real-time reasoning because you cannot hold 50 elements and their relationships in your head simultaneously.

The solution is hierarchical organization. The top level of your model has a small number of elements (4 to 7) that capture the domain's broadest structure. Each of these elements can be "unpacked" into a sub-model with its own elements and relationships, which can in turn be unpacked further. This hierarchical structure allows you to reason at the level of detail that is appropriate for the current question: broad structure for strategic questions, detailed sub-models for tactical questions.
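A hierarchical model maps naturally onto nested structure. In this sketch the element names are hypothetical; the point is the shape, a top level small enough for working memory with detail packed one level down:

```python
# Hierarchical mental model as nested structure: four top-level
# chunks, each unpackable into a sub-model. Names are hypothetical.

team_model = {
    "capacity": {"senior_hours": None, "junior_hours": None, "meeting_load": None},
    "motivation": {"autonomy": None, "mastery": None, "purpose": None},
    "process": {"review_speed": None, "deploy_frequency": None, "handoffs": None},
    "technical_debt": {"test_coverage": None, "legacy_modules": None},
}

# Strategic questions use only the top level (4 chunks)...
print(list(team_model))
# ...tactical questions unpack one element at a time.
print(list(team_model["process"]))
```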


Step-by-Step: The Model-Building Process

Step 1: Define Your Purpose

What specific question, decision, or prediction do you need the model to support? Write it down explicitly. "I want to understand why our product adoption is slowing down." "I need to predict how our team will respond to the proposed reorganization." "I want to understand the dynamics of the cryptocurrency market well enough to make informed investment decisions."

Your purpose determines everything: which elements to include, which relationships to model, how much detail is needed, and how you will validate the model. A model built without a clear purpose tends to grow in random directions, accumulating detail in areas that happen to be interesting rather than areas that are analytically important.

Step 2: Gather Raw Material

Before constructing the model, immerse yourself in the domain. Read, observe, ask questions, analyze data. The goal is not yet to organize what you learn, but to accumulate a rich stock of observations, facts, patterns, and impressions that the model will later organize.

Sources of raw material include: direct observation (watching the system in action), data analysis (looking for patterns in quantitative data), expert interviews (asking people who know the domain well how they understand it), published research (academic and practitioner literature on the domain), case studies (detailed accounts of specific events or decisions in the domain), and personal experience (your own prior interactions with the domain).

Cast a wide net at this stage. You will encounter information that seems relevant, information that seems irrelevant, and information whose relevance you cannot yet determine. Capture it all. The purpose of this step is to build a rich base of material that the subsequent steps will organize and refine.

Step 3: Identify the Key Elements and Relationships

From your raw material, extract the elements and relationships that appear most important. Use the approaches described in the "How Do I Identify Core Concepts" section above: identify key entities, map their interactions, identify governing principles, look for repeating patterns, and prioritize the variables that matter most.

At this stage, you are looking for the structural skeleton of the domain: the bones that everything else hangs on. You are not trying to capture every detail; you are trying to identify the minimum set of elements and relationships that would explain the major patterns of behavior you observe in the domain.

Write down your key elements and relationships explicitly. Draw a diagram if it helps. The act of externalizing your model, putting it on paper or screen rather than keeping it in your head, forces you to be precise about what you think and reveals gaps and inconsistencies that remain hidden when the model exists only as a vague intuition.

Step 4: Construct the Model

Assemble the key elements and relationships into a coherent structure. This might be a diagram (a concept map, a causal loop diagram, a flowchart), a set of principles (if X happens, then Y follows because of Z), a narrative (a story that describes how the system works and why it behaves the way it does), or a mathematical model (a set of equations that describe the relationships quantitatively).

The format depends on your purpose and your cognitive style. Visual thinkers may prefer diagrams. Verbal thinkers may prefer narratives. Quantitative thinkers may prefer mathematical models. There is no single "right" format; the best format is the one that most effectively supports your thinking and communication about the domain.

As you construct the model, you will likely discover that some relationships are unclear, some elements do not fit neatly into the structure, and some aspects of the domain that seemed simple are actually more complex than you initially thought. This is normal and productive. The construction process itself is a form of thinking: it forces you to confront ambiguities and complexities that passive learning does not.

Step 5: Test the Model Against Reality

This is the step that separates useful mental models from self-confirming delusions. A mental model that has never been tested against reality may feel accurate (because of confirmation bias) while being wildly wrong. Testing is how you discover and correct the model's errors.

Make predictions and check them. The most direct test of a mental model is to use it to make specific, falsifiable predictions and then check those predictions against what actually happens. "My model predicts that if we reduce the price by 10%, demand will increase by at least 15%. Let's test it." "My model predicts that the new hire will struggle with the legacy codebase because they have no experience with the framework. Let's observe and see." The predictions do not need to be precise to be useful; even qualitative predictions ("demand will increase" versus "demand will stay flat") provide valuable tests.

Look for surprises. The most valuable feedback for model improvement comes from surprises: events, outcomes, or behaviors that the model did not predict. Every surprise reveals a gap in the model, a relationship it missed, an element it overlooked, or a principle it got wrong. Rather than dismissing surprises as anomalies ("that was just bad luck" or "that was an outlier"), treat them as diagnostic information: what does this surprise tell me about where my model is wrong?

Seek disconfirming evidence. Because of confirmation bias, you will naturally notice evidence that supports your model and overlook evidence that contradicts it. Counteract this bias by actively seeking evidence that would prove the model wrong. Ask: "What would I expect to see if my model were completely wrong? Do I see any of those things?" This deliberate search for disconfirming evidence is the most effective defense against the persistence of inaccurate models.

Step 6: Refine Based on Failures

When the model's predictions fail, or when surprises reveal gaps, update the model. This is the step that most people struggle with because updating a mental model requires admitting that your previous understanding was wrong, which is psychologically uncomfortable. The discomfort is proportional to how invested you are in the model, how publicly you have expressed confidence in it, and how central it is to your professional identity.

Philip Tetlock's research on expert prediction found that the experts who were most accurate over time were those he called "foxes": people who held their models lightly, updated them readily in response to new evidence, and were comfortable with uncertainty and ambiguity. The least accurate experts were "hedgehogs": people who had a single, powerful model that they applied to every situation and who resisted updating even when evidence accumulated against it.

The practical implications are clear: treat your mental model as a working hypothesis that is perpetually subject to revision, not as a settled truth that must be defended. When evidence contradicts the model, the correct response is to update the model, not to dismiss the evidence.

Model refinement can take several forms:

  • Adjusting relationships: The model's elements are correct, but the relationships between them need recalibration. "I thought price was the primary driver of purchasing decisions, but the data suggests that convenience matters more."
  • Adding nuance: The model's broad structure is correct, but it needs more detail in specific areas to generate accurate predictions. "My model of team dynamics works for routine projects but fails for high-pressure crisis situations. I need to add a stress-response component."
  • Fundamental restructuring: The model's basic assumptions are wrong and need to be replaced. "I thought customer churn was driven primarily by product quality, but the data shows it's driven by competitor marketing. I need to rebuild my model around competitive dynamics rather than product dynamics."


What If My Mental Model Contradicts Expert Views?

This is a common and important question, and the answer is nuanced. Expert models deserve serious respect because they typically rest on deeper knowledge, broader experience, and more rigorous testing than amateur models. But expert models should not be adopted wholesale without understanding the reasoning behind them.

Understand Why Experts Structure Things That Way

When your model contradicts an expert model, the first step is to understand the expert model's reasoning, not just its conclusions. Why do experts structure the domain this way? What evidence supports their framework? What phenomena does their model explain that yours does not? What predictions does their model make that yours cannot?

Often, exploring the expert's reasoning reveals aspects of the domain that your model has missed: subtleties, exceptions, or dynamics that are not visible from your vantage point but that the expert's deeper knowledge has captured. In these cases, the contradiction is resolved by enriching your model with the insights from the expert model.

Build Your Own Version

Even when expert models are correct, there is value in building your own version rather than simply memorizing the expert's framework. A mental model that you have constructed yourself, that you have assembled piece by piece from your own observations and reasoning, is more deeply understood, more readily applicable, and more naturally updated than a framework you have merely adopted from authority.

The goal is not to ignore experts but to integrate expert knowledge into your own cognitive structure. This means understanding the expert model well enough to explain it in your own words, to identify where it applies and where it does not, and to combine it with your own observations and experience into a model that makes sense to you.

When Legitimate Disagreement Exists

In many domains, expert opinion is not unanimous. Different experts, with different methodological approaches, different data sources, and different theoretical commitments, arrive at different models of the same domain. In these cases, the disagreement itself is informative: it reveals areas where the domain is genuinely uncertain, where the evidence supports multiple interpretations, or where the answer depends on assumptions that different experts make differently.

When experts disagree, your task is not to choose one expert and adopt their model uncritically but to understand the reasons for the disagreement, to identify which aspects of each expert's model are well-supported and which are speculative, and to build a model that incorporates the strongest elements of each while acknowledging the genuine uncertainty that the disagreement reveals.


How Do I Update Mental Models as I Learn More?

The most important skill in mental model management is the willingness and ability to update models as new information emerges. This is harder than it sounds because several psychological forces resist model updating.

Psychological Barriers to Updating

  • Confirmation bias causes you to notice and remember information that confirms your model while overlooking information that contradicts it.
  • Cognitive dissonance creates psychological discomfort when new information contradicts a firmly held model, and the easiest way to resolve the discomfort is to dismiss the new information rather than revise the model.
  • Identity attachment makes updating especially difficult when the model is central to your professional identity ("I'm the person who understands markets" or "I'm the expert on team management"), because updating the model feels like admitting that your expertise was wrong.
  • Sunk cost bias makes updating difficult when you have invested significant time and effort in building and publicly defending the model.

Practices for Effective Updating

Keep a prediction journal. Record your model's predictions and check them against outcomes. A written record prevents the revisionist memory that allows you to remember your predictions as more accurate than they actually were.
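A prediction journal needs very little machinery. This sketch logs probabilistic predictions and scores them with the Brier score once outcomes are known; the entries are hypothetical:

```python
# Minimal prediction journal: log a probability for each claim,
# record outcomes, and score calibration with the Brier score
# (0.0 is perfect; always guessing 50% scores 0.25).

journal = []

def predict(claim, probability):
    journal.append({"claim": claim, "p": probability, "outcome": None})

def record_outcome(claim, happened):
    for entry in journal:
        if entry["claim"] == claim:
            entry["outcome"] = 1.0 if happened else 0.0

def brier_score():
    scored = [e for e in journal if e["outcome"] is not None]
    return sum((e["p"] - e["outcome"]) ** 2 for e in scored) / len(scored)

predict("Release ships by March 31", 0.8)
predict("Q2 churn exceeds 5%", 0.3)
record_outcome("Release ships by March 31", True)
record_outcome("Q2 churn exceeds 5%", True)
print(round(brier_score(), 3))  # → 0.265: the churn surprise dominates the score
```

The written record is the point: it blocks revisionist memory, because the probability you actually committed to is sitting in the journal.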

Seek out surprises. Deliberately expose yourself to information and experiences that might challenge your model. Read perspectives you disagree with. Talk to people who see the domain differently. Look at data you have been avoiding.

Schedule periodic reviews. Set a regular cadence (monthly, quarterly, annually) for reviewing your key mental models. Ask: What have I learned since the last review that might change this model? What predictions has this model gotten wrong? What aspects of the domain have changed?

Practice "steel-manning" alternatives. When you encounter a model that contradicts yours, instead of looking for reasons to dismiss it (which confirmation bias will happily supply), construct the strongest possible case for the alternative model. What evidence supports it? Under what conditions would it be more accurate than yours? This practice counteracts the automatic defensive reaction that protects inaccurate models from challenge.

Treat models as hypotheses, not facts. The single most important mindset shift is to maintain a permanent sense of tentativeness about your models. They are your best current understanding, not settled truth. They are useful until they are not, at which point they need to be updated or replaced.

Mental Model Quality: Strong vs. Weak Models

  • Relationship to evidence: A strong model is tested against reality and updated when predictions fail; a weak model is untested and maintained through confirmation bias.
  • Complexity level: A strong model's detail is appropriate to its purpose, simple enough to use and detailed enough to be useful; a weak model is either too simple (misses key dynamics) or too complex (unusable).
  • Awareness: A strong model is conscious and explicit, so it can be articulated and examined; a weak model is unconscious and implicit, operating invisibly.
  • Update mechanism: A strong model is maintained through regular testing, surprise-seeking, and scheduled reviews; a weak model has no systematic updating and changes only after major failures.
  • Source of elements: A strong model draws on multiple perspectives and diverse information sources; a weak model rests on a single perspective and limited experience.
  • Treatment of uncertainty: A strong model acknowledges its gaps and limitations; a weak model assumes its own completeness and correctness.


Common Mental Model Failures and How to Avoid Them

Understanding the characteristic ways that mental models fail helps you build more robust models and recognize when your models are leading you astray.

The Map-Territory Confusion

Alfred Korzybski coined the phrase "the map is not the territory" to describe the error of confusing a representation with the thing it represents. In mental model terms, this manifests as forgetting that your model is a simplified representation and treating it as if it were the complete truth. When a manager says "John is a resistant-to-change type" and acts as if this label captures everything relevant about John's response to a proposed change, they have confused the map (the mental model of John) with the territory (the actual, complex, multifaceted person).

This confusion becomes dangerous when the model's simplifications happen to omit exactly the factors that matter most in a given situation. A financial model that omits tail risks works fine 99% of the time and fails catastrophically during the 1% of events that matter most. A leadership model that omits cultural differences works fine within a homogeneous team and fails completely when the team becomes diverse.

The Outdated Model

Mental models that were accurate when they were formed can become dangerously outdated as conditions change. An executive whose mental model of competitive strategy was formed in a pre-internet era may not incorporate network effects, platform dynamics, or data-driven business models. A doctor whose mental model of patient communication was formed before the internet may not account for patients who arrive with extensive online research about their condition.

The most insidious aspect of outdated models is that the person holding the model often does not realize it is outdated. The model continues to feel accurate because confirmation bias filters experience in ways that support the model, and because the situations where the model fails are often attributed to external factors ("the market is just crazy right now") rather than to the model's inadequacy.

The Single-Perspective Model

Mental models built entirely from one perspective are systematically biased in ways that the model-holder cannot detect from within that perspective. An engineer's mental model of product development may overweight technical elegance and underweight user experience. A salesperson's mental model of customer needs may overweight price and underweight implementation complexity. A CEO's mental model of organizational performance may overweight strategy and underweight execution capability.

The antidote is deliberate exposure to diverse perspectives: talking to people in different roles, reading from different disciplines, and actively seeking out viewpoints that challenge your own. Each new perspective reveals aspects of reality that your single-perspective model has systematically obscured.

The Overconfident Model

Research by Daniel Kahneman and others has consistently found that people are overconfident in their mental models, especially in domains where they have significant experience. Experience creates a strong feeling of understanding, but that feeling does not always correspond to actual predictive accuracy. In many complex domains, experienced practitioners are not significantly more accurate in their predictions than well-informed novices, even though they are substantially more confident.

The antidote to overconfidence is to keep score: record your predictions, check them against outcomes, and honestly assess how accurate your model has been. Most people who begin keeping score discover that their models are less accurate than they believed, which is uncomfortable but invaluable for calibrating an appropriate level of confidence.
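The scorekeeping idea can be made concrete with a few lines of code. A common way to score probabilistic predictions is the Brier score: the mean squared gap between your stated confidence and what actually happened. The example predictions below are invented for illustration; only the scoring rule itself is standard.

```python
# A minimal sketch of prediction scorekeeping. The logged predictions
# are illustrative, not from the text; the Brier score is a standard
# scoring rule for probabilistic forecasts.

def brier_score(predictions):
    """Mean squared error between stated confidence and outcome (0 or 1).

    0.0 is perfect; always guessing 50% scores 0.25, so anything
    above 0.25 means your confidence is actively miscalibrated.
    """
    return sum((conf - outcome) ** 2 for conf, outcome in predictions) / len(predictions)

# Each entry: (confidence that the event would happen, what happened).
log = [
    (0.9, 1),  # "The release will slip" at 90% -- it slipped
    (0.8, 0),  # "Client X will renew" at 80% -- they did not
    (0.6, 1),  # "Hiring will close by Q3" at 60% -- it closed
]

print(round(brier_score(log), 3))  # 0.27
```

Even a plain spreadsheet with the same three columns (prediction, confidence, outcome) delivers most of the calibration benefit; the point is the habit of checking, not the tooling.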


A Worked Example: Building a Mental Model of Team Productivity

To make the process concrete, here is an example of building a mental model for the question: "Why has our software team's productivity declined over the past six months?"

Step 1: Define purpose. The question is specific: explain the productivity decline and identify interventions that might reverse it.

Step 2: Gather raw material. Sprint velocity data shows a 35% decline. Team size has remained constant. The product backlog has grown. Code review turnaround time has increased from 1 day to 4 days. Two senior engineers were promoted to management six months ago. The remaining engineers report spending more time in meetings and more time helping junior engineers. Bug rates have increased. Technical debt has been accumulating since a major release eight months ago.

Step 3: Identify key elements and relationships. Key elements: senior engineer capacity (stock), junior engineer capacity (stock), knowledge transfer rate (flow), technical debt level (stock), meeting overhead (variable), code review speed (variable), bug rates (variable), sprint velocity (outcome). Key relationships: losing senior engineers to management reduced the team's experienced coding capacity; the remaining senior engineers are spending more time mentoring and in meetings, further reducing their coding capacity; growing technical debt is increasing the time required for each task; rising bug rates are pulling engineers into reactive debugging rather than proactive development.
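One lightweight way to externalize Step 3 is to write the elements and causal links down as data rather than keeping them in your head. The sketch below uses a signed edge list, which is just one convenient convention; the element names mirror the text, while the particular set of links shown is a partial, illustrative selection.

```python
# Step 3 as data: elements with their roles, plus signed causal links.
# The structure (a signed edge list) is one convention among many.

elements = {
    "senior_capacity": "stock",
    "junior_capacity": "stock",
    "knowledge_transfer": "flow",
    "technical_debt": "stock",
    "meeting_overhead": "variable",
    "review_speed": "variable",
    "bug_rate": "variable",
    "sprint_velocity": "outcome",
}

# (cause, effect, sign): "+" means the cause pushes the effect up,
# "-" means it pushes the effect down. A partial, illustrative list.
links = [
    ("senior_capacity", "sprint_velocity", "+"),
    ("senior_capacity", "review_speed", "+"),
    ("meeting_overhead", "senior_capacity", "-"),
    ("technical_debt", "sprint_velocity", "-"),
    ("technical_debt", "bug_rate", "+"),
    ("bug_rate", "senior_capacity", "-"),  # debugging eats capacity
]

# A quick sanity check: everything that feeds the outcome directly.
drivers = [(cause, sign) for cause, effect, sign in links
           if effect == "sprint_velocity"]
print(drivers)  # [('senior_capacity', '+'), ('technical_debt', '-')]
```

Writing the links down this way also makes circular paths easy to spot by eye, which is exactly what the next step looks for.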

Step 4: Construct the model. The model reveals two reinforcing loops: (1) Knowledge drain loop: senior engineers promoted out create a knowledge gap that forces remaining senior engineers to spend more time helping junior engineers, reducing their productivity, which increases the pressure to promote more people to management to "fix" the productivity problem, which drains more senior capacity. (2) Technical debt loop: time pressure causes quality shortcuts, which increase technical debt, which slows future development, which increases time pressure, which causes more shortcuts.
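The qualitative behavior of these reinforcing loops can be checked with a toy simulation. Every number below is invented for illustration (the starting velocity, the per-month drag increments); what matters is the structure: because each loop tightens a little every month, the rate of decline itself grows, so the curve bends downward rather than falling linearly.

```python
# A toy simulation of the two reinforcing loops. All coefficients are
# invented; the point is the qualitative shape (accelerating decline),
# not the numbers.

def simulate(months=6):
    velocity = 40.0        # story points per sprint, illustrative
    debt_drag = 0.0        # fraction of capacity lost to debt and bugs
    mentoring_drag = 0.0   # fraction lost to mentoring and meetings
    history = []
    for _ in range(months):
        history.append(round(velocity, 1))
        # Each month of strain worsens both drags (shortcuts add debt,
        # knowledge gaps add mentoring load), so the decay rate grows.
        debt_drag += 0.02
        mentoring_drag += 0.015
        velocity *= 1.0 - debt_drag - mentoring_drag
    return history

print(simulate())  # [40.0, 38.6, 35.9, 32.1, 27.6, 22.8]
```

Note the shape: the first three months lose about 4 points of velocity, the last three lose about 13. A linear cause (say, a fixed one-time loss of capacity) could not produce that curvature; reinforcing loops do, which is what makes the shape of the decline diagnostic.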

Step 5: Test the model. Prediction: if this model is correct, the tasks that have slowed most should be those that require deep system knowledge or that touch heavily-indebted code areas. Check against the sprint data: confirmed. Prediction: the decline should have accelerated rather than been linear, because the reinforcing loops produce exponential deterioration. Check: confirmed, the velocity decline was steeper in months 4-6 than in months 1-3.

Step 6: Refine. The model did not initially include meeting overhead, but the data shows that meetings have increased by 40% as the team added more coordination mechanisms to compensate for knowledge loss. Adding meeting overhead to the model creates a third reinforcing loop: knowledge loss creates coordination needs, which create meetings, which consume senior engineer time, which accelerates knowledge loss.

Interventions suggested by the model: Rather than parameter-level fixes (hiring more people, which would actually increase meeting overhead and mentoring load in the short term), the model suggests structural fixes: documenting knowledge (reducing dependence on person-to-person transfer), a technical-debt-reduction sprint (breaking the debt loop), a meeting audit and reduction (breaking the coordination-overhead loop), and returning one senior engineer from management to technical work part-time (directly replenishing the stock of experienced engineering capacity).


Building Mental Models Across Domains

One of the most powerful applications of deliberate mental model construction is building models that span multiple domains. Many of the most important problems in modern life, from organizational management to public policy to personal decision-making, sit at the intersection of multiple domains: economics, psychology, technology, politics, biology. A person who has mental models for each of these domains individually, and who can see the connections between them, has an enormous advantage over someone who understands only one domain deeply.

Charlie Munger, Warren Buffett's long-time partner at Berkshire Hathaway, is perhaps the most famous advocate of multi-domain mental models. Munger argues that reliable wisdom comes from having a "latticework of mental models" drawn from many disciplines: economics, psychology, physics, biology, mathematics, engineering, and more. When you encounter a problem, you bring multiple models to bear, looking at it through the lens of economics, then psychology, then systems theory, then evolutionary biology. Each lens reveals different aspects of the problem, and the combination provides a richer, more accurate understanding than any single lens.

Building multi-domain models requires the same process described in this guide, applied iteratively across domains. Start with the domain most relevant to your current problem. Build a model. Test it. Refine it. Then expand to adjacent domains, looking for connections and analogies. Where are the reinforcing feedback loops from systems theory present in the market dynamics from economics? Where do the cognitive biases from psychology interact with the incentive structures from organizational theory? Where do the evolutionary dynamics from biology parallel the competitive dynamics from business strategy?

The cross-domain connections are where the deepest insights live, insights that specialists in any single domain cannot reach because they lack the models from adjacent domains that would reveal the connections. Building and maintaining a diverse portfolio of mental models is a lifelong practice, but one that compounds in value over time as each new model enriches and illuminates all the others.


References and Further Reading

  1. Craik, K. J. W. (1943). The Nature of Explanation. Cambridge University Press. https://www.cambridge.org/core/books/nature-of-explanation/DCB02E0E1894C0175E7C0A84B3B67B90

  2. Johnson-Laird, P. N. (1983). Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Harvard University Press. https://www.hup.harvard.edu/books/9780674568822

  3. Munger, C. T. (2005). Poor Charlie's Almanack: The Wit and Wisdom of Charles T. Munger (3rd edition). Donning Company Publishers. https://www.stripe.press/poor-charlies-almanack

  4. Meadows, D. H. (2008). Thinking in Systems: A Primer. Chelsea Green Publishing. https://www.chelseagreen.com/product/thinking-in-systems/

  5. Tetlock, P. E. & Gardner, D. (2015). Superforecasting: The Art and Science of Prediction. Crown. https://www.penguinrandomhouse.com/books/227815/superforecasting-by-philip-e-tetlock-and-dan-gardner/

  6. Gentner, D. & Stevens, A. L. (1983). Mental Models. Lawrence Erlbaum Associates. https://doi.org/10.4324/9781315802725

  7. Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. https://us.macmillan.com/books/9780374533557/thinkingfastandslow

  8. Senge, P. M. (2006). The Fifth Discipline: The Art and Practice of the Learning Organization (revised edition). Currency/Doubleday. https://www.penguinrandomhouse.com/books/163984/the-fifth-discipline-by-peter-m-senge/

  9. Box, G. E. P. (1976). Science and statistics. Journal of the American Statistical Association, 71(356), 791-799. https://doi.org/10.1080/01621459.1976.10480949

  10. Norman, D. A. (1983). Some observations on mental models. In D. Gentner & A. L. Stevens (Eds.), Mental Models. Lawrence Erlbaum Associates. https://doi.org/10.4324/9781315802725

  11. Forrester, J. W. (1971). Counterintuitive behavior of social systems. Technology Review, 73(3), 52-68. https://web.mit.edu/sysdyn/sd-intro/D-4468-2.pdf

  12. Parrish, S. (2019). The Great Mental Models, Volume 1: General Thinking Concepts. Latticework Publishing. https://fs.blog/tgmm/