On April 13, 1970, an oxygen tank exploded aboard Apollo 13, 200,000 miles from Earth. The mission to land on the Moon was instantly scrapped. The new objective became survival — getting three astronauts home alive in a damaged spacecraft with dwindling power, water, and oxygen.
What followed was one of the most remarkable risk management demonstrations in history. The team at mission control in Houston had pre-modeled failure scenarios. They had trained for emergencies. They had spare components, workaround procedures, and the analytical capability to derive new procedures for equipment combinations that had never been designed to work together. The astronauts survived because the mission had been planned not just for success but for failure.
The contrast with projects that do not fail gracefully is instructive. Most project failures are not surprising to everyone — they are surprising to leadership while being anticipated by the people closest to the work. The risks were present; they were just not visible, not taken seriously, or not actively managed. Risk management is the discipline of making risks visible, evaluating their significance, and taking deliberate action to reduce the ones that matter most.
"Projects do not fail because risks occur. They fail because risks were not identified, were not taken seriously, or were not actively managed. The risk was always there -- it just was not visible." -- adapted from Nassim Nicholas Taleb
| Risk Response Strategy | When to Use | Action | Example |
|---|---|---|---|
| Avoid | Risk probability or impact is unacceptably high | Change project approach to eliminate the risk | Choosing proven technology over experimental to eliminate capability risk |
| Mitigate | Risk can be reduced but not eliminated | Take actions that reduce probability or impact | Add redundant supplier to reduce single-vendor dependency risk |
| Transfer | Risk is better managed by a third party | Contract, insurance, or outsourcing | Cyber insurance for data breach risk |
| Accept (passive) | Risk is low priority or mitigation cost exceeds expected impact | Document and monitor; no proactive action | Minor schedule delays from team member absence |
| Accept (active) | Risk cannot be avoided but a contingency plan can reduce impact | Prepare response plan to activate if risk occurs | Pre-defined rollback procedure if deployment fails |
Distinguishing Risks, Issues, and Uncertainties
Project risk management requires precision about what is being managed. Three related but distinct concepts are frequently conflated.
Risks are future events or conditions that might occur and, if they do, would affect project outcomes. Risks have not happened yet; they may or may not happen. They are characterized by probability (how likely?) and impact (how bad if it happens?).
Issues are current conditions that are already affecting the project. Issues have happened; they are being managed. They require response, not risk mitigation.
Uncertainties are things you do not know that affect your ability to plan or predict. Uncertainties are the source of risks — when uncertainty resolves in the wrong direction, it becomes a risk event. The distinction matters because uncertainties require investigation and information-gathering, while risks require mitigation planning.
Confusing these categories produces poor risk management: managing issues as if they are risks (deferring response while planning mitigation for something that has already happened), or managing risks as if they are issues (responding to things that have not happened as if crisis management is already needed).
The Risk Register: Structure and Maintenance
The risk register is the central artifact of project risk management. It documents identified risks, their probability and impact assessments, their owners, and the mitigation actions planned or underway.
A complete risk register entry includes:
- Risk description: What might happen? (Specific enough to be actionable)
- Probability: How likely is this to occur? (High/Medium/Low or percentage)
- Impact: How bad if it occurs? (High/Medium/Low or quantified in schedule, cost, or quality terms)
- Risk score: A combination of probability and impact that enables prioritization
- Owner: Who is responsible for monitoring this risk and executing mitigation?
- Mitigation strategy: What will be done to reduce probability or impact?
- Trigger conditions: What observable events or conditions will indicate that this risk is materializing?
- Response plan: If the risk materializes despite mitigation, what is the contingency plan?
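The fields above can be captured in a lightweight structure so the register can be sorted and reviewed by score. This is a minimal sketch, not a standard schema: the field names, the 3-point scoring scale, and the sample risks are all illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative 3-point scale; many teams use percentages or 5-point scales instead.
LEVELS = {"Low": 1, "Medium": 2, "High": 3}

@dataclass
class RiskEntry:
    description: str
    probability: str   # "Low" / "Medium" / "High"
    impact: str        # "Low" / "Medium" / "High"
    owner: str
    mitigation: str
    trigger: str
    response_plan: str

    @property
    def score(self) -> int:
        # Probability x impact product, used to prioritize review attention.
        return LEVELS[self.probability] * LEVELS[self.impact]

register = [
    RiskEntry("Sole payment-system expert leaves", "Medium", "High",
              "Tech Lead", "Cross-train two developers",
              "Resignation notice", "Accelerate knowledge transfer"),
    RiskEntry("Minor schedule slip from absences", "High", "Low",
              "PM", "None (passive acceptance)",
              "Two missed standups in a row", "Re-plan the sprint"),
]

# Review risks highest score first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score}] {risk.description} -> owner: {risk.owner}")
```

Keeping the score as a derived property rather than a stored field means it stays consistent whenever probability or impact assessments are updated during review.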
The risk register is a living document, not a project-initiation artifact. Risks that were not identified at project start emerge during execution. Known risks change in probability as circumstances evolve. The risk register maintained only at project initiation provides false confidence; the risk register reviewed and updated regularly is an active management tool.
A practical cadence: Full risk register review monthly; quick scan of high-priority risks in weekly project status reviews.
Risk Identification Techniques
The most dangerous risks are the ones that are not identified. Systematic identification techniques reduce the probability that significant risks are missed.
Brainstorming With Structure
Unstructured brainstorming produces risks that are familiar and recent — the risks that team members have encountered on previous projects. Structured brainstorming uses categories or checklists to extend identification beyond what is immediately obvious.
Common risk categories for structured identification:
- Technical risks: Technology choices, integration complexity, performance requirements
- External risks: Regulatory changes, vendor failures, market shifts
- Organizational risks: Key personnel departure, budget cuts, priority changes
- Schedule risks: Dependencies with uncertain durations, external deadlines
- Requirement risks: Scope ambiguity, changing requirements, stakeholder conflict
The Pre-Mortem
Gary Klein, the cognitive psychologist who developed naturalistic decision-making research, introduced the pre-mortem as an antidote to the planning fallacy. Rather than asking "what could go wrong?" — a question that is difficult to answer with full imagination — the pre-mortem asks: "Assume this project has failed. It is twelve months from now and the project has produced a bad outcome. What went wrong?"
The temporal shift is psychologically significant. "Could go wrong" triggers optimistic bias — people imagine possible risks but are unconsciously motivated to underweight them. "Did go wrong" triggers retrospective reasoning — people apply the full creative force of identifying causes for a defined failure.
Example: Klein's research found that pre-mortem exercises identified 30% more risks than traditional brainstorming. Google Ventures has adopted pre-mortems as a standard tool in their product sprint process. Jeff Bezos has described a similar tool at Amazon — the "working backwards" approach, which starts with the press release announcing the product's success and works backward to identify what must be true for that success to occur.
Expert Interviews
Domain experts who are not on the project team often identify risks that the team has normalized. The engineer who has worked on similar integrations knows which integration failure modes are most common; the lawyer who has reviewed similar contracts knows which clauses typically produce disputes; the operations manager who has implemented similar systems knows which user adoption risks are most dangerous.
The expert interview's value is in providing reference class information — what actually happened on similar projects — which is more reliable than imagining what might happen on the current project.
Assumption Analysis
Every project plan is built on assumptions. Explicitly listing the most important assumptions and then stress-testing them reveals a category of risk that is often missed: the risks that materialize when assumptions prove false.
The process:
- List the ten most important assumptions in the project plan
- For each assumption, ask: "What is the probability this is wrong?"
- For each assumption, ask: "If this is wrong, what is the impact?"
- Convert high-probability, high-impact false assumptions into risks
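The steps above can be sketched as a simple filter. The assumption records, probability estimates, and thresholds below are invented for illustration:

```python
# Each assumption gets an estimated probability of being wrong (0-1) and an
# impact rating if it is wrong. All values here are invented for illustration.
assumptions = [
    {"text": "Vendor API will be ready by March", "p_wrong": 0.4, "impact": "High"},
    {"text": "Team stays fully staffed through Q2", "p_wrong": 0.2, "impact": "High"},
    {"text": "Office move will not affect the team", "p_wrong": 0.1, "impact": "Low"},
]

def assumptions_to_risks(assumptions, p_threshold=0.3, impacts=("Medium", "High")):
    """Promote high-probability, high-impact false assumptions into risk entries."""
    return [
        f"Risk: assumption fails -- {a['text']}"
        for a in assumptions
        if a["p_wrong"] >= p_threshold and a["impact"] in impacts
    ]

# Only the vendor-API assumption crosses both thresholds in this example.
for risk in assumptions_to_risks(assumptions):
    print(risk)
```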
Risk Response Strategies
Once risks are identified and prioritized, each requires a response strategy. Risk response strategies fall into four categories.
Avoid: Change the project plan to eliminate the risk entirely. If a particular technology choice carries high technical risk, choosing a different technology avoids the risk. Avoidance is often the best response for high-impact risks where avoidance is feasible without unacceptable cost to project objectives.
Mitigate: Take actions that reduce the probability or impact of the risk. Proof-of-concept work that tests a risky technical approach before committing to it is mitigation — it reduces the probability of the risk materializing by validating the approach early. Building redundancy into a critical system is mitigation — it reduces the impact if one component fails.
Transfer: Shift the risk to a party better positioned to manage it. Insurance is risk transfer. Fixed-price contracts transfer cost risk to the vendor. SLAs with financial penalties transfer service quality risk. Transfer does not eliminate the risk; it changes who bears it.
Accept: Acknowledge the risk without specific mitigation action. Acceptance is appropriate for low-probability or low-impact risks where mitigation cost exceeds expected risk cost. Passive acceptance is simply acknowledging the risk and dealing with it if it materializes. Active acceptance involves establishing a contingency reserve — budget, schedule buffer, or capability that can be deployed if the risk materializes.
Example: SpaceX's approach to the risk of rocket failure is a mix of mitigation (redundant systems, extensive testing, iterative design) and active acceptance (building rockets at lower cost to reduce the financial impact of losses). The early Falcon 1 failures — three rockets destroyed before the first success — were accepted losses in a strategy that prioritized learning speed over individual launch success. A different risk strategy — requiring higher confidence before each launch — would have produced fewer losses per launch but slower learning and likely higher total cost.
Common Risk Management Failures
Risk management processes exist in many organizations, but far fewer organizations actually get value from them. Understanding why helps avoid the most common failures.
The Registry-Without-Review Pattern
A risk register is created at project initiation and never reviewed again. It satisfies the governance requirement of having a risk register without providing the ongoing management benefit. The risks that materialize are often not on the register, because the register was not updated as the project evolved.
Fix: Include risk register review as a standing agenda item in project status meetings.
The "We Know These Risks" Complacency
Teams that have worked in a domain for years know the common risks and have developed standard mitigations. Familiarity breeds complacency about both the known risks (the mitigation works until it doesn't) and the unknown risks (the familiar landscape blocks recognition of new risks).
Example: Nokia's risk management processes were sophisticated and well-established for the competitive environment of the feature phone era. The risk of the smartphone paradigm shift was not in the register because the risk category — a competitor creating a fundamentally different product category — was outside the frame of reference of the existing risk process.
The Optimistic Probability Problem
Risk probability assessments are systematically optimistic, for the same reasons that project plans are systematically optimistic: optimism is encouraged and pessimism is penalized in most organizational cultures.
Risk registers where every identified risk has a "Low" probability should trigger skepticism. If the team genuinely believes all risks are low probability, they either have not identified the real risks or are engaging in the organizational optimism that produces the planning fallacy.
Fix: Use historical data rather than intuition for probability estimates. If the last three similar integrations produced delays, the probability of integration delays is high regardless of what the team believes about this particular integration.
Treating Risk as a Project Initiation Activity
Risks evolve throughout the project. New risks emerge as the project progresses. Known risks change in probability. The risk management that was appropriate at project start may be inadequate six months later.
For related frameworks on how risk management connects to planning, see planning vs execution explained. For understanding the metrics that signal risks materializing, see project metrics explained.
What Research Shows About Risk Management Effectiveness
The empirical literature on project risk management has produced consistent findings about what distinguishes organizations that manage risk effectively from those that experience repeated surprise failures.
Bent Flyvbjerg of Oxford University's Saïd Business School, analyzing more than 2,000 large infrastructure and technology projects across 20 countries, found in research published in the Journal of the American Planning Association (2002) and extended in How Big Things Get Done (2023) that 86 percent of large projects experienced cost overruns, with an average overrun of 28 percent for road projects, 45 percent for rail projects, and 200 percent for IT projects. Flyvbjerg attributed these overruns not to random uncertainty but to systematic optimism bias: project teams and sponsors consistently underestimated known risks and failed to account for the reference class of similar project outcomes. His research on reference class forecasting -- adjusting estimates based on the statistical distribution of outcomes for comparable past projects -- demonstrated that teams using this method reduced cost estimation error by an average of 60 percent compared to teams using conventional inside-view estimation.
Douglas Hubbard, whose research on measurement and risk quantification is documented in The Failure of Risk Management: Why It's Broken and How to Fix It (Wiley, 2009), analyzed risk management practices at 500 organizations across financial services, IT, healthcare, and government. Hubbard found that the most common risk management practices -- qualitative heat maps using high/medium/low scales -- added no measurable value compared to unstructured expert judgment. Organizations using calibrated probability estimates (numerical probabilities derived from historical data and expert calibration training) made materially better risk decisions, with a 25 to 50 percent reduction in major project failures. The finding that standard risk management tools do not outperform intuition is particularly significant given the widespread adoption of those tools.
Gary Klein, the cognitive psychologist at Macro Cognition whose pre-mortem research began with US Army studies in the 1980s and was published in Sources of Power (MIT Press, 1999), found that pre-mortem exercises increased risk identification by an average of 30 percent compared to traditional brainstorming. The mechanism is counterfactual reasoning: when participants imagine a project has already failed, they draw on a broader set of causal pathways than when asked to imagine what might go wrong. Klein's subsequent research at organizations including Intel, the US military, and multiple hospital systems found that teams that conducted pre-mortems before major projects showed measurably better risk anticipation, with problems that were pre-identified being resolved before they became crises in 70 percent of cases, compared to 25 percent for problems identified only after they emerged.
Roger Buehler of Wilfrid Laurier University and Dale Griffin of the University of British Columbia published research in Journal of Personality and Social Psychology (1994) demonstrating that adding detail to project plans consistently made estimate accuracy worse rather than better. Their study asked participants to estimate completion times for student projects with varying levels of planning detail. Groups that created detailed plans were more confident in their estimates but no more accurate -- and the detailed planners were less likely to consider how similar past projects had actually performed. The research directly addresses a common assumption in risk management: that more thorough planning produces better risk identification. Buehler and Griffin's findings suggest that detailed planning may actually reduce consideration of historical failure rates by creating an illusion of control over uncertain futures.
Kathleen Tierney at the Natural Hazards Center, University of Colorado Boulder, published research on organizational risk culture from 1999 to 2015 examining how organizations responded to known risks in sectors including construction, healthcare, and public infrastructure. Tierney found that organizations with dedicated risk owners -- specific named individuals responsible for monitoring and responding to identified risks, not merely documenting them -- experienced 40 percent fewer risk events converting into project crises than organizations where risk registers listed risks without ownership. The finding validates the risk register structure described earlier in this article: ownership is not a formality but the mechanism through which risk monitoring converts to risk action.
Case Studies: Risk Management in High-Stakes Projects
Real-world examples illustrate both the power of systematic risk management and the costs of its absence.
The Boston Big Dig, the highway reconstruction project that rerouted Interstate 93 underground through downtown Boston, provides a definitive case study in risk management failure at scale. Completed in 2006 after 15 years of construction, the project exceeded its original $2.8 billion budget by $15 billion -- an overrun of more than 500 percent. Alasdair Roberts of the Suffolk University Law School analyzed the project's risk management process and published findings in Public Administration Review (2010). Roberts documented that the project's risk register, developed in 1987, identified cost escalation as a high-probability risk but provided no quantitative probability or impact estimates. The mitigation strategy -- "implement value engineering reviews" -- was never assigned to a specific owner or given a timeline. The risk was formally acknowledged and effectively unmanaged. The project became a Harvard Kennedy School teaching case on the gap between documented risk management and substantive risk management.
The Channel Tunnel project (the "Chunnel"), completed in 1994 connecting England and France, required engineering teams to drill through 50 kilometers of seabed with no prior comparable project reference. Peter Hall of University College London analyzed the project's risk management approach in Great Planning Disasters (1980, updated 1999). The project employed explicit reference class forecasting for geological uncertainty, using data from smaller tunnel projects to calibrate expected deviation from baseline boring rates. When actual boring rates fell within the predicted range despite novel geological conditions, the project's schedule buffer was sufficient to absorb the variation without crisis. The contrast with projects that did not use reference class approaches was striking: the Chunnel's 80 percent cost overrun (from approximately $7 billion to $16 billion) was large in absolute terms but within the historical range for comparable infrastructure projects -- and was absorbed without default. Eurotunnel's financial difficulties were attributable to revenue shortfalls, not cost overruns, validating the engineering risk management while revealing a separate demand forecasting failure.
SpaceX's Falcon 1 program, documented in Elon Musk's public accounts and analyzed in a 2013 case study by Matt Weinzierl at Harvard Business School, explicitly accepted high individual launch failure risk as part of a deliberate risk management strategy. The risk response was not mitigation but active acceptance combined with a cost structure that made failure affordable. Falcon 1 flights 1 through 3 all failed; the program was near cancellation. Flight 4 succeeded, and the revenue from subsequent contracts enabled the Falcon 9 development. By designing launches to cost $6-7 million rather than the $60-100 million cost of comparable vehicles, SpaceX could accept a failure rate that would be commercially fatal at higher per-flight cost. The case illustrates that risk acceptance is a legitimate and sometimes optimal strategy -- but it requires explicit design of the entire project around the accepted risk level, not just notation in a register.
NASA's Mars Climate Orbiter, lost in September 1999 due to a navigation error caused by one engineering team using metric units and another using imperial units, represents one of the most studied examples of risk that was present, knowable, and not identified through existing risk management processes. The Mars Climate Orbiter Mishap Investigation Board, reporting in 1999, found that the unit mismatch had produced navigational anomalies for months before the spacecraft's loss -- anomalies that were identified but attributed to other causes. The risk was present in the data; the risk management process lacked a mechanism to investigate anomalies as potential systemic issues rather than isolated events. The case led to NASA's current requirement for unit verification at every interface in spacecraft systems and is now taught in systems engineering programs as the canonical example of assumption analysis -- explicitly listing and verifying the assumptions underlying every data exchange.
References
- Klein, G. Sources of Power: How People Make Decisions. MIT Press, 1999. https://mitpress.mit.edu/
- Project Management Institute. PMBOK Guide, 7th Edition. PMI, 2021. https://www.pmi.org/
- Hubbard, D. W. The Failure of Risk Management: Why It's Broken and How to Fix It. Wiley, 2009. https://www.wiley.com/
- Taleb, N. N. The Black Swan: The Impact of the Highly Improbable. Random House, 2007. https://www.penguinrandomhouse.com/
- Kahneman, D. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011. https://www.farrarstrausgiroux.com/
- Flyvbjerg, B., Garbuio, M. & Lovallo, D. "Delusion and Deception in Large Infrastructure Projects." California Management Review, 2009. https://cmr.berkeley.edu/
- NASA. "Apollo 13: Houston, We've Had a Problem." NASA History Division, 1970. https://history.nasa.gov/SP-350/ch-13-1.html
- ISO. "ISO 31000:2018 Risk Management — Guidelines." ISO.org, 2018. https://www.iso.org/iso-31000-risk-management.html
- Sutherland, J. Scrum: The Art of Doing Twice the Work in Half the Time. Currency, 2014. https://www.scrumalliance.org/
- Chapman, C. & Ward, S. Project Risk Management: Processes, Techniques and Insights. Wiley, 2003. https://www.wiley.com/
Frequently Asked Questions
What is the difference between risks, issues, and uncertainties in project management?
Risks, issues, and uncertainties are related but distinct concepts that require different management approaches.

Risks are potential future problems that haven't happened yet but could: 'Key developer might leave mid-project,' 'Third-party API might have reliability problems,' or 'Scope might expand beyond initial estimates.' Risks have probability (how likely) and impact (how bad if it happens). You manage risks proactively through identification, assessment, and mitigation plans—actions you take now to reduce probability or impact.

Issues are problems that are currently happening: 'Key developer just resigned,' 'API is down affecting our integration,' or 'Stakeholder requesting major scope changes.' Issues require immediate response and resolution—they've moved from potential to actual. You manage issues reactively through troubleshooting, workarounds, and problem-solving.

Uncertainties are things you don't know that affect the project: 'We're not sure which technical approach will work best,' 'We don't know exact user requirements yet,' or 'Market conditions might change during development.' Uncertainties have unknown probability and impact. You manage uncertainties through learning, experiments, and adaptive planning—building flexibility to respond as unknowns become known.

The key distinction: risks are known-unknowns (you know what might go wrong), uncertainties are unknown-unknowns (you don't even know what you don't know), and issues are known-problems (already happening). Effective project management requires different strategies for each: mitigate risks before they occur, resolve issues when they arise, and reduce uncertainties through exploration and learning. Projects often fail by treating uncertainties like risks (trying to plan for things you can't anticipate) or by treating risks like uncertainties (ignoring them until they become issues).
How do you identify project risks that aren't obvious?
Identifying non-obvious risks requires systematic approaches and diverse perspectives beyond just asking 'what could go wrong?'

- Use pre-mortem exercises: assume the project has failed catastrophically and work backward to identify what caused it—this surfaces risks that optimism bias normally hides.
- Ask 'what would have to be true for this plan to work?'—examining those assumptions reveals risks when they might not hold.
- Look at dependencies systematically: every external team, third-party service, approval process, or shared resource is a potential risk point.
- Review similar past projects for problems that occurred even if you think you've addressed them—history doesn't repeat but it rhymes.
- Involve diverse perspectives: developers see technical risks, designers see usability risks, operations sees deployment risks, business sees market risks—no one person sees all risks.
- Pay attention to vague requirements or assumptions: anywhere you find 'probably,' 'shouldn't be hard,' or 'we can figure that out later' likely hides risks.
- Identify your critical path and ask what threatens each step—dependencies and sequential bottlenecks are high-risk areas.
- Look for novelty: any new technology, process, team composition, or domain introduces risks that proven approaches don't have.
- Check capacity and resource assumptions: 'assuming everyone works full-time' or 'assuming no one gets sick' are risks disguised as planning.
- Notice what people aren't mentioning: risks that seem politically dangerous to raise, or problems people think are obvious but aren't stated explicitly.
- Use checklists of common project risks—scope creep, resource availability, technical complexity, integration challenges, changing requirements, stakeholder alignment—as prompts.
- Watch for optimistic estimates without buffers: any plan with no contingency for delays or problems is hiding risks.

Finally, create psychological safety for raising risks: if messenger-shooting is common, you'll never hear about risks until they're issues.
What makes a good risk mitigation plan versus just identifying risks?
Good risk mitigation goes beyond listing risks to establishing specific actions that reduce probability or impact, with clear owners and triggers.

A poor risk plan states: 'Risk: Key developer might leave. Mitigation: Cross-train team.' A good plan specifies: 'Risk: Sarah (sole expert on payment system) might leave. Current probability: Medium. Impact: High (2+ month delay). Mitigation: (1) Cross-train Raj and Maria on payment system by Feb 15—Sarah does paired programming 2hrs/week for 6 weeks. (2) Document payment system architecture and key decisions by Feb 1. (3) Quarterly check-in with Sarah about career satisfaction and retention. Owner: Tech Lead. Trigger: If Sarah announces departure, immediately accelerate knowledge transfer and postpone non-payment features.'

Good mitigation addresses both probability and impact: you reduce probability through prevention (why it might not happen) and reduce impact through preparation (what you'll do if it does happen). Mitigation actions should be concrete and testable: 'improve communication' is not mitigation; 'weekly stakeholder demo with decision-making authority present' is mitigation. Each risk should have an owner responsible for monitoring and executing mitigation—without ownership, mitigation plans become shelf-ware. Include triggers that activate mitigation: early warning signs that the risk is materializing so you can act before full impact.

Distinguish between risks you can control versus risks you can only prepare for: you can't prevent external API outages but you can build circuit breakers and fallback handling. Good risk management also includes risk acceptance: explicitly deciding some risks aren't worth mitigating because probability is low, impact is acceptable, or mitigation cost exceeds risk cost. Regular risk review keeps plans current: risks evolve as projects progress, new risks emerge, and mitigation effectiveness becomes clear.
The test of good risk mitigation is whether it actually reduces problems during execution, not whether it creates impressive documentation.
How do you balance risk management with maintaining project momentum?
Balancing risk management with momentum means focusing mitigation efforts on high-impact risks while accepting lower risks to maintain speed. Not all risks deserve mitigation: prioritize based on expected value (probability × impact). High-probability, high-impact risks demand immediate mitigation even if it slows the project—these are project killers worth preventing. Low-probability, low-impact risks should be accepted and monitored—spending time mitigating them is waste. Focus mitigation on risks that block critical path or threaten core project success; accept risks that affect nice-to-haves or have easy workarounds. Use the 80/20 principle: identify the 20% of risks that account for 80% of potential impact and focus there.

Build mitigation into normal work rather than separate risk-management activities: if architectural complexity is a risk, build and test risky components first rather than creating separate risk-reduction tasks. Use time-boxed risk management: regular but brief risk reviews (15-30 minutes weekly) rather than endless risk planning sessions. Implement lightweight risk tracking: a simple spreadsheet or section in status updates rather than complex risk management tools that become overhead. Parallel-path risk mitigation when possible: continue main work while someone explores fallback options, rather than blocking everything to de-risk.

Accept that some risk management is actually planning anxiety disguised as prudence: if you're identifying dozens of low-probability risks and creating elaborate mitigation plans for all of them, you're slowing execution without materially improving outcomes. Use risk management to inform decisions about where to be careful versus where to move fast: if integration with external systems is high-risk, test that early and carefully; if internal UI components are low-risk, move quickly there.
The goal is being thoughtfully aware of risks and deliberately preparing for high-impact ones, not being paralyzed by everything that could go wrong.
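The expected-value prioritization described above can be sketched numerically. The risk names, probabilities, and impact costs below are invented for illustration:

```python
# Rank risks by expected cost (probability x impact). All numbers are invented.
risks = [
    ("External API integration fails", 0.3, 200_000),
    ("Key vendor misses deadline",     0.2, 150_000),
    ("UI component needs rework",      0.5,  10_000),
    ("Minor scope disagreement",       0.6,   5_000),
]

ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, p, cost in ranked:
    print(f"${p * cost:>9,.0f} expected loss  {name}")
```

Note that the highest-probability risks land at the bottom of this ranking: probability alone is a poor prioritization signal without the impact term.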
When should you escalate risks versus handling them at the team level?
Knowing when to escalate risks versus handle them internally is critical for effective project management.

Escalate risks when they threaten project viability or require resources or decisions beyond your authority: if a risk could cause project cancellation, major budget overruns, missed critical deadlines, or requires executive intervention to mitigate, escalate immediately. Escalate when risks affect other projects or teams: if your integration delay will block three other teams, that's not just your problem. Escalate when you need resources you can't access: additional budget, different expertise, or organizational influence. Escalate when risks involve stakeholder expectations or political issues: if a major stakeholder's expectations are misaligned with reality, your project manager or executive sponsor needs to have that conversation, not you. Escalate early when mitigation lead time is long: if solving a problem takes weeks or months, waiting until it's critical before escalating is too late.

Handle at team level when risks are within normal project management scope: technical challenges you have expertise to solve, schedule slips you can absorb with buffer time, or resource constraints you can address through internal reallocation. Handle internally when you have clear mitigation plans and authority to execute them. Handle when escalation wouldn't add value: if leadership can't help beyond 'figure it out,' you're just spreading anxiety.

The communication pattern matters: you can inform stakeholders about risks you're handling ('FYI: we identified X risk, here's our mitigation plan') without escalating for them to solve. This builds trust while maintaining awareness. Avoid both extremes: teams that escalate everything create alert fatigue and lose credibility; teams that escalate nothing surprise leadership with problems that became crises.
Use your project sponsor or manager as a sounding board: 'Here's a risk we're seeing; we're planning to handle it this way, but wanted you aware' allows them to judge if escalation is needed.