Step-by-Step: Running a Premortem Analysis

Every project team has experienced the sickening moment when a carefully planned initiative collapses in ways that, in hindsight, seem painfully obvious. The technology that was never going to work. The stakeholder whose opposition was inevitable. The dependency that was always fragile. The timeline that was never realistic. After the failure, team members exchange knowing glances and confess: "I had a bad feeling about that from the beginning." The question that haunts every post-mortem is: if people saw the risks coming, why didn't they say something before the project failed?

The answer, documented extensively by research in organizational psychology, is that prospective risk identification is psychologically difficult in ways that retrospective analysis is not. Before a project launches, team members face powerful social and cognitive pressures that suppress risk-raising behavior. Optimism bias leads people to overestimate the probability of success. Groupthink pressures dissenters to conform. Hierarchical dynamics make junior team members reluctant to challenge plans that senior leaders have endorsed. The sunk cost of planning effort creates emotional investment in the plan's success. And the social cost of being perceived as a "naysayer" or "not a team player" discourages people from voicing concerns even when those concerns are well-founded.

The premortem analysis, developed by psychologist Gary Klein and described in his research on naturalistic decision-making, elegantly solves this problem by inverting the frame. Instead of asking "What might go wrong?" (which triggers optimism bias and social conformity), the premortem asks: "Imagine it's six months from now and this project has failed spectacularly. Write down all the reasons why it failed." This simple reframing has profound psychological effects. By declaring the project already dead, the premortem gives everyone permission to identify failure causes without being seen as negative or unsupportive. By asking "why did it fail?" rather than "what might go wrong?", it activates prospective hindsight, a cognitive mode that research has shown generates 30% more failure reasons than conventional risk assessment.

This guide provides a comprehensive, field-tested process for running a premortem analysis, from preparation through follow-up, with detailed guidance on facilitation, common challenges, and how to convert identified risks into actionable mitigation strategies. Whether you are a project manager running your first premortem or an experienced facilitator looking to deepen your practice, this guide will walk you through every phase of the process with the specificity needed to produce genuinely useful results.


What Is a Premortem and Why Is It Effective?

A premortem is a structured team exercise conducted after a project plan is substantially complete but before significant resources have been committed. The team imagines that the project has already failed and works backward to identify the most plausible causes of that failure. The identified failure causes are then converted into risk mitigation strategies that strengthen the plan before execution begins.

The technique was developed by Gary Klein, a cognitive psychologist whose research on naturalistic decision-making explored how experienced professionals make decisions in complex, high-stakes environments. Klein recognized that post-mortems, while valuable, arrive too late to prevent the failures they analyze. He also observed that conventional "what could go wrong?" risk brainstorming consistently underperformed because of the social and cognitive pressures that suppress risk identification. The premortem was his solution: a technique that harnesses the same hindsight clarity that makes post-mortems valuable but applies it before the failure occurs.

The premortem's effectiveness rests on three psychological mechanisms that are worth understanding in detail because they explain why the technique works so much better than simple risk brainstorming.

Prospective Hindsight

The first mechanism is prospective hindsight: the observation that people are significantly better at generating explanations for events when those events are presented as having already occurred rather than as merely possible. (It is related to, but distinct from, hindsight bias, the "knew-it-all-along effect.") Research by Deborah Mitchell, Jay Russo, and Nancy Pennington (published in the Journal of Behavioral Decision Making in 1989) found that prospective hindsight increased the ability to identify reasons for future outcomes by roughly 30% compared with conventional "what could go wrong?" brainstorming.

The reason for this improvement lies in how the brain processes hypothetical versus actual events. When asked "what could go wrong?", the brain treats the question as speculative and filters responses through optimism bias, generating only the risks that seem most probable from the current (optimistic) vantage point. When told "the project has failed" and asked to explain why, the brain shifts into explanatory mode, a cognitive process that is much more generative because explanation-seeking draws on narrative construction, pattern matching, and causal reasoning rather than probability estimation.

Consider the difference concretely. Ask a software team "What could go wrong with our new product launch?" and you will get a polite, sanitized list: "The timeline might slip," "We might have bugs," "Marketing might not reach our target audience." These are generic, obvious, and unhelpful. Now tell the same team: "It's October 15th. The product launch was a disaster. We lost two major clients. The board is demanding answers. Tell me what happened." The responses shift dramatically: "The API couldn't handle the load from the enterprise demo because we never load-tested with more than 50 concurrent users and the enterprise client had 300 people trying it simultaneously." "The onboarding flow assumed users had admin access, but the IT department at the client's company restricted permissions, so nobody could complete setup." "The sales team promised features in the demo that were actually on the Q2 roadmap, and when the client discovered they weren't available, they felt deceived."

The specificity and realism of the second set of responses is a direct product of prospective hindsight. The hypothetical failure activates narrative thinking, and narrative thinking generates concrete, specific, actionable failure scenarios rather than abstract risk categories.

Overcoming Conformity Pressure

The second mechanism is that the premortem overcomes conformity pressure by framing criticism as the assignment rather than as dissent. In a conventional risk discussion, raising a concern about the plan can feel like criticizing the people who created it. Senior leaders who invested significant effort in the plan may unconsciously signal that they do not welcome risk identification. Junior team members quickly learn which concerns are "welcome" and which will be met with defensiveness or dismissal. The result is that the most important risks, the ones that challenge fundamental assumptions or question senior leaders' decisions, are precisely the ones that go unspoken.

In a premortem, raising a concern is literally the task everyone has been asked to perform. The social dynamic shifts fundamentally. Instead of one brave person raising a concern against the grain of group optimism, everyone is expected to identify failure causes. The person who doesn't contribute failure scenarios is the one who seems disengaged, not the person who contributes uncomfortable ones.

This dynamic is particularly powerful in hierarchical organizations where junior team members often have the most direct operational knowledge (they know which systems are fragile, which processes are broken, which timelines are unrealistic) but the least social permission to share that knowledge. The premortem creates a structured, time-limited window where the normal hierarchy is suspended and everyone's role is to be a critic. Research on psychological safety by Amy Edmondson at Harvard Business School has demonstrated that teams where people feel safe raising concerns dramatically outperform teams where concerns are suppressed, and the premortem creates precisely this kind of safety, even in organizations where psychological safety is otherwise lacking.

Legitimizing Intuitive Unease

The third mechanism is that the premortem legitimizes the expression of unease that team members often feel but cannot articulate in conventional settings. Many experienced professionals develop intuitive apprehensions about projects, a sense that something is not right, without being able to point to a specific logical reason for their concern. In a conventional risk discussion, these intuitions are difficult to express because they lack the analytical precision that organizational discourse demands. You cannot easily say "I just have a bad feeling" in a risk review meeting.

The premortem's imaginative framing gives these intuitions a channel for expression. When asked "why did the project fail?", people often surface concerns that they would never have raised in response to "what are the risks?" because the hypothetical failure scenario activates different cognitive processes than analytical risk assessment. The narrative frame allows people to say "The project failed because we were trying to do too much with too few people and everyone was already stretched thin from the last release" in a way that feels like storytelling rather than criticism. This is not a minor distinction; it is the difference between concerns being voiced and concerns being swallowed.

Gary Klein, in his description of the technique, emphasized that some of the most valuable premortem insights come from these intuitive, hard-to-articulate concerns. A senior engineer who "just doesn't trust" a particular vendor's reliability claims. A product manager who senses that the customer segment the team is targeting doesn't actually match the product's strengths. A finance analyst who feels the revenue projections are based on assumptions that are more wishful than realistic. These are the insights that conventional risk analysis systematically fails to capture and that premortems are uniquely designed to surface.


When Should I Run a Premortem?

Timing is one of the most critical decisions in premortem planning, and getting it wrong can render the entire exercise useless. The premortem should be conducted after planning is substantially complete but before significant commitment of resources, reputation, or contractual obligations.

Why Timing Matters So Much

If the premortem is run too early, when the plan is still vague and incomplete, participants will identify obvious, generic risks that anyone could foresee ("We might not have enough budget," "The technology might not work") rather than the subtle, plan-specific risks that the premortem is designed to surface. Without a concrete plan to react against, participants cannot imagine specific failure scenarios because they do not have enough detail to construct narratives about how, exactly, the project might unravel.

If the premortem is run too late, after resources have been committed, contracts have been signed, public announcements have been made, or organizational restructuring has occurred, the results may be psychologically threatening and practically useless. Nobody wants to hear that the project they just publicly committed to is likely to fail, and the plan may be too locked down to adjust based on findings. Late premortems also trigger defensive reactions: participants may unconsciously minimize risks to justify the commitments already made, which is the exact opposite of what the exercise is supposed to achieve.

The Ideal Timing Window

The ideal timing is when the plan contains enough detail that participants can imagine its execution in specific, concrete terms but when there is still sufficient flexibility to modify the plan based on identified risks. This window varies by context:

In agile software development, the ideal moment is after sprint planning or release planning is complete but before coding begins in earnest. The team has committed to a set of stories and has a plan for how they will be implemented, but no code has been written and the plan can still be adjusted. For larger initiatives, running a premortem at the start of each major phase (after phase planning but before phase execution) captures risks that are specific to that phase's challenges.

In product launches, the ideal moment is after the go-to-market plan is drafted, the pricing is proposed, the channel strategy is defined, and the launch timeline is set, but before commitments to advertising spend, channel partners, and media appearances are finalized. At this point, the team can imagine the launch in enough detail to construct specific failure scenarios, but the plan is still flexible enough to adjust.

In organizational change initiatives, the ideal moment is after the change plan is designed, the communication strategy is drafted, and the timeline is established, but before the announcement to employees, the hiring or restructuring of teams, or the commitment of training budgets. Running the premortem at this stage often surfaces critical risks related to stakeholder reactions, communication gaps, and implementation sequencing that the change team has not considered.

In capital investment decisions, the ideal moment is after the business case is prepared and the investment thesis is articulated but before the board vote or final approval. The premortem can surface assumptions in the business case that may not hold, market dynamics that the analysis has not accounted for, and execution risks that the financial model has abstracted away.

Running Multiple Premortems

For long-duration projects, a single premortem at the beginning is not sufficient. The risk landscape evolves as the project progresses: new risks emerge, previously identified risks may resolve or intensify, and the team's understanding of the project's challenges deepens. Running premortems at multiple points (at the start of each major phase, after significant plan changes, or when the team senses that the risk landscape has shifted) captures risks that are specific to the project's current context. Each subsequent premortem will surface different concerns because the project's situation and the team's knowledge have evolved.


How Is Premortem Different from Regular Risk Analysis?

Understanding the differences between the premortem and traditional risk analysis is important because the techniques are complementary, not competing. Traditional risk analysis has genuine value, and the premortem does not replace it. Rather, the premortem fills specific gaps that traditional methods systematically miss.

Limitations of Traditional Risk Analysis

Traditional risk analysis asks "What could go wrong?" and typically produces a risk register: a list of possible risks with probability and impact ratings, assigned owners, and mitigation strategies. This approach has value but suffers from several systematic limitations that the premortem addresses.

Traditional risk analysis is limited by imagination. Participants identify risks they can explicitly conceptualize, which tends to bias the list toward familiar, previously experienced risks and away from novel, unexpected risks. If a team has experienced deadline slippage in the past, "timeline risk" will appear on every risk register they create. But risks they have never encountered, such as a key vendor going bankrupt, a regulatory change that invalidates the project's premise, or an internal reorganization that reassigns critical team members, will rarely appear because these risks require imaginative leaps that conventional risk identification does not encourage.

The premortem's hypothetical framing activates broader cognitive processes, including narrative construction and counterfactual thinking, that generate a wider range of failure scenarios. When told "the project failed," participants construct stories about how it failed, and these stories draw on a much wider range of experience, knowledge, and intuition than a structured "identify the risks" exercise.

Traditional risk analysis is constrained by social dynamics. In a conventional risk discussion, raising a risk can feel like criticizing the plan or the people who created it. This is particularly problematic when the risk involves questioning a senior leader's assumptions, challenging a popular strategic direction, or admitting that the team lacks capabilities that the plan assumes they have. The result is that risk registers tend to be filled with "safe" risks that everyone agrees on and empty of the uncomfortable, politically sensitive risks that are often the most dangerous.

Traditional risk analysis tends to be analytical and abstract. Participants identify risk categories (technology risk, market risk, resource risk) without constructing specific failure narratives. The premortem produces concrete failure stories ("The project failed because the API team was pulled to work on the security incident, leaving our integration unfinished for three weeks during the critical pre-launch period") that are more vivid, more memorable, and more actionable than abstract risk categories. Abstract risks generate abstract mitigations ("monitor technology risk"); specific failure scenarios generate specific mitigations ("get a written commitment from the API team's manager that security incidents will not pull more than one developer from the integration work during the pre-launch window").

Traditional risk analysis underweights correlated risks. Risk registers treat each risk as independent, assigning separate probabilities and impacts. In reality, many project risks are correlated: a delay in one area causes cascading delays in dependent areas, a budget overrun in one component forces cuts in another, a key person leaving creates knowledge gaps across multiple workstreams. The premortem's narrative approach naturally captures these correlations because failure stories inherently describe chains of causation rather than isolated events.

How the Two Approaches Complement Each Other

The most effective risk management combines both approaches. Use the premortem to generate a rich, narrative-driven inventory of failure scenarios that captures risks traditional analysis misses. Then use traditional risk analysis methods (probability/impact assessment, risk registers, quantitative modeling) to organize, prioritize, and track the risks that the premortem has surfaced. The premortem provides breadth and creativity; traditional analysis provides structure and ongoing management.
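This hand-off can be made concrete: each premortem failure narrative becomes a register row, and the team ranks rows by the classic probability-times-impact product. The sketch below is illustrative only; the field names, the 1-to-5 scales, and the example narratives are invented for this guide, not prescribed by the premortem method itself:

```python
from dataclasses import dataclass


@dataclass
class RiskEntry:
    """One risk-register row seeded from a premortem failure narrative."""
    narrative: str       # the specific failure story surfaced in the premortem
    likelihood: int      # 1 (rare) to 5 (near certain), assigned by the team
    impact: int          # 1 (minor) to 5 (project-ending), assigned by the team
    owner: str = "unassigned"
    mitigation: str = ""

    @property
    def score(self) -> int:
        # Probability x impact product used for triage ordering.
        return self.likelihood * self.impact


def build_register(entries: list[RiskEntry]) -> list[RiskEntry]:
    """Sort premortem findings into a ranked register, highest exposure first."""
    return sorted(entries, key=lambda e: e.score, reverse=True)


if __name__ == "__main__":
    register = build_register([
        RiskEntry("API never load-tested above 50 concurrent users", 4, 5),
        RiskEntry("Sales demoed features still on the Q2 roadmap", 3, 4),
        RiskEntry("Vendor estimate assumed their standard API, not our custom work", 2, 3),
    ])
    for entry in register:
        print(f"[score {entry.score:2d}] {entry.narrative}")
```

The premortem supplies the narrative text; the team assigns likelihood and impact during the prioritization step, and the resulting register becomes the artifact that traditional risk tracking maintains for the rest of the project.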

Dimension       | Traditional Risk Analysis                   | Premortem Analysis
----------------|---------------------------------------------|---------------------------------------
Core question   | "What could go wrong?"                      | "Why did it fail?"
Cognitive mode  | Analytical, probability-based               | Narrative, explanation-based
Social dynamics | Risk-raising = dissent                      | Risk-raising = the assignment
Output format   | Risk register with ratings                  | Failure narratives with causal chains
Strengths       | Structured, trackable, quantifiable         | Creative, intuitive, socially safe
Weaknesses      | Limited by imagination and social pressure  | Less structured, harder to track
Best for        | Ongoing risk monitoring                     | Initial risk discovery

The Complete Premortem Process

Step 1: Preparation (Before the Session)

Thorough preparation separates effective premortems from exercises that feel productive but generate little actionable insight. The facilitator's preparation should address four areas: participant selection, advance briefing, plan documentation, and logistical setup.

Select participants carefully. The quality of a premortem is directly proportional to the diversity of perspectives in the room. Include everyone involved in the project's execution: developers, designers, project managers, quality assurance, operations, and anyone else who will do the hands-on work. But also include key stakeholders whose support or cooperation the project depends upon: executives who approved the budget, partner teams who will provide dependencies, customers or customer-facing teams who understand user needs.

Diversity of perspective is more important than seniority. A junior developer who has been debugging the legacy system for six months may have more insight into technical risks than the CTO who approved the architecture diagram. A customer support representative who fields user complaints daily may understand user behavior risks better than the product manager who designed the feature. If possible, include one or two people who are familiar with the project but not deeply invested in its success. These "informed outsiders" are less constrained by optimism bias and more likely to ask uncomfortable questions that insiders have unconsciously suppressed.

The ideal group size is 6 to 12 participants. Fewer than six limits the diversity of perspectives and makes the exercise feel like a small meeting rather than a structured event. More than twelve makes the sharing phase too long and can create social loafing, where participants contribute less because they assume others will cover the important risks.

Brief participants in advance. Send a brief message one to three days before the session explaining what a premortem is and what to expect. Something like:

"We'll be doing a premortem analysis for [project name] on [date]. This is a structured exercise where we imagine the project has already failed and identify the most likely reasons. The purpose is to strengthen our plan by identifying risks we might have missed. Please come prepared to think creatively about what could go wrong. Review the project plan summary [attached/linked] before the session."

This briefing serves two purposes. First, it prepares participants mentally so they arrive ready to engage rather than spending the first fifteen minutes understanding the exercise. Second, it signals that risk identification is expected and valued, which begins the process of creating psychological safety before the session even starts. Participants who know in advance that they will be asked to identify failure causes often begin thinking about risks informally in the days before the session, which produces richer contributions during the actual exercise.

Prepare a concise plan summary. Have a one-to-two-page summary of the project plan available for reference during the session. This should include the project's goals and success criteria, the timeline with key milestones, major dependencies (other teams, vendors, technologies), resource allocation (who is working on what), budget constraints, and any critical assumptions that the plan rests upon. Participants need enough context to generate specific, plan-relevant failure scenarios rather than generic concerns.

If the plan is complex, consider creating a visual summary (a timeline with milestones, a dependency map, or an architecture diagram) that participants can reference during the individual reflection phase. The more concretely participants can envision the plan's execution, the more specific and actionable their failure scenarios will be.

Arrange the logistics. For in-person sessions, you need a whiteboard or flip chart for recording failure causes during the sharing phase, sticky notes or index cards for individual reflection (one idea per note), and dot stickers for the prioritization vote. For virtual sessions, use a collaborative tool like Miro, Mural, FigJam, or even a shared spreadsheet where each participant has their own column for individual reflection and a shared area for clustering and voting. The tool matters less than ensuring that individual reflection can happen privately (not in a shared chat where people can see each other's responses in real time) and that the sharing phase is structured to prevent a few voices from dominating.

How long should a premortem session take? Plan for 60 to 90 minutes for most projects. A simple, small-team project can be covered in 45 minutes if the team is experienced with the format. A large, complex initiative with many stakeholders and numerous dependencies may need the full 90 minutes or even two hours. The key phases (individual reflection, sharing, discussion, prioritization, and action planning) each need adequate time; rushing any phase, particularly the individual reflection phase, undermines the exercise's effectiveness. It is far better to run a thorough 90-minute premortem than a rushed 30-minute exercise that generates a shallow list of obvious risks.

Step 2: Set the Stage (5 Minutes)

The facilitator's opening sets the tone for the entire exercise. In the first five minutes, you need to accomplish three things: explain the technique, establish psychological safety, and create the imaginative frame.

Explain the exercise clearly and briefly. Most participants will not have done a premortem before, and those who have may have experienced a poorly facilitated version that they associate with wasting time. Explain the concept in plain language:

"We're going to do something unusual today. Instead of asking what might go wrong with our plan, we're going to assume the project has already failed. I want you to imagine that it's [date six months or one year in the future, adjusted to the project timeline] and this project has been a complete disaster. Not just a minor setback, a full-blown failure. The client pulled out. The product flopped. The initiative was cancelled. Leadership is asking what went wrong. Your job is to explain why it failed."

Establish the psychological frame explicitly. This is the most important element of the opening and the one that facilitators most often skip or underemphasize:

"This exercise is not a test of our plan or a criticism of anyone's work. We built a good plan. The purpose of this exercise is to make that plan even better by surfacing risks we might not see otherwise. Research shows that when people imagine an event has already happened, they generate 30% more explanations than when they're asked what might happen. We're hacking our own cognitive biases to find blind spots. The more creative and specific you can be about failure causes, the more useful this exercise will be. No idea is too crazy or too uncomfortable."

Set explicit ground rules. State them clearly:

  • Every failure cause is valid. We will not evaluate, debate, or dismiss ideas during the brainstorming phase.
  • Be specific. "The project ran over budget" is less useful than "The project ran over budget because the vendor's estimate was based on their standard API and our custom requirements required three times the integration work."
  • Think broadly: technical failures, people failures, organizational failures, market failures, assumption failures, dependency failures, communication failures.
  • This is individual work first, then group sharing. Please do not discuss your ideas with neighbors during the reflection phase.

Step 3: Individual Reflection (5-10 Minutes)

This is the most critical phase of the premortem, and it is the phase that inexperienced facilitators most often rush or skip. Silent individual reflection must come before any group discussion. If the exercise moves directly to group discussion, the first few people to speak will anchor the conversation, and subsequent participants will gravitate toward similar failure causes rather than generating genuinely independent perspectives. This anchoring effect is one of the most robust findings in group decision-making research, and silent writing is the simplest and most effective countermeasure.

Give each participant sticky notes, index cards, or a private section of a digital collaboration tool. Ask them to write down every reason they can think of for why the project failed, with one reason per note. Encourage quantity and specificity:

"Take the next eight minutes to write down every reason you can think of for why this project failed. Don't censor yourself. Don't worry about whether a reason seems likely or unlikely. Write down everything that comes to mind. One idea per sticky note. Try to be as specific as possible: not just 'technology problems' but what specific technology problem, involving which system, caused by what circumstance."

Prompt participants to think across multiple dimensions:

  • Technical failures: Systems that didn't perform, integrations that broke, technologies that didn't scale, architectures that couldn't handle requirements
  • People failures: Key people who left, skills gaps that emerged, conflicts that derailed collaboration, burnout that reduced productivity
  • Organizational failures: Budget cuts, reorganizations, priority shifts, political opposition, stakeholder withdrawal
  • Market and customer failures: Customer needs that changed, competitors that moved faster, market conditions that shifted, user behavior that defied assumptions
  • Process failures: Communication breakdowns, handoff failures, testing gaps, deployment problems, coordination failures between teams
  • Assumption failures: Things we assumed to be true that turned out to be false, dependencies we assumed to be reliable that proved fragile, timelines we assumed to be realistic that proved impossible
  • External failures: Regulatory changes, economic shifts, partner company changes, vendor failures, force majeure events

During the silent reflection phase, the facilitator should resist the urge to speak. The silence may feel uncomfortable, especially if some participants finish writing quickly while others are still deep in thought. Let the silence work. Often the most valuable insights come in the final minutes of reflection, after the obvious risks have been written down and participants are forced to dig deeper into their intuitions and concerns. If participants seem to finish early, offer an extension prompt: "If you've written down the obvious failure causes, now think about the ones that are harder to articulate. What's the failure cause that you feel in your gut but can't quite put into words?"

Step 4: Share and Record (15-25 Minutes)

Once individual reflection is complete, move to structured sharing. The goal is to collect every failure cause from every participant in a way that gives each person equal voice and prevents dominant personalities from controlling the conversation.

Use a round-robin format. Go around the room (or through a participant list in virtual settings) and ask each person to share one failure cause at a time. After everyone has shared one, start a second round where each person shares another. Continue until all failure causes have been shared. This format ensures that junior participants share their ideas with the same prominence as senior leaders, and it prevents any single person from dominating the sharing phase.

As failure causes are shared, the facilitator should perform three tasks simultaneously:

Record each failure cause on a whiteboard, flip chart, or digital board, using the participant's own words as much as possible. Paraphrasing can inadvertently soften or distort the original concern, which undermines the exercise's purpose. If a participant says "The project failed because the VP of Engineering never really supported it and quietly redirected resources to his pet project," record that statement, don't sanitize it to "Resource allocation challenges."

Cluster related items visually, grouping failure causes that address similar themes (all dependency-related risks together, all people-related risks together, etc.). Clustering makes the prioritization phase more efficient and helps participants see patterns in the failure causes, which often reveals systemic risks that are more fundamental than any individual failure cause.

Ask brief clarifying questions when a failure cause is ambiguous or could be made more specific. "When you say 'the integration didn't work,' can you say more about which integration and what specifically you imagine went wrong?" But do not evaluate, debate, or challenge any failure cause at this stage. If someone shares a failure cause that seems unlikely or unusual, record it without comment. The facilitator's role during this phase is as a neutral recorder, not an analyst. Some of the most valuable premortem insights initially seem implausible because they challenge assumptions that the team has not questioned.

Watch for duplication without dismissing it. If multiple people independently identify the same failure cause, that convergence is information. Note it: "That's the fourth person to identify vendor reliability as a failure cause." Convergence signals that a risk is widely perceived, which may mean it is both more likely and more important than risks identified by only one person.

Common patterns that typically emerge include:

  • Dependency failures: a team, vendor, or technology that the project depends on doesn't deliver
  • Resource conflicts: key people are pulled to other priorities
  • Scope creep: requirements expand beyond what the plan can accommodate
  • Communication breakdowns: stakeholders don't understand or support the project's goals
  • Technical risks: a technology doesn't work as expected
  • Assumption failures: something the plan assumed to be true turns out to be false
  • Timeline compression: deadlines that don't accommodate realistic task durations
  • Knowledge concentration: critical knowledge held by one person who becomes a bottleneck or single point of failure
  • External disruptions: market changes, regulatory changes, competitive moves, or organizational changes that invalidate the plan's premises

Step 5: Prioritize (10-15 Minutes)

With all failure causes collected and clustered, the team needs to prioritize. A thorough premortem will typically generate 20 to 40 failure causes, and attempting to mitigate all of them would be impractical and counterproductive. The goal is to focus the team's attention on the risks that are both most likely and most impactful.

What do I do with identified failure causes? Categorize them along two dimensions: likelihood (how probable is this failure cause?) and impact (how damaging would it be if it occurred?). A simple 2x2 matrix works well for initial triage:

  • High likelihood, high impact: These are the failure causes that demand immediate, aggressive attention. They represent the most probable paths to project failure and should drive specific, concrete changes to the project plan. If a high-likelihood, high-impact risk cannot be adequately mitigated, the team should seriously consider whether the project should proceed in its current form.
  • High likelihood, low impact: These are nuisances and friction sources that should be monitored and managed but may not require major plan changes. They are worth addressing because they drain team energy and morale even when they don't threaten the project's overall success.
  • Low likelihood, high impact: These are "black swan" risks that warrant contingency planning even though they are unlikely. The premortem is particularly valuable for surfacing these risks because they often involve scenarios that teams find psychologically uncomfortable to contemplate under normal circumstances.
  • Low likelihood, low impact: These can be noted and accepted without active mitigation. Attempting to mitigate every conceivable risk consumes resources that are better spent on higher-priority threats.
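The 2x2 triage described above can be sketched as a simple lookup. This is an illustrative sketch, not a standard tool: the high/low labels and the handling strategies paraphrase the bullets above, and the example causes are hypothetical.

```python
# Sketch: triaging failure causes on a likelihood/impact 2x2 matrix.
# Labels and handling strategies are illustrative assumptions.

def triage(likelihood: str, impact: str) -> str:
    """Map a (likelihood, impact) pair to a handling strategy."""
    quadrants = {
        ("high", "high"): "mitigate aggressively; reconsider plan if unmitigable",
        ("high", "low"): "monitor and manage as ongoing friction",
        ("low", "high"): "prepare a contingency plan",
        ("low", "low"): "note and accept",
    }
    return quadrants[(likelihood, impact)]

# Hypothetical failure causes with the team's assessments
causes = [
    ("API team pulled to another project", "high", "high"),
    ("Minor tooling instability", "high", "low"),
    ("Key vendor goes out of business", "low", "high"),
]
for cause, likelihood, impact in causes:
    print(f"{cause}: {triage(likelihood, impact)}")
```

The point of the lookup is that every failure cause gets an explicit handling decision; nothing is silently dropped.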

Dot voting provides an efficient and democratic way to assess collective judgment about priority. Give each participant a fixed number of votes (typically 3 to 5) and ask them to vote on the failure causes they consider most critical, considering both likelihood and impact together. Participants can distribute their votes however they wish: all on one failure cause they consider overwhelmingly important, or spread across several. The vote results provide a clear, quantified picture of the team's collective risk assessment.

After voting, identify the top 3 to 5 failure causes based on vote counts. These become the focus of the mitigation planning phase. In some cases, the vote will produce a clear hierarchy; in others, there may be a cluster of failure causes with similar vote counts. If the top-voted items are all in the same category (for example, all technical risks), consider also including the top-voted item from underrepresented categories to ensure the mitigation plan addresses a broader range of failure modes.
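The vote tally itself is mechanical and can be done with the standard library; the votes below are hypothetical, one entry per dot a participant placed.

```python
from collections import Counter

# Hypothetical dot votes: one list entry per dot placed on a failure cause
votes = [
    "vendor reliability", "vendor reliability", "vendor reliability",
    "scope creep", "scope creep",
    "key engineer burnout", "key engineer burnout",
    "integration failure",
]

tally = Counter(votes)
# The top 3 to 5 causes become the focus of mitigation planning
for cause, count in tally.most_common(3):
    print(f"{count} votes: {cause}")
```

Ties in the tally are exactly the "cluster of failure causes with similar vote counts" case: the facilitator, not the arithmetic, decides how many of the tied items to carry into mitigation planning.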

Step 6: Develop Mitigation Strategies (15-25 Minutes)

For each top-priority failure cause, the team develops specific, concrete mitigation strategies. This is where the premortem transitions from diagnosis to treatment, and it is the phase that determines whether the exercise produces lasting value or merely a list of worries.

For each failure cause, work through three questions:

What can we do to prevent this failure cause from occurring? Preventive measures change the plan to reduce the probability that the failure cause materializes. If the failure cause is "the API team doesn't deliver on time because they're pulled to another project," preventive measures might include: getting a written commitment from the API team's manager that they will protect the team's capacity for this project, building the integration schedule with buffer time that accommodates potential API team delays, or identifying an alternative API team or contractor who could step in if the primary team is diverted.

Good preventive measures are specific (they describe exactly what will be done, by whom, and by when), verifiable (you can check whether the measure has been implemented), and proportionate (the cost and effort of prevention is reasonable relative to the risk). Avoid vague preventive measures like "improve communication with the API team," which sound reasonable but do not actually reduce risk because they do not specify what improved communication looks like, who is responsible for it, or how you would know if it is happening.

What early warning indicators should we monitor? Even with preventive measures in place, risks may materialize. Early warning indicators allow the team to detect problems when they are still small and manageable rather than waiting until they have grown into crises. Good early warning indicators are leading (they signal emerging problems before those problems become visible in project outcomes), observable (they can be measured or detected without requiring special effort), and timely (they provide enough advance warning to allow a meaningful response).

For the API dependency example, early warning indicators might include: the API team misses their first internal milestone (a signal that their capacity or commitment is at risk), the API team's manager starts attending meetings for the competing project (a signal that organizational priorities may be shifting), or the API team's backlog shows more items being added than completed (a signal that their workload exceeds their capacity).

What will we do if the failure cause occurs despite our prevention efforts? Contingency plans specify what the team will do if the risk materializes. Contingency plans should be developed before the risk occurs so that the team can respond quickly and deliberately rather than scrambling to improvise under pressure. For the API dependency, contingency plans might include: "If the API team is more than one week behind their milestone, we will implement a mock API that allows our team to continue development without the actual API, decoupling our timeline from theirs," or "If the API team is reassigned entirely, we will engage [specific contractor or alternative team] under the pre-negotiated emergency contract."

The most effective contingency plans include triggers (specific, observable conditions that activate the contingency), actions (exactly what will be done), owners (who is responsible for executing the contingency), and resources (what budget, time, or capacity is reserved for contingency execution).

Each mitigation plan can be summarized across four elements:

  • Prevention. Key question: What reduces the probability? Quality criteria: specific, verifiable, proportionate. Example: written capacity commitment from the API team manager.
  • Early warning. Key question: What signals the risk is materializing? Quality criteria: leading, observable, timely. Example: the API team misses their first internal milestone.
  • Contingency. Key question: What do we do if it happens? Quality criteria: triggered, actionable, owned, resourced. Example: switch to a mock API if the delay exceeds one week.
  • Owner. Key question: Who is responsible? Quality criteria: a single accountable person with authority. Example: integration lead for the mock API; project manager for escalation.
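These elements can be captured as a structured record so that nothing is left implicit. The layout below is a sketch, not a schema from any particular tool; the field values are drawn from the API-dependency scenario.

```python
from dataclasses import dataclass

# Sketch of a structured mitigation record; field layout is an assumption
@dataclass
class Mitigation:
    failure_cause: str
    prevention: str          # specific, verifiable, proportionate
    early_warning: str       # leading, observable, timely
    contingency_trigger: str # observable condition that activates the plan
    contingency_action: str  # exactly what will be done
    owner: str               # single accountable person

api_risk = Mitigation(
    failure_cause="API team pulled to a competing project",
    prevention="Written capacity commitment from the API team manager",
    early_warning="API team misses their first internal milestone",
    contingency_trigger="API team more than one week behind milestone",
    contingency_action="Switch development to a mock API",
    owner="Integration lead",
)
print(api_risk.owner)
```

A record with an empty trigger or owner field is a visible gap in the mitigation plan, which is easier to catch than a vague sentence in a document.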

Step 7: Assign Owners and Document (5-10 Minutes)

Every mitigation strategy needs a single, named owner who is responsible for implementation. Without clear ownership, premortem findings disappear into the organizational ether within days of the session. The owner is not necessarily the person who will do all the work, but they are the person who is accountable for ensuring that the preventive measure is implemented, the early warning indicator is monitored, and the contingency plan is ready to execute.

Document the premortem findings in a format that integrates with the team's existing project management practices. If the team uses a risk register, add the premortem-identified risks to it. If the team uses a project plan with milestones and tasks, add the preventive measures as specific tasks with deadlines and owners. If the team uses a shared document or wiki, create a premortem summary page that is linked from the project's main page.

The documentation should include:

  • All failure causes identified (not just the prioritized ones), organized by category
  • Vote counts for each failure cause, showing the team's collective assessment of priority
  • Detailed mitigation strategies for the top-priority failure causes, including preventive measures, early warning indicators, contingency plans, and owners
  • A schedule for reviewing premortem findings during project execution (monthly review recommended for most projects)
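Assembling the summary page from session data is straightforward to script. The sketch below assumes a simple per-cause record with category and vote count; the causes, categories, and counts are illustrative.

```python
# Sketch: generating a premortem summary from session data.
# Record layout and example causes are illustrative assumptions.
failure_causes = [
    {"cause": "Vendor reliability", "category": "dependency", "votes": 4},
    {"cause": "Scope creep", "category": "scope", "votes": 3},
    {"cause": "Tooling instability", "category": "technical", "votes": 0},
]

lines = ["# Premortem Summary", "", "## All Failure Causes by Category"]
by_category: dict[str, list[dict]] = {}
for fc in failure_causes:
    by_category.setdefault(fc["category"], []).append(fc)

for category, items in sorted(by_category.items()):
    lines.append(f"### {category}")
    # Highest-voted causes first, so priorities are visible at a glance
    for fc in sorted(items, key=lambda f: -f["votes"]):
        lines.append(f"- {fc['cause']} ({fc['votes']} votes)")

print("\n".join(lines))
```

Note that zero-vote causes are included: the documentation captures all identified failure causes, not just the prioritized ones.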

Step 8: Close with Purpose (5 Minutes)

The closing is where the facilitator addresses a common concern: How do I prevent the premortem from becoming too negative? After spending an hour imagining failure, participants may feel deflated or anxious about the project's prospects. The facilitator's closing should reframe the exercise's output as a source of strength rather than a reason for concern.

"We identified [number] failure causes today, and we've developed specific strategies for the [number] most critical risks. Our plan is now significantly stronger than it was ninety minutes ago, not because the plan was bad, but because we've applied our collective experience and intuition to strengthen it against failure modes that we might otherwise have discovered too late. We are not saying this project will fail. We are saying that we have dramatically reduced the probability of failure by anticipating these risks and preparing for them."

Acknowledge the emotional difficulty of the exercise. Some participants may have shared concerns they have been carrying for weeks or months without a venue for expression. The relief of finally voicing these concerns can be significant, and acknowledging it validates the courage it took:

"Thank you for your candor and creativity today. Some of the failure causes we identified were uncomfortable to raise, and I appreciate that people were willing to name them. The fact that this team can look at its own plan critically and honestly is a sign of strength, not pessimism."

End with a clear statement of next steps: who will finalize the documentation, when the mitigation strategies will be integrated into the project plan, and when the first review of premortem findings will occur.


After the Premortem: Integration and Follow-Up

The premortem's value is realized only if its findings are integrated into the project plan and monitored during execution. A beautifully facilitated premortem that produces a document nobody reads is worse than no premortem at all because it creates the illusion of risk management without the substance.

Integrating Findings into the Project Plan

Within 48 hours of the premortem session, the facilitator or project manager should complete several integration tasks:

Update the project plan to incorporate preventive measures. If the premortem identified "key engineer burnout" as a high-priority risk and the preventive measure is "cap working hours at 45 per week and add a backup engineer to the critical path," then the project plan should be updated to reflect the reduced capacity (fewer productive hours per week) and the additional resource (backup engineer's allocation). Preventive measures that require plan changes should actually change the plan, not just appear in a supplementary risk document that nobody consults.

Add early warning indicators to the monitoring cadence. If the team has weekly status meetings, add the premortem's early warning indicators to the standing agenda. The project manager or designated risk owner should review each early warning indicator weekly and report on whether the signals suggest the associated risk is stable, increasing, or materializing.
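The weekly review reduces to checking each indicator's status and flagging anything that is not stable. A minimal sketch, assuming indicators are tracked with a simple three-value status; the indicator names and statuses are hypothetical.

```python
# Sketch: weekly early-warning review. Indicator names and statuses
# are hypothetical; the three-value status scheme is an assumption.
INDICATORS = {
    "API team missed internal milestone": "materializing",
    "API manager attending competing-project meetings": "stable",
    "API backlog growing faster than burn-down": "increasing",
}

def weekly_report(indicators: dict[str, str]) -> list[str]:
    """Return indicators that need attention at the status meeting."""
    return [name for name, status in indicators.items()
            if status in ("increasing", "materializing")]

for name in weekly_report(INDICATORS):
    print(f"ATTENTION: {name}")
```

The value of the routine is not the code but the cadence: every indicator gets an explicit status every week, so a drifting risk cannot go unnoticed for a month.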

Prepare contingency plans for activation. This means ensuring that the resources, relationships, and decisions needed to execute contingency plans are in place before they are needed. If the contingency plan involves engaging a backup contractor, the contract should be negotiated and ready to sign. If the contingency involves switching to an alternative technology, a proof of concept should be completed to verify the alternative actually works. Contingency plans that require weeks of preparation to execute are not contingency plans; they are aspirations.

Periodic Review During Execution

During project execution, review the premortem findings at least monthly (or at each sprint retrospective in agile contexts). The review should address several questions:

  • Are the identified risks materializing? Have any early warning indicators been triggered?
  • Are the mitigation strategies working? Are preventive measures actually reducing risk, or have they proved insufficient?
  • Have new risks emerged that weren't identified in the premortem? The premortem captures risks that are visible at the time of the exercise, but new risks will emerge as the project progresses and as the external environment changes.
  • Have any previously identified risks been resolved or become irrelevant? As the project evolves, some risks that seemed significant at the outset may dissipate. These can be removed from active monitoring to avoid risk fatigue.

The periodic review also provides an opportunity to assess the accuracy of the premortem's prioritization. Sometimes the team's collective judgment about which risks are most critical proves correct; other times, risks that received few votes prove to be more significant than anyone expected. Tracking these outcomes improves the team's risk assessment capabilities for future projects and future premortems.

Building a Premortem Culture

Organizations that run premortems regularly develop a culture where surfacing risks is valued rather than punished. Team members learn that their concerns will be heard and acted upon, which increases their willingness to raise concerns not just during formal premortems but in everyday project discussions as well. Over time, the premortem habit creates what amounts to an organizational immune system: a distributed network of risk sensors (every team member) connected to a response mechanism (the mitigation process) that catches problems early, before they metastasize into failures.

Building this culture requires several things from leadership. Consistency is essential: premortems should be a standard part of project initiation, not a special event that happens only when someone feels particularly worried. Follow-through is critical: if the team identifies risks and develops mitigations, leadership must support the implementation of those mitigations, even when they require additional time, budget, or resources. Recognition matters: when a premortem-identified risk materializes and the team's contingency plan handles it effectively, celebrate the premortem as the reason the team was prepared. And psychological safety must be maintained: if a team member raises an uncomfortable risk in a premortem and subsequently experiences retaliation or marginalization, the premortem culture will collapse and future exercises will produce only safe, sanitized, useless risk lists.


Common Facilitation Challenges

Even with careful preparation, premortems can encounter several challenges that the facilitator should be prepared to address.

The Dominant Voice Problem

In nearly every group exercise, one or two participants will attempt to dominate the discussion, whether through volume, seniority, or sheer verbal agility. In a premortem, this manifests as a senior leader offering lengthy commentary on each failure cause, a voluble team member turning the sharing phase into a monologue, or an expert dismissing other participants' failure causes as unlikely or uninformed.

The facilitator's primary defense is the structure of the exercise itself. The individual reflection phase prevents domination during idea generation. The round-robin sharing format gives each person equal turns. If a participant begins offering extended commentary during the sharing phase, the facilitator should gently redirect: "That's a great point. Let's capture it as a failure cause and save the discussion for the analysis phase. [Next person], what's your next failure cause?"

The "Everything Is Fine" Problem

Occasionally, a team will produce surprisingly few failure causes, not because the project is genuinely low-risk but because participants are reluctant to engage with the exercise. This can happen when the premortem is perceived as a check-the-box exercise that leadership doesn't take seriously, when participants fear that identifying risks will be interpreted as criticizing the plan's authors, or when the organizational culture punishes "negativity" so severely that even a structured exercise cannot overcome the learned behavior.

If the individual reflection phase produces very few failure causes (fewer than two per participant), the facilitator should extend the reflection time and offer more targeted prompts: "Think about the last project you worked on that failed or struggled. What went wrong there? Now imagine similar dynamics affecting this project." Another technique is to share a few example failure causes to demonstrate the expected level of specificity and to signal that uncomfortable topics are genuinely welcome: "For example, one failure cause might be: the project failed because the marketing team's launch campaign targeted early adopters, but the product's current UX is too complex for anyone except power users. Does that kind of specific scenario resonate? What's your version?"

The Blame Game

Despite the facilitator's best efforts, some participants may use the premortem as an opportunity to assign blame rather than identify risks. "The project failed because [specific person] is incompetent" or "The project failed because management always makes bad decisions" are attributions of blame, not useful failure causes. The facilitator should redirect these toward structural and situational factors: "Let's focus on what happened rather than who's at fault. If you think there's a capability gap on the team, how would you describe the failure cause in terms of what went wrong rather than who caused it? For example, 'The project failed because the team lacked experience with the database migration technology and didn't have a training plan to close the gap.'"

Remote and Hybrid Facilitation

Running premortems with distributed teams requires adapting the process to address the specific challenges of remote collaboration. Individual reflection translates well to virtual settings because it is inherently silent and independent; participants can write their failure causes in a private section of a shared digital tool. Sharing requires more structure in virtual settings because the social cues that manage turn-taking in person (eye contact, physical proximity, body language) are absent or muted online. Use an explicit turn order (alphabetical, by time zone, by random assignment) and enforce it strictly.

The facilitator's energy is more important in virtual settings because participants' attention is more fragile and distractions are more available. Open with a brief, engaging story about a project failure that a premortem might have prevented. Use participants' names frequently during the sharing phase to maintain engagement. Consider using breakout rooms for the mitigation planning phase, assigning each breakout group one or two top-priority failure causes to develop mitigation strategies for, and then reconvening to share results with the full group.


Advanced Techniques for Experienced Teams

As teams become experienced with premortems, several advanced techniques can deepen the analysis and address limitations of the basic format.

Role-Based Premortems

Assign different participants to imagine failure from different perspectives: the customer's perspective, the technical team's perspective, the finance team's perspective, the competitor's perspective, the regulator's perspective. This technique forces participants to step outside their own functional viewpoint and consider failure causes that their professional role might not naturally highlight. A developer assigned to think from the customer's perspective may identify usability and adoption risks that would never occur to them from their usual technical vantage point. A marketer assigned to think from the competitor's perspective may identify competitive response risks that the team's internal focus has obscured.

Severity Levels

Instead of imagining a single type of failure, ask participants to imagine three different levels of failure: a minor setback (the project delivers late or over budget but eventually succeeds), a significant failure (the project fails to achieve its primary objectives), and a catastrophic failure (the project damages the organization's reputation, loses key customers, or triggers a crisis). Different severity levels surface different failure causes. Minor setback scenarios tend to identify operational and execution risks. Significant failure scenarios identify strategic and stakeholder risks. Catastrophic failure scenarios identify systemic, cascading, and external risks.

Pre-Premortem Data Collection

Send a brief survey before the session asking participants to independently submit their top failure concerns. This serves three purposes: it allows the facilitator to identify common themes in advance and prepare the session accordingly, it ensures that participants who may be hesitant to speak in group settings have already contributed their input, and it creates a written record of pre-session concerns that can be compared to what emerges during the live exercise, providing insight into which concerns are shared widely versus which emerge only through the group process.

Quantitative Risk Assessment

For high-stakes projects where investment decisions depend on risk evaluation, follow the qualitative premortem with quantitative risk assessment. For each high-priority failure cause, estimate the probability of occurrence (expressed as a percentage) and the cost of occurrence (expressed in dollars, time, or other relevant units). Multiply probability by cost to calculate the expected value of each risk. Sum the expected values of all significant risks to estimate the project's total risk-adjusted cost, which can be compared to the project's expected benefits to produce a more realistic assessment of whether the project is worth pursuing.
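The probability-times-cost calculation can be sketched directly; the probabilities and dollar costs below are hypothetical.

```python
# Sketch of the expected-value calculation; probabilities and costs
# are hypothetical example figures.
risks = [
    # (failure cause, probability of occurrence, cost of occurrence in $)
    ("API dependency slips", 0.30, 120_000),
    ("Key engineer departs", 0.15, 200_000),
    ("Vendor fails to deliver", 0.10, 350_000),
]

for cause, p, cost in risks:
    print(f"{cause}: expected cost ${p * cost:,.0f}")

# Sum of expected values = total risk-adjusted cost, to be compared
# against the project's expected benefits
risk_adjusted_cost = sum(p * cost for _, p, cost in risks)
print(f"Total risk-adjusted cost: ${risk_adjusted_cost:,.0f}")
```

With these example figures the three expected values are $36,000, $30,000, and $35,000, for a total risk-adjusted cost of $101,000, a number that can sit directly alongside the project's expected benefits in an investment decision.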

This quantitative extension bridges the gap between the premortem's intuitive, narrative approach and the financial rigor that large project investments require. The premortem provides the breadth of risk identification; the quantitative assessment provides the precision of risk valuation.

Iterative Premortems

For long-duration projects spanning six months or more, run premortems at multiple points in the project lifecycle: at the start of each major phase, after significant plan changes, when new team members join (they bring fresh perspectives and different biases), or when the team senses that the risk landscape has shifted. Each iteration will surface different risks because the project's context, the team's knowledge, and the external environment have all evolved since the previous premortem. Iterative premortems also allow the team to assess the accuracy of previous premortem predictions: which risks materialized, which did not, and what does this tell us about our collective risk assessment capabilities?


Common Premortem Failure Causes by Project Type

While every project is unique, certain categories of failure cause appear with striking regularity across different types of projects. Familiarity with these common patterns helps facilitators prompt participants during the reflection phase and ensures that frequently overlooked risk categories are considered.

Software development projects commonly surface: scope creep driven by stakeholder additions, technical debt in legacy systems that makes changes riskier than estimated, dependency on other teams who have competing priorities, underestimation of testing and quality assurance time, key engineer departure or burnout, integration failures between components developed by different teams, and performance problems that only emerge at scale.

Product launch projects commonly surface: mismatch between product capabilities and market messaging, competitor response that changes the competitive landscape, channel partner readiness problems, pricing that doesn't match customer willingness to pay, onboarding friction that prevents user adoption, customer support capacity that can't handle launch volume, and timing collisions with other organizational initiatives that compete for attention and resources.

Organizational change projects commonly surface: middle management resistance that passive-aggressively undermines the change, communication plans that inform but don't persuade, training programs that teach new processes without building genuine capability, change fatigue from previous initiatives that consumed goodwill, loss of key talent who leave rather than adapt, systems and processes that haven't been updated to support the new way of working, and metrics that continue to reward old behaviors.

Strategic initiatives commonly surface: market assumptions that prove wrong as the initiative unfolds, resource commitments that erode as competing priorities emerge, stakeholder support that was contingent on circumstances that later change, technology choices that become obsolete during the initiative's long timeline, organizational politics that shift the balance of power between initiative supporters and opponents, and "success theater" where teams report progress that masks underlying problems until the initiative's final phase.


Why the Premortem Works Better Than You Expect

The premortem is deceptively simple: imagine failure, explain it, and prepare for it. Many project managers initially dismiss it as "just another brainstorming exercise" and are surprised by the depth and specificity of the risks it surfaces. The reason is that the premortem is not a brainstorming exercise at all. It is a carefully designed psychological intervention that harnesses three of the most powerful cognitive mechanisms available (prospective hindsight, social permission, and narrative thinking) to systematically overcome the cognitive and social barriers that make conventional risk identification so limited.

The research evidence supports this assessment. Klein's original work demonstrated that premortems consistently surface risks that traditional risk analysis misses. The Mitchell, Russo, and Pennington research quantified the 30% improvement in failure cause generation. Studies by Veinott, Klein, and Wiggins found that premortems appropriately calibrate plan confidence, reducing overconfidence without creating excessive pessimism. And Edmondson's research on psychological safety provides the theoretical foundation for understanding why the premortem's social dynamics are so effective at legitimizing risk identification.

Any project worth doing is worth a 60-to-90-minute investment in understanding how it might fail. The premortem provides a structured, proven, psychologically grounded process for that investment, one that consistently delivers insights that no other risk management technique can match.


References and Further Reading

  1. Klein, G. (2007). Performing a project premortem. Harvard Business Review, 85(9), 18-19. https://hbr.org/2007/09/performing-a-project-premortem

  2. Klein, G. (2004). The Power of Intuition: How to Use Your Gut Feelings to Make Better Decisions at Work. Currency/Doubleday. https://www.penguinrandomhouse.com/books/288763/the-power-of-intuition-by-gary-klein/

  3. Mitchell, D. J., Russo, J. E., & Pennington, N. (1989). Back to the future: Temporal perspective in the explanation of events. Journal of Behavioral Decision Making, 2(1), 25-38. https://doi.org/10.1002/bdm.3960020103

  4. Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. https://us.macmillan.com/books/9780374533557/thinkingfastandslow

  5. Janis, I. L. (1982). Groupthink: Psychological Studies of Policy Decisions and Fiascoes. Houghton Mifflin. https://www.cengage.com/c/groupthink-2e-janis/

  6. Taleb, N. N. (2007). The Black Swan: The Impact of the Highly Improbable. Random House. https://www.penguinrandomhouse.com/books/176226/the-black-swan-second-edition-by-nassim-nicholas-taleb/

  7. Edmondson, A. C. (2019). The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth. Wiley. https://www.wiley.com/en-us/The+Fearless+Organization-p-9781119477266

  8. Hubbard, D. W. (2009). The Failure of Risk Management: Why It's Broken and How to Fix It. Wiley. https://www.wiley.com/en-us/The+Failure+of+Risk+Management-p-9780470387955

  9. Veinott, E. S., Klein, G., & Wiggins, S. (2010). Evaluating the effectiveness of the PreMortem technique on plan confidence. Proceedings of the 7th International ISCRAM Conference. https://idl.iscram.org/files/veinott/2010/909_Veinott_etal2010.pdf

  10. Lovallo, D. & Kahneman, D. (2003). Delusions of success: How optimism undermines executives' decisions. Harvard Business Review, 81(7), 56-63. https://hbr.org/2003/07/delusions-of-success-how-optimism-undermines-executives-decisions

  11. Senge, P. M. (2006). The Fifth Discipline: The Art and Practice of the Learning Organization (revised edition). Currency/Doubleday. https://www.penguinrandomhouse.com/books/163984/the-fifth-discipline-by-peter-m-senge/

  12. Tetlock, P. E. & Gardner, D. (2015). Superforecasting: The Art and Science of Prediction. Crown. https://www.penguinrandomhouse.com/books/227815/superforecasting-by-philip-e-tetlock-and-dan-gardner/

  13. Sunstein, C. R. & Hastie, R. (2015). Wiser: Getting Beyond Groupthink to Make Groups Smarter. Harvard Business Review Press. https://store.hbr.org/product/wiser-getting-beyond-groupthink-to-make-groups-smarter/15064

  14. Reason, J. (1990). Human Error. Cambridge University Press. https://www.cambridge.org/core/books/human-error/F5BAB7D8AFC27B93A35AD70EED91127E