Risk Assessment Template: A Comprehensive Guide to Identifying, Evaluating, Prioritizing, and Mitigating Risks in Projects, Organizations, and Decisions

On April 20, 2010, the Deepwater Horizon drilling rig in the Gulf of Mexico experienced a catastrophic blowout that killed 11 workers and triggered the largest accidental marine oil spill in history. Over 87 days, approximately 4.9 million barrels of crude oil poured into the Gulf before the well was finally sealed. The environmental, economic, and human costs were staggering: an estimated $65 billion in total damages, the devastation of Gulf Coast fishing and tourism industries, and ecological damage that scientists continued to document more than a decade later.

The subsequent investigation by the National Commission on the BP Deepwater Horizon Oil Spill revealed that the disaster was not caused by a single unforeseeable event. It was caused by a cascade of identified risks that were inadequately assessed, improperly mitigated, and systematically underestimated. Cement integrity tests showed anomalies that were rationalized away. Pressure test results indicated potential well control problems that were misinterpreted as normal. The blowout preventer, the last line of defense against exactly this type of failure, had known maintenance issues that had not been addressed. At each stage, risks were visible to someone in the organization--but no systematic process existed to aggregate these individual risk signals into a coherent picture of overall risk exposure.

The Deepwater Horizon disaster is an extreme case, but the pattern it illustrates--risks identified but not properly assessed, risks assessed but not adequately mitigated, risks mitigated on paper but not in practice--is replicated in organizations of every size and type. Projects fail because risks that were foreseeable were not foreseen. Businesses collapse because risks that were manageable were not managed. Decisions produce catastrophic outcomes because risks that were assessable were not assessed.

What is risk assessment? Risk assessment is the systematic identification and evaluation of potential problems--understanding what could go wrong, how likely it is to go wrong, how severe the consequences would be if it did go wrong, and what can be done to prevent it from going wrong or reduce the damage if it does. Risk assessment does not eliminate uncertainty. It converts unexamined uncertainty into examined, categorized, and managed risk--transforming "something bad might happen" into "here are the specific things that might happen, here is how likely each one is, here is how bad each one would be, and here is what we are doing about the most important ones."

This template provides a structured, comprehensive framework for conducting risk assessments across any domain--project management, organizational strategy, product development, financial planning, or personal decision-making.


Phase 1: Risk Identification

Why Identification Must Be Systematic

The most dangerous risks are not the ones you assess incorrectly. They are the ones you fail to identify at all. An unidentified risk cannot be assessed, cannot be mitigated, and cannot be monitored. It exists as a blind spot--invisible until the moment it materializes as a crisis.

How do you identify risks? You brainstorm what could go wrong, learn from past failures in similar contexts, consider risk categories systematically (technical, market, execution, financial, regulatory, human), and actively seek diverse perspectives from people with different expertise and vantage points. The key insight is that risk identification must be systematic rather than ad hoc--relying on a structured process rather than on individuals' ability to think of things that might go wrong off the top of their heads.

Human cognition is poorly suited to risk identification for several well-documented reasons:

Availability bias. People assess the likelihood of risks based on how easily examples come to mind. Risks that have been recently experienced or dramatically publicized (terrorist attacks, plane crashes) are overestimated, while risks that are common but mundane (car accidents, falls) are underestimated. In organizational contexts, this means teams tend to over-prepare for the type of problem they encountered most recently and under-prepare for the full range of potential problems.

Optimism bias. People systematically underestimate the probability of negative events affecting them personally. Psychologist Neil Weinstein's research demonstrated that people rate their own chances of experiencing negative events (illness, job loss, natural disaster) as lower than average--a mathematical impossibility if everyone is doing it. In project management, optimism bias manifests as the planning fallacy: the consistent underestimation of time, cost, and risk in project plans.

Normalcy bias. People have difficulty believing that unprecedented events can occur. Because something has never happened before, the brain treats it as something that cannot happen. This bias was a factor in the response to Hurricane Katrina, where many residents of New Orleans did not evacuate despite warnings because major hurricanes had threatened the city many times without producing catastrophic flooding.

Structured Risk Identification Methods

Category-based brainstorming. Rather than asking "what could go wrong?" as an open-ended question, work through risk categories systematically. Each category prompts thinking about a different type of risk:

  • Technical risks: Technology failures, performance shortfalls, integration problems, scalability limits, technical debt, dependency vulnerabilities
  • Market risks: Customer demand changes, competitive responses, pricing pressures, market timing, regulatory changes affecting market access
  • Financial risks: Cost overruns, revenue shortfalls, funding gaps, currency fluctuations, liquidity crises, credit defaults
  • Operational risks: Process failures, supply chain disruptions, quality control breakdowns, capacity constraints, key person dependencies
  • Human risks: Skills gaps, turnover, communication failures, resistance to change, decision-making biases, ethical violations
  • External risks: Regulatory changes, economic downturns, natural disasters, political instability, pandemics, supply chain disruptions
  • Reputational risks: Public relations crises, social media backlash, customer trust erosion, partner relationship damage

Pre-mortem analysis. Psychologist Gary Klein developed the pre-mortem technique as a counterweight to the optimism that typically characterizes project planning. In a pre-mortem, the team imagines that the project has already failed and works backward to identify the most likely causes of failure. This reframing--from "what might go wrong?" to "what did go wrong?"--bypasses optimism bias by treating failure as a certainty rather than a possibility, freeing participants to articulate risks they might otherwise suppress.

Research by Mitchell, Russo, and Pennington found that pre-mortem analysis increased the ability to identify reasons for future outcomes by 30 percent compared to standard prospective thinking. The technique works because it changes the psychological frame: instead of defending a plan against hypothetical objections, participants are explaining an outcome that has already occurred.

Historical analysis. What has gone wrong in similar projects, organizations, or decisions in the past? Post-mortems, incident reports, and case studies from comparable situations provide empirical data about which risks actually materialize and how frequently. A software development team launching a new product should study the failure modes of previous product launches. A construction firm bidding on a large project should study the risk events that affected previous projects of similar scope and complexity.

Stakeholder interviews. Different stakeholders see different risks based on their vantage points. Engineers see technical risks that executives miss. Customers see usability risks that developers miss. Front-line employees see operational risks that managers miss. Finance teams see cost risks that product teams miss. A comprehensive risk identification process incorporates perspectives from all relevant stakeholder groups.

Risk identification checklist:

  • Category-based brainstorming completed across all relevant risk categories
  • Pre-mortem analysis conducted with diverse team members
  • Historical analysis reviewed: what went wrong in similar past situations?
  • Stakeholder interviews completed: what risks do different perspectives reveal?
  • External environment scanned: regulatory, economic, competitive, technological changes
  • Dependencies mapped: what external factors must hold true for success?
  • Assumptions listed: what are we taking for granted that might not be true?
  • Risks documented in a risk register with clear, specific descriptions
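The checklist's final item, a risk register with clear and specific descriptions, need not be elaborate. A minimal sketch in Python of one register entry--the field names here are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One row of a risk register. Field names are illustrative."""
    risk_id: str          # e.g. "R-001"
    category: str         # technical, market, financial, operational, ...
    description: str      # clear, specific statement of what could go wrong
    source: str           # how it was identified (brainstorm, pre-mortem, ...)
    owner: str            # named individual responsible for tracking it
    identified_on: date = field(default_factory=date.today)
    status: str = "open"  # open / mitigating / accepted / closed

register = [
    RiskEntry(
        risk_id="R-001",
        category="operational",
        description="Sole supplier of a key component halts shipments for over two weeks",
        source="category-based brainstorming",
        owner="J. Rivera",
    )
]
```

Specificity is the point: "supplier halts shipments for over two weeks" can be assessed, mitigated, and monitored; "supply chain problems" cannot.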

Phase 2: Risk Analysis and Evaluation

Assessing Probability and Impact

Once risks have been identified, each one must be evaluated along two fundamental dimensions: how likely is it to occur (probability) and how bad would it be if it did occur (impact). These two dimensions determine the risk's priority--its claim on attention and resources.

What's the difference between risk and uncertainty? Risk is quantifiable--you can estimate probabilities based on data, experience, or expert judgment. A manufacturer can estimate the probability of a machine failure based on maintenance records and mean time between failures. An investor can estimate the probability of a stock declining based on volatility data and market conditions. Uncertainty is unquantifiable--it involves unknowns that cannot be estimated because the relevant variables are not understood. This distinction was formalized by economist Frank Knight in 1921, who argued that measurable risk and unmeasurable uncertainty are fundamentally different phenomena. Donald Rumsfeld's much-mocked phrase "unknown unknowns" captures a genuine epistemological category: things we do not know that we do not know. Risk assessment addresses risk (the quantifiable). Scenario planning and resilience building address uncertainty (the unquantifiable).

Probability assessment methods:

Historical frequency. If the risk has occurred before in similar contexts, the historical frequency provides an empirical basis for probability estimation. If 15 percent of software projects in your organization have experienced critical security vulnerabilities in production, a reasonable baseline probability for this risk in a new project is 15 percent (adjusted for specific circumstances).

Expert judgment. For risks without historical frequency data, expert judgment provides the best available probability estimate. Structured expert elicitation techniques--such as the Delphi method, where experts provide independent estimates that are aggregated and refined through multiple rounds--reduce the influence of individual biases and produce more calibrated estimates than informal guesses.
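The aggregation step of a Delphi round can be sketched simply: collect independent estimates, report the group median and spread back to the experts, and repeat. A hedged illustration--the full method also circulates anonymized written rationales between rounds:

```python
from statistics import median

def aggregate_round(estimates):
    """Summarize one Delphi round: the group median is fed back to the
    experts, along with the spread, before they re-estimate."""
    m = median(estimates)
    spread = max(estimates) - min(estimates)
    return m, spread

# Round 1: five experts independently estimate a risk's probability.
round1 = [0.10, 0.30, 0.20, 0.50, 0.25]
m1, s1 = aggregate_round(round1)   # median 0.25, wide spread

# Round 2: after seeing the group result, estimates typically converge.
round2 = [0.20, 0.25, 0.22, 0.30, 0.25]
m2, s2 = aggregate_round(round2)   # similar median, much smaller spread
```

Convergence of the spread across rounds, not the first-round numbers, is the signal that the estimate is stabilizing.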

Probability scales. For practical risk assessment, a simple ordinal scale is often more useful than precise percentages:

  • Very Low (1): Less than 10% probability. Unlikely under normal conditions.
  • Low (2): 10-25% probability. Possible but not expected.
  • Medium (3): 25-50% probability. Reasonably likely; should be planned for.
  • High (4): 50-75% probability. More likely than not; expect this to occur.
  • Very High (5): Greater than 75% probability. Almost certain; plan as if it will happen.
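The scale above translates directly into code for use in a scored risk register. One caveat: the prose bands share their boundary values, so a convention is needed; this sketch assigns an estimate that lands exactly on a boundary (say 25%) to the higher band:

```python
def probability_score(p: float) -> int:
    """Map a probability estimate (0.0-1.0) onto the 1-5 ordinal scale.
    Boundary values go to the higher band by convention."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    if p < 0.10:
        return 1   # Very Low
    if p < 0.25:
        return 2   # Low
    if p < 0.50:
        return 3   # Medium
    if p < 0.75:
        return 4   # High
    return 5       # Very High
```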

Impact assessment dimensions:

Impact is not a single dimension. A risk event can have different levels of impact across different dimensions:

  • Financial impact: Direct costs, revenue loss, penalties, remediation expenses
  • Schedule impact: Delays to milestones, deadlines, deliverables, launch dates
  • Quality impact: Product defects, service degradation, customer experience damage
  • Reputational impact: Brand damage, customer trust erosion, partner relationship harm
  • Safety impact: Physical harm to people, health consequences, environmental damage
  • Strategic impact: Competitive position damage, market opportunity loss, organizational capability degradation

Impact Level     | Financial        | Schedule           | Quality                    | Reputation
Negligible (1)   | Less than $1K    | Less than 1 day    | Cosmetic defect            | No public awareness
Minor (2)        | $1K-$10K         | 1-5 days           | Minor functionality issue  | Limited complaints
Moderate (3)     | $10K-$100K       | 1-4 weeks          | Significant feature gap    | Media mention
Major (4)        | $100K-$1M        | 1-3 months         | Core functionality failure | Sustained media coverage
Severe (5)       | Greater than $1M | More than 3 months | Product unusable           | Permanent brand damage

Note: These thresholds should be calibrated to your organization's size and context. A $10K financial impact is severe for a small startup and negligible for a Fortune 500 company.

The Risk Matrix

How do you prioritize risks? Consider likelihood and impact together--risks that combine high probability with high impact demand attention first. A risk matrix provides visual prioritization by plotting each risk on a grid where one axis represents probability and the other represents impact.

The risk priority score is calculated by multiplying the probability rating by the impact rating (each on the 1-5 scale), producing scores from 1 to 25:

  • Critical (16-25): Requires immediate mitigation action. These risks can derail the project or damage the organization.
  • High (10-15): Requires active mitigation planning. These risks should have specific response plans.
  • Medium (5-9): Requires monitoring and contingency planning. These risks should be tracked and reassessed regularly.
  • Low (1-4): Requires awareness. These risks should be documented but may not justify active mitigation effort.
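Score and band can be computed mechanically from the two ratings. A short sketch using the thresholds above:

```python
def priority(probability: int, impact: int):
    """Risk priority score (probability x impact, each rated 1-5) and
    the band it falls into, using the thresholds listed above."""
    if not (1 <= probability <= 5 and 1 <= impact <= 5):
        raise ValueError("ratings must be on the 1-5 scale")
    score = probability * impact
    if score >= 16:
        band = "Critical"
    elif score >= 10:
        band = "High"
    elif score >= 5:
        band = "Medium"
    else:
        band = "Low"
    return score, band

# A medium-probability (3), major-impact (4) risk:
print(priority(3, 4))  # (12, 'High')
```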

The risk matrix has known limitations. It treats probability and impact as independent when they may be correlated. It uses ordinal scales as if they were ratio scales (a "4" is not necessarily twice as bad as a "2"). It can produce misleading rankings when risks have very different probability-impact profiles (a very-high-probability, low-impact risk and a very-low-probability, catastrophic-impact risk may receive the same score but require very different responses).

Despite these limitations, the risk matrix remains the most widely used risk prioritization tool because it provides a simple, visual, communicable summary of risk exposure that facilitates discussion and decision-making. The goal is not mathematical precision--it is structured thinking about risk.

Risk analysis checklist:

  • Each identified risk assessed for probability (using historical data, expert judgment, or structured estimation)
  • Each identified risk assessed for impact across relevant dimensions (financial, schedule, quality, reputation, safety)
  • Risk priority scores calculated (probability x impact)
  • Risks plotted on a risk matrix for visual prioritization
  • Top risks (critical and high priority) identified for immediate mitigation planning
  • Risk assessment reviewed by someone not involved in the initial assessment (fresh eyes catch biases)
  • Assessment documented with rationale (not just scores, but the reasoning behind them)

Phase 3: Risk Mitigation Planning

The Four Mitigation Strategies

What are risk mitigation strategies? There are four fundamental approaches to handling identified risks: avoid the risk entirely, reduce the probability or impact of the risk, transfer the risk to another party, or accept the risk with a contingency plan. Each strategy is appropriate in different circumstances, and the choice depends on the risk's priority, the cost of mitigation, and the organization's risk tolerance.

Strategy 1: Avoidance. Eliminate the risk by changing the plan to make the risk impossible. If a project faces a high risk from using an untested technology, avoidance means choosing a proven technology instead. If a business expansion faces a high risk from entering a volatile market, avoidance means choosing a more stable market.

Avoidance is the most effective mitigation strategy--a risk that has been removed from the plan cannot materialize. But avoidance often means forgoing the opportunity that the risk accompanies. Should you try to eliminate all risks? No--risk-taking is necessary for progress. A company that avoids all risks avoids all opportunities. An individual who avoids all risks avoids all growth. The goal is not to eliminate risk but to take risks that are well-understood, appropriately sized, and matched by potential rewards. Risk assessment enables informed risk-taking, not risk avoidance.

Strategy 2: Reduction. Decrease the probability of the risk occurring, the impact if it does occur, or both. This is the most common mitigation strategy because it allows the organization to pursue the opportunity while managing the associated risk to an acceptable level.

Probability reduction examples:

  • Adding code review processes to reduce the probability of software defects
  • Conducting regular equipment maintenance to reduce the probability of equipment failure
  • Providing training to reduce the probability of human error
  • Diversifying suppliers to reduce the probability of supply chain disruption

Impact reduction examples:

  • Creating backup systems to reduce the impact of primary system failure
  • Building financial reserves to reduce the impact of revenue shortfalls
  • Developing crisis communication plans to reduce the reputational impact of incidents
  • Implementing graceful degradation in systems to reduce the impact of partial failures

Strategy 3: Transfer. Shift the risk to another party that is better positioned to bear it. Insurance is the most common form of risk transfer: the organization pays a premium to transfer the financial impact of specified risk events to an insurance company. Other forms of risk transfer include:

  • Contracts: Fixed-price contracts transfer cost overrun risk to the contractor. Warranties transfer product failure risk to the manufacturer.
  • Outsourcing: Outsourcing a function transfers the operational risks associated with that function to the service provider (though it introduces new risks related to vendor management and dependency).
  • Hedging: Financial hedging instruments transfer market risks (currency, commodity, interest rate) to counterparties willing to accept those risks for a fee.

Risk transfer does not eliminate the risk--it moves it to another party. The risk still exists; the question is who bears the consequences if it materializes. And risk transfer creates new risks: the insurance company might not pay, the contractor might go bankrupt, the outsourced service provider might fail to perform.

Strategy 4: Acceptance. Acknowledge the risk and choose to bear it without active mitigation. Acceptance is appropriate when the cost of mitigation exceeds the expected cost of the risk, when the risk is low-priority, or when no effective mitigation is available.

Acceptance does not mean ignoring the risk. Active acceptance includes developing a contingency plan--a predefined response that will be activated if the risk materializes. The contingency plan ensures that when the accepted risk occurs, the response is swift and organized rather than improvised and chaotic.

Mitigation planning checklist:

  • Each critical and high-priority risk has a specific mitigation strategy (avoid, reduce, transfer, or accept)
  • Mitigation actions are specific, actionable, and assigned to named individuals
  • Mitigation actions have deadlines and resource requirements identified
  • Cost of mitigation compared to expected cost of unmitigated risk (mitigation should not cost more than the risk it addresses)
  • Contingency plans developed for accepted risks and for the possibility that mitigation actions fail
  • Trigger conditions defined: what observable event will activate the contingency plan?
  • Residual risk assessed: after mitigation, what risk remains?
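The cost-comparison item in the checklist is, at heart, an expected-value calculation. A rough sketch--real decisions also weigh risk appetite and non-financial impacts, so treat this as a first filter, not a verdict:

```python
def mitigation_worthwhile(p_before: float, impact_cost: float,
                          p_after: float, mitigation_cost: float):
    """Compare a mitigation's cost against the expected loss it removes.
    Expected loss = probability x financial impact."""
    risk_reduction = (p_before - p_after) * impact_cost
    return risk_reduction > mitigation_cost, risk_reduction

# A $20K supplier-diversification effort that cuts a 30% chance of a
# $500K disruption down to 10%:
worth_it, reduction = mitigation_worthwhile(0.30, 500_000, 0.10, 20_000)
# Roughly $100K of expected loss removed for $20K spent.
```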

Phase 4: Risk Monitoring and Review

Why Continuous Monitoring Is Essential

Risk assessment is not a one-time activity. It is a continuous process because risks change as the project progresses, as the environment shifts, and as new information becomes available. A risk that was low-priority at the beginning of a project may become critical as circumstances change. A risk that was not identified during initial assessment may emerge as the project encounters unanticipated conditions. A mitigation action that was effective initially may become inadequate as the risk evolves.

How often should you reassess risks? Regularly throughout the project--risks change as you learn, as the context shifts, and as new information emerges. The frequency depends on the pace of change and the stakes involved:

  • High-velocity, high-stakes environments (trading floors, emergency response, surgical teams): Continuous real-time risk monitoring
  • Project environments (software development, construction, product launches): Weekly or bi-weekly risk review at minimum, with event-triggered reviews when significant changes occur
  • Strategic environments (annual planning, investment decisions, organizational strategy): Monthly or quarterly risk review, with event-triggered reviews for major external changes
  • Personal decisions (career choices, financial planning, health decisions): Periodic review at natural decision points, plus triggered review when circumstances change significantly

Risk Monitoring Mechanisms

Key Risk Indicators (KRIs). KRIs are measurable values that provide early warning that a risk is increasing in probability or potential impact. They function as the organizational equivalent of a smoke detector--alerting to danger before the fire starts.

Examples:

  • For schedule risk: percentage of milestones completed on time (declining trend indicates increasing schedule risk)
  • For financial risk: actual spending versus budget (overrun trend indicates increasing cost risk)
  • For quality risk: defect discovery rate (increasing trend indicates increasing quality risk)
  • For market risk: customer churn rate (increasing trend indicates increasing market risk)
  • For operational risk: system downtime incidents (increasing frequency indicates increasing operational risk)

Effective KRIs are:

  • Leading indicators (predict future problems) rather than lagging indicators (confirm past problems)
  • Quantitative and measurable (not subjective assessments)
  • Sensitive enough to detect meaningful changes without triggering false alarms
  • Tied to specific risks in the risk register, not generic health metrics
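A KRI with a smoothing window is easy to implement. A minimal sketch--the window size and threshold here are illustrative and should come from your historical baseline:

```python
def kri_alert(history, threshold, window=3):
    """Alert when the average of the last `window` observations exceeds
    the threshold. Averaging smooths out single-point false alarms."""
    if len(history) < window:
        return False          # not enough data to judge a trend
    recent = history[-window:]
    return sum(recent) / window > threshold

# Weekly defect discovery rate (defects per thousand lines of code):
defect_rate = [1.1, 1.0, 1.3, 1.8, 2.2, 2.5]
alert = kri_alert(defect_rate, threshold=2.0)  # True: rising trend
```

Tuning the window trades sensitivity against false alarms: a longer window reacts more slowly but rarely cries wolf.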

Risk review meetings. Scheduled reviews where the team examines the risk register, updates assessments based on new information, evaluates the effectiveness of mitigation actions, and identifies new risks. Risk review meetings are most effective when they are:

  • Regular and scheduled (not ad hoc, which tends to mean they do not happen)
  • Brief and focused (30-60 minutes; not combined with general project status meetings)
  • Action-oriented (each review produces specific actions, not just discussion)
  • Inclusive (perspectives from different roles and levels prevent blind spots)

Risk monitoring checklist:

  • Risk register updated regularly (at least at each review cycle)
  • Key Risk Indicators defined for top risks and monitored at appropriate frequency
  • Risk review meetings scheduled at appropriate intervals
  • New risks added to register as they are identified
  • Closed risks archived with resolution documentation
  • Mitigation action effectiveness evaluated: are mitigation actions actually reducing risk?
  • Risk trends tracked over time: is overall risk exposure increasing or decreasing?
  • Escalation criteria defined: what risk level triggers escalation to senior management?

Phase 5: Risk Communication

Making Risk Visible to Decision-Makers

The most thorough risk assessment is useless if its findings do not reach and influence the people making decisions. Risk communication is the process of making risk information accessible, understandable, and actionable for decision-makers who may not have participated in the assessment process.

Effective risk communication faces several challenges:

Complexity reduction without distortion. Decision-makers need summaries, not the full risk register. But summarizing inherently loses nuance. The challenge is to reduce complexity while preserving the essential information needed for good decisions. A risk heat map that shows ten risks as colored dots on a matrix communicates priorities quickly but does not convey the reasoning behind the assessments or the interactions between risks.

Cognitive biases in risk perception. Decision-makers bring the same cognitive biases to risk information that affect risk identification: availability bias, optimism bias, anchoring, and the tendency to treat low-probability, high-impact risks as if they will not occur. Risk communication must be designed to counteract these biases--for example, by presenting risks in terms of concrete scenarios rather than abstract probabilities.

Risk appetite alignment. Different stakeholders have different risk tolerances. An entrepreneur may be comfortable with risks that a corporate board would find unacceptable. A startup investor expects a portfolio approach where most investments fail but a few succeed enormously. A public utility is expected to minimize all risks because the consequences of failure are borne by the public. Risk communication must account for the risk appetite of the audience and frame findings accordingly.

Risk communication checklist:

  • Risk summary report prepared for decision-makers (clear, concise, visual)
  • Top risks highlighted with specific recommended actions
  • Risk information presented at appropriate level of detail for the audience
  • Risk trends communicated (not just current state, but direction of change)
  • Risk interdependencies noted (risks that could trigger or amplify each other)
  • Recommended risk responses include resource requirements and trade-offs
  • Risk communication is regular and proactive, not just reactive to crises

The Complete Risk Assessment Template

What Should Risk Assessment Templates Include?

A comprehensive risk assessment template includes risk identification, likelihood assessment, impact evaluation, mitigation strategies, monitoring plan, and contingencies. The following template synthesizes the phases described above into a practical, usable format.

Section 1: Context and Scope

  • What is being assessed? (Project, decision, organization, product)
  • What is the time horizon? (Near-term, medium-term, long-term)
  • Who are the stakeholders? (Who is affected by risks and risk decisions?)
  • What is the risk appetite? (How much risk is acceptable?)
  • What are the boundaries? (What is in scope and out of scope?)

Section 2: Risk Identification

  • Category-based brainstorming completed
  • Pre-mortem analysis conducted
  • Historical analysis reviewed
  • Stakeholder perspectives gathered
  • Assumptions documented
  • Dependencies mapped
  • All identified risks entered in risk register

Section 3: Risk Analysis

  • Probability assessed for each risk
  • Impact assessed for each risk (across relevant dimensions)
  • Priority scores calculated
  • Risk matrix populated
  • Risk rankings reviewed and validated

Section 4: Risk Response

  • Mitigation strategy selected for each significant risk (avoid, reduce, transfer, accept)
  • Specific mitigation actions defined with owners and deadlines
  • Contingency plans developed for accepted risks
  • Residual risk assessed after mitigation
  • Cost of mitigation justified relative to risk exposure

Section 5: Monitoring and Control

  • Key Risk Indicators defined for top risks
  • Review schedule established
  • Escalation criteria defined
  • Risk register maintenance process established
  • Lessons learned capture process established

Common Risk Assessment Failures

Risk assessment processes fail in predictable ways. Understanding these failure modes helps you design assessments that avoid them:

Identifying risks but not acting on them. Many organizations conduct risk assessments as a compliance exercise--the risk register is created, reviewed, filed, and forgotten. The assessment exists as a document but does not influence decisions or resource allocation. This failure is sometimes called risk theater: performing the appearance of risk management without the substance.

Anchoring on initial assessments. Once a risk has been assessed as "low" or "medium," there is a strong tendency to maintain that assessment even when new evidence suggests the risk has increased. This anchoring effect means that risk registers become stale--reflecting the conditions that existed when the assessment was conducted rather than current conditions.

Ignoring risk interactions. Individual risks are assessed in isolation, but risks interact. A supply chain disruption (Risk A) combined with a demand spike (Risk B) produces an inventory crisis that is worse than either risk alone. A key employee departure (Risk C) combined with a technology migration (Risk D) produces a knowledge loss that neither risk would produce independently. Risk assessments that evaluate each risk in isolation miss these interaction effects.

Confusing risk assessment with risk elimination. Some organizations treat the goal of risk assessment as producing a risk register with no high-priority risks--which incentivizes underestimating risk rather than accurately assessing it. The goal is not to have a clean risk register. The goal is to have an accurate risk register that informs decisions about which risks to mitigate, which to accept, and how to allocate limited risk management resources.

Over-quantifying subjective judgments. Assigning a precise probability of 23 percent to a risk event implies a level of precision that the assessment does not support. Using Monte Carlo simulations to produce cost-at-risk distributions with three decimal places on inputs that are educated guesses does not improve the accuracy of the assessment--it produces the illusion of precision that can be more dangerous than acknowledged imprecision. The appropriate level of quantification depends on the quality of available data. When data is rich, precise quantification is warranted. When data is sparse, simple ordinal scales (low/medium/high) more honestly represent the state of knowledge.

Risk assessment is ultimately an exercise in disciplined humility--the acknowledgment that the future is uncertain, that plans will encounter problems, that some of those problems can be anticipated and prepared for, and that the effort of anticipation and preparation, while imperfect, produces better outcomes than hoping for the best. The template in this guide provides the structure for that exercise. The discipline of using it honestly and consistently provides the value.


References and Further Reading

  1. Kaplan, S. & Garrick, B.J. (1981). "On the Quantitative Definition of Risk." Risk Analysis, 1(1), 11-27. https://doi.org/10.1111/j.1539-6924.1981.tb01350.x

  2. National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling. (2011). Deep Water: The Gulf Oil Disaster and the Future of Offshore Drilling. https://www.govinfo.gov/content/pkg/GPO-OILCOMMISSION/pdf/GPO-OILCOMMISSION.pdf

  3. Klein, G. (2007). "Performing a Project Premortem." Harvard Business Review. https://hbr.org/2007/09/performing-a-project-premortem

  4. Kahneman, D. & Tversky, A. (1979). "Prospect Theory: An Analysis of Decision Under Risk." Econometrica, 47(2), 263-291. https://doi.org/10.2307/1914185

  5. Taleb, N.N. (2007). The Black Swan: The Impact of the Highly Improbable. Random House. https://en.wikipedia.org/wiki/The_Black_Swan_(Taleb_book)

  6. Hillson, D. (2009). Managing Risk in Projects. Gower Publishing. https://risk-doctor.com/

  7. Hubbard, D.W. (2009). The Failure of Risk Management: Why It's Broken and How to Fix It. Wiley. https://www.wiley.com/en-us/The+Failure+of+Risk+Management-p-9780470387955

  8. Weinstein, N.D. (1980). "Unrealistic Optimism About Future Life Events." Journal of Personality and Social Psychology, 39(5), 806-820. https://doi.org/10.1037/0022-3514.39.5.806

  9. Mitchell, D.J., Russo, J.E. & Pennington, N. (1989). "Back to the Future: Temporal Perspective in the Explanation of Events." Journal of Behavioral Decision Making, 2(1), 25-38. https://doi.org/10.1002/bdm.3960020103

  10. ISO 31000:2018. Risk Management -- Guidelines. International Organization for Standardization. https://www.iso.org/iso-31000-risk-management.html

  11. Flyvbjerg, B. (2006). "From Nobel Prize to Project Management: Getting Risks Right." Project Management Journal, 37(3), 5-15. https://doi.org/10.1177/875697280603700302

  12. Slovic, P. (1987). "Perception of Risk." Science, 236(4799), 280-285. https://doi.org/10.1126/science.3563507

  13. Knight, F.H. (1921). Risk, Uncertainty and Profit. Houghton Mifflin. https://en.wikipedia.org/wiki/Risk,_Uncertainty_and_Profit

  14. Reason, J. (1997). Managing the Risks of Organizational Accidents. Ashgate. https://doi.org/10.4324/9781315543543