Stop project risk analysis failure – 6 tips from (secret) intelligence estimation

Risk analysis failures in projects have a human impact as well as a financial one. Here are 6 tips (ideas) derived from techniques used in intelligence analysis and estimation.

Project managers are often blamed for failing to manage risks effectively. And in intelligence work, disasters are regularly blamed on the analysts. In both cases, how much is really their fault? And how much is systemic or the fault of other people?

To see my take, see the commentary at the end. For more on project risk analysis, see the FAQs below. And for a counterview, see “Risk analysis is unscientific nonsense”, also below.

Early morning sunlight on the cliff face of Mont Sainte-Victoire
“A risk analysis failure when climbing cliffs could be fatal.” Picture of Mont Sainte-Victoire, in southern France. © Adrian Cowderoy.

6 tips from intelligence estimation

Below, we look at what goes wrong with analysis, and how to learn from the intelligence profession. This is just for analysis, so we’re not looking at other major influences on managing risk, such as the importance of good leadership.

Risk analysis failure #1: Treating humans as though they were dice

Humans behave differently to dice

Risk analysis techniques rest on Bayesian probability theory. That carries a huge assumption, because humans are self-aware, watching and learning. Dice are not.

We choose our actions, as do groups and organisations. We respond to our competitors, react to our environment, and have motives. And even when there is only one good option, we sometimes act obtusely.

In intelligence analysis, estimates are made for the different options of how an opponent may behave. These are expressed in words that have precise probabilistic meanings.

For example: “For this suspicious person, there’s a remote chance he takes Outcome A, it’s unlikely he’ll take Outcome B, and Outcome C is likely but not certain.” (Words like “remote” and “unlikely” have quantitative meanings, as in the tables below.)

For project risk analysis, we break total risk into multiple individual risks that could potentially be sized and measured. The estimates come from project managers and others trained in the basics – and very seldom from professional risk analysts. So to make it easier, we often use simple scales like “very low” to “very high” (see the sketch below). It’s similar to intelligence estimation, but not as rigorous or as time consuming.
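
As an illustration, here is a minimal Python sketch of such a scale. The labels and the numeric bands behind them are assumptions invented for this example – any real organisation would calibrate its own.

```python
# A sketch of a simple qualitative scale. The labels and numeric bands are
# invented for illustration; a real organisation would calibrate its own.

SCALE = {
    "very low":  (0.00, 0.10),
    "low":       (0.10, 0.30),
    "medium":    (0.30, 0.50),
    "high":      (0.50, 0.75),
    "very high": (0.75, 1.00),
}

def label_to_midpoint(label: str) -> float:
    """Return an indicative probability for a scale label."""
    low, high = SCALE[label.lower()]
    return (low + high) / 2

print(label_to_midpoint("High"))  # 0.625: an indicator, not an accurate number
```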

In intelligence work, suspects (enemies) don’t operate in isolation, and they watch their opponents, including other people’s intelligence services. In project management, the managers of risks are watching each other – for example, waiting for some other workstream to cause delays so that their own growing risk stays hidden.

Conclusion: probability estimates of project risk are based on false assumptions:

  • that the risks are independent,
  • that we have a precise understanding of how they are being managed,
  • and that the people estimating the risk size have analytic skills.

Tip:

  • Don’t rely on the numbers being accurate – they’re just a simple indicator.

Risk analysis failure #2: There are usually more than 2 outcomes

In projects, we are simplistic about the outcomes of an individual risk: Outcome A is that it happens, Outcome B is that it doesn’t happen, and there is no Outcome C. It’s like seeing the world in black and white, with no greys or colours.

We make this simplification intentionally: it simplifies the analysis of multiple risks in a risk register, and it is easy to teach and use. It has worked well for managing individual risks.

Where it fails is in reporting risk to executive decision makers such as a Project Steering Committee. At this level, we’re looking at the big picture, in terms of high-level numbers and the biggest 3-6 risks. Some of those high-level risks consist of smaller risks that have been rolled up into a summary view. They are often not simple binary risks that happen or don’t happen. There can be a range of outcomes, yet we still report them as a single probability and impact. That’s a gross simplification.

Why? Because some members of a steering committee have limited time – the story needs to be short and simple, with a narrative that continues from one meeting to the next.

But that same argument applies in strategic intelligence when the customers (audience) are politicians and senior decision makers. Good intelligence analysis gives options and caveats. It may be simplified in a headline statement, but the detail exists elsewhere. And the wording they use is precise enough for accurate decisions.

Conclusion:

  • Analysis of individual risks helps with risk management.
  • High level summaries of risks can include gross simplifications.

Options (tips) to avoid overloading your work with a risk analysis built on big assumptions:

  • Consider the different outcomes and their individual probabilities, as in the sketch below. Use precise wording in summaries.
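
To make that concrete, here is a minimal Python sketch of one risk analysed as several outcomes, each with its own probability and impact. The outcomes and numbers are invented for illustration.

```python
# One risk analysed as several outcomes, each with its own probability and
# impact (here, days of delay). All names and numbers are invented.

outcomes = [
    ("Outcome A: supplier delivers very late",     0.05, 40),
    ("Outcome B: supplier delivers partial scope", 0.25, 15),
    ("Outcome C: supplier delivers on time",       0.70,  0),
]

# The outcomes should be exhaustive and mutually exclusive.
assert abs(sum(p for _, p, _ in outcomes) - 1.0) < 1e-9

expected_delay = sum(p * impact for _, p, impact in outcomes)
print(f"Expected delay: {expected_delay:.2f} days")  # 5.75 days

# In the summary, report the precise wording per outcome, not just one number.
for desc, p, impact in outcomes:
    print(f"{desc}: probability {p:.0%}, impact {impact} days")
```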

Probability scale used by the US Intelligence Community

0-5%: Almost no chance (or “remote”)
5-20%: Very unlikely (“highly improbable”)
20-45%: Unlikely (“improbable”)
45-55%: Roughly even chance (“roughly even odds”)
55-80%: Likely (“probable”)
80-95%: Very likely (“highly probable”)
95-100%: Almost certain (“nearly certain”)

Probability scale used in the UK’s National Strategic Assessments

0-5%: Remote chance
10-20%: Highly unlikely
25-35%: Unlikely
40-50%: Realistic possibility
55-75%: Likely / Probable
80-90%: Highly likely
95-100%: Almost certain

The ranges are purposely designed to force reasoned choices on wording, and to deter simplistic answers of 50:50.
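
For illustration, here is a small Python sketch that maps a numeric estimate onto the UK yardstick wording above. The handling of the deliberate gaps between bands is an assumption about how an analyst might use them.

```python
# Map a numeric estimate to the UK yardstick wording above. The deliberate
# gaps between bands are kept: an estimate that lands in a gap (e.g. 0.22)
# forces the analyst to re-examine it and pick a side.

UK_YARDSTICK = [
    ((0.00, 0.05), "remote chance"),
    ((0.10, 0.20), "highly unlikely"),
    ((0.25, 0.35), "unlikely"),
    ((0.40, 0.50), "realistic possibility"),
    ((0.55, 0.75), "likely / probable"),
    ((0.80, 0.90), "highly likely"),
    ((0.95, 1.00), "almost certain"),
]

def yardstick_wording(p: float) -> str:
    for (low, high), wording in UK_YARDSTICK:
        if low <= p <= high:
            return wording
    return "in a gap: re-examine the estimate and choose a band"

print(yardstick_wording(0.60))  # likely / probable
print(yardstick_wording(0.22))  # in a gap: re-examine the estimate and choose a band
```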

Risk analysis failure #3: The outcome may be … different to what was expected

The previous section was about estimating the probability for different outcomes. Intelligence analysts point out that the way things happen is not necessarily the way that is expected.

(Simple map of Afghanistan)
Following the US’s 2021 withdrawal from Afghanistan, the speed of the Taliban advance was either not anticipated, or was considered so “remote” that it was not thought necessary to plan for it. The military assessment had missed that, without the critical props supporting it, the existing government would collapse.

Risks in projects often evolve. For an example, take a risk ID and track back through the weekly or monthly changes to that risk. In its earliest incarnations it can be very different, and it may later have been split from or merged with other risks. So whatever was predicted at the start, something else subsequently happened (or was avoided).

The point? For some risks, the risk is a “known unknown” and putting numbers to it could be dangerously misleading.

  • At the start of the US pull-out from Afghanistan, I read a strategic analysis that considered 3 possible outcomes. None of those involved a rapid and total victory by the Taliban. A lesson from Afghanistan’s history: nothing is predictable.

Tip:

  • Call out risks that can’t be defined or can’t be predicted. Treat these unknowns separately from the total of the known-knowns.

Related reading on Afghanistan:

Ben Barry’s article on “Three scenarios for Afghanistan’s future”, IISS blogs, 4th August 2021. https://www.iiss.org/blogs/analysis/2021/08/afghanistan-us-nato-withdrawal-taliban
And see his subsequent blog just 15 days later: “Understanding the Taliban’s military victory”, https://www.iiss.org/blogs/analysis/2021/08/taliban-military-victory.

Risk analysis failure #4: Excluding the unknown unknowns

In many of the hard situations we face, there are unknowns. Sometimes it’s things that completely surprise or confuse us. And sometimes it’s risks and opportunities we know exist, but we can’t describe properly or choose to ignore – the so-called “known unknowns”.

Unknown unknowns are so abstract we may not see them until they hit. All we can assess are the symptoms of anomalies, which may or may not be important. (For projects, this is covered in depth on this website in 4 ways to manage unknown unknowns and their opportunities, and illustrated in stories such as “Too many presidents” and “An Agency man”.)

Tip:

  • Watch for the symptoms or rumours of major anomalies, and do preliminary investigations.

Risk analysis failure #5: Failed communications

The quality of communications varies enormously between different companies, in my experience.

I had one client that required all project managers to keep a detailed risk analysis, updated weekly and analysed in depth. There was so much rapidly changing information that only the author read it. Not only did communication fail, but the project manager ended up with responsibility for managing all the risks. (It was also rude: it assumed the project manager wasn’t capable of remembering their risks without writing them down.)

At the other extreme, I’ve worked in environments where risk and risk management are half the dialogue of every management meeting. That includes watching for risks that may have been missed, and for symptoms of unknowns that need investigating. The communication is active, so people understand, and the risk descriptions and analysis improve.

In intelligence, analysis reports are often distributed widely to anyone (with suitable security clearance) who might be interested. Most recipients are only superficially interested. But many of those who are genuinely interested will already have seen parts of the research and provisionally formed their own opinions. So analysts are competing with preconceptions, and the presentation and dialogue become important. (In that, it’s similar to the positive end of project risk management.)

A challenge that has doggedly followed intelligence estimates for decades is the tendency to use precise and very subtle wording that is technically correct, but misunderstood by the recipient. Political leaders have made misjudgements because of that. Within projects, there are similar dangers for project managers reporting risks to sponsors and Steering Committees.

Tips for projects:

  • Don’t assume people read everything in the risk registers and analysis.
  • Drive management meetings using the risks and problems that you’re struggling to manage, and the decisions that need to be made.
  • As a project or risk manager, if others don’t take your warning seriously, keep fighting to improve the evidence and argument. It’s not good enough to say, “I told them, they ignored me.”

For the importance of communications in risk management:

See Chapter 3 of “Risk: A User’s Guide” by Gen (retired) Stanley McChrystal and Anna Butrico. The whole book is informative, with my favourite being the chapter on Leadership. https://www.mcchrystalgroup.com/library/risk-a-users-guide/

For anecdotes about intelligence communication failures:

See David Omand, “How Spies Think”, 2020. It is focussed on the methods of intelligence research, analysis and use. Reviews at https://www.goodreads.com/book/show/51780145-how-spies-think and https://2bookspermonth.com/how-spies-think/ and https://inews.co.uk/inews-lifestyle/people/thinking-like-a-spy-truth-lies-former-head-of-gchq-david-omand-748465

Risk analysis failure #6: Not analysing the impact on people and the business

A risk analysis failure dramatically changes a project: it takes longer, costs more, delivers less … or is stopped entirely. Others are vulnerable to project failure too: the project team and their reputations, the customers, and the business whose opportunity is lost.

In the UK, the 2011 counter-terrorism strategy CONTEST relies on a risk equation that includes “vulnerability” as well as likelihood and impacts. Collectively these are used to identify which of the five types of response are needed: Pursue, Prevent, Protect, and Prepare for the initial response, and build Resilience. See https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/97995/strategy-contest.pdf

Obviously terrorism is very different to project failure. What’s interesting about their approach is that they introduce a new variable, “vulnerability”, because “impact” alone is not sufficient to cover their needs. They also use the results of the analysis to drive choices between Pursue, Prevent, etc.

In projects, we increasingly use benefits analysis to describe the tangible upsides (and perhaps downsides) that will come from the things the project delivers. The finance team focus on those with a financial return, but there are others that help people, and many of those can be quantified.

I’ve used project risk analysis where we look at risk to project goals and, separately, at risk to benefits. As with CONTEST, the type of risk drives a different kind of response.

Tips:

  • Include potential problems that impact on business benefits as well as those that impact on the project.
  • Label each potential problem as being primarily about “benefits” or about “project” – see the sketch below.
  • The project manager will look to the business for someone to manage a benefits risk, and to the internal team for project risks.
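
As a minimal sketch of that labelling, here is one possible shape in Python. The record fields and the routing rule are invented for illustration, not a standard schema.

```python
# Label risks as "project" or "benefits" and route them to an owner pool.
# Field names and the routing rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Risk:
    risk_id: str
    description: str
    category: str  # "project" or "benefits"

def suggested_owner_pool(risk: Risk) -> str:
    """Benefits risks go to the business; project risks to the internal team."""
    return "business stakeholders" if risk.category == "benefits" else "project team"

risks = [
    Risk("R-101", "Key developer leaves mid-build", "project"),
    Risk("R-102", "Users reject the new workflow, eroding savings", "benefits"),
]

for r in risks:
    print(f"{r.risk_id}: find an owner among the {suggested_owner_pool(r)}")
```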

A counter opinion: “Risk analysis is unscientific nonsense”

I have a statistician friend who hates project risk analysis. She sees it as mumbo jumbo with no scientific foundation. She’s a scientist by training, and enjoys stating her opinions.

“What kind of science is it,” she asked, “when you estimate impact and probability, but don’t collect and analyse data on what actually happens?”

And later: “You are estimating probability based on nothing but guesses. It’s meaningless.”

Here’s the conundrum. If it were a science, it would be resolvable with maths and logic. Potentially, Artificial Intelligence heuristics could even be used, if there were enough examples. But our world is changing in unpredictable ways, in part because we’re taking on new challenges for which there is no history.

Commentary: My take on risk analysis failures

My own journey into risk management started in 1995 in a pan-European research project. The project was about improving software project management, and within it my own research area was to adapt risk analysis to suit the broader approach. I contributed to the APM’s original PRAM risk management guide – see the latest version at https://www.apm.org.uk/resources/find-a-resource/pram-mini-guide/. This subsequently influenced the risk management within PRINCE2 and Managing Successful Programmes (MSP).

Over the last 15 years, risk management has been a core tool for me, and I’ve watched project managers and analysts using it. And I’ve spent endless hours talking to the customers of risk analysis (senior managers and decision makers).

The way it’s practised is often very effective at reducing risk and changing attitudes. But in some places it’s not used as we intended. For example, I’ve regularly seen it used defensively, to prove that anything bad that happens was predicted – and is someone else’s fault.

Recently I’ve been studying intelligence analysis, as part of writing spy stories from the point of view of support staff. (For more, see stories and articles on intelligence analysis and intelligence research.) The way they estimate probability for specific security threats fascinates me. It also horrifies me in its naivety.

Intelligence analysis has achieved many successes over the last eight years. But there have also been tragic failings. Many of those are well documented. My interest here is in exploring how the lessons from intelligence failures can help us improve project risk analysis.

FAQs on project risk analysis

What is project risk analysis?

Risk analysis is about sizing risk, for that part of the overall risk we understand.

There are different techniques, varying from simplistic through to building probability distributions and examining the different ways the risk can impact on us.

In project risk management, individual risks (potential problems) are listed and assessed in terms of probability and impact on project success and/or business benefits. A total risk picture can then be calculated.

This is only addressing risks that can be quantified, and the science is questionable.

What is project risk management?

Risk management is the process of improving the risk for a project and its subsequent benefits.

Improvement includes avoiding it, reducing its impact, making it less likely, delaying it, or transferring the impact to someone else.

Terminology variations: Some practitioners use “risk management” for just the risk reduction actions. Others use it to cover the entire process of risk identification, estimating sizes for as much as possible, reporting it, and reducing it.

What’s the difference between a risk and an unknown?

In project management terms, a “risk” is a potential problem that would be clearly recognized if it had happened.

An “unknown” is a negative threat that cannot be defined sufficiently to estimate its probability or the impacts it could have. Some unknowns later evolve into a risk or a problem, but they can be significantly different to the original expectation.

What’s the difference between a known-unknown and an unknown-unknown?

A known-unknown can be described and it may be possible to take risk management actions on it.

At best, an unknown-unknown consists of symptoms that indicate risk exists and should be investigated. At worst, the term unknown-unknown is a reference to a surprise that might have been anticipated.

How do I estimate the size of a potential problem?

In project management terms, it’s the probability of the event occurring multiplied by the impact.

The purist viewpoint has risk estimated against linear scales, such as project cost, delay, and reduction in business benefits.
{Risk exposure of a potential problem} = {likely impact of potential problem} x probability

A common approach is to use a simple linear scale from 1 to 5 for impact, and perhaps also for probability. It’s simplistic and statistical nonsense, but it’s easy to use and helps focus attention on the larger risks.

If your risk comes from an enemy, they will probably be trying to exploit your weaknesses and increase the size of your risk. So in anti-terrorism, your vulnerability is also included:
{Risk exposure to an enemy} = {likely impact} x probability x vulnerability
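
Here is a minimal Python sketch of the two formulas above; the example numbers are invented.

```python
# The two exposure formulas above, directly transcribed. Impact is on a
# linear scale (e.g. days of delay or cost); vulnerability is a multiplier
# reflecting how exploitable you are. Example numbers are invented.

def risk_exposure(impact: float, probability: float) -> float:
    """{Risk exposure} = {likely impact} x probability"""
    return impact * probability

def adversarial_risk_exposure(impact: float, probability: float,
                              vulnerability: float) -> float:
    """{Risk exposure to an enemy} = {likely impact} x probability x vulnerability"""
    return impact * probability * vulnerability

print(risk_exposure(impact=20, probability=0.3))  # 6.0 (days of delay)
print(adversarial_risk_exposure(impact=20, probability=0.3, vulnerability=1.5))  # 9.0
```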

What is the total risk of a project?

For a risk register, it’s the sum of risk from each of the potential problems.

This assumes the risks are independent (which is seldom true) and that they can be sized by weighting their likely impact by their probability.
{Total risk exposure} = the sum of {risk exposure} for all quantifiable potential problems
… where impact is measured on a linear scale

Total risk exposure excludes known-unknowns and unknown-unknowns because they can’t be quantified.
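
A minimal Python sketch of that totalling, assuming an illustrative register structure in which unknowns are listed but carry no numbers:

```python
# Totalling a risk register. Unknowns carry no numbers, so they are reported
# separately rather than silently dropped into the total. Field names are
# illustrative, not a standard schema.

register = [
    {"id": "R-1", "impact_days": 20, "probability": 0.30},
    {"id": "R-2", "impact_days": 10, "probability": 0.50},
    {"id": "U-1", "impact_days": None, "probability": None,
     "note": "symptoms of integration trouble: investigate"},
]

quantified = [r for r in register if r["probability"] is not None]
unknowns = [r for r in register if r["probability"] is None]

total_exposure = sum(r["impact_days"] * r["probability"] for r in quantified)
print(f"Total risk exposure (quantified risks only): {total_exposure} days")  # 11.0 days
print("Report separately:", [u["id"] for u in unknowns])  # ['U-1']
```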