The Reality of Risk Exposure

Over the past few weeks I’ve been thinking a lot about risk exposure in the context of managing projects. Exposure is a technique used almost universally when managing risks, yet as I’ve already discussed, it can cause major problems because it is a precise number built on mostly made-up information. At the same time, exposure is used widely and successfully - otherwise there wouldn’t be so much literature across the web telling you to calculate it.

This raises the question: is risk exposure really as meaningless as I’ve made it out to be? I've collected some data that helps answer it.

Data Collection and Context


Risk management is one of the basic subjects covered in the Managing Software Development course, one of the five core courses students in the Carnegie Mellon Master of Software Engineering (MSE) program take to complete their degree. Students learn about the continuous risk management paradigm from the Software Engineering Institute (SEI). Two of the cornerstones of this technique are the threshold of success and condition-consequence risk statements.

With ready access to risk management experts at the SEI, nearly every team conducts a facilitated small team risk evaluation workshop in which risks are collected with the help of a taxonomy-based questionnaire (pdf), analyzed, and prioritized using group multi-voting. The basic workshop has been conducted the same way for close to a decade, and many teams have placed the risk data collected during the workshop in the MSE’s project archive.

I’ve gathered data from these small team risk evaluation workshops for 9 MSE Studio teams, a total of 164 identified, analyzed, and prioritized risks.

What’s in the Data?


During a risk evaluation workshop, teams identify risks using their threshold of success as a guide. Once identified, risks are briefly analyzed and assigned impact, probability, and time frame values based on a rough average of the team members’ initial gut feelings about the risk. These values are assigned simply so that when a manager asks to see the probability, for example, there is a value to report. Each of impact, probability, and time frame can take only one of 3 to 4 values; the idea is that by decreasing the precision we can increase the accuracy. Values are assigned based on a rubric. For the purposes of calculating a risk exposure I assigned each of the analysis categories a number (a sketch of the calculation appears after the rubric below). Time frame is not used in calculating exposure.

Impact
  • Catastrophic - The team will be unable to meet the threshold of success. (numeric value 4)
  • Critical - The team can only meet the threshold of success with significant additional effort and stress. (numeric value 3)
  • Marginal - The team can meet the threshold of success with minimal extra effort. (numeric value 2)
  • Negligible - There is no real impact on achieving the threshold of success, or only a small increase in effort. (numeric value 1)
Probability
  • High - Chance of becoming a problem is above about 80%. (numeric value 0.8)
  • Medium - Chance of becoming a problem is about 50/50. (numeric value 0.5)
  • Low - Chance of becoming a problem is below about 20%. (numeric value 0.2)
Time Frame
  • Short - May occur in about a month or less.
  • Medium - May occur in 1 to 3 months.
  • Long - May occur in more than 3 months.
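
To make the exposure calculation concrete, here is a minimal sketch in Python. Only the bucket-to-number mapping comes from the rubric above; the function and variable names are my own illustration, and exposure is computed the usual way as impact multiplied by probability.

```python
# Bucket-to-number mappings taken from the rubric above.
IMPACT = {"Catastrophic": 4, "Critical": 3, "Marginal": 2, "Negligible": 1}
PROBABILITY = {"High": 0.8, "Medium": 0.5, "Low": 0.2}

def exposure(impact: str, probability: str) -> float:
    """Traditional risk exposure: impact times probability.
    Time frame is deliberately left out, mirroring the usual definition."""
    return IMPACT[impact] * PROBABILITY[probability]

# Example: a critical risk with a roughly 50/50 chance of becoming a problem.
print(exposure("Critical", "Medium"))  # 1.5
```
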
Instead of relying on the results from the analysis, teams perform 3 to 4 rounds of multi-voting, and the final multi-voting rank is recorded. Not all teams ranked all risks, since teams generally only deal with the top few risks, usually fewer than 10. This idea is captured in the priority: a risk is either high priority, meaning the team is actively addressing it, or low priority, meaning the team is aware of it but it was not ranked high enough to deal with yet. Teams might choose different strategies for determining priority. The two most popular are to examine only the top X risks or to rely on consensus derived from how the risks clustered as a result of multi-voting. Usually there is strong team consensus for the top 4 to 5 risks and weak consensus after that.
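
Purely as an illustration of those two prioritization strategies, the sketch below marks risks as high or low priority either by taking the top X ranked risks or by cutting at the first large gap in the multi-vote counts. The function names, gap threshold, and example vote counts are assumptions of mine, not part of the workshop method.

```python
def top_x_priority(ranked_risks, x=5):
    """Strategy 1: the top x ranked risks are high priority, the rest low."""
    return {risk: ("high" if position < x else "low")
            for position, risk in enumerate(ranked_risks)}

def cluster_priority(vote_counts, min_gap=3):
    """Strategy 2: high priority until the first large drop in vote counts,
    a rough stand-in for 'where the multi-vote consensus clusters break'."""
    ordered = sorted(vote_counts.items(), key=lambda kv: kv[1], reverse=True)
    priority, still_high = {}, True
    for i, (risk, votes) in enumerate(ordered):
        priority[risk] = "high" if still_high else "low"
        next_votes = ordered[i + 1][1] if i + 1 < len(ordered) else 0
        if still_high and votes - next_votes >= min_gap:
            still_high = False
    return priority

# Example: clear consensus on the first three risks, weak consensus after that.
print(cluster_priority({"R1": 9, "R2": 8, "R3": 7, "R4": 3, "R5": 2}))
```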

Analysis and Discussion


My hypothesis was that teams’ rankings would generally match exposure, meaning that risks ranked highly would also have a high exposure. As the data shows, this is generally the case: on average, nearly every team’s high-priority risks were also the ones with the highest exposure.



Examining the risks’ rank and exposure tells a similar story, though less convincingly. There is a relatively weak negative correlation (correlation coefficient of -0.22) between exposure and team-assigned rank. Basically, the best that can be said is that there is a general downward trend in exposure as rank increases, but there is enough variation that I can’t really say anything for certain.
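
For reference, that correlation can be reproduced from the raw data (linked at the end of the post) with a few lines of Python; the column names below are placeholders of mine rather than the archive’s actual headers.

```python
import pandas as pd

# Column names here are assumptions for illustration, not the archive's actual headers.
risks = pd.read_csv("risk_data.csv")

# Exposure = impact x probability, then correlate against the team-assigned multi-vote rank.
risks["exposure"] = risks["impact"] * risks["probability"]
print(risks["exposure"].corr(risks["rank"]))  # Pearson correlation coefficient
```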



I have two possible explanations for this. First, traditional risk exposure does not take time frame into account, while the teams evaluating risks in this data set do. So, all things being equal from an exposure perspective, a long-term risk might be ranked very low while a short-term risk is ranked much higher. If this were the case, we’d expect to see more short-term risks assigned high ranks than long-term risks, and this is indeed the case. In fact, the majority of risks identified are short-term risks, with nearly three times more short-term risks identified than long-term risks. Mid-term risks are, unsurprisingly, in the middle. A better exposure number might be had by taking risks’ time frame values into account.
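
One way to fold time frame into the number, shown here only as a sketch of the idea rather than anything the workshop method prescribes, is to scale exposure by a weight that shrinks as the risk moves further out.

```python
# Hypothetical time-frame weights: nearer risks count for more.
# These particular numbers are my own assumption, not part of the workshop method.
TIME_WEIGHT = {"Short": 1.0, "Medium": 0.7, "Long": 0.4}

def time_adjusted_exposure(impact, probability, time_frame):
    """Traditional exposure scaled by how soon the risk may occur."""
    return impact * probability * TIME_WEIGHT[time_frame]

# A long-term critical risk now scores below a short-term marginal one.
print(time_adjusted_exposure(3, 0.8, "Long"))   # 0.96
print(time_adjusted_exposure(2, 0.5, "Short"))  # 1.0
```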



The second possible explanation I have is that 3 to 4 buckets aren’t sufficient to allow for enough variation to form a strong correlation between rank and exposure. Indeed, this is one of the greatest differences between this data set and traditional risk exposure calculations, in which impact might take on nearly any number and probability is usually a percentage anywhere from 10% to 100%. That said, there is still a general trend showing that, most of the time, multi-vote ranking very roughly corresponds to exposure.

There is one more catch about this data and it’s a subtle but important one. Values for probability, impact, and time frame were determined as a team using a sort of rough average approach where team members vote and the approximate averages are rounded to the nearest bucket. Since all the values and rankings were determined through a group effort, it would make sense that they should roughly correspond.
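
That rough-average step can be pictured as something like the following sketch; the helper and the example votes are mine, not a formal part of the workshop.

```python
# The bucket values for probability come from the rubric above.
PROBABILITY_BUCKETS = [0.2, 0.5, 0.8]

def nearest_bucket(votes, buckets):
    """Average the members' gut-feel votes and snap to the closest bucket."""
    average = sum(votes) / len(votes)
    return min(buckets, key=lambda b: abs(b - average))

# Example: three members guess 30%, 50%, and 80%; the rounded value is Medium (0.5).
print(nearest_bucket([0.3, 0.5, 0.8], PROBABILITY_BUCKETS))  # 0.5
```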

Conclusions


As it turns out, risk exposure is a rough but somewhat accurate indicator of relative risk priority, at least when exposure and rank are both derived from group-driven techniques. Teams relying only on exposure are likely to rank some risks higher than they otherwise might. Part of this is due to the exclusion of time from traditional exposure; part of it might be differences of opinion within the group about impact or probability.

From talking with other MSE alumni (and I mostly agree with them), the most important thing about risk management is bringing up concerns and talking about them. Delphi multi-voting is an easy way to encourage that conversation, since differences of opinion are addressed as part of the multi-voting process. No matter what technique you use, whether exposure (with time somehow included), multi-voting, or some combination, do not reduce risk management to simple numbers. It’s really all about communication. Encourage this communication using whatever techniques work for your team.

Raw data used for analysis in CSV format.
