Risk quantification is the process of evaluating the risks that have been identified and developing the data needed for making decisions about what should be done with them. Risk management runs from very early in the project until the very end. For this reason qualitative analysis should be used at some points in the project, and quantitative techniques should be used at other times.
The objective of quantification is to establish a way of arranging the risks in the order of importance. In most projects there will not be enough time or money to take action against every risk that is identified.
The severity of the risk is a practical measure for quantifying risks. Severity is a combination of the risk probability and the risk impact. In its simplest form the risks can be ranked as high and low severity or possibly high, medium, and low. At the other extreme, the probability of the risk can be a percentage or a decimal value between zero and one, and the impact can be estimated in dollars. When the impact in dollars and the probability in decimal are multiplied together, the result is the quantitative expected value of the risk.
Various statistical techniques such as PERT (program evaluation and review technique), statistical sampling, sensitivity analysis, decision tree analysis, financial ratios, Monte Carlo, and critical chain can all be used to evaluate and quantify risks.
Tell me more …
Qualitative risk analysis is appropriate early in the project and is effective in categorizing which risks should or should not be planned for and what corrective action should be taken for them. Qualitative analysis techniques will not give us the precise values for the risk that we would like to have. They are very effective when we have little time to evaluate risks before they actually happen.
Descriptive values may be applied to risks when using qualitative analysis. Values such as very risky and not so risky; high and low; high, medium, and low; or high, medium high, medium, medium low, and low are generally used. Qualitative evaluation might also evaluate the risks on a scale of one to ten. These values can be applied to both the probability and the impact of the risk. The impact and probability can then be combined to give similar descriptions to the severity of the risk.
The table in RISK QUALITATIVE EVALUATION TABLE illustrates a method of determining the qualitative value of severity for various values of impact and probability.
If an evaluation of impact and probability used a scaled evaluation of one to ten, the numbers could be multiplied to get the severity. In this way a probability of 7 with an impact of 9 would give us a severity of 63. This number for severity should give us plenty of information for ranking the risks. Using the high, medium, and low version sometimes creates disagreements about risks that are on the borderline between one value and another. For example, does this risk have an impact of medium or high when it is close to the border between the two values? And what happens when the impact is very high or very low and the probability is the opposite?
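This scaled scoring can be sketched in a few lines of Python. The risks and their one-to-ten scores below are invented for illustration; severity is simply the product of the two scores, as described above:

```python
# Semi-quantitative severity ranking (illustrative risks and 1-10 scores).
risks = {
    "vendor delay":   {"probability": 7, "impact": 9},
    "scope creep":    {"probability": 5, "impact": 6},
    "staff turnover": {"probability": 3, "impact": 8},
}

# Severity is the product of the probability and impact scores.
for r in risks.values():
    r["severity"] = r["probability"] * r["impact"]

# Rank the risks from most to least severe.
ranked = sorted(risks, key=lambda n: risks[n]["severity"], reverse=True)
print(ranked)  # vendor delay (severity 63) is ranked first
```

A probability of 7 with an impact of 9 gives the severity of 63 mentioned above, and the numeric scores avoid the borderline disputes that high/medium/low labels can cause.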
If the organization is especially averse to high-impact risks, the resulting severities can be modified to reflect this desire no matter what scale is used. In Figure 8-2, a low probability combined with a high impact results in a medium severity while a high probability with a low impact results in a low severity. It may also be necessary to modify scales to recognize stakeholder risk tolerance relevant to some aspects of the project but not others. Cost risks may be valued much less than schedule risks or the opposite may be true, depending on stakeholder aversion to cost and schedule variances.
While qualitative analysis is less precise than quantitative analysis, evaluating the results is far less expensive in terms of both time and money. The results are good enough to indicate the overall risk of the project and identify the high-priority risks in order to begin taking some corrective action. This kind of information may assist in pricing the project to a client.
Quantitative risk analysis attempts to attach specific numerical values to the risks. The severity can be assessed from these numerical values for impact and probability. Numerical techniques for decision analysis are used for this approach. These techniques include Monte Carlo analysis, PERT, computer simulations, decision tree analysis, critical chain scheduling, statistical estimating techniques, and expected value analysis. Generally we find the use of statistics and probability theory to be useful in quantitative analysis.
Care should always be used in quantitative analysis because using a good quantitative technique with bad data is worse than not using the technique at all. Many people are impressed with statistical models and simulations and never look at the data to see how good they are. It is quite possible to impress people into making the wrong decision based on excellent analysis of bad data. Care should also be exercised in the use of quantitative techniques because the cost of applying the technique and collecting the data can sometimes be more than the cost of the risks the technique helps to quantify.
General statistical techniques
Since risks are inherently matters of probability, it seems natural to use probability distribution functions to help describe the impact and probability of the various risks in the project.
TYPICAL PROBABILITY DISTRIBUTION FOR COST AND SCHEDULE RISK shows a skewed probability distribution. This is typical for project cost and schedule risks. The distribution shows the possible occurrences of cost or schedule completion for a particular task along the X axis and relates them to the probability of each possibility occurring along the Y axis. The various possibilities are due to the risk associated with the task. For example, a task in a project has a most likely date. This date is plotted along the X axis. It has a higher probability of occurring than any other date and is plotted at the corresponding point on the Y axis.
In a probability distribution the most likely date will always be at the peak of the probability distribution curve. This is not necessarily the average date for the task, which in a skewed distribution can be earlier or later than the most likely date. Notice that the optimistic and pessimistic dates are the earliest and latest dates on the X axis and correspond to the lowest probability.
There are many distributions that can be applied. They can be symmetrical or skewed. PROBABILITY DISTRIBUTIONS shows a few that might be used in risk analysis—triangular, even, normal, and skewed distributions. The triangular distribution shows that probabilities increase uniformly from the optimistic point to a certain point where the highest probability is reached and then decrease uniformly until the pessimistic point is reached.
The even (uniform) distribution has the characteristic that every value on the X axis has exactly the same probability of occurring. There is no distinct optimistic, pessimistic, or most likely point.
The normal distribution is one that most of us have seen many times. It is a convenient distribution because calculations associated with it are simple to make and are generally close enough for most phenomena that we need to estimate. In the normal distribution the mean value and the most likely value are the same because the distribution is symmetrical. A special measurement, the standard deviation, relates specific ranges of values along the X axis with the probability that the actual value will be between the high and low value. This is particularly useful in project management because it allows us to predict a range of values along with a probability that the actual value will occur when we do the project. A skewed distribution is one where the most likely value on the X axis is different from the mean value. The skewed distribution is frequently encountered in cost estimating and schedule estimating for projects.
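The range-and-probability prediction described for the normal distribution can be sketched with Python's standard library. The mean of 20 days and standard deviation of 3 days below are assumed values for an illustrative task, not figures from the text:

```python
from statistics import NormalDist

# Assumed task-duration estimate: mean 20 days, standard deviation 3 days.
duration = NormalDist(mu=20, sigma=3)

# Probability that the actual duration falls within one standard deviation
# of the mean, i.e., between 17 and 23 days.
p_one_sd = duration.cdf(23) - duration.cdf(17)
print(round(p_one_sd, 3))  # 0.683
```

About 68 percent of outcomes fall within one standard deviation of the mean, which is exactly the kind of range-plus-probability statement that makes the normal distribution convenient for estimating.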
Today computer simulations are quite simple and inexpensive to use. Not many years ago simulations were done on analog computers, which made them expensive and not very accurate. The digital computers that most people have on their desks now are able to run simulations quite easily. Simulations use a model to simulate the real phenomena that we are trying to find out something about. There are two reasons to use simulations. One is that solving the problem mathematically is very difficult and expensive or even impossible. The second is that studying the actual phenomena is impossible or impractical in full scale. In either case simulation or modeling can be practical.
The most popular simulation for project management is the Monte Carlo simulation. This technique was discussed in Time Management. Monte Carlo analysis is important because it completes the PERT analysis of schedule estimation. In the PERT analysis we predict schedules based on ranges of values and probability for the durations of the project tasks. Since the durations of the tasks can be a range of values, it is possible that the actual duration values will determine a critical path that is not the one that is predicted by the most likely values. The Monte Carlo analysis evaluates these possibilities and gives us statistical guidelines for the project schedule.
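A minimal Monte Carlo sketch follows. The two-path network and the triangular (optimistic, most likely, pessimistic) duration estimates are invented for illustration; the point is that the simulated completion date reflects both paths, not just the nominal critical path:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

# Assumed network: two parallel paths that must both finish.
# Each task is (optimistic, most likely, pessimistic) duration in days.
path_a = [(4, 5, 9), (8, 10, 16)]   # most likely total: 15 days
path_b = [(6, 7, 8), (7, 8, 10)]    # most likely total: 15 days

def simulate_once():
    a = sum(random.triangular(lo, hi, mode) for lo, mode, hi in path_a)
    b = sum(random.triangular(lo, hi, mode) for lo, mode, hi in path_b)
    return max(a, b)  # the project finishes when the longer path finishes

trials = sorted(simulate_once() for _ in range(10_000))

# The 80th-percentile completion gives a date with an 80 percent
# chance of being met.
p80 = trials[int(0.80 * len(trials))]
print(round(p80, 1))
```

Note that either path can turn out to be critical on any given trial, which is why the simulated percentile dates are usually later than the deterministic most-likely estimate.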
Other computer simulations can be used to analyze the risks associated with the engineering, manufacturing, sales, marketing, quality, and reliability of the project deliverables.
Expected value analysis
Expected value analysis is a special way of determining severity in risks. To do this, we must measure the probability of the risk in numbers between 0.0 and 1.0. Of course the numbers 0.0 and 1.0 themselves are not used since these would mean that the risk was either an impossibility or a certainty. If the risk is a certainty, it should be put into the project plan as a required task; if it is an impossibility, it should be ignored.
The values for the impact of the risks are estimated in dollars or some other monetary value. By evaluating the impact and probability this way, we can multiply the two values together and come up with what is called the expected value of the risk. This value for severity has quantitative meaning. The resulting value is the average value of the risk. In other words, if we were to do this project many times, the risk would happen some of the time and not happen some of the time. The full cost of the risk each time it happens is the impact of the risk. Of course, since the probability is less than 1.0, the risk does not occur each time. Adding up the cost of the risk each time it occurred and dividing by the number of times the project was done would give an average value. This is the expected value.
The expected value is extremely useful because it gives us a value that could be spent on the risk to avoid it. If the cost of avoiding a risk is less than its expected value, we should probably spend the money to avoid it. If the cost of the corrective action to avoid a risk is greater than the expected value, the action should not be taken.
The same is true with the other risk strategies. If the difference between the expected value of the unmitigated risk and the mitigated risk is less than the cost of the mitigation, then the mitigation should not be done. If the difference between the expected value of the nontransferred risk and the transferred risk is less than the cost of the transfer or insurance premium, then the transfer should not be done.
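The avoid-or-mitigate decision rule above reduces to a simple comparison. The probabilities, impact, and mitigation cost below are assumed values for illustration:

```python
def expected_value(probability, impact):
    """Expected value of a risk: probability (0.0-1.0) times impact in dollars."""
    return probability * impact

# Assumed risk: 25 percent chance of a $100,000 cost overrun.
unmitigated = expected_value(0.25, 100_000)   # 25,000

# Assumed mitigation: spending $8,000 cuts the probability to 10 percent.
mitigated = expected_value(0.10, 100_000)     # about 10,000
mitigation_cost = 8_000

# Mitigate only if the reduction in expected value exceeds its cost.
savings = unmitigated - mitigated             # about 15,000
print(savings > mitigation_cost)              # True: mitigation is worthwhile
```

Here the mitigation buys a $15,000 reduction in expected value for $8,000, so by the rule above it should be done; the same comparison applies to transfer premiums.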
Several risks can be summarized by their expected values into best-case, worst-case, and expected-value scenarios as well. The best-case scenario is the summation of all the good things, but none of the bad things, that can happen in the project or subproject. It assumes that all of the opportunities will occur but that none of the risks will materialize. The worst-case scenario is the situation that assumes that none of the good things will happen but that all of the risks will happen.
The following example illustrates the use of expected value and a best-case, worst-case scenario:
Suppose a project has a 65 percent chance of being completed successfully and earning $2,000,000. It also has a 15 percent chance of earning an additional $3,000,000 in revenue, and it has a 20 percent chance of an additional cost of $700,000.
It can be seen in Figures 8-5, 8-6, and 8-7 that for this project the total expected value is $1,610,000. The best of all situations that can occur is that the project earns $5,000,000. The worst possible situation is that the project loses $700,000.
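The scenario arithmetic can be sketched in a few lines; the outcome list simply restates the figures from the example:

```python
# Outcomes from the example: (probability, payoff in dollars).
outcomes = [
    (0.65, 2_000_000),   # successful completion
    (0.15, 3_000_000),   # additional revenue opportunity
    (0.20, -700_000),    # additional-cost risk
]

expected = sum(p * v for p, v in outcomes)
best_case = sum(v for _, v in outcomes if v > 0)   # all gains, no losses
worst_case = sum(v for _, v in outcomes if v < 0)  # all losses, no gains

print(round(expected), best_case, worst_case)  # 1610000 5000000 -700000
```

The expected value of $1,610,000 is what the project would earn on average over many repetitions; the best and worst cases bracket the possible outcomes.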
Decision tree analysis
Another technique that allows us to make risk management decisions based on evaluating expected values for different possible outcomes of the risk event is called the decision tree. This technique is a way of looking at interdependent multiple risks. It also allows us to evaluate risks with multiple outcomes. For a project environment, this technique becomes extremely useful because a single unplanned event can often result in multiple outcomes of various levels of severity, depending on the situation and on decisions made by the people who are responsible for risk management.
The decision tree can also be useful in our further work of developing workarounds in the case of active acceptance of a risk event (see risk response, later in this chapter).
As shown in DECISION TREES, decision tree diagrams are composed of boxes, which identify decision choices that must be made, and circles, which represent places where probabilistic multiple outcomes are possible. From the boxes, lines are drawn showing each possible decision. The lines lead to other decisions or to probabilistic multiple outcomes. Notice that at each probabilistic circle the sum of the probabilities of all the possible outcomes equals 1.0, because all of the possible outcomes are included.
Here is an example of decision tree analysis:
Suppose a farmer must decide what to do with his land for the next growing season. He can choose to plant corn or soybeans or to not plant anything at all. If he plants nothing at all, the government farm subsidy will pay him $30 per acre.
If the farmer decides to plant corn or soybeans on his land, there is some risk involved. The yield per acre depends on the amount of rainfall. Too much rain or too little rain will give poorer results than the right amount of rainfall. There is a 40 percent probability that the rainfall will be low; there is a 40 percent probability that the rainfall will be medium; and there is a 20 percent chance that the rainfall will be high.
If the farmer decides to plant corn, the yield per acre will be $0, $90, and $50, respectively, if the rainfall is low, medium, or high. If the farmer decides to plant soybeans, the yield per acre will be $40, $70, and $20, respectively, for low, medium, and high amounts of rainfall.
The decision to be made is whether the farmer should plant corn, soybeans, or nothing at all. There are three lines coming out of the decision box to indicate the three choices. Each choice leads to a probabilistic occurrence—how much rainfall will occur. Each probabilistic occurrence has three possible outcomes—low, medium, or high amounts of rainfall. For each of these events there is an associated payoff. The payoff amount multiplied by the probability of that event occurring is the expected value of each occurrence.
In order to evaluate the decisions, we must add the expected value of each event associated with each decision to get the expected value for each decision. For corn, low rainfall means that no money will be made from the crop. For medium rainfall there is a 40 percent chance and a $90 yield, giving an expected value of $36. For high rainfall there is not as much yield per acre at $50 and there is a 20 percent probability of that occurring. The expected value for high rainfall is thus $10 per acre. Adding the expected values for the events gives us the expected value for the decision. This is $46 per acre.
Using the same calculation for the soybeans ($48 per acre) and for not planting at all ($30 per acre), we see that of the three decisions, planting soybeans has the greatest expected value.
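The farmer's decision tree can be evaluated directly from the numbers given in the example:

```python
# Rainfall probabilities from the example.
rain_p = {"low": 0.40, "medium": 0.40, "high": 0.20}

# Payoff per acre for each decision under each rainfall level.
payoffs = {
    "corn":     {"low": 0,  "medium": 90, "high": 50},
    "soybeans": {"low": 40, "medium": 70, "high": 20},
    "nothing":  {"low": 30, "medium": 30, "high": 30},  # subsidy, rain or not
}

# Expected value of each decision: sum of probability-weighted payoffs.
ev = {crop: sum(rain_p[r] * pay[r] for r in rain_p)
      for crop, pay in payoffs.items()}

for crop, value in ev.items():
    print(crop, round(value))   # corn 46, soybeans 48, nothing 30

best = max(ev, key=ev.get)
print(best)  # soybeans
```

Each branch's expected value is the probability-weighted sum of its payoffs, and the decision with the highest total wins, matching the $46, $48, and $30 per-acre results worked out above.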
Critical chain scheduling
Critical chain scheduling is a method of improving schedules. Critical chain schedules were discussed in Time Management. In a critical chain schedule, buffers are created between the early schedule completion date of the project and the promise date. The activities that have float are first moved to their late schedule dates and then moved back by the amount of buffer they are given, measured from the point where they join the critical path.
One of the important things that critical chain schedules do is to recognize that by delaying the feeder chains toward the late schedule, the risks are reduced. Risks are reduced in the later schedule because knowledge learned in the course of doing the project can be applied to these activities because they have been delayed from their early schedule. Early in the project most of the project team is inexperienced, and mistakes are much more likely to be made. Later in the project experience has been gained by all, which means that risks that might otherwise cause trouble can be avoided.
An important part of the risk quantification process is ranking the risks in order of severity. When quantitative or semiquantitative analysis is done, this is relatively simple. The expected value of the risk can be used to list the risks. Using the expected value, the risks with the highest expected value are going to be at the top of the list. This is simply because the risks with the highest expected value are the ones that, on average, will cost the project the most.
Even if we are not using expected value analysis to evaluate the risks, we can still rank them. If we are using semiquantitative techniques, we can multiply the values that we used to qualitatively estimate the impacts and probabilities to get a relative severity. This will be fine as long as the relative scale used for evaluating probability and impact is the same for all of the risks.
In the case of purely qualitative evaluations, judgment can be used to rank the risks. There are several methods for doing this. One of the easiest is to have each person in a meeting rank the risks individually and then combine the rankings of each person into a composite ranking. This will give an overall consensus ranking of the risks.
Another method that can be used either on an individual basis or by groups is the comparison matrix.
In the comparison matrix each risk is assigned a number from 1 to the total number of risks. The numbers assigned to the risks have no particular meaning. In COMPARISON MATRIX the risks are numbered 1 through 7. Above the risk numbers are the comparison boxes. These are organized to ensure that two risks at a time are compared and that every risk is compared to every other risk. Human brains have trouble dealing with seven things simultaneously, but they can easily deal with two things. The comparison matrix allows many items to be ranked by comparing only two items at a time.
In the column above risk 1, the comparisons made, starting at the top of the column, are: 1 to 2, 1 to 3, 1 to 4, 1 to 5, 1 to 6, and 1 to 7. In subsequent columns the numbers of comparisons are smaller since there is no point in repeating comparisons: 1 compared to 2 is the same as 2 compared to 1. In comparing and ranking risks, each time a comparison is made, the more severe risk is noted in the comparison of only those two risks. Each time a risk is considered to be the more severe one, a hash mark is put in the box below that risk. After all comparisons have been made, the risk with the highest number of hash marks is the most severe risk. The one with the next highest number of marks is ranked number two, and so on.
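The pairwise tallying can be sketched as follows. In practice a person judges each comparison; here a hypothetical severity score stands in for that judgment so every pair can be decided mechanically:

```python
from itertools import combinations

# Hypothetical severities standing in for human pairwise judgments.
severity = {1: 30, 2: 12, 3: 45, 4: 8, 5: 27, 6: 20, 7: 5}

# One tally mark per pairwise "win"; every pair is compared exactly once.
wins = {risk: 0 for risk in severity}
for a, b in combinations(severity, 2):
    winner = a if severity[a] >= severity[b] else b
    wins[winner] += 1

# Rank risks by the number of comparisons they won.
ranking = sorted(wins, key=wins.get, reverse=True)
print(ranking)  # [3, 1, 5, 6, 2, 4, 7]
```

With 7 risks there are 21 comparisons in all; the risk that wins the most of them (risk 3 here) is ranked most severe, just as the hash-mark count works in the matrix.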