Scientifically Valid Risk Measurement
The risk of an event is defined as its expected loss, which is the product of the likelihood of the event occurring and the impact of the event on the organization's objectives:
Risk = Likelihood × Impact.
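For example, an event with a likelihood of 0.1 occurrences per year and an impact of $500,000 has a risk of 0.1 × $500,000 = $50,000 in expected annual loss.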
For this multiplication to be mathematically valid, both likelihood and impact must be measured on ratio scales. This is not the case for most risk processes today, as evidenced by today’s ERM and GRC systems’ use of 1 to 5 ordinal scales for both likelihood and impact, as depicted on the left of the following figure:

Not all risks that fall within a given color region of the ordinal likelihood-impact matrix on the left are equal to one another. In addition, some risks in red regions may actually belong in yellow regions and vice versa. According to Hubbard, “There is no evidence that the types of scoring and risk matrix methods widely used in cybersecurity improve judgment. On the contrary, there is evidence these methods add noise and error to the judgment process. One researcher—Tony Cox—goes as far as to say they can be worse than random.” (Hubbard, Douglas W.; Seiersen, Richard. How to Measure Anything in Cybersecurity Risk). Ratio scale measures do not suffer from these deficiencies.
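To see the deficiency concretely, consider a minimal sketch (the bucket boundaries and all figures below are hypothetical) of two risks that receive the identical ordinal score yet differ widely in ratio-scale risk:

```python
# Two hypothetical risks that fall in the same cell of a 1-5 ordinal matrix.
# Suppose likelihood bucket 4 spans 10%-50% per year and impact bucket 4 spans
# $1M-$10M. Both risks score 4 x 4 = 16 on the ordinal matrix, yet their
# ratio-scale risks differ by more than an order of magnitude.
risk_a = {"likelihood": 0.12, "impact": 1_200_000}   # low end of both buckets
risk_b = {"likelihood": 0.45, "impact": 9_000_000}   # high end of both buckets

for name, r in [("A", risk_a), ("B", risk_b)]:
    expected_loss = r["likelihood"] * r["impact"]    # valid on ratio scales
    print(f"Risk {name}: expected annual loss = ${expected_loss:,.0f}")
# Risk A: ~$144,000; Risk B: ~$4,050,000 -- same ordinal score, ~28x difference.
```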
To manage risks effectively, it is necessary to drill down into the components of likelihood and impact so that controls can be considered for each component. The likelihood of a risk can be decomposed into the sources of the risk and the likelihood of the risk given each source. The impact of a risk can be decomposed into the consequences of the risk for objectives and the importance of those objectives. This is illustrated in the ‘bow tie’ diagram below, which consists of three risk elements: a risk event in the center, possible sources of the event on the left, and one or more objectives on the right.

The likelihood of the event (bottom left of the bow-tie diagram above) can be computed using one of several ratio-scale likelihood measurement methods, with or without considering its sources. For example, the likelihood of a cyber risk event can be estimated from historical data on the number of occurrences of the event in a given time period. Alternatively, it can be estimated by considering the sources (causes, threats, hazards, intents, targeting) of the event, such as threats from adversaries, user errors, hardware and software defects, and environmental causes such as hurricanes and earthquakes, together with the likelihood of the event given each of those sources.
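As a sketch of the source-based computation, assuming hypothetical source likelihoods and conditional likelihoods, the event likelihood is the sum of products of source likelihoods with event likelihoods given the sources:

```python
# Hypothetical annual likelihoods of each source being active, and the
# conditional likelihood of the risk event given each source.
sources = {
    #                    P(source)  P(event | source)
    "adversary threat":  (0.30, 0.20),
    "user error":        (0.60, 0.05),
    "hardware/software": (0.40, 0.02),
    "environmental":     (0.05, 0.10),
}

# Sum of products of source likelihoods with conditional event likelihoods
# (treating each product as an independent contribution; dependence between
# sources is addressed with Monte Carlo simulation later in this section).
p_event = sum(p_src * p_given for p_src, p_given in sources.values())
print(f"Estimated annual likelihood of the event: {p_event:.3f}")  # 0.103
```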
Estimating the likelihood of an event based on its sources (causes, threats, hazards, intents, targeting) has two important benefits. First, an estimate derived from the likelihoods of the sources and the likelihoods of the event given those sources will, in general, be more specific and accurate. Second, to reduce the risk of an event, controls can be applied to reduce the likelihoods of the sources of the risk as well as the likelihood of the event given its sources. Estimating the effectiveness of these controls, and the resources required to implement them, is an integral part of effective risk management.
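The following sketch shows one way such control effects might be folded into the computation; the control names and effectiveness fractions are hypothetical assumptions, not measured values:

```python
# Controls can act on a source's likelihood (e.g., reducing user errors) or on
# the event's likelihood given the source (e.g., containing an active threat).
# 'effectiveness' here is the hypothetical fraction of likelihood removed.
baseline = {"user error": (0.60, 0.05)}          # (P(source), P(event|source))
controls = {
    "security awareness training": ("source", 0.40),       # cuts P(source) 40%
    "input validation":            ("conditional", 0.50),  # cuts P(event|src) 50%
}

p_src, p_given = baseline["user error"]
for name, (target, effectiveness) in controls.items():
    if target == "source":
        p_src *= (1 - effectiveness)
    else:
        p_given *= (1 - effectiveness)

print(f"Residual contribution from user error: {p_src * p_given:.4f}")
# Baseline contribution was 0.60 * 0.05 = 0.0300; with controls: 0.0090.
```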
The impact of the event on objectives (bottom right of the bow-tie diagram above) depends on both the consequences of the event for each objective and the importance of the objectives. A risk event usually has consequences for more than one objective, for example short-term revenue and reputation. In that case, the impact on each objective is the product of the consequence of the event for that objective and the importance of the objective, and the total impact of the risk event is the sum of the impacts across all objectives. The importance of the objectives is subjective and can be measured with judgments from management throughout the organization, using pairwise comparisons to produce ratio-scale priorities, as practiced successfully for many years as part of the Analytic Hierarchy Process (AHP).
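A sketch of this computation follows, using the row geometric mean, a standard approximation to AHP's principal-eigenvector priorities; the pairwise judgment and consequence values are hypothetical:

```python
import numpy as np

# Hypothetical pairwise comparison: management judges reputation to be 3x as
# important as short-term revenue. AHP derives ratio-scale priorities from
# such judgments; the geometric mean of each row approximates the principal
# eigenvector (and matches it exactly for a consistent matrix).
objectives = ["short-term revenue", "reputation"]
pairwise = np.array([[1.0, 1/3],
                     [3.0, 1.0]])

row_gm = np.prod(pairwise, axis=1) ** (1 / pairwise.shape[1])
priorities = row_gm / row_gm.sum()          # ratio scale, sums to 1

# Hypothetical ratio-scale consequences of the event for each objective
# (e.g., normalized loss magnitudes).
consequences = np.array([0.8, 0.4])

impact = float(consequences @ priorities)   # sum of consequence x priority
for obj, p in zip(objectives, priorities):
    print(f"{obj}: priority = {p:.2f}")
print(f"Total impact of the event: {impact:.2f}")
```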
As discussed above, the risk of an event can be computed as the product of its likelihood and impact, each of which in turn can be computed as a sum of products: source likelihoods with event likelihoods given the sources, and event consequences with objective priorities. However, these computed risks are distorted by nonlinearities when there are multiple sources that are not independent, and/or when there are multiple events with consequences for one or more objectives. Monte Carlo simulations can be used to correct for these nonlinearities.
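A minimal Monte Carlo sketch under assumed distributions follows; dependence between sources is modeled here with a shared latent 'bad year' factor, one of many possible choices, and all probabilities and loss figures are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # simulated years

# Two sources whose activations are positively correlated through a shared
# latent factor (e.g., a bad year raises both source likelihoods).
latent = rng.random(N)
src1 = rng.random(N) < np.where(latent < 0.2, 0.6, 0.2)   # correlated source 1
src2 = rng.random(N) < np.where(latent < 0.2, 0.5, 0.1)   # correlated source 2

# Events occur given a source with hypothetical conditional probabilities;
# losses from simultaneous events compound (a nonlinearity the linear
# sum-of-products formula cannot capture).
event1 = src1 & (rng.random(N) < 0.3)
event2 = src2 & (rng.random(N) < 0.4)
loss = 1_000_000.0 * event1 + 1_500_000.0 * event2
loss[event1 & event2] *= 1.3   # compounding when both occur in the same year

naive = 0.084 * 1_000_000 + 0.072 * 1_500_000  # linear sum-of-products estimate
print(f"Linear estimate (no compounding):  ${naive:,.0f}")
print(f"Simulated expected annual loss:    ${loss.mean():,.0f}")
```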
Long and Short Term Risks: In addition to correcting for nonlinearities in expected losses, or ‘long term’ risks, Monte Carlo simulations are also useful for computing what we can refer to as ‘short term’ risks. The risks discussed above are, as expected values or means, long term in the sense that the law of large numbers assures that average losses realized over the long term will be very close to this expected value. ‘Short term’ or catastrophic risks are also important because the organization must survive short-term fluctuations in order to realize its long-term objectives. See: Optimal Risk Management Allocation of Resources Commensurate with Risk Tolerance.
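A self-contained sketch of both measures under a hypothetical compound loss model (Poisson event counts, lognormal severities); the 99th percentile is an illustrative risk-tolerance threshold, not a prescribed one:

```python
import numpy as np

rng = np.random.default_rng(7)
years = 100_000  # simulated years

# Hypothetical annual loss model: a Poisson number of events per year, each
# with a lognormally distributed severity.
n_events = rng.poisson(lam=0.8, size=years)
max_events = n_events.max()
severities = rng.lognormal(mean=12.0, sigma=1.2, size=(years, max_events))
mask = np.arange(max_events) < n_events[:, None]   # keep only realized events
loss = (severities * mask).sum(axis=1)             # total loss per year

# Long-term risk: the mean, which realized average losses approach over many
# years. Short-term risk: a high quantile of a single year's loss, which the
# organization must be able to survive.
expected_loss = loss.mean()
var_99 = np.percentile(loss, 99)

print(f"Long-term (expected) annual loss: ${expected_loss:,.0f}")
print(f"99th-percentile annual loss:      ${var_99:,.0f}")
```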