Author: Lauren C Culver
Critique: Bryant C Kimball
A standard Monte Carlo simulation draws hundreds of thousands of random samples from a given distribution of inputs; the results of these many trials are treated as the generation of potential outcomes. While historical risk assessment forces the analyst to consider possible outcomes based only on possibilities that have already happened, the Monte Carlo method combines stochastics and simulation: it cycles random samplings of inputs through a virtual representation of a problem, over and over again, to obtain a distribution of results.
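The cycle described above can be sketched in a few lines. The model and input distributions below are purely illustrative assumptions (a toy "net benefit" score driven by a hypothetical demand-growth rate and price shock), not the model used in the research discussed next; the point is only the mechanics of sampling inputs and accumulating an outcome distribution.

```python
import random
import statistics

def simulate_net_benefit(trials=100_000, seed=42):
    """Illustrative Monte Carlo sketch: estimate the distribution of a
    policy's net benefit by repeatedly sampling uncertain inputs and
    running them through a simple model of the problem."""
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    outcomes = []
    for _ in range(trials):
        # Sample each uncertain input from an assumed distribution.
        demand_growth = rng.gauss(0.03, 0.01)   # hypothetical annual demand growth
        price_shock = rng.uniform(-0.2, 0.2)    # hypothetical price swing
        # Toy model relating the sampled inputs to a net-benefit score.
        outcomes.append(100 * demand_growth - 50 * price_shock)
    return outcomes

results = simulate_net_benefit()
print(statistics.mean(results), statistics.stdev(results))
```

The list returned is exactly the "distribution of results" the method promises: instead of a single point estimate, the analyst can inspect its mean, spread, and tails.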
In 2017, analysts supporting decision making at the intersection of energy and U.S. foreign policy teamed Monte Carlo analysis with decision analysis, predictive scenario analysis, and exploratory modeling to understand the threat that sixteen countries’ energy demands pose to U.S. foreign policy. The experiment itself, however, pits the four models of uncertainty analysis against one another with the intention of issuing a recommendation about the most appropriate approach to uncertainty analysis for foreign policy.
The simulation produced a series of outcomes for each country, along with matching policies (Figure 4). The researcher found that the Monte Carlo results correctly convey the policy and show that no single policy will always be beneficial. Additional, time-consuming analysis is still required for the analyst to identify the specific input set that drives the net benefits of a particular policy.
Essentially, Monte Carlo simulations can present an analyst with a distribution of outcomes for a given situation. This research serves as evidence that Monte Carlo simulations can help reduce uncertainty as well as support recommendations based on specific outcomes across a distribution. The validated tool reduces uncertainty by allowing the analyst to deconstruct the potential outcomes. This is even more true in intelligence analysis, which forces the analyst to discriminate between numeric ranges of likelihood: Monte Carlo, as a tool, can add to an analyst’s judgment by either validating or helping guide the analyst to the most accurate word of estimative probability (WEP).
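The last step, translating a simulated probability into a WEP, can be sketched as a simple lookup. The probability bands below follow the ranges published in ICD 203; treat them as one common convention rather than the only mapping in use, and the 0.62 input as a made-up example (e.g., the share of simulated trials in which a policy's net benefit was positive).

```python
def to_wep(probability):
    """Map a numeric probability to a word of estimative probability (WEP),
    using the illustrative bands from ICD 203 (upper bound, phrase)."""
    bands = [
        (0.05, "almost no chance"),
        (0.20, "very unlikely"),
        (0.45, "unlikely"),
        (0.55, "roughly even chance"),
        (0.80, "likely"),
        (0.95, "very likely"),
    ]
    for upper, phrase in bands:
        if probability <= upper:
            return phrase
    return "almost certain"

# Hypothetical example: 62% of simulated trials favored the policy.
print(to_wep(0.62))  # prints "likely"
```

Because the probability comes from the simulated distribution rather than the analyst's intuition alone, the resulting WEP is anchored to an explicit, reproducible calculation.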
What this particular model does an even better job of depicting is that no single methodology is holistic. Each uncertainty model relies on the others to complete the picture. This holds true for all the methodologies we’ve studied thus far, and it highlights the importance of building a toolkit with a wide range of methods and modifiers to apply when relevant.