Friday, November 9, 2018

Monte Carlo Simulations and Increasing Confidence in Estimates


Daily, C., and Solis, D. (2017). Monte Carlo Simulation: Assessing a Reasonable Degree of Certainty. The Value Examiner, May/June.

Discussion:

      In this article, the authors demonstrate the value of conducting a Monte Carlo simulation in a business scenario. The authors present a company conducting a damages analysis to claim lost profits. This scenario is suitable for a Monte Carlo simulation because of the number of inputs (i.e., loss period in years, lost revenue per year, and saved expenses as a percentage of revenue per year) and the range of each input (i.e., 5-7 years, $50K-$150K in lost revenue per year, and 20-30% in saved expenses). Prior to conducting the Monte Carlo simulation, the authors perform a simple damages analysis by assuming the middle value of each range and calculating an estimate of the lost profits. The simple analysis produced an estimate of $450K in lost profits (Figure 1).
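The midpoint calculation is easy to reproduce. Below is a minimal Python sketch; the article does not show its arithmetic, so the formula (lost profits = years × annual lost revenue × (1 − saved-expense %)) is inferred from the listed inputs and the $450K result:

```python
# Midpoint ("simple") damages analysis: take the middle of each input range.
loss_period_years = (5 + 7) / 2                   # 6 years
lost_revenue_per_year = (50_000 + 150_000) / 2    # $100K per year
saved_expense_pct = (0.20 + 0.30) / 2             # 25% of revenue

# Lost profits = lost revenue, net of the expenses the firm avoided incurring.
lost_profits = loss_period_years * lost_revenue_per_year * (1 - saved_expense_pct)
print(f"${lost_profits:,.0f}")  # $450,000
```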
 
Figure 1. Simple Analysis of Damages

    After completing the simple analysis, the authors run 10,000 Monte Carlo simulation trials with the @RISK software program. To do so, the authors modify their assumptions by defining a probability distribution for each input, so that on each trial @RISK randomly draws the inputs according to those distributions. The simulation produced mean lost profits of $450,056, nearly the same as the $450K produced by the simple analysis (Figure 2).
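A simulation like the authors' can be reproduced without @RISK. The sketch below uses Python with NumPy and assumes uniform distributions over the stated ranges; the article does not specify which distribution shapes its @RISK model uses, so the exact interval below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_trials = 10_000  # matches the number of simulations run by the authors

# Assumed uniform distributions over the article's input ranges.
years = rng.uniform(5, 7, n_trials)
revenue = rng.uniform(50_000, 150_000, n_trials)   # lost revenue per year
saved_pct = rng.uniform(0.20, 0.30, n_trials)      # saved expenses, % of revenue

lost_profits = years * revenue * (1 - saved_pct)

print(f"mean lost profits: ${lost_profits.mean():,.0f}")  # ~ $450K
low, high = np.percentile(lost_profits, [5, 95])
print(f"90% interval: ${low:,.0f} to ${high:,.0f}")
```

The mean converges to roughly the $450K of the simple analysis, while the 5th-95th percentile interval is what lets the analyst attach a stated degree of certainty to the estimate.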

Figure 2. Monte Carlo Simulation Analysis

      Although the Monte Carlo simulation produced approximately the same estimate, the authors argue that the true value of the simulation is that it strengthens the degree of certainty in the estimate. Because it is backed by statistical analysis, the Monte Carlo estimate comes with confidence intervals that analysts can cite when expressing confidence in the estimate.

Critique:

    At first glance, the nearly identical estimates from the simple analysis and the Monte Carlo simulation may prompt an analyst to question whether the simulation is even worth the effort in this scenario. However, the authors make a strong argument for its value: the simulation increases the degree of certainty in the estimate. This is especially valuable because analytic confidence may otherwise rest on a degree of subjective opinion.
   Although Monte Carlo simulations have their benefits, they also have limitations. For example, they are not suitable for intelligence tasks that are qualitative in nature, since the technique requires quantitative data. Furthermore, analysts need to be familiar with statistics and have access to a software program to run the simulations. Despite these limitations, Monte Carlo simulations can improve confidence in an estimate and should be applied when possible.




16 comments:

  1. The Monte Carlo method definitely adds value to an analysis, but I am hesitant about whether you can draw sound conclusions from it alone. It is beneficial and valuable to have quantitative results that support the argument. Although the results in this case are useful, I can agree with the authors' claim that the simulation increases one's analytic confidence in an estimate.

  2. In my experience, the value of Monte Carlo simulation (assuming the inputs are as good as possible; more on that later) lies in the probability that the result falls outside the bounds of what the decision maker or organization can bear. So for the paper's example: can the business insure against a profit loss of greater than $600K? If not, what is the probability that this loss or greater will occur? And how much effort/$ should be put into avoiding this low-probability catastrophic loss? MC simulation allows one to address questions like this. The average or estimate is of marginal interest.
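The exceedance question raised above can be illustrated with the paper's own inputs. This is a hypothetical sketch (uniform distributions assumed; the article does not specify distribution shapes, and the $600K threshold is the commenter's example):

```python
import numpy as np

rng = np.random.default_rng(seed=2)
n_trials = 100_000

# Same assumed uniform inputs as the paper's damages example.
years = rng.uniform(5, 7, n_trials)
revenue = rng.uniform(50_000, 150_000, n_trials)
saved_pct = rng.uniform(0.20, 0.30, n_trials)
lost_profits = years * revenue * (1 - saved_pct)

# The decision-relevant output: the probability the loss exceeds what the
# business can bear, rather than the mean of the distribution.
threshold = 600_000
p_exceed = (lost_profits > threshold).mean()
print(f"P(lost profits > ${threshold:,}) = {p_exceed:.1%}")
```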

    I carried out a Monte Carlo analysis of a business's "Book of Bets" -- i.e., the list of possible contracts the business was chasing, called a book of bets because the business had to allocate funds to the proposal and sales teams chasing each contract, similar to deciding how much to bet on which horses in which races. The CEO was receiving minimum wage and his actual payout depended on him making his sales target for the year. The average or estimate came in above his sales target, so he thought he was safe. However, the MC simulation implied a 20% probability he would not make the target. He decided that was too high, and we worked out how to reduce that probability by shifting company assets between sales efforts in such a way that the probability of high-payoff sales increased (among other criteria).

    There is a similarity here to allocating ISR assets to targets.

    Which brings us to the critical issue of subjective intelligence information and the drive by senior commanders for "arithmetical accuracy" (many commanders are arithmomaniacs). One can and should use professional judgement to quantify subjective qualitative measures (even if using ordinal ranking -- in which case use analysis of categorical or ordinal variables, an entire branch of useful maths, learn it). BUT, the intelligence analyst uses such methods PRIVATELY to explore qualitative shifts in the system and to provide him or her insight allowing further qualitative analysis. NEVER let the boss see the simulation, he or she will get sucked down into its details believing every decimal position has meaning.

    The truth is that we all have models in our heads about what is going on, adding MC simulation to the mix of our techniques forces us to understand the measures that are quantitative and pushes us to deeper understand the interactions between qualitative measures by making us quantify them and explore what different quantifications might mean.

    Replies
    1. "One can and should use professional judgement to quantify subjective qualitative measures (even if using ordinal ranking -- in which case use analysis of categorical or ordinal variables, an entire branch of useful maths, learn it). BUT, the intelligence analyst uses such methods PRIVATELY to explore qualitative shifts in the system and to provide him or her insight allowing further qualitative analysis. NEVER let the boss see the simulation, he or she will get sucked down into its details believing every decimal position has meaning."

      Stephen - if this is the case, do we then, as analysts, consider MC simulation to be a modifier and not a method when used for intelligence analysis? I believe Chelsie expressed this notion in her summary and critique. At the end of the day, if MC will add to your analysis (especially like you said by being able to add quantifying judgment) but is not included in your explanation of how you produced the estimate, then it must be labeled a modifier and not a method.

    2. I'm not sure I understand your definition of a modifier. Most decision makers do not want to see, and probably do not understand, much of the analytic methods one uses. For example, if an analysis requires detailed statistical analysis, would that automatically make statistics a modifier because the decision maker being reported to is not a statistician? What if another decision maker is a statistician? If something is a modifier or a method depending on the knowledge of the decision maker, then I do not understand the distinction between method and modifier. I undoubtedly don't understand something here.

    3. Sorry - methods and modifiers may very well be terms unique to our program at Mercyhurst in how we use them. Methods are processes used to produce an estimate while modifiers are tools used to support or enhance the method itself.

      I do understand your emphasis on the importance of knowing your decision maker.

    4. Hi Stephen,

      Thank you for your thoughtful response. MC simulations can certainly address questions more important (and, as you suggest, more valuable) than producing the estimate itself. However, the authors discuss the estimate specifically to emphasize the difference between the simple analysis estimate and the MC simulation estimate. The difference highlighted is that an MC simulation estimate is supported by statistical analysis and thus lends more confidence to the estimate. In both examples, the analyst presents the same estimate ($450K), but the one who used an MC simulation can express more confidence to the decision-maker.

      You advise that as analysts we should, “NEVER let the boss see the simulation…”. I agree to an extent. For the sake of brevity, the analyst should present to the decision-maker only what is within the scope of the tasking. In the scenario presented by this paper, the scope of the task is to produce an estimate of lost profits, and the analyst would present only the estimate of $450K.

      However, I think it is important for the analyst to be transparent with the decision-maker about the methodologies used to produce the estimate. So although the decision-maker receives only the estimate (i.e., $450K), the decision-maker has a right, on request, to see the methodologies used to produce it (i.e., the MC simulation itself). Allowing the decision-maker to see the MC simulation or any other methodology used (social network analysis, analysis of competing hypotheses, etc.) builds trust and transparency between the decision-maker and the analyst.

    5. Agreed, the boss has the right to know what you are doing and how you are doing it. Just be prepared to justify the "what and how"! And, as my wife keeps telling me, "Stephen, stop exaggerating" whenever I say "NEVER"!

    6. But I emphasize ... do not lead with the techno-wizardry, and be very careful with your explanation when it is asked for. Technically educated and trained people in my experience vastly underestimate the ability of non-technical people to misunderstand technical issues.

    7. ... and to finish answering a previous question I believe MC Simulation is a modifier.

    8. I understand the reviewed paper focused on the estimate as part of a study on MC simulation. I wanted to make sure everyone understands that from a decision maker's perspective the estimate is of marginal interest, yet too often the estimate alone is provided by analysts and used by decision makers.

  3. Tom,

    What relevant intelligence problem scenarios could you see MC being applied?

    Replies
    1. Investment strategies into military technologies by other countries springs immediately to my mind.

    2. You had mentioned using MC in that book of bets situation and then stated there are similarities to tasking ISR assets to targets. How viable do you see MC in this scenario when you add time elements as well as the dynamic nature of a combat environment?

    3. Most (all?) decisions are time-horizon bound. That is true whether you use MC simulation or some other method, and it's true of investment decisions and tactical/operational decisions. One can build time into the value functions of the different options included in the MC simulation.

  4. Tom, the authors of my article took a similar approach to yours; however, they did not run a simpler analysis first. Even though the results were similar, I agree with you that MC simulations add value/confidence. MC simulations can be applied easily to financial analysis. Do you think they can be applied to other types of business analysis as well?

  5. Stephen, thanks for engaging my students this week! Great stuff! You are always welcome here!
