Monday, October 31, 2016

Summary of Findings: Monte Carlo Simulation (4 out of 5 Stars)

Note: This post represents the synthesis of the thoughts, procedures and experiences of others as represented in the articles read in advance (see previous posts) and the discussion among the students and instructor during the Advanced Analytic Techniques class at Mercyhurst University in October 2016 regarding Monte Carlo Simulation as an Analytic Technique specifically. This technique was evaluated based on its overall validity, simplicity, flexibility and its ability to effectively use structured data.

Description:

Monte Carlo simulation is a method of assessing risk under uncertainty. It samples a number of different factors at random from defined ranges and combines them into a single result called an iteration. These iterations are generated hundreds or thousands of times, and the outcomes are then used to build a distribution, typically visualized as a histogram, that shows how probable each potential result is.

Strengths:

  • Flexible in its application
  • Has a vast amount of evidence establishing credibility
  • Many different free pre-existing formulas that can be used for forecasting
  • Can take into account many different variables to increase the level of accuracy
  • Proven effectiveness in increasing forecasting accuracy

Weaknesses:

  • Complexity can be an issue for decision makers
  • Requires mathematical knowledge
  • Dependent on the variables which are input into the model (garbage in / garbage out)

How-To:

  1. Identify a situation which requires a Monte Carlo analysis to determine a range of outcome probabilities
  2. When creating the model, identify variables which may influence potential outcomes. Be as specific and exhaustive as possible, as these variables generate the result figures (many model “shells” are available for free online)
  3. Once the variables are incorporated into the model, generate a sample of random outcomes (iterations) via a random number generator (often included in the free models / Excel formats)
  4. These iterations produce numerical results which are used to identify whether or not the outcomes are acceptable to decision makers
  5. Depending on the decision maker's level of acceptable risk, the variables may be adjusted until the likely outcomes fall within the decision maker's range of acceptable risk
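The steps above can be sketched in a few lines of Python. The variable names and ranges here are hypothetical, purely to illustrate the mechanics:

```python
import random

def run_simulation(iterations=1000, seed=42):
    """Run a simple Monte Carlo simulation over hypothetical cost/benefit ranges."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(iterations):
        # Steps 2-3: sample each influencing variable at random from its range
        savings = rng.uniform(100_000, 300_000)  # hypothetical annual savings range
        costs = rng.uniform(50_000, 250_000)     # hypothetical annual cost range
        outcomes.append(savings - costs)         # one iteration's numerical result
    return outcomes

# Step 4: summarize the iterations so a decision maker can judge acceptability
results = run_simulation()
share_positive = sum(1 for x in results if x > 0) / len(results)
```

Step 5 would then amount to adjusting the ranges and re-running until `share_positive` falls within the decision maker's tolerance; in practice the `outcomes` list is plotted as a histogram.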

Application of Technique:

Two things are required for a successful Monte Carlo simulation: inputs and the shape of the data distribution. Ideally, calibrated inputs are used to provide a range of possible outcomes for each variable considered. This range essentially represents an estimator’s 90% confidence interval. Certain types of data lend themselves to different distributions, which must be accounted for in the model. For example, stock prices do not follow a normal distribution, so a normal distribution should not be used to model them.

When a range of inputs for all variables is listed, the computer can be told to randomly sample a number from each variable's range and record the output. This is an iteration. In a typical Monte Carlo simulation, hundreds if not thousands of these iterations are run, and the distribution of the results is then visualized with a histogram.
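The point about distribution shape can be made concrete in a short Python sketch (all parameters here are hypothetical):

```python
import random

def sample_prices(n, mu=4.0, sigma=0.25, seed=0):
    """Draw n lognormally distributed prices. Lognormal draws are always
    positive, unlike normal (Gaussian) draws, which is one reason a normal
    shape is a poor fit for modeling stock prices."""
    rng = random.Random(seed)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

# A Gaussian draw suits something like a cost delta, which may legitimately go negative.
cost_delta = random.Random(0).gauss(10.0, 5.0)

prices = sample_prices(1000)
```

Choosing the wrong shape here would feed impossible values (e.g. negative prices) into every iteration downstream.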

In the class example, a manufacturing company needed to decide whether to lease a new piece of equipment, which would cost $400,000 per year. There is no option to terminate the contract early, so even if the company loses money on it, it must remain in the contract. Ranges were input for the variables of maintenance savings, labor savings, and raw materials savings; these were added together and multiplied against a range of estimated production per year. It is important to note that some of these ranges include negative values to reflect the possibility of the company losing money on its investment.

These ranges were randomly sampled for each variable, and an output was recorded to determine whether the company would break even on its equipment lease. A normal distribution was used for the shape of this data. Over 400 iterations were run, and the histogram showed that the company would break even 84% of the time and fail to do so 16% of the time. On closer inspection, it is also possible to see how much money the company is likely to save: the histogram showed that 27% of the time the company would save at least $600,000, while 3% of the time it would lose up to $200,000 on the investment.
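The class example can be sketched roughly as follows. The per-unit savings ranges below are illustrative placeholders, not the figures used in class, so the resulting percentages will differ from the ones above:

```python
import random

LEASE_COST = 400_000  # fixed annual lease cost from the example

def simulate_lease(iterations=400, seed=1):
    """Monte Carlo sketch of the equipment-lease break-even decision."""
    rng = random.Random(seed)
    net_results = []
    for _ in range(iterations):
        # Hypothetical per-unit savings ranges; some extend below zero to
        # reflect the possibility of losing money on the investment.
        maintenance = rng.uniform(-5, 20)
        labor = rng.uniform(-2, 30)
        materials = rng.uniform(-5, 25)
        units = rng.uniform(10_000, 35_000)  # estimated production per year
        net_results.append((maintenance + labor + materials) * units - LEASE_COST)
    return net_results

results = simulate_lease()
break_even_rate = sum(1 for r in results if r >= 0) / len(results)
```

A histogram of `results` would then reveal not just the break-even rate but the full spread of likely savings and losses.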

For Further Information:

Introduction to Monte Carlo Simulation:

Monte Carlo Simulation - Wikipedia:

Monte Carlo Simulation Methods in Finance:

Getguesstimate.com:

Riskamp.com:

Eye in the Sky, movie for decision making:

MathWorks:

Wolfram:

MIT Lecture Series - Sampling and Monte Carlo:

Monte Carlo Simulation Visualization:
https://www.portfoliovisualizer.com/monte-carlo-simulation

Saturday, October 29, 2016

The Effect of Simulation Order on Level Accuracy and Power of Monte Carlo Tests




In this article, authors Hall and Titterington test the effectiveness of Monte Carlo tests against asymptotic tests. The authors define their chief question as whether the Monte Carlo testing method increases statistical accuracy. They state that, from the outset, they believed the nature of Monte Carlo testing would logically increase the accuracy of such tests.

The authors describe the nature of Monte Carlo testing and how it differs from asymptotic testing.  They also discuss the history of the testing method and its base theories.  Their descriptions provide a well-defined basis of understanding for the readers to work from.  Hall and Titterington show the basic mathematical formula that Monte Carlo tests are built from and explain the equations step by step.

Deeper issues with Monte Carlo tests are then explained, such as the issue of 'pivotalness', meaning that the accuracy of the experiment can actually be affected by the number of experiments that are run. If this is not the case for a specific experiment, the results of the testing would mathematically prove no more accurate than asymptotic testing. However, the authors also explain that the methodology maintains its accuracy even with a smaller number of samples because of the way in which the tests are run.

In order to test the effectiveness of the models, the authors ran two different experiments using both models and compared the predictions to the actual results and to each other. They found that Monte Carlo tests maintained their accuracy even with limited sample sizes.

Critique:

While the authors went into great detail explaining the arithmetic and the logic behind Monte Carlo testing, much more could have been done to explain the experiments used to test the theory. The authors were vague about how the models were applied in order to test their accuracy, which diminishes the generalizability and verifiability of the experiment.

Hall, P., & Titterington, D. M. (1989). The effect of simulation order on level accuracy and power of Monte Carlo tests. Journal of the Royal Statistical Society. Series B (Methodological), 51(3), 459–467.

Friday, October 28, 2016

Modeling uncertainty in risk assessment: An integrated approach with fuzzy set theory and Monte Carlo simulation

Summary:

This journal article uses fuzzy set theory and Monte Carlo simulation to model and evaluate uncertainty and risk for a benzene extraction unit (BEU) of a chemical plant in India. The authors first describe the role risk plays in many industries, then provide a literature review of studies using Bayesian network analysis and other methods used to reduce uncertainty in analysis.

  1. After reviewing other methods of analysis used to reduce uncertainty and risk, the scientists moved into their methodology. First they outlined the three major components of risk modeling: 1) estimating the probability of an undesired outcome/situation; 2) estimating losses due to undesired outcomes/situations; and 3) modeling the risk while including variability and uncertainty in the probability of failure and its resultant consequences. From here the scientists moved into their chosen method, a simulation analysis using the Monte Carlo analysis (MCA) simulation technique. MCA is commonly used in risk assessment due to its ability to quantify uncertainty or variability in a probabilistic framework.

  2. The particular MCA used by the scientists in this study was a hybrid MCA called 2-dimensional fuzzy MCA, or 2D FMCA. In this MCA, two loops are used: the inner loop models the random variables for each fuzzy membership value, leaving the outer loop to model the parameters. The equation used is g(R) = f1(P) × f2(C), with P = probability of failure, C = consequences/loss due to failure, and f1, f2, and g being the functional forms.
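The two-loop structure can be sketched very roughly in Python. This is a simplified illustration under assumed numbers, not the authors' implementation: the outer loop walks α-cuts of a triangular fuzzy failure-probability parameter, and the inner loop Monte Carlo samples the consequence term, echoing g(R) = f1(P) × f2(C):

```python
import random

def two_loop_fuzzy_mc(alpha_cuts=(0.0, 0.5, 1.0), inner_n=1000, seed=7):
    """Rough sketch of a two-loop (2D) fuzzy Monte Carlo analysis.

    Outer loop: each alpha-cut of a triangular fuzzy failure probability
    yields a parameter interval. Inner loop: Monte Carlo sampling of the
    consequence term. Risk bounds come from evaluating P * C at the interval
    endpoints. All numbers are hypothetical, not taken from the article.
    """
    rng = random.Random(seed)
    low, mode, high = 0.01, 0.05, 0.10  # triangular fuzzy failure probability
    bounds = {}
    for alpha in alpha_cuts:
        # The interval for this alpha-cut shrinks toward the mode as alpha -> 1.
        p_lo = low + alpha * (mode - low)
        p_hi = high - alpha * (high - mode)
        risks = []
        for _ in range(inner_n):
            consequence = rng.uniform(1e5, 1e6)  # hypothetical loss range
            risks.append((p_lo * consequence, p_hi * consequence))
        lows, highs = zip(*risks)
        bounds[alpha] = (sum(lows) / inner_n, sum(highs) / inner_n)
    return bounds

bounds = two_loop_fuzzy_mc()
```

The output maps each membership level to a lower/upper risk interval, which is the kind of interval-valued result the article's later steps refine into fuzzy CDFs.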

  3. The next step, after the scientists applied their equation, was the use of the vertex method, substituting the DSW algorithm. These algorithms reduce the computational effort of estimating the upper and lower intervals, using a form of standard interval analysis with the α-cut concept.

  4. Through a number of mathematical equations the scientists produced their “1) estimation of fuzzy cumulative distribution function (CDF) of failure probability, 2) estimation of fuzzy consequence intervals, 3) estimation of fuzzy risk, and 4) estimation of support, uncertainty, possibility and necessity measures” (Arunraj, Mandal, & Maiti, 2013). All of these were used to produce the lower and upper bounds of risk.

  5. Applied to the BEU and its 8 section failures, the scientists used the standard deviation and mean of the lognormal distribution of likely failure as the fuzzy numbers. These were run through the DSW algorithm, producing 5 different combinations (Table 4). For the 5 pairs of means and standard deviations, 5,000 Monte Carlo simulations were used to create the CDFs, which were then split into 100 percentiles and applied to the 8 sections of the BEU for evaluation (Table 5). All of this was measured against a benchmark drawn from industry compliance guidelines or a regulatory authority (i.e., the plant management), with the results printed in Table 7.

Table 6 Most Likely Value of Risk
Table 7 Final Results For Measures to Compliance Benchmark

In conclusion, the scientists acknowledge that evaluating a point risk is difficult and has serious limitations for decision makers, while interval risk values that account for variability and estimation reduce the uncertainty for a decision maker. The 2D FMCA combines two forms of uncertainty assessment: fuzzy set theory and probability theory. It reduced more uncertainty than any of the other methods described in the literature review of past studies, making it a stronger aid to a decision maker's ability to make the right decision, particularly in regard to the BEU. For the BEU, the uncertainty index showed the highest degree of uncertainty for the process condensate system, followed by the solvent regeneration section, the benzene stripper column section, and lastly the storage and slop drums, when compared against the high-risk sections (see Table 7).

Critique

Given my limited knowledge of MCA and the other theories used in this piece, I would say the track record of MCA in reducing uncertainty is credible, assuming the person performing the underlying mathematics knows exactly what they are doing. I found it interesting that, much like the intelligence field, the chemical sector is known by its failures rather than its successes and so tries to keep those failures from happening; the researchers accordingly acknowledged that backing data for their study was difficult to obtain. I think the article was well rounded in that it evaluated the alternative methods before moving into the researchers' chosen methodology. It allowed one to see and compare, and MCA, by my understanding and by the researchers' results, proved the better method for reducing uncertainty, particularly for a decision maker.

Sources


Arunraj, N. S., Mandal, S., & Maiti, J. (2013). Modeling uncertainty in risk assessment: An integrated approach with fuzzy set theory and Monte Carlo simulation. Accident Analysis & Prevention, 55, 242-255. <http://www.sciencedirect.com/science/article/pii/S000145751300095X>.

Practical Use of Monte Carlo Simulation for Risk Management within the International Construction Industry.



Summary

1 Introduction

1.1  Theoretical Model of Risk Management Circle

Within this theoretical model and risk management circle, risks can be seen as controllable and assessable. According to the author, Dr. Tilo Nemuth, “risk identification at an early stage and an integrated in-house risk management is therefore an indispensable requirement for a monetarily positive result of a project.” The author uses the risk management circle depicted in Figure 1 as an overall guideline for a risk management system. “Risk” is also defined in this article as “Risk = probability of risk occurring × impact of risk occurring.”

Figure 1: Stempowski’s Risk Management Circle.
1.2  Objectives for Risk Management of Project Cost 

a.     Project risks must be identified early on in the tender and acquisition phase.
b.     Monetary analysis of risk impacts must be conducted.
c.     Display of the impact of failures.
d.     Improved risk awareness.
e.     Filtering of high risk projects and implementation of knock-out-criteria for projects in the early stages of growth.

2 Implementation of Risk Assessment in Estimation Procedure and Tender Process

2.1 Two-stage system and comprehension of Monte Carlo Simulation

In this section, Dr. Nemuth claims that project risks can be placed into categories for a more organized process of evaluation. This section also introduces a two-stage system meant for the “aggregation of project risks”: the first stage is an analysis of all risks, and the second stage is a detailed evaluation of the critical risks found in that first analysis. Emphasis is then placed on Monte Carlo simulation due to its superiority when compared to other risk analysis methods and techniques.

With reference to the risk management circle presented earlier, the two-stage process is further explained by the following example illustrated in this article. 

Stage 1 = Phase 1 + 2 (identify and analyze the project risks)
Stage 2 = Phase 3 (evaluate the risks with MCS) and preparation for Phase 4 (monitoring)

The results of a Monte Carlo Simulation can be seen as a probability distribution. Below in figure 2 is a probability density, while figure 3 is an example of the results displayed in a cumulative ascending chart. 

Figure 2: Probability Density for Monte Carlo Simulation.

Figure 3: An example of the cumulative ascending chart.
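The cumulative ascending chart in Figure 3 is simply the empirical CDF of the simulation outputs. A minimal sketch, with hypothetical project-cost figures, of how such a curve can be computed:

```python
import random

def empirical_cdf(samples):
    """Return (value, cumulative probability) pairs: the 'cumulative
    ascending' view of Monte Carlo outputs. Reading off a value answers
    questions like 'what is the probability cost stays below X?'."""
    ordered = sorted(samples)
    n = len(ordered)
    return [(v, (i + 1) / n) for i, v in enumerate(ordered)]

# Hypothetical project-cost outputs from a Monte Carlo run.
rng = random.Random(3)
outputs = [rng.gauss(1_000_000, 150_000) for _ in range(500)]
cdf = empirical_cdf(outputs)
```

Plotting the pairs in `cdf` reproduces the ascending curve of Figure 3, while a histogram of `outputs` corresponds to the probability density of Figure 2.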

 
3 Conclusion 

The purpose of this article was to illustrate that project risks can be analyzed and evaluated. The simulation provides decision makers with a better understanding not only of the risks that are present but also of the results, whether positive or negative. Monte Carlo simulation allows for a more concentrated focus on the critical risks at play, and filtering out high-risk projects at an early stage can help the decision maker avoid failure later on.

Source:
Nemuth, T. (2008). Practical Use of Monte Carlo Simulation for Risk Management within the International Construction Industry. International Probabilistic Workshop, 1-12.