Tuesday, November 21, 2017

Summary of Findings:  Intuition ( 4 out of 5 Stars)

Note: This post represents the synthesis of the thoughts, procedures, and experiences of others as represented in the articles read in advance (see previous posts) and the discussion among the students and instructor during the Advanced Analytic Techniques class at Mercyhurst University, in November 2017 regarding Intuition as an Analytic Method, specifically. This technique was evaluated based on its overall validity, simplicity, flexibility and its ability to effectively use unstructured data.

Intuition is an analytic method that relies on a) quick and ready insight, b) immediate apprehension or cognition, c) knowledge or conviction gained by intuition, d) the power or faculty of attaining to direct knowledge or cognition without evident rational thought and inference.

  • Processes information extremely quickly
  • Free (Built-in function of human brain)  
  • Focuses on well-defined patterns, relationships, and possibilities
  • Intuition involves factors such as expertise, processing styles, task structure, feedback, and time pressure in making decisions
  • Heavily influenced by individual biases
  • Subjective; conclusions are a matter of opinion
  • Current research suggests that intuition works best only in certain situations
  • Hard to articulate decision-making process
  • Flawed information can lead to wrong intuition
  • Short-term emotional bias can lead to bad intuition
  • Unstructured questions damage the effectiveness of intuition
  • Insufficient consideration of alternatives
  • Lack of openness; every person has a different experience base

How-To: Holistic Hunch
  1. Be presented with a problem that you have never seen before
  2. Internalize the associated mental frames and value judgements into habitualized patterns of thinking and understanding
  3. Make a decision based on System 1 thinking, drawing on past experiences combined with novel information in new ways

How-To: Automated Expertise
  1. Be presented with a problem that is well within your subject matter expertise
  2. Internalize the associated mental frames and value judgements into habitualized patterns of thinking and understanding
  3. Make a decision based on past situation-specific experiences and a replay of those experiences.

For Further Information:

Monday, November 20, 2017

The Intelligence Community Debate over Intuition versus Structured Technique: Implications for Improving Intelligence Warning and Analysis

Sundri Khalsa of the United States Marine Corps examined the debate over intuition versus structured analytic techniques in intelligence analysis. Khalsa notes several mistakes as causes of intelligence failures: bias, filtering out key information, including vague information when it supports a hypothesis, and distributing fractional information both intentionally and unintentionally. Including vague information because it supported a hypothesis was the mistake most often repeated in intelligence analysis.

Pearl Harbor and Iraq were two cases of intelligence failure due to vague information within the analysis. Analysts told decision makers about a possible attack by the Japanese, but they did not name Hawaii as a possible target because they assumed it would be understood to be included in the warning and because Hawaii was considered a low-probability target. They made the same mistake with Iraq's WMD program.

The study goes on to examine the arguments for intuition, structured technique, and a systematic process which is a mixture of intuition and structured technique. Khalsa argues that the systematic approach is better because it allows for unforeseen variables while limiting the bias that is problematic with intuition.


The researcher is very thorough in his examination of intuition vs. structured technique and the proposed solution of a systematic process. However, his technical language would make it difficult for someone not familiar with intelligence terminology to understand.

Citation: Khalsa, Sundri. The Intelligence Community Debate over Intuition versus Structured Technique: Implications for Improving Intelligence Warning and Analysis. The Gregg Centre for the Study of War and Society. https://journals.lib.unb.ca/index.php/jcs/article/view/15234/20838

Sunday, November 19, 2017

Intuition in strategic decision making: Friend or foe in the fast-paced 21st century?

Matthew Haines

In this article, authors C. Chet Miller and R. Duane Ireland discuss the pros and cons of intuition and its use in decision making. They begin by describing Honda's entrance into the American market and how preliminary analysis ran completely contrary to the endeavor being a success. However, Honda took a risk on its executives' intuition and had a huge success in the American market. The authors then define intuition, saying that at its core there are two types: holistic hunches and automated expertise. Holistic hunches are judgements that come from a subconscious process involving a synthesis of diverse experiences, novel combinations of information, and strong feelings of being right.

Automated expertise, on the other hand, is a partially subconscious choice. It comes from past situation-specific experiences and a replay of those experiences. The authors then outline two specific types of strategies and how intuition can be used within them. First, exploratory strategies can benefit from holistic hunches because they can lead to novel ideas and aid in trial-and-error testing. Automated expertise, on the other hand, can create consistency in an uncertain environment. Exploitation strategies benefit more from automated expertise because executives have some sense of the factors influencing the decision. However, holistic hunches also provide a break from the norm and help generate new ideas.

This article did very little to establish the actual effectiveness of intuition in decision making and forecasting accuracy. However, the authors do a great job of defining intuition and how it differs in application. The article also cites a number of different studies on the accuracy of an expert's forecasting ability. The authors also show some of the steps a manager can take to limit the negative effects intuition can have. I especially liked their recommendations for using holistic hunches to experiment with business practices that won't break the bank and to encourage a culture where failed experiments are OK. These practices lead to better brainstorming collaborations and help break down some biases people have. However, I still believe this article did a poor job of taking any kind of stance on the use of intuition. It could have easily been improved if the authors had included data on topics other than automated expertise.


Saturday, November 18, 2017

Intuitive Decision-Making

By Vicki L. Sauter

Summary and Critique by: Jared Leets
The author begins by explaining some of the mainstream decision-making styles, which include left brain, right brain, and others. The left brain style tends to prefer working with variables that can be controlled, measured, or quantified when information is accessible. The other side, the right brain style, uses intuitive techniques, typically placing a tremendous amount of importance on feelings rather than facts. These decision-makers usually employ spontaneous procedures when considering something. Common examples of this style include brainstorming and emergent trend projections. While the left brain style focuses on developing solution methodologies, having an orderly method of searching for information, and aiming for predictability, the right brain style avoids committing to any one strategy at all. The decision-maker acts without stating any procedures, tends to experiment with uncertainty to develop an understanding of the requirement, and will usually think of all possibilities simultaneously while remembering the main problem at hand. The author states that the problems with this are that there are no data-tested theories and that the methodology cannot be tested or replicated.

Intuition can be developed through experience; for example, a decision-maker or manager can acquire expertise in a subject by internalizing events and then making responses automatic (Sauter 1999). The decision-maker can develop approaches to problem-solving that facilitate the collection of information and can look for ways to connect information in ambiguous ways. This in turn can trigger intuitive approaches that have not been seen or used in the decision-maker's past. However, there are negatives as well. Managers and decision-makers employing intuition can become impatient with routine, details, or repetition, which can make them reach conclusions too fast and disregard important information. But if these intuitive decision-makers recognize this, they can overcome it. What they must do is evaluate all intuitively acquired information with analytic examination and look at all facets of it while eliminating bias. They must completely eliminate confirmation bias, since most decision-makers attempt to confirm their beliefs, and causation and probability must be reviewed and examined thoroughly.

Towards the end of the article, the author states that a decision-maker must know not only what the best way to solve the problem is regarding the data, but also why and how; simply providing an answer is not enough. Inductive technology tools, such as statistical cluster analysis, can help decision-makers test assumptions. The author concludes that intuition is becoming increasingly important for decision-makers, and that they will need decision-making system tools to help them incorporate intuition.

I thought this article explained how intuition is used by decision-makers, and the different styles involved, fairly well. The author explained the difference between left and right brain styles of decision-making and gave examples as well. She also described intuitive decision-making and gave the pros and cons for both sides. Overall, the article presented its argument for intuition well and also provided contrary information showing how intuition can fail, but also how it can help when combined with left brain decision-making techniques.

Sauter, V. L. (1999). Intuitive decision-making. Communications of the ACM, 42(6), 109-115.

Friday, November 17, 2017

Measuring Intuition: Nonconscious Emotional Information Boosts Decision Accuracy and Confidence

Summary and Critique by Claude Bingham


The researchers, Lufityanto, Donkin, and Pearson, sought to test just how much influence intuition can exert on decision-making accuracy, specifically emotionally-based intuition. Previous research had shown that the amount of nonconscious information plays a role in accuracy. 

They coupled a random-dot-motion exercise with emotionally charged images that were shown nonconsciously to test participants via flash suppression. Using two control groups and four experimental groups, they found there could be a link between intuition and unrelated categorical decision-making.

About 33% of the participants in the first experiment were excluded because they were not able to improve at tracking the random dot motions over time; the final sample size was 16 (7 males and 9 females). The control group was 10 subjects (5 males and 5 females) who had previous psychophysical experience in lab settings.

When shown negative and positive images that were suppressed from awareness but intact, participants showed an approximately 3% increase in accuracy compared to when the image was scrambled. This was most clear when participants had not yet become used to the motion exercise.

The second experiment also had a sample size of 16 (6 males and 10 females). When results were compared for negative versus positive images, the participants showed no change in performance. Participants responded better to intact images than scrambled images; they showed higher confidence in their decisions, faster reaction times, and higher accuracy.

The third experiment, which had 5 male and 11 female participants, showed similar increased accuracy and response when images were shown with a corresponding directional motion. When the researchers switched the direction, accuracy went back to neutral. 

The final experiment, with 22 participants (9 males, 13 females), sought to test how nonconscious information is tied to consciously available information. The results showed intact images were still linked to higher accuracy, and using skin conductance response, the researchers found participants were responding physiologically to difficult random dot motions in concert with the nonconscious images.


This was a very technical research experiment with rigorous procedures to protect result integrity. Still, I feel there are holes in the test; for example, individual participants' reactions to particular emotional images may be the reverse of what was expected. Additionally, I am not sure a seemingly random situation, like random dot movements, is an accurate approximation of the non-random real-life situations that require people to use intuition, such as the movements of fellow drivers in traffic. Still, this experiment does show that, subconsciously, our brains do process decision-related information outside of, and in addition to, consciously available information.

Full research experiment available here: http://www.pearsonlab.org/images/human_intuition.pdf

Managerial Decision Making: Importance of Intuition in the Rational Process

by Ivana TICHÁ, Jan HRON, Jiří FIEDLER


This article discusses the role of intuition in managerial decision making. The article aims to contribute by reviewing the current literature on intuitive decision making and by describing the use of accumulated knowledge and intuition as applied by managers. The rational analysis approach has traditionally been adopted in decision making, particularly in the business environment. The process involves collating, analyzing, and interpreting the collected information, then formulating alternatives. Afterwards, the choice of the best option is derived using common sense. The speed of communication, the reduced time to examine data and relationships, and the lower stability and predictability of the business environment have increased the complexity of decision making.

The paper describes intuition as a highly complex and highly developed form of reasoning that is based on years of experience and learning, and on the facts, patterns, concepts, procedures, and abstractions stored in the decision maker's head. Whether intuition is relied upon depends on the situation and the experience of the manager involved. Managers are able to develop an internal reservoir based on the accumulation of experience and expertise.

According to the paper, intuitive decision making can be trusted only when it passes four tests, which also support the argument that intuition is rooted not in emotions but in reason.

  • The familiarity test – builds on the nature of intuition as pattern recognition. The decision-maker's familiarity is judged against the major uncertainties of the decision-making situation.
  • The feedback test – previous decisions build into a reservoir of lessons learned, with emotional tags associated with each decision. Positive emotions associated with a previous decision support further decisions.
  • The measured emotions test – relates to the strength of the emotions associated with previous decisions.
  • The independence test – aims to avoid any conflict of personal interests.

The paper concludes that intuition is integral to the decision-making process and generally accepted by both practitioners and academia. The paper points out the need for further research into many aspects, such as the specific situations in which intuition works well, the decision types that can be associated with and supported by intuition, the decision maker's profession, experience, and industry, and the nature of the organizational culture. A direct correlation between intuition and rationality is identified, due to their shared ability to meet the needs of a decision-making situation.


The paper properly addresses the integral nature of intuition in relation to decision making by managers. The four tests that are necessary before intuitive decision making can be trusted serve as a good baseline for evaluating the decisions that are produced. There was no mention of how cognitive bias plays into the intuitive decision-making process. It would have been good to see if there are any distinctions between making decisions by drawing from intuition or bias. A manager’s well-honed intuition can be very instrumental in the success of the organization which is reflected in the quality of decisions that are made.

Thursday, November 16, 2017


Exploring Intuition and Its Role in Managerial Decision Making

Summary and Critique by Michael Pouch

This study explores intuition and how it is defined through the development of models and propositions that incorporate the role of domain knowledge, implicit and explicit learning, and task characteristics on intuition effectiveness. In addition, the authors suggest how intuition can be applied to future research and managerial decision making.

The authors suggest that there are two major barriers to a productive discourse on the topic of intuition within the management literature. The first concerns the considerable confusion surrounding what intuition is, and the second is the failure to distinguish between when intuitions are used and when they are used effectively, as shown in Figure 1. These barriers arise from the various perspectives used to understand intuition and from the failure of existing work to recognize when it is used effectively, as opposed to when it is simply most likely to be used.

Figure 1: Multiple Definitions of Intuition 

The authors then construct a definition built upon bridging work in psychology, philosophy, and management. They found that four characteristics make up the core of their construct of intuition: it is a non-conscious process, involving holistic associations, that is produced rapidly, and that results in affectively charged judgments. The authors' objective within this context was to help clarify which types of decision-making processes are intuitive and which are not.

From there the authors look into the conditions that influence the effectiveness of intuitive decision making. The authors suggest that two broad sets of factors influence intuition effectiveness: domain knowledge factors and task characteristics. Domain knowledge factors consist of schemas (heuristic and expert) and learning (explicit and implicit), while task characteristic factors consist of intellective versus judgmental tasks and environmental uncertainty. In the end, the authors are trying to delineate what intuition is and when people are likely to use it well.

Figure 2: Factors that Influence Intuitive Decision Making

Lastly, the authors explain the managerial implications of the use of intuition. One example they mention is that managers should be mindful of their environments in order to facilitate implicit learning. In addition, the authors continue by saying that by staying alert and viewing problems from multiple perspectives, "mindful" managers may form new cognitive categories and distinctions.

Overall, the authors try to explain how and why speed serves as one characteristic of intuition and identified factors that make intuitive judgments effective in decision making.

I feel the authors have set up a great framework for explaining, defining, and applying the intuition approach. The authors essentially laid out that intuition plays a key role in decision-making in rapidly changing environments. The fact is that for some decisions, data alone isn't enough. In addition, intuitive decision making is far more than using common sense, because it involves additional senses for perceiving and becoming aware of information from outside, as the authors explained in the conditions for intuitive decision making.

Reference: https://www.researchgate.net/publication/254412101_Exploring_intuition_and_its_role_in_managerial_decision_making

Tuesday, November 14, 2017

Summary of Findings: Monte Carlo Simulations (3.5 out of 5 Stars)

Note: This post represents the synthesis of the thoughts, procedures and experiences of others as represented in the articles read in advance (see previous posts) and the discussion among the students and instructor during the Advanced Analytic Techniques class at Mercyhurst University, in November 2017 regarding Monte Carlo Simulations as an Analytic Method, specifically. This technique was evaluated based on its overall validity, simplicity, flexibility and its ability to effectively use unstructured data.

Monte Carlo simulations are mathematically based algorithms that use repeated random sampling to obtain estimates. Typically, Monte Carlo simulations are used for physically or mathematically complex problems that would be difficult to solve by other means. Many forms of Monte Carlo simulation are designed to be run over many test iterations. The result of these iterations is a range of probabilities, distributions, or possible outcomes.

  • Allows decision makers to determine a range of possible outcomes and the probability that an outcome may occur rather than a single-point estimate
  • Flexible in its application
  • Proven effectiveness for increasing forecasting accuracy
  • Can take into account and use several variables
  • Has vast amounts of evidence giving it credibility
  • Effective when used in conjunction with decision trees/complex scenario based forecasting
  • Time and cost-effective

  • Assumptions need to be fair; if a number is derived from unrealistic assumptions, it possesses no real value
  • Requires in-depth knowledge of math, which for many can be extremely challenging
  • Validity depends on the variables; if the input is inaccurate, then the output will be inaccurate
  • Many samples may be required to obtain an acceptable precision in the answer


The exercise used was a simplistic version of a Monte Carlo simulation in order to show the method in a short amount of time. We attempted to forecast the number of turns it would take us to kill a fictional monster in an RPG if we were rolling dice.

  1. Set the "Monster's" health; ours was 20
  2. Roll a die and subtract that number from the "Monster's" health
  3. Roll until the monster dies
  4. Run the simulation over several iterations (10,000 is suggested, but for time we ran 5)
  5. Take the average and build a distribution model to evaluate findings and scenarios
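The steps above can be sketched in code; the following Python snippet (the function name and structure are my own, not from the class materials) runs the full 10,000 iterations rather than the 5 we had time for:

```python
import random

def turns_to_kill(health=20, sides=6):
    """Roll a die, subtracting each roll from the monster's health,
    until health reaches zero; return the number of turns taken."""
    turns = 0
    while health > 0:
        health -= random.randint(1, sides)
        turns += 1
    return turns

# Step 4: run the simulation over many iterations
results = [turns_to_kill() for _ in range(10_000)]

# Step 5: take the average and inspect the distribution
avg = sum(results) / len(results)
print(f"average turns: {avg:.2f}")
print(f"observed range: {min(results)} to {max(results)}")
```

With a 20-point monster and a six-sided die, the average lands somewhat above 20 / 3.5 ≈ 5.7 turns, because the final roll usually overshoots zero.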

Application of Technique:
The class was given a graph with the x-axis numbered up to 24. Students were required to start at the number 20 and, after rolling a six-sided die, subtract the number rolled from 20, continuing to roll until they got to zero. They plotted the numbers they rolled on a graph to see how many turns it took to get to zero. This was done five times for each student. The number of attempts was collected from everyone in class, and an average was calculated for the number of attempts required for a die to reduce 20 to zero. The method used in class was a simplistic version of Monte Carlo; in a real-life scenario the method is repeated 1,000 or more times to get a better output. Our exercise launched a discussion over the actual value of a Monte Carlo simulation compared to other forms of regression. We came to the conclusion that Monte Carlo does offer different kinds of answers that could be more valuable in the uncertain world of intelligence analysis.

For Further Information:

Saturday, November 11, 2017

The Problems with Monte Carlo Simulation

Summary and Critique by Keith Robinson Jr.


David Nawrocki, professor of finance at Villanova University, examines the failings of Monte Carlo simulation in the financial realm. The author explains that Monte Carlo simulation is only useful in situations where data and analytic models are unavailable: we possess some knowledge about the population, but sampling data is unavailable. Rubinstein (1981) explains that Monte Carlo simulation is appropriate when:

  • It is impossible or too expensive to obtain data
  • The observed system is too complex
  • The analytical solution is difficult to obtain
  • It is impossible to validate the mathematical experiment
Further exploring the extant literature at the time, Nawrocki found no articles in support of using Monte Carlo simulation with financial return markets. Additional literature suggested that while the methodology educates people about uncertainty and risk, it does not reduce uncertainty; instead it increases it, because it is derived from assumptions, which can lead to incorrect decisions. Implementation is not easy. Nawrocki provides a number of cases in which Monte Carlo simulation is unnecessary or fails. Because Monte Carlo simulation, as commonly implemented, assumes all distributions are normal and correlations are zero, it does not accurately capture the interrelationships between multiple variables contained in historical data; therefore, it does not depict real-world complexities.

He suggests that Monte Carlo simulation lacks an adequate benefit/cost ratio and provides no demonstrably better answers than other analytic techniques. The researcher argues that if a number is derived from unrealistic assumptions, it possesses no real value. Regarding policy implications, Nawrocki explains that the best policy adapts to uncertain conditions, rather than relying on the most likely course of action produced by the simulation. So, while useful for cases where data and analytic models are unavailable, Monte Carlo simulation requires more work and does not necessarily produce better answers than other analytic techniques.


When analyzing Monte Carlo simulations for use in intelligence, the technique could very well be a powerful tool for assessing risk, but not necessarily for reducing uncertainty. Monte Carlo simulations can provide the most likely outcomes, but that does not necessarily reflect real-world scenarios. Assuming an adversary/competitor will proceed down a specific path is comparable to mirror-imaging bias. While an outcome may be the most likely, one cannot assume all actors are rational actors.

Source: Nawrocki, D. (2001). The Problems with Monte Carlo Simulation. Journal of Financial Planning, 14(11), 92-106.


1. Rubinstein, R. Y. (1981). Simulation and the Monte Carlo Method. New York: John Wiley and Sons.

Water Quality In River Systems: Monte-Carlo Analysis

Summary and Critique By: Ian Abplanalp


In 1979, a study was conducted in England on the Bedford Ouse River using a Monte Carlo simulation. At the time, the poorly defined nature of water resource systems, coupled with sampling and measurement errors, made water quality hard to assess. Researchers wished to establish a statistical methodology that would be able to assess and forecast the water quality of the river. A properly established statistical procedure would account for the uncertainty inherent in water resource systems, allowing an analyst to reasonably assess their own uncertainty and keep it in mind when making a forecast, similar to assessing a margin of error. To flesh out the answer to this question, the researchers used a Monte Carlo simulation.

A Monte-Carlo simulation boils down to four basic elements:

1) Identifying the mathematical model of the activity you want to explore 
2) Define parameters for each factor in your model
3) Create random data for those parameters
4) Simulate and analyze the output of your process
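As a toy illustration of the four elements (the model and parameter ranges below are invented for this sketch, not taken from the river study), consider estimating total project time from three uncertain task durations:

```python
import random
import statistics

rng = random.Random(42)

# 1) The mathematical model: total time is the sum of three sequential tasks.
# 2) Parameters: each task duration is drawn from a plausible range (hours).
def project_time():
    design = rng.uniform(2, 6)
    build = rng.uniform(4, 10)
    test = rng.uniform(1, 3)
    return design + build + test

# 3) Create random data for those parameters, then
# 4) simulate many iterations and analyze the output distribution.
samples = sorted(project_time() for _ in range(10_000))
mean = statistics.mean(samples)
p90 = samples[int(0.9 * len(samples))]
print(f"mean ≈ {mean:.1f} h, 90th percentile ≈ {p90:.1f} h")
```

The output is not a single number but a distribution, from which the analyst can read a mean, a spread, and tail percentiles.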

The researchers chose a black box model as the mathematical model they wished to use. This was because it best adhered to the model structure determined by the field data collection, i.e., water samples. This model reflects a single-input, single-output stochastic difference equation to help account for a time series of events.

The researchers created parameters for both a water quality model and a multi-reach flow and quality model, with established parameters for oxygen absorption, volumetric flow rate in the stream, volumetric holdup in the reach, input from the preceding stream, saturation and concentration, turbulence, and sunlight dependence (for algae growth). All of these parameters were controlled for time series events to measure their effect on the overall water resource system over the course of the experiment.

The model was populated using upstream readings, which were used to make downstream forecasts. The deviation of this sample would be generated by the Monte Carlo simulation, with the forecasting data projected through multiple iterations of the data within the parameters.

The Monte Carlo simulation was able to predict the amount of dissolved oxygen in the water to within 0.3% of the model's expectations. However, the results carried some degree of uncertainty.


The article was good at demonstrating how a Monte Carlo simulation can help forecast with increased accuracy. It also demonstrated how particular the model you build needs to be in order to get an accurate output. The model must be tailored to the specific problem, but when done properly it can be used to forecast accurately within mathematical models.


Friday, November 10, 2017

Analyzing the Risks of Information Security Investments With Monte Carlo Simulations

By: Sam Farnan


James Conrad writes about how he utilizes Monte Carlo software simulations to help quantify the uncertainty that is so prevalent in cyber-security for businesses. The author details that these simulations can pay off incredibly well when compared to more conventional information-security models, which use "assumed" or "expected" values. The author writes: "For example, an expert might estimate the particular frequency of an attack to be 2 intrusions per year. Could it be only 1? Perhaps. Could it be 4? Sure. Is 4 more probable than 1? Well yes. How about 100? No that would be unlikely. A Monte Carlo simulation enables an analyst to quantify the uncertainty in an expert's estimate by defining it as a probability distribution rather than just a single expected value".

The author also points out that the analyst is able to account for uncertainty in expert opinions as well when running these simulations. All of this will be displayed as a forecast range that is comprehensible to managers. Following the gathering of variables (ideally based on expert opinions), "the tool selects a random value for each parameter, executes the hosted security model with those values, and collects the forecasted results from the model. Selection, execution and collection are repeated in many (often thousands of) iterations of the model. Commercial Monte-Carlo tools offer a capability to display the result of the simulation as a chart plotting the forecast's distribution".

The author highlights that these models are easier on experts, allowing them to provide a range of uncertainty instead of a single value which may or may not be a full representation of the chances of something, in this case a cyber attack, happening. He then concludes that these simulations are very useful for systems-level applications, and notes that uncertainty can be recognized and accounted for in these simulations.
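Conrad's intrusion-frequency example can be sketched as follows. The Poisson model for attack counts and the lognormal cost figures are my own illustrative assumptions, not taken from the article:

```python
import math
import random
import statistics

def poisson(lam, rng):
    """Sample a Poisson-distributed count via Knuth's algorithm."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def annual_loss(rng):
    # Expert estimate: about 2 intrusions per year, modeled as a
    # Poisson distribution rather than a fixed point value.
    intrusions = poisson(2.0, rng)
    # Assumed (illustrative) cost per intrusion: lognormal, median ~$49k.
    return sum(rng.lognormvariate(10.8, 0.5) for _ in range(intrusions))

rng = random.Random(1)
losses = sorted(annual_loss(rng) for _ in range(10_000))
print(f"mean annual loss ≈ ${statistics.mean(losses):,.0f}")
print(f"95th percentile  ≈ ${losses[9500]:,.0f}")
```

Instead of reporting "2 intrusions at some cost each," the analyst can hand management a full forecast distribution: a mean annual loss plus a tail figure for bad years.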


Although the author presents evidence that Monte Carlo simulations can account for uncertainty and provide a range instead of a single value for how many times something might happen, I feel this article is highly technical in nature and is not suited for someone who does not already have experience using Monte Carlo simulation software.

Monte Carlo Simulation: Assessing A Reasonable Degree of Certainty

A Summary by Kevin Muvunyi


In their article "Monte Carlo Simulation: Assessing A Reasonable Degree of Certainty," Daily and Solis apply the Monte Carlo simulation technique to two hypothetical scenarios that seek to determine future financial outcomes with a certain degree of confidence, examining the benefits and drawbacks of the methodology along the way. According to the authors, Monte Carlo simulation has a large scope of applicability in various fields, and especially in financial analysis, hence the interest therein.

In their analysis, Daily and Solis examine a simple lost profits analysis and then a more complex construction delay claim requiring the evaluation of lost profits. First, they tackle each scenario based on known facts, evidence, and assumptions, followed by a repeat of the same process using the Monte Carlo simulation. In the case of the lost profits analysis, the researchers first utilize single inputs as part of their assumption-based analysis to get the lost profits values. They then use the Monte Carlo technique with the help of the Microsoft Excel-based RISK program, whereby, with the use of probability distributions, they are able to run 10,000 iterations to get final results. What the authors were able to discern in this particular case is that the lost profits values in both instances were approximately similar. In the second case, the construction delay claim, the researchers repeated the same processes, but this time, due to the complexity of the scenario, they were inclined to use multiple inputs; thus, there were significant material differences between the results of the two methodologies, namely the Monte Carlo simulation and the assumption-based technique. Ultimately, Daily and Solis conclude that the Monte Carlo simulation can have a material effect on the ultimate outcome, or no material effect at all, with regard to financial analysis. Nonetheless, they stress the fact that in both scenarios Monte Carlo provided them with helpful statistics regarding the possible outcomes of their analyses.
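To show the mechanics of the single-input versus distribution-based comparison, a simple lost profits sketch might look like the following. All dollar figures and distribution shapes are invented for illustration and are not the article's:

```python
import random
import statistics

rng = random.Random(7)

def lost_profits():
    # The assumption-based analysis would use only the "most likely" values;
    # here each input is a triangular distribution: (low, high, most likely).
    lost_revenue = rng.triangular(800_000, 1_200_000, 950_000)
    avoided_costs = rng.triangular(500_000, 800_000, 600_000)
    return lost_revenue - avoided_costs

point_estimate = 950_000 - 600_000  # single-input answer: $350,000
runs = sorted(lost_profits() for _ in range(10_000))

print(f"single-input estimate: ${point_estimate:,}")
print(f"simulated mean:        ${statistics.mean(runs):,.0f}")
print(f"90% interval:          ${runs[500]:,.0f} to ${runs[9500]:,.0f}")
```

As in the authors' first scenario, the simulated mean comes out close to the single-input figure, but the interval makes the spread of possible outcomes explicit rather than hiding it behind one number.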


Although the article provides two practical examples of how the Monte Carlo simulation can be applied to real-world scenarios, it nonetheless fails to clearly demonstrate the drawbacks of the technique in a financial analysis context.

Source: http://eds.b.ebscohost.com.ezproxy.mercyhurst.edu/ehost/pdfviewer/pdfviewer?vid=6&sid=f08e7165-fad1-4f17-945c-45825adca828%40pdc-v-sessmgr01