Thursday, November 3, 2016

Confronting prejudiced comments: Effectiveness of a role-playing exercise



In this article, authors Lawson, McDonough, and Bodle discuss their social experiment aimed at identifying whether role-playing can be effective at reducing prejudiced comments. The experiment was modeled on that of Plous (2000), in that the point was not only to inform students about prejudice but also to teach them ways to combat prejudice outside the classroom. In Plous' exercise, a speaker discusses a topic and injects a prejudiced statement at some point. The responder's role is to engage the speaker in a manner that does not make him or her hostile or defensive. Coaches then give feedback on the quality of the response. The goal of Plous' exercise was to confront prejudice in a way that leads to its reduction rather than its reinforcement.
In this article's experiment, the authors wanted to see whether subjects who participated in a role-playing exercise were more or less likely to effectively confront instances of prejudice than subjects in the control groups. The experiment included 61 students from three different undergraduate courses (social psychology, police and society, and intro to psychology). The social psychology students (23) were exposed to the role-playing exercise, while the police and society (12) and intro to psychology (26) students formed the control groups and did not participate in the exercise. The social psychology students kept a log for a week of all the instances of prejudice they experienced in their daily lives. Prior to the role-playing exercise, all participants took a pre-test consisting of 5 scenarios containing brief background information and a prejudiced statement; each participant was asked to write down how he or she would respond, and responses were coded as either effective or ineffective. For the role-playing exercise, 5 scenarios were chosen and given to each group (4-5 students) so each participant could select a different scenario previously unseen by the group. One student would read the scenario and deliver the prejudiced statement, a responder would retort, a coach would provide feedback, and the remaining students would supply dialogue for the scenario. After a discussion of which types of responses were most effective, the students in the experiment were asked to go out and use the techniques they had learned in real-life situations and to record these incidents in a second log. Afterward, all students took a post-test that was identical to the pre-test.
The results of the experiment showed that those who participated in the role-playing exercise demonstrated significantly higher levels of effective responses on the post-test compared to the pre-test. The police and society students showed no significant change between the pre- and post-tests, while the intro to psychology students actually showed a significant decrease in the number of effective responses from pre- to post-test.
Critique:
This article's findings suggest that role-playing can be an effective tool for training the mind to respond in a certain way. I am not surprised that those who participated in an exercise where they were shown what right answers look like did better on the post-test than those who didn't have it spelled out for them. The authors themselves admit that even though their experiment suggests role-playing works, they have no proof of its effectiveness in the real world. As the authors point out, the human response to prejudice is similar to bystander intervention in an emergency: one has to first identify an act as prejudice, decide it constitutes something harmful, take responsibility for responding, and select the appropriate response. The audience is another variable not discussed in the set-up of these scenarios; one will undoubtedly respond differently to family members, friends, and strangers depending on the situation at hand. I believe role-playing can be effective at preparing the participant for a potential future scenario. However, the effectiveness of the role-playing depends largely on the details of the scenario, much the same way war-gaming depends on its details in order to be effective. Simply running participants through a couple of exercises is by no means enough training to prepare them for all possible future scenarios. But like many of the other methods we've discussed so far, it will at least make participants more comfortable and knowledgeable by giving them a broader base of experiences on which to draw.

Resources:
Lawson, T. J., McDonough, T. A., & Bodle, J. H. (2010). Confronting prejudiced comments: Effectiveness of a role-playing exercise. Teaching of Psychology, 37(4), 257-261.

Plous, S. (2000). Responding to overt displays of prejudice: A role-playing exercise. Teaching of Psychology, 27, 198–200.

Monday, October 31, 2016

Summary of Findings: Monte Carlo Simulation (4 out of 5 Stars)

Note: This post represents the synthesis of the thoughts, procedures and experiences of others as represented in the articles read in advance (see previous posts) and the discussion among the students and instructor during the Advanced Analytic Techniques class at Mercyhurst University in October 2016 regarding Monte Carlo Simulation as an Analytic Technique specifically. This technique was evaluated based on its overall validity, simplicity, flexibility and its ability to effectively use structured data.

Description:

Monte Carlo simulation is a method for modeling uncertainty and assessing risk. It samples each input variable at random from an estimated range and combines the samples into a single outcome, called an iteration. These iterations are generated hundreds or thousands of times, and the outcomes are then used to build a distribution, visualized as a histogram, that shows the probability of potential results.

Strengths:

  • Flexible in its application
  • Has a vast amount of evidence establishing credibility
  • Many different free pre-existing formulas that can be used for forecasting
  • Can take into account many different variables to increase the level of accuracy
  • Proven effectiveness in increasing forecasting accuracy

Weaknesses:

  • Complexity can be an issue for decision makers
  • Requires mathematical knowledge
  • Dependent on the variables which are input into the model (garbage in / garbage out)

How-To:

  1. Identify a situation which requires a Monte Carlo analysis to determine a range of outcome probabilities
  2. When creating the model, identify variables which may influence potential outcomes. Be as specific and exhaustive as possible, as these variables generate the result figures (many model “shells” are available for free online)
  3. Once the variables are included in the model, generate a sample of random outcomes (iterations) via a random number generator (often included in the free models / Excel formats)
  4. These iterations produce numerical results which are used to identify whether or not the outcomes are acceptable to decision makers
  5. Depending on the decision maker's level of acceptable risk, the variable factors may be adjusted until the likely outcomes fall within the range of acceptable risk for the decision maker
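The steps above can be sketched in a few lines of Python. The two input ranges here are hypothetical placeholders, not figures from any real model:

```python
import random
import statistics

def run_simulation(n_iterations=10_000, seed=42):
    """Minimal Monte Carlo sketch: sample each input variable at random
    from its estimated range and record the combined outcome."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_iterations):
        # Steps 2-3: draw one random value per input variable
        # (hypothetical uniform ranges, for illustration only)
        savings = rng.uniform(10_000, 50_000)
        costs = rng.uniform(5_000, 30_000)
        outcomes.append(savings - costs)
    # Step 4: summarize so a decision maker can judge acceptability
    p_positive = sum(o > 0 for o in outcomes) / len(outcomes)
    return p_positive, statistics.mean(outcomes)

p, mean_outcome = run_simulation()
print(f"P(net gain > 0) = {p:.2f}, mean outcome = {mean_outcome:,.0f}")
```

Per step 5, a decision maker who found the resulting probability unacceptable would adjust the input ranges and rerun the simulation.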

Application of Technique:

Two things are required for a successful Monte Carlo simulation: inputs and the shape of the data distribution. Ideally, calibrated inputs are used to provide a range of possible outcomes for each variable considered. This range essentially represents an estimator’s 90% confidence interval. Certain types of data lend themselves to different distributions, which must be accounted for in the model. For example, stock prices do not follow a normal distribution, so a normal distribution should not be used to model them.

Once a range of inputs is listed for every variable, the computer can be told to randomly sample a number from each range and record the output. This is an iteration. In a typical Monte Carlo simulation, hundreds if not thousands of these iterations are run, and the distribution of the results is then visualized with a histogram.
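One way to turn a calibrated 90% confidence interval into sampling parameters is to treat it as a normal distribution: 90% of a normal distribution falls within ±1.645 standard deviations, so the interval spans 3.29 of them. A small sketch, with a hypothetical dollar range:

```python
import random

def normal_from_90ci(lower, upper):
    """Convert a calibrated 90% confidence interval into the mean and
    standard deviation of a normal distribution. The interval spans
    2 * 1.645 = 3.29 standard deviations."""
    mean = (lower + upper) / 2
    sd = (upper - lower) / 3.29
    return mean, sd

# One iteration: sample the variable once from its fitted distribution
rng = random.Random(0)
mean, sd = normal_from_90ci(100_000, 300_000)  # hypothetical 90% interval
sample = rng.gauss(mean, sd)
```

This conversion only applies when a normal shape is appropriate for the variable; as noted above, other data (e.g., stock prices) call for other distributions.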

In the class example, a manufacturing company needed to decide whether to lease a new piece of equipment, which would cost $400,000 per year. There was no option to terminate the contract early, so even if the company lost money on it, it would have to remain in the contract. Ranges were input for the variables of maintenance savings, labor savings, and raw materials savings; these were added together and multiplied against a range of estimated production per year. It is important to note that some of these ranges included negative values to reflect the possibility of the company losing money on its investment.

These ranges were randomly sampled for each variable, and an output was recorded to determine whether the company would break even on its equipment lease. A normal distribution was used for the shape of this data. Over 400 iterations were run, and the histogram showed that the company would break even 84% of the time and fail to do so 16% of the time. Furthermore, on closer inspection it is possible to see how much money the company is likely to save: the histogram showed that 27% of the time the company would save at least $600,000, while 3% of the time it would lose up to $200,000 on the investment.
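The class example's structure can be sketched as follows. The per-unit savings and production ranges below are hypothetical stand-ins (drawn from uniform rather than normal distributions, for simplicity), so the output will not reproduce the 84% figure from class:

```python
import random

def lease_simulation(n=5_000, seed=1):
    """Sketch of the equipment-lease example: the lease costs
    $400,000/year; savings per unit and units produced are sampled
    from hypothetical ranges, some of which include negative values."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        maintenance = rng.uniform(3, 9)        # savings per unit ($)
        labor = rng.uniform(-2, 8)             # may be negative
        materials = rng.uniform(1, 5)
        units = rng.uniform(25_000, 50_000)    # production per year
        results.append((maintenance + labor + materials) * units - 400_000)
    return results

results = lease_simulation()
p_break_even = sum(r >= 0 for r in results) / len(results)
print(f"Break-even probability: {p_break_even:.0%}")
```

Plotting `results` as a histogram would reproduce the kind of savings/loss breakdown described above.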

For Further Information:

Introduction to Monte Carlo Simulation:

Monte Carlo Simulation - Wikipedia:

Monte Carlo Simulation Methods in Finance:

Getguesstimate.com:

Riskamp.com:

Eye in the Sky, movie for decision making:

MathWorks:

Wolfram:

MIT Lecture Series - Sampling and Monte Carlo:

Monte Carlo Simulation Visualization:
https://www.portfoliovisualizer.com/monte-carlo-simulation

Saturday, October 29, 2016

The Effect of Simulation Order on Level Accuracy and Power of Monte Carlo Tests




In this article, authors Hall and Titterington test the effectiveness of Monte Carlo tests against asymptotic tests. The authors begin by defining their chief question: whether the Monte Carlo testing method increases statistical accuracy. They state that they believed from the beginning that, because of the nature of Monte Carlo testing, the method would logically increase the accuracy of such tests.

The authors describe the nature of Monte Carlo testing and how it differs from asymptotic testing.  They also discuss the history of the testing method and its base theories.  Their descriptions provide a well-defined basis of understanding for the readers to work from.  Hall and Titterington show the basic mathematical formula that Monte Carlo tests are built from and explain the equations step by step.

Deeper issues with Monte Carlo tests are then explained, such as the issue of 'pivotalness', meaning that the accuracy of the experiment can actually be affected by the number of simulations that are run. If this is not the case for a specific experiment, then the results of the testing would mathematically prove to be no more accurate than asymptotic testing. However, it is also explained that the methodology maintains its accuracy even with a smaller number of samples because of the way in which the tests are run.

In order to test the effectiveness of the models, the authors ran two different experiments using both models and compared the predictions to the actual results and to each other. They found that Monte Carlo tests maintained their accuracy even with limited sample sizes.
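The general idea of a Monte Carlo test, as opposed to an asymptotic one, can be sketched as follows. This is a generic illustration, not a reconstruction of Hall and Titterington's experiments:

```python
import random
import statistics

def monte_carlo_test(observed, null_sampler, statistic, n_sims=999, seed=0):
    """Monte Carlo test: instead of relying on an asymptotic distribution,
    estimate the p-value from the rank of the observed statistic among
    statistics computed on simulated null-hypothesis samples."""
    rng = random.Random(seed)
    obs_stat = statistic(observed)
    sims = [statistic(null_sampler(rng, len(observed))) for _ in range(n_sims)]
    # the +1 in numerator and denominator counts the observed sample itself
    exceed = sum(s >= obs_stat for s in sims)
    return (exceed + 1) / (n_sims + 1)

# Example: is this sample's mean too large to have come from N(0, 1)?
def null_sampler(rng, n):
    return [rng.gauss(0, 1) for _ in range(n)]

data = [0.9, 1.4, 0.3, 1.1, 0.7, 1.2, 0.5, 1.0]  # hypothetical data
p = monte_carlo_test(data, null_sampler, statistics.mean)
```

The number of simulations (`n_sims`) is the "simulation order" the title refers to: it bounds how finely the p-value can be resolved.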

Critique:

While the authors went into great detail explaining the arithmetic and logic behind Monte Carlo testing, much more could have been done to explain the experiments that tested the theory. The authors were vague about how the models were applied in order to test their accuracy, which diminishes the generalizability and verifiability of the experiment.

Hall, P., & Titterington, D. M. (1989). The effect of simulation order on level accuracy and power of Monte Carlo tests. Journal of the Royal Statistical Society. Series B (Methodological), 51(3), 459–467.

Friday, October 28, 2016

Modeling uncertainty in risk assessment: An integrated approach with fuzzy set theory and Monte Carlo simulation

Summary:

This journal article uses fuzzy set theory and Monte Carlo simulation to model and evaluate uncertainty and risk at a benzene extraction unit (BEU) of a chemical plant in India. The authors first describe the role that risk plays in many industries, then move into a literature review of studies using Bayesian network analysis and other methods for reducing uncertainty in analysis.

  1. After reviewing other methods of analysis for reducing uncertainty and risk, the scientists moved into their methodology. First they outlined the three major components of risk modeling: 1) estimation of the probability of an undesired outcome/situation; 2) estimation of losses due to undesired outcomes/situations; and 3) modeling the risk while including variability and uncertainty in the probability of failure and its resultant consequences. From here the scientists moved into their chosen method, a simulation analysis using the Monte Carlo analysis (MCA) technique. MCA is commonly used in risk assessment because of its ability to quantify uncertainty or variability in a probabilistic framework.

  2. The particular MCA used in this study was a hybrid MCA called 2-dimensional fuzzy MCA, or 2D FMCA. In this MCA, two loops are used: the inner loop models the random variables for each fuzzy membership value, leaving the outer loop to model the parameters. The equation used is g(R) = f1(P) * f2(C), where P is the probability of failure, C is the consequence/loss due to failure, and f1, f2, and g are the functional forms.

  3. The next step, after the scientists applied their equation, was the use of a vertex method substituting a DSW algorithm. These algorithms reduce the computational effort of estimating the upper and lower intervals while using a form of standard interval analysis with the α-cut concept.

  4. Through a number of mathematical equations the scientists produced their “1) estimation of fuzzy cumulative distribution function (CDF) of failure probability, 2) estimation of fuzzy consequence intervals, 3) estimation of fuzzy risk, and 4) estimation of support, uncertainty, possibility and necessity measures” (Arunraj, Mandal, & Maiti, 2013). All of these were used to produce the lower and upper bounds of risk.

  5. Applied to the BEU and its 8 section failures, the scientists used the standard deviation and mean of the lognormal distribution of likely failure as the fuzzy numbers. These were then put into DSW algorithms, producing 5 different combinations (Table 4). For the 5 pairs of means and standard deviations, 5,000 Monte Carlo simulations were used to create the CDFs, which were then split into 100 percentiles and applied to the 8 sections of the BEU for evaluation (Table 5). All of these were measured against a compliance benchmark set by industry operating guidelines or a regulatory authority (i.e., the plant management), with the results printed in Table 7.
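The two-loop structure described above can be sketched as follows. The triangular fuzzy number, consequence range, and α-cut levels here are hypothetical illustrations, not the paper's BEU data:

```python
import random

def two_d_fuzzy_mc(alpha_cuts=(0.0, 0.5, 1.0), n_inner=2_000, seed=7):
    """Sketch of a 2-dimensional fuzzy Monte Carlo (2D FMCA): the outer
    loop walks alpha-cuts of a triangular fuzzy parameter (failure
    probability P), and the inner loop runs a plain Monte Carlo over the
    random consequence C, yielding lower/upper risk bounds per membership
    level. Here g(R) = f1(P) * f2(C) with f1 and f2 as identity."""
    rng = random.Random(seed)
    # triangular fuzzy number for failure probability: (low, mode, high)
    p_low, p_mode, p_high = 0.01, 0.03, 0.06
    results = {}
    for alpha in alpha_cuts:
        # outer loop: alpha-cut interval of the fuzzy parameter
        lo = p_low + alpha * (p_mode - p_low)
        hi = p_high - alpha * (p_high - p_mode)
        bounds = []
        for p_fail in (lo, hi):
            # inner loop: Monte Carlo over the random consequence C ($)
            risks = [p_fail * rng.uniform(1e5, 5e5) for _ in range(n_inner)]
            bounds.append(sum(risks) / n_inner)
        results[alpha] = (min(bounds), max(bounds))
    return results

intervals = two_d_fuzzy_mc()
# the risk interval narrows as membership (alpha) rises toward the mode
```

The paper's vertex method / DSW algorithm is a more efficient way of propagating the α-cut intervals than the brute-force endpoint evaluation shown here.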

Table 6 Most Likely Value of Risk
Table 7 Final Results For Measures to Compliance Benchmark

In conclusion, the scientists acknowledge that evaluating a point risk is difficult and has serious limitations for decision makers, whereas the use of interval risk values that incorporate variability and estimation reduces the uncertainty for a decision maker. The 2D FMCA combines two forms of uncertainty assessment: fuzzy set theory and probability theory. It reduced more uncertainty than any of the other methods described in the literature review of past studies, making it a stronger support for a decision maker's ability to make the right decision, particularly in regards to the BEU. For the BEU, the uncertainty index showed the highest degree of uncertainty for the process condensate system, followed by the solvent regeneration section, the benzene stripper column section, and lastly the storage and slop drums, when put against the high-risk sections (see Table 7 results).

Critique

Given my limited knowledge of MCA and the other theories used in this piece, I would say the track record for MCA in reducing uncertainty is credible, assuming that the person doing all the mathematical equations behind it knows exactly what they are doing. I found it interesting that, like the intelligence field, the chemical sector works to prevent failures because its failures become known while its successes do not; the researchers acknowledged that backing data for their study was difficult to obtain for this reason. I personally think the article was well rounded in that it evaluated other methods before going into the researchers' chosen methodology. This allowed one to see and compare, and by my understanding and by the researchers' results, MCA proved the better method for reducing uncertainty, particularly for a decision maker.

Sources


Arunraj, N. S., Mandal, S., & Maiti, J. (2013). Modeling uncertainty in risk assessment: An integrated approach with fuzzy set theory and Monte Carlo simulation. Accident Analysis & Prevention, 55, 242-255. <http://www.sciencedirect.com/science/article/pii/S000145751300095X>.