This article, published in 2011 by Kesten Green, Senior Lecturer at the Ehrenberg-Bass Institute for Marketing Science at the University of South Australia, and Scott Armstrong, Professor of Marketing at The Wharton School of the University of Pennsylvania, examines how a method the authors call "role thinking" compares with simulated interaction in the accuracy of decision forecasts for novel conflict situations. Using an experimental design, they conclude that asking groups of people to think through the roles and interactions influencing a protagonist's reactions in a novel conflict, and to forecast the protagonist's decisions on that basis, is an ineffective forecasting technique. Role-thinking forecasts are unlikely to be accurate because the complex interactions between protagonists in different roles are too difficult to analyze realistically without actually experiencing those interactions.
Note that the authors prefer the term "simulated interaction" to "role playing" for the method of forecasting people's decisions by having interacting role players act out the situation, because "role playing" is also used for various techniques with purposes other than forecasting. The authors find that simulated interaction provides much better forecasting accuracy than either unaided judgment or role thinking, particularly in novel conflict situations.
Evidence from previous findings indicates that much better forecasting accuracy for a protagonist's decisions in a novel conflict situation is attained by prompting groups of people to adopt roles and to simulate the interactions between protagonist groups with divergent interests. The decision that each group reaches in the simulated interaction is taken as a forecast of the actual protagonist's decision. The authors note that, according to their own prior research using the same conflicts employed in this experiment, simulating novel conflict situations with interacting role players overcomes the absence-of-experience problem.
The authors tested role thinking in an experimental design that collected forecasts of protagonists' decisions in a set of conflicts from an expert sample and a novice sample. The accuracy of the role-thinking forecasts was compared with chance, and with the accuracy of unaided judgment and simulated interaction from previous studies that used the same conflict scenarios. The authors obtained 101 role-thinking forecasts for nine conflicts from 27 Naval postgraduate students (the expert sample) and 107 role-thinking forecasts from 103 second-year organizational behavior students (the novice sample). The results are illustrated below:
The average forecasting accuracy of both samples was only marginally better than chance: chance accuracy was 28%, compared with 33% for the novice forecasts and 31% for the expert forecasts. Previous research by Green and Armstrong using the same conflict situations found 60% accuracy with the simulated interaction method rather than role thinking.
In the role-thinking experiments, participants were given descriptions of some or all of the situations and of all the associated roles. They were prompted to predict which actions each party in the situation would prefer and to assess how likely it was that each party's preferred decision would actually occur. Each prompt included a list of between three and six decisions that the researchers believed could plausibly have been made in the situation.
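To make the chance baseline concrete: if a forecaster guessed uniformly at random from each conflict's list of plausible decisions, expected accuracy would be the average of 1/k over the conflicts' option counts. The sketch below uses hypothetical option counts (the article states only that each list held three to six decisions, not the count per conflict):

```python
# Sketch of the chance baseline for a decision-forecasting experiment.
# The option counts below are hypothetical, not the article's actual data.

def chance_accuracy(option_counts):
    """Average probability of a correct guess across conflicts,
    assuming one decision per conflict is picked uniformly at random."""
    return sum(1 / k for k in option_counts) / len(option_counts)

# Nine conflicts, each offering between three and six plausible decisions.
hypothetical_counts = [3, 3, 4, 4, 4, 5, 5, 6, 6]
print(f"chance accuracy: {chance_accuracy(hypothetical_counts):.0%}")
```

Depending on the assumed counts, this baseline lands in the low-to-high twenties, which is consistent in spirit with the 28% chance figure reported in the study.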
In the previous simulated interaction experiments, which achieved 60% forecasting accuracy, participants were divided into groups and given information only about their own roles. They were prompted to read their role description, put on a name badge for the role, and stay in the role for the duration of the simulation. Participants were free to meet with the other parties as often as they liked to reach a decision, and each group's decision was taken as a forecast of the actual protagonist's decision. In addition to the 60% overall accuracy, the simulated interaction forecasts were more accurate than the role-thinking forecasts for all nine conflict situations. The authors also point out that neither statistical nor causal models have been found feasible for predicting the decisions people make in novel conflict situations; decision makers therefore rely on judgmental methods.
The role-thinking forecasts were produced by individuals, whereas the simulated interaction forecasts were produced by groups. The authors acknowledge that a key assumption behind their comparison is that group role-thinking forecasts would differ little in accuracy from the individual role-thinking forecasts in this experiment, since both unaided judgment and role thinking differed little from chance. One way to test this assumption would be an experiment in which groups of participants arrive at a shared forecast using role thinking, with the results then compared against the simulated interaction experiments. That design would also better represent what actually happens when a team is asked to engage in role thinking together.
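A minimal way to operationalize such a comparison, at least as a crude proxy for group deliberation, would be to aggregate individual role-thinking forecasts into a single group forecast by majority vote. This sketch is hypothetical and not a procedure from the article, which envisions groups deliberating to a shared forecast rather than voting:

```python
from collections import Counter

def group_forecast(individual_forecasts):
    """Aggregate individual role-thinking forecasts into one group
    forecast by taking the most frequently predicted decision."""
    counts = Counter(individual_forecasts)
    decision, _ = counts.most_common(1)[0]
    return decision

# Hypothetical forecasts from five participants for one conflict.
forecasts = ["accept offer", "reject offer", "accept offer",
             "accept offer", "delay decision"]
print(group_forecast(forecasts))  # -> accept offer
```

The group forecast for each conflict could then be scored against the known outcome and compared with the simulated interaction results, holding the conflict scenarios constant.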
In Table 2, the expert sample in the unaided judgment experiments differs qualitatively from the expert sample in the role-thinking experiment: the former came from academia and from professional conflict management and forecasting organizations, while the latter consisted of Naval postgraduate students. However, the authors point out that there is little evidence that top experts perform judgmental tasks better than generalists. In addition, the Naval postgraduates had experience with conflicts over pay negotiations and commercial takeovers, and the authors suggest that knowledge of conflicts in one domain is likely to transfer to other domains that involve predicting human behavior in conflict situations.
One thing I did not see addressed is best practice for developing effective scenarios, roles, and choices for simulated interactions when the conflict being assessed is a current event. I suspect that designing effective simulated interactions is as much an art as a science, especially when the scenario models a current novel situation in which incomplete or intentionally deceptive information is an issue. Using simulated interactions for current conflicts and events, as opposed to historical situations whose outcomes the developers already know, adds a layer of complexity to designing the simulation that warrants further study.
Green, K. and Armstrong, J. S. (2011). Role thinking: Standing in other people's shoes to forecast decisions in conflicts. International Journal of Forecasting, 27(1), 69–80.