In this paper, Wright analyzes some of Green’s work and conducts a small meta-analysis of other authors who have analyzed both Green’s work and game theory in general. While Green demonstrated in his study that game theorists’ predictions were more accurate than the unaided judgement of university students, other authors question the validity of his study. Bolton, for example, challenges Green’s method of testing game theory and argues that role-playing is dependent on game theory, in that knowledge of game theory is necessary to design the role-plays in the first place.
Wright argues that, based on results from several authors’ studies, game theory is limited in its ability to analyze and forecast the outcomes of one-off, real-world conflict situations such as the ones Green studied. He notes that forecasting generalized market behavior and the outcomes of context-free laboratory gaming seems to be the best that game-theory-based forecasting can currently achieve. Because Green implies in his study that the expert judgement of game theorists should produce more accurate forecasts than non-expert judgement, other authors have examined whether strong evidence underpins this assumption.
Bolger and Wright examined 20 extant studies of expertise in the social and decision science literature and documented that 6 of the 20 showed ‘good’ expert performance, in the domains of weather forecasting, the number of tricks to be made in bridge playing, odds forecasting in horse racing, interest rate prediction, and research and development outcome prediction. Of the other 14 studies, 9 showed poor expert performance while the remaining 5 showed equivocal performance. Bolger and Wright concluded from the patterns of these studies that expert performance is largely a function of the interaction between two dimensions: ecological validity, the degree to which experts are required to make judgements inside or outside the domain of their professional experience, and learnability, the degree to which good judgement can be learned in the task domain.
So how did role-playing enable the non-experts in Green’s study to produce better forecasts? Research on the effectiveness of Delphi provides a clue. Rowe and Wright have shown that providing feedback on the rationales and arguments behind fellow panelists’ forecasts is the essential cause of improvements in forecasting accuracy over Delphi rounds. As for the experts’ accuracy, Wright concludes that Green’s game theorists had experience beyond formal game theory that might serve as a basis for predicting the outcomes of the conflict situations: their own, individual experiences of real-life conflicts and their resolutions. While the university students will of course have had similar experiences, they are likely to have had fewer, since as a group they are younger. This leads Wright to hypothesize that “only when individuals are enmeshed in role-play simulations will the relevance of this experience become obvious – since Green’s conflicts will, initially, have been seen as outside the domain of this experience at a superficial, face-content, level” (p. 387).
Wright mentions several directions for future research that are worth reiterating here. For example, does the partial role-playing of conflicts to near-resolution enable individual participants to predict the actual outcomes of both 1) the (continued) role-play and 2) the real situation that the role-play was designed to model? Another promising direction is to examine two related questions: 1) Are older, more experienced people better able to forecast actual outcomes after such partial role-playing? 2) Does experience with role-play simulation enhance an individual’s forecasting ability?
Wright, G. (2002). Game theory, game theorists, university students, role-playing and forecasting ability. International Journal of Forecasting, 18(3), 383–387.