In this paper, Wright analyzes some of Green’s work and conducts a small meta-analysis of other authors who have examined both Green’s work and game theory in general. While Green demonstrated in his study that game theorists’ predictions were more accurate than the unaided judgement of university students, other authors question the validity of his study. Bolton, for example, challenges Green’s method of testing game theory and argues that role-playing is dependent on game theory – in that knowledge of game theory is necessary to design the role-plays in the first place.
Wright argues that, based on results from several authors’ studies, game theory is limited in its ability to analyze and forecast the outcomes of one-off, real-world conflict situations such as the ones Green studied. He notes that forecasting generalized market behavior and the outcomes of context-free laboratory gaming seems to be the best that game-theory-based forecasting can currently achieve. Green implies in his study that the expert judgement of game theorists should produce more accurate forecasts than non-expert judgement, and this assumption led others to examine whether there is strong underpinning evidence to support it.
Bolger and Wright examined 20 extant studies of expertise in the social and decision science literature and documented that 6 of the 20 showed ‘good’ performance by experts, in the domains of weather forecasting, the number of tricks to be made in bridge playing, odds forecasting in horse racing, interest rate prediction, and research and development outcome prediction. Of the other 14 studies, 9 showed poor expert performance while the remaining 5 showed equivocal performance. From the patterns of these studies, Bolger and Wright concluded that expert performance is largely a function of the interaction between two dimensions: ecological validity, the degree to which experts are required to make judgements inside or outside the domain of their professional experience, and learnability, the degree to which good judgement can be learned in the task domain.
So how did role-playing enable the non-experts in Green’s study to produce better forecasts? Research on the effectiveness of Delphi provides a clue. Rowe and Wright have shown that the provision of feedback of the rationales/arguments behind fellow panelists’ forecasts is the essential cause of improvements in
forecasting accuracy over Delphi rounds. As for the experts’ accuracy, Wright concludes that Green’s game theorists had experience that might serve as a basis for predicting the outcomes of the conflict situations: their own, individual experiences of real-life conflicts and their resolutions. While the university students will, of course, have had similar experiences, they are likely to have had fewer, since as a group they are younger. This leads Wright to hypothesize that it is “only when individuals are enmeshed in role-play simulations will the relevance of this experience become obvious – since Green’s conflicts will, initially, have been seen as outside the domain of this experience at a superficial, face-content, level” (p. 387).
Critique
Wright raises several avenues for future research that are worth reiterating in this section. For example, does the partial role-playing of conflicts to near-resolution enable individual participants to predict the actual outcomes of both 1) the (continued) role-play and 2) the real situation that the role-play was designed to model? Another excellent avenue for future research lies in two related questions: 1) Are older, more experienced people better able to forecast actual outcomes after such partial role-playing? and 2) Does the simulation of role-playing enhance an individual’s forecasting ability?
Source
Wright, G. (2002). Game theory, game theorists, university students, role-playing and forecasting ability. International Journal of Forecasting, 18(3), 383–387.
Hank, nice review. This was an astute and significant article to take a look at. I enjoyed how it examined and compared the effectiveness of role-playing and game theory. Also, you ask some good questions at the end in the critique.
When the article drew the distinction between younger versus older respondents, I thought back to Hackman (2011), as I am relatively certain his views would have clashed with it, insofar as he held that individuals working on a problem, regardless of their professional age, can produce significant results. I think it is interesting that Wright (2002) draws such a distinction.
Your thoughts?
I agree with your point that Hackman felt individuals, regardless of age, would have the ability to perform well when placed on a well-composed team. I think that the age distinction Wright is making with respect to forecasting accuracy is intriguing and, in a sense, merely speculative. It is an interesting point, however, and probably worthy of further study.
Do you think that the personal experiences an individual has had affect their forecasting ability? I would guess that they might in the right circumstances, but I am hesitant to suggest they would across the board. Additionally, I think the specific experiences an individual has had might have more of an effect on their forecasting accuracy than their age, since even relatively young individuals might have had significant experiences that shape the way they think about the world.
Hank, I found this interesting because I read a few articles that also compared role-playing to game theory, but I could not find an explanation of what made them different or whether one was better in terms of forecasting. It seems like the two are frequently compared but aren't fully explained.
Hank, this article seems to bring forward several questions that are not normally asked about role-playing. I have a question, more for clarification: what level of expertise did the participants have regarding those they were role-playing as?