By Sam Farnan
Researchers used prediction markets to test whether they could predict the accuracy of research evaluations. Specifically, they explored whether a prediction market model would produce results similar to those of the 2014 Research Excellence Framework (REF2014). REF2014 was a six-year process for evaluating research quality at higher education institutions in the United Kingdom, and it drew criticism for being lengthy, costly, and complex.
Researchers in the UK hypothesized that a prediction market could deliver the same results with far less bureaucracy than REF2014 imposed on academic institutions. As a sample, they examined 33 chemistry departments within the UK's higher education system. A total of 16 participants took part in the study, which ultimately concluded that, in this case, the prediction market made fewer errors overall and produced results similar to REF2014's for the selected chemistry departments. Errors remained, however, particularly where institutions sacrificed research quality and ranking to gain research income.
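The article does not describe the market mechanism the researchers used, but prediction markets of this kind are commonly run with Hanson's logarithmic market scoring rule (LMSR), in which an automated market maker quotes prices that can be read as the crowd's aggregate probability estimate. Below is a minimal, hypothetical sketch of that idea (the class, liquidity parameter, and trade sizes are illustrative, not taken from the study):

```python
import math

class LMSRMarket:
    """Logarithmic market scoring rule (LMSR) market maker.

    Prices over mutually exclusive outcomes always sum to 1, so they
    can be interpreted as the market's aggregate probability estimate.
    """

    def __init__(self, n_outcomes, b=10.0):
        self.b = b                   # liquidity parameter: higher = prices move less per trade
        self.q = [0.0] * n_outcomes  # shares sold so far for each outcome

    def _cost(self, q):
        # LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def prices(self):
        # Price of outcome i: exp(q_i / b) / sum_j exp(q_j / b)
        z = sum(math.exp(x / self.b) for x in self.q)
        return [math.exp(x / self.b) / z for x in self.q]

    def buy(self, outcome, shares):
        """Return the amount a trader pays to buy `shares` of `outcome`."""
        old_cost = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - old_cost

# Hypothetical example: three quality bands for one department.
market = LMSRMarket(3)
cost = market.buy(2, 20)  # a trader backs the top band
p = market.prices()       # prices shift toward the backed outcome
```

In a setup like this, traders who believe a department will score highly buy shares in that outcome, pushing its price up; the final prices are the market's forecast, which can then be compared against the actual REF2014 results to measure error.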
I feel the number of participants was far too small to suggest that prediction market models could replicate the imperfect, yet expansive, REF2014. Additionally, prediction markets may not capture the finer-grained aspects of a large evaluation like this one. Although the study shows the potential of prediction markets in this setting and had a solid overall design, I believe much more research is needed before claiming that prediction markets can reliably replicate the results of an evaluation as large as REF2014.