Friday, October 10, 2014

Who’s Good at Forecasts?

By: The Economist

The Economist examined prediction markets and forecasting tournaments in its special edition, The World in 2014. It revisited Philip Tetlock’s forecasting tournament, begun in the 1980s, which involved 284 economists, political scientists, intelligence analysts, and journalists. The research collected around 28,000 predictions and concluded that “the average expert did only slightly better than random guessing.” Forecasts had to be expressed numerically, so an expert could not hide behind vague words such as “may” or “possible.” The results also showed that “experts with the most inflated views of their batting averages tended to attract the most media attention.”
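Because the forecasts were expressed numerically, they could be graded against outcomes. A standard scoring rule for probabilistic forecasts is the Brier score (mean squared error between the stated probability and the 0/1 outcome; lower is better). Here is a minimal illustrative sketch; the numbers are invented for illustration, not drawn from Tetlock’s data:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.

    Lower is better: 0.0 is a perfect forecaster, and always guessing
    50% yields 0.25 regardless of what actually happens.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecaster: probabilities on four questions.
expert = [0.9, 0.8, 0.2, 0.7]
# Random guessing: 50% on every question.
guesser = [0.5, 0.5, 0.5, 0.5]
# What actually happened (1 = event occurred, 0 = it did not).
outcomes = [1, 1, 0, 1]

print(brier_score(expert, outcomes))   # 0.045
print(brier_score(guesser, outcomes))  # 0.25
```

This is why vague words like “may” cannot be scored: only a numeric probability can be plugged into a rule like this.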

The Intelligence Advanced Research Projects Activity (IARPA) used Tetlock’s tournament as a pilot and sponsored a more ambitious one, the Good Judgment Project. The project has collected over one million forecasts from 5,000 forecasters on 250 questions, ranging from the euro-zone crisis to the Syrian civil war. From this research, IARPA has been able to discover which methods of training promote accuracy.

This research also explores the super-forecaster hypothesis. The top two percent of forecasters in the tournament’s first year did not regress to the mean, which suggests that more than luck was at play; in fact, they got better over time, and these super-forecasters were assigned to teams. These forecasters beat the “unweighted average (wisdom-of-overall-crowd) by 65%; beat the best algorithm for four competitor institutions by 35-60%; and beat two prediction markets by 20-35%.”
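The “unweighted average (wisdom-of-overall-crowd)” benchmark mentioned above is simply the plain mean of every forecaster’s probability on a question, with every forecaster weighted equally. A minimal sketch, with made-up numbers rather than Good Judgment Project data:

```python
def crowd_forecast(individual_forecasts):
    """Unweighted wisdom-of-the-crowd probability for a single question:
    the plain average of all individual probability estimates."""
    return sum(individual_forecasts) / len(individual_forecasts)

# Hypothetical probabilities from five forecasters on one question.
forecasts = [0.2, 0.6, 0.7, 0.5, 0.4]
print(crowd_forecast(forecasts))  # the crowd's consensus probability, 0.48
```

Beating this benchmark by 65% means the super-forecaster teams were substantially more accurate than this simple equal-weight consensus.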
To be a part of The Good Judgment Project, you can register here.

Although The Economist did a very good job explaining both forecasting tournaments, I found its analysis of the research lacking. I would have liked a more in-depth look at how the tournaments reached their conclusions, and at how super-forecasters are grouped into teams.


Who’s good at forecasts? (2013, November 18). The Economist.


  1. This comment has been removed by the author.

  2. Joy,

    This quote from the article reminds me of Rob Johnston's Center for the Study of Intelligence article on the paradox of expertise:

    "The top 2% of forecasters in Year 1 showed that there is more than luck at play. If it were just luck, the “supers” would regress to the mean: yesterday’s champs would be today’s chumps. But they actually got better. "

    Johnston says:
    "Since the 1930s, researchers have been testing the ability of experts to make forecasts. The performance of experts has been tested against actuarial tables to determine if they are better at making predictions than simple statistical models. Seventy years later, with more than two hundred experiments in different domains, it is clear that the answer is no."

    I noticed that this article falls short of describing the characteristics or problem-solving processes of the top 2% of forecasters.

    Do you think that the top 2% of forecasters share at least one common attribute that could explain why they are consistently great forecasters?

  3. Joy, could you explain what the super-forecaster hypothesis is?

  4. Ricardo, that is a good question. As stated in my critique, I wish the article had gone into more detail on the conclusions of the analysis. To answer your question, I researched “super forecasters.” Apparently they are distinguished by three characteristics: "(1) an intense curiosity about the workings of the political-economic world; (2) an intense curiosity about the workings of the human mind; (3) cognitive crunching power (“fluid intelligence” and a capacity for “timely self correction”)." You can see more at:

    John, the super-forecaster hypothesis states that some people are simply and consistently better forecasters than others.