By: Kurt Matzler, Christopher Grabher, Jürgen Huber, and Johann Füller
Source: http://eds.b.ebscohost.com/eds/pdfviewer/pdfviewer?sid=fc8cd7f5-3da1-460d-802f-8d40d58bd464%40sessionmgr111&vid=3&hid=103
Summary:
The introduction of this article reviews the history of predicting new product success and compares the pros and cons of traditionally used methods with those of prediction markets. It first describes how many existing methods are extremely error prone and why prediction markets are believed to be a very viable alternative, given the many cons of the traditional approaches as well as the many pros of prediction markets. Some of the main issues with current methods are that experts are difficult to identify, survey response rates are declining, consumer resentment is growing, and the costs associated with traditional market research are increasing. The pros of using prediction markets include the fact that they can be effective with small and non-representative pools of participants, can efficiently aggregate asymmetrically dispersed information, and have other benefits such as speed, adaptive interactivity, and task engagement.
According to this article, decades of research and studies have shown that the accuracy of prediction markets tends to be much higher than that of other research tools such as polls, questionnaires, and censuses. In addition to high accuracy, this method has a very wide range of applications and has been used to predict everything from election outcomes (beginning in 1988) to the outcomes of sports competitions, statistical weather forecasts, and many different business events. This article also states that, under the right circumstances, groups are often very intelligent (the "wisdom of the crowd" concept), which supports the claim that prediction markets have a high degree of accuracy.
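The "wisdom of the crowd" idea mentioned above can be illustrated with a toy simulation (my own sketch, not from the article): when many noisy but independent and unbiased guesses of a quantity are averaged, the crowd's estimate usually lands closer to the truth than the vast majority of individual guesses.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

TRUE_VALUE = 1000.0  # the quantity everyone is trying to estimate

# 500 independent guessers, each unbiased but noisy (std. dev. 200)
guesses = [random.gauss(TRUE_VALUE, 200.0) for _ in range(500)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)

# Count how many individuals beat the crowd's averaged estimate
better = sum(1 for g in guesses if abs(g - TRUE_VALUE) < crowd_error)
print(f"crowd error: {crowd_error:.1f}")
print(f"individuals beating the crowd: {better}/{len(guesses)}")
```

With independent errors, the crowd average's error shrinks roughly with the square root of the group size, which is the statistical intuition behind the "groups are often very intelligent" claim.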
This article also describes an experiment in which prediction markets were tested for accuracy in predicting the sales of new skis. Sixty-two participants used PIM Sports (which was set up like a Facebook application) to predict ski sales before the main 2010/2011 skiing season, and the results were then compared to actual sales. Each person was given $40,000 ($10,000 per market, with four markets created in total) in virtual money to use in order to buy and sell individual assets. The four markets in this scenario were race skis, technology products, the powder segment, and women's skis. In total, the application was open for 12 days. After the 12 days were over, this data was compared to actual sales and showed that prediction markets are mostly accurate. In this instance, race skis, technology products, and the powder segment had fairly high levels of accuracy, while the accuracy for women's skis was low in comparison. The article explained that this was probably due to the fact that liquidity (trade volume) is a key factor in the accuracy of prediction markets, and in this case there was much less liquidity in the women's skis market. Overall, I found this article to be a very interesting read.
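The accuracy comparison described above is, at bottom, a forecast-error calculation: the market's final prices are read as sales forecasts and scored against realized sales. A minimal sketch of such scoring in Python, using the error measures the comments below discuss (percentage error and mean absolute error). The unit-sales figures here are invented placeholders for illustration, not the study's actual data:

```python
def absolute_percentage_error(forecast, actual):
    """Absolute forecast error as a percentage of actual sales."""
    return abs(forecast - actual) / actual * 100.0

def mean_absolute_error(forecasts, actuals):
    """Mean absolute error (MAE) across a set of forecasts."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(forecasts)

# Hypothetical (market forecast, actual sales) pairs -- NOT the study's data.
segments = {
    "race skis":    (1050, 1000),
    "technology":   (2100, 2300),
    "powder":       (480, 500),
    "women's skis": (900, 600),   # thin market -> large error, as in the study
}

for name, (forecast, actual) in segments.items():
    err = absolute_percentage_error(forecast, actual)
    print(f"{name}: {err:.1f}% error")
```

In this framing, the study's observation about liquidity amounts to saying that segments with more trades produce final prices whose percentage error is smaller.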
Critique:
The main issue that I found with this study was that the PIM Sports application had a total of 1,345 users, yet the experiment was based on only the 62 active users. While there was technically a great amount of diversity in total (people from over 50 countries visited the site), that diversity could have been lacking in the sample of people who actually participated. Basically, even though the results matched those of many other studies, it is hard to say that they were accurate, because the sample used for the results was small and because diversity (which is very important for this method) could have been lacking due to the low active participation. Also, as with any experiment, there is always room for error, which could have an impact on the accuracy of the results.
The most interesting part of this study was that the online communities did not require any type of incentives, despite recommendations from past studies; Ho and Chen (2007) recommended approximately $500 and cash dividends. The authors also mentioned that many companies use employees or experts when employing prediction markets. Chen and Plott (2002) conducted a study comparing the accuracy of prediction markets with the accuracy of expert opinions, and in 6 of the 8 tests conducted, prediction markets were more accurate. This implies that the prediction market method is more cost effective than other techniques and yields more accurate results. The other implication is that the prediction market method opens new grounds for research within online communities.
I disagree with your assessment that the study was inaccurate, for two reasons. The first reason is that the authors measured the degree of forecast error in using prediction markets. Based upon expert opinion, the acceptable degree of error is between 5% and 15%. However, the authors stated that errors can range between 50% and 75%, but did not mention specific reasons for such high percentages. Nonetheless, the degree of error in this study was 2.74% in the powder segment, 4.64% in women's skis, 9.09% in product technology, and a 3.99% mean absolute error (MAE) in the product technology section. The degree of error in this study was similar to prior studies the authors conducted.
The second reason is that prediction market accuracy relies on a high trading volume. Although you mentioned that only the activity of 62 participants was measured, Table 2 shows that the trading volume between the participants was as high as 2,517 on some of the products. The products with the highest trading volume had the smallest margin of error. The authors also mentioned that lower trading volume typically results in less accurate prediction results. The results of their study, as shown in Table 2, confirm the validity of their study and the accuracy of the prediction market methodology.
In general, I did think that this was a good study and I didn't find too many issues with it at all. Really the only reason I was questioning the sample size was because of the issue of diversity also being an important factor. I can't say for sure that the results were inaccurate since the volume of trading is probably the most important factor. Basically what I'm saying is that if the results were not accurate, then I would infer that it is due to lack of diversity among the participants.
I can infer from the article that the number of participants matters. Do the authors offer a minimum number of participants in order to get reliable results from the prediction market?
Yes, the number of participants matters. However, in this study the number of participants had a negligible effect on the success rate of the technique due to the high volume of trading among the 62 participants.
Also, does anyone have another link to the study? The current link doesn't seem to work for me.
Osman, I did not read anything about a specific minimum number of participants that would yield reliable results. Based on the article, I do believe that a small sample could yield accurate results though if there is diversity among them along with a high volume of trading occurring.
The study states that trade volume may be a key factor in prediction accuracy, yet you criticize them for having a small sample (62). Couldn't some topics require more expertise, so that relatively few people bid on them, while there is still enough aggregated quality information? So, does the size of the sample matter, or the quality of the information aggregated?
Based on the article, I do not believe that this method in particular requires more expertise. I do, however, believe that the sample size only matters if there is a very low amount of diversity. If the sample size is small with great diversity, then I would be more inclined to find the results accurate. Unfortunately, we do not know that the sample was diverse in this study, which is my only issue. The results seem to match other studies and be accurate, but that could happen by chance. I guess I would just like to know how diverse those 62 people actually were.
I agreed with Ertugrul as I was reading all of these articles, wondering about the kinds of people participating in these prediction markets. Obviously it varies market to market and study to study, but it does raise the question of the credibility of these participants.
The article mentions that one of the pros of using prediction markets is that they can be effective with small and non-representative pools of participants. On the other hand, having a large and diverse pool of participants is mentioned as being important to this particular method. Are there specific circumstances that determine when a small or a large polling pool should be utilized?