Thursday, October 26, 2017

Bayesian Versus Orthodox Statistics: Which Side Are You On?

Summary and Critique by Michael Pouch

Summary:

This article takes note of how psychology and other disciplines benefit from a set of procedures for drawing inferences from data. Its author, Zoltan Dienes, asks whether we could be doing those procedures better, and he compares two approaches: orthodox statistics and the Bayesian approach. Throughout the article, Dienes works the comparison through a series of research scenarios. First, he presents how orthodox hypothesis testing differs from Bayesian inference. Second, he shows how Bayesian inference follows from the axioms of probability, which motivate the ‘likelihood principle’ of inference. He then explains how the orthodox answers to the scenarios violate the likelihood principle and the axioms of probability, and he frames the distinction between the Bayesian and orthodox approaches in terms of different notions of rationality. Lastly, he argues that the Bayesian approach enables the most rational inferences from the data.

As the author begins to explain the differences between the two approaches, he gives a quick overview: the orthodox view imagines an infinite series of repeated samples and applies sharp decision rules, while the Bayesian approach treats unknown quantities probabilistically and continually updates beliefs about the state of the world. In general, the orthodox view sees data as a repeatable random sample with long-run frequencies, where the underlying parameters remain constant across repetitions. The Bayesian approach, on the other hand, works from the realized sample, with the unknown parameters described probabilistically.
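To make this contrast concrete, here is a minimal sketch in Python (my own illustration, not an example from the paper) of a single unknown binomial proportion handled both ways, assuming hypothetical data and a uniform Beta(1, 1) prior on the Bayesian side:

    # Minimal sketch (my illustration, not the paper's): one unknown binomial
    # proportion handled both ways; the data and the Beta(1, 1) prior are assumptions.
    from scipy import stats

    successes, trials = 14, 20  # hypothetical data

    # Orthodox view: the proportion is a fixed unknown; report a point estimate
    # and a long-run frequency statement (two-sided test against 0.5).
    p_value = stats.binomtest(successes, trials, p=0.5).pvalue
    print(f"orthodox: estimate = {successes / trials:.2f}, p = {p_value:.3f}")

    # Bayesian view: the proportion gets a probability distribution that the
    # realized data update; the Beta(1 + k, 1 + n - k) posterior summarizes
    # what we now know and can be updated again as more data arrive.
    posterior = stats.beta(1 + successes, 1 + trials - successes)
    lo, hi = posterior.ppf([0.025, 0.975])
    print(f"bayesian: posterior mean = {posterior.mean():.2f}, "
          f"95% credible interval = ({lo:.2f}, {hi:.2f})")

The orthodox output is a point estimate plus a long-run error statement; the Bayesian output is a distribution over the parameter that keeps being updated as further data arrive.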

After giving this brief overview, the author works through three research scenarios to show how each approach squares with the axioms of probability. Running both approaches through the scenarios, he finds that it is the Bayesian approach that demands researchers draw appropriate conclusions from the whole body of relevant data, even when multiple testing is involved. He also identifies the orthodox approach as irrational, because different people with the same data and the same hypotheses can come to different conclusions depending on their testing intentions.
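One way to see this dependence on intentions is optional stopping, one of the scenarios the paper discusses. The simulation below is my own illustration rather than the article's: with the null hypothesis true, repeatedly peeking at the data and stopping at the first p < .05 pushes the orthodox false-positive rate well above the nominal 5%, so the conclusion depends on when the researcher had planned to stop, not just on the data.

    # My illustration (not from the article): under a true null effect, peeking
    # at accumulating data and stopping at the first p < .05 inflates the
    # orthodox false-positive rate well above the nominal 5%.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_simulations, max_n, alpha = 2000, 100, 0.05
    false_positives = 0

    for _ in range(n_simulations):
        data = rng.normal(loc=0.0, scale=1.0, size=max_n)  # null is true
        # Test after every 10 observations; stop at the first "significant" p.
        for n in range(10, max_n + 1, 10):
            if stats.ttest_1samp(data[:n], popmean=0.0).pvalue < alpha:
                false_positives += 1
                break

    print(f"false-positive rate with optional stopping: "
          f"{false_positives / n_simulations:.2f} (nominal {alpha})")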
              
The author next compares the rationality of the Bayesian and orthodox approaches. The relevant notion of rationality, he explains, is having sufficient justification for one’s beliefs. If the researcher can assign continuous numerical degrees of justification to beliefs, then requiring those degrees to be consistent leads to the likelihood principle of inference. Orthodox hypothesis testing violates the likelihood principle, which means the intuitions we train into ourselves with orthodox statistics are irrational by this key notion of rationality. The Bayesian approach, by contrast, connects theory to data in an appropriate way because it requires the researcher to specify the effect sizes a theory predicts. Bayes factors, but not orthodox statistics, can tell us when there is no evidence for a relevant effect and when there is evidence against there being a relevant effect.
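Below is a rough sketch of the kind of Bayes factor described here: the probability of the data under a theory that predicts a spread of effect sizes is compared with their probability under the null, B = P(data | H1) / P(data | H0). The summary statistics and the half-normal prior are my own illustrative assumptions, not values from the paper.

    # Rough sketch of a Bayes factor B = P(data | H1) / P(data | H0); the
    # summary statistics and the half-normal prior on effect sizes are my own
    # illustrative assumptions, not values from the paper.
    import numpy as np
    from scipy import integrate, stats

    sample_mean, standard_error = 1.2, 0.5   # hypothetical summary of the data
    prior_sd = 1.0                           # assumed scale of plausible effects

    def likelihood(delta):
        # Density of the observed mean given a true effect of size delta.
        return stats.norm.pdf(sample_mean, loc=delta, scale=standard_error)

    # H0 predicts an effect of exactly zero.
    marginal_h0 = likelihood(0.0)

    # H1 predicts effects drawn from a half-normal prior; average the
    # likelihood over that prior to get the probability of the data under H1.
    marginal_h1, _ = integrate.quad(
        lambda d: likelihood(d) * 2.0 * stats.norm.pdf(d, loc=0.0, scale=prior_sd),
        0.0, np.inf)

    print(f"B = {marginal_h1 / marginal_h0:.2f}")
    # Roughly: B > 3 counts as substantial evidence for H1, B < 1/3 as
    # substantial evidence for H0, and values in between mean the data are
    # insensitive and cannot distinguish the hypotheses.

Unlike a nonsignificant p value, this scale separates "the data are insensitive" (B near 1) from "there is evidence for the null" (B below 1/3).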
             
In conclusion, the author suggests that the case for the Bayesian approach is sufficiently compelling that researchers should understand the logical foundations of their statistics and make an informed choice between the approaches for their research questions.

Critique:

The argument that the author lays out is philosophical. He asks how researchers can draw better inferences by putting orthodox statistics up against the Bayesian approach. Where Bayesian analysis treats unknown quantities as random variables and the orthodox approach treats them as fixed, the author lays out a series of tests to show that the quantities behind a sampling model need not be treated as fixed but can be described probabilistically. In the end, the Bayesian reply is twofold. First, treating a parameter as a random variable with a prior distribution does not mean we believe the result itself is random; rather, the distribution expresses the state of our knowledge about it. Second, the Bayesian approach lets us make inferences while also learning from the data. Despite this, the author does not consider the problems that most Bayesian analyses face. One problem he does not mention is that the choice of prior distribution can be distorted by cognitive bias or by having little prior information. Informative priors can help produce a useful probabilistic result, but noninformative or poorly chosen priors undermine confidence not only in the prior itself but also in the result. In other words, people tend to believe results that support their preconceptions and disbelieve results that surprise them, and the prior is the place where that tendency can creep into the analysis.
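To make this prior-sensitivity worry concrete, the sketch below (again my own illustration, with arbitrary numbers) recomputes the simple Bayes factor from the earlier sketch under several prior widths: the same data can count as substantial evidence for the theory under a moderate prior yet come out as insensitive under a very narrow or a very vague one.

    # My illustration of the prior-sensitivity worry: the same data give
    # noticeably different Bayes factors depending on how wide a prior is
    # assumed for the predicted effects (all numbers here are arbitrary).
    import numpy as np
    from scipy import integrate, stats

    sample_mean, standard_error = 1.2, 0.5   # same hypothetical data as above

    def bayes_factor(prior_sd):
        # B = P(data | H1) / P(data | H0) with a half-normal H1 prior of the
        # given scale.
        likelihood = lambda d: stats.norm.pdf(sample_mean, loc=d, scale=standard_error)
        marginal_h1, _ = integrate.quad(
            lambda d: likelihood(d) * 2.0 * stats.norm.pdf(d, loc=0.0, scale=prior_sd),
            0.0, np.inf)
        return marginal_h1 / likelihood(0.0)

    for sd in (0.2, 1.0, 20.0):
        print(f"prior scale {sd:>4}: B = {bayes_factor(sd):.2f}")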

References:
Dienes, Z. (2011). Bayesian Versus Orthodox Statistics: Which Side Are You On? Perspectives on Psychological Science, 6(3), 274-290.

5 comments:

  1. I liked the information presented in this article. Was there a discussed method for the researcher to assign numerical continuous degrees of justification to beliefs? Just wondering how this is being quantified.

    Replies
    1. The author does not go into great detail about how he would assign those degrees; his main focus is on presenting common situations in which the two approaches come to different conclusions, so that you can see where your intuitions initially lie.

  2. I liked your critique of the research. Do you think the philosophical debate has merit and/or do you think statistics should include some semantic variables?

    Replies
    1. To answer your question, Claude: Bayesian analysis is like updating a probability according to each new piece of evidence in order to reach the most accurate estimate, whereas with orthodox statistics you can keep changing the variables however you like to get the results you want.

  3. This is a good article that demonstrates the fundamentals of both Bayesian statistics and frequentist statistics, as well as the basic ideas that separate them. I believe Bayesian statistics is a good way to handle more intelligence-based problems, since we often deal with problems that are not easily quantifiable and cannot be replicated with the ease that frequentist methods demand.
