Saturday, October 28, 2017


Nuffield College, Oxford OX1 1NF, UK


The paper discusses the distinction between the frequentist and Bayesian approaches to statistical inference, placing them in historical context and tracing how the approaches have evolved over time. It then presents a critique of the frequentist approach, contrasts two radically different Bayesian views, and discusses the difficulties with the notion of a flat or uninformative prior distribution.

Critique of the Frequentist Approach

The frequentist approach has two main merits:

  1. It provides a systematic approach to a wide range of statistical methods and requires no specification beyond a probabilistic representation of the data-generating process.
  2. It provides a way of assessing methods that may have been suggested on relatively informal grounds.

The problem in principle with frequentist formulations is that of ensuring that the long run used in calibration is relevant to the analysis of the specific data being analyzed. Proposed solutions are applicable in only certain limited situations.
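The calibration idea above can be made concrete with a small simulation: a frequentist 95% confidence interval is justified by its long-run coverage over repeated samples. The sketch below is purely illustrative; the data-generating process (a normal distribution with mean 5 and standard deviation 2) and the sample size are assumptions chosen for the example, not anything taken from the paper.

```python
import random
import statistics

# Illustrative assumption: data come from a Normal(5, 2) process.
random.seed(0)
true_mean = 5.0
trials = 2000
n = 30
covered = 0

for _ in range(trials):
    sample = [random.gauss(true_mean, 2.0) for _ in range(n)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    # 1.96 is the large-sample normal approximation to the t quantile.
    if m - 1.96 * se <= true_mean <= m + 1.96 * se:
        covered += 1

coverage = covered / trials  # long-run proportion of intervals covering the truth
```

The long-run coverage comes out near the nominal 95%, which is exactly the calibration property; the paper's point is that this guarantee concerns the ensemble of repetitions, not the particular dataset in hand.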
Critique of Bayesian Methods

With Bayesian methods, we have to extend the notion of probability so that we can specify a prior distribution for the unknown constant. There are two radically different ways of doing this.
  1. Personalistic theory: This approach has the ambitious aim of introducing into the quantitative discussion uncertain information of a more general kind than is represented by statistical data in the narrow sense. The emphasis is on achieving self-consistency and coherency in probability assessments.
  2. Probability as rational degree of belief: This approach invokes a notion of rational degree of belief and attempts to assess the evidence in a specific dataset by adopting a prior that expresses indifference, or ignorance, so that attention is focused on the data.
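To make the second view concrete, the standard textbook example is a conjugate Beta-Binomial update under a uniform "indifference" prior: a Beta(1, 1) prior on a success probability, combined with k successes in n trials, yields a Beta(1 + k, 1 + n - k) posterior. The sketch below is a minimal illustration of that conjugate update; the specific counts (7 successes in 10 trials) are invented for the example.

```python
def posterior_params(k, n, a=1.0, b=1.0):
    """Beta posterior parameters after k successes in n trials,
    starting from a Beta(a, b) prior (a = b = 1 is the flat prior)."""
    return a + k, b + (n - k)

# Illustrative data: 7 successes in 10 trials under the flat prior.
a_post, b_post = posterior_params(k=7, n=10)
posterior_mean = a_post / (a_post + b_post)  # mean of a Beta(a, b) is a/(a+b)
```

The appeal of this construction is that the prior appears to contribute nothing beyond "one pseudo-success and one pseudo-failure," letting the data dominate; the paper's discussion of flat priors questions whether this indifference is as innocent as it looks.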
Some of the difficulties with Bayesian statistics are:

• Specifying the prior weights is often complicated
• The nuisance parameters must be arranged in order of importance, even though none of them is of intrinsic interest
• If the parameter of interest changes, the whole prior structure may change
• If the sampling rule or design changes, the prior will in general change
• It is emphasized that the prior weights are not to be thought of as prior probabilities, which raises a question mark over the interpretation of the posterior
• Many of the formal simplifications arising from all calculations being probabilistic are lost.
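One concrete difficulty with flat priors, alluded to in the paper's discussion of uninformative priors, is that flatness is not invariant under reparameterization: a uniform prior on a probability p induces a decidedly non-uniform prior on the log-odds phi = log(p / (1 - p)). By the change-of-variables formula the induced density is e^phi / (1 + e^phi)^2. The sketch below simply evaluates that induced density; the function name is my own.

```python
import math

def logodds_density(phi):
    """Density on the log-odds scale induced by a Uniform(0, 1) prior on p,
    via the change of variables phi = log(p / (1 - p))."""
    e = math.exp(phi)
    return e / (1.0 + e) ** 2

# The induced density peaks at 0.25 when phi = 0 (i.e. p = 1/2) and
# decays symmetrically in the tails, so "indifference" about p is
# informative about the log-odds.
```

This is the sense in which a flat prior silently encodes a choice of scale: declaring ignorance on one parameterization expresses a definite opinion on another.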


The article describes various situations in which either approach may fall short, depending on the type and complexity of the data. Neither approach can be treated as one-size-fits-all across different kinds of datasets; the statistician bears the burden of identifying the approach best suited to the problem and most likely to yield appropriate results. As more extensive studies are conducted, these approaches will continue to evolve, and new ones will be developed.

1 comment:

  1. Praveen:

    I like the fact that the article lays out the pros and cons of both methods. Fundamentally, the two approaches differ in how they treat parameters: one regards them as fixed unknown constants, while the other assigns them a distribution that can be updated as new data arrive.