Sunday, August 26, 2018

Critical Epistemology for Analysis of Competing Hypotheses by Prof. Nicholaos Jones, Ph.D.


Summary:

In his 2018 article in the journal Intelligence and National Security, Nicholaos Jones, a Professor of Philosophy at the University of Alabama in Huntsville, offers an epistemological critique of Analysis of Competing Hypotheses (ACH).

Jones begins by laying out his process for evaluating hypothesis-testing methodologies. He asserts that a methodology is “good” or “better” based on how well it meets the reasoning goals of the analytical process, and he identifies two initial reasoning goals: reliability and discrimination. Jones defines reliability in terms of how a body of evidence bears on competing hypotheses: a reliable method ranks a hypothesis that is closer to the truth above its alternative(s). Discrimination, in this context, means being able to partition hypotheses into ranks at all. Jones states that the reasoning goal of discrimination is easier to meet than reliability; discrimination is necessary for reliability, but a method can discriminate without being even approximately reliable.

Before identifying the alternative/competing methodologies to ACH (in this case Falsificationism, or strict falsification; Bayesianism; and Explanationism, or inference to the best explanation), Jones identifies two additional reasoning goals: tractability and objectivity. A methodology has tractability if it is efficient and elegant (i.e., simple) to use. Objectivity means the ability to minimize the effects of cognitive bias on reasoning by reducing reliance on subjective evidence. In assessing the alternative methodologies, Jones notes that determining a method's reliability is difficult and sets that goal aside. Each of the three alternatives is strong on two of the remaining reasoning goals and weak on one.

Jones believes that ACH does about as well as the alternative methodologies on the three reasoning goals he considers easier to assess. He takes the core of ACH to be the third through fifth steps as laid out by Heuer, and he reads them through either a strict or a colloquial interpretation of Heuer's third step, which concerns the consistency of evidence with each hypothesis. The strict interpretation requires the absence of any logical contradiction between evidence and hypothesis, whereas the colloquial interpretation requires only that the evidence be “at least weakly or moderately plausible.” Under both interpretations, Jones regards ACH as stronger than the alternative methodologies.
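
To make the mechanics concrete, here is a minimal sketch in Python of the core ACH operation described above. The hypotheses, evidence labels, and consistency judgments are invented for illustration, and the scoring follows Heuer's general idea of counting inconsistencies rather than anything taken from Jones' article.

```python
# A minimal sketch of Heuer's steps 3-5 as described above: fill a
# hypotheses-by-evidence matrix with consistency judgments, then rank
# hypotheses by how much evidence is judged inconsistent with them
# (fewer inconsistencies = stronger). Names and judgments are hypothetical.

# "C" = consistent, "I" = inconsistent, "N" = neutral / not diagnostic
matrix = {
    "H1: state-sponsored actor": {"E1": "C", "E2": "C", "E3": "I"},
    "H2: criminal group":        {"E1": "C", "E2": "I", "E3": "I"},
    "H3: insider":               {"E1": "I", "E2": "C", "E3": "N"},
}

def inconsistency_score(judgments):
    """Count the evidence items judged inconsistent with a hypothesis."""
    return sum(1 for verdict in judgments.values() if verdict == "I")

for hypothesis in sorted(matrix, key=lambda h: inconsistency_score(matrix[h])):
    print(inconsistency_score(matrix[hypothesis]), hypothesis)
```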

Jones then complicates the analysis with a final reasoning goal: stability. To define stability, he breaks hypothesis testing down into three parts: 1) inputs; 2) operation; and 3) rankings. Stability here means that every execution of a methodology's operation on the same evidence, or inputs, yields the same rankings. Jones suggests that once stability is added as a goal, ACH proves unstable. He lists three scenarios to make his critique: “first, when multiple competing hypotheses are consistent with all available evidence; second, when exactly one hypothesis is consistent with all available evidence; third, when none of the competing hypotheses are consistent with all available evidence.” The first case presents a problem of abundant fit, which Jones believes can be remedied with more evidence, turning it into the second or third case. The second case presents a problem of redundancy: if all evidence fits a single hypothesis, then revising the competing hypotheses, deleting or simplifying evidence, and further refining the process will yield the identical result, adding complexity to the methodology and making it less tractable.

The third case carries Jones' argument that ACH is unstable under both the strict and the colloquial interpretation of evidence consistency, albeit for different reasons. Under the strict interpretation, ACH gives no way of determining which of the inconsistent hypotheses is more likely. Under the colloquial interpretation, Jones argues that when a generalized piece of evidence is separated into its constituent pieces, arbitrary factors influence the outcome. So, even in a case where the leading hypothesis does not change, the individuation of evidence can affect which of the other two hypotheses is more or less likely, making ACH unstable.
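
The instability Jones describes is easiest to see with a toy example. The following sketch is my own construction, not an example from the article: splitting one generalized evidence item into three constituent items leaves the leading hypothesis untouched but reverses the order of the other two under simple inconsistency counting.

```python
# A toy illustration (my own construction, not Jones' example) of the
# individuation problem: splitting a "general" evidence item into
# constituent items changes the relative ranking of the runner-up
# hypotheses under simple inconsistency counting.

def rank(matrix):
    """Order hypotheses by how many evidence items are judged inconsistent."""
    def score(h):
        return sum(1 for verdict in matrix[h].values() if verdict == "I")
    return sorted(matrix, key=score)

# Generalized formulation: one summary item E plus a second item F.
general = {
    "H1": {"E": "C", "F": "C"},  # consistent with everything
    "H2": {"E": "I", "F": "C"},  # one inconsistency
    "H3": {"E": "C", "F": "I"},  # one inconsistency, so H2 and H3 are tied
}

# The same information, but E is individuated into E1, E2, E3.
individuated = {
    "H1": {"E1": "C", "E2": "C", "E3": "C", "F": "C"},
    "H2": {"E1": "I", "E2": "I", "E3": "I", "F": "C"},  # now three inconsistencies
    "H3": {"E1": "C", "E2": "C", "E3": "C", "F": "I"},  # still one
}

print(rank(general))       # ['H1', 'H2', 'H3']  (H2/H3 tie broken by order)
print(rank(individuated))  # ['H1', 'H3', 'H2']  (H3 now outranks H2)
```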

Jones then suggests that the stability critique might be avoided either by favoring generality of evidence or by increasing the nuance or subtlety of evidence diagnosticity. He argues that the generality rule fails because 1) it is too strong; 2) it is ad hoc and introduces bias; and 3) it is unnecessary. Jones then considers whether weighing the diagnosticity of the evidence, as suggested by Heuer, might solve the generality problem, but he concludes that weighting evidence by diagnosticity further complicates ACH: the way individualized evidence is selected remains arbitrary and therefore does not lead to stability.
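
A similar toy sketch, again with invented judgments and weights rather than anything from the article, shows why weighting by diagnosticity does not by itself restore stability: two defensible weighting schemes over the same consistency judgments produce different runner-up orderings.

```python
# A sketch of the arbitrariness worry: the same consistency judgments,
# scored under two different (and individually defensible) diagnosticity
# weightings, yield different runner-up orderings. All weights are invented.

judgments = {
    "H1": {"E1": "C", "E2": "C", "E3": "C"},
    "H2": {"E1": "I", "E2": "C", "E3": "C"},
    "H3": {"E1": "C", "E2": "I", "E3": "I"},
}

def weighted_inconsistency(hypothesis, weights):
    """Sum the diagnosticity weights of evidence inconsistent with the hypothesis."""
    return sum(weights[e] for e, verdict in judgments[hypothesis].items() if verdict == "I")

scheme_a = {"E1": 3.0, "E2": 1.0, "E3": 1.0}  # analyst A treats E1 as highly diagnostic
scheme_b = {"E1": 1.0, "E2": 1.0, "E3": 1.0}  # analyst B weights all items equally

for name, weights in (("scheme A", scheme_a), ("scheme B", scheme_b)):
    ranking = sorted(judgments, key=lambda h: weighted_inconsistency(h, weights))
    print(name, ranking)  # scheme A: ['H1', 'H3', 'H2']   scheme B: ['H1', 'H2', 'H3']
```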

Jones believes that ACH is superior to other discriminating methodologies because it uses a procedure to “aggregate consistency judgements.” Its shortcoming, however, lies in the need to summarize evidence rather than work with the specific individual items within it, which Jones sees as a structural flaw in ACH. In his view, there is no adequate solution to what he terms the Generality Problem. Jones ultimately suggests that the ACH methodology matters less than the way the analyst uses the method and the analyst's “luck or intuition.” While an analyst can get effective results from ACH, Jones' key criticism is the lack of universal tactics for individuating and ranking evidence across analysts. He suggests that further research on ACH focus on techniques for individuating evidence, which would enhance the transparency of the ACH operation and help eliminate arbitrary or subjective bias in analytic products.

Criticism:

My primary disagreement with Jones' approach to ACH is that he tackles the methodology purely as an academic rather than as a practitioner. As analysts, we are taught from the get-go that the method has logical shortcomings. The application of ACH to an analytic problem is intended as a guide for examining your hypotheses and evidence. Jones' critique that the process is redundant is correct, but I believe he misses the primary purpose of ACH. The methodology provides a structured approach to evaluating evidence that may be self-evident, but it also prompts the analyst to ask essential questions of the process: are the hypotheses too broad or too narrow? Is there more evidence to be found? Is the evidence redundant? Etc. In Chapter 8 of The Psychology of Intelligence Analysis, Heuer starts by stating,

“Analysis of competing hypotheses…is a tool to aid judgment on important issues requiring careful weighing of alternative explanations or conclusions. It helps an analyst overcome, or at least minimize, some of the cognitive limitations that make prescient intelligence analysis so difficult to achieve… Because of its thoroughness, it is particularly appropriate for controversial issues when analysts want to leave an audit trail to show what they considered and how they arrived at their judgment.”

In the excerpt above, Heuer clearly writes that ACH is a tool for analysts to question their process and show the steps in their work. Again, by approaching ACH purely from an academic or logical standpoint, I believe Jones overplays pitfalls that Heuer highlights up front.

Returning to the redundancy issue that Jones highlights, I believe he again misses the point. The grand simplicity of ACH is that the process is easily repeated to show all possible alternatives. While that repetition adds complexity in Jones' view, it is, as Heuer states, the audit trail by which the analyst shows how he or she reached the final analytic conclusion.

Overall, I agree with Jones' conclusion that a procedure or technique for objectively individuating evidence would enhance ACH's reliability and improve analytic products.


Link: https://www.tandfonline.com/doi/abs/10.1080/02684527.2017.1395948

4 comments:

  1. Based on your summary and critique, I would agree that not approaching the evaluation of ACH from a practitioner's perspective is a major flaw of his assessment. Did Jones address this as a shortcoming in his research?

    No. Jones chose to examine ACH purely from an epistemological perspective, that is, by looking at the logic and functionality of ACH's basic method and steps. I believe his critique has some valid points, which I highlighted, but he does not seem to reflect on Heuer's intentions for ACH. Heuer did not intend to create a "one-stop shop" for testing analytic theories. He chose it as a simple and repeatable method, the very quality Jones points out as a structural flaw.

  2. Hi Harry, interesting critique of Jones' arguments regarding the effectiveness of the ACH methodology. In reading several of these blog posts on ACH, it appears that a common issue is the lack of a standard way to weight individual pieces of evidence. Jones, in particular, seems to suggest that evidence diagnosticity can resolve the issue of stability. At the same time, however, Jones also states that as long as the diagnosticity judgment is arbitrary, it does not resolve the stability issue. Does Jones mention or consider any alternative means of weighing evidence, whether through assigning numerical values for strength of consistency, ranking of evidence, or otherwise? Or does Jones simply write off weighing evidence as an arbitrary task without considering possible alternative measures?

  3. Tom,

    Jones does not suggest other methods for weighing evidence. As the title indicates, this is purely a critique; he does not propose alternative ways of weighing or assessing the diagnosticity of evidence that would add to the efficacy of the method.
