Tim van Gelder, 31 December 2007
Summary and Critique by Jillian J
Van Gelder begins by describing the ACH method, whereby an analyst can "determine which of a range of hypotheses is most likely to be true, given the available evidence," and identifies the hypothesis-testing structure and the external-representation aid as two of the method's strengths. He then lodges five complaints about the method: too many judgements, no e is an island, the flat structure of hypotheses, subordinate deliberation, and decontextualisation and discombobulation.
Van Gelder writes that analysts must make a judgement of consistency for each piece of evidence they enter into the matrix, and that the number of judgements can quickly become cumbersome. On top of that workload, the relationship between a given piece of evidence (e) and a given hypothesis (h) may be irrelevant or inconclusive.
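The workload point can be made concrete with a quick back-of-the-envelope sketch. The labels below are hypothetical, and this is not part of any ACH tool; it simply counts the pairwise judgements the matrix demands:

```python
# A minimal sketch of the judgement workload in an ACH matrix.
# Hypothetical hypothesis/evidence labels, not real case data.

hypotheses = ["h1", "h2", "h3", "h4", "h5"]
evidence = [f"e{i}" for i in range(1, 21)]  # 20 items of evidence

# Every (evidence, hypothesis) pair demands its own consistency judgement,
# so the workload grows multiplicatively, not additively.
judgements_required = len(evidence) * len(hypotheses)
print(judgements_required)  # 20 x 5 = 100 distinct judgements
```

Adding one more hypothesis to this matrix adds twenty new judgements at a stroke, which is why the burden escalates so quickly.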
Next he cites the matrix structure as a problem because it makes ACH treat "an item of evidence as consistent or inconsistent on its own with each of the hypotheses." He writes that while the analyst may deem e consistent with h, this judgement is only valid in the context of other relevant information (or auxiliary hypotheses). If that other information said the opposite, then that same e would be inconsistent with h; in essence, evidence works within a multi-premise structure.
He acknowledges the short-term utility of organising h's and e's, but writes that further along in the process the analyst may find information that challenges an initial assumption, such that e1 is inconsistent with h only until it is combined with additional information (a). At that point the analyst has a problem: ACH doesn't allow for that type of nuance.
The flat structure of hypotheses presents an issue because hypotheses can be, and often are, complex. Van Gelder posits that ACH doesn't efficiently address the multiple facets of complex hypotheses.
His fourth qualm is that ACH doesn't have a way of weighting the salience of a given e. ACH allows the analyst to judge the magnitude of consistency (very consistent, consistent, neutral, not applicable, inconsistent, or very inconsistent), but doesn't let the analyst indicate how seriously she/he takes the e itself.
The final issue is that while ACH tries to strip away excess detail, the result is an e without context. This leaves the analyst uncertain of the relationship between e and h, which leads to a muddied analysis.
Critique:
I would add that ACH can also perpetuate cognitive biases. When I search for evidence, sometimes the result is a matrix heavy with consistencies or with inconsistencies. Then I might actively seek out disconfirming evidence to bring the inconsistency-to-consistency ratio a little closer. While it's useful to search for disconfirming evidence, I have to stop somewhere, and I run the risk of choosing a stopping point that fits my bias.
I also think there's a quick fix for van Gelder's fourth issue about weighting salience. ACH allows the analyst to control the order in which the evidence appears in the matrix, so it would be easy for the analyst to rank the evidence according to importance.
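Going one step further than ordering, the same idea can be sketched as an explicit salience weight per piece of evidence. The ratings, weights, and scoring rule below are hypothetical illustrations, not features of the standard ACH tool:

```python
# Sketch of the proposed fix: attach an analyst-assigned salience weight
# to each piece of evidence and score hypotheses by weighted
# inconsistency. All values here are hypothetical.

# Consistency ratings: -2 very inconsistent ... +2 very consistent.
ratings = {
    "h1": {"e1": -2, "e2": 1, "e3": 0},
    "h2": {"e1": 1, "e2": -1, "e3": 2},
}
salience = {"e1": 3.0, "e2": 1.0, "e3": 0.5}  # how seriously each e is taken

def weighted_inconsistency(h):
    # Sum the salience of every piece of evidence rated inconsistent
    # with h; the hypothesis with the lowest total survives best.
    return sum(salience[e] for e, rating in ratings[h].items() if rating < 0)

scores = {h: weighted_inconsistency(h) for h in ratings}
print(scores)  # h1 is penalised heavily by the highly salient e1
```

The design choice here mirrors the inconsistency-focused spirit of ACH: only disconfirming judgements accumulate penalty, but a trivial e no longer counts as much as a critical one.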
I found the flat-structure critique only moderately valid. ACH allows the analyst to create virtually unlimited matrices. It may be tedious, but it is possible to break down a complex hypothesis into multiple matrices and apply evidence to each facet.
Overall, I agree with van Gelder: ACH has problems and its utility is limited. The issues he identified resonated with the frustrations I've had while using the method. But I maintain that challenging our analyses is as important as producing them in the first place. Therefore, even with its flaws, analysts should apply the principles of ACH to their analyses, if not the ACH method itself.
In your summary you note that the author states that analysts must make an individual determination of whether the evidence is "consistent" or "inconsistent" with the hypothesis, and that doing so can become burdensome. Does the author address evidence volume at all? Can there be too much or too little evidence to effectively use ACH?
Van Gelder does say, "...with 20 items of evidence and 5 hypotheses, you'd have to make 100 distinct judgements, each taking some modicum of conscious mental effort. Ugh!" but beyond that, he doesn't directly address evidence volume. In my critique I briefly discuss how entering pieces of evidence is a tricky task: the matrix will reflect, and only reflect, the (in)consistencies you find, and is therefore subject to cognitive bias when you decide to stop entering evidence. In theory, for your hypotheses to compete you need at least one piece of evidence. But if you have 20, 30, even 50 pieces of evidence or more, you certainly could enter all of it. The matrix will still only give you a result of one hypothesis appearing more likely (or as likely, in the case of a tie) than the other(s). I assert that in practice, yes, there can be too much or too little information to effectively use ACH. Finding the appropriate amount is a task for the analyst and is virtually certain to vary depending on the requirement.
Jillian - although an appropriate amount of evidence is discretionary, I do think that there is a limit to the amount of evidence in an ACH. This is where the idea of "grouping" can become beneficial. When you get to 20, 30, or even 50 pieces of evidence in a matrix, it's almost certain that many pieces of evidence overlap significantly. Grouping these pieces into a single idea, and possibly giving weight to the idea, may prove more effective.
Van Gelder addresses inefficiencies that a user encounters while exercising ACH. He has valid points about cognitive biases, but when done correctly, the ACH model is valuable. Something I am struggling with for achieving the best result is: what is the correct way, and who determines it? As you mention in your critique, isn't that in the hands of the analyst? Yes, the structure of ACH may seem flat, but isn't that the purpose of ACH, to give a straightforward answer and determine the validity of the hypothesis? These are rhetorical questions. This post had me going back and forth in my opinion on ACH! I enjoyed reading your critique!
Hi Jillian, thanks for your insightful critique of ACH! Van Gelder makes a great point that a limitation of the ACH methodology is the risk of evidence being taken out of context. However, part of the value of ACH is the transparency it provides in justifying estimates and decisions. In the imperfect world we live in, analysts are under time constraints to make estimates with limited (and out-of-context) information. Naturally, if new evidence emerges that challenges an analyst's previous estimate and provides greater context, then the analyst will simply rebuild the matrix in light of the new evidence. So although this is a limitation to the accuracy of the methodology, in practice (and in the real world) I find this an acceptable limitation when employing ACH for timely decisions.