## Saturday, May 2, 2009

### Bayes' Theorem and Intelligence

Net Wars

Summary:
According to the author, "Beliefs are based on probabilistic information. Bayes' Theorem says that our initial beliefs are updated to posterior beliefs after observing new conditions." As an analytic method, Bayesian analysis provides a formula that allows the analyst to update original assertions as new evidence is discovered and to assign likelihoods to events; the more we observe, the better we can predict the likelihood of a certain event. The formula used to update the analyst's initial beliefs to posterior beliefs is: p(C|O) = p(O|C)p(C) / [p(O|C)p(C) + p(O|¬C)p(¬C)]. According to Bayes, initial beliefs have a high margin of error; this is alleviated by incorporating new evidence through the formula, which "produces interesting results because it accounts for uncertainties created by False Positives and False Negatives."

The author provides the following example of using Bayesian analysis to update your beliefs:

"There is a case of this occurring. Europeans believed that swans were always White and there could be no Black Swans. They updated their probability of a Swan being White to 99% based on their limited experiences. As they explored the world, they found Black Swans in Australia. This reduced the probability of a swan being white and increased the probability of a swan being black. This process of inductive reasoning can be explained via Bayesian probability."

The author provides the following example of using Bayesian analysis as an intelligence methodology:

"There are 10,000 civilians. 1% of whom are insurgents pretending to be civilians. Police can investigate individuals and determine if they are an insurgent or civilian with 95% certainty.

Prior probability splits the population: 0.01 × 10,000 and 0.99 × 10,000. So
Group 1: 100 insurgents
Group 2: 9,900 Civilians

The Police investigate the entire population. This produces four groups:
Group 1: Insurgents - Positive test (0.95)
Group 2: Insurgents - False Negative test (0.05)
Group 3: Civilians - False Positive test (0.05)
Group 4: Civilians - Negative test (0.95)

How certain are the police that the men they captured are actually insurgents? The answer is 16%.
(0.95 × 0.01) / [(0.95 × 0.01) + (0.05 × 0.99)] =
0.0095 / 0.0590 = 0.161"
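The author's worked example can be reproduced by counting the groups directly (a sketch of the same arithmetic, not code from the source):

```python
population = 10_000
p_insurgent = 0.01          # prior: 1% of the population are insurgents
accuracy = 0.95             # the investigation is correct 95% of the time

insurgents = p_insurgent * population            # Group 1 + Group 2: 100
civilians = (1 - p_insurgent) * population       # Group 3 + Group 4: 9,900

true_positives = accuracy * insurgents           # 95 insurgents correctly flagged
false_positives = (1 - accuracy) * civilians     # 495 civilians wrongly flagged

# P(insurgent | positive test): flagged insurgents over everyone flagged
p_insurgent_given_positive = true_positives / (true_positives + false_positives)
print(round(p_insurgent_given_positive, 3))  # 0.161
```

The false positives swamp the true positives because civilians vastly outnumber insurgents, which is why a 95%-accurate test yields only 16% certainty.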

The 16% certainty rate stems from the uncertainty that always exists: some insurgents escape detection while some innocent civilians test positive as insurgents. The author notes that this is an extremely oversimplified example; actual Bayesian analysis in this situation would require serious computing power, would account for many other factors, and would involve repeated testing to ensure the most accurate results. Nonetheless, the example highlights the use of Bayesian analysis as a method of predictive analysis. Note, however, that the method predicts the probability of a particular event happening, not whether that event will actually occur.
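The value of repeated testing can be shown by feeding each posterior back in as the next prior. Assuming (hypothetically) that repeat investigations are independent and equally accurate, a few positive results quickly raise confidence:

```python
def bayes_update(prior, hit_rate, false_positive_rate):
    """Posterior P(insurgent) after one more positive test result."""
    num = hit_rate * prior
    return num / (num + false_positive_rate * (1 - prior))

p = 0.01  # prior: 1% of the population are insurgents
for n in range(1, 4):
    p = bayes_update(p, 0.95, 0.05)  # another independent positive test
    print(n, round(p, 3))
# posterior after 1, 2, 3 positive tests: 0.161, 0.785, 0.986
```

In practice, repeat investigations are rarely independent (the same disguise fools the same checks), so real gains would be smaller; the sketch only shows the direction of the effect.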

The author returns to the "black swan" example: intelligence "black swans" are events that can occur but are highly unlikely to. Just because they haven't happened doesn't mean they won't, and Bayesian analysis provides a method for estimating their likelihood. The author concludes by reiterating that intelligence analysis is not about predicting future events, but about predicting the likelihood of future events. "The inability to stop a Black Swan event, or a false prediction of a Black Swan event, does not always mean that the intelligence community 'failed'." Rather, the notion that intelligence "failed" comes from the distorted view of analysts as fortune tellers rather than what they truly are: reducers of uncertainty.