Friday, November 7, 2014

Does Intuition Beat Fast and Frugal Heuristics?

Summary
In his chapter titled “Does Intuition Beat Fast and Frugal Heuristics?”, Glockner examined whether the automatic process of arriving at a decision is quicker and more accurate than rule-based heuristic approaches.

Glockner discussed three methods by which a person can make a decision: two are rule based, and the third is an automatic process.

Weighted Additive Strategy (WADD): the process of choosing the option with the highest weighted sum of criteria. This method requires an individual to process all available information to make a decision.

Take the Best (TTB): the process of choosing the best option with respect to the one criterion deemed most important. A person uses only information pertaining to that criterion to make a decision.

Consistency-Maximizing Strategy (CMS): an automatic process by which a person identifies consistencies between the available pieces of information, whether provided or recalled from memory, to make a decision.

CMS is a three-step process by which a person makes a decision:
  1. The person first activates all associated information within their memory
  2. The person then automatically reduces the number of inconsistencies between pieces of information
  3. A decision is formed based on the resulting connections between the available pieces of information
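The two rule-based strategies lend themselves to a short sketch. The cue names, weights, and city values below are invented for illustration and are not Glockner's materials:

```python
# Hypothetical binary cues loosely modeled on the city task; weights
# reflect each cue's judged importance (invented numbers).
weights = {"capital": 0.8, "university": 0.5, "sports_team": 0.3}

city_a = {"capital": 1, "university": 1, "sports_team": 0}
city_b = {"capital": 0, "university": 1, "sports_team": 1}

def wadd(options, weights):
    """Weighted Additive: pick the option with the highest weighted sum of all cues."""
    def score(opt):
        return sum(weights[c] * v for c, v in opt.items())
    return max(options, key=lambda name: score(options[name]))

def ttb(options, weights):
    """Take the Best: walk cues from most to least important and decide on
    the first cue that discriminates between the options."""
    names = list(options)
    for cue in sorted(weights, key=weights.get, reverse=True):
        values = {n: options[n][cue] for n in names}
        best = max(values.values())
        winners = [n for n, v in values.items() if v == best]
        if len(winners) == 1:
            return winners[0]
    return None  # no cue discriminates; the person must guess

options = {"A": city_a, "B": city_b}
print(wadd(options, weights))  # A (1.3 vs 0.8)
print(ttb(options, weights))   # A (decided by 'capital' alone)
```

Note how TTB ignores everything after the first discriminating cue, which is why it is "fast and frugal," while WADD must touch every piece of information.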
Glockner hypothesized that choices made by those using CMS would mirror the choices made by the WADD group. Additionally, he hypothesized that those using CMS would have shorter decision-making times, and that their times would increase with the number of inconsistencies.
To test this hypothesis, Glockner asked an experimental group to make a decision. Participants were provided a list of cities and asked to determine which city had more inhabitants. Each participant was given 3 facts about each city: (A) whether the city was a capital, (B) whether it had a university, and (C) whether it had a major league sports team. This process was repeated using 6 different questions. Afterwards, the participants disclosed their decision-making process and were divided into groups (WADD, TTB, and CMS). Glockner identified that 63% used CMS, 24% used TTB, and 2% used WADD. Those using CMS had the lowest decision-making times; however, the difference was not statistically significant.

Results of this study found that CMS was the most frequently used decision process model; only a small portion of participants opted for a complex process (WADD). Glockner also found that the decisions made using CMS (i.e., intuition) did not differ from those made using the complex processing model (WADD).

Critique
Glockner’s findings adequately show that CMS is the most used decision processing model; however, his experimental design cannot unequivocally determine that it is the better of the two. The number of participants using a complex decision-making model (WADD) paled in comparison to those using CMS; therefore, this comparison is not statistically sound. Further research with equal group sizes of intuitive and heuristic decision makers is needed before concluding that one is better than the other.

Source
Glockner, A. (2007). Does intuition beat fast and frugal heuristics? A systematic empirical analysis.

The Paradox of Intuitive Analysis and the Implications for Professionalism

Authors: Kirsty Martin, Mark Kebbell, Louise Porter, and Michael Townsley

Summary:

Martin et al. examined the available research on how intelligence analysts conduct analysis, with the intention of applying their findings specifically to criminal intelligence analysts.  The main question the authors identify is whether intelligence should be treated as an art (where intuition and experience play a key role), a science (where the scientific method is the driving force), or a combination of the two.  Their research identifies three different models: traditional decision making (TDM), naturalistic decision making (NDM), and cognitive continuum theory (CCT).

In their research, Martin et al. identify the analyst, not the client, as the decision maker, whereas most research treats the client as the decision maker.  The focus is to identify how analysts should make decisions as they examine a piece of evidence: in an intuitive fashion (art) or through a rule-based method (science).

TDM Models
TDM sets formal rules for analysts to move through while making decisions about pieces of evidence.  These models are based on three key principles: rationality, maximization, and transitivity. Analysts follow logical, rule-based processes when making decisions and attempt to maximize accuracy in seeking the best available decision.

Examples of TDM models include decision tree analysis, Sleipnir, analysis of competing hypotheses (ACH), and Bayesian analysis. These methods allow analysts to clearly show the process taken in reaching an estimate, which can then be judged and altered if needed.  The authors found that while TDM models often result in a good estimate, if a mistake is made in their application, these models can be less accurate than intuitive strategies.
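Bayesian analysis, one of the TDM examples named above, makes the evidence-weighing explicit. This is a toy sketch with invented priors and likelihoods, not material from the article:

```python
# Two competing hypotheses with prior probabilities, updated on a single
# piece of evidence via Bayes' rule. All numbers are illustrative only.
priors = {"H1": 0.5, "H2": 0.5}
# P(evidence | hypothesis), as judged by the analyst:
likelihoods = {"H1": 0.8, "H2": 0.2}

def bayes_update(priors, likelihoods):
    """Return posterior probabilities after observing the evidence."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

posteriors = bayes_update(priors, likelihoods)
print(posteriors)  # H1 ends up near 0.8, H2 near 0.2
```

The appeal for TDM is exactly what the authors describe: every step of the estimate is visible and can be judged or revised if a likelihood was mis-assigned.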

NDM Models
NDM models are based on matching the analyst's experience with key pieces of evidence to come to a conclusion.  Recognition-primed decision making (RPD) is the best-known model: the analyst immerses him- or herself in the situation, then identifies certain indicators that can be matched to previous experiences.  The literature identifies ten main characteristics of NDM:

1) Ill-defined goals and ill-structured tasks
2) Uncertainty, ambiguity and missing data
3) Shifting and competing goals
4) Dynamic and changing conditions
5) Action-feedback loops
6) Time constraints
7) High stakes
8) Multiple players
9) Organizational goals and norms
10) Experienced decision makers

CCT Models
CCT models recognize that both intuitive and analytic modes are often used when analyzing a problem, to varying degrees depending on the task.  The two operate at opposite ends of a continuum, and the analyst's position on that continuum shifts with the task.  For example, time constraints may prevent an analyst from exploring all available courses of action (COAs); here the analyst uses intuition to identify the top COAs to examine.  From there, an analytic method such as ACH can be used to break down each COA.

Critique:
The purpose of this study was to identify the different schools of thought on intelligence as a science versus an art and to apply them to criminal intelligence.  For all three models, the authors state that no researchers have examined them specifically for criminal analysts.  In terms of developing a foundation for future researchers to conduct experiments on this topic, the authors did a good job.

As this was more of a literature review, it offers no clear and decisive evidence as to which method works best for criminal analysts.  The authors do favor CCT models as the likely best approach to intelligence analysis.  I agree with this conclusion, as this approach allows analysts to take the good aspects of each approach and use them to mitigate the weaknesses.  Empirical research must still be carried out to confirm this hypothesis.

Source:

Martin, K., Kebbell, M., Porter, L., & Townsley, M. (2011).  The paradox of intuitive analysis and the implications for professionalism.  Journal of the AIPIO, 19(1).


 

Thursday, November 6, 2014

A machine for jumping to conclusions

Summary:
In his book Thinking, Fast and Slow, Nobel Prize winner Daniel Kahneman reflects on decades of psychological research, much of which earned him the Nobel Prize in economics, to explain the fundamental heuristics and processes of decision-making.

Most of the chapters in his book explore cognitive biases.  A central theme of this book is that intuition is the fuel of cognitive bias.  System 1 thinking refers to our fast thinking, the decision making we use almost all of the time.  System 2 thinking is our more logical and cautious approach to decision making.  System 1 thinking is automatic and constant, whereas System 2 thinking is only active when we force it to be.

System 1 thinking is where intuition influences us the most, and numerous studies Kahneman cites show that intuition damages our ability to arrive at correct answers and good decisions.  Our brains, particularly in System 1 thinking, make decisions using only the information we have (WYSIATI: what you see is all there is).  According to Kahneman, “System 1 excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have.”  Our System 1 thinking also latches on to information only if it can create a coherent story out of it.  This is why intuitive judgments are flawed much, if not all, of the time.

For instance, a study from Stanford had one group of participants listen to a one-sided description of a legal case, while another group listened to arguments representing both sides of the case.  The information for both groups was the same: it was mainly pro-plaintiff, but it also allowed participants to infer a pro-defense story.  Rather than take the time to infer or deduce the alternate story, and even though participants knew the study was designed to bias their decision, those presented with the one-sided argument were profoundly affected in their judgments as jurors.

Critique:
Thinking, Fast and Slow should be required reading for any professional involved with decision-making support or, especially, anyone with the power to make decisions.  The plethora of research Kahneman uses to support his conclusions makes this book one of the best syntheses of decision-making research ever written.

Intuition allows people to take shortcuts (System 1 thinking).  Intuition has served us well in evolution.  If we hear rustling in a nearby bush, we run, just as our distant ancestors must have, or else we wouldn't be here today.  However, in a more civilized and interconnected world, where the actions of one major country leader or business can have profound effects on the state of affairs on the other side of the planet, we must use System 2 thinking more often.  Intuition is dangerous for decision makers and intelligence personnel.

Source:
Kahneman, D. (2013). A machine for jumping to conclusions. In Thinking, fast and slow (pp. 79–89).

Monday, November 3, 2014

Summary of Findings: Speed Reading (3.5 out of 5 stars)

Note: This post represents the synthesis of the thoughts, procedures and experiences of others as represented in the 5 articles read in advance (see previous posts) and the discussion among the students and instructor during the Advanced Analytic Techniques class at Mercyhurst University in November 2014  regarding Speed Reading specifically. This technique was evaluated based on its overall validity, simplicity, flexibility and its ability to effectively use unstructured data.

Description:
Speed reading is an analytic modifier used to increase a reader's average reading speed, measured in words per minute, without reducing comprehension. Speed reading attempts to eliminate fixation, regression, and auditory reassurance. Teaching speed reading involves a fundamental paradigm shift in how most people are traditionally taught to read: moving away from phonetic reading in favor of direct semantic processing from visual cues, which reduces cognitive load and enables faster reading.

Strengths:
1. Speed reading can significantly reduce the amount of time needed in collection and analysis
2. There are several ways to increase reading speed and maintain comprehension
3. There are a variety of courses and supplemental tools to accelerate the speed reading learning process
4. Past research has identified the leading causes of slow reading speeds, enabling readers to reflect on and address those causes at their own pace

Weaknesses:
1. There are many different speed reading techniques, with no universally accepted best practice
2. Learning it is a slow process that requires a significant time commitment (approximately an hour a day)
3. Reading at high speeds can have a tiring effect on the individual as opposed to reading at normal speeds
4. While there is evidence that shows these techniques help increase speed, there is a lack of strong evidence showing increased reading comprehension
5. When learning speed reading, reading comprehension is lower until the participant becomes more comfortable with the technique

Step by Step:
Note: This step by step discussion lists only techniques examined by this group of analysts
  1. Increase your peripheral vision
  2. Chunking: read multiple words at a time (3 to 4) as clusters
  3. Pen or Hand: the reader uses a pen or hand to move the eyes word to word at an accelerated pace
  4. Deadline Strategy: set a determined amount of time to read a passage, then keep decreasing the time to increase speed
  5. First Sentence: slow down on the first sentence of a paragraph and speed up for the rest of the paragraph
  6. Read passages or books that interest you; this will prevent your mind from wandering while reading
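The chunking technique in the steps above can be sketched mechanically: group a passage into fixed-size word clusters for the eye to take in at once, as tools like www.accelareader.com display them. A minimal sketch:

```python
def chunk(text, size=3):
    """Split text into clusters of `size` words, one fixation per cluster."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

for cluster in chunk("Speed reading attempts to eliminate fixation regression and auditory reassurance"):
    print(cluster)  # first cluster: 'Speed reading attempts'
```

A reader practicing chunking would fixate once per printed cluster rather than once per word.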

Exercise:
Students first went to www.readingsoft.com, a free online speed reading program, and read the passage at their normal reading speed to get a words-per-minute score, which they wrote down. After the initial reading test, students spoke about factors that can slow down a person's reading speed: fixation, regression, and auditory reassurance. Students then discussed ways to increase their reading speed (please refer to the step-by-step). The students were then asked to retake the www.readingsoft.com reading test using one of the techniques discussed. The presenter then told the class to write down their second reading speed, and then to go online, find a short article of interest, and copy and paste it into www.accelareader.com. After changing the settings in this online program, students were able to view and use some of the techniques for increasing their reading speed.
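The words-per-minute score the class recorded reduces to a simple calculation; a minimal sketch:

```python
def words_per_minute(text, seconds):
    """Reading speed: words read divided by minutes elapsed."""
    word_count = len(text.split())
    return word_count / (seconds / 60)

# e.g. a 300-word passage read in 90 seconds:
print(words_per_minute(" ".join(["word"] * 300), 90))  # 200.0
```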
 
What did we learn from the Speed Reading Exercise?
The use of various speed-reading techniques led to increased reading speeds for the participants.

The relationship between speed-reading and forecasting accuracy is unclear and untested.

Friday, October 31, 2014

Don't Believe What You Read (Only Once): Comprehension Is Supported by Regressions During Reading

Authors: Elizabeth Schotter, Randy Tran and Keith Rayner

Summary:
Schotter, Tran, and Rayner examined the effectiveness of a speed reading technique that removes the need for eye regressions.  Eye regressions are times when your eyes move backwards while reading to look at earlier words.  An app called Spritz spurred this research.  Spritz presents words one at a time as they flash across a screen.  According to the app's developers, this speed reading technique allows users to read more quickly, saving time and increasing comprehension.

The authors developed an experiment to test the claim that eliminating regressions, as Spritz does, helps users comprehend statements at a higher rate than normal reading.  This was accomplished using eye-tracking technology.  As participants read through a series of sentences, the technology tracked their eyes.  As their eyes left a word, the word was replaced with "x"s so that the reader could not regress to that word and had to continue moving forward.  For example,

I ran to the mall.
x ran to the mall.
x xxx to the mall.
x xxx xx the mall.
x xxx xx xxx mall

Forty undergraduates from the University of California were selected to participate in the experiment.  In total, the participants read 40 sentences divided into three categories: ambiguous, unambiguous, and filler.  In ambiguous sentences, the reader's initial interpretation was rendered structurally impossible upon reaching the disambiguating verb in the main clause. These sentences followed a structure similar to the following,

"While the man drank the water that was clear and cold overflowed from the toilet"

Unambiguous sentences replaced the initial verb of the ambiguous sentences with an intransitive verb, so that readers would not misinterpret the following noun as a direct object. Unambiguous sentences took the general form of,

"While the man slept the water that was clear and cold overflowed from the toilet"

Filler sentences were well-constructed sentences that were easy to read and understand, such as,

"It was very hard to find the truck inside of the messy toy box"

After reading the sentences, the participants answered a question that tested their comprehension of the previous sentence.

The study found that comprehension accuracy was lower for both ambiguous and unambiguous sentences than for filler sentences.  Participants who could not re-read confusing portions of sentences did not comprehend them as well as when reading normally.  Because accuracy was reduced for both ambiguous and unambiguous sentences, this suggests that regressions are important to reading comprehension globally, not just for particularly difficult sentence structures.

Critique:
Schotter, Tran, and Rayner lay out a compelling argument against Spritz.  The authors describe the format and design of their experiment thoroughly, making the process easily reproducible with the right equipment.  My main objection concerns how words were removed from view.  While participants could not re-read a word after passing it (the word being replaced with x's), the number of x's matched the number of letters in the masked word. For example,

elephant = xxxxxxxx

The inability to regress might have had an even greater impact if the word had been removed completely: the length of the string of x's may have given participants' brains some help in identifying the previous word.  I agree with the results and the authors' analysis of what they mean, but I believe the evidence could weigh even more strongly in favor of regressions assisting comprehension than the study suggests.

Source:
Schotter, E. R., Tran, R., & Rayner, K. (2014).  Don't believe what you read (only once): Comprehension is supported by regressions during reading.  Psychological Science, 1–9.

Altered resting functional connectivity of expressive language regions after speed reading training

Summary
This 2014 study in the Journal of Clinical and Experimental Neuropsychology used functional connectivity magnetic resonance imaging (fcMRI) to examine whether speed reading training affects the functional architecture of the neural networks involved in reading, in 9 participants selected for the training.


A central premise of speed reading proponents is that subvocalization significantly slows down reading and that there are more efficient ways to read without the need of subvocalization; it is not necessary to cognitively voice words as one reads them in order to understand them. Subvocalization is a consequence of learning how to read phonetically, or sounding out strings of words in the brain. A typical example of reading something without subvocalization is reading a stop sign without cognitively sounding out the syllables. When someone relies on subvocalization as a primary method of reading, their reading speed is limited to their maximum talking speed. Reduction or elimination of subvocalization in favor of direct semantic processing from visual cues (instead of semantically processing subvocalized phonological cues) represents reduced cognitive load and faster reading. 

By measuring detectable changes in functional connectivity in language-associated brain regions via neuroimaging, the researchers found that, after the speed reading training, participants dissociated the visual input of orthographic word representations from the internalized voicing and subvocalization of text as it is read. Furthermore, reading speed measured in words per minute increased by a statistically significant amount.

The researchers recruited 9 participants with comparable reading proficiency, who completed initial and follow-up MRI scans before and after a 6-week internet-based speed reading training program. EyeQ Advantage, based in Salt Lake City, provided the program, which consisted of 12 modules designed to facilitate progressively faster reading speed and increased comprehension. Each training exercise lasted ten minutes, and participants performed many modules multiple times, with training sessions 3 to 5 times a week. Some of the exercises consisted of reading passages at slow, medium, and fast presentation speeds, as well as following a sequence of geometric images around the screen.

Reading speed (words per minute) increased significantly from pre- to post-training (p = .0021).

Critique 
Unfortunately, the research makes no reference to the importance of sustaining a satisfactory reading comprehension level when increasing reading speed, nor does it measure comprehension pre versus post training. An increase in reading speed from 200 words per minute to 800 words per minute is not impressive if reading comprehension suffers. The research does provide neuroimaging evidence that speed reading training reduces or eliminates subvocalization, one of the obstacles to faster reading identified by proponents of speed reading programs. Generalizability of the findings to the general population could also be an issue because of the small sample size of 9.

Source
Ferguson, M., Nielsen, J. and Anderson, J. (2014). Altered resting functional connectivity of expressive language regions after speed reading training. Journal of Clinical and Experimental Neuropsychology, 36(5), pp.482-493.

The Effects of a Speed Reading Course and Speed Transfer to Other Types of Texts

By: Tran Thi Ngoc Yen

Summary:
Professors at colleges and universities teach speed-reading courses to help students improve their reading speed. Yen conducted this research to determine the effects speed-reading courses have on the reading rate improvements of students in, and outside of, the classroom. According to Yen, there are three fundamental indicators of speed-reading: automaticity, accuracy, and reading speed (for silent reading) or prosody (for oral reading). Researchers suggest students need to maintain a reading comprehension rate of at least 75 percent for speed-reading to be efficient. Normally in a speed-reading course, students maintain a graph of their speed in words per minute (wpm) as well as their reading comprehension score to track progress.

Yen used first-year students at a Vietnamese university as participants. The 116 participants were placed “into four groups: two experimental groups, hereafter called group A (31 students) and group B (30 students); and two control groups, hereafter called group C (26 students) and group D (29 students).” Participants in groups A, B, and C were English majors, while the participants in group D were not. Groups A and B took the speed-reading course alongside additional English classes. The control groups, C and D, did not take the speed-reading course; group C followed the English program at the university and group D attended an English course at a language center.

Participants in groups A and B were required to reach a vocabulary level of 1,000 words to attend the speed-reading course. In addition, all participants had to read pre- and post-test texts written at the 1,000-word level and answer ten reading comprehension questions. Participants read the texts and answered the questions on a computer program. According to the study, the “texts differed from those in the course by being longer, being read on a computer screen rather than in hard copy, and involving different topics from those in the course.” The researchers told the participants to read the texts normally and not as quickly as possible. Researchers distributed 20 texts of 550 words each to ensure that few participants were reading the same texts. To score the participants and make the results more reliable, researchers used four scoring methods: the 20th minus 1st scoring method, the average scoring method, the extreme scoring method, and the three extremes scoring method. In addition, “participants’ comprehension accuracy was measured by counting the number of correct answers they made on each of the 20 texts in the speed reading course.”
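The scoring methods are named but not defined in this summary. The sketch below shows plausible interpretations of three of them on invented wpm data; these interpretations are my assumptions, not the paper's definitions:

```python
# Hypothetical wpm scores for one participant across the course's 20 texts
# (invented data: a steady climb from 120 to 215 wpm).
speeds = list(range(120, 220, 5))

def gain_20th_minus_1st(speeds):
    """'20th minus 1st': speed on the final text minus speed on the first."""
    return speeds[-1] - speeds[0]

def gain_extremes(speeds):
    """One reading of the 'extreme scoring method': fastest minus slowest."""
    return max(speeds) - min(speeds)

def gain_three_extremes(speeds):
    """One reading of the 'three extremes method': mean of the three
    fastest readings minus mean of the three slowest."""
    s = sorted(speeds)
    return sum(s[-3:]) / 3 - sum(s[:3]) / 3

print(gain_20th_minus_1st(speeds))  # 95
print(gain_three_extremes(speeds))  # 85.0
```

Averaging over several texts, as the three-extremes variant does, dampens the effect of a single unusually easy or hard text, which is presumably why multiple scoring methods were used for reliability.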



Participants used progress charts to track their development. There were four main types of charts that participants plotted: gradual increase, erratic increase, plateau increase, and mixed increase. As seen in table 4, groups A and B had an 82 percent gradual increase in their speed change.



With respect to the transfer of speed gains from the speed-reading course to other types of texts, the control groups increased an average of 15 wpm while the treatment groups averaged an increase of 48 wpm. In addition, the treatment groups outperformed the control groups on comprehension; most participants in the treatment groups increased their comprehension accuracy while most participants in the control groups did not. The research concluded that speed-reading courses helped participants maintain or increase their comprehension while also increasing their reading speed. In addition, “there may be a link between comprehension and reading speed improvement in that participants who greatly increased their speed tended to improve their comprehension accuracy while it was less likely that participants who marginally increased their speeds would improve their comprehension accuracy.” Results also indicated that the speed-reading course benefited participants by increasing their speed on other types of texts by at least 30 wpm.




Critique:
Although the author did a very thorough job throughout the study, I would have liked to see whether there were any differences in speed-reading for majors other than English, especially since English was not the participants' first language. In addition, the way the author scored the speed-reading was quite subjective, depending on which of the four methods was used. As the results show, taking a speed-reading course could be beneficial: being able to read and produce more work in the same amount of time would allow analysts to increase their output.

Source:

Yen, T. T. N. (2012). The effects of a speed reading course and speed transfer to other types of texts. RELC Journal, 43(1), 23–37. doi:10.1177/0033688212439996

Extensive reading: Speed and comprehension

Summary:
Bell (2001) examined the correlation between reading speed and reading comprehension among children in both intensive and extensive reading environments.  Subjects in the extensive environment were given longer texts, while those in the intensive environment received about 30 short passages, usually no longer than 300 words.  Bell expected those in the extensive program to adopt faster reading in order to meet the time demands.  Both groups were then given comprehension tests on their respective passages, and a correlation analysis was performed on the comprehension results and reading speed.

Subjects in the extensive environment did adopt higher reading speeds to meet time demands.  More importantly, the extensive group scored higher on reading comprehension tests than the intensive group.  Bell concludes that extensive reading improves reading speed and comprehension more than “intensive language exploitation activities.” Furthermore, the more extensive reading a student does, the more his or her reading speed and comprehension increase.




Critique:
There are a few items worth noting about the findings of this study.  First, it focused on elementary learners; reading longer texts may not yield the same improvements for adults, although that would certainly be desirable, especially in the intelligence community.  Second, it is nearly impossible to construct comprehension tests for different texts at the same level of difficulty.  While the tests may have been valid for each text, they may not be valid when considered as a whole in the context of this study.

Now, the findings of this study have interesting implications for intelligence summaries (INTSUMs) and short-form analytic reports (SFARs).  INTSUMs may actually be harmful to decision makers if these findings are applicable to adults and their reading comprehension.  Admittedly, SFARs would not be as harmful as they contain more content, but these findings suggest that long-form analytic reports (LFARs, usually 2 or more pages) are the most preferable tools to spread information and understanding of a current issue.

Future studies on reading speed and comprehension are required in order to support such assertions.  The current study, while interesting, is not enough.

Source:
Bell, T. (2001, April). Extensive reading: Speed and comprehension. Retrieved October 31, 2014, from http://www.readingmatrix.com/articles/bell/ 

Thursday, October 30, 2014

The Effect of a Timed Reading Activity on EFL Learners: Speed, Comprehension, and Perceptions

Summary
In this study, Anna Chang tested the effectiveness of timed reading activities on reading speed and comprehension. The results showed that students doing the timed reading activities increased their reading speed by an average of 29 words per minute (wpm), a 25% gain, and their comprehension scores by .63 (4%).

Participants of the study were divided into two groups, an experimental group (n=46) and a control group (n=38). Both groups were enrolled in a required English course preparing students for the TOEIC (Test of English for International Communication). The course ran for 13 weeks with one 2-hour class session per week. The experimental group spent 15 minutes at the end of each class on timed reading exercises, while the control group spent those 15 minutes reviewing the previous week’s lesson.

At the beginning of the course, Chang gave both groups a pretest in which they read two texts while being timed. After reading the texts, the participants took a test consisting of 5 multiple-choice questions. Chang repeated the process after the 13-week course.

Results of the experiment showed that reading speed increased, on average, in the experimental group by 25% from 118 wpm to 147 wpm compared to only 5% in the control group. Additionally, the number of participants reading above 150 wpm in the experimental group increased from 10% to 39%. 
Figure 1. Reading speed for the experimental and controls groups at Time 1 and 2 (in wpm)

Experimental results for comprehension did not show as much of a difference.
Reading comprehension differed by only .05 between the experimental and control groups. This study therefore showed that increasing speed did not decrease comprehension, contrary to what some other studies have found.

Critique
I do not believe comprehension was fully assessed by this study. The pre- and post-tests administered were only 5 to 8 questions. Additionally, the questions were multiple choice, increasing the chance of a correct answer without actual comprehension. Pre- and post-tests with more questions would be a better measure of whether a student fully comprehends what they are reading.

Source:

Chang, Anna. (2010). The effect of a timed reading activity on EFL learners: Speed, comprehension, and perceptions. Reading in a Foreign Language, 22(2). 

Monday, October 27, 2014

Summary of Findings: Delphi Technique (4 out of 5 stars)

Note: This post represents the synthesis of the thoughts, procedures and experiences of others as represented in the 5 articles read in advance (see previous posts) and the discussion among the students and instructor during the Advanced Analytic Techniques class at Mercyhurst University in October 2014  regarding Delphi Technique specifically. This technique was evaluated based on its overall validity, simplicity, flexibility and its ability to effectively use unstructured data.

Description:
The Delphi technique is a method that relies on expert and group knowledge to make more accurate forecasts from incomplete information.  Individual forecasts are collected over a series of rounds.  After each round, the responses are anonymized and distributed to the rest of the group for consideration, and new individual forecasts are given.  

The RAND Corporation created the Delphi technique in order to support accurate decision making in the face of incomplete information.  There is a substantial amount of research on the validity of the Delphi technique dating back to its creation in the 1950s, but the methodologies scholars have used to test Delphi’s effectiveness have varied in almost every study.  

Strengths:
1. Conducted in writing or electronically and does not require face-to-face meetings
2. Helps generate consensus or identify divergence of opinions among group members
3. Participants are relatively free of social pressure, influence, and dominance from other group members
4. Anonymous responses allow respondents to keep their opinions private until they are comfortable changing an estimate
5. Is inexpensive

Weaknesses:
1. Adequate time may not be given to the problem, and consensus may not be reached
2. Participants may ignore feedback
3. Experts may not be defined among the group
4. Requires adequate time and participant commitment
5. More time consuming than other group methods
6. Broad guidelines: there are at least 27 different ways to conduct the method

Step by Step:  
  1. Use a group of 5-20 heterogeneous experts or people with appropriate knowledge of the subject.
  2. The entire process must use a systematic process, particularly with anonymous feedback and a controlled method of dispersing responses and feedback.
  3. A minimum of three iterations should be conducted, with polling continuing until responses stabilize.
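The three steps above can be sketched as a minimal simulation. This is not the Delphi Decision Aid software used in the exercise below; the `revise` function standing in for a panelist's judgment is a hypothetical assumption:

```python
import statistics

def aggregate(estimates):
    """Anonymized statistical feedback: no estimate is tied to a panelist."""
    return {"median": statistics.median(estimates),
            "low": min(estimates),
            "high": max(estimates)}

def run_delphi(initial, revise, min_rounds=3, max_rounds=10, tolerance=0.05):
    """Poll a panel until the group median stabilizes.

    `revise(estimate, feedback)` is a hypothetical stand-in for a panelist
    reconsidering an estimate after seeing aggregated, anonymous feedback.
    """
    estimates = list(initial)
    history = [aggregate(estimates)]            # round 1: initial forecasts
    for round_num in range(2, max_rounds + 1):
        # Panelists see only controlled, anonymous feedback (step 2)
        feedback = history[-1]
        estimates = [revise(e, feedback) for e in estimates]
        history.append(aggregate(estimates))
        prev, curr = history[-2]["median"], history[-1]["median"]
        stable = prev != 0 and abs(curr - prev) / abs(prev) < tolerance
        if round_num >= min_rounds and stable:  # step 3: at least 3 iterations
            break
    return history
```

For example, if every panelist moves halfway toward the group median each round, the spread of estimates narrows across rounds while the median holds steady, which is the stability the stopping rule looks for.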

Exercise:
We used the Delphi Decision Aid online software to conduct three 5-minute rounds of Delphi. The panel forecast how many second-year Applied Intelligence graduate students will have at least one full-time job offer in an intelligence-related field by graduation, and how many thesis pages second-year Applied Intelligence students will have completed on average by October 29. The first round also contained a ranking question asking panelists to rank their expertise on various topics to inform Delphi questions for subsequent rounds. Subsequent rounds asked the two original questions in addition to predicting the outcome of the National Football League AFC division this season, how many selfies Kim Kardashian will have in her book scheduled for publication in April 2015, and what the S&P 500 index will be in early November. After each round, the panel had a few minutes to review the round's feedback through statistical aggregation of responses and written comments explaining why panelists made the estimates they did. 

What did we learn from the Delphi Exercise?
  1. Delphi works well with broad questions where the expertise of one person is not sufficient to encompass the entire scope of the question.
  2. Literature suggests that panelists tend to perform poorly on questions asking them to rank various items from best to worst and that self-reported expertise is not a best practice for panel selection.
  3. Delphi is designed to collect expert estimates  in cases where a variety of relevant factors (economic, technical, etc.) ensure that individual panelists have limited knowledge and could reasonably benefit from communicating with other experts possessing different information.   
  4. Estimates from panelists do not have to be quantitative, as they do in prediction markets.

Additional Resources Of Interest: