Friday, November 14, 2014

Collaborative visualization: Definition, challenges, and research agenda

By: Petra Isenberg, Niklas Elmqvist, Jean Scholtz, Daniel Cernea, Kwan-Liu Ma, Hans Hagen

Summary:
According to the authors of this research paper, “collaboration has been named one of the grand challenges for visualization and visual analytics.” Traditionally, visualization and visual analytic tools were designed for a single person at a desktop computer. Today’s world, however, calls for visualization tools that support collaboration and communication. Experts and non-experts alike can take advantage of collaborative visualization scenarios to learn from one another’s analysis processes and viewpoints. The authors define collaborative visualization as “the shared use of computer-supported, [interactive], visual representations of data by more than one person with the common goal of contribution to joint information processing activities.” The term social data analysis has also been coined to describe the social interaction that is central to collaborative visualization.

There are three main levels of engagement at which digital systems support collaborative visualization: viewing, interacting/exploring, and sharing/creating. Software systems like PowerPoint and videoconferencing allow people to learn, discuss, interpret, and form decisions about a set of information. People who use and share interactive visualization software can communicate through chat, comments, email, or video/audio links. These features allow alternative interpretations and multiple viewpoints to emerge in discussion. Programs such as Many Eyes allow users to upload and create new datasets for the community to explore. The authors argue that the purpose of an online collaboratory (data warehouse) “is to focus the collective effort of the group in order to produce significant and useful methods.” However, it is important for users of such a program to understand the overall data, the user space, and the application space.

Computer-supported collaborative visualization software helps decision makers distill knowledge by mining large multi-dimensional datasets; run models and simulations to explore the consequences of particular actions; communicate results, scenarios, and opinions to other stakeholders; and discuss, debate, and develop support for specific courses of action. In addition, collaborative technology supports the social interaction of large audiences with a range of backgrounds, connections, and goals. This provides the group with an environment where individuals can generate ideas and analysis alone or together.

Critique:
This article gave a broad overview of collaborative visualization and the areas where future research should be directed. However, the authors did not integrate the challenges of collaborative visualization throughout the piece; instead, they confined them to the future research agenda. As collaborative visualization becomes an everyday tool, it will be important for people to learn these programs at school or at work. Knowing how these analytic tools work will be key to group interactions and analyses.

Source:

Isenberg, P., Elmqvist, N., Scholtz, J., Cernea, D., Ma, K.-L., & Hagen, H. (2011). Collaborative visualization: Definition, challenges, and research agenda. Information Visualization, 10(4), 310–326. doi:10.1177/1473871611412817

Thursday, November 13, 2014

Visualization and cognition: Drawing things together

Summary:
Latour takes an anthropological look at what gives visualizations their cognitive value and comprehensibility. After reviewing several anthropological, psychological, and business-related works, he found that visualizations are most effective when they share certain characteristics.

First, a visualization must have elements of optical consistency. One of the most effective elements of optical consistency is perspective. Perspective is the reason why many graphs and, especially, maps seem incomplete or confusing without legends or scales. Our brains nearly automatically compare something we see in one picture with an object in another picture, as long as we have a baseline to do so.

Second, it must obey the “visual culture” of the time of the visualization’s creation. Visual culture is an abstract requirement that essentially asks the photographer or artist to include elements in the work that allow observers to bring their own worldly attributes to it. The work can be viewed at a future time but still be understood as a snapshot of a different time. The overall picture or message remains clear regardless of when it is viewed.

Third, and related to the second requirement, a visualization is most effective when it can be reproduced and circulated widely. The ability to publish visualizations has made this requirement easier to meet. Publication makes a visualization mobile (able to be viewed across a wider span of time and space) and immutable (able to remain unchanged over time).

After outlining what makes for the most mobile and immutable visualizations, Latour explores how the use of visualizations helps people understand otherwise overwhelmingly complex phenomena.

While anything can be re-imaged or re-visualized, Latour argues that consistency is key. A dissenter can find various illustrations of his or her positions, but too many visualizations may actually harm the cause. Like scientific theories, visualizations are best understood when conveyed in a consistent fashion. As a very simplified example, ‘bar graph’ issues can become convoluted when too many people start using pie graphs to portray them, and spatial dynamics would be much more confusing displayed in a table rather than a map. Since visualizations can be produced and dispersed at low cost, consistency is key.

In addition, visualizations turn otherwise complicated phenomena of three or more dimensions into flat representations. When these issues are illustrated sufficiently on a flat medium, greater comprehension and communication is achieved, especially when the visualization is coupled with a written text.

Critique:
The “visual culture” requirement, however, seems mainly useful for photography and art, and is of little importance to intelligence analysts. Latour’s exploration of visualizations makes intuitive sense, but there are few experimental citations in his writing; he does, however, include plenty of anthropological and scientific research to guide his exploration. Until his intuitive points are proven wrong in an intelligence-setting experiment, analysts should follow his recommendations. Visualizations are a valuable modifier, if not a method.

Source:

Latour, B. (1983). Visualization and cognition: Drawing things together (pp. 1–33). Boston, MA: Harvard University. Retrieved from http://isites.harvard.edu/fs/docs/icb.topic1270717.files/Visualization%20and%20Cognition.pdf

Monday, November 10, 2014

The Use of Visualization in the Communication of Business Strategies: An Experimental Evaluation

Summary
This 2014 experiment in the International Journal of Business Communication tested whether visualization is superior to blocks of text for communicating a business strategy. A total of 76 managers each saw one seven-minute strategy presentation for the financial services branch of an international car manufacturer. The experiment split participants into a control group, which saw slides containing blocks of text, and two visualization treatment groups, which saw either a visual metaphor or a temporal diagram. Managers exposed to a graphic representation of the strategy paid significantly more attention, were more likely to endorse the strategy, and better recalled it after an hour of working on an unrelated case study than managers who saw a textually identical block-of-text presentation. Additionally, managers in the treatment groups perceived the presenter significantly more positively than managers in the control group.


The perception of the visual accounted for 68.7% of the variation in the perception of the presenter, making it a strong predictor. Although the experiment found significant differences between the treatment groups and the control group for attention, agreement, retention, perception of the visual, and perception of the presenter, comprehension of the strategy was not significantly different among the three groups when measured immediately after the presentation. The researchers postulate that the comprehension result was due to measuring comprehension with only two multiple-choice questions, one of which could have been answered without seeing the presentation. Visualization via spatially mapping the strategy content instead of listing it was significantly better than text for measures of attention, agreement with the strategy, and retention, and both the presentation and the presenter were perceived significantly better when the strategy was visualized.
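The 68.7% figure is a coefficient of determination (R²): the share of variance in presenter ratings explained by ratings of the visual. For a simple regression of one rating on the other, R² is just the squared correlation between the two measures. A minimal sketch with invented ratings (the study's raw data are not reproduced here):

```python
# Illustrative only: synthetic 1-7 ratings, not the study's data.
# For simple linear regression, R-squared equals the squared
# Pearson correlation between predictor and outcome.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical ratings from ten managers.
visual    = [6, 5, 7, 4, 6, 3, 5, 7, 4, 6]
presenter = [6, 5, 7, 5, 6, 4, 5, 7, 4, 5]

r_squared = pearson_r(visual, presenter) ** 2
print(round(r_squared, 3))  # share of variance in presenter ratings explained
```

With ratings this strongly aligned the sketch yields a large R², which is the sense in which the study calls perception of the visual a "strong predictor" of perception of the presenter.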


The three types of visual support that the experiment used. Two text slides (top), visual metaphor (bottom left), and temporal diagram (bottom right).

Visual metaphors explain strategy as a stream of actions geared toward a destination with intermediate goals and restricted by several legal and historic factors. Visual metaphors also highlight elements of strategy that are emergent while others remain unrealized. Visual metaphors make abstract content concrete, memorable, and accessible. 

Temporal diagrams are visual language signs with the primary purpose of denoting function and relationships. Temporal diagrams organize content by location so that the audience accesses and processes the information simultaneously. Temporal diagrams use standard shapes to convey mostly analytical knowledge in a structured and systemic format and make abstract concepts accessible by reducing complexity and aligning planned actions in an ideal sequence. Temporal diagrams provide cues as to where the organization currently is, where it can move to, intermediate objectives, and relationships between parts on different levels. 

Each seven-minute presentation consisted of the same 17 information units: a major overall strategic goal, three sub-goals with three elements each, three success factors for the strategy, and one barrier. In each instance, a large 3x2 meter screen projected the presentation aid, and the presenter briefed the content in identical order for all three conditions. Directly after the presentation, a questionnaire measured attention, comprehension, agreement, perception of the visualization, and perception of the presenter. After a one-hour distraction task in which participants worked on an unrelated case study, a second questionnaire measured the participants’ retention of the strategy. Additional control variables included participant background information, perceived legibility, and individual differences on a verbalizer-visualizer dimension of cognitive style, measured using existing, prevalidated scales.

The experiment did not use extensive guidelines for the use of color and other design considerations because the focus was on the evaluation of the visualizations as a communication aid. The experiment used a real strategy and real managers in a controlled environment. 
  
Critique 
Although the seven-minute presentation length and the findings about appropriate visualization as a communication aid apply to intelligence analysis, a better approach to measuring participant comprehension is needed to inform future experiments. The researchers postulate that differences in comprehension immediately after the presentation were not statistically significant because of the lack of applicable comprehension questions. Yet the questionnaire distributed immediately after the presentation contained 33 items measuring nominal-, ordinal-, and interval-level data for the other variables, and the questionnaire an hour after the presentation contained eight open-ended items. The researchers state that only two multiple-choice questions measured comprehension of the strategy, and one of them could be answered without attending the presentation at all: it prompted participants to select, from three choices, the objective that entails the most risk.

Source
Kernbach, S., Eppler, M. J., & Bresciani, S. (2014). The Use of Visualization in the Communication of Business Strategies: An Experimental Evaluation. International Journal of Business Communication 

Summary of Findings: Intuition (4 out of 5 stars)

Note: This post represents the synthesis of the thoughts, procedures, and experiences of others as represented in the 5 articles read in advance (see previous posts) and the discussion among the students and instructor during the Advanced Analytic Techniques class at Mercyhurst University in November 2014 regarding Intuitive Judgement specifically. This technique was evaluated based on its overall validity, simplicity, flexibility, and its ability to effectively use unstructured data.

Description:
Intuition is a method for making estimates based on cognitive shortcuts and subconscious instincts that are not explicitly stated but nevertheless guide the estimate and subsequent action. Intuition is used to assess multiple alternatives without a structured analytic process, making estimates possible under the most dire time constraints. It is the fastest estimative process available to analysts.

Strengths:
1. Intuition focuses on well-defined patterns, relationships and possibilities
2. Intuition involves factors such as expertise, processing styles, task structure, feedback, and time pressure in making decisions
3. Works well with well-defined problems
4. Intuition allows people to take shortcuts (System 1 thinking)
5. Intuition is a fast process

Weaknesses:
1. Current evidence shows that intuition works best only in certain situations (limited time frame)
2. Unknown if cognitive biases either hurt or help forecasting accuracy
3. Unstructured or ambiguous questions hurt the effectiveness of using intuition
4. No documentation, as there is with a structured method, as to why the analysts pursued a certain path or came to a certain conclusion when intuition is used

Step by Step:
Note: This is a reasonable description of the steps one would take in making an intuitive judgement. It is derived from: Glockner, Andreas. (2007). Does intuition beat fast and frugal heuristics? A systematic empirical analysis.
  1. A person must first activate all associated information within their memory
  2. The person then automatically reduces the number of inconsistencies between pieces of information
  3. A resulting decision is formed based on the connections between the available information
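Glockner formalizes this consistency-maximizing process as a parallel constraint satisfaction network: pieces of information and options are nodes, mutually consistent nodes excite each other, inconsistent nodes inhibit each other, and activation spreads until the network settles on the most coherent option. A toy sketch of that idea follows; the network weights and node layout are invented for illustration, not taken from the paper:

```python
# Toy parallel-constraint-satisfaction sketch of a CMS-style choice.
# Weights are illustrative assumptions, not values from Glockner (2007).

def settle(weights, n_steps=200, decay=0.1):
    """Iteratively update node activations until the network settles."""
    n = len(weights)
    act = [0.0] * n
    act[0] = 1.0  # source node: drives activation into the network
    for _ in range(n_steps):
        new = []
        for i in range(n):
            net = sum(weights[i][j] * max(act[j], 0.0) for j in range(n))
            new.append((1 - decay) * act[i] + 0.1 * net)
        new[0] = 1.0  # source stays clamped
        act = new
    return act

# Nodes: 0 = evidence source, 1 = option A, 2 = option B.
# Two cues support A (+0.4 total), one supports B (+0.2);
# the options inhibit each other (-0.2).
W = [
    [0.0, 0.4, 0.2],
    [0.4, 0.0, -0.2],
    [0.2, -0.2, 0.0],
]

act = settle(W)
choice = "A" if act[1] > act[2] else "B"
print(choice)
```

After the network settles, the option with the higher activation is "chosen"; the inhibition between options is what automatically reduces inconsistencies, as in step 2 above.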

Exercise:
Participants were given 3 minutes to answer 8 questions purposely intended to induce cognitive biases (from here). The short time limit was enforced to make participants rely on their System 1 intuition. After completing the questions, participants were instructed to answer the same questions again, but without a time limit; they could not look up answers. System 2 thinking allows for greater logical capacity.

Accuracy and logic improved in the System 2 round on every question. These findings suggest that intuition may be a reliable method for accurate assessment, but an analyst who slows down and takes the time to consider the logical complexities inherent in a problem will likely produce a better answer.
 
What did we learn from the Intuition Exercise?
Cognitive biases may not have the negative effects on intelligence analysis that the common, pejorative sense of the term implies. Thinking slow may actually overcome many of the biases that could negatively affect an assessment anyway. More research is needed to show how much structured analytic methods benefit intelligence analysis (e.g., “Bayesian analysis increases accuracy over intuition by 7 percent.”).

Friday, November 7, 2014

Feeling before knowing why: The role of the orbitofrontal cortex in intuitive judgments

Summary
This 2014 study in Cognitive, Affective, & Behavioral Neuroscience used magnetoencephalography (MEG) to clarify the role of the orbitofrontal cortex (OFC) in intuitive judgment processes. Previous studies suggested that the OFC is crucial to intuitive processes, but its specific role was unclear. The study delineates “decisions under uncertainty” from “decisions under risk.” When an analyst needs to decide quickly between multiple alternatives and the consequences of the outcomes are unknown, decisions are made on the basis of incomplete information and usually under time limitations. This is different from situations where the analyst knows all possible alternatives and outcomes, as well as their probabilities, beforehand. To deal with decisions under uncertainty, an analyst needs rapid judgment abilities that do not depend on a conscious thought process moving through all the steps of reasoning. The research defines intuition as rapid judgments based on hunches that cannot be explicitly described but nevertheless guide subsequent action.

The study hypothesized that the OFC functions as an early integrator of incomplete stimulus input guiding subsequent processing by means of a coarse representation of the gist of the incomplete information. The researchers used MEG to record participant electromagnetic brain responses during a visual coherence judgment task. The results indicate that OFC activation occurred independently of physical stimulus characteristics, task requirements, and participant explicit recognition of the stimulus presented.

Preliminary neural model of intuitive processing. The OFC functions as an integrator of stimulus input and processes input toward a coarse representation. An initial hunch or gut feeling that forms a judgment and leads to subsequent action reflects the coarse representation.
To test the empirical plausibility of the model suggesting that OFC activation reflects the initial intuitive perception and precedes later stimulus processing geared toward explicit reasoning, the study's experiment used MEG to record participant responses during a visual coherence judgment task.

Ten participants worked through 285 trials in five blocks containing 57 trials each. For each participant, 285 line drawings were randomized in sequence. On every trial, a line drawing was presented for 500 milliseconds. Subsequently, each participant had 2 seconds to decide whether or not the presented line drawing showed a nameable object (was the line drawing coherent?). If the participant judged a line drawing as coherent, they had another 2 seconds to indicate if they could actually name the object. Participants were presented with line drawings of either fragmented but still nameable objects or their scrambled counterparts and had to decide for each stimulus whether they believed it was nameable and, if so, whether they could actually name it. Participants were able to discriminate above chance between fragmented and scrambled stimuli. This result also held true when participants stated that they were not able to name the object, which supports the assertion that the present task involves intuitive coherence judgments. The researchers defined three levels of fragmentation according to three filters differing in their capacity to mask the object in the line drawing. Fragmented and scrambled line drawings had exactly the same pixel information and differed only in their higher-order meaning.

The results indicate that OFC activation in intuitive judgments is linked to initial feelings of coherence that guide subsequent decision and action. Alternative interpretations of OFC activation, reflecting differences in physical stimulus characteristics, task requirements, or explicit object recognition, were ruled out. The OFC has a high number of anatomical as well as functional connections to many different brain areas. Previous research showed that the OFC has strong interconnections with subcortical structures responsible for emotional behavior and memory functions (i.e., the amygdala, the entorhinal cortex, and the hippocampus), as well as visceral and motor control (i.e., the hypothalamus, the brainstem, and the striatum).

The researchers postulate that subcortical structures likely make possible an integration of experience and current stimulus, a prerequisite to extracting the overall gist of a concept. Subcortical structures also likely enable the triggering of quick behavioral responses, with rapidity being a main attribute of intuitive decision making. Further research is required to confirm these postulations. The findings reaffirm that the OFC plays a crucial role in intuitive processing, creating abstract perceptions that lead to initial feelings of coherence and trigger quick action.

Critique
Even though this experiment uses line drawings derived from a database of Snodgrass figures and tests inferences from visual detections, it is an effective proxy for the mental shortcuts and symbolic processing that takes place during general intuitive judgments. The framework the study uses draws upon previous literature on intuition and suggests that intuition is a process with four discrete levels of awareness representing knowing without being able to explain how something is known. 
  1. Physical - Associated with bodily sensations 
  2. Emotional - Intuition enters consciousness through feelings and a vague sense that one is supposed to do something and instances of immediate preferences based on prior experience and feelings
  3. Mental - Comes into awareness through images and an inner vision 
  4. Decision - Ability to come to conclusions on the basis of insufficient information 
Source
Horr, N. K., Braun, C., & Volz, K. G. (2014). Feeling before knowing why: The role of the orbitofrontal cortex in intuitive judgments-an MEG study. Cognitive, Affective, & Behavioral Neuroscience, 14(4), 1271–1285.

Expertise-Based Intuition and Decision Making in Organizations

By: Eduardo Salas, Michael A. Rosen and Deborah DiazGranados

Summary:
The authors of this review examined literature on intuition, expertise, and how expertise-based intuition plays a role for decision makers in organizations. According to the authors “Expertise is at the root of effective intuitive decision making in complex organizational settings, and therefore understanding how to develop and manage effective intuition in organizations is, in part, linked to an understanding of human expertise.”  Expertise-based intuition is different because it draws on domain-specific knowledge to answer questions. Expertise and intuition are not synonymous.

However, a person cannot rely on intuition alone; overrelying on intuition can itself be a source of error. Instead, an organization should set performance and developmental mechanisms for expertise-based intuition. The literature points to several conditions under which intuition is more likely to be accurate, relating to characteristics of the decision maker, the decision task, and the decision environment. A person’s intuition is rooted in unconscious processing, which provides a quick judgment on complex patterns of relationships. However, if decision makers are taken outside their domain of expertise, the accuracy of their intuition decreases.

Since intuition is based on expertise, experts possess both the experience and the knowledge to take advantage of the intuition process. Expertise research and naturalistic decision making (NDM) help explain the role of intuition in the decision-making process. NDM is a line of decision-making research that aims to understand “the way people use their experience to make decisions in field settings.” Tables 2 and 3 provide a summary of the mechanisms of performance and the mechanisms of development.

Critique:
While the authors adequately explained why intuition plays a large role in organizations and how to improve expertise-based intuitive decision making, they only laid out the existing knowledge from management science. Looking at intuition from a different field, such as organizational leadership or intelligence, could produce different results. In addition, the authors did not address the level of expertise needed to be considered an expert, which situations call for intuitive decision making, or how people react to intuitive expertise-based decisions.

Source:

Salas, E., Rosen, M. A., & DiazGranados, D. (2010). Expertise-based intuition and decision making in organizations. Journal of Management, 36(4), 941–973. doi:10.1177/0149206309350084

Does Intuition Beat Fast and Frugal Heuristics?

Summary
In his chapter titled “Does Intuition Beat Fast and Frugal Heuristics?” Glockner examined whether the automatic process of developing a decision is quicker and more accurate than other heuristic approaches.

Glockner discussed three methods by which a person can make a decision, two of which are method based, and the third is an automatic process.

            Weighted Additive Strategy (WADD): choosing the option with the highest weighted sum of criteria. This method requires an individual to process all available information to make a decision.

            Take the Best (TTB): choosing the best option with respect to the one criterion deemed most important. A person uses only information pertaining to that criterion to make the decision.

            Consistency-Maximizing Strategy (CMS): an automatic process by which a person identifies consistencies between available pieces of information, whether provided or in memory, to make a decision.

The CMS is a three-step process by which a person can make a decision:
  1. The person must first activate all associated information within their memory
  2. A person then automatically reduces the number of inconsistencies between pieces of information
  3. A resulting decision is formed based on the connections between the available information
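For contrast, the two deliberate strategies lend themselves to direct sketches. The cue weights and example cities below are invented for illustration, not taken from Glockner's materials:

```python
# Sketch of WADD and TTB on a Glockner-style city-size task.
# Cues: capital city? has a university? has a major league team?
# Cue weights/validities are illustrative assumptions.

CUES = ["capital", "university", "team"]
WEIGHTS = {"capital": 0.8, "university": 0.7, "team": 0.6}

def wadd(city_a, city_b):
    """Weighted additive: pick the option with the higher weighted cue sum."""
    score = lambda city: sum(WEIGHTS[k] * city[k] for k in CUES)
    return "A" if score(city_a) >= score(city_b) else "B"

def ttb(city_a, city_b):
    """Take the best: walk cues in validity order; the first cue that
    discriminates between the options decides."""
    for k in sorted(CUES, key=WEIGHTS.get, reverse=True):
        if city_a[k] != city_b[k]:
            return "A" if city_a[k] else "B"
    return "tie"  # no cue discriminates; the real heuristic would guess

a = {"capital": 1, "university": 1, "team": 0}
b = {"capital": 0, "university": 1, "team": 1}
print(wadd(a, b), ttb(a, b))
```

Note how WADD touches every cue while TTB stops at the first discriminating one; that difference in information processed is what makes TTB "fast and frugal."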
Glockner hypothesized that choices made by those using CMS would mirror the choices made by the WADD group. Additionally, Glockner hypothesized that those using CMS would have shorter decision-making times and that their times would increase with the number of inconsistencies.
To test this hypothesis, Glockner asked an experimental group to make a decision. Participants were provided a list of cities and asked to determine which city had more inhabitants. Each participant was given 3 facts about each city: (A) whether the city was a capital, (B) whether it had a university, and (C) whether it had a major league sports team. This process was repeated with 6 different questions. Afterward, the participants disclosed their decision-making processes and were divided into groups (WADD, TTB, and CMS). Glockner identified that 63% used CMS, 24% used TTB, and 2% used WADD. Those using CMS had the lowest decision-making times; however, the difference was not significant.

Results of this study found that CMS was the most frequently used decision process model. Only a small portion opted for a complex process (WADD) in making their decisions. Glockner found in this study that the results between decisions made using CMS (i.e. intuition) and a complex processing model (WADD) did not differ.

Critique
Glockner’s findings adequately show that CMS is the most used decision-processing model; however, his experimental design cannot unequivocally determine that it is the better of the two. The number of participants using a complex decision-making model (WADD) was tiny in comparison to those using CMS, so the comparison is not statistically sound. Further research with equal group sizes of intuitive and heuristic decision makers must be done before making the leap that one is better than the other.

Source
Glockner, Andreas. (2007). Does intuition beat fast and frugal heuristics? A systematic empirical analysis. 

The Paradox of Intuitive Analysis and the Implications for Professionalism

Authors: Kirsty Martin, Mark Kebbell, Louise Porter, and Michael Townsley

Summary:

Martin et al. examined the available research on how intelligence analysts perform analysis, with the intention of applying their findings specifically to criminal intelligence analysts. The main question, as the authors identify it, is whether intelligence should be treated as an art (where intuition and experience play a key role), a science (where the scientific method is the driving force), or a combination of the two. Their research identifies three different models: traditional decision making (TDM), naturalistic decision making (NDM), and cognitive continuum theory (CCT).

In their research, Martin et al. identify the analyst, not the client, as the decision maker, whereas most research identifies the client in that role. The focus is on how analysts should make decisions as they examine a piece of evidence, either in an intuitive fashion (art) or via a rule-based method (science).

TDM Models
TDM sets formal rules for analysts to move through while making decisions on pieces of evidence. These models are based on three key principles: rationality, maximization, and transitivity. Analysts follow logical, rule-based processes when making decisions and attempt to maximize their accuracy in seeking the best available decision.

Examples of TDM models include decision tree analysis, Sleipnir, analysis of competing hypotheses (ACH), and Bayesian analysis. These methods allow analysts to clearly show the process taken in coming to an estimate so that it can be judged and altered if needed. The authors found that while TDM models often result in a good estimate, a mistake made in their application can leave them less accurate than intuitive strategies.

NDM Models
NDM models are based on matching the analyst’s experience to key pieces of evidence to come to a conclusion. Recognition-primed decision making (RPD) is the best-known model: the analyst immerses himself or herself in the situation, then identifies certain indicators that can be matched to previous experiences. The literature identifies 10 main characteristics of NDM settings:

1) Ill-defined goals and ill-structured tasks
2) Uncertainty, ambiguity and missing data
3) Shifting and competing goals
4) Dynamic and changing conditions
5) Action-feedback loops
6) Time constraints
7) High stakes
8) Multiple Layers
9) Organizational goals and norms
10) Experienced decision makers

CCT Models
CCT models recognize that both intuitive and analytic modes are often used when analyzing a problem, to varying degrees depending on the task. The two sit at opposite ends of a continuum, and the analyst’s position on that continuum changes with the task. For example, time restraints may prevent an analyst from exploring all the available courses of action; here the analyst will use intuition to identify the top COAs to examine, and from there can use an analytic method such as ACH to break down a COA.

Critique:
The purpose of this study was to identify the different schools of thought on intelligence as a science versus an art and to apply them to criminal intelligence. For all three models, the authors state that no researchers have examined the models specifically for criminal analysts. In terms of developing a foundation for future researchers to conduct experiments on this topic, the authors did a good job.

As this was more of a literature review, it offers no clear and decisive evidence as to which method works best for criminal analysts.  The authors do, however, favor CCT models as the likely best approach to intelligence analysis.  I agree with this conclusion, as the approach allows analysts to take the strengths of each method and use them to mitigate the weaknesses of the others.  Empirical research is still needed, though, to confirm this hypothesis.

Source:

Martin, K., Kebbell, M., Porter, L., & Townsley, M. (2011). The paradox of intuitive analysis and the implications for professionalism. Journal of the AIPIO, 19(1).



Thursday, November 6, 2014

A machine for jumping to conclusions

Summary:
In his 2013 book, Thinking: Fast and Slow, Daniel Kahneman reflects on decades of psychological research, research that earned him the Nobel Prize in economics, to explain the fundamental heuristics and processes of decision-making.

Most of the chapters in his book explore cognitive biases.  A central theme of this book is that intuition is the fuel of cognitive bias.  System 1 thinking refers to our fast thinking, the decision making we use almost all of the time.  System 2 thinking is our more logical and cautious approach to decision making.  System 1 thinking is automatic and constant, whereas System 2 thinking is only active when we force it to be.

System 1 thinking is where intuition influences us the most, and numerous studies Kahneman cites show that intuition damages our ability to arrive at correct answers and good decisions.  Our brains, particularly in System 1 thinking, make decisions using only the information at hand (WYSIATI: what you see is all there is).  According to Kahneman, “System 1 excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have.”  System 1 also latches on to information only if it can build a coherent story out of it, which is why intuitive judgments are flawed a majority, if not all, of the time.

For instance, in a study from Stanford, one group of participants listened to a one-sided description of a legal case, while another group listened to arguments representing both sides.  The information given to both groups was the same: it was mainly pro-plaintiff, but it also allowed participants to infer a pro-defense story.  Rather than take the time to infer or deduce an alternate story, and even though participants knew the study was designed to bias their decision, the one-sided presentation affected jurors’ judgments profoundly.

Critique:
Thinking: Fast and Slow should be required reading for any professional involved with decision-making support or, especially, anyone with the power to make decisions.  The plethora of research Kahneman uses to support his conclusions makes this book one of the best syntheses of decision-making research ever written.

Intuition allows people to take shortcuts (System 1 thinking), and it has served us well in evolutionary terms: if we hear rustling in a nearby bush, we run, just as our distant ancestors must have, or else we wouldn't be here today.  However, in a more civilized and interconnected world, where the actions of one national leader or business can have profound effects on affairs on the other side of the planet, we must use System 2 thinking more often.  Unchecked intuition is dangerous for decision makers and intelligence personnel.

Source:
Kahneman, D. (2013). A machine for jumping to conclusions. In Thinking: fast and slow (pp. 79–89).

Monday, November 3, 2014

Summary of Findings: Speed Reading (3.5 out of 5 stars)


Note: This post represents the synthesis of the thoughts, procedures and experiences of others as represented in the 5 articles read in advance (see previous posts) and the discussion among the students and instructor during the Advanced Analytic Techniques class at Mercyhurst University in November 2014 regarding Speed Reading specifically. This technique was evaluated based on its overall validity, simplicity, flexibility and its ability to effectively use unstructured data.

Description:
Speed reading is an analytic modifier used to improve a reader's average reading speed, measured in words per minute, without reducing comprehension. Speed reading attempts to eliminate fixation, regression, and auditory reassurance (subvocalization). Teaching speed reading involves a fundamental paradigm shift away from how most people are traditionally taught to read, moving from phonetic reading toward direct semantic processing of visual cues to reduce cognitive load and enable faster reading.

Strengths:
1. Speed reading can significantly reduce the amount of time needed in collection and analysis
2. There are several ways to increase reading speed and maintain comprehension
3. There are a variety of courses and supplemental tools to accelerate the speed reading learning process
4. Past research has identified the leading causes of slow reading speeds, enabling readers to reflect and adjust on those causes at their own pace

Weaknesses:
1. There are many different speed reading techniques, with no universally accepted best practice
2. Slow process that requires a significant time commitment (approximately an hour a day)
3. Reading at high speeds can have a tiring effect on the individual as opposed to reading at normal speeds
4. While there is evidence that shows these techniques help increase speed, there is a lack of strong evidence showing increased reading comprehension
5. When learning speed reading, reading comprehension is lower until the participant becomes more comfortable with the technique

Step by Step:
Note: This step by step discussion lists only techniques examined by this group of analysts
  1. Peripheral vision: Widen your peripheral vision to take in more of each line at once
  2. Chunking: Read multiple words (3 to 4) at a time as clusters
  3. Pen or Hand: Use a pen or finger to move the eyes word to word at an accelerated pace
  4. Deadline Strategy: Set a fixed amount of time to read a passage, then keep decreasing that time to increase speed
  5. First Sentence: Slow down on the first sentence of a paragraph and speed up for the rest of it
  6. Interest: Read passages or books that interest you; this prevents your mind from wandering while reading

Exercise:
Students first went to www.readingsoft.com, a free online speed reading program, and read the test passage at their normal speed to get a words-per-minute score, which they wrote down. After the initial reading test, students discussed the factors that can slow down a person's reading speed: fixation, regression, and auditory reassurance. Students then discussed ways to increase their reading speed (see the step-by-step list above). The students were then asked to retake the www.readingsoft.com test using one of the techniques discussed, to write down their second reading speed, and to find a short online article of interest and copy and paste it into www.accelareader.com. After adjusting the settings in this program, students were able to view and practice some of the techniques used to increase their reading speed.
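The words-per-minute score the exercise asks students to record is straightforward to compute by hand. A minimal sketch, using an invented word count and reading time:

```python
# Words-per-minute (WPM): word count divided by reading time in minutes.
def words_per_minute(text, seconds):
    """Return reading speed in words per minute."""
    word_count = len(text.split())
    return word_count * 60.0 / seconds

# A 600-word placeholder passage read in 2 minutes
passage = "word " * 600
print(words_per_minute(passage, 120))  # 300.0
```

Recording the score before and after applying a technique, as in the class exercise, gives a rough measure of the speed gain, though it says nothing about whether comprehension was maintained.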
 
What did we learn from the Speed Reading Exercise?
The use of various speed-reading techniques led to increased reading speeds for the participants.

The relationship between speed-reading and forecasting accuracy is unclear and untested.