Friday, September 11, 2015

The Delphi Method for Graduate Research

Gregory J. Skulmoski, Francis T. Hartman, and Jennifer Krahn (2007)

Summary:
The authors of this article for the Journal of Information Technology Education are from Zayed University in Dubai and the University of Calgary, Canada.  Their aim is to provide graduate students with the knowledge necessary to employ the Delphi method in their own academic research, be it for a thesis or dissertation.  The primary scope of the piece centers on the Information Systems (IS) and Information Technology (IT) fields, but the authors believe that the Delphi method can be used as a research technique in a wide variety of fields, not just their own.  This is because they view the Delphi method as a “flexible research technique” and define it as follows:

The Delphi method is an iterative process to collect and distill the anonymous judgments of experts using a series of data collection and analysis techniques interspersed with feedback.

The authors also put forth that this method is most suitable when “the goal is to improve our understanding of problems, opportunities, solutions, or to develop forecasts.” 
In order to improve the reader’s understanding of the methodology, there is a brief historical overview of what the authors refer to as “Classical Delphi”.  This is the methodology developed by Norman Dalkey for the RAND Corporation in the 1950s.  In the Delphi method, a panel of experts is given a survey with questions to answer.  After they have returned the survey, a second survey is sent out based on the results of the first; this proceeds over a pre-determined number of rounds.  Following the completion of the last survey, analysis of the final results is conducted.  Classical Delphi is defined by four key features:  
  1. Anonymity of Delphi participants: participants are freed from social and professional pressures by the use of anonymous surveys.  Those who are regarded as greats in the field will be judged solely on their answers and not by their reputations.  This also frees participants to think outside traditional lines without fear of reprisal.
  2. Iteration: participants are given the opportunity to develop their own understanding and beliefs as the study progresses.
  3. Controlled feedback: as rounds of the Delphi progress, participants are informed of the anonymous perspectives of other participants.  This is one of the core advantages of the Delphi method: the group as a whole generates a broad array of ideas to begin with and zeroes in on the best ones as time moves on.
  4. Statistical aggregation of group response: this allows for statistical analysis of results.
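The statistical aggregation in feature 4 is often implemented as a median with an interquartile range fed back to the panel. Here is a minimal Python sketch with invented panel estimates; the article itself does not prescribe a specific statistic, so the median/IQR choice is an assumption:

```python
from statistics import median

def quartiles(values):
    """Return (Q1, median, Q3) using the median-of-halves method."""
    s = sorted(values)
    mid = len(s) // 2
    lower = s[:mid]
    upper = s[mid + 1:] if len(s) % 2 else s[mid:]
    return median(lower), median(s), median(upper)

# Invented panel estimates (e.g., years until some event) for one round
round_1 = [5, 7, 8, 8, 10, 12, 20]
q1, med, q3 = quartiles(round_1)
feedback = {"median": med, "iqr": (q1, q3)}
# In the next round, panelists whose estimates fall outside the IQR
# would typically be asked to justify or revise their answers.
```

Feeding back only the summary, not individual answers, is what preserves the anonymity described in feature 1.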

This “Classical Delphi” model has been adapted by many in the decades following its creation; these adaptations have made it more widely applicable and more responsive to diverse requirements.  Below is a visualization of a more modern take on the model which has been used in some of the authors’ graduate students’ projects:



One of the features of the paper is the design considerations that should be taken into account while utilizing the Delphi method in graduate research.  The authors discuss pros and cons of:  
  • Methodological Choices
  • The Broadness of the Initial Question
  • Criteria for who is to be Considered an Expert
  • The Number of Participants
  • The Number of Rounds
  • The Mode of Interaction with Participants
  • Methodological Rigor
  • The Results
  • Further Verification
  • Publication

The authors close with what they consider two important points, “First, the Delphi approach can be aggressively and creatively adapted to a particular situation. Second, when adapting the approach, there is a need to balance validity with innovation. In other words, the greater the departure from classical Delphi, the more likely it is that the researcher will want to validate the results, by triangulation, with another research approach.”

Critique:

This article provides a lot of good information for graduate students who may be unfamiliar with the Delphi method.  It is specifically focused on how they can put the methodology to use without too much difficulty, and what they need to think about as they do so.  One of the largest limitations of the piece, in my opinion, is that it spends too little time discussing the situations in which the method is not appropriate.  The article goes to great lengths to explain just how adaptable the methodology is, but even in the Executive Summary it states that Delphi “is not a method for all types of IS research questions.”  The article raises this concern but rarely returns to it.  As a result, the use of the Delphi method feels like a foregone conclusion rather than just one more valuable tool in the analytic toolbox.  

A Delphi Consensus Approach to Challenging Case Scenarios in Moderate-to-Severe Psoriasis

Summary:
Psoriasis is a condition in which skin cells build up and form scales and itchy, dry patches.  It is a challenging condition to treat given the lack of literature, the simultaneous presence of two chronic diseases in patients, and the difficulty of diagnosis.  A consensus panel of 14 experts in the psoriasis field was formed to use a Delphi method exercise to identify challenging clinical scenarios and rank treatment approaches, in an effort to provide guidance to the practicing clinician.  The Delphi method is well suited to address healthcare-related issues since the outcome represents the collective judgment of the panel of experts.  The three basic characteristics of the Delphi method are:

1. Repeated individual questioning of the experts
2. The avoidance of direct confrontation among the experts
3. Interspersed controlled opinion and feedback

The Delphi method works to achieve consensus on complex scenarios where rigorous data are lacking.  The panelists extensively review all available data before presenting and discussing it.  One of the most important aspects is the use of anonymous voting by the panelists, as it eliminates the effects of reputation in settling controversy.  The anonymity also allows panelists to vote honestly, avoiding “groupthink” as well as any following of charismatic panelists and dogmatism.  Delphi is applied in three steps over about five months to difficult-to-treat clinical scenarios in patients with moderate-to-severe psoriasis.  The steps are:

1. Selection of difficult-to-treat psoriasis clinical scenarios;
2. Selection of potential psoriasis treatments;
3. The matching, through systematic, iterative rounds of voting, of clinical scenarios with the most appropriate treatments, based on assessment of the peer-reviewed literature.
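The matching step can be imagined as an aggregation of anonymous rankings. Below is a hedged sketch using a simple Borda-style count; the treatment names, ballots, and scoring rule are illustrative assumptions, not the panel's actual procedure:

```python
from collections import defaultdict

def rank_treatments(ballots):
    """Borda-style count: a treatment ranked first out of n earns n points."""
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for position, treatment in enumerate(ballot):
            scores[treatment] += n - position
    return sorted(scores, key=scores.get, reverse=True)

# Invented ballots: each panelist's ranking for one clinical scenario
ballots = [
    ["methotrexate", "biologic", "phototherapy"],
    ["biologic", "methotrexate", "phototherapy"],
    ["biologic", "phototherapy", "methotrexate"],
]
consensus_order = rank_treatments(ballots)  # most- to least-preferred
```

Repeating this aggregation over successive rounds, with feedback in between, is what lets the panel converge on a ranked treatment list for each scenario.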

Once 14 psoriasis experts from the U.S. were identified, each individual panelist was asked to list challenging clinical scenarios and therapeutic options for psoriasis.  The scenarios were then selected and ranked, and the treatment options were listed.  The panelists discussed 24 of the top-ranked scenarios during a live meeting and they voted and ranked the treatment choices for each.  The article presents 5 of the 24 discussed case scenarios.  The Delphi exercise resulted in guidelines for practicing physicians to use when confronted with patients with challenging cases of psoriasis.

Critique:
While the Delphi method is well suited to address healthcare-related issues, with the panel of experts selecting rational treatment choices for each of the discussed scenarios, their solutions are not yet backed by rigorous studies supporting either their conclusions or the effectiveness of Delphi itself.  Delphi has potential limitations stemming from conflicting interests among the panelists and from their differing experiences and backgrounds.  Additionally, the experts were chosen only from the U.S., with treatment options based on what is locally available there, so the conclusions may not be relevant worldwide.  Nonetheless, Delphi’s use of anonymity provides an unbiased view of the available clinical data, which leads to a more objective consensus.

Source: 
"A Delphi Consensus Approach to Challenging Case Scenarios in Moderate-to-Severe Psoriasis: Part 2" 
By: Bruce E. Strober, Jennifer Clay Cather, David Cohen, Jeffrey J. Crowley, Kenneth B. Gordon, Alice B. Gottlieb, Arthur F. Kavanaugh, Neil J. Korman, Gerald G. Krueger, Craig L. Leonardi, Sergio Schwartzman, Jeffrey M. Sobell, Gary E. Solomon, and Melodie Young
http://link.springer.com/article/10.1007%2Fs13555-012-0002-x

The Delphi method: a powerful tool for strategic management

Summary

The Delphi method structures and facilitates group communication focused on a complex problem so that, over a series of iterations, a group consensus can be achieved about some future direction. It has five major characteristics:
  1. The sample consists of a "panel" of carefully selected experts representing a broad spectrum of opinion on the topic or issue being examined. 
  2. Participants are usually anonymous. 
  3. The "moderator" (i.e. researcher) constructs a series of structured questionnaires and feedback reports for the panel over the course of the Delphi.
  4. It is an iterative process often involving three to four iterations or “rounds” of questionnaires and feedback reports.       
  5. There is an output, typically in the form of a research report with the Delphi results, the forecasts, policy and program options with their strengths and weaknesses, recommendations to senior management and, possibly, action plans for developing and implementing the policies and programs.
        It has several advantages over other group decision-making techniques such as the nominal group technique (NGT) and interacting group method (IGM). First, panel members are not swayed by group pressures or vocal members as can easily happen with NGT and IGM. Second, interpersonal conflicts and communication problems are virtually nonexistent because panel members do not interact. Third, travel costs and the problem of coordinating to get everyone in the same place at the same time are not factors. The Delphi method consists of four key planning and execution activities: 
  1. Problem definition: It is necessary to define the nature and scope of the problem, the expected outcomes of the study, and the appropriateness of the Delphi method.
  2. Panel selection: The Delphi method requires a panel of subject-matter experts (SMEs). The criteria for determining who qualifies as an SME may rest not only on knowledge but could include criteria such as personal experience or being a stakeholder. In addition, it is important to inform prospective panel members that their commitment to participate would involve several rounds of questionnaires and feedback, possibly extending over a period of months. On the other hand, panel selection might not be random because, in some research fields, there might be very few SMEs; thus, one might select all known SMEs.
  3. Determining the panel size: While there is no one sample size advocated for Delphi studies, rules of thumb suggest that 15-30 carefully selected SMEs could be used for a heterogeneous population and as few as five to ten for a homogeneous population. The careful selection of SMEs is a key factor in the Delphi method that enables a researcher to confidently use a small panel. That is not to say that large panels are never used; indeed, some Delphi studies have used large panels numbering over 100 members.
  4. Conducting the Delphi rounds: A Delphi study usually involves three to four rounds or iterations, not just a one-time effort; thus, the moderator is able to set up Round 1 according to some strategy knowing that another two to three rounds could be conducted to achieve consensus or other goals. Although three to four rounds are typically used, the moderator should stop the rounds when the criteria for consensus are achieved, when results become repetitive, or when an impasse is reached. Following the final round, the moderator prepares a comprehensive report and distributes it, or a short version, to all members.
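The stopping conditions in activity 4 (consensus achieved, results repetitive) could be operationalized along these lines; the statistics and thresholds below are my own illustrative assumptions, not Loo's:

```python
from statistics import median, pstdev

def should_stop(current, previous, spread_limit=1.0, change_limit=0.25):
    """Stop when panel spread is tight (consensus) or the round-to-round
    change in the median is negligible (repetitive results)."""
    if pstdev(current) <= spread_limit:      # consensus reached
        return True
    if previous is not None and abs(median(current) - median(previous)) <= change_limit:
        return True                          # results repetitive
    return False

# Invented estimates from two consecutive rounds
round_2 = [6, 7, 7, 8, 8, 9]
round_3 = [7, 7, 8, 8, 8, 8]
stop = should_stop(round_3, round_2)
```

Detecting an impasse (the third stopping condition) is harder to automate and usually remains a moderator judgment call.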
The Delphi method deserves serious consideration because the careful design and execution of a Delphi study should lead to useful findings for policy makers and program managers. However, it is recommended that researchers consider using a triangulation of methods rather than the reliance upon a single method. For many situations, researchers may find the combination of a Delphi study and a survey with two independent samples useful and practical.
      
        Critique
        This paper describes the characteristics of the Delphi method, including criticisms of it, the steps in conducting a Delphi study, and pitfalls to avoid. It is a good overview for understanding the basics of the method. However, the author does not mention an important weakness of the Delphi method: from my point of view, the method is heavily contingent upon the researcher’s ability. If the researcher has enough knowledge and skill to conduct the method, it can produce valuable results; otherwise, it is just a fruitless waste of time.   
      
        Source:
        Robert Loo, (2002) "The Delphi method: a powerful tool for strategic management", Policing: An International Journal of Police Strategies & Management, Vol. 25 Iss: 4, pp.762 – 769. Retrieved from http://search.proquest.com.ezproxy.mercyhurst.edu/docview/211297480/A90AEE4047D44C01PQ/1?accountid=27687

Wednesday, September 9, 2015

The Selection of Delphi Panels for Strategic Planning Purposes

John F. Preble (1984)

Summary:
Preble develops and answers three main questions in this article:
  1. Do results obtained using an intracompany Delphi panel tend to differ significantly from those obtained using an intercompany Delphi panel?
  2. Are the forecasts generated consistent?
  3. If the results are consistent, which panel type is recommended?
Previous literature on Delphi panels, particularly Martino (1972) and Johnson (1976), recommends using a panel of experts from outside the organization. These studies operated on the implicit assumption that external panelists were likely to be better qualified or more expert than any panelists within the organization, despite a lack of evidence supporting this assumption.

The author composed two 15-member Delphi panels which consisted of top-level employees from large, successful life insurance firms headquartered in the north-eastern United States. The positions represented included Legal, Public Relations, Human Resources, and other such diverse roles. The intracompany panel was composed of 15 members from the same company, while the intercompany panel consisted of 15 members from 15 different companies; three participants dropped out midway through the study, leaving 14 and 12 panelists, respectively. The panelists were asked to provide estimates as to the likelihood and timing of 27 different events, dated 1985, 1990, 1995, and 2000, and provide their degree of familiarity with each event. The author provided three rounds of questionnaires to the panelists, including statistical feedback after the first round and qualitative reasoning behind outliers after the second round.

After collecting the data, the author conducted t-tests to determine statistically significant differences. Seventy-six percent of the t-tests were not significant, meaning that in most cases the forecast from the intracompany panel was “quite close” to the corresponding intercompany estimate. Considering estimates categorized as unlikely (0.0-2.49), slightly likely (2.50-4.99), likely (5.00-7.49), or very likely (7.50-10.0), 95 of 96 comparisons were either in the same category or the next closest category (see Figure 1). These results show that intracompany panel estimates are about the same as intercompany panel estimates. Because these estimates are consistent, Preble recommends that intracompany panels be used by strategic planners in order to increase administrative control, decrease the number of dropouts and overall costs, and satisfy the need for confidentiality of proprietary information.
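Preble's category comparison can be expressed as a small sketch: map each panel mean to its likelihood band and check whether the two panels land in the same or the next-closest category. The example means here are invented:

```python
# Upper bounds are exclusive; 10.01 lets a score of exactly 10.0 qualify.
BANDS = [(2.50, "unlikely"), (5.00, "slightly likely"),
         (7.50, "likely"), (10.01, "very likely")]

def category(score):
    for upper, label in BANDS:
        if score < upper:
            return label
    raise ValueError("score must be between 0.0 and 10.0")

def panels_agree(intra_mean, inter_mean):
    """Same category or the next-closest category, as in Figure 1."""
    labels = [label for _, label in BANDS]
    gap = abs(labels.index(category(intra_mean)) -
              labels.index(category(inter_mean)))
    return gap <= 1

# Invented example: intracompany mean 6.8 vs intercompany mean 7.6
agree = panels_agree(6.8, 7.6)
```

Under this scheme, 95 of Preble's 96 comparisons would return True.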

Figure 1 - Mean scores and classification comparisons
Critique:
Methodologically, the only significant change is that a few female members should have been included in the intercompany Delphi panel; otherwise, the panel demographics are very similar (see Figure 2). 
Figure 2 - Panel demographics
One of the downsides to this method is that it took four months to complete the three rounds of the Delphi; under tight deadlines, therefore, the method could be overlooked. The method could also drift dangerously close to groupthink. While anonymity prevents panelists from being dominated by stronger personalities, the opportunity for panelists to change their responses in the second round based on the statistical data of the first round could eliminate any significant dissent. While the goal of Delphi is to converge on a central estimate, the outliers may have unique insights or experiences that are not reflected in statistical syntheses, but only come through in the qualitative comments.

Source:
Preble, J.F. (1984). The Selection of Delphi Panels for Strategic Planning Purposes. Strategic Management Journal, 5(2), 157-170.

Monday, August 31, 2015

Summary of Findings: Role-Playing (3.5 out of 5 stars)

Note: This post represents the synthesis of the thoughts, procedures and experiences of others as represented in the  articles read in advance (see previous posts) and the discussion among the students and instructor during the Advanced Analytic Techniques class at Mercyhurst University in August 2015 regarding Role Playing as an Analytic Technique specifically. This technique was evaluated based on its overall validity, simplicity, flexibility and its ability to effectively use unstructured data.


Description:
Role-Playing is an analytic technique in which participants are divided and assigned specific roles to simulate a problem discussion. Each participant must adopt the specific perspective of their assigned role. Each group that is affected by the problem should be represented. The role-players then engage in a discussion of the topic. This technique is useful for both forecasting solutions to problems and for analyzing previously suggested answers.


Strengths:
  • It helps groups solve problems in a creative way.
  • Role-playing can take into account the complexities of a situation.
  • It helps prevent biases when implemented as proper group work.
  • It helps the group gain a different perspective.
Weaknesses:
  • The exercise must be launched properly in order to solve the problem.
  • Some participants may be hesitant.
  • Participants must genuinely put themselves in their roles.
  • Participants need adequate background information.
  • The method must identify intelligence gaps.


How-To:
  1. Assign roles and make sure people embody them
  2. Describe the situation
  3. Let the situation develop in a free form manner
  4. Upon conclusion of the exercise debrief the group


Personal Application of Technique:
Role Playing Exercise involving a potential decision by the Mercyhurst school administration to mandate that all student-athletes live on campus.


For this exercise, we grouped students into three teams: Student-Athletes, School Administrators, and Analysts. The three groups were requested to develop pros, cons, and a conclusion on whether this new policy is good for their specific group. The teams were given 10 minutes to develop their arguments. After the initial brainstorming period, the teams presented their ideas, and a list of similarities and differences in points of view was produced.


To improve this exercise, an additional group (non-student-athlete) should have been added. Additionally, all roles should have been assigned prior to the problem statement being given.

For Further Information:

Forecasting decisions in conflict situations
Wisdom of group forecasts: Does role-playing play a role?
Role-Playing our way to solutions

Forecasting decisions in conflict situations: A comparison of game theory, role-playing, and unaided judgement

Summary:

This article compared the accuracy of game theory, role-playing, and unaided judgement in forecasting decisions. The author provided substantial background information on the subject, including predicting the behavior of participants in the new competitive market for wholesale electricity after the New Zealand government transferred assets to a new private-sector electricity company, Contact Energy Ltd., in 1996. When the managers employed the role-playing method, the results were not consistent with the executives' beliefs, so they turned to game theory for answers. They found that game theory ended up not being helpful at all and that role-playing had accurately predicted the behaviors.

The article goes on to mention other tests performed by other researchers, which ultimately led the author to conclude that, for predicting human behavior, role-playing is the most accurate method, game theory the second most accurate, and unaided judgment the least accurate. This is due in part to the fact that game theory cannot take into account the complexities of situations the way role-playing can. According to the article, there is also little evidence of predictive validity for game theory in real conflicts, since it is normally tested using role-played conflicts. In short, role-playing is a more accurate method for forecasting human behavior because it offers a much greater degree of realism.

After completing this background research, the author ran an experiment of their own, creating six conflicts for participants to attempt to resolve using one of the three methods being compared. The results were consistent with previous research: the author's experiment also found role-playing to be the most accurate method, game theory the second most accurate, and unaided judgment the least accurate for forecasting human behaviors (conflict resolutions).

Critique:

In general, the experiment created and run by the author was not very well controlled. As a psychology major, this really bothered me. The first issue I noticed was that the conflicts presented in the experiment were not all invented: some of the situations came from previous research, one came from television, and another was a real-life situation from a company. The author claims these situations were probably not going to be recognized by the participants, but if they were, the accuracy of the experiment could suffer. In addition, nothing in the experiment (the methods used, the time given to find a solution, which situation the participants needed to solve, and more) was assigned to the participants randomly. Various other aspects of the tests were not controlled either.

Source:

"Forecasting decisions in conflict situations: A comparison of game theory, role-playing, and unaided judgement"
By: Kesten C. Green
http://www.umsl.edu/~sauterv/DSS/green.pdf

Sunday, August 30, 2015

Wisdom of group forecasts: Does role-playing play a role?

Summary:

The article tries to find out how effective the role-playing technique is for forecasting by applying it to a sales forecasting case. Since one hallmark of a good analytic technique is that it does not harm forecasting accuracy, this study aims to determine the effect of role-played groups on the accuracy of forecasts of future sales rates. The authors form 7 groups of participants (each with 3 members) and randomly label them as role-playing groups, while forming 6 groups as no-role-playing groups. In total, they have 13 groups and 39 participants.


The no-role-playing groups are asked to hold group discussions and come up with consensus forecasts for each product’s sales for the next period. The group rules prohibited any member from acting as a group leader and asked the participants to: (1) act with due consideration for all group members; (2) let the member given the Q identifier (selected randomly) introduce the initial forecast; (3) record their levels of satisfaction with each of the consensus forecasts; (4) record their preferred forecast (which would be equal to the consensus forecast only if they fully agreed with the group consensus); and (5) evaluate each of the group members along with a self-evaluation upon task completion.

The role-playing groups are asked to draw their roles from unmarked envelopes. These roles are the Forecasting Executive, Marketing Director, or Production Director, and all participants are given scripts describing their roles. The set of rules given to each group prohibited any member from acting as a group leader while asking the participants to: (1) act out their given roles as they believed they would be performed in an organisation; (2) act with due consideration for all group members; (3) let the forecasting executive introduce the initial forecast; (4) record their levels of satisfaction with each of the consensus forecasts; (5) record their preferred forecast (which would be equal to the consensus forecast only if they fully agreed with the group consensus); and (6) evaluate each of the group members along with a self-evaluation upon task completion.


Results:


The study does not reveal a significant difference between the no-role-playing groups and the role-playing groups in the accuracy of the consensus forecasts, nor could it find any significant difference in forecasting accuracy more broadly. However, the study shows that the commitment of the no-role-playing group members is stronger than that of the role-playing group members, owing to the role-playing members' commitment to their assigned roles and scripts.


Critique:


The group members' lack of subject matter expertise, lack of background  information regarding those sale products and so forth are very important determinants for a person while judging about those products' future sale figures. Thus, the control groups in role playing technique are always susceptible to misleading the experimenters due to lack aforementioned skill sets or background information. This may have been overcame if they could conduct their experiment with an actual business organization's members.


Source:

http://www.sciencedirect.com.ezproxy.mercyhurst.edu/science/article/pii/S0305048311001459?np=y

Role-Playing our Way to Solutions

"Role-Playing our Way to Solutions"
By: Miriam Axel-Lute
National Housing Institute
Source: https://www.bostonfed.org/commdev/c&b/2013/winter/role-playing-our-way-to-solutions.pdf

Summary:

The article discusses a community development issue that used a megacommunity simulation as a way to find solutions and methods. In this situation, the overall goal is to reduce the state of Connecticut's energy use by 25% by the year 2030. Under the leadership of the Housing Development Fund (HDF) and the guidance of the consultant company Booz Allen Hamilton, this large community role-playing simulation was formed in March 2012.

Participants from across the energy industry and the Connecticut community were selected to participate. Those represented included energy suppliers, government agencies, non-profits, private residents, the financial sector, and many more. The goal of the simulation was to find approaches that all parties could agree on to increase Connecticut household energy efficiency. According to the article, the participants were initially uneasy about fitting into the role-playing process but eventually got comfortable.

As expected, the simulation allowed participants to think differently, and many new relationships for future collaboration were formed. Participants mentioned how the simulation awakened them to how many aspects of the issue there were and how many sectors play a role in reaching the overall goal of increased energy efficiency. One interesting comment about the potential of role-playing was made by Booz Allen VP Gary Rahl, who said, “You never want to pick a goal that can simply be met by an analytical solution—figure out who needs to do what. You need a goal where, to meet it, there will be tensions between participants and no single way of getting there.” I find that very intriguing and worth noting for future role-playing simulations.

Critique:

The article discussed it, but a major issue with this study was the absence of one major participant: low-income householders. To get the full benefit of a role-playing simulation, all affected parties need to be represented. The other negative of the study was that most of the results were compromises built from previous ideas. While compromise and collaboration are good and important, I would have liked to see more innovation in a role-playing simulation. I realize that is difficult to achieve, since every side has its own goals and agenda; however, true innovation would be the best result in my opinion. The benefit of role-playing is that it increases the potential for innovation when, as Booz Allen VP Gary Rahl noted, there is no clear-cut answer. His statement above is something we should all keep in mind when discussing role-playing.






Monday, November 17, 2014

Summary of Findings: Visualization (4.5 out of 5 stars)

Note: This post represents the synthesis of the thoughts, procedures and experiences of others as represented in the 5 articles read in advance (see previous posts) and the discussion among the students and instructor during the Advanced Analytic Techniques class at Mercyhurst University in November 2014  regarding visualization specifically. This technique was evaluated based on its overall validity, simplicity, flexibility and its ability to effectively use unstructured data.

Description:
Visualization is an analytic modifier used to add an additional level of understanding and comprehension to complex data.  Data visualizations are normally associated with quantitative data, but qualitative issues can be represented as well.  There is a plethora of free, simple-to-use visualization tools available for those without computer science backgrounds (e.g., Tableau, Google Fusion Tables, CartoDB).  For a more complete list of, and links to, data visualization tools, see the resources section at the bottom of the post.  

Visualizations are most effective when they obey general rules of graphic design, but personnel knowledgeable about graphic design are rare in the intelligence community.

Strengths:
1. Can provide a tangible two- or three-dimensional object to physically show decision makers
2. Provides an effective way to present completed material
3. Can examine and display relationships that might be overlooked in written form
4. Can bolster analyses
5. Can be output automatically through software programs
6. Provides a way to identify and link patterns
7. Can give decision makers an interactive way to visualize information

Weaknesses:
1. The use of visualizations is not a one-size fits all modifier
2. Creating visuals requires time from the analyst
3. Design software is not only expensive, but can be difficult to master and use effectively
4. Analysts creating visualizations are prone to using templates which make their visuals unimaginative and plain
5. There is no set of specific criteria that determines what is an effective visualization

Step by Step:
Note: This is a reasonable description of the steps one would take in making an effective visualization.
  1. At the onset of the project, identify likely sources of data and likely visualizations for the final product
  2. During the data collection process, continue to assess what type of data you are collecting and what will be the best way to present this data to your decision maker
  3. Identify the tools and resources you will need to create the visuals
    1. If you lack the resources to acquire a certain tool needed to visualize the data in a certain way, alter your design method to find a way to display the same information in a different format
  4. Assess visuals throughout the entire project cycle to ensure that they are telling the same story that the written report tells
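The tool-selection step above (and its fallback sub-step) can be reduced to a simple decision rule. The heuristic below is a hypothetical sketch of my own, not a prescription from the post; the function name and categories are invented for illustration:

```python
def choose_visual(data_kind, geo_tool_available=True):
    """Pick a chart type from simple data characteristics.

    Hypothetical heuristic: prefer formats familiar to decision makers,
    and fall back to a simpler format when a tool (e.g. mapping software)
    is unavailable, as the fallback sub-step suggests.
    """
    if data_kind == "time series":
        return "line chart"
    if data_kind == "categorical counts":
        return "bar chart"
    if data_kind == "geospatial":
        # No mapping tool? Show the same information as counts by area.
        return "map" if geo_tool_available else "bar chart"
    return "table"

print(choose_visual("geospatial", geo_tool_available=False))  # bar chart
```

The point is not the specific rules but that the format decision is made deliberately, early, and with a planned fallback, rather than defaulting to whatever the software produces.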

Exercise:
After a brief instruction period, participants spent 15 minutes visualizing seven information units pertaining to their identity as intelligence professionals associated with Mercyhurst, incorporating images from the internet, a foam presentation board, and traditional art utensils such as markers, colored pencils, and glitter pens. The seven information units were 'what my friends think I do,' 'what my family thinks I do,' 'what society thinks I do,' 'what Mercyhurst faculty think I do,' 'what I think I do,' 'what I actually do,' and a personalized brand identity consistent with a previously identified motto, task capability, teamwork capability, experience, and team resources desired to optimize performance. The final product had to be on the foam presentation board. Each participant had one minute to communicate their visualization to the class.
 
What did we learn from the Visualization Exercise?
Visualizing disparate sources of information is a time-consuming process, particularly when the information is abstract and resistant to immediate quantification. Visual literacy is a skill undervalued in the intelligence community and not taught sufficiently in a general sense; however, the proliferation of technology expediting the process, along with emerging studies supporting the use of visualization as a communication tool, is likely to reverse this trend in the future.

Resources:

Friday, November 14, 2014

Comparing Uncertainty Visualizations for a Dynamic Decision-Making Task

Summary
This research compared various visual representations for expressing uncertainty, and additionally compared graphical representations of uncertainty against numerical representations. Bisantz et al. hypothesized that graphical representations of uncertainty are superior to numerical ones.

The study had 24 participants, aged 19-32, play a Missile Game. Bisantz et al. separated the participants into two experimental groups: one with just graphical representation and one with both graphical and numeric representation. During this exercise, participants were charged with identifying missile icons amongst bird and plane icons in order to eliminate the threat. Participants had between 5 and 20 seconds to label an icon as a missile or not. There were four different methods for displaying the icons:

  1. Most likely solid: The icon of the outcome that is most likely to occur is displayed.
  2. Most likely transparent: The icon of the outcome that is most likely to occur is displayed, but its uncertainty is shown by how transparent it is.
  3. Missile transparent: Only the missile icon shows, with its uncertainty displayed by its transparency.
  4. Toggle: Participants can switch between the three methods.


Each participant completed two trials using each of the four methods for a total of 8 attempts.
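The "most likely transparent" display works by mapping the certainty of the most likely classification to the icon's opacity. A minimal sketch of that mapping, assuming probability estimates in [0, 1] (the function name, floor value, and linear scaling are my own assumptions, not details from Bisantz et al.):

```python
def icon_alpha(p_most_likely, min_alpha=0.2):
    """Map the probability of the most likely class to icon opacity.

    A fully certain classification (p = 1.0) renders fully opaque;
    lower confidence fades the icon toward min_alpha, a floor that
    keeps even very uncertain icons from vanishing entirely.
    """
    p = max(0.0, min(1.0, p_most_likely))  # clamp bad inputs
    return min_alpha + (1.0 - min_alpha) * p

print(icon_alpha(1.0))  # 1.0 -- certain, fully opaque
print(icon_alpha(0.0))  # 0.2 -- maximally uncertain, faint but visible
```

The returned alpha would then be passed to whatever rendering layer draws the icon; the study's finding suggests pairing it with the numeric probability rather than relying on transparency alone.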
Figure 1: Overall score by graphical representation against numeric representation

The study concluded that participants scored better with the inclusion of numeric representation. Of the three methods for displaying uncertainty, "most likely transparent" resulted in the highest scores. The inclusion of numeric representation also shortened the time participants took to make decisions.
Figure 2: Distance from endpoint when decision was made


Critique
Bisantz et al. designed the study to compare the three methods for graphically representing uncertainty; however, there was no experimental group that saw numeric representation alone, so the design cannot fully separate the contribution of graphics from that of numbers. A slight tweak to the experimental design would have provided insight into whether visualization is needed at all.


Source
Bisantz, A., Cao, D., Jenkins, M., Farry, M., Roth, E., Potter, S. & Pfautz, J. (2011). Comparing uncertainty visualizations for a dynamic decision-making task. Journal of Cognitive Engineering and Decision Making, 5(3).

Visualization and Decision-Making Using Structural Information

Author: Boris Kovalerchuk

Summary:

Kovalerchuk's research aimed to highlight how and when certain types of visualization techniques should be used.  From an intelligence perspective, visualizations are often not used to their full effectiveness because they are designed to present a great deal of information to the audience.  Take, for example, the many infographics seen in newspapers.  These infographics are often quite colorful, look professional, and give the reader a great deal of information.  Kovalerchuk states that these should not be the types of visuals that analysts use with their decision makers.  Kovalerchuk identifies two main purposes of data visualization techniques for intelligence professionals: discovered relations/patterns (DRP) visuals and decision-making model (DMM) visuals.

1) DRP Visuals
DRP visuals help the analyst in his or her analysis of the situation.  These are often referred to as exploratory visuals.

2) DMM Visuals
DMM visuals assist decision makers in making decisions.  These visuals are often more simplified than DRP visuals and should create a clear image of what the issue is and lead to ideas on how to address the issue in question.

A DRP visual guides the analyst in creating the DMM visual.  The key finding of Kovalerchuk's research into data visualization techniques is that decision makers comprehend, and make better decisions from, the visuals they are most familiar with.  Examine the following image.


This simple image shows the relation of human deaths (black squares) to water pumps (black circles) within city blocks.  The deaths all occurred in close proximity to a water pump.  This is an overly simplified geospatial analysis of a kind most analysts are familiar with.  Decision makers, however, may not be familiar with this technique and have to think harder about what the visual means.



This is a bar chart showing the death toll within 250 yards of certain water pumps.  A geospatial graphic could have been designed to show the exact same information.  Decision makers are used to seeing this type of chart, and Kovalerchuk found no disadvantages to using it over more complicated geospatial charts to present findings.
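The computation behind such a bar chart is just a radius count per pump: tally the deaths whose straight-line distance from each pump falls within the threshold. A sketch with made-up coordinates (none of the figures below come from Kovalerchuk's data; the pump names and locations are purely illustrative):

```python
from math import hypot

def deaths_near_pumps(pumps, deaths, radius):
    """Count deaths within `radius` of each pump (coordinates in yards)."""
    counts = {}
    for name, (px, py) in pumps.items():
        counts[name] = sum(
            1 for (dx, dy) in deaths if hypot(dx - px, dy - py) <= radius
        )
    return counts

# Hypothetical coordinates, in yards -- illustrative only.
pumps = {"Pump A": (0, 0), "Pump B": (600, 0)}
deaths = [(10, 20), (-150, 90), (240, 10), (590, 30), (900, 0)]

print(deaths_near_pumps(pumps, deaths, 250))
# {'Pump A': 3, 'Pump B': 1}
```

The resulting counts can be handed to any charting tool as a bar chart, which is exactly the translation from geospatial display to familiar format that Kovalerchuk recommends for decision makers.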

Critique:
I agree with many of the findings of this research paper.  I agree that we, as analysts, should seek to offer visuals that help decision makers make the best decisions.  It also makes a great deal of sense to me that showing decision makers visuals they are familiar with, such as bar and pie charts, would be as effective as showing them the more complicated charts that analysts are used to.

My main issue with this research is that the author does not give any information about how he came to these conclusions.  It appears that he mostly analyzed literature on the topic, yet some of his statements make it sound as if he conducted human research on how visuals affect decision making.  I would be very interested to see a study on how decision makers comprehend the advanced visuals usually slated for analysts (e.g., geospatial analysis) versus traditional data visuals (the content of a geospatial analysis turned into a bar or line chart).

Source:

Kovalerchuk, B. (2001). Visualization and decision-making using structural information. Proceedings of the International Conference on Imaging Science, Systems, and Technology.