Friday, September 11, 2015

The Delphi method: a powerful tool for strategic management

Summary

The Delphi method structures and facilitates group communication that focuses on a complex problem so that, over a series of iterations, a group consensus can be achieved about some future direction. It has five major characteristics:
  1. The sample consists of a "panel" of carefully selected experts representing a broad spectrum of opinion on the topic or issue being examined. 
  2. Participants are usually anonymous. 
  3. The "moderator" (i.e. researcher) constructs a series of structured questionnaires and feedback reports for the panel over the course of the Delphi.
  4. It is an iterative process often involving three to four iterations or “rounds” of questionnaires and feedback reports.       
  5. There is an output, typically in the form of a research report containing the Delphi results, the forecasts, policy and program options with their strengths and weaknesses, recommendations to senior management and, possibly, action plans for developing and implementing the policies and programs.
        The Delphi method has advantages over other group decision-making techniques such as the nominal group technique (NGT) and the interacting group method (IGM). First, panel members are not swayed by group pressures or vocal members, as can easily happen with NGT and IGM. Second, interpersonal conflicts and communication problems are virtually nonexistent because panel members do not interact. Third, travel costs and the problem of coordinating to get everyone in the same place at the same time are not factors. The Delphi method consists of four key planning and execution activities:
  1. Problem definition: It is necessary to define the nature and scope of the problem, the expected outcomes of the study, and the appropriateness of the Delphi method.
  2. Panel selection: The Delphi method requires a panel of subject-matter experts (SMEs). The criteria for determining who qualifies as an SME may rest not only on knowledge but could also include personal experience or stakeholder status. In addition, it is important to inform prospective panel members that their commitment to participate will involve several rounds of questionnaires and feedback, possibly extending over a period of months. Note that panel selection might not be random because, in some research fields, there might be very few SMEs; thus, one might select all known SMEs.
  3. Determining the panel size: While there is no single sample size advocated for Delphi studies, rules of thumb suggest 15-30 carefully selected SMEs for a heterogeneous population and as few as five to ten for a homogeneous population. The careful selection of SMEs is the key factor that enables a researcher to use a small panel with confidence. That is not to say that large panels are never used; indeed, some Delphi studies have used panels numbering over 100 members.
  4. Conducting the Delphi rounds: A Delphi study usually involves three to four rounds or iterations, not a one-time effort; thus, the moderator can set up Round 1 according to some strategy, knowing that another two to three rounds could be conducted to achieve consensus or other goals. Although three to four rounds are typical, the moderator should stop the rounds when the criteria for consensus are achieved, when results become repetitive, or when an impasse is reached. Following the final round, the moderator prepares a comprehensive report and distributes it, or a short version, to all members.
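To make the stopping rule above concrete, here is a minimal Python sketch of one commonly cited consensus criterion (an assumption for illustration; the summary does not specify one): stop the rounds once the interquartile range (IQR) of the panel's ratings falls below a chosen threshold, or when the maximum number of rounds is exhausted. The function names, the 1-9 rating scale, and the threshold are all illustrative.

```python
import statistics

def has_consensus(ratings, iqr_threshold=1.0):
    """One common Delphi stopping criterion: the interquartile
    range (IQR) of the panel's ratings falls below a threshold."""
    q = statistics.quantiles(ratings, n=4)  # [Q1, Q2, Q3]
    return (q[2] - q[0]) <= iqr_threshold

def run_rounds(rounds_of_ratings, iqr_threshold=1.0, max_rounds=4):
    """Stop when consensus is reached or max_rounds is exhausted."""
    for i, ratings in enumerate(rounds_of_ratings[:max_rounds], start=1):
        if has_consensus(ratings, iqr_threshold):
            return i  # consensus reached at round i
    return None  # impasse: report the divergent positions instead

# Hypothetical panel ratings (1-9 scale) converging over three rounds:
rounds = [[2, 5, 9, 7, 3], [4, 5, 7, 6, 4], [5, 5, 6, 6, 5]]
print(run_rounds(rounds))  # -> 3
```

In practice the moderator would also check the other stopping conditions named above (repetitive results, impasse) rather than rely on a single statistic.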
The Delphi method deserves serious consideration because the careful design and execution of a Delphi study should lead to useful findings for policy makers and program managers. However, it is recommended that researchers consider a triangulation of methods rather than reliance upon a single method. For many situations, researchers may find the combination of a Delphi study and a survey with two independent samples useful and practical.
      
        Critique
        This paper describes the characteristics of the Delphi method, including criticisms of the method and steps in conducting a Delphi study, as well as pitfalls to avoid. It is a good overview for understanding the basics of the method. However, the author does not mention an important weakness of the Delphi method. From my point of view, this method is heavily contingent upon the researcher's ability. If the researcher has enough knowledge and skill to conduct the method, it can produce valuable results; otherwise, it is merely a fruitless waste of time.
      
        Source:
        Loo, R. (2002). The Delphi method: a powerful tool for strategic management. Policing: An International Journal of Police Strategies & Management, 25(4), 762-769. Retrieved from http://search.proquest.com.ezproxy.mercyhurst.edu/docview/211297480/A90AEE4047D44C01PQ/1?accountid=27687

Wednesday, September 9, 2015

The Selection of Delphi Panels for Strategic Planning Purposes

John F. Preble (1984)

Summary:
Preble develops and answers three main questions in this article:
  1. Do results obtained using an intracompany Delphi panel tend to differ significantly from those obtained using an intercompany Delphi panel?
  2. Are the forecasts generated consistent?
  3. If the results are consistent, which panel type is recommended?
Previous literature on Delphi panels, particularly Martino (1972) and Johnson (1976), recommends using a panel of experts from outside the organization. These studies operated on the implicit assumption that external panelists were likely to be better qualified or more expert than panelists within the organization, despite a lack of evidence supporting this assumption.

The author composed two 15-member Delphi panels which consisted of top-level employees from large, successful life insurance firms headquartered in the north-eastern United States. The positions represented included Legal, Public Relations, Human Resources, and other such diverse roles. The intracompany panel was composed of 15 members from the same company, while the intercompany panel consisted of 15 members from 15 different companies; three participants dropped out midway through the study, leaving 14 and 12 panelists, respectively. The panelists were asked to provide estimates as to the likelihood and timing of 27 different events, dated 1985, 1990, 1995, and 2000, and provide their degree of familiarity with each event. The author provided three rounds of questionnaires to the panelists, including statistical feedback after the first round and qualitative reasoning behind outliers after the second round.

After collecting the data, the author conducted t-tests to determine statistically significant differences. Seventy-six percent of the t-tests were not significant, meaning that in most cases the intracompany panel's forecast was “quite close” to the corresponding intercompany estimate. Considering estimates categorized as unlikely (0.0-2.49), slightly likely (2.50-4.99), likely (5.00-7.49), or very likely (7.50-10.0), 95 of 96 comparisons were either in the same category or the next closest category (see Figure 1). These results show that intracompany panel estimates are about the same as intercompany panel estimates. Because the estimates are consistent, Preble recommends that strategic planners use intracompany panels in order to increase administrative control, decrease the number of dropouts and overall costs, and satisfy the need for confidentiality of proprietary information.
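The category comparison above can be sketched in a few lines of Python. The band boundaries come from the summary; the function names and the adjacency check are my own illustration, not Preble's code.

```python
def likelihood_category(score):
    """Map a 0-10 mean likelihood score to Preble's four bands."""
    bands = [(2.50, "unlikely"), (5.00, "slightly likely"),
             (7.50, "likely"), (10.01, "very likely")]
    for upper, label in bands:
        if score < upper:
            return label
    raise ValueError("score must be in [0, 10]")

def adjacent_or_same(score_a, score_b):
    """True if two panel estimates fall in the same or adjacent bands,
    the criterion under which 95 of 96 comparisons agreed."""
    order = ["unlikely", "slightly likely", "likely", "very likely"]
    i = order.index(likelihood_category(score_a))
    j = order.index(likelihood_category(score_b))
    return abs(i - j) <= 1

print(likelihood_category(6.8))    # -> likely
print(adjacent_or_same(6.8, 7.9))  # -> True (likely vs. very likely)
```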

Figure 1 - Mean scores and classification comparisons
Critique:
Methodologically, the only significant change would be to include a few female members in the intercompany Delphi panel; otherwise, the panel demographics are very similar (see Figure 2). 
Figure 2 - Panel demographics
One of the downsides to this method is that it took four months to complete the three Delphi rounds; under tight deadlines, therefore, this method could be passed over. The method could also drift dangerously close to groupthink. While anonymity prevents panelists from being dominated by stronger personalities, the opportunity for panelists to change their responses in the second round based on the statistical data of the first round could eliminate any significant dissent. While the goal of Delphi is to converge on a central estimate, the outliers may have unique insights or experiences that are not reflected in statistical syntheses but only come through in the qualitative comments.

Source:
Preble, J.F. (1984). The Selection of Delphi Panels for Strategic Planning Purposes. Strategic Management Journal, 5(2), 157-170.

Monday, August 31, 2015

Summary of Findings: Role-Playing (3.5 out of 5 stars)

Note: This post represents the synthesis of the thoughts, procedures and experiences of others as represented in the articles read in advance (see previous posts) and the discussion among the students and instructor during the Advanced Analytic Techniques class at Mercyhurst University in August 2015 regarding Role Playing as an Analytic Technique specifically. This technique was evaluated based on its overall validity, simplicity, flexibility and its ability to effectively use unstructured data.


Description:
Role-Playing is an analytic technique in which participants are divided and assigned specific roles to simulate a problem discussion. Each participant must adopt the specific perspective of their assigned role. Each group that is affected by the problem should be represented. The role-players then engage in a discussion of the topic. This technique is useful for both forecasting solutions to problems and for analyzing previously suggested answers.


Strengths:
  • Helps groups solve problems creatively.
  • Can take into account the complexities of the situation.
  • Helps to prevent biases when implemented as proper group work.
  • Helps the group gain a different perspective.
Weaknesses:
  • Must be launched properly in order to solve the problem.
  • Participants may be hesitant.
  • Participants must actually put themselves in their roles.
  • Participants need sufficient background information.
  • The method must identify intelligence gaps.


How-To:
  1. Assign roles and make sure people embody them
  2. Describe the situation
  3. Let the situation develop in a free form manner
  4. Upon conclusion of the exercise debrief the group


Personal Application of Technique:
A role-playing exercise involving a potential decision by the Mercyhurst school administration to mandate that all student-athletes live on campus.


For this exercise, we grouped students into three teams: Student-Athletes, School Administrators, and Analysts. The three groups were requested to develop pros, cons, and a conclusion on whether this new policy is good for their specific group. The teams were given 10 minutes to develop their arguments. After the initial brainstorming period, the teams presented their ideas, and a list of similarities and differences in points of view was produced.


To improve this exercise, an additional group (non-student-athlete) should have been added. Additionally, all roles should have been assigned prior to the problem statement being given.

For Further Information:

Forecasting decisions in conflict situations
Wisdom of group forecasts: Does role-playing play a role?
Role-Playing our way to solutions

Forecasting decisions in conflict situations: A comparison of game theory, role-playing, and unaided judgement

Summary:

This article compared the accuracy of game theory, role-playing, and unaided judgement in forecasting decisions. The author provided extensive background on the subject, including an effort to predict the behavior of participants in the new competitive market for wholesale electricity after the New Zealand government transferred assets to a new private-sector electricity company, Contact Energy Ltd., in 1996. When the managers employed the role-playing method, the results were not consistent with the executives' beliefs, so they turned to game theory for answers. In the end, the game-theory method proved unhelpful, while role-playing accurately predicted the behaviors.

The article goes on to mention other tests performed by other researchers, which ultimately led the author to conclude that, for predicting human behavior, role-playing is the most accurate, game theory the second most accurate, and unaided judgment the least accurate. This is due in part to the fact that game theory cannot take into account the complexities of situations the way role-playing can. According to the article, there is also little proof of predictive validity for game theory in real conflicts, since it is normally tested using role-played conflicts. In short, role-playing is a more accurate method for forecasting human behavior because it offers a much greater degree of realism.

After this background research, the author ran an experiment of his own, creating six conflicts that participants would attempt to resolve using one of the three methods being compared. The results were consistent with previous research: role-playing was the most accurate method, game theory the second most accurate, and unaided judgment the least accurate for forecasting human behaviors (conflict resolutions).

Critique:

In general, the author's experiment was not very well controlled. As a psychology major, this really bothered me. The first issue I noticed was that the conflicts presented in the experiment were not all invented: some situations came from previous research, one came from television, and another was a real-life situation from a company. The author claims these situations were probably not going to be recognized by the participants, but if they were, the accuracy of the experiment could suffer. In addition, nothing in the experiment (methods used, time given to find a solution, which situation the participants needed to solve, and more) was assigned to the participants randomly. Various other aspects of the tests were not controlled either.

Source:

"Forecasting decisions in conflict situations: A comparison of game theory, role-playing, and unaided judgement"
By: Kesten C. Green
http://www.umsl.edu/~sauterv/DSS/green.pdf

Sunday, August 30, 2015

Wisdom of group forecasts: Does role-playing play a role?

Summary:

The article tries to find out how effective the role-playing technique is in forecasting by applying it to a sales-forecasting case. Since one mark of a good analytic technique is that it does not harm forecasting accuracy, this study aims to figure out the effect of role-played groups on the accuracy of future sales forecasts. The authors formed seven three-participant groups randomly labeled as role-playing groups and six groups as no-role-playing groups, for a total of 13 groups and 39 participants.
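The random assignment described above can be sketched in Python. The shuffling details, seed, and function names are illustrative assumptions; the paper only states the group counts and the random labeling.

```python
import random

def assign_groups(participants, n_groups=13, n_role_playing=7, seed=42):
    """Sketch of the study's design: 39 participants split into 13
    three-person groups, 7 labeled role-playing and 6 no-role-playing."""
    assert len(participants) == n_groups * 3
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    # Partition the shuffled pool into groups of three.
    groups = [shuffled[i * 3:(i + 1) * 3] for i in range(n_groups)]
    # Randomly label 7 groups role-playing and the rest no-role-playing.
    labels = (["role-playing"] * n_role_playing
              + ["no-role-playing"] * (n_groups - n_role_playing))
    rng.shuffle(labels)
    return list(zip(labels, groups))

assignment = assign_groups([f"P{i}" for i in range(39)])
print(sum(1 for label, _ in assignment if label == "role-playing"))  # -> 7
```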


The no-role-playing groups were asked to hold group discussions and come up with consensus forecasts for each product’s sales for the next period. The group rules prohibited any member from acting as a group leader and asked the participants to: (1) act with due consideration for all group members; (2) let the member given the Q identifier (selected randomly) introduce the initial forecast; (3) record their levels of satisfaction with each of the consensus forecasts; (4) record their preferred forecast (which would equal the consensus forecast only if they fully agreed with the group consensus); and (5) evaluate each of the group members along with a self-evaluation upon task completion.

The role-playing groups were asked to draw unmarked envelopes for their roles: Forecasting Executive, Marketing Director, or Production Director. All were given scripts identifying their role descriptions. The set of rules given to each group prohibited any member from acting as a group leader while asking the participants to: (1) act out their given roles as they believed they would be performed in an organisation; (2) act with due consideration for all group members; (3) let the forecasting executive introduce the initial forecast; (4) record their levels of satisfaction with each of the consensus forecasts; (5) record their preferred forecast (which would equal the consensus forecast only if they fully agreed with the group consensus); and (6) evaluate each of the group members along with a self-evaluation upon task completion.


Results:


The study does not reveal a significant difference between the no-role-playing and role-playing groups regarding the accuracy of the consensus forecasts, nor any significant difference in overall forecasting accuracy. However, the study shows that the commitment of no-role-playing group members to the consensus was stronger than that of role-playing group members, which is attributed to the role-playing members' commitment to their assigned roles and scripts.


Critique:


The group members' lack of subject-matter expertise, lack of background information about the products, and so forth are very important determinants when judging those products' future sales figures. Thus, the control groups in the role-playing technique are always susceptible to misleading the experimenters due to lacking the aforementioned skill sets or background information. This might have been overcome had they conducted the experiment with members of an actual business organization.


Source:

http://www.sciencedirect.com.ezproxy.mercyhurst.edu/science/article/pii/S0305048311001459?np=y

Role-Playing our Way to Solutions

"Role-Playing our Way to Solutions"
By: Miriam Axel-Lute
National Housing Institute
Source: https://www.bostonfed.org/commdev/c&b/2013/winter/role-playing-our-way-to-solutions.pdf

Summary:

The article discusses a community-development effort that used a megacommunity simulation to find solutions and methods. In this case, the overall goal is to reduce the state of Connecticut's energy use by 25% by the year 2030. Under the leadership of the Housing Development Fund (HDF) and with the guidance of the consulting firm Booz Allen Hamilton, this large community role-playing simulation was convened in March 2012.

Participants from across the sphere of the energy industry and the Connecticut community were selected to participate. Those represented included energy suppliers, government agencies, non-profits, private residents, the financial sector, and many more. The goal of the simulation was to find avenues on which all parties could agree to increase Connecticut household energy efficiency. According to the article, the participants were initially uneasy about fitting into the role-playing process but eventually got comfortable.

As expected, the simulation allowed participants to think differently, and many new relationships for future collaboration were formed. Participants mentioned how the simulation awakened them to how many aspects the issue had and how many sectors play a role in reaching the overall goal of increased energy efficiency. One interesting comment about the potential of role playing was made by Booz Allen VP Gary Rahl. He said, “You never want to pick a goal that can simply be met by an analytical solution—figure out who needs to do what. You need a goal where, to meet it, there will be tensions between participants and no single way of getting there.” I find that very intriguing and worth noting for future role-playing simulations.

Critique:

The article discussed it, but a major issue with this study was the absence of one major participant group: low-income householders. To get the full benefit of a role-playing simulation, all affected parties need to be represented. The other negative was that most of the results were compromises built on previous ideas. While compromise and collaboration are good and important, I would have liked to see more innovation in a role-playing simulation. I realize this is difficult to achieve, since every side has its own goals and agenda, but true innovation would be the best result in my opinion. As Gary Rahl's comment above suggests, the benefit of role-playing is that it increases the potential for innovation when there is no clear-cut answer, and that is worth keeping in mind when discussing role-playing.






Monday, November 17, 2014

Summary of Findings: Visualization (4.5 out of 5 stars)

Note: This post represents the synthesis of the thoughts, procedures and experiences of others as represented in the 5 articles read in advance (see previous posts) and the discussion among the students and instructor during the Advanced Analytic Techniques class at Mercyhurst University in November 2014 regarding visualization specifically. This technique was evaluated based on its overall validity, simplicity, flexibility and its ability to effectively use unstructured data.

Description:
Visualization is an analytic modifier used to add an additional level of understanding and comprehension to complex data. Data visualizations are normally associated with quantitative data, but qualitative issues can be represented as well. There is a plethora of free, simple-to-use visualization tools available for those without computer-science backgrounds (e.g., Tableau, Google Fusion Tables, CartoDB). For a more complete list of, and links to, data visualization tools, see the resources section at the bottom of the post.

Visualizations are most effective when they obey general rules of graphic design, but personnel knowledgeable of graphic design are rare in the intelligence community.

Strengths:
1. Can provide a tangible two- or three-dimensional object to physically show decision makers
2. Provides an effective way to present completed material
3. Can examine and display relationships that might be overlooked in written form
4. Can bolster analyses
5. Can be output automatically through software programs
6. Provides a way to identify and link patterns
7. Can give decision makers an interactive way to visualize information

Weaknesses:
1. The use of visualizations is not a one-size-fits-all modifier
2. Creating visuals requires time from the analyst
3. Design software is not only expensive but can be difficult to master and use effectively
4. Analysts creating visualizations are prone to using templates, which make their visuals unimaginative and plain
5. There is no set of specific criteria that determines what makes an effective visualization

Step by Step:
Note: This is a reasonable description of the steps one would take in making an effective visualization.
  1. At the onset of the project, identify likely sources of data and likely visualizations for the final product
  2. During the data collection process, continue to assess what type of data you are collecting and what will be the best way to present this data to your decision maker
  3. Identify the tools and resources you will need to create the visuals
    1. If you lack the resources to acquire a certain tool needed to visualize the data in a certain way, alter your design method to find a way to display the same information in a different format
  4. Assess visuals throughout the entire project cycle to ensure that they are telling the same story that the written report tells

Exercise:
After a brief instruction period, participants visualized seven information units pertaining to their identity as intelligence professionals associated with Mercyhurst for 15 minutes, incorporating images from the internet, a foam presentation board, and traditional art utensils such as markers, colored pencils, and glitter pens. The seven information units were ‘what my friends think I do,’ ‘what my family thinks I do,’ ‘what society thinks I do,’ ‘what Mercyhurst faculty think I do,’ ‘what I think I do,’ ‘what I actually do,’ and a personalized brand identity consistent with a previously identified motto, task capability, teamwork capability, experience, and team resources desired to optimize performance. The final product had to be on the foam presentation board. Each participant had 1 minute to communicate their visualization to the class.
 
What did we learn from the Visualization Exercise?
Visualizing disparate sources of information is a time-consuming process, particularly when the information is abstract and resistant to immediate quantification. Visual literacy is a skill undervalued in the intelligence community and not taught sufficiently in a general sense; however, the proliferation of technology expediting the process and emerging studies supporting the use of visualization as a communication tool are likely to reverse this trend in the future.

Resources:

Friday, November 14, 2014

Comparing Uncertainty Visualizations for a Dynamic Decision-Making Task

Summary
This research compared various visual representations for expressing uncertainty, and in particular compared graphical representations of uncertainty against numerical ones. Bisantz et al. hypothesized that graphical representations of uncertainty are superior to numerical representations.

The study had 24 participants, aged 19-32, play a Missile Game. Bisantz separated the participants into two experimental groups: one with only graphical representation and one with both graphical and numeric representation. During the exercise, participants were charged with identifying missile icons among bird and plane icons in order to eliminate the threat. Participants had between 5 and 20 seconds to label an icon as a missile or not. There were four different methods for displaying the icons:

  1. Most likely solid: The icon of the outcome that is most likely to occur is displayed
  2. Most likely transparent: The icon of the outcome that is most likely to occur is displayed but its uncertainty is displayed by how transparent it is.
  3. Missile transparent: Only the missile icon shows with its uncertainty displayed by its transparency
  4. Toggle: Participants can switch between the three methods


Each participant completed two trials using each of the four methods, for a total of eight attempts.
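As a minimal sketch of how the "most likely transparent" scheme might work, the snippet below maps a missile probability to an icon choice and an opacity value. The function name, the 0.5 threshold, and the linear probability-to-opacity mapping are assumptions for illustration; the paper does not give an exact formula.

```python
def icon_alpha(p_missile, threshold=0.5):
    """'Most likely transparent' sketch: display the icon of the more
    probable class, with opacity proportional to its probability."""
    label = "missile" if p_missile >= threshold else "non-missile"
    confidence = max(p_missile, 1 - p_missile)
    return label, round(confidence, 2)  # opacity in [0.5, 1.0]

print(icon_alpha(0.8))   # -> ('missile', 0.8)
print(icon_alpha(0.35))  # -> ('non-missile', 0.65)
```

Under this mapping a near-certain classification renders as a solid icon, while a 50/50 case renders half-transparent, which matches the description of transparency encoding uncertainty.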
Figure 1. Overall score by graphical representation against numeric representation

The study concluded that participants scored better with the inclusion of numeric representation. Of the three methods for displaying uncertainty, "most likely transparent" resulted in the highest scores. The use of numeric representation also resulted in shorter decision times.
Figure 2: Distance from endpoint when decision was made


Critique
Bisantz et al. designed the study to compare the three methods of graphically representing uncertainty; however, there was no experimental group receiving numeric representation alone, so the design cannot separate the contribution of the graphics from that of the numbers. A slight tweak to the experimental design would have provided insight into whether visualization is needed at all.


Source
Bisantz, A., Cao, D., Jenkins, M., Farry, M., Roth, E., Potter, S. & Pfautz, J. (2011). Comparing uncertainty visualizations for a dynamic decision-making task. Journal of Cognitive Engineering and Decision Making, 5(3).

Visualization and Decision-Making Using Structural Information

Author: Boris Kovalerchuk

Summary:

Kovalerchuk's research aimed to highlight how and when certain types of visualization techniques should be used.  From an intelligence perspective, visualizations are often not used to their full effectiveness, as they are often designed to present a great deal of information to the audience.  Take, for example, the many infographics seen in newspapers.  These infographics are often quite colorful, look professional, and give the reader a great deal of information.  Kovalerchuk states that these should not be the types of visuals that analysts use with their decision makers.  He identifies two main purposes of data visualization for intelligence professionals: discovered relations/patterns (DRP) visuals and decision-making model (DMM) visuals.

1) DRP Visuals
DRP visuals help the analyst in his or her analysis of the situation.  These are often referred to as exploratory visuals.

2) DMM Visuals
DMM visuals assist decision makers in making decisions.  These visuals are often more simplified than DRP visuals and should create a clear image of what the issue is and lead to ideas on how to address the issue in question.

A DRP visual will guide the analyst in creating the DMM model.  The key finding of Kovalerchuk's research into data visualization techniques is that decision makers comprehend, and make better decisions from, visuals they are most familiar with.  Examine the following image:


This simple image shows the relation of human deaths (black squares) next to water pumps (black circles) in relation to city blocks.  The deaths all occurred in close proximity to a water pump.  This is an overly simplified geospatial analysis that most analysts are familiar with.  Decision makers may not be familiar with this technique and have to think more about what the visual means.



This bar chart shows the death toll within 250 yards of certain water pumps.  A geospatial graphic could have been designed to show the exact same information, but decision makers are used to seeing this type of chart.  Kovalerchuk found no disadvantages to using this type of chart, rather than more complicated geospatial charts, to present findings.

Critique:
I agree with many of the findings of this research paper.  I agree that we as analysts should seek to offer visuals that help decision makers best make decisions.  It also makes a great deal of sense to me that showing visuals decision makers are familiar with, such as bar and pie charts, would be as effective as showing them the more complicated charts that analysts are used to.

My main issue with this research is that the author does not give any information on how he came to these conclusions.  It appears that he mostly analyzed literature on the topic, yet some of his statements make it appear as if he conducted human research on how visuals affect decision making.  I would be very interested to see a study on how decision makers comprehend advanced visuals usually slated for analysts (geospatial analysis) versus traditional data visuals (turning the content of a geospatial analysis into a bar or line chart).

Source:

Kovalerchuk, B. (2001).  Visualization and Decision-Making Using Structural Information.  Proceedings of International Conference of Imaging Science, Systems, and Technologies.

Collaborative visualization: Definition, challenges, and research agenda

By: Petra Isenberg, Niklas Elmqvist, Jean Scholtz, Daniel Cernea, Kwan-Liu Ma, Hans Hagen

Summary:
According to the authors of this research paper, “collaboration has been named one of the grand challenges for visualization and visual analytics.” Traditionally, visualization and visual analytic tools were designed for a single person on a desktop computer. However, today’s world calls for more visualization tools that encompass collaboration and communication. Experts and non-experts can take advantage of collaborative visualization scenarios to learn from one another’s analysis processes and viewpoints. The authors define collaborative visualization as “the shared use of computer-supported, [interactive], visual representations of data by more than one person with the common goal of contribution to joint information processing activities.” The term “social data analysis” has also been coined to describe these social interactions, which are central to collaborative visualization.

There are three main levels of engagement at which digital systems support collaborative visualization: viewing, interacting/exploring, and sharing/creating. Software systems like PowerPoint and videoconferencing allow people to learn, discuss, interpret, and form decisions about a certain set of information. People who use and share interactive visualization software can communicate through chat, comments, email, or video/audio links. Using these features allows discussions of alternative interpretations and multiple viewpoints to emerge. Programs such as Many Eyes allow users to upload and create new datasets for the community to explore. The authors argue that the purpose of having an online collaboratory (data warehouse) “is to focus the collective effort of the group in order to produce significant and useful methods.” However, it is important for the users of the program to understand the overall data, the user space, and the application space.

Computer-supported collaborative visualization software helps decision makers: distill knowledge by mining large multi-dimensional datasets; run models and simulations to explore the consequences of particular actions; communicate results, scenarios, and opinions to other stakeholders; and discuss, debate, and develop support for specific courses of action. In addition, collaborative technology supports the social interaction of large audiences, which allows for a range of backgrounds, connections, and goals. This provides the group with an environment where individuals can generate ideas and analysis alone or together.

Critique:
This article gave a broad overview of collaborative visualization and the areas where future research should be directed. However, the authors did not integrate the challenges of collaborative visualization throughout the piece; instead, they consigned them to the future-research section. As collaborative visualization becomes an everyday tool, it will be important for people to learn these programs at school or at work. Knowing how these analytic tools work will be key to group interactions and their analyses.

Source:

Isenberg, P., Elmqvist, N., Scholtz, J., Cernea, D., Ma, K.-L., & Hagen, H. (2011). Collaborative visualization: Definition, challenges, and research agenda. Information Visualization, 10(4), 310–326. doi:10.1177/1473871611412817

Thursday, November 13, 2014

Visualization and cognition: Drawing things together

Summary:
Latour takes an anthropological look at what gives visualizations their cognitive value and comprehensibility. After reviewing several anthropological, psychological, and business-related works, he found that visualizations are most effective when they have certain characteristics.

First, a visualization must have elements of optical consistency. One of the most effective elements of optical consistency is perspective. Perspective is the reason why many graphs and, especially, maps seem incomplete or confusing without legends or scales. Our brains are nearly automatic when it comes to taking something we see in one picture and comparing it with an object in another picture, as long as we have a baseline to do so.

Second, it must obey the “visual culture” at the time of the visualization’s creation. Visual culture is an abstract requirement that essentially requires the photographer or artist to include elements in the photo or work that allow the observer to attach his or her own worldly attributes to it. The work can be viewed at a future time, yet still be understood as a snapshot of a different time. The overall picture or message is still clear, regardless of when the picture is viewed.

Third, and related to the second requirement, a visualization is most effective when it can be readily understood. The ability to publish visualizations has made this requirement easier to meet. Publication makes a visualization mobile (able to be viewed across a wider span of time and space) and immutable (able to remain unchanged over time).

After outlining what makes for the most mobile and immutable visualizations, Latour explores how the use of visualizations helps people understand otherwise overwhelmingly complex phenomena.

While anything can be re-imaged or re-visualized, Latour argues that consistency is key. A dissenter can go find various illustrations of his or her position, but too many visualizations may actually harm his or her cause. Like scientific theories, visualizations are best understood when conveyed in a consistent fashion. As a very simplified example, ‘bar graph’ issues can become convoluted when too many people start using pie graphs to portray them, and spatial dynamics would be much more confusing displayed in a table rather than a map. Since visualizations can be produced and dispersed at low cost, consistency is key.

In addition, visualizations turn otherwise complicated phenomena of three or more dimensions into flat representations. When these issues are illustrated sufficiently in a flat venue, greater comprehension and communication are achieved, especially when the visualization is coupled with a written text.

Critique:
However, this second requirement seems to be mainly useful for photography and art, and is of little importance to intelligence analysts. Latour’s exploration of visualizations makes intuitive sense, but there are few experimental citations in his writing. He does, however, include plenty of anthropological and scientific research to guide his exploration. Until his intuitive points are proven wrong in an intelligence-setting experiment, analysts should follow his recommendations. Visualizations are a valuable modifier, if not a method.

Source:

Latour, B. (1983). Visualization and cognition: Drawing things together (pp. 1–33). Boston, MA: Harvard University. Retrieved from http://isites.harvard.edu/fs/docs/icb.topic1270717.files/Visualization%20and%20Cognition.pdf