Tuesday, April 30, 2013

The effects of video game playing on attention, memory, and executive control

The article compares expert video game players to non-players to determine whether the two groups differ significantly in attention, memory, and executive control skills.  The three games used were Medal of Honor: Allied Assault (2002), Tetris (2004), and Rise of Nations (2003): Medal of Honor represented the first-person shooter (FPS) genre, Tetris the puzzle genre, and Rise of Nations the strategy genre.  Participants were males classified as either expert gamers (7+ hours of play per week) or non-gamers (less than an hour a week), along with a control group that did not play any of the games.  Each person in the longitudinal portion of the study practiced his assigned game in 15 sessions (1.5 hours per session) over 4-5 weeks, while the control group was tested without playing any of the games.

All participants took tests of visual and attentional skills, spatial processing and memory, and executive control.  Three of the five visual and attentional tasks involved spotting a triangle within a circle during a brief exposure and then correctly choosing the shape's location; spotting a white letter, and whether an X followed it, when 16 or 22 letters were flashed rapidly on a screen; and counting the number of dots during a brief exposure to an image of 1 to 8 dots.  Two of the three spatial processing and memory tasks involved selecting grey blocks that had lit up white in the order they flashed and discerning whether two presented shapes were actually the same shape.  Two of the four executive control tasks involved memorizing sets of words while doing math problems, with later recall of the words, and a disc-switching game in which participants moved one disc at a time between pegs to match a pattern.

The comparison of expert video game players and non-gamers led to several conclusions.  First, expert players outperformed non-gamers in many of the categories: they better tracked fast-moving objects, scored higher on visual short-term memory tests, switched between tasks faster, and more accurately perceived and reported shapes that had been rotated.  Non-gamers, by contrast, showed little overall improvement on the tests after playing their assigned video games for 21 hours; across the remaining tests, they did not perform significantly better on any task except the mental rotation of similar shapes.

Although this article does not specifically mention 'brain training,' it is clear that this study investigates just that.  The researchers wanted to know whether 21 hours of brain training can lead to improved results on various cognitive tests.  The authors made sure to use several different gaming categories, including an FPS, a strategy game, and a puzzle game, which makes sense because each interacts with the player in very different ways.  They set up control groups and clearly classified test subjects as either expert gamers or non-gamers.  Unfortunately, 21 hours did not prove to be enough time for non-gamers to gain meaningful skills in the visual and attentional, spatial processing and memory, or executive control tasks.  The research did show, however, that expert gamers performed better on the four tasks mentioned earlier, potentially because of their already established experience with games.

I like how the authors selected their games and feel they were some of the best options available to represent the categories being examined.  I was a little put off that the authors excluded females from parts of the study, but I understand that it was important to keep the variables as similar as possible.  It would be interesting to run these same tests exclusively with female expert gamers and non-gamers.  Additionally, although it might be cumbersome to have participants play these video games for a longer period of time, I suspect that non-gamers who gained extended game-play experience could improve their performance on the cognitive tests.

Although not universally applicable to the intelligence field, this study does make some small contributions to it.  One is the finding that a relatively short amount of training time in games is not enough to improve results.  This applies to intelligence analyst work in general: repeated practice improves analytic products over time.  Personally, I feel that, much like sports or any activity that requires practice, great improvement would never be expected within a month but would instead take years to become significant.  Overall, I feel the article was very thoughtful and kept the variables as separate as possible for optimal results.

Boot, W.R., Kramer, A.F., Simons, D.J., Fabiani, M., & Gratton, G. (2008). The effects of video game playing on attention, memory, and executive control. Acta Psychologica, 129(3), 387-398. Retrieved from http://www.sciencedirect.com/science/article/pii/S0001691808001200

Extending Brain-Training to the Affective Domain: Increasing Cognitive and Affective Executive Control through Emotional Working Memory Training

Schweizer, Hampshire, and Dalgleish (2011) conducted a study investigating whether brain training, specifically working memory (WM) training, improves cognitive functions beyond the training task. Their focus was on emotional material, arguing that it constitutes much of the information we process daily. The research suggests WM training improves performance on other WM tasks and on fluid intelligence, but only WM training involving emotional material improves the processing of affective information on an emotional Stroop task.

The authors began by differentiating between the areas of cognitive ability brain training claims to improve, namely working memory (the capacity to actively maintain bits of information in the presence of distractions), fluid intelligence (abstract reasoning and problem-solving abilities), and control over emotional or personal material one wants to engage or disengage with. They asserted that for any brain-training methodology to have a wide impact on real-world cognitive functions, its benefits must transfer across training content. Their main question of interest was whether cognitive training with only neutral information can have transferable benefits for the cognitive processing of personally relevant material.

Forty-five participants received WM training using either emotional or neutral material, or completed an undemanding control task. The authors tested transfer effects with already established and validated tasks, modifying the dual n-back task to examine WM and fluid intelligence. They used three versions: one with neutral words and faces, a second with highly emotional words and negative facial expressions, and a third, non-WM-dependent version that matched the control group's "training." Both trained groups showed linear improvement significantly greater than the control group's in terms of completion time. Training performance and cognitive transfer effects did not vary significantly between the two trained groups on the digit span task, but they did differ in affective transfer: the emotional training group showed significant pre- to post-training improvements in emotional Stroop performance, while neither the neutral training group nor the control group demonstrated affective transfer effects.

The study was well done in terms of providing adequate background research and explaining its process and results clearly. The authors took care to control for confounding factors that could affect the results, pre-testing all three groups of participants to ensure the group means started at the same level. They also used already validated testing tasks, which strengthened their results. As the emotional Stroop test only looks at the cognitive reaction effects of words with negative connotations, I would be interested in seeing whether there are similar affective transfer effects for positively charged emotional words.

The findings on better controlling cognitive abilities despite distractions, particularly in relation to emotional information that is relevant or distracting to the task, are of utmost importance to members of the intelligence community. This study suggests that appropriate brain training can improve decision making in situations that require the manipulation of emotional material, something analysts commonly have to do.


Schweizer, S., Hampshire, A., & Dalgleish, T. (2011). Extending brain-training to the affective domain: Increasing cognitive and affective executive control through emotional working memory training. PLoS ONE, 6(9), e24372. doi:10.1371/journal.pone.0024372

Brain Training Game Improves Executive Functions and Processing Speed in the Elderly: A Randomized Controlled Trial

The authors conducted a study involving 32 elderly volunteers in Miyagi Prefecture, Japan, from March to April 2010.  The purpose of the study was to evaluate the impact of brain training, particularly video games, on the elderly through a double-blind experiment.

The participants were split into two different groups, each of which played a different game (Brain Age or Tetris).  The games were played for 15 minutes a day, five days a week, for four weeks; the Brain Age participants recorded both the game title and the score they received, while the Tetris group recorded just their score.  Participants in the Brain Age group played a number of games that were both simple to perform and known to activate the dorsolateral prefrontal cortex.  The Tetris group served as a control, isolating the positive effect of simply playing a game from that of a game designed to stimulate various parts of the brain.

The cognitive functions measured fell under four categories: global cognitive status, executive function, attention, and processing speed.  A number of cognitive tests were administered to the participants both prior to and following the four weeks of game play.

The study found that scores improved from the initial test to the final test in both groups.  These results further support previous research indicating that playing games increases the cognitive abilities of elderly individuals; that is, there is a positive correlation between playing brain games and the cognitive function of the elderly.

This study demonstrates the effectiveness of brain training, particularly in elderly individuals.  The study was effective in that a number of individuals were sampled and randomly placed into two different groups, one of which was a control group.  This provided a baseline against which data from playing mind games could be compared with data from simply playing an electronic video game.

This is relevant to the intelligence community because it presents a different mindset for increasing cognition level, which potentially provides a different perspective on techniques and tools to use in the workplace to increase cognition levels.

Nouchi, R., Taki, Y., Takeuchi, H., Hashizume, H., Akitsuki, Y., Shigemune, Y., Sekiguchi, A., Kotozaki, Y., Tsukiura, T., Yomogida, Y., & Kawashima, R. (2012). Brain training game improves executive functions and processing speed in the elderly: A randomized controlled trial. PLoS ONE, 7(1), 1-9.

Brain Training for Silver Gamers: Effects of Age and Game Form on Effectiveness, Efficiency, Self-Assessment, and Gameplay Experience


Brain training games have caught the attention of Western society, and especially of its aging demographic.  To analyze the effectiveness of brain training across age demographics, Nacke et al. (2009) utilized a 2x2 mixed factorial design crossing age group (young and old) with game form (paper and the Nintendo DS game console).  They sought to analyze the effect of age and game form on usability, self-assessment, and gameplay experience.  The results of the experiment were measured in three ways: effectiveness (completion time, error rate), self-assessment measures (arousal, pleasure, dominance), and game experience (challenge, flow, competence, tension, and positive/negative affect) (Nacke et al., 2009).  The authors had four hypotheses: (1) younger individuals will finish the test faster on a game console, while older participants will finish faster in pen-and-paper form; (2) younger individuals will make fewer errors on a game console, while older participants will make fewer errors on paper; (3) taking the test on paper will elicit less arousal and flow than taking it on a game console; and (4) the test will feel much harder on the game console than in pen-and-paper form (Nacke et al., 2009).

The age range for the younger demographic was 18-25; the older demographic was 65 and older.  All participants were familiar with physical board and card games.  The game used for the experiment was Dr. Kawashima's Brain Training, specifically its 20-equation calculation game, which requires the user to solve 20 random equations involving addition, subtraction, division, and multiplication in the shortest time possible.  A similar test was created for participants to complete using only pen and paper, and each participant took both versions (Nacke et al., 2009).

The experiment found that, regardless of the participants' age, taking the test in pen-and-paper form was far more effective than taking it on the gaming console: the paper version was completed faster and resulted in fewer errors for both demographics.  The authors found that the game console version intrigued participants more and increased their interest in taking the test across both age groups.  Moreover, the logic problem-solving challenges presented on the game console elicited positive feelings from the elderly participants but negative feelings from the younger ones.  The authors concluded that digital logic training games provide a positive experience for the elderly population (Nacke et al., 2009).


I found it interesting that the pen-and-paper method resulted in fewer errors from both the elderly and young populations even though its questions were also answered more quickly by both demographics.  Before reading this study, I thought that the younger population would have made fewer errors on the game console, given how technologically driven our age demographic has become.  One aspect the authors did not address was each respondent's learning style; they did not determine whether respondents were better visual learners or would complete the tests more efficiently with pen and paper.  Moreover, the Brain Training game was designed with an older demographic as its target audience, so it is no surprise that the older demographic had more positive experiences with the game than the younger participants.  I think this study conveys the promise of brain training games for the older and child demographics, but it does not effectively test the cognitive abilities of young adults and middle-aged individuals.  It would be worthwhile to conduct a study of a brain training game that stimulates the mind more than simple mathematical problems do.

Nacke, L.E., Nacke, A., & Lindley, C.A. (2009). Brain training for silver gamers: Effects of age and game form on effectiveness, efficiency, self-assessment, and gameplay experience. CyberPsychology & Behavior, 12(5), 493-499. Retrieved from http://online.liebertpub.com/doi/abs/10.1089/cpb.2009.0013

Short- and Long-term Benefits of Cognitive Training

According to the article, scientific evidence showing increases in brain capacity due to training interventions is rare. However, some evidence suggests that specific cognitive interventions can improve brain capacity. Most cognitive training studies focus on the effect on fluid intelligence, the ability to reason abstractly, which in turn is predictive of educational and professional success. Often these experimental tasks target working memory, which allows an individual to store a limited amount of information for a period of time.

The authors conducted an experiment in which elementary and middle school children were assigned video-game-like tasks. The purpose was to determine whether working memory training would improve performance on untrained fluid intelligence (Gf) tasks. Sixty-two children were trained over a one-month period. The test group was presented with a series of stimuli at different locations on a computer screen and was tasked with deciding whether a given stimulus had appeared at the same location earlier in the sequence. The control group answered general knowledge and vocabulary questions. The tasks for both groups included video-game-like learning and graphic visuals. Participants' performance was assessed with two matrix reasoning tasks before training, immediately after training, and again three months later.
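The spatial task described above follows the general shape of an n-back rule: flag each stimulus that matches the location shown n steps earlier. The sketch below is a minimal illustration of that rule only; the position labels and the choice of n=2 are my assumptions, not the study's actual parameters.

```python
# Minimal n-back location check: for each trial, decide whether the
# stimulus appeared at the same screen location n trials earlier.
def nback_targets(positions, n=2):
    """Return True for each trial whose location matches the one n trials back."""
    return [i >= n and positions[i] == positions[i - n]
            for i in range(len(positions))]

# Example sequence of screen locations (letters stand in for grid cells).
seq = ["A", "C", "A", "C", "B", "C"]
print(nback_targets(seq))  # [False, False, True, True, False, True]
```

In the trained condition, n typically increases as the participant improves, which is what makes the task adaptive.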

The results indicated that the experimental group showed increased performance on untrained fluid intelligence tasks; hence, they were most successful at fluid intelligence transfer. The control group, which had been given knowledge-based tasks, did not show much improvement. Post hoc tests after three months revealed the same pattern: the participants with the most training showed significantly better performance. No significant differences were observed by sex, age, grade, number of training sessions, or initial working memory performance. Based on the results, the authors argued that the transfer to fluid intelligence depends on how much a participant improves on the working memory tasks. Some students' lack of improvement was associated with a lack of interest in the activities and difficulty coping with the challenges. Finally, the study concluded that individuals may gain long-term benefits from cognitive training: no group difference in performance was seen within the first three weeks of training, but differences emerged over time.

The authors successfully conducted an experiment that answered their question of interest. Through experimentation they found tangible evidence that cognitive training has a significant effect on working memory and long-term performance in children, results consistent with other research conducted on aging populations. However, the methodology section of the article needs to be explained better, because it does not provide enough information to replicate the study. Since the participants were elementary and middle school children, the study is not generalizable to the rest of the population. One important aspect of this article is that the authors acknowledged the study's limitations and proposed recommendations for future research: rather than asking whether cognitive training is effective, future research should focus on the conditions that best produce transfer effects, the underlying cognitive mechanisms, and for whom cognitive exercises are most useful.

I'm not sure how relevant cognitive exercises are to the intelligence community. Although cognitive training improves working memory and fluid intelligence, and intelligence analysts might benefit from it, it may not be very practical to implement.

Jaeggi, S., Buschkuehl, M., Jonides, J., & Shah, P. (2011). Short- and long-term benefits of cognitive training. Proceedings of the National Academy of Sciences. Retrieved from http://www.pnas.org/content/early/2011/06/03/1103228108

Putting Brain Training to the Test

In the article Putting Brain Training to the Test, Owen et al. explained that there is little scientific evidence for the efficacy of brain training, which they define as "improved cognitive function through the regular use of computerized tests." The ultimate question was not whether brain training improved performance on the trained cognitive tests but whether those benefits transferred to other, untrained tasks, which reduces the likelihood that improvements are simply due to practice.

To test the hypothesis that brain training is not effective, the authors evaluated the results of a study that the BBC science program "Bang Goes The Theory" participated in. The test was conducted over a six-week period and included two experimental groups and one control group with 11,430 people completing the assessments. Four tests were given in an initial benchmark which included baseline measures of reasoning, verbal short-term memory, spatial working memory and paired-associates learning, all of which are sensitive to changes in cognitive function. After the end of the six weeks, a second benchmark test was given.

The results showed that although brain training improved results on the trained cognitive tasks over time, there was no evidence of transfer to untrained tasks. Both experimental groups and the control group improved on some or all benchmarking tests, but the effect sizes were very small for all of them; even with statistical significance, the results were not meaningful. In contrast, improvements on the trained tests had large effect sizes for both experimental groups. These improvements could have been due to task repetition, the adoption of new task strategies, or a combination of the two. Nevertheless, training-related improvements did not generalize to other tasks. Through direct comparison with the control group, the authors ruled out the possibilities that these results were due to the wrong types of cognitive tasks or that transfer effects were being masked. They note, however, that a more extensive training regimen could produce different results. The following image shows benchmarking scores at baseline and after the program was finished.
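The significance-versus-effect-size distinction above can be made concrete: with a very large sample, a tiny improvement can be statistically significant yet practically negligible. The numbers below are invented for illustration and are not the study's actual scores.

```python
# Hedged illustration: a standardized mean difference (Cohen's d) vs. the
# z statistic for the same difference at a large sample size.
import math

def cohens_d(mean1, mean2, sd_pooled):
    """Standardized mean difference between two group means."""
    return (mean2 - mean1) / sd_pooled

# Invented benchmark scores: a 0.5-point gain on a test with SD of 10.
d = cohens_d(50.0, 50.5, 10.0)
print(round(d, 3))  # 0.05 -> conventionally a negligible effect

# Approximate two-sample z statistic for that same difference with
# 11,430 participants per group (the study's overall n, used loosely here):
n = 11430
z = (50.5 - 50.0) / (10.0 * math.sqrt(2.0 / n))
print(round(z, 2))  # ~3.78 -> "statistically significant" despite the tiny effect
```

This is why the authors report effect sizes alongside p-values: at n in the thousands, nearly any nonzero difference crosses the significance threshold.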

This article holds value primarily because it plays devil's advocate on a topic that many people support with little evidence. The study included a large random sample with two experimental groups and a control group, making it more reliable. When analyzing the results, the authors also took effect size into account, which plays a large role in interpreting results. The study also addressed many reasons why the results might have been skewed, disproving most with thorough explanations.

Nevertheless, the study should certainly be repeated to ensure that the results stay consistent.  Although the analysis holds merit within the context of this study, I'm not entirely convinced that brain training is ineffective based solely on these results.  Additionally, a significantly larger portion of participants in the control group did not finish the six-week study.  Although this may be because they were less engaged than the experimental groups, additional tests should attempt to prevent this.

Owen, A., Hampshire, A., Grahn, J., Stenton, R., Dajani, S., Burns, A., Howard, R., & Ballard, C. (2010). Putting brain training to the test. Nature, 465.

Monday, April 29, 2013

Is Working Memory Training Effective? A Meta-Analytic Review

Charles Hulme and Monica Melby-Lervåg conducted a meta-analysis of the results of different memory training studies to determine whether such training improves cognitive function.

Melby-Lervåg and Hulme examined the results of 23 studies. To be included in the review, a study had to be either a randomized controlled trial or a quasi-experiment without randomization, and it had to include a treated or untreated control group. After the selection criteria were applied, 23 studies with 30 group comparisons were reviewed.

Results of the study showed an improvement in short-term working memory skills. However, the gains in verbal working memory were not maintained in follow-up studies. On the other hand, "limited evidence" suggested that visuospatial working memory enhancements could be maintained.

The study did not find evidence of working memory training transferring to other skills such as nonverbal and verbal ability, inhibitory processes in attention, word decoding, and arithmetic. Melby-Lervåg and Hulme concluded that memory training programs do not produce long-term, generalizable results but rather short-term, targeted ones.

The authors acknowledged possible limitations of the study such as the different clinical conditions of the various studies and the age of participants (ranging from children to adults). Even so, the results were applicable to all ages.


The results of this study are directly applicable to intelligence professionals. First and foremost, targeted memory training can produce specific, short-term results. As long as we limit our expectations and make it a habit to practice certain exercises regularly, there can be an improvement in short-term working memory, which can help an analyst retain more information when conducting analysis.

Second, a project manager or team leader can view software and programs that are advertised as improving all cognitive abilities with legitimate skepticism. This can save both time that would have been wasted trying out a new program as well as the money that would have been wasted purchasing it.

As for the study itself, my only criticism is that it included both children and adults. The authors acknowledge this as a limitation, but I have some doubts about the applicability of the findings to both children and adults, especially with an examination of only 23 studies.


Melby-Lervåg, M., & Hulme, C. (2013). Is working memory training effective? A meta-analytic review. Developmental Psychology, 49(2), 270-291.

Sunday, April 28, 2013

The Science Behind Lumosity


This article counters the idea that the core aspects of cognitive processing are determined and fixed at a young age, leaving little to no room for improvement; on that view, individuals born with strong cognitive capacities through genetics and early development hold an advantage throughout their lives, while individuals without those capacities cannot catch up.  The Lumosity approach suggests that, genetics aside, with the right type of stimulation and activity the brain can change and remodel itself at any age to become more efficient and effective at processing information, paying attention, remembering, thinking creatively, and solving novel problems.  The main features claimed to make brain training and Lumosity effective are targeting, adaptivity, novelty, engagement, and completeness.

The brain’s ability to reshape itself is called neuroplasticity.  To exemplify such changes, the article cites the rigorous exam prospective taxi drivers must pass in London, which tests point-to-point routes through the city and is referred to as "The Knowledge."  After this exam and the continuous studying it requires, researchers found major structural changes in the brain, including differences in the size and shape of crucial brain structures in taxi drivers relative to control subjects.  The hippocampus, a structure critically involved in memory and navigation, was larger in the taxi drivers than in the control group.

Other findings are similar: video game players, for example, performed better on visual attention tasks than non-players, and a study that asked non-players to play an action video game intensively over several weeks found that their visual attention capacities improved.  The article then discusses Lumosity's ability to provide effective brain training for all ages, including older adults, children (among them those with ADHD), and young adults.

Finally, the article presents a scientific framework for Lumosity.  It describes a 2006 study that evaluated the effect of Lumosity training on cognition in normal, healthy adults, with 23 participants and a mean age of 54.  These individuals were divided into a control group and a group that received Lumosity training 20 minutes per day for five weeks.  The study found that participants improved on the games they played and that the training gains transferred to measures of cognitive performance that were not directly trained.  The participants did not merely learn strategies for the games; the underlying brain mechanisms fundamentally changed after training.


Lumos Labs, the creator of Lumosity, wrote this article, so a degree of bias toward its product is likely.  That said, the article cites multiple scholarly articles to back up its claims.  Considering that Lumosity is the most popular method of brain training today, and given my limited knowledge of the topic, I thought this article would provide a useful overview not only of brain training but of Lumosity's approach.  The article also includes a variety of other credible studies published by leading scholars of brain training that support Lumosity's claims.

Understandably for a business trying to make money, the article had promotional aspects: no downsides of brain training or Lumosity were mentioned.  Additionally, while this article looked at Lumosity and other key studies on brain training, it did not compare and contrast the many types of brain training so that consumers could decide which method or product would provide them the most benefit.

Overall, this article compiles a large sample of the most compelling research done on brain training.  This makes it a useful reference: it provides short synopses of these studies under the subheading Broad and Growing Base of Evidence that Cognitive Training Works as well as throughout the article.


Hardy, J., & Scanlon, M. (2009). The Science behind lumosity. Lumosity. 

Thursday, April 25, 2013

Summary of Findings (White Team): Visual Analytics (4.3 out of 5 Stars)

Note: This post represents the synthesis of the thoughts, procedures and experiences of others as represented in the 8 articles read in advance (see previous posts) and the discussion among the students and instructor during the Advanced Analytic Techniques class at Mercyhurst University in April 2013 regarding Visual Analytics specifically. This technique was evaluated based on its overall validity, simplicity, flexibility and its ability to effectively use unstructured data.

Visual analytics is a broad category of modifiers referring to the visualization of data in a manner that simplifies comprehension and aids in the analysis and communication of results.

Strengths:

1. Provides visuals to identify patterns.
2. Can provide a tangible three-dimensional object to physically hold.
3. Provides an effective way of presenting intelligence to decision makers.
4. Adds more depth to the analytical product through references to the visual.
5. Has the potential to provide an interactive means of displaying the estimate or information.
6. Can display relationships visually that might be overlooked with other methods.
7. Output can be automated.

Weaknesses:

1. Can be difficult to make understandable for the consumer or decision-maker.
2. This modifier is not easily defined.
3. Less helpful for individuals who do not learn effectively through visuals.

Step by Step Action:
1. Be aware of your decision-maker’s needs and preferences for the product.
2. Determine which form of a visual is most suited to your product and the decision-maker.
3. Make your visuals simple and easy to understand for the decision-maker.

As a class, we were given the question of how many Jelly Belly jelly beans would fill a container 5 inches long, 2 inches tall, and 3 inches wide. We were asked three separate times to estimate how many jelly beans would fit into the container, and for each of Activities 1-3 we also recorded how confident we were in our estimate. In Activity 1, the estimate was based solely on knowing the size of the container and one's perception of the relative size of a Jelly Belly jelly bean. In Activity 2, the estimate was made by viewing a picture of the jelly beans in the container. In Activity 3, the estimate was made while holding the actual container filled with the jelly beans.
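
The Activity 1 estimate (container dimensions alone) amounts to a back-of-the-envelope volume calculation. A minimal sketch, where the per-bean volume (0.06 cubic inches) and packing fraction (0.65) are assumed illustrative values, not figures from the exercise:

```python
# Back-of-the-envelope version of the Activity 1 estimate.
# ASSUMPTIONS (not measurements from the exercise): each bean occupies
# roughly 0.06 cubic inches of solid volume, and randomly packed beans
# fill about 65% of the container's volume.

def estimate_bean_count(length_in, width_in, height_in,
                        bean_volume_in3=0.06, packing_fraction=0.65):
    """Estimate how many beans fit in a rectangular container."""
    container_volume = length_in * width_in * height_in
    return round(container_volume * packing_fraction / bean_volume_in3)

# The container from the exercise: 5 x 3 x 2 inches = 30 cubic inches.
print(estimate_bean_count(5, 3, 2))  # 325 with these assumed parameters
```

Different bean-size or packing assumptions shift the result considerably, which mirrors the uncertainty participants faced before any visual was presented.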

As a class, it was interesting to see how each individual arrived at their analysis. For some of the class, viewing the container at two different levels of visualization increased their confidence in their estimate. For other participants, confidence did not change across the activities, but their estimates changed as more visuals were presented. Moreover, it was interesting to see that during the first activity many participants drew out the probable size of the container, or gestured its size with their hands, to inform their estimate.

Summary of Findings (Green Team): Visual Analytics (4 out of 5 Stars)

Note: This post represents the synthesis of the thoughts, procedures and experiences of others as represented in the 8 articles read in advance (see previous posts) and the discussion among the students and instructor during the Advanced Analytic Techniques class at Mercyhurst University in April 2013 regarding Visual Analytics specifically. This technique was evaluated based on its overall validity, simplicity, flexibility and its ability to effectively use unstructured data.

Visual analytics is a collection of modifiers and methods used to improve the analysis of information, as well as its representation, to increase the effectiveness of the decision-making process. Both qualitative and quantitative information can be depicted through these modifiers and methods.  The visual representation of the information should increase comprehension of the material in addition to appealing to the viewer's aesthetics.

Strengths:

  • Decreases the level of ambiguity in conveying information
  • Supports people who are visually oriented
  • Gains the interest of the decision maker more readily than a written element alone
  • Has the potential to increase analytic confidence

Weaknesses:

  • Not everything has the possibility of integrating a visual component
  • Visual analytic products can introduce bias into an analysis
  • Not necessarily intuitive to an outside party, especially since those who created the visuals have far more context than those to whom the information is presented
  • Does not support people who are not visually oriented

Step by Step Action:
  1. Find a topic that is capable of having some kind of visual component
  2. Compare and contrast various ways in which a visual component can be integrated (ex: graphs, charts, information arranged in a particular fashion, etc.)
  3. Select the visual component you feel best communicates the idea, while being mindful of the consumer or decision maker the visual component is geared toward

Personal Application of Technique:
A written description of the dimensions of the container was provided to the participants.  The class was asked to estimate the number of jelly beans in the container and to record their level of confidence in that estimate.  In the next stage, a static image of the container filled with jelly beans was presented, with a dime included in the image for size comparison.  After another estimate and confidence level were recorded, the participants were given the container of jelly beans to hold and examine in a three-dimensional manner.  They were then asked to give a final estimate and confidence level.

The introduction of visual and tangible elements generally increased confidence: three of the five participants raised their confidence level. One participant did so only after being presented with the three-dimensional, interactive visual aid; the other two raised theirs after the introduction of the 2D, static image. All participants changed their estimates each time a visual was presented, getting closer to the actual number. Some changed drastically (one originally estimated 900+, then decreased the estimate to 350; the actual count was 353), while others stayed in the same ballpark but still improved their estimates.
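
Using the one participant's estimates reported above (900+, then 350, against an actual count of 353), the improvement can be quantified as percent error. This is a simple illustration of the numbers already given, not part of the original exercise:

```python
# Percent error of one participant's estimates, relative to the
# actual count of 353 beans reported in the exercise.
ACTUAL_COUNT = 353

def percent_error(estimate, actual=ACTUAL_COUNT):
    """Absolute error as a percentage of the actual count."""
    return abs(estimate - actual) / actual * 100

for label, est in [("written description only", 900),
                   ("holding the container", 350)]:
    print(f"{label}: {percent_error(est):.1f}% error")  # ~155.0% and ~0.8%
```

An estimate that was off by more than 150% with only a written description dropped to under 1% error once the participant could hold the container, which is the pattern the class discussion highlighted.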
