As my last post for the course, I want to share the fruits of my final project: Safety Net, a free, print-and-play card game designed to teach the basics of online security as it pertains to email, social media, and mobile devices.
Safety Net is a single-player card game pitting the player, armed with a
wide array of technological defenses, against a nefarious “Hacker” bent
on breaking the player’s security down and making off with valuable,
sensitive information. The deck is stacked against you! Will the Hacker
win? Or will you be able to protect yourself and your sensitive data by
quickly building up an unbreakable Safety Net?
Each card
also includes flavor text describing the relevant defensive option or
potential threat. This game offers a quick, entertaining way to
introduce someone to the basics of online security. Perfect for
students, casual computer users, and anyone curious about the risks and
defensive options involved in online security. Give it a try, or give a
copy to someone you know!
Safety Net Page on BoardGameGeek:
http://boardgamegeek.com/boardgame/124049/safety-net
Download link:
http://boardgamegeek.com/file/download/8exgyz8ial/SN_Safety_Net_Complete_Game_Bundle_v1.rar
Friday, May 25, 2012
Tuesday, May 22, 2012
Addressing Bias
While browsing ted.com today I found an interesting talk addressing bias on a cognitive, neuroscientific level. It is called the optimism bias, and I think it is related to some of the most important biases we talked about in class. Tali Sharot studies the optimism bias in London, and has found the centers of the brain that control optimism and pessimism in humans. Although optimism is directly linked to a better quality of life, in many cases it can distort the analysis that we are counted on to deliver objectively. I thought that this could start a good discussion: should we as analysts pursue medical or technological methods of reducing bias? Or would that take away our essential humanity? For more information, watch the full talk here (it's about 18 minutes long).
Thursday, May 17, 2012
Social Network Analysis and the US Intelligence Community
My project sought to employ social network analysis in order to map the relationships between the agencies that comprise the federal Intelligence Community, focusing primarily on how well the agencies interact, i.e. the tone or quality of their relationship. A number of current and retired members of the intelligence community were recruited to give their input on this matter. The results were placed into the *ORA social network software in order to generate the analysis.
The final images, which averaged the results of all participants, indicate that the Intelligence Community is a fairly cohesive social network, with neutral and positive relationships between the agencies far outweighing any negative ones, both in number and weight. Each individual participant had a different picture regarding how the agencies interacted within the community as a whole, no doubt based upon personal knowledge and experience. Overall, however, the individual matrices were not widely different, and this is evident in the final visualizations.
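The matrix-averaging step described above can be sketched in a few lines of Python. The agency names are real IC members, but the scores below are invented for illustration (the study's actual data is on the project site); each participant rates each pair of agencies from -1 (negative relationship) through 0 (neutral) to +1 (positive).

```python
agencies = ["CIA", "FBI", "NSA", "DIA"]

# Two hypothetical participants' relationship ratings (illustrative only).
participants = [
    {("CIA", "FBI"): 1, ("CIA", "NSA"): 1, ("FBI", "NSA"): 0, ("FBI", "DIA"): -1},
    {("CIA", "FBI"): 0, ("CIA", "NSA"): 1, ("FBI", "NSA"): 1, ("FBI", "DIA"): 0},
]

def average_matrix(ratings):
    """Average each pair's score across all participants."""
    totals = {}
    for rating in ratings:
        for pair, score in rating.items():
            totals[pair] = totals.get(pair, 0) + score
    return {pair: total / len(ratings) for pair, total in totals.items()}

consensus = average_matrix(participants)
# Count the tone of the averaged ties, as in the final visualizations.
positive = sum(1 for v in consensus.values() if v > 0)
negative = sum(1 for v in consensus.values() if v < 0)
```

In a real run the `consensus` dictionary would be loaded into *ORA as a weighted edge list.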
For further details, please visit:
https://sites.google.com/site/intelligencecommunitysna/
Wednesday, May 16, 2012
Application of Cost-Benefit Analysis and Risk Analysis
Using CBA and risk assessment, I was able to answer the question:
Is there an optimal level of security investment for small businesses to protect themselves from cyber-threats?
As it turns out, the solution lies in identifying, and then mitigating, the risk level of the business. I created five questions that, when answered, allow a small business owner to determine which solutions work best for him. Using those solutions, I can estimate monthly expenses such that the cost to maintain the solution does not exceed the benefits of protection.
The post has been published at lguelch.blogspot.com. The Excel spreadsheet used to define risk levels and compute the cost-benefit analysis can be accessed from: leslie.guelcher/files/cost_benefit_template.xlsx.
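Since the spreadsheet defines its own risk levels, here is only a generic sketch of the underlying cost-benefit logic, using a standard annualized-loss comparison; the business profile, probabilities, and dollar figures are all hypothetical.

```python
def annualized_loss_expectancy(incident_probability_per_year, loss_per_incident):
    """Expected yearly loss from a threat left unmitigated."""
    return incident_probability_per_year * loss_per_incident

def is_worth_buying(ale_before, ale_after, annual_cost_of_control):
    """A control is justified when the risk it removes covers its cost."""
    return (ale_before - ale_after) >= annual_cost_of_control

# Hypothetical small business: 20% yearly chance of a $10,000 breach;
# a $50/month service cuts that chance to 5%.
before = annualized_loss_expectancy(0.20, 10_000)   # expected loss: $2,000/yr
after = annualized_loss_expectancy(0.05, 10_000)    # expected loss: $500/yr
worth_it = is_worth_buying(before, after, 50 * 12)  # $1,500 saved vs. $600 spent
```

The spreadsheet performs the same comparison per risk level, with the five questions determining which probability band applies.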
Feel free to post any questions or comments about the template or conclusions here or at lguelch.
Geographic Image Analysis: Gerrymandering Case Study
Description:
Geographic image analysis is a
technique used to identify terrestrial conditions or changes to conditions over
time. It is a highly useful technique that allows users to quickly and easily
identify conditions from satellite images. This technique is used in
construction and development, city planning, environmental mapping, meteorology
and military applications. In the exercise below, the author applied geographic
image analysis to changes in the Pennsylvanian political landscape.
Strengths and Weaknesses:
Strengths
- Capable of being used in a wide variety of applications
- Very easy to quickly understand
- Multitude of programs available for this function
Weaknesses
- Data can be misinterpreted
- Geographic images are often useless without additional contextual information
- Some programs, like ArcGIS, are both expensive and time-consuming to learn
How-To:
1. Identify the geographic region you want to analyze in your geographic image software
2. Gather any historical and/or contextual information needed to interpret the data
3. Apply outside data to satellite images (e.g., image overlays, photographs, transportation networks)
4. Interpret the geographic images in context with the historical and/or outside data needed to make sense of the images
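A toy sketch of steps 3 and 4, assuming the satellite imagery has already been reduced to a population grid: overlay a district-assignment grid on the population grid and total the population per district. Real workflows would use GIS software such as ArcGIS; the 3x3 grids here are invented.

```python
# Invented population counts per grid cell.
population = [
    [120, 80, 10],
    [200, 90, 15],
    [180, 60, 5],
]
# Which district each cell belongs to after redistricting (invented).
district = [
    [1, 1, 2],
    [1, 2, 2],
    [1, 1, 2],
]

def population_by_district(pop_grid, district_grid):
    """Overlay the district map on the population grid and sum per district."""
    totals = {}
    for pop_row, dist_row in zip(pop_grid, district_grid):
        for pop, dist in zip(pop_row, dist_row):
            totals[dist] = totals.get(dist, 0) + pop
    return totals

totals = population_by_district(population, district)
```

Comparing `totals` under an old and a proposed district grid is exactly the kind of before/after interpretation step 4 calls for.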
Personal Application of Technique:
For this exercise I decided to
apply this technique to the Electoral College. I thought having the raw data of
satellite imagery combined with the nuanced and heavy artificiality of the
congressional districting process could highlight an element of absurdity in our nation’s legislative system. Essentially, I
planned to investigate the historical and probable future employment of
gerrymandering in the redistricting process.
Preliminary Research:
For this I needed a few general things:
1. An understanding of redistricting policy in general
2. Old district maps to compare to the proposed district maps
3. Population density maps from the census
4. Age and political demographic maps
5. County voting records of past elections
After scouring
various publications to try to make sense of the concept of gerrymandering, I
came across this general description: “The dividing of a state into election
districts so as to give a political party the majority in many districts…” This
suffices for a definition in a general way, and is often recognized as an
acceptable definition in popular culture. However, how does a political party
employ this tactic? There are three methods:
1. “Cracking” - the practice of splitting voters of the opposing political party across two or more districts in which they will be a minority.
2. “Stacking” - the practice of gathering as many of your party's voters as possible into a district, no matter how far apart they are, in order to outnumber opposition voters.
3. “Packing” - the practice of pushing as many opposition voters as possible into a single district in order to quarantine them from the rest of the electorate.
By applying
these three general types of gerrymandering to the historical election results,
historical district lines and demographic information, one can observe
interesting election trends. However, these are not the only tactics employed
in the art of gerrymandering. Legislators with the power to redistrict
congressional lines can also:
4. Redraw districts so that an incumbent congressman's house is no longer in his own district, thus making him ineligible to run for congress in that district.
5. Redraw districts to put the homes of two congressmen from the opposing political party in the same district, thus ensuring a costly primary campaign.
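The seat-flipping effect of packing and cracking can be illustrated with a toy example: the same six wards (names and party vote margins invented) grouped into three districts two different ways, changing the seat count without a single voter changing sides.

```python
# Positive margin = net votes for Party A in that ward (invented numbers).
wards = {"w1": 400, "w2": 350, "w3": -100, "w4": -150, "w5": 300, "w6": -900}

def seats_for_a(plan):
    """Count districts where Party A's summed margin is positive."""
    return sum(1 for district in plan if sum(wards[w] for w in district) > 0)

# "Packed" plan: Party A's strongest wards are quarantined together,
# so A wins one district by a landslide and loses the other two.
packed = [["w1", "w2", "w5"], ["w3", "w4"], ["w6"]]

# "Cracked" plan: the opposition stronghold w6 is isolated and A's
# strength is spread efficiently, so A wins two of three.
cracked = [["w1", "w3"], ["w2", "w4"], ["w5", "w6"]]
```

Same electorate, same total votes: one seat under the first plan, two under the second.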
Historical Case Studies:
The Case of Robert A. Borski Jr.
After the 2000
census Pennsylvania was due to lose two districts because of population
stagnation. Robert Borski Jr. was elected to Pennsylvania’s 3rd District in east Philadelphia in 1982. The district remained
intact for his 20-year tenure until 2002, when his district abruptly changed shape. Borski's home was drawn into Pennsylvania's 13th District, where incumbent Democratic congressman Joe Hoeffel resided. Borski chose to retire rather than force a primary against a fellow incumbent.
The Case of Tim Holden
At the same
time that Robert Borski found himself in a new district, Tim Holden found his
district changing. Tim Holden was elected to central Pennsylvania’s 6th District in 1990. He held the seat for a decade, winning
congressional elections by comfortable margins. After 2000, Pennsylvania lost
two districts due to population change. Holden’s 6th district was folded into the neighboring 17th district, controlled then by George Gekas. To many this was
blatant gerrymandering, as it brought in more voters from the heavily
conservative Harrisburg area to dilute the voting strength of the more liberal
voters in the Pottsville area. Gekas also retained 60% of his former district.
However, the plan backfired and Tim Holden continues to serve the 17th District to this day.
The Case of Frank Mascara
Frank Mascara
won southwestern Pennsylvania’s 20th District
in the 1994 election. He continued to win the 4 ensuing elections in the 20th District with ease. However, in 2002, Mascara’s district was
redrawn and renamed the 18th
District. This district
contained much more affluent upper middle class areas south of Pittsburgh,
likely giving an upper hand to any future Republican challenger. However, this
was of little concern to Mascara because his house was drawn out of the new
district by only a few yards, and his residence was placed in the same district
as long-serving Democrat Jack Murtha, forcing a lengthy primary battle which
resulted in Murtha’s reelection.
Post-2010 Census Case Studies:
The Case of Altmire and Critz
The 2010
census required Pennsylvania to lose two congressional districts due to
population changes in the region. However, Pennsylvania’s 4th District (controlled by Jason Altmire) and 12th District (controlled by Mark Critz) were not eliminated, but
instead were geographically adjusted. District 4 of the north Pittsburgh
suburbs moved to south central PA, encompassing York and Adams counties, making
Altmire ineligible to run for congress in the district. District 12 of east and
south Pittsburgh shifted drastically to encompass much of Altmire’s former
district, including his home. Because of this change, Altmire and Critz, both of the Democratic Party, were forced into a primary against each other.
The Case of Erie County
Pennsylvania’s
3rd District, currently controlled by
Republican Mike Kelly, is undergoing moderate changes after the 2010 census.
Although the region hasn’t changed much in terms of population fluctuation, the
district will change shape for the 2012 election. The new district will extend slightly further south and will not reach as far to the east. Erie County, the only county in the district that gave more than 50% of its votes to Barack Obama in 2008, will be cut roughly in half, its voters split between Districts 3 and 5. District 3 is already a Republican stronghold and is likely to become much stronger, as the division dilutes the influence of Erie's Democratic voters in future elections.
Old 3rd District:
New 3rd District (2012 Election):
For Further Information:
Altmire v. Critz 2012 Primary Election
http://www.washingtonpost.com/blogs/the-fix/post/mark-critz-defeats-jason-altmire-matt-cartwright-beats-tim-holden/2012/04/24/gIQAvJjmfT_blog.html
Cook Political Partisanship Voting Map
http://cookpolitical.com/node/4201
Demographic Information Regarding 2008 Election
http://www.juiceanalytics.com
Erie County Gerrymandering Case
http://www.eriedems.com/node/1445
Frank Mascara 2002 Election
http://www.yuricareport.com/Campaign2004/NYerGreatElectionGrabGerrymandering.html
Frey, W., & Teixeira, R. (2008). The political geography of Pennsylvania: Not another rust belt state.
Blueprint for American Prosperity, Retrieved from
http://www.brookings.edu/papers/2008/04_political_demographics_frey_teixeira.aspx
Historical Presidential Election Data
http://dsl.richmond.edu/voting/
Monmonier, M. (2001). Bushmanders and bullwinkles: How politicians manipulate electronic maps and census data to win elections. Chicago, IL: University Of Chicago Press.
Pennsylvania Population Density Maps
http://www2.census.gov/geo/maps/dc10_thematic/2010_Profile/2010_Profile_Map_Pennsylvania.pdf
Pennsylvania Redistricting 2012 Maps
http://www.redistricting.state.pa.us/Maps/index.cfm
Tim Holden 2002 Election
http://www.pbs.org/newshour/bb/politics/july-dec02/pennsylvania_8-30.html
Wikipedia (Robert A. Borski Jr.)
http://en.wikipedia.org/wiki/Robert_A._Borski,_Jr.
Applications of Game Theoretic Reasoning to Basketball Situations
Introduction
Game theory is a specific model of thinking that uses
precise logical reasoning to solve problems. It is used to help understand
situations in which decision-makers interact. Osborne (2000) provides an
excellent introduction to game theory, focused more on the ability to follow
logical processes than mathematical reasoning. Game theory involves using
models to simplify and transform reality into an abstraction which is more easily
analyzed or understood. Most models are based on a set of actions available to
decision makers, and assume that decision makers are rational actors, meaning
that they will always choose the most preferred action. Game theory works best
with a finite action space and a specific set of rules for the environment.
Basketball is an excellent example of a zero-sum game, with
two actors, or teams, both pursuing the same goal, victory. This is zero-sum
because one team’s victory means the other team’s failure. In these games, the
coach’s only objective is to win, but he or she must juggle a number of factors
to do so. Certain situations within basketball provide great opportunities for
game theoretic analysis. In this paper, two of these will be evaluated. In the
first, I look at the classic issue of foul trouble. Conventional wisdom is that
players should be immediately benched, but using game theoretic principles,
this assumption can be questioned. In the second situation, I modeled an end of
game situation where one team trails another by two points with less than 20
seconds left in the game. This time crunch shrinks the options available to
either coach, and forces them to choose to shoot or defend, respectively, the
two or three point shot. I assigned payoffs and calculated a mixed strategy
equilibrium for both the offensive and defensive teams.
Situation 1: Foul Trouble
Figure 1: Total threshold fouls and yanks per team, 2006-2007 to 2009-2010. Teams are consistent regarding benching players in foul trouble.
Star players often give the team a better chance to win, and
their playing time should be maximized. One factor which reduces the playing
time is fouls. In the National Basketball Association (NBA), a player can
commit at most six personal fouls before he is disqualified from the game.
Referees govern fouls according to the rules set down by the NBA. Coaches may
sit their star players for an extended period of time if they feel that their
number of fouls puts them at risk for disqualification from play. Maymin et al.
discuss this problem from a resource allocation standpoint that addresses
strategic idling. “The advantage of yanking is that the starter will likely be
able to play at the crucial end of the game but the disadvantage is that he may
not play as many minutes as he otherwise would. On the other hand, if a starter
is kept in the game, he may not play at his full potential, as the opposing
team tries to induce him to commit another foul” (Maymin, Maymin, & Shen, 2012). Conventional wisdom
says that the threshold for acceptable fouls is the quarter plus one: i.e. two
fouls in the first quarter, three in the second, four in the third, or five in
the fourth. As Figure 1 shows, NBA teams are very consistent when applying this
rule.
This is a situation that does not
apply to a strict interpretation of game theory, as there is only one actor,
the coach, who has two choices: sit the star or play the star with foul
trouble. Although the observer may believe that wins are the only thing that
matters, they are not. The coach must balance the desire to win with maximizing the star player's minutes and keeping the fans happy, in such a way that the team, the coach, and the star are all satisfied. When handling foul trouble, the
decision to bench the star may be consistent with winning the game, but as
Weinstein observes, voluntarily benching the star for foul trouble is simply
enacting the penalty the coach wishes to avoid.
This is the one situation that depends more on the quality of player
(disparity between starter and sub) available to the coach; we will look at
some specific examples of NBA teams.
In 2011, the average number of foul
outs per game was 0.3153; I used this number as a proxy to estimate the
possibility of a player fouling out given a threshold foul. Goldman and Rao
show that the value of a point increases as the game wears on. In order to
compensate for this phenomenon I gave quarters 1 and 2 a normal weight,
weighted the 3rd quarter 1.5 times, and gave the fourth quarter 2
times the importance of the first. In order to evaluate the effectiveness of
individual players, I used Wins Produced per 48 minutes, available at www.theNBAgeek.com/teams.
At any point in the game, a star player is in one of two states: not in threshold foul trouble, or in threshold foul trouble. Once a team’s star player enters
the threshold foul trouble state, the coach has two choices. The payoffs of
these are shown below for a number of different teams. This functions as an iterated payoff matrix,
where in each quarter, the coach should maximize his expected value.
Expected Value = P × Q_weight × WP48_player
where P is the probability the player is on the floor (1 − 0.3153 for the starter, i.e. the chance he does not foul out; 0.3153 for the substitute, who plays only if the starter would have fouled out), Q_weight is the quarter weight, and WP48 is Wins Produced per 48 minutes.
Boston Celtics: Paul Pierce vs. Mickael Pietrus
Quarters 1 and 2:
EV_Pierce = (1 − 0.3153)(1)(0.151) = 0.1034
EV_Pietrus = (0.3153)(1)(0.053) = 0.0167
Quarter 3:
EV_Pierce = (1 − 0.3153)(1.5)(0.151) = 0.1551
EV_Pietrus = (0.3153)(1.5)(0.053) = 0.0251
Quarter 4:
EV_Pierce = (1 − 0.3153)(2)(0.151) = 0.2068
EV_Pietrus = (0.3153)(2)(0.053) = 0.0334
The fourth quarter is the only time where a player should
actually be in a position to foul out; for the rest of the evaluations only the
fourth quarter was examined.
Oklahoma City Thunder: Kevin Durant vs. James Harden
EV_Durant = (1 − 0.3153)(2)(0.226) = 0.3095
EV_Harden = (0.3153)(2)(0.263) = 0.1658
Philadelphia 76ers: Andre Iguodala vs. Evan Turner
EV_Iguodala = (1 − 0.3153)(2)(0.255) = 0.3492
EV_Turner = (0.3153)(2)(0.111) = 0.0700
Los Angeles Clippers: Chris Paul vs. Mo Williams/Eric Bledsoe
EV_Paul = (1 − 0.3153)(2)(0.313) = 0.4286
EV_Williams/Bledsoe = (0.3153)(2)((0.024 + 0.040)/2) = 0.0202
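These per-quarter comparisons can be reproduced with a short script, using the league-average foul-out rate as the probability proxy and the WP48 figures quoted above:

```python
P_FOUL_OUT = 0.3153                            # avg. foul-outs per game, 2011
Q_WEIGHTS = {1: 1.0, 2: 1.0, 3: 1.5, 4: 2.0}   # later points count for more

def ev_keep_starter(wp48_starter, quarter):
    """Value of leaving the starter in: realized if he does not foul out."""
    return (1 - P_FOUL_OUT) * Q_WEIGHTS[quarter] * wp48_starter

def ev_bench_starter(wp48_sub, quarter):
    """Value of the substitute: realized only if the starter would foul out."""
    return P_FOUL_OUT * Q_WEIGHTS[quarter] * wp48_sub

# Fourth-quarter Celtics example: Pierce (WP48 0.151) vs. Pietrus (0.053).
q4_pierce = ev_keep_starter(0.151, 4)
q4_pietrus = ev_bench_starter(0.053, 4)
```

Swapping in any starter/substitute WP48 pair reproduces the other teams' numbers.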
Implication:
Figure 3: Since 1987, the number of foul outs per game has consistently trended downward.
Although it is unclear whether
foul trouble drives performance or vice versa, it is clear that coaches are
being too cautious with their players regarding foul trouble. The benefits of
having your star player in the game outweigh the possible drawbacks of his
fouling out. This is especially clear because of the fourth quarter scaling.
Although in the first three quarters, the drop off to the bench player may not
be as severe, in the fourth quarter, these differences are magnified by the
heightened value of a point as the amount of time left in a game nears
zero. This is even true for teams where
the disparity in talent between starter and substitute is not drastic. For
example, with the Oklahoma City Thunder, should Kevin Durant get into foul
trouble in the fourth, the payoff of leaving him in to finish the game is much
higher than his value on the bench. It just so happens that, for the Thunder,
James Harden’s play has been great enough that the drop off, should Durant foul
out, is not particularly problematic.
These results are consistent with
findings from Moskowitz and Wertheim (2011), who found that stars actually play
better in the fourth quarter with foul trouble. However, this is contrary to
Maymin et al., who found that teams generally perform better if foul-troubled
starters are benched. Both Maymin et al. and I agree that benching a player in
foul trouble is more beneficial in the early quarters, mostly because early in
the game, “benching a player preserves ‘option value’ since the coach can reinsert a fresh, non-foul-plagued starter back into the game in the fourth quarter” (Maymin, Maymin, & Shen, 2012).
Situation 2: Defend the Two or Three Pointer
Often in late-game situations, one team finds itself up by two points with the shot clock turned off. The trailing, offensive team must decide whether to go for the tie and hope for overtime, or to shoot the three and win the game in regulation. Similarly, the defending team must decide which to defend: the inside or the outside shot. For our purposes, the probability of winning in overtime is assumed to be essentially a coin-flip, 50%, as there are too many variables to consider in our theoretical setting (Chow, Miller, Nzima, & Winder, 2012).
In this situation,
most often the offensive team’s coach will call a timeout in order to set up
his play. Simultaneously, the defensive coach must decide the best way to set
his defense in order to ensure the win. This functions as a simultaneous game, where
both coaches make their decisions without knowledge of the other’s strategy (Osborne, 2000).
Instead of using the data for actual players and teams, I used league averages
for my base assumptions regarding the game. According to Weill (2011) tight
defense drops expected shooting by 12%. Also, I found data for the effective
shooting percentages from Peterson (n.d.) that I feel is representative of the open
FG% (although all teams track contested and open FG%, this information is not
yet available to the casual fan).
League-wide 2 point FG% in 2011-2012: 47.7%
League-wide 3 point FG% in 2011-2012: 34.8%
Open 2 pt. FG% (from eFG%): 62.5% (Peterson, n.d.)
Open 3 pt. FG%: 50.0% (Weill, 2011)
Contested 2 pt. FG%: 35.7% (Weill, 2011)
Contested 3 pt. FG%: 22.8% (Weill, 2011)
The simultaneous game offers no dominant strategy for either
team. The teams should therefore employ a mixed strategy in order to remain
unpredictable. In order to calculate the right balance of two and three point
shots, the Mixed Strategy Equilibrium was calculated for each team.
For offense, let q equal the percentage of time the defending team defends the three. The expected payoff to the shooter is q × 0.228 + (1 − q) × 0.5 when shooting a three, and q × 0.312 + (1 − q) × 0.178 when shooting a two. The offensive team should shoot the three if:
q × 0.228 + (1 − q) × 0.5 > q × 0.312 + (1 − q) × 0.178
This simplifies to q < 0.793, meaning the offensive team should always shoot the three if the defending team defends against the three-point shot less than 79.3% of the time. The expected payoff for shooting either a two or a three in this case is 0.284.
For defense, let p equal the percentage of time the offensive team shoots the three. The expected payoff to the defensive team is p × 0.772 + (1 − p) × 0.688 when defending a three, and p × 0.5 + (1 − p) × 0.822 when defending a two. The defensive team should defend the three if:
p × 0.772 + (1 − p) × 0.688 > p × 0.5 + (1 − p) × 0.822
This simplifies to p
> 0.330, meaning the defensive team should always defend the three if the
offensive team shoots the three point shot more than 33.0% of the time. The
expected payoff for defending either a two or a three in this case is 0.716.
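The two indifference calculations can be checked with a short script built on the same win probabilities (offense's options as rows; the dictionary keys record what the defense guards):

```python
SHOOT3 = {"guard3": 0.228, "guard2": 0.500}  # offense win prob. on a three
SHOOT2 = {"guard3": 0.312, "guard2": 0.178}  # offense win prob. on a two

def defense_mix_q():
    """q (how often the defense guards the three) that leaves the
    offense indifferent between shooting the three and the two."""
    num = SHOOT3["guard2"] - SHOOT2["guard2"]        # 0.322
    den = num + SHOOT2["guard3"] - SHOOT3["guard3"]  # 0.406
    return num / den

def offense_mix_p():
    """p (how often the offense shoots the three) that leaves the
    defense indifferent between guarding the three and the two."""
    num = SHOOT2["guard3"] - SHOOT2["guard2"]        # 0.134
    den = num + SHOOT3["guard2"] - SHOOT3["guard3"]  # 0.406
    return num / den

q = defense_mix_q()
p = offense_mix_p()
offense_value = q * SHOOT3["guard3"] + (1 - q) * SHOOT3["guard2"]
defense_value = 1 - offense_value  # constant-sum: the payoffs add to 1
```

Running this recovers the 0.793 and 0.330 thresholds and the 0.284/0.716 equilibrium payoffs quoted above.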
Implication: Simply
put, it is likely in the best interests of the losing team to shoot the three almost
all the time. As long as the defending (winning) team guards the three pointer
less than about 80% of the time, the losing team should seek to end the game in
regulation every time. Similarly, the team that is ahead should fear the three
pointer much more than overtime. As long as the team that is losing shoots the
three at least a third of the time, the defending team should always defend the
three.
Unfortunately, often finding the best three point shot
involves working the ball around and having someone other than the team’s
superstar take the shot. In today’s NBA Culture, Hero Ball (Abbott, 2012) has often taken the place of team
basketball in crunch time. The problem with this is that isolation plays are
good for only 0.78 points per possession (ppp), as opposed to off-the-ball cuts
(1.18 ppp) or transition plays (1.12 ppp). When star players do not take the
last shot, or when role players miss wide open opportunities, the star is
blamed for not taking the shot. However, this analysis shows that the three
pointer, especially if the team is able to get off an open look, dramatically
improves the team’s chances of winning the game.
Works Cited
Abbott, H. (2012). Hero Ball, or how NBA teams fail by giving the ball to money players in crunch time. ESPN The Magazine, March 19, 2012. Accessed at: http://espn.go.com/nba/story/_/id/7649571/nba-kobe-bryant-not-money-think-espn-magazine
Chow, T., Miller, K., Nzima, S., & Winder, S. (2012). Game Theory (MBA 217) Final Paper. University of California, Berkeley. Accessed at: http://faculty.haas.berkeley.edu/rjmorgan/mba211/Chow%20Heavy%20Industries%20Final%20Project.pdf
Feldman, D. (2010). NBA Players are Fouling Out Less Often. Detroit: PistonPowered.com. Accessed at: http://www.pistonpowered.com/2010/12/nba-players-are-fouling-out-less-often-and-other-interesting-facts-you-didnt-think-you-wanted-to-know-about-fouling-out/
Goldman, M., & Rao, J. M. (2012). Effort vs. Concentration: The Asymmetric Impact of Pressure on NBA Performance. Boston: MIT Sloan Sports Analytics Conference.
Maymin, A., Maymin, P., & Shen, E. (2012). How Much Trouble is Early Foul Trouble? Strategically Idling Resources in the NBA. Boston: MIT Sloan Sports Analytics Conference.
Moskowitz, T., & Wertheim, L. (2011). Scorecasting: The Hidden Influences Behind How Sports are Played and Games are Won. New York, NY: Crown Archetype.
Osborne, M. (2000). An Introduction to Game Theory. Oxford: Oxford University Press.
Peterson, E. (n.d.). Open/Contested Shots. 82games.com. Accessed at: http://www.82games.com/saccon.htm
Weill, S. (2011). The Importance of Being Open: What Optical Tracking Data Says About NBA Field Goal Shooting. Boston: MIT Sloan Sports Analytics Conference.
Monday, May 7, 2012
Feeling the future
I found an interesting article slated to be printed in the Journal of Personality and Social Psychology dealing with the science of psi. The article caused some uproar in the scientific community, because many experts in the field of psychology deny the existence of psi and extra sensory perception. Professor Daryl Bem, of Cornell University, New York, said the results of nine experiments he had carried out on students over the past decade suggested humans could accurately predict random events.
In one test, 100 students were presented with a computer screen showing two curtains. They were told an image, which could be erotic, lay behind one curtain and they should guess which.
While the three per cent difference was small, Prof Bem said students consistently outperformed the average when predicting the location of erotic images.
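To get a rough sense of how a small edge over chance becomes statistically detectable, a normal-approximation binomial check can be sketched; the counts below are invented for illustration, not Bem's actual data.

```python
import math

def z_score(hits, trials, p_chance=0.5):
    """Normal approximation to the binomial: how many standard
    deviations the observed hit count sits above pure chance."""
    expected = trials * p_chance
    sd = math.sqrt(trials * p_chance * (1 - p_chance))
    return (hits - expected) / sd

# Illustrative only: a 53% hit rate over 3,600 binary guesses sits
# 3.6 standard deviations above the 50% chance baseline.
z = z_score(1908, 3600)  # 1908 hits = 53% of 3600
```

The same three-point edge over far fewer trials would be well within noise, which is why replication with large samples matters in this debate.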
Thursday, May 3, 2012
Summary of Findings (White Team): Brainstorming (4.5 out of 5 stars)
Note: This post represents the synthesis of the thoughts, procedures and experiences of others as represented in the articles read in advance (see previous posts) and the discussion among the students and instructor during the Advanced Analytic Techniques class at Mercyhurst University on 3 May 2012 regarding brainstorming specifically. This technique was evaluated based on its overall validity, simplicity, flexibility, its ability to effectively use unstructured data, and its ease of communication to a decision maker.
Description:
Brainstorming is a term for a wide array of methods used to generate ideas from participants working in a group. If performed with proper structure to avoid groupthink, brainstorming can be useful in promoting creativity and divergent thinking. There are a variety of techniques for brainstorming, including open group discussion, nominal groups, and round-robin.
Open group discussion is the most common form of brainstorming, but research shows that it is not only less effective than other techniques, but can be detrimental to idea generation by suppressing suggestions from certain individuals in a group setting. Nominal group brainstorming, in which individuals generate ideas separately and compile a list of the results, has been shown to be significantly more effective in comparative studies.
However, Mercyhurst alumna Shannon Ferrucci discovered in her thesis that performing only divergent thinking actually decreases forecasting accuracy. According to Kris Wheaton's interpretation of Ferrucci's thesis, it is important to take the process a step further and perform convergent thinking methods, such as mind mapping, to group, prioritize and filter the ideas.
Strengths:
- Useful first step in creating a mental model of a situation.
- Good for identifying new ideas and capturing individuals’ previous ideas
- Can be used to identify relationships between those ideas.
- Good for summarizing the ideas of individuals as well as the group as a whole.
- Can bring unique or unexpected results to the attention of the team.
- Highly flexible, and can address almost any situation.
Weaknesses:
- Some methods, particularly open discussion brainstorming, can result in suppressed ideas and groupthink.
- Tends to focus on quantity of ideas over quality, thus requiring further review of the resulting ideas.
- In-groups can create or strengthen biases.
- Most people tend to focus on the divergent half of brainstorming and leave out the follow-up convergent summarizing.
- Divergent thinking alone reduces forecasting accuracy and increases confidence, undermining the utility of generated ideas unless additional consideration is applied.
How-To:
While many brainstorming techniques exist, the following is one example of how to go about it in a professional, analytic setting:
1. Define the problem or situation under consideration.
2. Have each member of the team individually identify what he or she already knows about the situation, and what the team needs to know about the situation.
3. Have each individual separately generate a list of relevant ideas.
4. Share these results with the group, making sure to get the input of each member.
5. Identify which ideas are important to a large number of individuals on the team, as well as ideas which were not common but get enthusiastic agreement from the group. Do not throw out any ideas, but allow the group to identify the most important/useful; these ideas are worth noting as important to the group as a whole.
6. Have participants individually group, prioritize, and filter the list of ideas/concepts according to their own organizational model of the situation. Mind-mapping software such as that available on mindmeister.com can be useful for this step.
7. Share the individual organizational models as a group and, much like in step 5, note aspects of the models which are common to many individuals or which get significant positive reaction from the group when shared. Also identify areas of disagreement between models.
Steps 6 and 7 help illuminate the relationships between ideas, providing more useful, nuanced information than a simple list of ideas. This information can help a group decide on a good path moving forward in dealing with the situation.
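The compile-and-prioritize portion of this process (steps 3 through 5) can even be mechanized in a simple script. The sketch below is not from the post; it is a minimal illustration, with made-up example ideas, of merging nominal-group lists and flagging the ideas that multiple participants proposed independently:

```python
from collections import Counter

def compile_nominal_lists(idea_lists, common_threshold=2):
    """Merge individual idea lists from nominal-group brainstorming.

    Returns (merged, common): all ideas sorted by how many participants
    proposed them, and the subset proposed by at least `common_threshold`
    participants (the "important to a large number" ideas in step 5).
    No ideas are thrown out; the full merged list is always kept.
    """
    counts = Counter()
    for ideas in idea_lists:
        # Count each idea at most once per participant,
        # normalizing case and whitespace so duplicates match.
        for idea in {i.strip().lower() for i in ideas}:
            counts[idea] += 1
    merged = sorted(counts.items(), key=lambda kv: -kv[1])
    common = [idea for idea, n in merged if n >= common_threshold]
    return merged, common

# Hypothetical lists from three participants
lists = [
    ["Strong passwords", "two-factor auth", "VPN"],
    ["strong passwords", "backups"],
    ["VPN", "strong passwords", "firewall"],
]
merged, common = compile_nominal_lists(lists)
print(common)  # ideas proposed by two or more participants
```

This only automates the tallying; the group discussion, the enthusiastic-agreement check, and the convergent mind-mapping in steps 6 and 7 still require human judgment.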
Personal Application of Technique:
For this exercise, our class divided into two groups to test and compare two simple brainstorming methods. One group practiced open-discussion brainstorming, which is the most common and least structured form of brainstorming. The second group practiced nominal group brainstorming, in which each participant generated their own list of ideas without group discussion and then the group compiled the individual lists into one large final list. Each group was given 10 minutes to generate a final list of ideas. The subject both groups were brainstorming was “What features would you like to see in the new building they’re opening next year for this program?” The end result was that the open discussion group had 34 unique ideas while the nominal group generated over 100, displaying some of the flaws and idea-suppressing effects of open discussion brainstorming.
Rating: 4.5 of 5 Intelligence Stars*
*If brainstorming is performed in a fashion incorporating convergent and divergent thinking that also mitigates the risks of groupthink biases and the resulting idea suppression. Unstructured open-group discussion would receive a lower score.
Further Resources:
Wikipedia: Brainstorming
Wikipedia: Nominal Group Technique
Brainstorming Techniques
Sources and Methods post about Shannon Ferrucci's thesis, Explicit Conceptual Models: Synthesizing Divergent and Convergent Thinking
Articles critical of brainstorming:
Brainstorming: An idea past its time
Jonah Lehrer's New Yorker article: Groupthink: The brainstorming myth
Jonah Lehrer: Brainstorming doesn't work, but criticism does
An article in support of brainstorming:
In defense of Brainstorming: against Lehrer's New Yorker article
Videos:
How to brainstorm with mind maps
Imagine: How creativity works