Saturday, October 17, 2015

Recommendations from an Associative Memory Perspective: Making Group Brainstorming More Effective



Making Group Brainstorming More Effective: Recommendations from an Associative Memory Perspective
Vincent R. Brown and Paul B. Paulus
Department of Psychology, Hofstra University, Hempstead, New York (V.R.B.), and Department of Psychology, University of Texas at Arlington, Arlington, Texas (P.B.P.) http://cdp.sagepub.com/content/11/6/208.full.pdf+html

Introduction
The literature on group brainstorming has found it to be less effective than individual brainstorming. The authors begin by asserting that the enthusiasm for collective work may not always be justified. The following is some of the evidence the authors provide:

·         Controlled studies of idea sharing in groups have shown that groups often overestimate their effectiveness (Paulus, Larey, & Ortega, 1995).

·         Experiments comparing interactive brainstorming groups with sets of individuals who do not interact in performing the same task have found that groups generate fewer ideas and that group members exhibit reduced motivation and do not fully share unique information (e.g., Mullen, Johnson, & Salas, 1991).

·         The strongest inhibitory effect of groups may be production blocking, which is a reduction in productivity due to the fact that group members must take turns in describing their ideas (Diehl & Stroebe, 1991).

Group Creativity
The authors state that the most evident critique of group brainstorming is that it hinders creativity. In addition, “Most research on creativity has examined individual creativity because it is typically seen as a personal trait or skill.” The authors argue that this individual focus overlooks the collaborative settings in which much creative work actually occurs: “much creative work requires collaboration of people with diverse sets of knowledge and skills.” This raises their central question: “How can such groups overcome the inevitable liabilities of group interaction to reach their creative potential and is it possible to demonstrate that group interaction can lead to enhanced creativity?”

Intuitively, the cognitive benefits of brainstorming in a group seem clear: People believe that they come up with ideas in a group that they would not have thought of on their own. The potential for mutual stimulation of ideas is one of the reasons for the popularity of group brainstorming. The authors provide a model of ideational creativity in brainstorming, referred to as the “Semantic Networks and an Associative Memory Model of Group Brainstorming.” To use the semantic network representation as a basis for exploring group brainstorming, many details need to be specified (see the article for details). Despite the number of steps and requirements, this is probably the kind of rigor and structure brainstorming needs.

Enhancing Group brainstorming: Three brainstorming procedures that appear promising
The authors studied three brainstorming procedures that appear promising for theoretical reasons and continue to garner empirical support: combined individual and group brainstorming, brainwriting, and electronic brainstorming.

·         Individual and Group Brainstorming: Combining group and solitary brainstorming
·         Brainwriting: Having group brainstormers interact by writing instead of speaking
·         Electronic Brainstorming: Using networked computers on which individuals type their ideas and read the ideas of others.

Summary
In contrast to the literature, the authors argue, “A cognitive perspective suggests that group brainstorming could be an effective technique for generating creative ideas.” Evidence from computer simulations of an associative memory model of idea generation in groups suggests that teams “have the potential to generate ideas that individuals brainstorming alone are less likely to generate.” Moreover, diverse teams are most likely to benefit from the social exchange of ideas. The authors further add, “Although face-to-face interaction is seen as a natural modality for group interaction, using writing or computers can enhance the exchange of ideas.”

Critique
The authors’ three recommendations on idea sharing (exchanging ideas by means of writing or computers, alternating solitary and group brainstorming, and using diverse groups) appear to be useful approaches for enhancing group brainstorming. Although these ideas won’t curtail all groupthink, brainstorming is a starting point for solving complex problems, provided it is done properly. One downfall of the technique is that it does not have a specific set of rules or steps. This could also be viewed as an advantage, however, given its applicability across domains and problem sets. Regardless of the problem set, individual or group brainstorming will always be practiced, so the technique should be practiced with the aim of doing it effectively and efficiently. This article provides enough evidence to convince readers to reconsider the elements of their brainstorming sessions; however, I am curious as to what constitutes an effective brainstorming session and whether this could even be measured.

Brainstorming Pitfalls and Best Practices

By: Chauncey E. Wilson
Source: http://www.umsl.edu/~sauterv/analysis/brainstorming.pdf

Summary:

The main points of this article are the pitfalls of brainstorming and how to carry out brainstorming most successfully. It also discusses a technique the author refers to as "brainwriting," which he says is a good complement to group brainstorming because it increases the quantity of ideas that are voiced. The article also notes that brainstorming is often thought of as a technique anyone can do successfully, when in reality deferring judgment can be difficult and the emphasis on quantity can easily be derailed, and both are important aspects of brainstorming.

The first aspect mentioned in this article is diversity among the participants. According to the article, many references to brainstorming say that diversity within a group is important because it leads to many different ideas, but the comfort and cohesion of the group must also be good. The author points to the lack of comfort and cohesion that arises when high-level managers or strangers are invited to brainstorm with junior participants. The best practices with regard to diversity are to invite people from different groups who are known to each other, introduce any new people, and not invite anyone who is feared by others (i.e., volatile people).

The next aspect discussed is serial speaking and production blocking. The idea is that the number of ideas expressed can be hindered when people express their ideas with stories or too much explanation. The author says that when a person speaks (especially for too long), they block the production of other ideas because other participants might forget what they were thinking or decide their idea isn't good enough. The best practices here are to use an experienced facilitator, tell members at the beginning to keep their responses brief, enforce that one person speaks at a time, provide note cards for participants, and encourage those with an idea to raise their hands.

Another aspect is competition, which the author states has the potential to increase the quantity of ideas generated. A study by Paulus and Dzindolet found that when groups were given goals about twice as high as typical performance (a goal of 100 ideas when 50-60 is typical), the number of ideas generated increased by about 40% compared to groups not given an aggressive goal. The best practices are to set an explicit and aggressive goal for the number of ideas to be generated, number each idea, motivate participants, provide feedback about the quantity generated in previous sessions, and make all ideas visible and legible so they can serve as catalysts for more ideas.

Next, preparation for brainstorming sessions is discussed. If participants do some type of pre-work (preparation), more ideas can be generated. Participants might also consider "warm-ups" in which they are exposed to stimuli related to their topic (e.g., visiting a toy store before brainstorming a new toy design). According to the article, the best practices are asking participants to spend a set amount of time brainstorming individually and doing "warm-up" exercises.

The last aspect discussed is the use of "brainwriting" as a complement. This method involves participants writing down all of their ideas on paper instead of saying them out loud as in traditional brainstorming. In one variant, after writing down their ideas, each paper is passed to the next person or collected and redistributed to other participants. The participants then silently read all the ideas and add to them without discussing anything. The process is repeated several times and the results are posted for all to see. The article states that the benefits include a decrease in blocking effects. The best practices are to consider brainwriting as an alternative to brainstorming when there is contention in the group or the culture doesn't allow for "wild and crazy" ideas, and to use the technique when time is limited or the group is large.

Critique:

The only thing that really bothered me about this article is the emphasis on "quantity not quality," which makes me question how valuable the technique is when applied to intelligence studies. I could see how it would be valuable in the very beginning stages of a project, but I do not think it would serve well as a method later in a study. Quality is very important to the intelligence field because analysts need a certain level of confidence and validity in their analysis to steer a decision-maker in the right direction. In addition, there generally seemed to be a lot of pitfalls to this method, which is unfortunate. Because of that, I would consider brainstorming a modifier rather than a method, since it most likely will not produce an estimate. However, I did like that the author spent time informing the reader of the best way to conduct a brainstorming session to make it as successful as it can be. Even though brainstorming might not be the best method to use for analysis, I think this article is still worth reading so that people know how to make the most of it as a modifier to generate ideas that can get a project started.

Friday, October 16, 2015

Does Group Participation When Using Brainstorming Facilitate or Inhibit Creative Thinking?

By Donald W. Taylor, Paul C. Berry, Clifford H. Block
   
Summary

The authors assert that group participation when using brainstorming inhibits creative thinking. They conducted an experiment at Yale University with 96 Yale juniors and seniors to test whether group participation when using brainstorming facilitates or inhibits creative thinking. They formed twelve groups of four men and had 48 additional individuals work alone; after conducting the experiment they randomly divided the 48 individuals into twelve nominal groups of four men. The results of these twelve nominal groups were then used as the comparison measure for the study.

The study asked the groups and individuals to generate ideas on three problems, which the authors referred to as the ‘Tourists Problem’, the ‘Thumbs Problem’, and the ‘Teachers Problem’ (see the article for detailed problem definitions).

Results and Findings

On each of the three problems, the mean number of ideas presented by real groups was much larger than that presented by individuals, suggesting that group interaction stimulates the production of more ideas.
The mean numbers of responses produced by nominal groups were considerably larger than those produced by real groups on each of the three problems. The study also compared the nominal and real groups’ responses in terms of originality and quality, and again the nominal groups were superior. However, the authors suspected that this might have occurred simply because the nominal groups produced more responses than the real groups, so they conducted covariance analyses to address this possibility. What they found was intriguing: there was no significant difference between real and nominal groups in the number of unique responses on either the Tourists Problem or the Teachers Problem. Moreover, on the Thumbs Problem the covariance analyses favored the real groups in the uniqueness of their responses.
The authors conducted another series of analyses to measure and compare the quality of the real and nominal groups’ responses, evaluating the responses to the three problems along five dimensions: effectiveness, probability, generality, significance, and feasibility. On each of the five dimensions for each of the three problems, the mean scores for the nominal groups were much larger than those for the real groups. The authors again conducted covariance analyses to determine whether the differences were significant. For the Thumbs Problem, but not for the Tourists and Teachers Problems, the nominal groups remained superior to the real groups on the five evaluative dimensions over and above what was accounted for by their superiority in total number of responses.
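The covariance analysis described above can be sketched in Python with statsmodels. The data below is synthetic and the variable names are my own; the sketch only illustrates the approach of comparing group type on quality while controlling for the total number of responses, not the authors' actual data or results.

```python
# Sketch of an analysis of covariance (ANCOVA) like the one the authors
# describe: compare quality across real vs. nominal groups while
# controlling for the number of responses. Data is synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 24
n_responses = rng.integers(20, 80, size=n)
group = np.repeat(['real', 'nominal'], n // 2)
# Synthetic quality scores driven mostly by response counts.
quality = 0.5 * n_responses + rng.normal(0, 3, size=n)

df = pd.DataFrame({'group': group,
                   'n_responses': n_responses,
                   'quality': quality})

# Quality regressed on group membership plus the covariate; the C(group)
# coefficient tests the group difference adjusted for response counts.
model = smf.ols('quality ~ C(group) + n_responses', data=df).fit()
print(model.summary())
```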
To recap the findings, group interaction inhibits creative thinking, and the authors provide two possible reasons:
  1. An individual working in a group probably feels less free of possible criticism by others, even when such criticism is not expressed at the time, than an individual working alone.
  2. Group participation may reduce the number of different ideas produced, as members may opt to follow and generate sub-branches of previously stated options.
Critique
The authors conducted a very well designed experiment. As far as I can see, they tried to answer every question that might pop up. For example, they did not simply state that the nominal groups were superior to the real groups because of their larger mean numbers of responses; they continued analyzing those large values to find out whether the superiority stemmed from the structure of the groups. I can therefore say that the study is fairly unbiased. The authors provided strong evidence that real groups could not produce responses superior in number or quality to those of nominal groups on any of the problems. Consequently, this study demonstrates the advantages and merits of the nominal group technique over group interaction in brainstorming.
Source:
http://www.jstor.org/stable/2390603?seq=1#page_scan_tab_contents 

Monday, October 12, 2015

Game Theory (3.5 out of 5 stars)

Note: This post represents the synthesis of the thoughts, procedures and experiences of others as represented in the articles read in advance (see previous posts) and the discussion among the students and instructor during the Advanced Analytic Techniques class at Mercyhurst University in October 2015 regarding Game Theory as an Analytic Technique specifically. This technique was evaluated based on its overall validity, simplicity, flexibility and its ability to effectively use unstructured data.

Description:
Game theory is a method of analysis that mathematically models conflict and cooperation between rational actors. It can be understood intuitively by assessing how actors behave in situations in which they have a vested interest. Quality game theory models incorporate real-world constraints such as limited time, levels of uncertainty, and incomplete information. Games are solved by looking ahead and anticipating the actions of the opposing actor, while the opposing actor simultaneously engages in similar reasoning. Actors are expected to make decisions in their own self-interest, although this assumption is challenged when actors cooperate or act in the best interest of the group.

Strengths:
  • Gives quantitative results to analysis
  • Types of game theory studies do not require knowledgeable experimenters
  • Represents costs and benefits for all actors
  • Can account for human behaviors to a degree
  • Various types of game theory exist allowing different applications to a diverse range of problems
  • Useful in various simulations where assigning a numeric value is usually required

Weaknesses:
  • Assumes actors are rational thinkers
  • Human mind is frequently social, not rational
  • Applying this method to intangible problems is very difficult
  • A complicated technique to learn and implement correctly
  • Bundles human nature
  • Can be susceptible to biases
  • Highly favors quantitative inputs
  • Requires 100% accurate information
  • Non-cooperative and cooperative games will yield different results
  • The questions must be asked in a specific way  



How-To: For the scenario described in the personal application, the analyst must construct a payoff matrix to answer the questions provided by the exercise.

In a payoff scenario,

  1. Determine the problem and the players involved in the scenario.
  2. Determine the strategies of each player.
  3. Determine the payoff values for the payoff matrix.
  4. Determine the dominant strategies.
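The four steps above can be sketched in Python. The payoff numbers below are placeholders, not from any particular scenario; the sketch just shows how a strictly dominant strategy can be checked mechanically once the matrix is built.

```python
# Steps 1-4 of the payoff-matrix how-to as a small Python sketch.
# payoffs[i][j] is the row player's payoff when the row player picks
# strategy i and the column player picks strategy j.

def dominant_strategy(payoffs):
    """Return the index of a strictly dominant row strategy, or None."""
    n_rows, n_cols = len(payoffs), len(payoffs[0])
    for i in range(n_rows):
        if all(
            all(payoffs[i][j] > payoffs[k][j] for j in range(n_cols))
            for k in range(n_rows) if k != i
        ):
            return i
    return None

# Steps 1-2: two players, each with two strategies (placeholder scenario).
# Step 3: payoff values for each player.
row_payoffs = [[3, 1],
               [4, 2]]          # row 1 strictly dominates row 0 here
col_payoffs = [[3, 4],
               [1, 2]]          # column player's payoffs, same indexing

# Step 4: look for dominant strategies. For the column player, dominance
# is over columns, so transpose that player's matrix first.
print(dominant_strategy(row_payoffs))
transposed = [list(c) for c in zip(*col_payoffs)]
print(dominant_strategy(transposed))
```

If `dominant_strategy` returns `None` for a player, that player has no dominant strategy and the analysis moves on to iterated elimination or Nash equilibria.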

Personal Application of Technique:
Using the Equilibrium Concepts, we applied game theory to the exercise below:
Exercise 1 (Training and payment system, By Kim Swales)
Two players: the employee (Raquel) and the employer (Vera). Raquel has to choose whether or not to pursue training that costs her £1,000. Vera has to decide whether to pay Raquel a fixed wage of £10,000 or share the revenues of the enterprise 50:50 with her. Output is positively affected by both training and revenue sharing. With no training and a fixed wage, total output is £20,000; if either training or profit sharing is implemented, output rises to £22,000; if both training and revenue sharing are implemented, output is £25,000.

  1. Construct the payoff matrix
  2. Is there any equilibrium in dominant strategies?
  3. Can you find the solution of the game with Iterated Elimination of Dominated Strategies?
  4. Is there any Nash equilibrium?
Solution
This game has the following characteristics:
  • Players: Raquel and Vera
  • Strategies:
    • Raquel’s: pursue training (costly to herself: £1,000), or not
    • Vera’s: give revenue sharing (50:50), or fixed wage (£10,000)
Payoffs: depend on total output and the way it is split. Output depends positively upon two factors: whether Raquel has training and whether Vera adopts profit sharing.
  • Fixed wage + no training: output = 20,000
  • Add either training or revenue share: output = 22,000
  • Both training and revenue share: output = 25,000
We can then build the payoff matrix (unit of account: £/000; each cell lists Raquel’s payoff, then Vera’s):
  • No training, fixed wage: (10, 10)
  • No training, revenue sharing: (11, 11)
  • Training, fixed wage: (9, 12)
  • Training, revenue sharing: (11.5, 12.5)
2. No, there is no equilibrium in dominant strategies, because Raquel has no dominant strategy: she prefers to train only if Vera gives revenue sharing, and prefers not to train under a fixed wage.
3. Yes. Fixed wage is a dominated strategy for Vera. Assuming that players are rational and that this information is common knowledge, Raquel knows that Vera will never choose a fixed wage. She will then choose to train, because no training is a dominated strategy after the elimination of Vera’s dominated strategy.
4. Yes. Every equilibrium identified by Iterated Elimination of Dominated Strategies is a Nash equilibrium.
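The solution can also be checked numerically. The sketch below is not from the exercise itself; the payoffs are computed from its figures (training cost £1,000, fixed wage £10,000, outputs of £20,000/£22,000/£25,000, all in £000), and Nash equilibria are found by brute-force enumeration.

```python
# Raquel (rows: no_train, train) vs Vera (cols: fixed, share).
# Payoffs in £000, derived from the exercise's output figures:
# training costs Raquel 1; a fixed wage pays her 10; revenue sharing
# splits total output 50:50.
R = {('no_train', 'fixed'): 10, ('no_train', 'share'): 11,
     ('train', 'fixed'): 9,     ('train', 'share'): 11.5}
V = {('no_train', 'fixed'): 10, ('no_train', 'share'): 11,
     ('train', 'fixed'): 12,    ('train', 'share'): 12.5}

raquel = ['no_train', 'train']
vera = ['fixed', 'share']

def is_nash(r, v):
    # A (r, v) pair is a Nash equilibrium if neither player gains by
    # deviating unilaterally.
    best_r = all(R[(r, v)] >= R[(r2, v)] for r2 in raquel)
    best_v = all(V[(r, v)] >= V[(r, v2)] for v2 in vera)
    return best_r and best_v

equilibria = [(r, v) for r in raquel for v in vera if is_nash(r, v)]
print(equilibria)   # [('train', 'share')]
```

The single equilibrium (train, revenue sharing) matches the outcome reached by iterated elimination of dominated strategies.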




Friday, October 9, 2015

Game Theory Based Network Security


By Yi Luo, Ferenc Szidarovszky, Youssif Al-Nashif, and Salim Hariri, 2010

Summary

Game theory is an appropriate methodology for modeling the interactions between attackers and a network administrator and for determining the best countermeasure strategy against attacks. There are, however, some difficulties in directly applying classical game theory: the attackers' strategies are uncertain, their steps are not instantaneous, the rules of the game might change over time, and so on. Any game theory based methodology therefore has to take these difficulties into account. There are many types of intrusions; multi-stage attacks are the most destructive and the most difficult for any defense system. They use intelligence to strategically compromise targets in a planned sequence of actions, so the usual methodology designed to protect against single-stage attacks cannot be used.

This paper introduces a multi-stage intrusion defense system in which the interactions between the attacker and the administrator are modeled as a two-player, non-cooperative, non-zero-sum dynamic game with incomplete information. The two players conduct fictitious play along the game tree, which helps the administrator quickly find the best strategies to defend against attacks launched by different types of attackers. The algorithm is an online procedure that gives the most appropriate response of the administrator at any stage of the game, so it has to be repeated at each actual decision node of the administrator. Their algorithm differs from the usual methods based on decision trees: at each step only a finite horizon is considered, certain equivalents are used instead of expected outcomes, and the probabilities of the different arcs are continuously updated based on new information.

Multi-stage attacks are represented by special game trees. Figure 1 shows the first two interactions on a game tree. The attacker is the leader and the administrator is the follower. The root of the tree is the initial decision node of the attacker, and the possible initial moves of the attacker are represented by the arcs originating at the root. These actions might include attacking the server with different intensity levels, sending a virus to a group of customers, etc. At the end point of each arc the administrator has to respond, so these are its decision nodes. After the administrator's response the attacker makes the next move, and so on. The tree continues until the intruder gives up the attack or reaches its goals. The tree can become very large and the payoff values at the decision nodes are uncertain, so the classic method, known as backward induction, cannot be used in this case.


Figure 2 shows a network structure. It is assumed that the HTTP server, Database 2, the FTP1 server, and the information in the CEO are the vulnerable components of the network system, and that access to the information in the CEO is the attacker's objective. It is also assumed that the CEO needs services provided by the HTTP server, Database 2, and the FTP1 server to do its job. The attacker can launch multi-stage attacks to obtain the information from the CEO in many different ways; the administrator can then respond by selecting from a set of options, and so on, which produces the game tree. Next the authors assume that, in addition to the sensitive data in the CEO, the data in Accounting is another vulnerability of the system, so the attacker has two objectives: the information in the CEO and the data in Accounting. Accounting also needs services provided by Database 2 and the HTTP server. The computer study assumes that the attacker always selects the action leading to maximal impact, and that the administrator always selects its best action at its decision nodes using one of the three tested algorithms.

They applied three methods to find the best responses of the administrator. The first is a greedy algorithm (GA), in which the administrator completely blocks the traffic of the corresponding services on routers or firewalls, or disconnects the machines using managed switches, regardless of the kind of attack or its intensity level. The second is a myopic, single-interaction optimization algorithm (SO), in which the administrator tries to minimize the loss from the most recent attack at each interaction without considering future interactions with the attacker. The third algorithm is the one the authors developed. The results are shown in Table 1. Two types of attackers were assumed: the risk-neutral attacker cared only about the expected impact (α = 0), while the risk-seeking intruder selected a relatively high risk-taking coefficient (α = 1). The two scenarios refer to the cases of one or two attacker objectives. The last three columns of the table show the total losses of the system under each of the three methods.
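The myopic single-interaction optimization (SO) baseline can be illustrated with a toy decision node. The attack types, belief probabilities, and loss numbers below are invented for illustration; the paper's actual game tree and payoffs are far richer.

```python
# Toy sketch of the single-interaction optimization (SO) baseline: at one
# decision node, the administrator picks the response that minimizes
# expected loss under its current beliefs about the attacker's next move.
# All names and numbers here are invented for illustration.

beliefs = {'low_intensity': 0.6, 'high_intensity': 0.4}

# loss[response][attack]: system loss if `response` is chosen and the
# attacker plays `attack` at this interaction.
loss = {
    'block_service': {'low_intensity': 8, 'high_intensity': 8},
    'rate_limit':    {'low_intensity': 2, 'high_intensity': 9},
    'monitor_only':  {'low_intensity': 1, 'high_intensity': 15},
}

def expected_loss(response):
    return sum(p * loss[response][attack] for attack, p in beliefs.items())

best = min(loss, key=expected_loss)
print(best, expected_loss(best))
```

The authors' own algorithm goes beyond this by looking several moves ahead along the tree and updating the arc probabilities as the attack unfolds.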
Results

Their method clearly resulted in the smallest overall losses in all cases: the loss reduction was 41%, 51%, 52%, and 58% in comparison to the greedy algorithm, and 23%, 30%, 29%, and 36% in comparison to single-interaction optimization.

Critique

The authors assert that their algorithm differs from the usual methods based on decision trees: at each step only a finite horizon is considered, certain equivalents are used instead of expected outcomes, and the probabilities of the different arcs are continuously updated based on new information. Based on their numerical experiments, the performance of their algorithm is much better than that of the other algorithms, with loss reductions varying between 23% and 58%. However, the biggest issue with the article is its ambiguous process: the authors do not show their calculations, which makes it harder to understand their approach.

Luo, Y., Szidarovszky, F., Al-Nashif, Y., & Hariri, S. (2010). Game theory based network security. Journal of Information Security, 1(1), 41.

Source: http://www.scirp.org/journal/PaperInformation.aspx?paperID=2330

Analysis of Urban Car Owners Commute Mode Choice Based on Evolutionary Game Model

Huawei Gong and Wenzhou Jin
http://www.hindawi.com/journals/jcse/2015/291363/abs/ 

Summary:
As major cities in China develop and access to privately owned cars becomes increasingly possible, the infrastructure is struggling to keep up. Traffic congestion is becoming a larger issue as the increasing affluence of the Chinese people puts ownership of a family car within reach for many. One method of dealing with the inconvenience of driving a private vehicle on a crowded roadway is public transportation; for the purposes of this study, public transportation is synonymous with buses.

The idea behind this paper is that Chinese citizens will have to choose between driving a privately owned car and taking the bus as a form of public transportation. This choice is used to formulate a two-level game model. Public road facilities have the characteristics of nonexcludability and nonrivalry, which means rational actors will take full advantage of them for as long as they can by driving their own cars. According to the authors, this situation is known as a "public facilities tragedy," and to avoid it the government must take action. The authors believe the government must control the increase in the number of cars and the usage of private cars, while at the same time encouraging the use of public transportation.

For this analysis the authors assume that car owners will give up driving and commute by public transit as urban public transportation becomes more developed. They define group A as a low-income group and group B as a higher-income group. Group A is more likely to choose public transportation over a private car because it is more cost-effective, while group B is more likely to use a private car even if it costs more.
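The evolutionary dynamic behind such a model can be sketched with a replicator equation. This is not the authors' model; all payoff functions and numbers below are invented for illustration of how mode shares evolve when each mode's payoff depends on how many people choose it.

```python
# Toy replicator-dynamics sketch of commute-mode choice within one income
# group: x is the share of commuters taking the bus, and its growth
# depends on how the bus payoff compares to the population average.
# All payoff numbers are invented for illustration.

def payoff_bus(x):
    # Bus gets better as more people ride (less congestion, more service).
    return 2 + 3 * x

def payoff_car(x):
    # Driving gets worse while roads stay congested.
    return 4 - 2 * x

def step(x, dt=0.01):
    avg = x * payoff_bus(x) + (1 - x) * payoff_car(x)
    return x + dt * x * (payoff_bus(x) - avg)

x = 0.5
for _ in range(10000):
    x = step(x)
print(round(x, 3))  # share of bus riders after the dynamic settles
```

With these invented payoffs the interior rest point at x = 0.4 is unstable, so the population tips toward all-bus or all-car depending on the starting share; this kind of tipping behavior is why the authors argue government intervention matters.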


The authors conclude that, based on their analysis, the choice of how to commute is mainly affected by factors such as travel time, travel cost, and comfort level. The choice is also influenced by the development of the public transportation system and by private travel restrictions put in place by the government.

Critique:

This article takes a relatively simple idea for a game theory model and makes the explanation far more complicated than it needs to be. Part of the difficulty may arise from English not being the authors' primary language: at times the wording is unnecessarily vague and the logic is difficult to follow as a result. The authors do admit that the payoff matrix is only an assumption of the ideal situation and that further study would allow them to produce more refined and accurate results. As it stands, the conclusions they reach based on the results are quite broad.

Preferences, Property Rights, and Anonymity in Bargaining Games - Ultimatum Games

By: Elizabeth Hoffman, Kevin McCabe, Keith Shachat, and Vernon Smith

Summary:

Non-cooperative, non-repeated game theory is about strangers with no shared history who meet, interact strategically in their individual self-interests according to well-specified rules and payoffs, and then never meet again. Experimental studies of these two-person bargaining games are generally not consistent with the game-theoretic predictions, and they do not always replicate across subject populations, particularly in the absence of monetary rewards.

Recent experimental research on ultimatum games has found that first movers tend to offer more to their counterparts than non-cooperative game theory would predict. The common offer is half the surplus to be divided, although non-cooperative game theory would suggest an offer by the first mover of the minimum positive amount that is feasible.

In an ultimatum game, an amount of money M is to be divided between two subjects. One, the designated proposer, announces a split of M - X to the proposer and X to the proposer's counterpart. After the proposal is made, the counterpart either accepts or rejects it. If the counterpart accepts, the proposal is carried out; if it is rejected, both the proposer and the counterpart get zero. A rational counterpart who cares only about money should accept any offer X = e > 0, where e is the minimum unit of account. The equilibrium prediction is for the proposer to offer X = e and for the counterpart to accept.
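The equilibrium prediction described above can be sketched directly. The values of M and e below are placeholders; the sketch just makes explicit why a purely material counterpart's acceptance rule drives the proposer's offer down to the minimum unit.

```python
# Sketch of the ultimatum game's subgame-perfect prediction: the
# counterpart accepts any positive offer, so the proposer offers the
# minimum unit e and keeps M - e. M and e are placeholder values.

M = 10.0   # amount to divide
e = 0.01   # minimum unit of account

def counterpart_accepts(x):
    # A rational, purely material counterpart prefers any x > 0 to zero.
    return x > 0

def best_offer():
    # The proposer keeps M - X, so the smallest acceptable X is best.
    candidates = [round(k * e, 2) for k in range(1, int(M / e) + 1)]
    acceptable = [x for x in candidates if counterpart_accepts(x)]
    return min(acceptable)

offer = best_offer()
print(offer, M - offer)   # proposer offers e and keeps M - e
```

The experimental puzzle the article addresses is precisely that real proposers offer far more than this, and real counterparts reject small positive offers.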

Experiments show that first-mover proposers in these bargaining games offer more to their counterparts than non-cooperative game theory leads one to expect. The tendency toward an equal split is often described in terms of "fairness" or "social norms" of distributive justice, but these labels do not explain the phenomenon in terms of testable fundamentals.

A second game adjusted the parameters to create a “posted offer” scenario.  The seller begins the process by choosing a price; this price is communicated to the buyer, who then chooses the quantity, thus ending the game.  Consequently, the seller makes an ultimatum price offer to the buyer.  This differed from the first experiments in that:
  1. All bargaining was described as a buyer/seller transaction
  2. The equilibrium yielded more than an e payoff to the buyer
  3. Both sellers and buyers had multiple price/quantity choices available, but the buyer was free to reject the price offer by choosing a zero quantity.

In the ultimatum game, the proposer must form expectations about the counterpart's reservation value. Thus, a risk-averse proposer may give his or her counterpart more than non-cooperative theory predicts in order to ensure acceptance of the proposal.
The article notes that randomization of assigned types may not be neutral: subjects can interpret their assignment as the experimenter treating them fairly, which may induce a "fair response," a feeling that they should be fair since the experimenter was fair. If first movers earn the right to their role, offers are smaller. When this earned entitlement is combined with exchange, less than 45% of first movers offer $4 or more out of $10; under random entitlement, over 85% offer $4 or more. The strategic, expectational character of ultimatum games makes it impossible to conclude from offer data alone whether offers in excess of $1 are due to other-regarding preferences or to the first mover's concern that the offer might be rejected unless the second mover deems it satisfactory.

Critique:

The experiment followed instructional procedures for inter-subject anonymity as a partial control for the effect of social influences on choice.  Although the ultimatum game in game theory does not require a knowledgeable experimenter, in practice experimenters must handle pregame treatments and instructions carefully.  It was interesting to note that in the ultimatum experiments, randomization of assigned types may not be neutral and could induce a “fair response.”  This implies that first-mover offers are sensitive to the instructional setting of the experiment.  The results suggest that behaviors labeled as “fairness” actually reflect a social concern for what others may think and for being held in high regard by others.  The article interprets offers in ultimatum games as determined by strategic and expectational considerations rather than by an autonomous private preference for equity.

Source:
http://pareto.uab.es/prey/hoffmanetalGEB94.pdf

The Dynamics of Deterrence

Kleiman, M., & Kilmer, B. (2009)

Summary:
Kleiman and Kilmer use game theory to simulate and analyze two methods of applying punishment (random sanctioning and dynamic concentration) in order to determine which provides the greater deterrent to potential violators. The authors state that when rule-breaking is punished more consistently, violations decrease and the threatened punishment becomes less likely to actually be imposed, i.e., the deterrence is successful.

In applying the standard rational-actor assumption of game theory, the authors test compliance games with one and with n potential offenders. In these games, breaking a rule results in payment of a penalty P, whereas compliance carries a cost C. According to the authors, “a rational subject will never violate if the penalty for breaking the rule is above the cost of compliance or gain from violation” (p. 14231). In other words, when P > C, the violation will not occur, and increasing the severity of the punishment results in less punishment actually being used. However, if punishment is not certain and instead occurs with probability p, then the rule will be broken if and only if pP < C, so the critical value of the probability of punishment is C/P. Below that critical value, violations occur consistently; above it, violations are zero. In this situation, increasing the probability of punishment decreases the amount of actual punishment used.
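The compliance condition in this paragraph reduces to a one-line check; a minimal sketch, with illustrative values of P and C:

```python
# Sketch of the compliance condition: a rational actor violates iff the
# expected penalty p*P falls below the cost of compliance C.

def violates(p, P, C):
    """True when the expected punishment p*P is less than the compliance cost C."""
    return p * P < C

P, C = 10.0, 4.0          # illustrative penalty and compliance cost
p_critical = C / P        # below this punishment probability, violations occur
```

For these numbers the critical probability is 0.4: a 30% chance of punishment fails to deter (expected penalty 3 < 4), while a 50% chance deters (expected penalty 5 > 4).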

In the next scenario, there are n potential offenders but fewer sanctions than players. The first game involves two players who act sequentially (Actor 2 (A2) knows what action Actor 1 (A1) took), a single available punishment that is randomly assigned, and P > C. If only one player violates, he is punished with certainty, whereas if both players violate, each has a 50% chance of punishment and an expected cost of P/2 (which is assumed to be less than C). Therefore, “comply-comply” and “violate-violate” are both Nash equilibria. A1 chooses between complying at cost C or violating at expected cost P/2 < C, and because he is rational, he violates, and therefore so does A2. Generalized to n players, as long as P/n < C, all will violate. If the capacity to punish is increased and made public, so that both A1 and A2 know they will be punished, then both actors comply.
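The two-equilibrium structure can be verified by enumerating best responses. A small sketch, with illustrative costs chosen so that P > C > P/2 (lower cost is better):

```python
# Sketch of the one-sanction, two-player compliance game: complying costs C;
# a lone violator is punished with certainty (cost P); if both violate, the
# single sanction is assigned at random, for an expected cost of P/2 each.
from itertools import product

P, C = 6.0, 4.0  # illustrative: P > C but P/2 < C

def cost(my_move, other_move):
    if my_move == "comply":
        return C
    return P if other_move == "comply" else P / 2

def is_nash(a1, a2):
    """Neither player can lower their cost by unilaterally switching moves."""
    moves = ("comply", "violate")
    a1_ok = all(cost(a1, a2) <= cost(alt, a2) for alt in moves)
    a2_ok = all(cost(a2, a1) <= cost(alt, a1) for alt in moves)
    return a1_ok and a2_ok

equilibria = [prof for prof in product(("comply", "violate"), repeat=2)
              if is_nash(*prof)]
```

The enumeration confirms the text: both “comply-comply” and “violate-violate” survive as Nash equilibria, while the mixed profiles do not (the lone violator would rather comply).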

The next game changes the punishment from random assignment to a priority assignment on A1. In this situation, A1 will comply because he knows that, as the highest-priority actor, he will certainly be punished. A2 then also complies, because once A1 complies, priority shifts to the next actor. Even if moves are simultaneous rather than sequential, the directly threatened player will always comply, and therefore so will the rest of the field. In this scenario, regardless of the number of players, the only Nash equilibrium is universal compliance. The authors illustrate this with the example of a Texas Ranger with a single bullet in his revolver who prevents an angry mob from rushing the jail by threatening to shoot the first person who steps forward: if no one steps forward, no one is shot, and the jail is not mobbed.
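The priority argument is a simple induction down the priority ordering: whoever currently holds priority faces certain punishment P > C, so they comply, and priority passes to the next actor. A minimal sketch with illustrative values:

```python
# Sketch of priority-based punishment: one sanction, n actors, P > C.

P, C = 6.0, 4.0  # illustrative: certain punishment outweighs compliance cost

def play_with_priority(n):
    """Each actor, in priority order, compares certain punishment P to compliance cost C."""
    moves = []
    for _ in range(n):
        # The actor currently holding priority is punished for sure if they violate,
        # so a rational actor violates only if P < C.
        moves.append("violate" if P < C else "comply")
        # Since they comply, priority shifts to the next actor, who faces the same choice.
    return moves
```

However many actors there are, the induction never breaks: every actor in turn complies, so the single sanction is never spent.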

The authors conduct simulations in which random sanctioning and dynamic concentration are applied with a limited number of sanctions. In these simulations, as the number of available sanctions increases, the number of violations decreases regardless of the method of punishment. However, when dynamic concentration is applied, i.e., offenders are sanctioned in priority order, the enforcers “tip” the system from the high-violation to the low-violation equilibrium with fewer sanctions than random sanctioning requires. Increasing the probability of sanctions dramatically increases the advantage of dynamic concentration. In a stochastic world where a single penalty will not deter all offenders, establishing a priority ordering of sanctions reduces violation rates and economizes on sanctions.
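A toy best-response dynamic in the spirit of this comparison (the behavioral rule and all parameters are assumptions for illustration, not the authors' model) shows how priority ordering can tip the system to universal compliance where the same number of random sanctions cannot:

```python
# Toy comparison: n actors violate iff their expected sanction probability
# times P falls below C; we iterate best responses from the all-violate state.

def sanction_prob(i, expected_violators, k, priority):
    if priority:
        # Dynamic concentration: sanctions hit the k highest-priority violators,
        # so actor i is punished for sure if fewer than k higher-priority actors violate.
        higher = sum(1 for j in expected_violators if j < i)
        return 1.0 if higher < k else 0.0
    # Random sanctioning: k sanctions spread evenly over everyone expected to violate.
    return min(1.0, k / len(expected_violators | {i}))

def equilibrium_violations(n=20, k=3, P=6.0, C=4.0, priority=False):
    violators = set(range(n))  # start from the high-violation state
    for _ in range(n):         # iterate best responses to a fixed point
        new = {i for i in range(n)
               if sanction_prob(i, violators, k, priority) * P < C}
        if new == violators:
            break
        violators = new
    return len(violators)
```

With 20 actors and only 3 sanctions, random sanctioning cannot move the system off the all-violate state, while sanctioning in priority order unravels the violations three actors at a time until everyone complies.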

The authors argue that applying dynamic concentration to groups could continue the current decrease in crime rates while reversing the decades-long prison-building boom that has led to the US having the world’s highest per-capita incarceration rate. These principles also extend beyond criminal justice to managers, teachers, parents, and others, but may have limited applicability to armed conflict or organized insurgency.

Critique:
The incorporation of game theory into deterrence makes sense given rational actors; however, the authors acknowledge that dynamic concentration can be defeated when players are allowed to communicate and collude, which happens in the real world. They also suggest that dynamic concentration would allow the severity of punishment to be reduced while still “tipping” the high-violation equilibrium to a low-violation equilibrium; while I agree that it would likely do so, I do not believe the effect would be as dramatic as they state. In short, I believe that while dynamic concentration would likely be effective as a deterrent, it would need to be tailored to each population it is applied against, which complicates the strategy.

Source:
Kleiman, M., & Kilmer, B. (2009). The dynamics of deterrence. Proceedings of the National Academy of Sciences of the United States of America (PNAS), 106(34), 14230-14235.