Tuesday, April 6, 2010
Red Teaming: A means to Military Transformation
The Role of Red Teaming in Defining Threats
Sandoz writes that in an uncertain security environment, the DoD needs to consider future threats from three perspectives:
1) The evidentiary threat is collected against, analyzed, and reported on exclusively by the intelligence community.
2) The technically feasible threat is becoming more difficult to assess, mainly because potential opponents can readily purchase many technologies and systems abroad.
3) The adaptive threat is difficult to define because U.S. military forces may not know the unexpected ways in which their opponents could counter war-fighting capabilities.
Sandoz makes the point that in all three cases red teaming should play a key role in developing and evaluating potential threats.
Advantages of red teaming:
1) If done at an inter-agency level, adaptive Red Teams can also inform the national policy process by examining alternative strategies and the roles that government agencies, allies, and non-governmental organizations (NGOs) can play in achieving national policy goals.
2) As part of a disciplined process of red-blue interaction, vigorous red teaming can inform the transformation process and lead to more robust and relevant future military capabilities.
3) Red teaming and joint experimentation can help prevent bad investments and, at the same time, provide a means for U.S. forces to become more agile.
Challenges of red teaming:
1) Prevailing cultures and processes that are intolerant of surprise.
2) Adaptive red teaming does introduce elements of uncertainty and can be disruptive to individual programs.
Three levels where interactive red teaming could support joint concept development and experimentation:
Level 1- Red teaming could challenge one's strategic context and visions of future military capabilities by inventing and exploring counter-strategies and challenging scenarios.
Level 2- Red teaming could challenge new operational concepts in ways that future adversaries might use to thwart U.S. military forces in accomplishing their assigned missions.
Level 3- Red teaming activity could be in direct support of experimentation, including the OPFOR for specific experiments. Red teaming at this level would develop context and concepts for opposing specific Blue operational concepts.
Conclusion:
Sandoz concludes that a broader approach to red teaming, featuring a disciplined process of Red-Blue interaction, could inform and help guide transformation at the three levels of the national security process stated above. He goes on to write that because war is a phenomenon between thinking opponents, a broad approach to interactive red teaming is important for informing one's thinking about future military challenges and exploring ideas for dealing with them.
The “Red Team” Forging a Well-Conceived Contingency Plan
Throughout the lengthy planning effort for Operation Allied Force in 1998–99, allied leaders and planners widely adhered to a significant assumption. When the order arrived to execute the operation- on the very eve of hostilities- that assumption continued to prevail.
What if an enemy, “Red,” announced his intended reaction to a “Blue” campaign plan before Blue executed it? What if Red obligingly pointed out the flaws in Blue’s plan that he intended to exploit and revealed several hidden weaknesses of his own? Surely, once Blue optimized his strengths and protected his vulnerabilities, the operation would stand a much greater chance of success.
Furthermore, what if representatives of the press and the public confided to Blue planners the elements of the operation that concerned them most as well as those with which they might take issue? What if national leadership explained in advance some of the “wrenches” they might throw into the works during execution? What if senior war-fighting commanders and higher headquarters staffs worked alongside the planners to ensure correct understanding of every facet of their guidance and answered the planners’ key questions? If all these pieces of information were synthesized into the plan during the planning process, the plan would have a better chance of surviving any contingency.
For example, Gen Gregory S. Martin, commander of United States Air Forces in Europe (COMUSAFE), tasked his command’s first Red Team to assess an offensive air and space campaign. After analyzing requirements and considering the restrictions imposed by the “need to know,” the Red Team leader formed the team with SMEs from the following areas:
• air operations and strategy
• command and control (C2)
• joint operations
• logistics
• space operations and strategy
• intelligence, surveillance, and reconnaissance (ISR)
• combat search and rescue
• information operations and information warfare
• law
• politics
Weaknesses:
When possible, the commander should draw Red Team members from sources external to the Blue planning organization. Although this may seem intuitive, it is not always easy to accomplish. Most organizations that have the necessary experts are usually fully employed- indeed, the Blue planning organization itself is a perfect example. A commander may be tempted to dual-hat his or her own Blue planners as Red Team members; after all, what better people to assess a plan than the ones most intimately familiar with it? But this seemingly workable solution is fatally flawed: one of the prime benefits of Red Teaming is an independent review of Blue products and reasoning- a second set of eyes on the plan. Try as it might, even the most talented planning group cannot discern its own oversights- if it could, those oversights would not occur in the first place. As concerned as Blue planners must inevitably be with the details, it is sometimes difficult for them to stand back and see the big picture.
Strengths:
When one considers the overall mission of the Red Team- generating a more effective plan- it becomes clear that the team is not consistently “red.” At times, rather than challenging Blue reasoning, its members will provide assistance to the planners, offering another perspective or additional information. This is especially true of the senior mentor, a vital participant in the process although not technically a member of the Red Team. This periodic functional shift on the part of the Red Team- from devil’s advocate to planning partner- does not detract from the overall effort. On the contrary, it broadens the range of thinking and contributions of the entire group, enhancing the planning effort.
Red Team Rules of Engagement
As the Red Team prepares to integrate into the planning effort, it must acknowledge a simple fact: very few people perceive a review and assessment of their efforts as benign. Even assistance, which is ultimately what the Red Team provides, is often not welcome, especially when it comes from people unknown and external to the Blue planning team. To mitigate this friction, the Red Team should meet with the Blue planners as early as possible to explain a number of critical points about a Red Teaming effort. The following ROEs should apply to every Red Teaming event throughout the process:
• The commander’s perceived intent should not limit innovation (e.g., drive certain COAs).
• Red Teaming events are meant to be interactive, candid discussions reminiscent of the flight debrief after a mission.
• The principle of nonattribution is in effect.
• Participants should remain objective in their contributions to the effort; personal agendas or personality conflicts are not welcome.
• Participants should stay professional- no fighting in public.
The first item in this list addresses a problem that can be insidious and deadly to a well-developed plan: the natural tendency to favor a war-fighting commander’s perceived intent in developing COAs. Too often, a planning staff presents the commander with several COAs, knowing full well that all but the perceived favorite are throwaways. As a result, staffers sometimes spend little time seriously developing the COAs.
As the Red Team moves into action, its ability to gain the confidence and trust of the Blue planners is absolutely critical. Failure in this area will lead to Red Team failure. One cannot overstate the importance of avoiding an “us against them” relationship between Blue and Red. Again, the commander’s early buy-in and influence in this area, as well as adherence to the ROEs outlined above, will pay large dividends to the process. When this groundwork is laid successfully, the Blue team will understand why the OPFOR, for instance, is doing its utmost to simulate a realistic, hostile enemy.
Conclusions
USAFE’s early Red Teaming efforts will continue to evolve. Development of the commander’s Red Team becomes more focused with each effort. One thing is already clear- Red Teaming adds great value to contingency planning. It would likely do the same for deliberate planning. Air and space staffs should consider the doctrine already in place, as well as the ideas expounded here, with a view toward making Red Teaming a staple of the planning process.
Field Marshal Helmuth von Moltke’s adage “no plan survives contact with the enemy” is true. But through Red Teaming, a plan can be refined after each contact with a Red Team. This process is valuable because it brings a contingency plan, together with the reasoning and information behind it, under the scrutiny of a well-simulated enemy. Better still, the Red Team can imitate outside agencies, higher headquarters, and even “Murphy’s Law.” A plan that survives this kind of treatment should be healthy indeed. To modify Gen George S. Patton’s famous quotation, “A good plan, well rehearsed, is better than a perfect plan unrehearsed.”
Red Dawn: The Emergence of a Red Teaming Capability in the Canadian Forces
Lauder breaks red teaming down into 3 sectors:
1) Civilian applications
2) Military applications
3) Red teaming in the CF
1) Civilian applications
Examples of red teaming in civilian applications include Jack Davis's teachings at the Sherman Kent Centre, the national security laboratories within the U.S. Department of Energy, and the Forensic Audits and Special Investigations Team (FSI) of the U.S. Government Accountability Office (GAO), all of which have conducted penetration tests using red teaming techniques.
2) Military Applications
John F. Sandoz teaches red teaming at the Institute for Defense Analyses.
Red teaming is also taught at the University of Foreign Military and Cultural Studies (UFMCS). For the UFMCS, the goal of red teaming is to enable planners and decision-makers to avoid group-think, mirror-imaging, and cultural miscalculations.
3) Red Teaming in the Canadian Forces
In this article it is pointed out that red teaming in the CF is done in a much more informal and irregular manner, and more often in a tactical-training setting. The CF did use red teaming as a technique to help prepare for the 2010 Winter Olympics.
Why is Red Teaming Important?
Lauder outlines two main reasons as to why red teaming is important:
1) Red teaming mitigates complacency, group-think, and mirror-imaging (i.e. imposing blue force behaviours and tactics on the adversary; in other words, seeing the adversary as we see ourselves).
2) Red teaming is a process by which blue force may be able to deepen its understanding of, and therefore the ability to respond to, the adversary.
Conceptual Framework of Red Teaming:
Red Teaming broken down into four broad and generic organizational processes:
1) Innovation
2) Planning and Analysis
3) Training and Professional Development
4) Operations
Lauder's six key characteristics of red teaming:
1) Trust
2) Positional Authority
3) Relative Independence
4) Expertise
5) Adaptability
6) Flexibility
The following are a number of areas that require further investigation:
1) What are the qualities and characteristics of good and effective red teamers, and how are red teamers selected?
2) What type of training is required for red teamers?
3) Is there a particular red team composition that is more effective than others?
4) What kind of learning environment is most effective?
5) Does the role of the red team differ in certain environments (i.e. does the role differ across settings and levels)?
6) What type of interaction is necessary (between red and blue) to encourage learning?
Conclusion:
In general, civilian applications tend to use red teaming at the tactical level (in particular, but not exclusively, to test physical or synthetic networks, systems, or operational programs), whereas military applications tend to employ it at the operational and strategic levels, largely within a planning setting or in a decision-support role (although, in the CF, red teaming appears to be most often utilized in exercise or training environments). It is clear that, while application of the red teaming concept may differ across sectors, both the civilian and military communities utilize red teaming in an active, rather than a passive, fashion, and that red teamers must possess a deep understanding of the adversary (i.e. its thinking and behaviour) in order to role-play the adversary (or advise as to what the adversary may think and do) in training, planning, or operational (i.e. live) settings. Moreover, it is apparent that red teamers must see themselves as
Red Teaming: The Art of Ethical Hacking
Peake describes information security as a revolving process of five steps:
1) Assess the current state of risk by evaluating the existing security methods, measures, and policies.
2) Based on the Assessment, design a security posture by creating policies that effectively manage the risk to the system/network.
3) Identify and implement the technical tools and physical controls necessary to manage risk.
4) Provide awareness training to the company to protect sensitive information through the cooperation and involvement of the employees.
5) Audit the system/network to confirm that the controls and employees adhere to policy.
This is a revolving process that, according to Peake, companies should perform continually. Red Teaming falls under the assessment stage of the Infosec process (#1). The Red Team uses tools to probe for vulnerabilities and can project possible threats based on the scope of the assessment requested by the customer. However, a Red Team attempting to circumvent security needs to find only a single vulnerability, while security professionals need to find all possible vulnerabilities for a given system in order to assess the associated risk. A thorough Red Team assessment should provide accurate situational awareness of the security posture of a given system/network. But identifying risk through Red Teaming and other methods cannot provide information security alone; the company/organization must continue through the Infosec process in order to appropriately manage risk and provide security protection.
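Because the process revolves rather than terminating, it can be sketched as a cycle. A minimal sketch follows; the stage names paraphrase the five steps above and are illustrative, not an official taxonomy:

```python
# A minimal sketch of the revolving Infosec process described above.
# Stage names paraphrase the five steps; they are illustrative only.
INFOSEC_STAGES = [
    "assess",     # 1) evaluate existing security methods, measures, and policies
    "design",     # 2) create policies that manage risk to the system/network
    "implement",  # 3) deploy the technical tools and physical controls
    "train",      # 4) build employee awareness of protecting sensitive information
    "audit",      # 5) confirm that controls and employees adhere to policy
]

def next_stage(current: str) -> str:
    """Return the stage that follows `current`; auditing wraps back to assessment."""
    i = INFOSEC_STAGES.index(current)
    return INFOSEC_STAGES[(i + 1) % len(INFOSEC_STAGES)]
```

Red Teaming lives in the `assess` stage; the wrap-around from `audit` back to `assess` captures the point that the process must be performed continually.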
A Red Team assessment evaluates various areas of security in a multi-layered approach. The Red Team tests policy compliance of the security controls at each layer (Operating System, Application, Host, LAN, Perimeter) and the control is tested in a manner specific to the area of security to which it applies. There are six areas of security where vulnerability assessment testing occurs:
-Internet Security
-Communications Security
-Information Security
-Social Engineering
-Wireless Security
-Physical Security
Red Teaming is “ethical hacking.” As such, it must be carried out with the utmost confidentiality, discretion, and clarity. Typically, Red Teams are third-party entities hired to make an impartial assessment of the network or system. The customer sets the scope of the project to specify the area of information to be assessed. The Red Team is responsible for supplying the customer with a detailed plan as well as a list of methods and tools that will be used during the evaluation. Any testing performed outside the scope stated by the customer can be considered an unwarranted attack by the Red Team.
The most important requirement for Red Teaming is customer consent. Because the Red Team, by definition and purpose, takes an attacker-like approach to testing security, beginning an assessment without explicit permission is legally perceived as an unwarranted attack on the system/network. That said, many Red Team evaluations are purposefully kept from network and system administrators as a means of testing personnel response to security events. The scope of the Red Teaming assessment can be very general or very specific in defining what the assessment will include or address. The scope of the project depends on the time or cost of the assessment and/or on the objective of the assessment as defined by the customer.
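The consent and scope rules above can be made concrete in code. The sketch below shows a probe routine that refuses any target outside the customer-agreed scope; the scope addresses, `probe_port`, and its behavior are hypothetical illustrations, not a real pen-testing tool:

```python
import socket

# Hypothetical scope agreed with the customer; any address not listed here
# is off-limits, and probing it would amount to an unwarranted attack.
AUTHORIZED_SCOPE = {"192.0.2.10", "192.0.2.11"}

def probe_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """TCP-connect probe that enforces the engagement scope before touching
    the network. Returns True if the port accepted a connection."""
    if host not in AUTHORIZED_SCOPE:
        raise PermissionError(f"{host} is outside the authorized scope")
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Putting the scope check ahead of any network activity means an out-of-scope request fails loudly before it could be perceived as an attack.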
Red Teaming is commonly mistaken for just penetration testing (pen-testing) when, in fact, pen-testing is a component of a Red Teaming assessment. But pen-testing alone cannot provide a complete security analysis. If a system/network is penetrated, the test proves that there is at least one vulnerability that can be used to gain access to it. And if the pen-test was unsuccessful, the test only proves that the person performing it was unable to find any exploits in the system; it does not guarantee that no vulnerabilities are present.
A good rule of thumb for companies planning Red Team assessments is to identify the weakest areas, the "low-hanging fruit," and have these areas tested for vulnerabilities. Hackers will target a specific vulnerability (rather than numerous ones) to gain access while avoiding detection.
Ethical hacking must strictly follow pre-approved testing guidelines that are established with the customer. The team must also document all the steps/procedures in testing in order to retrace the team’s actions in case of an incident due to testing or for retesting/verification of results if necessary. Upon completion of the Red Teaming effort all results should be submitted to the customer in a final report detailing the vulnerabilities that were discovered and how each was discovered. The report should also make an assessment of the overall level of risk of the network/system in addition to the risk level of each vulnerability. The final report is as important as the testing itself because it will direct the customer to take additional security steps.
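The reporting requirement above can be sketched as a small data structure. The field names and the "overall risk equals the worst single finding" rule are assumptions for illustration, not Peake's prescribed report format:

```python
from dataclasses import dataclass

RISK_LEVELS = ["low", "medium", "high", "critical"]  # ordered least to most severe

@dataclass
class Finding:
    vulnerability: str    # what was discovered
    how_discovered: str   # the step/procedure that revealed it (for retesting)
    risk: str             # one of RISK_LEVELS

def overall_risk(findings: list[Finding]) -> str:
    """Rate the system/network as a whole; here simply the worst single finding."""
    if not findings:
        return "low"
    return max((f.risk for f in findings), key=RISK_LEVELS.index)
```

Recording how each vulnerability was discovered supports the retracing and retesting requirement; the per-finding and overall risk levels feed directly into the final report.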
Finally, when assembling a Red Team it is important to have specialists in a wide variety of areas (Peake lists 21 separate specialties) in order to provide the most thorough security assessment.
Monday, April 5, 2010
Red Teams: Towards Radical Innovation
Benefits:
- Identifying significant vulnerabilities
- Discovering new uses for innovations
- Challenging taboos and assumptions
- Providing a minority report on a new concept or idea
- Revealing the consequences of different perspectives, in particular the perspectives of those with different goals and risk profiles.
Challenges:
- Red team members not taking their assignment seriously
- Red team not getting enough inside information to be credible
- Teams violating trust by leaking information
- Lack of quality members in the red team
- Inability to step into shoes of adversaries
Red-teaming: Improve Your Chances Of Winning The Business
If the proposal has serious problems or if the red team thinks it is off base, its members will question and respond in the way the client normally would.
- Because of their experience, members of red-teams emulate the process and mindset of the clients that the company is going to present to.
- At least three people serve on each team.
- They are knowledgeable in the company's space.
- Team members must have no significant prior connection with the company that is presenting.
- They must be willing and able to commit the necessary time and attention to the process.
- Red team members are given at least a week to read the materials to be used in the presentation and do a bit of personal research.
- Team members must be committed to helping improve the chances of getting the business.
Comments:
The focus of this article seems to be more on providing information on red teaming techniques for prospective clients. Hence it provides a rather broad overview of red teaming. Though the author does mention a couple of examples of proposals that failed because they did not go through a red team evaluation, he does not mention any examples where red teaming has either helped improve or win a bid. The article could have been more compelling if it had had such examples.
Saturday, April 3, 2010
Modeling Behavior of the Cyber Terrorist
The Experiment
In order to red team an unknown, potential adversary, a set of parameters is established for the red team to follow, based on the Defense Advanced Research Projects Agency's (DARPA) understanding of terrorist behavior:
- The cyber-terrorist is believed to have a level of sophistication somewhere between that of a sophisticated hacker and a foreign intelligence organization.
- This adversary is assumed to be able to raise funds on the order of hundreds of thousands to a few million dollars, and he is willing to spend these funds to accomplish his mission.
- This adversary is assumed to be able to acquire all design information on a system of interest.
- This adversary is assumed to be very risk averse. Premature detection is a serious negative consequence for the cyber-terrorist.
- This adversary has specific targets or goals in mind when attacking a given system.
- The adversary will also expend only the minimum amount of resources needed to accomplish its mission.
- The cyber-terrorist is assumed to be professional, creative, and very clever, and will seek unorthodox and original methods to accomplish its goals.
Findings
The Information Design Assurance Red Team (IDART) spent most of its time gathering intelligence on the target system. Its results were considered successful only if the team met its objectives and preserved stealth. In this study the red team followed the same basic process repeatedly, and gave up rather than mount any attack whose risk exceeded its threshold.
Conclusion
DARPA’s experience suggests some improvements to the process that they are using to model the cyber-terrorist adversary including the use of additional red teams, improving the scientific method used to record and test red team behavior, incorporating verified terrorist behavior, war-gaming cyber terrorist scenarios, and improving the library of possible approaches to difficult threats.
Red Teaming Experiments with Deception Technologies
The Experiment
In total, 5 experimental runs of duration 4 hours each were run on each of 6 exercises. This represents 30 runs, including deception "on" and deception "off" control groups (6 each) and random "on" "off" mixes (18).
Each run was preceded by a standard briefing and a run-specific briefing and followed by filling out of standard assessment forms, both individually by all team members and as a group. The exercises were of increasing intensity and difficulty so as to keep the participants challenged. Feedback was provided and varying amounts of information were disclosed to teams during the course of the experiment.
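The run structure (6 exercises of 5 runs each, with 6 deception-on controls, 6 deception-off controls, and 18 randomly mixed runs) can be sketched as a schedule generator. The assignment details below are a guess at the design, since the text does not specify how conditions mapped to individual runs:

```python
import random

EXERCISES = 6
RUNS_PER_EXERCISE = 5  # 30 four-hour runs in total

def build_schedule(seed: int = 0):
    """Pair each (exercise, run) slot with a deception condition: 6 "on"
    controls, 6 "off" controls, and 18 randomly chosen "on"/"off" runs."""
    rng = random.Random(seed)
    conditions = ["on"] * 6 + ["off"] * 6
    conditions += [rng.choice(["on", "off"]) for _ in range(18)]
    rng.shuffle(conditions)
    slots = [(ex, run) for ex in range(EXERCISES) for run in range(RUNS_PER_EXERCISE)]
    return list(zip(slots, conditions))
```

Seeding the generator makes the assignment reproducible, which matters when each run must be matched to its briefing and assessment forms.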
The participants, in this case, were students ranging in age from 16 to 38, all in computer-related fields, all with excellent grade point averages, all US citizens, all interested in information protection, and all participating in an intensive program of study and research in this area.
Findings
The use of red teams in simulating the effectiveness of deception methods on human network attackers revealed several interesting results:
- Teams that were not working in a deceptive environment, but were unaware of this, engaged in self-deception that hindered their progress. The study concluded that the mere threat of deception offers some protection against attackers.
- Teams unknowingly operating under deceptive conditions that followed a deception to its logical end gave up on the problem before the allotted time expired because they believed they had finished correctly.
- Teams operating in a deceptive environment, even after being educated on the deceptive techniques being employed, were rarely able to move past the deceptions more rapidly, and often followed the same deceptive route they had learned in previous experiments.
- Teams continually subjected to deception became disheartened and only 3 of the original 15 participants under deception finished the study, while 8 of 12 participants not working under deceptive conditions finished the study.
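The attrition finding is starker when expressed as completion rates, a trivial worked comparison of the figures above:

```python
def completion_rate(finished: int, started: int) -> float:
    """Fraction of participants who completed the study."""
    return finished / started

# Teams continually subjected to deception: 3 of 15 finished (20%).
# Teams not working under deceptive conditions: 8 of 12 finished (~67%).
deception_rate = completion_rate(3, 15)
control_rate = completion_rate(8, 12)
```

Deceived participants dropped out at more than three times the rate of the control group, which supports the "disheartened" characterization.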
Conclusion
The net objective of combined deceptions is that attackers spend more time going down deception paths rather than real paths, that the deception paths become increasingly difficult for attackers to differentiate from real ones, and that the defenders gain time, insight, data, and control over the attackers while reducing defensive costs and improving outcomes. Content-oriented deception can be an effective deterrent against network attackers, and deception capabilities should be improved to combat highly skilled, long-term network threats.
Friday, April 2, 2010
The "Red Team" Forging a Well-Conceived Contingency Plan
According to the article, “The Red Team: Forging a Well-Conceived Contingency Plan,” a Red Team is described as: “a group of subject-matter experts (SME), with various, appropriate air and space disciplinary backgrounds, that provides an independent peer review of products and processes, acts as a devil’s advocate, and knowledgeably role-plays the enemy and outside agencies, using an iterative, interactive process during operations planning.” This article shows the manner in which the US Air Forces in Europe worked to set up a Red Team in an effort to simulate contact with different enemies throughout the planning of Operation Allied Force in 1998–99.
Strengths:
· Yields a closely synchronized planning staff.
· Drives a more complete analysis at all phases.
· Reveals overlooked planning opportunities.
· Extrapolates unexplored strategic implications.
Weaknesses:
· Loses effectiveness without Blue planners’ acceptance of Red as a value-adding group.
· Trust of the Blue Team is key to the success of the exercise.
· It is only a simulation and not always an accurate representation of the enemy’s decision making.
How-To:
1. Create a Red team composed of Subject Matter Experts, external to the Blue team’s sources.
2. Preparation by the Red Team. Team members should immerse themselves in learning everything they can about what has gone before in the crisis at hand and what the enemy and other adversaries are thinking. (Perhaps by creating a checklist of the information that the team needs to know.)
3. Meeting between the Red Team and Blue planners to explain critical points of the Red Team’s purpose, in order to alleviate friction.
4. Conduct a
5. Participate in a Course of Action (COA) War Game.
6. Select a COA and conduct a Plan War Game based on it, in order to create an actual Operations Order (OPORD).
7. Conduct a Mission Rehearsal.
Conclusion:
Thursday, April 1, 2010
Reflections From A Red Team Leader
Introduction
Ms. Susan Craig is a red team analyst at the Joint Intelligence Operations Center at U.S. Pacific Command. In her article, she identifies the purpose of red teaming as a technique that questions our assumptions and allows us to think like the enemy. She highlights the skills and attributes (based on personal red teaming experiences) that make an effective red team and red team leader. These include, but are not limited to, critical and creative thinking skills, cultural awareness, effective communication skills, and group diversity.
Benefits:
- Develops a more accurate frame of reference of the operational environment
- Causes consideration of the way dependent variables influence each other within that environment
- Thinking within the construct of the enemy's culture helps mitigate underestimating the enemy
- The mindset and skills needed to be a "red teamer" can be applied outside of red teaming for better conceptualization/analysis
- Creates opportunity by recognizing cultural myths and the ways they can be exploited to gain operational/psychological advantage
Limitations:
- Dependent upon depth of knowledge in the following areas: physical environment, nature and stability of the state, sociological demographics, regional and global relationships, military capabilities, information, technology, external organizations, national will, time, economics, and culture. If these are not well understood, the exercise may be prone to mirror-imaging.
- Because it attempts to conceptualize a complex environment, the full extent of actions may not be considered.
Craig argues that red teaming is different from intelligence analysis in the following ways:
- The red team is not bounded by the construct/plan developed by the staff or by the need for evidence and corroboration.
- The red teamer is more like a historian (whose job is to ask big, broad questions) than an intelligence analyst (whose job is often to answer very specific, narrow questions).
- The red team's job goes beyond understanding the environment to include understanding how we can shape it.
Red teaming is used in intelligence analysis, as well as in the operational environment, because bias can impede the quality and value of analytic products just as it can impede decision making. Utilizing this technique to inhibit bias while improving critical thinking skills, cultural awareness, and communication is critical for understanding the operational environment and determining causality within that environment.
Source
This article came from Military Review, March-April 2007.
Accessed via the Red Team Journal
http://redteamjournal.com/resources/