In total, 5 experimental runs of 4 hours each were conducted on each of 6 exercises, for 30 runs in all: a deception "on" group and a deception "off" control group (6 runs each) plus random "on"/"off" mixes (18 runs).
Each run was preceded by a standard briefing and a run-specific briefing and followed by filling out of standard assessment forms, both individually by all team members and as a group. The exercises were of increasing intensity and difficulty so as to keep the participants challenged. Feedback was provided and varying amounts of information were disclosed to teams during the course of the experiment.
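The run counts above can be sanity-checked with a short sketch; the group sizes are those stated in the text, and the variable names are illustrative only:

```python
# Design described above: 5 runs of 4 hours each on each of 6 exercises.
runs_per_exercise = 5
exercises = 6
total_runs = runs_per_exercise * exercises  # 30 runs in total

# Breakdown by condition, as stated in the text.
deception_on = 6    # deception "on" group
deception_off = 6   # deception "off" control group
random_mixes = 18   # random "on"/"off" mixes

# The per-condition counts must account for every run.
assert deception_on + deception_off + random_mixes == total_runs
print(total_runs)  # 30
```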
The participants were students ranging in age from 16 to 38, all in computer-related fields, all with excellent grade point averages, all US citizens, all interested in information protection, and all participating in an intensive program of study and research in this area.
The use of red teams in simulating the effectiveness of deception methods on human network attackers revealed several interesting results:
- Teams that did not know they were working in a non-deceptive environment engaged in self-deception, which hindered their progress. The study concluded that the mere threat of deception offers some protection against attackers.
- Teams unknowingly operating under deceptive conditions that followed a deception to its logical end gave up on the problem before the allotted time expired because they believed they had finished correctly.
- Teams operating in a deceptive environment, even after being educated about the deceptive techniques being employed, were rarely able to move past the deceptions more rapidly, and often followed the same deceptive route they had learned in previous experiments.
- Teams continually subjected to deception became disheartened: only 3 of the original 15 participants working under deception finished the study, compared with 8 of the 12 participants not working under deceptive conditions.
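As a rough illustration of the attrition figures in the last point, the two groups correspond to very different completion rates (a minimal calculation using the numbers above; it is not part of the original study's analysis):

```python
# Completion figures reported in the red-team study above.
deception_finished, deception_total = 3, 15  # participants under deception
control_finished, control_total = 8, 12      # participants not under deception

rate_deception = deception_finished / deception_total  # 0.20
rate_control = control_finished / control_total        # ~0.67

print(f"deception group: {rate_deception:.0%} finished")  # 20% finished
print(f"control group: {rate_control:.0%} finished")      # 67% finished
```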
The net objective of combined deceptions is to make attackers spend more time pursuing deception paths rather than real paths, to make the deception paths increasingly indistinguishable from real ones to the attackers, and to let defenders gain time, insight, data, and control over the attackers while reducing defensive costs and improving outcomes. Content-oriented deception can be an effective deterrent against network attackers, and deception capabilities should be improved to combat highly skilled, long-term network threats.