Since June 1998, CIDF has held a series of demonstrations and experiments designed to help us test and evaluate how well the CIDF products help intrusion detection and response systems to share information. This page will give some idea of what went on in the previous bake-offs, and what current activities we're engaged in.
Our first experiment took place at UC Davis in June 1998. There were two separate tests. The first test involved Boeing Corporation, MITRE Corporation, and USC/ISI; participants sent binary-encoded GIDOs to each other, where only the sender knew the contents a priori. This was primarily a test of the encoding rules in the CISL specification.
In the end, a few minor bugs needed to be fixed, and the draft was modified to clarify some ambiguities. As a result of discussion held after this test, the encoding rules were simplified somewhat to make it easier to delimit CISL expressions.
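The actual CISL encoding rules live in the specification, but the delimiting problem they address can be illustrated with a minimal length-prefix framing sketch (purely an assumption for illustration — not the CISL byte format): prefixing each encoded expression with its length lets a receiver split a byte stream into expressions unambiguously.

```python
import struct

# Illustrative only: a generic length-prefix framing scheme, NOT the
# actual CISL encoding rules. It shows why explicit delimiting makes
# parsing a stream of encoded expressions unambiguous.

def frame(payload: bytes) -> bytes:
    """Prefix an encoded expression with its 4-byte big-endian length."""
    return struct.pack(">I", len(payload)) + payload

def deframe(buf: bytes):
    """Split one framed expression off the front of buf."""
    (n,) = struct.unpack(">I", buf[:4])
    return buf[4:4 + n], buf[4 + n:]

stream = frame(b"(Attack ...)") + frame(b"(Delete ...)")
first, rest = deframe(stream)
print(first)  # b'(Attack ...)'
```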
The second test involved UC Davis and SRI International, and concerned a simple attack-detection scheme. UC Davis would collect data from Unix accounting and other records to derive three streams of information: (1) when users su'd to root; (2) when users logged in as root; and (3) when users executed something as root. This information was then passed to SRI in the form of Perl-encoded (hashed-array) GIDOs.
When activity (3) occurred without either (1) or (2) occurring first (within a session), it was flagged as an error. This required SRI to interpret the GIDOs before it could make use of them. This experiment was done completely blind, aside from the Perl encoding, which was not within the scope of the test.
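The flagging rule above is simple enough to sketch. The following is a minimal illustration (the event and session representations are assumptions, not the actual UC Davis/SRI code): within each session, a root command is flagged unless an su-to-root or root login was seen first.

```python
from dataclasses import dataclass

# Hypothetical event records; the stream numbers follow the text:
# (1) su to root, (2) login as root, (3) command executed as root.
@dataclass
class Event:
    session: str  # session identifier (an assumed representation)
    kind: int     # 1, 2, or 3

def flag_errors(events):
    """Return sessions where root activity (3) occurred without a
    prior su (1) or root login (2) within the same session."""
    authorized = set()
    flagged = []
    for ev in events:
        if ev.kind in (1, 2):
            authorized.add(ev.session)
        elif ev.kind == 3 and ev.session not in authorized:
            flagged.append(ev.session)
    return flagged

events = [
    Event("s1", 1), Event("s1", 3),  # su, then root command: fine
    Event("s2", 3),                  # root command with no su/login: flagged
]
print(flag_errors(events))  # → ['s2']
```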
There was one minor bug that required fixing, arising from a misreading of the specification. At the end, it was decided that the specification was clear enough on the point that it did not need to be rewritten.
Our second demonstration took place at the Technology Integration Center (TIC) in June 1999. In this case, a series of recorded attacks were to be replayed on a private network. Several sensors were deployed along the network, along with three analysis systems from Boeing, Silicon Defense, and Stanford University.
The sensors were provided by both academic and commercial institutions. They were to detect the signatures of the attacks being replayed and relay them to the Boeing Discovery Coordinator in the form of GIDOs. The CIDF message layer, designed by Boeing, was also tested here for the first time.
Each sensor in such an environment will make certain kinds of errors: either false positives (detecting an attack when there is none) or false negatives (missing an attack). The benefit of the architecture in this demonstration is that the analysis systems make use of input from a variety of sensors, and by knowing the relative strengths of the sensors, can make a more accurate assessment of the likelihood that an attack occurred.
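One simple way to picture this kind of fusion is a naive-Bayes combination of sensor reports, where each sensor's "strength" is its detection rate and false-alarm rate. This is only a sketch of the general idea under assumed rates — the demonstration's analysis systems used their own methods, which are not described here.

```python
# A naive-Bayes sketch of combining sensor reports, assuming known
# per-sensor detection rates (tpr) and false-alarm rates (fpr).
# The rates below are made up for illustration.

def fuse(reports, rates, prior=0.01):
    """reports: {sensor: True if it alerted}; rates: {sensor: (tpr, fpr)}.
    Returns the posterior probability that an attack occurred."""
    odds = prior / (1 - prior)
    for sensor, alerted in reports.items():
        tpr, fpr = rates[sensor]
        if alerted:
            odds *= tpr / fpr            # alert is strong evidence
        else:
            odds *= (1 - tpr) / (1 - fpr)  # silence is weak evidence against
    return odds / (1 + odds)

rates = {"A": (0.9, 0.05), "B": (0.7, 0.10)}  # assumed sensor strengths
p = fuse({"A": True, "B": False}, rates)
print(round(p, 3))  # → 0.057
```

A strong sensor's alert raises the posterior sharply even when a weaker sensor stays silent, which is the effect the demonstration's architecture was meant to exploit.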
This demonstration was not "blinded"; the sensors and analysis systems were both told in advance which attacks would be replayed, as well as the format of the GIDOs to be exchanged, although the exact contents of those GIDOs were not predetermined.
Here is some of the output from this demonstration. The code is not considered stable, but if you feel brave, please try it out and report bugs:
The demonstration was judged a success; there were several instances in which one or more of the sensors missed an attack or flagged a non-existent one, but the analysis systems correctly discounted those reports and accurately detected the attacks. During the post mortem, it was decided that a new, blinded experiment needed to be held in order to judge the strength of the specification more honestly.
This third experiment is designed to test semantic interoperability of the CISL specification. The participants include Silicon Defense, USC/ISI, Harvey Mudd College, Stanford University, and others.
The format of the experiment is as follows. The participants are divided into three groups: A, B, and C. Group A devises a series of 10 scenarios. Each scenario consists of a title, a difficulty level, an overview description, a detailed description, and a set of three questions.
Group A then hands the title, difficulty level, and detailed description of five of the scenarios to each Group B participant. (Different participants may receive different sets of scenarios.) Each Group B member may discuss the experiment in detail with no one but Group A, and then only to clarify confusion about the detailed descriptions. Group B's task is to encode as much of the information in the detailed description as possible into a single GIDO (plus optional addenda).
While Group B is thus at work, Group A also hands the title, difficulty level, overview description, and questions of all ten scenarios to Group C. Group C must then write code for each scenario; this code must read in a binary GIDO, interpret it, and answer the three questions, all without human assistance (with the possible exception of wording the answer).
When Group B is finished, it sends its GIDOs to Group C. This "sending" doesn't use the message layer; the GIDOs can be e-mailed. Group C must freeze its code prior to receiving the first of the GIDOs from Group B. Group C's code will then read in the Group B GIDOs, interpret them, and answer the associated questions. The GIDOs will be labeled, so Group C's code need not "match" the GIDOs to the scenarios.
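Because the GIDOs arrive labeled, Group C's harness reduces to a simple dispatch table: each scenario label maps to the handler written for that scenario before the code freeze. The sketch below is purely hypothetical (the label format, handler names, and answer format are all assumptions), but it shows why no "matching" step is needed.

```python
# Hypothetical sketch of a Group C harness. Labels, handler names, and
# the answer format are assumptions, not the actual experiment code.

def handle_scenario_1(gido):
    # ... interpret the binary GIDO and answer the three questions ...
    return ["answer 1", "answer 2", "answer 3"]

HANDLERS = {
    "scenario-1": handle_scenario_1,
    # one frozen handler per scenario ...
}

def answer(label, gido):
    """Dispatch a labeled GIDO to its scenario's handler."""
    return HANDLERS[label](gido)  # labeled GIDOs: no matching needed

print(answer("scenario-1", b"..."))
```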
Participants may write their code entirely from scratch, or they may use (and are encouraged to use) the following tools, most of which were developed primarily for the preceding demonstration:
We are still interested in participants for Group B. Please let us know if you would like to join in. Send e-mail to <firstname.lastname@example.org> to indicate your interest.
Maintained by Brian Tung
Last modified 10 September 1999