# Evaluating AML Threats in NIDS

This repository contains an implementation of an _evaluation system_ for running experiments. An experiment measures the success rate of adversarial machine learning (AML) evasion attacks against network intrusion detection systems (NIDS).

The system evaluates classifiers trained on network data sets against adversarial black-box evasion attacks.
Supported classifiers are a Keras deep neural network and the tree-based ensemble learner XGBoost.
Both classifiers can be enhanced with an adversarial defense.
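The classifier-plus-defense pairing can be pictured as a wrapper around the base model. Below is a minimal, hypothetical sketch (the names and the toy clipping defense are illustrative assumptions; the repository's actual models are a Keras DNN and XGBoost):

```python
# Hedged sketch: a defense wraps a base classifier and sanitizes
# inputs before prediction. Names and logic are hypothetical.

class Classifier:
    """Toy base classifier over a single scalar feature."""
    def predict(self, x):
        return "malicious" if x < 0.5 else "benign"

class Defended:
    """Defense wrapper: clip inputs to a valid range (assumption),
    then delegate to the base classifier."""
    def __init__(self, base):
        self.base = base

    def predict(self, x):
        x = min(max(x, 0.0), 1.0)   # e.g. input clipping as a stand-in defense
        return self.base.predict(x)

model = Defended(Classifier())
```

The same `predict` interface lets the attack treat defended and undefended models identically.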

<br/>

**Experiment Overview**

<pre>
┌───────────────┐ ┌───────────┐ ┌───────────────┐
│ classifier │ ────────→ │ Evasion │ ───────→ │ validator │ ───────→ valid & evasive
│ (+defense) │ ◁-------- │ attack │ │ [constraints] │ adversarial examples
└───────────────┘ query └───────────┘ └───────────────┘
INPUT Training data Testing data OUTPUT
</pre>

* The input is NIDS data, and the validator is configured with the applicable domain constraints.
* Each experiment run repeats 5 times, over disjoint partitions of the input data, using 5-fold cross-validation.
* Various metrics are recorded: classifier accuracy, evasion rates before and after validation, runtime, and more.
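The 5-fold partitioning above can be sketched in a few lines. This is an illustration of disjoint folds, not the repository's actual implementation (the fold count and round-robin split are assumptions):

```python
# Minimal sketch of 5-fold cross-validation partitioning:
# every record lands in the test fold exactly once, so the
# five test partitions are disjoint.

def k_fold_partitions(records, k=5):
    """Yield (train, test) splits over k disjoint test folds."""
    folds = [records[i::k] for i in range(k)]  # round-robin assignment
    for i in range(k):
        test = folds[i]
        train = [r for j, fold in enumerate(folds) if j != i for r in fold]
        yield train, test

data = list(range(10))
splits = list(k_fold_partitions(data, k=5))
```

Each of the 5 splits trains on 4 folds and tests on the remaining one.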

<br/>

**Source code organization**

| Directory | Description |
|:--------------|:------------------------------------------------------|
| `.github` | Actions workflow files |
| `aml`         | Evaluation system implementation source code           |
| `config` | Experiment configuration files |
| `data` | Preprocessed datasets ready for experiments |
| `result`      | Reference results for comparison                       |
The runtime estimates are for an 8-core, 32 GB RAM Linux (Ubuntu 20.04) machine.

```
make query
```

(~24 h) This experiment uses the full testing set and repeats experiments with different model query limits.
By default, the query limits are 2, 5, and the attack's default (which varies by attack).
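The query-limit idea is that a black-box attack may only call the model a bounded number of times. A hedged sketch of such a loop (the function names, toy model, and perturbation step are all hypothetical; the real attacks and limits live in the configuration files):

```python
# Sketch of a query-limited black-box evasion attack loop.
# Everything here is a toy stand-in for illustration only.

def run_attack(model, sample, max_queries):
    """Perturb `sample` until it evades `model` or the budget runs out."""
    queries = 0
    candidate = sample
    while queries < max_queries:
        queries += 1
        if model(candidate) == "benign":     # one model query
            return candidate, queries        # evasive example found
        candidate = candidate + 1            # toy perturbation step
    return None, queries                     # query budget exhausted

# A toy "model" that flags values below 3 as malicious.
model = lambda x: "malicious" if x < 3 else "benign"

results = {limit: run_attack(model, 0, limit) for limit in (2, 5, 10)}
```

With a budget of 2 the attack fails; with 5 or more it finds an evasive example after 4 queries.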

#### 2. Limited sampled input

```
make sample
```

(~90 min) Runs experiments on a limited input size by randomly sampling the testing set.
By default, the sample size is 50 and sampling is repeated 3 times. The result is the average of 3 runs.
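The sample-and-average scheme can be sketched as follows (a minimal illustration using the defaults stated above; `run_experiment` and the toy metric are hypothetical stand-ins):

```python
import random

# Sketch of repeated random sampling with averaged results:
# draw `repeats` samples of `size` records, run the experiment
# on each, and average the resulting metric.

def sampled_average(testing_set, run_experiment, size=50, repeats=3, seed=0):
    rng = random.Random(seed)                  # fixed seed for reproducibility
    scores = []
    for _ in range(repeats):
        sample = rng.sample(testing_set, size)  # sample without replacement
        scores.append(run_experiment(sample))
    return sum(scores) / repeats                # average of the runs

testing_set = list(range(1000))
avg = sampled_average(testing_set, run_experiment=len)  # toy metric: sample size
```

Averaging over repeated samples reduces the variance introduced by evaluating on only 50 records at a time.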

#### 3. Plot results
You are ready to run experiments and make code changes.
[IOT]: https://www.stratosphereips.org/datasets-iot23/
[UNS]: https://research.unsw.edu.au/projects/unsw-nb15-dataset
[DOC]: https://docs.docker.com/engine/install/
[RBT]: https://github.com/chenhongge/RobustTrees/tree/master/python-package#from-source
