Blind Concealment from Reconstruction-based Attack Detectors for Industrial Control Systems via Backdoor Attacks
When using the code from this repository, please cite our work as follows:
@InProceedings{walita23icsbackdoorattacks,
  title     = {Blind Concealment from Reconstruction-based Attack Detectors for Industrial Control Systems via Backdoor Attacks},
  author    = {Walita, Tim and Erba, Alessandro and Castellanos, John H. and Tippenhauer, Nils Ole},
  booktitle = {Proceedings of the ACM Cyber-Physical System Security Workshop (CPSS)},
  year      = {2023},
  month     = jul,
  doi       = {10.1145/3592538.3594271},
  publisher = {ACM},
  address   = {New York, NY, USA}
}
These are the main libraries I used in my virtual environment to run all files:
- tensorflow (2.2.0)
- keras (2.3.1)
- keras-preprocessing (1.1.0)
- pandas (1.2.2)
- numpy (1.19.1)
- scikit-learn (0.24.1)
- scipy (1.4.1)
- seaborn (0.11.1)
- notebook (6.2.0)
- jupyter (1.0.0)
- h5py (2.10.0)
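The pins above can be collected in a requirements.txt so the environment is reproducible with `pip install -r requirements.txt` (a sketch based solely on the version list above; the repository may not ship such a file):

```
tensorflow==2.2.0
keras==2.3.1
keras-preprocessing==1.1.0
pandas==1.2.2
numpy==1.19.1
scikit-learn==0.24.1
scipy==1.4.1
seaborn==0.11.1
notebook==6.2.0
jupyter==1.0.0
h5py==2.10.0
```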
- Attacks: Contains all attacks and helper files
- Evaluation: Contains the notebook to evaluate the attacks and some graphs
- autoencoder: Contains the original BATADAL datasets (in BATADALcsv) and the class file for the attacked model (autoencoder)
- backdoored_datasets: Contains all the backdoored datasets that are generated when running an attack
python3 Standard_Backdoor_Attack.py
You can change the pattern/trigger in this attack manually in the main function by changing the list index. I marked the corresponding code line with a comment that starts with "CHANGE ME...".
These are the seeds to reconstruct the results mentioned in the thesis for this attack:
Seed in attack file: random seed = 123
Seed in evaluation file: tensorflow seed = 123
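The "CHANGE ME" line can be pictured roughly like this (a hypothetical sketch; the actual pattern list and variable names in Standard_Backdoor_Attack.py may differ):

```python
import random

random.seed(123)  # seed used in the attack file to reproduce the thesis results

# Hypothetical list of candidate backdoor triggers (actuator bit patterns);
# the real list in the attack file may differ in length and content.
patterns = [
    [0, 0, 0, 1],
    [0, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
]

trigger = patterns[0]  # CHANGE ME: pick another index to try a different trigger
print(trigger)
```

Changing only the list index keeps the rest of the attack pipeline untouched, which is why the README points at a single marked line.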
python3 Improved_Standard_Attack.py
You can change the pattern/trigger in this attack manually in the main function by changing the list index. I marked the corresponding code line with a comment that starts with "CHANGE ME...".
These are the seeds to reconstruct the results mentioned in the thesis for this attack:
Seed in attack file: random seed = 123
Seed in evaluation file: tensorflow seed = 123
python3 File_To_Execute_Combined_Attack.py
This attack runs automatically on all 51 patterns sequentially and also evaluates each pattern on our own test dataset. In the end, it prints the best result.
The attack can also be run manually. To do so, adjust the main function in the file Combined_Backdoor_Attack.py accordingly (by commenting and uncommenting the respective lines) and execute that file instead of the previous one.
These are the seeds to reconstruct the results mentioned in the thesis for this attack:
Seed in attack file: random seed = 123
Seed in evaluation file: tensorflow seed = 123
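Conceptually, the automatic mode loops over all 51 patterns and keeps the best one, roughly as below. This is a hypothetical sketch: `run_attack`, the dummy scoring, and the pattern enumeration are placeholders, not the actual code in Combined_Backdoor_Attack.py.

```python
import random

random.seed(123)  # seed from the attack file

def run_attack(pattern):
    """Placeholder for training the backdoored model with `pattern` and
    evaluating it on our own test dataset; returns a score where higher
    means a more effective backdoor. Dummy stand-in for illustration."""
    return sum(pattern)

# Hypothetical enumeration of the 51 candidate patterns as 6-bit vectors;
# how the real file enumerates them is not shown here.
patterns = [[int(b) for b in f"{i:06b}"] for i in range(51)]

# Try every pattern sequentially and print the best result at the end.
best_pattern = max(patterns, key=run_attack)
print("best result:", best_pattern)
```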
python3 Constrained_Backdoor_Attack.py
By default, this attack runs with the second possible pattern of PLC 3 ([1, 0, 0, 0], the best result). You can change the PLC and pattern manually in the main function. At the respective places in the code, I added comments that start with "CHANGE ME..." and explain how to change the PLC or pattern there.
These are the seeds to reconstruct the results mentioned in the thesis for this attack:
Seed in attack file (PLC 1): random seed = 99
Seed in attack file (PLC 3): random seed = 123
Seed in attack file (PLC 5): random seed = 123
Seed in evaluation file: tensorflow seed = 123
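The per-PLC seeds above can be kept in a small lookup, e.g. (a hypothetical sketch; the seed values come from this README, but the variable names are not from the actual file):

```python
import random

# Seeds that reproduce the thesis results for each PLC (from this README).
PLC_SEEDS = {1: 99, 3: 123, 5: 123}

plc = 3                 # CHANGE ME: switch to PLC 1 or 5
pattern = [1, 0, 0, 0]  # CHANGE ME: default pattern for PLC 3 (best result)

random.seed(PLC_SEEDS[plc])
print(plc, pattern)
```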
python3 Sensor_Backdoor_Attack.py
You can change the pattern/trigger in this attack manually in the main function by changing the list index. I marked the corresponding code line with a comment that starts with "CHANGE ME...".
These are the seeds to reconstruct the results mentioned in the thesis for this attack:
Seed in attack file: random seed = 123
Seed in evaluation file: tensorflow seed = 123
The evaluation for all attacks can be found in the notebook: Evaluation.ipynb