Update readme
LukasStruppek committed Jan 18, 2024
1 parent 39bbfbe · commit ac38eba
Showing 1 changed file with 4 additions and 1 deletion.
README.md (5 changes: 4 additions & 1 deletion)
@@ -24,7 +24,7 @@
Model inversion attacks (MIAs) aim to create synthetic images that reflect the characteristics of a specific class from a model's training data. In the face recognition setting, the target model is trained to classify the identities of a set of people. An adversary without specific knowledge about the identities, but with access to the trained model, then tries to create synthetic facial images that share characteristic features with the targeted identities, such as gender, eye color, and facial shape. Intuitively, the adversary can be thought of as a forensic sketch artist who reconstructs faces based on the knowledge extracted from the target model.
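
As a rough illustration of the idea (not the attack implemented in this repository), a gradient-based MIA can be sketched as optimizing a generator's latent code until the target model confidently assigns the synthesized image to the chosen identity. Here, `generator`, `target_model`, and the latent dimension of 512 are placeholder assumptions:

```python
import torch
import torch.nn.functional as F

def invert_identity(generator, target_model, target_class, steps=1000, lr=0.1):
    """Optimize a latent code so the generated face is classified as target_class."""
    latent = torch.randn(1, 512, requires_grad=True)  # 512 is an assumed latent size
    optimizer = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        image = generator(latent)      # synthesize a candidate face
        logits = target_model(image)   # query the target classifier
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        loss.backward()                # gradient flows back into the latent code
        optimizer.step()
    return generator(latent).detach()
```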

# Changelog
- **January 17, 2024** Updated the repository to add the code of the ICLR paper "Be Careful What You Smooth For". The update includes support for label smoothing training, the knowledge extraction score, and various code improvements.
- **January 18, 2024** Updated the repository to add the code of the ICLR paper "Be Careful What You Smooth For". The update includes support for label smoothing training, the knowledge extraction score, and various code improvements. We also added pre-trained weights of the ResNet-152 classifiers used in the paper.
- **October 12, 2023** Updated PyTorch version to 2.0 to improve speed and add support for additional features.
- **July 20, 2022** Added GPU memory requirements.
- **July 18, 2022** Updated BibTex with proceeding information.
@@ -145,6 +145,9 @@ After an attack configuration file has been created, run the following command t
```bash
python attack.py -c=configs/attacking/default_attacking.yaml
```

We also provide pre-trained model weights for the target and evaluation models via our GitHub releases. Download the weight files to a local folder and adjust the model parameters accordingly, as demonstrated in `configs/attacking/default_attacking_local.yaml`.
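
As a minimal sketch of what such an adjustment might look like in code (assuming the released files are standard PyTorch state dicts; the file path and identity count below are hypothetical placeholders, not the actual release values), loading a downloaded ResNet-152 checkpoint could work roughly as follows:

```python
import torch
from torchvision.models import resnet152

# Hypothetical example: the file name and number of identities are placeholders;
# substitute the actual values from the downloaded release files and your dataset.
model = resnet152(num_classes=530)
state_dict = torch.load("weights/resnet152_target.pt", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()  # switch to evaluation mode before attacking or evaluating
```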

All results, including the metrics, will be logged to WandB for easy tracking and comparison.
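
For orientation, logging metrics to WandB generally follows the pattern below; the project name, metric keys, and values are illustrative placeholders, not the repository's actual ones:

```python
import wandb

# Illustrative only: project name, metric names, and values are placeholders.
run = wandb.init(project="model-inversion-attacks")
run.log({"attack_acc_top1": 0.0, "avg_feature_distance": 0.0})
run.finish()
```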

## Compute Knowledge Extraction Score