From ac38ebaf5c9fe68ec8ebf78a047897f290468493 Mon Sep 17 00:00:00 2001
From: Lukas Struppek <25303143+LukasStruppek@users.noreply.github.com>
Date: Thu, 18 Jan 2024 07:30:28 +0000
Subject: [PATCH] Update readme

---
 README.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 3328da1..f3deb0a 100644
--- a/README.md
+++ b/README.md
@@ -24,7 +24,7 @@ Model inversion attacks (MIAs) intend to create synthetic images that reflect the
 characteristics of a specific class from a model's training data. For face recognition, the target model is trained to classify the identities of a set of people. An adversary without specific knowledge about the identities but with access to the trained model then tries to create synthetic facial images that share characteristic features with the targeted identities, such as gender, eye color, and facial shape. More intuitively, the adversary can be interpreted as a phantom sketch artist who aims to reconstruct faces based on the knowledge extracted from a target model.
 
 # Changelog
-- **January 17, 2024** Updated repository to add the code of ICLR paper "Be Careful What You Smooth For". The update includes support for label smoothing training, the knowledge extraction score, and various code improvements.
+- **January 18, 2024** Updated repository to add the code of ICLR paper "Be Careful What You Smooth For". The update includes support for label smoothing training, the knowledge extraction score, and various code improvements. We also added pre-trained weights of the ResNet-152 classifiers used in the paper.
 - **October 12, 2023** Updated PyTorch version to 2.0 to improve speed and add support for additional features.
 - **July 20, 2022** Added GPU memory requirements.
 - **July 18, 2022** Updated BibTex with proceeding information.
@@ -145,6 +145,9 @@ After an attack configuration file has been created, run the following command t
 ```bash
 python attack.py -c=configs/attacking/default_attacking.yaml
 ```
+
+We also provide pre-trained model weights for the target and evaluation models with our GitHub Releases. Download the weight files to a local folder and adjust the model parameters accordingly, as demonstrated in ```configs/attacking/default_attacking_local.yaml```.
+
 All results including the metrics will be stored at WandB for easy tracking and comparison.
 
 ## Compute Knowledge Extraction Score
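Note on the added paragraph: it directs users to ```configs/attacking/default_attacking_local.yaml``` for running the attack with locally downloaded weights. The sketch below gives a rough idea of what such a local-weights adjustment could look like; the field names, architectures, and file paths are illustrative assumptions rather than the repository's actual configuration schema, so the referenced config file remains the authoritative example.

```yaml
# Illustrative sketch only: key names, architectures, and paths are assumptions,
# not the repository's actual configuration schema.
# The idea is to point the target and evaluation models at weight files
# downloaded from the GitHub Releases instead of loading them remotely.
target_model:
  architecture: resnet152                # classifier released with the paper
  num_classes: 530                       # adjust to the number of trained identities
  weights: weights/resnet152_target.pt   # hypothetical local path
evaluation_model:
  architecture: inception_v3
  num_classes: 530
  weights: weights/inception_v3_eval.pt  # hypothetical local path
```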