Adversarial noise generation using projected gradient descent (PyTorch implementation)

Introduction

This is a PyTorch implementation of the Projected Gradient Descent (PGD) attack (source: Y. Deng and L. J. Karam, "Universal Adversarial Attack Via Enhanced Projected Gradient Descent," 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 2020, pp. 1241-1245, doi: 10.1109/ICIP40778.2020.9191288, https://ieeexplore.ieee.org/abstract/document/9191288). The PGD attack is a white-box attack that generates adversarial examples by adding small perturbations to the input data. The perturbations are computed by maximizing the loss function with respect to the input, while ensuring that the perturbed data stays within a small epsilon-ball around the original data. I used a relatively simple ResNet-18 model trained on the ImageNet dataset (n.b. the helpers folder also contains a copy of DeepFool, modified slightly for the latest version of PyTorch, which is not a targeted attack method).
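The core update is simple: at each step, take a signed-gradient ascent step on the loss, then project the result back into the epsilon-ball around the original input (and the valid pixel range). The snippet below is a minimal, self-contained sketch of that loop, not the repository's exact implementation; the function name `pgd_attack` and its default hyperparameters are placeholders.

```python
# Minimal L-infinity PGD sketch (illustrative only, not the repo's exact code).
import torch
import torch.nn as nn

def pgd_attack(model, images, labels, eps=8 / 255, alpha=2 / 255, steps=10):
    """Generate adversarial examples by iterated signed-gradient ascent
    followed by projection back into the eps-ball around `images`."""
    images = images.clone().detach()
    labels = labels.clone().detach()
    adv = images.clone().detach()
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]

        # Ascend the loss, then project into the eps-ball and the valid
        # pixel range (the [0, 1] clamp assumes unnormalized inputs).
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.clamp(adv, images - eps, images + eps)
        adv = torch.clamp(adv, 0.0, 1.0)

    return adv.detach()
```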

Installation

To run the code, you need to have the following main packages installed:

  • PyTorch 2.3.1 (with CUDA 11.8)
  • numpy

Clone the repository: git clone https://github.com/carlacodes/adversnoise.git

then install the required packages using conda: conda env create -f adversnoise.yml

or with pip (not recommended): pip install -r requirements.txt

Usage

The main script can be found here: run_advnoise.py. Unit tests are also provided in the tests folder.
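For illustration only, here is a hedged sketch of what programmatic usage could look like with a torchvision ResNet-18 and the `pgd_attack` sketch from the introduction; the actual run_advnoise.py script may expose a different interface and options.

```python
# Hypothetical standalone example; run_advnoise.py may differ in its arguments
# and internals. Requires torchvision (pretrained weights are downloaded).
import torch
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1).eval()

# Stand-in input in [0, 1]; in practice, load and preprocess a real image
# (the sketch above assumes unnormalized [0, 1] pixels).
x = torch.rand(1, 3, 224, 224)
y = model(x).argmax(dim=1)  # attack the model's current prediction

x_adv = pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10)
print("clean:", y.item(), "adversarial:", model(x_adv).argmax(dim=1).item())
```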