
Composite-Adv

CVPR 2023 | Demonstration | Quickstart | Usage | Join Leaderboard | Citation

Overview

This repository contains the code for the CVPR 2023 paper "Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations." The research demo and the leaderboard for composite adversarial robustness can be found at CARBEN: Composite Adversarial Robustness Benchmark, which earlier appeared in the IJCAI 2022 Demo Track.

Authors: Lei Hsiung, Yun-Yun Tsai, Pin-Yu Chen, and Tsung-Yi Ho.

Composite Adversarial Attack (CAA)

Adversarial attacks have been widely explored for neural networks (NNs). However, previous studies have sought to create perturbations bounded under some distance metric; most such work has focused on $\ell_{p}$-norm perturbations (i.e., $\ell_{1}$, $\ell_{2}$, or $\ell_{\infty}$) and utilized gradient-based optimization to generate adversarial examples effectively. Adversarial perturbations, however, can be extended beyond $\ell_{p}$-norm bounds.

Figure 1. The Flow of the Composite Adversarial Attacks.

We combined the $\ell_\infty$-norm and semantic perturbations (i.e., hue, saturation, rotation, brightness, and contrast) and proposed a novel approach, the composite adversarial attack (CAA), capable of generating unified adversarial examples (see Figure 1). The main differences between CAA and previously proposed perturbations are (a) that CAA incorporates several threat models simultaneously, and (b) that CAA's adversarial examples are semantically similar and/or natural-looking, yet nevertheless produce large differences under $\ell_{p}$-norm measures.
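To make the composition concrete, below is a minimal sketch (not the repository's implementation) of one composite perturbation chain: the five semantic transforms applied in a fixed order, followed by an $\ell_\infty$ perturbation, using kornia operations from the Environment list. In CAA the transform parameters and the attack order are optimized with gradient-based methods; here every value is a fixed constant purely for illustration.

import torch
import kornia.enhance as KE
import kornia.geometry as KG

def composite_example(x: torch.Tensor, eps: float = 8 / 255) -> torch.Tensor:
    # x: batch of images in [0, 1], shape (B, 3, H, W).
    # All parameter values are illustrative, not the paper's attack bounds.
    x = KE.adjust_hue(x, 0.2)                          # hue shift (radians)
    x = KE.adjust_saturation(x, 1.2)                   # saturation factor
    x = KG.rotate(x, torch.full((x.size(0),), 5.0,
                                device=x.device))      # rotation (degrees)
    x = KE.adjust_brightness(x, 0.1)                   # brightness shift
    x = KE.adjust_contrast(x, 1.1)                     # contrast factor
    delta = torch.empty_like(x).uniform_(-eps, eps)    # random l_inf noise
    return torch.clamp(x + delta, 0.0, 1.0)

In the actual attack, delta and the semantic parameters would be updated from the model's gradients (PGD-style) rather than sampled once at random.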

In this README, we show how to run our code and reproduce the experimental results from our paper.

Environment

  • cuda 11.3
  • python 3.7.16
  • numpy 1.21.6
  • pytorch 1.12.0
  • torchvision 0.13.0
  • kornia 0.6.3
  • requests

Installation

Composite-Adv can be installed either as a Python package or by cloning the GitHub repository; the code covers both the training (GAT) and robustness-evaluation (CAA) phases. A quick import check follows the installation steps below.

  • Install Python 3.
  • Use Composite-Adv as a package.
    pip install git+https://github.com/IBM/composite-adv.git
  • Use Composite-Adv as a repository.
    git clone https://github.com/IBM/composite-adv.git
    cd composite-adv
    pip install -r requirements.txt
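After either route, a quick check that Python can see the package (if you installed via clone, run this from the repository root):

python -c "import composite_adv"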

GAT Pretrained Models and Baselines

Please use composite_adv.utilities.make_model() to load GAT pre-trained models.
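A hedged loading example: make_model() is the repository's helper, but the architecture string, keyword name, and checkpoint filename below are assumptions; check composite_adv/utilities.py for the exact signature and accepted values.

from composite_adv.utilities import make_model

# 'resnet50', checkpoint_path=..., and the filename are hypothetical here.
model = make_model('resnet50', 'cifar10', checkpoint_path='gat_f.pt')
model.eval()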

CIFAR-10

Two Architectures Available: ResNet-50 ($\dagger$) and Wide-ResNet-34-10 ($\ast$).

| $\text{GAT}^\dagger\text{-f}$ | $\text{GAT}^\dagger\text{-fs}$ | $\text{Normal}^\dagger$ | $\text{Madry}_\infty^\dagger$ | $\text{PAT}_\text{self}^\dagger$ | $\text{PAT}_\text{alex}^\dagger$ |
|:---:|:---:|:---:|:---:|:---:|:---:|
| Link | Link | Link | Link | Link | Link |

| $\text{GAT}^\ast\text{-f}$ | $\text{GAT}^\ast\text{-fs}$ | $\text{Normal}^\ast$ | $\text{Trades}_\infty^\ast$ | $\text{FAT}_\infty^\ast$ | $\text{AWP}_\infty^\ast$ |
|:---:|:---:|:---:|:---:|:---:|:---:|
| Link | Link | Link | Link | Link | Link |

ImageNet

One Architecture Available: ResNet-50 ($\dagger$).

| $\text{GAT}^\dagger\text{-f}$ | $\text{Normal}^\dagger$ | $\text{Madry}_\infty^\dagger$ | $\text{FAST-AT}_\infty^\dagger$ |
|:---:|:---:|:---:|:---:|
| Link | Link | Link | Link |

SVHN

One Architecture Available: Wide-ResNet-34-10 ($\ast$).

| $\text{GAT}^\ast\text{-f}$ | $\text{GAT}^\ast\text{-fs}$ | $\text{Normal}^\ast$ | $\text{Trades}_\infty^\ast$ |
|:---:|:---:|:---:|:---:|
| Link | Link | Link | Link |

Usage

This section demonstrates how to launch CAA and how to use GAT to derive a robust model.

Getting Started

In getting_started.ipynb, we provide a step-by-step demonstration of launching our composite adversarial attack (CAA). We use the CIFAR-10 dataset for the demonstration; the other datasets can be run in the same way.

Open In Colab
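For reference, here is a sketch of launching CAA directly from Python rather than through evaluate_model.py. CompositeAttack and its keyword arguments are taken verbatim from the evaluation commands below, but the import path and the calling convention are assumptions to verify against the notebook; model is a classifier loaded as shown earlier.

from composite_adv.attacks import CompositeAttack  # import path assumed

attack = CompositeAttack(model, enabled_attack=(0, 1, 2, 3, 4, 5),
                         order_schedule='random', inner_iter_num=10,
                         dataset='cifar10')
adv_inputs = attack(inputs, labels)  # perceptual-advex-style call, assumed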

Scripts for running the experiments

Composite Adversarial Attack (CAA) Evaluation

Evaluate robust accuracy / attack success rate of the model

Multiple Attacks
python evaluate_model.py \
       --arch ARCHITECTURE --checkpoint PATH_TO_MODEL \
       --dataset DATASET_NAME --dataset-path DATASET_PATH \
       --message MESSAGE_TO_PRINT_IN_CSV \
       --batch-size BATCH_SIZE --output RESULT.csv \
        "CompositeAttack(model, enabled_attack=(0,1,5), order_schedule='random', inner_iter_num=10, dataset='DATASET_NAME')" \
        "CompositeAttack(model, enabled_attack=(0,1,5), order_schedule='scheduled', inner_iter_num=10, dataset='DATASET_NAME')" \
        "CompositeAttack(model, enabled_attack=(3,4,5), order_schedule='random', inner_iter_num=10, dataset='DATASET_NAME')" \
        "CompositeAttack(model, enabled_attack=(3,4,5), order_schedule='scheduled', inner_iter_num=10, dataset='DATASET_NAME')" \
        "CompositeAttack(model, enabled_attack=(0,2,5), order_schedule='random', inner_iter_num=10, dataset='DATASET_NAME')" \
        "CompositeAttack(model, enabled_attack=(0,2,5), order_schedule='scheduled', inner_iter_num=10, dataset='DATASET_NAME')" \
        "CompositeAttack(model, enabled_attack=(0,1,2,3,4), order_schedule='random', inner_iter_num=10, dataset='DATASET_NAME')" \
        "CompositeAttack(model, enabled_attack=(0,1,2,3,4), order_schedule='scheduled', inner_iter_num=10, dataset='DATASET_NAME')" \
        "CompositeAttack(model, enabled_attack=(0,1,2,3,4,5), order_schedule='random', inner_iter_num=10, dataset='DATASET_NAME')" \
        "CompositeAttack(model, enabled_attack=(0,1,2,3,4,5), order_schedule='scheduled', inner_iter_num=10, dataset='DATASET_NAME')"
Single Attacks
python evaluate_model.py \
       --arch ARCHITECTURE --checkpoint PATH_TO_MODEL \
       --dataset DATASET_NAME --dataset-path DATASET_PATH \
       --message MESSAGE_TO_PRINT_IN_CSV \
       --batch-size BATCH_SIZE --output RESULT.csv \
        "NoAttack()" \
        "CompositeAttack(model, enabled_attack=(0,), order_schedule='fixed', inner_iter_num=10, dataset='DATASET_NAME')" \
        "CompositeAttack(model, enabled_attack=(1,), order_schedule='fixed', inner_iter_num=10, dataset='DATASET_NAME')" \
        "CompositeAttack(model, enabled_attack=(2,), order_schedule='fixed', inner_iter_num=10, dataset='DATASET_NAME')" \
        "CompositeAttack(model, enabled_attack=(3,), order_schedule='fixed', inner_iter_num=10, dataset='DATASET_NAME')" \
        "CompositeAttack(model, enabled_attack=(4,), order_schedule='fixed', inner_iter_num=10, dataset='DATASET_NAME')" \
        "CompositeAttack(model, enabled_attack=(5,), order_schedule='fixed', inner_iter_num=20, dataset='DATASET_NAME')" \
        "AutoLinfAttack(model, 'DATASET_NAME')"

Generalized Adversarial Training (GAT)
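Conceptually, GAT plugs CAA-generated examples into a standard adversarial-training objective. Below is a schematic sketch of one training step with Madry's loss (a simplification for intuition, not the repository's trainer; model, attack, loader, and optimizer are assumed to be constructed beforehand).

import torch.nn.functional as F

for inputs, labels in loader:
    adv_inputs = attack(inputs, labels)                # composite adversarial examples
    loss = F.cross_entropy(model(adv_inputs), labels)  # Madry-style objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The TRADES variant would instead trade off clean cross-entropy against a KL term between clean and adversarial predictions. The commands below run the full pipeline.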

CIFAR-10

Single Node, Multi-GPU, Single-Process, Multi-Threaded (TRADES Loss)
python train_cifar10.py \
        --batch-size BATCH_SIZE --epochs 150 --arch ARCHITECTURE \
        --checkpoint PATH_TO_MODEL_FOR_RESUMING.pt \
        --mode TRAINING_OBJECTIVE --order random --enable 0,1,2,3,4,5 \
        --model-dir DIR_TO_SAVE_EPOCH/ \
        --log_filename TRAINING_LOG.csv
Distributed, Multi-GPU, Multi-Process (Madry's Loss)
python train_cifar10.py \
        --dist-backend 'nccl' --multiprocessing-distributed \
        --batch-size BATCH_SIZE --epochs 150 --arch ARCHITECTURE \
        --checkpoint PATH_TO_MODEL_FOR_RESUMING.pt \
        --mode TRAINING_OBJECTIVE --order random --enable 0,1,2,3,4,5 \
        --model-dir DIR_TO_SAVE_EPOCH/ \
        --log_filename TRAINING_LOG.csv
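The distributed flags follow the standard PyTorch ImageNet-example conventions: --multiprocessing-distributed spawns one worker process per GPU, and --dist-backend 'nccl' selects NCCL for inter-GPU communication. This reading is inferred from the flag names; consult train_cifar10.py for the authoritative behavior.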

ImageNet

python train_imagenet.py \
        --dist-backend 'nccl' --multiprocessing-distributed \
        --batch-size BATCH_SIZE  --epochs 150 --arch ARCHITECTURE \
        --checkpoint PATH_TO_MODEL_FOR_RESUMING.pt --stat-dict TYPE_OF_CHECKPOINT \
        --mode TRAINING_OBJECTIVE --order random --enable 0,1,2,3,4,5 \
        --model-dir DIR_TO_SAVE_EPOCH \
        --log_filename TRAINING_LOG.csv

SVHN

python train_svhn.py \
        --batch-size BATCH_SIZE --epochs 150 --arch ARCHITECTURE \
        --checkpoint PATH_TO_MODEL_FOR_RESUMING.pt \
        --mode TRAINING_OBJECTIVE --order random --enable 0,1,2,3,4,5 \
        --model-dir DIR_TO_SAVE_EPOCH/ \
        --log_filename TRAINING_LOG.csv

Join Leaderboard

We maintain leaderboards to track progress on compositional adversarial robustness. Specifically, we focus on the "white-box" scenario, in which the attacker has full knowledge of the model. We provide entries similar to those in the RobustBench leaderboard, and we hereby solicit model submissions to compete against composite perturbations in our leaderboard. If you would like to submit your model, please follow the instructions in CARBEN-Leaderboard.ipynb to evaluate it. Once the robustness assessment is complete, please fill in the Google Form, and we will update the leaderboard after confirmation.

Citations

If you find this helpful for your research, please cite our papers as follows:

@inproceedings{hsiung2023caa,
  title={{Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations}}, 
  author={Lei Hsiung and Yun-Yun Tsai and Pin-Yu Chen and Tsung-Yi Ho},
  booktitle={{IEEE/CVF} Conference on Computer Vision and Pattern Recognition, {CVPR}},
  publisher={{IEEE}},
  year={2023},
  month={June}
}

@inproceedings{hsiung2022carben,
  title={{CARBEN: Composite Adversarial Robustness Benchmark}},
  author={Lei Hsiung and Yun-Yun Tsai and Pin-Yu Chen and Tsung-Yi Ho},
  booktitle={Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, {IJCAI-22}},
  publisher={International Joint Conferences on Artificial Intelligence Organization},
  year={2022},
  month={July}
}
