This repository is the official code for "Passive Attention in Artificial Neural Networks Predicts Human Visual Selectivity".
To install requirements:

```bash
pip install -r requirements.txt
```
The images used in our paper are under `images/`. These images are a subset of this open source dataset, provided under the CC BY 4.0 license.
Maps from the human experiments are under `humans-maps/`. `human-maps-read.ipynb` gives an example of reading in and visualizing these maps.
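For a quick look outside the notebook, here is a minimal sketch of loading and visualizing one map, assuming the maps are stored as 2-D numpy arrays (the filename is hypothetical; see the notebook for the actual format):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical filename; see human-maps-read.ipynb for the actual file layout.
attn_map = np.load("humans-maps/example_map.npy")

plt.imshow(attn_map, cmap="viridis")
plt.colorbar(label="human selection intensity")
plt.axis("off")
plt.show()
```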
Here are reference commands for acquiring class confidences for each model, assuming that input images are located under `images/` and have the directory structure specified here (note that, since we do not use "ground truth" classes, you can simply place all input images in a single folder, e.g. `images/all/`).
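For instance, a minimal layout would look like this (the filenames are illustrative):

```
images/
└── all/
    ├── img_0001.png
    ├── img_0002.png
    └── ...
```

The commands below assume this layout.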
```bash
python3 baseline_cnns/get_cifar_confidences.py --arch alexnet --data images --resume /path/to/pretrained/alexnet/model_best.pth.tar
python3 baseline_cnns/get_cifar_confidences.py --arch vgg19_bn --data images --resume /path/to/pretrained/vgg19_bn/model_best.pth.tar
python3 baseline_cnns/get_cifar_confidences.py --arch resnet --data images --resume /path/to/pretrained/resnet-110/model_best.pth.tar
python3 baseline_cnns/get_imagenet_confidences.py --arch alexnet --data images
python3 baseline_cnns/get_imagenet_confidences.py --arch vgg16_bn --data images
python3 baseline_cnns/get_imagenet_confidences.py --arch resnet101 --data images
python3 baseline_cnns/get_imagenet_confidences.py --arch efficientnet --data images
python3 baseline_cnns/get_imagenet_confidences.py --arch vit --data images
python3 baseline_cnns/get_places_confidences.py --arch alexnet --data images --resume /path/to/pretrained/alexnet/model_best.pth.tar
python3 baseline_cnns/get_places_confidences.py --arch resnet50 --data images --resume /path/to/pretrained/resnet50/model_best.pth.tar
python3 attention-branch-network/get_cifar_confidences.py --arch resnet --data images --model /path/to/pretrained-cifar100-resnet110/model_best.pth.tar
python3 attention-branch-network/get_cifar_confidences.py --arch densenet --data images --model /path/to/pretrained-cifar100-densenet/model_best.pth.tar --depth 100
python3 attention-branch-network/get_imagenet_confidences.py --arch resnet101 --data images --model /path/to/pretrained-imagenet2012-resnet101/model_best.pth.tar
python3 learn-to-pay-attention/get_confidences.py --data images --model /path/to/pretrained/pretrained-before/net.pth --normalize_attn
```
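Conceptually, each of these scripts loads a pretrained classifier and records per-class softmax confidences for every input image. Here is a minimal illustrative sketch of that computation, not the scripts' actual code; the torchvision model, preprocessing, and filename below are assumptions:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Illustrative sketch only; the repository scripts handle checkpoint
# loading and output formats themselves.
model = models.alexnet(pretrained=True).eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("images/all/example.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    confidences = torch.softmax(model(img), dim=1)  # shape: (1, 1000)
print(confidences.max(dim=1))  # top class confidence and its index
```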
Here are reference commands for acquiring attention maps:
```bash
python3 baseline_cnns/get_passive_attention_unlabeled_cifar.py -o ../out/cifar_maps --method guidedbp --data images --resume /path/to/pretrained/alexnet/model_best.pth.tar --arch alexnet
```

Available methods are `guidedbp`, `guidedbpximage`, `smoothgradguidedbp`, `gradcam`, and `scorecam`. Available architectures are `alexnet`, `vgg19_bn`, and `resnet`.
```bash
python3 baseline_cnns/get_passive_attention_unlabeled_imagenet.py -o ../out/imagenet_maps --method guidedbp --data images --arch alexnet
```

Available methods are `guidedbp`, `guidedbpximage`, `smoothgradguidedbp`, `gradcam`, `scorecam`, and `cameras`. Available architectures are `alexnet`, `vgg16_bn`, `resnet`, `efficientnet`, and `vit`.
```bash
python3 baseline_cnns/get_passive_attention_unlabeled_places.py -o ../out/places_maps --method guidedbp --data images --arch alexnet --resume /path/to/pretrained/alexnet/model_best.pth.tar
```

Available methods are `guidedbp`, `guidedbpximage`, `smoothgradguidedbp`, `gradcam`, `scorecam`, and `cameras`. Available architectures are `alexnet` and `resnet50`.
```bash
python3 attention-branch-network/get_attention_cifar100.py --data images -o ../out/cifar_maps --model /path/to/pretrained-cifar100-resnet110/model_best.pth.tar --arch resnet --depth 110
python3 attention-branch-network/get_attention_cifar100.py --data images -o ../out/cifar_maps --model /path/to/pretrained-cifar100-densenet/model_best.pth.tar --arch densenet
python3 attention-branch-network/get_attention_imagenet2012.py --data images -o ../out/imagenet_maps --model /path/to/pretrained-imagenet2012-resnet101/model_best.pth.tar --arch resnet101
python3 learn-to-pay-attention/get_attention_heatmaps.py --data images -o ../out/cifar_maps --attn_mode before --model /path/to/pretrained-before/net.pth --normalize_attn
```
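For intuition about what a method such as `gradcam` computes, here is a minimal Grad-CAM sketch on a torchvision model. This illustrates the general technique, not the repository's implementation; the model, target layer, and input are assumptions:

```python
import torch
import torchvision.models as models

# Minimal Grad-CAM sketch: weight the last conv block's activations by the
# spatially averaged gradients of the top class score, then apply ReLU.
model = models.resnet101(pretrained=True).eval()
activations, gradients = {}, {}

layer = model.layer4  # last convolutional block (an assumed choice)
layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
score = model(x).max()           # top class logit
score.backward()

weights = gradients["g"].mean(dim=(2, 3), keepdim=True)    # pooled gradients
cam = torch.relu((weights * activations["a"]).sum(dim=1))  # channel-weighted sum
cam = cam / cam.max()                                      # normalize to [0, 1]
print(cam.shape)  # coarse attention map, e.g. (1, 7, 7)
```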
You can download pretrained models using these links (details are provided in the READMEs under `baseline_cnns/`, `attention-branch-network/`, and `learn-to-pay-attention/`):
- Baseline CIFAR-100 CNNs
- Baseline Places365 CNNs
- Attention Branch Network
- Learn To Pay Attention
`section_6/` includes the ANN recognition experiments corresponding to Section 6 in the paper.