Literature collection on adversarial examples, continued from this collection.
2022
2021
Journal
- https://paperswithcode.com/paper/adversarial-attacks-and-defenses-on-graphs-a-
- Countering Malicious DeepFakes: Survey, Battleground, and Horizon
2020
2018
- Adversarial Examples: Attacks and Defenses for Deep Learning, github
- Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
2017
2021
Preprint
Membership Inference Attacks From First Principles
arXiv
ASK: Adversarial Soft k-Nearest Neighbor Attack and Defense + code
2017
2016
2021
ICML
A General Framework For Detecting Anomalous Inputs to DNN Classifiers + code + talk + slides
ICME
DefakeHop: A Light-Weight High-Performance Deepfake Detector
Detecting Adversarial Examples with Bayesian Neural Network
CVPR
LiBRe: A Practical Bayesian Approach to Adversarial Detection + code
Thesis
Defense Methods for Convolutional Neural Networks Against Adversarial Attacks
2020
2019
ICML
The Odds are Odd
ICLR
ADV-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network
2018
NeurIPS
Trust Score + code
2017
ICLR
On Detecting Adversarial Perturbations
arXiv
Detecting Adversarial Samples from Artifacts + code + original code
2019
CVPR
Detection based Defense against Adversarial Examples from the Steganalysis Point of View
NeurIPS
Adversarial Examples Are Not Bugs, They Are Features
2021
NeurIPS
Do Wider Neural Networks Really Help Adversarial Robustness?
ICML
Meta Adversarial Training against Universal Patches
ICML
Adversarially Trained Neural Policies in the Fourier Domain
2019
2022
Preprint
DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows
Preprint
Adversarial Examples on Segmentation Models Can be Easy to Transfer
2021
2018
2022
Preprint
Evaluation of Neural Networks defenses and attacks using NDCG and reciprocal rank metrics
Preprint
Performance Evaluation of Adversarial Attacks: Discrepancies and Solutions
Preprint
Gradients without Backpropagation
2020
2021
CVPR
Natural Adversarial Examples
- https://simons.berkeley.edu/sites/default/files/docs/11887/nn-simons-part2.pdf
- FFT slides
Preprint
Closer Look at the Transferability of Adversarial Examples: How They Fool Different Models Differently
2019