
Replicate results seen in arXiv paper #5

Open
Hyperparticle opened this issue Mar 31, 2018 · 10 comments
Labels
help wanted Extra attention is needed

Comments

@Hyperparticle
Owner

There are some preliminary results of the one-pixel attack performed on CIFAR-10 in the repo, but they are not quite as comprehensive as those in https://arxiv.org/abs/1710.08864. It would be nice to not only replicate the experiments but also match (or surpass) their metrics.

Hyperparticle added the help wanted label Mar 31, 2018
@Carina02

Carina02 commented Jan 29, 2019

Hi there,
I am an author of this paper. Thank you very much for the nice code and tutorial. We have recently shared our code for validation. If you are still interested in this problem, you can visit here for the code, and please let me know if you run into any problems. Cheers.

@Hyperparticle
Owner Author

@Carina02 Thanks for letting me know that you now have an official repository. I'll update my README so that others may find it.

@Carina02

Carina02 commented Feb 6, 2019

@Hyperparticle Sorry for the late reply, and thanks for the redirection. I have recently been trying to figure out why our attack-rate results are so different from yours, especially on pure_cnn. I have roughly gone through your code and adapted it to our settings (e.g., I got roughly a ~60% non-targeted attack rate on Net-In-Net), but I still have no idea about the pure_cnn results. I wonder if you have any thoughts on why this can happen (e.g., any important factors I did not notice that could affect the attack accuracy so much).

@Hyperparticle
Owner Author

I was also puzzled by the discrepancy, but I haven't found an answer yet.

There could be a range of factors, but the most likely culprit is how the differential evolution is implemented. Another possibility is a discrepancy between my implementation of the pixel perturbation and yours. A third factor could be how the Keras models are implemented.

I haven't done an extensive search yet, but you're more than welcome to investigate.
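For anyone comparing implementations of the first two factors: the paper's search is a standard DE/rand/1/bin differential evolution where each candidate encodes one pixel as (x, y, r, g, b). The sketch below is a hypothetical minimal stand-in (pure Python, toy image representation as nested `[r, g, b]` lists), not code from either repo; real implementations typically use `scipy.optimize.differential_evolution` and NumPy arrays.

```python
import random

def perturb(image, candidate):
    """Apply a one-pixel perturbation (x, y, r, g, b) to a copy of `image`.
    Coordinates and channel values are clipped to valid ranges; the original
    image is left untouched."""
    img = [[list(px) for px in row] for row in image]
    h, w = len(img), len(img[0])
    x = min(max(int(candidate[0]), 0), w - 1)
    y = min(max(int(candidate[1]), 0), h - 1)
    img[y][x] = [min(max(int(v), 0), 255) for v in candidate[2:5]]
    return img

def differential_evolution(fitness, bounds, pop_size=10, iters=50, F=0.5, CR=0.9):
    """Minimal DE/rand/1/bin minimizer. For the attack, `fitness` would be the
    model's confidence in the true class after applying `perturb`."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [fitness(ind) for ind in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: combine three distinct candidates other than the target.
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            jrand = random.randrange(dim)  # at least one dimension from the mutant
            trial = []
            for d in range(dim):
                if random.random() < CR or d == jrand:
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])
                else:
                    v = pop[i][d]
                lo, hi = bounds[d]
                trial.append(min(max(v, lo), hi))
            # Greedy selection: keep the trial if it is no worse.
            s = fitness(trial)
            if s <= scores[i]:
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]
```

Subtle details that differ between implementations, and hence could shift attack rates, include the clipping of coordinates, integer rounding of pixel values, the selection rule (`<=` vs `<`), and the population initialization.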

@Carina02

Carina02 commented Feb 6, 2019

Thanks for the reply. Another possible factor I can think of is that I use OpenCV to read and write real image files during iterations, while your code works directly with the image data in memory. I found that OpenCV can slightly change the image after reading and writing the same image. This could make the images more "vulnerable".

@Hyperparticle
Owner Author

Hyperparticle commented Feb 6, 2019

@Carina02 If you have time, you could test this hypothesis. I think it would be simple to save and reload each image with OpenCV during preprocessing. Did you test this in your own repo? If it does influence the attack, I think it should be reported in your paper.

@Carina02

Carina02 commented Feb 8, 2019

@Hyperparticle Sure, I will mention this in a future version of the arXiv paper. It seems others have run into similar problems, such as:
https://stackoverflow.com/questions/13704667/opencv-imwrite-increases-the-size-of-png-image?rq=1
https://stackoverflow.com/questions/12216333/opencv-imread-imwrite-increases-the-size-of-png

My rough observation is that OpenCV either does not change the pixel values at all or changes them only very slightly, but the file size always increases a little after imwrite. I will get back to you if I have a new discovery.
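One way to make that observation precise is to compare pixel values before and after a write/read round-trip and report the maximum absolute per-channel difference. The helper below is a hypothetical sketch (toy nested-list images, codec passed in as functions); in the setting above, `write`/`read` would wrap `cv2.imwrite` and `cv2.imread`. For a genuinely lossless codec such as PNG the drift should be 0 even when the file size changes, since size reflects compression settings rather than pixel content.

```python
def max_pixel_diff(before, after):
    """Maximum absolute per-channel difference between two images,
    represented as nested lists of [r, g, b] pixels. 0 means the
    round-trip preserved every pixel exactly."""
    return max(
        abs(vb - va)
        for row_b, row_a in zip(before, after)
        for px_b, px_a in zip(row_b, row_a)
        for vb, va in zip(px_b, px_a)
    )

def roundtrip_check(image, write, read):
    """Run `image` through a write/read codec pair and report the drift."""
    restored = read(write(image))
    return max_pixel_diff(image, restored)
```

Any nonzero drift before the attack loop would mean the two pipelines are attacking slightly different images, which could plausibly account for part of the attack-rate gap.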

@Carina02

Carina02 commented May 6, 2019

@Hyperparticle
Hi,
I think we've figured it out. The OpenCV problem only affects the ImageNet results, which we have fixed along with some other refinements. Our CIFAR-10 results are fine, but they are based on the Kaggle CIFAR-10 dataset (https://www.kaggle.com/c/cifar-10), not the original one, hence the discrepancy between our results. We have updated the paper (https://arxiv.org/abs/1710.08864) to add these details.

@Hyperparticle
Owner Author

@Carina02 Thanks for the update! Glad you were able to find the cause of the discrepancy.

@YYYZXC

YYYZXC commented Aug 11, 2024

What environment did you use?

3 participants