
Seeking help with visualizing the CAM of Attention U-Net #16

Open
shanhuhaifeng opened this issue Feb 4, 2020 · 2 comments

@shanhuhaifeng

Hope you can see this.
I am really excited to have the chance to learn Attention U-Net from your work and code. I want to learn how to visualize the feature maps with CAM (class activation map). Is the 'attention coefficient' tensor the one named 'q_att_bn_1'?

@nabsabraham
Owner

Hi, thanks for the interest in our work. Since this is segmentation, the "CAM" can be thought of as the feature-map activations at any layer. Since gradCAM and CAM take the last layer before things like batchnorm or dropout, I would suggest taking q_attn_conv as the CAM layer. But you could also take the output of something like attn1 in the model and pool those features along the channel dimension. For example, the output of attn1 would be something like 128 * height * width; you could average over the 128 channels to get a map that is just height * width, and that could be your CAM. Hope this helps! Oktay et al. is the paper that originally proposed these attention gates, and they might have a way to generate them in their code repo.
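As a minimal sketch of the channel-averaging idea above (not from the repo itself; the shape and the `cam_from_activations` name are assumptions, and in practice the `features` array would come from an intermediate Keras layer such as attn1 or q_attn_conv, e.g. via a `Model(inputs, layer.output)` call):

```python
import numpy as np

def cam_from_activations(features, channel_axis=-1):
    """Collapse a feature-map tensor into a 2-D CAM-style heatmap by
    averaging over the channel dimension, then normalizing to [0, 1]."""
    cam = features.mean(axis=channel_axis)
    cam -= cam.min()
    peak = cam.max()
    if peak > 0:
        cam /= peak
    return cam

# Stand-in for a (height, width, 128) channels-last activation tensor
features = np.random.rand(96, 96, 128).astype(np.float32)
cam = cam_from_activations(features)  # shape (96, 96), values in [0, 1]
```

The resulting map can then be resized to the input resolution and overlaid on the image for visualization.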

@shanhuhaifeng
Author

Hi, I was happy and surprised to receive your reply so soon. Thank you so much for the detailed explanation of my problem; your suggestion is useful for me and for other learners with similar questions. I would also suggest that other learners read the paper 'A Novel Focal Tversky Loss Function with Improved Attention U-Net for Lesion Segmentation'.
