Hope you can see this.
I am really excited to have the chance to learn the attention U-Net from your work and code. I want to learn how to visualise the feature maps with CAM (class activation map). Is the 'attention coefficient' tensor the one named 'q_att_bn_1'?
Hi, thanks for your interest in our work. Since this is segmentation, the "CAM" can be thought of as the feature-map activations at any layer. In Grad-CAM and CAM, the last layer before things like batch norm or dropout is typically used, so I would suggest taking q_attn_conv as the CAM layer. But you could also take the output of something like attn1 in the model and pool those features along the feature dimension. For example, the output of attn1 would be something like 128 × height × width; averaging over the 128 channels gives a map that is just height × width, and that could serve as your CAM. Hope this helps! Oktay et al. is the paper that originally proposed these attention gates, and they may have a way to generate such maps in their code repo.
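For illustration, here is a minimal sketch of the channel-averaging approach described above, assuming a Keras model. The names `model`, `image`, and the layer name `'attn1'` are taken from the discussion and are placeholders; check `model.summary()` for the actual layer names in your build.

```python
import numpy as np
from tensorflow.keras.models import Model

# Assumes `model` is the trained attention U-Net and `image` is a single
# preprocessed input of shape (height, width, channels).
# 'attn1' is an assumed layer name; replace it with the layer you want to
# inspect (e.g. q_attn_conv).
attn_layer = model.get_layer('attn1')
cam_model = Model(inputs=model.input, outputs=attn_layer.output)

# Forward pass: output has shape (1, height, width, n_channels),
# e.g. 128 channels for the attn1 example above.
features = cam_model.predict(image[np.newaxis, ...])

# Average over the channel dimension to obtain a height x width map.
cam = features[0].mean(axis=-1)

# Normalise to [0, 1] so the map can be overlaid on the input as a heatmap.
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```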
Hi, I was happy and surprised to receive your reply so soon. Thank you for the detailed explanation of my problem; your suggestion is useful for me and for other learners with similar questions. I would also suggest that other learners read the paper 'A Novel Focal Tversky Loss Function with Improved Attention U-Net for Lesion Segmentation'.