
Hi maybe some bug in self-attention block #2

Open

Johnson-yue opened this issue Oct 11, 2019 · 2 comments

Comments

@Johnson-yue

Hi, does your code actually get good performance with this self-attention block?

I think your implementation of self-attention is wrong: the block should have four convolutions (f, g, h, and v), but you don't have v.

Also, according to the official TensorFlow code, you are missing the max_pool applied after g(). A sketch of what I mean is below.
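
For reference, here is a minimal PyTorch sketch of the v2-style block as I read the official TensorFlow code (brain-research/self-attention-gan). The class name `SelfAttentionV2` is mine, the channel splits (C/8, C/8, C/2) follow that repo, and spectral norm is omitted for brevity; this is an illustration, not your code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentionV2(nn.Module):
    """Sketch of the v2-style SAGAN block: four 1x1 convolutions
    (f, g, h, v) plus 2x2 max pooling on the g and h branches.
    Channel splits (C/8, C/8, C/2) follow the official TensorFlow
    repo; the class name is my own, and spectral norm is omitted."""

    def __init__(self, in_channels):
        super().__init__()
        self.f = nn.Conv2d(in_channels, in_channels // 8, 1)  # query
        self.g = nn.Conv2d(in_channels, in_channels // 8, 1)  # key
        self.h = nn.Conv2d(in_channels, in_channels // 2, 1)  # value
        self.v = nn.Conv2d(in_channels // 2, in_channels, 1)  # output projection
        self.gamma = nn.Parameter(torch.zeros(1))             # learned residual weight

    def forward(self, x):
        b, c, height, width = x.size()
        n = height * width  # assumes height and width are even
        f = self.f(x).view(b, c // 8, n)                           # B x C/8 x N
        g = F.max_pool2d(self.g(x), 2).view(b, c // 8, n // 4)     # B x C/8 x N/4
        h = F.max_pool2d(self.h(x), 2).view(b, c // 2, n // 4)     # B x C/2 x N/4
        attn = F.softmax(torch.bmm(f.transpose(1, 2), g), dim=-1)  # B x N x N/4
        o = torch.bmm(h, attn.transpose(1, 2)).view(b, c // 2, height, width)
        return x + self.gamma * self.v(o)
```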

@4m4n5
Owner

4m4n5 commented Oct 15, 2019

Hey. You might be right; I didn't really look into it that much because the results were quite good regardless. I wrote the code when the SAGAN paper was pretty new. If you can spare some time to submit a PR, I'd totally merge it.

@Johnson-yue
Author

Hi, I checked the SAGAN paper again, and I think your implementation follows the v1 code for the v1 paper. I read the v2 paper and code, and they are different.

Also, did you get good performance? I can't reproduce it with my implementation on the CIFAR-10 dataset. For comparison with the v2 sketch above, the v1-style block looks like the sketch below.
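
This is the v1-style block as I understand it from the v1 paper and common PyTorch ports: only three 1x1 convolutions (f, g, h), the value branch keeps the full C channels, and there is no output projection or max pooling. The class name `SelfAttentionV1` is mine; this is a sketch for comparison only, not your code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentionV1(nn.Module):
    """Sketch of the v1-style block: three 1x1 convolutions (f, g, h),
    a full-C value branch, no v projection and no max pooling. The
    layout follows common PyTorch ports of the v1 paper; the class
    name is my own."""

    def __init__(self, in_channels):
        super().__init__()
        self.f = nn.Conv2d(in_channels, in_channels // 8, 1)  # query
        self.g = nn.Conv2d(in_channels, in_channels // 8, 1)  # key
        self.h = nn.Conv2d(in_channels, in_channels, 1)       # value, full C channels
        self.gamma = nn.Parameter(torch.zeros(1))             # learned residual weight

    def forward(self, x):
        b, c, height, width = x.size()
        n = height * width
        f = self.f(x).view(b, c // 8, n)                           # B x C/8 x N
        g = self.g(x).view(b, c // 8, n)                           # B x C/8 x N
        h = self.h(x).view(b, c, n)                                # B x C x N
        attn = F.softmax(torch.bmm(f.transpose(1, 2), g), dim=-1)  # B x N x N
        o = torch.bmm(h, attn.transpose(1, 2)).view(b, c, height, width)
        return x + self.gamma * o
```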
