Gating Signal before Convolution #31

Open
ghost opened this issue Aug 29, 2022 · 0 comments
ghost commented Aug 29, 2022

Hey,

I was working through the paper and the code together.
In Figure 2 of the paper, at each level the output of the convolutional block is passed to the next up-convolution and is also used to form the gating signal.
In the code this is consistent for the first up-convolution (it uses center, the output of the convolutional block). For the subsequent levels of the expanding path, however, no 3x3 convolutions are applied before forming the input to the next level and the gating signal; instead, the previous concatenation of "attn*" and "up*" is used directly, as I try to sketch below.
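To make sure I'm describing the same thing, here is a simplified, self-contained sketch of the wiring as I understand it. `conv_block`, `gating_signal`, and `attention_gate` are stand-ins I wrote myself for UnetConv2D and the repo's gating/attention helpers, so the names, signatures, and filter counts are mine, not the repo's:

```python
from tensorflow.keras import layers

def conv_block(x, filters):
    """Two 3x3 conv + BN + ReLU layers -- a simplified stand-in for UnetConv2D."""
    for _ in range(2):
        x = layers.Conv2D(filters, (3, 3), padding='same')(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation('relu')(x)
    return x

def gating_signal(x, filters):
    """1x1 conv + BN + ReLU -- a simplified stand-in for the repo's gating-signal helper."""
    x = layers.Conv2D(filters, (1, 1), padding='same')(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation('relu')(x)

def attention_gate(skip, gating, inter_filters):
    """Simplified additive attention gate: the gating signal sits at half the
    skip connection's resolution, so the skip is strided down before adding."""
    theta_x = layers.Conv2D(inter_filters, (2, 2), strides=(2, 2), padding='same')(skip)
    phi_g = layers.Conv2D(inter_filters, (1, 1), padding='same')(gating)
    f = layers.Activation('relu')(layers.add([theta_x, phi_g]))
    psi = layers.Conv2D(1, (1, 1), padding='same', activation='sigmoid')(f)
    psi_up = layers.UpSampling2D((2, 2), interpolation='bilinear')(psi)
    return layers.multiply([skip, psi_up])  # single-channel mask broadcast over channels

# Abridged contracting path, just enough to wire up two decoder levels.
inputs = layers.Input((128, 128, 1))
conv3 = conv_block(inputs, 64)        # skip connection, 128x128
pool3 = layers.MaxPooling2D((2, 2))(conv3)
conv4 = conv_block(pool3, 128)        # skip connection, 64x64
pool4 = layers.MaxPooling2D((2, 2))(conv4)
center = conv_block(pool4, 256)       # bottleneck conv block, 32x32

# Level 1: the gating signal comes from `center`, the output of a conv block,
# which matches Figure 2.
g1 = gating_signal(center, 128)
attn1 = attention_gate(conv4, g1, 128)
up1 = layers.concatenate(
    [layers.Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(center), attn1])

# Level 2, as I read the code: the gating signal and the next up-convolution
# both take `up1` (the concatenation of attn1 and the upsampled features)
# directly -- no 3x3 conv block is applied in between.
g2 = gating_signal(up1, 64)
attn2 = attention_gate(conv3, g2, 64)
up2 = layers.concatenate(
    [layers.Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(up1), attn2])
```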

But for the output at each level, convolutional blocks are applied (not just a single 3x3 convolution):
```python
conv6 = UnetConv2D(up1, 256, is_batchnorm=True, name='conv6')
conv7 = UnetConv2D(up2, 128, is_batchnorm=True, name='conv7')
conv8 = UnetConv2D(up3, 64, is_batchnorm=True, name='conv8')
conv9 = UnetConv2D(up4, 32, is_batchnorm=True, name='conv9')
```

So the implementation does not seem consistent with the figure to me. I'm totally new to attention networks, so I would be very glad for any help in understanding this architecture.
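For comparison, this is how I would have expected level 2 to look if Figure 2 were followed literally, reusing the helpers and tensors from my sketch above (again just an illustration of my reading, not a claim about what the authors intended):

```python
# Figure-2-literal wiring of level 2: apply the conv block first, then derive
# both the gating signal and the next up-convolution from its output.
conv6 = conv_block(up1, 256)          # cf. conv6 = UnetConv2D(up1, 256, ...) above
g2_alt = gating_signal(conv6, 64)
attn2_alt = attention_gate(conv3, g2_alt, 64)
up2_alt = layers.concatenate(
    [layers.Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(conv6), attn2_alt])
```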

Thanks :)
