Hey,
I was working through the paper and the code together.
In Figure 2 of the paper, at each level the output of the convolutional block is passed to the next up-convolution and is also used for the gating signal.
In the code this is consistent for the first up-convolution (it uses center, which is the output of the convolutional block). But for the subsequent levels of the expanding path, no 3x3 convolutions are applied to the input of the next level or to the gating signal; only the previously concatenated "attn*" and "up*" tensors are used (see the sketch below the code lines).
Yet for the output at each level, full convolutional blocks are applied (not just a single 3x3 convolution):
conv6 = UnetConv2D(up1, 256, is_batchnorm=True, name='conv6')
conv7 = UnetConv2D(up2, 128, is_batchnorm=True, name='conv7')
conv8 = UnetConv2D(up3, 64, is_batchnorm=True, name='conv8')
conv9 = UnetConv2D(up4, 32, is_batchnorm=True, name='conv9')
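To make the question concrete, here is a rough sketch of the two wirings in the Keras functional API. The helpers gating_signal and attention_block are only illustrative stand-ins (not the repository's actual functions), the filter counts and the UpSampling2D operator are placeholders, and center, conv3 and conv4 stand for the bottleneck/encoder outputs; only UnetConv2D and the tensor names from above are taken from the real code.

from tensorflow.keras.layers import (Conv2D, UpSampling2D, concatenate,
                                     BatchNormalization, Activation, add, multiply)

def gating_signal(x, filters):
    # Illustrative stand-in: 1x1 convolution + BN + ReLU to form the gating tensor.
    g = Conv2D(filters, (1, 1), padding='same')(x)
    g = BatchNormalization()(g)
    return Activation('relu')(g)

def attention_block(skip, gating, inter_filters):
    # Illustrative stand-in for an additive attention gate: project the skip
    # connection and the gating signal to a common resolution, add them, squash
    # to a one-channel mask, upsample the mask and multiply it onto the skip
    # connection (the multiply broadcasts over the channel axis).
    theta = Conv2D(inter_filters, (2, 2), strides=(2, 2), padding='same')(skip)
    phi = Conv2D(inter_filters, (1, 1), padding='same')(gating)
    act = Activation('relu')(add([theta, phi]))
    psi = Conv2D(1, (1, 1), padding='same')(act)
    mask = UpSampling2D(size=(2, 2))(Activation('sigmoid')(psi))
    return multiply([skip, mask])

# First decoder level -- consistent with Figure 2: both the gating signal and
# the input to the up-convolution come from "center", the output of a conv block.
g1 = gating_signal(center, 256)
attn1 = attention_block(conv4, g1, 256)
up1 = concatenate([UpSampling2D(size=(2, 2))(center), attn1])

# Next level as I read the code: the gating signal and the upsampling input
# come directly from the concatenation "up1", with no 3x3 convolutions first.
g2 = gating_signal(up1, 128)
attn2 = attention_block(conv3, g2, 128)
up2 = concatenate([UpSampling2D(size=(2, 2))(up1), attn2])

# What Figure 2 suggests to me instead: run the conv block first and derive both
# the gating signal and the next upsampling input from its output.
conv6 = UnetConv2D(up1, 256, is_batchnorm=True, name='conv6')
g2_fig = gating_signal(conv6, 128)
attn2_fig = attention_block(conv3, g2_fig, 128)
up2_fig = concatenate([UpSampling2D(size=(2, 2))(conv6), attn2_fig])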
So the implementation does not seem consistent with the figure to me. I'm completely new to attention networks, so I would be very glad about any help in understanding this architecture.
Thanks :)