This repository has been archived by the owner on Nov 29, 2023. It is now read-only.
Since normal convolutional and pooling layers ignore the Nyquist sampling theorem, they can be very sensitive to slight shifts in the input. This adds an extra anti-aliasing layer that fixes that. In previous work I've done, it gave an improvement of a few percent on the task, so it might help here too.
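To make the idea concrete, here is a minimal 1-D sketch of the anti-aliasing trick (as in adobe/antialiased-cnns' "BlurPool"): low-pass filter with a small binomial kernel before subsampling, so a one-sample shift of the input changes the output far less than naive strided pooling does. The function names here are illustrative, not the library's actual API.

```python
def blur_downsample(x, stride=2):
    """Blur a 1-D signal with a [1, 2, 1]/4 binomial kernel, then subsample."""
    kernel = [0.25, 0.5, 0.25]
    # reflect-pad by one sample on each side so the blur is defined at borders
    padded = [x[1]] + list(x) + [x[-2]]
    blurred = [
        sum(k * padded[i + j] for j, k in enumerate(kernel))
        for i in range(len(x))
    ]
    return blurred[::stride]

def naive_downsample(x, stride=2):
    """Plain strided subsampling, which aliases high-frequency content."""
    return list(x)[::stride]

# A high-frequency alternating signal: naive striding keeps only one phase,
# so a one-sample shift flips its output completely; blurring first removes
# the frequency that cannot be represented after downsampling.
x = [1.0, -1.0] * 8
shifted = x[1:] + [x[0]]
print(naive_downsample(x))        # all 1.0
print(naive_downsample(shifted))  # all -1.0
print(blur_downsample(x))         # all 0.0, and unchanged under the shift
```

The alternating signal is exactly the Nyquist-violating case: after the blur, both the original and its shifted copy map to the same (zero) output, which is the shift-robustness the comment above is after.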
I am curious about this; please let us know once you've tried it. I have found that CoordConvs consistently help get better results; they act as positional embeddings.
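For reference, the CoordConv idea mentioned here (Liu et al., "An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution") just appends two extra channels holding each pixel's normalized (x, y) coordinates before the convolution, so the layer can condition on position. A pure-Python sketch on a channels-first H x W feature map; the function name is ours, not from any library:

```python
def add_coord_channels(feature_map):
    """feature_map: list of channels, each an H x W list of lists.
    Returns the same channels plus x- and y-coordinate channels in [-1, 1]."""
    h = len(feature_map[0])
    w = len(feature_map[0][0])
    # normalized coordinates in [-1, 1] (0.0 when a dimension has size 1)
    xs = [[2 * j / (w - 1) - 1 if w > 1 else 0.0 for j in range(w)]
          for _ in range(h)]
    ys = [[2 * i / (h - 1) - 1 if h > 1 else 0.0 for _ in range(w)]
          for i in range(h)]
    return list(feature_map) + [xs, ys]

# One 2x3 input channel becomes three channels: data, x-coords, y-coords.
fm = [[[5.0, 5.0, 5.0],
       [5.0, 5.0, 5.0]]]
out = add_coord_channels(fm)
print(len(out))   # 3
print(out[1][0])  # [-1.0, 0.0, 1.0]
print(out[2])     # [[-1.0, -1.0, -1.0], [1.0, 1.0, 1.0]]
```

In a real network this would run on tensors and be followed by an ordinary convolution over the augmented channel stack; the coordinate channels are what make it behave like a positional embedding.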
Oh, interesting! Yeah, I'll update this issue once I've tried it, to see whether it makes a difference. I've still been working out a few kinks in the data pipeline, but it seems good to go now. I'll also benchmark models with your approach, to see how all three compare.
https://github.com/adobe/antialiased-cnns