DADVI (paper) pre-samples the Gaussian noise at init and is then fully deterministic in each update, which makes for more stable training. More samples are needed, so it might need gradient-accumulation support (Gradient accumulation #52).
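To illustrate the fixed-draw idea, here is a minimal sketch (not the paper's implementation; the class name and API are made up for illustration): the standard-normal draws are sampled once at construction, so the Monte Carlo ELBO becomes a deterministic function of the variational parameters and can be handed to a deterministic optimiser such as L-BFGS.

```python
import numpy as np

class DADVISketch:
    """Illustrative sketch of the DADVI idea: draw the Gaussian base
    noise once at init, so the ELBO estimate is a deterministic
    function of the variational parameters (mu, log_sigma)."""

    def __init__(self, dim, num_draws, log_joint, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed base draws, sampled once -- never resampled during training.
        self.z = rng.standard_normal((num_draws, dim))
        self.log_joint = log_joint  # unnormalised log p(theta, data)

    def objective(self, mu, log_sigma):
        # Reparameterise with the *fixed* draws: theta = mu + sigma * z.
        theta = mu + np.exp(log_sigma) * self.z
        # Negative Monte Carlo ELBO: -E_q[log p] minus the Gaussian
        # entropy (up to an additive constant).
        elbo = np.mean([self.log_joint(t) for t in theta]) + np.sum(log_sigma)
        return -elbo  # identical output for identical (mu, log_sigma)
```

Because the same inputs always give the same objective value and gradient, there is no sampling noise between steps; the trade-off is that the fixed set of draws must be large enough to approximate the expectation well, which is where gradient accumulation would help.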
Last-layer deterministic VI (paper) provides a handy deterministic objective with linear last layers for regression and classification. It might be worth adding if we can generalise it to exponential-family losses and/or linearise the model.
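For intuition on why a linear last layer admits a deterministic objective, here is a sketch for the Gaussian-regression case (function name and arguments are illustrative, not from the paper): with a Gaussian posterior w ~ N(m, S) over the last-layer weights and features phi(x), the expected squared error has the closed form (y - phi m)^2 + phi^T S phi, so no sampling is needed.

```python
import numpy as np

def deterministic_gaussian_nll(phi, y, m, S, noise_var):
    """Closed-form expected negative log-likelihood for a linear last
    layer with weights w ~ N(m, S) and Gaussian observation noise.
    Uses E_q[(y - phi @ w)^2] = (y - phi @ m)^2 + phi^T S phi.
    phi: penultimate-layer features, shape (n, d)."""
    mean = phi @ m                                 # predictive mean
    var_w = np.einsum("nd,de,ne->n", phi, S, phi)  # phi_i^T S phi_i
    sq_err = (y - mean) ** 2 + var_w               # expected squared error
    return 0.5 * np.mean(sq_err / noise_var + np.log(2 * np.pi * noise_var))
```

Classification has no such closed form in general, which is presumably why generalising to other exponential-family losses (or linearising the model) is the open question here.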
I am adding SNGP, which is also a deterministic, single-forward-pass UQ method. I'm not sure it will work well on pre-trained models, though, i.e. models not necessarily regularised with spectral normalisation.
My understanding is that SNGP (#68) training as in the paper is deterministic because they use a Laplace approximation. I imagine, though, that you could do it with VI, and even with the deterministic VI methods above!