High loss when using bayesian lstm instead of standard lstm #100
This is how the loss looks within the sample_elbo method for multiple samples over many epochs: PERFORMANCE LOSS: 1.3859792947769165
Having the same issue here...
Hello, did you ever solve it? I had the same problem.
I have the same problem. Also, how do you output the uncertainty of the prediction result?
Hey, I'm sorry for the delay. I will try to take a look at it this week. Reducing the weight of the KL divergence term in the loss could help. @0wenwu, to output the uncertainty, you do multiple forward passes and check the variance; you can assume the predictions are normally distributed.
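The multiple-forward-pass idea above can be sketched in plain PyTorch. The `NoisyRegressor` below is a hypothetical stand-in for a Bayesian model (it is not the blitz API): its weights are resampled on every forward pass, so repeated predictions differ and their spread estimates the uncertainty.

```python
import torch
import torch.nn as nn

class NoisyRegressor(nn.Module):
    """Toy stand-in for a Bayesian layer: weights are sampled from a
    learned Gaussian on every forward pass (hypothetical example)."""
    def __init__(self):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(4, 1))          # weight means
        self.rho = nn.Parameter(torch.full((4, 1), -3.0))  # pre-softplus stds

    def forward(self, x):
        sigma = torch.log1p(torch.exp(self.rho))             # softplus -> positive std
        w = self.mu + sigma * torch.randn_like(self.mu)      # reparameterized sample
        return x @ w

torch.manual_seed(0)
model = NoisyRegressor()
x = torch.randn(8, 4)

# Monte Carlo estimate of predictive mean and uncertainty:
preds = torch.stack([model(x) for _ in range(100)])  # (samples, batch, 1)
mean, std = preds.mean(dim=0), preds.std(dim=0)      # per-input mean and spread
print(mean.shape, std.shape)
```

With a blitz model the loop is the same: call the model repeatedly on the same input and take the mean and standard deviation across the stacked outputs.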
Hey, @piEsposito |
I am trying to implement a model using the Bayesian LSTM layer. I already have a model that relies on a standard LSTM, and it gets good results on a classification task.
When I use the Bayesian layer, the loss becomes very high and the accuracy doesn't converge much. I tried changing the model's hyperparameters (especially the prior variables and posterior_rho), but it didn't help much. I also set sharpen=True for loss sharpening, but nothing changed.
The model:
In the training loop I have
What's the problem here?
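One common cause of a very high ELBO loss is that the KL (complexity cost) term dwarfs the data-fit term early in training. The weighting fix suggested in this thread can be sketched framework-agnostically; the function name, the `kl_weight` value, and the example numbers below are illustrative, not part of the blitz API.

```python
import torch

def weighted_elbo(nll, kl, kl_weight=1e-3):
    """Combine the data-fit loss with a down-weighted KL term.

    The KL divergence of a Bayesian network is often orders of
    magnitude larger than the likelihood loss, so scaling it down
    (kl_weight is an arbitrary starting point) can stabilize training.
    """
    return nll + kl_weight * kl

nll = torch.tensor(0.7)    # e.g. cross-entropy on a batch (illustrative)
kl = torch.tensor(500.0)   # KL terms can be orders of magnitude larger
loss = weighted_elbo(nll, kl)
print(loss)
```

In blitz specifically, tuning the weight passed to `sample_elbo` for the complexity cost (and the prior/posterior hyperparameters) plays the same role as `kl_weight` here.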