Hey @gokceneraslan,

This is the last time I'll bug you before I build this from scratch. After trying many different dropout rates, learning rates, and weight regularizations, adding batch_shuffle to the queue, multiplying the reconstruction loss by different values (like the Theano version does), adding a reduce_mean to the softmax_cross_entropy function, and trying different variances for the weight initialization, I have not been able to get the model above 0.06 accuracy, even though the loss seems to have converged.
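For reference, this is roughly the loss setup I ended up with. It's only a sketch against the TF 1.x graph API (which the queue-based input suggests), and names like `num_classes`, `recon_weight`, and the placeholder shapes are stand-ins rather than anything from the actual repo:

```python
import tensorflow as tf  # TF 1.x graph API assumed

num_classes = 26                                             # assumed; use the real label count
logits = tf.placeholder(tf.float32, [None, num_classes])     # model outputs before softmax
labels = tf.placeholder(tf.float32, [None, num_classes])     # one-hot targets
recon_error = tf.placeholder(tf.float32, [None])             # per-example reconstruction error
recon_weight = 0.1                                           # illustrative scale only

# Mean (not sum) over the batch for the classification term,
# plus the scaled reconstruction term.
ce = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
total_loss = tf.reduce_mean(ce) + recon_weight * tf.reduce_mean(recon_error)
```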
When I print out the correct labels and the predicted labels, the model usually ends up predicting the same class for every element in the batch. By around step 40-70 it starts predicting a single class across the whole batch.
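This is the kind of check I've been running to see the collapse. It's just a sketch with fake data; `probs` and `y_true` stand in for one batch of softmax outputs and integer labels pulled out of a session run:

```python
import numpy as np

num_classes = 26                                    # assumed
probs = np.random.rand(128, num_classes)            # fake softmax outputs for one batch
y_true = np.random.randint(0, num_classes, 128)     # fake integer labels

y_pred = probs.argmax(axis=1)
print("predicted class counts:", np.bincount(y_pred, minlength=num_classes))
print("batch accuracy:", (y_pred == y_true).mean())
# If bincount shows a single nonzero entry, the model has collapsed to one class.
```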
Have you been able to get the accuracy up? Any suggestions?
Thanks again
Yeah, the Theano implementation uses uniform initializers. It also normalizes the SNP data, which I don't think your code does. I'm going to try adding those two things, adding batch norm after the first layer of the aux nets, and removing the last layer of the aux net to keep the starting point simple.
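Something like this is what I have in mind for the first two changes. It's a rough sketch, assuming the same TF 1.x style as the rest of the code; the shapes, the uniform bounds, and the `snps` array are placeholders, not taken from either implementation:

```python
import numpy as np
import tensorflow as tf  # TF 1.x graph API assumed

# Uniform initializer instead of a normal one for the weight matrices.
init = tf.random_uniform_initializer(-0.05, 0.05)   # bounds are illustrative
w = tf.get_variable("w", shape=[1000, 100], initializer=init)

# Per-variant normalization of the SNP matrix before feeding it to the network.
# `snps` is a hypothetical (samples x variants) array of 0/1/2 genotype calls.
snps = np.random.randint(0, 3, size=(500, 1000)).astype(np.float32)
snps_norm = (snps - snps.mean(axis=0)) / (snps.std(axis=0) + 1e-8)
```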