Hi,

Thanks for your code — it helps a lot. While trying to reproduce your results on the Amazon review datasets, I found that the BERT-AAD accuracies are worse than the no-adapt results. Have you encountered the same issue? A typical log line: Epoch [79/80] Step [195/200]: acc=0.5000 g_loss=0.6922 d_loss=0.6932 kd_loss=0.0000. I also noticed that when the adapt training converges, only the kd_loss descends to 0; g_loss and d_loss don't descend at all. Is this normal, or could this be where the problem is? Alternatively, could you please release the hyperparameters you used for BERT-AAD?
Thanks,
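As a side note on the numbers in the log above: 0.692–0.693 is almost exactly ln 2, which is the binary cross-entropy value when the discriminator outputs 0.5 for every example, i.e. it is at chance and neither player is learning. A minimal plain-Python sketch (not the repo's code) of that fixed point:

```python
import math

def bce(p, y):
    # binary cross-entropy for one prediction p in (0, 1) against label y
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# If the discriminator outputs 0.5 for every feature (it cannot tell
# source from target), both its loss and the generator's loss sit at ln 2:
d_loss = 0.5 * (bce(0.5, 1.0) + bce(0.5, 0.0))  # source labeled 1, target 0
g_loss = bce(0.5, 1.0)  # encoder tries to get target features labeled 1
print(round(d_loss, 4), round(g_loss, 4))  # 0.6931 0.6931
```

So a flat g_loss/d_loss near 0.693 with acc=0.5 means the adversarial game is stuck at equilibrium from the start, which is consistent with only the kd_loss decreasing.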