Thanks for sharing the code.

I have a question, if you can answer it: during training we do not feed the end-token embedding as input, since the end token has to be predicted by the model in the output.

During validation, however, the end token might be produced anywhere (its position can differ for each sequence within the batch). So suppose the sequence produced by the LSTM is 5 tokens, meaning 5 probability distributions where the last one is for the end token, while the ground truth has 9 tokens. How are we supposed to compute the loss in that case? Please answer my query if you can.
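For concreteness, here is a minimal sketch (not taken from the shared code) of how this length mismatch is often sidestepped: the validation loss is computed with teacher forcing, so the decoder emits exactly one logit vector per ground-truth step, and padded positions are excluded via `ignore_index`. All names, shapes, and the pad index below are made up for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical setup: a batch of 2 captions padded to length 9.
pad_idx, vocab_size = 0, 1000
batch, max_len = 2, 9

# Caption 0 has 9 real tokens; caption 1 has 5, the rest is padding.
targets = torch.randint(1, vocab_size, (batch, max_len))
targets[1, 5:] = pad_idx

# With teacher forcing, the decoder produces one logit vector per
# target step, so predictions and targets always line up in length.
logits = torch.randn(batch, max_len, vocab_size)

# ignore_index=pad_idx drops padded positions from the average, so each
# sequence contributes loss only up to (and including) its end token.
criterion = nn.CrossEntropyLoss(ignore_index=pad_idx)
loss = criterion(logits.reshape(-1, vocab_size), targets.reshape(-1))
print(loss.item())
```

Under this (assumed) scheme, free-running decoding, where the end token can appear at different positions per sequence, is used only for inference-time metrics, not for the validation loss itself.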