
Query #209

Open
malikfahadsarwar opened this issue Nov 8, 2024 · 0 comments

Comments

@malikfahadsarwar

Thanks for sharing the code.
I have a question, if you can answer it: during training, we do not feed the end-token embedding as input, since the end token has to be predicted by the model in the output. During validation, though, the end token might be produced anywhere (possibly at a different position for each sequence within the batch). So suppose the sequence produced by the LSTM is 5 tokens long, i.e. 5 probability distributions where the last one corresponds to the end token, while the ground truth has 9 tokens. How are we supposed to compute the loss in that case? Please answer my query if you can.
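One common way to handle this mismatch (not necessarily what this repository does) is to compute the validation loss with teacher forcing, exactly as in training: the decoder is unrolled for the full ground-truth length, so logits and targets always line up, and positions beyond each caption's own length are masked out of the loss via a pad index. Free-running decoding that stops at the end token is then used only for generating captions and metrics like BLEU, not for the loss. A minimal PyTorch sketch, where the shapes, `PAD_IDX`, and tensor names are illustrative rather than taken from this repository:

```python
import torch
import torch.nn.functional as F

PAD_IDX = 0  # hypothetical <pad> token id

# Teacher-forced decoding: the decoder is run for the full target length,
# so logits have shape (batch, max_len, vocab) regardless of where the
# model would have emitted the end token in free-running generation.
batch, max_len, vocab = 2, 9, 10
logits = torch.randn(batch, max_len, vocab)          # stand-in for model output
targets = torch.randint(1, vocab, (batch, max_len))  # ground-truth token ids

# Captions in a batch have different true lengths; shorter ones are padded.
# Here the second caption is 5 tokens long (its 5th target is the end token),
# so positions 5..8 are padding and must not contribute to the loss.
targets[1, 5:] = PAD_IDX

# ignore_index drops the padded positions, so each sequence is scored
# only over its own length, including the end-token prediction step.
loss = F.cross_entropy(
    logits.reshape(-1, vocab),  # (batch * max_len, vocab)
    targets.reshape(-1),        # (batch * max_len,)
    ignore_index=PAD_IDX,
)
print(loss.item())
```

An equivalent alternative is `torch.nn.utils.rnn.pack_padded_sequence(..., batch_first=True).data` applied to both the scores and the targets before the loss, which removes the padded timesteps entirely instead of masking them.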
