Got stuck when evaluating MMLU #1
Thanks for open-sourcing this! I'm trying to evaluate Llama-7b-hf on mmlu-fr. A warning appears:

Token indices sequence length is longer than the specified maximum sequence length for this model (5023 > 4096). Running this sequence through the model will result in indexing errors

and the process seems to hang. Here is the call stack after a keyboard interrupt: it seems the process is stuck in the batched tokenizing. How should I deal with this?

Comments

Is your problem solved? Please let me know, since I am dealing with the same issue.

You can limit the number of tokens fed to the model to match the maximum token length (see the sketch at the end of this thread). This is a problem with lm_eval_harness rather than with the additional tasks.

How do we do this exactly?

Did you fix this?
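Below is a minimal sketch of the truncation the second commenter describes, assuming a Hugging Face tokenizer is used for encoding. The hub path ("huggyllama/llama-7b") and the 4096-token limit are illustrative assumptions taken from the warning above, not from this repository's code.

```python
from transformers import AutoTokenizer

# Tokenizer for the model under evaluation; the hub path is an
# assumption for illustration (the issue evaluates Llama-7b-hf).
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

# In few-shot prompts the question sits at the end, so dropping
# tokens from the left keeps the part that must be answered.
tokenizer.truncation_side = "left"

prompt = "Q: ... A: ..." * 1000  # stand-in for an over-long few-shot prompt

# Cap the encoded sequence at the model's context window (4096 in
# the warning above) so no out-of-range indices reach the model.
encoded = tokenizer(
    prompt,
    truncation=True,
    max_length=4096,
    return_tensors="pt",
)
print(encoded["input_ids"].shape)  # at most (1, 4096)
```

Truncating from the left is a deliberate trade-off: it sacrifices the earliest few-shot examples rather than the question itself, which is usually what you want for MMLU-style prompts.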