Adding evaluation checks to prevent Transformer ValueError #3105
base: master
Conversation
Hello! Thanks for tackling this, I think it's indeed quite smart to "get ahead" of the error. The edge case of no eval_strategy/no evaluator was already tackled a bit in #3035 to get Transformers v4.46 compatibility, but the "no eval_strategy and no evaluator" case was left as-is:
So I also updated the test that I made back then with more details about what the expected ValueError should be. What do you think? @stsfaroz
Looks like the tests failed for Python 3.9 and 3.10 because Python 3.11 changed how Enums are formatted by default (https://docs.python.org/3/whatsnew/3.11.html#enum), i.e. they now get printed as
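To illustrate the version difference being referred to: a minimal sketch with a hypothetical str-mixin enum (the real one in Transformers is IntervalStrategy; the class below is just for demonstration). On Python 3.11+, format() and f-strings of enums with mixed-in types include the class and member name, while 3.10 and earlier used the mixed-in type's format, so a test asserting on the exact error message text can pass on one version and fail on the other.

```python
import enum


class IntervalStrategy(str, enum.Enum):
    # Hypothetical stand-in for transformers' IntervalStrategy
    NO = "no"
    STEPS = "steps"


formatted = f"{IntervalStrategy.NO}"
# Python 3.10 and earlier: "no" (str-mixin formatting)
# Python 3.11 and later:   "IntervalStrategy.NO"
print(formatted)
```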
Copilot reviewed 2 out of 2 changed files in this pull request and generated no suggestions.
When neither eval_dataset nor evaluator is provided, Transformers raises a ValueError stating that an eval_dataset must be passed or eval_strategy set to "no". However, this error does not account for the evaluator parameter.

Error raised by Transformers:
Instead, we now raise a more specific error:
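A minimal sketch of the kind of pre-check this PR describes (the function name and exact message are hypothetical, not the actual sentence-transformers implementation): if eval_strategy requests evaluation but neither an eval_dataset nor an evaluator is available, fail early with an error that mentions both options.

```python
def check_eval_arguments(eval_strategy, eval_dataset, evaluator):
    """Raise a clearer ValueError before Transformers' own check fires.

    Hypothetical helper: eval_strategy is treated as a plain string here,
    while the real code compares against IntervalStrategy members.
    """
    if str(eval_strategy) != "no" and eval_dataset is None and evaluator is None:
        raise ValueError(
            "You have set eval_strategy to a value other than 'no', but "
            "neither an eval_dataset nor an evaluator was provided. Either "
            "pass one of them, or set eval_strategy='no'."
        )


# No evaluation requested: nothing to check.
check_eval_arguments("no", None, None)
```

An evaluator alone is enough to satisfy the check, which is exactly the case the stock Transformers error message fails to mention.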