@filyp filyp commented Jan 26, 2026

What does this PR do?

Additionally it:

  • Simplifies the installation of lm_eval (no need to have a special install group in setup.py)
  • Minor fix to the leaderboard doc
  • Makes .gitignore more comprehensive
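For the lm_eval point above, the simplification presumably amounts to moving lm_eval from an optional extras group into the main dependencies. A minimal sketch (the package name and surrounding setup.py contents are assumptions, not the repo's actual file):

```python
# setup.py (sketch; project name and metadata are hypothetical)
from setuptools import setup

setup(
    name="open-unlearning-example",  # hypothetical project name
    # lm_eval as a regular dependency, so `pip install .` pulls it in
    # directly -- no special extras group like
    # extras_require={"lm_eval": ["lm_eval"]} is needed anymore.
    install_requires=["lm_eval"],
)
```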

Related issues: #173 and #155

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Have you gone through the contributions guide?
  • Are your changes documented? Read documentation guidelines here.

Other notes

transformers 5.0.0 recently came out, but it drops support for the `tokenizer` argument, so for now I didn't want to upgrade to it, as that would require more significant changes:

/root/code/src/trainer/base.py:19: FutureWarning: `tokenizer` is deprecated and will be removed in version 5.0.0 for `CIR.__init__`. Use `processing_class` instead.
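The warning describes a straightforward keyword rename (`tokenizer` → `processing_class`). A minimal self-contained sketch of that pattern, assuming `CIR` is the trainer class named in the warning; the real transformers Trainer handles this internally, so the shim below is purely illustrative:

```python
import warnings


class CIR:
    """Illustrative stand-in for the class named in the FutureWarning.

    Shows the keyword-rename pattern the warning implies: the old
    `tokenizer` kwarg still works, but warns and forwards to the new
    `processing_class` attribute.
    """

    def __init__(self, processing_class=None, tokenizer=None):
        if tokenizer is not None:
            warnings.warn(
                "`tokenizer` is deprecated and will be removed in version "
                "5.0.0 for `CIR.__init__`. Use `processing_class` instead.",
                FutureWarning,
            )
            processing_class = tokenizer
        self.processing_class = processing_class
```

Callers that switch to `processing_class=` avoid the warning entirely, which is the migration the PR defers.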

I manually tested the new evaluation, with `prediction_step` removed from `UnlearnTrainer`, and it works the same.

When I tested unlearning, the overall unlearning trajectory is very similar, but there are some tiny changes to the logged training loss, likely due to a difference in how the new transformers version aggregates it. (I'm not sure about the cause, but the differences were small.)

filyp added 3 commits January 26, 2026 16:15
make UnlearnTrainer implementation more future proof
make other necessary changes to be compatible with new transformers version
fix leaderboard docs
wider gitignore