Thanks for raising this point. Unfortunately, it would be hard: in some sense, just as adapters are not folded into the weights because of their non-linearity and thus add compute overhead, ReFT interventions generally cannot be folded in either. That said, if a ReFT intervention sits between two linear layers, it could potentially be folded in by coupling the rotation weights with the linear-layer weights.
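To make the folding point concrete, here is a small sketch (all names and shapes are illustrative, not pyreft's actual API) showing that a LoReFT-style edit h + Rᵀ(Wh + b − Rh) is affine in h, so when it sits directly before a linear layer it can be absorbed into that layer's weights:

```python
import torch

torch.manual_seed(0)
d, r = 6, 2                       # hidden size and intervention rank (illustrative)
R = torch.randn(r, d)             # rotation / low-rank projection
W = torch.randn(r, d)
b = torch.randn(r)
A = torch.randn(d, d)             # weights of the linear layer that follows
a = torch.randn(d)

def reft(h):
    # LoReFT-style edit: h + R^T (W h + b - R h)
    return h + R.T @ (W @ h + b - R @ h)

# The edit is affine: reft(h) = M h + c, with
M = torch.eye(d) + R.T @ W - R.T @ R
c = R.T @ b

# Fold the edit into the following linear layer A h + a
A_folded = A @ M
a_folded = A @ c + a

h = torch.randn(d)
out_intervened = A @ reft(h) + a          # run intervention, then linear layer
out_folded = A_folded @ h + a_folded      # folded weights, no intervention code
```

The two outputs match to numerical precision, which is the "could potentially be folded in" case above; the catch, as the next comment explains, is that the intervention is only applied at some token positions, so this folding cannot be done unconditionally.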
Another reason it is hard is that we intervene not on all positions but only on a very limited set (e.g., the first n and last n tokens of the prompt), and these intervened positions depend on the input. As a result, the interventions have to happen at run time to target dynamic locations.
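For concreteness, here is a minimal sketch (function and variable names are hypothetical, not pyreft's API) of why this forces a runtime step: the targeted positions are computed from each input's sequence length, so the edit cannot be baked into static weights that apply uniformly to every token:

```python
import torch

def intervene_prefix_suffix(hidden, R, W, b, n=2):
    """Apply a LoReFT-style edit h + R^T (W h + b - R h) only at the
    first n and last n token positions. `hidden` is (batch, seq, d)."""
    seq_len = hidden.shape[1]
    # Positions depend on this input's length, so they must be
    # computed at inference time, per example.
    positions = list(range(n)) + list(range(seq_len - n, seq_len))
    out = hidden.clone()
    h = hidden[:, positions, :]              # (batch, 2n, d) selected tokens
    delta = (h @ W.T + b) - (h @ R.T)        # low-rank edit in the rotated subspace
    out[:, positions, :] = h + delta @ R     # project back and add
    return out

torch.manual_seed(0)
d, r = 8, 2
hidden = torch.randn(1, 10, d)
R = torch.randn(r, d)
W = torch.randn(r, d)
b = torch.zeros(r)
edited = intervene_prefix_suffix(hidden, R, W, b, n=2)
# Middle positions pass through untouched; only the first/last n change.
```

Because `positions` varies with each prompt, the selective edit has to be expressed as code running alongside the model rather than as a one-time weight merge.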
frankaging changed the title from "Is it possible to "bake in" ReFT changes to the weights and produce a model without pyreft dependencies?" to "[P1] Is it possible to "bake in" ReFT changes to the weights and produce a model without pyreft dependencies?" on Apr 17, 2024.
This is helpful for understanding how it works. From tinkering with it over the last few weeks, it seems unique in how it works, and would probably need to be built from scratch to run on something other than torch/transformers - is that a fair assumption? I looked into what it would take to do an MLX port a few weeks ago, came back to it this weekend, and given that pyreft requires pyvene, it sounds like much more than a weekend project.
Fun though - really neat to be able to steer it so easily.
I imagine it would be non-trivial (and then some), but am wondering if any plans are afoot.