Ideas behind sharing parameters of policy model and value model? #1563
Comments
If you do that, I think you have to maintain two different full-parameter models. But it seems not:
@luo-li-ba-suo The use of adapters means that it is LoRA tuning instead of full-parameter tuning.
You are right. I just wonder whether two different LoRA adapters on one model can be trained simultaneously.
We should prevent the adapters' gradients from being disabled when we use multiple adapters in a PPO step.
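A minimal sketch of that idea, assuming a PEFT-style LoRA model with two adapters named "default" (policy) and "value" (critic); the helper and adapter names below are illustrative, not LLaMA-Factory's actual code:

```python
def unfreeze_adapters(model, adapter_names=("default", "value")):
    # Re-enable gradients on the LoRA parameters of every listed adapter,
    # since activating one adapter can leave the other one frozen.
    for name, param in model.named_parameters():
        if "lora_" in name and any(f".{adapter}." in name for adapter in adapter_names):
            param.requires_grad = True
```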
@hiyouga I didn't know that. Here are the reward weights which the value model was initialized with:
And the weights of the value model after PPO:
Weird!
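One way to reproduce that comparison is to diff the two checkpoints directly; this is a hedged sketch with placeholder paths, not the poster's actual script:

```python
import torch

# Load both state dicts on CPU and report how much each shared tensor changed
# between the reward initialization and the value model after PPO.
reward_sd = torch.load("reward_model/pytorch_model.bin", map_location="cpu")
value_sd = torch.load("value_model_after_ppo/pytorch_model.bin", map_location="cpu")

for key in sorted(set(reward_sd) & set(value_sd)):
    delta = (reward_sd[key].float() - value_sd[key].float()).abs().max().item()
    print(f"{key}: max abs diff = {delta:.6f}")
```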
My local branch has diverged from main and contains many irrelevant changes, and I am trying to pick out the minimal necessary changes for your reference: #1624
Hey, why hasn't this been updated recently? Were any bugs found?
Curious to hear updates on whether this will be merged.
It seems unnecessary to resolve this issue, because RLOO may perform better and is easier to support in LLaMA-Factory (TRL supports it now).
Thanks for this great work, but I wonder why we are sharing the parameters of the policy model and the value model here. In most literature, the policy model is initialized with the SFT model and the value model is initialized with the reward model, so they have separate parameters. I also ran some experiments with the same data and hyperparameters:
1. Shared parameters, both initialized with the SFT model (default in this repo)
2. Separate parameters, both initialized with the SFT model
3. Separate parameters, policy model initialized with the SFT model and value model initialized with the reward model (default in most literature)
It seems that if the value model is initialized with the reward model, the initial value loss is much lower and the achieved reward is better.
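For context, the value loss being compared here is, in standard PPO implementations, a clipped squared error between the value head's predictions and the empirical returns, so a value head initialized from the reward model should start off much closer to its target. A minimal sketch of that loss (names are illustrative):

```python
import torch

def ppo_value_loss(values, old_values, returns, clip_range=0.2):
    # Clipped squared-error value loss as used in common PPO implementations.
    clipped = old_values + (values - old_values).clamp(-clip_range, clip_range)
    unclipped_loss = (values - returns) ** 2
    clipped_loss = (clipped - returns) ** 2
    return 0.5 * torch.maximum(unclipped_loss, clipped_loss).mean()
```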
The implementation for separating the parameters is quite straightforward: first add a "value" adapter and load it from the reward checkpoint, then run the forward pass twice with different adapters to get the logits and values.
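A minimal sketch of that two-adapter setup using PEFT, assuming LoRA checkpoints for the SFT policy and the reward model; the paths, adapter names, and the simple value head below are placeholders rather than the exact patch in #1624:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("path/to/sft_base")
# Policy adapter (from SFT) plus a separate "value" adapter whose LoRA
# weights are loaded from the reward model checkpoint.
model = PeftModel.from_pretrained(base, "path/to/sft_lora", adapter_name="default")
model.load_adapter("path/to/reward_lora", adapter_name="value", is_trainable=True)

# Simple value head projecting the last hidden state to one scalar per token.
value_head = torch.nn.Linear(base.config.hidden_size, 1)

def forward_policy_and_value(batch):
    # Run the forward pass twice, switching the active adapter each time.
    model.set_adapter("default")
    logits = model(**batch).logits
    model.set_adapter("value")
    hidden = model(**batch, output_hidden_states=True).hidden_states[-1]
    values = value_head(hidden).squeeze(-1)
    return logits, values
```

In a PPO step, the policy loss would then backpropagate through the "default" adapter and the value loss through the "value" adapter and the value head.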