[Example] Clip_B and Clip_V from entropy dynamics #509
hiyuchang wants to merge 1 commit into agentscope-ai:main
Conversation
Summary of Changes

Hello @hiyuchang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the framework's capabilities for fine-tuning large language models with reinforcement learning by incorporating entropy-dynamics algorithms. It implements the Clip_B method, which refines token-level advantage calculations, and introduces a new GRPO variant to manage these computations within the trainer. This integration allows more nuanced control over the training process, aiming to improve model performance and stability.
Code Review
This pull request introduces support for the Clip_B algorithm, including a new advantage function and an example configuration. The implementation is largely correct, but there are opportunities for improvement. Specifically, the logic for token filtering in the advantage function can be simplified for better readability. There's also an inconsistency in the standard deviation calculation that should be addressed. A significant maintainability concern is the use of a patch file for modifying core trainer behavior; this change should be integrated directly into the source code. Finally, there are minor issues in the example's README and configuration file that should be fixed.
```diff
--- a/trinity/trainer/verl_trainer.py
+++ b/trinity/trainer/verl_trainer.py
@@ -501,7 +501,8 @@ class VerlPPOTrainerWrapper(RayPPOTrainer, TrainEngineWrapper):
             }
             metrics.update(old_log_prob_metrics)
-            old_log_prob.batch.pop("entropys")
+            # Keep entropys in batch so advantage_fn (e.g. Clip_B) can use it
+            # old_log_prob.batch.pop("entropys")
             batch = batch.union(old_log_prob)
             if "rollout_log_probs" in batch.batch.keys():
                 # TODO: we may want to add diff of probs too.
```
Requiring users to manually apply a patch is not a maintainable or user-friendly approach. This change should be integrated directly into the trinity/trainer/verl_trainer.py file within this pull request. Instead of providing a patch, please modify the source code directly.
A better long-term solution would be to make the removal of entropys from the batch configurable. For instance, the advantage_fn could declare which fields it requires, and the trainer could conditionally avoid removing them. This would make the framework more extensible for future algorithms that might have similar requirements.
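A minimal sketch of what that could look like; note that `required_batch_fields` and the trainer-side helper below are hypothetical illustrations of this suggestion, not existing Trinity or verl APIs:

```python
# Sketch only: the attribute name and helper are assumptions, not current APIs.

class ClipBAdvantageFn:
    # The advantage function declares which optional batch tensors it needs.
    required_batch_fields = ("entropys",)

    def __call__(self, exps):
        entropys = exps.batch["entropys"]  # available because the trainer kept it
        ...  # compute token-level advantages using entropys


def maybe_pop_entropys(old_log_prob, advantage_fn):
    """Trainer side: only drop 'entropys' when the configured advantage_fn does not need it."""
    required = getattr(advantage_fn, "required_batch_fields", ())
    if "entropys" not in required:
        old_log_prob.batch.pop("entropys")
```

With this kind of contract, the patch (and the commented-out `pop`) would no longer be needed, and future algorithms could opt in to other batch fields the same way.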
```diff
@@ -0,0 +1,29 @@
+# Entropy dynamics of RL training
+
+This example shows the two algorithms **Clip_B** and **Clip_V** from the work [On the Entropy Dynamics in Reinforcement Fine-Tuning of Large Language Models](https://arxiv.org/pdf/2602.03392).
```
There is a typo in the arXiv link. The year should be 2402, not 2602.
```diff
-This example shows the two algorithms **Clip_B** and **Clip_V** from the work [On the Entropy Dynamics in Reinforcement Fine-Tuning of Large Language Models](https://arxiv.org/pdf/2602.03392).
+This example shows the two algorithms **Clip_B** and **Clip_V** from the work [On the Entropy Dynamics in Reinforcement Fine-Tuning of Large Language Models](https://arxiv.org/pdf/2402.03392).
```
```yaml
    response_key: 'answer'
    rollout_args:
      temperature: 0.7
  - name : aime25
```
```python
                    dtype=scores.dtype, device=scores.device
                )
                id2mean[idx] = torch.mean(group_scores)
                id2std[idx] = torch.std(group_scores)
```
There's an inconsistency in how standard deviation is calculated. Here, torch.std is used with its default unbiased=True, which calculates the sample standard deviation (using N-1 in the denominator). However, on line 100, the comment and implementation for varS indicate population variance (using N in the denominator). For consistency, if population statistics are intended throughout, you should use unbiased=False.
```diff
-                id2std[idx] = torch.std(group_scores)
+                id2std[idx] = torch.std(group_scores, unbiased=False)
```
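For illustration, a minimal standalone comparison of the two estimators (toy scores, not taken from the PR):

```python
import torch

group_scores = torch.tensor([0.0, 1.0, 1.0, 0.0, 1.0])

sample_std = torch.std(group_scores)                      # default unbiased=True, divides by N-1
population_std = torch.std(group_scores, unbiased=False)  # divides by N, matching a population-variance varS

print(sample_std.item(), population_std.item())  # ~0.5477 vs ~0.4899
```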
```python
        A = exps.batch["advantages"].detach().to(torch.float32)  # [B, T]
        pos_mask = A > 0
        neg_mask = A < 0

        keep_pos = torch.ones_like(pos_mask, dtype=torch.bool)  # positive: all kept
        keep_neg = z >= -(self.mu * stdS)                       # negative: lower-side clip
        keep_zero = torch.ones_like(pos_mask, dtype=torch.bool)  # zero: all kept

        keep_bool = torch.where(pos_mask, keep_pos, torch.where(neg_mask, keep_neg, keep_zero))
```
The logic to determine which tokens to keep can be simplified. The nested torch.where calls are equivalent to a more concise and readable boolean expression.
```diff
-        A = exps.batch["advantages"].detach().to(torch.float32)  # [B, T]
-        pos_mask = A > 0
-        neg_mask = A < 0
-        keep_pos = torch.ones_like(pos_mask, dtype=torch.bool)  # positive: all kept
-        keep_neg = z >= -(self.mu * stdS)  # negative: lower-side clip
-        keep_zero = torch.ones_like(pos_mask, dtype=torch.bool)  # zero: all kept
-        keep_bool = torch.where(pos_mask, keep_pos, torch.where(neg_mask, keep_neg, keep_zero))
+        A = exps.batch["advantages"].detach().to(torch.float32)  # [B, T]
+        # Keep tokens with non-negative advantage, or tokens with negative advantage that satisfy the entropy-based condition.
+        keep_bool = (A >= 0) | (z >= -(self.mu * stdS))
```
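A quick standalone sanity check that the two formulations agree; the shapes and the values of `z`, `stdS`, and `mu` below are placeholders, not values from the PR:

```python
import torch

torch.manual_seed(0)
A = torch.randn(4, 8)      # hypothetical [B, T] advantages
z = torch.randn(4, 8)      # hypothetical per-token entropy z-scores
stdS, mu = torch.tensor(1.3), 0.5

# Original nested-where formulation
pos_mask, neg_mask = A > 0, A < 0
keep_pos = torch.ones_like(pos_mask, dtype=torch.bool)
keep_neg = z >= -(mu * stdS)
keep_zero = torch.ones_like(pos_mask, dtype=torch.bool)
nested = torch.where(pos_mask, keep_pos, torch.where(neg_mask, keep_neg, keep_zero))

# Simplified boolean formulation
simplified = (A >= 0) | (z >= -(mu * stdS))

assert torch.equal(nested, simplified)
```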
Description
We add support for the algorithms in *On the Entropy Dynamics in Reinforcement Fine-Tuning of Large Language Models*. Contact: @shuminwang-ai.
Checklist
Please check the following items before the code is ready to be reviewed.