[Example] Clip_B and Clip_V from entropy dynamics #509

Open

hiyuchang wants to merge 1 commit into agentscope-ai:main from hiyuchang:example/clipb

Conversation

@hiyuchang
Collaborator

@hiyuchang hiyuchang commented Feb 13, 2026

Description

We add support for the two algorithms from *On the Entropy Dynamics in Reinforcement Fine-Tuning of Large Language Models* (a minimal config sketch follows the list below). Contact: @shuminwang-ai.

  • Clip_B
  • Clip_V
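
As a rough sketch of how these might be enabled in a config (key names here are assumptions inferred from the changelog in the review below, not copied from examples/entropy/clipb.yaml):

```yaml
# Hypothetical sketch: key names are assumed, not taken from the
# actual examples/entropy/clipb.yaml shipped with this PR.
algorithm:
  algorithm_type: grpo_verl   # GRPO variant that computes advantages in the trainer
  advantage_fn: clipb         # the Clip_B advantage function added here
  advantage_fn_args:
    mu: 1.0                   # assumed name/value for the clipping-strength parameter
```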

Checklist

Please check the following items before the code is ready to be reviewed.

  • Code has passed all tests
  • Docstrings have been added/updated in Google Style
  • Documentation has been updated
  • Code is ready for review

@gemini-code-assist
Contributor

Summary of Changes

Hello @hiyuchang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the framework's capabilities for fine-tuning Large Language Models using Reinforcement Learning by incorporating advanced entropy dynamics algorithms. It specifically implements the 'Clip_B' method, which refines token-level advantage calculations and introduces a new GRPO variant to manage these computations within the trainer. This integration allows for more nuanced control over the training process, aiming to improve model performance and stability.

Highlights

  • New Algorithm Integration: Integrated the 'Clip_B' algorithm for entropy dynamics in Reinforcement Learning (RL) fine-tuning of Large Language Models (LLMs), based on recent research.
  • GRPO Variant for Trainer-side Advantage: Introduced a new GRPOverlAlgorithm that delegates advantage computation to the trainer, enabling more flexible advantage function implementations like Clip_B.
  • Example Configuration and Documentation: Provided a comprehensive example including a README and a YAML configuration file to demonstrate how to set up and run experiments with the new Clip_B algorithm.
Changelog
  • examples/entropy/README.md
    • Added a new README file detailing the setup and execution of Clip_B experiments, including data preparation and a note on future Clip_V implementation.
  • examples/entropy/clipb.yaml
    • Added a new YAML configuration file for the Clip_B example, specifying model, algorithm, buffer, explorer, and trainer settings for a math-related dataset.
  • examples/entropy/clipb_trainer.patch
    • Added a patch file designed to modify trinity/trainer/verl_trainer.py to prevent the 'entropys' field from being removed from the batch, ensuring it's available for advantage functions like Clip_B.
  • trinity/algorithm/__init__.py
    • Registered the new grpo_verl algorithm type, making it available for use within the system (a registry sketch follows this changelog).
  • trinity/algorithm/advantage_fn/__init__.py
    • Registered the new clipb advantage function, allowing it to be selected and used in algorithms.
  • trinity/algorithm/advantage_fn/clipb_advantage.py
    • Added the ClipBAdvantageFn class, implementing the Clip_B advantage computation logic, which includes token filtering based on entropy signals and rewards.
  • trinity/algorithm/algorithm.py
    • Defined the GRPOverlAlgorithm class, a GRPO variant configured to compute advantages within the trainer, supporting algorithms like Clip_B.
  • trinity/common/verl_config.py
    • Added an optional loss_scale_factor field to the Actor configuration class.
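
As a self-contained illustration of the registry pattern these two registrations follow (the decorator, registry, and flag names below are assumptions, not the actual trinity API):

```python
# Minimal stand-in registry; trinity's real registries may differ.
ALGORITHMS: dict = {}

def register(name):
    """Register a class under a string key, as an algorithm registry might."""
    def deco(cls):
        ALGORITHMS[name] = cls
        return cls
    return deco

@register("grpo_verl")
class GRPOverlAlgorithm:
    """GRPO variant that defers advantage computation to the trainer
    (per this PR), enabling trainer-side functions such as Clip_B."""
    compute_advantage_in_trainer = True  # assumed flag name

print(ALGORITHMS["grpo_verl"].__name__)  # -> GRPOverlAlgorithm
```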
Activity
  • No specific activity (comments, reviews, or progress updates) has been recorded for this pull request yet.

@hiyuchang hiyuchang changed the title [Example] Clip_B and Clip_V from entropy dynmics [Example] Clip_B and Clip_V from entropy dynamics Feb 13, 2026

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for the Clip_B algorithm, including a new advantage function and an example configuration. The implementation is largely correct, but there are opportunities for improvement. Specifically, the logic for token filtering in the advantage function can be simplified for better readability. There's also an inconsistency in the standard deviation calculation that should be addressed. A significant maintainability concern is the use of a patch file for modifying core trainer behavior; this change should be integrated directly into the source code. Finally, there are minor issues in the example's README and configuration file that should be fixed.

Comment on lines +1 to +11
```diff
--- a/trinity/trainer/verl_trainer.py
+++ b/trinity/trainer/verl_trainer.py
@@ -501,7 +501,8 @@ class VerlPPOTrainerWrapper(RayPPOTrainer, TrainEngineWrapper):
             }
             metrics.update(old_log_prob_metrics)
-            old_log_prob.batch.pop("entropys")
+            # Keep entropys in batch so advantage_fn (e.g. Clip_B) can use it
+            # old_log_prob.batch.pop("entropys")
             batch = batch.union(old_log_prob)
             if "rollout_log_probs" in batch.batch.keys():
                 # TODO: we may want to add diff of probs too.
```

Severity: high

Requiring users to manually apply a patch is not a maintainable or user-friendly approach. This change should be integrated directly into the trinity/trainer/verl_trainer.py file within this pull request. Instead of providing a patch, please modify the source code directly.

A better long-term solution would be to make the removal of entropys from the batch configurable. For instance, the advantage_fn could declare which fields it requires, and the trainer could conditionally avoid removing them. This would make the framework more extensible for future algorithms that might have similar requirements.
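
A minimal sketch of the suggested pattern (the attribute name required_batch_fields and the helper are hypothetical, not existing trinity APIs):

```python
# Hypothetical: the advantage_fn declares the batch fields it needs,
# and the trainer only drops fields that no component has asked for.
class ClipBAdvantageFn:
    required_batch_fields = ("entropys",)  # assumed attribute name

def prune_batch_fields(old_log_prob, advantage_fn):
    required = getattr(advantage_fn, "required_batch_fields", ())
    if "entropys" not in required:
        old_log_prob.batch.pop("entropys")
    return old_log_prob
```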

@@ -0,0 +1,29 @@
# Entropy dynamics of RL training

This example shows the two algorithms **Clip_B** and **Clip_V** from the work [On the Entropy Dynamics in Reinforcement Fine-Tuning of Large Language Models](https://arxiv.org/pdf/2602.03392).

Severity: medium

There is a typo in the arXiv link. The year should be 2402, not 2602.

Suggested change
```diff
-This example shows the two algorithms **Clip_B** and **Clip_V** from the work [On the Entropy Dynamics in Reinforcement Fine-Tuning of Large Language Models](https://arxiv.org/pdf/2602.03392).
+This example shows the two algorithms **Clip_B** and **Clip_V** from the work [On the Entropy Dynamics in Reinforcement Fine-Tuning of Large Language Models](https://arxiv.org/pdf/2402.03392).
```

```yaml
        response_key: 'answer'
        rollout_args:
          temperature: 0.7
    - name : aime25
```

Severity: medium

There is an extra space before the colon in name : aime25. While many YAML parsers might handle this, it's inconsistent with the rest of the file and can lead to parsing issues with stricter parsers. Please remove the space for consistency.

    - name: aime25

```python
                dtype=scores.dtype, device=scores.device
            )
            id2mean[idx] = torch.mean(group_scores)
            id2std[idx] = torch.std(group_scores)
```

Severity: medium

There's an inconsistency in how standard deviation is calculated. Here, torch.std is used with its default unbiased=True, which calculates the sample standard deviation (using N-1 in the denominator). However, on line 100, the comment and implementation for varS indicate population variance (using N in the denominator). For consistency, if population statistics are intended throughout, you should use unbiased=False.

Suggested change
```diff
-id2std[idx] = torch.std(group_scores)
+id2std[idx] = torch.std(group_scores, unbiased=False)
```
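
For illustration only (standalone, not from the PR): the two estimators differ by an N vs. N-1 denominator, so the population std equals the sample std scaled by sqrt((N-1)/N):

```python
import torch

group_scores = torch.tensor([1.0, 2.0, 3.0, 4.0])
n = group_scores.numel()

sample_std = torch.std(group_scores)                      # default unbiased=True, divides by N-1
population_std = torch.std(group_scores, unbiased=False)  # divides by N

# Relationship between the two estimators
assert torch.isclose(population_std, sample_std * ((n - 1) / n) ** 0.5)
print(sample_std.item(), population_std.item())  # ~1.2910 vs ~1.1180
```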

Comment on lines +111 to +119
```python
A = exps.batch["advantages"].detach().to(torch.float32)  # [B, T]
pos_mask = A > 0
neg_mask = A < 0

keep_pos = torch.ones_like(pos_mask, dtype=torch.bool)  # positive: all kept
keep_neg = z >= -(self.mu * stdS)  # negative: lower-side clip
keep_zero = torch.ones_like(pos_mask, dtype=torch.bool)  # zero: all kept

keep_bool = torch.where(pos_mask, keep_pos, torch.where(neg_mask, keep_neg, keep_zero))
```

Severity: medium

The logic to determine which tokens to keep can be simplified. The nested torch.where calls are equivalent to a more concise and readable boolean expression.

Suggested change
```diff
-A = exps.batch["advantages"].detach().to(torch.float32)  # [B, T]
-pos_mask = A > 0
-neg_mask = A < 0
-keep_pos = torch.ones_like(pos_mask, dtype=torch.bool)  # positive: all kept
-keep_neg = z >= -(self.mu * stdS)  # negative: lower-side clip
-keep_zero = torch.ones_like(pos_mask, dtype=torch.bool)  # zero: all kept
-keep_bool = torch.where(pos_mask, keep_pos, torch.where(neg_mask, keep_neg, keep_zero))
+A = exps.batch["advantages"].detach().to(torch.float32)  # [B, T]
+# Keep tokens with non-negative advantage, or tokens with negative advantage that satisfy the entropy-based condition.
+keep_bool = (A >= 0) | (z >= -(self.mu * stdS))
```
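
The two forms agree because tokens with A >= 0 are kept unconditionally, so only A < 0 tokens depend on the entropy condition. A standalone check, with random tensors standing in for z, stdS, and self.mu:

```python
import torch

torch.manual_seed(0)
A = torch.randn(4, 8)    # stand-in for exps.batch["advantages"]
z = torch.randn(4, 8)    # stand-in for the entropy z-score
mu, stdS = 0.5, 1.0      # stand-ins for self.mu and stdS

pos_mask, neg_mask = A > 0, A < 0
keep_pos = torch.ones_like(pos_mask, dtype=torch.bool)
keep_neg = z >= -(mu * stdS)
keep_zero = torch.ones_like(pos_mask, dtype=torch.bool)
nested = torch.where(pos_mask, keep_pos, torch.where(neg_mask, keep_neg, keep_zero))

simplified = (A >= 0) | (z >= -(mu * stdS))
assert torch.equal(nested, simplified)  # identical keep masks
```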
