
[megatron] refactor: Refactor policy update failure handling from exception to warning in MegatronPPOActor to support FP16 training#5369

Open
mirrorboat wants to merge 2 commits into verl-project:main from mirrorboat:megatron_fp16_gradient_overflow

Conversation

@mirrorboat
Contributor

Refactor policy update failure handling from exception to warning in MegatronPPOActor to support FP16 training

What does this PR do?

Replaced NotImplementedError with a warning log when policy updates fail due to gradient overflow in FP16 training. This prevents abrupt termination and allows continued execution while alerting users to potential precision issues.
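The change described above can be sketched in isolation. This is a minimal, hypothetical illustration of the pattern the PR adopts: `run_policy_update` and the `(update_successful, grad_norm)` return shape are illustrative assumptions, not verl's or Megatron's actual API.

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(__name__)


def run_policy_update(optimizer_step):
    """Run one optimizer step, warning instead of raising when it fails.

    `optimizer_step` is a hypothetical stand-in for the Megatron optimizer
    step, assumed here to return (update_successful, grad_norm); the real
    MegatronPPOActor code differs in detail.
    """
    update_successful, grad_norm = optimizer_step()
    if update_successful:
        return grad_norm
    # Before this PR the failure path raised NotImplementedError,
    # killing the run; now it warns and skips the update.
    logger.warning(
        "Policy update failed, no update is made. "
        "This can be caused by gradient overflow in fp16 training."
    )
    return None
```

With FP16 dynamic loss scaling, an occasional overflow step is expected (the scaler backs off and retries), which is why a skipped step is tolerable where a hard crash is not.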

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, veomni, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.


@gemini-code-assist (Bot) left a comment


Code Review

This pull request refactors the policy update failure handling in MegatronPPOActor to be more robust for FP16 training. Instead of raising a NotImplementedError when a policy update fails (e.g., due to gradient overflow), it now logs a warning and continues. This prevents training from crashing on transient errors. My review includes a suggestion to enhance this by adding a counter for consecutive failures to prevent silent training stalls in case of persistent issues.

Comment on lines 815 to 817 (the PR's diff: the raise is removed and replaced by a warning):

    else:
-       raise NotImplementedError
+       logger.warning("Policy update failed, no update is made. This can be caused by gradient overflow in fp16 training.")


Severity: high

While replacing NotImplementedError with a warning is a good step to handle transient gradient overflows in FP16 training without crashing, it introduces a risk of silent failures. If policy updates fail repeatedly, the model will not learn, but the training script will continue to run, potentially wasting significant compute resources. This could happen if there's a more persistent instability issue.

To make this more robust, I suggest introducing a counter for consecutive update failures. If the number of consecutive failures exceeds a configurable threshold, it should raise an exception to halt the training. This provides a safety net against silent training failures while still allowing for occasional, transient issues.

You would need to:

  1. Initialize self.consecutive_update_failures = 0 in the __init__ method.
  2. Reset the counter with self.consecutive_update_failures = 0 inside the if update_successful: block (around line 812).
  3. Implement the failure counting and check in this else block as suggested below.
Suggested change (replacing the else branch above):

    else:
        self.consecutive_update_failures += 1
        logger.warning(
            f"Policy update failed ({self.consecutive_update_failures} consecutive), no update is made. "
            "This can be caused by gradient overflow in fp16 training."
        )
        max_failures = self.config.get("max_consecutive_update_failures", 10)
        if self.consecutive_update_failures > max_failures:
            raise RuntimeError(
                f"Exceeded max consecutive policy update failures ({max_failures}). "
                "Training is likely unstable."
            )

@mirrorboat changed the title from "[megatron]refactor: Refactor policy update failure handling from exception to warning in MegatronPPOActor to support FP16 training" to "[megatron] refactor: Refactor policy update failure handling from exception to warning in MegatronPPOActor to support FP16 training" on Feb 23, 2026
@mirrorboat
Contributor Author

@ISEEKYAN @vermouth1992 Could you please review this PR that supports FP16 training for Megatron? Thanks.

