Fix MoE quantization tests and add sync_expert_weight_amax option #1158
Conversation
- Pass num_experts and moe_grouped_gemm to TE spec in get_mcore_gpt_model
- Fix partial arg ordering in test_te_grouped_vs_sequential_quantize
- Add sync_expert_weight_amax flag to MaxCalibConfig to optionally sync weight quantizer amax across SequentialMLP experts (matches TEGroupedMLP)
- Update sync_moe_expert_amax helper to accept sync_weight_amax param
- Add debug prints for grouped vs sequential amax comparison

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Jennifer Chen <jennifchen@nvidia.com>
📝 Walkthrough
Added a boolean calibration option, `sync_expert_weight_amax`, that controls whether weight-quantizer amax values are synchronized across MoE local experts during max calibration.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Config as Config (MaxCalibConfig)
    participant Calibrator as max_calibrate()
    participant MoELayer as layer_sync_moe_local_experts_amax()
    participant SyncUtil as sync_moe_expert_amax()
    participant Experts as Local Experts

    Config->>Calibrator: provide sync_expert_weight_amax flag
    Calibrator->>MoELayer: call layer_sync_moe_local_experts_amax(sync_weight_amax=flag)
    MoELayer->>SyncUtil: call sync_moe_expert_amax(experts, sync_weight_amax=flag)
    SyncUtil->>Experts: collect amax (input_quantizer[, weight_quantizer if flag])
    SyncUtil-->>MoELayer: return synchronized amax values
    MoELayer-->>Calibrator: complete local-expert sync
```
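For orientation, here is a simplified sketch of what the expert-amax sync amounts to. It is illustrative only (not the actual modelopt implementation) and assumes each expert exposes TensorQuantizer-like submodules whose names end in `input_quantizer` / `weight_quantizer` and that carry an `amax` attribute, as referenced in the review comments below:

```python
import torch
from torch import nn


def sync_moe_expert_amax(experts: list[nn.Module], sync_weight_amax: bool = False) -> None:
    """Share the element-wise max amax across local experts (simplified sketch)."""
    suffixes = ("input_quantizer",) + (("weight_quantizer",) if sync_weight_amax else ())
    amax_dict: dict[str, torch.Tensor] = {}

    # First pass: reduce amax across experts, keyed by quantizer name.
    for expert in experts:
        for name, module in expert.named_modules():
            if name.endswith(suffixes) and getattr(module, "amax", None) is not None:
                amax_dict[name] = (
                    module.amax
                    if name not in amax_dict
                    else torch.maximum(amax_dict[name], module.amax)
                )

    # Second pass: write the shared maximum back to every expert.
    for expert in experts:
        for name, module in expert.named_modules():
            if name in amax_dict:
                module.amax = amax_dict[name]
```

With `sync_weight_amax=False` only the input quantizers are synced, which is the default SequentialMLP behavior described in the PR description.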
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~35 minutes

🚥 Pre-merge checks: ✅ 3 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (3 passed)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
modelopt/torch/quantization/utils/core_utils.py (1)
533-558: ⚠️ Potential issue | 🟠 Major — Populate missing weight `amax` before taking the cross-expert max.

When `sync_weight_amax=True`, line 548 can assign a shared value to an expert that never collected weight stats, so lines 552-557 no longer run its weight-only fallback. If that unrouted expert has the largest weights, it never contributes to the synced max, and SequentialMLP can still diverge from TEGroupedMLP. Calibrate missing weight quantizers first, or run a second max-reduction after the fallback.

💡 One way to preserve TEGroupedMLP semantics:
```diff
-    amax_dict: dict[str, torch.Tensor] = {}
+    from ..model_calib import max_calibrate
+
+    if sync_weight_amax:
+        for expert in experts:
+            for name, module in expert.named_modules():
+                if name.endswith("weight_quantizer") and module.is_enabled and module.amax is None:
+                    weight = expert.state_dict().get(name.replace("weight_quantizer", "weight"))
+                    if weight is not None:
+                        max_calibrate(module, lambda m, w=weight: m(w), distributed_sync=False)
+
+    amax_dict: dict[str, torch.Tensor] = {}
@@
-    from ..model_calib import max_calibrate
-
     for expert in experts:
         for name, module in expert.named_modules():
             if name.endswith("weight_quantizer") and module.is_enabled and module.amax is None:
                 weight = expert.state_dict().get(name.replace("weight_quantizer", "weight"))
                 if weight is not None:
                     max_calibrate(module, lambda m, w=weight: m(w), distributed_sync=False)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modelopt/torch/quantization/utils/core_utils.py` around lines 533 - 558, The current loop computes amax_dict and assigns shared amax to weight_quantizer modules before running the per-expert weight-only fallback (max_calibrate), causing experts that never collected weight stats to be skipped; change the order so missing weight amax values are calibrated first (using max_calibrate on modules where name.endswith("weight_quantizer") and module.is_enabled and module.amax is None, retrieving weight via expert.state_dict()) and only then compute the cross-expert maximum into amax_dict and assign it, or alternatively retain the existing first max-reduction but add a second pass that recomputes amax_dict (torch.maximum reduction) after the fallback; reference functions/classes to update: TensorQuantizer, amax_dict, max_calibrate, sync_weight_amax, and the expert.named_modules() loops.
🧹 Nitpick comments (1)
tests/gpu_megatron/torch/quantization/plugins/test_megatron.py (1)
638-676: Pin the no-token expert case in this test.
`sync_expert_weight_amax` only changes behavior when some SequentialMLP experts never collect weight `amax`. With random routing and `batch_size=8`, this helper can still touch every local expert, so the regression can slip through. Force at least one expert to receive zero tokens, or compare the synced weight `amax` tensors against TEGroupedMLP directly. As per coding guidelines, "tests/**/*.py: Write tests using pytest for all new features and examples".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/gpu_megatron/torch/quantization/plugins/test_megatron.py` around lines 638 - 676, The test can miss the no-token-expert case because random routing with batch_size=8 can touch every expert; update the test around TEGroupedMLP/SequentialMLP (where copy_weights_from_grouped_to_non_grouped, mtq.quantize, seq_quant_cfg, and forward are used) to force at least one SequentialMLP expert to receive zero tokens before quantization (e.g., make the gating deterministic or craft inputs so only a subset of the num_moe_experts=4 experts are selected) so sync_expert_weight_amax behavior is exercised; alternatively, after quantizing both models compare the synced expert weight amax tensors from sequential_moe_model directly against the TEGroupedMLP amaxs to ensure they match.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@modelopt/torch/quantization/model_calib.py`:
- Around line 133-136: The call to
layer_sync_moe_local_experts_amax(sync_weight_amax=sync_expert_weight_amax) is
incompatible with some MoE implementations (e.g., the HuggingFace plugin) that
do not accept the sync_weight_amax kwarg; update the call site in the loop over
model.named_modules() to detect the method signature or fall back at runtime
(e.g., use inspect.signature(module.layer_sync_moe_local_experts_amax) to check
for a sync_weight_amax parameter or try calling with the kwarg and on TypeError
call without it) so the call works for both implementations, leaving the
HuggingFace and Megatron implementations unchanged.
---
Outside diff comments:
In `@modelopt/torch/quantization/utils/core_utils.py`:
- Around line 533-558: The current loop computes amax_dict and assigns shared
amax to weight_quantizer modules before running the per-expert weight-only
fallback (max_calibrate), causing experts that never collected weight stats to
be skipped; change the order so missing weight amax values are calibrated first
(using max_calibrate on modules where name.endswith("weight_quantizer") and
module.is_enabled and module.amax is None, retrieving weight via
expert.state_dict()) and only then compute the cross-expert maximum into
amax_dict and assign it, or alternatively retain the existing first
max-reduction but add a second pass that recomputes amax_dict (torch.maximum
reduction) after the fallback; reference functions/classes to update:
TensorQuantizer, amax_dict, max_calibrate, sync_weight_amax, and the
expert.named_modules() loops.
---
Nitpick comments:
In `@tests/gpu_megatron/torch/quantization/plugins/test_megatron.py`:
- Around line 638-676: The test can miss the no-token-expert case because random
routing with batch_size=8 can touch every expert; update the test around
TEGroupedMLP/SequentialMLP (where copy_weights_from_grouped_to_non_grouped,
mtq.quantize, seq_quant_cfg, and forward are used) to force at least one
SequentialMLP expert to receive zero tokens before quantization (e.g., make the
gating deterministic or craft inputs so only a subset of the num_moe_experts=4
experts are selected) so sync_expert_weight_amax behavior is exercised;
alternatively, after quantizing both models compare the synced expert weight
amax tensors from sequential_moe_model directly against the TEGroupedMLP amaxs
to ensure they match.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: ce8a1169-4247-4e70-95e3-e0d28bbc2ce2
📒 Files selected for processing (5)
- modelopt/torch/quantization/config.py
- modelopt/torch/quantization/model_calib.py
- modelopt/torch/quantization/plugins/megatron.py
- modelopt/torch/quantization/utils/core_utils.py
- tests/gpu_megatron/torch/quantization/plugins/test_megatron.py
Signed-off-by: Jennifer Chen <jennifchen@nvidia.com>
🧹 Nitpick comments (1)
modelopt/torch/quantization/plugins/huggingface.py (1)
556-556: Add explicit type hints on the updated sync hook signature.

Line 556 introduces `sync_weight_amax` without a type annotation. Please annotate it (and the return type) for mypy consistency.

Suggested diff:

```diff
-    def layer_sync_moe_local_experts_amax(self, sync_weight_amax=False):
+    def layer_sync_moe_local_experts_amax(self, sync_weight_amax: bool = False) -> None:
```

As per coding guidelines "Ensure type hints are properly annotated for static type checking with mypy".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modelopt/torch/quantization/plugins/huggingface.py` at line 556, The new hook method layer_sync_moe_local_experts_amax has an unannotated parameter and missing return type; add explicit type hints for sync_weight_amax (e.g., bool) and the method return type (likely None) to satisfy mypy. Locate the method layer_sync_moe_local_experts_amax in the HuggingFace quantization plugin and update its signature to include the parameter annotation and a return annotation consistent with other sync hooks in the file (use bool for sync_weight_amax and -> None if it returns nothing).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@modelopt/torch/quantization/plugins/huggingface.py`:
- Line 556: The new hook method layer_sync_moe_local_experts_amax has an
unannotated parameter and missing return type; add explicit type hints for
sync_weight_amax (e.g., bool) and the method return type (likely None) to
satisfy mypy. Locate the method layer_sync_moe_local_experts_amax in the
HuggingFace quantization plugin and update its signature to include the
parameter annotation and a return annotation consistent with other sync hooks in
the file (use bool for sync_weight_amax and -> None if it returns nothing).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: fc0ff616-d05f-4d63-9b3b-f2ee1462617c
📒 Files selected for processing (1)
modelopt/torch/quantization/plugins/huggingface.py
Codecov Report

❌ Patch coverage is …

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #1158       +/-   ##
===========================================
- Coverage   70.20%   54.54%   -15.67%
===========================================
  Files         230      348      +118
  Lines       26098    39781    +13683
===========================================
+ Hits        18322    21697     +3375
- Misses       7776    18084    +10308
```
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
🧹 Nitpick comments (1)
tests/gpu_megatron/torch/quantization/plugins/test_megatron.py (1)
667-669: Preserve existing algorithm options when enabling `sync_expert_weight_amax`.

Line 668 replaces the entire `algorithm` config, which can silently drop other algorithm fields if callers pass a richer config. Prefer merging instead of overwriting.

♻️ Proposed refactor:

```diff
-    seq_quant_cfg = copy.deepcopy(quant_cfg)
-    seq_quant_cfg["algorithm"] = {"method": "max", "sync_expert_weight_amax": True}
+    seq_quant_cfg = copy.deepcopy(quant_cfg)
+    algo_cfg = seq_quant_cfg.get("algorithm", {})
+    if isinstance(algo_cfg, str):
+        algo_cfg = {"method": algo_cfg}
+    else:
+        algo_cfg = copy.deepcopy(algo_cfg)
+    algo_cfg["method"] = "max"
+    algo_cfg["sync_expert_weight_amax"] = True
+    seq_quant_cfg["algorithm"] = algo_cfg
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/gpu_megatron/torch/quantization/plugins/test_megatron.py` around lines 667 - 669, The test currently overwrites the entire algorithm dict (seq_quant_cfg["algorithm"] = {...}), which can drop existing algorithm fields from quant_cfg; instead merge the new option into the existing algorithm dict so other options are preserved—e.g., obtain the original algorithm from seq_quant_cfg (or quant_cfg), update it with {"method":"max","sync_expert_weight_amax":True}, assign that merged dict back to seq_quant_cfg["algorithm"], and then call mtq.quantize(sequential_moe_model, seq_quant_cfg, forward).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tests/gpu_megatron/torch/quantization/plugins/test_megatron.py`:
- Around line 667-669: The test currently overwrites the entire algorithm dict
(seq_quant_cfg["algorithm"] = {...}), which can drop existing algorithm fields
from quant_cfg; instead merge the new option into the existing algorithm dict so
other options are preserved—e.g., obtain the original algorithm from
seq_quant_cfg (or quant_cfg), update it with
{"method":"max","sync_expert_weight_amax":True}, assign that merged dict back to
seq_quant_cfg["algorithm"], and then call mtq.quantize(sequential_moe_model,
seq_quant_cfg, forward).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: a5410045-2e76-452d-a309-ae5994d24b6a
📒 Files selected for processing (1)
tests/gpu_megatron/torch/quantization/plugins/test_megatron.py
Fix MoE quantization tests to ensure QuantSequentialMLP and QuantTEGroupedMLP produce identical outputs when SequentialMLP has both weight- and input-quantizer sync enabled.

By default, SequentialMLP does not sync weight scales (but does sync input-quantizer scales), so the outputs of QuantSequentialMLP and QuantTEGroupedMLP are not the same. However, when weight scales are synced, the amax values and outputs of both modules should be identical.
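To make the intuition concrete, a tiny illustration with made-up amax values (not taken from the PR): with syncing enabled, every expert adopts the element-wise maximum across experts, which is the single scale a grouped weight quantizer would see.

```python
import functools

import torch

# Hypothetical per-expert weight amax values collected during calibration.
per_expert_amax = [torch.tensor(1.0), torch.tensor(2.0), torch.tensor(4.0)]

# With sync_expert_weight_amax=True, every SequentialMLP expert adopts the
# element-wise maximum, so all experts share the same weight scale that a
# grouped (TEGroupedMLP-style) quantizer would use.
shared_amax = functools.reduce(torch.maximum, per_expert_amax)
print(shared_amax)  # tensor(4.)
```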
What does this PR do?
Type of change: Bug fix
Usage
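A minimal usage sketch (not taken verbatim from the PR; it assumes the public `modelopt.torch.quantization` API with `mtq.quantize` and a base config such as `FP8_DEFAULT_CFG`, and leaves `model` and `forward_loop` to the caller; the algorithm dict mirrors the one used in the updated test):

```python
import copy

import modelopt.torch.quantization as mtq

# Base quantization format config (FP8 shown; any format config works).
quant_cfg = copy.deepcopy(mtq.FP8_DEFAULT_CFG)

# Enable weight-quantizer amax syncing across SequentialMLP local experts,
# so SequentialMLP matches TEGroupedMLP amax values and outputs.
quant_cfg["algorithm"] = {"method": "max", "sync_expert_weight_amax": True}

# `model` is the MoE model to quantize; `forward_loop(model)` runs a few
# calibration batches (both provided by the caller).
model = mtq.quantize(model, quant_cfg, forward_loop)
```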
Testing
Before your PR is "Ready for review"
- Make sure you read and follow Contributor guidelines and your commits are signed (`git commit -s -S`).
- Make sure you read and follow the Security Best Practices (e.g. avoiding hardcoded `trust_remote_code=True`, `torch.load(..., weights_only=False)`, `pickle`, etc.).
- CONTRIBUTING.md: ✅ / ❌ / N/A

Additional Information
Summary by CodeRabbit

New Features
- Added a `sync_expert_weight_amax` calibration option to optionally synchronize weight-quantizer amax across MoE local experts.

Tests
- Fixed the MoE grouped-vs-sequential quantization tests so SequentialMLP and TEGroupedMLP are compared with matching amax sync settings.