When performing weight-only quantization (e.g., NVFP4) on models that contain QuantTEColumnParallelGroupedLinear or other GroupedLinear modules, the quantization process fails to compute and store the amax values for their weights.
This is likely because GroupedLinear modules use multiple weight parameters (weight0, weight1, weight2, ...) instead of a single weight parameter, and share a single weight_quantizer across all of them.
The existing weight_attr_names() function cannot detect these weight parameters, causing weight_only_quantize() to skip these modules entirely.
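
Below is a minimal, hypothetical sketch of how the numbered weight attributes could be enumerated and how a single amax could be reduced across them for the shared weight_quantizer. The helper names `grouped_weight_attr_names` and `shared_weight_amax` are assumptions for illustration and are not part of any existing API; only the weight0/weight1/... naming convention comes from the description above.

```python
# Illustrative sketch only; helper names are hypothetical and not part of
# any existing API. Assumes the weight0/weight1/... naming convention.
import re

import torch
import torch.nn as nn


def grouped_weight_attr_names(module: nn.Module):
    """Yield attribute names like 'weight0', 'weight1', ... that hold parameters."""
    pattern = re.compile(r"^weight\d+$")
    for name, _ in module.named_parameters(recurse=False):
        if pattern.match(name):
            yield name


def shared_weight_amax(module: nn.Module):
    """Reduce a single amax over all grouped weights for the shared weight_quantizer."""
    amaxes = [
        getattr(module, name).detach().abs().amax()
        for name in grouped_weight_attr_names(module)
    ]
    return torch.stack(amaxes).max() if amaxes else None
```

A fix along these lines could let weight_attr_names() report the weightN parameters for GroupedLinear modules, so that weight_only_quantize() no longer skips them and the shared weight_quantizer receives an amax covering every grouped weight.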