
Improve Range update for Relu/Clip #21251

Closed
@f2013519

Description


How do we specify fused operator patterns like Conv+Relu in the quantization config? I see such options are available in PyTorch, but not in ONNX static quantization.

Right now I see different scales at the outputs of Conv and Relu, which is not suitable for us because it requires an additional requantize step.
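A minimal sketch (plain Python with hypothetical calibration numbers, not the onnxruntime API) of why updating the Conv output range to the Relu-clipped range removes the requantize step: Relu only discards the negative part of the range, so quantizing the Conv output with the post-Relu range lets both tensors share one scale/zero-point.

```python
def qparams(rmin, rmax, qmin=0, qmax=255):
    """Asymmetric uint8 quantization parameters from a float range."""
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)  # range must include 0
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = round(qmin - rmin / scale)
    return scale, zero_point

# Hypothetical float ranges observed during calibration.
conv_range = (-3.2, 5.0)   # Conv output before Relu
relu_range = (0.0, 5.0)    # same tensor after Relu

# Naive: each tensor gets its own range -> different scales, so a
# requantize is needed between the quantized Conv and the Relu output.
s_conv, zp_conv = qparams(*conv_range)
s_relu, zp_relu = qparams(*relu_range)
print(s_conv != s_relu)  # True: scales disagree

# Range update: clamp the Conv output range to the Relu range, so both
# tensors share one (scale, zero_point) and Relu folds into the Conv's
# output quantization with no extra requantize node.
s_shared, zp_shared = qparams(*relu_range)
print((s_shared, zp_shared) == (s_relu, zp_relu))  # True: shared qparams
```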

Thanks!

Labels: quantization (issues related to quantization)
