PT2E test: demucs model has no data available in the INT8 SYMM and ASYMM scenarios #1771

@kaileiyx

Description

🐛 Describe the bug

Reproducer:

```bash
XPU_QUANT_CONFIG=ASYMM python pt2e-performance/run_benchmark.py xpu --test eval --channels-last --metrics throughputs --torchdynamo inductor --quantization pt2e -m $models 2>&1 |
  tee "${pt2e_logs_dir}/performance-int8-ASYMM.log"
XPU_QUANT_CONFIG=SYMM python pt2e-performance/run_benchmark.py xpu --test eval --channels-last --metrics throughputs --torchdynamo inductor --quantization pt2e -m $models 2>&1 |
  tee "${pt2e_logs_dir}/performance-int8-SYMM.log"
```

Error log:

```
Running TorchBenchModelConfig(name='demucs', test='eval', device='xpu', batch_size=None, extra_args=['--channels-last', '--torchdynamo', 'inductor', '--quantization', 'pt2e'], extra_env=None, output_dir=None) ...
Traceback (most recent call last):
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/torch-xpu-ops/pt2e-performance/userbenchmark/xpu/run_config.py", line 97, in <module>
    run(args, extra_args)
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/torch-xpu-ops/pt2e-performance/userbenchmark/xpu/run_config.py", line 75, in run
    metrics_res = run_config(config, metrics, nwarmup=int(args.nwarmup), niter=int(args.niter), dryrun=args.dryrun)
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/torch-xpu-ops/pt2e-performance/userbenchmark/xpu/run_config.py", line 57, in run_config
    result: TorchBenchModelMetrics = get_model_test_metrics(model, metrics=metrics, nwarmup=nwarmup, num_iter=niter)
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/torch-xpu-ops/pt2e-performance/torchbenchmark/util/experiment/metrics.py", line 183, in get_model_test_metrics
    latencies = get_latencies(
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/torch-xpu-ops/pt2e-performance/torchbenchmark/util/experiment/metrics.py", line 40, in get_latencies
    func()
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/torch-xpu-ops/pt2e-performance/torchbenchmark/util/model.py", line 413, in invoke
    out = self.eval()
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 703, in compile_wrapper
    return fn(*args, **kwargs)
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/torch-xpu-ops/pt2e-performance/torchbenchmark/models/demucs/__init__.py", line 88, in eval
    def eval(self) -> Tuple[torch.Tensor]:
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 899, in _fn
    return fn(*args, **kwargs)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1240, in forward
    return compiled_fn(full_args)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 357, in runtime_wrapper
    all_outs = call_func_at_runtime_with_args(
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
    out = normalize_as_list(f(args))
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 723, in inner_fn
    outs = compiled_fn(args)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 529, in wrapper
    return compiled_fn(runtime_args)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1932, in wrapper
    return optimized_function(args_new)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 583, in __call__
    return self.current_callable(inputs)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_inductor/utils.py", line 2714, in run
    out = model(new_inputs)
  File "/tmp/torchinductor_sdp/tmp4jcofzyc/77/c77o65wtu7oeuqnhbroczbp37lblyum6fpqa7kzzptl43gc4ct3l.py", line 31818, in call
    buf1 = torch.ops.onednn.qconv_pointwise.default(buf0, x_scale, x_zp, _frozen_param88, _frozen_param59, _frozen_param60, _frozen_param0, [4], [0], [1], 1, 1.0, 0, torch.float32, 'relu', [None], '')
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_ops.py", line 818, in __call__
    return self._op(*args, **kwargs)
RuntimeError: quantized pointwise conv1d doesn't support unary_post_op fusion. Got unary_post_op:relu.
```
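Since the benchmark sweeps many models, it may help to triage logs mechanically. The helper below is a sketch (not part of the benchmark suite; the names and the exact regex are my own) that extracts the rejected fusion from error messages shaped like the `RuntimeError` above:

```python
import re

# Matches the RuntimeError emitted when qconv post-op fusion is rejected, e.g.:
# "quantized pointwise conv1d doesn't support unary_post_op fusion. Got unary_post_op:relu."
_FUSION_ERR = re.compile(
    r"quantized pointwise (?P<op>\w+) doesn't support unary_post_op fusion\. "
    r"Got unary_post_op:(?P<post_op>\w+)"
)

def find_unsupported_fusions(log_text: str) -> list[tuple[str, str]]:
    """Return (conv_op, post_op) pairs for every fusion failure found in a log."""
    return [(m.group("op"), m.group("post_op")) for m in _FUSION_ERR.finditer(log_text)]

log = (
    "RuntimeError: quantized pointwise conv1d doesn't support unary_post_op fusion. "
    "Got unary_post_op:relu."
)
print(find_unsupported_fusions(log))  # [('conv1d', 'relu')]
```

Running this over `performance-int8-ASYMM.log` and `performance-int8-SYMM.log` would show whether demucs is the only model hitting the conv1d + relu fusion path.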

Versions

PyTorch: 74d0136772a9564c77360c17c849c3a368fa3e8f
torch-xpu-ops: a3a196ccdbcbc399e157b6bcf8f5611e6561b6d6
Device: 8 * [23.43.27642.52]
OS: Ubuntu 22.04.2 LTS
