Description
I'm trying to compile an ONNX model into a TensorRT engine (v10.12.0) on an A100, but the build fails with "no tactics to implement operation" errors on a ConvTranspose node.
Environment
TensorRT Version: 10.12.0
NVIDIA GPU: A100
NVIDIA Driver Version: 550.54.15
CUDA Version: 12.9
CUDNN Version: ???
Operating System: Triton NGC image with trtexec installed
Python Version (if applicable): N/A
Tensorflow Version (if applicable): N/A
PyTorch Version (if applicable): N/A
Baremetal or Container (if so, version): N/A
Relevant Files
trtexec logs:
```
&&&& RUNNING TensorRT.trtexec [TensorRT v101200] [b36] # /usr/src/tensorrt/bin/trtexec --onnx=folded.onnx --saveEngine=/vol/models/hubertsiuzdak/snac_24khz/trt_engines/decoder.plan --minShapes=reconstructed_z:1x768x4 --optShapes=reconstructed_z:2x768x32 --maxShapes=reconstructed_z:32x768x20000 --fp16
[07/05/2025-07:49:39] [I] === Model Options ===
[07/05/2025-07:49:39] [I] Format: ONNX
[07/05/2025-07:49:39] [I] Model: folded.onnx
[07/05/2025-07:49:39] [I] Output:
[07/05/2025-07:49:39] [I] === Build Options ===
[07/05/2025-07:49:39] [I] Memory Pools: workspace: default, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default, tacticSharedMem: default
[07/05/2025-07:49:39] [I] avgTiming: 8
[07/05/2025-07:49:39] [I] Precision: FP32+FP16
[07/05/2025-07:49:39] [I] LayerPrecisions:
[07/05/2025-07:49:39] [I] Layer Device Types:
[07/05/2025-07:49:39] [I] Calibration:
[07/05/2025-07:49:39] [I] Refit: Disabled
[07/05/2025-07:49:39] [I] Strip weights: Disabled
[07/05/2025-07:49:39] [I] Version Compatible: Disabled
[07/05/2025-07:49:39] [I] ONNX Plugin InstanceNorm: Disabled
[07/05/2025-07:49:39] [I] ONNX kENABLE_UINT8_AND_ASYMMETRIC_QUANTIZATION_DLA flag: Disabled
[07/05/2025-07:49:39] [I] TensorRT runtime: full
[07/05/2025-07:49:39] [I] Lean DLL Path:
[07/05/2025-07:49:39] [I] Tempfile Controls: { in_memory: allow, temporary: allow }
[07/05/2025-07:49:39] [I] Exclude Lean Runtime: Disabled
[07/05/2025-07:49:39] [I] Sparsity: Disabled
[07/05/2025-07:49:39] [I] Safe mode: Disabled
[07/05/2025-07:49:39] [I] Build DLA standalone loadable: Disabled
[07/05/2025-07:49:39] [I] Allow GPU fallback for DLA: Disabled
[07/05/2025-07:49:39] [I] DirectIO mode: Disabled
[07/05/2025-07:49:39] [I] Restricted mode: Disabled
[07/05/2025-07:49:39] [I] Skip inference: Disabled
[07/05/2025-07:49:39] [I] Save engine: /vol/models/hubertsiuzdak/snac_24khz/trt_engines/decoder.plan
[07/05/2025-07:49:39] [I] Load engine:
[07/05/2025-07:49:39] [I] Profiling verbosity: 0
[07/05/2025-07:49:39] [I] Tactic sources: Using default tactic sources
[07/05/2025-07:49:39] [I] timingCacheMode: local
[07/05/2025-07:49:39] [I] timingCacheFile:
[07/05/2025-07:49:39] [I] Enable Compilation Cache: Enabled
[07/05/2025-07:49:39] [I] Enable Monitor Memory: Disabled
[07/05/2025-07:49:39] [I] errorOnTimingCacheMiss: Disabled
[07/05/2025-07:49:39] [I] Preview Features: Use default preview flags.
[07/05/2025-07:49:39] [I] MaxAuxStreams: -1
[07/05/2025-07:49:39] [I] BuilderOptimizationLevel: -1
[07/05/2025-07:49:39] [I] MaxTactics: -1
[07/05/2025-07:49:39] [I] Calibration Profile Index: 0
[07/05/2025-07:49:39] [I] Weight Streaming: Disabled
[07/05/2025-07:49:39] [I] Runtime Platform: Same As Build
[07/05/2025-07:49:39] [I] Debug Tensors:
[07/05/2025-07:49:39] [I] Distributive Independence: Disabled
[07/05/2025-07:49:39] [I] Mark Unfused Tensors As Debug Tensors: Disabled
[07/05/2025-07:49:39] [I] Input(s)s format: fp32:CHW
[07/05/2025-07:49:39] [I] Output(s)s format: fp32:CHW
[07/05/2025-07:49:39] [I] Input build shape (profile 0): reconstructed_z=1x768x4+2x768x32+32x768x20000
[07/05/2025-07:49:39] [I] Input calibration shapes: model
[07/05/2025-07:49:39] [I] === System Options ===
[07/05/2025-07:49:39] [I] Device: 0
[07/05/2025-07:49:39] [I] DLACore:
[07/05/2025-07:49:39] [I] Plugins:
[07/05/2025-07:49:39] [I] setPluginsToSerialize:
[07/05/2025-07:49:39] [I] dynamicPlugins:
[07/05/2025-07:49:39] [I] ignoreParsedPluginLibs: 0
[07/05/2025-07:49:39] [I]
[07/05/2025-07:49:39] [I] === Inference Options ===
[07/05/2025-07:49:39] [I] Batch: Explicit
[07/05/2025-07:49:39] [I] Input inference shape : reconstructed_z=2x768x32
[07/05/2025-07:49:39] [I] Iterations: 10
[07/05/2025-07:49:39] [I] Duration: 3s (+ 200ms warm up)
[07/05/2025-07:49:39] [I] Sleep time: 0ms
[07/05/2025-07:49:39] [I] Idle time: 0ms
[07/05/2025-07:49:39] [I] Inference Streams: 1
[07/05/2025-07:49:39] [I] ExposeDMA: Disabled
[07/05/2025-07:49:39] [I] Data transfers: Enabled
[07/05/2025-07:49:39] [I] Spin-wait: Disabled
[07/05/2025-07:49:39] [I] Multithreading: Disabled
[07/05/2025-07:49:39] [I] CUDA Graph: Disabled
[07/05/2025-07:49:39] [I] Separate profiling: Disabled
[07/05/2025-07:49:39] [I] Time Deserialize: Disabled
[07/05/2025-07:49:39] [I] Time Refit: Disabled
[07/05/2025-07:49:39] [I] NVTX verbosity: 0
[07/05/2025-07:49:39] [I] Persistent Cache Ratio: 0
[07/05/2025-07:49:39] [I] Optimization Profile Index: 0
[07/05/2025-07:49:39] [I] Weight Streaming Budget: 100.000000%
[07/05/2025-07:49:39] [I] Inputs:
[07/05/2025-07:49:39] [I] Debug Tensor Save Destinations:
[07/05/2025-07:49:39] [I] Dump All Debug Tensor in Formats:
[07/05/2025-07:49:39] [I] === Reporting Options ===
[07/05/2025-07:49:39] [I] Verbose: Disabled
[07/05/2025-07:49:39] [I] Averages: 10 inferences
[07/05/2025-07:49:39] [I] Percentiles: 90,95,99
[07/05/2025-07:49:39] [I] Dump refittable layers:Disabled
[07/05/2025-07:49:39] [I] Dump output: Disabled
[07/05/2025-07:49:39] [I] Profile: Disabled
[07/05/2025-07:49:39] [I] Export timing to JSON file:
[07/05/2025-07:49:39] [I] Export output to JSON file:
[07/05/2025-07:49:39] [I] Export profile to JSON file:
[07/05/2025-07:49:39] [I]
[07/05/2025-07:49:39] [I] === Device Information ===
[07/05/2025-07:49:39] [I] Available Devices:
[07/05/2025-07:49:39] [I] Device 0: "NVIDIA A100 80GB PCIe" UUID: GPU-109edf76-ba1b-a0a5-b38a-eec3920d6623
[07/05/2025-07:49:39] [I] Selected Device: NVIDIA A100 80GB PCIe
[07/05/2025-07:49:39] [I] Selected Device ID: 0
[07/05/2025-07:49:39] [I] Selected Device UUID: GPU-109edf76-ba1b-a0a5-b38a-eec3920d6623
[07/05/2025-07:49:39] [I] Compute Capability: 8.0
[07/05/2025-07:49:39] [I] SMs: 108
[07/05/2025-07:49:39] [I] Device Global Memory: 81037 MiB
[07/05/2025-07:49:39] [I] Shared Memory per SM: 164 KiB
[07/05/2025-07:49:39] [I] Memory Bus Width: 5120 bits (ECC enabled)
[07/05/2025-07:49:39] [I] Application Compute Clock Rate: 1.41 GHz
[07/05/2025-07:49:39] [I] Application Memory Clock Rate: 1.512 GHz
[07/05/2025-07:49:39] [I]
[07/05/2025-07:49:39] [I] Note: The application clock rates do not reflect the actual clock rates that the GPU is currently running at.
[07/05/2025-07:49:39] [I]
[07/05/2025-07:49:39] [I] TensorRT version: 10.12.0
[07/05/2025-07:49:39] [I] Loading standard plugins
[07/05/2025-07:49:39] [I] [TRT] [MemUsageChange] Init CUDA: CPU +2, GPU +0, now: CPU 29, GPU 426 (MiB)
[07/05/2025-07:49:41] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +1636, GPU +8, now: CPU 1867, GPU 434 (MiB)
[07/05/2025-07:49:41] [I] Start parsing network model.
[07/05/2025-07:49:41] [I] [TRT] ----------------------------------------------------------------
[07/05/2025-07:49:41] [I] [TRT] Input filename: folded.onnx
[07/05/2025-07:49:41] [I] [TRT] ONNX IR version: 0.0.11
[07/05/2025-07:49:41] [I] [TRT] Opset version: 17
[07/05/2025-07:49:41] [I] [TRT] Producer name: pytorch
[07/05/2025-07:49:41] [I] [TRT] Producer version: 2.7.1
[07/05/2025-07:49:41] [I] [TRT] Domain:
[07/05/2025-07:49:41] [I] [TRT] Model version: 0
[07/05/2025-07:49:41] [I] [TRT] Doc string:
[07/05/2025-07:49:41] [I] [TRT] ----------------------------------------------------------------
[07/05/2025-07:49:41] [I] Finished parsing network model. Parse time: 0.0401286
[07/05/2025-07:49:41] [I] Set shape of input tensor reconstructed_z for optimization profile 0 to: MIN=1x768x4 OPT=2x768x32 MAX=32x768x20000
[07/05/2025-07:49:42] [I] [TRT] Local timing cache in use. Profiling results in this builder pass will not be stored.
[07/05/2025-07:49:50] [I] [TRT] Compiler backend is used during engine build.
[07/05/2025-07:49:52] [E] Error[9]: Error Code: 9: Skipping tactic 0x0000000000000000 due to exception [autotuner.cpp:get_best_tactics:2696] Autotuner: no tactics to implement operation: 2637: deconv: /model/model_2/block/block_1/ConvTranspose_output'_before_bias.1-(f32[__mye4604_proxy.1,512,proxy___mye21039.1,1][]so[], mem_prop=0) | ONNXTRT_unsqueezeTensor_6_reshape_output.1-(f32[__mye4604_proxy.1,1024,__mye4608_proxy.1,1][]so[], mem_prop=0), /model/model_2/block/block_1/ConvTranspose filterWeightsFloat-{-0.00562609, -0.00617777, -0.00557466, 0.00758389, 0.00420195, -0.00492521, -0.00424007, -0.00513516, ...}(f32[1024,512,16,1][8192,16,1,1]so[3,2,1,0], mem_prop=0), __mye2638/model/model_2/block/block_1/ConvTranspose_alpha-1F:(f32[][]so[], mem_prop=0), __mye2639/model/model_2/block/block_1/ConvTranspose_beta-0F:(f32[][]so[], mem_prop=0), stream = 0 // /model/model_2/block/block_1/ConvTranspose | n_groups: 1 lpad: {4, 0} rpad: {4, 0} pad_mode: 0 strides: {8, 1} dilations: {1, 1}
[07/05/2025-07:49:54] [E] Error[9]: Error Code: 9: Skipping tactic 0x0000000000000000 due to exception [autotuner.cpp:get_best_tactics:2696] Autotuner: no tactics to implement operation: 2637: deconv: /model/model_2/block/block_1/ConvTranspose_output'_before_bias.1-(f16[__mye4604_proxy.1,512,proxy___mye21039.1,1][]so[], mem_prop=0) | ONNXTRT_unsqueezeTensor_6_reshape_output.1-(f16[__mye4604_proxy.1,1024,__mye4608_proxy.1,1][]so[], mem_prop=0), /model/model_2/block/block_1/ConvTranspose filterWeightsHalf-{-0.00562668, -0.00617599, -0.00557327, 0.00758362, 0.0042038, -0.00492477, -0.00424194, -0.00513458, ...}(f16[1024,512,16,1][8192,16,1,1]so[3,2,1,0], mem_prop=0), __mye2638/model/model_2/block/block_1/ConvTranspose_alpha-1F:(f32[][]so[], mem_prop=0), __mye2639/model/model_2/block/block_1/ConvTranspose_beta-0F:(f32[][]so[], mem_prop=0), stream = 0 // /model/model_2/block/block_1/ConvTranspose | n_groups: 1 lpad: {4, 0} rpad: {4, 0} pad_mode: 0 strides: {8, 1} dilations: {1, 1}
[07/05/2025-07:49:57] [E] Error[9]: Error Code: 9: Skipping tactic 0x0000000000000000 due to exception [autotuner.cpp:get_best_tactics:2696] Autotuner: no tactics to implement operation: 2638: deconv: /model/model_2/block/block_1/ConvTranspose_output'_before_bias.1-(f16[__mye4606_proxy.1,512,proxy___mye21041.1,1][]so[], mem_prop=0) | ONNXTRT_unsqueezeTensor_6_reshape_output.1-(f16[__mye4606_proxy.1,1024,__mye4610_proxy.1,1][]so[], mem_prop=0), /model/model_2/block/block_1/ConvTranspose filterWeightsHalf-{-0.00562668, -0.00617599, -0.00557327, 0.00758362, 0.0042038, -0.00492477, -0.00424194, -0.00513458, ...}(f16[1024,512,16,1][8192,16,1,1]so[3,2,1,0], mem_prop=0), __mye2639/model/model_2/block/block_1/ConvTranspose_alpha-1F:(f32[][]so[], mem_prop=0), __mye2640/model/model_2/block/block_1/ConvTranspose_beta-0F:(f32[][]so[], mem_prop=0), stream = 0 // /model/model_2/block/block_1/ConvTranspose | n_groups: 1 lpad: {4, 0} rpad: {4, 0} pad_mode: 0 strides: {8, 1} dilations: {1, 1}
[07/05/2025-07:49:57] [E] Error[10]: IBuilder::buildSerializedNetwork: Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[ONNXTRT_squeezeTensor_4.../model/model.8/Tanh]}.)
[07/05/2025-07:49:57] [E] Engine could not be created from network
[07/05/2025-07:49:57] [E] Building engine failed
[07/05/2025-07:49:57] [E] Failed to create engine from model or file.
[07/05/2025-07:49:57] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v101200] [b36] # /usr/src/tensorrt/bin/trtexec --onnx=folded.onnx --saveEngine=/vol/models/hubertsiuzdak/snac_24khz/trt_engines/decoder.plan --minShapes=reconstructed_z:1x768x4 --optShapes=reconstructed_z:2x768x32 --maxShapes=reconstructed_z:32x768x20000 --fp16
```

Model link: https://drive.google.com/file/d/1D5ASbrHcf-4sgdfdqXR6kBKUS5nZH-5l/view?usp=sharing
Steps To Reproduce
Commands or scripts:
```
/usr/src/tensorrt/bin/trtexec \
    --onnx=folded.onnx \
    --saveEngine=/vol/models/hubertsiuzdak/snac_24khz/trt_engines/decoder.plan \
    --minShapes=reconstructed_z:1x768x4 \
    --optShapes=reconstructed_z:2x768x32 \
    --maxShapes=reconstructed_z:32x768x20000 \
    --fp16
```
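
For reference, the same build can be reproduced through the TensorRT Python API; this is a minimal sketch, assuming the `tensorrt` 10.x Python bindings are available in the container, with the input name `reconstructed_z` and the shape profile taken from the trtexec command above (output paths are illustrative stand-ins):

```python
# Minimal sketch of the failing build via the TensorRT Python API.
# Assumes the tensorrt 10.x Python bindings; file paths are stand-ins
# for the ones used with trtexec above.
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(0)  # explicit batch is the default in TRT 10
parser = trt.OnnxParser(network, logger)

with open("folded.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # mirrors trtexec --fp16

# Same dynamic-shape profile as --minShapes/--optShapes/--maxShapes.
profile = builder.create_optimization_profile()
profile.set_shape("reconstructed_z", (1, 768, 4), (2, 768, 32), (32, 768, 20000))
config.add_optimization_profile(profile)

engine_bytes = builder.build_serialized_network(network, config)  # fails here
if engine_bytes is None:
    raise RuntimeError("Engine build failed")
with open("decoder.plan", "wb") as f:
    f.write(engine_bytes)
```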
Have you tried the latest release?: Yes
Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (`polygraphy run <model.onnx> --onnxrt`): Yes (a minimal check is sketched below)
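
For completeness, this is roughly how the ONNX model can be verified outside TensorRT; a minimal sketch assuming the `onnxruntime` (or `onnxruntime-gpu`) package, with the input name and the opt-profile shape 2x768x32 taken from the trtexec log above:

```python
# Minimal sanity check of folded.onnx with ONNXRuntime.
# The dummy input uses the opt-profile shape 2x768x32 and the input
# name "reconstructed_z" from the trtexec log.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "folded.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
dummy = np.random.randn(2, 768, 32).astype(np.float32)
outputs = sess.run(None, {"reconstructed_z": dummy})
print([o.shape for o in outputs])  # model runs fine here, unlike the TRT build
```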