Failed to use TensorRT as execution provider on Jetson with onnxruntime built from source #99
-
Hello, we are trying to use your ort Rust bindings on a Jetson Xavier (JetPack 5.5.1 with CUDA 11.4). We built the shared libraries as well as the Python wheel from source. We can run our models through the Python wheel with TensorRT as the execution provider and get 21 fps. With the ort Rust bindings, however, we can only use CUDA as the execution provider, and the model drops to 5 fps. We tried two strategies.
First, we added "load-dynamic" to the ort features in Cargo.toml.
With that in place, tensorrt.is_available() returns true, yet registering TensorRT as the execution provider still fails. Second, we tried the "system" strategy: we installed the dynamic libraries with make into /usr/local/lib. However, building onnxruntime does not produce the onnxruntime.a static library.
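For reference, this is roughly the registration we are attempting (a minimal sketch against the ort 1.15/1.16-era API; the model path, environment name, and the exact Cargo feature set are placeholders/assumptions on our side):

```rust
use ort::{Environment, ExecutionProvider, GraphOptimizationLevel, SessionBuilder};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // With the "load-dynamic" feature we export
    // ORT_DYLIB_PATH=/usr/local/lib/libonnxruntime.so before launching,
    // so ort picks up the shared library we built on the Jetson.

    // This reports true on our machine...
    let trt = ExecutionProvider::TensorRT(Default::default());
    println!("TensorRT available: {}", trt.is_available());

    // ...but the session still appears to fall back to CUDA. Providers are
    // tried in order, so CUDA here is only the intended fallback.
    let environment = Environment::builder()
        .with_name("jetson-trt")
        .with_execution_providers([trt, ExecutionProvider::CUDA(Default::default())])
        .build()?
        .into_arc();

    let _session = SessionBuilder::new(&environment)?
        .with_optimization_level(GraphOptimizationLevel::Level3)?
        .with_model_from_file("model.onnx")?;
    Ok(())
}
```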
Do you have any suggestions on how to make onnxruntime work with the TensorRT execution provider?
Replies: 1 comment
-
The TensorRT issue was fixed recently. It should work after updating to ort 1.15.5 or 1.16.0.
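Concretely, that's just a dependency bump in Cargo.toml; a sketch (the feature list here is illustrative, keep whatever flags you already use):

```toml
[dependencies]
ort = { version = "1.16.0", features = ["load-dynamic"] }
```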