
how to install onnx-tensorrt on jetson nano #376

Closed
cloudrivers opened this issue Jan 17, 2020 · 9 comments
Labels
triaged Issue has been triaged by maintainers

Comments

@cloudrivers

Hi, I am trying to install onnx-tensorrt on a Jetson Nano. My environment is JetPack 4.3; the installed packages are listed below.

dpkg -l | grep TensorRT
ii graphsurgeon-tf 6.0.1-1+cuda10.0 arm64 GraphSurgeon for TensorRT package
ii libnvinfer-bin 6.0.1-1+cuda10.0 arm64 TensorRT binaries
ii libnvinfer-dev 6.0.1-1+cuda10.0 arm64 TensorRT development libraries and headers
ii libnvinfer-doc 6.0.1-1+cuda10.0 all TensorRT documentation
ii libnvinfer-plugin-dev 6.0.1-1+cuda10.0 arm64 TensorRT plugin libraries
ii libnvinfer-plugin6 6.0.1-1+cuda10.0 arm64 TensorRT plugin libraries
ii libnvinfer-samples 6.0.1-1+cuda10.0 all TensorRT samples
ii libnvinfer6 6.0.1-1+cuda10.0 arm64 TensorRT runtime libraries
ii libnvonnxparsers-dev 6.0.1-1+cuda10.0 arm64 TensorRT ONNX libraries
ii libnvonnxparsers6 6.0.1-1+cuda10.0 arm64 TensorRT ONNX libraries
ii libnvparsers-dev 6.0.1-1+cuda10.0 arm64 TensorRT parsers libraries
ii libnvparsers6 6.0.1-1+cuda10.0 arm64 TensorRT parsers libraries
ii nvidia-container-csv-tensorrt 6.0.1.10-1+cuda10.0 arm64 Jetpack TensorRT CSV file
ii python-libnvinfer 6.0.1-1+cuda10.0 arm64 Python bindings for TensorRT
ii python-libnvinfer-dev 6.0.1-1+cuda10.0 arm64 Python development package for TensorRT
ii python3-libnvinfer 6.0.1-1+cuda10.0 arm64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 6.0.1-1+cuda10.0 arm64 Python 3 development package for TensorRT
ii tensorrt 6.0.1.10-1+cuda10.0 arm64 Meta package of TensorRT
ii uff-converter-tf 6.0.1-1+cuda10.0 arm64 UFF converter for TensorRT package

When I run cmake ../ -DCUDA_INCLUDE_DIRS=/usr/local/cuda/include -DTENSORRT_ROOT=/usr/src/tensorrt -DGPU_ARCHS="53", some errors occur:

Determining if the pthread_create exist failed with the following output:
Change Dir: /home/michael/onnx-tensorrt/build/CMakeFiles/CMakeTmp

Run Build Command:"/usr/bin/make" "cmTC_46b0b/fast"
/usr/bin/make -f CMakeFiles/cmTC_46b0b.dir/build.make CMakeFiles/cmTC_46b0b.dir/build
make[1]: Entering directory '/home/michael/onnx-tensorrt/build/CMakeFiles/CMakeTmp'
Building C object CMakeFiles/cmTC_46b0b.dir/CheckSymbolExists.c.o
/usr/bin/cc -fPIE -o CMakeFiles/cmTC_46b0b.dir/CheckSymbolExists.c.o -c /home/michael/onnx-tensorrt/build/CMakeFiles/CMakeTmp/CheckSymbolExists.c
Linking C executable cmTC_46b0b
/home/michael/cmake-3.13.3/bin/cmake -E cmake_link_script CMakeFiles/cmTC_46b0b.dir/link.txt --verbose=1
/usr/bin/cc CMakeFiles/cmTC_46b0b.dir/CheckSymbolExists.c.o -o cmTC_46b0b
CMakeFiles/cmTC_46b0b.dir/CheckSymbolExists.c.o: In function `main':
CheckSymbolExists.c:(.text+0x14): undefined reference to `pthread_create'
CheckSymbolExists.c:(.text+0x18): undefined reference to `pthread_create'
collect2: error: ld returned 1 exit status
CMakeFiles/cmTC_46b0b.dir/build.make:86: recipe for target 'cmTC_46b0b' failed
make[1]: *** [cmTC_46b0b] Error 1
make[1]: Leaving directory '/home/michael/onnx-tensorrt/build/CMakeFiles/CMakeTmp'
Makefile:121: recipe for target 'cmTC_46b0b/fast' failed
make: *** [cmTC_46b0b/fast] Error 2

File /home/michael/onnx-tensorrt/build/CMakeFiles/CMakeTmp/CheckSymbolExists.c:
/* */
#include <pthread.h>

int main(int argc, char** argv)
{
(void)argv;
#ifndef pthread_create
return ((int*)(&pthread_create))[argc];
#else
(void)argc;
return 0;
#endif
}

Determining if the function pthread_create exists in the pthreads failed with the following output:
Change Dir: /home/michael/onnx-tensorrt/build/CMakeFiles/CMakeTmp

Run Build Command:"/usr/bin/make" "cmTC_be2a3/fast"
/usr/bin/make -f CMakeFiles/cmTC_be2a3.dir/build.make CMakeFiles/cmTC_be2a3.dir/build
make[1]: Entering directory '/home/michael/onnx-tensorrt/build/CMakeFiles/CMakeTmp'
Building C object CMakeFiles/cmTC_be2a3.dir/CheckFunctionExists.c.o
/usr/bin/cc -DCHECK_FUNCTION_EXISTS=pthread_create -fPIE -o CMakeFiles/cmTC_be2a3.dir/CheckFunctionExists.c.o -c /home/michael/cmake-3.13.3/Modules/CheckFunctionExists.c
Linking C executable cmTC_be2a3
/home/michael/cmake-3.13.3/bin/cmake -E cmake_link_script CMakeFiles/cmTC_be2a3.dir/link.txt --verbose=1
/usr/bin/cc -DCHECK_FUNCTION_EXISTS=pthread_create CMakeFiles/cmTC_be2a3.dir/CheckFunctionExists.c.o -o cmTC_be2a3 -lpthreads
/usr/bin/ld: cannot find -lpthreads
collect2: error: ld returned 1 exit status
CMakeFiles/cmTC_be2a3.dir/build.make:86: recipe for target 'cmTC_be2a3' failed
make[1]: *** [cmTC_be2a3] Error 1
make[1]: Leaving directory '/home/michael/onnx-tensorrt/build/CMakeFiles/CMakeTmp'
Makefile:121: recipe for target 'cmTC_be2a3/fast' failed
make: *** [cmTC_be2a3/fast] Error 2


zjd1988 commented Apr 14, 2020

I tried the tensorrt 6.0-full-dims branch on jetson nano and succeed.
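
For anyone following along, a minimal sketch of that approach, reusing the cmake flags from the original post (the paths and -DGPU_ARCHS="53" assume a Nano on JetPack 4.3):

git clone -b 6.0-full-dims --recurse-submodules https://github.com/onnx/onnx-tensorrt.git
cd onnx-tensorrt && mkdir build && cd build
cmake .. -DCUDA_INCLUDE_DIRS=/usr/local/cuda/include -DTENSORRT_ROOT=/usr/src/tensorrt -DGPU_ARCHS="53"
make -j4 && sudo make install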

@aljohn0422

@zjd1988 I got "fatal error: cuda_runtime.h: No such file or directory" with 6.0-full-dims. Could you share more detail on this?


zjd1988 commented Jun 18, 2020

Hi @aljohn0422, cuda_runtime.h is included in JetPack; once JetPack is installed, cuda_runtime.h is already present. You can search for the file with "find / -name cuda_runtime.h".
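
For example (a sketch; the exact path can vary by JetPack release, so trust the find output over the example path):

# locate the CUDA header shipped with JetPack
find / -name cuda_runtime.h 2>/dev/null
# then pass its directory to cmake, typically:
cmake .. -DCUDA_INCLUDE_DIRS=/usr/local/cuda/include -DTENSORRT_ROOT=/usr/src/tensorrt -DGPU_ARCHS="53"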

@matthewgiarra

> @zjd1988 I got "fatal error: cuda_runtime.h: No such file or directory" with 6.0-full-dims. Could you share more detail on this?

Based on this link, I believe you need to make NVIDIA the default Docker runtime so that the CUDA compiler and other CUDA toolkit components are available during docker build operations. Instructions are contained in that link.
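
For reference, the usual way to do that is setting "default-runtime" in /etc/docker/daemon.json (a sketch, assuming nvidia-container-runtime is already installed by JetPack):

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}

Then restart the daemon with sudo systemctl restart docker and re-run the docker build.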


undefined-references commented Nov 15, 2020

I have also written a detailed explanation of how to build onnx-tensorrt on the Jetson Nano with JetPack 4.3, here on the neuralet repo.

@kevinch-nv (Collaborator)

@aljohn0422 were you able to get the repository built?

@kevinch-nv added the triaged label on Apr 16, 2021
@casper-hansen

Leaving this code for anyone who needs to build in a Docker container on a Jetson device.

  • -DCUDA_INCLUDE_DIRS: modify to match your CUDA version.
  • -DGPU_ARCHS: "53" for Jetson Nano, "62" for Jetson TX2 (look up the value for other devices).

The rest should match how things were installed on your Jetson.

RUN git clone --recurse-submodules https://github.com/onnx/onnx-tensorrt.git
RUN cd onnx-tensorrt && \
    mkdir build && \
    cd build && \
    cmake ../ \
        -DCUDA_INCLUDE_DIRS=/usr/local/cuda-10.2/include \
        -DCUDNN_INCLUDE_DIR=/usr/lib/aarch64-linux-gnu \
        -DTENSORRT_INCLUDE_DIR=/usr/include/aarch64-linux-gnu \
        -DTENSORRT_LIBRARY_INFER=/usr/lib/aarch64-linux-gnu/libnvinfer.so \
        -DTENSORRT_LIBRARY_INFER_PLUGIN=/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so \
        -DTENSORRT_ROOT=/usr/src/tensorrt/ \
        -DGPU_ARCHS="53" && \
    make -j4 && \
    make install

These commands should also be runnable outside a docker container given the right paths.
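
Once built, a quick sanity check is to run the onnx2trt binary the build produces against a model (my_model.onnx here is just a placeholder):

onnx2trt my_model.onnx -o my_engine.trt

If that writes an engine file without parser errors, the install is working.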

@sumodnandanwar

Note: use -DGPU_ARCHS="72" for the Jetson Xavier NX.

@kevinch-nv (Collaborator)

Thanks for providing the correct build command @casperbh96. Closing this issue.
