Error in prepare.sh Jetson and Jetson_trt on last commit #167
Comments
Are you using JetPack version 4.4.1? I see "Nightly" in your version. If you're using the right JetPack version, share your nvidia-smi output.
Other recommendations:
The previous commit generated the optimized models without any problem. I tried it with jetson_tftrt and got the same problem.
You're not answering our question about the JetPack version. Also, previous commits didn't change Jetson at all; they were about Linux x86, Android, and Raspberry Pi. Not a single Jetson file was changed in the previous commits. You should double-check your device.
The last commit that changed Jetson was in February. We have more than 10k devices using the implementation on Jetson. There is little chance that no one would have noticed the issue for 5 months. Again, check your device and answer our questions about JetPack and nvidia-smi.
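For reference, here is a sketch of the commands that collect the information requested above (JetPack version and GPU status). The file paths and package names are the standard ones on NVIDIA Jetson / L4T images; the guards are an assumption added so the snippet also runs harmlessly on a non-Jetson host.

```shell
# L4T release string; the R-number maps to a JetPack version
# (e.g. "# R32 (release), REVISION: 4.4" corresponds to JetPack 4.4.1).
if [ -f /etc/nv_tegra_release ]; then
    cat /etc/nv_tegra_release
else
    echo "no /etc/nv_tegra_release (not a Jetson/L4T system?)"
fi

# JetPack meta-package version, if installed via apt.
dpkg -l 2>/dev/null | grep nvidia-jetpack || echo "nvidia-jetpack package not found"

# GPU/driver status; nvidia-smi is not shipped on every Jetson release,
# so fall back to a message instead of failing.
command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi || echo "nvidia-smi not available"
```

Pasting the output of these three commands into the issue would answer both of the maintainers' questions at once.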
*[ULTALPR_SDK INFO]:
** This application is used to generate optimized NVIDIA TensorRT models **
*[COMPV INFO]: [UltAlprSdkEngine] Call: optimizeTRT
*[COMPV INFO]: [CompVBase] Initializing [base] modules (v 1.0.0, nt 1)...
*[COMPV INFO]: [CompVBase] sizeof(compv_scalar_t)= #8
*[COMPV INFO]: [CompVBase] sizeof(float)= #4
*[COMPV INFO]: Initializing window registery
*[COMPV INFO]: [ImageDecoder] Initializing image decoder...
*[COMPV INFO]: [CompVCpu] H: '', S: '', M: '', MN: 'ARMv8 Processor rev 1 (v8l)'
*[COMPV INFO]: [CompVBase] CPU features: [arm];[arm64];neon;neon_fma;vfpv4;
*[COMPV INFO]: [CompVBase] CPU cores: #4
*[COMPV INFO]: [CompVBase] CPU cache1: line size: #64B, size :#0KB
*[COMPV INFO]: [CompVBase] CPU Phys RAM size: #3964GB
*[COMPV INFO]: [CompVBase] CPU endianness: LITTLE
*[COMPV INFO]: [CompVBase] Binary type: AArch64
*[COMPV INFO]: [CompVBase] Intrinsic enabled
*[COMPV INFO]: [CompVBase] Assembler enabled
*[COMPV INFO]: [CompVBase] Code built with option /arch:NEON
*[COMPV INFO]: [CompVBase] OS name: Jetson_TFTRT
*[COMPV INFO]: [CompVBase] Math Fast Trig.: true
*[COMPV INFO]: [CompVBase] Math Fixed Point: true
*[COMPV INFO]: [CompVMathExp] Init
*[COMPV INFO]: [CompVBase] Default alignment: #32
*[COMPV INFO]: [CompVBase] Best alignment: #32
*[COMPV INFO]: [CompVBase] Heap limit: #262144KB (#256MB)
*[COMPV INFO]: [CompVParallel] Initializing [parallel] module...
*[COMPV INFO]: [CompVParallel] [Parallel] module initialized
*[COMPV INFO]: [CompVBase] [Base] modules initialized
*[COMPV INFO]: [UltAlprSdkTRT] Optimizing models in [../../../assets/models.tensorrt/] folder...
*[COMPV INFO]: [FileUtils] Loading files in ../../../assets/models.tensorrt/ ...
*[COMPV INFO]: [CompVSharedLib] Loaded shared lib: /home/homelock/Downloads/ultimateALPR-SDK/binaries/jetson/aarch64/libultimatePluginTensorRT.so
*[COMPV INFO]: [UltAlprSdkTRT] **** Optimizing ultimateALPR-SDK_klass_lpci.desktop.model.tensorrt.doubango [3.3]: 1/6 ****
*[COMPV INFO]: /!\ Code in file '/home/ultimate/compv/base/compv_mem.cxx' in function 'CompVMemCopy_C' starting at line #956: Not optimized -> No SIMD implementation found. On ARM consider http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.faqs/ka13544.html
*[PLUGIN_TENSORRT INFO]: [TensorRT Optimizer] Loading TensorRT local plugins...
*[PLUGIN_TENSORRT INFO]: [TensorRT Optimizer] Loading TensorRT local plugins done.
*[PLUGIN_TENSORRT INFO]: [TensorRT Optimizer] Starting to parse the ONNX/UFF data...
*[PLUGIN_TENSORRT INFO]: [TensorRT Optimizer] Parsing the ONNX/UFF data is done.
*[PLUGIN_TENSORRT INFO]: [TensorRT Optimizer] Starting to build the CUDA engine...
***[PLUGIN_TENSORRT ERROR]: function: "log()"
file: "/home/projects/ultimateTRT/pluginTensorRT/source/plugin_tensorrt_inference_engine.cxx"
line: "33"
message: [TensorRT Inference] From logger: /home/jenkins/workspace/TensorRT/helpers/rel-7.1/L1_Nightly_Internal/build/source/rtSafe/resources.h (460) - Cuda Error in loadKernel: 702 (the launch timed out and was terminated)
***[PLUGIN_TENSORRT ERROR]: function: "log()"
file: "/home/projects/ultimateTRT/pluginTensorRT/source/plugin_tensorrt_inference_engine.cxx"
line: "33"
message: [TensorRT Inference] From logger: ../rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 702 (the launch timed out and was terminated)
terminate called after throwing an instance of 'nvinfer1::CudaError'
what(): std::exception
./prepare.sh: line 5: 18725 Aborted sudo ./trt_optimizer --assets ../../../assets