- [How to implement multi-GPU processing, taking YOLOv4 as example](./tutorials/multi_GPU_processing.md)
- [Check if your GPU supports FP16/INT8](./tutorials/check_fp16_int8_support.md)
- [How to Compile and Run on Windows](./tutorials/run_on_windows.md)
## Test Environment
1. (**NOT recommended**) TensorRT 7.x
2. (**Recommended**) TensorRT 8.x
3. (**NOT recommended**) TensorRT 10.x

### Note

1. For historical reasons, some models are limited to a specific TensorRT version; please check the `README.md` or the code of the model you want to use.
2. Currently, TensorRT 8.x has the best compatibility and supports the most features.
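If a model only builds against a specific version, the version macros from the TensorRT headers can be checked at compile time. A minimal sketch, assuming `NvInferVersion.h` is on your include path (the error message is illustrative):

```cpp
// Compile-time guard against an unsupported TensorRT version.
// NV_TENSORRT_MAJOR / NV_TENSORRT_MINOR are defined by the TensorRT headers.
#include <NvInferVersion.h>

#if NV_TENSORRT_MAJOR != 8
#error "This subproject assumes TensorRT 8.x; check the model's README for other versions"
#endif
```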
## How to run
**Note**: this project supports building each network via the `CMakeLists.txt` in its subfolder, or building them all together via the top-level `CMakeLists.txt` of this project.
* General procedures before building and running:

```bash
# 1. generate xxx.wts from https://github.com/wang-xinyu/pytorchx/tree/master/lenet
# ...

# 2. put xxx.wts on top of this folder
# ...
```
* (*Option 1*) To build a single subproject in this project, do:

```bash
## enter the subfolder
cd tensorrtx/xxx

## configure & build
cmake -S . -B build
make -C build
```
* (*Option 2*) To build many subprojects together, first **comment out**, in the top-level `CMakeLists.txt`, the projects you don't want to build or that are not supported by your TensorRT version; e.g., you cannot build the subprojects in `${TensorRT_8_Targets}` if your TensorRT is `7.x`. Then:

```bash
## enter the top of this project
cd tensorrtx

## configure & build
# you may use "Ninja" rather than "make" to significantly boost the build speed
cmake -G Ninja -S . -B build
ninja -C build
```
**WARNING**: This part is still under development; most subprojects are not adapted yet.
* run the generated executable, e.g.:

```bash
# serialize model to plan file, i.e. 'xxx.engine'
build/xxx -s

# deserialize plan file and run inference
build/xxx -d

# (Optional) check if the output is the same as pytorchx/lenet
# ...

# (Optional) customize the project
# ...
```
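In the C++ subprojects, the `-s` and `-d` flags correspond to serializing a plan to disk and deserializing it back. A minimal sketch of what that typically looks like, assuming the TensorRT 8.x C++ API (the plan file name `xxx.engine` and the function names are illustrative):

```cpp
// Sketch of what -s / -d typically do under the hood (TensorRT 8.x API).
#include <NvInfer.h>
#include <fstream>
#include <iterator>
#include <vector>

// -s: build the network, serialize it, and write the plan file to disk
void serializePlan(nvinfer1::IBuilder& builder,
                   nvinfer1::INetworkDefinition& network,
                   nvinfer1::IBuilderConfig& config) {
    nvinfer1::IHostMemory* plan = builder.buildSerializedNetwork(network, config);
    std::ofstream out("xxx.engine", std::ios::binary);
    out.write(static_cast<const char*>(plan->data()), plan->size());
}

// -d: read the plan file back and deserialize it into an engine
nvinfer1::ICudaEngine* deserializePlan(nvinfer1::IRuntime& runtime) {
    std::ifstream in("xxx.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());
    return runtime.deserializeCudaEngine(blob.data(), blob.size());
}
```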
For more details, each subfolder may contain a `README.md` inside, which explains more.
## Models
The following models are implemented.
|Name | Description | Supported TensorRT Version |
|---|---|---|
|[mlp](./mlp)| the very basic model for starters, properly documented | 7.x/8.x/10.x |
|[lenet](./lenet)| the simplest, as a "hello world" of this project | 7.x/8.x/10.x |
|[alexnet](./alexnet)| easy to implement, all layers are supported in tensorrt | |
**`docker/README.md`**
For more details on the support matrix, please check [HERE](https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html)
### How to customize the opencv in the image?
54
If prebuilt package from apt cannot meet your requirements, please refer to the demo code in `.dockerfile` to build opencv from source.
55
55
56
-
### How to solve failures when building the image?
For *443 timeout* or similar network issues, a proxy may be required. To make your host proxy work for the Docker build environment, change the `build` node inside the docker-compose file to pass your proxy settings through.
**`lenet/README.md`**

lenet5 is one of the simplest nets in this repo. You can learn the basic procedures of building a CNN with the TensorRT API from it. This demo includes 2 major steps (a C++ sketch follows the list):

1. Build engine
   * define network
   * set input/output
   * serialize model to `.engine` file
2. Do inference
   * load and deserialize model from `.engine` file
   * run inference
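A minimal sketch of both steps, assuming the TensorRT 8.x C++ API; the input shape, the tensor name `data`, and the helper names are illustrative, and the real layer definitions live in the subfolder's source:

```cpp
// Sketch of the two major steps above (TensorRT 8.x C++ API).
#include <NvInfer.h>
#include <cuda_runtime_api.h>

using namespace nvinfer1;

// Step 1: build engine -- define network, set input/output, serialize.
IHostMemory* buildEngine(IBuilder& builder) {
    // 0U -> implicit batch dimension; explicit batch needs the kEXPLICIT_BATCH flag
    INetworkDefinition* network = builder.createNetworkV2(0U);

    // define network: input tensor, then the lenet5 layers (omitted here)
    ITensor* input = network->addInput("data", DataType::kFLOAT, Dims3{1, 32, 32});
    // ... add convolution / pooling / fully-connected layers after `input` ...
    // network->markOutput(*prob);  // set output

    IBuilderConfig* config = builder.createBuilderConfig();
    return builder.buildSerializedNetwork(*network, *config);  // write this blob to lenet5.engine
}

// Step 2: do inference -- deserialize the plan and run it.
void doInference(IRuntime& runtime, const void* plan, size_t size,
                 void* buffers[], cudaStream_t stream) {
    ICudaEngine* engine = runtime.deserializeCudaEngine(plan, size);
    IExecutionContext* context = engine->createExecutionContext();
    context->enqueue(1, buffers, stream, nullptr);  // implicit-batch execution, batch size 1
    cudaStreamSynchronize(stream);                  // wait for the async run to finish
}
```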
## TensorRT C++ API

see [HERE](../README.md#how-to-run)
## TensorRT Python API
```bash
# 1. generate lenet5.wts from https://github.com/wang-xinyu/pytorchx/tree/master/lenet

# 2. put lenet5.wts into tensorrtx/lenet

# ...

cd tensorrtx/lenet

# 4.1 serialize model to plan file, i.e. 'lenet5.engine'
python lenet.py -s

# 4.2 deserialize plan file and run inference
python lenet.py -d

# 5. (Optional) see if the output is the same as pytorchx/lenet
```