@@ -52,19 +52,19 @@ MIVisionX provides developers with [docker images](https://hub.docker.com/u/mivi
* Start docker with display

- ```
- % sudo docker pull mivisionx/ubuntu-16.04:latest
- % xhost +local:root
- % sudo docker run -it --device=/dev/kfd --device=/dev/dri --cap-add=SYS_RAWIO --device=/dev/mem --group-add video --network host --env DISPLAY=unix$DISPLAY --privileged --volume $XAUTH:/root/.Xauthority --volume /tmp/.X11-unix/:/tmp/.X11-unix mivisionx/ubuntu-16.04:latest
- ```
+ ```
+ % sudo docker pull mivisionx/ubuntu-16.04:latest
+ % xhost +local:root
+ % sudo docker run -it --device=/dev/kfd --device=/dev/dri --cap-add=SYS_RAWIO --device=/dev/mem --group-add video --network host --env DISPLAY=unix$DISPLAY --privileged --volume $XAUTH:/root/.Xauthority --volume /tmp/.X11-unix/:/tmp/.X11-unix mivisionx/ubuntu-16.04:latest
+ ```
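
Once inside the container, a quick sanity check is to confirm the GPU device nodes passed through above are visible (a minimal check; it assumes ROCm is installed on the host so `/dev/kfd` and `/dev/dri` exist):

```
% ls -l /dev/kfd /dev/dri
```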
* Test display with MIVisionX sample

- ```
- % export PATH=$PATH:/opt/rocm/mivisionx/bin
- % export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/rocm/mivisionx/lib
- % runvx /opt/rocm/mivisionx/samples/gdf/canny.gdf
- ```
+ ```
+ % export PATH=$PATH:/opt/rocm/mivisionx/bin
+ % export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/rocm/mivisionx/lib
+ % runvx /opt/rocm/mivisionx/samples/gdf/canny.gdf
+ ```
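
If the display is wired up correctly, the Canny sample opens an output window on the host screen. After exiting the container, the X server access granted earlier with `xhost +local:root` can be revoked:

```
% xhost -local:root
```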
* Run [Samples](#samples)
@@ -73,7 +73,7 @@ MIVisionX provides developers with [docker images](https://hub.docker.com/u/mivi
### Command Line Interface (CLI)

```
- usage: python mivisionx_inference_analyzer.py [-h]
+ usage: python3 mivisionx_inference_analyzer.py [-h]
--model_format MODEL_FORMAT
--model_name MODEL_NAME
--model MODEL
@@ -115,7 +115,7 @@ usage: python mivisionx_inference_analyzer.py [-h]
### Graphical User Interface (GUI)

```
- usage: python mivisionx_inference_analyzer.py
+ usage: python3 mivisionx_inference_analyzer.py
```

<p align="center"><img width="75%" src="../../docs/images/analyzer-4.png" /></p>
@@ -138,23 +138,24 @@ usage: python mivisionx_inference_analyzer.py
* **Step 1:** Clone MIVisionX Inference Analyzer Project

- ```
- % cd && mkdir sample-1 && cd sample-1
- % git clone https://github.com/kiritigowda/MIVisionX-inference-analyzer.git
- ```
+ ```
+ % cd && mkdir sample-1 && cd sample-1
+ % git clone https://github.com/GPUOpen-ProfessionalCompute-Libraries/MIVisionX
+ % cd MIVisionX/apps/mivisionx_inference_analyzer/
+ ```

- **Note:**
+ **Note:**

+ MIVisionX needs to be pre-installed
+ MIVisionX Model Compiler & Optimizer scripts are at `/opt/rocm/mivisionx/model_compiler/python/`
+ ONNX model conversion requires ONNX install using `pip install onnx`
* **Step 2:** Download pre-trained SqueezeNet ONNX model from [ONNX Model Zoo](https://github.com/onnx/models#open-neural-network-exchange-onnx-model-zoo) - [SqueezeNet Model](https://s3.amazonaws.com/download.onnx/models/opset_8/squeezenet.tar.gz)

- ```
- % wget https://s3.amazonaws.com/download.onnx/models/opset_8/squeezenet.tar.gz
- % tar -xvf squeezenet.tar.gz
- ```
+ ```
+ % wget https://s3.amazonaws.com/download.onnx/models/opset_8/squeezenet.tar.gz
+ % tar -xvf squeezenet.tar.gz
+ ```
**Note:** pre-trained model - `squeezenet/model.onnx`
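
Since the `onnx` package is already required above, a quick way to confirm the downloaded model matches the dimensions passed to the analyzer is to load it and print the graph input/output shapes. This is a minimal, illustrative sketch (the file name is arbitrary), not part of the analyzer itself:

```
# check_squeezenet.py - inspect the downloaded ONNX model (illustrative)
import onnx

model = onnx.load("squeezenet/model.onnx")   # path from the note above
onnx.checker.check_model(model)              # basic structural validation

for tensor in list(model.graph.input) + list(model.graph.output):
    dims = [d.dim_value for d in tensor.type.tensor_type.shape.dim]
    print(tensor.name, dims)                 # should line up with 3,224,224 in and 1000,1,1 out
```

Run it with `python3 check_squeezenet.py` from the directory containing `squeezenet/`.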
@@ -164,15 +165,15 @@ usage: python mivisionx_inference_analyzer.py
+ View inference analyzer usage

- ```
- % cd ~/sample-1/MIVisionX-inference-analyzer/
- % python mivisionx_inference_analyzer.py -h
- ```
+ ```
+ % cd ~/sample-1/MIVisionX-inference-analyzer/
+ % python3 mivisionx_inference_analyzer.py -h
+ ```

+ Run SqueezeNet Inference Analyzer
```
- % python mivisionx_inference_analyzer.py --model_format onnx --model_name SqueezeNet --model ~/sample-1/squeezenet/model.onnx --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-1/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes
+ % python3 mivisionx_inference_analyzer.py --model_format onnx --model_name SqueezeNet --model ~/sample-1/squeezenet/model.onnx --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-1/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes
```
<p align="center"><img width="100%" src="../../docs/images/sample-1-4.png" /></p>
@@ -187,34 +188,36 @@ usage: python mivisionx_inference_analyzer.py
* **Step 1:** Clone MIVisionX Inference Analyzer Project
- ```
- % cd && mkdir sample-2 && cd sample-2
- % git clone https://github.com/kiritigowda/MIVisionX-inference-analyzer.git
- ```
+ ```
+ % cd && mkdir sample-2 && cd sample-2
+ % git clone https://github.com/GPUOpen-ProfessionalCompute-Libraries/MIVisionX
+ % cd MIVisionX/apps/mivisionx_inference_analyzer/
+ ```

- **Note:**
+ **Note:**

+ MIVisionX needs to be pre-installed
+ MIVisionX Model Compiler & Optimizer scripts are at `/opt/rocm/mivisionx/model_compiler/python/`
+
* **Step 2:** Download pre-trained VGG 16 caffe model - [VGG_ILSVRC_16_layers.caffemodel](http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel)
- ```
- % wget http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel
- ```
+ ```
+ % wget http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel
+ ```
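
The VGG-16 weights file is large (roughly half a gigabyte), so it is worth confirming the download completed and landed where Step 3 expects it (`~/sample-2/`), for example:

```
% ls -lh ~/sample-2/VGG_ILSVRC_16_layers.caffemodel
```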
* **Step 3:** Use the command below to run the inference analyzer

+ View inference analyzer usage
```
% cd ~/sample-2/MIVisionX-inference-analyzer/
- % python mivisionx_inference_analyzer.py -h
+ % python3 mivisionx_inference_analyzer.py -h
```
+ Run VGGNet-16 Inference Analyzer
```
- % python mivisionx_inference_analyzer.py --model_format caffe --model_name VggNet-16-Caffe --model ~/sample-2/VGG_ILSVRC_16_layers.caffemodel --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-2/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes
+ % python3 mivisionx_inference_analyzer.py --model_format caffe --model_name VggNet-16-Caffe --model ~/sample-2/VGG_ILSVRC_16_layers.caffemodel --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-2/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes
```
<p align="center"><img width="100%" src="../../docs/images/sample-2-2.png" /></p>
@@ -227,41 +230,44 @@ usage: python mivisionx_inference_analyzer.py
* **Step 1:** Clone MIVisionX Inference Analyzer Project
229
232
230
- ```
231
- % cd && mkdir sample-3 && cd sample-3
232
- % git clone https://github.com/kiritigowda/MIVisionX-inference-analyzer.git
233
- ```
233
+ ```
234
+ % cd && mkdir sample-3 && cd sample-3
235
+ % git clone https://github.com/GPUOpen-ProfessionalCompute-Libraries/MIVisionX
236
+ % cd MIVisionX/apps/mivisionx_inference_analyzer/
237
+ ```
234
238
235
- **Note:**
239
+ **Note:**
236
240
237
241
+ MIVisionX needs to be pre-installed
238
242
+ MIVisionX Model Compiler & Optimizer scripts are at `/opt/rocm/mivisionx/model_compiler/python/`
239
243
+ NNEF model conversion requires [NNEF python parser](https://github.com/KhronosGroup/NNEF-Tools/tree/master/parser#nnef-parser-project) installed
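
If it is unclear whether the NNEF parser is already available, a quick import check helps; this assumes the parser installs as the `nnef` Python module, as described in the linked NNEF-Tools parser README:

```
% python3 -c "import nnef; print('NNEF parser found')"
```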
* **Step 2:** Download pre-trained VGG 16 NNEF model
242
246
243
- ```
244
- % mkdir ~/sample-3/vgg16
245
- % cd ~/sample-3/vgg16
246
- % wget https://sfo2.digitaloceanspaces.com/nnef-public/vgg16.onnx.nnef.tgz
247
- % tar -xvf vgg16.onnx.nnef.tgz
248
- ```
247
+ ```
248
+ % mkdir ~ /sample-3/vgg16
249
+ % cd ~ /sample-3/vgg16
250
+ % wget https://sfo2.digitaloceanspaces.com/nnef-public/vgg16.onnx.nnef.tgz
251
+ % tar -xvf vgg16.onnx.nnef.tgz
252
+ ```
* **Step 3:** Use the command below to run the inference analyzer

+ View inference analyzer usage
```
- % cd ~/sample-3/MIVisionX-inference-analyzer/
- % python mivisionx_inference_analyzer.py -h
+ % cd ~/sample-3/MIVisionX-inference-analyzer/
+ % python3 mivisionx_inference_analyzer.py -h
```
+ Run VGGNet-16 Inference Analyzer
```
- % python mivisionx_inference_analyzer.py --model_format nnef --model_name VggNet-16-NNEF --model ~/sample-3/vgg16/ --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-3/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes
+ % python3 mivisionx_inference_analyzer.py --model_format nnef --model_name VggNet-16-NNEF --model ~/sample-3/vgg16/ --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-3/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes
```
* **Preprocessing the model:** Use the --add/--multiply option to preprocess the input images
- % python mivisionx_inference_analyzer.py --model_format nnef --model_name VggNet-16-NNEF --model ~/sample-3/vgg16/ --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-3/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes --add [-2.1179,-2.0357,-1.8044] --multiply [0.0171,0.0175,0.0174]
+ ```
+ % python3 mivisionx_inference_analyzer.py --model_format nnef --model_name VggNet-16-NNEF --model ~/sample-3/vgg16/ --model_input_dims 3,224,224 --model_output_dims 1000,1,1 --label ./sample/labels.txt --output_dir ~/sample-3/ --image_dir ../../data/images/AMD-tinyDataSet/ --image_val ./sample/AMD-tinyDataSet-val.txt --hierarchy ./sample/hierarchy.csv --replace yes --add [-2.1179,-2.0357,-1.8044] --multiply [0.0171,0.0175,0.0174]
+ ```
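
For reference, these particular constants match the standard ImageNet per-channel normalization folded into a per-pixel `out = in * multiply + add` form (an assumption about how the analyzer applies the two flags): with mean = (0.485, 0.456, 0.406) and std = (0.229, 0.224, 0.225) on 0-255 pixel values, `multiply = 1/(255*std)` and `add = -mean/std` reproduce the values above. A short sketch to verify the arithmetic:

```
# derive_preprocess_constants.py - reproduce the --add/--multiply values (illustrative)
mean = [0.485, 0.456, 0.406]   # ImageNet per-channel mean (0-1 scale)
std  = [0.229, 0.224, 0.225]   # ImageNet per-channel std (0-1 scale)

multiply = [round(1.0 / (255.0 * s), 4) for s in std]     # -> [0.0171, 0.0175, 0.0174]
add      = [round(-m / s, 4) for m, s in zip(mean, std)]  # -> [-2.1179, -2.0357, -1.8044]

print("--multiply", multiply)
print("--add", add)
```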