# YOLO and other ONNX Runtime models via usls
We can easily run inference on YOLO models (and other models) in Rust with the usls crate, thanks to jamjamjon.
## Fork
📔 We changed the fork base from https://github.com/ultralytics/ultralytics to https://github.com/jamjamjon/usls, which provides the YOLO examples used by ultralytics.
📔 ultralytics added the example `examples/YOLO-Series-ONNXRuntime-Rust`, which uses the usls crate directly.
📔 So we also use the usls crate directly. In it:
* YOLOv5 through YOLO11 are supported.
## Features
* Support `Classification`, `Segmentation`, `Detection`, `Pose(Keypoints)-Detection` tasks.
* Support `FP16` & `FP32` ONNX models.
* Support `CPU`, `CUDA` and `TensorRT` execution providers to accelerate computation.
* Support dynamic input shapes (`batch`, `width`, `height`).
* For more details, please check the usls crate docs; a configuration sketch follows below.
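As a rough illustration of these options, here is a minimal sketch of configuring a model with FP16 weights and dynamic input shapes. It assumes the usls 0.0.x `Options` builder API (`with_model`, `with_i00`/`with_i02`/`with_i03` for the dynamic batch/height/width axes, `with_confs`); the model path is hypothetical, and method names may differ across versions, so verify against the usls docs.
```rust
use usls::{models::YOLO, Options, Vision, YOLOTask, YOLOVersion};

fn main() -> anyhow::Result<()> {
    // Assumed usls 0.0.x builder API; verify the names against the crate docs.
    let options = Options::default()
        .with_model("../models/v8-m-dyn-f16.onnx")? // hypothetical FP16, dynamic-shape model
        .with_yolo_version(YOLOVersion::V8)
        .with_yolo_task(YOLOTask::Detect)
        .with_i00((1, 1, 4).into())       // dynamic batch axis: (min, opt, max)
        .with_i02((416, 640, 800).into()) // dynamic height axis
        .with_i03((416, 640, 800).into()) // dynamic width axis
        .with_confs(&[0.4]);              // confidence threshold
    let _model = YOLO::new(options)?;
    Ok(())
}
```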
## ⛳️ Installation: ONNXRuntime Linking
You have two options for linking the ONNXRuntime library:
* ### Option 1: Manual Linking
⚠️ Version requirements:
```txt
onnxruntime >= 1.20.1
ort >= 2.0.0-rc.9
usls >= 0.0.20
```
* For detailed setup instructions, refer to the [ORT documentation](https://ort.pyke.io/setup/linking).
* This repository uses the `ort` crate, an ONNXRuntime wrapper for Rust (https://docs.rs/ort/latest/ort/).
* #### For Linux or macOS Users:
* Download the ONNX Runtime package from the [Releases page](https://github.com/microsoft/onnxruntime/releases).
* Set up the library path by exporting the `ORT_DYLIB_PATH` environment variable. Open your shell config (e.g. `vim ~/.bashrc`) and add the following line:
```txt
export ORT_DYLIB_PATH=/path/to/onnxruntime/lib/libonnxruntime.so.1.20.1
```
Then reload it with `source ~/.bashrc`.
* ### Option 2: Automatic Download
Just use `--features auto`:
```shell
cargo run -r --example yolo --features auto
```
* ### Option 3 (optional): Install CUDA, cuDNN & TensorRT
* The CUDA execution provider requires CUDA v11.6+.
* The TensorRT execution provider requires CUDA v11.4+ and TensorRT v8.4+.
* Selecting an execution provider in code is sketched right after this list.
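A minimal sketch of picking an execution provider, assuming the usls 0.0.x `Options` methods `with_cuda(device_id)` and `with_trt(device_id)` (names taken from that version's examples; they may differ in newer releases):
```rust
use usls::Options;

// Assumed usls 0.0.x API; verify against the crate docs.
fn accelerated_options(device_id: usize) -> Options {
    Options::default()
        // CPU is the default execution provider; enable one of these to accelerate:
        .with_cuda(device_id) // CUDA EP (requires CUDA v11.6+)
        // .with_trt(device_id) // TensorRT EP (requires CUDA v11.4+ and TensorRT v8.4+)
}
```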
## Models
### Option 1: Download the YOLOv8 ONNX Models
Download prebuilt models (and others) from the usls assets: https://github.com/jamjamjon/assets/releases/tag/yolo
### Option 2: Export the YOLOv8 ONNX Models
```bash
pip install -U ultralytics
# export onnx model with dynamic shapes
yolo export model=yolov8m.pt format=onnx simplify dynamic
yolo export model=yolov8m-cls.pt format=onnx simplify dynamic
yolo export model=yolov8m-pose.pt format=onnx simplify dynamic
yolo export model=yolov8m-seg.pt format=onnx simplify dynamic
# export onnx model with constant shapes
yolo export model=yolov8m.pt format=onnx simplify
yolo export model=yolov8m-cls.pt format=onnx simplify
yolo export model=yolov8m-pose.pt format=onnx simplify
yolo export model=yolov8m-seg.pt format=onnx simplify
```
## Run Inference
### YOLO
For the examples below, model paths are relative to `../models`:
- Detect
```sh
cargo run -r --example yolo -- --task detect --ver v8 --scale n --model ../models/v8-m.onnx --source assets/bus.jpg
cargo run -r --example yolo -- --task detect --ver v11 --scale n --model ../models/v11-m.onnx --source assets/bus.jpg
```
- Pose
```sh
cargo run -r --example yolo -- --task pose --ver v8 --scale n --model ../models/v8-m-pose.onnx --source assets/bus.jpg
cargo run -r --example yolo -- --task pose --ver v11 --scale n --model ../models/v11-m-pose.onnx --source assets/bus.jpg
```
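For programmatic use (instead of the example CLI), a detection pipeline might look like the sketch below. It assumes the usls 0.0.x API (`Options`, `YOLO`, `DataLoader`, `Annotator`, and the `Vision` trait) as used in the ultralytics example; treat the exact method names as assumptions and check the usls docs.
```rust
use usls::{models::YOLO, Annotator, DataLoader, Options, Vision, YOLOTask, YOLOVersion};

fn main() -> anyhow::Result<()> {
    // Build the model (builder names assume the usls 0.0.x API).
    let options = Options::default()
        .with_model("../models/v8-m.onnx")?
        .with_yolo_version(YOLOVersion::V8)
        .with_yolo_task(YOLOTask::Detect)
        .with_confs(&[0.4]); // confidence threshold
    let mut model = YOLO::new(options)?;

    // Load an input image.
    let xs = [DataLoader::try_read("assets/bus.jpg")?];

    // Run inference.
    let ys = model.run(&xs)?;

    // Draw and save the annotated result.
    let annotator = Annotator::default().with_saveout("YOLO-Detect");
    annotator.annotate(&xs, &ys);

    Ok(())
}
```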
### Other models
Check the `README.md` in each example. These are not yet tested; testing is coming soon.
## TODO
Test the other models.