YOLOv4-P5 (object detection reference application), based on this repository, optimised for Graphcore's IPU.
| Framework | Domain | Model | Datasets | Tasks | Training | Inference | Reference |
|---|---|---|---|---|---|---|---|
| PyTorch | Vision | YOLOv4-P5 | COCO 2017 | Object detection | ❌ | ✅ | Scaled-YOLOv4: Scaling Cross Stage Partial Network |
- Install and enable the Poplar SDK (see Poplar SDK setup)
- Install the system and Python requirements (see Environment setup)
- Download the COCO 2017 dataset (see Dataset setup)
To check if your Poplar SDK has already been enabled, run:
```bash
echo $POPLAR_SDK_ENABLED
```
If no path is printed, then follow these steps:
- Navigate to your Poplar SDK root directory
- Enable the Poplar SDK with:
  ```bash
  cd poplar-<OS version>-<SDK version>-<hash>
  . enable.sh
  ```
- Additionally, enable PopART with:
  ```bash
  cd popart-<OS version>-<SDK version>-<hash>
  . enable.sh
  ```
More detailed instructions on setting up your Poplar environment are available in the Poplar quick start guide.
To prepare your environment, follow these steps:
- Create and activate a Python3 virtual environment:
  ```bash
  python3 -m venv <venv name>
  source <venv path>/bin/activate
  ```
- Navigate to the Poplar SDK root directory
- Install the PopTorch (PyTorch) wheel:
  ```bash
  cd <poplar sdk root dir>
  pip3 install poptorch...x86_64.whl
  ```
- Navigate to this example's root directory
- Install the Python requirements:
  ```bash
  pip3 install -r requirements.txt
  ```
- Build the custom ops:
  ```bash
  make
  ```
More detailed instructions on setting up your PyTorch environment are available in the PyTorch quick start guide.
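To confirm that the PopTorch wheel installed correctly in the active virtual environment, a quick sanity check can be run (a minimal sketch; it only imports the package and prints version strings):
```python
# Quick check that PopTorch and PyTorch are importable in the active virtual environment.
import torch
import poptorch

print("PyTorch:", torch.__version__)
print("PopTorch:", poptorch.__version__)
```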
Download the COCO 2017 dataset from the source, via Kaggle, or via the script we provide:
```bash
bash utils/download_coco_dataset.sh
```
Additionally, download and unzip the labels:
```bash
curl -L https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017labels.zip -o coco2017labels.zip && unzip -q coco2017labels.zip -d '<dataset path>' && rm coco2017labels.zip
```
Disk space required: 26G

The resulting dataset directory should look like this:
```
.
├── LICENSE
├── README.txt
├── annotations
├── images
├── labels
├── test-dev2017.txt
├── train2017.cache
├── train2017.txt
├── val2017.cache
└── val2017.txt

3 directories, 7 files
```
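If you want to sanity-check the layout before running, a small script such as the following can confirm the expected files and directories are present (a sketch; `<dataset path>` is whatever directory you extracted the dataset and labels into):
```python
from pathlib import Path

# Replace with the directory the dataset and labels were extracted into.
dataset_root = Path("<dataset path>")

expected_dirs = ["annotations", "images", "labels"]
expected_files = ["train2017.txt", "val2017.txt", "test-dev2017.txt"]

for name in expected_dirs:
    print(f"{name}/: {'found' if (dataset_root / name).is_dir() else 'MISSING'}")
for name in expected_files:
    print(f"{name}: {'found' if (dataset_root / name).is_file() else 'MISSING'}")
```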
To run a tested and optimised configuration and to reproduce the performance shown on our performance results page, use the `examples_utils` module (installed automatically as part of the environment setup) to run one or more benchmarks. The benchmarks are provided in the `benchmarks.yml` file in this example's root directory.
For example:
```bash
python3 -m examples_utils benchmark --spec <path to benchmarks.yml file>
```
Or to run a specific benchmark in the `benchmarks.yml` file provided:
```bash
python3 -m examples_utils benchmark --spec <path to benchmarks.yml file> --benchmark <name of benchmark>
```
For more information on using the examples-utils benchmarking module, please refer to the README.
To download the pretrained weights, run the following commands:
```bash
mkdir weights
cd weights
curl https://gc-demo-resources.s3.us-west-1.amazonaws.com/yolov4_p5_reference_weights.tar.gz -o yolov4_p5_reference_weights.tar.gz && tar -zxvf yolov4_p5_reference_weights.tar.gz && rm yolov4_p5_reference_weights.tar.gz
cd ..
```
These weights are derived from a pre-trained model shared by the author of YOLOv4. We have post-processed these weights to remove the model description, leaving a `state_dict` compatible with the IPU model description.
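Since the archive contains a plain `state_dict` rather than a pickled model object, it can be inspected directly with `torch.load` (a minimal sketch, assuming the extracted path used in the run command below):
```python
import torch

# The archive extracts to a plain state_dict; no model class is needed just to inspect it.
state_dict = torch.load(
    "weights/yolov4_p5_reference_weights/yolov4-p5-sd.pt", map_location="cpu"
)

# List a few parameter names and shapes to confirm the checkpoint loaded.
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
print(f"{len(state_dict)} tensors in total")
```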
To run:
```bash
python3 run.py --weights weights/yolov4_p5_reference_weights/yolov4-p5-sd.pt
```
`python run.py` uses the default config defined in `configs/inference-yolov4p5.yaml`, which can be overridden by various arguments (see `python run.py --help` for more info).
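If you want to see which defaults the config file provides before overriding them on the command line, you can load it with PyYAML (a sketch; it simply prints the top-level entries, whatever they happen to be):
```python
import yaml

# Print the top-level entries of the default inference config.
with open("configs/inference-yolov4p5.yaml") as f:
    config = yaml.safe_load(f)

for key, value in config.items():
    print(key, ":", value)
```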
To compute evaluation metrics, run:
```bash
python run.py --weights '/path/to/your/pretrain_weights.pt' --obj-threshold 0.001 --class-conf-threshold 0.001
```
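Both thresholds are set deliberately low for evaluation so that almost every candidate box is kept before mAP is computed. Conceptually they control a two-stage confidence test; a minimal sketch of that style of filtering (illustrative only, not the exact logic inside `run.py`):
```python
import torch

def filter_detections(objectness, class_scores, obj_threshold=0.001, class_conf_threshold=0.001):
    """Keep boxes whose objectness and combined class confidence both clear their thresholds."""
    # objectness: (N,) score that a box contains any object
    # class_scores: (N, num_classes) per-class scores for each box
    class_conf, class_id = class_scores.max(dim=1)
    keep = (objectness > obj_threshold) & (objectness * class_conf > class_conf_threshold)
    return keep, class_id

# Toy example: 4 candidate boxes, 3 classes.
obj = torch.tensor([0.9, 0.0005, 0.4, 0.2])
cls = torch.rand(4, 3)
keep, ids = filter_detections(obj, cls)
print(keep, ids)
```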
You can use the `--verbose` flag if you want to print the metrics per class. Here is a comparison of our metrics against the GPU on the COCO 2017 detection validation set:
| Model | Image Size | Type | Classes | Precision | Recall | mAP@0.5 | mAP@0.5:0.95 |
|---|---|---|---|---|---|---|---|
| GPU | 896 | FP32 | all | 0.4501 | 0.76607 | 0.6864 | 0.49034 |
| GPU | 896 | FP16 | all | 0.44997 | 0.7663 | 0.68663 | 0.49037 |
| IPU | 896 | FP16 | all | 0.45032 | 0.7674 | 0.68674 | 0.49159 |
We generate the numbers for the GPU by re-running the Scaled-YOLOv4 repo code on an AWS instance. Please note that these numbers are slightly different from those reported in their repo. This is attributed to the `rect` parameter: in their inference it is set to `True`. The IPU currently cannot support different sized images, and therefore we set this to `False` in their evaluation in order to draw a fair comparison. In that regard, we perform on par with SOTA.
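Because the IPU build uses a fixed input resolution, every image is resized and padded to the same square shape (896x896 in the table above), whereas `rect=True` pads each batch only to the minimum rectangle. A minimal sketch of the fixed square, letterbox-style preprocessing (illustrative, not the exact routine used by this example):
```python
import torch
import torch.nn.functional as F

def letterbox_square(image, size=896, pad_value=0.5):
    """Resize an image tensor (C, H, W) to fit in a size x size square, padding the rest."""
    _, h, w = image.shape
    scale = size / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    resized = F.interpolate(
        image.unsqueeze(0), size=(new_h, new_w), mode="bilinear", align_corners=False
    )[0]
    pad_h, pad_w = size - new_h, size - new_w
    # Pad right and bottom so every sample ends up exactly (C, size, size).
    return F.pad(resized, (0, pad_w, 0, pad_h), value=pad_value)

img = torch.rand(3, 480, 640)
print(letterbox_square(img).shape)  # torch.Size([3, 896, 896])
```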
| Framework | Domain | Model | Datasets | Tasks | Training | Inference | Reference |
|---|---|---|---|---|---|---|---|
| PyTorch | vision | yolo | COCO | object detection | ❌ | ✅ | POD4/POD16/POD64 |
The notebook demonstrates the object detection task with a YOLOv4 model executed on Graphcore IPUs. It assumes that the Poplar SDK has been downloaded and enabled.
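For reference, the underlying pattern is PopTorch's `inferenceModel` wrapper, which compiles a standard `torch.nn.Module` for the IPU. A minimal, hedged sketch with a placeholder module standing in for the YOLOv4-P5 detector:
```python
import torch
import poptorch

# Placeholder network; in the notebook the real YOLOv4-P5 model is used instead.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
    torch.nn.ReLU(),
)
model.eval()

opts = poptorch.Options()
ipu_model = poptorch.inferenceModel(model, opts)

# Fixed-size input, matching the 896x896 resolution used for evaluation above.
dummy = torch.rand(1, 3, 896, 896)
out = ipu_model(dummy)
print(out.shape)
```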