NVidia's pytorch-based object_detection implementation (#240)
* Delete Caffe2 object_detection
* Added new pytorch-based object_detection
* object_detection: removed unused configs; deleted misleading code
* object_detection Dockerfile now based on public image and specifies exact library versions
1 parent 5309aff, commit 0badcd1
Showing 169 changed files with 13,547 additions and 197 deletions.

Dockerfile (new file)
@@ -0,0 +1,72 @@
FROM pytorch/pytorch:1.0.1-cuda10.0-cudnn7-devel

RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections

# install basics
RUN apt-get update -y \
 && apt-get install -y apt-utils=1.2.29ubuntu0.1 \
                       libglib2.0-0=2.48.2-0ubuntu4.1 \
                       libsm6=2:1.2.2-1 \
                       libxext6=2:1.3.3-1 \
                       libxrender-dev=1:0.9.9-0ubuntu1

RUN pip install ninja==1.8.2.post2 \
                yacs==0.1.5 \
                cython==0.29.5 \
                matplotlib==3.0.2 \
                opencv-python==4.0.0.21 \
                mlperf_compliance==0.0.10 \
                torchvision==0.2.2

# install pycocotools
RUN git clone https://github.com/cocodataset/cocoapi.git \
 && cd cocoapi/PythonAPI \
 && git reset --hard ed842bffd41f6ff38707c4f0968d2cfd91088688 \
 && python setup.py build_ext install

# For information purposes only, these are the versions of the packages which we've successfully used:
# $ pip list
# Package              Version            Location
# -------------------- ------------------ -------------------------------------------------
# backcall             0.1.0
# certifi              2018.11.29
# cffi                 1.11.5
# cycler               0.10.0
# Cython               0.29.5
# decorator            4.3.2
# fairseq              0.6.0              /scratch/fairseq
# ipython              7.2.0
# ipython-genutils     0.2.0
# jedi                 0.13.2
# kiwisolver           1.0.1
# maskrcnn-benchmark   0.1                /scratch/mlperf/training/object_detection/pytorch
# matplotlib           3.0.2
# mkl-fft              1.0.10
# mkl-random           1.0.2
# mlperf-compliance    0.0.10
# ninja                1.8.2.post2
# numpy                1.16.1
# opencv-python        4.0.0.21
# parso                0.3.2
# pexpect              4.6.0
# pickleshare          0.7.5
# Pillow               5.4.1
# pip                  19.0.1
# prompt-toolkit       2.0.8
# ptyprocess           0.6.0
# pycocotools          2.0
# pycparser            2.19
# Pygments             2.3.1
# pyparsing            2.3.1
# python-dateutil      2.8.0
# pytorch-quantization 0.2.1
# PyYAML               3.13
# setuptools           40.8.0
# six                  1.12.0
# torch                1.0.0.dev20190225
# torchvision          0.2.1
# tqdm                 4.31.1
# traitlets            4.3.2
# wcwidth              0.1.7
# wheel                0.32.3
# yacs                 0.1.5

object_detection/caffe2/README.md → object_detection/README.md (185 changes: 95 additions & 90 deletions)
@@ -1,90 +1,95 @@

Old contents (object_detection/caffe2/README.md):
# 1. Problem
Object detection and segmentation. Metrics are mask and box mAP.

# 2. Directions
### Steps to configure machine
Standard script.

### Steps to download and verify data
Init and update the submodules in this directory.

Run the provided shell scripts *in this directory*.

### Steps to run and time
Build the docker container.

```
sudo docker build -t detectron .
```

Run the docker container and mount the data appropriately:

```
sudo nvidia-docker run \
    -v /mnt/disks/data/coco/:/packages/detectron/lib/datasets/data/coco \
    -it detectron /bin/bash
```

(replace /mnt/disks/data/coco/ with the data directory)

Run the command:
```
time stdbuf -o 0 \
    python tools/train_net.py --cfg configs/12_2017_baselines/e2e_mask_rcnn_R-50-FPN_1x.yaml \
    --box_min_ap 0.377 --mask_min_ap 0.339 \
    --seed 3 | tee run.log
```

# 3. Dataset/Environment
### Publication/Attribution
Microsoft COCO: Common Objects in Context

### Data preprocessing
Only horizontal flips are allowed.

### Training and test data separation
As provided by MS-COCO (2017 version).

### Training data order
Randomly.

### Test data order
Any order.

# 4. Model
### Publication/Attribution
He, Kaiming, et al. "Mask r-cnn." Computer Vision (ICCV), 2017 IEEE International Conference on. IEEE, 2017.

We use a version of Mask R-CNN with a ResNet50 backbone.

### List of layers
Running the timing script will display a list of layers.

### Weight and bias initialization
The ResNet50 base must be loaded from the provided weights. They may be quantized.

### Loss function
Multi-task loss (classification, box, mask). Described in the Mask R-CNN paper.

Classification: Smooth L1 loss

Box: Log loss for true class.

Mask: per-pixel sigmoid, average binary cross-entropy loss.

### Optimizer
Momentum SGD. Weight decay of 0.0001, momentum of 0.9.

# 5. Quality
### Quality metric
As Mask R-CNN can provide both boxes and masks, we evaluate on both box and mask mAP.

### Quality target
Box mAP of 0.377, mask mAP of 0.339

### Evaluation frequency
Once per epoch, 118k.

### Evaluation thoroughness
Evaluate over the entire validation set. Use the official COCO API to compute mAP.

New contents (object_detection/README.md):

# 1. Problem
Object detection and segmentation. Metrics are mask and box mAP.

# 2. Directions

### Steps to configure machine

1. Check out the MLPerf repository
```
mkdir -p mlperf
cd mlperf
git clone https://github.com/mlperf/training.git
```
2. Install CUDA and Docker
```
source training/install_cuda_docker.sh
```
3. Build the docker image for the object detection task
```
cd training/object_detection/
nvidia-docker build . -t mlperf/object_detection
```

4. Run the docker container and install the code
```
nvidia-docker run -v .:/workspace -t -i --rm --ipc=host mlperf/object_detection \
    "cd mlperf/training/object_detection && ./install.sh"
```
Now exit the docker container (Ctrl-D) to get back to your host.

### Steps to download data
```
# From training/object_detection/
source download_dataset.sh
```

### Steps to run benchmark
```
nvidia-docker run -v .:/workspace -t -i --rm --ipc=host mlperf/object_detection \
    "cd mlperf/training/object_detection && ./run_and_time.sh"
```

# 3. Dataset/Environment
### Publication/Attribution
Microsoft COCO: Common Objects in Context

### Data preprocessing
Only horizontal flips are allowed.
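
For illustration only, a minimal sketch of what a horizontal flip involves for detection data: the image is mirrored and the box x-coordinates are mirrored with it (the reference transform also handles segmentation masks and the exact pixel-coordinate convention):

```
import torch

def horizontal_flip(image, boxes):
    # image: C x H x W tensor; boxes: N x 4 tensor of (x1, y1, x2, y2) pixel coordinates.
    _, _, width = image.shape
    flipped_image = torch.flip(image, dims=[2])  # mirror left-right
    x1, y1, x2, y2 = boxes.unbind(dim=1)
    flipped_boxes = torch.stack([width - x2, y1, width - x1, y2], dim=1)
    return flipped_image, flipped_boxes
```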

### Training and test data separation
As provided by MS-COCO (2017 version).

### Training data order
Randomly.

### Test data order
Any order.

# 4. Model
### Publication/Attribution
He, Kaiming, et al. "Mask r-cnn." Computer Vision (ICCV), 2017 IEEE International Conference on. IEEE, 2017.

We use a version of Mask R-CNN with a ResNet50 backbone.

### List of layers
Running the timing script will display a list of layers.

### Weight and bias initialization
The ResNet50 base must be loaded from the provided weights. They may be quantized.

### Loss function
Multi-task loss (classification, box, mask). Described in the Mask R-CNN paper.

Classification: log loss for the true class.

Box: Smooth L1 loss.

Mask: per-pixel sigmoid, average binary cross-entropy loss.
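
For illustration only, a minimal sketch of how the three terms combine additively into the training loss (function and argument names are hypothetical; the reference implementation in maskrcnn-benchmark computes each term only over sampled proposals):

```
import torch.nn.functional as F

def multi_task_loss(cls_logits, cls_targets, box_preds, box_targets,
                    mask_logits, mask_targets):
    # Classification: log loss (cross-entropy) over the predicted class scores.
    loss_cls = F.cross_entropy(cls_logits, cls_targets)
    # Box regression: smooth L1 on the regression targets of the true class.
    loss_box = F.smooth_l1_loss(box_preds, box_targets)
    # Mask: per-pixel sigmoid with average binary cross-entropy.
    loss_mask = F.binary_cross_entropy_with_logits(mask_logits, mask_targets)
    return loss_cls + loss_box + loss_mask  # total training loss
```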

### Optimizer
Momentum SGD. Weight decay of 0.0001, momentum of 0.9.
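
In PyTorch this corresponds roughly to the sketch below; the model is a stand-in and the learning rate is a placeholder (the actual value and schedule come from the reference configuration):

```
import torch
import torchvision

# Stand-in network for illustration; the benchmark trains maskrcnn-benchmark's Mask R-CNN.
model = torchvision.models.resnet50()
optimizer = torch.optim.SGD(model.parameters(), lr=0.02,
                            momentum=0.9, weight_decay=0.0001)
```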

# 5. Quality
### Quality metric
As Mask R-CNN can provide both boxes and masks, we evaluate on both box and mask mAP.

### Quality target
Box mAP of 0.377, mask mAP of 0.339.

### Evaluation frequency
Once per epoch, i.e. every ~118k training images.

### Evaluation thoroughness
Evaluate over the entire validation set. Use the official COCO API to compute mAP.
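
For reference, a minimal sketch of computing box and mask mAP with the official COCO API (pycocotools); the annotation and results file paths are placeholders:

```
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val.json")  # ground-truth annotations (placeholder path)
coco_dt = coco_gt.loadRes("detections.json")      # model outputs in COCO results format

for iou_type in ("bbox", "segm"):                 # box mAP, then mask mAP
    coco_eval = COCOeval(coco_gt, coco_dt, iou_type)
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()                         # AP@[0.50:0.95] is the headline metric
```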

Submodule caffe2 deleted from 8468b7
Submodule detectron deleted from 80f329

download_dataset.sh
@@ -1,4 +1,26 @@

Old contents:
wget https://s3-us-west-2.amazonaws.com/detectron/coco/coco_annotations_minival.tgz
wget http://images.cocodataset.org/zips/train2014.zip
wget http://images.cocodataset.org/zips/val2014.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2014.zip

New contents:
#!/bin/bash

# Get COCO 2014 data sets
mkdir -p pytorch/datasets/coco
pushd pytorch/datasets/coco

curl -O https://dl.fbaipublicfiles.com/detectron/coco/coco_annotations_minival.tgz
tar xzf coco_annotations_minival.tgz

curl -O http://images.cocodataset.org/zips/train2014.zip
unzip train2014.zip

curl -O http://images.cocodataset.org/zips/val2014.zip
unzip val2014.zip

curl -O http://images.cocodataset.org/annotations/annotations_trainval2014.zip
unzip annotations_trainval2014.zip

# TBD: MD5 verification
# $ md5sum *.zip *.tgz
# f4bbac642086de4f52a3fdda2de5fa2c  annotations_trainval2017.zip
# cced6f7f71b7629ddf16f17bbcfab6b2  train2017.zip
# 442b8da7639aecaf257c1dceb8ba8c80  val2017.zip
# 2d2b9d2283adb5e3b8d25eec88e65064  coco_annotations_minival.tgz

popd
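
The MD5 step above is still marked TBD; as a minimal sketch (the expected hashes for the 2014 archives are not listed in this script, so the value in the usage comment is a placeholder), a checksum could be verified like this:

```
import hashlib

def md5sum(path, chunk_size=1 << 20):
    # Stream the file in chunks so large archives don't need to fit in memory.
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder usage; substitute the real expected checksum for each archive:
# assert md5sum("train2014.zip") == "<expected md5>"
```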