SeeCucumbers


Table of Contents
  1. About the Project
  2. Getting Started
  3. Data Preparation
  4. Training and Validation
  5. Detection
  6. Evaluation
  7. Contact
  8. Acknowledgements
  9. Citations

About the Project

This is a companion repository to the journal paper 'SeeCucumbers: Using Deep Learning and Drone Imagery to Detect Sea Cucumbers on Coral Reef Flats' (Drones, 2021; see Citations below). The work investigates sea cucumber detection in drone images using YOLOv3. The Keras implementation of the YOLOv3 architecture is adapted from qqwweee/keras-yolo3.

A sample detection result: [Result image]

Getting Started

  1. The test environment dependencies are

    • Python 3.6
    • Keras 2.2.4
    • TensorFlow 1.13.1

    The full list of dependencies is in requirements.txt.
  2. Download the repo with the following command or from this webpage:

	git clone https://github.com/joanlyq/SeeCucumbers.git
  3. Download the YOLOv3 pretrained weights.

  4. If you only want to test detection, download the final weights to model_data/ and start from Detection.
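
If you prefer to script the download of the pretrained weights (step 3 above), a minimal sketch is below. The URL is the standard Darknet release of the YOLOv3 weights; it is an assumption here rather than a link taken from this repo:

	# download_weights.py -- hypothetical helper, not part of this repo.
	# Fetches the standard Darknet YOLOv3 weights (~240 MB) into the current directory.
	import urllib.request

	WEIGHTS_URL = "https://pjreddie.com/media/files/yolov3.weights"  # assumed canonical Darknet release

	urllib.request.urlretrieve(WEIGHTS_URL, "yolov3.weights")
	print("saved yolov3.weights")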

Data Preparation

  • Crop the raw images into tiles that the default YOLOv3 model can be trained and run on:
	python crop_img.py

Note: if you want to train the model on your own images, copy your image collection into image/raw and adjust the crop resolution to match your images (a sketch of this tiling is shown below).
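
For reference, the cropping step tiles each large raw image into fixed-size patches. A minimal sketch of that idea follows; the 608x608 tile size, folder names, and use of Pillow are illustrative assumptions, not the exact logic of crop_img.py:

	# crop_sketch.py -- illustrative tiling only; see crop_img.py for the actual implementation.
	from pathlib import Path
	from PIL import Image

	TILE = 608  # assumed crop size compatible with the default YOLOv3 input

	Path("image/cropped").mkdir(parents=True, exist_ok=True)
	for path in Path("image/raw").glob("*.jpg"):
	    img = Image.open(path)
	    w, h = img.size
	    # Slide a TILE x TILE window over the raw image and save each patch.
	    for top in range(0, h - TILE + 1, TILE):
	        for left in range(0, w - TILE + 1, TILE):
	            patch = img.crop((left, top, left + TILE, top + TILE))
	            patch.save(f"image/cropped/{path.stem}_{left}_{top}.jpg")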

  • Convert the Darknet yolov3.weights file to the .h5 format used by the keras-yolo3 implementation:
	python convert.py -w yolov3.cfg yolov3.weights model_data/pretrained-COCO.h5

Training and Validation

Once the pre-trained weights are converted and images are cropped, you can start training by simply running:

	python train.py

The result will be saved in model_data/ as sc_final.h5.

  • Hyperparameters (learning rate, training from scratch, etc.) can be tuned in train.py.
  • Custom anchor boxes can be generated with kmeans.py (see the sketch below).
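
Anchor generation clusters the labelled box dimensions. A minimal sketch of k-means over box widths and heights with the usual 1 - IoU distance follows; it is not the repo's kmeans.py and assumes the boxes have already been parsed from the annotations:

	# anchor_sketch.py -- illustrative k-means on box sizes; not the repo's kmeans.py.
	import numpy as np

	def iou(boxes, centroids):
	    # boxes: (N, 2) array of widths/heights; centroids: (K, 2).
	    inter = np.minimum(boxes[:, None, 0], centroids[None, :, 0]) * \
	            np.minimum(boxes[:, None, 1], centroids[None, :, 1])
	    union = boxes[:, None, 0] * boxes[:, None, 1] \
	          + centroids[None, :, 0] * centroids[None, :, 1] - inter
	    return inter / union

	def kmeans_anchors(boxes, k=9, iters=100):
	    boxes = np.asarray(boxes, dtype=float)
	    centroids = boxes[np.random.choice(len(boxes), k, replace=False)]
	    for _ in range(iters):
	        assign = np.argmax(iou(boxes, centroids), axis=1)  # nearest cluster = highest IoU
	        for j in range(k):
	            if np.any(assign == j):
	                centroids[j] = boxes[assign == j].mean(axis=0)
	    return centroids[np.argsort(centroids[:, 0] * centroids[:, 1])]  # small to large

Here boxes would be the (width, height) pairs of the labelled sea cucumbers, in pixels of the cropped images.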

Note: the training dataset contains 6000 cropped images (80% for training, 20% for validation), which gave the best results so far. If you'd like to test different training dataset sizes, change step accordingly in training_set_generator.py.

Detection

After the training is done, run:

	python detect.py
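
detect.py wraps a keras-yolo3-style detector. If you would rather call it from your own script, a minimal sketch along the lines of the qqwweee/keras-yolo3 interface is below; the YOLO constructor arguments, file paths, and detect_image method are assumptions about this repo's yolo.py, not a documented API:

	# detect_sketch.py -- illustrative use of a keras-yolo3-style YOLO class; interface assumed.
	from PIL import Image
	from yolo import YOLO  # assumed to follow qqwweee/keras-yolo3

	detector = YOLO(model_path="model_data/sc_final.h5",        # trained weights from train.py
	                anchors_path="model_data/yolo_anchors.txt", # assumed anchor file
	                classes_path="model_data/classes.txt",      # assumed class list
	                score=0.3)                                   # assumed detection threshold

	image = Image.open("image/cropped/example.jpg")   # hypothetical test tile
	result = detector.detect_image(image)             # returns the image with boxes drawn
	result.save("example_detected.jpg")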

Evaluation

The performance was evaluated using mean Average Precision (mAP), adopted from ashish-roopan/mAP, on 608 cropped images that were never used for training or validation.

For the mAP calculation, change annotation_path in detect.py and score in yolo.py according to the comments in those files.
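
For context, mAP averages the per-class average precision, where AP is the area under the precision-recall curve built from score-ranked detections. A compact sketch of that calculation follows; it is illustrative only and not the ashish-roopan/mAP code used for the reported numbers:

	# ap_sketch.py -- illustrative AP calculation; not the evaluation code used in the paper.
	import numpy as np

	def average_precision(scores, is_true_positive, num_ground_truth):
	    """AP as the area under the precision-recall curve (all-point interpolation)."""
	    order = np.argsort(-np.asarray(scores, dtype=float))   # rank detections by confidence
	    tp = np.asarray(is_true_positive, dtype=float)[order]
	    cum_tp = np.cumsum(tp)
	    precision = cum_tp / np.arange(1, len(tp) + 1)
	    recall = cum_tp / num_ground_truth
	    # Make precision monotonically non-increasing, then integrate over recall.
	    for i in range(len(precision) - 2, -1, -1):
	        precision[i] = max(precision[i], precision[i + 1])
	    return float(np.sum(np.diff(np.concatenate(([0.0], recall))) * precision))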

Contact

If you have any questions about this repo, please get in touch via email or Twitter.

Acknowledgements

I would like to thank Dr. Karen Joyce and Dr. Stephanie Duce for bringing me into this project. I would also like to thank Todd McNeill for his help in collecting the drone imagery (the full image collection can be found here), and Dr. Jane Williamson and the other volunteers for their help in labelling the dataset.

Citations

If you use the SeeCucumbers dataset in your work, please cite it as:

Li, J.Y.Q.; Duce, S.; Joyce, K.E.; Xiang, W. SeeCucumbers: Using Deep Learning and Drone Imagery to Detect Sea Cucumbers on Coral Reef Flats. Drones 2021, 5, 28. https://doi.org/10.3390/drones5020028

BibTeX

@article{drones5020028,
AUTHOR = {Li, Joan Y. Q. and Duce, Stephanie and Joyce, Karen E. and Xiang, Wei},
TITLE = {SeeCucumbers: Using Deep Learning and Drone Imagery to Detect Sea Cucumbers on Coral Reef Flats},
JOURNAL = {Drones},
VOLUME = {5},
YEAR = {2021},
NUMBER = {2},
ARTICLE-NUMBER = {28},
URL = {https://www.mdpi.com/2504-446X/5/2/28},
ISSN = {2504-446X},
DOI = {10.3390/drones5020028}
}