
FRDC's method for Road Damage Detection Competition

This repository contains the code and models for our method of road damage detection, submitted for the competition. The code is structured into five main directories, each containing various models and tools used in the detection and processing pipeline.

About FRDC

FRDC (Fujitsu Research & Development Center Co., Ltd.), homepage: https://www.fujitsu.com/cn/en/about/local/subsidiaries/frdc/
Our research fields cover Artificial Intelligence Technologies, Converging Technologies, Data & Security Technologies, and Network Technologies. We look forward to contributing to Fujitsu's purpose – "to make the world more sustainable by building trust in society through innovation" – through research and development in these cutting-edge technologies.

Directory Structure

  • Co-DERT: Contains model configuration and training scripts based on MMDetection 3.3.0.
  • RTMDet: Includes configuration files and training scripts for RTMDet, also based on MMDetection 3.3.0.
  • YOLOv10: Implements the YOLOv10 model, based on the latest version of Ultralytics.
  • tools: Utility scripts for processing images, converting annotations, and exporting models to TensorRT.
  • annotations: Contains the dataset splits with COCO-style annotations and pseudo labels.
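The COCO-style annotation files in annotations follow the standard COCO object-detection layout. A minimal sketch of that structure is below; the field names come from the COCO format, while the file name and the road-damage category names (D00/D10/D20/D40) are illustrative assumptions about this dataset, not taken from the repository:

```python
import json

# Minimal COCO-style detection annotation file (illustrative sketch;
# the image name and category names are assumptions, not repo contents).
coco = {
    "images": [
        {"id": 1, "file_name": "Japan_000001.jpg", "width": 600, "height": 600},
    ],
    "annotations": [
        # bbox is [x, y, width, height] in pixels; iscrowd is 0 for plain boxes
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [120.0, 340.0, 80.0, 45.0],
         "area": 80.0 * 45.0, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "D00"},
        {"id": 2, "name": "D10"},
        {"id": 3, "name": "D20"},
        {"id": 4, "name": "D40"},
    ],
}

# Serialize the way a train.json split file would be written
text = json.dumps(coco, indent=2)
```

Scripts in tools such as txt2json.py produce files of this shape, which MMDetection's CocoDataset loaders consume directly.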

Dependencies

  • MMDetection 3.3.0: Used for the Co-DERT and RTMDet models.
  • Ultralytics (latest version): Used for YOLOv10.
  • TensorRT: For model export and inference acceleration.
  • Other required libraries are listed in requirements.txt.

Tools Overview

The tools directory contains the following scripts:

  • crop_image.py: Crops images based on specified coordinates and generates the corresponding annotations.
  • export_tensorrt.py: Converts PyTorch .pt weight files to TensorRT .engine files, performs inference, and saves the results to a file.
  • inference_script.py: Runs inference on input images and outputs the results to a file.
  • txt2json.py: Converts annotation files from TXT format to JSON.
  • voc2txt.py: Converts VOC format annotations to TXT.
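As an illustration of what a script like crop_image.py has to do, cropping an image region means shifting every bounding box into the crop's coordinate frame, clipping it to the crop, and dropping boxes that fall outside. The sketch below shows that geometry only; the function name, box format, and min_area threshold are assumptions for illustration, not the script's actual interface:

```python
def crop_boxes(boxes, crop_x, crop_y, crop_w, crop_h, min_area=1.0):
    """Shift COCO-style [x, y, w, h] boxes into a crop's coordinate frame,
    clipping to the crop and dropping boxes that end up (almost) empty."""
    out = []
    for x, y, w, h in boxes:
        # shift the box into the crop's frame
        x1, y1 = x - crop_x, y - crop_y
        x2, y2 = x1 + w, y1 + h
        # clip to the crop boundaries
        x1, y1 = max(x1, 0.0), max(y1, 0.0)
        x2, y2 = min(x2, float(crop_w)), min(y2, float(crop_h))
        nw, nh = x2 - x1, y2 - y1
        if nw > 0 and nh > 0 and nw * nh >= min_area:
            out.append([x1, y1, nw, nh])
    return out

# A 100x100 crop starting at (50, 50): the first box is shifted into the
# crop, the second lies entirely outside the crop and is dropped.
result = crop_boxes([[60, 60, 20, 20], [0, 0, 10, 10]], 50, 50, 100, 100)
```

The same clipping logic is what keeps cropped annotation files consistent with the cropped images.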

Usage Instructions

Inference

Please download the pre-trained weights first from https://huggingface.co/WangFangjun/FRDC-RDD.

To get the dense outputs of Co-DERT and RTMDet, run the following script in the corresponding folder:

python infer.py \
    --config <config_path> \
    --ckpt <ckpt_path> \
    --img_dir <image_dir> \
    --out_dir <output_dir>

Edit infer.py for more configuration settings.

For the final results, use inference_script.py to run inference on a set of images:

python tools/inference_script.py --model-file <model_weight (.pt/.onnx/.engine)> --source-path <path_to_images> --output-file <path_to_output> --engine <whether to export the .pt weights to a .engine file for inference>

Export to TensorRT

To export a PyTorch model to TensorRT and run inference:

python tools/export_tensorrt.py --model-file <model_weight (.pth/.onnx/.engine)> --source-path <path_to_images> --output-file <path_to_output>

Training

To train models using MMDetection:

  1. Navigate to the Co-DERT or RTMDet folder.
  2. Modify the configuration files as needed.
  3. Run the training script:
    sh dist_train.sh configs/<your_config>.py <GPU_NUM> --work-dir <your_work_dir>
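Step 2's config edits are ordinary MMDetection 3.x config overrides: a small Python file that inherits from a base config and overrides dataset paths and schedule. The fragment below is a hedged sketch of that pattern; the base config name, data paths, and numbers are placeholders, not this repository's actual configs:

```python
# Illustrative MMDetection 3.x config override (placeholders throughout;
# not a file from this repository).
_base_ = ['./rtmdet_l_8xb32-300e_coco.py']

data_root = 'data/rdd/'
train_dataloader = dict(
    batch_size=8,
    dataset=dict(
        data_root=data_root,
        ann_file='annotations/train.json',
        data_prefix=dict(img='images/train/')))

max_epochs = 100
train_cfg = dict(max_epochs=max_epochs, val_interval=10)
```

Passing such a file to dist_train.sh lets MMDetection merge the overrides onto the inherited base config.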

For YOLOv10, training instructions can be found in the YOLOv10/README.md file.

Acknowledgements

This project is based on the following repositories: MMDetection and Ultralytics (YOLOv10).

About

Implementation of FRDC's method for Road Damage Detection challenge (https://orddc2024.sekilab.global/)
