BYD Autonomous Driving Perception - Drivable Area Segmentation

This repository summarizes my work at BYD Engineering Research Institute (2022.07–2023.08),
focusing on autonomous driving perception models with an emphasis on drivable area segmentation.


🌟 Highlights

  • Built drivable area segmentation models based on DeepLabV3
  • Modified the backbone (ResNet-50/101, MobileNetV2) and ASPP module for fine-grained road feature extraction
  • Designed annotation guidelines for diverse urban and highway scenarios
  • Contributed to labeling 200k+ images from BDD100K and in-house 8MP fleet data
  • Developed scripts to convert and validate annotations, reducing the annotation error rate to <1%
  • Evaluated comprehensively under varied weather, lighting, and road conditions
  • Improved accuracy from ~90% to 95%+ (mIoU ≈ 0.93, mPA ≈ 0.97)

πŸ“‚ Project Structure

  • models/ – Modified DeepLabV3 and backbone networks
  • data/ – Dataset handling, preprocessing, annotation tools
  • training/ – Training and evaluation scripts
  • experiments/ – Configurations, logs, results, and visualizations
  • deployment/ – Export to ONNX/TensorRT for real-time inference
  • docs/ – Documentation, dataset guidelines, evaluation reports
  • docker/ – Containerization for reproducibility

πŸš€ Quick Start

Environment

git clone https://github.com/lekang2/cv_auto.git
cd cv_auto
pip install -r requirements.txt

Training

python training/train.py --config experiments/configs/city.yaml

Evaluation

python training/eval.py --checkpoint checkpoints/model_best.pth

πŸ“Š Evaluation

  • Metrics: mIoU, Pixel Accuracy (PA), Panoptic Quality (PQ)
  • Model achieved mIoU ≈ 0.93, mPA ≈ 0.97
  • Robust performance across different weather, lighting, and road conditions
  • Visual examples available in experiments/results/
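The two headline metrics can be computed from a per-pixel confusion matrix; a small self-contained sketch (the tiny ground-truth/prediction arrays are made-up illustration data):

```python
import numpy as np

def confusion(pred, gt, num_classes):
    # Per-pixel confusion matrix; rows = ground truth, cols = prediction.
    idx = num_classes * gt.reshape(-1) + pred.reshape(-1)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def miou_and_pa(cm):
    tp = np.diag(cm)
    iou = tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp)  # per-class intersection over union
    pa = tp.sum() / cm.sum()                           # overall pixel accuracy
    return iou.mean(), pa

gt   = np.array([[0, 0, 1, 1]])   # toy 1x4 label map
pred = np.array([[0, 1, 1, 1]])
miou, pa = miou_and_pa(confusion(pred, gt, num_classes=2))
print(round(miou, 4), round(pa, 4))  # 0.5833 0.75
```

Accumulating the confusion matrix over the whole validation set before averaging gives the dataset-level mIoU rather than a per-image mean.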

πŸ› οΈ Tech Stack

  • Language: Python
  • Framework: PyTorch
  • Deployment: Docker, TensorRT, ONNX
  • Tools: Labelme (annotation), Git
  • Datasets: BDD100K, BYD in-house 8MP dataset

πŸ’‘ Key Learnings

  • Data defines the upper bound: dataset scale, diversity, and balance determine the model ceiling
  • Annotation quality matters: automated validation scripts reduce human errors to <1%
  • Engineering focus: training logs, reproducibility, and deployment pipelines are as important as accuracy
  • Iterative improvement: data expansion, backbone tuning, and deployment optimization together boosted accuracy from ~90% to 95%+
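An annotation validation check of the kind mentioned above can be sketched as a few cheap consistency rules; this is a hypothetical simplification (the allowed class ids and the mask format are assumptions, not the actual validation scripts):

```python
import numpy as np

ALLOWED_IDS = {0, 1}  # background, drivable area (class ids are an assumption)

def validate_mask(mask, image_shape):
    """Return a list of human-readable problems; empty list means the mask passes."""
    problems = []
    if mask.shape != image_shape[:2]:
        problems.append(f"size mismatch: mask {mask.shape} vs image {image_shape[:2]}")
    unknown = set(np.unique(mask)) - ALLOWED_IDS
    if unknown:
        problems.append(f"unknown class ids: {sorted(unknown)}")
    return problems

# Toy data: one valid mask, one with an out-of-range class id.
img  = np.zeros((4, 6, 3), dtype=np.uint8)
good = np.zeros((4, 6), dtype=np.uint8)
bad  = np.full((4, 6), 7, dtype=np.uint8)

print(validate_mask(good, img.shape))  # []
print(validate_mask(bad, img.shape))
```

Running such checks in batch over every exported mask, and rejecting files with a non-empty problem list, is one straightforward way to push the annotation error rate down.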
