This project implements a sensor data fusion pipeline that combines radar and camera data for object detection and tracking. Its main features are:
- Radar point cloud processing and clustering using a custom DBSCAN algorithm
- YOLO-based object detection on camera images
- Sensor calibration and coordinate transformation
- Association of radar clusters with image bounding boxes
- Visualization of fused data on both image and ground planes
Dependencies:

- Python 3.x
- NumPy
- Pandas
- Matplotlib
- OpenCV (cv2)
- PyTorch
- Ultralytics YOLO
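These are standard Python packages; assuming a typical pip-based environment, they can usually be installed with `pip install numpy pandas matplotlib opencv-python torch ultralytics` (`opencv-python` provides `cv2`, and the `ultralytics` package provides the YOLO API).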
To run the pipeline:

1. Set up the required folder structure:
   - Images folder
   - Radar PCD folder
   - Calibration file
2. Update the paths in the `main()` function (a hypothetical example follows these steps):
   - `path_to_images`
   - `path_to_pcd`
   - `calibration_file`
   - YOLO model path
3. Run the main script: `python main.py`
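A minimal sketch of what that configuration might look like. All folder and file names below (including the `yolo_model_path` variable and the `yolov8n.pt` weights file) are placeholders for illustration, not the project's actual defaults:

```python
# Hypothetical folder layout (adjust to your data):
#   data/
#     images/           # camera frames, e.g. frame_0001.png
#     radar_pcd/        # radar point clouds, e.g. frame_0001.pcd
#     calibration.json  # radar/camera calibration parameters
#
# Corresponding path variables inside main():
path_to_images = "data/images"              # folder with camera images
path_to_pcd = "data/radar_pcd"              # folder with radar point cloud files
calibration_file = "data/calibration.json"  # sensor calibration file
yolo_model_path = "yolov8n.pt"              # YOLO weights (placeholder name)
```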
Key functions (illustrative sketches of the clustering and association steps follow this list):

- `my_custom_dbscan`: Custom DBSCAN implementation for radar point clustering
- `radar_to_ground_transfomer`: Transforms radar points to the ground plane
- `radar_to_camera_transformer`: Projects radar points onto the image plane
- `get_association_matrix`: Creates the association matrix between radar clusters and image bounding boxes
- `get_filtered_cases`: Analyzes the different association cases (one-to-one, one-to-many, many-to-one)
- `get_image_visualization`: Visualizes the associated objects on the image plane
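For reference, here is a minimal DBSCAN sketch in the spirit of `my_custom_dbscan`. It is an illustrative implementation of the standard algorithm, not the project's actual code, and the parameter names (`eps`, `min_samples`) are assumptions:

```python
import numpy as np

def dbscan(points, eps=1.0, min_samples=3):
    """Minimal DBSCAN: returns one label per point (-1 = noise).

    points: (N, 2) or (N, 3) array of radar point coordinates.
    """
    n = len(points)
    labels = np.full(n, -1)  # -1 means noise / not yet assigned
    cluster_id = 0
    # Pairwise distances; acceptable for the small point counts typical of a radar scan
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(dists[i] <= eps) for i in range(n)]

    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_samples:
            continue  # already in a cluster, or not a core point
        labels[i] = cluster_id
        queue = list(neighbors[i])
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster_id
                if len(neighbors[j]) >= min_samples:
                    queue.extend(neighbors[j])  # expand only from core points
        cluster_id += 1
    return labels
```

On a toy input such as `dbscan(np.array([[0, 0], [0.2, 0.1], [5, 5]]), eps=0.5, min_samples=2)`, this labels the first two points as one cluster and the third as noise.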
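Similarly, a rough sketch of how radar-to-image projection and cluster-to-box association might work, in the spirit of `radar_to_camera_transformer` and `get_association_matrix`. The calibration format (a single 3x4 projection matrix) and the point-in-box counting criterion are assumptions for illustration, not necessarily what the project uses:

```python
import numpy as np

def project_radar_to_image(points_xyz, P):
    """Project 3D radar points into the image with a 3x4 projection matrix P.

    points_xyz: (N, 3) points, either already in the camera frame or in the
                radar frame if P folds in the extrinsic calibration.
    Returns (N, 2) pixel coordinates.
    """
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # homogeneous coords
    uvw = pts_h @ P.T                                               # (N, 3)
    return uvw[:, :2] / uvw[:, 2:3]                                 # perspective divide

def association_matrix(cluster_pixels, boxes):
    """Count how many projected points of each radar cluster fall inside each box.

    cluster_pixels: list of (Mi, 2) pixel arrays, one per radar cluster
    boxes: (K, 4) array of [x1, y1, x2, y2] image bounding boxes
    Returns an (num_clusters, K) matrix of point-in-box counts.
    """
    A = np.zeros((len(cluster_pixels), len(boxes)), dtype=int)
    for i, pix in enumerate(cluster_pixels):
        for j, (x1, y1, x2, y2) in enumerate(boxes):
            inside = (pix[:, 0] >= x1) & (pix[:, 0] <= x2) & \
                     (pix[:, 1] >= y1) & (pix[:, 1] <= y2)
            A[i, j] = int(inside.sum())
    return A
```

From such a matrix, the one-to-one, one-to-many, and many-to-one cases handled by `get_filtered_cases` can be read off from how many non-zero entries appear in each row and column.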
The script generates visualizations showing:
- Fused data on the image plane with bounding boxes and radar points
- Top-down view of objects on the ground plane
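As a rough illustration of how such a two-panel view could be produced (the function and argument names below are made up and do not reflect the project's actual `get_image_visualization` code):

```python
import cv2
import matplotlib.pyplot as plt
import numpy as np

def show_fused(image_bgr, boxes, cluster_pixels, cluster_ground_xy):
    """Draw boxes and projected radar points on the image, plus a top-down
    ground-plane scatter of the radar clusters (illustrative only)."""
    vis = image_bgr.copy()
    for x1, y1, x2, y2 in np.asarray(boxes, dtype=int):
        cv2.rectangle(vis, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
    for pix in cluster_pixels:                       # one (Mi, 2) pixel array per cluster
        for u, v in np.asarray(pix, dtype=int):
            cv2.circle(vis, (int(u), int(v)), 3, (0, 0, 255), -1)

    _, (ax_img, ax_top) = plt.subplots(1, 2, figsize=(12, 5))
    ax_img.imshow(cv2.cvtColor(vis, cv2.COLOR_BGR2RGB))  # image-plane view
    ax_img.set_title("Image plane")
    ax_img.axis("off")
    for xy in cluster_ground_xy:                     # one (Mi, 2) ground-plane array per cluster
        ax_top.scatter(xy[:, 0], xy[:, 1], s=10)     # top-down view
    ax_top.set_title("Ground plane (top-down)")
    ax_top.set_xlabel("x [m]")
    ax_top.set_ylabel("y [m]")
    ax_top.set_aspect("equal")
    plt.show()
```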
This project is designed for research and development purposes in the field of multi-sensor fusion for autonomous systems.