An AI-powered traffic management system using computer vision to detect vehicles and dynamically control traffic signals.
- Real-time vehicle detection and counting using YOLOv8
- Dynamic traffic light timing based on vehicle density
- Emergency vehicle priority handling
- Multi-lane traffic management
- Visual overlay showing lane boundaries and vehicle statistics
- Configurable camera input (webcam or IP camera)
- Python 3.8+
- OpenCV
- Ultralytics YOLOv8
- NumPy
- PyYAML
- A camera source (laptop webcam or smartphone camera via IP camera app)
- Clone this repository:

  ```bash
  git clone https://github.com/yourusername/smart-traffic-control-system.git
  cd smart-traffic-control-system
  ```

- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- The YOLOv8 model will be automatically downloaded when you first run the system. If you prefer to download it manually, get `yolov8n.pt` from the Ultralytics repository and place it in the `model` directory.
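If you want to confirm the weights load before running the full system, a minimal check along these lines works (the `model/yolov8n.pt` path matches the manual-download location above; adjust it to wherever your config points):

```python
import numpy as np
from ultralytics import YOLO

# Point this at wherever you placed the weights; a bare "yolov8n.pt"
# also works and lets Ultralytics download the file automatically.
model = YOLO("model/yolov8n.pt")

# Run one inference on a blank frame as a smoke test.
results = model(np.zeros((480, 640, 3), dtype=np.uint8), verbose=False)
print("Model loaded; boxes found in a blank frame:", len(results[0].boxes))
```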
All settings are stored in `config.yaml` (an illustrative skeleton is sketched after the list below):
- Camera Settings: Change the camera source, resolution, and FPS
- YOLOv8 Settings: Adjust model path, confidence threshold, and target classes
- Traffic Light Timings: Modify min/max durations and scaling factors
- Lane Definitions: Update the polygon coordinates to match your camera view
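The skeleton below is only an illustration of those four groups; the key names are assumptions, so check the shipped `config.yaml` for the exact ones (lane coordinates use the format shown further down):

```yaml
# Illustrative layout only -- key names may differ from the repository's config.yaml
camera:
  source: 0              # 0 = default webcam, or an IP camera URL
  width: 640
  height: 480
  fps: 30

yolo:
  model_path: model/yolov8n.pt
  confidence_threshold: 0.4
  classes: [2, 3, 5, 7]  # COCO IDs for car, motorcycle, bus, truck

traffic_light:
  min_green: 10          # seconds
  max_green: 60
  yellow: 3
  seconds_per_vehicle: 2 # scaling factor applied to the vehicle count
```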
For the camera source, you can use either:
- Your laptop webcam: Set `camera.source: 0` in the config file
- A smartphone IP camera app:
  - Install an IP camera app (like IP Webcam for Android or EpocCam for iOS)
  - Set `camera.source: "http://your-phone-ip:port/video"` in the config file
The system uses polygon coordinates to define lanes. Update the coordinates in `config.yaml` to match your camera view:
```yaml
lanes:
  left:
    - [100, 480]  # Bottom-left point
    - [280, 100]  # Top-left point
    - [360, 100]  # Top-right point
    - [320, 480]  # Bottom-right point
  right:
    - [320, 480]  # Bottom-left point
    - [360, 100]  # Top-left point
    - [440, 100]  # Top-right point
    - [540, 480]  # Bottom-right point
```
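One quick way to line the coordinates up with your camera is to draw the polygons on a captured frame. The sketch below assumes the `lanes` block sits at the top level of `config.yaml` exactly as shown above:

```python
import cv2
import numpy as np
import yaml

with open("config.yaml") as f:
    config = yaml.safe_load(f)

cap = cv2.VideoCapture(0)  # or your IP camera URL
ok, frame = cap.read()
cap.release()

if ok:
    for name, points in config["lanes"].items():
        pts = np.array(points, dtype=np.int32).reshape((-1, 1, 2))
        cv2.polylines(frame, [pts], isClosed=True, color=(0, 255, 0), thickness=2)
        cv2.putText(frame, name, tuple(points[0]), cv2.FONT_HERSHEY_SIMPLEX,
                    0.7, (0, 255, 0), 2)
    cv2.imshow("Lane check", frame)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
```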
Run the system with:

```bash
python main.py
```

Or specify a custom config file:

```bash
python main.py --config my_custom_config.yaml
```

- Press 'q' to exit the program
- Camera Input: Frames are captured from your specified camera source
- Vehicle Detection: YOLOv8 detects vehicles in each frame
- Lane Assignment: Each detected vehicle is assigned to a lane based on its center point
- Traffic Light Logic:
  - Green light duration is calculated based on vehicle count (see the sketch after this list)
  - Emergency vehicles trigger an immediate light change (after minimum green time)
  - Yellow light provides a transition period before switching
- Visualization: The system displays lane overlays, vehicle counts, and traffic light status
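As a rough sketch of the lane-assignment and green-duration steps above (the function names, defaults, and the use of `cv2.pointPolygonTest` are illustrative, not the actual code in `main.py`):

```python
import cv2
import numpy as np

def assign_lane(box, lanes):
    """Assign a detection to a lane using the center of its bounding box."""
    x1, y1, x2, y2 = box
    center = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    for name, polygon in lanes.items():
        pts = np.array(polygon, dtype=np.int32)
        # pointPolygonTest returns >= 0 when the point is inside or on the edge
        if cv2.pointPolygonTest(pts, center, measureDist=False) >= 0:
            return name
    return None

def green_duration(vehicle_count, min_green=10, max_green=60, seconds_per_vehicle=2):
    """Scale the green phase with the vehicle count, clamped to [min_green, max_green]."""
    return min(max_green, max(min_green, min_green + seconds_per_vehicle * vehicle_count))

# Example using the lane polygons from the configuration section above
lanes = {
    "left":  [[100, 480], [280, 100], [360, 100], [320, 480]],
    "right": [[320, 480], [360, 100], [440, 100], [540, 480]],
}
print(assign_lane((150, 300, 250, 400), lanes))  # -> "left" (the center falls inside that polygon)
print(green_duration(8))                         # -> 26 seconds with these illustrative defaults
```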
- Frame skipping for better performance
- Multithreading separates detection from visualization
- Moving average smoothing for vehicle counts (see the sketch after this list)
- Confidence threshold filtering
- Frame resizing before inference
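Frame skipping and moving-average smoothing combine roughly as in the sketch below; this is illustrative only, and the real loop in this repository may be structured differently:

```python
from collections import deque

FRAME_SKIP = 2   # run detection only on every 3rd frame
WINDOW = 5       # number of recent counts in the moving average

class LaneCounter:
    """Smooths noisy per-frame vehicle counts for one lane with a moving average."""

    def __init__(self, window=WINDOW):
        self.history = deque(maxlen=window)

    def update(self, raw_count):
        self.history.append(raw_count)
        return sum(self.history) / len(self.history)

# Example: noisy raw counts for one lane; detection is skipped on 2 of every 3 frames
raw_counts = [3, 0, 0, 5, 0, 0, 4, 0, 0, 6]
counter = LaneCounter()
for frame_index, raw in enumerate(raw_counts):
    if frame_index % (FRAME_SKIP + 1) != 0:
        continue  # skipped frame: keep using the previous smoothed value instead of re-detecting
    print(frame_index, counter.update(raw))
```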
- Camera Access Issues: Ensure your camera is not being used by another application
- Model Loading Errors: Check that the model file exists in the specified location
- Performance Issues: Adjust the frame skip and resize values in the config to reduce processing load
This project is licensed under the MIT License - see the LICENSE file for details.