The goal of this project is to develop the tools and processes necessary to provide timely and reliable robot-relative game piece detection to the robot controller via WPILib NetworkTables.
Simply put: if the robot stands still for about a second, this system detects game pieces and reports where they are relative to the robot.
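Detections from the camera are camera-relative; converting them to robot-relative coordinates is a simple planar transform. A minimal sketch, assuming a hypothetical camera mounting offset and yaw (illustrative values, not the team's actual geometry):

```python
import math

# Hypothetical camera mounting, for illustration only:
# offset (meters) from robot center, and yaw (radians).
CAM_OFFSET_X = 0.30   # camera is 0.30 m forward of robot center
CAM_OFFSET_Y = 0.00
CAM_YAW = 0.0         # camera faces straight ahead

def camera_to_robot(x_cam: float, y_cam: float) -> tuple[float, float]:
    """Rotate a camera-relative (x, y) point by the camera yaw, then
    translate by the camera's offset to get robot-relative coordinates."""
    xr = x_cam * math.cos(CAM_YAW) - y_cam * math.sin(CAM_YAW) + CAM_OFFSET_X
    yr = x_cam * math.sin(CAM_YAW) + y_cam * math.cos(CAM_YAW) + CAM_OFFSET_Y
    return (xr, yr)
```

With the assumed zero yaw, a game piece 1 m straight ahead of the camera ends up 1.3 m ahead of the robot center.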
This system supports two primary modes: Development and Competition.
🔧 Development Mode
- The Raspberry Pi does not autorun any scripts.
- SSH into the Raspberry Pi to run scripts for Raspberry Pi setup, model inference, pipeline tuning, etc.
- Use the `oak_recorder` tool to capture video and save it to an attached USB drive for later playback.
🏁 Competition Mode
- The Raspberry Pi automatically launches the `frc4607-spatial-ai` Python script as a systemd service at boot.
- Each match is automatically recorded to the attached USB drive, triggered by start/stop signals received from the robot code.
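A systemd unit along these lines could launch the script at boot; the unit name, user, and script path below are illustrative assumptions, not the project's actual service file:

```ini
[Unit]
Description=FRC 4607 Spatial AI (illustrative sketch)
After=network-online.target

[Service]
Type=simple
User=frc4607
ExecStart=/usr/bin/python3 /home/frc4607/Spatial-AI/src/main.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabling it with `sudo systemctl enable <unit>.service` makes it start on every boot.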
🐞 Debugging
- Use `oak_replay` on a laptop to play recorded video files.
- Inference outputs are viewable during replay (note: spatial data is unavailable without the B&W stereo cameras).
- 🔧 Hardware
- 💻 Software
- 🍓 1. Setting Up the Raspberry Pi 4B
- 📸 2. Gathering the Training Images
- 🧹 3. Preparing the Training Images
- 🧠 4. Training the YOLO Model
- 🚀 5. Running the YOLO Model
- 📂 Project Structure
This project uses the following hardware:

- Luxonis OAK-D Lite
  - Stereo + color camera in one compact device
  - Provides both object detection and 3D location (relative to the camera)
- Raspberry Pi 4B
  - Hosts the OAK-D Lite interface
  - Runs inference and publishes results to NetworkTables
  - Captures video streams to a connected USB drive
⚠️ More powerful host hardware is supported and may be explored in future implementations.
These tools are required:
- 🔗 Python 3 – Core runtime environment on the Pi
- 🔗 pyntcore – NetworkTables client library
- 🔗 DepthAI & SDK – Interface to the OAK-D camera
- 🔗 Luxonis YOLO Converter – Converts YOLOv5/v8 models to `.blob` format
📝 Use a custom Raspberry Pi image with:
- Raspberry Pi OS Lite (64-bit, headless)
- SSH enabled
- Bluetooth and other unused services disabled
- Optional: Read-only filesystem for power resilience
- Preloaded software/scripts
1. Download and install Raspberry Pi OS Lite using Raspberry Pi Imager
   - In "Edit Settings":
     - Hostname: `frc4607`
     - Username: `frc4607`
     - Password: `frc4607`
   - Under "Services":
     - ✅ Enable SSH and password authentication
2. From PowerShell on your PC, run `.\setup_pi.ps1 -User "Your Name" -Email "[email protected]" -Repo "https://github.com/FRC4607/Spatial-AI.git"`
🔗 Raspberry Pi Docs
🔗 Embedded Pi Setup Resource
- Use the robot-mounted OAK-D Lite to capture all data
- Gather a core dataset on the BRIC field:
- Vary lighting, backgrounds, and robot poses
- Supplement the dataset using curated screenshots from match video
🎯 Goal: Create a focused, high-quality dataset
“Don't try to boil the ocean.”
- Annotate – Draw bounding boxes and assign class labels
- Format – Organize in a YOLO-compatible folder layout
📝 Use Roboflow for annotation and export:
🔗 Roboflow - FRC4607 Workspace
🔗 Ultralytics Data Annotation Guide
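YOLO-format label files pair each image with a `.txt` file of `class x_center y_center width height` lines, with all coordinates normalized to [0, 1]. A small parser/validator sketch for one such line (the helper name is hypothetical):

```python
def parse_yolo_label(line: str):
    """Parse one 'class x_center y_center width height' YOLO label line.

    Coordinates must be normalized to [0, 1]; raises ValueError if the
    line is malformed, which makes it handy for sanity-checking exports.
    """
    parts = line.split()
    if len(parts) != 5:
        raise ValueError(f"expected 5 fields, got {len(parts)}")
    class_id = int(parts[0])
    coords = [float(p) for p in parts[1:]]
    if not all(0.0 <= c <= 1.0 for c in coords):
        raise ValueError("coordinates must be normalized to [0, 1]")
    return (class_id, *coords)
```

Running this over every label file after a Roboflow export is a quick way to catch truncated or denormalized annotations before training.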
We use YOLOv5 to train models on our dataset. As data grows throughout the season, we continuously retrain.
⚠️ YOLOv8 is newer and may be adopted in the future if it improves performance.
Training is done using Google Colab, with outputs saved to Google Drive. The GitHub auto-commit feature uses a GitHub token for uploads.
- Create a folder named `Google Colab` at the root of your Google Drive
- Add a file named `github_token.txt` inside that folder
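The notebook needs to read that token before it can push. A minimal sketch, assuming Colab's default Drive mount point (`/content/drive/MyDrive`); the helper name is hypothetical:

```python
from pathlib import Path

# Colab mounts Google Drive at /content/drive/MyDrive after drive.mount().
TOKEN_PATH = Path("/content/drive/MyDrive/Google Colab/github_token.txt")

def load_github_token(path: Path = TOKEN_PATH):
    """Return the stripped token string, or None if the file is missing."""
    return path.read_text().strip() if path.exists() else None
```

Returning `None` instead of raising lets the notebook skip the auto-commit step gracefully when no token is present.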
- Run the cells and monitor progress actively
- Once training finishes, the `.pt` file is saved to the `models/` directory
- Convert the PyTorch model using the Luxonis Model Converter
- Download and extract the `results.zip` to the same directory as your `.pt` file
⚠️ Ignore the deprecation warning and do not use Luxonis HubAI for now; the DepthAI SDK v2 still depends on the older `.blob` format.
🚧 Coming soon: Real-time inference with DepthAI SDK and NetworkTables messaging.
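One way to ship detections over NetworkTables is to flatten them into a single number array per entry, which the robot code can unpack in fixed-width strides. A sketch of just the payload packing (the field layout and ordering are assumptions, not the project's actual schema; a real publisher would push the list via pyntcore):

```python
# Flatten detections into one number array suitable for a NetworkTables entry.
# A real publisher would send `payload` with pyntcore; here we only build it.

def pack_detections(detections):
    """Each detection is (class_id, x, y, z, confidence), distances in meters.

    Returns one flat list:
    [class_id, x, y, z, conf, class_id, x, y, z, conf, ...]
    so the robot code can unpack it in strides of 5.
    """
    payload = []
    for class_id, x, y, z, conf in detections:
        payload.extend([float(class_id), x, y, z, conf])
    return payload
```

A flat array keeps every field of a frame in one atomic NetworkTables update, avoiding torn reads across multiple entries.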
```
spatial-ai/
├── models/          # YOLO model blobs (PyTorch, OpenVINO)
├── notebooks/       # Google Colab training notebooks
├── pi-setup/        # Raspberry Pi setup scripts and image tools
├── resources/       # Figures and documentation resources
├── training_data/   # Annotated YOLO training datasets
├── src/             # Python code for inference and NetworkTables communication
└── README.md
```