To deploy this on the Raspberry Pi:

1. Create a virtual environment:

   python -m venv myenv
   source myenv/bin/activate

2. Install the requirements:

   pip install -r requirements.txt

3. cd into either the pre-trained model directory or the custom transfer learning directory.

4. Run test.py
The code uses a pre-trained MobileNet model to detect objects from the COCO dataset.
Link for the COCO dataset: https://cocodataset.org/#home
Link for the MobileNet TFLite model: https://www.kaggle.com/models/iree/ssd-mobilenet-v2
You can also stream the object detection output, which the VR headset displays.
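As a rough illustration of what test.py does with the model output: SSD MobileNet TFLite models conventionally emit parallel arrays of bounding boxes, class IDs, and confidence scores, which are then filtered by a score threshold. The sketch below shows only that post-processing step (loading and inference via the TFLite interpreter are omitted; the function name and threshold are assumptions, not code from this repo).

```python
# Minimal sketch of SSD-style detection post-processing.
# The model emits parallel arrays: boxes, class IDs, and scores;
# we keep only detections whose score clears a threshold.

def filter_detections(boxes, classes, scores, threshold=0.5):
    """Return (box, class_id, score) triples with score >= threshold."""
    return [
        (box, int(cls), float(score))
        for box, cls, score in zip(boxes, classes, scores)
        if score >= threshold
    ]

# Example with dummy output tensors (boxes are [ymin, xmin, ymax, xmax]):
boxes = [[0.1, 0.1, 0.5, 0.5], [0.2, 0.3, 0.9, 0.8]]
classes = [0, 17]      # COCO label-map indices, e.g. 0 = person
scores = [0.91, 0.32]  # only the first detection passes the threshold
print(filter_detections(boxes, classes, scores))
```

In the real pipeline these three arrays would come from the interpreter's output tensors after invoking the model on a camera frame.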
Here, I followed the TensorFlow tutorial for transfer learning on a custom dataset: https://www.tensorflow.org/lite/models/modify/model_maker/object_detection
The tflite_models directory contains several TensorFlow Lite models to choose from; the best one for now is people_Detection_2. The datasets I trained on:
https://www.kaggle.com/datasets/sbaghbidi/human-faces-object-detection?rvi=1
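Since tflite_models holds several models, a small helper can select one by name, defaulting to the first available model otherwise. This is a sketch under the assumption that the models are stored as .tflite files in that directory; the helper name is hypothetical, not part of this repo.

```python
# Sketch: pick a .tflite model from a directory, preferring
# people_Detection_2 (the model recommended in this README) when present.
from pathlib import Path

def pick_model(model_dir, preferred="people_Detection_2"):
    """Return the preferred .tflite file if present, else the first one found."""
    models = sorted(Path(model_dir).glob("*.tflite"))
    if not models:
        raise FileNotFoundError(f"no .tflite models in {model_dir}")
    for model in models:
        if preferred in model.stem:
            return model
    return models[0]
```

The chosen path can then be passed to the TFLite interpreter in test.py.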
‘Open Images Dataset V7’. Accessed: Feb. 23, 2024. [Online]. Available: https://storage.googleapis.com/openimages/web/visualizer/index.html?type=detection&set=train&c=%2Fm%2F02rdsp
Anytime a new dependency is added (to requirements.txt or via sudo apt-get install), the Docker image needs to be rebuilt by running ./build_image.

To start Docker, run ./run_docker.
To run the ROS 2 node with an actual camera, run ./run_docker_production (make sure the device argument in the file is correct). Once in the container, run ./colcon to build the workspace, then start the node by running ./run_ai.