In Part 1 of the tutorial, we learned how to create our Scene in Unity Editor.
In Part 2 of the tutorial, we learned:
- How to equip the camera for the data collection
- How to set up labelling and label configurations
- How to create your own Randomizer
- How to add our custom Randomizer
In Part 3 of the tutorial, we learned:
- How to collect a large dataset of RGB images and the corresponding poses of the target and the drone.
- How to use that data to train a machine learning model to predict the target's and the drone's positions from images taken by our camera.
In this part, we will set up the gRPC connection used to communicate between our Python model and our Unity environment.
Steps included in this part of the tutorial:
Table of Contents
Here you have two options for the model:
- To save time, you may use the model we have trained. Download this `Drone_pose_estimation_model.tar` file, which contains the pre-trained model weights.
- You can also use the model you trained in Part 3. However, be sure to rename your model to `Drone_pose_estimation_model.tar`, as the script that calls the model expects this name.
- Navigate to the `drone-pose-estimation-navigation/inference` folder.
- Run `pip install -r requirements.txt`.
- Based on the model chosen in Add the Pose Estimation Model, run a Python process that exposes a service API over gRPC:
`python server.py`
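As a rough illustration of the service API that `server.py` exposes, the gRPC interface could be described by a proto definition along these lines. Note that the service, RPC, and message names below are hypothetical, not the project's actual `.proto`:

```proto
syntax = "proto3";

// Hypothetical service: receives an RGB screenshot from Unity,
// returns the predicted translations of the drone and the target.
service PoseEstimationService {
  rpc EstimatePose (PoseEstimationRequest) returns (PoseEstimationResponse);
}

message PoseEstimationRequest {
  bytes image = 1;  // encoded bytes of the camera screenshot
}

message PoseEstimationResponse {
  // Predicted (x, y, z) translations.
  repeated float drone_translation = 1;
  repeated float target_translation = 2;
}
```

Unity acts as the gRPC client and the Python process as the server, which is why the server must be running before the simulation is started.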
- Open the Unity project as instructed in Part 2.
- Select the `SimulationScenario` GameObject, and uncheck the `Training` flag in the `Pose Estimation Scenario` component in the Inspector, as shown below:
- New Environment: creates a new environment based on the Randomizers.
- Start Pose Estimation: sends a screenshot of the current scene to the gRPC API exposed in step 2, and receives back a prediction of the drone and target translations.
- Start Navigate: the predicted translation of the target is then used by the NavMesh module to navigate the drone to the target.
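Conceptually, each Start Pose Estimation click is one request/response round trip: the screenshot bytes go out, and two 3D translations (drone and target) come back. The project encodes these as protobuf messages over gRPC; purely to illustrate the payload shape, here is a stdlib-only sketch that packs and unpacks such a response as six floats (all names here are illustrative, not the project's API):

```python
import struct

def pack_response(drone_xyz, target_xyz):
    """Pack two (x, y, z) translations into a compact little-endian binary payload."""
    return struct.pack("<6f", *drone_xyz, *target_xyz)

def unpack_response(payload):
    """Recover the two (x, y, z) translations from the binary payload."""
    values = struct.unpack("<6f", payload)
    return values[:3], values[3:]

# Example round trip: 6 float32 values -> 24 bytes on the wire.
drone = (0.5, 1.25, -2.0)
target = (3.0, 0.0, 4.5)
payload = pack_response(drone, target)
drone_out, target_out = unpack_response(payload)
```

In the real pipeline, protobuf handles this serialization automatically from the message definitions, so no manual packing is needed.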