This code performs 3D monocular sparse reconstruction from video frames (or image frames with matching or overlapping areas) and their corresponding camera poses.
It also provides code for calibrating your camera, and for recording images with their corresponding camera positions using ArUco markers.
Requires Python 3.11:

```
conda create --name reconstruction python=3.11
conda activate reconstruction
```

The dependencies listed in requirements.txt are: opencv-contrib-python==4.7.0.72, matplotlib==3.7.1, torch==2.0.0, scikit-surgerycore==0.6.10, pandas==1.5.3, plotly==5.13.1 and pytest.
Install them with:

```
pip3 install -r requirements.txt
```

Alternatively, install the dependencies as follows:

```
pip3 install opencv-contrib-python matplotlib torch scikit-surgerycore
```

For visualisation:

```
pip3 install pandas plotly
```

- open3d (0.15.1) -> only needed if you want to visualise with Open3D
Before performing any 3D reconstruction or recording videos, you will need to calibrate the camera you will use to record the data.
The code for calibrating a camera is provided in calibration/calibration.py.
For more information on camera calibration, see this scikit-surgery tutorial. TO-DO - add calibration code, images of process and detailed instructions
When recording your data, you need to store your camera's position in space. For this, you can use ArUco markers.
First you will need to generate and print an ArUco board, which is used to work out the relative camera positions. You can do this with:

```
python aruco_board_creation.py
```

You can change the size and number of markers as you see fit. Once you have your ArUco board, print it and check that it is the size you specified.
TODO- CAN ADD PIC HERE
Now that you have your ArUco board, you can record a video of your scene using record_video_and_pose_aruco.py. Don't forget to place the board somewhere visible in the scene so that the poses can also be recorded!

**Note**: if you have changed the ArUco board parameters in the previous step, you will have to change them in this file as well.

```
python record_video_and_pose_aruco.py
```
- Clone the git repo and install all of the above requirements.
- Download the example data and place it in a folder called assets/random.
- Open reconstruction.py.
- Choose the parameters needed for the reconstruction (see next section).
- Run reconstruction.py:

```
python reconstruction.py
```
After running reconstruction.py, your point cloud's coordinates and RGB colours will be saved under "reconstructions/<chosen-triangulation-method>/<chosen-tracking-method>".
In order to run the reconstruction, you will need to choose the following parameters:
Choose the data folder and subfolder containing the data you want to reconstruct. This should be structured as follows:
3D_Reconstruction
│
├── assets
│ └── <type>
│ └── <folder>
│ ├── images
│ │ └── 00000000.png
│ │ └── XXXXXXXX.png
│ │ └── XXXXXXXX.png
│ │ └── ...
│ └── rvecs.npy
│ └── tvecs.npy
├── ...
├── reconstruction.py
└── README.md
- type: the folder directly under assets
- folder: the folder directly under type
Choose a method of feature matching ('superglue'/'sift'/'manual'). The following are the options for the matching_method argument:
Matches points between images using SuperPoint & SuperGlue:
- Website: psarlin.com/superglue

Note that this will save the feature matches under a folder called outputs, so if you run feature matching on the same data with SuperGlue in the future, it will load the saved matches instead of computing them again. If you would like to overwrite the feature matches every time, you can change this under the SuperGlue parameters defined in match_pairs.py. You can also change any other parameters there!

To get SuperGlue feature matches without running the reconstruction, you can run match_pairs.py directly; the matches will appear under the 'outputs' folder. Remember to edit the parameters.
This will match your features using SIFT. It won't save the features to a folder.
This will let you manually label keypoint pairs. When you run reconstruction.py, a window will pop up showing each image pair as subplots. You can then label the images by simply clicking a point in the left image and then the corresponding point in the right image (or vice versa). Alternatively, you can click all the points in the left image followed by all the points in the right image, as long as the order in which you click the points is the same in both images.
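The click-ordering rule above boils down to: the i-th point clicked in the left image is paired with the i-th point clicked in the right image, whatever the interleaving. A sketch of that pairing logic, assuming the two images sit side by side in one figure (pair_clicks is a hypothetical helper, not code from the repo):

```python
def pair_clicks(clicks, image_width):
    """Turn clicks on a side-by-side figure into (left, right) pairs.

    clicks: (x, y) positions in the combined figure; points with
    x < image_width belong to the left image. The i-th left click is
    matched with the i-th right click, so alternating clicks and
    "all left, then all right" both give the same correspondences.
    """
    left = [(x, y) for x, y in clicks if x < image_width]
    right = [(x - image_width, y) for x, y in clicks if x >= image_width]
    if len(left) != len(right):
        raise ValueError("unequal number of points clicked in each image")
    return list(zip(left, right))
```

For example, with images 100 px wide, clicking (10, 5), (110, 6), (20, 30), (130, 31) yields the pairs ((10, 5), (10, 6)) and ((20, 30), (30, 31)), the same result as clicking both left points first.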
TODO- CAN ADD PIC HERE
- The method used for triangulation.
- The tracking method used, e.g. EM or ArUco.
- The rate at which frames are taken from the folder: if 1, all frames are used; if 2, every other frame, and so on.
- The paths where the intrinsics and distortion matrices are stored, respectively.
To visualise the results, open plot_reconstruction.py and change the type, folder and method to match those you used in reconstruction.py (or whichever reconstruction you want to visualise). Then run plot_reconstruction.py:

```
python plot_reconstruction.py
```
Run the setup file so that the modules can be imported:

```
pip install -e .
```

Then run pytest inside the tests folder:

```
cd tests
pytest
```