A generic C++ library for single- and multi-system vision-based navigation, with multi-sensor fusion capabilities for thermal, range, solar, and inertial measurements.
This is the code for the paper Data-Efficient Collaborative Decentralized Thermal-Inertial Odometry (PDF) by Vincenzo Polizzi, Robert Hewitt, Javier Hidalgo-Carrió, Jeff Delaune, and Davide Scaramuzza. For an overview of our method, check out our webpage.
If you use any of this code, please cite the following publication:
@ARTICLE{Polizzi22RAL,
author={Polizzi, Vincenzo and Hewitt, Robert and Hidalgo-Carrió, Javier and Delaune, Jeff and Scaramuzza, Davide},
journal={IEEE Robotics and Automation Letters},
title={Data-Efficient Collaborative Decentralized Thermal-Inertial Odometry},
year={2022},
volume={7},
number={4},
pages={10681-10688},
doi={10.1109/LRA.2022.3194675}
}
We propose a system solution to achieve data-efficient, decentralized state estimation for a team of flying robots using thermal images and inertial measurements. Each robot can fly independently, and exchange data when possible to refine its state estimate. Our system front-end applies an online photometric calibration to refine the thermal images so as to enhance feature tracking and place recognition. Our system back-end uses a covariance-intersection fusion strategy to neglect the cross-correlation between agents so as to lower memory usage and computational cost. The communication pipeline uses a Vector of Locally Aggregated Descriptors (VLAD) to construct a request-response policy that requires low bandwidth usage. We test our collaborative method on both synthetic and real-world data. Our results show that the proposed method improves trajectory estimation by up to 46% with respect to an individual-agent approach, while reducing the communication exchange by up to 89%. Datasets and code are released to the public, extending the already-public JPL xVIO library.
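For context, the covariance-intersection fusion mentioned above combines two estimates whose cross-correlation is unknown. Below is a minimal sketch of the standard covariance-intersection rule written with Eigen; the function name and the fixed weight are illustrative only, and the library's actual back-end formulation and weight selection may differ.

#include <Eigen/Dense>

// Standard covariance intersection: fuse estimates (a, Pa) and (b, Pb) whose
// cross-correlation is unknown, using a weight w in [0, 1].
void covarianceIntersection(const Eigen::VectorXd &a, const Eigen::MatrixXd &Pa,
                            const Eigen::VectorXd &b, const Eigen::MatrixXd &Pb,
                            double w,
                            Eigen::VectorXd &x_fused, Eigen::MatrixXd &P_fused) {
  const Eigen::MatrixXd Pa_inv = Pa.inverse();
  const Eigen::MatrixXd Pb_inv = Pb.inverse();
  // Fused information is a convex combination of the two information matrices.
  P_fused = (w * Pa_inv + (1.0 - w) * Pb_inv).inverse();
  x_fused = P_fused * (w * Pa_inv * a + (1.0 - w) * Pb_inv * b);
}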
The library is structured as follows:

- include: contains all the header files of the library
- src: contains the implementations of the classes declared in the headers
- third_party: contains a modified version of DBoW3 that builds with C++17
- Vocabulary: contains a vocabulary of visual words trained on thermal data (thermal_voc_3_4_dbow3_calib.yaml)
This code was tested on Ubuntu 20.04 with ROS Noetic, compiled with g++-10 (sudo apt-get install g++-10).
The following libraries are needed for installing the x library with the thermal collaborative feature:
To install the x library, you can download the .deb package for your architecture here, or build it yourself:
$ git clone git@github.com:jpl-x/x_multi_agent.git
$ cd x_multi_agent
$ mkdir build && cd build
$ cmake ..
$ make package
$ sudo dpkg -i x_1.2.3_$(dpkg --print-architecture).deb
To enable or disable features of the collaborative approach, set the following CMake options to ON or OFF:

- PHOTOMETRIC_CALI: if ON, enables the online photometric calibration of the incoming images
- MULTI_UAV: if ON, enables the collaborative setup
- REQUEST_COMM: if ON, enables the request-response communication pipeline
- GT_DEBUG: if ON, the library expects to receive matches with the keypoints' 3D ground truth (see working with ground truth)

When all the options are OFF, the system falls back to the standard xVIO configuration.
Make sure you have Docker installed; no other dependencies are needed here. To build a Docker image containing the library, select the desired configuration by setting the following build arguments to true or false:

- PHOTOMETRIC_CALI: if true, enables the photometric calibration algorithm on the incoming images
- MULTI_UAV: if true, enables the collaborative setup
- REQUEST_COMM: if true, enables the request-response communication pipeline
Run the following command to build the image:
docker build -t x-image:x-amd64 --build-arg PHOTOMETRIC_CALI=true --build-arg MULTI_UAV=true --build-arg REQUEST_COMM=true .
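Once the image is built, you can start a container from it; for example, to open an interactive shell (assuming the image is based on a standard Ubuntu image that provides bash):

docker run --rm -it x-image:x-amd64 /bin/bash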
The x library accepts measurements from various sensors as input, which are then fused in the IEKF.
To use the library in your project, add the following to your CMakeLists.txt:
find_package(x 1.2.3 REQUIRED)
# ...
include_directories(
OTHER_INCLUDES
${x_INCLUDE_DIRS}
)
# ...
target_link_libraries(${PROJECT_NAME}
OTHER_LIBRARIES
${x_LIBRARIES}
)
Usage example:
- Initialization
#include <x/vio.h>
#include <ctime>

VIO vio;
const auto params = vio.loadParamsFromYaml("PATH_TO_A_YAML_FILE");
vio.setUp(params);

// Initialize the filter at the current time (timestamps are in seconds).
time_t now = time(0);
vio.initAtTime(static_cast<double>(now));
Then feed the filter with sensor data. All sensor readings can be performed from different threads (see the sketch after the per-sensor examples below).
- Sun sensor
// Sun sensor
SunAngleMeasurement angle;
angle.timestamp = YOUR_DATA;
angle.x_angle = YOUR_DATA;
angle.y_angle = YOUR_DATA;
vio.setLastSunAngleMeasurement(angle);
- Range sensor
RangeMeasurement range;
range.timestamp = YOUR_DATA;
range.range = YOUR_DATA;
vio.setLastRangeMeasurement(range);
- IMU
double timestamp = YOUR_DATA;
int seq = YOUR_DATA;
Vector3 w_m(YOUR_DATA);
Vector3 a_m(YOUR_DATA);
const auto propagated_state = vio.processImu(timestamp, seq, w_m, a_m);
- Images
double timestamp = YOUR_DATA;
int seq = YOUR_DATA;
auto match_img = TiledImage(YOUR_DATA);
auto feature_img = TiledImage(match_img);
const auto updated_state = vio.processImageMeasurement(timestamp, seq, match_img, feature_img);
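As mentioned above, sensor readings can be fed from different threads. The following is a minimal sketch of that pattern; the sample types and queues (ImuSample, CameraFrame, imu_queue, image_queue) are hypothetical placeholders for your own data sources, not part of the x API.

#include <atomic>
#include <thread>

std::atomic<bool> running{true};

// Hypothetical IMU thread: pop samples from your own queue and feed the filter.
std::thread imu_thread([&] {
  while (running) {
    const ImuSample s = imu_queue.pop();  // hypothetical sample type and queue
    vio.processImu(s.timestamp, s.seq, s.w_m, s.a_m);
  }
});

// Hypothetical image thread: feed camera frames, typically at a lower rate.
std::thread image_thread([&] {
  while (running) {
    const CameraFrame f = image_queue.pop();  // hypothetical sample type and queue
    auto match_img = TiledImage(f.image);
    auto feature_img = TiledImage(match_img);
    vio.processImageMeasurement(f.timestamp, f.seq, match_img, feature_img);
  }
});

imu_thread.join();
image_thread.join();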
Refer to the demo as an example of usage with ROS.
When the CMake option GT_DEBUG is ON, the library expects to receive matches that include the keypoints' 3D ground-truth information. The matches are passed as a flat vector of floats with the following structure:
10N match vector structure (10 consecutive floats per match):

0: cam_id
1: time_prev in seconds
2: x_dist_prev
3: y_dist_prev
4: time_curr
5: x_dist_curr
6: y_dist_curr
7,8,9: 3D coordinates of the feature
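For illustration, one match in this layout could be appended to a flat float vector as follows; the helper function addMatch and the argument names are hypothetical, only the 10-float layout follows the listing above.

#include <vector>

// Append one match (10 floats) to a flat match vector, following the layout above.
void addMatch(std::vector<float> &matches,
              float cam_id,
              float time_prev, float x_dist_prev, float y_dist_prev,
              float time_curr, float x_dist_curr, float y_dist_curr,
              float x_3d, float y_3d, float z_3d) {
  matches.insert(matches.end(),
                 {cam_id, time_prev, x_dist_prev, y_dist_prev,
                  time_curr, x_dist_curr, y_dist_curr, x_3d, y_3d, z_3d});
}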
To run the demo refer to the ROS wrapper.
Datasets to test our system are available here.
We provide visual data in two synthetic datasets, Inveraray Castle around and Inveraray Castle parallel. The former contains four drones flying trajectories around the castle. The latter has three drones performing parallel trajectories in front of the castle and includes the landmarks' ground truth, namely the 3D points visible from the three UAVs.
We also provide a real-world dataset with thermal data from two drones flying square trajectories over the Mars Yard.
Copyright (c) 2022-23 California Institute of Technology (“Caltech”). U.S. Government
sponsorship acknowledged.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided
that the following conditions are met:
• Redistributions of source code must retain the above copyright notice, this list of conditions and
the following disclaimer.
• Redistributions in binary form must reproduce the above copyright notice, this list of conditions
and the following disclaimer in the documentation and/or other materials provided with the
distribution.
• Neither the name of Caltech nor its operating division, the Jet Propulsion Laboratory, nor the
names of its contributors may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Distributed under the Apache 2.0 License. See LICENSE for more information.
The research was funded by the Combat Capabilities Development Command Soldier
Center and Army Research Laboratory. This research was carried out at the Jet
Propulsion Laboratory, California Institute of Technology, and was sponsored
by the JPL Visiting Student Research Program (JVSRP) and the National
Aeronautics and Space Administration (80NM0018D0004).
Particular thanks also go to Manash Pratim Das for open-sourcing his code for online photometric calibration.
The 3D reconstruction of the Inveraray Castle by Andrea Spognetta (Spogna) is licensed under Creative Commons Attribution-NonCommercial.
Readme template layout from Best-README-Template.