A Videography Planner for Quadrotors with a Rotating Gimbal.
Video Links: youtube or bilibili
Auto-Filmer is a planner for autonomous tracking and videography in unknown environments. It lets users send shooting commands in real time while keeping the drone safe.
Authors: Zhiwei Zhang ([email protected]) and Fei Gao from the ZJU Fast Lab.
Paper: Auto Filmer: Autonomous Aerial Videography Under Human Interaction, Zhiwei Zhang, Yuhang Zhong, Junlong Guo, Qianhao Wang, Chao Xu, and Fei Gao, published in IEEE Robotics and Automation Letters (RA-L).
git clone https://github.com/ZJU-FAST-Lab/Auto-Filmer.git
cd Auto-Filmer
We use CUDA to render depth images. Remember to change the 'arch' and 'code' flags to match your GPU in the lines
set(CUDA_NVCC_FLAGS
# set this according to your cuda version
-gencode=arch=compute_80,code=sm_80 ;
)
in CMakeLists.txt in the package local_sensing. If you hit compile errors caused by a different NVIDIA graphics card, or the rendered depth images look wrong, you can look up the correct compute capability via link1 or link2.
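As a rough guide (an assumption based on NVIDIA's published compute capabilities, not part of this repo: 6.1 for GTX 10-series, 7.5 for RTX 20-series, 8.6 for RTX 30-series), the block might look like this; listing several gencode lines works too, at the cost of longer compile times:

```cmake
set(CUDA_NVCC_FLAGS
    # Keep only the line(s) matching your GPU's compute capability
    -gencode=arch=compute_61,code=sm_61   # GTX 10-series (Pascal)
    -gencode=arch=compute_75,code=sm_75   # RTX 20-series (Turing)
    -gencode=arch=compute_86,code=sm_86   # RTX 30-series (Ampere)
)
```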
The Python package tkinter is used to create the videography interface. To install tkinter,
pip install tk
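To sanity-check the installation (note: on Debian/Ubuntu, tkinter may instead ship via the `python3-tk` apt package), a quick import works without opening a window:

```shell
# Importing tkinter does not create a window, so this also works over SSH
python3 -c "import tkinter; print(tkinter.TkVersion)"
```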
catkin_make
source devel/setup.sh
roslaunch planning af_tracking.launch
Use "2D Nav Goal" in RViz to select goals for the target drone; the filming drone then begins to follow and film it.
Furthermore, our GUI provides videography interactions. The GIF below shows an image-position change (the white circle in the image indicates the target). Try changing the other items, including the camera angle, transition time, and distance, yourself. 😄
We use MINCO as our trajectory representation.
We use EGO-Planner-v2 as the target drone planner.
Elastic Tracker provides an efficient framework for tracking objects.
We use DecompROS for safe flight corridor generation and visualization.
@ARTICLE{9998054,
author={Zhang, Zhiwei and Zhong, Yuhang and Guo, Junlong and Wang, Qianhao and Xu, Chao and Gao, Fei},
journal={IEEE Robotics and Automation Letters},
title={Auto Filmer: Autonomous Aerial Videography Under Human Interaction},
year={2023},
volume={8},
number={2},
pages={784-791},
doi={10.1109/LRA.2022.3231828}}