This work constructs pseudo-laser measurements from a depth map predicted directly from a single camera image, while retaining stable performance on the robot navigation task. The depth prediction stage is inherited from monodepth2, pre-trained on the KITTI dataset.
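The sketch below illustrates one plausible way to collapse a dense depth map into a 1-D pseudo-laser scan. It is not the method from the paper: the band of rows, the horizontal field of view, and the `predict_depth` wrapper are all assumptions made for illustration.

```python
import numpy as np

def depth_to_pseudo_laser(depth, h_fov_deg=90.0, band=(0.45, 0.55)):
    """Collapse a dense depth map (H x W, in metres) into a 1-D pseudo-laser scan.

    For each image column, the range is taken as the minimum depth inside a
    horizontal band of rows (roughly the sensor height), so obstacles at the
    robot's height dominate the reading. `h_fov_deg` and `band` are assumed
    values, not parameters taken from the paper.
    """
    h, w = depth.shape
    top, bottom = int(band[0] * h), int(band[1] * h)
    ranges = depth[top:bottom, :].min(axis=0)            # one range per column
    angles = np.linspace(-h_fov_deg / 2, h_fov_deg / 2, w) * np.pi / 180.0
    return angles, ranges

# Usage with a hypothetical monodepth2 wrapper returning a metric depth map:
# depth = predict_depth(rgb_image)         # (H, W) array in metres (assumed helper)
# angles, ranges = depth_to_pseudo_laser(depth)
```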
Our paper can be found here.
In the demo, the robot on the right-hand side appears to move faster. The reason is that, at each time step, the policy with the visual prediction module takes longer to produce a decision; while a decision is pending, the robot keeps moving under inertia.
This repo contains the trained models needed to reproduce the demo above. Reproducing this result can be challenging; the reasons are briefly explained in this article. See the installation section for more information.