This repository contains forks of state-of-the-art (SOTA) works on local map construction for a mobile robot from sensory input. For the relevant papers with code, please refer to the BEV-map construction Notion section.
An end-to-end architecture that directly extracts a bird's-eye-view semantic representation of a scene from image data captured by an arbitrary number of cameras.
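For intuition, here is a minimal PyTorch sketch of one way a multi-camera-to-BEV pipeline can be wired up, assuming a flat-ground projection precomputed from camera calibration. The `ToyBEVNet` class and all shapes and layer sizes here are hypothetical illustrations, not the fork's actual code:

```python
# Illustrative sketch only: a toy multi-camera-to-BEV pipeline under a
# flat-ground assumption. The forked architecture may instead use learned
# geometry or cross-view attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyBEVNet(nn.Module):
    def __init__(self, num_classes=4, feat_dim=32, bev_size=64):
        super().__init__()
        self.bev_size = bev_size
        # Shared per-camera image encoder.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # BEV head producing per-cell semantic logits.
        self.head = nn.Conv2d(feat_dim, num_classes, 1)

    def forward(self, images, bev_grids):
        """images: (B, N_cams, 3, H, W); bev_grids: (B, N_cams, S, S, 2),
        normalized image coordinates of each BEV cell per camera, assumed
        to come from the camera matrices."""
        b, n = images.shape[:2]
        feats = self.encoder(images.flatten(0, 1))        # (B*N, C, h', w')
        grids = bev_grids.flatten(0, 1)                   # (B*N, S, S, 2)
        # Sample each camera's features at the projected BEV cell locations.
        bev = F.grid_sample(feats, grids, align_corners=False)
        # Average the per-camera BEV feature maps, then classify each cell.
        bev = bev.view(b, n, -1, self.bev_size, self.bev_size).mean(1)
        return self.head(bev)                             # (B, K, S, S)

# Usage with random tensors standing in for images and calibration.
net = ToyBEVNet()
imgs = torch.randn(2, 6, 3, 128, 128)
grids = torch.rand(2, 6, 64, 64, 2) * 2 - 1  # normally from calibration
print(net(imgs, grids).shape)  # torch.Size([2, 4, 64, 64])
```

The averaging step is what makes the camera count arbitrary: any number of per-camera feature maps collapses into a single fixed-size BEV grid.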
Given a single color image captured from a driving platform, the model predicts the bird's-eye-view semantic layout of the road and other traffic participants; a toy sketch of the idea follows the results note below.
Demo visualizations are available for the KITTI and Argoverse datasets.
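A hedged sketch of the single-image setup, assuming a shared front-view encoder feeding two decoders, one for the static road layout and one for dynamic vehicles. The head split and all layer sizes are illustrative assumptions, not the fork's exact architecture:

```python
# Minimal sketch: encode the front view, then decode the shared bottleneck
# into top-down occupancy maps. Shapes and the two-head split are assumed.
import torch
import torch.nn as nn

def make_decoder(out_ch):
    # Upsample the shared bottleneck into a BEV logit map.
    return nn.Sequential(
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1),
    )

class MonoBEV(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(  # front-view image -> shared features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.road_head = make_decoder(1)     # static layout (drivable road)
        self.vehicle_head = make_decoder(1)  # dynamic layout (other traffic)

    def forward(self, img):
        z = self.encoder(img)
        return self.road_head(z), self.vehicle_head(z)

road, veh = MonoBEV()(torch.randn(1, 3, 256, 256))
print(road.shape, veh.shape)  # two (1, 1, 256, 256) BEV logit maps
```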
Joint Perception and Motion Prediction for Autonomous Driving Based on Bird's Eye View Maps. In addition to semantic information, the model also predicts the motion direction of each cell on the local map from a sequence of lidar sweeps.
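A minimal sketch of the joint setup, assuming the sweeps are voxelized into per-frame BEV occupancy grids and fused as input channels; the two-head design and all names here are assumptions for illustration, not the fork's implementation:

```python
# Hedged sketch: fuse a sequence of BEV occupancy grids from lidar sweeps,
# then predict a semantic class and a 2D motion vector for every cell.
import torch
import torch.nn as nn

class JointBEVNet(nn.Module):
    def __init__(self, num_sweeps=5, num_classes=5):
        super().__init__()
        # Treat the sweep sequence as input channels (early temporal fusion).
        self.backbone = nn.Sequential(
            nn.Conv2d(num_sweeps, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.cls_head = nn.Conv2d(64, num_classes, 1)  # per-cell semantics
        self.motion_head = nn.Conv2d(64, 2, 1)         # per-cell (dx, dy)

    def forward(self, sweeps):
        f = self.backbone(sweeps)                      # (B, 64, H, W)
        return self.cls_head(f), self.motion_head(f)

# sweeps: 5 BEV occupancy grids from consecutive, ego-aligned lidar scans.
sem, motion = JointBEVNet()(torch.randn(2, 5, 128, 128))
print(sem.shape, motion.shape)  # (2, 5, 128, 128) and (2, 2, 128, 128)
```

Stacking the sweeps as channels is the simplest temporal-fusion choice; it lets plain 2D convolutions pick up per-cell displacement across frames.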