Christian Löwens · Thorben Funke · Jingchao Xie · Alexandru Paul Condurache
*(Video: bev_animation.mp4, a bird's-eye-view animation of the generated pseudo-labels; the code that produces it is provided in RoGS/mwe.ipynb.)*
This is the companion code for the method described in the paper "PseudoMapTrainer: Learning Online Mapping without HD Maps" by Löwens et al., accepted at ICCV 2025. The code allows users to reproduce and extend the results reported in the paper. Please cite the above work when reporting, reproducing, or extending the results.
This software is a research prototype, developed solely for and published as part of the PseudoMapTrainer publication. It will neither be maintained nor monitored in any way.
This codebase primarily consists of three repositories: Mask2Former (see 1.1), RoGS (1.2), and MapVR (2), each adapted specifically for PseudoMapTrainer. Each repository has its own Python environment, described in detail below. Please follow these steps to generate pseudo-labels and train the online mapping model.
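The commands in the following steps assume the top-level layout below (inferred from the paths used in the commands themselves):

```text
PseudoMapTrainer/
├── Mask2Former/   # 1.1) PV segmentation labels
├── RoGS/          # 1.2) vectorized pseudo-label generation
└── MapVR/         # 2)   online mapping model training and evaluation
```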
### 1.1) Get PV segmentation labels (Mask2Former)
```bash
cd ./Mask2Former
```
- Follow the INSTALL.md.
- We used python=3.9, torch=1.10.1+cu113, detectron2=0.6+cu113.
- Download and convert the pre-trained weights:

```bash
pip install timm
wget https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window12_384_22k.pth
python tools/convert-pretrained-swin-model-to-d2.py swin_large_patch4_window12_384_22k.pth swin_large_patch4_window12_384_22k.pkl
```
- Download the Mapillary Vistas V2 dataset
- Train the segmentation model on Mapillary V2:

```bash
export DETECTRON2_DATASETS=path/to/mapillary-parent-dir
python train_net.py \
  --num-gpus 2 \
  --config-file configs/mapillary-vistas-v2/semantic-segmentation/swin/maskformer2_swin_large_IN21k_384_bs16_300k.yaml \
  SOLVER.IMS_PER_BATCH 8 SOLVER.BASE_LR 0.00008
```
- Infer the segmentation for the nuScenes dataset:

```bash
export DETECTRON2_DATASETS=path/to/nuscenes-parent-dir
python demo/inference.py \
  --config-file configs/mapillary-vistas-v2/semantic-segmentation/swin/maskformer2_swin_large_IN21k_384_bs16_300k.yaml \
  --base_dir path/to/nuscenes \
  --save_dir data/m2f_infer \
  MODEL.WEIGHTS output/model_xyz.pth
```
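For orientation, the two parent directories referenced above might be laid out as follows. This tree is an assumption based on the standard detectron2 Mapillary Vistas layout and the standard nuScenes devkit layout; the exact folder names expected by the adapted configs may differ, so check datasets/README.md in Mask2Former.

```text
path/to/mapillary-parent-dir/
└── mapillary_vistas/
    ├── training/
    │   ├── images/
    │   └── labels/
    └── validation/
        ├── images/
        └── labels/

path/to/nuscenes-parent-dir/
└── nuscenes/
    ├── samples/          # keyframe camera images used for inference
    ├── sweeps/
    └── v1.0-trainval/
```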
### 1.2) Get vectorized map labels (RoGS)
- Create a new environment with the following packages:
  - python=3.8.20
  - pip
  - addict=2.4.0
  - numpy=1.24.4
  - nuscenes-devkit=1.1.11
  - scipy=1.10.1
  - scikit-learn=1.3.0
  - setuptools=74.1.2
  - tensorboard=2.14.0
  - plyfile=1.0.3
  - pyquaternion=0.9.9
  - pyrotation=0.0.2
  - pytz=2024
  - opencv-python=4.11.0.86
  - opencv-contrib-python=4.11.0.86
  - pytorch3d=0.7.2 (see their requirements and installation instructions)
  - torch=1.13.1+cu116
  - from the conda-forge channel:
    - pyyaml=6.0.2
    - tqdm=4.66.5
- Install the diff-gaussian-rasterization to optimize RGB:

```bash
cd $HOME
git clone --recursive https://github.com/fzhiheng/diff-gs-depth-alpha.git && cd diff-gs-depth-alpha
git checkout 486d1882497d8890888222ea8252a59964ec5dfc  # version we used
python setup.py install
```
- Install and modify the diff-gaussian-rasterization to optimize semantics:

```bash
cd $HOME
git clone --recursive https://github.com/fzhiheng/diff-gs-depth-alpha.git diff-gs-label && cd diff-gs-label
git checkout 486d1882497d8890888222ea8252a59964ec5dfc  # version we used
mv diff_gaussian_rasterization diff-gs-label  # follow the instructions below to modify the file
python setup.py install
```

  Set `NUM_CHANNELS` in the file `cuda_rasterizer/config.h` to `6` (the number of selected classes from Mapillary Vistas) and replace all occurrences of `diff_gaussian_rasterization` in `setup.py` with `diff-gs-label`, as sketched below. For more background information, check out the original RoGS README.md.
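The two edits can be scripted, for example, as below; run this before `python setup.py install`. The sed pattern for `config.h` assumes the upstream default `#define NUM_CHANNELS 3`, so verify both replacements by hand if the patterns do not match your checkout.

```bash
cd $HOME/diff-gs-label
# Assumption: the upstream default is "#define NUM_CHANNELS 3";
# render 6 semantic channels instead of 3 RGB channels.
sed -i 's/NUM_CHANNELS 3/NUM_CHANNELS 6/' cuda_rasterizer/config.h
# Point setup.py at the renamed package directory.
sed -i 's/diff_gaussian_rasterization/diff-gs-label/g' setup.py
```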
- Set the paths `base_dir` and `label_dir` in both config files `single_trip.yaml` and `multi_trip.yaml`. `base_dir` corresponds to `--base_dir` in step 1.1.6 and `label_dir` corresponds to `--save_dir`. `output` will be the path of the output directory of the pseudo-labels, and `road_gt_dir` will store the preprocessed point clouds (see next step). A sketch of these keys follows below.
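As an illustration, the relevant keys might look like this in `configs/nusc/single_trip.yaml`. The key names are taken from the steps in this section; the values and the file's overall layout are assumptions.

```yaml
# Hypothetical excerpt of configs/nusc/single_trip.yaml (values are placeholders).
base_dir: path/to/nuscenes          # same as --base_dir in step 1.1.6
label_dir: path/to/data/m2f_infer   # same as --save_dir in step 1.1.6
output: output/single_trip          # output directory for the pseudo-labels
road_gt_dir: road_gt                # preprocessed point clouds (next step)
z_weight: 1.0                       # example value; set to 0 if you do not use LiDAR (see below)
```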
- Optional: If LiDAR data should be used, preprocess the point clouds with:

```bash
cd ~/PseudoMapTrainer/RoGS
python -m preprocess.process_nusc --config configs/nusc/single_trip.yaml
```
  If you do not want to use LiDAR data, set `z_weight` to `0` in both configuration files.
- Generate the vectorized maps:

```bash
python pseudo_label_generation.py --config configs/nusc/single_trip.yaml
python pseudo_label_generation.py --config configs/nusc/multi_trip.yaml
```
A minimal working example for two trips can be found in mwe.ipynb.
### 2) Training the online mapping model (MapVR)
- Create a new environment according to the install.md.
- Preprocess the pseudo-labels:

```bash
cd ~/PseudoMapTrainer/MapVR
python custom_tools/maptrv2/custom_nusc_map_converter.py \
  --root-path ./data/nuscenes \
  --pseudo-labels-dir ../RoGS/output/single_trip \
  --out-dir ./data/nuscenes \
  --extra-tag nuscenes_pseudo_single \
  --version v1.0 \
  --canbus ./data \
  --use-geo-split
```

  For multi-trip pseudo-labels, change the paths accordingly. The final folder structure should now look like this.
- Preprocess the GT for evaluation:

```bash
python custom_tools/maptrv2/custom_nusc_map_converter.py \
  --root-path ./data/nuscenes \
  --out-dir ./data/nuscenes \
  --extra-tag nuscenes \
  --version v1.0 \
  --canbus ./data \
  --use-geo-split  # important to use the same geo split
```
- Adjust the `data_root_seg` path in the training configs `pmt_single.py` and `pmt_multi.py` to the `save_dir` from step 1.1.6 (a hypothetical excerpt is sketched after the training command below).
- Start the training:
```bash
N_GPUS=2  # adjustable
custom_tools/dist_train.sh ./projects/configs/maptrv2/pmt_single.py ${N_GPUS}  # or pmt_multi.py
```
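For reference, the config edit from the step above might look as follows. Only `data_root_seg` itself is prescribed there; the value shown is a placeholder.

```python
# Hypothetical excerpt of projects/configs/maptrv2/pmt_single.py (same for pmt_multi.py).
# The value is a placeholder; it must point to the --save_dir used in step 1.1.6.
data_root_seg = 'path/to/data/m2f_infer/'  # segmentation labels inferred by Mask2Former
```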
For the evaluation of the online model:

```bash
N_GPUS=2  # adjustable
custom_tools/dist_test_map.sh ./projects/configs/maptrv2/pmt_single.py ./path/to/ckpts.pth ${N_GPUS}  # or pmt_multi.py
```
To evaluate the pseudo-labels:

```bash
N_GPUS=2  # adjustable
custom_tools/dist_test_pseudo_labels.sh ./projects/configs/pseudo_eval/single_trip.py ${N_GPUS} --masked  # or multi_trip.py
```

Use the `--masked` flag for the evaluation referred to as "observed area only" in Table 1 of our paper; removing the flag evaluates the labels over the full BEV range.
For visualization methods, refer to our minimal working example in RoGS/mwe.ipynb. The code used for the animation above is also provided in this notebook.
PseudoMapTrainer is open-sourced under the AGPL-3.0 license. See the LICENSE file for details.
For a list of other open source components included in PseudoMapTrainer, see the file 3rd-party-licenses.txt.
This project builds heavily on RoGS and MapVR / MapTRv2. Thanks for their amazing work!