We propose a framework for the challenging task of 3D-aware ObjectNav based on two straightforward sub-policies. The two sub-policies, namely the corner-guided exploration policy and the category-aware identification policy, run simultaneously, using online-fused 3D points as observations.
- Dependencies: We use earlier (0.2.2) versions of habitat-sim and habitat-lab. Other related dependencies can be found in `requirements.txt`.
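Before running anything, it can help to confirm the pinned Habitat packages are actually importable. The helper below is a hypothetical sanity check (not part of this repository); it assumes the standard module names `habitat_sim` and `habitat`:

```python
import importlib

def habitat_versions():
    """Return {module: version-or-None} for habitat-sim and habitat-lab.

    A value of None means the module could not be imported; "unknown"
    means it imported but exposes no __version__ attribute.
    """
    versions = {}
    for mod in ("habitat_sim", "habitat"):
        try:
            m = importlib.import_module(mod)
            versions[mod] = getattr(m, "__version__", "unknown")
        except ImportError:
            versions[mod] = None
    return versions

if __name__ == "__main__":
    print(habitat_versions())
```

If either entry is `None`, install the corresponding 0.2.2 release before proceeding.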
. -
Data (MatterPort3D): Please download the scene dataset and the episode dataset from habitat-lab/DATASETS.md. Then organize the files as follows:
```
3dAwareNav/
  data/
    scene_datasets/
      mp3d/
    episode_datasets/
      objectnav_mp3d_v1/
```
The weights of our 2D backbone, RedNet, can be found in Stubborn.
We provide scripts for quick training and evaluation. The parameters can be found in `sh_train_mp3d.sh` and `sh_eval.sh`; you can modify these parameters to suit your specific requirements.
```shell
sh sh_train_mp3d.sh  # training
sh sh_eval.sh        # evaluating
```
```bibtex
@inproceedings{zhang20233d,
  title={3D-Aware Object Goal Navigation via Simultaneous Exploration and Identification},
  author={Zhang, Jiazhao and Dai, Liu and Meng, Fanpeng and Fan, Qingnan and Chen, Xuelin and Xu, Kai and Wang, He},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={6672--6682},
  year={2023}
}
```
Our code is inspired by Object-Goal-Navigation and FusionAwareConv.
This is an open-source version; some functions have been rewritten to avoid certain license restrictions. It is not expected to reproduce the results exactly, but the results are nearly identical. We thank Liu Dai (@bbbbbMatrix) and Fanpeng Meng (@mfp0610) for their contributions to this repository.
If you have any questions, feel free to email Jiazhao Zhang at [email protected].
This work and the dataset are licensed under CC BY-NC 4.0.