Seeing the Future, Perceiving the Future: A Unified Driving World Model for Future Generation and Perception

Dingkang Liang1*, Dingyuan Zhang1*, Xin Zhou1*, Sifan Tu1, Tianrui Feng1,
Xiaofan Li2, Yumeng Zhang2, Mingyang Du1, Xiao Tan2, Xiang Bai1

1 Huazhong University of Science & Technology, 2 Baidu

(*) Equal contribution.


Check out our Awesome World Model list for the latest work on world models!

📣 News

  • [2025.03.17] Released the demo. Check it out and give it a star 🌟!
  • [2025.03.17] Released the paper.

Abstract

We present UniFuture, a simple yet effective driving world model that seamlessly integrates future scene generation and perception within a single framework. Unlike existing models that focus solely on pixel-level future prediction or geometric reasoning, our approach jointly models future appearance (i.e., RGB images) and geometry (i.e., depth), ensuring coherent predictions. Specifically, during training we first introduce a Dual-Latent Sharing scheme, which transfers the image and depth sequences into a shared latent space, allowing both modalities to benefit from shared feature learning. Additionally, we propose a Multi-scale Latent Interaction mechanism, which facilitates bidirectional refinement between image and depth features at multiple spatial scales, effectively enhancing geometric consistency and perceptual alignment. At test time, UniFuture predicts highly consistent future image-depth pairs using only the current image as input. Extensive experiments on the nuScenes dataset demonstrate that UniFuture outperforms specialized models on future generation and perception tasks, highlighting the advantages of a unified, structurally aware world model.

Overview

Training pipeline and inference pipeline (figures omitted; see the paper or project page).

Visualizations

Example 1 and Example 2 (demo figures omitted).

For more demos, please refer to our project page.

Main Results

Main results tables and figures omitted here; see the paper.

Getting Started

Coming soon.

To Do

  • Release demo.
  • Release checkpoints.
  • Release training code.

Acknowledgment

Thanks to the wonderful works: Vista (paper, code) and Depth Anything (paper, code).

Citation

If you find this repository useful in your research, please consider giving a star ⭐ and a citation.

@article{liang2025UniFuture,
  title={Seeing the Future, Perceiving the Future: A Unified Driving World Model for Future Generation and Perception},
  author={Liang, Dingkang and Zhang, Dingyuan and Zhou, Xin and Tu, Sifan and Feng, Tianrui and Li, Xiaofan and Zhang, Yumeng and Du, Mingyang and Tan, Xiao and Bai, Xiang},
  journal={arXiv preprint arXiv:2503.13587},
  year={2025}
}
