Vision-Language-Action (VLA) models enable embodied decision-making but rely heavily on imitation learning, leading to compounding errors and poor robustness under distribution shift. Reinforcement learning (RL) can mitigate these issues yet typically demands costly real-world interactions or suffers from sim-to-real gaps. We introduce VLA-RFT, a reinforcement fine-tuning framework that leverages a data-driven world model as a controllable simulator. Trained from real interaction data, the simulator predicts future visual observations conditioned on actions, allowing policy rollouts with dense, trajectory-level rewards derived from goal-achieving references. This design delivers an efficient and action-aligned learning signal, drastically lowering sample requirements. With fewer than 400 fine-tuning steps, VLA-RFT surpasses strong supervised baselines and achieves greater efficiency than simulator-based RL. Moreover, it exhibits strong robustness under perturbed conditions, sustaining stable task execution. Our results establish world-model-based RFT as a practical post-training paradigm to enhance the generalization and robustness of VLA models.
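To make the recipe concrete, the sketch below shows the core loop in miniature: the policy acts, a learned world model imagines the next observation, and a dense trajectory-level reward against a goal-achieving reference drives a policy-gradient update. This is an illustrative toy with hypothetical names, not the repository's actual implementation (which builds on VERL and a visual world model).

```python
import torch

# Toy stand-ins (hypothetical): the real policy is a VLA model and the
# real world model predicts future visual observations from actions.
class ToyPolicy(torch.nn.Module):
    def __init__(self, obs_dim=16, act_dim=7):
        super().__init__()
        self.net = torch.nn.Linear(obs_dim, act_dim * 2)

    def forward(self, obs):
        mean, log_std = self.net(obs).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.clamp(-5, 2).exp())

class ToyWorldModel(torch.nn.Module):
    def __init__(self, obs_dim=16, act_dim=7):
        super().__init__()
        self.net = torch.nn.Linear(obs_dim + act_dim, obs_dim)

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

policy, world_model = ToyPolicy(), ToyWorldModel()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

obs = torch.zeros(1, 16)
reference = [torch.randn(1, 16) for _ in range(8)]  # goal-achieving reference

# Roll out entirely inside the learned simulator: no real robot needed.
rollout, log_probs = [], []
for t in range(8):
    dist = policy(obs)
    action = dist.sample()
    log_probs.append(dist.log_prob(action).sum())
    obs = world_model(obs, action)  # imagined next observation
    rollout.append(obs)

# Dense, trajectory-level reward: similarity to the reference rollout.
reward = -torch.stack(
    [(o - r).pow(2).mean() for o, r in zip(rollout, reference)]
).sum()

# REINFORCE-style update (the paper's RL objective is more elaborate).
loss = -reward.detach() * torch.stack(log_probs).sum()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```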
## Requirements

- Python 3.10+
- CUDA 12.2+
- PyTorch 2.4+
- UV package manager
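Once PyTorch is available in your environment, you can sanity-check the stack with a short snippet like this (a convenience check of ours, not part of the repository):

```python
# Hypothetical pre-flight check: verify interpreter, PyTorch, and CUDA.
import sys
import torch

assert sys.version_info >= (3, 10), "Python 3.10+ is required"
print("PyTorch:", torch.__version__)            # expect 2.4+
print("CUDA (build):", torch.version.cuda)      # expect 12.2+
print("GPU visible:", torch.cuda.is_available())
```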
## Installation

```bash
# Clone the repository
git clone https://github.com/OpenHelix-Team/VLA-RFT.git
cd VLA-RFT

# 1) Set up the environment
git submodule update --init --recursive
uv venv --seed -p 3.10
source .venv/bin/activate

# 2) Install dependencies
uv pip install -e train/verl/".[gpu]"
uv pip install 'https://github.com/Dao-AILab/flash-attention/releases/download/v2.6.0.post1/flash_attn-2.6.0.post1+cu122torch2.4cxx11abiFALSE-cp310-cp310-linux_x86_64.whl'
uv pip install -e train/verl/".[vllm]"
uv pip install -r train/verl/requirements.txt

# 3) Install vla-adapter
uv pip install git+https://github.com/moojink/dlimp_openvla.git
uv pip install -e train/verl/vla-adapter/openvla-oft

# 4) Install LIBERO requirements
uv pip install -e third_party/LIBERO
```

Please refer to the instructions at third_party/README.md.
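After installation, a quick smoke test like the following (our own snippet, not shipped with the repo) confirms that the key packages resolve inside the uv virtual environment:

```python
# Hypothetical post-install smoke test: the key packages should import.
import torch
import flash_attn
from libero.libero import benchmark  # installed from third_party/LIBERO

print("torch", torch.__version__, "| flash-attn", flash_attn.__version__)
print("LIBERO suites:", list(benchmark.get_benchmark_dict().keys()))
```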
## Data Preparation

Please refer to the instructions at data/README.md.
## Evaluation

```bash
# Run evaluation with LIBERO tasks
cd scripts/libero
bash eval_libero.sh
```

When using LIBERO, you may get an error message like `AttributeError: 'NoneType' object has no attribute 'eglQueryString'`. You can fix it with:

```bash
sudo apt-get update
sudo apt-get install libgl1-mesa-dev libegl1-mesa-dev libgles2-mesa-dev libglew-dev
```
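If the error persists on a headless machine, selecting the EGL backend before any rendering library is imported often helps; this is a common MuJoCo/robosuite workaround rather than an official project instruction:

```python
# Assumption: LIBERO renders through MuJoCo/robosuite, so forcing the
# EGL backend avoids needing an X display on headless servers.
import os
os.environ.setdefault("MUJOCO_GL", "egl")
os.environ.setdefault("PYOPENGL_PLATFORM", "egl")
# ...import LIBERO / run evaluation only after these are set.
```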
## Training

```bash
# Run training with LIBERO dataset
cd scripts/libero
bash post_train_rlvr.sh
```

Training covers the four LIBERO task suites (a programmatic loading sketch follows the list):

- LIBERO-Spatial: Spatial reasoning tasks
- LIBERO-Object: Object manipulation tasks
- LIBERO-Goal: Goal-conditioned tasks
- LIBERO-10: Suite of 10 long-horizon tasks
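For reference, the suites can be loaded programmatically through LIBERO's benchmark API; the suite keys and attributes below follow the upstream LIBERO codebase, so double-check them against third_party/LIBERO:

```python
# Load a LIBERO task suite by key (keys per the upstream LIBERO repo):
# "libero_spatial", "libero_object", "libero_goal", "libero_10".
from libero.libero import benchmark

suite = benchmark.get_benchmark_dict()["libero_spatial"]()
print(f"{suite.name}: {suite.n_tasks} tasks")
print("Example instruction:", suite.get_task(0).language)
```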
## Results

With fewer than 400 fine-tuning steps, VLA-RFT surpasses strong supervised baselines and achieves greater efficiency than simulator-based RL.

Please refer to our paper for detailed benchmark results.
## TODO

- Init codebase
- Release pre-trained and RFT VLA (policy) weights
- Release pre-trained World Model weights
- Support real-world deployment
## License

This project is licensed under the MIT License; see the LICENSE file for details.
## Citation

If you use VLA-RFT in your research, please cite:

```bibtex
@article{wang2025vlaadapter,
  title={VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model},
  author={Wang, Yihao and Ding, Pengxiang and Li, Lingxiao and Cui, Can and Ge, Zirui and Tong, Xinyang and Song, Wenxuan and Zhao, Han and Zhao, Wei and Hou, Pengxu and Huang, Siteng and Tang, Yifan and Wang, Wenhui and Zhang, Ru and Liu, Jianyi and Wang, Donglin},
  journal={arXiv preprint arXiv:2509.09372},
  year={2025}
}

@article{li2025vla,
  title={VLA-RFT: Vision-Language-Action Reinforcement Fine-tuning with Verified Rewards in World Simulators},
  author={Li, Hengtao and Ding, Pengxiang and Suo, Runze and Wang, Yihao and Ge, Zirui and Zang, Dongyuan and Yu, Kexian and Sun, Mingyang and Zhang, Hongyin and Wang, Donglin and others},
  journal={arXiv preprint arXiv:2510.00406},
  year={2025}
}
```

## Acknowledgments

This work builds upon several excellent open-source projects:
- VLA-Adapter: Foundation vision-language-action adapter model
- VERL: Volcano Engine Reinforcement Learning framework
- LIBERO: Lifelong robot learning benchmark
- RLVR-World: Training world models with verifiable rewards
⭐ Star this repository if you find it helpful!