This repository builds on the FLOWER VLA codebase.
NinA replaces the diffusion-based approach with Normalizing Flows (NF) for training VLA models.
Our results show that NF achieves performance comparable to that of diffusion policies, while requiring significantly fewer parameters and offering faster inference.
NinA follows the standard NF training procedure: an invertible network maps ground-truth actions to a simple base distribution, and the model is trained by maximizing the exact log-likelihood of those actions via the change-of-variables formula.
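For orientation, here is a minimal sketch of that procedure in PyTorch. It is not the actual NinA implementation (see `flower/models/flower_nf.py` for that); the class, function, and all dimensions below are illustrative. It shows an affine coupling flow conditioned on an observation embedding, trained by minimizing the exact negative log-likelihood of (noised) ground-truth actions.

```python
import math

import torch
import torch.nn as nn


class AffineCoupling(nn.Module):
    """One affine coupling layer: half of the action vector is rescaled and
    shifted based on the other half plus an observation embedding, so the
    map is invertible with a triangular (cheap log-det) Jacobian."""

    def __init__(self, action_dim: int, cond_dim: int, hidden: int = 128):
        super().__init__()
        self.d = action_dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.d + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (action_dim - self.d)),
        )

    def forward(self, x, cond):
        # Forward (action -> latent) direction, used for likelihood training.
        x1, x2 = x[:, :self.d], x[:, self.d:]
        scale, shift = self.net(torch.cat([x1, cond], dim=-1)).chunk(2, dim=-1)
        scale = torch.tanh(scale)  # bound log-scales for numerical stability
        z2 = x2 * torch.exp(scale) + shift
        return torch.cat([x1, z2], dim=-1), scale.sum(dim=-1)  # z, log|det J|


def nf_loss(layers, actions, cond, action_noise_mult=0.1):
    # Perturb ground-truth actions with small Gaussian noise
    # (cf. --action_noise_mult below) to regularize likelihood training.
    x = actions + action_noise_mult * torch.randn_like(actions)
    log_det = x.new_zeros(x.shape[0])
    for layer in layers:
        x, ld = layer(x, cond)
        log_det = log_det + ld
        x = x.flip(-1)  # volume-preserving "permutation" so both halves get transformed
    # Change of variables with a standard-normal base distribution:
    # log p(a | s) = log N(z; 0, I) + sum_k log|det J_k|.
    base_logp = -0.5 * (x ** 2).sum(-1) - 0.5 * x.shape[-1] * math.log(2 * math.pi)
    return -(base_logp + log_det).mean()


# Toy usage: 32 transitions, 8-dim actions, 64-dim observation features.
layers = nn.ModuleList([AffineCoupling(8, 64) for _ in range(4)])
loss = nf_loss(layers, torch.randn(32, 8), torch.randn(32, 64))
loss.backward()
```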
We provide two backbone architectures:
- MLP – a lightweight and simple variant.
- Transformer – a more scalable and performant option; a rough sketch of both backbones follows this list.
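As a rough illustration of what the two options might look like, here is a hedged sketch; the real definitions live in `flower/models/flower_nf.py` and may differ, and `make_backbone` with all sizes below is hypothetical.

```python
import torch.nn as nn


def make_backbone(kind: str, dim: int, hidden: int = 256, heads: int = 4, depth: int = 2):
    """Hypothetical backbone factory; `dim` must be divisible by `heads`."""
    if kind == "mlp":
        # Lightweight and simple: a plain feed-forward conditioner.
        return nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(),
            nn.Linear(hidden, dim),
        )
    if kind == "trans":
        # More scalable: a small Transformer encoder over token sequences
        # shaped [batch, tokens, dim].
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=hidden, batch_first=True,
        )
        return nn.TransformerEncoder(layer, num_layers=depth)
    raise ValueError(f"unknown backbone: {kind}")
```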
The implementation can be found in `flower/models/flower_nf.py`.
Follow the installation instructions from the FLOWER VLA repository to set up this codebase.
To train NinA, run:

```bash
python3 flower/training_libero.py
```

Key arguments:
- `--backbone`: backbone architecture (`mlp` or `trans`).
- `--n_layers`: number of flow layers.
- `--affine_dim`: hidden size of the flow layers.
- `--action_noise_mult`: amplitude of the noise added to ground-truth actions (an important hyperparameter).
- `--use_plu`: whether to use PLU transformations (`true`/`false`). Our experiments show minimal impact of PLU on performance.
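As an illustration, a run with the Transformer backbone could be launched as follows; the flag names are the documented ones above, but the values are placeholders, not tuned settings from the paper:

```bash
python3 flower/training_libero.py \
    --backbone trans \
    --n_layers 8 \
    --affine_dim 256 \
    --action_noise_mult 0.1 \
    --use_plu false
```

For readers unfamiliar with PLU transformations: they are invertible linear flow layers whose weight matrix is factored as permutation × lower-triangular × upper-triangular, which keeps the inverse and the log-det-Jacobian cheap. A minimal sketch of such a layer (ours, not NinA's actual implementation, which may differ):

```python
import torch
import torch.nn as nn


class PLULinear(nn.Module):
    """Invertible linear layer with W = P @ L @ U, so that
    log|det W| = sum(log|diag(U)|) is cheap to compute."""

    def __init__(self, dim: int):
        super().__init__()
        w = torch.linalg.qr(torch.randn(dim, dim)).Q  # random orthogonal init
        P, L, U = torch.linalg.lu(w)
        self.register_buffer("P", P)   # fixed permutation
        self.L = nn.Parameter(L)       # trainable, masked to unit lower-triangular
        self.U = nn.Parameter(U)       # trainable, masked to upper-triangular
        self.register_buffer("eye", torch.eye(dim))

    def forward(self, x):
        L = torch.tril(self.L, -1) + self.eye  # enforce unit diagonal
        U = torch.triu(self.U)
        W = self.P @ L @ U
        log_det = torch.log(torch.diagonal(U).abs()).sum()
        return x @ W.T, log_det.expand(x.shape[0])  # z, per-sample log|det J|
```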
If you find this code useful, please cite our work:
```bibtex
@article{tarasov2025nina,
  title={NinA: Normalizing Flows in Action. Training VLA Models with Normalizing Flows},
  author={Tarasov, Denis and Nikulin, Alexander and Zisman, Ilya and Klepach, Albina and Lyubaykin, Nikita and Polubarov, Andrei and Derevyagin, Alexander and Kurenkov, Vladislav},
  journal={arXiv preprint arXiv:2508.16845},
  year={2025}
}
```