The official implementation of two papers:
- "Dynamic Diffusion Transformer" (ICLR 2025)
- Journal version: "DyDiT++: Dynamic Diffusion Transformers for Efficient Visual Generation" (arXiv 2025)
Video: DiT vs. DyDiT comparison (DiT.vs.DyDiT.mp4)
- 2025.04.10: The extended journal version has been released.
- 2025.03.26: We release the training code and the text-to-image generation model, DyFLUX.
- 2025.01.23: "Dynamic Diffusion Transformer" is accepted by ICLR 2025! We will update the code and paper soon.
- 2024.12.19: We release the inference code.
- 2024.10.04: Our paper is released.
We provide detailed instructions for running our code. Please cd DyDiT or cd DyFLUX for more information.
If you find our work useful, please consider citing us:
@article{zhao2024dynamic,
  title={Dynamic diffusion transformer},
  author={Zhao, Wangbo and Han, Yizeng and Tang, Jiasheng and Wang, Kai and Song, Yibing and Huang, Gao and Wang, Fan and You, Yang},
  journal={ICLR},
  year={2025}
}
@misc{zhao2025dyditdynamicdiffusiontransformers,
  title={DyDiT++: Dynamic Diffusion Transformers for Efficient Visual Generation},
  author={Wangbo Zhao and Yizeng Han and Jiasheng Tang and Kai Wang and Hao Luo and Yibing Song and Gao Huang and Fan Wang and Yang You},
  year={2025},
  eprint={2504.06803},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2504.06803},
}
If you're interested in collaborating with us, feel free to reach out via email at [email protected].