This repository is the official implementation of Residual Denoising Diffusion Models (RDDM).
Note:
- The current setting trains two U-Nets (one to estimate the residual and one to estimate the noise), which can be used to explore the partially path-independent generation process.
- To adapt the code to other tasks: a) change `[self.alphas_cumsum[t]*self.num_timesteps, self.betas_cumsum[t]*self.num_timesteps]` to `[t, t]` (in L852 and L1292), as sketched after this list; b) for image restoration, set `generation=False` in L120; c) adjust the corresponding experimental settings (see Table 4 in the Appendix).
- The code is being updated.
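A minimal sketch of change a). The schedule values and the name `step_cond` are illustrative and only serve to make the snippet self-contained; the real cumulative schedules are built inside the model class, and the actual lines to edit are L852 and L1292 mentioned above.

```python
import torch

num_timesteps = 1000
t = torch.randint(0, num_timesteps, (16,))  # sampled timesteps for a batch

# Hypothetical cumulative schedules, used here only to make the snippet runnable.
alphas_cumsum = torch.linspace(0.0, 1.0, num_timesteps)        # cumulative residual coefficients
betas_cumsum = torch.linspace(0.0, 1.0, num_timesteps).sqrt()  # cumulative noise coefficients

# Current setting: the two U-Nets are conditioned on the scaled cumulative schedules.
step_cond = [alphas_cumsum[t] * num_timesteps, betas_cumsum[t] * num_timesteps]

# For other tasks: condition both U-Nets on the plain timestep instead.
step_cond = [t, t]
```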
To install requirements:
```bash
conda env create -f install.yaml
```
To train RDDM, run this command:
```bash
python train.py
```
or
```bash
accelerate launch train.py
```
To evaluate image generation, run:
```bash
cd eval/image_generation_eval/
python fid_and_inception_score.py path_of_gen_img
```
For image restoration, the MATLAB evaluation code is in `./eval`.
The pre-trained models will be provided later.
See Table 3 in the main paper.
We can convert a pre-trained DDIM to RDDM by coefficient transformation (see the code and the sketch below).
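A minimal sketch of such a coefficient transformation, under stated assumptions: the RDDM forward process for generation is taken as I_t = I_0 + ᾱ_t·I_res + β̄_t·ε with I_res = −I_0, so matching the DDIM/DDPM process x_t = √(ᾱ_t^DDPM)·x_0 + √(1 − ᾱ_t^DDPM)·ε yields the mapping below. The function name and schedule are illustrative; the released code is the reference implementation.

```python
import torch

def ddim_to_rddm_schedule(ddim_alphas_cumprod: torch.Tensor):
    """Map a DDIM/DDPM cumulative schedule to RDDM cumulative coefficients.

    Assumes the RDDM forward process
        I_t = I_0 + alphas_cumsum[t] * I_res + betas_cumsum[t] * eps,  with I_res = -I_0,
    so that it coincides with
        x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps.
    """
    alphas_cumsum = 1.0 - ddim_alphas_cumprod.sqrt()    # cumulative residual coefficients
    betas_cumsum = (1.0 - ddim_alphas_cumprod).sqrt()   # cumulative noise coefficients
    return alphas_cumsum, betas_cumsum


# Example with a standard linear beta schedule (values are illustrative):
betas = torch.linspace(1e-4, 0.02, 1000)
ddim_alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
alphas_cumsum, betas_cumsum = ddim_to_rddm_schedule(ddim_alphas_cumprod)
```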
If you find our work useful in your research, please consider citing:
```bibtex
@article{liu2023residual,
  title   = {Residual Denoising Diffusion Models},
  author  = {Jiawei Liu and Qiang Wang and Huijie Fan and Yinong Wang and Yandong Tang and Liangqiong Qu},
  journal = {arXiv preprint arXiv:2308.13712},
  year    = {2023}
}
```
Please contact Jiawei Liu ([email protected]) if you have any questions.