# HazeFlow: Revisit Haze Physical Model as ODE and Realistic Non-Homogeneous Haze Generation for Real-World Dehazing (ICCV 2025)

Junseong Shin\*, Seungwoo Chung\*, Yunjeong Yang, Tae Hyun Kim†

This is the official implementation of the ICCV 2025 paper "HazeFlow: Revisit Haze Physical Model as ODE and Realistic Non-Homogeneous Haze Generation for Real-World Dehazing" [paper] / [project page].

More qualitative and quantitative results can be found on the [project page].
## Installation

```shell
git clone https://github.com/cloor/HazeFlow.git
cd HazeFlow
pip install -r requirements.txt
```

or

```shell
git clone https://github.com/cloor/HazeFlow.git
cd HazeFlow
conda env create -f environment.yaml
```
## Checkpoints

Pretrained checkpoints can be downloaded here.
Figure: Example of non-homogeneous haze synthesized via MCBM. (a) Generated hazy image. (b) Transmission map T_MCBM. (c) Spatially varying density coefficient map β̃.
## Haze Generation (MCBM)

You can generate haze density maps using MCBM by running the command below:

```shell
python haze_generation/brownian_motion_generation.py
```
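The repository's script implements the authors' MCBM procedure. As a rough illustration of the general idea only — accumulating a 2-D random walk into a smooth, spatially varying density field — the following sketch is an assumption for intuition, not the paper's exact algorithm:

```python
import numpy as np

def brownian_density_map(h=128, w=128, steps=20000, sigma=3.0, seed=0):
    """Accumulate the visit counts of a 2-D random walk into a map,
    then Gaussian-blur it to obtain a smooth non-homogeneous field.
    Illustrative sketch only -- not the repository's exact MCBM."""
    rng = np.random.default_rng(seed)
    density = np.zeros((h, w))
    y, x = h // 2, w // 2
    for dy, dx in rng.integers(-1, 2, size=(steps, 2)):
        y = int(np.clip(y + dy, 0, h - 1))
        x = int(np.clip(x + dx, 0, w - 1))
        density[y, x] += 1.0
    # separable Gaussian blur with a 1-D kernel applied along both axes
    k = np.exp(-0.5 * (np.arange(-3 * int(sigma), 3 * int(sigma) + 1) / sigma) ** 2)
    k /= k.sum()
    density = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, density)
    density = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, density)
    return density / (density.max() + 1e-8)  # normalize to [0, 1]

beta_map = brownian_density_map()
```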
## Datasets

Please download and organize the datasets as follows:

| Dataset | Description | Download Link |
|---|---|---|
| RIDCP500 | 500 clear RGB images | rgb_500 / da_depth_500 |
| RTTS | Real-world task-driven testing set | Link |
| URHI | Urban and rural haze images (duplicate-removed version) | Link |
```
HazeFlow/
├── datasets/
│   ├── RIDCP500/
│   │   ├── rgb_500/
│   │   ├── da_depth_500/
│   │   └── MCBM/
│   ├── RTTS/
│   ├── URHI/
│   └── custom/
```
Before training, make sure the datasets are structured as shown above. Additionally, prepare the MCBM-based haze density maps and the corresponding depth maps. To estimate depth maps, follow the instructions in the Depth Anything V2 repository and place the resulting depth maps in the `datasets/RIDCP500/da_depth_500/` directory. Once the depth maps are ready, you can proceed to training and inference as described below.
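Given a clear image, a depth map, and a non-homogeneous density map β̃, hazy training pairs can be rendered with the standard atmospheric scattering model I = J·t + A(1 − t), t = exp(−β̃·d). The sketch below shows that generic model; the function name and parameter values are illustrative, not the repository's API:

```python
import numpy as np

def synthesize_haze(clear, depth, beta_map, A=0.9):
    """Render a hazy image with the atmospheric scattering model:
        I(x) = J(x) * t(x) + A * (1 - t(x)),  t(x) = exp(-beta(x) * d(x)).
    clear:    (H, W, 3) float image in [0, 1]
    depth:    (H, W) depth map (larger = farther)
    beta_map: (H, W) spatially varying haze density coefficients
    Illustrative sketch only, not the repository's exact pipeline."""
    t = np.exp(-beta_map * depth)[..., None]  # per-pixel transmission
    return clear * t + A * (1.0 - t)

# toy example: constant-depth scene with a left-to-right density ramp
clear = np.full((4, 8, 3), 0.2)
depth = np.ones((4, 8))
beta = np.tile(np.linspace(0.0, 2.0, 8), (4, 1))
hazy = synthesize_haze(clear, depth, beta)
```

Where β̃ is zero the pixel keeps its clear value; as β̃·d grows, the pixel approaches the atmospheric light A.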
## Training

### Pretrain

We propose using a color loss to reduce color distortion. You can configure the loss type by editing `--config.training.loss_type` in `pretrain.sh`.
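The exact color loss is defined in the code; a common formulation for reducing color distortion — an assumption here, not necessarily the one selected by `--config.training.loss_type` — penalizes the angular (cosine) difference between predicted and ground-truth RGB vectors per pixel:

```python
import numpy as np

def cosine_color_loss(pred, gt, eps=1e-8):
    """Mean cosine distance between per-pixel RGB vectors.
    A common color-consistency loss; assumed formulation, not
    necessarily the one used in pretrain.sh.
    pred, gt: (H, W, 3) float arrays."""
    dot = (pred * gt).sum(-1)
    norm = np.linalg.norm(pred, axis=-1) * np.linalg.norm(gt, axis=-1)
    cos = dot / (norm + eps)
    return float(np.mean(1.0 - cos))

img = np.random.default_rng(0).random((8, 8, 3))
loss = cosine_color_loss(img, np.clip(img + 0.1, 0.0, 1.0))
```

Because cosine similarity ignores vector magnitude, a loss of this form penalizes hue shifts while staying insensitive to uniform brightness changes.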
```shell
sh pretrain.sh
```
### Reflow

Specify the pretrained checkpoint from the pretrain phase by editing `--config.flow.pre_train_model` in `reflow.sh`.
```shell
sh reflow.sh
```
### Distill

Specify the checkpoint obtained from the reflow phase by editing `--config.flow.pre_train_model` in `distill.sh`.
```shell
sh distill.sh
```
## Inference

To run inference on your own images, place them in the `datasets/custom/` directory. Then, configure the following options in `sampling.sh`:

- `--config.sampling.ckpt`: path to your trained model checkpoint
- `--config.data.dataset`: name of your dataset (`rtts` or `custom`)
- `--config.data.test_data_root`: path to your input images
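Put together, `sampling.sh` might look like the following sketch. The entry-point name `sampling.py` and the checkpoint path are assumptions; only the flags come from this README:

```shell
#!/bin/sh
# Illustrative sketch of sampling.sh -- the script name (sampling.py) and
# checkpoint path are assumptions; the flags are taken from this README.
python sampling.py \
  --config.sampling.ckpt checkpoints/hazeflow.pth \
  --config.data.dataset custom \
  --config.data.test_data_root datasets/custom/
```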
Finally, run:

```shell
sh sampling.sh
```
## Acknowledgements

Our implementation is based on RectifiedFlow and SlimFlow. We sincerely thank the authors for their contributions to the community.
## Citation

If you use this code or find our work helpful, please cite our paper:

```bibtex
@inproceedings{shin2025hazeflow,
  title={HazeFlow: Revisit Haze Physical Model as ODE and Realistic Non-Homogeneous Haze Generation for Real-World Dehazing},
  author={Junseong Shin and Seungwoo Chung and Yunjeong Yang and Tae Hyun Kim},
  booktitle={ICCV},
  year={2025}
}
```
## Contact

If you have any questions, please contact [email protected].