Yinuo Liao, Yuanshen Guan, Ruikang Xu, Jiacheng Li, Shida Sun, Zhiwei Xiong*
[Paper Link] [Datasets] [Codes] [Scripts] [Contact]
```bibtex
@inproceedings{Liao_2025_ICLR,
    title     = {Learning Gain Map for Inverse Tone Mapping},
    author    = {Yinuo Liao and Yuanshen Guan and Ruikang Xu and Jiacheng Li and Shida Sun and Zhiwei Xiong},
    booktitle = {The Thirteenth International Conference on Learning Representations},
    month     = {April},
    year      = {2025}
}
```
We provide a Synthetic Dataset and a Real-world Dataset, each organized into the following four parts:

- `image`: the input SDR images
- `gainmap`: the ground-truth gain maps
- `metadata`: the metadata for restoring HDR from an SDR-GM pair (only Qmax here)
- `thumbnail`: the down-sampled SDR images at 256×256 resolution (bicubic interpolation)
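As a rough sketch of how the `metadata` (Qmax) ties these parts together: the HDR image can be recovered from an SDR-GM pair approximately as `HDR = SDR * 2^(G * Qmax)`, with the gain map `G` normalized to [0, 1]. This is the common gain-map convention, not necessarily the exact recovery formula from the paper, and the function name below is ours:

```python
import numpy as np

def recover_hdr(sdr_linear, gainmap, qmax):
    """Sketch of gain-map-based HDR recovery (assumed convention).

    sdr_linear : HxWx3 float array, linear SDR in [0, 1]
    gainmap    : HxW float array in [0, 1] (e.g. the 8-bit map / 255)
    qmax       : scalar, log2 of the maximum gain (from metadata/*.npy)
    """
    gain = np.exp2(gainmap * qmax)        # per-pixel gain in [1, 2^qmax]
    return sdr_linear * gain[..., None]   # broadcast the gain over RGB
```

With `qmax = 3` (the synthetic dataset), a gain-map value of 1 boosts a pixel by 2³ = 8×, matching the 100-nit white to 800-nit peak ratio.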
The dataset directory structure is as follows:
```
synthetic_dataset
├── train
│   ├── image
│   │   └── *.png
│   ├── gainmap
│   │   └── *.png
│   ├── metadata
│   │   └── *.npy
│   └── thumbnail
│       └── *.png
└── test
    ├── image
    │   └── *.png
    ├── gainmap
    │   └── *.png
    ├── metadata
    │   └── *.npy
    └── thumbnail
        └── *.png
```
More information can be found in the paper and in the table below:
|  | Synthetic Dataset | Real-world Dataset |
|---|---|---|
| Source | HDR video frames | captured photos |
| Volume | 900 trainset & 100 testset | 900 trainset & 100 testset |
| SDR White Level | 100 nits | 203 nits |
| HDR Peak Level | 800 nits | 1015 nits |
| Qmax Range | [0, 3] ([0, log₂8]) | [0, 2.32] ([0, log₂5]) |
| Input SDR Image | 3840×2160 8-bit RGB | 4096×3072 8-bit RGB |
| Ground-truth Gain Map | 3840×2160 8-bit Gray | 2048×1536 8-bit Gray |
| Download Link | [BaiduNetDisk] [OneDrive] | [BaiduNetDisk] [OneDrive] |
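The Qmax ranges follow directly from the levels in the table, assuming Qmax = log₂(HDR peak level / SDR white level), which makes the parenthesized values base-2 logarithms:

```python
import math

# Synthetic: 800-nit peak over a 100-nit SDR white level
qmax_synthetic = math.log2(800 / 100)  # = 3.0 (i.e. log2 of 8)

# Real-world: 1015-nit peak over a 203-nit SDR white level
qmax_real = math.log2(1015 / 203)      # ≈ 2.32 (i.e. log2 of 5)
```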
Please download our dataset first, then set `dataroot` in `./codes/options/test/gmnet_test.yml` to the path where you store the dataset. You can also set `pretrain_model_G` to choose the pretrained model. Once the configuration in `gmnet_test.yml` is ready, run:

```
cd codes
python test.py -opt options/config/test_real.yml
```

The test results will be saved to `./results/test_name`, and you can evaluate the quantitative metrics with `matlab_evaluation` in [Scripts].
To facilitate the training process, please modify the data path in `crop_training_patch.py` in [Scripts] and run it to crop the images into patches:

```
cd scripts
python crop_training_patch.py
```

It will generate patches of `image` in the `image_sub` folder and patches of `gainmap` in the `gainmap_sub` folder. After that, set `dataroot` in `./codes/options/train/gmnet_train.yml` to the sub-folders, then run:

```
cd codes
python train.py -opt options/config/train_real.yml
```
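For reference, the cropping step amounts to sliding a fixed-size window over each image. Below is a minimal sketch of the idea; the patch size and stride are illustrative, so use the values configured in `crop_training_patch.py`:

```python
import numpy as np

def crop_patches(img, patch=480, stride=240):
    """Yield patch x patch crops from an HxWxC array with the given stride."""
    h, w = img.shape[:2]
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            yield img[y:y + patch, x:x + patch]
```

Note that the real-world gain maps are half the SDR resolution (2048×1536 vs. 4096×3072), so their patch size and stride must be scaled accordingly to keep image and gain-map patches aligned.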
The checkpoints and training states can be found in `./experiments/train_name`.
We provide several practical scripts in `./scripts`; the details are as follows:
- `matlab_evaluation`: This folder stores MATLAB scripts for evaluation. Please first download and install HDRVDP3 and HDR_Toolbox and place the two folders into the `./scripts/matlab_evaluate` folder. Then modify the PD and GT paths in `evaluation.m` and run it to get the quantitative metrics.
- `crop_training_patch.py`: This script crops the images into patches for training. (from HDRTVNet)
- `gm_hdr_decode.py`: The double-layer HDR image is stored in one single file. This script extracts `image`, `gainmap`, and `qmax` from the double-layer file.
- `pq_visualize.py`: This script converts a linear HDR image in `nit` units to an HDR image via the PQ-OETF for visualization. The PQ-EOTF is also provided.
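For orientation, the PQ-OETF such a script applies is the SMPTE ST 2084 transfer function, which maps absolute luminance in nits to a [0, 1] signal. Below is the standard formula as a self-contained sketch; it is not necessarily the exact code in `pq_visualize.py`:

```python
# SMPTE ST 2084 (PQ) OETF constants
M1 = 2610 / 16384        # ≈ 0.1593
M2 = 2523 / 4096 * 128   # ≈ 78.84
C1 = 3424 / 4096         # ≈ 0.8359
C2 = 2413 / 4096 * 32    # ≈ 18.85
C3 = 2392 / 4096 * 32    # ≈ 18.69

def pq_oetf(nits):
    """Map absolute luminance (0..10000 nits) to a PQ-encoded value in [0, 1]."""
    l = max(nits, 0.0) / 10000.0  # normalize to the 10,000-nit PQ range
    lp = l ** M1
    return ((C1 + C2 * lp) / (1 + C3 * lp)) ** M2
```

For example, `pq_oetf(100)` is about 0.508, which is why 100-nit SDR white sits near mid-gray in a PQ-encoded frame.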
If you have any questions, please submit an issue or contact [email protected]
We appreciate the following github repositories for their valuable work:
- BasicSR: https://github.com/xinntao/BasicSR
- HDRSample: https://github.com/JonaNorman/HDRSample
- HDR Toys: https://github.com/natural-harmonia-gropius/hdr-toys