- [2024-01-16] Our paper has been accepted to ICLR 2024.
- [2023-12-26] Project created.
This repo includes the source code of the paper: "Dual Associated Encoder for Face Restoration" by Yu-Ju Tsai, Yu-Lun Liu, Lu Qi, Kelvin C.K. Chan, and Ming-Hsuan Yang.
We propose a novel dual-branch framework named DAEFR. Our method introduces an auxiliary LQ branch that extracts crucial information from the LQ inputs. Additionally, we incorporate association training to promote effective synergy between the two branches, enhancing code prediction and output quality. We evaluate the effectiveness of DAEFR on both synthetic and real-world datasets, demonstrating its superior performance in restoring facial details.
- python>=3.8
- pytorch>=1.10.0
- pytorch-lightning==1.0.8
- omegaconf==2.0.0
- basicsr==1.3.3.4
Please set up the environment with the following commands:
```bash
conda create -n DAEFR python=3.8 -y
conda activate DAEFR
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt
```
Warning: Different versions of pytorch-lightning and omegaconf may lead to errors or different results.
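You can quickly confirm that the pinned versions are active with a minimal check (all of these packages expose `__version__` in the pinned releases):

```python
# Quick check that the installed versions match the pinned requirements.
import basicsr
import omegaconf
import pytorch_lightning
import torch

print("torch:", torch.__version__)                          # expect 1.10.1+cu113
print("pytorch-lightning:", pytorch_lightning.__version__)  # expect 1.0.8
print("omegaconf:", omegaconf.__version__)                  # expect 2.0.0
print("basicsr:", basicsr.__version__)                      # expect 1.3.3.4
```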
Training Dataset:
- Training data: The HQ codebook, LQ codebook, and DAEFR are trained on FFHQ, which can be obtained from the FFHQ repository.
- The original FFHQ images are 1024x1024; we resize them to 512x512 with bilinear interpolation in our work.
- We provide our resized 512x512 FFHQ on HuggingFace.
- Link this 512x512 version of the dataset to `./datasets/FFHQ/image512x512`.
- If you use our FFHQ dataset, please either rename the files or change line 43 of `/Your/conda/envs/DAEFR/lib/python3.8/site-packages/basicsr/data/ffhq_dataset.py` to `f'{v:05d}.png'` (see the sketch below).
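For reference, the relevant line in basicsr builds FFHQ image paths from an 8-digit index; a sketch of the change (paraphrased from basicsr 1.3.x, so the surrounding code may differ in your installed version):

```python
# basicsr/data/ffhq_dataset.py, around line 43 (paraphrased).
# Stock basicsr assumes 8-digit FFHQ filenames such as 00000000.png:
#   self.paths = [osp.join(self.gt_folder, f'{v:08d}.png') for v in range(70000)]
# Our resized 512x512 FFHQ uses 5-digit names (00000.png ... 69999.png):
self.paths = [osp.join(self.gt_folder, f'{v:05d}.png') for v in range(70000)]
```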
Testing Dataset:
Please put the following datasets in the `./dataset/` folder.
| Datasets | Short Description | Download | DAEFR results |
| --- | --- | --- | --- |
| CelebA-Test (HQ) | 3,000 HQ ground-truth images for evaluation | celeba_512_validation.zip | None |
| CelebA-Test (LQ) | 3,000 LQ synthetic images for testing | self_celeba_512_v2.zip | Link |
| LFW-Test (LQ) | 1,711 real-world images for testing | lfw_cropped_faces.zip | Link |
| WIDER-Test (LQ) | 970 real-world images for testing | Wider-Test.zip | Link |
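After extracting, you can sanity-check the image counts against the table above. A minimal sketch; the folder names are assumptions about how each archive unpacks, so adjust them to match your layout:

```python
# Verify the extracted test sets contain the expected number of images.
import glob
import os

expected = {
    "dataset/celeba_512_validation": 3000,
    "dataset/self_celeba_512_v2": 3000,
    "dataset/lfw_cropped_faces": 1711,
    "dataset/Wider-Test": 970,
}
for folder, count in expected.items():
    n = len(glob.glob(os.path.join(folder, "*.png"))) + \
        len(glob.glob(os.path.join(folder, "*.jpg")))
    status = "OK" if n == count else f"expected {count}"
    print(f"{folder}: {n} images ({status})")
```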
Model:
Pretrained models used for training and the trained models of our DAEFR can be obtained from HuggingFace.
Link these models to `./experiments`.
You can use the following command to download the model:
```bash
python download_model_from_huggingface.py
```
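If you prefer not to use the helper script, the same download can be done with `huggingface_hub` directly. A sketch only; the `repo_id` below is a placeholder, so substitute the actual model repo from the project page:

```python
# Download the whole model repo into ./experiments.
from huggingface_hub import snapshot_download

# NOTE: "your-org/DAEFR" is a placeholder repo id, not the actual one.
snapshot_download(repo_id="your-org/DAEFR", local_dir="experiments")
```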
Make sure the models are stored as follows:
```
experiments/
|-- HQ_codebook.ckpt
|-- LQ_codebook.ckpt
|-- Association_stage.ckpt
|-- DAEFR_model.ckpt
|-- pretrained_models/
|   |-- FFHQ_eye_mouth_landmarks_512.pth
|   |-- arcface_resnet18.pth
|   |-- inception_FFHQ_512-f7b384ab.pth
|   |-- lpips/
|   |   |-- vgg.pth
```
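A quick way to confirm everything is in place (a minimal check mirroring the layout above):

```python
# Check that every file from the expected layout exists under experiments/.
import os

expected_files = [
    "experiments/HQ_codebook.ckpt",
    "experiments/LQ_codebook.ckpt",
    "experiments/Association_stage.ckpt",
    "experiments/DAEFR_model.ckpt",
    "experiments/pretrained_models/FFHQ_eye_mouth_landmarks_512.pth",
    "experiments/pretrained_models/arcface_resnet18.pth",
    "experiments/pretrained_models/inception_FFHQ_512-f7b384ab.pth",
    "experiments/pretrained_models/lpips/vgg.pth",
]
for path in expected_files:
    print(("OK      " if os.path.isfile(path) else "MISSING ") + path)
```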
If the models and dataset folders are placed as described above, you can obtain the results with:

```bash
bash run_test_and_evaluation.sh
```
Or you can modify and run the following script:

```bash
sh scripts/test.sh
```
Or you can use the following command for testing:
CUDA_VISIBLE_DEVICES=$GPU python -u scripts/test.py \
--outdir $outdir \
-r $checkpoint \
-c $config \
--test_path $align_test_path \
--aligned
Training proceeds in stages: first the HQ and LQ codebooks, then association, then the final DAEFR model. Run:

```bash
sh scripts/run_HQ_codebook_training.sh
sh scripts/run_LQ_codebook_training.sh
sh scripts/run_association_stage_training.sh
sh scripts/run_DAEFR_training.sh
```
Note.
- Please modify the related paths to your own.
- If you encounter problems with FFHQ dataset names during training, please either rename the files or change line 43 of `/Your/conda/envs/DAEFR/lib/python3.8/site-packages/basicsr/data/ffhq_dataset.py` to `f'{v:05d}.png'`, as described in the Training Dataset section above.
- The second stage is for model association. You need to add your trained HQ_Codebook and LQ_Codebook models to `ckpt_path_HQ` and `ckpt_path_LQ` in `config/Association_stage.yaml` (a path check is sketched after this list).
- The final stage is for face restoration. You need to add your trained HQ_Codebook and Association models to `ckpt_path_HQ` and `ckpt_path_LQ` in `config/DAEFR.yaml`.
- Our model is trained on 8 A100 40GB GPUs with a batch size of 4.
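Before launching a stage, you can verify that the checkpoint paths in the stage configs resolve to real files. A minimal sketch with OmegaConf; it assumes the keys are literally named `ckpt_path_HQ` and `ckpt_path_LQ` somewhere in the config tree:

```python
# Walk a stage config and report whether ckpt_path_HQ / ckpt_path_LQ
# point to existing checkpoint files.
import os
from omegaconf import OmegaConf


def find_keys(node, names, prefix=""):
    """Recursively yield (dotted_key, value) pairs for the given key names."""
    if isinstance(node, dict):
        for key, value in node.items():
            path = f"{prefix}.{key}" if prefix else key
            if key in names:
                yield path, value
            else:
                yield from find_keys(value, names, path)
    elif isinstance(node, list):
        for i, value in enumerate(node):
            yield from find_keys(value, names, f"{prefix}[{i}]")


for cfg_file in ("config/Association_stage.yaml", "config/DAEFR.yaml"):
    cfg = OmegaConf.to_container(OmegaConf.load(cfg_file), resolve=True)
    for key, value in find_keys(cfg, {"ckpt_path_HQ", "ckpt_path_LQ"}):
        status = "OK" if value and os.path.isfile(str(value)) else "MISSING"
        print(f"{cfg_file}: {key} = {value} [{status}]")
```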
You can evaluate the restored results with:

```bash
sh scripts/metrics/run.sh
```
Note.
- You need to add the path of the CelebA-Test dataset in the script if you want to compute IDA, PSNR, SSIM, and LPIPS. You also need to modify the names of the restored folders for evaluation (a standalone metric sketch follows these notes).
- For LMD and NIQE, we use the evaluation code from VQFR. Please refer to their repo for more details.
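For reference, full-reference metrics like PSNR, SSIM, and LPIPS compare each restored image against its CelebA-Test ground truth. A minimal standalone sketch, not the repo's evaluation code; it assumes the restored files keep the ground-truth filenames, and the output folder below is a placeholder:

```python
# Compute average PSNR / SSIM / LPIPS between restored images and ground truth.
import glob
import os

import lpips
import numpy as np
import torch
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

restored_dir = "results/celeba_test"      # hypothetical restored-output folder
gt_dir = "dataset/celeba_512_validation"  # CelebA-Test (HQ) ground truth

loss_fn = lpips.LPIPS(net="alex")


def to_tensor(im):
    """HWC uint8 in [0, 255] -> NCHW float in [-1, 1], as LPIPS expects."""
    return torch.from_numpy(im).permute(2, 0, 1).float()[None] / 127.5 - 1


psnrs, ssims, lpips_vals = [], [], []
for gt_path in sorted(glob.glob(os.path.join(gt_dir, "*.png"))):
    restored_path = os.path.join(restored_dir, os.path.basename(gt_path))
    gt = np.array(Image.open(gt_path).convert("RGB"))
    out = np.array(Image.open(restored_path).convert("RGB"))

    psnrs.append(peak_signal_noise_ratio(gt, out, data_range=255))
    ssims.append(structural_similarity(gt, out, channel_axis=2, data_range=255))
    with torch.no_grad():
        lpips_vals.append(loss_fn(to_tensor(gt), to_tensor(out)).item())

print(f"PSNR: {np.mean(psnrs):.2f}  "
      f"SSIM: {np.mean(ssims):.4f}  "
      f"LPIPS: {np.mean(lpips_vals):.4f}")
```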
If you find our work useful, please consider citing:

```
@inproceedings{tsai2024dual,
  title={Dual Associated Encoder for Face Restoration},
  author={Tsai, Yu-Ju and Liu, Yu-Lun and Qi, Lu and Chan, Kelvin CK and Yang, Ming-Hsuan},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024}
}
```
We thank everyone who makes their code and models available, especially Taming Transformer, basicsr, RestoreFormer, CodeFormer, and VQFR.
For any questions, feel free to email [email protected].