DualGAN: Unsupervised Dual Learning for Image-to-Image Translation
Please cite the paper if this code is used in your research.
- Linux
- Python (2.7 or later)
- numpy
- scipy
- NVIDIA GPU + CUDA 8.0 + CuDNN v5.1
- TensorFlow (1.0 or later)
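Before running anything, a quick sanity check of the Python environment can save time. This is an illustrative snippet, not part of the repository:

```python
# Illustrative environment check (not part of this repo): prints library
# versions and the devices TensorFlow can see, so CUDA/CuDNN problems
# surface before training starts.
import numpy as np
import scipy
import tensorflow as tf
from tensorflow.python.client import device_lib

print("numpy: " + np.__version__)
print("scipy: " + scipy.__version__)
print("tensorflow: " + tf.__version__)  # expect 1.0 or later
# A GPU entry such as '/device:GPU:0' should appear if CUDA 8.0 + CuDNN are set up.
print([d.name for d in device_lib.list_local_devices()])
```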
- Clone this repo:
git clone https://github.com/duxingren14/DualGAN.git
cd DualGAN
- Download a dataset (e.g., sketch-photo):
bash ./datasets/download_dataset.sh sketch-photo
- Download a pre-trained model (e.g., sketch-photo):
bash ./checkpoint/download_ckpt.sh sketch-photo
- Train the model:
python main.py --phase train --dataset_name sketch-photo --image_size 256 --lambda_A 1000.0 --lambda_B 1000.0 --epoch 100
- Test the model:
python main.py --phase test --dataset_name sketch-photo --image_size 256 --lambda_A 1000.0 --lambda_B 1000.0 --epoch 100
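The --lambda_A and --lambda_B flags set the weights of the two recovery (reconstruction) losses in the generator objective. The sketch below illustrates how such weights typically enter a DualGAN-style loss; it is based on the DualGAN paper rather than the repository's exact code, and the function and argument names are hypothetical:

```python
# Sketch of how lambda_A / lambda_B weight the cycle reconstruction errors
# (illustration only; names are hypothetical, not taken from main.py).
import tensorflow as tf

def generator_loss(real_A, recov_A, real_B, recov_B,
                   critic_fake_A, critic_fake_B,
                   lambda_A=1000.0, lambda_B=1000.0):
    # WGAN-style adversarial terms from the two critics.
    adv = -tf.reduce_mean(critic_fake_B) - tf.reduce_mean(critic_fake_A)
    # Cycle reconstruction errors: recov_A = G_BA(G_AB(real_A)), and vice versa.
    recon_A = tf.reduce_mean(tf.abs(real_A - recov_A))
    recon_B = tf.reduce_mean(tf.abs(real_B - recov_B))
    # Larger lambdas push translations toward being faithfully recoverable;
    # the commands above use 1000.0 for both domains.
    return adv + lambda_A * recon_A + lambda_B * recon_B
```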
Similarly, run experiments on the facades dataset with the following commands:
bash ./datasets/download_dataset.sh facades
python main.py --phase train --dataset_name facades --lambda_A 1000.0 --lambda_B 1000.0 --epoch 100
python main.py --phase test --dataset_name facades --lambda_A 1000.0 --lambda_B 1000.0 --epoch 100
If you cannot download the datasets or pretrained models using the scripts, please download them manually from the links below:
- all datasets from Google Drive
- pretrained models from Google Drive
The code is built on top of pix2pix-tensorflow and DCGAN-tensorflow. Thanks to their authors for these prior contributions!