PyTorch implementation of DCGAN, from the paper "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks" (Radford et al.).
To produce 64 x 64 outputs, I set the kernel size, stride, and padding to 4, 2, and 1, respectively, so each transposed convolution in the generator doubles the spatial resolution.
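For reference, a minimal sketch of a 64 x 64 generator built from those settings; the latent size of 100 and the channel widths (512 -> 256 -> 128 -> 64) are standard DCGAN choices, not necessarily this repo's exact values.

```python
import torch
import torch.nn as nn

# Each ConvTranspose2d(kernel_size=4, stride=2, padding=1) doubles the spatial size:
# out = (in - 1) * stride - 2 * padding + kernel_size = 2 * in
netG = nn.Sequential(
    # latent z: (N, 100, 1, 1) -> (N, 512, 4, 4)
    nn.ConvTranspose2d(100, 512, kernel_size=4, stride=1, padding=0, bias=False),
    nn.BatchNorm2d(512), nn.ReLU(True),
    # 4x4 -> 8x8
    nn.ConvTranspose2d(512, 256, kernel_size=4, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(256), nn.ReLU(True),
    # 8x8 -> 16x16
    nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(128), nn.ReLU(True),
    # 16x16 -> 32x32
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(64), nn.ReLU(True),
    # 32x32 -> 64x64, Tanh maps pixel values to [-1, 1]
    nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1, bias=False),
    nn.Tanh(),
)

z = torch.randn(8, 100, 1, 1)
print(netG(z).shape)  # torch.Size([8, 3, 64, 64])
```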
$ python train.py
If you have multiple GPUs, use the command below to train faster; I use it for ImageNet.
$ python -m torch.distributed.launch --nproc_per_node={num gpus} --master_port={port} train_ddp.py --dataset_name 'imagenet' --dataset_path {your dataset path} --n_batch {batchsize}
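For context, this is roughly what a `train_ddp.py` launched that way typically does; the dataset and model below are stand-ins, and the flag handling is an assumption about the script rather than its actual code.

```python
import argparse
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

parser = argparse.ArgumentParser()
# torch.distributed.launch passes --local_rank; torchrun sets the LOCAL_RANK env var instead
parser.add_argument("--local_rank", type=int, default=int(os.environ.get("LOCAL_RANK", 0)))
parser.add_argument("--dataset_name", type=str, default="imagenet")  # unused in this sketch
parser.add_argument("--dataset_path", type=str, default="")          # unused in this sketch
parser.add_argument("--n_batch", type=int, default=64)
args = parser.parse_args()

dist.init_process_group(backend="nccl")           # one process per GPU
torch.cuda.set_device(args.local_rank)
device = torch.device("cuda", args.local_rank)

# stand-in dataset; the real script builds ImageNet/LSUN from --dataset_path
dataset = TensorDataset(torch.randn(1024, 3, 64, 64))
sampler = DistributedSampler(dataset)             # shards the data across processes
loader = DataLoader(dataset, batch_size=args.n_batch, sampler=sampler,
                    num_workers=2, pin_memory=True)

model = nn.Conv2d(3, 1, 4, 2, 1).to(device)       # placeholder for the DCGAN nets
model = DDP(model, device_ids=[args.local_rank])  # gradients sync across GPUs automatically

for epoch in range(2):
    sampler.set_epoch(epoch)                      # reshuffle differently each epoch
    for (x,) in loader:
        model(x.to(device)).mean().backward()

dist.destroy_process_group()
```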
Generated-sample grids are shown at the following epochs:
- Epochs 10 - 50
- Epochs 100, 200, 300, 500, 1000
- Epochs 1000
- Epochs 200
- Epochs 200
- model code
- train code
- add model save code
- add checkpoint loading
- add inference code: generate samples (see the checkpoint/sampling sketch after this list)
- add other datasets (ImageNet)
- add other datasets (LSUN)
- visualize results
- add argparse for hyperparams
- update wandb to log losses
- update wandb to log samples (see the argparse / wandb / tqdm sketch after this list)
- tqdm : progress bar
- ddp : support multi-gpu training
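The checkpoint-saving/loading and sample-generation items above could look roughly like the following; the function names, file names, and state-dict keys are placeholders of mine, not necessarily what this repo uses.

```python
import torch
from torchvision.utils import save_image

def save_checkpoint(netG, netD, optG, optD, epoch, path="checkpoint.pt"):
    # bundle everything needed to resume training in one file
    torch.save({
        "epoch": epoch,
        "netG": netG.state_dict(), "netD": netD.state_dict(),
        "optG": optG.state_dict(), "optD": optD.state_dict(),
    }, path)

def load_checkpoint(netG, netD, optG, optD, path="checkpoint.pt", device="cpu"):
    # restore model/optimizer states and return the epoch to resume from
    ckpt = torch.load(path, map_location=device)
    netG.load_state_dict(ckpt["netG"])
    netD.load_state_dict(ckpt["netD"])
    optG.load_state_dict(ckpt["optG"])
    optD.load_state_dict(ckpt["optD"])
    return ckpt["epoch"] + 1

@torch.no_grad()
def generate_samples(netG, n=64, z_dim=100, out="samples.png", device="cpu"):
    # sample latent vectors and save a grid of generated images
    netG.eval()
    z = torch.randn(n, z_dim, 1, 1, device=device)
    fake = netG(z)                                 # Tanh output in [-1, 1]
    save_image(fake, out, nrow=8, normalize=True)  # rescale to [0, 1] for saving
```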
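Similarly, the argparse / wandb / tqdm items might wire together along these lines; the flag names, metric keys, and project name are illustrative assumptions, and the data and losses are stand-ins for the real training loop.

```python
import argparse

import torch
import wandb
from tqdm import tqdm

parser = argparse.ArgumentParser()
parser.add_argument("--n_epochs", type=int, default=2)
parser.add_argument("--n_batch", type=int, default=16)
parser.add_argument("--lr", type=float, default=2e-4)
args = parser.parse_args()

wandb.init(project="dcgan", config=vars(args))     # hyperparams land in the run config

# stand-in for the real DataLoader over the training set
loader = [torch.randn(args.n_batch, 3, 64, 64) for _ in range(10)]

for epoch in range(args.n_epochs):
    pbar = tqdm(loader, desc=f"epoch {epoch}")      # tqdm progress bar per epoch
    for real in pbar:
        d_loss, g_loss = real.mean().item(), real.std().item()  # stand-ins for the D/G losses
        pbar.set_postfix(d_loss=d_loss, g_loss=g_loss)
        wandb.log({"loss/D": d_loss, "loss/G": g_loss})          # per-step loss curves
    # once per epoch, log an image (a generated-sample grid in the real script)
    grid = (torch.rand(64, 64, 3).numpy() * 255).astype("uint8")
    wandb.log({"samples": wandb.Image(grid)})
```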