Our fork scripts:

- `EPE_seg_infer.py` - takes an input directory with images, predicts segmentation masks for them, and saves the masks to the given output directory
- `EPE_inference_with_detector_*` - scripts for testing detectors; they take an input directory with images and save detection results to the given output directory
- `extra_video2frames.py` - extracts frames from a given video and saves them to the "frames" directory in our dataset structure
- `extra_copyframes_afterwards.py` - copies frames from one dataset to another; both datasets must have our dataset structure
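For reference, the frame-extraction step of `extra_video2frames.py` can be sketched roughly as follows. This is a minimal sketch, not the script itself: it assumes OpenCV for decoding and zero-padded PNG frame names, both of which are assumptions; the real script defines the actual naming and directory layout.

```python
import os


def frame_name(index: int) -> str:
    """Zero-padded frame file name (the naming scheme is an assumption)."""
    return f"{index:06d}.png"


def video2frames(video_path: str, dataset_root: str) -> int:
    """Extract every frame of `video_path` into `<dataset_root>/frames/`
    (our dataset structure). Returns the number of frames written."""
    import cv2  # imported lazily; OpenCV is only needed for actual extraction

    out_dir = os.path.join(dataset_root, "frames")
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream or read error
            break
        cv2.imwrite(os.path.join(out_dir, frame_name(count)), frame)
        count += 1
    cap.release()
    return count
```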
Prepare training data:

- Download the CelebAMask-HQ dataset
- Change the file path in `prepropess_data.py` and run:

      python prepropess_data.py
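What the preprocessing roughly does: CelebAMask-HQ ships one binary mask per facial part, and `prepropess_data.py` merges them into a single integer label mask per image. A minimal sketch of that merge step is below; the part list and its order here are assumptions, so check the script for the real label order before relying on the indices.

```python
import numpy as np

# Part names from CelebAMask-HQ; the order below is an ASSUMPTION and
# determines both the label index and which part wins on overlap.
PARTS = ["skin", "nose", "eye_g", "l_eye", "r_eye", "l_brow", "r_brow",
         "l_ear", "r_ear", "mouth", "u_lip", "l_lip", "hair", "hat",
         "ear_r", "neck_l", "neck", "cloth"]


def merge_part_masks(part_masks: dict) -> np.ndarray:
    """Combine binary per-part masks ((H, W) uint8 arrays keyed by part
    name) into one label mask: pixel value = part index + 1, 0 = background.
    Later parts in PARTS overwrite earlier ones where they overlap."""
    shape = next(iter(part_masks.values())).shape
    label = np.zeros(shape, dtype=np.uint8)
    for idx, name in enumerate(PARTS):
        mask = part_masks.get(name)
        if mask is not None:
            label[mask > 0] = idx + 1
    return label
```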
Train the model on the CelebAMask-HQ dataset: just run the train script:

    $ CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 train.py

If you do not wish to train the model, you can download our pre-trained model and save it in `res/cp`.
Evaluate the trained model using:

    # evaluate using GPU
    python test.py
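In outline, evaluation runs the parsing net on each image and visualizes the argmax label map. The sketch below shows that shape of the computation; the color palette is arbitrary (the repo's `test.py` uses its own colors), and the assumption that the net returns a tuple whose first element is the full-resolution output should be checked against the actual model code.

```python
import numpy as np


def colorize(label_mask: np.ndarray, palette: np.ndarray = None) -> np.ndarray:
    """Map an (H, W) integer label mask to an (H, W, 3) RGB image.
    The default palette here is arbitrary, not the repo's."""
    if palette is None:
        rng = np.random.default_rng(0)
        palette = rng.integers(0, 256, size=(256, 3), dtype=np.uint8)
        palette[0] = 0  # keep background black
    return palette[label_mask]


def predict_mask(net, image_tensor):
    """Run the parsing net on a normalized (1, 3, H, W) tensor and return
    the argmax label mask as a numpy array. ASSUMES the net returns a
    tuple of outputs with the main prediction first."""
    import torch  # lazy import so colorize() works without torch installed

    with torch.no_grad():
        out = net(image_tensor)[0]          # (1, n_classes, H, W)
        return out.squeeze(0).argmax(0).cpu().numpy()
```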
| | Hair | Lip |
|---|---|---|
| Original Input | *(image)* | *(image)* |
| Color | *(image)* | *(image)* |