BrainGardenAI/face-parsing.PyTorch

face-parsing.PyTorch

Our fork scripts:

  • EPE_seg_infer.py - takes an input directory of images, predicts a segmentation mask for each one, and saves the masks to the given output directory
  • EPE_inference_with_detector_* - scripts for testing detectors; each takes an input directory of images and saves the detection results to the given output directory
  • extra_video2frames.py - extracts frames from a given video and saves them to a "frames" directory in our dataset structure
  • extra_copyframes_afterwards.py - copies frames from one dataset to another; both datasets must have our dataset structure
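
The EPE_seg_infer.py flow described above (read every image in an input directory, predict a mask for it, write the mask to an output directory) can be sketched roughly as follows. This is an illustrative structure only, not the script's real API: `predict_mask` is a stand-in for the actual BiSeNet forward pass, and the real script writes PNG masks rather than .npy arrays.

```python
import os
import numpy as np

def predict_mask(image_path):
    # Placeholder for the real segmentation model; here we just
    # return a dummy single-channel label map of fixed size.
    return np.zeros((512, 512), dtype=np.uint8)

def segment_directory(input_dir, output_dir, exts=(".jpg", ".jpeg", ".png")):
    """Predict a mask for every image in input_dir and save it to output_dir."""
    os.makedirs(output_dir, exist_ok=True)
    written = []
    for name in sorted(os.listdir(input_dir)):
        if not name.lower().endswith(exts):
            continue  # skip non-image files
        mask = predict_mask(os.path.join(input_dir, name))
        out_path = os.path.join(output_dir, os.path.splitext(name)[0] + "_mask.npy")
        np.save(out_path, mask)
        written.append(out_path)
    return written
```

The same input-directory/output-directory pattern applies to the detector scripts, with detection results in place of masks.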


Training

  1. Prepare the training data:

     -- download the CelebAMask-HQ dataset

     -- change the file paths in prepropess_data.py and run:

        python prepropess_data.py

  2. Train the model on the CelebAMask-HQ dataset by running the train script:

     $ CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 train.py

If you do not wish to train the model, you can download our pre-trained model and save it in res/cp.

Demo

  1. Evaluate the trained model using:
# evaluate using GPU
python test.py
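
test.py saves color visualizations of the predicted parsing maps. CelebAMask-HQ face parsing uses a 19-class label map (background plus 18 facial components such as skin, brows, eyes, nose, lips, hair, and hat), so turning an integer label map into a color image is just a per-class palette lookup. A minimal NumPy sketch, where the palette colors are arbitrary placeholders rather than the repo's actual colors:

```python
import numpy as np

NUM_CLASSES = 19  # background + 18 face components in CelebAMask-HQ

# One RGB color per class; values are illustrative, not the repo's palette.
rng = np.random.default_rng(0)
PALETTE = rng.integers(0, 256, size=(NUM_CLASSES, 3), dtype=np.uint8)
PALETTE[0] = (0, 0, 0)  # keep the background black

def colorize(label_map):
    """Map an HxW integer label map to an HxWx3 RGB visualization."""
    return PALETTE[label_map]  # NumPy integer-array indexing does the lookup

mask = np.array([[0, 1], [2, 17]], dtype=np.uint8)
vis = colorize(mask)  # vis has shape (2, 2, 3)
```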

Face makeup using parsing maps

face-makeup.PyTorch

(Image table: original inputs and color-edited results for hair and lip recoloring.)


About

Using modified BiSeNet for face parsing in PyTorch