
NEO-3DF: Novel Editing-Oriented 3D Face Creation and Reconstruction Framework

License: CC BY-NC


This repository is the official implementation of our paper:

Yan, P., Gregson, J., Tang, Q., Ward, R., Xu, Z., & Du, S. “NEO-3DF: Novel Editing-Oriented 3D Face Creation and Reconstruction”. Accepted at the 2022 Asian Conference on Computer Vision (ACCV), Macau SAR, China.

Citation


Please cite the following paper if you find this work helpful to your research:

@inproceedings{yan2022neo,
  title={NEO-3DF: Novel Editing-Oriented 3D Face Creation and Reconstruction},
  author={Yan, Peizhi and Gregson, James and Tang, Qiang and Ward, Rabab and Xu, Zhan and Du, Shan},
  booktitle={Proceedings of the Asian Conference on Computer Vision},
  pages={486--502},
  year={2022}
}

Dependencies (listed versions are recommended but not strictly required)


  • Python == 3.6.7
  • Cython == 0.29.22
  • dlib == 19.22.0
  • face-alignment == 1.3.4
  • facenet-pytorch == 2.5.2
  • jupyter == 1.0.0
  • matplotlib == 3.2.1
  • networkx == 2.3
  • ninja == 1.10.2
  • numpy == 1.19.5
  • nvdiffrast == 0.2.5
  • open3d == 0.12.0
  • opencv-python == 4.1.0.25
  • pandas == 0.25.0
  • Pillow == 8.3.2
  • pytorch3d == 0.5.0
  • scikit-image == 0.17.2
  • scikit-learn == 0.24.2
  • scipy == 1.4.1
  • seaborn == 0.10.0
  • sklearn == 0.0
  • tensorboard == 2.6.0
  • torch == 1.9.0+cu111
  • torchvision == 0.10.0+cu111
  • tqdm == 4.62.2
  • trimesh == 3.9.19
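For a quick environment setup, the pins above can be installed in one pass; a sketch only (the `+cu111` torch wheels come from PyTorch's own wheel index, so adjust the index and version for your CUDA setup, and install the remaining pins from the list above the same way):

```
# Sketch: CUDA-11.1 torch wheels first, then a few of the other pins.
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 \
    -f https://download.pytorch.org/whl/torch_stable.html
pip install numpy==1.19.5 opencv-python==4.1.0.25 open3d==0.12.0 trimesh==3.9.19
```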

Preparation


STEP-1: Prepare BFM (Basel Face Model)

STEP-2: Prepare the dataset.

Pre-Trained Models (Optional)

Download the pre-trained models and extract them to ./saved_models/:

  • Link option 1 (Google Drive): download
  • Link option 2 (UBC ECE server): download

Train


Stage-1: Train the VAEs

  • Run ./train/train_vae_overall.ipynb to train the VAE for the overall shape. Save the trained model to ./saved_models/part_vaes and ./saved_models/part_decoders.

  • Run ./train/train_vaes_parts.ipynb to train the VAEs for all five parts. Save the trained models to ./saved_models/part_vaes and ./saved_models/part_decoders.
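Each VAE in Stage-1 follows the standard VAE recipe: an encoder maps a shape to a latent Gaussian, and a decoder reconstructs the shape from a sample of it. A minimal numpy sketch of the two VAE-specific pieces, the reparameterization trick and the KL regularizer (the latent dimensionality here is hypothetical, not the one used in this repo):

```python
import numpy as np

LATENT_DIM = 32  # hypothetical; the repo's actual latent sizes may differ

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps, keeping the sampling differentiable
    with respect to mu and log_var (the reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian, per sample."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=-1)

rng = np.random.default_rng(0)
mu = np.zeros((4, LATENT_DIM))       # encoder outputs for a batch of 4 shapes
log_var = np.zeros((4, LATENT_DIM))  # unit variance -> KL should be exactly 0
z = reparameterize(mu, log_var, rng)
kl = kl_divergence(mu, log_var)
```

The training loss is then reconstruction error on the decoded shape plus a weighted `kl` term.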

Stage-2: Train part encoders, offset regressor, Facenet

  • Run ./train/train_other.ipynb to train the part encoders (a.k.a. disentangle networks) and the offset regressor, and to fine-tune FaceNet.

Stage-3: Fine-tune the entire network

Generate Linear Mappings for Local Editing


Step-1: Compute the linear mapping

Step-2: Dataset measurement

  • Run all the code in ./mapping/measure/ to generate the measured features (e.g., nose height, nose bridge width, etc.).

Step-3: Compute the linear mappings for each part

  • Run all the code files in ./mapping/ whose names start with mapping to generate the linear mappings for local control/editing.
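The linear mappings relate the measured semantic features (e.g., nose height) to a part's latent codes, so that moving along one direction in latent space changes one feature. A hedged sketch of one standard way to fit such a mapping with least squares (all array names, shapes, and the synthetic data are illustrative, not the repo's actual data or method details):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: 200 faces, a 32-D part latent code, and
# 3 measured features per part (e.g., nose height / width / bridge width).
latents = rng.standard_normal((200, 32))
true_map = rng.standard_normal((32, 3))
features = latents @ true_map  # pretend these came from the measurement step

# Fit W minimizing || latents @ W - features ||^2 (ordinary least squares).
W, residuals, rank, _ = np.linalg.lstsq(latents, features, rcond=None)

# One simple linear-control choice: to nudge feature 0 by 0.5,
# move the latent code along the corresponding pseudoinverse row.
delta_latent = np.linalg.pinv(W)[0] * 0.5
```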

Local Editing Demo


Automatic Shape Adjusting with Differentiable ARAP-based Blending


Download the pre-computed inverse of A and save it to ./automatic_shape_adjusting: download
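ARAP-style blending repeatedly solves a fixed linear system A x = b, where A depends only on the mesh connectivity while b changes with each edit; pre-computing the inverse of A turns every solve into a single matrix multiply. A toy numpy sketch of that pattern (the real A comes from ARAP's mesh-based weights, not the random SPD stand-in used here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the ARAP system matrix. The real one is built from the mesh
# (sparse, symmetric positive definite); here we just construct an SPD matrix.
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)

A_inv = np.linalg.inv(A)  # "pre-compute the inverse" once, offline

# At edit time, each new right-hand side b (one column per x/y/z coordinate)
# is solved with a matrix multiply instead of a fresh linear solve.
b = rng.standard_normal((n, 3))
x = A_inv @ b
```

For large meshes a sparse factorization is usually preferred over a dense inverse, but the pre-compute-once, reuse-per-edit structure is the same.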

Acknowledgement


This work is partially based on the following works:

Contact


Peizhi Yan ([email protected])