CanonSwap

Official code for the ICCV 2025 paper "CanonSwap: High-Fidelity and Consistent Video Face Swapping via Canonical Space Modulation".

Environment Setup

conda create -n CanonSwap python=3.10
conda activate CanonSwap

Install PyTorch (other versions may also work):

pip install torch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 --index-url https://download.pytorch.org/whl/cu118

Install other dependencies:

pip install -r requirements.txt
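
Optionally, you can confirm that the CUDA-enabled PyTorch build installed correctly with a quick sanity check (not part of the official setup):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"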

Model Download

1. CanonSwap Checkpoints

Download the CanonSwap checkpoints from here and move them to the ./pretrained_weights folder.

2. InsightFace Models

Download Antelope from here.

Download buffalo_l from here.

Extract both models to ./pretrained_weights/insightface/models with the following structure:

/pretrained_weights/insightface/models/
├── antelope/
│   ├── glintr100.onnx
│   └── scrfd_10g_bnkps.onnx
└── buffalo_l/
    ├── 2d106det.onnx
    └── det_10g.onnx
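
For example, assuming the two downloads are zip archives named antelope.zip and buffalo_l.zip (the actual file names may differ), the layout above can be produced with:

mkdir -p pretrained_weights/insightface/models
unzip antelope.zip -d pretrained_weights/insightface/models/antelope
unzip buffalo_l.zip -d pretrained_weights/insightface/models/buffalo_l

If an archive already contains a top-level model folder, extract it into pretrained_weights/insightface/models/ instead so the directory is not nested twice.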

3. ArcFace

Download ArcFace from here and extract it to the ./pretrained_weights folder.

4. Landmark Model

Download the landmark model from here and extract it to the ./pretrained_weights folder.

Project Structure After Download

After downloading all models, your project structure should look like:

CanonSwap/
├── pretrained_weights/
│   ├── combined_weights.pth
│   ├── arcface_checkpoint.tar
│   ├── landmark.onnx
│   └── insightface/
│       └── models/
│           ├── antelope/
│           │   ├── glintr100.onnx
│           │   └── scrfd_10g_bnkps.onnx
│           └── buffalo_l/
│               ├── 2d106det.onnx
│               └── det_10g.onnx
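
As a quick sanity check, the following loop (a minimal sketch mirroring the tree above) reports any weight file that is still missing:

for f in pretrained_weights/combined_weights.pth \
         pretrained_weights/arcface_checkpoint.tar \
         pretrained_weights/landmark.onnx \
         pretrained_weights/insightface/models/antelope/glintr100.onnx \
         pretrained_weights/insightface/models/antelope/scrfd_10g_bnkps.onnx \
         pretrained_weights/insightface/models/buffalo_l/2d106det.onnx \
         pretrained_weights/insightface/models/buffalo_l/det_10g.onnx; do
  [ -f "$f" ] || echo "missing: $f"
done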

Inference

The first inference run will automatically download the face parsing model.

Face Swapping

python inference_canswap.py -s examples/source.jpeg -t examples/target.mp4

This also supports image-to-image swapping.
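
For example, an image-to-image swap uses the same flags, with -t pointing at an image instead of a video (the target path below is only illustrative):

python inference_canswap.py -s examples/source.jpeg -t examples/target.jpg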

Video-to-Image Swap

python inference_v2i.py -s examples/i2v_s.jpeg -t examples/i2v_t.mov

Citation

If you find this work useful, please star and cite:

@article{luo2025canonswap,
   title={CanonSwap: High-Fidelity and Consistent Video Face Swapping via Canonical Space Modulation},
   author={Luo, Xiangyang and Zhu, Ye and Liu, Yunfei and Lin, Lijian and Wan, Cong and Cai, Zijian and Huang, Shao-Lun and Li, Yu},
   journal={arXiv preprint arXiv:2507.02691},
   year={2025}
}

A star and a citation are the greatest appreciation of our work!

License and Attention

This project is licensed under the Research Responsible AI License (ResearchRAIL-M). A full copy of the license can be found in the LICENSE file. By using this code or model, you agree to the terms outlined in the license. This project is intended for technical and academic purposes only. You are strictly prohibited from using this project for any illegal or unethical applications, including but not limited to creating non-consensual content, spreading misinformation, or harassing individuals. Please see the Use-Based Restrictions in the license for a non-exhaustive list of prohibited uses. The authors are exempt from any liability arising from your violation of these terms.

Acknowledgments

This project is based on LivePortrait. We thank the authors for their excellent work in efficient portrait animation.
