```bash
sudo apt-get update && sudo apt-get install git-lfs ffmpeg cbm

conda create --name person python=3.10
conda activate person
pip install ipykernel
python -m ipykernel install --user --name person --display-name "person"

git clone https://github.com/svjack/personalize-anything.git
cd personalize-anything
pip install torch torchvision
pip install -r requirements.txt

huggingface-cli login
python gradio_demo.py
```
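On a headless server you can authenticate with Hugging Face non-interactively instead of the `huggingface-cli login` prompt above. A minimal sketch using `huggingface_hub` (reading the token from an `HF_TOKEN` environment variable is our assumption, not something the repo requires):

```python
# Non-interactive alternative to `huggingface-cli login`.
# Assumes your Hugging Face token is exported as HF_TOKEN.
import os
from huggingface_hub import login

login(token=os.environ["HF_TOKEN"])  # needed for the gated FLUX weights
```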
🏠 Project Page | Paper
Personalize Anything is a training-free framework for personalized image generation with Diffusion Transformers (DiT). It ensures subject consistency and structural diversity via timestep-adaptive token replacement and patch perturbation, and further enables layout control, multi-subject composition, and applications such as inpainting and outpainting.
- Training-Free Framework: Achieves rapid generation through a single inversion and forward process, eliminating training or fine-tuning requirements while minimizing computational overhead.
- High Fidelity & Controllability: Preserves fine-grained subject details and enables generation with explicit spatial control via user-defined layouts.
- Versatility: Supports single/multi-subject injection, inpainting, and outpainting tasks within a unified framework.
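To make the core mechanism above concrete, here is a minimal pseudo-Python sketch of timestep-adaptive token replacement: early, high-noise denoising steps copy the subject's tokens from the inverted reference to lock in identity, while later steps are left free so structure can vary. Every name here (`patchify`, `denoise_step`, the switch point `tau`) is an illustrative assumption, not the repository's actual API.

```python
import torch

def personalize(model, x_t, ref_tokens, subject_mask, timesteps, tau=0.6):
    """Sketch of timestep-adaptive token replacement (illustrative, not the repo API)."""
    for i, t in enumerate(timesteps):
        tokens = model.patchify(x_t)              # (B, N, D) DiT tokens
        if i < tau * len(timesteps):
            # Early steps: enforce subject consistency by injecting
            # reference tokens inside the subject mask. (Patch perturbation,
            # omitted here, would additionally jitter tokens at the mask
            # boundary to encourage structural diversity.)
            tokens[:, subject_mask] = ref_tokens[:, subject_mask]
        x_t = model.denoise_step(model.unpatchify(tokens), t)  # one denoising step
    return x_t
```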
- [2025-03] Released the Gradio demo and example code for subject reconstruction, single-subject personalization, inpainting, and outpainting.
Clone the repo first:
```bash
git clone https://github.com/fenghora/personalize-anything.git
cd personalize-anything
```
(Optional) Create a fresh conda env:
```bash
conda create -n person python=3.10
conda activate person
```
Install the necessary packages (requires torch >= 2):
```bash
# PyTorch (select the wheel matching your CUDA version)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
# other dependencies
pip install -r requirements.txt
```
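After installing, a quick optional sanity check that PyTorch can see your GPU (this snippet is ours, not part of the repo):

```python
import torch

print(torch.__version__)           # should be >= 2.0
print(torch.cuda.is_available())   # expect True on a CUDA machine
```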
- See the subject_reconstruction notebook for reconstructing subjects at different positions using FLUX; see our paper for details.
- See the single_subject_personalization notebook for generating images containing a subject from a reference image using FLUX.
- See the inpainting_outpainting notebook for inpainting and outpainting with mask conditions using FLUX.
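The notebooks assume the gated FLUX.1-dev weights load correctly. A short smoke test with diffusers can confirm this before running them; the model id and dtype below are standard FLUX settings, but treat the snippet as a sketch rather than part of the repo (the low step count is only to keep the check fast):

```python
import torch
from diffusers import FluxPipeline

# Downloads the gated FLUX.1-dev weights; requires the Hugging Face login above.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe("a photo of a cat", height=512, width=512, num_inference_steps=4).images[0]
image.save("smoke_test.png")
```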
To start a demo locally, simply run
```bash
python gradio_demo.py
```
This script currently supports inpainting and outpainting; more features are on the way. Stay tuned!
To run custom examples, you may need to obtain the corresponding object masks. A script for running Grounded SAM is provided in `scripts/grounding_sam.py`. The following command will generate a segmentation mask in the same directory as the input image:
```bash
python scripts/grounding_sam.py --image example_data/white_cat/background.png --labels cat
```
After obtaining the segmentation mask, simply update the file paths in the configuration to generate your customized subject.
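Before editing the configuration, it is worth verifying that the generated mask actually aligns with its image. A small check with Pillow (the `background_mask.png` filename is an assumption; substitute whatever path the script actually wrote):

```python
from PIL import Image

image = Image.open("example_data/white_cat/background.png").convert("RGB")
mask = Image.open("example_data/white_cat/background_mask.png").convert("L")  # assumed name

# The mask must align pixel-for-pixel with the image it segments.
assert mask.size == image.size, f"size mismatch: {mask.size} vs {image.size}"

# Quick visual check: paint the masked region red and inspect the overlay.
overlay = Image.composite(Image.new("RGB", image.size, "red"), image, mask)
overlay.save("mask_overlay.png")
```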
We appreciate the open source of the following projects:
- diffusers
- Semantic Image Inversion and Editing using Rectified Stochastic Differential Equations
- Taming Rectified Flow for Inversion and Editing
```bibtex
@article{feng2025personalize,
  title={Personalize Anything for Free with Diffusion Transformer},
  author={Feng, Haoran and Huang, Zehuan and Li, Lin and Lv, Hairong and Sheng, Lu},
  journal={arXiv preprint arXiv:2503.12590},
  year={2025}
}
```