svjack/personalize-anything

Personalize Anything for Free with Diffusion Transformer

sudo apt-get update && sudo apt-get install git-lfs ffmpeg cbm

conda create --name person python=3.10
conda activate person
pip install ipykernel
python -m ipykernel install --user --name person --display-name "person"

git clone https://github.com/svjack/personalize-anything.git
cd personalize-anything

pip install torch torchvision
pip install -r requirements.txt

huggingface-cli login

python gradio_demo.py

Mainly, try the notebooks.


Personalize Anything is a training-free framework for personalized image generation in Diffusion Transformers (DiT), ensuring subject consistency and structural diversity via timestep-adaptive token replacement and patch perturbation, while enabling layout control, multi-subject composition, and applications including inpainting/outpainting.
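The core idea of timestep-adaptive token replacement can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation: the names (`timestep_adaptive_replace`, `tau`, `perturb_scale`) and the exact schedule are assumptions, and in the real method the tokens come from a FLUX DiT forward pass while the reference tokens come from an inversion pass.

```python
import numpy as np

def timestep_adaptive_replace(gen_tokens, ref_tokens, mask, t, tau=0.5,
                              perturb_scale=0.0, rng=None):
    """Illustrative sketch of timestep-adaptive token replacement.

    In early denoising steps (t > tau) the subject tokens in the generation
    stream are replaced outright by the inverted reference tokens; in later
    steps the reference tokens are injected with a small perturbation to
    encourage structural diversity while keeping the subject consistent.
    """
    rng = rng or np.random.default_rng(0)
    out = gen_tokens.copy()
    if t > tau:
        # Early steps: hard replacement enforces subject identity.
        out[mask] = ref_tokens[mask]
    else:
        # Later steps: inject perturbed reference tokens for diversity.
        noise = perturb_scale * rng.standard_normal(ref_tokens.shape)
        out[mask] = (ref_tokens + noise)[mask]
    return out
```

In a real pipeline this would run inside the DiT denoising loop, with `mask` derived from the subject's segmentation mask at token resolution.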

🌟 Features

  • Training-Free Framework: Achieves rapid generation through a single inversion and forward process, eliminating training or fine-tuning requirements while minimizing computational overhead.
  • High Fidelity & Controllability: Preserves fine-grained subject details and enables generation with explicit spatial control via user-defined layouts.
  • Versatility: Supports single/multi-subject injection, inpainting, and outpainting tasks within a unified framework.

🔥 Updates

  • [2025-03] Release gradio demo and example code for subject reconstruction, single-subject personalization, inpainting, and outpainting.

🔨 Installation

Clone the repo first:

git clone https://github.com/fenghora/personalize-anything.git
cd personalize-anything

(Optional) Create a fresh conda env:

conda create -n person python=3.10
conda activate person

Install the necessary packages (torch >= 2):

# pytorch (select correct CUDA version)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118

# other dependencies
pip install -r requirements.txt

📒 Examples

  • See the subject_reconstruction notebook for reconstructing subjects in different positions using FLUX. See our paper for detailed information.
  • See the single_subject_personalization notebook for single-subject personalization.
  • See the inpainting_outpainting notebook for inpainting and outpainting.

🤗 Demos

To start a demo locally, simply run

python gradio_demo.py

We currently support inpainting and outpainting in this script; more features are on the way. Stay tuned!
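For reference, outpainting generally works by padding the image and masking the newly added border region. Below is a minimal numpy sketch of that preprocessing with a hypothetical helper name (`prepare_outpaint`); the demo script may prepare its inputs differently.

```python
import numpy as np

def prepare_outpaint(image, pad):
    """Pad an H x W x C image on every side and build the matching mask.

    Returns the padded canvas plus a mask that is 255 where new content
    should be generated and 0 over the original pixels (illustrative
    preprocessing, not necessarily what gradio_demo.py does internally).
    """
    h, w, c = image.shape
    canvas = np.zeros((h + 2 * pad, w + 2 * pad, c), dtype=image.dtype)
    canvas[pad:pad + h, pad:pad + w] = image   # original image in the center
    mask = np.full(canvas.shape[:2], 255, dtype=np.uint8)
    mask[pad:pad + h, pad:pad + w] = 0         # keep the original pixels
    return canvas, mask
```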

💡 Tips

To run custom examples, you may need to obtain the corresponding object masks. A script for running Grounded SAM is provided in scripts/grounding_sam.py. The following command generates a segmentation mask in the same directory as the input image:

python scripts/grounding_sam.py --image example_data/white_cat/background.png --labels cat

After obtaining the segmentation mask, modify the file paths in the configuration to generate your personalized subject.
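Once you have a mask, it ultimately has to line up with the DiT token grid. The sketch below binarizes an 8-bit mask array and pools it to patch resolution; the 16-pixel patch size is an assumption (check the actual model config), and numpy stands in for the image loader.

```python
import numpy as np

def mask_to_token_grid(mask, patch=16, thresh=128):
    """Binarize an 8-bit segmentation mask and pool it to a patch grid.

    Each entry of the result marks whether that token position belongs to
    the subject. `patch` is a hypothetical DiT patch size; adjust it to the
    model you actually use.
    """
    binary = mask >= thresh
    h, w = binary.shape
    # Crop to a multiple of the patch size, then split into patch blocks.
    blocks = binary[:h - h % patch, :w - w % patch]
    blocks = blocks.reshape(h // patch, patch, w // patch, patch)
    # A token counts as "subject" if more than half its pixels are masked.
    return blocks.mean(axis=(1, 3)) > 0.5
```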

🤝 Acknowledgement

We appreciate the open-source work of the following projects:

Citation

@article{feng2025personalize,
  title={Personalize Anything for Free with Diffusion Transformer},
  author={Feng, Haoran and Huang, Zehuan and Li, Lin and Lv, Hairong and Sheng, Lu},
  journal={arXiv preprint arXiv:2503.12590},
  year={2025}
}
