Hello,
I would like to suggest implementing my paper: *Differential Diffusion: Giving Each Pixel Its Strength*.
**Is your feature request related to a problem? Please describe.**
The paper's method lets a user edit a picture via a change map that describes how much each region should change.
The editing process is typically guided by textual instructions, although it can also be applied without guidance.
We support both continuous and discrete editing.
Our framework is training- and fine-tuning-free, and it adds only a negligible inference-time penalty.
Our implementation is diffusers-based.
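At its core, the per-step merging is only a few lines. Here is a minimal sketch of that step, assuming a diffusers-style scheduler and the white == change convention; names like `merge_step`, `change_map`, and `original_latents` are illustrative, not our actual API:

```python
import torch

def merge_step(latents, original_latents, noise, change_map, t, t_max, scheduler):
    """One merge step (sketch): clamp regions that are not yet allowed
    to change back to the noised original at the current timestep.

    change_map: float tensor in [0, 1], broadcastable to the latent shape
    t: current scheduler timestep; t_max: largest timestep in the schedule
    """
    # Re-noise the original latents to the current timestep, so "frozen"
    # regions follow the same trajectory a plain img2img run would take.
    noised_original = scheduler.add_noise(original_latents, noise, t)
    # A pixel with strength s becomes free to change once t / t_max <= s:
    # black (s = 0) stays clamped almost to the end, while white (s = 1)
    # is free from the first step.
    active = change_map >= (t / t_max)
    return torch.where(active, latents, noised_original)
```

This is also why the overhead is negligible: the only extra work per step is one `add_noise` call and an elementwise `where`.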
We have already tested it on four different diffusion models (Kandinsky, DeepFloyd IF, SD, SDXL).
We are confident that the framework can also be ported to other diffusion models, such as SD Turbo, Stable Cascade, and aMUSEd.
I notice that you usually stick to the white == change convention, which is the opposite of the convention we used in the paper.
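Converting a map between the two conventions is just an inversion; a minimal sketch, assuming the map is a float tensor in [0, 1]:

```python
# Flip between the white == change and black == change conventions
# (assumes `change_map` is a float tensor with values in [0, 1]).
inverted_map = 1.0 - change_map
```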
The paper can be thought of as a generalization of some existing techniques:
- An all-black map ("0" everywhere) is just regular txt2img;
- a map of a single non-black color can be thought of as img2img;
- a map of two colors, one of which is white, can be thought of as inpainting;
- and the rest? It's completely new!
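For concreteness, the special cases above could be written as maps like this (a sketch in the paper's black == change convention; the sizes and the 0.4 value are arbitrary):

```python
import torch

H, W = 64, 64  # map at latent resolution; sizes are arbitrary here

txt2img_map = torch.zeros(H, W)        # all black: regenerate everything
img2img_map = torch.full((H, W), 0.4)  # one uniform non-black value: plain img2img
inpaint_map = torch.ones(H, W)         # white: keep everything ...
inpaint_map[16:48, 16:48] = 0.0        # ... except a black region to repaint

# Anything spatially varying is the new territory:
gradient_map = torch.linspace(0, 1, W).expand(H, W)
```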
In the paper, we suggest some further applications such as soft inpainting and strength visualization.
**Describe the idea you'd like**
I believe the user should supply an image and a change map, and the editor should output the edited result according to the algorithm.
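As a rough sketch of what that could look like from the user's side (the pipeline class `DifferentialDiffusionPipeline` and its `change_map` argument are hypothetical names for the requested feature, not an existing diffusers API):

```python
from PIL import Image

init_image = Image.open("input.png").convert("RGB")
change_map = Image.open("map.png").convert("L")  # grayscale per-pixel strength

# Hypothetical entry point for the requested feature:
pipe = DifferentialDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0"
)
result = pipe(
    prompt="a watercolor landscape",
    image=init_image,
    change_map=change_map,  # hypothetical argument: per-pixel edit strength
).images[0]
result.save("output.png")
```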
Site: https://differential-diffusion.github.io/
Paper: https://differential-diffusion.github.io/paper.pdf
Repo: https://github.com/exx8/differential-diffusion
It might also address: #1788
It has already been implemented by the amazing @vladmandic in vladmandic/automatic@0239435 and by the incredible @shiimizu in comfyanonymous/ComfyUI#2876.
Thanks
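Would be cool to have this as a mode in inpaint: regenerate outside the mask with a common prompt and inside the mask with an inpaint-specific prompt. With controllable denoising strength for the outside and inside parts, any kind of artistic-driven mix would be possible.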