[New Preprocessor] The "reference_adain" and "reference_adain+attn" are added #1280
Replies: 48 comments 69 replies
-
In our test: [image]
-
Can style fidelity be added to the API as threshold_a? Looking forward to fully using all of these new preprocessors.
-
Sorry for repeating my request, but I don't know if you saw my reply in the previous thread. I hope you add an option to the reference feature to use multiple photos as reference instead of one; I guess this would solve the problem of output images not looking like the original person 🤞🤞🤞🤞🤞
-
It is not working with the batch tool in img2img. Could you check it?
-
Hi all, we fixed the multi-reference weight bug in 1.1.172.
-
Did something change with lineart_standard? I'm getting a weird hidden halo effect when using lineart_standard that doesn't appear with the same input using the other lineart preprocessors. The background warps to form an outline around the input image's lines. Also, tons of artifacts especially on faces. Again, just with lineart_standard, doesn't appear when using other lineart preprocessors.
-
Unfortunately I'm working with some old legacy files that are low resolution in this case, and upscaling is causing too many changes. :/
-
The new feature is powerful and will have a lot of use cases. Thanks for this! One thing I've noticed is that the images it produces are often slightly blurry compared to images generated without using the new feature. Is this a known issue?
-
Still, there is almost no face resemblance.
-
Thanks for all the hard work!
-
Thank you for providing this awesome preprocessor! With reference_adain+attn, it captures illustration composition very well, and the character can be fine-tuned through prompts. Using Eunice Ye's Naraka character image as a reference, here is a comparison of the generated images (the left one is the original picture): [image]
-
Is there a way to use them with diffusers yet?
-
Update in 1.1.178
Hi all, reference preprocessors are now supported in "inpaint only masked". Although you need to make some adjustments to the window, the result is good when you are done. This can achieve effects like moving objects from one image to another, such as moving a face.
Example
Input image
Reference image
Meta
woman face, best quality
You need at least 1.1.178 to use it.
-
I didn't see it mentioned, so I figured I would ask: does it improve anything if we use a reference image that's the same size as the txt2img output? Example: txt2img outputs a 512x512 image and uses a 512x512 image for the reference.
-
I just updated my PyTorch version to 2.0 on MacOS and images are collapsing again on some models when using reference_only / reference_adain / reference_adain+attn. This wasn't the case on the previous version of PyTorch. I'm running ControlNet 1.1.189.
-
Can the reference preprocessors be used in combination with ControlNet models? I want to do img2img and preserve the semantics of the input image (depth, segmentation masks) while applying the style of a second image to the first. Would that be possible in the current state?
-
Is there a way to take a bunch of MidJourney images and pass them through this without prompting but still get a great output? I want to write an article comparing the progress, but it won't be efficient to manually prompt for 50 images :(
-
Would it be possible to have a negative reference, where I give ControlNet an example of how it is not supposed to look? That would be a cool feature!
-
When will these be available outside the web UI?
-
Is there any resolution requirement for uploaded images? If I upload a picture with an arbitrary resolution, for example a 712x1321 image as source, an error occurs: RuntimeError: shape '[2, -1, 8, 40]' is invalid for input of size 24640
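For what it's worth, Stable Diffusion denoises latents at 1/8 of the pixel resolution, and some layers further assume dimensions that divide evenly, so odd sizes like 712x1321 can trigger reshape errors like the one above. A minimal sketch of snapping a size to a safe granularity (the multiple of 8 here is an assumption; some setups want multiples of 64):

```python
def snap_to_multiple(size, base=8):
    """Round each dimension of a (width, height) pair to the
    nearest multiple of `base` so the latent tensor divides evenly."""
    return tuple(base * round(s / base) for s in size)

# 712 is already a multiple of 8; 1321 is not:
print(snap_to_multiple((712, 1321)))  # -> (712, 1320)
```

Resizing or padding the source image to the snapped size before uploading usually avoids this class of error.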
-
Guys, I see you have preprocessor params. Where does one set these?
-
Hi, I achieved a cross-image region drag based on the reference scheme: 1. Use the inpaint ControlNet to extract inpainted-region features from another image. 2. Use the Segment Anything ControlNet to keep a reasonable pose. Code at https://github.com/sail-sg/EditAnything
-
Hey, can you give more mathematical details about how reference-only is implemented? I am very curious about it.
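Not a maintainer, but the commonly described mechanism is attention coupling: the reference latent is pushed through the same UNet, and in each self-attention layer the generation's queries attend over the concatenation of its own tokens and the reference's tokens, so reference features leak into the output. A toy single-head NumPy sketch (the function and weight names here are illustrative, not the extension's actual code):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def reference_attention(x, x_ref, Wq, Wk, Wv):
    """Self-attention over [own tokens ; reference tokens].

    x:     (N, d)  hidden states of the image being generated
    x_ref: (M, d)  hidden states of the reference image at the same layer
    """
    q = x @ Wq                               # queries come only from the generation
    kv = np.concatenate([x, x_ref], axis=0)  # keys/values also include the reference
    k, v = kv @ Wk, kv @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return attn @ v                          # (N, d): same shape as plain self-attention
```

The "style fidelity" slider can then be understood as blending this coupled output with a plain (uncoupled) self-attention pass, and reference_adain additionally re-normalizes hidden states toward the reference's channel statistics.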
-
I wonder if it's possible to use this method to restrict the color scheme of the output, much like T2I-Adapter Color (which is not available for SDXL).
-
Greetings 👋, I'm using the Fooocus V2, Fooocus Photograph, and Fooocus Negative models on Replicate to create realistic photos. However, despite providing four high-quality input photos, I'm encountering challenges in achieving accurate face matching in the AI-generated images, and the quality of the eyes is consistently subpar and unrealistic. Any suggestions on how to address this issue?
-
Regarding reference_only: I can't reproduce my photos after May 16, even with the same parameters.
-
V1.1.171 adds two new reference preprocessors:
reference_adain
AdaIN (Adaptive Instance Normalization) from
Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization
Xun Huang and Serge Belongie, Cornell University
https://arxiv.org/abs/1703.06868
reference_adain+attn
AdaIN (Adaptive Instance Normalization) + Attention (same link as "reference_only")
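For reference, the AdaIN operation from the Huang & Belongie paper is a per-channel statistic swap: the content features are normalized and then rescaled to the style features' mean and standard deviation. A minimal NumPy sketch (the (C, H, W) array layout is an assumption for illustration):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization (Huang & Belongie, 2017).

    Re-normalizes each channel of `content` to match the per-channel
    mean and standard deviation of `style`.  Both arrays are (C, H, W).
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    # normalize content, then shift/scale to the style statistics
    return s_std * (content - c_mean) / c_std + s_mean
```

reference_adain applies this kind of statistic matching inside the UNet, treating the reference image's features as the "style" input.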
Comparison
Input image (midjourney v5, https://twitter.com/kajikent/status/1654409097041817601)
meta
woman in street, masterpiece, best quality,
Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 12345, Size: 768x512, Model hash: c0d1994c73, Model: realisticVisionV20_v20, Version: v1.2.0, ControlNet 0: "preprocessor: reference_????, model: None, weight: 1, starting/ending: (0, 1), resize mode: Crop and Resize, pixel perfect: True, control mode: Balanced, preprocessor params: (64, 0.5, 64)"
reference_only (style fidelity=0.5):
reference_adain (style fidelity=0.5):
reference_adain+attn (style fidelity=0.5):
without CN (all controlnets disabled):
You need at least 1.1.171 to use them.