option to use local SDXL model file #80
base: main
Conversation
When I replace the model path with: if use_local_model: ... Could you tell me how to replace this path? Thank you |
@shellddd does this file actually exist, or is there a typo "Stablle Diffusion" (extra 'l')? |
I tested it again. Yes, the model does exist; its path is what I copied directly, no errors. When I replace the path with: base_model_dir = os.environ["D:\AGI\Stablle Diffusion\models\Stable-diffusion"] |
I have a Mac, not Windows, but check here: https://stackoverflow.com/questions/2953834/how-should-i-write-a-windows-path-in-a-python-string-literal |
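For reference, a minimal sketch of the usual ways to write that Windows path in a Python string (the directory name is kept exactly as quoted above); note also that os.environ is a mapping of environment-variable names, so indexing it with a filesystem path will only raise a KeyError:

import os

# Raw string: backslashes are not treated as escape sequences.
base_model_dir = r"D:\AGI\Stablle Diffusion\models\Stable-diffusion"

# Forward slashes also work on Windows.
base_model_dir = "D:/AGI/Stablle Diffusion/models/Stable-diffusion"

# os.environ["..."] looks up an environment variable by name, e.g. os.environ["SDXL_MODELS_DIR"];
# it is not a way to declare a path.

print(os.path.isdir(base_model_dir))  # quick sanity check that the directory exists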
I replaced the path according to the Python usage. The terminal shows that the model has been correctly identified, but a new error appears:
|
I was able to load local SDXL / Pony models. For those who get the "ValueError: Invalid 'pretrained_model_name_or_path' provided. Please set it to a valid URL." error, check the sdxl_name: it should not contain the file extension (.safetensors etc.). A short sketch illustrating this follows the traceback below.
You shouldn't move a model that is dispatched using accelerate hooks.
Load to GPU: LlamaForCausalLM
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
User stopped generation
Last assistant response is not valid canvas: Response does not contain codes!
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
Automatically corrected [lightgoldenroyalloverde] -> [lightgoldenrodyellow].
Automatically corrected [papaywhrop] -> [papayawhip].
You shouldn't move a model that is dispatched using accelerate hooks.
Unload to CPU: LlamaForCausalLM
Load to GPU: CLIPTextModel
Load to GPU: CLIPTextModelWithProjection
Traceback (most recent call last):
File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/gradio/queueing.py", line 528, in process_events
response = await route_utils.call_process_api(
File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/gradio/route_utils.py", line 270, in call_process_api
output = await app.get_blocks().process_api(
File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1908, in process_api
result = await self.call_function(
File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1485, in call_function
prediction = await anyio.to_thread.run_sync(
File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
return await future
File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
result = context.run(func, *args)
File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/gradio/utils.py", line 808, in wrapper
response = f(*args, **kwargs)
File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/gradio_app.py", line 226, in diffusion_fn
positive_cond, positive_pooler, negative_cond, negative_pooler = pipeline.all_conds_from_canvas(canvas_outputs, negative_prompt)
File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/lib_omost/pipeline.py", line 313, in all_conds_from_canvas
negative_cond, negative_pooler = self.encode_cropped_prompt_77tokens(negative_prompt)
File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/lib_omost/pipeline.py", line 354, in encode_cropped_prompt_77tokens
pooled_prompt_embeds = prompt_embeds.pooler_output
AttributeError: 'CLIPTextModelOutput' object has no attribute 'pooler_output'
|
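A tiny illustration of the sdxl_name point made above (the checkpoint file name here is hypothetical):

import os

checkpoint = "juggernautXL_juggernautX.safetensors"  # hypothetical checkpoint file name
sdxl_name, _ = os.path.splitext(checkpoint)          # strips the extension
print(sdxl_name)                                     # -> juggernautXL_juggernautX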
...
Reporting same error here. In
I also printed the structure of the prompt embeds. With the two local models that I've tried (epicrealismXL_v7FinalDestination and leosamsHelloworldXL_helloworldXL70), I can see that many of the prompt embeds being iterated over do not have the pooler_output attribute (see the sketch after this comment). This seems to be the cause of the subsequent error that occurs just before it actually returns the image:
|
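For what it's worth, the missing attribute appears to come down to which encoder class ends up in the pipeline: CLIPTextModel returns an output that carries pooler_output, while CLIPTextModelWithProjection returns a CLIPTextModelOutput that carries text_embeds instead. A minimal sketch of the difference (openai/clip-vit-base-patch32 is just a small stand-in model, not the SDXL encoder):

import torch
from transformers import CLIPTokenizer, CLIPTextModel, CLIPTextModelWithProjection

name = "openai/clip-vit-base-patch32"  # small stand-in model, for illustration only
tokenizer = CLIPTokenizer.from_pretrained(name)
tokens = tokenizer("a photo of a cat", return_tensors="pt")

with torch.no_grad():
    out_plain = CLIPTextModel.from_pretrained(name)(**tokens)
    out_proj = CLIPTextModelWithProjection.from_pretrained(name)(**tokens)

print(hasattr(out_plain, "pooler_output"))  # True: what Omost's encode_cropped_prompt_77tokens expects
print(hasattr(out_proj, "pooler_output"))   # False: this output exposes text_embeds instead

This would also explain why the workaround below (saving the pipeline out and reloading the text encoders with CLIPTextModel.from_pretrained) makes the error go away.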
Looks like I found the solution.
pipe.save_pretrained(save_directory=f"{model_dir}{model_name}/", safe_serialization=False, variant='fp16')
Just in case, make sure you use the 'fp16' variant while saving and switch off safe_serialization (safe serialization == safetensors format). For model_dir I used the same folder where the safetensors file is located.
Step 2. Do the same loading as it was originally, but replace sdxl_name:
sdxl_name = f"{model_dir}{model_name}/"
tokenizer = CLIPTokenizer.from_pretrained(
sdxl_name, subfolder="tokenizer")
tokenizer_2 = CLIPTokenizer.from_pretrained(
sdxl_name, subfolder="tokenizer_2")
text_encoder = CLIPTextModel.from_pretrained(
sdxl_name, subfolder="text_encoder", torch_dtype=torch.float16, variant="fp16")
text_encoder_2 = CLIPTextModel.from_pretrained(
sdxl_name, subfolder="text_encoder_2", torch_dtype=torch.float16, variant="fp16")
vae = AutoencoderKL.from_pretrained(
sdxl_name, subfolder="vae", torch_dtype=torch.bfloat16, variant="fp16") # bfloat16 vae
unet = UNet2DConditionModel.from_pretrained(
sdxl_name, subfolder="unet", torch_dtype=torch.float16, variant="fp16") This is a duct tape solution (ugly code with duplication). Make this code clean is trivial task for reader's home work. ;) |
Excellent! Here is how I've expanded on this... It will check whether the model was already converted and, if so, skip that process. I also removed the redundancy you mentioned. I've tested this a few times and it is working correctly:
# SDXL
#sdxl_name = 'SG161222/RealVisXL_V4.0'
# Use local model
# This will create a diffusers version of the model in the same directory as the source checkpoint.
use_local_model = True
# Path to directory containing your SDXL models
local_models_path = "C:/0_SD/Omost/Stable-diffusion"
# Name of the model (without extension). Include subdirectory if applicable.
local_model_name = "sdxl/juggernautXL_juggernautX"
if use_local_model:
    local_model_path = os.path.join(local_models_path, local_model_name)
    local_model_checkpoint = f"{local_model_path}.safetensors"
    local_model_diffusers_path = f"{local_model_path}/"
    # Raise early if an invalid path is specified
    if not os.path.exists(local_model_checkpoint) and not os.path.exists(local_model_diffusers_path):
        raise FileNotFoundError(local_model_checkpoint)
    # List of required sub-directories in diffusers format
    required_dirs = ['text_encoder', 'text_encoder_2', 'tokenizer', 'tokenizer_2', 'unet', 'vae']
    # Skip conversion to diffusers format if already previously converted
    if os.path.isdir(local_model_path):
        all_dirs_exist = all(os.path.isdir(os.path.join(local_model_path, dir_name)) for dir_name in required_dirs)
        if all_dirs_exist:
            print(f'Using "{local_model_name}/", already in diffusers format.')
        else:
            print(f'Warning: "{local_model_diffusers_path}" is a directory, but is missing required sub-directories\n'
                  f'(Required: "{required_dirs}".)')
    # Build pipeline from checkpoint if loading the local model for the first time
    else:
        print("Building pipeline from local model: ", local_model_checkpoint)
        pipe = StableDiffusionXLPipeline.from_single_file(
            local_model_checkpoint,
            torch_dtype=torch.float16,
            variant="fp16"
        )
        tokenizer = pipe.tokenizer
        tokenizer_2 = pipe.tokenizer_2
        text_encoder = pipe.text_encoder
        text_encoder_2 = pipe.text_encoder_2
        vae = pipe.vae
        unet = pipe.unet
        # Save pipeline to diffusers format
        print("Saving pipeline in diffusers format to: ", local_model_diffusers_path)
        pipe.save_pretrained(save_directory=local_model_diffusers_path, safe_serialization=False, variant='fp16')
    sdxl_name = local_model_diffusers_path

tokenizer = CLIPTokenizer.from_pretrained(
    sdxl_name, subfolder="tokenizer")
tokenizer_2 = CLIPTokenizer.from_pretrained(
    sdxl_name, subfolder="tokenizer_2")
text_encoder = CLIPTextModel.from_pretrained(
    sdxl_name, subfolder="text_encoder", torch_dtype=torch.float16, variant="fp16")
text_encoder_2 = CLIPTextModel.from_pretrained(
    sdxl_name, subfolder="text_encoder_2", torch_dtype=torch.float16, variant="fp16")
vae = AutoencoderKL.from_pretrained(
    sdxl_name, subfolder="vae", torch_dtype=torch.bfloat16, variant="fp16")  # bfloat16 vae
unet = UNet2DConditionModel.from_pretrained(
    sdxl_name, subfolder="unet", torch_dtype=torch.float16, variant="fp16")
unet.set_attn_processor(AttnProcessor2_0())
vae.set_attn_processor(AttnProcessor2_0())
pipeline = StableDiffusionXLOmostPipeline(
vae=vae,
text_encoder=text_encoder,
tokenizer=tokenizer,
text_encoder_2=text_encoder_2,
tokenizer_2=tokenizer_2,
unet=unet,
scheduler=None, # We completely give up diffusers sampling system and use A1111's method
) |
It’s so strange that the model pipeline data has to be saved and read back in again, in order to have all the correct attributes for Omost… |
I see it another way. The Omost pipeline supports the Diffusers format only. It also doesn't support LoRAs in Kohya SS format. Yesterday I tried to load a LoRA in safetensors format (one was made by the CivitAI trainer and another one was produced by OneTrainer); the internal layers have to be renamed into the old format. So I see Omost as tooling built on the old formats. I believe A1111 and Comfy just have pipeline scripts that correctly extract the networks from the newer safetensors format on the fly. And probably Omost just needs to import on-the-fly converters... |
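As a side note on the on-the-fly converters idea: recent diffusers releases can already load Kohya-style LoRA safetensors on a standard SDXL pipeline via load_lora_weights, which remaps the layer names internally; whether Omost's custom pipeline exposes this is another question. A sketch against a plain diffusers pipeline (the LoRA directory and file name are hypothetical):

import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)

# load_lora_weights accepts Kohya-format safetensors and converts the layer names itself.
pipe.load_lora_weights("path/to/lora_dir", weight_name="my_lora.safetensors")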
Oddly, your solution works for me while the version in the fork listed at the top doesn't. With yours I can even run turbo models and pony models.
Uses the SDXL_MODELS_DIR shell environment variable.
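A minimal sketch of how reading that variable might look, with a fallback for when it isn't set (the default path is a placeholder):

import os

# SDXL_MODELS_DIR points at the folder holding local SDXL checkpoints / diffusers directories.
local_models_path = os.environ.get("SDXL_MODELS_DIR", "./models/Stable-diffusion")
print("Using SDXL model directory:", local_models_path)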