
[Troubleshoot] Read this first if you have any problem. #1327

Open
lllyasviel opened this issue Dec 11, 2023 · 18 comments
Labels: bug (Something isn't working), documentation (Improvements or additions to documentation), Official


@lllyasviel
Owner

This is a troubleshooting guide for common problems.

Check here.

Feel free to remind us if there are more common problems that are not covered by this doc.

lllyasviel added the bug, documentation, and Official labels on Dec 11, 2023
lllyasviel pinned this issue on Dec 11, 2023
@mashb1t
Collaborator

mashb1t commented Dec 11, 2023

@lllyasviel should we also include checking for Re-BAR support here?
#1112 (comment)

@lllyasviel
Owner Author

@lllyasviel should we also include checking for Re-BAR support here? #1112 (comment)

No. Only recent cards have Resizable BAR. If a card supports Resizable BAR, then it supports fp16, and if a card supports fp16, then the minimal requirement is 4 GB VRAM, so it will not trigger Resizable BAR.
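
To illustrate this chain with code: a minimal PyTorch sketch that reports the two properties the reasoning turns on, VRAM size and fp16 capability (the compute-capability >= 7.0 cutoff for fast fp16 is an assumption, covering Volta/Turing and newer):

import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_vram_gb = props.total_memory / 1024 ** 3
    # Assumption: compute capability >= 7.0 (Volta/Turing and newer) has fast fp16.
    supports_fast_fp16 = props.major >= 7
    print(f"{props.name}: {total_vram_gb:.1f} GB VRAM, fast fp16: {supports_fast_fp16}")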

@mashb1t
Collaborator

mashb1t commented Dec 11, 2023

OK, thank you for the explanation, much appreciated 👍

@AFOLcast

I opened an issue 2 weeks ago here: #1051

But it seems I should have brought it up here? I've tried swapping the python_embeded folder for the old xformers one on the current version. It has not worked since 11/28. I've tried a clean install of 2.1.791. It works once, then goes into timeout loops. I tried the clean install again, replacing the python_embeded folder just in case. No good.

2023-11-28_15-01-51_1716.png

Prompt:

Negative Prompt: unrealistic, saturated, high contrast, big nose, painting, drawing, sketch, cartoon, anime, manga, render, CG, 3d, watermark, signature, label

Fooocus V2 Expansion:

Styles: ['Fooocus V2', 'Fooocus Photograph', 'Fooocus Negative'], Performance: Quality

Resolution: (3536, 1984), Sharpness: 3

Guidance Scale: 3, ADM Guidance: (1.5, 0.8, 0.3)

Base Model: juggernautXL_version6Rundiffusion.safetensors, Refiner Model: None

Refiner Switch: 0.9, Sampler: dpmpp_2m_sde_gpu

Scheduler: karras, Seed: 164706322227898777

LoRA [sd_xl_offset_example-lora_1.0.safetensors] weight: 0.1, LoRA [add-detail-xl.safetensors] weight: 0.25

LoRA [xl_more_art-full_v1.safetensors] weight: 0.25,

2023-11-28_14-47-53_8129.png

Prompt:

Negative Prompt: unrealistic, saturated, high contrast, big nose, painting, drawing, sketch, cartoon, anime, manga, render, CG, 3d, watermark, signature, label

Fooocus V2 Expansion:

Styles: ['Fooocus V2', 'Fooocus Photograph', 'Fooocus Negative'], Performance: Quality

Resolution: (3536, 1984), Sharpness: 3

Guidance Scale: 3, ADM Guidance: (1.5, 0.8, 0.3)

Base Model: juggernautXL_version6Rundiffusion.safetensors, Refiner Model: None

Refiner Switch: 0.9, Sampler: dpmpp_2m_sde_gpu

Scheduler: karras, Seed: 164706322227898776

LoRA [sd_xl_offset_example-lora_1.0.safetensors] weight: 0.1, LoRA [add-detail-xl.safetensors] weight: 0.25

LoRA [xl_more_art-full_v1.safetensors] weight: 0.25,

2023-11-28_14-32-58_6960.png

Prompt:

Negative Prompt: unrealistic, saturated, high contrast, big nose, painting, drawing, sketch, cartoon, anime, manga, render, CG, 3d, watermark, signature, label

Fooocus V2 Expansion:

Styles: ['Fooocus V2', 'Fooocus Photograph', 'Fooocus Negative'], Performance: Quality

Resolution: (3536, 1984), Sharpness: 3

Guidance Scale: 3, ADM Guidance: (1.5, 0.8, 0.3)

Base Model: juggernautXL_version6Rundiffusion.safetensors, Refiner Model: DreamShaper_8_pruned.safetensors

Refiner Switch: 0.9, Sampler: dpmpp_2m_sde_gpu

Scheduler: karras, Seed: 6203301018344342760

LoRA [sd_xl_offset_example-lora_1.0.safetensors] weight: 0.25, LoRA [add-detail-xl.safetensors] weight: 0.25

2023-11-28_14-22-26_6241.png

Prompt:

Negative Prompt: unrealistic, saturated, high contrast, big nose, painting, drawing, sketch, cartoon, anime, manga, render, CG, 3d, watermark, signature, label

Fooocus V2 Expansion:

Styles: ['Fooocus V2', 'Fooocus Photograph', 'Fooocus Negative'], Performance: Quality

Resolution: (3536, 1984), Sharpness: 3

Guidance Scale: 3, ADM Guidance: (1.5, 0.8, 0.3)

Base Model: juggernautXL_version6Rundiffusion.safetensors, Refiner Model: DreamShaper_8_pruned.safetensors

Refiner Switch: 0.9, Sampler: dpmpp_2m_sde_gpu

Scheduler: karras, Seed: 6203301018344342759

LoRA [sd_xl_offset_example-lora_1.0.safetensors] weight: 0.25, LoRA [add-detail-xl.safetensors] weight: 0.25

2023-11-28_14-06-08_1742.png

Prompt:

Negative Prompt: unrealistic, saturated, high contrast, big nose, painting, drawing, sketch, cartoon, anime, manga, render, CG, 3d, watermark, signature, label

Fooocus V2 Expansion:

Styles: ['Fooocus V2', 'Fooocus Photograph', 'Fooocus Negative'], Performance: Speed

Resolution: (3536, 1984), Sharpness: 2

Guidance Scale: 3, ADM Guidance: (1.5, 0.8, 0.3)

Base Model: juggernautXL_version6Rundiffusion.safetensors, Refiner Model: DreamShaper_8_pruned.safetensors

Refiner Switch: 0.8, Sampler: dpmpp_2m_sde_gpu

Scheduler: karras, Seed: 6428068312079717363

LoRA [sd_xl_offset_example-lora_1.0.safetensors] weight: 0.25, LoRA [add-detail-xl.safetensors] weight: 0.5

2023-11-28_14-00-28_8468.png

Prompt:

Negative Prompt: unrealistic, saturated, high contrast, big nose, painting, drawing, sketch, cartoon, anime, manga, render, CG, 3d, watermark, signature, label

Fooocus V2 Expansion:

Styles: ['Fooocus V2', 'Fooocus Photograph', 'Fooocus Negative'], Performance: Speed

Resolution: (3536, 1984), Sharpness: 2

Guidance Scale: 3, ADM Guidance: (1.5, 0.8, 0.3)

Base Model: juggernautXL_version6Rundiffusion.safetensors, Refiner Model: DreamShaper_8_pruned.safetensors

Refiner Switch: 0.8, Sampler: dpmpp_2m_sde_gpu

Scheduler: karras, Seed: 6428068312079717362

LoRA [sd_xl_offset_example-lora_1.0.safetensors] weight: 0.25, LoRA [add-detail-xl.safetensors] weight: 0.5

2023-11-28_13-42-45_7527.png

Prompt:

Negative Prompt: unrealistic, saturated, high contrast, big nose, painting, drawing, sketch, cartoon, anime, manga, render, CG, 3d, watermark, signature, label

Fooocus V2 Expansion:

Styles: ['Fooocus V2', 'Fooocus Photograph', 'Fooocus Negative'], Performance: Speed

Resolution: (1768, 992), Sharpness: 2

Guidance Scale: 3, ADM Guidance: (1.5, 0.8, 0.3)

Base Model: realisticStockPhoto_v10.safetensors, Refiner Model: None

Refiner Switch: 0.5, Sampler: dpmpp_2m_sde_gpu

Scheduler: karras, Seed: 227090093299087744

LoRA [SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] weight: 0.25,

2023-11-28_13-22-40_2118.png

Prompt:

Negative Prompt: (embedding:unaestheticXLv31:0.8), low quality, watermark

Fooocus V2 Expansion:

Styles: ['Fooocus V2', 'Fooocus Masterpiece', 'SAI Anime', 'SAI Digital Art', 'SAI Enhance', 'SAI Fantasy Art'], Performance: Extreme Speed

Resolution: (1768, 992), Sharpness: 0.0

Guidance Scale: 1.0, ADM Guidance: (1.0, 1.0, 0.0)

Base Model: bluePencilXL_v050.safetensors, Refiner Model: None

Refiner Switch: 1.0, Sampler: lcm

Scheduler: lcm, Seed: 3333355426078829622

LoRA [sd_xl_offset_example-lora_1.0.safetensors] weight: 0.5, LoRA [sdxl_lcm_lora.safetensors] weight: 1.0

Fooocus Log 2023-11-28 (private)

All images do not contain any hidden data.

Today I tried, for the last time I guess, to do a clean install of 2.1.791. As has happened so far, it runs. Once. Then it gets stuck in a number of places. Here's the log from today. If anyone could help? I really depend on Fooocus... or at least I did for a couple of months.

D:\Fooocus>.\python_embeded\python.exe -s Fooocus\launch.py --preset anime
[System ARGV] ['Fooocus\launch.py', '--preset', 'anime']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.791
Running on local URL: http://127.0.0.1:7860/

To create a public link, set share=True in launch().
Total VRAM 6144 MB, total RAM 16200 MB
xformers version: 0.0.20
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2060 : native
VAE dtype: torch.float32
Using xformers cross attention
model_type EPS
adm 0
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
Visited extra keys: {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Refiner model loaded: D:\Fooocus\Fooocus\models\checkpoints\DreamShaper_8_pruned.safetensors
model_type EPS
adm 2816
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
Visited extra keys: {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: D:\Fooocus\Fooocus\models\checkpoints\bluePencilXL_v050.safetensors
Request to load LoRAs [('sd_xl_offset_example-lora_1.0.safetensors', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5)] for model [D:\Fooocus\Fooocus\models\checkpoints\bluePencilXL_v050.safetensors].
Loaded LoRA [D:\Fooocus\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for model [D:\Fooocus\Fooocus\models\checkpoints\bluePencilXL_v050.safetensors] with 788 keys at weight 0.5.
Request to load LoRAs [('sd_xl_offset_example-lora_1.0.safetensors', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5)] for model [D:\Fooocus\Fooocus\models\checkpoints\DreamShaper_8_pruned.safetensors].
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.92 seconds
App started successful. Use the app with http://127.0.0.1:7860/ or 127.0.0.1:7860
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 7.0
[Parameters] Seed = 8772597745022207838
[Fooocus] Downloading upscale models ...
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 18 - 12
[Fooocus] Initializing ...
[Fooocus] Loading models ...
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
Traceback (most recent call last):
File "D:\Fooocus\Fooocus\modules\async_worker.py", line 668, in worker
handler(task)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus\Fooocus\modules\async_worker.py", line 351, in handler
t['c'] = pipeline.clip_encode(texts=t['positive'], pool_top_k=t['positive_top_k'])
File "D:\Fooocus\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus\Fooocus\modules\default_pipeline.py", line 151, in clip_encode
cond, pooled = clip_encode_single(final_clip, text)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus\Fooocus\modules\default_pipeline.py", line 128, in clip_encode_single
result = clip.encode_from_tokens(tokens, return_pooled=True)
File "D:\Fooocus\Fooocus\backend\headless\fcbh\sd.py", line 133, in encode_from_tokens
cond, pooled = self.cond_stage_model.encode_token_weights(tokens)
File "D:\Fooocus\Fooocus\backend\headless\fcbh\sdxl_clip.py", line 54, in encode_token_weights
g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
File "D:\Fooocus\Fooocus\modules\patch.py", line 274, in encode_token_weights_patched_with_a1111_method
out, pooled = self.encode(to_encode)
File "D:\Fooocus\Fooocus\backend\headless\fcbh\sd1_clip.py", line 211, in encode
return self(tokens)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus\Fooocus\backend\headless\fcbh\sd1_clip.py", line 189, in forward
outputs = self.transformer(input_ids=tokens, attention_mask=attention_mask, output_hidden_states=self.layer=="hidden")
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\transformers\models\clip\modeling_clip.py", line 822, in forward
return self.text_model(
File "D:\Fooocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\transformers\models\clip\modeling_clip.py", line 757, in forward
input_ids.to(dtype=torch.int, device=last_hidden_state.device).argmax(dim=-1),
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Exception in thread Thread-2 (worker):
Traceback (most recent call last):
File "threading.py", line 1016, in _bootstrap_inner
File "threading.py", line 953, in run
File "D:\Fooocus\Fooocus\modules\async_worker.py", line 675, in worker
pipeline.prepare_text_encoder(async_call=True)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus\Fooocus\modules\default_pipeline.py", line 172, in prepare_text_encoder
fcbh.model_management.load_models_gpu([final_clip.patcher, final_expansion.patcher])
File "D:\Fooocus\Fooocus\modules\patch.py", line 444, in patched_load_models_gpu
y = fcbh.model_management.load_models_gpu_origin(*args, **kwargs)
File "D:\Fooocus\Fooocus\backend\headless\fcbh\model_management.py", line 372, in load_models_gpu
free_memory(extra_mem, d, models_already_loaded)
File "D:\Fooocus\Fooocus\backend\headless\fcbh\model_management.py", line 330, in free_memory
if get_free_memory(device) > memory_required:
File "D:\Fooocus\Fooocus\backend\headless\fcbh\model_management.py", line 573, in get_free_memory
mem_free_cuda, _ = torch.cuda.mem_get_info(dev)
File "D:\Fooocus\python_embeded\lib\site-packages\torch\cuda\memory.py", line 618, in mem_get_info
return torch.cuda.cudart().cudaMemGetInfo(device)
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Total time: 898.90 seconds

@lllyasviel
Owner Author

@AFOLcast

The log says that the version control of your Fooocus is corrupted. This can happen when users follow misleading or wrong guides from the internet.

Download the latest package of Fooocus from the official link and run it directly. Then, if you meet any problem, paste the error here.

@AFOLcast
Copy link

I did a totally clean install of 2.1.831, but it does not run. I'm on an RTX 2060 in an MSI Creator 15 running Windows 11.

Here's the console:

D:\Fooocus21831>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Fast-forward merge
Update succeeded.
[System ARGV] ['Fooocus\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.839
Downloading: "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_version6Rundiffusion.safetensors" to D:\Fooocus21831\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors

100%|███████████████████████████████████████████████████████████████████████████| 6.62G/6.62G [1:09:50<00:00, 1.70MB/s]
Running on local URL: http://127.0.0.1:7865

To create a public link, set share=True in launch().
Total VRAM 6144 MB, total RAM 16200 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 2060 : native
VAE dtype: torch.float32
Using pytorch cross attention
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Base model loaded: D:\Fooocus21831\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [D:\Fooocus21831\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors].
Loaded LoRA [D:\Fooocus21831\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [D:\Fooocus21831\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.88 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 3
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 5460210364201560103
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 60 - 30
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.19 seconds
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
Traceback (most recent call last):
File "D:\Fooocus21831\Fooocus\modules\async_worker.py", line 803, in worker
handler(task)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus21831\Fooocus\modules\async_worker.py", line 420, in handler
t['uc'] = pipeline.clip_encode(texts=t['negative'], pool_top_k=t['negative_top_k'])
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus21831\Fooocus\modules\default_pipeline.py", line 190, in clip_encode
cond, pooled = clip_encode_single(final_clip, text)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus21831\Fooocus\modules\default_pipeline.py", line 148, in clip_encode_single
result = clip.encode_from_tokens(tokens, return_pooled=True)
File "D:\Fooocus21831\Fooocus\ldm_patched\modules\sd.py", line 131, in encode_from_tokens
cond, pooled = self.cond_stage_model.encode_token_weights(tokens)
File "D:\Fooocus21831\Fooocus\ldm_patched\modules\sdxl_clip.py", line 55, in encode_token_weights
l_out, l_pooled = self.clip_l.encode_token_weights(token_weight_pairs_l)
File "D:\Fooocus21831\Fooocus\modules\patch.py", line 321, in encode_token_weights_patched_with_a1111_method
return torch.cat(output, dim=-2).to(ldm_patched.modules.model_management.intermediate_device()), first_pooled
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Requested to load GPT2LMHeadModel
Loading 1 new model
Exception in thread Thread-3 (worker):
Traceback (most recent call last):
File "threading.py", line 1016, in _bootstrap_inner
File "threading.py", line 953, in run
File "D:\Fooocus21831\Fooocus\modules\async_worker.py", line 809, in worker
pipeline.prepare_text_encoder(async_call=True)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus21831\Fooocus\modules\default_pipeline.py", line 211, in prepare_text_encoder
ldm_patched.modules.model_management.load_models_gpu([final_clip.patcher, final_expansion.patcher])
File "D:\Fooocus21831\Fooocus\modules\patch.py", line 475, in patched_load_models_gpu
y = ldm_patched.modules.model_management.load_models_gpu_origin(*args, **kwargs)
File "D:\Fooocus21831\Fooocus\ldm_patched\modules\model_management.py", line 388, in load_models_gpu
free_memory(total_memory_required[device] * 1.3 + extra_mem, device, models_already_loaded)
File "D:\Fooocus21831\Fooocus\ldm_patched\modules\model_management.py", line 348, in free_memory
mem_free_total, mem_free_torch = get_free_memory(device, torch_free_too=True)
File "D:\Fooocus21831\Fooocus\ldm_patched\modules\model_management.py", line 643, in get_free_memory
mem_free_cuda, _ = torch.cuda.mem_get_info(dev)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\cuda\memory.py", line 663, in mem_get_info
Total time: 931.21 seconds
return torch.cuda.cudart().cudaMemGetInfo(device)
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

@lllyasviel
Owner Author

@AFOLcast

Hi, we have updated the troubleshooting doc for this problem. The info is:

CUDA kernel errors might be asynchronously reported at some other API call

A very small number of devices do have this problem. The cause can be complicated, but it can usually be resolved after following these steps:

  1. Make sure that you are using the official and latest version, installed from here. (Some forks and other versions are more likely to cause this problem.)
  2. Upgrade your Nvidia driver to the latest version. (Usually your Nvidia driver version should be 53X, not 3XX or 4XX.)
  3. If things still do not work, then perhaps it is a problem with CUDA 12. You can try CUDA 11 and xformers to solve this problem. We have prepared all files for you; please do NOT install any CUDA or other environment on your own. The only official way to do this is: (1) back up and delete your python_embeded folder (near the run.bat); (2) download "previous_old_xformers_env.7z" from the release page, decompress it, and put the newly extracted python_embeded folder near your run.bat (see the sketch after this list); (3) run Fooocus.
  4. If it still does not work, please open an issue for us to take a look.
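
A minimal command sketch of step 3 on Windows, assuming the default release layout where run.bat sits next to python_embeded (paths and the archive name follow the description above):

rem (1) Back up the current embedded Python environment
move python_embeded python_embeded_backup
rem (2) Decompress previous_old_xformers_env.7z from the release page here,
rem     so that a fresh python_embeded folder sits next to run.bat
rem (3) Launch Fooocus
run.bat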


See also https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md#cuda-kernel-errors-might-be-asynchronously-reported-at-some-other-api-call
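
As the error text itself suggests, a more accurate stack trace can usually be obtained by setting CUDA_LAUNCH_BLOCKING before launching. A hypothetical Windows invocation, reusing the launch command from the logs above:

set CUDA_LAUNCH_BLOCKING=1
.\python_embeded\python.exe -s Fooocus\entry_with_update.py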

@AFOLcast

Should I open a new issue? I am using the latest version. Downloaded it yesterday. Ran it today.

I am running the latest version of the Nvidia Studio Driver, 546.33.
I backed up and deleted the python_embeded folder. I downloaded the old xformers archive fresh. (I had tried this trick twice before already.)
I extracted a fresh copy of the old xformers environment into the folder.
I ran run.bat.
It choked at the same point again. The terminal output is below. Are there any other logs or specs I could send you? If there is nothing further to try, is it possible to roll back to an even earlier version? I was quite happy with the artwork I was generating in August, September, and most of October.

D:\Fooocus21831>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Fast-forward merge
Update succeeded.
[System ARGV] ['Fooocus\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.842
Installing requirements
Running on local URL: http://127.0.0.1:7865

To create a public link, set share=True in launch().
Total VRAM 6144 MB, total RAM 16200 MB
xformers version: 0.0.20
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 2060 : native
VAE dtype: torch.float32
Using xformers cross attention
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Base model loaded: D:\Fooocus21831\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [D:\Fooocus21831\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors].
Loaded LoRA [D:\Fooocus21831\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [D:\Fooocus21831\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 1.02 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 7782454209570747637
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.36 seconds
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
Traceback (most recent call last):
File "D:\Fooocus21831\Fooocus\modules\async_worker.py", line 803, in worker
handler(task)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus21831\Fooocus\modules\async_worker.py", line 420, in handler
t['uc'] = pipeline.clip_encode(texts=t['negative'], pool_top_k=t['negative_top_k'])
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus21831\Fooocus\modules\default_pipeline.py", line 190, in clip_encode
cond, pooled = clip_encode_single(final_clip, text)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Fooocus21831\Fooocus\modules\default_pipeline.py", line 148, in clip_encode_single
result = clip.encode_from_tokens(tokens, return_pooled=True)
File "D:\Fooocus21831\Fooocus\ldm_patched\modules\sd.py", line 131, in encode_from_tokens
cond, pooled = self.cond_stage_model.encode_token_weights(tokens)
File "D:\Fooocus21831\Fooocus\ldm_patched\modules\sdxl_clip.py", line 54, in encode_token_weights
g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
File "D:\Fooocus21831\Fooocus\modules\patch.py", line 303, in encode_token_weights_patched_with_a1111_method
out, pooled = self.encode(to_encode)
File "D:\Fooocus21831\Fooocus\ldm_patched\modules\sd1_clip.py", line 191, in encode
return self(tokens)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus21831\Fooocus\ldm_patched\modules\sd1_clip.py", line 173, in forward
outputs = self.transformer(tokens, attention_mask, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus21831\Fooocus\ldm_patched\modules\clip_model.py", line 131, in forward
return self.text_model(*args, **kwargs)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus21831\Fooocus\ldm_patched\modules\clip_model.py", line 109, in forward
x, i = self.encoder(x, mask=mask, intermediate_output=intermediate_output)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus21831\Fooocus\ldm_patched\modules\clip_model.py", line 68, in forward
x = l(x, mask, optimized_attention)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus21831\Fooocus\ldm_patched\modules\clip_model.py", line 49, in forward
x += self.self_attn(self.layer_norm1(x), mask, optimized_attention)
File "D:\Fooocus21831\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Fooocus21831\Fooocus\ldm_patched\modules\clip_model.py", line 20, in forward
out = optimized_attention(q, k, v, self.heads, mask)
File "D:\Fooocus21831\Fooocus\ldm_patched\ldm\modules\attention.py", line 318, in attention_pytorch
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Total time: 680.03 seconds

@lllyasviel
Owner Author

@AFOLcast Yes, you can find previous versions here: #1405

@AFOLcast

Do you have any input on why this version won't run on my machine?

@lllyasviel
Owner Author

@AFOLcast The log says that it is still not using xformers. You can also try --attention-split or --attention-quad.
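
For instance, reusing the launch command from the logs above, a hypothetical invocation with one of these flags would be:

.\python_embeded\python.exe -s Fooocus\entry_with_update.py --attention-split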

@Camoen

Camoen commented Dec 16, 2023

I had a problem with initial setup.

Running python entry_with_update.py resulted in an error:

Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "~/Fooocus/modules/async_worker.py", line 31, in worker
    import extras.preprocessors as preprocessors
  File "~/Fooocus/extras/preprocessors.py", line 1, in <module>
    import cv2
  File "~/Fooocus/fooocus_env/lib/python3.10/site-packages/cv2/__init__.py", line 153, in bootstrap
    native_module = importlib.import_module("cv2")
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ImportError: libGL.so.1: cannot open shared object file: No such file or directory

I was able to resolve this by running pip install opencv-python-headless and retrying (thanks to ultralytics/ultralytics#1270 (comment) for the resolution).
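
For anyone hitting the same ImportError, the two common fixes sketch out as follows (the apt package name assumes a Debian/Ubuntu host and may differ on other distributions):

# Option 1: swap in the headless OpenCV wheel, which does not link against libGL
pip install opencv-python-headless
# Option 2: install the system library that the regular opencv-python wheel expects
sudo apt-get install libgl1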

@axia89

axia89 commented Dec 26, 2023

Hi everyone. I read that it's possible to run Fooocus on CPU. I also have an integrated Intel Arc GPU. I modified the run .bat, but unfortunately I'm still getting "CUDA not found". Any help, please? I tried it on Colab: sometimes it's OK, but most of the time I have to run the cell again. For now I have no money to buy a new laptop with an Nvidia graphics card.

@nadora35

Please help with PyraCanny:
Untitled

@samliu315

This is a troubleshooting guide for common problems.

Check here.

Feel free to remind us if there are more common problems that are not covered by this doc.

Hi! I really like the inpaint feature of Fooocus. Do you have any plans to implement it on Flux as well?

@Arcanus6669

When trying to do a face swap I get this error:

[Parameters] Adaptive CFG = 7
[Parameters] CLIP Skip = 2
[Parameters] Sharpness = 2
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] Seed = 934889909701111104
[Parameters] CFG = 3
[Fooocus] Downloading upscale models ...
[Fooocus] Downloading inpainter ...
[Inpaint] Current inpaint model is /Users/master/Fooocus/models/inpaint/inpaint_v26.fooocus.patch
[Fooocus] Downloading control models ...
[Fooocus] Loading control models ...
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 60 - 48
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Synthetic Refiner Activated
Synthetic Refiner Activated
Request to load LoRAs [('SDXL_FILM_PHOTOGRAPHY_STYLE_V1.safetensors', 0.25), ('/Users/master/Fooocus/models/inpaint/inpaint_v26.fooocus.patch', 1.0)] for model [/Users/master/Fooocus/models/checkpoints/realisticStockPhoto_v20.safetensors].
Loaded LoRA [/Users/master/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_V1.safetensors] for UNet [/Users/master/Fooocus/models/checkpoints/realisticStockPhoto_v20.safetensors] with 722 keys at weight 0.25.
Loaded LoRA [/Users/master/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_V1.safetensors] for CLIP [/Users/master/Fooocus/models/checkpoints/realisticStockPhoto_v20.safetensors] with 264 keys at weight 0.25.
Loaded LoRA [/Users/master/Fooocus/models/inpaint/inpaint_v26.fooocus.patch] for UNet [/Users/master/Fooocus/models/checkpoints/realisticStockPhoto_v20.safetensors] with 960 keys at weight 1.0.
Request to load LoRAs [('SDXL_FILM_PHOTOGRAPHY_STYLE_V1.safetensors', 0.25)] for model [/Users/master/Fooocus/models/checkpoints/realisticStockPhoto_v20.safetensors].
Loaded LoRA [/Users/master/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_V1.safetensors] for UNet [/Users/master/Fooocus/models/checkpoints/realisticStockPhoto_v20.safetensors] with 722 keys at weight 0.25.
Requested to load SDXLClipModel
Loading 1 new model
unload clone 1
[Fooocus Model Management] Moving model(s) has taken 1.73 seconds
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Fooocus] Image processing ...
Upscaling image with shape (423, 423, 3) ...
Traceback (most recent call last):
File "/Users/master/Fooocus/modules/async_worker.py", line 1471, in worker
handler(task)
File "/Users/master/miniconda3/envs/fooocus/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Users/master/miniconda3/envs/fooocus/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Users/master/Fooocus/modules/async_worker.py", line 1193, in handler
denoising_strength, initial_latent, width, height, current_progress = apply_inpaint(async_task,
File "/Users/master/Fooocus/modules/async_worker.py", line 486, in apply_inpaint
inpaint_worker.current_task = inpaint_worker.InpaintWorker(
File "/Users/master/Fooocus/modules/inpaint_worker.py", line 177, in init
self.mask = morphological_open(mask)
File "/Users/master/Fooocus/modules/inpaint_worker.py", line 45, in morphological_open
maxed = max_filter_opencv(x_int16, ksize=3) - 8
File "/Users/master/Fooocus/modules/inpaint_worker.py", line 35, in max_filter_opencv
return cv2.dilate(x, np.ones((ksize, ksize), dtype=np.int16))
cv2.error: OpenCV(4.10.0) :-1: error: (-5:Bad argument) in function 'dilate'

Overload resolution failed:

  • src is not a numpy array, neither a scalar
  • Expected Ptr<cv::UMat> for argument 'src'

Total time: 15.09 seconds
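
A minimal workaround sketch for this dilate failure, assuming the mask values fit in 0-255; the helper mirrors max_filter_opencv from the traceback but is illustrative, not the project's actual fix:

import cv2
import numpy as np

def max_filter_opencv_u8(x: np.ndarray, ksize: int = 3) -> np.ndarray:
    # Some OpenCV builds reject the int16 input with the overload error above;
    # running the max filter in uint8 space sidesteps it.
    x_u8 = np.ascontiguousarray(np.clip(x, 0, 255).astype(np.uint8))
    dilated = cv2.dilate(x_u8, np.ones((ksize, ksize), dtype=np.uint8))
    return dilated.astype(np.int16)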

@hltzngnn

I am running the Fooocus application through Google Colab, but the input image feature has not been working for a while.

@Arcanus6669

It works on my slower machine... So what are people using instead?
