Checklist
The issue exists on a clean installation of Fooocus
The issue exists in the current version of Fooocus
The issue has not been reported recently
The issue has been reported before but has not been fixed yet
What happened?
After clicking the "Generate" button, errors occur on the standard settings after startup.
Steps to reproduce the problem
An error appears before generation starts, some models that previously worked no longer work, and Fooocus sometimes crashes outright.
It also sometimes crashes after "Upscale".
What should have happened?
Generation should complete without errors. I hadn't used Fooocus for about half a year; everything worked perfectly back then, and I don't know why errors now occur constantly.
What browsers do you use to access Fooocus?
Brave, Microsoft Edge, Other
Where are you running Fooocus?
Locally
What operating system are you using?
Windows 11
Console logs
The next time I ran it, the following appeared in the console:
ERROR diffusion_model.output_blocks.1.1.transformer_blocks.4.ff.net.0.proj.weight CUDA out of memory. Tried to allocate 50.00 MiB. GPU 0 has a total capacty of 11.99 GiB of which 7.01 GiB is free. Of the allocated memory 3.55 GiB is allocated by PyTorch, and 151.31 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ERROR diffusion_model.output_blocks.1.1.transformer_blocks.5.ff.net.0.proj.weight CUDA out of memory. Tried to allocate 50.00 MiB. GPU 0 has a total capacty of 11.99 GiB of which 7.01 GiB is free. Of the allocated memory 3.56 GiB is allocated by PyTorch, and 135.06 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ERROR diffusion_model.output_blocks.1.1.transformer_blocks.5.ff.net.2.weight CUDA out of memory. Tried to allocate 26.00 MiB. GPU 0 has a total capacty of 11.99 GiB of which 7.01 GiB is free. Of the allocated memory 3.56 GiB is allocated by PyTorch, and 135.22 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ERROR diffusion_model.output_blocks.1.1.transformer_blocks.6.attn1.to_v.weight CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacty of 11.99 GiB of which 7.01 GiB is free. Of the allocated memory 3.59 GiB is allocated by PyTorch, and 112.83 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ERROR diffusion_model.output_blocks.1.1.transformer_blocks.6.attn1.to_out.0.weight CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacty of 11.99 GiB of which 7.01 GiB is free. Of the allocated memory 3.59 GiB is allocated by PyTorch, and 109.71 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Traceback (most recent call last):
File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\modules\async_worker.py", line 1471, in worker
handler(task)
File "C:\ProgramData\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\ProgramData\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\modules\async_worker.py", line 1286, in handler
imgs, img_paths, current_progress = process_task(all_steps, async_task, callback, controlnet_canny_path,
File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\modules\async_worker.py", line 295, in process_task
imgs = pipeline.process_diffusion(
File "C:\ProgramData\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\ProgramData\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\modules\default_pipeline.py", line 451, in process_diffusion
sampled_latent = core.ksampler(
File "C:\ProgramData\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\ProgramData\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\modules\core.py", line 310, in ksampler
samples = ldm_patched.modules.sample.sample(model,
File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\sample.py", line 93, in sample
real_model, positive_copy, negative_copy, noise_mask, models = prepare_sampling(model, noise.shape, positive, negative, noise_mask)
File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\sample.py", line 86, in prepare_sampling
ldm_patched.modules.model_management.load_models_gpu([model] + models, model.memory_required([noise_shape[0] * 2] + list(noise_shape[1:])) + inference_memory)
File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\modules\patch.py", line 447, in patched_load_models_gpu
y = ldm_patched.modules.model_management.load_models_gpu_origin(*args, **kwargs)
File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\model_management.py", line 437, in load_models_gpu
cur_loaded_model = loaded_model.model_load(lowvram_model_memory)
File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\model_management.py", line 304, in model_load
raise e
File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\model_management.py", line 300, in model_load
self.real_model = self.model.patch_model(device_to=patch_model_to) #TODO: do something with loras and offloading to CPU
File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\model_patcher.py", line 199, in patch_model
temp_weight = ldm_patched.modules.model_management.cast_to_device(weight, device_to, torch.float32, copy=True)
File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\model_management.py", line 615, in cast_to_device
return tensor.to(device, copy=copy, non_blocking=non_blocking).to(dtype, non_blocking=non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 26.00 MiB. GPU 0 has a total capacty of 11.99 GiB of which 7.01 GiB is free. Of the allocated memory 3.58 GiB is allocated by PyTorch, and 119.16 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Total time: 9.68 seconds
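The OOM messages above themselves suggest setting max_split_size_mb via PYTORCH_CUDA_ALLOC_CONF to reduce fragmentation. Below is a minimal Python sketch of one way to apply that, assuming the variable is set before PyTorch initializes its CUDA allocator; the 128 MiB value is an assumption for illustration, not something taken from this report:

import os

# Hypothetical workaround sketch: cap the allocator's split size to fight fragmentation.
# PYTORCH_CUDA_ALLOC_CONF is read when the CUDA caching allocator initializes, so it
# must be set before any CUDA allocation happens (e.g. at the very top of the launcher).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # 128 is an assumed value

import torch  # imported after setting the env var so the allocator can pick it up

x = torch.zeros(1, device="cuda")  # forces allocator init; should succeed without OOM

On the Windows portable build, the equivalent would be a "set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128" line in run.bat before Python starts, assuming run.bat is the launcher in use.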
Additional information
I tried different driver versions, from older ones up to the latest.
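Since the log oddly reports 7.01 GiB free while failing to allocate as little as 26 MiB, it may help to confirm what PyTorch actually sees on GPU 0. A small diagnostic sketch using standard torch.cuda calls (nothing here comes from the report itself):

import torch

# Print the PyTorch build, the CUDA runtime it was compiled against, and the GPU name.
print(torch.__version__, torch.version.cuda)
print(torch.cuda.get_device_name(0))

# mem_get_info returns (free, total) device memory in bytes, as reported by the driver.
free, total = torch.cuda.mem_get_info(0)
print(f"free: {free / 2**30:.2f} GiB / total: {total / 2**30:.2f} GiB")

If the free memory printed here disagrees with what the error message claims, that would point at driver or allocator state rather than at the model itself.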