[Bug]: After clicking the "Generate" button ERRORS #3660

Open
3 of 5 tasks
VladislavT-Flex opened this issue Oct 6, 2024 · 1 comment
Labels
bug Something isn't working triage This needs an (initial) review

Comments

@VladislavT-Flex

Checklist

  • The issue has not been resolved by following the troubleshooting guide
  • The issue exists on a clean installation of Fooocus
  • The issue exists in the current version of Fooocus
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

After clicking the "Generate" button, errors occur on the standard settings after startup.

Steps to reproduce the problem

An error appears before generation starts, and some models that previously worked no longer do; the application also crashes.
Sometimes it crashes after "Upscale".

What should have happened?

I hadn't used Fooocus for half a year.
Everything used to work perfectly, and I don't know why errors now occur constantly.

What browsers do you use to access Fooocus?

Brave, Microsoft Edge, Other

Where are you running Fooocus?

Locally

What operating system are you using?

Windows 11

Console logs

After the next attempt, the following appeared in the console:

ERROR diffusion_model.output_blocks.1.1.transformer_blocks.4.ff.net.0.proj.weight CUDA out of memory. Tried to allocate 50.00 MiB. GPU 0 has a total capacty of 11.99 GiB of which 7.01 GiB is free. Of the allocated memory 3.55 GiB is allocated by PyTorch, and 151.31 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ERROR diffusion_model.output_blocks.1.1.transformer_blocks.5.ff.net.0.proj.weight CUDA out of memory. Tried to allocate 50.00 MiB. GPU 0 has a total capacty of 11.99 GiB of which 7.01 GiB is free. Of the allocated memory 3.56 GiB is allocated by PyTorch, and 135.06 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ERROR diffusion_model.output_blocks.1.1.transformer_blocks.5.ff.net.2.weight CUDA out of memory. Tried to allocate 26.00 MiB. GPU 0 has a total capacty of 11.99 GiB of which 7.01 GiB is free. Of the allocated memory 3.56 GiB is allocated by PyTorch, and 135.22 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ERROR diffusion_model.output_blocks.1.1.transformer_blocks.6.attn1.to_v.weight CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacty of 11.99 GiB of which 7.01 GiB is free. Of the allocated memory 3.59 GiB is allocated by PyTorch, and 112.83 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ERROR diffusion_model.output_blocks.1.1.transformer_blocks.6.attn1.to_out.0.weight CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacty of 11.99 GiB of which 7.01 GiB is free. Of the allocated memory 3.59 GiB is allocated by PyTorch, and 109.71 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Traceback (most recent call last):
  File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\modules\async_worker.py", line 1471, in worker
    handler(task)
  File "C:\ProgramData\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\ProgramData\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\modules\async_worker.py", line 1286, in handler
    imgs, img_paths, current_progress = process_task(all_steps, async_task, callback, controlnet_canny_path,
  File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\modules\async_worker.py", line 295, in process_task
    imgs = pipeline.process_diffusion(
  File "C:\ProgramData\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\ProgramData\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\modules\default_pipeline.py", line 451, in process_diffusion
    sampled_latent = core.ksampler(
  File "C:\ProgramData\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\ProgramData\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\modules\core.py", line 310, in ksampler
    samples = ldm_patched.modules.sample.sample(model,
  File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\sample.py", line 93, in sample
    real_model, positive_copy, negative_copy, noise_mask, models = prepare_sampling(model, noise.shape, positive, negative, noise_mask)
  File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\sample.py", line 86, in prepare_sampling
    ldm_patched.modules.model_management.load_models_gpu([model] + models, model.memory_required([noise_shape[0] * 2] + list(noise_shape[1:])) + inference_memory)
  File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\modules\patch.py", line 447, in patched_load_models_gpu
    y = ldm_patched.modules.model_management.load_models_gpu_origin(*args, **kwargs)
  File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\model_management.py", line 437, in load_models_gpu
    cur_loaded_model = loaded_model.model_load(lowvram_model_memory)
  File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\model_management.py", line 304, in model_load
    raise e
  File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\model_management.py", line 300, in model_load
    self.real_model = self.model.patch_model(device_to=patch_model_to) #TODO: do something with loras and offloading to CPU
  File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\model_patcher.py", line 199, in patch_model
    temp_weight = ldm_patched.modules.model_management.cast_to_device(weight, device_to, torch.float32, copy=True)
  File "C:\ProgramData\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\model_management.py", line 615, in cast_to_device
    return tensor.to(device, copy=copy, non_blocking=non_blocking).to(dtype, non_blocking=non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 26.00 MiB. GPU 0 has a total capacty of 11.99 GiB of which 7.01 GiB is free. Of the allocated memory 3.58 GiB is allocated by PyTorch, and 119.16 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Total time: 9.68 seconds
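
The error message itself suggests setting max_split_size_mb via PYTORCH_CUDA_ALLOC_CONF to reduce fragmentation. A minimal sketch of that setting (not Fooocus code; the value 128 is only an example) sets the variable before torch initializes CUDA:

# Minimal sketch, assumption: this runs before torch touches the GPU.
import os
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")  # example value, not a recommendation

import torch  # import only after the variable is set so the allocator picks it up

print(torch.cuda.is_available())

For a packaged Fooocus install the equivalent would be exporting the same environment variable before launching the app; the exact launcher file is not shown in this report, so that part is an assumption.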

Additional information

I have tried different driver versions, from older releases to the latest.

@VladislavT-Flex VladislavT-Flex added bug Something isn't working triage This needs an (initial) review labels Oct 6, 2024
@mashb1t
Collaborator

mashb1t commented Oct 18, 2024

Which GPU are you using? Ensure it has more than 6 GB of VRAM for Playground.
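
For reference, a minimal sketch (assuming the same Python environment that runs Fooocus, with torch installed) to report the GPU name and its free/total VRAM; nvidia-smi shows the same information:

import torch

# Minimal sketch: print the detected GPU and its free/total memory in GiB.
if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info()  # bytes on the current CUDA device
    print(torch.cuda.get_device_name(0))
    print(f"{free / 1024**3:.2f} GiB free of {total / 1024**3:.2f} GiB total")
else:
    print("CUDA is not available")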
