Labels
bug (Something isn't working)
Description
Confirmation
- I have confirmed this is a workflow template mistake
Template Name
"Flux.2 [Klein] 9B Distilled: Image Edit" or "image_flux2_klein_image_edit_9b_distilled"
Problem Description
The value of the noise_seed parameter does not change between generations, even though the control after generate parameter is set to randomize.
When I try to generate again, the console output only shows got prompt followed by Prompt executed in 0.02 seconds, presumably because the unchanged seed lets ComfyUI reuse the cached result and skip execution.
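If it helps narrow this down: in the exported workflow JSON, the seed control is stored as a widget value on the noise node. A minimal sketch of the fragment I would expect (the node id and seed value are hypothetical, and I am assuming the template uses a RandomNoise node since the parameter is called noise_seed; the second widgets_values entry is where control after generate lives):

{
  "id": 25,
  "type": "RandomNoise",
  "widgets_values": [
    720944312984046,
    "randomize"
  ]
}

When control after generate works, the first widgets_values entry is rewritten with a new random seed after every queued generation; here it stays fixed.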
ComfyUI Mode
Legacy Mode
Error Message / Logs
M:\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --disable-all-custom-nodes
Checkpoint files will always be loaded safely.
WARNING: You need pytorch with cu130 or higher to use optimized CUDA operations.
Found comfy_kitchen backend eager: {'available': True, 'disabled': False, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_nvfp4']}
Found comfy_kitchen backend cuda: {'available': True, 'disabled': True, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8']}
Found comfy_kitchen backend triton: {'available': False, 'disabled': True, 'unavailable_reason': "ImportError: No module named 'triton'", 'capabilities': []}
Total VRAM 12281 MB, total RAM 32689 MB
pytorch version: 2.7.1+cu128
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 14709.0
Using pytorch attention
Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
ComfyUI version: 0.12.3
ComfyUI frontend version: 1.37.11
[Prompt Server] web root: M:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\comfyui_frontend_package\static
Skipping loading of custom nodes
Context impl SQLiteImpl.
Will assume non-transactional DDL.
Assets scan(roots=['models']) completed in 0.033s (created=0, skipped_existing=29, orphans_pruned=5, total_seen=29)
Starting server
To see the GUI go to: http://127.0.0.1:8188
got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Requested to load AutoencoderKL
loaded completely; 7014.81 MB usable, 160.31 MB loaded, full load: True
Found quantization metadata version 1
Using MixedPrecisionOps for text encoder
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
Requested to load Flux2TEModel_
loaded completely; 9467.49 MB usable, 8263.34 MB loaded, full load: True
model weight dtype torch.bfloat16, manual cast: None
model_type FLUX
Requested to load Flux2
loaded partially; 7287.49 MB usable, 6500.02 MB loaded, 10816.00 MB offloaded, 768.00 MB buffer reserved, lowvram patches: 0
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:12<00:00, 3.17s/it]
Requested to load AutoencoderKL
Unloaded partially: 740.02 MB freed, 5760.00 MB remains loaded, 768.00 MB buffer reserved, lowvram patches: 0
loaded completely; 384.13 MB usable, 160.31 MB loaded, full load: True
Prompt executed in 79.75 seconds
got prompt
Prompt executed in 0.01 seconds
got prompt
Prompt executed in 0.00 seconds
got prompt
Prompt executed in 0.01 seconds
got prompt
Prompt executed in 0.01 seconds
got prompt
Prompt executed in 0.01 seconds