Describe the bug
I tried the GGUF quantized checkpoint the same way it works with FluxPipeline or FluxFillPipeline, but with FluxKontextPipeline it fails. I am using https://huggingface.co/bullerwins/FLUX.1-Kontext-dev-GGUF/blob/main/flux1-kontext-dev-Q4_K_M.gguf. I also tried resizing the input image to width = (width // 64) * 64, height = (height // 64) * 64, but the error persists.
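For reference, this is a minimal sketch of the resizing I tried (using a dummy PIL image here in place of the real input, which is loaded from disk in my actual script):

```python
from PIL import Image

# Dummy stand-in for the real input image (any size not a multiple of 64).
init_image = Image.new("RGB", (1000, 750))
width, height = init_image.size
# Round both dimensions down to the nearest multiple of 64.
init_image = init_image.resize(((width // 64) * 64, (height // 64) * 64))
print(init_image.size)  # (960, 704)
```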
Reproduction
import torch
from diffusers import FluxKontextPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

transformer = FluxTransformer2DModel.from_single_file(
    "https://huggingface.co/bullerwins/FLUX.1-Kontext-dev-GGUF/blob/main/flux1-kontext-dev-Q4_K_M.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
pipeline = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

# init_image is the input image loaded beforehand
images = pipeline(
    prompt="restored photo",
    image=init_image,
    guidance_scale=2.5,
    num_inference_steps=28,
).images[0]
Logs
File "/root/miniconda3/envs/Flux_test/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux_kontext.py", line 687, in prepare_latents
    image_latents = self._pack_latents(
File "/root/miniconda3/envs/Flux_test/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux_kontext.py", line 574, in _pack_latents
    latents = latents.view(batch_size, num_channels_latents, height // 2, 2, width // 2, 2)
RuntimeError: shape '[1, 32, 64, 2, 64, 2]' is invalid for input of size 262144
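The arithmetic in the error can be reproduced in isolation: the target shape [1, 32, 64, 2, 64, 2] requires 524288 elements, exactly twice the 262144 elements the latents tensor actually holds, so the view cannot succeed. A minimal sketch (the sizes are taken from the traceback, not from the pipeline internals):

```python
import torch

# Target shape from the traceback.
target = (1, 32, 64, 2, 64, 2)
expected = 1
for d in target:
    expected *= d
print(expected)  # 524288

# The tensor passed to _pack_latents has only 262144 elements -- exactly
# half -- so .view() cannot reinterpret it and raises RuntimeError.
latents = torch.zeros(262144)
try:
    latents.view(target)
except RuntimeError as e:
    print(type(e).__name__)  # RuntimeError
```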
System Info
- Diffusers version: 0.35.0.dev0
- Platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.35
- Python version: 3.10.18
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Huggingface_hub version: 0.33.1
- Transformers version: 4.53.0
- Accelerate version: 1.8.1
- PEFT version: not installed
- Bitsandbytes version: not installed
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA GeForce RTX 3090, 24576 MiB
Who can help?
No response