Description:
I'm encountering an out-of-memory error when converting Stable Diffusion 3-medium to CoreML using Torch2CoreML, even with a small latent size.
Steps to Reproduce:
Followed installation guide:
conda create -n diffusionkit python=3.11 -y
conda activate diffusionkit
cd /path/to/diffusionkit/repo
pip install -e .
huggingface-cli login --token MY_TOKEN
(Accepted the StabilityAI license terms and requested gated-repo access with the Hugging Face token.)
Run the command: python -m python.src.diffusionkit.tests.torch2coreml.test_mmdit --sd3-ckpt-path stabilityai/stable-diffusion-3-medium --model-version 2b -o for_CoreML_mlpackage --latent-size 16
Expected Behavior:
The script should successfully convert the Stable Diffusion model to CoreML.
Actual Behavior:
The script fails with the following error message:
RuntimeError: MPS backend out of memory (MPS allocated: 8.08 GB, other allocations: 8.35 GB, max allowed: 9.07 GB). Tried to allocate 12.00 KB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
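Reading the numbers out of the error message (a plain arithmetic check, using only the GB figures PyTorch reports above): the combined allocations already far exceed the 9.07 GB watermark on this 8 GB machine, which is why even a 12 KB request fails.

```python
# Values copied from the RuntimeError message above.
mps_allocated_gb = 8.08   # "MPS allocated"
other_alloc_gb = 8.35     # "other allocations"
max_allowed_gb = 9.07     # "max allowed"

total_gb = mps_allocated_gb + other_alloc_gb
excess_gb = total_gb - max_allowed_gb
print(f"total in use: {total_gb:.2f} GB, cap: {max_allowed_gb} GB, "
      f"over by: {excess_gb:.2f} GB")
# → total in use: 16.43 GB, cap: 9.07 GB, over by: 7.36 GB
```

Since the process is already roughly 7 GB past the cap before the failing 12 KB allocation, shrinking --latent-size cannot plausibly recover that much headroom.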
System Information:
macOS Version: 14.0
Mac Model: Mac mini (M1, 2020), arm64
RAM Size: 8 GB
Python Version: 3.11.10 (main, Oct 3 2024, 02:26:51) [Clang 14.0.6 ]
PyTorch Version: 2.5.1
Additional Information:
Tried reducing --latent-size further, but the issue persists.
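For reference, a minimal sketch of the workaround the error message itself suggests (untested on this repo; disabling the high-watermark limit may hang or freeze the machine if the model genuinely does not fit in 8 GB of unified memory):

```shell
# Disable the MPS upper memory limit, as suggested by the RuntimeError.
# Use with caution: PyTorch warns this "may cause system failure".
export PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0
echo "PYTORCH_MPS_HIGH_WATERMARK_RATIO=$PYTORCH_MPS_HIGH_WATERMARK_RATIO"

# Then rerun the same conversion command:
# python -m python.src.diffusionkit.tests.torch2coreml.test_mmdit \
#   --sd3-ckpt-path stabilityai/stable-diffusion-3-medium \
#   --model-version 2b -o for_CoreML_mlpackage --latent-size 16
```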
Date and Time:
2024-11-12 00:07:02