Hi, I was trying to generate embeddings from a very small subset of the VoxCeleb dataset (around 200 MB). The process created a training_data.npz file (around 2 GB), which was then loaded for training with uis-rnn. However, I encountered this error:
RuntimeError: CUDA out of memory. Tried to allocate 27.45 GiB (GPU 1; 39.59 GiB total capacity; 18.75 GiB already allocated; 19.56 GiB free; 18.77 GiB reserved in total by PyTorch)
The error does not occur when I try a smaller file. Any idea how to resolve this issue? Thank you in advance.
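For context, this is roughly the loading/training flow I'm using. A minimal sketch, not the repo's exact code: I'm assuming the .npz stores the data under the keys `train_sequence` and `train_cluster_id` (the names used in the uis-rnn demo), and slicing it to reproduce the "smaller file works" observation:

```python
import numpy as np
import uisrnn

# Load the generated embeddings; allow_pickle may be needed depending on
# how the .npz was written (keys below are assumptions from the uis-rnn demo).
data = np.load('training_data.npz', allow_pickle=True)
train_sequence = data['train_sequence']      # 2-D array: (num_observations, embedding_dim)
train_cluster_id = data['train_cluster_id']  # 1-D array of speaker labels

# Training on a smaller slice first is how I confirmed the OOM only shows up
# with the full file (the slice size here is arbitrary).
subset = len(train_sequence) // 4
model_args, training_args, _ = uisrnn.parse_arguments()
model = uisrnn.UISRNN(model_args)
model.fit(train_sequence[:subset], train_cluster_id[:subset], training_args)
```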
Thank you for your prompt response :) I had modified the code to set allow_pickle to True, but I've reverted to your original code and downgraded the PyTorch and NumPy versions instead. However, the error still occurs, now with a much smaller allocation request:
RuntimeError: CUDA out of memory. Tried to allocate 2.31 GiB (GPU 0; 39.59 GiB total capacity; 37.57 GiB already allocated; 997.19 MiB free; 1016.00 KiB cached)
Is this the normal memory footprint for a 2 GiB input? I'm not sure where the 37.57 GiB is being allocated; there was no other process running on the GPU at the same time. By the way, what is the largest training_data.npz file you've trained with?
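To narrow down where the memory goes, I've been printing PyTorch's own view of GPU usage around the fit call. A small sketch using standard torch.cuda calls (nothing repo-specific; on older PyTorch versions `memory_reserved` is called `memory_cached`):

```python
import torch

def report_gpu_memory(tag, device=0):
    """Print how much GPU memory PyTorch currently holds at this point."""
    allocated = torch.cuda.memory_allocated(device) / 1024 ** 3
    reserved = torch.cuda.memory_reserved(device) / 1024 ** 3
    print(f'[{tag}] allocated: {allocated:.2f} GiB, reserved: {reserved:.2f} GiB')

report_gpu_memory('before fit')
# ... model.fit(...) goes here ...
report_gpu_memory('after fit')

# torch.cuda.memory_summary(device) gives a more detailed breakdown
# on recent PyTorch versions.
```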