The project requires `pip3 install ffmpeg soundfile`; this is not documented.
It should also be noted that when I tried to process a 7-minute WAV (48 kHz, 16-bit) on a 24 GB card, it ran out of memory.
Running on an 8-core CPU with 96 GB of RAM is fairly stable with `torch.set_num_threads(16)` (it allocates up to 390 GB at 96 kHz, but the CPU cores are almost always at 100%, swapping onto two Optane drives).
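For reference, a minimal sketch of that CPU setup; the file name is a placeholder and the model-loading call is left as a comment, since the actual constructor is the one used in the repo's `inference.py`:

```python
# Minimal sketch of the CPU run described above. The loading call is a
# placeholder -- see inference.py in this repo for how the model is
# actually constructed from Hugging Face.
import torch
import soundfile as sf

torch.set_num_threads(16)          # 8 physical cores, 16 threads
device = torch.device("cpu")

# "input.wav" is a placeholder path.
audio, sr = sf.read("input.wav", dtype="float32", always_2d=True)  # (samples, channels)
wav = torch.from_numpy(audio).T.unsqueeze(0)                       # -> (1, channels, samples)

# model = <load Apollo as in inference.py>.to(device).eval()
# with torch.no_grad():
#     restored = model(wav.to(device))
```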
There is also no documentation on where to get the model or where to put it once you have it.
To save GPU memory, you can use segmentation for inference. As for the model, you can load it directly from Hugging Face without needing to download it manually. You can check the following part of the code for how it's loaded: https://github.com/JusperLee/Apollo/blob/main/inference.py#L18C122-L18C124.
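For anyone hitting the same OOM, below is a rough sketch of what such segmented inference could look like. This is only an illustration, not the repo's own segmentation code: it assumes `model` is the Apollo network loaded as in the linked `inference.py` line, that it maps a `(batch, channels, samples)` float tensor at the model's expected sample rate to a restored tensor of the same shape, and the overlap-add windowing here is a generic choice.

```python
# Generic overlap-add chunking to bound GPU memory. Assumes `model` maps a
# (batch, channels, samples) float tensor to a restored tensor of the same
# shape, at the sample rate the model expects.
import torch
import soundfile as sf

def chunked_inference(model, wav, sr, chunk_seconds=10.0, overlap_seconds=1.0):
    """Run `model` on overlapping fixed-length chunks and cross-fade them."""
    device = next(model.parameters()).device
    chunk = int(chunk_seconds * sr)
    hop = chunk - int(overlap_seconds * sr)
    n_fade = int(overlap_seconds * sr)

    out = torch.zeros_like(wav)              # (channels, samples)
    weight = torch.zeros(wav.shape[-1])

    # Linear cross-fade window, strictly positive so no sample gets zero weight.
    fade = torch.ones(chunk)
    ramp = torch.linspace(0.0, 1.0, n_fade + 2)[1:-1]
    fade[:n_fade] = ramp
    fade[-n_fade:] = ramp.flip(0)

    for start in range(0, wav.shape[-1], hop):
        end = min(start + chunk, wav.shape[-1])
        seg = wav[..., start:end].unsqueeze(0).to(device)   # (1, channels, seg_len)
        with torch.no_grad():
            est = model(seg).squeeze(0).cpu()               # (channels, seg_len)
        w = fade[: end - start]
        out[..., start:end] += est * w
        weight[start:end] += w
        if end == wav.shape[-1]:
            break
    return out / weight.clamp(min=1e-8)

# Hypothetical usage -- the file names are placeholders, and `model` is
# loaded exactly as shown in inference.py:
# audio, sr = sf.read("input.wav", dtype="float32", always_2d=True)  # (samples, channels)
# wav = torch.from_numpy(audio).T                                    # (channels, samples)
# restored = chunked_inference(model, wav, sr)
# sf.write("restored.wav", restored.T.numpy(), sr)
```

Only one chunk is resident on the GPU at a time, so the peak memory is set by `chunk_seconds` rather than the total file length; the cross-fade in the overlap region avoids audible seams at chunk boundaries.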