
[#21] Add Nvidia CUDA GPU support and additional options, update README #24

Merged
merged 14 commits into from
Jan 3, 2023

Conversation


@chucknelson chucknelson commented Dec 29, 2022

Hopefully closes issue #21.

Continued from the work started in PR #22 by @aavetis.

I'd like to try to convince you all that, instead of two Dockerfiles and separate Makefile jobs, a single image with parameterized Makefile jobs is simpler.

Single Docker Image

I initially began doing the "two Dockerfile" option, but when building the images, I noticed that the Nvidia CUDA-based one was...actually smaller than the Python base image. After tool installation and pip install, the image sizes turned out like this on a Windows 11 machine:

REPOSITORY                                          TAG       IMAGE ID       CREATED          SIZE
xserrat/facebook-demucs                             latest    cab4714ec64d   11 seconds ago   4.84GB
xserrat/facebook-demucs-nvidia                      latest    9dd7d7483197   4 minutes ago    4.56GB

So with this in mind, I just changed the Dockerfile to the CUDA base, which supports both CPU-only operation (if it can't find an Nvidia GPU) and GPU. It "just works" in my testing - tested on a Windows 11 machine with a Ryzen 5800X and RTX 2060 Super, and on an M1 Mac Mini.
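For illustration, the single-image approach amounts to something like the sketch below (the base image tag and package list are assumptions on my part, not the exact contents of this PR):

```dockerfile
# Illustrative sketch only - base tag and packages are assumed, not the PR's exact file.
FROM nvidia/cuda:11.8.0-base-ubuntu22.04

# build-essential and python3-dev let pip compile dependencies (e.g. lameenc)
# from source on ARM64, where no prebuilt wheel may exist.
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip ffmpeg build-essential python3-dev \
    && rm -rf /var/lib/apt/lists/*

RUN pip3 install --no-cache-dir demucs
```

On a machine without an Nvidia GPU, a container from an image like this simply falls back to CPU, which is why one image can serve both cases.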

Makefile Job Parameters

To keep the number of jobs in the Makefile from growing, I added some "options"/parameters that adjust the `docker run` command. See the README for the documentation on them, but for GPU specifically, if you run `make run track=mysong.mp3 gpu=true`, it will try to use the GPU. I also added options to output MP3, to split out a single track, and to specify a different demucs model if desired.
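A minimal sketch of what such parameterized jobs can look like (the variable names and flags here are illustrative, not the PR's exact Makefile; `-n`, `-d`, and `--mp3` are real demucs CLI flags):

```makefile
# Illustrative only - option names are assumptions, not the PR's exact code.
track ?=
gpu ?= false
mp3output ?= false
model ?= htdemucs

DOCKER_RUN_OPTS :=
DEMUCS_OPTS := -n $(model)

ifeq ($(gpu),true)
DOCKER_RUN_OPTS += --gpus all
else
DEMUCS_OPTS += -d cpu
endif

ifeq ($(mp3output),true)
DEMUCS_OPTS += --mp3
endif

run:
	docker run --rm $(DOCKER_RUN_OPTS) \
		-v "$(PWD)/input:/data/input" \
		-v "$(PWD)/output:/data/output" \
		xserrat/facebook-demucs:latest \
		demucs $(DEMUCS_OPTS) "/data/input/$(track)"
```

The appeal of this shape is that every option is an ordinary `make` variable, so `make run track=mysong.mp3 gpu=true mp3output=true` composes naturally without adding new jobs.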

Interactive Mode

One last thing I added is an "interactive" job, `run-interactive`, for freely running demucs on the command line. I found this useful for experimenting with other demucs options, and I think it's simple enough that it doesn't pollute the simplicity of this project too much...I hope, anyway.
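For reference, an interactive job like this typically boils down to a `docker run` with a shell entrypoint; the mounts and image tag below are illustrative, not the PR's exact command:

```shell
# Illustrative equivalent of `make run-interactive` (mounts and tag assumed)
docker run --rm -it \
    -v "$(pwd)/input:/data/input" \
    -v "$(pwd)/output:/data/output" \
    xserrat/facebook-demucs:latest \
    /bin/bash

# Inside the container you can then run demucs directly, e.g.:
#   demucs --two-stems=drums /data/input/mysong.mp3
```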

I also just did some minor readme cleanup and such - hopefully you don't mind!

Let me know what you all think - thanks!

@chucknelson chucknelson changed the title [#21] Add Nvidia CUDA GPU support, update README [#21] Add Nvidia CUDA GPU support and additional options, update README Dec 30, 2022
@xserrat xserrat self-requested a review January 2, 2023 16:06

@xserrat xserrat left a comment


Hey @chucknelson,

Thanks for your interest and enormous contribution to the repository.

Single Docker Image

I think it's a good approach, given that the base image supports both CPU & GPU, and as you commented, it also reduces the image size, which is great. 👏🏽
In addition, it's great to know that the build-essential and python-dev packages add support for ARM64; I didn't know that! I had been trying to manually build (as commented in another issue) a dependency called lameenc that was crashing on ARM64, but it was fairly complex.

Makefile Job Parameters

I like the way each option is added; it looks simple and understandable. Good naming for the variables, too: it's clear whether they relate to Docker or to the demucs CLI.

Interactive Mode

Good idea, it's nice to have the possibility to play with the different options 👌🏽

What I tested:

  • amd64 & arm64 support
  • CPU support

Everything works like a charm!! 👏🏽 👏🏽

I've just left you a comment about defining the default goal for the Makefile. After that change is applied I'll merge this pull request, and then I'll update my branch to add the GitHub Action that automatically updates the image we have on DockerHub with your changes, so users can just download the image instead of building it manually.
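A workflow for that kind of automated publish could look roughly like the sketch below (the file name, trigger, and secret names are assumptions; the `docker/*` actions are the standard ones for multi-arch builds):

```yaml
# Hypothetical workflow sketch - triggers, tags, and secrets are assumed.
name: Publish Docker image
on:
  push:
    branches: [main]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/setup-qemu-action@v2      # enables arm64 emulation
      - uses: docker/setup-buildx-action@v2
      - uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v3
        with:
          platforms: linux/amd64,linux/arm64
          push: true
          tags: xserrat/facebook-demucs:latest
```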

Thanks again 🚀

@xserrat

xserrat commented Jan 2, 2023

This also fixes #14 and #23, since we're adding ARM64 support.

@chucknelson

@xserrat

> Thanks for your interest and enormous contribution to the repository.

No problem - thank you for having this demucs option ready to go! I just wanted an easy way to locally run demucs to get some drumless tracks without messing around with Python, CUDA, etc., and thankfully this project was called out in the Demucs readme 👍

Thanks for the feedback and looking things over, appreciate it!


@xserrat xserrat left a comment


Great job @chucknelson and thanks @aavetis for the first approach too 👏

I'm glad to see that the project has interested others to the point of improving it with their contributions 🙌

@xserrat xserrat merged commit b953149 into xserrat:main Jan 3, 2023
@chucknelson chucknelson deleted the feature/issue-21-utilize-gpu branch June 10, 2023 17:28
@renchris

Is there Apple Chip (ex. M1 Max) gpu support available?

@chucknelson

> Is there Apple Chip (ex. M1 Max) gpu support available?

Unfortunately I don't think this will be possible any time soon. It looks like Demucs isn't very active anymore and only supports GPU acceleration via CUDA (i.e., Nvidia GPUs only).

It would be interesting to see if there are any Python libraries that could be "plugged in" to Demucs to use Apple's Metal APIs, but I personally haven't looked into it.
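For what it's worth, recent PyTorch builds (1.12+) do expose an `mps` backend for Apple Silicon; whether Demucs itself runs correctly on it is a separate question. A minimal device-selection sketch (the helper name is mine, and the actual torch calls are shown only in comments so the function stays dependency-free):

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Pick the best available PyTorch device string.

    In a real script the flags would come from torch itself, e.g.:
        cuda_available = torch.cuda.is_available()
        mps_available = torch.backends.mps.is_available()  # PyTorch >= 1.12
    """
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"  # Apple Metal Performance Shaders backend
    return "cpu"


print(pick_device(cuda_available=False, mps_available=True))  # → mps
```

Even with a device string in hand, individual model operations may lack MPS kernels, so this is only the first hurdle.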

@renchris

Ah okay. Thanks for sharing. Would you be able to direct me to any resources/libraries that I can read into and see if it's feasible to try to figure out myself on the side?

@chucknelson

> Ah okay. Thanks for sharing. Would you be able to direct me to any resources/libraries that I can read into and see if it's feasible to try to figure out myself on the side?

Looks like this was discussed earlier in 2023 - you may be able to start there and follow it through to the latest info on full PyTorch support for Apple Silicon that works with demucs...

facebookresearch/demucs#432

@renchris

Thanks, I'll read and catch up on that previous discussion
