[WIP] Distributed computing with celery #128
base: main
We'll still need a way to start celery workers on individual GPUs. I bet we could do this with something like:

```
CUDA_VISIBLE_DEVICES=0 celery -A openmmtools.distributed worker -l info --concurrency=1 &
CUDA_VISIBLE_DEVICES=1 celery -A openmmtools.distributed worker -l info --concurrency=1 &
CUDA_VISIBLE_DEVICES=2 celery -A openmmtools.distributed worker -l info --concurrency=1 &
CUDA_VISIBLE_DEVICES=3 celery -A openmmtools.distributed worker -l info --concurrency=1 &
```

though @pgrinaway may have better ideas for how best to do this with multiple GPUs on a node. It looks like there's also a way to specify worker queues with the
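The per-GPU launch pattern above could also be scripted from Python instead of a shell one-liner. A minimal sketch, assuming four GPUs and the `openmmtools.distributed` app name from the commands above (the `worker_command` helper is hypothetical, not part of the package):

```python
import os
import subprocess

def worker_command(gpu_index, app="openmmtools.distributed"):
    """Build the command and environment for one celery worker pinned to one GPU.

    Each worker sees only its own GPU via CUDA_VISIBLE_DEVICES, and
    --concurrency=1 keeps it to a single task at a time.
    """
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_index))
    cmd = ["celery", "-A", app, "worker", "-l", "info", "--concurrency=1"]
    return cmd, env

if __name__ == "__main__":
    # Launch one worker per GPU (assuming 4 GPUs, as in the commands above).
    procs = []
    for gpu in range(4):
        cmd, env = worker_command(gpu)
        procs.append(subprocess.Popen(cmd, env=env))
    # Block until the workers exit (e.g. on shutdown of the broker).
    for p in procs:
        p.wait()
```

This keeps the same semantics as the backgrounded shell commands, but makes it easier to later wait on or terminate all workers from one place.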
Thanks! I'll take a look at this tomorrow.
This is still very much test code for experimenting. I think the next steps are:
@andrrizzi : Here's the very basic test code I was playing with, in case you find it useful. This doesn't necessarily have to be merged, but might at least illustrate something along the lines of what I was thinking.
I haven't tested this on the cluster yet.