Evaluate performance #102
@cooperlab says: look at the TF MultiWorkerMirroredStrategy tutorial - https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras. We can help with this. Key questions are:
TensorFlow does autosharding, so we shouldn't have to explicitly shard the [...]. If the user has already created a [...]
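A minimal sketch of what autosharding control looks like on the `tf.data` side. Under `MultiWorkerMirroredStrategy`, input pipelines are auto-sharded across workers by default; the policy can be made explicit (or turned off) through `tf.data.Options`. The dataset contents here are illustrative only:

```python
import tensorflow as tf

# Toy input pipeline standing in for the project's real dataset.
dataset = tf.data.Dataset.from_tensor_slices(list(range(8))).batch(2)

options = tf.data.Options()
# DATA shards by element, FILE shards by input file, OFF disables
# autosharding entirely. AUTO (the default) tries FILE, then DATA.
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.DATA
)
dataset = dataset.with_options(options)

batches = [b.numpy().tolist() for b in dataset]
print(batches)  # → [[0, 1], [2, 3], [4, 5], [6, 7]]
```

If the user hands us a `tf.data.Dataset` that is already sharded, setting the policy to `OFF` via the same `with_options` call would avoid double-sharding.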
In particular, are we leveraging the graph-execution optimizations (e.g., parallelization, memory management, GPU usage) of TensorFlow and PyTorch, or do we need to do more to get that?
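On the TF side, the short answer is usually: those optimizations apply only to code that is traced into a graph. A hedged sketch, assuming a toy scalar model (the variable `w` and the loss are illustrative, not the project's actual training step): wrapping the step in `@tf.function` is what enables graph-level optimization (op fusion, parallel execution, memory planning); a plain Python function runs eagerly and skips them.

```python
import tensorflow as tf

w = tf.Variable(2.0)  # toy parameter standing in for the real model

@tf.function  # traced to a graph on first call; subsequent calls run the graph
def train_step(x):
    with tf.GradientTape() as tape:
        loss = (w * x - 1.0) ** 2
    grad = tape.gradient(loss, w)
    w.assign_sub(0.1 * grad)  # one SGD update, lr = 0.1
    return loss

loss = train_step(tf.constant(3.0))
print(float(loss))  # → 25.0, the loss before the update: (2*3 - 1)^2
```

So the audit question becomes concrete: are our training loops running inside `tf.function`-decorated (or Keras `model.fit`-driven) steps, or are they eager Python loops?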