[request] Do we have plan to merge Kubernetes part to kubeflow/pytorch-operator? #117
Hi @gaocegege, thanks for bringing this up! We've been looking into adding support for TorchElastic parameters to the existing Kubeflow pytorch operators and are planning to bundle TorchElastic in the pytorch docker image for the pytorch 1.7 release - which shouldn't be a blocker for the Kubeflow work but certainly makes things smoother for users. We welcome collaboration and can certainly use help from someone who is knowledgeable about Kubeflow in making the pytorch operators support elastic parameters.
@kiukchung Thanks for your reply! We would be glad to support it in the PyTorch operator, and we can help implement or propose it in the Kubeflow community.
That would be awesome! I think we can approach this from both sides: you from the Kubeflow side and us from the PyTorch/TorchElastic side. We can help from the torchelastic and pytorch side by reviewing/expediting relevant PRs and prioritizing any changes required to TorchElastic or PyTorch (in general, Facebook-side) user-facing APIs and documentation to make the work on your end smoother (e.g., bundling torchelastic in the pytorch docker image). Let us know if you have any specific asks. Looking forward to seeing elastic support added to the Kubeflow pytorch operators! cc) @drdarshan, @chauhang, @Tierex, @jspisak
Yeah, we are working through some action items these days and will probably have some specific asks once we are ready. Thanks for your help!
+1 @gaocegege - thanks for reaching out on this! Kubeflow is a high-priority platform for us, so anything we can do to make it easier for users to leverage advanced features would be great, especially if we can collaborate to make it happen. cc @chauhang
@gaocegege @kiukchung I can write a proposal on it. It needs some interface changes. We can have some discussion by the end of the month.
Sounds good @Jeffwan, looking forward to the proposal! Keeping this issue open to continue the conversation on the same thread...
@Jeffwan Thanks! Looking forward to it.
Ping @Jeffwan - any progress on this issue?
@gaocegege did you have some thoughts on how we can merge |
cc @Jeffwan |
That's good news. The operator will only be used for orchestration; it's the user's responsibility to wrap the entrypoint. On the operator side, I think we can determine whether a job is a torch elastic job by checking args or envs. The common logic will be similar; adding/removing pods will be allowed for elastic jobs (it's not allowed for general torch jobs). It's a little bit delayed; I was busy with a testing-infra issue in the kubeflow org. Currently the presubmit jobs block all the PRs, and it will take some time to get all the issues fixed. What's the release schedule for torchelastic 0.2.1 and the pytorch version that bundles torchelastic?
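To make the detection idea above concrete, here is a minimal sketch in Go of how an operator could inspect a worker container's args/envs for elastic launcher hints. The specific names checked (the --rdzv_backend launcher flag and the PET_RDZV_ env prefix) are illustrative assumptions, not a convention the operators define.

package detect

import (
    "strings"

    corev1 "k8s.io/api/core/v1"
)

// isElasticJob sketches the idea above: treat the job as elastic if the
// TorchElastic launcher appears to be configured via container args or envs.
func isElasticJob(podSpec corev1.PodSpec) bool {
    for _, c := range podSpec.Containers {
        for _, arg := range c.Args {
            // rendezvous backend flag passed to the elastic launcher
            if strings.Contains(arg, "--rdzv_backend") {
                return true
            }
        }
        for _, env := range c.Env {
            // illustrative: an env-var convention for the same setting
            if strings.HasPrefix(env.Name, "PET_RDZV_") {
                return true
            }
        }
    }
    return false
}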
Sorry for the late reply, @Jeffwan - what do you recommend as the next steps? cc) @chauhang, @jspisak, @BorisValkov5
Awesome! I think it LGTM.
I looked through the code base of elastic and the elasticjob operator, and I propose the following design in pytorch-operator:

// PyTorchJobSpec is a desired state description of the PyTorchJob.
type PyTorchJobSpec struct {
+   // A switch to enable dynamic (elastic) workers
+   EnableDynamicWorker *bool `json:"enableDynamicWorker,omitempty"`
+   // Lower bound on the number of workers for an elastic job
+   MinReplicas *int32 `json:"minReplicas,omitempty"`
+   // Upper bound on the number of workers for an elastic job
+   MaxReplicas *int32 `json:"maxReplicas,omitempty"`
    // Specifies the duration (in seconds) since startTime during which the job can remain active
    // before it is terminated. Must be a positive integer.
    // This setting applies only to pods where restartPolicy is OnFailure or Always.
    // +optional
    ActiveDeadlineSeconds *int64 `json:"activeDeadlineSeconds,omitempty"`
    // Number of retries before marking this job as failed.
    // +optional
    BackoffLimit *int32 `json:"backoffLimit,omitempty"`
    // Defines the policy for cleaning up pods after the PyTorchJob completes.
    // Defaults to None.
    CleanPodPolicy *common.CleanPodPolicy `json:"cleanPodPolicy,omitempty"`
    // Defines the TTL for cleaning up finished PyTorchJobs (temporary
    // before Kubernetes adds the cleanup controller).
    // It may take extra ReconcilePeriod seconds for the cleanup, since
    // reconcile gets called periodically.
    // Defaults to infinite.
    TTLSecondsAfterFinished *int32 `json:"ttlSecondsAfterFinished,omitempty"`
    // A map of PyTorchReplicaType (type) to ReplicaSpec (value). Specifies the PyTorch cluster configuration.
    // For example,
    //   {
    //     "Master": PyTorchReplicaSpec,
    //     "Worker": PyTorchReplicaSpec,
    //   }
    PyTorchReplicaSpecs map[PyTorchReplicaType]*common.ReplicaSpec `json:"pytorchReplicaSpecs"`
}

The switch EnableDynamicWorker turns dynamic (elastic) workers on or off. The other two fields, MinReplicas and MaxReplicas, bound how many workers an elastic job can scale between. The only question is whether we should add these fields and logic in PyTorchJob v1 or in v2alpha1. /cc @andreyvelich @johnugeorge @Jeffwan
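As a side note on how the two bounds could be interpreted, here is a minimal, self-contained sketch of a controller clamping the requested worker count to the proposed range. The function and its defaulting rules (nil bound means "no bound") are assumptions for illustration, not part of the proposal itself.

package main

import "fmt"

// desiredWorkers clamps a requested worker count to the proposed
// [MinReplicas, MaxReplicas] range when dynamic workers are enabled.
func desiredWorkers(requested int32, enableDynamicWorker *bool, minReplicas, maxReplicas *int32) int32 {
    if enableDynamicWorker == nil || !*enableDynamicWorker {
        // Static job: keep the replica count exactly as requested.
        return requested
    }
    if minReplicas != nil && requested < *minReplicas {
        requested = *minReplicas
    }
    if maxReplicas != nil && requested > *maxReplicas {
        requested = *maxReplicas
    }
    return requested
}

func main() {
    enable := true
    minR, maxR := int32(2), int32(8)
    fmt.Println(desiredWorkers(12, &enable, &minR, &maxR)) // prints 8
    fmt.Println(desiredWorkers(12, nil, &minR, &maxR))     // prints 12 (elastic mode off)
}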
Thanks @gaocegege.
From my perspective, all new APIs should follow the Kubernetes version policy structure. Currently, we have only the v1 version of the API: https://github.com/kubeflow/pytorch-operator/tree/master/pkg/apis/pytorch.
I think it should be a pointer since it can be nil.
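To illustrate the pointer-vs-value point: with *bool the API can distinguish an omitted field (nil) from an explicit false, which a plain bool cannot. A small stand-alone example, not taken from the operator code:

package main

import "fmt"

// describe shows the three states a *bool field can express.
func describe(enableDynamicWorker *bool) string {
    switch {
    case enableDynamicWorker == nil:
        return "unset: fall back to the default behavior"
    case *enableDynamicWorker:
        return "explicitly enabled"
    default:
        return "explicitly disabled"
    }
}

func main() {
    off := false
    fmt.Println(describe(nil))  // unset: fall back to the default behavior
    fmt.Println(describe(&off)) // explicitly disabled
}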
cc) @drdarshan |
Elastic has been merged into PyTorch 1.9, which is now released, so I think we can support it in pytorch-operator now.
FYI, there's a proposal kubeflow/community#522 to support Torch Elastic in Kubeflow, and we implemented the MVP here: kubeflow/training-operator#1453. It is tested with the echo and imagenet examples (with some changes to support CPU training). The c10d and etcd rendezvous backends are both supported and tested. With c10d, worker-0 is not fault-tolerant, but it is static and will not be re-ranked to another new worker; with the etcd backend, all workers are fault-tolerant and elastic. There were some issues, but they have already been fixed in the upstream master branch:
It should work well once there is a new release that includes those patches. If you are interested, please comment on the PR. Thanks! cc @Jeffwan
HPA support has been added for PyTorch Elastic in the training operator in kubeflow/training-operator#1701.
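For readers new to the HPA side, here is a rough sketch of what an autoscaler object targeting an elastic PyTorchJob could look like, built with the standard autoscaling/v2 types. The names, the metric-free spec, and the assumption that PyTorchJob exposes a scale subresource are illustrative, not details taken from the linked PR.

package main

import (
    "fmt"

    autoscalingv2 "k8s.io/api/autoscaling/v2"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// examplePyTorchHPA builds an HPA that scales a hypothetical elastic
// PyTorchJob named "elastic-example" between 2 and 8 workers.
func examplePyTorchHPA() *autoscalingv2.HorizontalPodAutoscaler {
    minReplicas := int32(2)
    return &autoscalingv2.HorizontalPodAutoscaler{
        ObjectMeta: metav1.ObjectMeta{Name: "elastic-example", Namespace: "default"},
        Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
            // Assumes the elastic PyTorchJob exposes a scale subresource
            // that the HPA controller can drive.
            ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
                APIVersion: "kubeflow.org/v1",
                Kind:       "PyTorchJob",
                Name:       "elastic-example",
            },
            MinReplicas: &minReplicas,
            MaxReplicas: 8,
        },
    }
}

func main() {
    hpa := examplePyTorchHPA()
    fmt.Printf("scale %s between %d and %d workers\n",
        hpa.Spec.ScaleTargetRef.Name, *hpa.Spec.MinReplicas, hpa.Spec.MaxReplicas)
}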
cc @Jeffwan
I think pytorch-operator and the elasticjob operator share a similar scope. I am wondering if we can collaborate to support PyTorch and PyTorch Elastic well.