When I submit a batch of Spark jobs, they don't run as I expect.
Some executor pods get stuck, even though each node has enough free resources to run them.
This has been bothering me, and I wonder if there is something I haven't considered.
P.S. I run these Spark jobs the same way as the example, and everything works fine when I run a single job.
There are two nodes in my cluster, and I have two Spark jobs whose combined resource requests exceed the cluster's capacity (i.e., the Kubernetes cluster can't run both jobs concurrently). If I submit the second job after all executors of the first job are running, everything works well. However, some pods hang (see the screenshot below) if I submit the two jobs at the same time. I traced the logs, and it looks like the scheduler runs its node predicates (i.e., reserves resources) for both jobs at the same time, so some pods can't get enough resources.
Is this the expected behavior? Can the scheduler be configured to evaluate the second job only after the first job has been scheduled successfully? Or should we simply avoid submitting jobs at the same time?
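In the meantime, serializing submissions by hand can be automated. The sketch below is only an illustration, not part of this project: it polls `kubectl` until every executor pod of the previous job is Running before submitting the next one, so the scheduler never has to bind pods from two jobs against the same free capacity at once. The `spark-submit` arguments and the label selectors are placeholders (Spark on Kubernetes labels executor pods, e.g. with `spark-role`, but verify the labels your version actually applies).

```python
#!/usr/bin/env python3
"""Sketch: submit Spark jobs strictly one at a time, waiting until all
executor pods of the previous job are Running before submitting the
next. Selectors and spark-submit arguments below are placeholders."""
import subprocess
import time

def all_executors_running(label_selector, namespace="default"):
    """True once at least one matching pod exists and every matching
    pod reports phase Running."""
    out = subprocess.check_output(
        ["kubectl", "get", "pods", "-n", namespace,
         "-l", label_selector,
         "-o", "jsonpath={.items[*].status.phase}"]
    ).decode()
    phases = out.split()
    return bool(phases) and all(p == "Running" for p in phases)

def wait_for_executors(label_selector, timeout=600, poll=5):
    """Block until all executors are Running, or raise on timeout."""
    deadline = time.time() + timeout
    while not all_executors_running(label_selector):
        if time.time() > deadline:
            raise TimeoutError("executors never all reached Running")
        time.sleep(poll)

# Placeholder submissions: fill in your real spark-submit arguments
# and a label selector that matches each job's executor pods.
jobs = [
    (["spark-submit", "--master", "k8s://https://API_SERVER", "APP_JAR"],
     "spark-role=executor,job=first"),
    (["spark-submit", "--master", "k8s://https://API_SERVER", "APP_JAR"],
     "spark-role=executor,job=second"),
]
for cmd, selector in jobs:
    subprocess.Popen(cmd)         # fire off the driver
    wait_for_executors(selector)  # block until its executors are up
```

This only works around the race; it doesn't change how the scheduler reserves resources, so concurrent submissions from elsewhere could still collide.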