
Commit fe4d2bc

OSDOCS-13835: Docs for Kueue gang scheduling / all-or-nothing
1 parent 6266219 commit fe4d2bc

File tree

3 files changed: +67 -0 lines changed


_topic_maps/_topic_map.yml

Lines changed: 2 additions & 0 deletions
@@ -55,6 +55,8 @@ Topics:
   File: configuring-quotas
 - Name: Using cohorts
   File: using-cohorts
+- Name: Gang scheduling
+  File: gangscheduling
 ---
 Name: Support
 Dir: support

configure/gangscheduling.adoc

Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="gangscheduling"]
= Gang scheduling
:context: gangscheduling

toc::[]

Gang scheduling ensures that a group, or _gang_, of related jobs starts only when all of the required resources are available. {product-title} enables gang scheduling by suspending jobs until the {platform} cluster can guarantee the capacity to start and execute all of the related jobs in the gang together. This is also known as _all-or-nothing_ scheduling.

Gang scheduling is important when you are working with expensive, limited resources, such as GPUs. It prevents jobs from claiming GPUs without using them, which can improve GPU utilization and reduce running costs. Gang scheduling can also help to prevent issues such as resource fragmentation and deadlocking.

include::modules/configuring-gangscheduling.adoc[leveloffset=+1]

////
// use case - deep learning
One classic example is deep learning workloads. Deep learning frameworks, such as TensorFlow and PyTorch, require all of the workers to be running during the training process.

In this scenario, when you deploy training workloads, all of the components must be scheduled and deployed to ensure that training works as expected.

Gang scheduling is a critical feature for deep learning workloads because most frameworks require all workers to be running before the training process can start. All-or-nothing scheduling helps to avoid resource inefficiency and scheduling deadlocks.
////
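Editor's note, for illustration only (not part of this commit): the following sketch shows, under stated assumptions, how a batch job is typically handed to Kueue. The job carries the upstream `kueue.x-k8s.io/queue-name` label and stays suspended (`spec.suspend: true`) until Kueue can reserve capacity for all of its pods, which is the suspension mechanism that the gang scheduling behavior described above builds on. The `team-a` namespace, the `user-queue` LocalQueue, and the container image are hypothetical.

[source,yaml]
----
# Hypothetical example (not part of this commit): a Job submitted to a Kueue-managed queue.
# Assumes a LocalQueue named "user-queue" already exists in the "team-a" namespace.
apiVersion: batch/v1
kind: Job
metadata:
  name: sample-training-job
  namespace: team-a
  labels:
    kueue.x-k8s.io/queue-name: user-queue # hands the job to Kueue for admission
spec:
  suspend: true # Kueue keeps the job suspended until capacity for all of its pods can be reserved
  parallelism: 4
  completions: 4
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: registry.example.com/training:latest # placeholder image
        resources:
          requests:
            nvidia.com/gpu: "1"
          limits:
            nvidia.com/gpu: "1"
----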
modules/configuring-gangscheduling.adoc

Lines changed: 43 additions & 0 deletions
@@ -0,0 +1,43 @@
// Module included in the following assemblies:
//
// * configure/gangscheduling.adoc

:_mod-docs-content-type: REFERENCE
[id="configuring-gangscheduling_{context}"]
= Configuring gang scheduling

You can configure gang scheduling by modifying the `gangScheduling` spec in the `Kueue` custom resource (CR).

.Example `Kueue` CR with gang scheduling configured
[source,yaml]
----
apiVersion: kueue.openshift.io/v1
kind: Kueue
metadata:
  name: cluster
  labels:
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/name: kueue-operator
  namespace: openshift-kueue-operator
spec:
  config:
    gangScheduling:
      policy: ByWorkload # <1>
      byWorkload:
        admission: Parallel # <2>
# ...
----
<1> You can set the `policy` value to enable or disable gang scheduling. The possible values are `ByWorkload`, `None`, or empty (`""`).
+
ByWorkload:: When the `policy` value is set to `ByWorkload`, each job is processed and considered for admission as a single unit. If the job does not become ready within the specified time, the entire job is evicted and retried at a later time.
+
None:: When the `policy` value is set to `None`, gang scheduling is disabled.
+
Empty:: When the `policy` value is empty or set to `""`, the {product-title} Operator determines settings for gang scheduling. Currently, gang scheduling is disabled by default.
<2> If the `policy` value is set to `ByWorkload`, you must configure job admission settings. The possible values for the `admission` spec are `Parallel`, `Sequential`, or empty (`""`).
+
Parallel:: When the `admission` value is set to `Parallel`, pods from any job can be admitted at any time. This can cause a deadlock, where jobs compete for cluster capacity and successfully scheduled pods from one job prevent pods from another job from being scheduled.
+
Sequential:: When the `admission` value is set to `Sequential`, only pods from the job that is currently being processed are admitted. After all of the pods from the current job have been admitted and are ready, {product-title} processes the next job. Sequential processing can slow down admission when the cluster has sufficient capacity for multiple jobs, but it provides a higher likelihood that all of the pods for a job are scheduled together successfully.
+
Empty:: When the `admission` value is empty or set to `""`, the {product-title} Operator determines job admission settings. Currently, the `admission` value is set to `Parallel` by default.
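Editor's note, for illustration only (not part of this commit): the following sketch shows the same `Kueue` CR from the example above with sequential admission configured, the variant you might choose when scheduling reliability for each gang matters more than admission throughput.

[source,yaml]
----
# Hypothetical variant of the Kueue CR shown above, switching admission to Sequential.
apiVersion: kueue.openshift.io/v1
kind: Kueue
metadata:
  name: cluster
  namespace: openshift-kueue-operator
spec:
  config:
    gangScheduling:
      policy: ByWorkload
      byWorkload:
        admission: Sequential # admit one workload at a time to reduce the risk of admission deadlocks
# ...
----

You could apply such a change with a standard command such as `oc apply -f kueue-sequential.yaml`, where the file name is an assumption.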
