add placement spread assertions #11

Merged
merged 1 commit into main from placement on Jun 7, 2024
Conversation

@jaypipes (Member) commented Jun 1, 2024

The `assert.placement` field of a `gdt-kube` test Spec allows a test author to specify the expected scheduling outcome for a set of Pods returned by the Kubernetes API server from the result of a `kube.get` call.

Suppose you have a Deployment resource with a `topologySpreadConstraints` rule that specifies that the Pods in the Deployment must land on different hosts:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
       - name: nginx
         image: nginx:latest
         ports:
          - containerPort: 80
      topologySpreadConstraints:
       - maxSkew: 1
         topologyKey: kubernetes.io/hostname
         whenUnsatisfiable: DoNotSchedule
         labelSelector:
           matchLabels:
             app: nginx
```

You can create a `gdt-kube` test case that verifies that your `nginx` Deployment's Pods are evenly spread across all available hosts:

```yaml
tests:
 - kube:
     get: deployments/nginx
   assert:
     placement:
       spread: kubernetes.io/hostname
```
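
The `spread` value is not limited to hostnames. Assuming your nodes carry the standard zone label, the same assertion can check spread across availability zones (which is what the trace output further below exercises):

```yaml
tests:
 - kube:
     get: deployments/nginx
   assert:
     placement:
       spread: topology.kubernetes.io/zone
```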

If there are more hosts than the `spec.replicas` in the Deployment, `gdt-kube` will ensure that each Pod landed on a unique host. If there are fewer hosts than the `spec.replicas` in the Deployment, `gdt-kube` will ensure that there is an even spread of Pods to hosts, with no host having more than one more Pod than any other. For example, 3 replicas on 2 hosts may land as a 2/1 split, but never 3/0.
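
For illustration only, here is a minimal Go sketch of that evenness rule (not `gdt-kube`'s actual implementation): count the Pods in each placement domain, including domains that received none, and require that the largest and smallest counts differ by at most one.

```go
package main

import "fmt"

// evenlySpread reports whether no domain (e.g. a hostname or zone value)
// holds more than one more Pod than any other domain. podsPerDomain must
// include every domain in the topology, with 0 for domains with no Pods.
func evenlySpread(podsPerDomain map[string]int) bool {
	if len(podsPerDomain) == 0 {
		return true
	}
	min, max := -1, 0
	for _, n := range podsPerDomain {
		if min < 0 || n < min {
			min = n
		}
		if n > max {
			max = n
		}
	}
	return max-min <= 1
}

func main() {
	// 3 replicas, 3 hosts: one Pod per host is evenly spread.
	fmt.Println(evenlySpread(map[string]int{"a": 1, "b": 1, "c": 1})) // true
	// 3 replicas, 2 hosts: a 2/1 split is within the allowed skew.
	fmt.Println(evenlySpread(map[string]int{"a": 2, "b": 1})) // true
	// 3 replicas, 3 hosts, all Pods on one host: fails the assertion.
	fmt.Println(evenlySpread(map[string]int{"a": 3, "b": 0, "c": 0})) // false
}
```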

Debug/trace output includes information on what the placement spread looked like to the `gdt-kube` placement spread asserter:

```
jaypipes@lappie:~/src/github.com/gdt-dev/kube$ go test -v -run TestPlacementSpread ./eval_test.go
=== RUN   TestPlacementSpread
=== RUN   TestPlacementSpread/placement-spread
[gdt] [placement-spread] kube: create [ns: default]
[gdt] [placement-spread] create-deployment (try 1 after 1.254µs) ok: true
[gdt] [placement-spread] using timeout of 40s (expected: false)
[gdt] [placement-spread] kube: get [ns: default]
[gdt] [placement-spread] deployment-ready (try 1 after 2.482µs) ok: false
[gdt] [placement-spread] deployment-ready (try 1 after 2.482µs) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject
[gdt] [placement-spread] kube: get [ns: default]
[gdt] [placement-spread] deployment-ready (try 2 after 307.618472ms) ok: false
[gdt] [placement-spread] deployment-ready (try 2 after 307.618472ms) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject
[gdt] [placement-spread] kube: get [ns: default]
[gdt] [placement-spread] deployment-ready (try 3 after 1.245091704s) ok: false
[gdt] [placement-spread] deployment-ready (try 3 after 1.245091704s) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject
[gdt] [placement-spread] kube: get [ns: default]
[gdt] [placement-spread] deployment-ready (try 4 after 2.496969168s) ok: false
[gdt] [placement-spread] deployment-ready (try 4 after 2.496969168s) failure: assertion failed: match field not equal: $.status.readyReplicas had different values. expected 6 but found 3
[gdt] [placement-spread] kube: get [ns: default]
[gdt] [placement-spread] deployment-ready (try 5 after 3.785007183s) ok: true
[gdt] [placement-spread] kube: get [ns: default]
[gdt] [placement-spread/assert-placement-spread] domain: topology.kubernetes.io/zone, unique nodes: 3
[gdt] [placement-spread/assert-placement-spread] domain: topology.kubernetes.io/zone, pods per node: [2 2 2]
[gdt] [placement-spread] deployment-spread-evenly-across-hosts (try 1 after 3.369µs) ok: true
[gdt] [placement-spread] kube: delete [ns: default]
[gdt] [placement-spread] delete-deployment (try 1 after 1.185µs) ok: true

--- PASS: TestPlacementSpread (4.98s)
    --- PASS: TestPlacementSpread/placement-spread (4.96s)
PASS
ok  	command-line-arguments	4.993s
```
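
Note how the `deployment-ready` assertion is retried with a growing backoff until all replicas are ready, and only then does the placement spread asserter run: it found 3 unique `topology.kubernetes.io/zone` values with 2 Pods in each, satisfying the evenness rule.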

Issue #7

@jaypipes force-pushed the placement branch 3 times, most recently from 43fea83 to d778db4 on June 1, 2024 15:48
@jaypipes merged commit d2c7dfe into main on Jun 7, 2024
3 checks passed