
If you work with `kubectl` regularly, you have surely seen the `kubectl.kubernetes.io/last-applied-configuration` annotation, as well as the maddening `managedFields`, like this:

```yaml
$ kubectl get pods hello -oyaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"run":"hello"},"name":"hello","namespace":"default"},"spec":{"containers":[{"image":"nginx","name":"hello","resources":{}}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always"},"status":{}}
  creationTimestamp: "2022-05-28T07:28:51Z"
  labels:
    run: hello
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:run: {}
....
    manager: kubectl
    operation: Update
    time: "2022-05-28T07:28:51Z"
....
```

These two fields bring us to the two protagonists of this article: Client-Side Apply (**CSA** below) and Server-Side Apply (**SSA** below).

Reading this article, you will learn:

* How Client-Side Apply and Server-Side Apply work at a basic level.
* The advantages of Server-Side Apply.

## `kubectl apply` in Its Original Form: Client-Side Apply

Before diving in, it is worth clarifying how `kubectl apply` is meant to work. `kubectl apply` is a declarative way of managing Kubernetes objects, and one of the most common ways we deploy and upgrade applications.

It is important to note that `kubectl apply` declares only the state of the fields it cares about, not the actual state of the whole object. What apply expresses is: the fields *I* manage should match the configuration file I applied (but I don't care about the other fields).

So what counts as a field "I" manage, and what counts as someone else's? As an example, when we want an HPA to manage an application's replica count, the [recommended Kubernetes practice](https://link.juejin.cn/?target=https%3A%2F%2Fkubernetes.io%2Fdocs%2Ftasks%2Frun-application%2Fhorizontal-pod-autoscale%2F%23migrating-deployments-and-statefulsets-to-horizontal-autoscaling) is to leave the `replicas` count out of the applied configuration file. On first deployment, Kubernetes sets `replicas` to its default of 1, and the HPA controller then scales the workload up to a suitable number of replicas.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  # replicas: 1  <- do not set replicas
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        resources: {}

```

When upgrading the application (changing the image version), we modify the `image` field in the configuration file and run `kubectl apply` again. The apply affects only the image version (since that is a field "I" manage) and does not touch the replica count set by the HPA controller. In this example, `replicas` is not a field managed by `kubectl apply`, so it is not removed when the image is updated, which avoids the replica count being reset on every application upgrade.
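As a concrete sketch of that workflow (assuming the manifest above is saved as `nginx.yaml`, a file name chosen here for illustration):

```bash
# Initial deployment: replicas is unset, so Kubernetes defaults it to 1
# and the HPA controller scales it afterwards.
kubectl apply -f nginx.yaml

# Upgrade: edit only the image tag in nginx.yaml, e.g.
#   image: nginx:latest  ->  image: nginx:1.23
# then re-apply. Only the image field is patched; the HPA-managed
# replica count is left alone.
kubectl apply -f nginx.yaml
```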
As this shows, `last-applied-configuration` embodies an ownership relationship: it records which fields are managed by `kubectl`, and it is the basis on which `kubectl apply` computes its patch request.
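If you want to see exactly what apply will diff against, `kubectl apply view-last-applied` prints the stored annotation (the Deployment name `nginx` follows the manifest above):

```bash
# Print the last-applied-configuration recorded on the Deployment;
# this is the baseline kubectl apply uses when computing its patch.
kubectl apply view-last-applied deployment/nginx
```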

## The Upgraded `kubectl apply`: Server-Side Apply

**SSA** is another declarative object management mechanism, serving essentially the same purpose as **CSA**. **SSA** was first released as alpha in Kubernetes 1.14, reached beta in 1.16 and beta2 in 1.18, and finally graduated to GA in 1.22.

As the name suggests, **SSA** moves the object-merging logic to the server side (the APIServer): the client simply submits a complete configuration file and leaves the rest to the server. To use **SSA** with `kubectl`, just add the `--server-side` flag to `kubectl apply`, like this:

```bash
$ kubectl apply --server-side=true -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-server-side-apply
data:
  a: "a"
  b: "b"
EOF

```

Once the deployment succeeds, inspecting the object shows that `last-applied-configuration` is no longer present:

```bash
$ kubectl get cm test-server-side-apply -oyaml
apiVersion: v1
data:
  a: a
  b: b
kind: ConfigMap
metadata:
  creationTimestamp: "2022-12-04T07:59:24Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        f:a: {}
        f:b: {}
    manager: kubectl
    operation: Apply
    time: "2022-12-04T07:59:24Z"
  name: test-server-side-apply
  namespace: default
  resourceVersion: "1304750"
  uid: d265df3d-b9e9-4d0f-91c2-e654f850d25a
# no more last-applied-configuration annotation
```

> TIP: If you don't see the `managedFields` field, add the `--show-managed-fields` flag: `kubectl get cm test-server-side-apply -oyaml --show-managed-fields`
**SSA** uses a field management mechanism to track changes to an object. When an apply modifies a field that happens to be owned by another manager, a conflict occurs. This prevents one manager from accidentally overwriting a value set by another. For example: if we modify the `test-server-side-apply` ConfigMap we just created via **SSA**, and manually set the manager to `test` (via the `--field-manager` flag), `kubectl` rejects our submission and reports a conflict:

```bash
$ kubectl apply --server-side=true --field-manager="test" -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-server-side-apply
data:
  a: "a"
  # changed the value of b from "b" to "c"
  b: "c"
EOF
error: Apply failed with 1 conflict: conflict with "kubectl": .data.b
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
  current managers.
* You may co-own fields by updating your manifest to match the existing
  value; in this case, you'll become the manager if the other manager(s)
  stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
```
From the message `kubectl` returns, we can see that we have three options when a conflict occurs:

* Take over the fields: re-run apply with the `--force-conflicts` flag to overwrite the other manager's values and claim ownership (see the sketch below).
* Give up the fields: remove them from our manifest, so their current managers keep them.
* Co-own the fields: update our manifest to match the existing values; we become a co-manager, and the sole manager if the others stop managing them.
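For instance, the first option might look like this; a sketch reusing the ConfigMap above, with the `--force-conflicts` flag taken from the error message:

```bash
# Force the apply through: manager "test" overwrites .data.b and
# takes ownership of the field from "kubectl".
kubectl apply --server-side=true --force-conflicts --field-manager="test" -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-server-side-apply
data:
  a: "a"
  b: "c"
EOF
```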
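How does the server decide which entries in a list count as "the same" entry? **SSA** relies on merge markers in the object's OpenAPI schema. For `Service`, the `ports` array is declared as a map-type list keyed by `port` and `protocol`; a paraphrased excerpt of that schema (not the verbatim swagger) looks like:

```yaml
# Paraphrased excerpt of the Kubernetes OpenAPI schema for ServiceSpec.ports:
ports:
  type: array
  items:
    $ref: '#/definitions/io.k8s.api.core.v1.ServicePort'
  x-kubernetes-list-type: map        # list entries behave like map values...
  x-kubernetes-list-map-keys:        # ...keyed by this field combination
  - port
  - protocol
```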
This indicates that uniqueness of entries in the `service.spec.ports` array is determined by the combination of `ports.port` and `ports.protocol`. For example, suppose we apply a `service` like this via **SSA**:
```bash
kubectl apply --server-side=true -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-cs
spec:
  ports:
  - name: 5679-8080
    port: 5679
    protocol: TCP
    targetPort: 8080
  type: ClusterIP
EOF
```
This means that "5679" + "TCP" forms the unique identifier. When we later use **SSA** apply to modify this `service`, any entry in `ports` with the same `port` + `protocol` combination is treated as the same record.
It also means that if a different manager tries to apply a `ports` entry with the same `port` + `protocol` combination, a conflict is thrown:
```bash
$ kubectl apply --server-side=true --field-manager="test" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-cs
spec:
  ports:
  - name: 5679-9999
    # the port + protocol combination here is still 5679 + TCP
    port: 5679
    protocol: TCP
    targetPort: 9999
  type: ClusterIP
EOF
error: Apply failed with 2 conflicts: conflicts with "kubectl":
- .spec.ports[port=5679,protocol="TCP"].targetPort
- .spec.ports[port=5679,protocol="TCP"].targetPort
.....
```
If that manager changes the `port` or `protocol` and applies again, the `ports` field ends up with two entries, owned by different managers:
```yaml
$ kubectl get svc my-cs -oyaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2022-12-04T14:32:24Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:ports:
          k:{"port":5679,"protocol":"TCP"}:
....
    manager: kubectl # <- first apply
    operation: Apply
    time: "2022-12-04T14:32:24Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:ports:
          k:{"port":5679,"protocol":"UDP"}:
....
        f:type: {}
    manager: test # <- second apply
    operation: Apply
    time: "2022-12-04T14:35:11Z"
  name: my-cs
  namespace: default
  resourceVersion: "1340102"
  uid: 6f7e23ab-165f-4498-8354-d3b83924faba
spec:
  clusterIP: 10.96.155.168
  clusterIPs:
  - 10.96.155.168
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  # two entries now
  - name: 5679-8080
    port: 5679
    protocol: TCP
    targetPort: 8080
  - name: 5679-9999
    port: 5679
    protocol: UDP
    targetPort: 9999
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
```
Clearly, this merge strategy does a much better job of handling collaboration among multiple managers.