Porch's Different Runtimes' Inconsistencies #854

Open

Catalin-Stratulat-Ericsson opened this issue Feb 13, 2025 · 0 comments

This issue comes out of the investigation of issue #827.
That investigation found that sub-packages were not the root cause of the package render breakdown; rather, the cause was inconsistencies between Porch's gRPC function runner runtime and its "built-in" runtime.
More detail on Porch's different runtimes can be found around the 50-minute mark of the video of Istvan's Porch "runtime" explanation.

To summarize what occurred: the package used in #827 uses the apply-replacements kpt function, which is one of only three functions currently handled by the built-in runtime, as can be seen here. Because of this, the package was rendered locally by the Porch server's built-in runtime instead of through the gRPC function runner, and the built-in runtime is what returns the error unable to find field "metadata.annotations.sub1-annotation1" in replacement target.
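
For reference, a package invokes the function from its Kptfile pipeline roughly like this (a minimal sketch; the package name and the function image tag are assumptions, not taken from the original package):

apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: sub1
pipeline:
  mutators:
    - image: gcr.io/kpt-fn/apply-replacements:v0.1.1
      configPath: apply-replacements.yaml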

Originally we believed this was a sub-package issue, since it only occurs the moment rendering moves up from the lowest level of the package (sub1sub1) to the level above (sub1). However, the issue persisted even with a Porch package that has no sub-packages. The example package hierarchy:

main--+--sub1--+--sub1sub1
      |        |
      |        +--sub1sub2
      |
      +--sub2--+--sub2sub1
               |
               +--sub2sub2

The kpt documentation on function rendering states that rendering works bottom-up: in our sub-package example, sub1sub1 is rendered first and main last. On top of that, a package's functions are also applied to the resources of its sub-packages: sub1's apply-replacements function applies to the contents of its own package AND of any sub-packages, in this case both sub1sub1 and sub1sub2. This render logic is where the problem occurred.

Below is an apply-replacements configuration file. It has two main parts: a source, specifying where to get the data to be inserted, and targets, specifying which resources to insert that data into and where. The error occurred because, as you can see, every Deployment is selected as a target. When sub1 was being rendered, the function gathered all Deployments in its package and its sub-packages (three in total: one in each of sub1, sub1sub1 and sub1sub2) and then looked for the field "metadata.annotations.sub1-annotation1", which exists only in sub1's Deployment and not in the sub1sub1 or sub1sub2 Deployments. This leads to the error unable to find field "metadata.annotations.sub1-annotation1" in replacement target, the replacement targets being the sub1sub1 and sub1sub2 Deployments.

apiVersion: fn.kpt.dev/v1alpha1
kind: ApplyReplacements
metadata:
  name: replace-annotation1
  annotations:
    config.kubernetes.io/local-config: "true"
replacements:
- source:
    kind: ConfigMap
    name: sub1-cm-deployment
    fieldPath: data.annotation1
  targets:
  - select:
      kind: Deployment
    fieldPaths:
    - metadata.annotations.sub1-annotation1
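
For context, the source this replacement reads from would be a ConfigMap along these lines (reconstructed for illustration, since the original is not shown; the data value is an assumption):

apiVersion: v1
kind: ConfigMap
metadata:
  name: sub1-cm-deployment
  annotations:
    config.kubernetes.io/local-config: "true"
data:
  annotation1: sub1-replaced-annotation-1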

But this error can occur even in a Porch package without sub-packages. For example, take the same apply-replacements file above and imagine it in a Porch package with the two Deployment files below. The selector states that every resource of kind Deployment should have data injected at "metadata.annotations.sub1-annotation1". The first Deployment does have that fieldPath, but the second does not; it only has "metadata.annotations.main-annotation1" and "metadata.annotations.main-annotation2". This causes the same error as in our sub-package example. With "kpt fn render" or the Porch gRPC runtime, however, no error is surfaced: the replacement is silently applied where "metadata.annotations.sub1-annotation1" exists and the other Deployments are ignored, giving the user a potentially false sense of security that everything passed.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sub1-deployment
  annotations:
    sub1-annotation1: sub1-local-annotation-1
    sub1-annotation2: sub1-local-annotation-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: sub1-deployment
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sub1-deployment
    spec:
      containers:
      - name: nginx
        image: nginx:latest
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: main-deployment
  annotations:
    main-annotation1: main-local-annotation-1
    main-annotation2: main-local-annotation-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: main-deployment
  template:
    metadata:
      labels:
        app.kubernetes.io/name: main-deployment
    spec:
      containers:
      - name: nginx
        image: nginx:latest
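
For what it's worth, one way to avoid the mismatch (a sketch, not necessarily the desired behaviour) is to narrow the target selector so the replacement only touches resources that actually carry the field, e.g. selecting the Deployment by name in the targets stanza of the config above:

  targets:
  - select:
      kind: Deployment
      name: sub1-deployment
    fieldPaths:
    - metadata.annotations.sub1-annotation1

Alternatively, replacement targets support an options.create setting that creates the field if it is missing rather than erroring, though that changes the semantics for every matched target.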

Now the main question is which behaviour is right and which is wrong: should the error be shown, or is it being wrongly hidden away?

Since the apply-replacements kpt function is based on the kustomize replacements feature detailed here, I created an example kustomize package using replacements to see what kustomize does in the same situation.

The package had the following structure:

testing-kustomize/
├── main
│   ├── config-map.yaml
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   └── second-deployment.yaml
└── overlays
    ├── sub1
    │   ├── another-config-map.yaml
    │   └── kustomization.yaml
    └── sub2
        ├── config-map.yaml
        └── kustomization.yaml

with two Deployment files in the base and two overlays which attempt to override the contents of main with the data in their respective ConfigMaps. The deployment.yaml had only two annotations:

    main-annotation1: no-value-1
    main-annotation2: no-value-2

whilst the second-deployment.yaml had:

    main-annotation3: no-value-1
    main-annotation4: no-value-2
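
For completeness, the base kustomization (main/kustomization.yaml) would simply list the resources shown in the tree above (reconstructed for illustration, since the original is not shown):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - config-map.yaml
  - deployment.yaml
  - second-deployment.yaml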

The kustomization.yaml in overlay sub1 contained the following:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../main
  - another-config-map.yaml  # Include the ConfigMap resource

replacements:
  # First replacement
  - source:
      kind: ConfigMap
      name: sub1-config-map-deployment
      fieldPath: data.annotation1
    targets:
      - select:
          kind: Deployment
        fieldPaths:
          - metadata.annotations.main-annotation1

  # Second replacement
  - source:
      kind: ConfigMap
      name: sub1-config-map-deployment
      fieldPath: data.annotation2
    targets:
      - select:
          kind: Deployment
        fieldPaths:
          - metadata.annotations.main-annotation2
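
The overlay's another-config-map.yaml is not shown above; reconstructed for illustration (the data values are assumptions), it would look like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: sub1-config-map-deployment
data:
  annotation1: sub1-overlay-annotation-1
  annotation2: sub1-overlay-annotation-2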

When attempting to apply the kustomization, we were greeted with the same error message:

catalin@E-5CG2302VDX:~/testing-kustomize/overlays/sub1$ kubectl apply -k .
error: unable to find field "metadata.annotations.main-annotation1" in replacement target

So the error is a valid response to this situation, which raises the main question: why do the Porch gRPC runtime and "kpt fn render" not report it to the user, while the built-in runtime does?

When debugging through the built-in runtime, it was found that this line is the one that returns the error.

One possible explanation is that the gRPC runtime uses the function runner to create an apply-replacements pod with the binary for the apply-replacements kpt function loaded on it, executed via the wrapper-server here. Perhaps the error is slipping through there instead of being caught.

Further investigation needs to be carried out to find out exactly why both "kpt fn render" and the Porch gRPC function-runner runtime fail to report this error from the apply-replacements function, and whether the same happens with other functions too.
