
Possibility to use YAML anchors lost from v1.3.0 to v1.4.0 #107

Open
dmaphy opened this issue Feb 29, 2024 · 6 comments
dmaphy commented Feb 29, 2024

Running the kyverno CLI (current version v1.11.4) on a kyverno-test.yaml that uses YAML anchors currently fails with a message like this:

Test errors:
  Path: ignore-delete-requests/kyverno-test.yaml
    Error: error converting YAML to JSON: yaml: unmarshal errors:
  line 43: key "policy" already set in map
  line 44: key "rule" already set in map
  line 45: key "namespace" already set in map
  line 48: key "policy" already set in map
  line 49: key "rule" already set in map
  line 50: key "namespace" already set in map
  line 53: key "policy" already set in map
  line 54: key "rule" already set in map
  line 55: key "namespace" already set in map
  line 58: key "policy" already set in map
  line 59: key "rule" already set in map
  line 60: key "namespace" already set in map
  line 63: key "policy" already set in map
  line 64: key "rule" already set in map
  line 65: key "namespace" already set in map
  line 68: key "policy" already set in map
  line 69: key "rule" already set in map
  line 70: key "namespace" already set in map
  line 73: key "policy" already set in map
  line 74: key "rule" already set in map
  line 75: key "namespace" already set in map
  line 78: key "policy" already set in map
  line 79: key "rule" already set in map
  line 80: key "namespace" already set in map
  line 83: key "policy" already set in map
  line 84: key "rule" already set in map
  line 85: key "namespace" already set in map
  line 88: key "policy" already set in map
  line 89: key "rule" already set in map
  line 90: key "namespace" already set in map
  line 93: key "policy" already set in map
  line 94: key "rule" already set in map
  line 95: key "namespace" already set in map
  line 98: key "policy" already set in map
  line 99: key "rule" already set in map
  line 100: key "namespace" already set in map
  line 103: key "policy" already set in map
  line 104: key "rule" already set in map
  line 105: key "namespace" already set in map
  line 108: key "policy" already set in map
  line 109: key "rule" already set in map
  line 110: key "namespace" already set in map
  line 113: key "policy" already set in map
  line 114: key "rule" already set in map

Test Summary: 0 tests passed and 0 tests failed
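For context, the failing file uses anchors with merge keys to share common fields across result entries. Below is a minimal, hypothetical sketch of that pattern (all names are invented for illustration; this is not our actual test file). Assuming the stricter duplicate-key handling introduced somewhere between v1.3.0 and v1.4.0 is what rejects the overrides, a file shaped like this would produce the repeating "key ... already set in map" triplets shown above:

```yaml
# Hypothetical kyverno-test.yaml excerpt; all names are invented for illustration.
common: &common
  policy: ignore-delete-requests   # shared defaults, pulled into each entry via the merge key
  rule: check-delete
  namespace: default
results:
  - <<: *common                    # merge the anchor, then override each field per entry
    policy: ignore-delete-requests
    rule: check-delete
    namespace: team-a
  - <<: *common
    policy: ignore-delete-requests
    rule: check-delete
    namespace: team-b
```

Each override of a key that the merge key already supplied (policy, rule, namespace) would account for one line in the error output, which matches the three-keys-per-entry pattern in the log.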

We were running the kyverno CLI in version v1.10.4 before, where this worked without any issues. Drilling down into the problem, I saw that along with the kyverno CLI update we also updated:

  • k8s.io/apimachinery from v0.27.1 to v0.29.2, and with it
  • sigs.k8s.io/yaml from v1.3.0 to v1.4.0

(Kubernetes apimachinery depends on this project; see https://github.com/kubernetes/apimachinery/blob/v0.29.1/pkg/util/yaml/decoder.go.)

What I don't understand yet is why the functionality was lost. With my rudimentary knowledge of Go, I understand that you forked the goyaml.v2 and goyaml.v3 modules but currently rely on the goyaml.v2 module. The README.md files in the corresponding subdirectories clearly state that goyaml.v2 includes support for anchors, while goyaml.v3 does not.

I could use some help understanding what actually happened here. Also, are there any concrete plans to move to goyaml.v3, and to add anchor support to goyaml.v3 then?

Thanks very much in advance for any help.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 29, 2024

dmaphy commented May 31, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 31, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 29, 2024

dmaphy commented Sep 1, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 1, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 30, 2024

dmaphy commented Nov 30, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 30, 2024