Kops - Migrate remaining upgrade jobs to k8s infra prow clusters #32885
Conversation
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: hakman, rifelpet

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
@rifelpet: Updated the
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
btw, we need to eliminate the dependency on the kops-ci bucket as that is no longer being updated after 1st of Aug. k8s-staging-kops is the replacement bucket and we do have some jobs fetching kops from there
I'll open a PR updating the marker locations |
@upodroid which job publishes the latest-ci.txt version marker to k8s-staging-kops? I see the marker files exist, but I can't find the job that publishes them. I'd like to use it as a reference for migrating the release branch version markers to k8s-staging-kops.
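For context, publishing a version marker like this usually amounts to writing the built version string to a small text file and copying it into the bucket. A hedged sketch, assuming gsutil credentials are already in place; the object path under k8s-staging-kops and the use of `kops version` here are assumptions, not details confirmed in this thread:

```shell
# Hypothetical sketch of a version-marker publish step.
# The marker object path below is an assumption.
VERSION="$(./kops version | awk '{print $NF}')"  # extract the version string
echo "${VERSION}" > latest-ci.txt

# no-cache headers so consumers always see the newest marker
gsutil -h "Cache-Control:private, max-age=0, no-transform" \
  cp latest-ci.txt gs://k8s-staging-kops/kops/ci/latest-ci.txt
```

Whichever job does publish the markers presumably runs something along these lines as a postsubmit step.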
Ah I see. This job doesn't create a kops cluster, so the only access it needs is to the k8s-staging-kops bucket. Any thoughts on how a job can both create AWS kops clusters and have write access to GCS? This is what blocks the presubmits. If that won't be possible, then we'd need the kops clusters to run in the same cloud provider as the artifact bucket: either move the jobs to create GCP clusters, or create an S3 artifacts bucket. Moving the artifacts to S3 shouldn't be a problem because any consumers of the bucket will access it publicly and anonymously.
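The anonymous-access point above is what makes an S3 move low-risk: a public bucket can be read without any AWS credentials at all. A hedged sketch (the bucket names and object paths are illustrative assumptions):

```shell
# Hypothetical: read from a public S3 artifacts bucket with no AWS
# credentials; --no-sign-request skips credential lookup entirely.
aws s3 cp s3://example-kops-artifacts/release/latest-ci.txt . --no-sign-request

# The GCS equivalent for a public bucket is a plain unauthenticated HTTPS GET.
curl -fsSL https://storage.googleapis.com/k8s-staging-kops/kops/ci/latest-ci.txt
```

So consumers would be unaffected by the bucket's cloud provider; only the publishing job's write credentials need to match it.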
This migrates the remaining kops upgrade jobs to the k8s infra prow clusters. These jobs will fail because they use the k8s 1.24 / 1.25 / 1.26 e2e.test binaries, which are built with an aws-sdk-go version too old to authenticate on the EKS prow clusters. Once we see the failures we can assess whether we can ignore specific test cases or whether the whole job won't work, in which case we'll just delete these jobs.

Refs: kubernetes/kops#16637 kubernetes/k8s.io#5127 kubernetes/k8s.io#2625
/cc @upodroid @hakman