HIVE-2302:update procedure of cluster adoption #2805
base: master
Conversation
@jianping-shu: This pull request references HIVE-2302, which is a valid Jira issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: jianping-shu. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing …
2uasimojo left a comment:
Thanks for this @jianping-shu.
I think there is another main scenario we need to account for: adoption of a cluster that was installed "manually" via openshift-install. In this case the metadata.json file was produced by that install process, and we should include instructions for how to package it up properly into a Secret (probably some variant of oc create secret ... --from-file=/path/to/metadata.json), which should then get referenced by this field.
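For that manual-install scenario, a minimal sketch of packaging the installer-produced metadata.json into a Secret might look like the following; the secret name and namespace are illustrative assumptions, not names fixed by this PR:

```shell
# Package an installer-produced metadata.json into an Opaque Secret.
# "my-cluster-metadata-json" and "my-cluster-namespace" are hypothetical names.
oc create secret generic my-cluster-metadata-json \
  --namespace my-cluster-namespace \
  --from-file=metadata.json=/path/to/metadata.json
```

The resulting Secret would then be referenced from the ClusterDeployment via the `metadataJSONSecretRef` field discussed in this PR.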
docs/using-hive.md
Outdated
Context: the line under review follows `name: pull-secret` at the end of the example in docs/using-hive.md:

> Note: my-gcp-cluster-metadata-json is the optional and shall be applicable only if the cluster is already managed by hive and my-gcp-cluster-metadata-json already exists in the original ClusterDeployment yaml.
Suggested change:

- Note: my-gcp-cluster-metadata-json is the optional and shall be applicable only if the cluster is already managed by hive and my-gcp-cluster-metadata-json already exists in the original ClusterDeployment yaml.
+ Note: my-gcp-cluster-metadata-json is optional and shall be applicable only if the cluster is already managed by hive and my-gcp-cluster-metadata-json already exists in the original ClusterDeployment yaml.
Will update.
docs/using-hive.md
Outdated
Context (docs/using-hive.md): the line under review was changed from:

- If the cluster you are looking to adopt is on AWS and uses a shared VPC, you will also need to include the name of the hosted zone role in `spec.clusterMetadata.platform.aws.hostedZoneRole`.
+ If my-aws-cluster-metadata-json doesn't exist and the cluster you are looking to adopt is on AWS and uses a shared VPC, you will also need to include the name of the hosted zone role in `spec.clusterMetadata.platform.aws.hostedZoneRole`.
I would suggest phrasing this more like
Suggested change:

- If my-aws-cluster-metadata-json doesn't exist and the cluster you are looking to adopt is on AWS and uses a shared VPC, you will also need to include the name of the hosted zone role in `spec.clusterMetadata.platform.aws.hostedZoneRole`.
+ If not specifying a `metadataJSONSecretRef` and the cluster you are looking to adopt is on AWS and uses a shared VPC, you will also need to include the name of the hosted zone role in `spec.clusterMetadata.platform.aws.hostedZoneRole`.
This also makes these sentences less awkward when they're describing the other platforms (e.g. we don't have the substring gcp in a sentence that's talking about an AWS cluster).
Similar below.
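To illustrate the reworded sentence, a ClusterDeployment adopting a shared-VPC AWS cluster without a `metadataJSONSecretRef` might carry the hosted zone role as in the fragment below; all names and values are hypothetical placeholders:

```yaml
spec:
  clusterMetadata:
    # Adoption fields; values here are illustrative only.
    clusterID: 00000000-0000-0000-0000-000000000000
    infraID: my-aws-cluster-x7k2p
    adminKubeconfigSecretRef:
      name: my-aws-cluster-admin-kubeconfig
    platform:
      aws:
        # Only needed for shared-VPC clusters when no metadata.json Secret is given.
        hostedZoneRole: my-shared-vpc-hosted-zone-role
```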
OK. Will update.
Is it a better option to have the hive controller do the update in this case? I am worried that the documentation would need to handle all the details of the different platforms (AWS/Azure/GCP), different OCP versions, whether a shared VPC is used, and so on.
If I'm understanding your question correctly, you're suggesting we would prefer to put all of those details into the CD and have hive-controllers generate the metadata.json secret from there? If that's what you're suggesting, the answer is unequivocally "no". The whole benefit we get from metadata.json being opaquely carried through from installer to destroyer is that we don't have to worry about those individual details -- they are whatever they were at install time. It would be much harder to describe to the user how they should unpack that file and parlay its contents into the bespoke ClusterDeployment fields we've grown over the years than to tell them to pass it through blindly in a Secret.
In the HIVE-2302 test case, we did test upgrading hive from an old version to the latest one with the HIVE-2302 function; the hive controller retrofits metadataJSONSecretRef on its own.
Okay, I understand you now. Definitely if metadata.json is available in any form -- either from the previous incarnation of the ClusterDeployment or from the installer run -- we should instruct the user to prefer that. If they can avoid both introspecting the actual json and mucking with the bespoke CD platform fields, that's the best option. But when it's not available, you're asking whether we should instruct them to build it as metadata.json or to populate the CD fields and let the retrofit code do it for them. And here I'm torn. I had a vague hope that the retrofit code could be temporary -- that we could get rid of it once we're confident no "legacy" clusters exist in the wild. But I'm not sure that's realistic: when would we have that confidence? So maybe this isn't a factor to consider. And in that case, I can't decide which is less awkward. It probably depends on the background of the user. I think I'm leaning toward doing as you suggest: instruct them to populate the CD fields and let the retrofit code generate the metadata.json. If you make the instructions just link into our API fields with a generic statement like, "If metadata.json is not available, be sure to populate any of these fields as appropriate..." then I think that's easier than trying to teach them how to format metadata.json.
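The "populate the CD fields and let the retrofit code do it" path might look like the sketch below; the field values are hypothetical, and the exact set of platform fields depends on the platform and the original install:

```yaml
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
spec:
  clusterMetadata:
    # Hypothetical values for an adopted cluster.
    clusterID: 00000000-0000-0000-0000-000000000000
    infraID: my-cluster-abc12
    adminKubeconfigSecretRef:
      name: my-cluster-admin-kubeconfig
    # No metadataJSONSecretRef here: per the discussion above, hive-controllers'
    # retrofit code would generate the metadata.json Secret from these fields.
```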
Force-pushed 97a7b6d to a43c4ef (compare).
Force-pushed a43c4ef to 2def9dd (compare).
@2uasimojo The document was updated to utilize the hiveutil `--adopt-metadata-json` function, PTAL.
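For readers following along, a hedged sketch of what such a hiveutil invocation could look like; aside from `--adopt-metadata-json` (named in this PR), the flag names and values below are assumptions based on hiveutil's existing adoption flags and should be verified against the updated docs:

```shell
# Hypothetical sketch; verify flag names against your hiveutil version.
hiveutil create-cluster my-adopted-cluster \
  --adopt \
  --adopt-admin-kubeconfig=/path/to/admin-kubeconfig \
  --adopt-infra-id=my-cluster-abc12 \
  --adopt-cluster-id=00000000-0000-0000-0000-000000000000 \
  --adopt-metadata-json=/path/to/metadata.json
```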
@jianping-shu: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
HIVE-2302 introduced metadataJSONSecret. If a cluster is migrated from hive 1 to hive 2, the metadataJSONSecret is expected to be copied as well, so the cluster adoption procedure is updated accordingly.