
Conversation

@jianping-shu (Contributor) commented Dec 5, 2025

HIVE-2302 introduced metadataJSONSecret.
If a cluster is migrated from hive 1 to hive 2, the metadataJSONSecret is expected to be copied as well, so this PR updates the cluster adoption procedure accordingly.

@openshift-ci-robot added the jira/valid-reference label (indicates that this PR references a valid Jira ticket of any type) on Dec 5, 2025
@openshift-ci-robot commented Dec 5, 2025

@jianping-shu: This pull request references HIVE-2302 which is a valid jira issue.

In response to this:

> HIVE-2302 introduced metadataJSONSecret.
> If a cluster is migrated from hive 1 to hive 2, the metadataJSONSecret is expected to be copied as well, so this PR updates the cluster adoption procedure accordingly.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci bot requested review from 2uasimojo and dlom on December 5, 2025 08:56
@openshift-ci bot (Contributor) commented Dec 5, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: jianping-shu
Once this PR has been reviewed and has the lgtm label, please assign suhanime for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@2uasimojo (Member) left a comment


Thanks for this @jianping-shu.

I think there is another main scenario we need to account for: adoption of a cluster that was installed "manually" via openshift-install. In this case the metadata.json file was produced by that install process, and we should include instructions for how to package it up properly into a Secret (probably some variant of oc create secret ... --from-file=/path/to/metadata.json), which should then get referenced by this field.
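
For example, a minimal sketch of that command (the secret name, namespace, and key name below are placeholders/assumptions, not something this PR spells out):

```
# Sketch: package an installer-produced metadata.json into a Secret that the
# adopted ClusterDeployment can then reference. The secret name, namespace,
# and key name are assumptions; check the hive docs for the expected key.
oc create secret generic mycluster-metadata-json \
  --namespace mycluster-namespace \
  --from-file=metadata.json=/path/to/metadata.json
```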

```
name: pull-secret
```

Note: my-gcp-cluster-metadata-json is the optional and shall be applicable only if the cluster is already managed by hive and my-gcp-cluster-metadata-json already exists in the original ClusterDeployment yaml.
@2uasimojo (Member) commented:

Suggested change:
Before: Note: my-gcp-cluster-metadata-json is the optional and shall be applicable only if the cluster is already managed by hive and my-gcp-cluster-metadata-json already exists in the original ClusterDeployment yaml.
After: Note: my-gcp-cluster-metadata-json is optional and shall be applicable only if the cluster is already managed by hive and my-gcp-cluster-metadata-json already exists in the original ClusterDeployment yaml.

@jianping-shu (Contributor, Author) replied:

Will update.

```
If the cluster you are looking to adopt is on AWS and uses a shared VPC, you will also need to include the name of the hosted zone role in `spec.clusterMetadata.platform.aws.hostedZoneRole`.
If my-aws-cluster-metadata-json doesn't exist and the cluster you are looking to adopt is on AWS and uses a shared VPC, you will also need to include the name of the hosted zone role in `spec.clusterMetadata.platform.aws.hostedZoneRole`.
```
@2uasimojo (Member) commented:

I would suggest phrasing this more like

Suggested change:
Before: If my-aws-cluster-metadata-json doesn't exist and the cluster you are looking to adopt is on AWS and uses a shared VPC, you will also need to include the name of the hosted zone role in `spec.clusterMetadata.platform.aws.hostedZoneRole`.
After: If not specifying a `metadataJSONSecretRef` and the cluster you are looking to adopt is on AWS and uses a shared VPC, you will also need to include the name of the hosted zone role in `spec.clusterMetadata.platform.aws.hostedZoneRole`.

This also makes these sentences less awkward when they're describing the other platforms (e.g. we don't have the substring gcp in a sentence that's talking about an AWS cluster).

Similar below.
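
For illustration, setting that field on an already-created ClusterDeployment could look roughly like the following (a sketch only; the resource name, namespace, and role value are placeholders, and only the field path comes from the doc line quoted above):

```
# Sketch: set the hosted zone role on an adopted ClusterDeployment.
# The resource name, namespace, and role value below are placeholders.
oc patch clusterdeployment mycluster -n mycluster-namespace --type=merge \
  -p '{"spec":{"clusterMetadata":{"platform":{"aws":{"hostedZoneRole":"shared-vpc-hosted-zone-role"}}}}}'
```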

@jianping-shu (Contributor, Author) replied:

OK. Will update.

@jianping-shu (Contributor, Author) commented:

> Thanks for this @jianping-shu.
>
> I think there is another main scenario we need to account for: adoption of a cluster that was installed "manually" via openshift-install. In this case the metadata.json file was produced by that install process, and we should include instructions for how to package it up properly into a Secret (probably some variant of oc create secret ... --from-file=/path/to/metadata.json), which should then get referenced by this field.

Would it be a better option to have the hive controller handle the update in this case? I am worried that the documentation would need to cover all the details of the different platforms (AWS/Azure/GCP), different OCP versions, whether a shared VPC is used, etc.

@2uasimojo (Member) commented Dec 8, 2025

> Thanks for this @jianping-shu.
> I think there is another main scenario we need to account for: adoption of a cluster that was installed "manually" via openshift-install. In this case the metadata.json file was produced by that install process, and we should include instructions for how to package it up properly into a Secret (probably some variant of oc create secret ... --from-file=/path/to/metadata.json), which should then get referenced by this field.

> Would it be a better option to have the hive controller handle the update in this case? I am worried that the documentation would need to cover all the details of the different platforms (AWS/Azure/GCP), different OCP versions, whether a shared VPC is used, etc.

If I'm understanding your question correctly, you're suggesting we would prefer to put all of those details into the CD and have hive-controllers generate the metadata.json secret from there?

If that's what you're suggesting, the answer is unequivocally "no". The whole benefit we get from metadata.json being opaquely carried through from installer to destroyer is that we don't have to worry about those individual details -- they are whatever they were at install time. It would be much harder to describe to the user how they should unpack that file and parlay its contents into the bespoke ClusterDeployment fields we've grown over the years than to tell them to pass it through blindly in a Secret.

@jianping-shu (Contributor, Author) commented Dec 9, 2025

In the HIVE-2302 test cases, we already covered upgrading hive from an old version to the latest one with the HIVE-2302 function; the hive controller retrofits metadataJSONSecretRef on its own.
I was suggesting to rely on that controller logic directly in this case, instead of documenting a procedure such as getting the cluster ID, infra ID, etc. and assembling them into metadata.json.
WDYT?

@2uasimojo (Member) commented:

Okay, I understand you now.

Definitely if metadata.json is available in any form -- either from the previous incarnation of the ClusterDeployment or from the installer run -- we should instruct the user to prefer that. If they can avoid both introspecting the actual json and mucking with the bespoke CD platform fields, that's the best option.

But when it's not available, you're asking whether we should instruct them to build it as metadata.json or to populate the CD fields and let the retrofit code do it for them. And here I'm torn.

I had a vague hope that the retrofit code could be temporary -- that we could get rid of it once we're confident no "legacy" clusters exist in the wild. But I'm not sure that's realistic: when would we have that confidence? So maybe this isn't a factor to consider.

And in that case, I can't decide which is less awkward. It probably depends on the background of the user.

I think I'm leaning toward doing as you suggest: instruct them to populate the CD fields and let the retrofit code generate the metadata.json. If you make the instructions just link into our API fields with a generic statement like, "If metadata.json is not available, be sure to populate any of these fields as appropriate..." then I think that's easier than trying to teach them how to format metadata.json.

@jianping-shu (Contributor, Author) commented:

@2uasimojo The document was updated to utilize the hiveutil "--adopt-metadata-json" function, PTAL
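
For readers following along, the adoption command might end up looking roughly like the sketch below. Only `--adopt-metadata-json` is taken from this PR; the other flags and all values are assumptions based on hiveutil's existing adoption options, so verify against `hiveutil create-cluster --help`.

```
# Hypothetical sketch of adopting an existing cluster with hiveutil, letting it
# package the installer-produced metadata.json into the referenced Secret.
# Every flag except --adopt-metadata-json, and every value, is an assumption.
hiveutil create-cluster mycluster \
  --cloud=aws \
  --base-domain=example.com \
  --adopt \
  --adopt-admin-kubeconfig=/path/to/admin/kubeconfig \
  --adopt-infra-id=mycluster-abc12 \
  --adopt-cluster-id=00000000-0000-0000-0000-000000000000 \
  --adopt-metadata-json=/path/to/metadata.json
```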

@openshift-ci bot (Contributor) commented Dec 11, 2025

@jianping-shu: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

