Description
I've been working on re-using InstallConfig within Hive's ClusterDeployment CRD and have run into some problems that could use discussion:
- It can't be used within a kube type due to missing kube-generated methods (DeepCopyInto). It will need to be maintained as a full Kubernetes object for this to work. At present I'm copying your code and getting codegen working in our code base, with a few required tweaks.
- The use of IPNet for CIDRs doesn't work in Kube. I believe we need a custom type or a plain string.
- InstallConfig carries ObjectMeta which can trigger some things we don't necessarily want in our repo. (this one may not be a big deal)
Should the canonical source of the cluster config type live in the Installer? Would the config ever contain options that the installer ignored or did not act on? (and perhaps Hive or other actors would?) (I think the answer is probably "no" here)
How can we share defaulting and validation code? Ideally we want to inform an API caller that their config is invalid without relying on the Installer failing in a pod we're running. Quicker feedback will be important, and ideally we should all be sharing the same code to provide it.
Is InstallConfig appropriately named? Per last arch call we agreed it's not just an install time thing. Would ClusterConfig be more accurate?
Some options to clean this up:
(1) Keep InstallConfig in the Installer. Hook up Kubernetes code gen and commit to all the guarantees required for the type going forward: treat it as an externally facing API object whose serialization must be a superset of any embedding format's.
(2) Hand it over to Hive, letting us maintain the Kube generation and API contract, and vendor it into your repo. (I would propose breaking out something like ClusterConfig in the Hive repo, with InstallConfig remaining in your repo and carrying ClusterConfig and Admin fields (the latter doesn't map nicely to the kube secrets we'd use for this info).)
(3) Place the ClusterConfig definition directly into the core OpenShift API server. I don't know if this would fly, but I kind of like the idea: it makes the type very official, we get an API server for free along with better options for versioning, validation, and defaulting than CRDs offer, and it would signify how hard it is, or should be, to change.
(4) Spin it out into a separate project and repo that we all vendor.
Open to other suggestions. Please let me know what you think.