Add support for deployments and configmaps in k8s/OS recipes #12203
Conversation
Overall looks good to me. Please take a look at my inline comments.
This isn't my final review; I want to go over it more deeply one more time.
Results of automated E2E tests of Eclipse Che Multiuser on OCP:
@ibuziuk
@sleshchenko I thought that both
Yeah, as a matter of fact it also fails from time to time, like in the last run with failures in
@ibuziuk I've faced the same issue on my PR #12176 (comment)
@sleshchenko Thank you for the early review -- I know this is a tedious PR to look over. I've added a commit to address your comments and fix a few issues I found while double checking it. See the commit message for a detailed list of changes (it changes a method name, so of course it touches ~20 files). Also, I forgot to add the recipe I have been using for testing:

```yaml
kind: List
items:
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ws
      labels:
        test.label: test
    spec:
      replicas: 2
      selector:
        matchLabels:
          che.deployment_test: test
      template:
        metadata:
          name: ws
          labels:
            test.label: test
        spec:
          containers:
            - image: eclipse/che-dev:nightly
              name: dev
              resources:
                limits:
                  memory: 512Mi
              env:
                - name: CONFIGMAP_TEST
                  valueFrom:
                    configMapKeyRef:
                      name: test-configmap
                      key: test.entry
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      labels:
        test.label: test
      name: test-configmap
    data:
      test.entry: 'test.key'
```

I use this to create a Che 7 workspace based off a deployment, with a configmap included for testing. I basically duplicated the current Che 7 stack and replaced the recipe with this one in my testing. Of course, functionality for currently existing recipes should be unchanged.
To add: I've been experimenting with how the different metas in a deployment affect the workspace, and I've found:
I think we should either
Added a commit to make sure we're getting the machine correctly now. To test, change the
@amisevsk Please try to use
@sleshchenko I think the purpose of
Basically, the only added constraint here is, like in the
I've heard from people that they see stack creation as something of an arcane art, so maybe improving the docs is in order regardless :)
Very good point. I do believe we should consider creating a separate issue (probably for the next sprint) which would describe the creation of custom stacks based on k8s / OpenShift recipes.
@amisevsk nice job! You have probably already planned it, but you should update https://www.eclipse.org/che/docs/che-6/recipes.html#kubernetes-yaml-limitations-and-restrictions as well. Does it work with deployments that have more than one pod?
@amisevsk I wanted to play with the PR, and figured out that I cannot create a stack based on the recipe you provided:
I've added user-dashboard validation and processing to the PR, so @ibuziuk's use case (actually creating a new stack with the recipe) should work now. It's my first time doing anything mildly big on the dashboard side, so I'm looking forward to some good feedback :)

@l0rd The recipes should have no problem with multiple deployments, but AFAIK Kubernetes deployments support only a single pod spec per deployment. The validation will complain if you ask for more than one replica, and the server side will overwrite it with "1" even if you manually change the recipe in a new stack. As an aside, with the dashboard validation in place, a lot of our discussion about workspace machines is easier to settle, as the recipe will correctly (I think) get workspace machines from deployments in recipes. I'm working on a docs PR to update the requirements there.

@ashumilova @benoitf GitHub automatically requested your review re: the last commit in the PR, although I'm happy to get more feedback on the rest, if you're interested.
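To make the replica rule concrete, here is a minimal, hypothetical sketch of the behaviour described above (this is not the actual Che validator; `DeploymentSpecStub`, `validate`, and `normalize` are illustrative names, and the stub stands in for the real fabric8 model class): a workspace runs exactly one instance of each machine, so recipes asking for more than one replica are flagged, and the server side forces the value back to 1.

```java
// Hypothetical sketch of the "at most one replica" rule for deployments
// in workspace recipes. Names and structure are assumptions for
// illustration, not the real Che implementation.
public class ReplicaCheckSketch {

  // Stand-in for the fabric8 DeploymentSpec model class.
  static class DeploymentSpecStub {
    Integer replicas;
    DeploymentSpecStub(Integer replicas) { this.replicas = replicas; }
  }

  /** Returns an error message, or null when the spec is acceptable. */
  static String validate(DeploymentSpecStub spec) {
    if (spec.replicas != null && spec.replicas > 1) {
      return "Deployments in workspace recipes may specify at most one replica";
    }
    return null;
  }

  /** Server side: force a single replica regardless of what was requested. */
  static void normalize(DeploymentSpecStub spec) {
    spec.replicas = 1;
  }
}
```

With this shape, the recipe above (which requests `replicas: 2`) would fail dashboard validation, and if a stack were created around the validation, provisioning would still rewrite the replica count to 1.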
Force-pushed from f94bc14 to 8e5ea63.
Looks good. I left inline comments about some minor points.
LGTM
Please take a look at my inline comment.
Add support for deployments in Kubernetes and OpenShift recipes.

Important changes: since a (Kubernetes|OpenShift)Environment can now contain *both* pods and deployments simultaneously, we need to:
- Introduce a wrapper class, PodData, which stores the spec and metadata for either a pod or deployment, allowing provisioners to look in one place for specs that need to be modified
- Disallow modifying the results of KubernetesEnvironment.getPods() directly, as this would ignore any deployments in the environment
- Add a method KubernetesEnvironment.getDeployments() to obtain deployment objects directly when needed
- Modify the workspace start process to start all deployments in the environment as well as all pods

Note: in an effort to keep the changes in this PR readable, nothing is done to fix tests broken by the changes above. This commit will not compile unless test compilation is skipped. The next commit fixes tests.

Signed-off-by: Angel Misevski <[email protected]>
Changes necessary to make tests compile and pass after the introduction of deployments into Kubernetes/OpenShift recipes.

Signed-off-by: Angel Misevski <[email protected]>
Allow configmaps to be specified in Kubernetes/OpenShift recipes. Also changes UniqueNamesProvisioner to rewrite configmap names across the environment to avoid name collisions.

Signed-off-by: Angel Misevski <[email protected]>
@ashumilova @benoitf ping for review regarding 90eed26. This commit changes recipe validation a fair bit to support other objects, and will be useful further on when adding support for more objects in recipes. The short explanation is: instead of parsing/validating
LGTM
Really great job! I played with it and really like it. Special thanks for the UD validations and the doc PR 👍
- Rename method in KubernetesInternalRuntime to make its meaning clearer
- Fix some documentation issues
- Refactor KubernetesEnvironment to set podData in the constructor instead of through the builder, avoiding inconsistencies in podData when using the constructor directly
- Rename KubernetesEnvironment#getPodData to getPodsData to better represent the data that is returned
- Fix issue where pod-based recipes were not correctly getting the deployment name label and so were failing to start
- Fix issue where deployments were not given a unique name by the provisioner

Signed-off-by: Angel Misevsk <[email protected]>
Add validation and processing on the dashboard side for recipes containing deployments and/or configmaps. This allows users to create new stacks using these items in recipes. Signed-off-by: Angel Misevski <[email protected]>
Remove the podName parameter from the addPod method and simply get it from the pod's metadata. Additional cleanup and documentation changes:
- Make some log messages level 'info' instead of 'debug'
- Modify argument order in messages to make more sense
- Add docs for KubernetesEnvironment methods about how/when the pods map can be modified
- Fix issue where pods referencing nonexistent configmaps would emit unreadable error messages due to configmap renaming
- Clean up code according to PR Quality Review

Signed-off-by: Angel Misevski <[email protected]>
Add test cases for:
- KubernetesInternalRuntime, when creating from deployments
- UniqueNamesProvisioner, to ensure deployments and configmaps are handled correctly

Signed-off-by: Angel Misevsk <[email protected]>
```java
e -> {
  PodTemplateSpec podTemplate = e.getValue().getSpec().getTemplate();
  podData.put(
      e.getKey(), new PodData(podTemplate.getSpec(), podTemplate.getMetadata()));
```
@amisevsk While resolving conflicts with my PR, I discovered that the podData map looks like this:

```
PodData { "test-deployments": PodData { metadata: name: "deployment-pod" .... spec: .... }
```

Is this done on purpose? Wouldn't it be better to sync the pod name and the map key? If so, I can do it in my PR.
The corresponding unit test is here https://github.com/eclipse/che/pull/12335/files#diff-7b6478093925317e7c597de94e907f45R262
I'm actually not sure which is better -- I don't think keys are really used in a very direct way. IIRC we pretty much always just iterate through the values in the map.
The issues with syncing the PodData key with the pod name for deployments are:
- We would also need to modify the key for the deployments map, as otherwise using the same key between the two wouldn't work. Also, either the PodData key doesn't match the resource name, or the deployments map key doesn't match the resource name.
- The actual pod name in a deployment is irrelevant, as the pod name is generated from the deployment name. This could be confusing in debugging or testing, as this value is basically ignored.
Personally, I'm open to either choice so long as every key in PodData matches some key in the deployments or pods map. I lean slightly towards keeping it the way it is, as this keeps the key in sync with the deployment name, but I'm not firm either way.
One thing that I haven't tested yet but would be interested to know is whether the pod name in deployments is even necessary. This could be an edge case for how PodData is handled. I can look into it once I get back to working on this bit of code.
What does this PR do?
Adds support for Deployments and ConfigMaps in Kubernetes and OpenShift recipes.
This PR is split into six commits. Some relatively major changes here:

- Since deployments carry their pod definition inside a `PodTemplateSpec`, there is no way to provide provisioners with the environment's pods directly. As a result, I've added a class internal to `KubernetesEnvironment` that basically abstracts pod spec and template to allow us to return one map of objects to be provisioned. This means relatively minor changes to all the provisioners.
- The class is named `PodData`, and has the same methods as `Pod`s, so that it's mostly a drop-in replacement.
- `getPods()` and its parallel `getDeployments()` need to be used more carefully. Ideally, they are only used when we need read-only access to these objects (e.g. when we're deploying them). Instead, `getPodData()` should be used, as this returns metadata and specs for all pods/deployments in the environment.

Feedback on implementation / improvements is more than welcome :)
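The wrapper idea described above can be sketched roughly as follows. This is a hypothetical, simplified illustration, not the actual Che code: `SpecStub` and `MetaStub` stand in for the fabric8 `PodSpec`/`ObjectMeta` model classes, and `EnvironmentSketch` mimics how one `podData` map can expose both bare pods (keyed by pod name) and deployment pod templates (keyed by deployment name).

```java
// Hypothetical sketch of the PodData abstraction: one map exposing the
// spec and metadata of both bare pods and the pod templates inside
// deployments. All names here are illustrative stand-ins, not Che's API.
import java.util.LinkedHashMap;
import java.util.Map;

public class PodDataSketch {

  // Stand-in for the fabric8 PodSpec model class.
  static class SpecStub {
    final String containerImage;
    SpecStub(String containerImage) { this.containerImage = containerImage; }
  }

  // Stand-in for the fabric8 ObjectMeta model class.
  static class MetaStub {
    final String name;
    MetaStub(String name) { this.name = name; }
  }

  // The wrapper: spec + metadata, wherever they came from.
  static class PodData {
    final SpecStub spec;
    final MetaStub metadata;
    PodData(SpecStub spec, MetaStub metadata) {
      this.spec = spec;
      this.metadata = metadata;
    }
  }

  // Stand-in for a Deployment holding a single pod template.
  static class DeploymentStub {
    final MetaStub templateMeta;
    final SpecStub templateSpec;
    DeploymentStub(MetaStub templateMeta, SpecStub templateSpec) {
      this.templateMeta = templateMeta;
      this.templateSpec = templateSpec;
    }
  }

  static class EnvironmentSketch {
    final Map<String, DeploymentStub> deployments = new LinkedHashMap<>();
    final Map<String, PodData> podData = new LinkedHashMap<>();

    // Bare pods are keyed by their own name.
    void addPod(SpecStub spec, MetaStub meta) {
      podData.put(meta.name, new PodData(spec, meta));
    }

    // Deployments contribute their pod template, keyed by deployment name,
    // so provisioners can treat both kinds of objects uniformly.
    void addDeployment(String name, DeploymentStub d) {
      deployments.put(name, d);
      podData.put(name, new PodData(d.templateSpec, d.templateMeta));
    }
  }
}
```

Provisioners then iterate over `podData.values()` without caring whether a given entry originated from a pod or a deployment, which is the "one map of objects to be provisioned" the description refers to.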
What issues does this PR fix or reference?
#11505 and #11507
Release Notes
Add support for Kubernetes deployments and configmaps in Kubernetes and OpenShift recipes
Docs PR
eclipse-che/che-docs#647