
Add support for deployments and configmaps in k8s/OS recipes #12203

Merged
merged 7 commits into from
Jan 9, 2019

Conversation

amisevsk
Contributor

@amisevsk amisevsk commented Dec 17, 2018

What does this PR do?

Adds support for Deployments and ConfigMaps in Kubernetes and OpenShift recipes.

This PR is split into six commits:

  • The first commit adds support for Deployments and makes changes necessary to make main (not test) code compile. This commit only compiles if test compilation is skipped
  • The second commit fixes tests to accommodate the changes. It's separated out for readability.
  • The third commit adds support for configmaps. I'm lumping it with the rest of the PR because it's closely related and interconnected.
  • The sixth commit adds user dashboard validation and processing for configmaps and deployments in recipes.

Some relatively major changes here:

  • Since Deployments store the underlying pod as a PodTemplateSpec, there is no way to hand provisioners the environment's pods directly. As a result, I've added a class internal to KubernetesEnvironment that abstracts over pod spec and template, letting us return a single map of objects to be provisioned. This means relatively minor changes to all the provisioners.
    • The new class is PodData; it has the same methods as Pod, so it's mostly a drop-in replacement (a minimal sketch is included after this list).
  • Since environments can now store pods and/or deployments at the same time, the old method getPods() and its new parallel getDeployments() need to be used more carefully. Ideally, they are only used when we need read-only access to these objects (e.g. when we're actually deploying them). Otherwise, getPodData() should be used, as it returns metadata and specs for all pods/deployments in the environment.
  • A lot of tests had to be modified fairly significantly so that they don't accidentally test the scenario where a modifiable map of pods is returned due to mocking -- ideally there would be a way to assert that provisioners do not modify the pods map directly. In general, this PR makes provisioners more complicated in a way I'm not altogether happy with.
  • I had to modify many methods that took pods as parameters to instead take a pod spec and/or pod metadata separately, as there is no pod to pass around in the deployments case.
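For reference, here is a minimal sketch of the PodData idea described above. It is grounded only in what appears in this PR (a PodData(spec, metadata) constructor and pod-like accessors); the real class is nested in KubernetesEnvironment and likely carries more, and the fabric8 import paths are assumed and may differ by client version:

import io.fabric8.kubernetes.api.model.ObjectMeta;
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodSpec;
import io.fabric8.kubernetes.api.model.PodTemplateSpec;
import io.fabric8.kubernetes.api.model.apps.Deployment;

/** Wraps the spec and metadata of either a bare pod or a deployment's pod template. */
public class PodData {
  private final PodSpec spec;
  private final ObjectMeta metadata;

  public PodData(PodSpec spec, ObjectMeta metadata) {
    this.spec = spec;
    this.metadata = metadata;
  }

  public PodData(Pod pod) {
    this(pod.getSpec(), pod.getMetadata());
  }

  public PodData(Deployment deployment) {
    PodTemplateSpec template = deployment.getSpec().getTemplate();
    this.spec = template.getSpec();
    this.metadata = template.getMetadata();
  }

  /** Pod spec provisioners can modify; changes apply to the pod or the pod template. */
  public PodSpec getSpec() {
    return spec;
  }

  /** Pod metadata provisioners can modify. */
  public ObjectMeta getMetadata() {
    return metadata;
  }
}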

Feedback on implementation / improvements is more than welcome :)

What issues does this PR fix or reference?

#11505 and #11507

Release Notes

Add support for Kubernetes deployments and configmaps in Kubernetes and OpenShift recipes

Docs PR

eclipse-che/che-docs#647

@amisevsk amisevsk added the status/code-review label Dec 17, 2018
@benoitf benoitf added the kind/enhancement label Dec 17, 2018
@ibuziuk ibuziuk self-requested a review December 17, 2018 10:09
@ibuziuk
Member

ibuziuk commented Dec 17, 2018

ci-test

Member

@sleshchenko sleshchenko left a comment


Overall looks good to me. Please take a look at my inline comments.
This is not my final review; I want to review it more deeply one more time.

@riuvshin
Contributor

Results of automated E2E tests of Eclipse Che Multiuser on OCP:
Build details
Test report
docker image: eclipseche/che-server:12203
https://github.com/orgs/eclipse/teams/eclipse-che-qa please check this report.

@ibuziuk
Member

ibuziuk commented Dec 17, 2018

ci-build

@sleshchenko
Member

@ibuziuk The ci-test job already builds PR sources before running tests: https://ci.codenvycorp.com/job/che-pullrequests-test-ocp/1312/. But the success rate is 5%, which indicates that something is wrong. What is the purpose of triggering the build separately from the tests? To have the ci-build check passed?

@ibuziuk
Member

ibuziuk commented Dec 17, 2018

@sleshchenko I thought that both the ci-test and ci-build checks are required.

To have ci-build-check passed

Yeah, as a matter of fact it also fails from time to time, like in the last run with failures in org.eclipse.che.api.workspace.server.hc.ServerCheckerTest.successfulCheckTest. Wondering if it is a common problem or related to the changes in this PR?

@sleshchenko
Member

@ibuziuk I've faced the same issue on my PR #12176 (comment)
It's a good time to create an issue for investigating a possible way to make this test stable.

@amisevsk
Contributor Author

@sleshchenko Thank you for the early review -- I know this is a tedious PR to look over.

I've added a commit to address your comments and fix a few issues I found while double checking it. See the commit message for a detailed list of changes (it changes a method name so of course it touches ~20 files).

Also I forgot to add the recipe I have been using for testing:

kind: List
items:
  -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ws
      labels:
        test.label: test
    spec:
      replicas: 2
      selector:
        matchLabels:
          che.deployment_test: test
      template:
        metadata:
          name: ws
          labels: 
            test.label: test
        spec:
          containers:
            -
              image: eclipse/che-dev:nightly
              name: dev
              resources:
                limits:
                  memory: 512Mi
              env:
              - 
                name: CONFIGMAP_TEST
                valueFrom:
                  configMapKeyRef:
                    name: test-configmap
                    key: test.entry

  -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      labels:
        test.label: test
      name: test-configmap
    data:
      test.entry: 'test.key'

I use this to create a Che 7 workspace based off a deployment, with a configmap included for testing. I basically duplicated the current Che 7 stack and replaced the recipe with this one in my testing.

Of course, functionality for currently existing recipes should be unchanged.

@amisevsk
Contributor Author

To add: I've been experimenting with how the different metas in a deployment affect the workspace, and I've found:

  1. When creating a deployment, the pod template spec meta is ignored. This means that no matter what we set the pod name to, it will be overwritten with the deployment name (plus hashes for replica set, etc.)
  2. On the Che side, the pod template name is what is used to match with the machine name in the stack, so no matter what the deployment name is, the pod template spec name has to match the machine.

I think we should either

  • Add some docs to explain this discrepancy (I can do that), or
  • Try to fix it (should machine name come from deployment name instead?). This could be very tricky, as PodData is used to generate machine names throughout the code. I don't know if this is even possible.

Added a commit to make sure we're getting the machine correctly now. To test, change the spec.template.metadata.name in the recipe I posted to something other than the deployment name so that they don't match (an oversight on my part when writing the recipe).
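To make point 2 above concrete, here is an illustrative sketch of the matching constraint. The "<pod template name>/<container name>" key format is an assumption made for this example, not something spelled out in this PR, so treat it as illustrative only:

import io.fabric8.kubernetes.api.model.Container;
import io.fabric8.kubernetes.api.model.PodTemplateSpec;

class MachineMatchingSketch {
  /**
   * Builds the name used to match a container against a machine in the stack.
   * Only the pod template's name participates; the deployment name never does,
   * which is why spec.template.metadata.name has to line up with the machine.
   */
  static String machineName(PodTemplateSpec template, Container container) {
    return template.getMetadata().getName() + "/" + container.getName();
  }
}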

@sleshchenko
Member

@amisevsk Please try to use PodTemplate#metadata#generateName instead of name. Is it ignored as well? https://docs.openshift.com/container-platform/3.5/rest_api/kubernetes_v1.html#v1-objectmeta

@amisevsk
Contributor Author

@sleshchenko generateName is also ignored, AFAICT. I'm also not sure it would offer many benefits, as it could result in a truncated name (making matching deployed objects to internal machines very difficult).

I think the purpose of generateName is to avoid name collisions when creating multiple objects from the same yaml. The deployment seems to always override it as necessary.

@amisevsk
Contributor Author

Basically, the only added constraint here is that, as in the pod recipe case, the pod name has to match the machine name. The reason it's confusing is that this name normally has no effect when writing ordinary Kubernetes deployments.

I've heard from people that they see stack creation as something of an arcane art, so maybe improving the docs is in order regardless :)

@ibuziuk
Member

ibuziuk commented Dec 18, 2018

I've heard from people that they see stack creation as something of an arcane art, so maybe improving the docs is in order regardless :)

Very good point. I do believe we should consider creating a separate issue (probably for the next sprint) that would describe the creation of custom stacks based on k8s/OpenShift recipes.

@l0rd
Contributor

l0rd commented Dec 18, 2018

@amisevsk nice job!

You have probably already planned it but you should update https://www.eclipse.org/che/docs/che-6/recipes.html#kubernetes-yaml-limitations-and-restrictions as well.

Does it work with deployments that have more than one pod?

@ibuziuk
Member

ibuziuk commented Dec 18, 2018

@amisevsk I wanted to play with the PR and figured out that I cannot create a stack based on the recipe you provided:

[screenshot]

@slemeur slemeur mentioned this pull request Dec 18, 2018
@amisevsk
Contributor Author

amisevsk commented Dec 20, 2018

I've added user dashboard validation and processing to the PR, so that @ibuziuk's use case (actually creating a new stack with the recipe) should work now. It's my first time doing anything mildly big on the dashboard side, so I'm looking forward to some good feedback :)

@l0rd The recipes should have no problem with multiple deployments, but AFAIK Kubernetes deployments support only a single pod spec per deployment. The validation will complain if you ask for more than one replica, and the server-side will overwrite it with "1" even if you manually change the recipe in a new stack.
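As a rough sketch of the single-replica rule just described, using the fabric8 deployment model (illustrative only, not necessarily the exact code in this PR):

import io.fabric8.kubernetes.api.model.apps.Deployment;

class ReplicaOverrideSketch {
  /** Workspace deployments are always scaled to exactly one pod. */
  static void forceSingleReplica(Deployment deployment) {
    Integer replicas = deployment.getSpec().getReplicas();
    if (replicas == null || replicas != 1) {
      deployment.getSpec().setReplicas(1);
    }
  }
}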

As an aside, with the dashboard validation in place, a lot of our discussion about workspace machines becomes easier, as the dashboard will now correctly (I think) derive workspace machines from deployments in recipes. I'm working on a docs PR to update the requirements there.

@ashumilova @benoitf GitHub automatically requested your review regarding the last commit in the PR, although I'm happy to get more feedback on the rest if you're interested.

@amisevsk
Contributor Author

@garagatyi

ci-test

@riuvshin
Contributor

Results of automated E2E tests of Eclipse Che Multiuser on OCP:
Build details
Test report
docker image: eclipseche/che-server:12203
https://github.com/orgs/eclipse/teams/eclipse-che-qa please check this report.


@garagatyi garagatyi left a comment


Looks good. I inlined comments about some non-major stuff.

Member

@sleshchenko sleshchenko left a comment


LGTM
Please take a look at my inline comment.

Add support for deployments in Kubernetes and OpenShift recipes.

Important changes:
- Since a (Kubernetes|OpenShift)Environment now can contain *both* pods
  and deployments simultaneously, we need to:
  - Introduce a wrapper class, PodData, which stores spec and metadata
    for either a pod or deployment, allowing provisioners to look in one
    place for specs that need to be modified
  - Disallow modifying the results of KubernetesEnvironment.getPods()
    directly, as this would ignore any deployments in the environment
  - Add method KubernetesEnvironment.getDeployments(), to obtain
    deployment objects directly when needed.
- Modify the workspace start process to start all deployments in the
  environment as well as all pods.

Note: In an effort to keep the changes in this PR readable, nothing is
done to fix broken tests due to the changes above. This commit will not
compile unless test compilation is skipped. The next commit fixes tests.

Signed-off-by: Angel Misevski <[email protected]>
Changes necessary to make tests compile and pass after introduction of
deployments into Kubernetes/OpenShift recipes.

Signed-off-by: Angel Misevski <[email protected]>
Allow configmaps to be specified in Kubernetes/OpenShift recipes. Also
changes UniqueNamesProvisioner to rewrite configmap names across the
environment to avoid name collisions.

Signed-off-by: Angel Misevski <[email protected]>
@amisevsk
Contributor Author

amisevsk commented Jan 8, 2019

@ashumilova @benoitf ping for review regarding 90eed26. This commit changes recipe validation a fair bit to support other objects and will be useful further on when adding support for more objects in recipes.

The short explanation is: Instead of parsing/validating IPodItems, now we parse/validate ISupportedListItem = IPodItem | IDeploymentItem | IConfigMapItem. This affects other areas of the code, as now instead of getting an IPodItem it could be an IDeploymentItem, which has to be considered when e.g. detecting machines automatically.


@garagatyi garagatyi left a comment


LGTM

Member

@ibuziuk ibuziuk left a comment


Really great job, played with it and really like it. Special thanks for UD validations and doc PR 👍

- Rename method in KubernetesInternalRuntime to make its meaning more
  clear
- Fix some documentation issues
- Refactor KubernetesEnvironment to set podData in constructor instead
  of through builder, avoiding inconsistencies in podData when using
  constructor directly.
- Rename KubernetesEnvironment#getPodData to getPodsData to better
  represent the data that is returned
- Fix issue where pod-based recipes were not correctly getting
  deployment name label and so were failing to start
- Fix issue where deployments were not given a unique name by
  provisioner.

Signed-off-by: Angel Misevski <[email protected]>
Add validation and processing on the dashboard side for recipes
containing deployments and/or configmaps. This allows users to create
new stacks using these items in recipes.

Signed-off-by: Angel Misevski <[email protected]>
Remove podName parameter from addPod method and simply get it from the
pod's metadata.

Additional cleanup and documentation changes:

- Make some log messages level 'info' instead of 'debug'
- Modify argument order in messages to make more sense
- Add docs for KubernetesEnvironment methods about how/when pods map can
  be modified
- Fix issue where pods referencing nonexistent configmaps would emit
  unreadable error messages due to configmap renaming.
- Clean up code according to PR Quality Review

Signed-off-by: Angel Misevski <[email protected]>
Add test cases for
- KubernetesInternalRuntime, when creating from deployments
- UniqueNamesProvisioner, to ensure deployments and configmaps are
  handled correctly

Signed-off-by: Angel Misevski <[email protected]>
@amisevsk amisevsk merged commit 1ffb78c into eclipse-che:master Jan 9, 2019
@amisevsk amisevsk deleted the issue-11505 branch January 9, 2019 00:28
// From the PR diff: each deployment is unwrapped into a PodData entry keyed by the deployments map's key.
e -> {
  PodTemplateSpec podTemplate = e.getValue().getSpec().getTemplate();
  podData.put(
      e.getKey(), new PodData(podTemplate.getSpec(), podTemplate.getMetadata()));
Member


@amisevsk While resolving conflicts with my PR I discovered that the pod data map ends up looking like this:

podData {
  "test-deployments": PodData {
    metadata:
      name: "deployment-pod"
      ....
    spec: ....
  }
}

Is this done on purpose? Wouldn't it be better to sync the pod name and the map key? If it would be better, I can do it in my PR.

Member

@sleshchenko sleshchenko Jan 9, 2019

Contributor Author


I'm actually not sure which is better -- I don't think keys are really used in a very direct way. IIRC we pretty much always just iterate through the values in the map.

The issues with syncing the PodData key with pod name for deployments are

  1. We would also need to modify the key for the deployments map, as otherwise using the same key between the two wouldn't work. Also, either the PodData key doesn't match the resource name, or the deployments map key doesn't match the resource name.
  2. The actual pod name in a deployment is irrelevant, as pod name is generated from deployment name. This could be confusing in debugging or testing, as this value is basically ignored.

Personally, I'm open to either choice so long as every key in PodData matches some key in the deployments or pods map. I lean slightly towards keeping it the way it is, as this keeps the key in sync with the deployment name, but I'm not firm either way.

One thing that I haven't tested yet but would be interested to know is whether the pod name in deployments is even necessary. This could be an edge case for how PodData is handled. I can look into it once I get back to working on this bit of code.
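To illustrate the usage pattern mentioned above (provisioners iterating the values rather than relying on the keys), here is a hedged sketch using getPodsData(); the provisioner class and the env var it adds are invented for the example, and the Che import paths are assumed:

import java.util.ArrayList;
import io.fabric8.kubernetes.api.model.Container;
import io.fabric8.kubernetes.api.model.EnvVar;
import org.eclipse.che.workspace.infrastructure.kubernetes.environment.KubernetesEnvironment;
import org.eclipse.che.workspace.infrastructure.kubernetes.environment.KubernetesEnvironment.PodData;

class ExampleProvisioner {
  /** Adds an example env var to every container, whether it comes from a pod or a deployment. */
  void provision(KubernetesEnvironment environment) {
    for (PodData pod : environment.getPodsData().values()) {
      for (Container container : pod.getSpec().getContainers()) {
        // The env list may not be initialized, depending on how the recipe was parsed.
        if (container.getEnv() == null) {
          container.setEnv(new ArrayList<>());
        }
        container.getEnv().add(new EnvVar("EXAMPLE_VAR", "value", null));
      }
    }
  }
}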

@benoitf benoitf added this to the 6.17.0 milestone Jan 14, 2019
@benoitf benoitf removed the status/code-review This issue has a pull request posted for it and is awaiting code review completion by the community. label Jan 14, 2019
amisevsk referenced this pull request Jan 16, 2019
Remove podName parameter from addPod method and simply get it from the pod's metadata.
Labels
kind/enhancement A feature request - must adhere to the feature request template.
8 participants