
Workspace is created, deployment is ready, pods are up and running, stuck on waiting for workspace to start #23103

Open
mcz-adhamsabry opened this issue Aug 19, 2024 · 19 comments
Assignees
Labels
area/devworkspace-operator area/install Issues related to installation, including offline/air gap and initial setup kind/bug Outline of a bug - must adhere to the bug report template. severity/P1 Has a major impact to usage or development of the system. status/analyzing An issue has been proposed and it is currently being analyzed for effort and implementation approach

Comments

@mcz-adhamsabry

Describe the bug

I created an empty workspace and an Angular workspace; neither of them gets past the starting phase.

che-1                     workspacea524efca6b8347fb-d8559986d-gzphd          2/2     Running     0               8m23s
che-1                     workspaceb8181dcca4fb4663-bf6f46888-hwn65          2/2     Running     0               11m
che-1                     workspaced37a97a9566247c6-7fd88d9fc-b99l2          2/2     Running     0               10m

No errors, nothing out of the ordinary, except the following message from the universal-developer-image container:

Kubedock is disabled. It can be enabled with the env variable "KUBEDOCK_ENABLED=true"
set in the workspace Devfile or in a Kubernetes ConfigMap in the developer namespace.

Che version

7.90@latest

Steps to reproduce

Start an empty workspace or the Angular template.

Expected behavior

The workspace should start as normal.

Runtime

other (please specify in additional context)

Screenshots

Empty workspace: [screenshot]
get all output: [screenshot]

Installation method

chectl/latest

Environment

macOS

Eclipse Che Logs

2024-08-19 17:31:14

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:31:14Z","logger":"controllers.DevWorkspace","msg":"Waiting on deployment to be ready","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df"}

2024-08-19 17:31:14

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:31:14Z","logger":"controllers.DevWorkspace","msg":"Reconciling Workspace","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df","resolvedConfig":"workspace.progressTimeout=900s,workspace.persistUserHome.enabled=true,workspace.podSecurityContext is set"}

2024-08-19 17:31:14

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:31:14Z","logger":"controllers.DevWorkspace","msg":"Deployment is not ready","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df"}

2024-08-19 17:31:14

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:31:14Z","logger":"controllers.DevWorkspace","msg":"Waiting on deployment to be ready","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df"}

2024-08-19 17:31:16

8s_controller_ingress-nginx-controller-55dd9c5f4-t5gvx_ingress-nginx_3df7f150-94fb-4d8b-a348-0fe48ed4c81e_3
W0819 17:31:16.577771 6 controller.go:1216] Service "che-1/workspace4b384b46c6db48df-service" does not have any active Endpoint.

2024-08-19 17:31:16

8s_controller_ingress-nginx-controller-55dd9c5f4-t5gvx_ingress-nginx_3df7f150-94fb-4d8b-a348-0fe48ed4c81e_3
W0819 17:31:16.577793 6 controller.go:1216] Service "che-1/workspace4b384b46c6db48df-service" does not have any active Endpoint.

2024-08-19 17:31:16

8s_controller_ingress-nginx-controller-55dd9c5f4-t5gvx_ingress-nginx_3df7f150-94fb-4d8b-a348-0fe48ed4c81e_3
W0819 17:31:16.577802 6 controller.go:1216] Service "che-1/workspace4b384b46c6db48df-service" does not have any active Endpoint.

2024-08-19 17:31:31

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:31:31Z","logger":"controllers.DevWorkspace","msg":"Reconciling Workspace","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df","resolvedConfig":"workspace.progressTimeout=900s,workspace.persistUserHome.enabled=true,workspace.podSecurityContext is set"}

2024-08-19 17:31:31

8s_controller_ingress-nginx-controller-55dd9c5f4-t5gvx_ingress-nginx_3df7f150-94fb-4d8b-a348-0fe48ed4c81e_3
W0819 17:31:31.801965 6 controller.go:1216] Service "che-1/workspace4b384b46c6db48df-service" does not have any active Endpoint.

2024-08-19 17:31:31

8s_controller_ingress-nginx-controller-55dd9c5f4-t5gvx_ingress-nginx_3df7f150-94fb-4d8b-a348-0fe48ed4c81e_3
W0819 17:31:31.802087 6 controller.go:1216] Service "che-1/workspace4b384b46c6db48df-service" does not have any active Endpoint.

2024-08-19 17:31:31

8s_controller_ingress-nginx-controller-55dd9c5f4-t5gvx_ingress-nginx_3df7f150-94fb-4d8b-a348-0fe48ed4c81e_3
W0819 17:31:31.802109 6 controller.go:1216] Service "che-1/workspace4b384b46c6db48df-service" does not have any active Endpoint.

2024-08-19 17:31:31

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:31:31Z","logger":"controllers.DevWorkspace","msg":"Deployment is not ready","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df"}

2024-08-19 17:31:31

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:31:31Z","logger":"controllers.DevWorkspace","msg":"Waiting on deployment to be ready","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df"}

2024-08-19 17:31:35

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:31:35Z","logger":"controllers.DevWorkspace","msg":"Reconciling Workspace","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df","resolvedConfig":"workspace.progressTimeout=900s,workspace.persistUserHome.enabled=true,workspace.podSecurityContext is set"}

2024-08-19 17:31:35

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:31:35Z","logger":"controllers.DevWorkspace","msg":"Deployment is not ready","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df"}

2024-08-19 17:31:35

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:31:35Z","logger":"controllers.DevWorkspace","msg":"Waiting on deployment to be ready","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df"}

2024-08-19 17:31:50

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:31:50Z","logger":"controllers.DevWorkspace","msg":"Reconciling Workspace","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df","resolvedConfig":"workspace.progressTimeout=900s,workspace.persistUserHome.enabled=true,workspace.podSecurityContext is set"}

2024-08-19 17:31:50

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:31:50Z","logger":"controllers.DevWorkspace","msg":"Deployment is not ready","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df"}

2024-08-19 17:31:50

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:31:50Z","logger":"controllers.DevWorkspace","msg":"Waiting on deployment to be ready","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df"}

2024-08-19 17:32:01

8s_controller_ingress-nginx-controller-55dd9c5f4-t5gvx_ingress-nginx_3df7f150-94fb-4d8b-a348-0fe48ed4c81e_3
I0819 17:32:01.181309 6 status.go:304] "updating Ingress status" namespace="che-1" ingress="workspace4b384b46c6db48df-universal-developer-image-13131-code-redirect-1" currentValue=null newValue=[{"ip":"10.1.1.63"}]

2024-08-19 17:32:01

8s_controller_ingress-nginx-controller-55dd9c5f4-t5gvx_ingress-nginx_3df7f150-94fb-4d8b-a348-0fe48ed4c81e_3
I0819 17:32:01.181926 6 status.go:304] "updating Ingress status" namespace="che-1" ingress="workspace4b384b46c6db48df-universal-developer-image-13132-code-redirect-2" currentValue=null newValue=[{"ip":"10.1.1.63"}]

2024-08-19 17:32:01

8s_controller_ingress-nginx-controller-55dd9c5f4-t5gvx_ingress-nginx_3df7f150-94fb-4d8b-a348-0fe48ed4c81e_3
I0819 17:32:01.182333 6 status.go:304] "updating Ingress status" namespace="che-1" ingress="workspace4b384b46c6db48df-universal-developer-image-13133-code-redirect-3" currentValue=null newValue=[{"ip":"10.1.1.63"}]

2024-08-19 17:32:01

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:32:01Z","logger":"controllers.DevWorkspaceRouting","msg":"Reconciling DevWorkspaceRouting","Request.Namespace":"che-1","Request.Name":"routing-workspace4b384b46c6db48df","devworkspace_id":"workspace4b384b46c6db48df"}

2024-08-19 17:32:01

8s_controller_ingress-nginx-controller-55dd9c5f4-t5gvx_ingress-nginx_3df7f150-94fb-4d8b-a348-0fe48ed4c81e_3
I0819 17:32:01.185553 6 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"che-1", Name:"workspace4b384b46c6db48df-universal-developer-image-13131-code-redirect-1", UID:"838af6fa-abce-4a77-88c3-0cc48545cfd2", APIVersion:"networking.k8s.io/v1", ResourceVersion:"29787", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync

2024-08-19 17:32:01

8s_controller_ingress-nginx-controller-55dd9c5f4-t5gvx_ingress-nginx_3df7f150-94fb-4d8b-a348-0fe48ed4c81e_3
W0819 17:32:01.185726 6 controller.go:1216] Service "che-1/workspace4b384b46c6db48df-service" does not have any active Endpoint.

2024-08-19 17:32:01

8s_controller_ingress-nginx-controller-55dd9c5f4-t5gvx_ingress-nginx_3df7f150-94fb-4d8b-a348-0fe48ed4c81e_3
W0819 17:32:01.185774 6 controller.go:1216] Service "che-1/workspace4b384b46c6db48df-service" does not have any active Endpoint.

2024-08-19 17:32:01

8s_controller_ingress-nginx-controller-55dd9c5f4-t5gvx_ingress-nginx_3df7f150-94fb-4d8b-a348-0fe48ed4c81e_3
W0819 17:32:01.185787 6 controller.go:1216] Service "che-1/workspace4b384b46c6db48df-service" does not have any active Endpoint.

2024-08-19 17:32:01

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:32:01Z","logger":"controllers.DevWorkspaceRouting","msg":"Reconciling DevWorkspaceRouting","Request.Namespace":"che-1","Request.Name":"routing-workspace4b384b46c6db48df","devworkspace_id":"workspace4b384b46c6db48df"}

2024-08-19 17:32:01

8s_controller_ingress-nginx-controller-55dd9c5f4-t5gvx_ingress-nginx_3df7f150-94fb-4d8b-a348-0fe48ed4c81e_3
I0819 17:32:01.187040 6 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"che-1", Name:"workspace4b384b46c6db48df-universal-developer-image-13133-code-redirect-3", UID:"7be07255-0fb3-4c2a-832e-32780b76c5e5", APIVersion:"networking.k8s.io/v1", ResourceVersion:"29788", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync

2024-08-19 17:32:01

8s_controller_ingress-nginx-controller-55dd9c5f4-t5gvx_ingress-nginx_3df7f150-94fb-4d8b-a348-0fe48ed4c81e_3
I0819 17:32:01.187048 6 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"che-1", Name:"workspace4b384b46c6db48df-universal-developer-image-13132-code-redirect-2", UID:"ccfa645a-05cc-4e94-a7d8-9f87065c5f4d", APIVersion:"networking.k8s.io/v1", ResourceVersion:"29789", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync

2024-08-19 17:32:04

8s_controller_ingress-nginx-controller-55dd9c5f4-t5gvx_ingress-nginx_3df7f150-94fb-4d8b-a348-0fe48ed4c81e_3
W0819 17:32:04.522944 6 controller.go:1216] Service "che-1/workspace4b384b46c6db48df-service" does not have any active Endpoint.

2024-08-19 17:32:04

8s_controller_ingress-nginx-controller-55dd9c5f4-t5gvx_ingress-nginx_3df7f150-94fb-4d8b-a348-0fe48ed4c81e_3
W0819 17:32:04.522971 6 controller.go:1216] Service "che-1/workspace4b384b46c6db48df-service" does not have any active Endpoint.

2024-08-19 17:32:04

8s_controller_ingress-nginx-controller-55dd9c5f4-t5gvx_ingress-nginx_3df7f150-94fb-4d8b-a348-0fe48ed4c81e_3
W0819 17:32:04.522979 6 controller.go:1216] Service "che-1/workspace4b384b46c6db48df-service" does not have any active Endpoint.

2024-08-19 17:32:06

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:32:06Z","logger":"controllers.DevWorkspace","msg":"Reconciling Workspace","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df","resolvedConfig":"workspace.progressTimeout=900s,workspace.persistUserHome.enabled=true,workspace.podSecurityContext is set"}

2024-08-19 17:32:06

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:32:06Z","logger":"controllers.DevWorkspace","msg":"Deployment is not ready","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df"}

2024-08-19 17:32:06

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:32:06Z","logger":"controllers.DevWorkspace","msg":"Waiting on deployment to be ready","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df"}

2024-08-19 17:32:08

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:32:08Z","logger":"controllers.DevWorkspace","msg":"Reconciling Workspace","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df","resolvedConfig":"workspace.progressTimeout=900s,workspace.persistUserHome.enabled=true,workspace.podSecurityContext is set"}

2024-08-19 17:32:08

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:32:08Z","logger":"controllers.DevWorkspace","msg":"Deployment is not ready","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df"}

2024-08-19 17:32:08

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:32:08Z","logger":"controllers.DevWorkspace","msg":"Waiting on deployment to be ready","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df"}

2024-08-19 17:32:24

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:32:24Z","logger":"controllers.DevWorkspace","msg":"Reconciling Workspace","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df","resolvedConfig":"workspace.progressTimeout=900s,workspace.persistUserHome.enabled=true,workspace.podSecurityContext is set"}

2024-08-19 17:32:24

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:32:24Z","logger":"controllers.DevWorkspace","msg":"Deployment is not ready","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df"}

2024-08-19 17:32:24

8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:32:24Z","logger":"controllers.DevWorkspace","msg":"Waiting on deployment to be ready","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df"}
2024-08-19 17:32:24 8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:32:24Z","logger":"controllers.DevWorkspace","msg":"Reconciling Workspace","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df","resolvedConfig":"workspace.progressTimeout=900s,workspace.persistUserHome.enabled=true,workspace.podSecurityContext is set"}
2024-08-19 17:32:24 8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:32:24Z","logger":"controllers.DevWorkspace","msg":"Reconciling Workspace","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df","resolvedConfig":"workspace.progressTimeout=900s,workspace.persistUserHome.enabled=true,workspace.podSecurityContext is set"}
2024-08-19 17:32:57 8s_devworkspace-controller_devworkspace-controller-manager-77ccf897f7-5l7p9_devworkspace-controller_18826a1e-a17c-45c0-83aa-efbdb78c624b_1
{"level":"info","ts":"2024-08-19T17:32:57Z","logger":"controllers.DevWorkspace","msg":"Reconciling Workspace","Request.Namespace":"che-1","Request.Name":"empty-5deu","devworkspace_id":"workspace4b384b46c6db48df","resolvedConfig":"workspace.progressTimeout=900s,workspace.persistUserHome.enabled=true,workspace.podSecurityContext is set"}

Additional context

No response

@mcz-adhamsabry mcz-adhamsabry added the kind/bug Outline of a bug - must adhere to the bug report template. label Aug 19, 2024
@che-bot che-bot added the status/need-triage An issue that needs to be prioritized by the curator responsible for the triage. See https://github. label Aug 19, 2024
@ibuziuk ibuziuk added severity/P1 Has a major impact to usage or development of the system. area/install Issues related to installation, including offline/air gap and initial setup status/analyzing An issue has been proposed and it is currently being analyzed for effort and implementation approach and removed status/need-triage An issue that needs to be prioritized by the curator responsible for the triage. See https://github. labels Aug 19, 2024
@ibuziuk
Member

ibuziuk commented Aug 19, 2024

@mcz-adhamsabry could you please clarify how Eclipse Che was installed on the cluster?
Also, please share the DevWorkspace Operator logs.

cc: @dkwon17 @AObuchow

@mcz-adhamsabry
Author

I think those 'Service "che-1/workspace4b384b46c6db48df-service" does not have any active Endpoint.' warnings are the issue.
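
To confirm whether that Service ever gets endpoints behind it, a quick check like this should show it (a sketch; the service name is taken from the warning above):

# Does the workspace Service have any endpoints?
kubectl get endpoints workspace4b384b46c6db48df-service -n che-1
# Compare the Service selector with the workspace pod labels
kubectl describe service workspace4b384b46c6db48df-service -n che-1
kubectl get pods -n che-1 --show-labels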

CheCluster patch YAML (che-patch.yaml):

kind: CheCluster
apiVersion: org.eclipse.che/v2
spec:
  metrics:
    enable: false

  components:
    cheServer:
      extraProperties:
        CHE_OIDC_USERNAME__CLAIM: email

  devEnvironments:
    defaultNamespace:
      autoProvision: true
      template: che-<userid>
    persistUserHome:
      enabled: true
    startTimeoutSeconds: 900
    secondsOfInactivityBeforeIdling: 3600000
    secondsOfRunBeforeIdling: -1
    maxNumberOfRunningWorkspacesPerUser: -1
    maxNumberOfWorkspacesPerUser: -1
    security:
      podSecurityContext:
        fsGroup: 1724
        runAsUser: 1724

  gitServices:
    gitlab:
      - secretName: git-gitlab-oauth-config

  networking:
    ingressClassName: nginx
    auth:
      oAuthClientName: <client-name>
      oAuthSecret: <secret>
      identityProviderURL: <provider>
      gateway:
        oAuthProxy:
          cookieExpireSeconds: 7200

  k8s:
    singleHostExposureType: 'gateway'
    tlsSecretName: 'letsencrypt-wild-card-prod-secret'
  server:
    serverExposureStrategy: 'single-host'
    useInternalClusterSVCNames: true
    workspaceNamespaceDefault: 'che-<userid>'
  storage:
    pvcClaimSize: 5Gi
    pvcStrategy: per-workspace

install command:

chectl server:deploy --platform k8s --domain $DOMAIN_NAME --che-operator-cr-patch-yaml ./che-patch.yaml --telemetry off --skip-cert-manager

@AObuchow

@mcz-adhamsabry are you running on an Apple Silicon Mac (e.g. m1, m2, m3...)? Also, are you using Minikube or another variant of Kubernetes?

I agree with your assumption that it seems to be the editor's endpoints or the editor health check that is not working.

@mcz-adhamsabry
Author

mcz-adhamsabry commented Aug 19, 2024

I'm on macOS (M2) with k3s.

@AObuchow

@mcz-adhamsabry after doing a bit of digging, it seems like Che doesn't fully support deploying to k3s yet. This comment might provide some clues on how to get it working, but the comment is quite old by now.

Please share the devworkspace-controller-manager logs as well as the che-operator logs. It's also worth sharing the DevWorkspaceRouting YAML for the routing-workspace4b384b46c6db48df object on your cluster. It seems like there are issues with the created ingresses.

@SDAdham

SDAdham commented Aug 19, 2024

Managed to fix it, team, thanks. Switching off single-host worked; I'll have to monitor it for a while.

I assume the long domains were the culprit.

@AObuchow

@SDAdham Glad to hear :) When things seem to be working as expected, please let us know if you could share any details (e.g. how you turned off single-host) and mark the issue as resolved :)

@mcz-adhamsabry
Author

mcz-adhamsabry commented Aug 19, 2024

Just removed serverExposureStrategy: 'single-host' from the che-patch.yaml. Thanks.

@mcz-adhamsabry
Author

mcz-adhamsabry commented Aug 20, 2024

I'm back, sorry. It's the same problem; nothing is fixed. One time the workspace opened and I was able to see the editor, but I couldn't open the terminal, even though I was able to show the panel. It's not consistent, and it gets stuck at the same point as in the screenshots.

I set the gateway log level to error and I see this:

{"level":"error","ts":"2024-08-20T05:09:30Z","msg":"Reconciler error","controller":"devworkspace","controllerGroup":"workspace.devfile.io","controllerKind":"DevWorkspace","DevWorkspace":{"name":"empty-evcq","namespace":"che-jlspaxaqtrq6daqvuo5e0v-zni1lnfhio3upw68vy5w-a0v7xo"},"namespace":"che-jlspaxaqtrq6daqvuo5e0v-zni1lnfhio3upw68vy5w-a0v7xo","name":"empty-evcq","reconcileID":"1d4a9510-2609-4654-adf6-1599b1444a59","error":"Get \"<url>/empty-evcq/3100/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:329\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:274\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235"}

The URL it pings, <url>/empty-evcq/3100/healthz, returns OK when I call it myself.
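
Since that timeout is reported by the devworkspace-controller pod rather than my browser, it may be worth checking whether the same URL is reachable from inside the cluster; a rough check (the <url> placeholder is the same one as in the log above):

# Hit the same healthz URL from a throwaway pod inside the cluster
kubectl run curl-check --rm -it --restart=Never -n eclipse-che --image=curlimages/curl -- \
  curl -kv --max-time 10 "<url>/empty-evcq/3100/healthz"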

@AObuchow regarding the comment you linked: I don't have issues deploying Che or the che-tls certificate; the che-tls secret already contains a wildcard certificate on my end.
Here is the object that I found (ConfigMap), via kubectl describe configmap workspace6bc886cd73cf4de0-route -n che-jlspaxaqtrq6daqvuo5e0v-zni1lnfhio3upw68vy5w-a0v7xo:

Name:         workspace6bc886cd73cf4de0-route
Namespace:    che-jlspaxaqtrq6daqvuo5e0v-zni1lnfhio3upw68vy5w-a0v7xo
Labels:       app.kubernetes.io/part-of=che.eclipse.org
              controller.devfile.io/devworkspace_id=workspace6bc886cd73cf4de0
Annotations:  <none>

Data
====
traefik.yml:
----

entrypoints:
  http:
    address: ":3030"
    forwardedHeaders:
      insecure: true
global:
  checkNewVersion: false
  sendAnonymousUsage: false
providers:
  file:
    filename: "/etc/traefik/workspace.yml"
    watch: false
log:
  level: "INFO"
workspace.yml:
----
http:
  middlewares:
    workspace6bc886cd73cf4de0-universal-developer-image-3100-auth:
      forwardAuth:
        address: http://che-gateway.eclipse-che:8089?namespace=che-jlspaxaqtrq6daqvuo5e0v-zni1lnfhio3upw68vy5w-a0v7xo
        trustForwardHeader: false
    workspace6bc886cd73cf4de0-universal-developer-image-3100-healthz-strip-prefix:
      stripPrefix:
        prefixes:
        - /3100
    workspace6bc886cd73cf4de0-universal-developer-image-3100-strip-prefix:
      stripPrefix:
        prefixes:
        - /3100
  routers:
    workspace6bc886cd73cf4de0-universal-developer-image-3100:
      middlewares:
      - workspace6bc886cd73cf4de0-universal-developer-image-3100-strip-prefix
      - workspace6bc886cd73cf4de0-universal-developer-image-3100-auth
      priority: 105
      rule: PathPrefix(`/3100`)
      service: workspace6bc886cd73cf4de0-universal-developer-image-3100
    workspace6bc886cd73cf4de0-universal-developer-image-3100-healthz:
      middlewares:
      - workspace6bc886cd73cf4de0-universal-developer-image-3100-healthz-strip-prefix
      priority: 106
      rule: Path(`/3100/healthz`)
      service: workspace6bc886cd73cf4de0-universal-developer-image-3100-healthz
  services:
    workspace6bc886cd73cf4de0-universal-developer-image-3100:
      loadBalancer:
        servers:
        - url: http://127.0.0.1:3100
    workspace6bc886cd73cf4de0-universal-developer-image-3100-healthz:
      loadBalancer:
        servers:
        - url: http://127.0.0.1:3100


BinaryData
====

Events:  <none>

I'm not sure why it points to http://che-gateway.eclipse-che:8089.
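
To double-check that forwardAuth target (assuming the address above means a Service named che-gateway in the eclipse-che namespace listening on 8089):

# Is the gateway Service there, and does it have endpoints?
kubectl get svc che-gateway -n eclipse-che -o wide
kubectl get endpoints che-gateway -n eclipse-che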

Note: the missing endpoints are not the issue; those warnings only appear early on, and they resolve later.

@mcz-adhamsabry
Author

mcz-adhamsabry commented Aug 20, 2024

I reduced che-patch.yaml to a minimal configuration and redeployed Che:

kind: CheCluster
apiVersion: org.eclipse.che/v2
spec:
  metrics:
    enable: false

  components:
    cheServer:
      extraProperties:
        CHE_OIDC_USERNAME__CLAIM: email

  devEnvironments:
    startTimeoutSeconds: 600
    secondsOfInactivityBeforeIdling: 3600000
    secondsOfRunBeforeIdling: -1
    maxNumberOfRunningWorkspacesPerUser: -1
    maxNumberOfWorkspacesPerUser: -1

  gitServices:
    gitlab:
      - secretName: git-gitlab-oauth-config

  networking:
    ingressClassName: nginx
    auth:
      oAuthClientName: 'redacted'
      oAuthSecret: 'redacted'
      identityProviderURL: 'redacted'
      gateway:
        oAuthProxy:
          cookieExpireSeconds: 600
        traefik:
          logLevel: FATAL

No luck, same issue!
Could this be the reason (and how can I disable it)? [screenshot]

@mcz-adhamsabry
Author

This is the only certificate that I create manually:

sudo kubectl create secret tls che-tls --cert=/etc/letsencrypt/live/<domain>/cert.pem --key=/etc/letsencrypt/live/<domain>/privkey.pem -n eclipse-che

kubectl label secret che-tls app.kubernetes.io/part-of=che.eclipse.org -n eclipse-che

Are there any other certificates that I am expected to create?
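
To see which TLS secret, if any, the workspace ingresses actually reference, something like this should show it (a sketch; <workspace-namespace> is the che-<userid> namespace from the earlier output):

# List each workspace ingress and the secret named in its TLS section
kubectl get ingress -n <workspace-namespace> \
  -o custom-columns=NAME:.metadata.name,TLS_SECRET:.spec.tls[*].secretName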

@mcz-adhamsabry
Author

If anyone can help, I'd really appreciate it. I'm also open to setting up a Zoom meeting if needed.

@dkwon17
Contributor

dkwon17 commented Aug 20, 2024

Hi, when it's stuck in the "Deployment is not ready" state, what happens if you delete the workspace's DevWorkspaceRouting CR?

DEVWORKSPACE_ID=$(kubectl get dw <workspace-name> -o jsonpath='{.status.devworkspaceId}' -n <namespace>)
kubectl delete dwr routing-${DEVWORKSPACE_ID} -n <namespace>
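
After deleting it, the DevWorkspace Operator should recreate the routing object on the next reconcile; you can watch that happen with:

kubectl get dwr -n <namespace> -w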

@mcz-adhamsabry
Author

I restarted the dev workspace and I see workspace3fdbb475c53a4c08-64d67dcd7f-zwr62 6/6 Running.
However, on the Che side I see: [screenshot]
So I proceeded with deleting the DWR, and the page automatically went into: [screenshot]
But when I refreshed, it would open VS Code, then immediately redirect to the dashboard, then redirect back to VS Code, and it keeps doing this in a loop.

What does this mean?

@mcz-adhamsabry
Author

mcz-adhamsabry commented Aug 20, 2024

I checked the network tab in the browser console and I see this:

Request URL:
https://<domain>/<user>/<workspace>/3100/
Request Method:
GET
Status Code:
502 Bad Gateway

Port 3100 is not mentioned anywhere in my project or in the devfile config...

Every time this Bad Gateway response is received, it redirects.
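
For what it's worth, 3100 does not come from the devfile; it appears to be the port the injected che-code editor listens on inside the workspace pod (that is my assumption, based on the traefik config above, which proxies /3100 to http://127.0.0.1:3100). A rough in-pod check (the container name is assumed to match the universal-developer-image component):

# Is anything answering on 3100 inside the workspace pod?
kubectl exec -n <namespace> <workspace-pod> -c universal-developer-image -- \
  curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:3100/healthz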

@ibuziuk ibuziuk moved this from ✅ Done to 🚧 In Progress in Eclipse Che Team B Backlog Aug 20, 2024
@ibuziuk ibuziuk moved this from 🚧 In Progress to Unplanned in Eclipse Che Team B Backlog Aug 20, 2024
@mcz-adhamsabry
Author

mcz-adhamsabry commented Aug 20, 2024

OK, I described the DWR and tried to follow one of the links that the devfile exposes, e.g. phpMyAdmin, and it's not secure. I'm confused about this because che-tls should do the trick, right? I am also confused by the Kubernetes Ingress Controller Fake Certificate.
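
From what I understand, the Kubernetes Ingress Controller Fake Certificate is what ingress-nginx serves when it cannot find a usable TLS secret for a host and falls back to its default certificate. A way to see which certificate is actually presented for one of those endpoint hosts (a sketch; <endpoint-host> is the hostname from the ingress):

# Show the subject and issuer of the certificate nginx presents
openssl s_client -connect <endpoint-host>:443 -servername <endpoint-host> </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer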

@mcz-adhamsabry
Author

Hmm, Che didn't assign che-tls to it.

@mcz-adhamsabry
Author

Why does it have:

TLS:
  workspace19da228de00a4796-endpoints terminates `<user>-<project>-code-redirect-1.<domain>`?

@mcz-adhamsabry
Author

mcz-adhamsabry commented Aug 22, 2024

Here is how I installed Che on macOS.

K8S Setup:

Note before setup: Rancher Desktop works very well, but please keep in mind that the UI doesn't watch containers/images. To refresh a page, just switch to another page and back. E.g. if you run a container and the UI doesn't show the newly started container, switch away from the container view and back, and you will see it.

Install Rancher Desktop (it provides both Kubernetes and Docker; Docker must not be installed separately; do not enable Kubernetes yet, but Docker must be working).
Open Rancher Desktop, go to Preferences > Virtual Machine > Volumes, select 9p / mmap / 2048 / 9p2000.L / mapped-xattr, then Apply.
Edit ~/Library/Application Support/rancher-desktop/lima/_config/override.yaml (it likely does not exist yet) and add the following:

mountType: 9p
mounts:
  - location: "~"
    9p:
      securityModel: mapped-xattr
      cache: "mmap"
env:
  K3S_EXEC: --kube-apiserver-arg oidc-issuer-url=<oidc-url> --kube-apiserver-arg oidc-username-claim=email --kube-apiserver-arg oidc-groups-claim=groups --kube-apiserver-arg oidc-client-id=<client-id>

Go back to Rancher Desktop > Preferences > Kubernetes, select v1.30.3 (stable), untick Enable Traefik, untick Install Spin Operator, then click Apply.

Che setup:
You do not need to install OIDC; just use any OIDC provider, or host a separate one directly via Docker.
You do not need to install vcluster; it makes no sense to have vcluster on a single-host setup.
Before installing Che, you will need to run:

docker pull quay.io/eclipse/che-operator:7.90.0 --platform amd64
docker pull quay.io/eclipse/che-plugin-registry:7.90.0

You will need to apply:

    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: oidc-cluster-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
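
Note: as pasted above, the ClusterRoleBinding has no subjects section, so by itself it binds nothing. A complete binding needs subjects along these lines (the Group name here is a placeholder; it depends on the groups claim your OIDC provider sends):

    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: oidc-cluster-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      # Placeholder: replace with the OIDC user or group that should get cluster-admin
      - apiGroup: rbac.authorization.k8s.io
        kind: Group
        name: <oidc-group>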

@dkwon17 dkwon17 self-assigned this Aug 28, 2024
@dkwon17 dkwon17 moved this from Unplanned to 📅 Planned for this Sprint in Eclipse Che Team B Backlog Aug 28, 2024
@dkwon17 dkwon17 moved this from 📅 Planned for this Sprint to 📋 Backlog (not in current Sprint) in Eclipse Che Team B Backlog Oct 9, 2024
Projects
Status: 📋 Backlog (not in current Sprint)

6 participants