
Waiting for Deployment ingress-nginx-controller or my-gitea to become ready never ends #430

Open
eliassal opened this issue Nov 1, 2024 · 55 comments
Labels
bug Something isn't working

Comments

@eliassal

eliassal commented Nov 1, 2024

What is your environment, configuration, and command?

Fedora 41 64-bit Hyper-V VM
Followed the instructions at https://cnoe.io/docs/reference-implementation/installations/idpbuilder/quick-start using the RPM package
Docker Desktop works fine

What did you do and What did you see instead?

I ran
idpbuilder create
A VM named local-dev-control was created. On the command line, for the past 3 hours, I have had the following output, which never ends; it gets printed every minute, as if idpbuilder is not able to continue

ild.name=localdev namespace="" name=localdev reconcileID=67d99153-bba2-492a-85c3-0786cb6c6069
time=2024-11-01T18:32:27.900+01:00 level=INFO msg="Waiting for Deployment ingress-nginx-controller to become ready" controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild Localbuild.name=localdev namespace="" name=localdev reconcileID=b96fa8dd-5d4e-4a74-9c5a-d1c627171169
time=2024-11-01T18:32:30.621+01:00 level=INFO msg="Waiting for Deployment ingress-nginx-controller to become ready" controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild Localbuild.name=localdev namespace="" name=localdev reconcileID=67d99153-bba2-492a-85c3-0786cb6c6069
time=2024-11-01T18:32:50.619+01:00 level=INFO msg="Waiting for Deployment my-gitea to become ready" controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild Localbuild.name=localdev namespace="" name=localdev reconcileID=b96fa8dd-5d4e-4a74-9c5a-d1c627171169
time=2024-11-01T18:32:56.926+01:00 level=INFO msg="Waiting for Deployment my-gitea to become ready" controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild Localbuild.name=localdev namespace="" name=localdev reconcileID=67d99153-bba2-492a-85c3-0786cb6c6069
time=2024-11-01T18:32:57.900+01:00 level=INFO msg="Waiting for Deployment ingress-nginx-controller to become ready" controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild Localbuild.name=localdev namespace="" name=localdev reconcileID=b96fa8dd-5d4e-4a74-9c5a-d1c627171169
time=2024-11-01T18:33:00.621+01:00 level=INFO msg="Waiting for Deployment ingress-nginx-controller to become ready" controller=localbuild controllerGroup=idpbuilder.cnoe.io controllerKind=Localbuild Localbuild.name=localdev namespace="" name=localdev reconcileID=67d99153-bba2-492a-85c3-0786cb6c6069

When I try to access https://argocd.cnoe.localtest.me:8443/ or https://gitea.cnoe.localtest.me:8443/ I get

image

Here is a snapshot of the terminal
image

The log of the VM in Docker Desktop does not seem to have any issues

image

Additional Information

I killed the process with Ctrl + C. Surprisingly, I got a message indicating ArgoCD was created successfully, and I can get its password, but I still can't access it in Firefox

image

image

@eliassal eliassal added the bug Something isn't working label Nov 1, 2024
@nabuskey
Collaborator

nabuskey commented Nov 1, 2024

Hmm interesting. Can you give me the output of this command? I am guessing there was an issue with pod creation.

kubectl describe pod -n ingress-nginx

Do you see any errors in the ingress nginx pod?

kubectl logs -n ingress-nginx deployment/ingress-nginx-controller

@eliassal
Author

eliassal commented Nov 2, 2024

Thanks @nabuskey. In fact, first, when I tried to run any kubectl command, I was getting
command not found, and I discovered that the Docker Desktop installation on Fedora does not bring kubectl, kubeadm, or kubelet with it.
I installed them manually and then reran the command
idpbuilder create
It creates the kind pods, then it errors as follows

$ idpbuilder create
time=2024-11-02T12:13:16.892+01:00 level=INFO msg="Creating kind cluster" logger=setup
time=2024-11-02T12:13:16.913+01:00 level=INFO msg="Runtime detected" logger=setup provider=docker
########################### Our kind config ############################
# Kind kubernetes release images https://github.com/kubernetes-sigs/kind/releases
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: "kindest/node:v1.29.2"
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 443
    hostPort: 8443
    protocol: TCP

containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gitea.cnoe.localtest.me:8443"]
    endpoint = ["https://gitea.cnoe.localtest.me"]
  [plugins."io.containerd.grpc.v1.cri".registry.configs."gitea.cnoe.localtest.me".tls]
    insecure_skip_verify = true

######################### config end ############################
time=2024-11-02T12:13:16.949+01:00 level=INFO msg="Creating kind cluster" logger=setup cluster=localdev
time=2024-11-02T12:19:04.217+01:00 level=ERROR msg="Error starting kind cluster" logger=setup err="command "docker exec --privileged localdev-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1\nsigs.k8s.io/kind/pkg/errors.WithStack\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/errors/errors.go:59\nsigs.k8s.io/kind/pkg/exec.(*LocalCmd).Run\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/exec/local.go:124\nsigs.k8s.io/kind/pkg/cluster/internal/providers/docker.(*nodeCmd).Run\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/internal/providers/docker/node.go:146\nsigs.k8s.io/kind/pkg/exec.CombinedOutputLines\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/exec/helpers.go:67\nsigs.k8s.io/kind/pkg/cluster/internal/create/actions/kubeadminit.(*action).Execute\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/internal/create/actions/kubeadminit/init.go:81\nsigs.k8s.io/kind/pkg/cluster/internal/create.Cluster\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/internal/create/create.go:135\nsigs.k8s.io/kind/pkg/cluster.(*Provider).Create\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/provider.go:192\ngithub.com/cnoe-io/idpbuilder/pkg/kind.(*Cluster).Reconcile\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/kind/cluster.go:202\ngithub.com/cnoe-io/idpbuilder/pkg/build.(*Build).ReconcileKindCluster\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/build/build.go:85\ngithub.com/cnoe-io/idpbuilder/pkg/build.(*Build).Run\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/build/build.go:133\ngithub.com/cnoe-io/idpbuilder/pkg/cmd/create.create\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/cmd/create/root.go:139\ngithub.com/spf13/cobra.(*Command).execute\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:983\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1115\ngithub.com/spf13/cobra.(*Command).Execute\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1039\ngithub.com/cnoe-io/idpbuilder/pkg/cmd.Execute\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/cmd/root.go:30\nmain.main\n\t/home/runner/work/idpbuilder/idpbuilder/main.go:6\nruntime.main\n\t/home/runner/go/pkg/mod/golang.org/[email protected]/src/runtime/proc.go:267\nruntime.goexit\n\t/home/runner/go/pkg/mod/golang.org/[email protected]/src/runtime/asm_amd64.s:1650\nfailed to init node with kubeadm\nsigs.k8s.io/kind/pkg/errors.Wrap\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/errors/errors.go:47\nsigs.k8s.io/kind/pkg/cluster/internal/create/actions/kubeadminit.(*action).Execute\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/internal/create/actions/kubeadminit/init.go:84\nsigs.k8s.io/kind/pkg/cluster/internal/create.Cluster\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/internal/create/create.go:135\nsigs.k8s.io/kind/pkg/cluster.(*Provider).Create\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email 
protected]/pkg/cluster/provider.go:192\ngithub.com/cnoe-io/idpbuilder/pkg/kind.(*Cluster).Reconcile\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/kind/cluster.go:202\ngithub.com/cnoe-io/idpbuilder/pkg/build.(*Build).ReconcileKindCluster\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/build/build.go:85\ngithub.com/cnoe-io/idpbuilder/pkg/build.(*Build).Run\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/build/build.go:133\ngithub.com/cnoe-io/idpbuilder/pkg/cmd/create.create\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/cmd/create/root.go:139\ngithub.com/spf13/cobra.(*Command).execute\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:983\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1115\ngithub.com/spf13/cobra.(*Command).Execute\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1039\ngithub.com/cnoe-io/idpbuilder/pkg/cmd.Execute\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/cmd/root.go:30\nmain.main\n\t/home/runner/work/idpbuilder/idpbuilder/main.go:6\nruntime.main\n\t/home/runner/go/pkg/mod/golang.org/[email protected]/src/runtime/proc.go:267\nruntime.goexit\n\t/home/runner/go/pkg/mod/golang.org/[email protected]/src/runtime/asm_amd64.s:1650"
Error: failed to init node with kubeadm: command "docker exec --privileged localdev-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Usage:
idpbuilder create [flags]

Flags:
      --build-name string             Name for build (Prefix for kind cluster name, pod names, etc). (default "localdev")
      --extra-ports string            List of extra ports to expose on the docker container and kubernetes cluster as nodePort (e.g. "22:32222,9090:39090,etc").
  -h, --help                          help for create
      --host string                   Host name to access resources in this cluster. (default "cnoe.localtest.me")
      --ingress-host-name string      Host name used by ingresses. Useful when you have another proxy in front of ingress-nginx that idpbuilder provisions.
      --kind-config string            Path of the kind config file to be used instead of the default.
      --kube-version string           Version of the kind kubernetes cluster to create. (default "v1.29.2")
  -n, --no-exit                       When set, idpbuilder will not exit after all packages are synced. Useful for continuously syncing local directories. (default true)
  -p, --package strings               Paths to locations containing custom packages
  -c, --package-custom-file strings   Name of the package and the path to file to customize the package with. e.g. argocd:/tmp/argocd.yaml
      --port string                   Port number under which idpBuilder tools are accessible. (default "8443")
      --protocol string               Protocol to use to access web UIs. http or https. (default "https")
      --recreate                      Delete cluster first if it already exists.
      --use-path-routing              When set to true, web UIs are exposed under single domain name.

Global Flags:
  -l, --log-level string   Set the log verbosity. Supported values are: debug, info, warn, and error. (default "info")

failed to init node with kubeadm: command "docker exec --privileged localdev-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1

I don't see any ingress-nginx pod created, so the command
kubectl describe pod -n ingress-nginx

fails with the following error

$ kubectl describe pod -n ingress-nginx
E1102 12:24:00.238351    7501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1102 12:24:00.240173    7501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1102 12:24:00.242138    7501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1102 12:24:00.243849    7501 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"

@eliassal eliassal closed this as completed Nov 2, 2024
@eliassal
Author

eliassal commented Nov 2, 2024

Another strange thing is that when I issue the command
kubectl get nodes
I see the following error

E1102 18:40:02.933029    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1102 18:40:02.937221    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1102 18:40:02.939243    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1102 18:40:02.940877    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1102 18:40:02.942447    2784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
The connection to the server localhost:8080 was refused - did you specify the right host or port?

So it is trying port 8080 and not the usual 6443. Why is it looking for 8080, despite the fact that .kube/config indicates

server: https://kubernetes.docker.internal:6443

@nabuskey
Collaborator

nabuskey commented Nov 5, 2024

Were you able to resolve this? You shouldn't need to install kubeadm or kubelet. All you need is docker.

The kubectl output seems to indicate your kubeconfig is misconfigured somehow. Do you have a valid kubeconfig at ~/.kube/config? Is it pointing to the right port and host?
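
A few commands that usually help pinpoint this (a generic sketch, not specific to idpbuilder; kubectl falls back to http://localhost:8080 when it cannot load any kubeconfig or no current context is set):

# which kubeconfig file(s) kubectl is actually reading
echo $KUBECONFIG
# the context and server kubectl resolves right now
kubectl config current-context
kubectl config view --minify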

@eliassal
Author

eliassal commented Nov 5, 2024

No, Sir. I think it is correct; here is the content of .kube/config

image

and when I run

kubectl config get-contexts

It points to the right cluster
image

An here is the hosts file

image

When I run
netstat -aen
I can clearly see that port 6443 is open but cannot see port 8080, so where, or which component, should respond on this 8080 port?

image

@nabuskey
Collaborator

nabuskey commented Nov 5, 2024

You seem to have another cluster created by docker? When idpbuilder creates a cluster, by default, it creates a cluster inside of a docker container, and it is named localdev-control-plane. Let's try...

  1. Run docker container ls -a. Do you see a container with the name localdev-control-plane?
  2. If you see a container with the name, delete it. docker rm -f localdev-control-plane.
  3. Run idpbuilder create.
  4. If this gets stuck (more than 3 min), then run kubectl describe deployments -n ingress-nginx and kubectl get pods -n ingress-nginx.
  5. At this point, your kubeconfig should be updated to use the cluster created by idpbuilder. The name of the context should be kind-localdev.
  6. If you see pods with the name pattern like ingress-nginx-controller-<RANDOM_CHARS> running, run kubectl -n ingress-nginx logs deployment/ingress-nginx-controller. Note any error messages.

@eliassal
Author

eliassal commented Nov 5, 2024

Hi, no, there is no such container localdev-control-plane. Do you mean I should stop Kubernetes from running in Docker Desktop settings?
I tried running
sudo idpbuilder create
I get the following on the terminal right away

time=2024-11-05T18:40:34.840+01:00 level=INFO msg="Creating kind cluster" logger=setup
time=2024-11-05T18:40:34.901+01:00 level=INFO msg="Runtime detected" logger=setup provider=docker
time=2024-11-05T18:40:34.934+01:00 level=ERROR msg="Error starting kind cluster" logger=setup err="command \"docker ps -a --filter label=io.x-k8s.kind.cluster --format '{{.Label \"io.x-k8s.kind.cluster\"}}'\" failed with error: exit status 1\nsigs.k8s.io/kind/pkg/errors.WithStack\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/errors/errors.go:59\nsigs.k8s.io/kind/pkg/exec.(*LocalCmd).Run\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/exec/local.go:124\nsigs.k8s.io/kind/pkg/exec.OutputLines\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/exec/helpers.go:81\nsigs.k8s.io/kind/pkg/cluster/internal/providers/docker.(*provider).ListClusters\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/internal/providers/docker/provider.go:107\nsigs.k8s.io/kind/pkg/cluster.(*Provider).List\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/provider.go:202\ngithub.com/cnoe-io/idpbuilder/pkg/kind.(*Cluster).Exists\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/kind/cluster.go:128\ngithub.com/cnoe-io/idpbuilder/pkg/kind.(*Cluster).Reconcile\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/kind/cluster.go:167\ngithub.com/cnoe-io/idpbuilder/pkg/build.(*Build).ReconcileKindCluster\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/build/build.go:85\ngithub.com/cnoe-io/idpbuilder/pkg/build.(*Build).Run\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/build/build.go:133\ngithub.com/cnoe-io/idpbuilder/pkg/cmd/create.create\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/cmd/create/root.go:139\ngithub.com/spf13/cobra.(*Command).execute\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:983\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1115\ngithub.com/spf13/cobra.(*Command).Execute\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1039\ngithub.com/cnoe-io/idpbuilder/pkg/cmd.Execute\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/cmd/root.go:30\nmain.main\n\t/home/runner/work/idpbuilder/idpbuilder/main.go:6\nruntime.main\n\t/home/runner/go/pkg/mod/golang.org/[email protected]/src/runtime/proc.go:267\nruntime.goexit\n\t/home/runner/go/pkg/mod/golang.org/[email protected]/src/runtime/asm_amd64.s:1650\nfailed to list clusters\nsigs.k8s.io/kind/pkg/errors.Wrap\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/errors/errors.go:47\nsigs.k8s.io/kind/pkg/cluster/internal/providers/docker.(*provider).ListClusters\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/internal/providers/docker/provider.go:109\nsigs.k8s.io/kind/pkg/cluster.(*Provider).List\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/provider.go:202\ngithub.com/cnoe-io/idpbuilder/pkg/kind.(*Cluster).Exists\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/kind/cluster.go:128\ngithub.com/cnoe-io/idpbuilder/pkg/kind.(*Cluster).Reconcile\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/kind/cluster.go:167\ngithub.com/cnoe-io/idpbuilder/pkg/build.(*Build).ReconcileKindCluster\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/build/build.go:85\ngithub.com/cnoe-io/idpbuilder/pkg/build.(*Build).Run\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/build/build.go:133\ngithub.com/cnoe-io/idpbuilder/pkg/cmd/create.create\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/cmd/create/root.go:139\ngithub.com/spf13/cobra.(*Command).execute\n\t/home/runner/go/pkg/mod/github.com/spf13/[email 
protected]/command.go:983\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1115\ngithub.com/spf13/cobra.(*Command).Execute\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1039\ngithub.com/cnoe-io/idpbuilder/pkg/cmd.Execute\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/cmd/root.go:30\nmain.main\n\t/home/runner/work/idpbuilder/idpbuilder/main.go:6\nruntime.main\n\t/home/runner/go/pkg/mod/golang.org/[email protected]/src/runtime/proc.go:267\nruntime.goexit\n\t/home/runner/go/pkg/mod/golang.org/[email protected]/src/runtime/asm_amd64.s:1650"
Error: failed to list clusters: command "docker ps -a --filter label=io.x-k8s.kind.cluster --format '{{.Label "io.x-k8s.kind.cluster"}}'" failed with error: exit status 1
Usage:
  idpbuilder create [flags]

Flags:
      --build-name string             Name for build (Prefix for kind cluster name, pod names, etc). (default "localdev")
      --extra-ports string            List of extra ports to expose on the docker container and kubernetes cluster as nodePort (e.g. "22:32222,9090:39090,etc").
  -h, --help                          help for create
      --host string                   Host name to access resources in this cluster. (default "cnoe.localtest.me")
      --ingress-host-name string      Host name used by ingresses. Useful when you have another proxy in front of ingress-nginx that idpbuilder provisions.
      --kind-config string            Path of the kind config file to be used instead of the default.
      --kube-version string           Version of the kind kubernetes cluster to create. (default "v1.29.2")
  -n, --no-exit                       When set, idpbuilder will not exit after all packages are synced. Useful for continuously syncing local directories. (default true)
  -p, --package strings               Paths to locations containing custom packages
  -c, --package-custom-file strings   Name of the package and the path to file to customize the package with. e.g. argocd:/tmp/argocd.yaml
      --port string                   Port number under which idpBuilder tools are accessible. (default "8443")
      --protocol string               Protocol to use to access web UIs. http or https. (default "https")
      --recreate                      Delete cluster first if it already exists.
      --use-path-routing              When set to true, web UIs are exposed under single domain name.

Global Flags:
  -l, --log-level string   Set the log verbosity. Supported values are: debug, info, warn, and error. (default "info")

failed to list clusters: command "docker ps -a --filter label=io.x-k8s.kind.cluster --format '{{.Label "io.x-k8s.kind.cluster"}}'" failed with error: exit status 1


For the command
kubectl describe deployments -n ingress-nginx

I get

E1105 18:42:25.925570 10456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"

@nabuskey
Collaborator

nabuskey commented Nov 5, 2024

Do you mean I should stop Kubernetes from running in Docker Desktop settings?

If you are running a k8s cluster with docker desktop, that's fine. I don't foresee it being an issue.

If you run this command, what do you get? docker ps -a --filter label=io.x-k8s.kind.cluster --format '{{.Label "io.x-k8s.kind.cluster"}}'.

You shouldn't need to run idpbuilder with root privileges as long as Docker is set up correctly.
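
In case docker itself currently requires sudo, the usual fix for Docker Engine on Linux is to add your user to the docker group and start a new login session (this is an assumption about your setup; Docker Desktop on Linux manages its own user context, so it may not apply there):

sudo usermod -aG docker $USER
newgrp docker            # or log out and back in
docker run hello-world   # should now work without sudo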

@eliassal
Author

eliassal commented Nov 5, 2024

Nothing

image

I ran
idpbuilder create, it created localdev-control-plane

image

I waited almost 5 minutes, then it errored as follows, and localdev-control-plane disappeared from the containers list

[salam@demosfedora41 ~]$ idpbuilder create
time=2024-11-05T20:21:54.922+01:00 level=INFO msg="Creating kind cluster" logger=setup
time=2024-11-05T20:21:54.943+01:00 level=INFO msg="Runtime detected" logger=setup provider=docker
########################### Our kind config ############################
# Kind kubernetes release images https://github.com/kubernetes-sigs/kind/releases
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: "kindest/node:v1.29.2"
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 443
    hostPort: 8443
    protocol: TCP

containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gitea.cnoe.localtest.me:8443"]
    endpoint = ["https://gitea.cnoe.localtest.me"]
  [plugins."io.containerd.grpc.v1.cri".registry.configs."gitea.cnoe.localtest.me".tls]
    insecure_skip_verify = true

#########################   config end    ############################
time=2024-11-05T20:21:55.000+01:00 level=INFO msg="Creating kind cluster" logger=setup cluster=localdev
time=2024-11-05T20:27:21.580+01:00 level=ERROR msg="Error starting kind cluster" logger=setup err="command \"docker exec --privileged localdev-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6\" failed with error: exit status 1\nsigs.k8s.io/kind/pkg/errors.WithStack\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/errors/errors.go:59\nsigs.k8s.io/kind/pkg/exec.(*LocalCmd).Run\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/exec/local.go:124\nsigs.k8s.io/kind/pkg/cluster/internal/providers/docker.(*nodeCmd).Run\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/internal/providers/docker/node.go:146\nsigs.k8s.io/kind/pkg/exec.CombinedOutputLines\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/exec/helpers.go:67\nsigs.k8s.io/kind/pkg/cluster/internal/create/actions/kubeadminit.(*action).Execute\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/internal/create/actions/kubeadminit/init.go:81\nsigs.k8s.io/kind/pkg/cluster/internal/create.Cluster\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/internal/create/create.go:135\nsigs.k8s.io/kind/pkg/cluster.(*Provider).Create\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/provider.go:192\ngithub.com/cnoe-io/idpbuilder/pkg/kind.(*Cluster).Reconcile\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/kind/cluster.go:202\ngithub.com/cnoe-io/idpbuilder/pkg/build.(*Build).ReconcileKindCluster\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/build/build.go:85\ngithub.com/cnoe-io/idpbuilder/pkg/build.(*Build).Run\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/build/build.go:133\ngithub.com/cnoe-io/idpbuilder/pkg/cmd/create.create\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/cmd/create/root.go:139\ngithub.com/spf13/cobra.(*Command).execute\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:983\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1115\ngithub.com/spf13/cobra.(*Command).Execute\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1039\ngithub.com/cnoe-io/idpbuilder/pkg/cmd.Execute\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/cmd/root.go:30\nmain.main\n\t/home/runner/work/idpbuilder/idpbuilder/main.go:6\nruntime.main\n\t/home/runner/go/pkg/mod/golang.org/[email protected]/src/runtime/proc.go:267\nruntime.goexit\n\t/home/runner/go/pkg/mod/golang.org/[email protected]/src/runtime/asm_amd64.s:1650\nfailed to init node with kubeadm\nsigs.k8s.io/kind/pkg/errors.Wrap\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/errors/errors.go:47\nsigs.k8s.io/kind/pkg/cluster/internal/create/actions/kubeadminit.(*action).Execute\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/internal/create/actions/kubeadminit/init.go:84\nsigs.k8s.io/kind/pkg/cluster/internal/create.Cluster\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/cluster/internal/create/create.go:135\nsigs.k8s.io/kind/pkg/cluster.(*Provider).Create\n\t/home/runner/go/pkg/mod/sigs.k8s.io/[email 
protected]/pkg/cluster/provider.go:192\ngithub.com/cnoe-io/idpbuilder/pkg/kind.(*Cluster).Reconcile\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/kind/cluster.go:202\ngithub.com/cnoe-io/idpbuilder/pkg/build.(*Build).ReconcileKindCluster\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/build/build.go:85\ngithub.com/cnoe-io/idpbuilder/pkg/build.(*Build).Run\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/build/build.go:133\ngithub.com/cnoe-io/idpbuilder/pkg/cmd/create.create\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/cmd/create/root.go:139\ngithub.com/spf13/cobra.(*Command).execute\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:983\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1115\ngithub.com/spf13/cobra.(*Command).Execute\n\t/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1039\ngithub.com/cnoe-io/idpbuilder/pkg/cmd.Execute\n\t/home/runner/work/idpbuilder/idpbuilder/pkg/cmd/root.go:30\nmain.main\n\t/home/runner/work/idpbuilder/idpbuilder/main.go:6\nruntime.main\n\t/home/runner/go/pkg/mod/golang.org/[email protected]/src/runtime/proc.go:267\nruntime.goexit\n\t/home/runner/go/pkg/mod/golang.org/[email protected]/src/runtime/asm_amd64.s:1650"
Error: failed to init node with kubeadm: command "docker exec --privileged localdev-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Usage:
  idpbuilder create [flags]

Flags:
      --build-name string             Name for build (Prefix for kind cluster name, pod names, etc). (default "localdev")
      --extra-ports string            List of extra ports to expose on the docker container and kubernetes cluster as nodePort (e.g. "22:32222,9090:39090,etc").
  -h, --help                          help for create
      --host string                   Host name to access resources in this cluster. (default "cnoe.localtest.me")
      --ingress-host-name string      Host name used by ingresses. Useful when you have another proxy in front of ingress-nginx that idpbuilder provisions.
      --kind-config string            Path of the kind config file to be used instead of the default.
      --kube-version string           Version of the kind kubernetes cluster to create. (default "v1.29.2")
  -n, --no-exit                       When set, idpbuilder will not exit after all packages are synced. Useful for continuously syncing local directories. (default true)
  -p, --package strings               Paths to locations containing custom packages
  -c, --package-custom-file strings   Name of the package and the path to file to customize the package with. e.g. argocd:/tmp/argocd.yaml
      --port string                   Port number under which idpBuilder tools are accessible. (default "8443")
      --protocol string               Protocol to use to access web UIs. http or https. (default "https")
      --recreate                      Delete cluster first if it already exists.
      --use-path-routing              When set to true, web UIs are exposed under single domain name.

Global Flags:
  -l, --log-level string   Set the log verbosity. Supported values are: debug, info, warn, and error. (default "info")

failed to init node with kubeadm: command "docker exec --privileged localdev-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1






@nabuskey
Collaborator

nabuskey commented Nov 5, 2024

What do you get if you run docker exec --privileged localdev-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6 ?

@eliassal
Author

eliassal commented Nov 5, 2024

This is what I get, which I think is expected since the container does not exist
image

@nabuskey
Collaborator

nabuskey commented Nov 5, 2024

I see kind deletes them if it fails creating nodes. I opened a PR to make this process much easier to debug. This should help once it gets merged.

#435

In the meantime, are you able to create a kind cluster using the kind CLI?

Installation should be something like:

# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-linux-amd64
# For ARM64
[ $(uname -m) = aarch64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-linux-arm64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

Then run kind create cluster
See: https://kind.sigs.k8s.io/docs/user/quick-start/

@eliassal
Author

eliassal commented Nov 6, 2024

Good morning. I made another attempt: I stopped Kubernetes in Docker Desktop and reissued
idpbuilder create. Exactly the same behavior as the first time, it created
image
and I was able to copy the different details of the container (txt files enclosed),
and the process got stuck in a WAITING loop

image

Labels.txt
log.txt
NETWORK.txt
state.txt

PortBinding.txt

@eliassal
Author

eliassal commented Nov 6, 2024

and it continues. I will leave it in this loop until, if possible, I receive a response from your side

image

@eliassal
Author

eliassal commented Nov 6, 2024

Sorry, I did not see the response you posted at midnight. So what should I do now? Manually delete localdev-control-plane in the Docker Desktop interface first, then apply the commands?

# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-linux-amd64
# For ARM64
[ $(uname -m) = aarch64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-linux-arm64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

@nabuskey
Collaborator

nabuskey commented Nov 6, 2024

and it continues. I will leave it in this loop until, if possible, I receive a response from your side

This seems awfully similar to what I was helping @cmoulliard with. He was using Fedora 41 as well. If you still have it running, could you run this command?

kubectl logs -n ingress-nginx deployment/ingress-nginx-controller

Do you see error messages like these?

pthread_create() failed (11: Resource temporarily unavailable)
worker process 41 exited with fatal code 2 and cannot be respawned

If so, the fix was:

  1. Create a file: touch /tmp/cm.yaml
  2. Populate the file with this content:
apiVersion: v1
data:
  allow-snippet-annotations: "true"
  proxy-buffer-size: 32k
  use-forwarded-headers: "true"
  worker-processes: "4"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.11.2
  name: ingress-nginx-controller
  namespace: ingress-nginx
  3. Run idpbuilder create --recreate -c nginx:/tmp/cm.yaml (a quick way to verify the result is sketched below)
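
To confirm the override actually landed after the recreate, something like this should do (assuming the namespace and ConfigMap name from the manifest above):

kubectl -n ingress-nginx get configmap ingress-nginx-controller -o yaml | grep worker-processes
kubectl -n ingress-nginx logs deployment/ingress-nginx-controller | grep -iE 'worker|pthread_create'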

@eliassal
Author

eliassal commented Nov 6, 2024

@nabuskey Yes, the command
kind create cluster
went through and the cluster was created
image

and I was able to issue
kubectl get nodes

image

So what does this prove, and how does it help with running idpbuilder correctly?

@nabuskey
Collaborator

nabuskey commented Nov 6, 2024

So what does this prove, and how does it help with running idpbuilder correctly?

This proves that your system can create kind clusters correctly, so we should be able to do the same. Where you get stuck is very likely in the components we install (argocd, ingress-nginx, and gitea).

I think my previous comment here is the next step. #430 (comment)

@eliassal
Author

eliassal commented Nov 6, 2024

OK, so I understand you need to update idpbuilder; how much time might that take? Should I now create the file /tmp/cm.yaml and run idpbuilder create again?

@nabuskey
Collaborator

nabuskey commented Nov 6, 2024

You don't need a new version of idpbuilder. You should be able to use your existing one.

@eliassal
Author

eliassal commented Nov 7, 2024

@nabuskey I did the steps you mentioned:

  • created /tmp/cm.yml
  • ran idpbuilder create --recreate -c nginx:/tmp/cm.yaml

The cluster and nginx are created, but the idpbuilder create command is still stuck as always

when I run
kubectl get pods -n ingress-nginx
I get
image

Enclosed are 2 files from the describe commands

kubectl describe deployments -n ingress-nginx > ingress-nginxDescribe.txt
kubectl describe deployments -n ingress-nginx-controller

ingress-nginx-controllerDescribe.txt
ingress-nginxDescribe.txt

and again, idpbuilder is stuck in a loop
image

@nabuskey
Collaborator

nabuskey commented Nov 7, 2024

Looks like your node doesn't have enough CPU resources. What machine are you running this on? Ingress-nginx only requests 100m CPU so I am surprised you are getting this.

Run something like:

lshw
lscpu

If your machine is underpowered, you could run it in Codespaces.

@eliassal
Author

eliassal commented Nov 7, 2024

It is a Hyper-V VM with 4 CPUs and 8 GB of RAM

@eliassal
Author

eliassal commented Nov 7, 2024

Here is the output of both commands lshw and lscpu
lscpu.txt
lshw.txt

@nabuskey
Collaborator

nabuskey commented Nov 7, 2024

Hmm does the node actually reflect available resources? By default, it should make all resources available on the machine.

kubectl describe nodes

@nabuskey nabuskey reopened this Nov 7, 2024
@eliassal
Author

eliassal commented Nov 7, 2024

As you can see I am in the right context but it is erroring

image

@eliassal
Author

eliassal commented Nov 7, 2024

OK, here is the output of kubectl describe nodes; it seems everything is OK
nodes.txt

I issued
kubectl get namespaces
I got
image

I issued
kubectl get pods -n argocd

I got
image

but I still can't access the Argo site or Gitea
image

@nabuskey
Collaborator

nabuskey commented Nov 7, 2024

Oh the node has 1 CPU available. And 95% of it is already utilized. Therefore K8s cannot schedule the pod.

Capacity:
  cpu:                1
  ephemeral-storage:  65739308Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             2269160Ki
  pods:               110

And

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                950m (95%)   100m (10%)
  memory             290Mi (13%)  390Mi (17%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)

So, why isn't it picking up the number of CPUs you have available? Hmmm.
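
One guess: Docker Desktop on Linux runs its own VM, so the kind node only sees whatever CPU/memory is allocated to that VM, not the Hyper-V VM's full 4 CPUs (this is an assumption about your setup). A quick way to check what Docker itself reports, using the standard docker CLI:

docker info --format 'CPUs: {{.NCPU}}  Memory: {{.MemTotal}}'

If that shows 1 CPU, the limit is set under Docker Desktop's Settings > Resources.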

@eliassal
Author

eliassal commented Nov 8, 2024

Really? I am not a k8s expert; have you come across this before? How can we tell the node to use more CPUs?

@eliassal
Author

eliassal commented Nov 8, 2024

In Docker settings, I increased the CPU count to 2 instead of 1, and now it shows

Capacity:
  cpu:                2
  ephemeral-storage:  65739308Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             2268908Ki
  pods:               110

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                1050m (52%)  100m (5%)
  memory             380Mi (17%)  390Mi (17%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)

I was able to access the argo site

image

gitea is also accessible

Can you please share some tutorials to go further? I agree with you that we should be better able to debug the setup process.
So many thanks for your help

@cmoulliard
Contributor

Can you share what this command shows, please?

kubectl describe node/<NAME_OF_THE_NODE>

@eliassal
Author

eliassal commented Nov 8, 2024

Also, can you please tell me if it is possible to access Argo and Gitea remotely, such as from my laptop or another desktop inside my network, and also from outside it, as I don't want to have to work from inside the VM

@cmoulliard
Contributor

Also, can you please tell me if it is possible to access Argo and Gitea remotely, such as from my laptop or another desktop inside my network, and also from outside it, as I don't want to have to work from inside the VM

Yes, you can on your laptop, using the URLs documented here: https://cnoe.io/docs/reference-implementation/installations/idpbuilder/usage#basic-usage

You can also create a cluster on a remote machine and access it from another one by passing the --host parameter, where the value could be IP.nip.io or domain.name, etc.
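
For example (just a sketch; the IP below is a placeholder for the machine that runs the cluster):

idpbuilder create --host 192.168.1.50.nip.io

The UIs should then be reachable under that name from other machines, e.g. something like https://argocd.192.168.1.50.nip.io:8443/, assuming port 8443 is reachable through your firewall.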

Remark: to discuss such questions, please contact the idpbuilder members using Slack: https://github.com/cnoe-io/idpbuilder?tab=readme-ov-file#community

@eliassal
Author

eliassal commented Nov 8, 2024

Thanks @cmoulliard, here is the output of
kubectl describe node/localdev-control-plane
node-localdev.txt

@cmoulliard
Contributor

kubectl describe node/localdev-control-plane

I don't see any disk or cpu/memory issue according to

Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 08 Nov 2024 10:45:48 +0100   Thu, 07 Nov 2024 11:19:41 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 08 Nov 2024 10:45:48 +0100   Thu, 07 Nov 2024 11:19:41 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 08 Nov 2024 10:45:48 +0100   Thu, 07 Nov 2024 11:19:41 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Fri, 08 Nov 2024 10:45:48 +0100   Thu, 07 Nov 2024 11:22:21 +0100   KubeletReady                 kubelet is posting ready status

Can you access the following url locally: https://argocd.cnoe.localtest.me:8443/applications

@eliassal
Author

eliassal commented Nov 8, 2024

Yes, as I said earlier, I am now able to access both; Gitea took a bit more time to become ready than the ArgoCD container. However, as agreed with @nabuskey, idpbuilder needs to give the right details when it is not able to go through.

@eliassal
Author

eliassal commented Nov 8, 2024

@cmoulliard & @nabuskey, please let me ask this last question and then I will switch to Slack. At https://cnoe.io/docs/reference-implementation/integrations/reference-impl
it is indicated that if I want to install the different components, I should run the create command as follows

idpbuilder create --use-path-routing \
  --package https://github.com/cnoe-io/stacks//ref-implementation

As you know, I was struggling to get local-dev up and running; is it possible to add the different components individually (Argo Workflows, Backstage, Crossplane, ...)? If yes, what is the shape of the command and params? Thanks again

@cmoulliard
Contributor

As you know, I was struggling to get local-dev up and running; is it possible to add the different components individually (Argo Workflows, Backstage, Crossplane, ...)? If yes, what is the shape of the command and params? Thanks again

Yes. You can install a package individually, or several packages; locally, a package is a folder containing manifest files + an Argo CD Application(Set) file.

Example: installing the custom package1 on top of the kubernetes cluster (= kind + ingress + gitea + argocd as the core packages):

git clone https://github.com/cnoe-io/stacks; cd stacks
idpbuilder create \
  -p basic/package1
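
Since -p, --package accepts multiple values, you should also be able to pass it several times in one run (the package paths below are only illustrative and must exist in the cloned stacks repo):

idpbuilder create -p basic/package1 -p basic/package2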

@nabuskey
Collaborator

nabuskey commented Nov 8, 2024

As you know, I was struggling to get local-dev up and running; is it possible to add the different components individually (Argo Workflows, Backstage, Crossplane, ...)? If yes, what is the shape of the command and params? Thanks again

If you are talking about the ability to select which components to install from the ref impl, it's not possible right now. A stack is a tightly coupled bundle of packages that work together, at least for now. To make it more modular, we have to think about how to go about it cleanly. We discussed this in previous meetings but haven't reached any conclusions.

Also, can you please tell me if it is possible to access Argo and Gitea remotely, such as from my laptop or another desktop inside my network, and also from outside it, as I don't want to have to work from inside the VM

It's possible with SSH tunneling or any other tunneling mechanism that maps the port to your local loopback interface. For example, for my own development setup, I use SSH and SSM agent.

For example:

ssh -L 8443:localhost:8443 user@host

Then open your browser, go to https://argocd.cnoe.localtest.me:8443

@eliassal
Author

eliassal commented Nov 8, 2024

@cmoulliard, I did the clone, cd'd to stacks, and tried to run
idpbuilder create -p basic/package1
I get the following right away (despite the fact that local-dev is running and I can access ArgoCD and Gitea; as you can also notice in the 2nd snapshot, Docker is up and running)

image

image

This happens with any other package I tried

idpbuilder create -p ref-implementation
idpbuilder create -p ref-implementation/backstage

@eliassal
Author

eliassal commented Nov 8, 2024

@nabuskey I tried
idpbuilder create -p ref-implementation
but I am getting the same error as above

@nabuskey
Collaborator

nabuskey commented Nov 8, 2024

I assume docker is running, correct? e.g. commands like docker images work.

Can you try it with the newest version (v0.8.1)? https://github.com/cnoe-io/idpbuilder/releases/tag/v0.8.1
We did update the way we detect the runtime provider.

@eliassal
Author

eliassal commented Nov 8, 2024

Yes @nabuskey
I used
curl -fsSL https://raw.githubusercontent.com/cnoe-io/idpbuilder/main/hack/install.sh | bash
when I installed idpbuilder.
Will running it again update it to the new version?

@nabuskey
Collaborator

nabuskey commented Nov 8, 2024

It should overwrite it, but if it doesn't, you can download it from the release page I linked above. The version should be v0.8.1 when you run idpbuilder version.

@eliassal
Author

eliassal commented Nov 8, 2024

OK, I will try

@eliassal
Author

eliassal commented Nov 8, 2024

I updated to version 0.8.1; now whatever I run I get

image

when I run

idpbuilder create --use-path-routing --package ref-implementation

I get
image

@nabuskey
Collaborator

nabuskey commented Nov 8, 2024

Ah, you need to re-create the cluster because the --use-path-routing flag is incompatible with your previous run. So run:

idpbuilder create --use-path-routing --package ref-implementation --recreate

See: https://cnoe.io/docs/reference-implementation/installations/idpbuilder/how-it-works#domain-based-and-path-based-routing
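
To illustrate the difference (my reading of the doc above and of the --use-path-routing flag description, so treat the exact URLs as an approximation):

# domain-based routing (default): one host name per tool
https://argocd.cnoe.localtest.me:8443/
https://gitea.cnoe.localtest.me:8443/
# path-based routing (--use-path-routing): everything under a single domain
https://cnoe.localtest.me:8443/argocd
https://cnoe.localtest.me:8443/gitea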

@eliassal
Author

eliassal commented Nov 8, 2024

Thanks, but it errors whether I use --use-path-routing or not! So what is the difference between a standard idpbuilder create and idpbuilder create --use-path-routing?
If you look at the 1st screen, I issued
idpbuilder create -p basic/package1
and even with the reference implementation
idpbuilder create --package ref-implementation
image

@nabuskey
Collaborator

nabuskey commented Nov 8, 2024

Did you pass the --recreate flag? This flag instructs idpbuilder to recreate the cluster. It is primarily used when the supplied options do not match.

idpbuilder create --use-path-routing --package ref-implementation --recreate

@eliassal
Author

eliassal commented Nov 9, 2024

OK, I will, but please tell me: what is the difference between using --use-path-routing and not using it?
2nd, will the existing cluster be deleted, so that I lose all the config I made to it?

@nabuskey
Collaborator

nabuskey commented Nov 9, 2024

Path based routing vs domain based routing is documented here.

https://cnoe.io/docs/reference-implementation/installations/idpbuilder/how-it-works#domain-based-and-path-based-routing

If you recreate the cluster, you will lose your config changes. Depending on what config changes you made, there are ways to make it reproducible. What did you change?

If you want to just test something out in a new cluster, you can always create a new one with the name and port flags, e.g. --build-name new-cluster --port 7443.
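
A full command would look something like this (the cluster name is just an example; per the flag list above, the name flag is --build-name):

idpbuilder create --build-name new-cluster --port 7443

That gives you a second, throwaway cluster whose tools are served on port 7443, without touching the existing one.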

@eliassal
Author

eliassal commented Nov 9, 2024

Thanks, I did recreate with the --use-path-routing flag, although during the create I again got an endless loop,
as you can see in the snapshot
image

I rebooted; it seems that all namespaces and pods are there, up and running
image

but I can't access https://argocd.cnoe.localtest.me:8443/ or https://gitea.cnoe.localtest.me:8443/
For the Gitea site I always get

image

then
image
for argocd, always getting
image

@eliassal
Author

eliassal commented Nov 9, 2024

@nabuskey please ignore my last message, as I recreated the cluster without the --use-path-routing flag; everything went through with no waiting loop, and I was able to access ArgoCD and Gitea.
So it seems that this flag has some issues for accessing https://..../argocd and https://..../gitea

@eliassal
Author

@nabuskey this idpbuilder tool is really very interesting; it seems a lot of effort has gone into it, and I appreciate it.
I was able to access almost all the sites. However, I tried to follow the instructions at
https://github.com/cnoe-io/stacks/blob/main/ref-implementation/README.md
and when I access Backstage as user1 and click on Create, I get a screen with no templates as indicated in the readme, despite the fact that I can see both deployed in ArgoCD

image

@nabuskey
Collaborator

Hey @eliassal, I've been busy with KubeCon. Sorry for the late reply. Are you still having issues?
