
higress-gateway installation fails: container not ready, readiness probe returns 503 #402

Closed
fotan123456 opened this issue Jun 28, 2023 · 6 comments

Comments

@fotan123456

Installed via the Helm method described on the official site. The controller and console are fine, but the gateway is failing.
Partial `pod describe` output:
Volumes:
  istio-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  43200
  istio-ca-root-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      higress-ca-root-cert
    Optional:  false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      higress-config
    Optional:  false
  istio-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:
  proxy-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:
  podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
      metadata.annotations -> annotations
      requests.cpu -> cpu-request
      limits.cpu -> cpu-limit
  kube-api-access-hktvs:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:       Guaranteed
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From     Message
  ----     ------     ----                   ----     -------
  Warning  Unhealthy  95s (x34799 over 18h)  kubelet  Readiness probe failed: HTTP probe failed with statuscode: 503

Container error log:
2023-06-28T12:59:51.805336555+08:00 [Envoy (Epoch 0)] [2023-06-28 04:59:51.805][16][error][config] listener '0.0.0.0_80' failed to bind or apply socket options: cannot bind '0.0.0.0:80': Permission denied

2023-06-28T12:59:51.805405296+08:00 [Envoy (Epoch 0)] [2023-06-28 04:59:51.805][16][warning][config] gRPC config for type.googleapis.com/envoy.config.listener.v3.Listener rejected: Error adding/updating listener(s) 0.0.0.0_80: cannot bind '0.0.0.0:80': Permission denied

@CH3CHO
Collaborator

CH3CHO commented Jun 28, 2023

Please provide the following information:

  1. Host OS and version (for Linux, also include the kernel version)
  2. K8s version
  3. Whether it is installed into a Kind cluster
  4. The Helm install command

@fotan123456
Author

fotan123456 commented Jun 28, 2023

1. Linux version 5.4.119-19-0009.11 centos
Linux 5.4.119-19-0009.11 SMP Wed Oct 5 18:41:07 CST 2022 x86_64 x86_64 x86_64 GNU/Linux
2. v1.26.1-tke.1
3. No
4、helm repo add higress.io https://higress.io/helm-charts
helm install higress -n higress-system higress.io/higress --create-namespace --render-subchart-notes --set higress-console.domain=console.higress.io,higress-core.gateway.replicas=1

@johnlanni
Collaborator

Did you modify the deployment to enable hostNetwork, so the pod lacks permission to listen on the host's port 80? If so, please add this parameter: --set higress-core.gateway.hostNetwork=true

@johnlanni
Collaborator

It could also be a limitation of TKE's own CNI. You can adjust the deployment following this configuration:
https://github.com/alibaba/higress/blob/fc05a3b256a775adcc64198c3a95ffd3a11a4cfd/helm/core/templates/deployment.yaml#L96C1-L105
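For reference, the container securityContext that the linked template renders is essentially the one used in the `helm upgrade` command later in this thread. A sketch (field values taken from that command, not re-verified against the chart template):

```yaml
# Sketch of the gateway container securityContext, based on the
# --set-json values shown later in this thread.
securityContext:
  capabilities:
    drop:
      - ALL
    add:
      - NET_BIND_SERVICE   # permits binding privileged ports (< 1024), e.g. 0.0.0.0:80
  runAsUser: 0
  runAsGroup: 1337
  runAsNonRoot: false
  allowPrivilegeEscalation: true
  readOnlyRootFilesystem: true
```

Without NET_BIND_SERVICE, a non-privileged Envoy process cannot bind port 80, which matches the "Permission denied" error in the log above.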

@fotan123456
Author

It could also be a limitation of TKE's own CNI. You can adjust the deployment following this configuration: https://github.com/alibaba/higress/blob/fc05a3b256a775adcc64198c3a95ffd3a11a4cfd/helm/core/templates/deployment.yaml#L96C1-L105

Thanks a lot! This approach solved it.

@johnlanni
Collaborator

If you run into a similar problem, you can use this command to persist the configuration in values, so it won't be overwritten by later upgrades:

helm upgrade higress higress.io/higress -n higress-system --set-json higress-core.gateway.containerSecurityContext='{"capabilities":{"drop":["ALL"],"add":["NET_BIND_SERVICE"]},"runAsUser":0,"runAsGroup":1337,"runAsNonRoot":false,"allowPrivilegeEscalation":true,"readOnlyRootFilesystem":true}'
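Equivalently, the same settings can be kept in a values file instead of a long `--set-json` flag. A sketch (the nesting below mirrors the `higress-core.gateway.containerSecurityContext` path in the command above; not verified against the chart's values schema):

```yaml
# values.yaml — equivalent to the --set-json flag above (assumed nesting)
higress-core:
  gateway:
    containerSecurityContext:
      capabilities:
        drop:
          - ALL
        add:
          - NET_BIND_SERVICE
      runAsUser: 0
      runAsGroup: 1337
      runAsNonRoot: false
      allowPrivilegeEscalation: true
      readOnlyRootFilesystem: true
```

Then apply it with `helm upgrade higress higress.io/higress -n higress-system -f values.yaml`, which keeps the override under version control alongside any other chart values.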
