
Unable to avoid unhealthy backend / 502s on rolling deployments #1718

Open
rocketraman opened this issue May 19, 2022 · 47 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

@rocketraman

rocketraman commented May 19, 2022

I have a GCE ingress in front of an HPA-managed deployment (at this time, with a single replica).

On a rolling deployment, I sometimes run into the backend being marked as unhealthy, resulting in 502 errors, usually for about 15-20 seconds.

According to the pod events, the neg-readiness-reflector appears to mark cloud.google.com/load-balancer-neg-ready to True for the pod before it is actually ready:

Normal   LoadBalancerNegNotReady            18m                neg-readiness-reflector                Waiting for pod to become healthy in at least one of the NEG(s): [k8s1-600f13cf-default-my-svc-8080-f82bf741]
Normal   LoadBalancerNegWithoutHealthCheck  16m                neg-readiness-reflector                Pod is in NEG "Key{\"k8s1-600f13cf-default-my-svc-8080-f82bf741\", zone: \"europe-west1-c\"}". NEG is not attached to any Backend Service with health checking. Marking condition "cloud.google.com/load-balancer-neg-ready" to True.
Warning  Unhealthy                          16m                kubelet                                Readiness probe failed: Get "http://10.129.128.130:8080/healthz": dial tcp 10.129.128.130:8080: connect: connection refused

While in this state, the previous pod terminates, but the load balancer does not route requests to the new pod, resulting in 502s.

I do have a deployment strategy set that should not allow this, but I guess the NEG being marked Ready is subverting it:

  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0

My deployment also defines a readiness probe, as can be seen in the events above.
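For reference, the relevant part of the pod spec looks roughly like this (container name and probe timings are illustrative; the /healthz path and port 8080 match the events above):

  containers:
    - name: my-server
      readinessProbe:
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 5
        failureThreshold: 2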

I also have a health check configuration defined for the backend:

apiVersion: v1
kind: Service
metadata:
  name: my-svc
  labels:
    app.kubernetes.io/name: mysvc
  annotations:
    cloud.google.com/backend-config: '{"ports": {"8080":"my-backendconfig"}}'
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: mysvc
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  timeoutSec: 45
  connectionDraining:
    drainingTimeoutSec: 0
  healthCheck:
    checkIntervalSec: 5
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 2
    type: HTTP
    requestPath: /healthz
    port: 8080

I found this Stack Overflow question in which the user works around the issue by delaying pod termination with a sleep in a lifecycle.preStop hook, but that seems more like a hack than a proper solution: https://stackoverflow.com/questions/71127572/neg-is-not-attached-to-any-backendservice-with-health-checking.
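For reference, that workaround boils down to adding something like the following to the pod spec (the sleep length is arbitrary):

  containers:
    - name: my-server
      lifecycle:
        preStop:
          exec:
            command: ["/bin/sh", "-c", "sleep 30"]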

@rocketraman
Author

May also be related to #1656.

@rocketraman
Author

The "NEG is not attached to any Backend Service with health checking" message seems to be a red herring. The problem happens on every update, and I haven't seen that message again.

  Normal   Started                  7m49s                  kubelet                                Started container my-server
  Warning  Unhealthy                7m34s (x3 over 7m44s)  kubelet                                Readiness probe failed: Get "http://10.129.128.66:8080/healthz": dial tcp 10.129.128.66:8080: connect: connection refused
  Normal   LoadBalancerNegReady     7m31s                  neg-readiness-reflector                Pod has become Healthy in NEG "Key{\"k8s1-600f13cf-default-my-svc-8080-f82bf741\", zone: \"europe-west1-b\"}" attached to BackendService "Key{\"k8s1-600f13cf-default-my-svc-8080-f82bf741\"}". Marking condition "cloud.google.com/load-balancer-neg-ready" to True. 

As soon as the LoadBalancerNegReady event occurred, the 502s started happening, and kept happening (intermittently) for about 15-20 seconds.

@swetharepakula
Member

/kind support

A clarification question: are you seeing that the requests are going to the pod that has been deleted when you get 502s, or are you seeing that the requests are going to the new pod?

I am wondering if the issue you are facing is that the old pod on termination has not been removed from the NEG yet, so there is a latency between the pod being removed in Kubernetes and the pod being removed from the NEG.

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label May 20, 2022
@rocketraman
Author

rocketraman commented May 20, 2022

A clarification question: are you seeing that the requests are going to the pod that has been deleted when you get 502s, or are you seeing that the requests are going to the new pod?

I am wondering if the issue you are facing is that the old pod on termination has not been removed from the NEG yet, so there is a latency between the pod being removed in Kubernetes and the pod being removed from the NEG.

I can see that the 502 responses are interspersed with valid responses from the new pod, so I think you are right -- the 502 errors occur because the load balancer is still sending requests to the pod that has terminated in k8s.

I thought the issue was my setting of unhealthyThreshold: 10 on the health check, overriding the default of 2 and causing the system to take too long to detect that the terminated pod is unhealthy, but that wasn't the case. I've reset all configurations to what I believe are the defaults, and still the 502s are unavoidable on a deployment update.

@rocketraman
Author

Forcing the stopping container to stick around for a few extra seconds via a lifecycle.preStop hook that sleeps for 30 seconds seems to be the only valid workaround I've found so far. This also confirms that the problem is with requests being sent to the terminating pod, not the new pod.

@rocketraman
Author

rocketraman commented May 26, 2022

Even lifecycle.preStop does not appear to be a 100% consistent workaround -- today during a deployment I saw two 502 errors while pinging the endpoint every 2s, even though the old pod was still in "Terminating" state due to lifecycle.preStop.

The new pod did have the error message I originally mentioned above:

  Normal   LoadBalancerNegWithoutHealthCheck  2m14s                  neg-readiness-reflector                Pod is in NEG "Key{\"k8s1-600f13cf-default-my-svc-8080-f82bf741\", zone: \"europe-west1-b\"}". NEG is not attached to any Backend Service with health checking. Marking condition "cloud.google.com/load-balancer-neg-ready" to True.

in which the "cloud.google.com/load-balancer-neg-ready" condition appears to be marked True before it should be.

So, to summarize, it seems there are actually two problems contributing to 502 errors during rolling deployments:

  1. The load balancer is hitting the old pod even though it is being shut down (using lifecycle.preStop does seem to work around this issue). This is consistent and easily reproduced.
  2. The load balancer is hitting the new pod even though it is not ready yet due to the "NEG is not attached to any Backend Service with health checking." error. This is intermittent and more difficult to reproduce, but not uncommon.

@swetharepakula
Member

@rocketraman, as these pod changes are occurring, are there any node or zone changes occurring as well?

@rocketraman
Author

@swetharepakula Yes, this cluster is an autopilot cluster. When I deploy this update, the deployment generally requires a new node to be created to host the new pod.

@rocketraman
Author

In addition, usually a few minutes later the pod is migrated back to the original node, and the new node is shut down. That process rarely completes without some 502s as well.

@swetharepakula
Member

@rocketraman, is that node being created in a new zone? Is the autopilot cluster a regional cluster?

@rocketraman
Author

is that node being created in a new zone? Is the autopilot cluster a regional cluster?

@swetharepakula I haven't checked specifically on the zone of the new node. It is a regional cluster (my understanding is that all autopilot clusters are regional clusters), so I presume autopilot is indeed balancing the new node across zones.

@rocketraman
Author

Yes, I just confirmed the new node is in a different zone than the existing node that hosts that pod.

@swetharepakula
Member

swetharepakula commented May 27, 2022

So I believe the following is what is happening:

  1. The load balancer is hitting the old pod even though it is being shut down (using lifecycle.preStop does seem to work around this issue). This is consistent and easily reproduced.

The NEG controller responds immediately to endpoint changes. However, there can be latency due to the time it takes for a detach operation to complete, and while the detach is in progress the load balancer may still route traffic to the terminating pod. There are a few options to mitigate this:

  1. The lifecycle.preStop hook, as you are already using.
  2. Use terminationGracePeriodSeconds so that your application will continue to accept traffic for a little longer.
  3. Adjust health checks to be sensitive enough to recognize that the endpoint is gone. This option will only reduce the frequency of 502s but won't necessarily eliminate them.

Since you are already doing (1), that is probably the easiest approach.
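As an illustration of option (3), a more aggressive BackendConfig health check could look like the following; the values are only an example, and as noted this reduces but does not eliminate the 502s:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  healthCheck:
    checkIntervalSec: 3    # probe more frequently than the 5s used above
    timeoutSec: 3
    healthyThreshold: 1
    unhealthyThreshold: 2  # mark the endpoint unhealthy after two failed probes
    type: HTTP
    requestPath: /healthz
    port: 8080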

  2. The load balancer is hitting the new pod even though it is not ready yet due to the "NEG is not attached to any Backend Service with health checking." error. This is intermittent and more difficult to reproduce, but not uncommon.

This is a race between the Ingress controller and a workload being scheduled/started on a new node. The NEG controller sees the update before the Ingress controller and adds the endpoint. However, if this is on a node in a new zone, the NEG controller creates a new NEG in that zone, and the Ingress controller then needs to attach that NEG to the backend service. If the Ingress controller doesn't finish that before the workloads are scheduled on the node, those new pods will have their readiness gates switched to Ready immediately, since the NEG is not yet in any BackendService.

For non-Autopilot clusters, our recommendation is to reduce the number of zone changes and try to run workloads in every zone the cluster is in. Since this is an Autopilot cluster, your options may be limited in this regard.

At this time we are still looking into how to make the experience better for both of these cases.

@rocketraman
Author

rocketraman commented May 30, 2022

  • use terminationGracePeriodSeconds so that your application will continue to accept traffic for a little longer.

FYI I don't think this works -- at least it didn't when I tried it. I wouldn't expect it to, either: terminationGracePeriodSeconds tells k8s that if the workload takes longer to stop, it should be given that extra time before the pod is forcefully terminated. Increasing the value doesn't actually delay termination if the pod exits quickly on its own, which is what we'd need in this case. I believe the GKE docs on this are incorrect, and should recommend the lifecycle.preStop solution instead (with a corresponding increase in terminationGracePeriodSeconds if the default isn't high enough to cover the normal shutdown plus the preStop sleep time).
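For concreteness, a sketch of the combination I mean (the numbers are only an example; the grace period has to cover the preStop sleep plus the application's normal shutdown time):

  terminationGracePeriodSeconds: 60   # e.g. 30s preStop sleep + normal shutdown time
  containers:
    - name: my-server
      lifecycle:
        preStop:
          exec:
            command: ["/bin/sh", "-c", "sleep 30"]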

For non auto-pilot clusters our recommendation is to reduce the number of zone changes and try to run workloads in every zone the cluster is in. Since this is an autopilot cluster, your options may be limited in this regard.

That is too bad, but it does suggest a workaround: scale up the deployment so that there is at least one pod per zone. That does raise system cost significantly when only one pod is required to meet load/redundancy requirements.
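One way to express "at least one pod per zone" (just a sketch, not something discussed above) is to combine enough replicas with a topology spread constraint on the pod template:

  replicas: 3   # e.g. at least one pod per zone in a three-zone region
  template:
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: mysvc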

Will this become the tracking issue for improving this behavior, or can you point me to the issue I should follow?

@swetharepakula
Member

The workaround I suggested with terminationGracePeriodSeconds is not the documented way to use the field. Typically it gives the pod more time to shut down so it can exit gracefully, as you describe. To use the workaround I suggested, you would have to modify the application shutdown logic to not start shutting down immediately, but instead continue accepting requests while failing the health check for a period of time. So your terminationGracePeriodSeconds would be timeToShutdown + GCE programming latency, and your application would have to wait that GCE programming latency before beginning shutdown operations. In comparison, lifecycle.preStop is probably easier to configure.

For now we will keep this issue open to communicate updates on this front. There isn't another issue open to track this work.

@talzion12

I'm also getting 502 errors during availability zone changes. I added the cloud.google.com/neg: '{"ingress":false}' annotation to my service so that the load balancer forwards to all instance groups in the cluster and I think it solves the problem relatively cleanly.
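For reference, the change amounts to nothing more than the annotation on the Service (a sketch, reusing the service name from earlier in the thread; spec omitted):

apiVersion: v1
kind: Service
metadata:
  name: my-svc
  annotations:
    cloud.google.com/neg: '{"ingress": false}'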

Are there drawbacks to this workaround? Any reason it wasn't suggested in this thread before?

@saez0pub

saez0pub commented Dec 7, 2022

I think you missed this important message, which shouldn't be occurring:

Normal LoadBalancerNegWithoutHealthCheck 16m neg-readiness-reflector Pod is in NEG "Key{"k8s1-600f13cf-default-my-svc-8080-f82bf741", zone: "europe-west1-c"}". NEG is not attached to any Backend Service with health checking. Marking condition "cloud.google.com/load-balancer-neg-ready" to True.

This can occur if the attachment of the pod to a NEG is done, but the backend isn't yet linked to that NEG.
This is not managed by the reflector: https://github.com/kubernetes/ingress-gce/blob/master/pkg/neg/readiness/reflector.go#L184

The result is that the old pod is detached and you have a wonderful backend service without any NEG, and as a result, 502 errors.

I also added minReadySeconds as a safeguard against this event, to wait a bit before removing the old pod, but it isn't a 100% guaranteed success, especially if something is exceptionally slow at pod startup. As I have two backend services linked to my service, I often encounter this issue 😕.
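For anyone trying the same thing, it is a one-line addition on the Deployment spec (the value is only an example):

  # a new pod only counts as available minReadySeconds after it becomes Ready,
  # which together with maxUnavailable: 0 delays removal of the old pod
  minReadySeconds: 30
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0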

@revero-doug

revero-doug commented Jan 23, 2023

@swetharepakula what are the recommended steps to reliably recover a system experiencing this issue, separate from the preventative configuration you recommended? Is deleting the service, ingress, and deployment and then reapplying those k8s resources sufficient? I'm not seeing intermittent 502s but rather persistent ones once the system enters this failure mode.

Thankfully this is a development cluster and I have the option to completely destroy/deprovision and reprovision/deploy, but that's obviously not a viable option in a production cluster.

@swetharepakula
Member

@talzion12, with your approach you have switched to using instance groups, which is not our recommended or default stack. NEGs provide a container-native solution, while the instance group solution is susceptible to the double-hop problem. NEGs are our recommended approach.

@saez0pub, the NEG controller and the Ingress controller operate in parallel, so there can be a race when a new NEG is created but has not been added to the backend service yet. We have released a fix as part of ingress-gce v1.20 that should ensure that NEGs are created sooner (as soon as the node in the new zone is ready), which should hopefully give the Ingress controller more time to add the NEG to the backend service before the workloads are scheduled in the new zone.

@revero-doug, this does not sound like a 502 due to a rolling deployment. Can you expand more on what the symptoms are, and what is occurring in the cluster? Since it is a different kind of issue, can you open a new issue with those details? Thanks!

@bgroupe

bgroupe commented Feb 6, 2023

@swetharepakula This started happening very frequently in one of our clusters after upgrading to 1.24. The k8s/gce-ingress versions have not been mentioned previously in this issue -- is it possible this race was introduced between 1.21 and 1.24?

The version mapping in the readme has not been updated in some time... Is there any other way to determine what gce-ingress version we are using, or what future k8s version the fix in v1.20 will be tied to?

@saez0pub

Hello, my version is 1.23.14-gke.1800.

As I'm using cloud-native load balancing and ClusterIP, the NEG is created at each deployment.
Expecting the NEG to be created at node creation is therefore not sufficient.
So this race condition is very likely to happen.

@talzion12

@swetharepakula I understand, but I'd rather have slightly worse performance with the double hop than downtime with NEGs, unless I'm missing some other considerations.

@derek-gfs

derek-gfs commented Mar 1, 2023

You may find this GCP documentation helpful, as it describes the problems here and some possible solutions: https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing#traffic_does_not_reach_endpoints

@saez0pub

saez0pub commented Mar 4, 2023

Indeed, but this is not a stop problem, it is a start problem: Kubernetes starts to delete the old pod while the GCLB has not yet attached the new pod to the NEG. It takes a random amount of time to auto-fix, so 60 seconds is not always sufficient.

There is obviously a problem in the ingress management. I'm praying each time I deploy; Kubernetes just brings me downtime 💢.

Can't Google estimate the rate of occurrences of LoadBalancerNegWithoutHealthCheck in their GKE?

@denismccarthykerry

Is there any update on when we might see a fix for this problem land in GCP itself? I'm seeing 502s during rolling upgrades of pods on an Autopilot cluster, even though the new pod is fully warmed up (I'm using Argo Rollouts to enable blue-green deployments). The preStop config does help but does not eliminate the problem. I've tried to update my number of replicas to 3 to ensure that there are replicas running in every NEG, but that also doesn't resolve the issue. I'm seeing failed_to_pick_backend errors in the load balancer logs when the 502s occur.

@GuillaumeMorini

@denismccarthykerry did you try adding a minReadySeconds of 30 seconds to your deployment?
This is generally a good workaround to avoid the 502 errors.

To solve the issue, you need to ensure you are using the container-native load balancer: https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing
Take care of the special conditions where you may need to force the configurations detailed in this doc: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#container-native_load_balancing

HTH

@brianstorti

We have minReadySeconds set and are using container-native load balancing, and we still see 502s every now and then.

@denismccarthykerry

@denismccarthykerry did you try to add a minReadySeconds of 30 seconds to your deployment ? This is in general a good workaround to avoid the 502 errors.

To solve the issue, you need to ensure you are using container native load balancer https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing Take care of the special conditions where you may need to force the configurations detailed in this doc: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#container-native_load_balancing

HTH

Thanks for the reply Guillaume. I've tried minReadySeconds (my services were already annotated with cloud.google.com/neg: '{"ingress": true}' as they fulfilled the criteria for container-native load balancing) but I get the same issue. I'm baffled. I've gone through everything I can find online to resolve this, but nothing seems to work. The preStop hook at least reduces the window during which I get 502s to maybe 5 seconds, but I still get plenty of them during that time.

If I knew it was a known bug, at least I could rest a little easier knowing it's not something I'm doing wrong myself...

@rocketraman
Author

@denismccarthykerry Is connectionDraining.drainingTimeoutSec necessary to solve the issue? My understanding is that it would affect existing requests, but not new ones, so it would surprise me if it has an effect on solving the 502s, unless you were only seeing the 502s for requests started before the deployment began. Though the docs do have the statement "The load balancer does not send new requests to the removed backend." -- I thought that was already the case without this setting as well.

I'm also a bit surprised that adding healthCheck.checkIntervalSec = 15 solves anything, as the default health check actually has a shorter interval of 5s, and so your config should actually take longer to detect a terminated node, which is the opposite of what you want here. See comment #1718 (comment).

@denismccarthykerry

You're right, that was not it. It did make it appear that the issue was resolved, but only because the health check itself was not configured correctly, so the deployment never routed traffic to the new node -- obviously I didn't get the 502s on switchover then. I've deleted the comment so as not to mislead people. Looks like this one is still unresolved...

@notcool11

@swetharepakula , this issue seems related to a problem we are having with 503s when HPA scales down a service. We get a bunch of pods getting killed, followed by a bunch of 503s.

The difference in our configuration is an istio service mesh that injects a sidecar proxy container. From looking at the events and logs, the application container dies almost immediately on the kill request. It then takes ~15s for the istio container to abort, but in that window it is constantly returning 503s because the app container is dead.

Ultimately the problem looks the same, namely GCLB is still sending traffic to a pod that has been killed. Are there any different recommendations for fixing the issue when a service mesh is in play? Thanks!

@IvanUkhov

This can potentially be of relevance:

https://cloud.google.com/kubernetes-engine/docs/troubleshooting/troubleshoot-load-balancing#500-series-errors

It explains what might be happening and how to tackle it.

@rocketraman
Author

This can potentially be of relevance:

https://cloud.google.com/kubernetes-engine/docs/troubleshooting/troubleshoot-load-balancing#500-series-errors

It explains what might be happening and how to tackle it.

While it's a great summary and shows visually what is happening, there is nothing new in it that was not already discussed above in this issue -- even my original post mentions the preStop sleep workaround/hack.

Also see comment #1718 (comment) and the following comments with a solution for the second problem that can cause 500s related to zones. I think this information should also be added to that article.

On a managed platform, I don't think users in general should be configuring application-level resources with things like preStop hooks where the sole reason is to solve a problem with how the platform operates.

@denismccarthykerry

denismccarthykerry commented Dec 23, 2023

Have you any workarounds for this problem, @rocketraman? It's still occurring for me. The downtime is a matter of a few seconds per deploy, but it is super irritating and is a real concern when considering rolling updates. I'm considering switching to some other technology at this point due to the lack of progress on this issue. E.g. if it were the case that creating a new production Autopilot cluster could resolve it, I would take the pain to do that, but there's no indication that that is the case. Then again, this does not seem to be a widespread issue affecting all GKE clusters using global load balancing, so there must be something specific - somewhere - at the root of this problem.

@GuillaumeMorini

GuillaumeMorini commented Dec 23, 2023 via email

@denismccarthykerry

Thanks @GuillaumeMorini. I have added the cloud.google.com/neg: '{"ingress": true}' annotation and the minReadySeconds parameter - it seems to have resolved the 502s (at least I haven't been able to reproduce them myself by hammering the endpoints with my browser at the time of the rolling deploy, which I used to be able to do every time). I'll comment further if this turns out not to be resolved. Thanks again for the advice!

@IvanUkhov

IvanUkhov commented Jan 4, 2024

This can potentially be of relevance:

https://cloud.google.com/kubernetes-engine/docs/troubleshooting/troubleshoot-load-balancing#500-series-errors

It explains what might be happening and how to tackle it.

I think what the troubleshooting guide fails to mention is that the pre-stop hook is not exactly the right approach. For the load balancer to know that the container is terminating, the container should start failing the readiness probe as soon as the pod termination is triggered (kubernetes/kubernetes#110191), and a pre-stop hook is just wasting time in this regard. Instead, the timeout should happen in the main process. It should listen to SIGTERM, and as soon as it is received, it should start failing the readiness probe, while continuing doing business as usual, including all other probes, until the timeout expires, at which point it should do a graceful shutdown.
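In pod-spec terms the pattern looks roughly like this (names, paths and numbers are illustrative; the important part is the application behaviour described in the comments):

  terminationGracePeriodSeconds: 60   # must cover the drain window plus the real shutdown
  containers:
    - name: my-server
      readinessProbe:
        httpGet:
          path: /healthz   # the app starts failing this as soon as it receives SIGTERM
          port: 8080
        periodSeconds: 5
        failureThreshold: 2
      # No preStop sleep: on SIGTERM the process keeps serving traffic as usual,
      # fails /healthz so the load balancer drains it, and only then shuts down.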

@rocketraman
Author

Instead, the timeout should happen in the main process. It should listen to SIGTERM, and as soon as it is received, it should start failing the readiness probe, while continuing doing business as usual, including all other probes, until the timeout expires, at which point it should do a graceful shutdown.

That's really abusing the normal meaning of SIGTERM, and therefore coupling application code to Kubernetes in an uncomfortable way.

@denismccarthykerry

denismccarthykerry commented Jan 4, 2024

I'm still seeing the 502s, unfortunately. I couldn't reproduce them yesterday after I made the change, but they're happening today when I deploy. Still waiting for some sort of fix or mitigation for this issue. I'm amazed it doesn't seem to be more prevalent - has anybody who experienced the issue tried recreating their cluster, I wonder? There's a bit of work in this for me, but if it were to work I would do it. @IvanUkhov I'll try your approach when I get the time to implement the change in the app - it does couple behaviour to K8s as @rocketraman points out, but I would sell body parts at this point to resolve the problem.

@IvanUkhov

IvanUkhov commented Jan 5, 2024

@denismccarthykerry, it is not really a solution but a mitigation strategy. I don't see as many errors in the application load balancer as before, but I have seen a few still.

One observation is that it mainly happens in clusters with minimal replication. I am experimenting with two: one has one replica per deployment, and the other has three. It is always the first one that is troubling. When there are more resources, it seems more forgiving. It could be due to the sheer number of replicas, or it could have something to do with not having a working node in each region or zone. The errors appear not so much during the rollout itself as during the subsequent rebalancing.

@notcool11

@IvanUkhov , we see this issue happen when our replica counts scale down.

In an effort to prepare for large increases in volume (typically after a mobile app push notification) we pre-scale the deployments by increasing min replicas. Then later we reduce the replicas to their original value.

It is on the scale down that we see lots of errors from pods that k8s killed still receiving traffic. What is even more crazy is they ARE failing the readiness probes, yet they continue to get traffic. It seems to take about 10s for the traffic to stop going to the killed pods, regardless of the state of the application container. 🤷

We did add the sleep as mentioned earlier in this thread, and that has helped. We are also looking into the icky workaround of finding a way to fail readiness yet keep the app container alive and serving other traffic. Though given that we are already failing readiness today and still getting traffic, I'm not holding my breath that it will help much. :(

@IvanUkhov

We see this issue happen when our replica counts scale down.

That is what happens after a rollout. It first needs to scale up to get the new pods running somewhere before doing anything with the old ones, and then it scales back down, which is when the errors start to happen.

What is even more crazy is they ARE failing the readiness probes yet they continue to get traffic. Seems to take about 10s for the traffic to stop going to the killed pods regardless of state of the application container.

How are they failing the readiness probe if you later say you are only planning to implement that? Do you mean they are failing because they are simply not running? That probe should be failing while the rest should be working as usual for a while, until the load balancer catches up. And the 10 seconds you mentioned could be the periodicity of the probe: if the pod shuts down, it can take up to 10 seconds for the rest of the system to realize what happened. That is why the pod should stay alive and continue to respond.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 5, 2024
@pdfrod

pdfrod commented Apr 9, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 9, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 2, 2024
@pdfrod

pdfrod commented Sep 2, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 2, 2024
@yunusefendi52

I asked Google Support about this; they said it is intended. The related code is here:

ne.IPv6 = healthStatus.NetworkEndpoint.Ipv6Address

🤷‍♂️
