Feature request: Apply labels after a service is healthy #595
Comments
@sowinski we have this issue too. Even a more rudimentary approach like a manually-configured delay would be acceptable, if not ideal (you could run into a situation where all of the new containers were available and ready before the delay expired, which means Docker would have already removed all of the old containers and the service would be unavailable). The delay approach might actually be harder to implement anyway, since a task's health status is (probably?) already exposed via the Docker API. @lucaslorentz any thoughts on this?
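For reference, when a container defines a HEALTHCHECK, its health status is indeed exposed through the Docker API; a quick sketch of checking it from the CLI (the container name is just a placeholder):

```sh
# Show the health status ("starting", "healthy" or "unhealthy") of a single
# container; only populated if the image or service defines a HEALTHCHECK.
docker inspect --format '{{.State.Health.Status}}' my_container

# The same data is available programmatically via the Docker Engine API,
# e.g. GET /containers/<id>/json -> .State.Health.Status
```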
**How to reproduce**

I decided to put together a simple deployment which illustrates this in a Docker Swarm environment. Pre-requisites: a Docker Swarm setup (it doesn't matter how many servers). In my environment, Caddy is running on every node and Caddy's network is `qr-caddy`.

First, our YAML file for the deployment. I used a stock Docker HTTP "Hello World" image based on `crccheck/hello-world`:

```yaml
version: '3.3'

services:
  http_pause:
    image: crccheck/hello-world
    command: sh -c 'sleep 20 ; echo "httpd started" && trap "exit 0;" TERM INT; httpd -v -p 8000 -h /www -f & wait'
    deploy:
      labels:
        caddy: health.example.com
        caddy.reverse_proxy: "{{upstreams 8000}}"
    networks:
      - qr-caddy

networks:
  qr-caddy:
    external: true
```

I deployed this using `docker stack deploy`.
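For concreteness, assuming the compose file is saved as `hello.yml` and the stack is named `hello` (which makes the service name `hello_http_pause`, used again below), the deploy step is roughly:

```sh
# Deploy (or update) the stack on a Swarm manager node.
docker stack deploy -c hello.yml hello
```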
Now for the test. On my workstation, I ran 10,000 queries (in 10 parallel batches) against the service, and while that was running, forced the service to update/redeploy. I captured only the HTTP status code from each request.
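A minimal sketch of this kind of test, assuming `curl` and the hostname from the labels above:

```sh
# Run 10 parallel batches of 1,000 requests each (10,000 total), recording
# only the HTTP status code of every response; a 502 (or curl's 000)
# indicates the proxy had no working upstream at that moment.
for batch in $(seq 1 10); do
  for i in $(seq 1 1000); do
    curl -s -o /dev/null -w '%{http_code}\n' http://health.example.com/
  done > "codes.$batch" &
done
wait

# Summarize the captured status codes.
cat codes.* | sort | uniq -c
```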
And while that's running, I force the service to update/redeploy on the server.
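A sketch of such a forced redeploy, reusing the assumed service name from above:

```sh
# Force Swarm to replace the running task(s) even though nothing in the
# service spec changed; this triggers a rolling update.
docker service update --force hello_http_pause
```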
The update took about a minute to converge, largely because of the 20-second sleep each new task performs before its httpd actually starts listening.
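Progress of the rolling update can be watched from a manager node while the test runs (again with the assumed service name):

```sh
# Watch task states while the update converges. Without a HEALTHCHECK the new
# task shows up as "Running" as soon as the container starts, even though its
# httpd is still inside the 20-second sleep.
watch -n 2 docker service ps hello_http_pause
```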
I will continue to iterate on this example setup, but at the moment it does look like this is a valid breaking use case. I can't imagine too many people have a "set it and forget it" setup where they publish a service once and never have to update it - in our case, we push updates daily, often multiple times per day, so we will encounter this at least daily.
@lucaslorentz any thoughts about this issue?
Hey @smaccona, this can be achieved with Caddy reverse proxy health checks. But I do agree that CDP could simplify this setup by not including unhealthy containers/services in the Caddyfile. I checked the image you shared above, and the container has health status information, so we could easily use that to filter the upstreams. Do you know how it works for services and service tasks as well? Does each service task have its own health status?
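On the Caddy side, active health checks can be attached to the generated reverse_proxy through extra labels; a sketch, assuming caddy-docker-proxy's dotted-label convention for nested directives and Caddy v2's active health check options (path and timings are illustrative):

```yaml
    deploy:
      labels:
        caddy: health.example.com
        caddy.reverse_proxy: "{{upstreams 8000}}"
        # Active health checks: Caddy probes each upstream itself and stops
        # routing to upstreams whose probe fails, independent of Docker's
        # view of the task.
        caddy.reverse_proxy.health_uri: /
        caddy.reverse_proxy.health_interval: 5s
        caddy.reverse_proxy.health_timeout: 2s
```

Depending on timing there may still be a brief window before the first probe marks a not-yet-ready upstream as unhealthy, which is why filtering on Docker's own health status would be the cleaner fix.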
Hi,
I am currently trying to implement rolling deployments with Docker (using Docker Swarm).
When I start a new container, traffic is immediately forwarded to the newest container, even if it is still "booting up".
I would like to keep forwarding traffic to the old container until the new one is "healthy".
It would be nice if we could change this kind of behavior with a label.
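On the Docker side, the building blocks for keeping the old task around already exist: a container healthcheck combined with a start-first update order makes Swarm wait for the new task before removing the old one. A minimal sketch for the example service discussed in this thread, with an illustrative probe command and timings (`update_config.order` needs compose file format 3.4 or newer):

```yaml
services:
  http_pause:
    image: crccheck/hello-world
    # Tell Docker how to decide that the container is actually ready;
    # busybox's wget is used here as an illustrative probe.
    healthcheck:
      test: ["CMD-SHELL", "wget -q -O /dev/null http://localhost:8000/ || exit 1"]
      interval: 5s
      timeout: 2s
      retries: 3
    deploy:
      update_config:
        # Start the replacement task before stopping the old one, so the old
        # task is only removed once the new one is running and healthy.
        order: start-first
```

Even with this in place, the proxy side still needs to skip tasks whose health status is "starting" or "unhealthy" when building the upstream list, which is what this feature request asks for.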