Generate new deployment for each app version #63
Conversation
Nice!!!
internal/impl/kube.go
Outdated
Strategy: appsv1.DeploymentStrategy{
	Type: "RollingUpdate",
	RollingUpdate: &appsv1.RollingUpdateDeployment{},
},
// Number of old ReplicaSets to retain to allow rollback.
RevisionHistoryLimit: ptrOf(int32(1)),
Can we delete `RevisionHistoryLimit` and `MinReadySeconds` now too?
Good point about `RevisionHistoryLimit`. As for `MinReadySeconds`, I think it still makes sense, but I guess we can let someone configure it if they really want, so we set fewer knobs in the config.
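To illustrate the proposal, here is a minimal sketch of what the trimmed spec could look like; the `kubeConfig` struct and its optional `MinReadySeconds` field are hypothetical stand-ins, not the repo's actual config:

package impl

import (
	appsv1 "k8s.io/api/apps/v1"
)

// kubeConfig is a hypothetical stand-in for the deployer's config struct;
// MinReadySeconds is the optional knob discussed above, nil meaning
// "use the Kubernetes default of 0".
type kubeConfig struct {
	MinReadySeconds *int32
}

// deploymentSpec sketches the trimmed spec: RevisionHistoryLimit is dropped
// (Kubernetes then keeps its default of 10 old ReplicaSets), and
// MinReadySeconds is only set when the user explicitly configures it.
func deploymentSpec(cfg kubeConfig) appsv1.DeploymentSpec {
	spec := appsv1.DeploymentSpec{
		Strategy: appsv1.DeploymentStrategy{
			Type:          appsv1.RollingUpdateDeploymentStrategyType,
			RollingUpdate: &appsv1.RollingUpdateDeployment{},
		},
	}
	if cfg.MinReadySeconds != nil {
		spec.MinReadySeconds = *cfg.MinReadySeconds
	}
	return spec
}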
In the current implementation, we generate new deployments for each replica set other than the one that has the public listeners (e.g., main). When we roll out a new version of the app, we do a rolling upgrade on the main group, hence we can do automated rollouts. However, this approach is not great, and it doesn't make sense for someone to do this in production. For development/testing, someone can simply start a new app deployment and kill the old one. This PR changes the deployment such that:
* we now generate unique deployments for all groups (including main)
* we generate a unique name for the listener service only if the user wants that; otherwise the user can reuse the same listener across multiple app versions, which makes it easy to do port-forwarding, curl the same IP, etc.
* we add a "version" label to the deployments/HPAs/listeners, so that we can easily delete old deployments; e.g., `kubectl delete all --selector=version=foo` deletes all the k8s resources for the app version foo.
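To make the version-label idea concrete, a minimal sketch of per-group, per-version deployment generation; the naming scheme and the `versionedDeployment` helper are illustrative, not the PR's exact code:

package impl

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// versionedDeployment sketches per-group, per-version deployment generation:
// the name is unique for each (app, group, version) tuple so old and new
// versions can coexist, and the "version" label lets a single selector
// match every resource belonging to a given app version.
func versionedDeployment(app, group, version string) *appsv1.Deployment {
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name: fmt.Sprintf("%s-%s-%s", app, group, version),
			Labels: map[string]string{
				"version": version,
			},
		},
		// Spec elided; see internal/impl/kube.go for the real construction.
	}
}

With every generated resource labeled this way, old versions keep running until they are explicitly deleted with the one-line `kubectl delete` command above.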
Force-pushed from 34bf4ca to 6f33c54.
Thanks Michael