v0.1.0: Canary support
Charts affected
k8s-service [BACKWARDS INCOMPATIBLE]
Description
This release introduces support for deploying a canary deployment to test new versions of your app. Refer to the updated README for more information.
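As a sketch of what enabling the feature might look like (the exact values keys are assumptions here, not confirmed by this release note — consult the chart's README for the authoritative names):

```shell
# Enable the canary deployment for an existing release.
# NOTE: the "canary.*" keys below are illustrative; check the k8s-service
# README for the exact values supported by your chart version.
helm upgrade $RELEASE_NAME gruntwork/k8s-service \
  --set canary.enabled=true \
  --set canary.containerImage.tag=v2.0.0-rc1
```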
Migration guide
To use canaries, the existing Deployment needs to be updated to watch for a different set of labels than the canary Deployment. Unfortunately, updating selector labels is an unsupported operation in Kubernetes, and helm does not handle this transition gracefully.
To support this, the Deployment resource needs to be recreated so that it gets the new labels. The easiest way to handle this is to delete the Deployment resource (using kubectl delete deployment) and let helm recreate it during the upgrade process. Note that this means you will have downtime while the Pods are deleted and recreated, regardless of any pod disruption budgets.
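The delete-and-recreate path can be sketched as follows (assuming the same $DEPLOYMENT_NAME, $DEPLOYMENT_NAMESPACE, and $RELEASE_NAME variables used in the blue-green example below):

```shell
# Delete the existing Deployment managed by the release. Its Pods are
# terminated immediately, so expect downtime until the upgrade completes.
kubectl delete deployment $DEPLOYMENT_NAME -n $DEPLOYMENT_NAMESPACE

# Let helm recreate the Deployment with the new selector labels.
helm upgrade --wait $RELEASE_NAME gruntwork/k8s-service
```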
If you wish to avoid downtime, you can perform a blue-green deployment by recreating the Deployment resource under a new name. You can do this with the following approach:
```shell
# Retrieve the configuration for the Deployment created by helm.
kubectl get deployment $DEPLOYMENT_NAME -n $DEPLOYMENT_NAMESPACE -o yaml > temp.yaml

# Open temp.yaml and update the name of the Deployment so that it can be
# created alongside the old one.

# Apply the updated yaml file to create a temporary Deployment object under
# the new name, and wait for it to roll out. Use the name you set in
# temp.yaml here, not the original $DEPLOYMENT_NAME.
kubectl apply -f temp.yaml
kubectl rollout status deployment $TEMP_DEPLOYMENT_NAME -n $DEPLOYMENT_NAMESPACE

# Delete the old Deployment so that the chart can recreate it.
kubectl delete deployment $DEPLOYMENT_NAME -n $DEPLOYMENT_NAMESPACE

# Roll out the upgrade. This recreates the Deployment resource with the new
# selector labels.
helm upgrade --wait $RELEASE_NAME gruntwork/k8s-service

# At this point, it is safe to delete the temporary Deployment we created.
kubectl delete -f temp.yaml
```
Special thanks
Special thanks to @zackproser for their contribution!