Releases: gruntwork-io/helm-kubernetes-services
v0.1.7
v0.1.6
Charts affected
k8s-service
Description
Add the ability to configure the securityContext at the pod level using the new podSecurityContext input value.
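As a minimal sketch of wiring this up at upgrade time (the runAsUser/runAsGroup/fsGroup values below are illustrative, and it is assumed the chart passes the podSecurityContext map straight through to the pod spec's securityContext):
# Write an illustrative values file. The keys under podSecurityContext are standard
# Kubernetes PodSecurityContext fields; pick the ones your app actually needs.
cat > security-values.yaml <<'EOF'
podSecurityContext:
  runAsUser: 1000
  runAsGroup: 1000
  fsGroup: 1000
EOF
# Apply it to an existing release of the chart.
helm upgrade --wait -f security-values.yaml $RELEASE_NAME gruntwork/k8s-service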
Special thanks
Special thanks to @RyuCaelum for contributing this feature!
v0.1.5
Charts affected
k8s-service
Description
You can now configure custom resources that are not managed by the Helm chart. Refer to the updated docs for more information.
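As a rough sketch of what this enables (the customResources key and its layout below are assumptions for illustration only; the updated docs are authoritative):
# Hypothetical values file adding a ConfigMap that the chart itself does not template.
# The customResources block shown here is an assumption; check the chart docs for the exact input.
cat > custom-resources-values.yaml <<'EOF'
customResources:
  enabled: true
  resources:
    example-configmap: |
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: example-configmap
      data:
        EXAMPLE_KEY: example-value
EOF
helm upgrade --wait -f custom-resources-values.yaml $RELEASE_NAME gruntwork/k8s-service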
Special thanks
Special thanks to @paul-pop for their initial contribution to this feature!
v0.1.4
Charts affected
k8s-service
Description
You can now set persistent volumes on the Deployment using the new persistentVolumes input.
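A minimal sketch of what using the new input might look like (the claim name, mount path, and the exact sub-keys under persistentVolumes are assumptions for this sketch; refer to the chart docs for the supported fields):
# Illustrative values file mounting an existing PersistentVolumeClaim into the Deployment's containers.
# The sub-keys (mountPath, claimName) are assumptions for illustration.
cat > volume-values.yaml <<'EOF'
persistentVolumes:
  app-data:
    mountPath: /mnt/app-data
    claimName: app-data-pvc
EOF
helm upgrade --wait -f volume-values.yaml $RELEASE_NAME gruntwork/k8s-service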
Special thanks
Special thanks to @austinphilp for their contribution!
v0.1.3
v0.1.2
v0.1.1
v0.1.0: Canary support
Charts affected
k8s-service
[BACKWARDS INCOMPATIBLE]
Description
This release introduces support for deploying a canary deployment to test new versions of your app. Refer to the updated README for more information.
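As a rough sketch of rolling out a canary alongside the stable Deployment (the canary.* input names and layout below are assumptions for illustration; the updated README has the authoritative values):
# Hypothetical values file enabling a single canary Pod running a newer image tag.
# The canary.enabled / canary.replicaCount / canary.containerImage keys are assumptions for this sketch.
cat > canary-values.yaml <<'EOF'
canary:
  enabled: true
  replicaCount: 1
  containerImage:
    repository: example.com/my-app
    tag: v2.0.0-rc1
EOF
helm upgrade --wait -f canary-values.yaml $RELEASE_NAME gruntwork/k8s-service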
Migration guide
To use canaries, the existing Deployment needs to be updated to watch for a different set of labels than the canary Deployment. Unfortunately, updating selector labels is an unsupported operation in Kubernetes, and Helm does not handle this transition gracefully.
To support this, the Deployment resource needs to be recreated so that it picks up the new labels. The easiest way to handle this is to delete the Deployment resource (using kubectl delete deployment) and let Helm recreate it during the upgrade process. Note that this means you will have downtime while the Pods are deleted and recreated, despite any pod disruption budgets.
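Concretely, the simpler downtime-accepting path boils down to the following, assuming $DEPLOYMENT_NAME, $DEPLOYMENT_NAMESPACE, and $RELEASE_NAME are set as in the blue-green steps below:
# Delete the existing Deployment; its Pods go away with it, so expect downtime here.
kubectl delete deployment $DEPLOYMENT_NAME -n $DEPLOYMENT_NAMESPACE
# Upgrade the release so helm recreates the Deployment with the new selector labels.
helm upgrade --wait $RELEASE_NAME gruntwork/k8s-service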
If you wish to avoid downtime, you can perform a blue-green deployment by recreating the Deployment resource under a new name. You can do this with the following approach:
# Retrieve the configuration for the deployment created by helm.
kubectl get deployment $DEPLOYMENT_NAME -n $DEPLOYMENT_NAMESPACE -o yaml > temp.yaml
# Open temp.yaml and update the name of the Deployment so that it can be created alongside the old one.
# Apply the updated yaml file to create a temporary Deployment object under a different name and wait for rollout.
kubectl apply -f temp.yaml
# Wait for the temporary Deployment to finish rolling out. $NEW_DEPLOYMENT_NAME stands for the new name you gave the Deployment in temp.yaml.
kubectl rollout status deployments $NEW_DEPLOYMENT_NAME -n $DEPLOYMENT_NAMESPACE
# Delete the old deployment so that the chart will recreate it.
kubectl delete deployment $DEPLOYMENT_NAME -n $DEPLOYMENT_NAMESPACE
# Roll out the update. This should recreate the Deployment resource with the new selector labels.
helm upgrade --wait $RELEASE_NAME gruntwork/k8s-service
# At this point, it is safe to delete the temporary deployment resource we created
kubectl delete -f temp.yaml
Special thanks
Special thanks to @zackproser for their contribution!
v0.0.13
v0.0.12
Charts affected
k8s-service
Description
- You can now optionally request to create the ServiceAccount directly from the chart, using the new serviceAccount.create parameter.
- You can now optionally configure horizontal pod autoscalers using the horizontalPodAutoscaler parameter (both options are sketched below).
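A minimal sketch combining both options (the sub-keys under horizontalPodAutoscaler shown here are assumptions for illustration; the chart docs have the exact names):
# Illustrative values file: create a dedicated ServiceAccount and an HPA for the Deployment.
# The horizontalPodAutoscaler sub-keys (enabled, minReplicas, maxReplicas) are assumptions for this sketch.
cat > scaling-values.yaml <<'EOF'
serviceAccount:
  create: true
horizontalPodAutoscaler:
  enabled: true
  minReplicas: 2
  maxReplicas: 5
EOF
helm upgrade --wait -f scaling-values.yaml $RELEASE_NAME gruntwork/k8s-service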
Special thanks
Special thanks to @AechGG for their contribution!