Hi all,
Lately, I’ve been noticing that many customers are leveraging Helm hooks in their charts. When these charts are run through Harness, some customers experience issues with their deployments. Today, I will be reviewing some best practices that the Customer Success team recommends.
Helm Hook Introduction
For those who don’t know, Helm provides a hook mechanism that allows developers to intervene at specific points in a release cycle. Most developers leverage hooks to:
- Load a ConfigMap or Secret during install before any other charts are loaded
- Execute a Job to back up a database before installing a new chart, and a second Job to restore the data in the database
- Run a Job before deleting a release to gracefully take a service out of rotation before completely removing it
Sample Job with a hook:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{.Release.Name}}"
  labels:
    app.kubernetes.io/managed-by: {{.Release.Service | quote }}
    app.kubernetes.io/instance: {{.Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{.Release.Name}}"
      labels:
        app.kubernetes.io/managed-by: {{.Release.Service | quote }}
        app.kubernetes.io/instance: {{.Release.Name | quote }}
        helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
    spec:
      restartPolicy: Never
      containers:
      - name: post-install-job
        image: "alpine:3.3"
        command: ["/bin/sleep","{{default "10" .Values.sleepyTime}}"]
Here are the available Hooks to leverage:
Hook Name | Description
---|---
pre-install | Executes after templates are rendered, but before any resources are created in Kubernetes.
post-install | Executes after all resources are loaded into Kubernetes.
pre-delete | Executes on a deletion request before any of the release’s resources have been deleted.
post-delete | Executes on a deletion request after all of the release’s resources have been deleted.
pre-upgrade | Executes on an upgrade request after templates are rendered, but before any resources are loaded into Kubernetes (e.g. before a Kubernetes apply operation).
post-upgrade | Executes on an upgrade after all resources have been upgraded.
pre-rollback | Executes on a rollback request after templates are rendered, but before any resources have been rolled back.
post-rollback | Executes on a rollback request after all resources have been modified.
crd-install | Adds CRD resources before any other checks are run. This is used only on CRD definitions that are used by other manifests in the chart.
test-success | Executes when running helm test and expects the pod to return successfully (return code == 0).
test-failure | Executes when running helm test and expects the pod to fail (return code != 0).
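As a side note, a single resource can be attached to more than one of these hooks, and the hook-weight and hook-delete-policy annotations seen in the sample above control ordering and cleanup. Here is a minimal sketch of how those annotations combine (the values are just illustrative):

  annotations:
    # Fire this resource on both a fresh install and an upgrade
    "helm.sh/hook": pre-install,pre-upgrade
    # When several resources share the same hook, lower weights run first
    "helm.sh/hook-weight": "-5"
    # Delete the hook resource once it has completed successfully
    "helm.sh/hook-delete-policy": hook-succeeded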
Harness & Hooks
The Helm Basic Deployment Strategy Way
In Harness, customers port over their existing Helm charts and start deploying with the Kubernetes V2 Deployment type. When they do, the hooks are not executed; they are simply ignored. The reason this occurs is that with the Kubernetes V2 Deployment type, we run a kubectl apply against all of the rendered manifest files. There is no Tiller involved in this process because we are not running any Helm install or upgrade commands, so nothing interprets the hook annotations. To work around this, the team has come up with two solutions. One way is to leverage the Helm Deployment type, which allows a user to utilize the Helm and Tiller capabilities and mimic exactly what they were doing before Harness. The trade-off to this path is that you cannot use Canary or Blue-Green Deployments; with the Helm Deployment type, a developer can only leverage a Basic deployment.
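For context, the helm.sh/hook annotation only means something to Helm and Tiller; kubectl has no concept of it. As a rough, hypothetical sketch (my-release stands in for the rendered {{.Release.Name}}), this is what the hook Job would amount to if it were handed straight to a kubectl apply:

apiVersion: batch/v1
kind: Job
metadata:
  name: my-release
  annotations:
    # To kubectl this is just an ordinary annotation, so the Job would be
    # created immediately alongside everything else instead of being held
    # back until after the install completes.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: post-install-job
        image: "alpine:3.3"
        command: ["/bin/sleep", "10"]   # rendered default of .Values.sleepyTime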
The Helm hook that I have leveraged is a post-install hook:
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
so after the chart is deployed:
      restartPolicy: Never
      containers:
      - name: post-install-job
        image: "alpine:3.3"
        command: ["/bin/sleep","{{default "10" .Values.sleepyTime}}"]
we run a sleep command for 10 seconds.
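Since the duration comes from {{default "10" .Values.sleepyTime}}, the 10-second default can be overridden through the chart values. A small, hypothetical values snippet (sleepyTime is the only key the sample Job actually reads):

# values.yaml (or a values override supplied through Harness)
sleepyTime: "30"   # the post-install Job sleeps for 30 seconds instead of 10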
The Harness Alternative
The alternative to the Helm hook is to leverage the Kubernetes V2 Deployment methodology by removing the hook annotations and splitting out the Kubernetes Job as a separate YAML file. It should look something like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{.Release.Name}}"
  labels:
    app.kubernetes.io/managed-by: {{.Release.Service | quote }}
    app.kubernetes.io/instance: {{.Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
spec:
  template:
    metadata:
      name: "{{.Release.Name}}"
      labels:
        app.kubernetes.io/managed-by: {{.Release.Service | quote }}
        app.kubernetes.io/instance: {{.Release.Name | quote }}
    spec:
      restartPolicy: Never
      containers:
      - name: post-install-job
        image: "alpine:3.3"
        command: ["/bin/sleep","{{default "10" .Values.sleepyTime}}"]
Then, apply it in a workflow with an Apply step.
Harness gives you the flexibility to deploy your YAML files in any order you want. So if there is a post-install Job, you can place this job.yaml after the Canary Deployment step. The additional benefit is that you still get to use the Canary deployment strategy and the Blue-Green strategy.
Happy Halloween!