Rollback feature in Harness:
Harness gives users the ability to roll back their Kubernetes deployment to the last successful release, an important capability for restoring stability after a deployment failure. Harness implements this by creating an internal ConfigMap that stores metadata such as the release number and a Harness-generated version. In case of a failure, this ConfigMap is used to identify the last successful release, which is then selected as the rollback target.
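As a rough illustration, the release history ConfigMap could look something like the sketch below. The actual field names and encoding are internal to Harness, so treat everything here as hypothetical:
# Hypothetical sketch only; Harness's actual internal layout may differ.
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-267c4a3a-5b6c-398c-9222-a43292e2946b  # Harness-generated release ConfigMap
data:
  # Encoded metadata about past releases (release number, status, and
  # the manifests needed to roll back to the last successful release)
  releaseHistory: "<encoded release metadata>"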
Pruning Kubernetes resources:
This is currently behind a feature flag “PRUNE_KUBERNETES_RESOURCES”.
Pruning in Harness for Kubernetes resources is similar to the effect achieved by the Kubernetes CLI command below (the manifest path and label selector are placeholders):
kubectl apply -f manifests/ --prune -l app=my-app
Kubernetes pruning queries the API server for all objects matching a set of labels and attempts to match the returned live object configurations against the object configuration files.
Similarly, Harness compares the objects you are deploying with the objects it finds in the cluster. If Harness finds objects which are not in the current release, it prunes them.
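Conceptually, each resource deployed in a release carries a release-tracking label, and any live object that carries the label but is absent from the current manifests becomes a pruning candidate. A minimal sketch using a hypothetical label key follows; the exact label Harness applies is an internal implementation detail:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  labels:
    # Hypothetical release-tracking label, shown for illustration only
    example.harness.io/release-name: release-267c4a3a-5b6c-398c-9222-a43292e2946b
data:
  greeting: hello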
Changes to the manifests used in Harness Kubernetes deployments can result in orphaned resources you are unaware of.
For example, one deployment might deploy resources A and B but the next deployment deploys A and C. C is the new resource and B was removed from the manifest. Without pruning, resource B will remain in the cluster.
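As a concrete sketch with hypothetical resource names, revision 1 of a manifest might contain:
apiVersion: v1
kind: ConfigMap
metadata:
  name: resource-a
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: resource-b
while revision 2 drops resource-b and adds resource-c:
apiVersion: v1
kind: ConfigMap
metadata:
  name: resource-a
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: resource-c
With pruning enabled, Harness would delete resource-b during the second deployment; without it, resource-b stays behind as an orphan.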
You can manually delete Kubernetes resources using the Delete step, but Harness will also perform resource pruning automatically during deployment.
Pruning helps maintain a healthy system and saves your cluster resources.
Release history ConfigMap becoming too large:
Note that in cases where your Kubernetes ConfigMaps contain a lot of information, e.g. variables, keys, or multi-namespace (shared) data, the release history ConfigMap that Harness creates may accumulate all of this information and become too large.
By default, every Kubernetes object stored in etcd is limited to just under 1 MB. For your own ConfigMaps this can be worked around by mounting the data as an additional volume; however, because the release history ConfigMap is generated by Harness and stored in our internal etcd setup, users cannot modify it. Additionally, while the pruning feature is in use, the maximum manifest size is reduced to 0.5 MB, and execution may raise an error like:
invalid request: Failed to replace default/ConfigMap/release-267c4a3a-5b6c-398c-9222-a43292e2946b. Code: 422, message: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"ConfigMap \"release-267c4a3a-5b6c-398c-9222-a43292e2946b\" is invalid: []: Too long: must have at most 1048576 bytes","reason":"Invalid","details":{"name":"release-267c4a3a-5b6c-398c-9222-a43292e2946b","kind":"ConfigMap","causes":[{"reason":"FieldValueTooLong","message":"Too long: must have at most 1048576 bytes","field":"[]"}]},"code":422}
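If you suspect you are approaching the limit, one rough way to check the serialized size of the release ConfigMap (assuming kubectl access to the cluster, and substituting your own release name and namespace) is:
kubectl get configmap release-267c4a3a-5b6c-398c-9222-a43292e2946b -n default -o yaml | wc -c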
While our internal teams at Harness work continuously to reduce the data size stored in these ConfigMaps, there is a simple workaround that can be applied even with the pruning feature flag enabled:
Using a Helm chart or Kubernetes manifest, mark the ConfigMap with the “harness.io/skipPruning” annotation under “metadata.annotations”:
{{- if .Values.env.config}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{.Values.name}}
  annotations:
    # Annotation values must be strings, so quote "true"
    harness.io/skipPruning: "true"
{{- end}}
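After deploying, you can confirm the annotation landed (substituting your ConfigMap's name and namespace) with something like:
kubectl get configmap <name> -n <namespace> -o jsonpath='{.metadata.annotations.harness\.io/skipPruning}'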
This temporarily disables the pruning capability of Harness for that ConfigMap and allows for a larger ConfigMap size, because once pruning is skipped its size is no longer taken into account. The trade-off is that, while the annotation is in place, deleting a Kubernetes manifest file will not cause Harness to delete (prune) the corresponding resource from the cluster; it would need to be deleted manually.