[Helm Native] How to make Harness detect Pods as Deployed Instances when using Helm Native Deployment Type

Howdy, gang! :rocket:

Introduction

This article walks you through the steps required to make Harness detect Deployed Instances after a Helm Native Deployment. We will give Harness a hand with a label, but we also need a Feature Flag to make it work.

Assuming a k8s cluster version > 1.16 (there is a quick way to check right below), there are two requirements:

  • Enable the HELM_STEADY_STATE_CHECK_1_16 Feature Flag;
  • Add a “release” label to the deployment manifest (there is a sketch of that in the Second Step).
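
Not sure which version your cluster is on? A quick check from any terminal with cluster access will tell you (the output format varies a bit between kubectl versions):

```
# Look at the Server Version line; the minor version should be 1.16 or higher
kubectl version
```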

Buckle up! :rocket:

Tutorial

First Step

First, let’s make sure the target Account has the “HELM_STEADY_STATE_CHECK_1_16” Feature Flag enabled (if it isn’t enabled yet, Harness Support can turn it on for your Account).

Second Step

Now, we need to follow the Harness docs on Helm Services (see Further reading below).
You must add a release label to all Manifests related to Deployable Objects!

I used Bitnami Nginx because their Helm Chart has a few shortcuts (with Go Templates) to add labels everywhere. I just added the release label and uploaded the tweaked Chart to a GCS Lab:
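
For reference, here is a trimmed-down sketch of what the templated Deployment could look like after the change (the helper name, image, and port are illustrative, not the real Bitnami template):

```yaml
# templates/deployment.yaml - simplified sketch, not the actual Bitnami template
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "nginx.fullname" . }}  # hypothetical naming helper
  labels:
    release: {{ .Release.Name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nginx
        release: {{ .Release.Name }}                  # the label Harness uses to track the Pods
        harness.io/release-name: {{ .Release.Name }}  # optional, see the note below
    spec:
      containers:
        - name: nginx
          image: bitnami/nginx  # illustrative image
          ports:
            - containerPort: 8080
```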


Important: The only critical label to achieve what we need is the “release” one. You may skip that “harness.io/release-name”, ok?

Last Step - Let’s test that!

I’ll run a Basic Deployment using a Helm Native Service:

And voilà:

And you can see that it picks up the Pod correctly!
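
If you want to double-check from the cluster side, the same label lets you list exactly the Pods Harness will pick up (my-release below is just a placeholder for your actual Helm release name):

```
# Pods carrying the release label are the ones reported as Deployed Instances
kubectl get pods -l release=my-release
```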

Outcome:

With that done, you can keep using Helm Native and explore the features that require Deployed Instances.

Further reading:

2 - Helm Services - Harness.io Docs

Tags:

<cloud: aws, azure, tanzu, gcp, on-prem, kubernetes, helm, charts>
<function: ci,cd>
<role: swdev,devops,secops,itexec>
<type: howto>
<category: triggers, gitops, templates>
