[Vault] Vault Agent - Advanced use case with Kubernetes Delegates and Shared Volumes

Howdy, gang! :rocket:

Introduction

This tutorial is an advanced continuation of my first article about Vault Agent and Harness.

I received some feedback from expert customers, and I decided to create another tutorial focusing on Kubernetes Delegates and our new capability to support Vault Agent as the integration method for the Harness Secrets Manager.

We’ll take advantage of ConfigMaps, PersistentVolumes, Secrets, etc., to create a very reliable Vault Agent deployment.

IMPORTANT: It’s super important to keep in mind that the Vault Agent IS NOT a component of Harness. It lives inside HashiCorp’s Kingdom, ok?
At the end of the day, our Delegate only needs to be able to read a sink file containing a valid token, even if Tom Marvolo Riddle himself put it there.
The Vault Agent is just a… facilitator. Harness is NOT responsible for your Vault Secrets Manager.

Buckle up! :rocket:

Important note - Managing multiple Vault Servers

Let’s say you have one Vault Server per Environment (like DEV, QA, PROD).

I decided to take advantage of the Harness Environment Name in some Manifest templates. That’s a good way to set this up with more than one Vault Server.

Also, to keep things atomic and avoid a single point of failure, I treat the relationship between Vault Servers and Vault Agents as 1-to-1.

So, if you have one Vault per Environment, you can have one Deployment of this Vault Agent per Environment.

To address that use case, we’ll take advantage of some Service Config Variables that will be overridden by the Environment Service Configuration Overrides.
This is just a preview of the tutorial, but let’s take a peek!

This is the Service Config Var:
image

And this is the Override coming from the DEV Environment!
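To make that concrete early on: in the Manifests, a Service Config Variable is referenced through a Harness expression, and the Environment Override swaps the value per Environment. A one-line sketch, with an illustrative variable name:

vaultUrl: ${serviceVariable.vault_url}   # the DEV Override resolves this to the DEV Vault Server URL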

Tutorial - Part 1 - Creating a more professional Vault Agent Deployment in K8s

Requirement

A little K8s experience, a target cluster, and a good old Harness Account.
And, since we will use Persistent Volumes, the Vault Agent Workload must reside in the same Kubernetes Cluster Namespace as the Delegate.

Tasks on Vault

Please refer to my first tutorial (linked at the beginning of this article) to configure a good AppRole in your Vault Server.

Security Concerns

Since the RoleID and the SecretID are also super important Secrets, I decided to store them in the Default Google KMS Secrets Manager managed by Harness, and then use the templating engine to retrieve them for me.

Naturally, this is something YOU must decide with your SecOps or Security Architects. I’m showing what I did in my use case, ok?

The Vault Agent Kubernetes Manifest Source

Guys, I’ll keep all files related to our Vault Agent K8s Service Manifests here, ok?

I won’t break this repo, but it’s a Lab repo. Just fork it, to be safe.

First Step

The first step is to store both the RoleID and the SecretID in Harness. They were created in my first tutorial, in case you want to adopt the same approach. Again, the underlying Auth Method has no relationship with Harness; it’s a decision you should make with your Vault Admin.

image

image

Important: if you want to add another security layer, you can store them as base64 and use a data field in the Kubernetes Secret Manifest, instead of the stringData field it currently uses, ok? It’s up to you!
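For reference, here’s a minimal sketch of both variants (the Secret name and key are illustrative; the data form just adds a base64 hop):

apiVersion: v1
kind: Secret
metadata:
  name: vault-agent-approle
type: Opaque
stringData:
  role_id: <plain-text-role-id>
# ...or, for the extra base64 layer, replace stringData with:
# data:
#   role_id: <base64-encoded-role-id>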

Second Step

Let’s create the Harness Service that will host the Vault Agent workload!
image

And add the VAULT Docker Image that is available in the Artifact Source:
image

And make sure to link your Remote Manifest:

And kindly add that Service Config Variable we talked about earlier:
image

Third Step

Now, just create a good Environment, a good Infrastructure Definition, and add the Override.
This is the trick for handling multiple Vault Servers… we will use these variables in the Templating step.

Fourth Step

Now, let’s explore a few things related to the Vault Agent Manifest files.

If you take a look at the values.yaml file, you will see that there are a few important things there:

  • The Persistent Volume Claim name, which is the mechanism I chose to share the SINK FILE between this Deployment and the Delegate Deployment;
  • the underlying Auth Method Config HCL File;
  • and the RoleID and SecretID expressions that retrieve them from the Default Secrets Manager.

Also, notice that I use the Harness ${env.name} when I need to keep one Object per Environment.
This will help us to design a multiple Vault Server strategy.
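To make that concrete, here’s a trimmed values.yaml sketch (key and secret names are illustrative; the file in the Lab repo is the source of truth):

# values.yaml (trimmed; key and secret names are illustrative)
env: ${env.name}                                        # dev, qa, prod…
image: ${artifact.metadata.image}                       # the Vault image from the Artifact Source
pvcName: vault-agent-sink-pvc-${env.name}               # one PVC per Environment
vaultUrl: ${serviceVariable.vault_url}                  # set by the Environment Override
roleId: ${secrets.getValue("vault-agent-role-id")}      # from the Default Secrets Manager
secretId: ${secrets.getValue("vault-agent-secret-id")}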

image

And you can see how I’m handling all of that in the deployments.yaml file, inside the templates folder.
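For orientation, here is a heavily trimmed sketch of the idea (the real deployments.yaml in the repo is the source of truth): the Agent container reads its Config HCL from a ConfigMap and writes the sink file to the shared PVC.

# Trimmed Pod spec sketch; the names come from the values.yaml keys above
spec:
  containers:
    - name: vault-agent
      image: {{.Values.image}}
      args: ["agent", "-config=/etc/vault/agent-config.hcl"]
      volumeMounts:
        - name: agent-config
          mountPath: /etc/vault
        - name: vault-agent-sink-shared
          mountPath: /vault/sink                       # the Agent writes the token here
  volumes:
    - name: agent-config
      configMap:
        name: vault-agent-config-{{.Values.env}}       # holds the Auth Method Config HCL File
    - name: vault-agent-sink-shared
      persistentVolumeClaim:
        claimName: {{.Values.pvcName}}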

Fifth Step

Now, let’s create a good Rolling Deployment Workflow in Harness.
image

Sixth Step

If you run it as is… you are already shipping one Vault Agent!

Seventh Step

So, how do I know that this is working?
You can output the logs to your favorite tool that has Logging Capabilities. In my case: Splunk, ELK, and Graylog.

As we are already very far from home, let’s keep the party inside the K8s world and use a ReadinessProbe to make sure that the sink file is available!
Since the sink file will only be present, and filled with a token, at the end of this process, it’s a good way to monitor whether the Pod is healthy.
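A minimal sketch of such a probe (the sink path is illustrative; match it to your Agent config):

readinessProbe:
  exec:
    # Ready only once the Agent has written a non-empty sink file
    command: ["test", "-s", "/vault/sink/token"]
  initialDelaySeconds: 5
  periodSeconds: 10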

image

Even with that, nothing in that probe will tell you whether the Token is valid.
The Vault Agent will create the sink file even if your Auth Method has bad credentials. This, like everything else here, is HashiCorp Vault design; nothing about it is related to Harness.

Maybe you want to create a custom script that exports VAULT_ADDR and VAULT_TOKEN, and tests the token with a good command like:

export VAULT_ADDR=https://<your-vault-server>:8200
VAULT_TOKEN=<the_token_generated_in_sink> vault secrets list

Either way, I recommend checking the logs with this command:

kubectl -n harness-delegate logs pod/vault-agent-<...>

This is a good Deployment:
image

And that’s bad news:
image

Part One - Outcome

So, if that check passes, using the same logic you would bake into an advanced Readiness Probe, the token must be good!
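A hypothetical version of that advanced probe, validating the token itself instead of just the file (it assumes VAULT_ADDR is set on the container, and the sink path is illustrative):

readinessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      # vault token lookup fails if the sink token is invalid or expired
      - VAULT_TOKEN=$(cat /vault/sink/token) vault token lookup
  periodSeconds: 30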

Tutorial - Part 2 - Sharing the token with the Delegate Deployment (StatefulSet, to be specific)

So, I guess we are almost at the end of the hard work. Now, it’s time to change our Harness Delegate Manifest. Yes, the one that you used to install the Delegate in your Cluster.

The only requirement is that both the Vault Agent and the Delegate must live in the same K8s Cluster Namespace. We are using a PVC to share the sink file, right?
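For reference, the shared PVC can be as simple as this (a hypothetical manifest; the Lab repo has the one I actually use):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vault-agent-sink-pvc-dev        # one per Environment, hence the -dev suffix
  namespace: harness-delegate
spec:
  # ReadWriteMany lets the Agent and the Delegate mount it from any node.
  # If your StorageClass only supports ReadWriteOnce, both Pods must land
  # on the same node.
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi                      # the sink file is tiny; size barely matters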

First Step

So, let’s change our Delegate Manifest a little bit, so it can reach our sink file via Volume.

I am using the very same manifest that comes from the Delegate UI wizard, no tricks here.

The first change is to add the Volume definition at the end of the Harness Delegate StatefulSet block:
image

volumes:
# The claim name must match the PVC created by the Vault Agent Deployment
# for this Environment (here, DEV).
- name: vault-agent-sink-shared
  persistentVolumeClaim:
    claimName: vault-agent-sink-pvc-dev

And, of course, adding the mount to make the file available to our Harness Delegate Container:
image
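The mount itself is just a few lines inside the Delegate container spec (the mountPath is my choice; point your Secrets Manager config at the same path):

volumeMounts:
- name: vault-agent-sink-shared
  mountPath: /vault/sink
  readOnly: true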

Don’t worry, you can get a good example in the same Lab project.

Second Step

Well, now the Delegate Container can see the sink file. This is awesome!

So, let’s go ahead and check if Harness is able to integrate with Vault via the new Agent Method, but now using a Kubernetes Delegate!

Third Step

Now, we create and edit a few Secrets, just to stress the Token a little!
image

Nice!!! :hot_face: :100: :shamrock: :rocket: :partying_face: :fireworks:
image


Tags:

<cloud: aws, gcp, azure>
<function: ci,cd>
<role: swdev,devops,secops,itexec>
<type: howto, experts>
<category: gitops, vault, secrets>
