[Vault] Vault Agent - Handling multiple Vault Servers with Kubernetes Delegates

Howdy, gang! :rocket:

Introduction

You can consider this post a real-life use case showing how Vault Agent can help you when you have multiple Vault Servers: usually, one Server per Environment (DEV, QA, UAT, PRD, etc.).

We’ll use Harness to handle some smart variables that make deploying the Vault Agent to multiple Environments easy to achieve. You can refer to the tutorial above if you need a deep dive into how Vault Agent can live inside a Kubernetes Cluster.

Buckle up! :rocket:

Scenario Description

In my case, I only have two different Vault Servers: one for DEV, and the other for PRD. They are both living in my AWS Account.

And I have two “client” Kubernetes Clusters, one for DEV and one for PRD:

  • My DEV Cluster is a GKE;
  • and my PRD one is an EKS.

The key to this use case is still Kubernetes Persistent Volumes. The PV can exist outside our Manifest YAML files, but I decided to keep the Persistent Volume Claim directly inside the Manifest files, in case you don’t already have one.
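For reference, here’s a minimal sketch of what that PVC can look like. The claim name matches the DEV one I use later on; the size and access mode are just reasonable assumptions for a tiny sink file:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vault-agent-sink-pvc-dev
spec:
  accessModes:
    - ReadWriteOnce   # if the Agent and Delegate Pods can land on different nodes, you may need ReadWriteMany
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi    # the sink file is tiny; this is plenty
```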

Storage Class

Also, I decided not to templatize the StorageClass name. So, there’s an SC named standard in both my AWS and GCP Accounts.

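You can confirm the class exists on each Cluster straight from the terminal:

```sh
# run once per kubectl context; both Clusters should return a class named "standard"
kubectl get storageclass standard
```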

Tutorial

Requirement

Since we’ll need Harness to deploy the Vault Agent Workload, make sure you have a Delegate running on both K8s Clusters.
Please keep in mind that my Vault Agent will run in the same namespace as the Harness Delegate. So, in my case: harness-delegate.
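A quick sanity check on each Cluster before moving on doesn’t hurt:

```sh
# the Delegate Pod(s) should be Running in their namespace on both Clusters
kubectl -n harness-delegate get pods
```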

First Step

If you read the tutorial referenced at the beginning of this post, you will see that we have a single and super clear Harness Service to manage our Vault Agent specs.

You can see that I decided to use Vault’s default Docker image.
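To give you an idea of what lives inside that Service, here’s a rough sketch of the Agent configuration I keep there, delivered as a ConfigMap. Treat it as illustrative: the file paths and the AppRole auto-auth layout are assumptions from my setup, and the `${serviceVariable.*}` expressions are the Harness Config Variables we’ll add in the next step:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: vault-agent-config
data:
  vault-agent.hcl: |
    vault {
      address = "${serviceVariable.vault_server_full_addr}"
    }

    auto_auth {
      method "approle" {
        config = {
          role_id_file_path   = "/vault/approle/role_id"
          secret_id_file_path = "/vault/approle/secret_id"
        }
      }

      # the token lands in a file on the shared PVC, where the Delegate can read it
      sink "file" {
        config = {
          path = "/vault-agent/sink/token"
        }
      }
    }
```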

Second Step

Still in the Service UI, please make sure that you add Config Variables that will handle the differences between the Vault Servers.

The next step will explain how we can make this a good template slot for multiple environments.
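In my case there are three of them. I give them placeholder defaults and let the Environment-level Overrides (next step) supply the real values:

```yaml
roleid_secretname: override-me-per-environment
secretid_secretname: override-me-per-environment
vault_server_full_addr: override-me-per-environment
```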

Third Step

Alright, time to define our Environments!

You can see that my Harness Application already has two nice Environments, and that I have Cloud Providers defined for DEV (GCP) and PRD (EKS).

So, our Infrastructure Definition (inside the Environments UI) will hold our Override trick. We create one Infrastructure Definition for the dev Environment and one for the prd Environment.

Make sure you add Service Configuration Overrides to your Vault Service:
1-) roleid_secretname: the Secret Name that is holding your RoleID for the given Environment.
2-) secretid_secretname: the Secret Name that is holding your SecretID for the given Environment.
3-) vault_server_full_addr: your Vault Server Address, just like you would put in the VAULT_ADDR env variable.
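To make the trick concrete, my dev overrides end up looking roughly like this (the names and address below are made-up examples; use whatever you actually stored):

```yaml
roleid_secretname: vault-agent-roleid-dev
secretid_secretname: vault-agent-secretid-dev
vault_server_full_addr: https://vault-dev.example.com:8200
```

The prd Environment gets the same three keys, pointing at the PRD Server and Secrets instead.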

Fourth Step

Naturally, for this lab, please make sure that you have those Secrets stored in Harness.
If you decide to store those secrets elsewhere, you can use Shell Script steps to fetch them and make them available as context variables, and then customize this lab to fit your use case.
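For example, if your RoleID lived in AWS Secrets Manager instead of Harness, a Shell Script step could pull it with something like this (the secret name is hypothetical):

```sh
# fetch the RoleID and hand it to later steps as a variable
ROLE_ID=$(aws secretsmanager get-secret-value \
  --secret-id vault-agent-roleid-dev \
  --query SecretString \
  --output text)
echo "Fetched RoleID of length ${#ROLE_ID}"
```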


Fifth Step

Now, let’s create a good Rolling Deployment Workflow in Harness.

We will templatize it, so we can run the exact same Workflow for DEV and for PRD.


Sixth Step

Alright, they were Deployed successfully in both places: in AWS (my PRD Cluster) and in GCP (my DEV Cluster).

Now, we are ready to share the sink file with our Delegates.

Seventh Step

So, let’s change our Delegate Manifest a little bit, so it can reach our sink file via Volume.
Since we usually run the Delegate Manifest with kubectl apply, just make sure that your Volume and VolumeMounts are referring to the correct PVC.
I decided to keep them with dev and prd suffixes. But that’s up to you.

I am using the very same manifest that comes from the Delegate UI wizard, no tricks here.

The first trick is to add the Volume definition at the end of the Harness Delegate StatefulSet block:

volumes:
- name: vault-agent-sink-shared
  persistentVolumeClaim:
    claimName: vault-agent-sink-pvc-dev

And, of course, adding the mount to make the file available to our Harness Delegate Container:
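Something like this, inside the Delegate container spec. The mount name has to match the Volume above; the mount path is my choice, not a requirement:

```yaml
volumeMounts:
- name: vault-agent-sink-shared
  mountPath: /vault-agent/sink   # the sink token shows up as /vault-agent/sink/token
```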

Don’t worry, you can get a good example here, at the same project.

Eighth Step

Well, we can see that now the Delegate Container can see the sink file. This is awesome!

The Vault Agent Pod writes the token into the sink file inside the PVC, and the Delegate Pod reads it from inside its own filesystem.
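If you want to verify both sides from your terminal, something like this does it (the Pod names are placeholders, and the sink path is the one assumed above):

```sh
# Vault Agent side: the token written into the sink file on the PVC
kubectl -n harness-delegate exec <vault-agent-pod> -- cat /vault-agent/sink/token

# Delegate side: the very same file, visible through the shared PVC mount
kubectl -n harness-delegate exec <harness-delegate-pod> -- cat /vault-agent/sink/token
```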

We can test this sink token with something like:
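For instance, from inside the Delegate container, assuming the vault CLI is available there (VAULT_ADDR and the sink path are from my setup):

```sh
# a valid sink token should return its own accessor, policies, and TTL
export VAULT_ADDR=https://<your-vault-server>:8200
vault token lookup "$(cat /vault-agent/sink/token)"
```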

Last Step

Sweet. Time to add both of our Vault Servers as Harness Secrets Managers.

And here goes my config: the key part is choosing Vault Agent as the auth option and pointing the Sink Path at the token file our Agent writes.

Let’s create a few Secrets, to test them out against our new Secrets Managers!

And you can see that it worked!


Tags:

<cloud: aws, gcp, azure>
<function: ci,cd>
<role: swdev,devops,secops,itexec>
<type: howto, experts>
<category: gitops, vault, secrets>
