Howdy, gang!
Introduction
So… you are living the dream, right? You have your workload running in Kubernetes, you have Splunk in your SRE/Monitoring/Observability Tech Stack. And the best thing: you are a happy Harness user.
Naturally, you decide to run your Harness Delegate as a K8s StatefulSet.
You are reading this amazing documentation from Kubernetes.io, and you are still not sure how to achieve a good logging-and-forwarding design.
Worry no more, my noble SRE. That integration is only one file descriptor away from you!
Ahh, with automatic field detection, plus removal of the coloring and other control characters that can mess with the Event.
Buckle up!
Tutorial
Requirements
- A Harness Delegate running in a K8s Cluster;
- A good Splunk HEC (HTTP Event Collector); I decided to run a small Search Head + Indexer in AWS.
- And we’ll take advantage of this Project (a Splunk Helm Chart) to integrate everything:
splunk-connect-for-kubernetes/helm-chart/splunk-connect-for-kubernetes/charts/splunk-kubernetes-logging at develop · splunk/splunk-connect-for-kubernetes · GitHub
First Step
Let’s configure our Splunk HEC and then store the Token in Harness, as a Secret.
1-) HEC UI - An example:
2-) And then there’s the safely stored token (the one we get in Splunk’s HEC setup):
With that, you can see that index=harness_deployed_apps will be the home for our Delegate Logs.
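Before we wire up Kubernetes, you can sanity-check the HEC with a plain curl (a minimal sketch; the hostname and token are placeholders, and my lab runs HEC over plain http on port 8088):

# Send a test event straight to the HEC
curl "http://<SPLUNK_HEC_HOSTNAME>:8088/services/collector/event" \
  -H "Authorization: Splunk <YOUR_HEC_TOKEN>" \
  -d '{"event": "hello from my terminal", "index": "harness_deployed_apps"}'

A healthy HEC answers with {"text":"Success","code":0}.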
Second Step
Alright, time to configure this guy with Harness.
You can see that this is a very easy-to-understand, Splunk-managed Helm Chart.
For a deeper dive regarding the mechanism, please check the README.md.
So, in this step, we create a Harness Service:
And the trick starts here:
1-) Let’s create a Source Repo that points to Splunk’s Project:
2-) Back to the Service UI - Let’s link that nice Helm Chart called splunk-kubernetes-logging, like this:
Branch: main
File/Folder path: helm-chart/splunk-connect-for-kubernetes/charts/splunk-kubernetes-logging/
Helm Version: v3
Alright, looking good!
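By the way, if you want to eyeball the chart before Harness touches your cluster, you can render it locally. This is just a sketch, assuming git and Helm v3 are installed; the dummy --set values only exist to satisfy the chart’s required fields:

# Clone Splunk's project and render the chart to inspect the generated manifests
git clone https://github.com/splunk/splunk-connect-for-kubernetes.git
helm template splunk-logging \
  splunk-connect-for-kubernetes/helm-chart/splunk-connect-for-kubernetes/charts/splunk-kubernetes-logging/ \
  --set splunk.hec.host=dummy \
  --set splunk.hec.token=dummy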
Third Step
We’ll use Harness’s powerful Override Engine so we don’t need to manage a fork of that project. We can do everything we need by overriding a few entries in the default values.yaml (the one that is in the GH Project).
Important: The Harness Delegate Pod writes its logs to stdout, so we can use it with very little customization.
You can read the values.yaml to confirm whether you need to change anything. You may also want to ask a Splunk Admin to review it with you. But it’s pretty straightforward!
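You can confirm the stdout part yourself before going any further (a quick sketch; the namespace and pod name are from my lab, so adjust them to yours):

# The Delegate logs straight to stdout, which is exactly what fluentd will tail
kubectl logs -n harness-delegate <your-delegate-pod> --tail=20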
This is my Override section, at Harness UI:
# Local splunk configurations
splunk:
  # Configurations for HEC (HTTP Event Collector)
  hec:
    # host is required and should be provided by user
    host: "<SPLUNK_HEC_HOSTNAME>"
    # port to HEC, optional, default 8088
    port: "8088"
    # token is required and should be provided by user
    token: "${secrets.getValue("splunk_hec_delegate_logs")}"
    # protocol has two options: "http" and "https", default is "https"
    protocol: "http"
    # indexName tells which index to use, this is optional. If it's not present, will use the "main".
    indexName: "harness_deployed_apps"
fluentd:
  # path of logfiles, default /var/log/containers/*.log
  path: /var/log/containers/*delegate*.log
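A quick way to confirm that path glob will actually match: the kubelet symlinks container logs under /var/log/containers as <pod-name>_<namespace>_<container-name>-<id>.log, so as long as your Delegate Pod name contains “delegate”, we’re good. A sketch to double-check:

# The pod name must contain "delegate" for the glob to pick it up
kubectl get pods -A | grep -i delegate

# Or, if you can shell into a node:
ls /var/log/containers/*delegate*.log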
Fourth Step
Hey, let’s use props.conf to remove any log coloring characters that might ruin readability in Splunk’s Search Head.
To make this easier, I’ll map our Harness Delegate Source Type (Splunk) to a very nice SED command in “System Local”. This will do the trick for us!
Your Splunk Admin might have a Deployer, Apps, etc., to organize this for you, ok?
Anyway, follow me:
1-) If you did not change much from the starting values YAML, this is our source type:
2-) So, please go on and write this file:
vim /opt/splunk/etc/system/local/props.conf
3-) And let’s append our very smart and magic sed command:
[kube:container:harness-delegate-instance]
SEDCMD-Harness = s/\x1B\[[0-9;]*[a-zA-Z]//g
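If you’re curious what that regex actually does, here’s a quick local test (a sketch; it needs GNU sed, since \xHH escapes are a GNU extension, and the escape sequence below just simulates a colored log line):

# Simulate a colored log line and strip the ANSI escape codes with the same regex
printf '\x1b[32mINFO\x1b[0m Delegate task acquired\n' | sed 's/\x1B\[[0-9;]*[a-zA-Z]//g'
# Output: INFO Delegate task acquired

One heads-up: props.conf changes typically need a Splunk restart to take effect, and SEDCMD runs at index time, so only events indexed afterwards get cleaned.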
Fifth Step
Now let’s deploy that! I’ll use a very simple Rolling Deployment!
Here it goes:
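Once the deployment finishes, a quick check never hurts (a sketch; the namespace and release name depend on how you set up your Harness Service and Environment). The chart runs fluentd as a DaemonSet, so you should see one Pod per node:

# Verify the logging DaemonSet rolled out and its pods are Running
kubectl get daemonset,pods -n <your-namespace> | grep -i splunk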
Last Step
Alright, time to check if this is working. I’ll jump into my Search Head Lab.
And I’ll run a bad Splunk query, but I’m the only one in my Lab Cluster. So, no harm done! Here:
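Something like this: no time window, no filters, just the whole index (that’s the “bad” part):

index=harness_deployed_apps

Thanks to the chart’s HEC metadata, you should see fields like pod, namespace, and container_name extracted automatically.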
Outcome:
That looks amazing!
Let’s deploy a Dummy NGINX, and then we can search for its well-known TaskID to make sure I’m not doing anything crazy:
Let’s get our Task ID:
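With the Task ID in hand, a search like this should pin down the exact Delegate activity (the ID below is a made-up placeholder, use your own):

index=harness_deployed_apps "<YOUR_TASK_ID>"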
I’m so lucky to have this Tech Stack! Check it out:
Further reading:
Tags:
<cloud: aws, azure, tanzu, gcp, on-prem, kubernetes, github, splunk, observability, monitoring>
<function: ci,cd>
<role: swdev,devops,secops,itexec,sre>
<type: howto, experts>
<category: triggers, gitops, templates, s3>