We can finally start scraping the Delegate Metrics and Health Endpoints!
In this quick tutorial, I’ll just edit our Delegate Service to expose the port we need, and I’ll share a quick Prometheus configuration that scrapes the METRICS endpoint via the Delegate’s Service.
Naturally, I opted for the LoadBalancer Service because my Prometheus runs on AWS, and my K8s Cluster is on GKE. You can achieve something similar with NodePort, or even ClusterIP (if the Prometheus scraping entity is running inside your cluster).
Important: This will only work for the Immutable Delegate. And we may call it by another name soon, ok?
Maybe Next-Gen Delegate or something cooler.
And please keep in mind that, for the Prometheus part, this is not a fancy Thanos implementation!
I opted to create a lab using a Prometheus systemd unit I had from another gig.
Keeping it as simple as possible!
These two endpoints are only available on the Immutable (new) Delegates, so your account should be using them already.
This will be the GA Delegate soon, so no worries. If you are really in a hurry, you can submit a support ticket for more details.
You'll also need a Prometheus server to scrape the Metrics endpoint. I'll share my systemd unit running on Linux, if you need a quick lab.
Let’s edit the Kubernetes Delegate Service entity, via its YAML definition.
This action just exposes the port that serves both endpoints in the Delegate's K8s Service YAML:
So, this should be something as simple as:
apiVersion: v1
kind: Service
metadata:
  name: delegate-service
  namespace: harness-delegate<NAMESPACE>
spec:
  type: LoadBalancer<OR_ANY_OTHER>
  selector:
    harness.io/name: <DELEGATE_NAME>
  ports:
    - port: 3460
      name: metrics
      protocol: TCP
And this already works for me - considering that I opted for using the LoadBalancer option:
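To sanity-check the exposure, you can grab the external IP that GKE assigned to the Service and curl the endpoints. This is just a sketch: the service name and namespace must match your YAML, and the /api/metrics and /api/health paths are the ones the Delegate serves on port 3460.

```shell
# Grab the external IP assigned to the LoadBalancer Service
# (adjust the name and namespace to match your YAML)
EXTERNAL_IP=$(kubectl get svc delegate-service \
  -n harness-delegate \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Metrics endpoint (Prometheus text exposition format)
curl -s "http://${EXTERNAL_IP}:3460/api/metrics" | head

# Health endpoint
curl -s "http://${EXTERNAL_IP}:3460/api/health"
```

If the first curl returns lines of metrics, Prometheus will be able to scrape it too.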
Second Step - OPTIONAL
In this step, I'll just share the way I like to start Prometheus labs:
with a good old SYSTEMD UNIT, running as a service.
I usually download the binary here and then:
# I send all the Prometheus files to /opt/prometheus
# and then:
sudo useradd --no-create-home --shell /bin/false prometheus
sudo chown -R prometheus:prometheus /opt/prometheus
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/opt/prometheus/prometheus \
  --config.file /opt/prometheus/prometheus.yml \
  --storage.tsdb.path /opt/prometheus/ \
  --web.console.templates=/opt/prometheus/consoles \
  --web.console.libraries=/opt/prometheus/console_libraries \
  --web.enable-lifecycle

[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable prometheus
sudo systemctl start prometheus
sudo systemctl status prometheus
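A quick way to confirm the unit came up healthy, assuming the default Prometheus port 9090:

```shell
# Should print "active" if the unit started cleanly
systemctl is-active prometheus

# Built-in health and readiness endpoints on the default port
curl -s localhost:9090/-/healthy
curl -s localhost:9090/-/ready
```

Note that /-/healthy and /-/ready are always available; the --web.enable-lifecycle flag in the unit above is what enables the /-/reload endpoint we'll use later.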
Scraping the Metrics Endpoint using Prometheus
I’ve seen much more complex Prometheus implementations, using Thanos, multiple nodes, and servers.
And maybe a configuration management tool like Puppet to manage and enforce the Prometheus configuration and job files (service discoveries, etc. included in the challenge).
But, for this tutorial, we can just add this block to the scrape_configs section of prometheus.yml:
- job_name: "harness-delegate-metrics"
  metrics_path: /api/metrics
  honor_labels: true
  scheme: http
  static_configs:
    - targets: ["220.127.116.11:3460"]
      labels:
        system: "harness"
        component: "delegate"
And then, just hit Prometheus with the reload curl (the lifecycle equivalent of a SIGHUP) I’m sure you are used to:
curl -X POST localhost:9090/-/reload
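Before (or after) reloading, it's worth validating the config and checking that the new target is actually up. A sketch, assuming the paths from the systemd unit above:

```shell
# promtool ships in the same tarball as the prometheus binary
/opt/prometheus/promtool check config /opt/prometheus/prometheus.yml

# Once the first scrape succeeds, this instant query should show
# the delegate target with value "1"
curl -sG 'localhost:9090/api/v1/query' \
  --data-urlencode 'query=up{job="harness-delegate-metrics"}'
```

You can also just open the Targets page in the Prometheus UI and look for the harness-delegate-metrics job.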
For now, that’s pretty much it.
I believe we’ll constantly be enhancing our probe and exporter capabilities for the Delegate, since this project is still in its initial steps.
I’m already playing a bit with Grafana. I’ll keep you guys posted!
<cloud: aws, gcp, azure>
<type: howto, experts>
<category: observability, prometheus, kubernetes, k8s, deployment, monitoring, delegate>