Altering the 'readiness' check for rolling deployments

I’ve come across an issue when performing rolling deployments of my application.

I’ve got a couple of apps that use the leader-elector pattern, and I can’t deploy them in Harness. I’m looking for a steer on how to configure Harness to allow the deployment, or possibly how to alter my Helm chart.

In the leader-elector pattern, you run a Kubernetes Deployment with several replicas (say, three), and the pods communicate with each other to elect a leader; only this leader pod is supposed to receive traffic. The idea is that you have a single instance handling requests and two other pods on standby. If the leader dies, one of the remaining pods is quickly elected leader and service resumes almost immediately. That’s faster than running a single replica and trusting Kubernetes to bring up a replacement pod in a timely fashion.

In this pattern, only the leader pod reports as ‘live, ready’ to Kubernetes, while the other two report only as ‘live’. This lets you exploit a useful feature of Kubernetes Services: they only forward traffic to ready pods. Since only the leader is ready, any traffic to the Deployment goes straight to the leader, and the other electors sit idle as warm standbys.
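
For illustration, here’s a minimal sketch of the manifest shape that produces this behaviour; the image name and the `/healthz` and `/leader` probe paths are hypothetical stand-ins for whatever your app actually exposes:

```yaml
# Sketch only: image name and probe paths are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: example.com/my-app:latest
          ports:
            - containerPort: 8080
          # Liveness: every pod answers, so none get restarted.
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
          # Readiness: only the current leader returns 200 here, so
          # the Service only ever routes traffic to the leader pod.
          readinessProbe:
            httpGet:
              path: /leader
              port: 8080
            periodSeconds: 5
```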

So, here’s how that relates to Harness.

When I try to install my app with more than one replica, Harness deploys the manifests and then waits for the deployment to complete, presumably using something like `kubectl rollout status deployment/<my-app>`.

However, for the reasons above, that check never passes: only one of my three replicas will ever report ready, so the rollout never ‘completes’.
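
Assuming Harness really does run the rollout status check, you can reproduce the hang locally; the deployment name is a placeholder:

```bash
# With 3 replicas but only the leader ever ready, this blocks until
# it times out, because `kubectl rollout status` waits for all
# updated replicas to become available.
kubectl rollout status deployment/my-app --timeout=2m
# Typical output while it waits (followed by a timeout error):
#   Waiting for deployment "my-app" rollout to finish:
#   1 of 3 updated replicas are available...
```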

Are there any options for me to alter the post-deploy completion check?

In my case, what I’m probably looking for is something like ‘the deployment is available’ (true when at least one replica is available on the latest version of the image) rather than ‘the deployment is complete’ (true when `status.availableReplicas` equals `spec.replicas`).
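
For what it’s worth, checks along those lines already exist in stock kubectl, so if Harness can run a custom step or script, something like the sketch below (deployment name hypothetical) would capture ‘available’ rather than ‘complete’:

```bash
# Option 1: wait for exactly one available replica via jsonpath
# (kubectl >= 1.23; matches on exact value, which works here
# because only the leader ever becomes ready).
kubectl wait deployment/my-app \
  --for=jsonpath='{.status.availableReplicas}'=1 --timeout=2m

# Option 2: rely on the Deployment's Available condition. Note that
# with the default rolling-update settings (maxUnavailable: 25%,
# rounded down to 0 for 3 replicas) this still demands all replicas
# be ready; relaxing maxUnavailable to 2 in the rollout strategy
# makes one ready leader count as "minimum availability".
kubectl wait deployment/my-app --for=condition=Available --timeout=2m
```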

Welcome to the Community, @stevecooperorg!

Interesting use case. I used to avoid leader-election/consensus-based workloads in K8s; in the past they were hard to pull off (cough Apache ZooKeeper cough). Glad that there are ways to pull it off now.

One option is to have Harness not manage the deployment and instead “Direct Apply” the manifests that are needed: https://docs.harness.io/article/4vjgmjcj6z-deploy-manifests-separately-using-apply-step
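
If I’m reading that doc right, the idea is to annotate the manifest so Harness skips it during the main rollout (and its completion check), then deploy it with an Apply step; treat the annotation below as an assumption to verify against your Harness version:

```yaml
# Sketch: with this annotation, Harness should ignore the manifest
# during the normal deployment phase, so the rollout-status check
# never runs against it; an Apply step then applies it directly.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    harness.io/direct-apply: "true"
```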

Cheers,

-Ravi

Thanks! We’ll take a look.