Can drone-runner-kube create worker Pods with custom specs (like readinessProbe)?

Hey Community
I’m deploying the Drone server and drone-runner-kube to a Kubernetes cluster. I got to the point where the runner tries to create the Pods that execute my pipeline steps, but it fails because my cluster enforces admission rules that every Pod must respect (such as livenessProbe and readinessProbe). My question is: how can I pass a template or deployment structure for the Pods that the kube runner creates?
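To illustrate, the policy rejects any Pod that does not declare probes along these lines (the field names are standard Kubernetes; the image and probe values here are just an example, not my actual policy):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-step
spec:
  containers:
    - name: step
      image: alpine
      livenessProbe:
        exec:
          command: ["true"]
        initialDelaySeconds: 5
      readinessProbe:
        exec:
          command: ["true"]
        periodSeconds: 10
```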
Thanks for reading and helping

The Kubernetes runner does not support any form of templating when creating Pods, which means there is no way to configure a liveness or readiness probe. Your cluster admin would need to loosen those restrictions for Pods created by Drone.

Thank you for your answer.
Do you think this is a feature that could be added in a future release? Or how complicated would it be for someone new to the codebase and to Go to customize the source and add this functionality?
I’d also appreciate it if you could point me to the parts of the code responsible for creating and launching the Pods.
Thank you

Templating sort of implies the Kubernetes runner generates a YAML file and then uses the Kubernetes command line tools to apply it, but that is not quite how things work under the hood. Instead, the runner uses the Kubernetes Go client to build Go data structures and create Pods directly through the API. Templating would make sense if we were building YAML files, but since we are not, it is not really a viable solution.

I also want to be fully transparent that I am not entirely sure we would accept a patch for liveness probe options. These things make sense for long-running services, but not for short-lived pipeline containers, especially since pipeline containers must be configured to never restart. Also, many of our plugins use scratch as the base image, meaning they contain no shell or extra programs, so creating dummy probes would not even be possible.

I think the best solution to this issue is for your cluster admin to loosen their policy for Pods created by the Kubernetes runner. I am not sure how the policy is configured, but perhaps they can exempt Pods that match certain labels, or Pods created in a specific namespace (both of which can be configured for the Kubernetes runner).
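As one concrete illustration, if the cluster happened to use Kyverno for this policy (an assumption on my part; your admin may use a different admission controller, and the `drone` namespace name is hypothetical), the rule could exclude the runner's namespace along these lines:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-probes
spec:
  rules:
    - name: check-probes
      match:
        any:
          - resources:
              kinds:
                - Pod
      exclude:
        any:
          - resources:
              namespaces:
                - drone   # namespace the Kubernetes runner creates Pods in
      validate:
        message: "Liveness and readiness probes are required."
        pattern:
          spec:
            containers:
              - livenessProbe:
                  periodSeconds: ">0"
                readinessProbe:
                  periodSeconds: ">0"
```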