Drone Runner Kube High Resource Requests

We have been following the development of drone-runner-kube closely and it works really well for us.
The only thing I have noticed is that setting the limits through the environment variables applies them per container. That limits each container well, but because every service and step is a new container and each one carries the limit, the pod as a whole ends up requesting a lot of resources.

For example, one of our builds had three steps and, with a limit of 2 CPUs per step, only needed 6 CPUs. But when we ran a build that needed 8 containers, the pod then required 16 CPUs, which we could not schedule.
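To make the arithmetic concrete, here is a small Go sketch (the function name is made up for illustration): the scheduler sizes the pod by the sum of its containers' resource requirements, so identical per-container limits multiply with the number of containers.

```go
package main

import "fmt"

// podCPU is an illustrative helper, not runner code: when every step and
// service container carries the same per-container CPU limit, the pod's
// effective CPU requirement is the per-container value times the number
// of containers.
func podCPU(containers int, perContainerCPU int64) int64 {
	return int64(containers) * perContainerCPU
}

func main() {
	// 3 steps with a 2-CPU limit each: the pod needs 6 CPUs.
	fmt.Println(podCPU(3, 2)) // 6
	// 8 containers with the same limit: 16 CPUs, which did not fit the cluster.
	fmt.Println(podCPU(8, 2)) // 16
}
```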

I have been looking and cannot find a way to set limits across the entire pod, so I have just disabled them for now.

you can set resource limits for each pipeline step, for example:

  - name: build
    image: golang
    commands:
      - go build
      - go test
+   resources:
+     limits:
+       cpu: 1000
+       memory: 500MiB

Yep, I could do that, but I assume it would work the same way: if I wanted 1 CPU for each of two steps, I would still need 2 or more CPUs free for the pod to be scheduled. Unless I make every step as small as possible, I can't set limits and requests.

I am not aware of any other options. Kubernetes defines resource limits per container [1], and Drone has to operate within the constraints of the Kubernetes scheduler. I believe Tekton faces similar design constraints [2][3].

[1] https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container
[2] https://github.com/tektoncd/pipeline/issues/598
[3] https://github.com/tektoncd/pipeline/issues/1045#issuecomment-519990599

Hello @bradrydzewski

Is there a way to define a default setting for the limits at the pipeline level and not at each step ?

yes, the resource limits in the yaml can be set globally by passing the following configuration parameters to the runner:

  DRONE_RESOURCE_LIMIT_CPU
  DRONE_RESOURCE_LIMIT_MEMORY

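For reference, a sketch of how those parameters might be set on the runner's deployment. The variable names follow the DRONE_RESOURCE_LIMIT_* convention referenced elsewhere in this thread; the surrounding manifest and the memory unit are illustrative assumptions, not a definitive configuration:

```yaml
# Illustrative fragment of a drone-runner-kube Deployment; only the env
# section matters here.
spec:
  containers:
    - name: runner
      image: drone/drone-runner-kube
      env:
        - name: DRONE_RESOURCE_LIMIT_CPU
          value: "1000"       # millicpu: 1000 = 1 CPU per step container
        - name: DRONE_RESOURCE_LIMIT_MEMORY
          value: "524288000"  # assumed to be bytes (int64, like the CPU value)
```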
Excellent! Thank you!
Maybe this should be in the runner documentation among the reference variables (I guess DRONE_RESOURCE_REQUEST_CPU and DRONE_RESOURCE_REQUEST_MEMORY also exist).

Also, tell me if I'm wrong, but is it possible to give the placeholder container only a small amount of resources? What I observed is that the placeholder requests the same amount of resources as the step container.


when it is time for the pipeline step to start, the container image is changed from the placeholder to the actual image, which in turn starts the step. To my knowledge, only the image, labels and annotations can be changed once the pod is created; resource limits cannot. Therefore it would not be possible to change this behavior. That being said, I would encourage you to dive into the code, test this out, and send a patch if you can demonstrate it works.

Hmm, from what I have observed, the placeholder does not seem to be replaced by the step container; they appear to run concurrently.
To be honest, I don't really understand the aim of the placeholder.
I will investigate a little more, by the way.

You can audit the code to see where and how the placeholder images are being used:

The Kubernetes runner uses these placeholder images to prevent all pipeline steps from executing immediately when the pod starts. When the pod starts, every container uses the placeholder image, whose entrypoint is a no-op that waits indefinitely. When the runner determines a step is ready to begin, it replaces the placeholder image with the actual image, which in turn starts the actual step. Without placeholder images, sequential step execution would not be possible. The approach is similar to Tekton's, although the implementation differs slightly.
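To make the sequencing idea concrete, here is a stdlib-only Go sketch of the mechanism. The types, names, and the placeholder image tag are illustrative assumptions, not the runner's real API:

```go
package main

import "fmt"

// placeholder is an illustrative no-op image name; every step container is
// created with it so that nothing runs until the runner says so.
const placeholder = "drone/placeholder:1"

type Container struct {
	Name  string
	Image string
}

type Pod struct {
	Containers []Container
}

// newPod creates one container per step, all pointing at the placeholder,
// modeling what the pod spec looks like at creation time.
func newPod(steps []string) *Pod {
	p := &Pod{}
	for _, name := range steps {
		p.Containers = append(p.Containers, Container{Name: name, Image: placeholder})
	}
	return p
}

// startStep swaps a single container's image from the placeholder to the
// real step image, mirroring the image-update patch the runner sends to the
// Kubernetes API server when a step is ready to begin.
func (p *Pod) startStep(name, image string) {
	for i := range p.Containers {
		if p.Containers[i].Name == name {
			p.Containers[i].Image = image
		}
	}
}

func main() {
	pod := newPod([]string{"clone", "build"})
	fmt.Println(pod.Containers[0].Image) // drone/placeholder:1
	pod.startStep("clone", "drone/git")
	fmt.Println(pod.Containers[0].Image) // drone/git
	fmt.Println(pod.Containers[1].Image) // drone/placeholder:1 (not started yet)
}
```

Because only the image field changes, the pod and all of its containers exist from the start, which is also why the placeholder carries the same resource requests as the step it stands in for.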

What’s the DRONE_RESOURCE_LIMIT_CPU int64 unit? Is it millicpu or microcpu?

What’s the equivalent of K8s’ spec.containers[].resources.requests.cpu value of 0.1 or "100m"?

[1] https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu

Looks like millicpu. When I set cpu: 2000, the generated pod has cpu: 2.
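That matches Kubernetes' millicpu convention, where 1000 units equal one CPU. A small illustrative Go helper (not part of the runner) that renders an int64 millicpu value the way Kubernetes displays quantities:

```go
package main

import "fmt"

// millicpuString formats an int64 millicpu value the way Kubernetes renders
// a CPU quantity: whole CPUs without a suffix, fractional CPUs with "m".
// Illustrative helper, not drone-runner-kube code.
func millicpuString(m int64) string {
	if m%1000 == 0 {
		return fmt.Sprintf("%d", m/1000)
	}
	return fmt.Sprintf("%dm", m)
}

func main() {
	fmt.Println(millicpuString(2000)) // 2    (DRONE_RESOURCE_LIMIT_CPU=2000)
	fmt.Println(millicpuString(100))  // 100m (0.1 CPU)
}
```

So the equivalent of `spec.containers[].resources.requests.cpu` set to `0.1` or `"100m"` would be a value of 100.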