Drone services mount source code

We are running Drone v1.0.0 on Kubernetes with GitHub. We have a number of services and steps defined in a pipeline for end-to-end testing.

Our service containers seem to mount the source code (/drone/src) and use the source code path as their working directory when executing commands. This makes it cumbersome to write custom commands in services, because I need to change the working directory manually, and to do that I need to look up the Dockerfile for each service I'm using.
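For illustration (the service name, image, and path here are made up), a service with custom commands currently ends up looking something like this:

```yaml
kind: pipeline
name: default

services:
  - name: my-service            # hypothetical name
    image: example/my-service   # hypothetical image
    commands:
      # the service starts in /drone/src, so we have to look up the
      # working directory the image expects and cd there ourselves
      - cd /opt/service
      - ./run.sh
```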

The source code mounting also seems to make Kubernetes schedule everything on the same node. This causes problems for us when the selected node cannot fit all steps and services. Currently we have many builds that just halt forever because Kubernetes is unable to schedule all steps and services on the same node. I think this could be partially solved by not requiring services to mount the same volume as the steps, since they could then be scheduled on another node.

The pods of the halted builds report the following events:

```
Events:
  Type     Reason            Age                     From               Message
  ----     ------            ----                    ----               -------
  Warning  FailedScheduling  2m29s (x2017 over 10h)  default-scheduler  0/12 nodes are available: 1 NodeUnschedulable, 1 PodToleratesNodeTaints, 11 MatchNodeSelector, 2 Insufficient cpu.
```

Edit: it seems that the reason all services and steps run on the same node is that the KUBERNETES_NODE environment variable is set to the node the drone-job runs on. Maybe this node selector only needs to be applied to steps, not to services?
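If so, I assume the generated step/service pods end up with something like the following node selector (purely an illustrative reconstruction on my part; the pod name, label key, and values are guesses):

```yaml
# illustrative reconstruction of a generated step/service pod
apiVersion: v1
kind: Pod
metadata:
  name: drone-step-abc123            # hypothetical name
spec:
  nodeSelector:
    # pins the pod to the node the drone-job landed on, which
    # would explain the MatchNodeSelector failures in the events above
    kubernetes.io/hostname: node-7   # hypothetical node name
  containers:
    - name: step
      image: example/tester          # hypothetical image
```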

> The source code mounting also seems to make Kubernetes schedule everything on the same node.

This is by design. Drone runs everything on the same machine, using the same workspace, similar to Knative.

Why do services need to run on the same machine? Pods can communicate fine across nodes.
How can I make sure that builds are not scheduled on a node that doesn't have enough resources? The node is initially chosen for the drone-job, which is not very resource intensive, so the job might end up on a node with almost no memory or CPU left to allocate…

> Why do services need to run on the same machine?

Because sometimes services need to load configuration files and other data from the repository, and therefore need to share the workspace.
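For example (a hypothetical service definition), a service might need to start with a config file that lives in the repository:

```yaml
services:
  - name: proxy                # hypothetical service
    image: nginx
    commands:
      # the config file is committed to the repository, so the
      # service needs the shared workspace mounted to read it
      - nginx -c /drone/src/test/nginx.conf -g 'daemon off;'
```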

Is there a way to disable this behavior? As I mentioned, it causes an issue where pods cannot be scheduled because the chosen node is close to full with respect to resource requests. I use limits and requests for all steps/services in Drone. If Kubernetes picks the wrong node for the drone-job, the tests/build will hang indefinitely because the remaining pods cannot start on the selected node.
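For reference, this is roughly how we declare them (step name, image, and values are illustrative):

```yaml
steps:
  - name: e2e-tests            # illustrative step
    image: example/tester      # illustrative image
    resources:
      requests:
        cpu: 1000              # millicores
        memory: 512MiB
      limits:
        cpu: 2000
        memory: 1GiB
```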

If it cannot be disabled, is there another way to work around this? I think this will be an issue for anybody using resource requests/limits (which you really should use) in a Kubernetes cluster.