Networks field in Pipeline


Because of unfortunate circumstances, the default Docker network collides with one of our internal ones, which forces us to run everything on a non-default IP range.

So I wanted to create a non-default network on another subnet inside the .drone.yml. But Drone does not seem to pick it up at all; instead of reading the networks field, it creates new networks called 0_ID_stepX.

Can you give an example of how to use networks with Drone? For both using and creating networks, if possible.
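For reference, this is the kind of configuration I was attempting — a docker-compose-style networks section. The image, network name, and subnet below are made-up examples:

```yaml
pipeline:
  build:
    image: golang:1.9
    commands:
      - go test ./...

# docker-compose-style networks section -- Drone ignores this
networks:
  default:
    driver: bridge
    ipam:
      config:
        - subnet: 10.123.0.0/16
```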


the default Docker network collides with one of our internal ones

FYI, Drone doesn’t use the default bridge Docker network. It creates a user-defined network per pipeline.

But Drone does not seem to pick it up at all

Drone 0.8 (current stable) does not support the networks section in the yaml.

So I wanted to create non-default network on another subnet inside the .drone.yml.

I feel like this is something we should allow people to configure at the Drone level. For example, with a DRONE_NETWORK_SUBNET flag.

Just found this: “Controlling the CIDR for bridge networks created by Drone?”

That’s exactly the problem we’re having. Not sure how to get around it. We want to put everything Drone-oriented on a specific IP range :slight_smile: Any suggestions on how to do that?

I wonder if we could provide an environment variable allowing you to configure the subnet:
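Something along these lines — the flag name and subnet are hypothetical, nothing here exists in 0.8 yet:

```shell
# hypothetical flag on the agent -- not implemented; subnet is an example
DRONE_NETWORK_SUBNET=10.123.0.0/16

# roughly equivalent to what drone would then run per pipeline:
docker network create --driver bridge --subnet 10.123.0.0/16 0_<build-id>_default
```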


Yes, that would definitely solve our problems. I tried using DRONE_NETWORK, but it seems that it still creates the default networks?


root@droneci01:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
d84c8bcd414e        bridge              bridge              local
839e5a7656f2        drone_drone         bridge              local
fdfa6d860f38        host                host                local
bd5b21f62584        none                null                local

but it seems that it still creates the default networks

Correct, the DRONE_NETWORK parameter does not override the default network. It attaches the networks named in the flag in addition to the default.
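A minimal sketch of how the existing flag behaves, assuming you pre-create a network yourself — the network name and subnet are made-up examples:

```shell
# create a network on a safe range up front (name/subnet are examples)
docker network create --subnet 10.200.0.0/16 shared-net

# agent config: build containers get attached to shared-net
# *in addition to* the per-pipeline 0_<id>_default network
DRONE_NETWORK=shared-net
```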

Yes, that would definitely solve our problems.

I would recommend we add the following new, optional environment variables to give you additional control over how Drone creates user-defined networks:
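Something like the following — every name and value here is a proposal, none of these variables exist in Drone 0.8:

```shell
# proposed and hypothetical -- not shipped in 0.8
DRONE_NETWORK_DRIVER=bridge          # driver for per-pipeline networks
DRONE_NETWORK_SUBNET=10.123.0.0/16   # CIDR range to allocate pipeline networks from
```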


Also, out of curiosity, what sort of error messages do you receive when this network collision occurs? Are you running on Kubernetes? I only ask because others might run into this same issue, and knowing how it manifests would be useful. Thanks :slight_smile:

We don’t stumble upon any error messages per se, but the network collision renders the server inaccessible because it collides with the default gateway :slight_smile: All traffic going out of the server still works though, so builds run just fine! It’s just that neither the UI nor SSH can be reached while builds are running.

We started noticing this problem when I ramped up DRONE_PROCS, which let the Docker networks reach 172.21.x.x, which is our own internal range :open_mouth:
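That timing makes sense if you model how Docker’s classic local address pool hands out consecutive /16s starting at 172.17.0.0/16 (the default bridge): the fifth concurrent network lands exactly on 172.21.0.0/16. A simplified sketch, ignoring any daemon configuration:

```python
import ipaddress

def nth_bridge_subnet(n):
    """Simplified model: Docker's local pool allocates consecutive /16s
    starting at 172.17.0.0/16 (the default bridge network)."""
    base = ipaddress.ip_network("172.17.0.0/16")
    return ipaddress.ip_network((int(base.network_address) + n * 2 ** 16, 16))

for i in range(5):
    print(nth_bridge_subnet(i))
# the fifth network (index 4) is 172.21.0.0/16 -- the colliding range
```

So with DRONE_PROCS high enough to keep four pipeline networks alive alongside the default bridge, the next allocation walks straight into the internal range.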

Those would solve our problems, yes! I wish Docker itself supported this, but I guess that would take longer to implement.

Can we work around this until the next release, or whichever release implements the optional network config? We run about 25 builds that can run perfectly in parallel using a matrix, but using 1 proc is very slow since each build takes about 1-2 minutes.

Unfortunately I am not aware of any workarounds; however, this could likely be implemented with a minimal amount of code if you are willing to send a pull request. Perhaps we could get something merged to mainline for you to start using soonish.

First we need to update our pipeline runner:

Once these changes are complete, we need to update the vendored package in the core drone repository and add these configuration parameters as environment variables.
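For context, the runner change mostly amounts to passing an IPAM block when it creates the per-pipeline network; the Docker Engine API’s POST /networks/create already accepts one. The network name and subnet below are illustrative:

```json
{
  "Name": "0_1234_default",
  "Driver": "bridge",
  "IPAM": {
    "Driver": "default",
    "Config": [{ "Subnet": "10.123.0.0/16" }]
  }
}
```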

I would be willing to do that, but not sure if I have the time to spare at this moment. Could I create an issue/feature request for it and write the details in there?

A colleague of mine found a workaround for this. If you add an IP route covering that range, Docker will skip it and use the next available subnet instead. We don’t use that network at all, so now I’m able to run all our builds concurrently with 1*nThreads MAX_PROCS!
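For anyone else hitting this, the workaround looks roughly like the following. 172.21.0.0/16 is our colliding range; the gateway and interface are assumptions for illustration:

```shell
# a static route for the colliding range makes Docker's subnet picker
# skip 172.21.0.0/16 when allocating the next bridge network
ip route add 172.21.0.0/16 via 10.0.0.1 dev eth0  # gateway/iface are examples
```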

Might be valuable to put that in the documentation or such until Docker merges this PR: . It finally seems ready for merge after 2+ years of waiting :sunny:

PS: Thanks for the amazingly fast answers here! Hyped to see developers on top of their game with community responses!