Docker Hub rate limits and Drone Cloud exemption?

It turns out our earlier fix didn’t work.

We have since started using /root/.docker/config.json as described in How to prevent DockerHub pull rate limit errors.
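For readers unfamiliar with that file, a minimal config.json looks roughly like this (the registry URL is Docker Hub’s default; the auth value is a placeholder for the base64-encoded `username:password` pair that `docker login` writes):

```json
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "REPLACE_WITH_BASE64_USERNAME_COLON_PASSWORD"
    }
  }
}
```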

We have ~55 pipelines with ~200 steps that build using plugins/docker. Modifying every one of those pipelines individually is untenable, and we have neither the time nor the interest to build, deploy, and maintain a registry mirror.

Since we can’t modify the pipelines or set up a mirror, we are trying to use DRONE_RUNNER_VOLUMES to have the Drone agents mount the host Docker credentials into each pipeline step, so that the daemon is already authenticated with Docker Hub by the time it starts up. However, we are hitting a problem where the variable somehow ends up defined twice in the agent’s configuration:

We are using a drone autoscaler to launch drone-docker-runner instances in AWS EC2, and providing some additional configuration by defining an agent configuration file using DRONE_AGENT_ENV_FILE.

The env file looks like this:
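A sketch of the relevant line, reconstructed from the discussion below (the host and container paths are illustrative):

```shell
# Agent env file referenced by DRONE_AGENT_ENV_FILE
# Mount the host Docker credentials into every pipeline step
DRONE_RUNNER_VOLUMES=/root/.docker:/root/.docker
```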


All these environment variables are set as expected, but somehow DRONE_RUNNER_VOLUMES ends up defined twice in the agent container that starts. The following is the relevant portion of `docker inspect agent` for one of the auto-scaled agents. Since the empty version of DRONE_RUNNER_VOLUMES is defined second, the value we set is ignored and the mount does not happen as expected.

            "Env": [

To be clear, we are a paying customer with a Drone enterprise license, and so far the response to this issue has been lackluster. We need a solution, not a workaround.

The DRONE_RUNNER_VOLUMES variable is used to configure global volumes that are mounted into each pipeline step, which is not the solution you are looking for in this case. If you want to configure the autoscaler to create the runner (agent) container with a host volume mount to load the config.json file, you can use the DRONE_AGENT_VOLUMES environment variable.
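A sketch of that setting on the autoscaler (the host path and mount point are illustrative):

```shell
# Autoscaler setting: mount the host Docker config into each agent container
DRONE_AGENT_VOLUMES=/root/.docker:/root/.docker
```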

I feel like we’re talking past one another. We have already configured DRONE_AGENT_VOLUMES to allow the agent on each build machine to pull from Docker Hub without additional configuration. This works as expected and has mostly solved the Docker Hub rate limiting for us.

The problem we face now is that when using steps with plugins/docker you can only provide a single set of credentials. We host our private images on Quay, so we already provide those.
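For context, a typical step looks like this (repo name and secret names are placeholders); there is only one registry/username/password triple, so using it for the Quay credentials leaves nothing for Docker Hub pulls:

```yaml
steps:
  - name: publish
    image: plugins/docker
    settings:
      registry: quay.io
      repo: quay.io/example/app
      username:
        from_secret: quay_username
      password:
        from_secret: quay_password
```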

The options provided, setting up a registry mirror or adding a config setting to each pipeline step, are simply not viable with our current development capacity.

We hoped to use DRONE_RUNNER_VOLUMES to mount the agent’s Docker credentials into all pipeline steps, so that plugins/docker could pull images using an actual account. In local testing with `docker run`, this worked as intended, but we are currently blocked by the duplicate DRONE_RUNNER_VOLUMES issue detailed above.

I’m not sure how that happened, and in some basic testing I couldn’t reproduce the behavior locally with `docker run`. Unless I’m missing a typo in my env vars above, the Drone agent is somehow starting with DRONE_RUNNER_VOLUMES defined twice.

The autoscaler accepts a named DRONE_RUNNER_VOLUMES variable, which means you do not need to set it via the freeform DRONE_AGENT_ENV_FILE. Using the former ensures the value is set correctly, while using the latter results in duplicate values.
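A sketch, assuming the autoscaler itself is launched as a container (other required settings such as the server address and token are omitted):

```shell
# Pass DRONE_RUNNER_VOLUMES as a named variable on the autoscaler;
# it forwards the value to every agent it creates.
docker run -d --name autoscaler \
  -e DRONE_RUNNER_VOLUMES=/root/.docker:/root/.docker \
  drone/autoscaler
```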

Thanks, we’ll try that. However, I want to call attention to the fact that this is not documented in the autoscaler configuration reference, so I had no way of knowing it was supported.

I wanted to follow up here: DRONE_RUNNER_VOLUMES was successful in letting us share the host Docker login with plugins/docker steps. However, one thing we failed to consider is that the `docker login` performed by plugins/docker would persist its login to the host. This led to auth failures in the middle of Docker builds, because another pipeline would log in and change the active user.

I would not recommend that future readers use this approach unless all pipelines either do not authenticate with a Docker registry, or they all share a single global account.

We’ve opted to revert this change and accept that we will occasionally hit rate limits with Docker Hub until we get our pipelines updated to inject credentials themselves.

Everything discussed in this thread has been consolidated in this FAQ:

The above FAQ provides solutions and workarounds for the majority of installations. If you have an edge case, please open a new thread specific to that edge case.