How to enable Docker Layer Caching with 1.0

I would like to share a few snippets showing how to take advantage of Docker Layer Caching with Drone CI. Hopefully they will be useful to others.

Docker Hub

---
kind: pipeline
name: build-example-dockerhub
globals:
  - &docker_creds
    username:
      from_secret: docker_username
    password:
      from_secret: docker_password

steps:
  - name: prepare
    image: busybox
    commands:
      - mkdir -p /cache/${DRONE_REPO}/docker
    volumes:
      - name: cache
        path: /cache

  - name: package-dockerhub
    image: plugins/docker
    settings:
      tags: "${DRONE_BRANCH//\//-}${DRONE_TAG//\//-}"
      repo: foo/bar
      create_repository: true
      use_cache: true
      <<: *docker_creds
    volumes:
      - name: docker
        path: /var/lib/docker

volumes:
  - name: cache
    host:
      path: /var/cache
  - name: docker
    host:
      path: /var/cache/${DRONE_REPO}/docker

AWS ECR

---
kind: pipeline
name: build-example-ecr

globals:
  - &registry
      999999999999.dkr.ecr.us-east-1.amazonaws.com
  - &aws_creds
    access_key:
      from_secret: aws_access_key_id
    secret_key:
      from_secret: aws_secret_access_key
    region:
      from_secret: aws_default_region

steps:
  - name: prepare
    image: busybox
    commands:
      - mkdir -p /cache/${DRONE_REPO}/docker
    volumes:
      - name: cache
        path: /cache

  - name: package-ecr
    image: plugins/ecr
    settings:
      tags: "${DRONE_BRANCH//\//-}${DRONE_TAG//\//-}"
      repo: 999999999999.dkr.ecr.us-east-1.amazonaws.com/bar
      registry: *registry
      create_repository: true
      use_cache: true
      <<: *aws_creds
    volumes:
      - name: docker
        path: /var/lib/docker

volumes:
  - name: cache
    host:
      path: /var/cache
  - name: docker
    host:
      path: /var/cache/${DRONE_REPO}/docker

More complex

The snippet below shows how to:

  • use a package-specific Docker layer cache
  • cache Maven's local repository (~/.m2)
  • cache the target directory pipeline-wide using the drillster/drone-volume-cache plugin
---
kind: pipeline
name: build-example-maven

globals:
  - &docker_creds
    username:
      from_secret: docker_username
    password:
      from_secret: docker_password

steps:
  - &cache_settings
    name: restore-cache
    image: drillster/drone-volume-cache
    settings:
      restore: true
      mount:
        - target
    volumes:
      - name: cache
        path: /cache

  - name: prepare
    image: busybox
    commands:
      - mkdir -p /cache/${DRONE_REPO}/target
      - mkdir -p /cache/${DRONE_REPO}/docker-lib1
      - mkdir -p /cache/${DRONE_REPO}/docker-lib2
      - mkdir -p /cache/${DRONE_REPO}/m2
    volumes:
      - name: cache
        path: /cache

  - name: build-war
    pull: default
    image: maven:3.6.0-jdk-8-alpine
    commands:
      - mvn -B -Dproject.version=${DRONE_BRANCH//\//-}${DRONE_TAG//\//-}-${DRONE_COMMIT_SHA} -Pwar prepare-package war:exploded
      - ls ./target -la
    volumes:
      - name: m2
        path: /root/.m2

  - name: package-1
    image: plugins/docker
    settings:
      dockerfile: Dockerfile.main
      tags:
        - "${DRONE_BRANCH//\//-}${DRONE_TAG//\//-}"
        - "${DRONE_BRANCH//\//-}${DRONE_TAG//\//-}-${DRONE_COMMIT_SHA}"
      repo: foo/main
      use_cache: true
      <<: *docker_creds
    volumes:
      - name: docker-lib1
        path: /var/lib/docker

  - name: package-2
    image: plugins/docker
    settings:
      dockerfile: Dockerfile.sidecar
      tags:
        - "${DRONE_BRANCH//\//-}${DRONE_TAG//\//-}"
        - "${DRONE_BRANCH//\//-}${DRONE_TAG//\//-}-${DRONE_COMMIT_SHA}"
      repo: foo/sidecar
      use_cache: true
      <<: *docker_creds
    volumes:
      - name: docker-lib2
        path: /var/lib/docker

  - <<: *cache_settings
    name: rebuild-cache
    settings:
      rebuild: true

volumes:
  - name: cache
    host:
      path: /var/cache
  - name: target
    host:
      path: /var/cache/${DRONE_REPO}/target
  - name: docker-lib1
    host:
      path: /var/cache/${DRONE_REPO}/docker-lib1
  - name: docker-lib2
    host:
      path: /var/cache/${DRONE_REPO}/docker-lib2
  - name: m2
    host:
      path: /var/cache/${DRONE_REPO}/m2

Just wanted to stop by and say thank you!
Keep being awesome!

It seems plugins/ecr (or something else) is pruning layers at the end of the step:

/usr/local/bin/docker system prune -f
Deleted Images:
deleted: sha256:4768473d4c4d7675e1e1ca5f1f4378422ff32941ff34318e1b5f3ecad5da60f3
deleted: sha256:44430f741feb17d4495f05a4317cc69cc97c58afb9e5da0d7ffcef2520ef4354

Is that perhaps the reason why I don't get any cached layers in subsequent docker builds?

The reason you do not get cached layers is that the plugins/ecr image runs Docker-in-Docker: when the container is stopped and removed, so is the cache.
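One workaround, which the snippets at the top of this thread rely on, is to mount the plugin's /var/lib/docker onto a host path so the Docker-in-Docker layer store outlives the container. A minimal sketch of that idea (the repo name is a placeholder, and credentials are omitted for brevity):

```yaml
steps:
  - name: publish
    image: plugins/ecr
    settings:
      repo: 999999999999.dkr.ecr.us-east-1.amazonaws.com/bar
      use_cache: true
    volumes:
      # Persist the DinD layer store on the host between builds.
      - name: docker
        path: /var/lib/docker

volumes:
  - name: docker
    host:
      path: /var/cache/${DRONE_REPO}/docker
```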

Thanks! What do you suggest for caching Docker layers in a Kubernetes Drone setup (no agents, see the Helm chart)?

Caching Docker layers on Kubernetes is definitely a challenge for all CI systems (not just Drone). Have you considered using something like Makisu by Uber, which was created for this purpose?

One strategy I've come up with to keep my image cache across builds is to persist a DinD folder per project.

It lets me keep cached layers across subsequent builds. The only trade-off is that backed-up builds of the same project must wait for the folder to no longer be in use by another build: if a dockerd already has the folder mounted, the second one in line will error out, so I simply loop until it is ready, while the very first step in the pipeline runs a docker client command to check that it is up.

Example

services:
  - name: docker-in-docker
    image: docker:dind
    privileged: true
    commands:
      - while ! dockerd > /dev/null 2>&1; do echo "Waiting for another build of the same project to complete..."; sleep 10; done
    environment:
      DOCKER_DRIVER: overlay2
    volumes:
      - name: docker_sock_internal
        path: /var/run
      - name: docker_var_lib
        path: /var/lib/docker

volumes:
  - name: docker_sock_host
    host:
      path: /var/run/docker.sock
  - name: docker_var_lib
    host:
      path: /var/lib/dind/blender/
  - name: docker_sock_internal
    temp: {}
  - name: docker_config
    host:
      path: /root/.docker/config.json

Command in the first step to check whether Docker is ready:

while ! docker system info > /dev/null 2>&1; do sleep 1; done
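For completeness, here is roughly how that command can be wired into the first pipeline step, sharing the internal socket volume with the dind service above (the step and image names are illustrative):

```yaml
steps:
  - name: wait-for-docker
    image: docker:latest
    volumes:
      # Same temp volume the dind service writes its socket into.
      - name: docker_sock_internal
        path: /var/run
    commands:
      # Block until the dind service has acquired the cache folder
      # and its daemon answers on the shared socket.
      - while ! docker system info > /dev/null 2>&1; do sleep 1; done
```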

List of DinD folders for example (Project names redacted)

du -sh /var/lib/dind/*
22G	/var/lib/dind/project-1
50G	/var/lib/dind/project-2
24G	/var/lib/dind/project-3
9.3G	/var/lib/dind/project-4

You can garbage collect however you want, or just run a cron job to blow the folders away periodically.
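As a sketch of the cron approach, assuming the per-project caches live under /var/lib/dind and that nothing is building when it runs (deleting a folder a dockerd currently has mounted would break that build):

```
# Hypothetical /etc/cron.d/dind-cache-gc
# Every Sunday at 03:00, delete per-project DinD caches
# that have not been touched for 14 days.
0 3 * * 0  root  find /var/lib/dind -mindepth 1 -maxdepth 1 -type d -mtime +14 -exec rm -rf {} +
```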

Thanks both. Can the @Akkadius strategy be implemented via Helm for the dind service? Because I'm not running agents on k8s.

Also, I'm trying the plugins/docker cache_from option, but there is no such option for ECR.

Trying to have it cached as well, no luck. Also I have one repo that builds a big image (10 GB) using DinD, and it fails with an ephemeral disk space full error.

@lehno have you tried using cache_from instead? Check out this article for more details.
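For reference, cache_from with plugins/docker looks roughly like this (the repo, tag, and secret names are placeholders); the plugin pulls the named image before building, so Docker can reuse any matching layers even on a fresh node:

```yaml
steps:
  - name: publish
    image: plugins/docker
    settings:
      repo: foo/bar
      tags: "${DRONE_COMMIT_SHA}"
      # Pull this image first and reuse its layers as build cache.
      cache_from: foo/bar:latest
      username:
        from_secret: docker_username
      password:
        from_secret: docker_password
```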

There are also tools for this specific purpose. You may want to check out uber’s makisu which may help you more efficiently build and publish images from inside containerized environments (docker, kubernetes, etc).