Strategy for more efficient git lfs clone?

We currently disable the built-in clone and perform our own clone step as part of our build pipeline. However, one of our Git LFS repos is ~4GB, and it gets cloned from scratch for every build, which adds significant time to each run. Any suggestions for improving this, whether through caching on the host, persisting the build workspace between builds, or something else entirely? We have a small fleet of instances running drone-runner, and I don’t mind if each instance has its own cache.

For our non-LFS repos, we can clone shallow and it’s fast enough.
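For context, a shallow clone fetches only the most recent history rather than the full object database (the repository URL below is just a placeholder):

```shell
# Fetch only the latest commit instead of the full history.
# The URL is illustrative, not a real repository.
git clone --depth 1 https://example.com/org/repo.git
```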

Thanks for any suggestions,

The default clone step is meant to be general purpose and therefore may not be the best option for all use cases. In this particular case, I would recommend forking the git plugin [1] and modifying it to meet your needs. Specifically, you could configure a global volume [2] and then alter the clone plugin to cache the repository in that volume.

You can disable the clone step [3] and use your own clone plugin, which it sounds like you are already doing. For those reading who are unfamiliar, here is an example of how that would work:

kind: pipeline
type: docker
name: default

clone:
  disable: true

steps:
- name: clone
  image: my-forked-clone-plugin

Alternatively, you could globally override the clone image that Drone uses [4] and provide your own custom clone image instead. This prevents you from having to configure custom clone steps in your yaml (as shown above). However, since it sounds like this only impacts a single repository / yaml, it probably doesn’t make sense.
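For completeness: with the Docker runner this override is configured on the runner itself, not in the yaml. A sketch, assuming the DRONE_RUNNER_CLONE_IMAGE setting from the runner's configuration reference (the image name is illustrative):

```
# Environment for the drone-runner process (e.g. docker-compose or systemd unit).
# Replaces the stock clone image for every pipeline on this runner.
DRONE_RUNNER_CLONE_IMAGE=registry.example.com/my-forked-clone-plugin:latest
```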


Thanks for the pointers, @bradrydzewski – I’ll look into that.

An easier method: if you set the environment variable GIT_LFS_SKIP_SMUDGE="1", the clone will not download LFS files, e.g.:

kind: pipeline
type: docker
name: build & release packages
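Fleshing out the truncated example above: a sketch of a pipeline whose custom clone step skips LFS smudging and later pulls only the LFS paths it actually needs (the image name and include path are illustrative):

```
kind: pipeline
type: docker
name: build & release packages

clone:
  disable: true

steps:
- name: clone
  image: alpine/git                # illustrative; needs git and git-lfs installed
  environment:
    GIT_LFS_SKIP_SMUDGE: "1"       # checkout writes LFS pointer files only
  commands:
  - git clone $DRONE_GIT_HTTP_URL .
  - git checkout $DRONE_COMMIT
  # Download just the LFS objects under assets/ instead of all ~4GB:
  - git lfs pull --include "assets/"
```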