drone-docker auth issues: Kubernetes, 1.0.0-rc4

Outline

Since upgrading our Drone installation from 0.8 to 1.0.0-rc4 we are experiencing authentication issues with the drone-docker plugin.

After setting up Docker credential secrets (DOCKER_USERNAME and DOCKER_PASSWORD) via the Drone interface for a specific repo, the plugin returns:

+ /usr/local/bin/dockerd -g /var/lib/docker
time="2019-01-16T01:04:01Z" level=fatal msg="Error authenticating: exit status 1"

Setup

Drone: v1.0.0-rc4
Kubernetes:

  • GKE node version v1.11.5-gke.5
  • docker v7.3.2
  • Vanilla Kubernetes on GKE

See our configs here https://gist.github.com/andrewmclagan/cb276479ec4f0afced163106eaae8afa

Steps

See our pipeline steps here https://gist.github.com/andrewmclagan/cb276479ec4f0afced163106eaae8afa

  1. Login

In step one we attempt to log in from a vanilla docker container, using the same secret variables that would be injected into the drone-docker plugin. This works, with the output:

+ docker login -u $PLUGIN_USERNAME -p $PLUGIN_PASSWORD
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
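
The login step itself was along these lines (image tag and step name are illustrative; the real step is in the gist above):

- name: login-test
  image: docker:18.09
  environment:
    PLUGIN_USERNAME:
      from_secret: DOCKER_USERNAME
    PLUGIN_PASSWORD:
      from_secret: DOCKER_PASSWORD
  commands:
    # assumes the step can reach a Docker daemon (e.g. a dind service or the host socket)
    - docker login -u $PLUGIN_USERNAME -p $PLUGIN_PASSWORD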
  2. Publish

The second step attempts to build the image and publish it using the drone-docker plugin. It uses the exact same secrets that worked above, yet it fails with:

+ /usr/local/bin/dockerd -g /var/lib/docker
time="2019-01-16T01:04:01Z" level=fatal msg="Error authenticating: exit status 1"

This could be related to issue #31. Drone mounts a config map at /root to store the netrc file, which makes /root a read-only directory. The Docker credentials are written to /root/.docker/config.json, so that write presumably fails, and that is the likely root cause of the login error.
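
If you want to verify this theory yourself, a throwaway pipeline step along these lines (step name and image are illustrative) should show whether /root is writable:

- name: check-root-writable
  image: alpine:3
  commands:
    # if /root is a read-only config-map mount, the touch will fail
    - touch /root/.docker-test && echo "/root is writable" || echo "/root is read-only"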

I have a fix planned for issue #31, which I documented here. The fix is a little involved and requires a good amount of regression testing to make sure that I don't break anything.


Agreed. Looked through that solution - seems a lot better than mounting to /root.

As a temporary workaround you can override the $HOME env variable to switch to another storage location for the .docker files.

I used the following pipeline config to achieve that:

- name: container-build-push
  image: plugins/docker
  privileged: true
  settings:
    repo: eu.gcr.io/test
    registry: eu.gcr.io
    tag: ${DRONE_BUILD_NUMBER}   
    password:
      from_secret: google_credentials
    username: _json_key
    debug: true
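  # HOME is overridden here so the Docker CLI writes .docker/config.json to a writable path instead of the read-only /root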
  environment:
    HOME: "/tmp"

Can confirm this works, although it breaks the drone exec CLI.

I wanted to provide an update:

  1. I have a fix for this locally and will publish early next week
  2. It will be included in the rc.5 release, planned for end of next week
  3. In the meantime, please use the workaround described above

Very much appreciated!

This should be patched now. Instead of mounting the .netrc as a config map, we are injecting it as environment variables [1]. I have a more permanent fix planned, but for now this should solve the problem.

I will close this thread once I have confirmation that the fix is working for a few of you.

[1] https://github.com/drone/drone-yaml/commit/1e125e10a9c8c290de8ca6a05428f38ec8be97db


Ok, I will boot up the image, look through the commit, and report back here :slight_smile:

It works for us on GKE!

Excellent, thanks for testing it out and reporting back!

In what version is this working? I've tested it with :1 and :1.8 but had no success.
The Docker daemon running on the host machine (GCE) can connect to the registry fine.

My logs are the following:

Digest: sha256:014a753cb3c1178df355a6ce97c4bf1d1860802f41ed5ae07493ff8a74660d0f
Status: Image is up to date for plugins/docker:latest
+ /usr/local/bin/dockerd --data-root /var/lib/docker --host=unix:///var/run/docker.sock
time="2020-06-23T23:14:39.585067816Z" level=info msg="Starting up"
time="2020-06-23T23:14:39.587882448Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
time="2020-06-23T23:14:39.589461087Z" level=info msg="libcontainerd: started new containerd process" pid=31
time="2020-06-23T23:14:39.589658999Z" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2020-06-23T23:14:39.589713699Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2020-06-23T23:14:39.589982435Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
time="2020-06-23T23:14:39.590215757Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2020-06-23T23:14:39.612165470Z" level=info msg="starting containerd" revision=7ad184331fa3e55e52b890ea95e65ba581ae3429 version=v1.2.13
time="2020-06-23T23:14:39.612620800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2020-06-23T23:14:39.613134435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
time="2020-06-23T23:14:39.615147019Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
time="2020-06-23T23:14:39.615223226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
time="2020-06-23T23:14:39.620358648Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: \"ip: can't find device 'aufs'\naufs 258048 0 \nmodprobe: can't change directory to '/lib/modules': No such file or directory\n\": exit status 1"
time="2020-06-23T23:14:39.620384661Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
time="2020-06-23T23:14:39.620574898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2020-06-23T23:14:39.620820415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
time="2020-06-23T23:14:39.621150228Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
time="2020-06-23T23:14:39.621171863Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2020-06-23T23:14:39.621232310Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
time="2020-06-23T23:14:39.621241992Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: \"ip: can't find device 'aufs'\naufs 258048 0 \nmodprobe: can't change directory to '/lib/modules': No such file or directory\n\": exit status 1"
time="2020-06-23T23:14:39.621252518Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
time="2020-06-23T23:14:39.628938216Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
time="2020-06-23T23:14:39.628978564Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
time="2020-06-23T23:14:39.629023113Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
time="2020-06-23T23:14:39.629040927Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
time="2020-06-23T23:14:39.629054801Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
time="2020-06-23T23:14:39.629069094Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
time="2020-06-23T23:14:39.629084538Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
time="2020-06-23T23:14:39.629120801Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
time="2020-06-23T23:14:39.629136616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
time="2020-06-23T23:14:39.629155271Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
time="2020-06-23T23:14:39.629384797Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
time="2020-06-23T23:14:39.629509973Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
time="2020-06-23T23:14:39.629981908Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
time="2020-06-23T23:14:39.630021986Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
time="2020-06-23T23:14:39.630075400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630091176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630104663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630167222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630183988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630198287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630212407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630225858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630241473Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
time="2020-06-23T23:14:39.630775971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630808635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630823755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630837937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.631236295Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
time="2020-06-23T23:14:39.631335174Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
time="2020-06-23T23:14:39.631352859Z" level=info msg="containerd successfully booted in 0.019879s"
time="2020-06-23T23:14:39.639920173Z" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2020-06-23T23:14:39.639960571Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2020-06-23T23:14:39.639985896Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
time="2020-06-23T23:14:39.639998454Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2020-06-23T23:14:39.641054038Z" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2020-06-23T23:14:39.641082435Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2020-06-23T23:14:39.641104996Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
time="2020-06-23T23:14:39.641164098Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2020-06-23T23:14:39.642676887Z" level=error msg="No zfs dataset found for root" backingFS=extfs root=/var/lib/docker storage-driver=zfs
time="2020-06-23T23:14:39.671801221Z" level=warning msg="Your kernel does not support swap memory limit"
time="2020-06-23T23:14:39.671836411Z" level=warning msg="Your kernel does not support cgroup rt period"
time="2020-06-23T23:14:39.671844903Z" level=warning msg="Your kernel does not support cgroup rt runtime"
time="2020-06-23T23:14:39.671850645Z" level=warning msg="Your kernel does not support cgroup blkio weight"
time="2020-06-23T23:14:39.671860895Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
time="2020-06-23T23:14:39.672410332Z" level=info msg="Loading containers: start."
time="2020-06-23T23:14:39.732348361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
time="2020-06-23T23:14:39.764680340Z" level=info msg="Loading containers: done."
time="2020-06-23T23:14:39.775296436Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
time="2020-06-23T23:14:39.775518256Z" level=info msg="Daemon has completed initialization"
time="2020-06-23T23:14:39.808804189Z" level=info msg="API listen on /var/run/docker.sock"
time="2020-06-23T23:14:39.949008471Z" level=error msg="Handler for POST /v1.40/auth returned error: Get https://us-central1-docker.pkg.dev/v2/: unauthorized: authentication failed"
time="2020-06-23T23:14:39Z" level=fatal msg="Error authenticating: exit status 1"

@manobi this issue was for the 1.0 release candidate (and is quite old). There are no known issues with using the docker plugin if you are using the latest version of the Kubernetes runner or Docker runner (providing your yaml is generally recommended so we can advise further). Make sure you are providing the plugin with your registry credentials via the username and password attributes as shown here: http://plugins.drone.io/drone-plugins/drone-docker/
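
For reference, a minimal sketch of the documented credential wiring (registry host, repo, and secret names here are illustrative, not your actual values):

- name: publish
  image: plugins/docker
  settings:
    registry: us-central1-docker.pkg.dev                          # illustrative registry host
    repo: us-central1-docker.pkg.dev/my-project/my-repo/my-image  # illustrative image path
    username:
      from_secret: registry_username
    password:
      from_secret: registry_password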

Thanks @bradrydzewski, I'm already declaring my credentials; it might be something with the new Google Artifact Registry.

I could not make it work with plugins/docker, which I had been using for 6 months with the Oracle container registry.
Unfortunately I had to move to plugins/gcr, and now it's working. I'm migrating more repos today and will try again and post my pipeline here.

The first does not work and the last works as expected (please click on the link to see the original with credentials, which don't appear in the preview).
Do you think it's something with how my "key" is formatted?