"Error authenticating: exit status 1" only when using dockerhub

The Problem:

We recently switched to the Kubernetes runner and are now unable to push images to dockerhub (public). Pushing to our private registry is working fine. The credentials are correct and passed to the step.

Here is the .drone.yml:

kind: pipeline
type: kubernetes
name: deploy
steps:
  - name: build
    image: plugins/docker
    settings:
      auto_tag: true
      repo: dorfjungs/test-public-repo
      password: { from_secret: docker_user }
      username: { from_secret: docker_pass }
      debug: true

This gives me the following error after 10 seconds:
time="2021-01-21T14:16:01Z" level=fatal msg="Error authenticating: exit status 1"

With debug: true:

+ /usr/local/bin/dockerd --data-root /var/lib/docker --host=unix:///var/run/docker.sock --mtu 1440
2 time="2021-01-21T14:15:59.462454852Z" level=info msg="Starting up"
3 time="2021-01-21T14:15:59.464293005Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
4 time="2021-01-21T14:15:59.465957999Z" level=info msg="libcontainerd: started new containerd process" pid=36
5 time="2021-01-21T14:15:59.466103659Z" level=info msg="parsed scheme: \"unix\"" module=grpc
6 time="2021-01-21T14:15:59.466156959Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
7 time="2021-01-21T14:15:59.466285479Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
8 time="2021-01-21T14:15:59.466314369Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
9 time="2021-01-21T14:15:59.487204429Z" level=info msg="starting containerd" revision=7ad184331fa3e55e52b890ea95e65ba581ae3429 version=v1.2.13
10 time="2021-01-21T14:15:59.487950690Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
11 time="2021-01-21T14:15:59.488153300Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
12 time="2021-01-21T14:15:59.488462431Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
13 time="2021-01-21T14:15:59.488490811Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
14 time="2021-01-21T14:15:59.493667461Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "ip: can't find device 'aufs'\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1"
15 time="2021-01-21T14:15:59.493725531Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
16 time="2021-01-21T14:15:59.493901291Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
17 time="2021-01-21T14:15:59.494065671Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
18 time="2021-01-21T14:15:59.494532992Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
19 time="2021-01-21T14:15:59.494551932Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
20 time="2021-01-21T14:15:59.494599942Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
21 time="2021-01-21T14:15:59.494611272Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "ip: can't find device 'aufs'\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1"
22 time="2021-01-21T14:15:59.494622002Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
23 time="2021-01-21T14:15:59.496902837Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
24 time="2021-01-21T14:15:59.496941157Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
25 time="2021-01-21T14:15:59.497007567Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
26 time="2021-01-21T14:15:59.497030497Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
27 time="2021-01-21T14:15:59.497049047Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
28 time="2021-01-21T14:15:59.497067027Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
29 time="2021-01-21T14:15:59.497088367Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
30 time="2021-01-21T14:15:59.497144467Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
31 time="2021-01-21T14:15:59.497163857Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
32 time="2021-01-21T14:15:59.497180607Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
33 time="2021-01-21T14:15:59.497397367Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
34 time="2021-01-21T14:15:59.497618507Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
35 time="2021-01-21T14:15:59.498389119Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
36 time="2021-01-21T14:15:59.498440729Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
37 time="2021-01-21T14:15:59.498509999Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
38 time="2021-01-21T14:15:59.498528189Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
39 time="2021-01-21T14:15:59.498570019Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
40 time="2021-01-21T14:15:59.498583849Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
41 time="2021-01-21T14:15:59.498693400Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
42 time="2021-01-21T14:15:59.498708460Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
43 time="2021-01-21T14:15:59.498720520Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
44 time="2021-01-21T14:15:59.498751170Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
45 time="2021-01-21T14:15:59.498764920Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
46 time="2021-01-21T14:15:59.499079101Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
47 time="2021-01-21T14:15:59.499105791Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
48 time="2021-01-21T14:15:59.499118781Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
49 time="2021-01-21T14:15:59.499134831Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
50 time="2021-01-21T14:15:59.499444291Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
51 time="2021-01-21T14:15:59.499513981Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
52 time="2021-01-21T14:15:59.499535501Z" level=info msg="containerd successfully booted in 0.013597s"
53 time="2021-01-21T14:15:59.512362985Z" level=info msg="parsed scheme: \"unix\"" module=grpc
54 time="2021-01-21T14:15:59.512421245Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
55 time="2021-01-21T14:15:59.512461285Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
56 time="2021-01-21T14:15:59.512517546Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
57 time="2021-01-21T14:15:59.513843989Z" level=info msg="parsed scheme: \"unix\"" module=grpc
58 time="2021-01-21T14:15:59.513950719Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
59 time="2021-01-21T14:15:59.514079219Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
60 time="2021-01-21T14:15:59.514148289Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
61 time="2021-01-21T14:15:59.537987373Z" level=warning msg="Your kernel does not support swap memory limit"
62 time="2021-01-21T14:15:59.538035143Z" level=warning msg="Your kernel does not support cgroup rt period"
63 time="2021-01-21T14:15:59.538044383Z" level=warning msg="Your kernel does not support cgroup rt runtime"
64 time="2021-01-21T14:15:59.538052443Z" level=warning msg="Your kernel does not support cgroup blkio weight"
65 time="2021-01-21T14:15:59.538060113Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
66 time="2021-01-21T14:15:59.538429714Z" level=info msg="Loading containers: start."
67 time="2021-01-21T14:15:59.590842272Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
68 time="2021-01-21T14:15:59.622888902Z" level=info msg="Loading containers: done."
69 time="2021-01-21T14:15:59.634444843Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
70 time="2021-01-21T14:15:59.634657433Z" level=info msg="Daemon has completed initialization"
71 time="2021-01-21T14:15:59.661299094Z" level=info msg="API listen on /var/run/docker.sock"
72 time="2021-01-21T14:16:01Z" level=fatal msg="Error authenticating: exit status 1"

What I’ve already tried:

  1. Changing the build settings to look like this:
    tag: staging
    registry: registry-1.docker.io/v2/
    repo: registry-1.docker.io/v2/dorfjungs/test-public-repo
    
  2. Using a couple of different users (+ admin user)
  3. Putting the username and password directly into the file without from_secret

I know the Kubernetes runner is still in beta, but getting this to work would be awesome! This worked on Docker Swarm, so I assume it has something to do with the Kubernetes runner.

I am not aware of any issues with the docker plugin on kubernetes. I just ran a quick test and was able to use the plugin without issue.

Error authenticating: exit status 1

There are only a few root causes that I am aware of for this error. The first is when the docker-in-docker daemon cannot start. This usually results in an error that looks like this:

failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables v1.8.3 (legacy): can't initialize iptables table `nat': Permission denied (you must be root)
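If you ever do hit that error, the usual remedy is to run the step with elevated privileges. A rough sketch is below; it assumes your repository is marked as trusted so the runner will honor the flag:

kind: pipeline
type: kubernetes
name: deploy
steps:
  - name: build
    image: plugins/docker
    # requires a trusted repository; grants the step the privileges
    # needed to start the nested docker daemon
    privileged: true
    settings:
      auto_tag: true
      repo: dorfjungs/test-public-repo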

However, the logs you provided (with debug: true in the yaml) indicate the daemon started successfully (see the snippet below), so we can rule out this root cause.

time="2021-01-21T14:15:59.634657433Z" level=info msg="Daemon has completed initialization"
time="2021-01-21T14:15:59.661299094Z" level=info msg="API listen on /var/run/docker.sock"

There are a number of warnings in the above logs; however, these warnings are not abnormal. For example, see this build running on Drone Cloud, which emits similar warnings but still succeeds.

The second root cause that I am aware of is an invalid username or password. Given that the daemon starts successfully, and that the login fails even when the username and password are hard coded, I would lean towards a problem with the credentials.

The third root cause that I am aware of is a networking issue that prevents the system from reaching the registry. This can include DNS problems, but self-signed certificates can also cause the login to fail. If you are using self-signed certificates you can try adding insecure: true to your yaml configuration.
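For reference, here is a rough sketch of those settings (insecure and custom_dns are plugin options; the resolver address below is only a placeholder, not a recommendation):

steps:
  - name: build
    image: plugins/docker
    settings:
      auto_tag: true
      repo: dorfjungs/test-public-repo
      # only needed for registries that use self-signed certificates
      insecure: true
      # overrides the resolver used by the embedded docker daemon
      custom_dns: 8.8.8.8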


EDIT: the easiest way to debug this is to remove the username and password and then configure the plugin to perform a dry run.

kind: pipeline
type: kubernetes
name: deploy
steps:
  - name: build
    image: plugins/docker
    settings:
      auto_tag: true
      repo: dorfjungs/test-public-repo
+     dry_run: true
-     password: { from_secret: docker_user }
-     username: { from_secret: docker_pass }
-     debug: true

The dry run will attempt to build your image, but it will not log in to dockerhub and will not publish to dockerhub. If the dry run succeeds, we have isolated the problem to the login step, which points to either a problem with your dockerhub credentials or a networking/dns issue preventing docker from reaching dockerhub from inside the container.

Thank you very much for the quick response. Unfortunately I’ve tried everything and the credentials are in fact correct. I can use them on my local machine and they work on our old setup (Drone via Docker Swarm). I even tried creating a new dockerhub account. But again, no luck with Drone :frowning:

We’re using Linode for our Kubernetes cluster. I’m not an expert when it comes to networking, but is it possible that they are blocking something crucial for this to work?

Edit: dry_run works. I think that means it’s an issue with Linode blocking something?

The plugin executes a docker login command, which makes an HTTP request to dockerhub to exchange your username and password for a token. It is hard to say why this fails, since it falls outside the boundaries of Drone (and therefore outside our area of expertise). It could be a problem with the username or password, or a problem with your cluster networking (DNS, etc.) that is causing the HTTP request to fail.
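One rough way to check reachability from inside the cluster is to add a throwaway pipeline step that requests an anonymous token from dockerhub's auth service (the step name, image, and pull scope below are only illustrative):

  - name: check-dockerhub-auth
    image: alpine:3.12
    commands:
      - apk add --no-cache curl
      # anonymous token request; a JSON response containing a token means
      # dns resolution and outbound https to dockerhub's auth service both work
      - curl -fsS "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/alpine:pull"

If this step succeeds but the docker login still fails, the problem is more likely the credentials themselves than the cluster networking.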

I’ve tested that, and it seems that Linode is not blocking the dockerhub login. Here is how I tested it:
[screencast attachment: linode_dockerhub]

The cluster is the exact same one that Drone is running on. The credentials used in the screencast are the same ones I am using for my deployment to dockerhub, and I am certain they are correct. Having tested this, I am now sure it can’t be Linode (the hosting provider) blocking it. I honestly don’t know what else I could do to investigate this.

Did you remove / revert these lines, which could cause problems?

kind: pipeline
type: kubernetes
name: deploy
steps:
  - name: build
    image: plugins/docker
    settings:
      auto_tag: true
+     repo: dorfjungs/test-public-repo
-     registry: registry-1.docker.io/v2/
-     repo: registry-1.docker.io/v2/dorfjungs/test-public-repo

I do not want to speculate about why the docker login is failing, since I am not well versed in the internals of docker and all the possible root causes for this failure. We publish the code for all plugins (see the link below) in case you are interested in looking at how the docker login is performed.

I genuinely wish I could do more to help. However, since I am unable to reproduce the issue, and since this is our most-used plugin and we are not receiving reports of problems from existing users, I’m not sure there is much more I can do at this time. If you find more information that you think could be useful, please post it and I will be happy to take a look and re-assess.