Debugging private image pulls

Dear all,

First of all, thanks for Drone! It is just awesome.
I am running Drone in Kubernetes with the Helm charts from:

I have the issue that pulling private images does not work. My questions are:

  1. Do I need the drone-kubernetes-secrets chart running for that?
  2. If not, how can I debug this further?

Any help is appreciated!
Thanks - Chris

Here are the details:

This is the working example:

kind: pipeline
type: kubernetes
name: default

clone:
  skip_verify: true

steps:
- name: build
  image: golang:1.12
  commands:
  - ls -al ~

But this does not work:

kind: pipeline
type: kubernetes
name: default

clone:
  skip_verify: true

steps:
- name: build
  image: registry.me.com/golang:1.12
  commands:
  - ls -al ~

The symptom is that the clone step succeeds, but the first build step fails without producing any log output.
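
For reference, this is roughly how I would inspect the pipeline pod the runner creates (the pod name below is a placeholder); an ImagePullBackOff or ErrImagePull event would point at a registry auth problem:

# list the pipeline pods the runner spawned (names are randomized, this one is a placeholder)
kubectl get pods --namespace default
# look for ImagePullBackOff / ErrImagePull in the events of the failing pod
kubectl describe pod drone-xxxxxxxxxxxx --namespace default
# or check the most recent events in the namespace
kubectl get events --namespace default --sort-by=.lastTimestamp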

These are the Helm values for the drone chart:

imagePullSecrets:
 - name: drone-imagepull
ingress:
  enabled: true
  hosts:
   - host: drone.me.com
     paths:
      - "/"
extraVolumes:
 - name: ca-bundle
   configMap:
     name: utils-ca-bundle
extraVolumeMounts:
 - name: ca-bundle
   mountPath: /etc/ssl/certs/ca-certificates.crt
   subPath: ca-bundle.pem
extraSecretNamesForEnvFrom:
 - drone-github
 - drone-rpc
env:
  DRONE_LOGS_DEBUG: true
  DRONE_GITHUB_SERVER: https://github.me.com
  DRONE_GIT_ALWAYS_AUTH: true
  DRONE_SERVER_HOST: drone.me.com
  DRONE_SERVER_PROTO: https
  DRONE_GITHUB_CLIENT_ID: "from extraSecretNamesForEnvFrom"
  DRONE_GITHUB_CLIENT_SECRET: "from extraSecretNamesForEnvFrom"
  DRONE_RPC_SECRET: "from extraSecretNamesForEnvFrom"

And here are the Helm values for the drone-runner-kube chart:

imagePullSecrets:
 - name: drone-imagepull
replicaCount: 1
extraVolumes:
 - name: ca-bundle
   configMap:
     name: utils-ca-bundle
extraVolumeMounts:
 - name: ca-bundle
   mountPath: /etc/ssl/certs/ca-certificates.crt
   subPath: ca-bundle.pem
extraSecretNamesForEnvFrom:
 - drone-rpc
env:
  DRONE_DEBUG: true
  DRONE_TRACE: true
  DRONE_RPC_SECRET: "from extraSecretNamesForEnvFrom"

My image pull secret is defined like this:

apiVersion: v1
data:
  .dockerconfigjson: ey..19
kind: Secret
metadata:
  creationTimestamp: "2020-03-03T14:53:02Z"
  name: drone-imagepull
  namespace: default
  resourceVersion: "6003981"
  selfLink: /api/v1/namespaces/default/secrets/drone-imagepull
  uid: 6e23ed24-c272-4faa-a6a5-e2e002e59683
type: kubernetes.io/dockerconfigjson

A base64 decode of the .dockerconfigjson data looks like this:

{"auths":{"registry.me.com":{"username":"drone","password":"x8..is","auth":"ZH..=="}}}

And of course the credentials work fine with a plain docker pull.
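
For reference, a secret of this shape can be created with kubectl like this (the credentials below are placeholders):

kubectl create secret docker-registry drone-imagepull \
  --docker-server=registry.me.com \
  --docker-username=drone \
  --docker-password='<password>' \
  --namespace=default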

These are the logs from the drone-runner-kube runner:

kubectl logs drone-runner-kube-6495f8df85-jthhb
...
time="2020-03-03T16:21:24Z" level=debug msg="stage received" stage.id=2 stage.name=default stage.number=1 thread=8
time="2020-03-03T16:21:24Z" level=debug msg="stage accepted" stage.id=2 stage.name=default stage.number=1 thread=8
time="2020-03-03T16:21:24Z" level=debug msg="stage details fetched" build.id=2 build.number=2 repo.id=33 repo.name=hdbcc repo.namespace=gocat stage.id=2 stage.name=default stage.number=1 thread=8
time="2020-03-03T16:21:24Z" level=debug msg="updated stage to running" build.id=2 build.number=2 repo.id=33 repo.name=hdbcc repo.namespace=gocat stage.id=2 stage.name=default stage.number=1 thread=8
time="2020-03-03T16:21:29Z" level=debug msg="updated stage to complete" build.id=2 build.number=2 repo.id=33 repo.name=hdbcc repo.namespace=gocat stage.id=2 stage.name=default stage.number=1 thread=8
time="2020-03-03T16:21:29Z" level=debug msg="request stage from remote server" thread=8
time="2020-03-03T16:21:29Z" level=trace msg="http: context canceled"
time="2020-03-03T16:21:29Z" level=debug msg="done listening for cancellations" build.id=2 build.number=2 repo.id=33 repo.name=hdbcc repo.namespace=gocat stage.id=2 stage.name=default stage.number=1 thread=8
...

And these are the logs from the Drone server:

kubectl logs drone-7c769b4d4-fthzt
...
{"commit":"b0792376c76bbb8c8cecbd00979fb73747001ae3","event":"push","level":"debug","msg":"webhook parsed","name":"hdbcc","namespace":"gocat","time":"2020-03-03T16:21:23Z"}
{"commit":"b0792376c76bbb8c8cecbd00979fb73747001ae3","event":"push","level":"debug","msg":"trigger: received","ref":"refs/heads/master","repo":"gocat/hdbcc","time":"2020-03-03T16:21:23Z"}
{"fields.time":"2020-03-03T16:21:24Z","latency":245273149,"level":"debug","method":"POST","msg":"","remote":"100.100.2.101:36446","request":"/hook","request-id":"1YcqyMZTRzH7Eu4uhadVCF73LNa","time":"2020-03-03T16:21:24Z"}
{"level":"debug","machine":"drone-runner-kube-6495f8df85-jthhb","msg":"manager: accept stage","stage-id":2,"time":"2020-03-03T16:21:24Z"}
{"arch":"","kernel":"","kind":"pipeline","level":"debug","msg":"manager: request queue item","os":"","time":"2020-03-03T16:21:24Z","type":"kubernetes","variant":""}
{"level":"debug","machine":"drone-runner-kube-6495f8df85-jthhb","msg":"manager: stage accepted","stage-id":2,"time":"2020-03-03T16:21:24Z"}
{"level":"debug","msg":"manager: fetching stage details","step-id":2,"time":"2020-03-03T16:21:24Z"}
{"level":"debug","msg":"manager: updating step status","step.id":3,"step.name":"clone","step.status":"running","time":"2020-03-03T16:21:24Z"}
{"level":"debug","msg":"api: read access granted","name":"hdbcc","namespace":"gocat","request-id":"1YcqyUkXN2M8TwdTJpVTKt3J1tD","time":"2020-03-03T16:21:24Z","user.login":"d055539","visibility":"public"}
{"fields.time":"2020-03-03T16:21:24Z","latency":826706,"level":"debug","method":"GET","msg":"","remote":"100.100.2.101:36446","request":"/api/repos/gocat/hdbcc","request-id":"1YcqyUkXN2M8TwdTJpVTKt3J1tD","time":"2020-03-03T16:21:24Z"}
{"level":"debug","msg":"api: read access granted","name":"hdbcc","namespace":"gocat","request-id":"1YcqyYvtOi2P8QhBc1nYmBGw2Ee","time":"2020-03-03T16:21:24Z","user.login":"d055539","visibility":"public"}
{"fields.time":"2020-03-03T16:21:24Z","latency":793484,"level":"debug","method":"GET","msg":"","remote":"100.100.2.101:36456","request":"/api/repos/gocat/hdbcc/builds?page=1","request-id":"1YcqyYvtOi2P8QhBc1nYmBGw2Ee","time":"2020-03-03T16:21:24Z"}
{"level":"debug","msg":"manager: updating step status","step.id":3,"step.name":"clone","step.status":"success","time":"2020-03-03T16:21:26Z"}
{"level":"debug","msg":"manager: updating step status","step.id":4,"step.name":"build","step.status":"running","time":"2020-03-03T16:21:26Z"}
{"level":"debug","msg":"api: read access granted","name":"hdbcc","namespace":"gocat","request-id":"1YcqyuSAkcVnm12SXCFpKPlpnRV","time":"2020-03-03T16:21:27Z","user.login":"d055539","visibility":"public"}
{"level":"debug","msg":"api: read access granted","name":"hdbcc","namespace":"gocat","request-id":"1YcqywMplGb14hXx9Vq1s4MHjQa","time":"2020-03-03T16:21:27Z","user.login":"d055539","visibility":"public"}
{"fields.time":"2020-03-03T16:21:27Z","latency":636916,"level":"debug","method":"GET","msg":"","remote":"100.100.2.101:36456","request":"/api/repos/gocat/hdbcc","request-id":"1YcqyuSAkcVnm12SXCFpKPlpnRV","time":"2020-03-03T16:21:27Z"}
{"fields.time":"2020-03-03T16:21:27Z","latency":1082904,"level":"debug","method":"GET","msg":"","remote":"100.100.2.101:36446","request":"/api/repos/gocat/hdbcc/builds/2","request-id":"1YcqywMplGb14hXx9Vq1s4MHjQa","time":"2020-03-03T16:21:27Z"}
{"level":"debug","msg":"api: read access granted","name":"hdbcc","namespace":"gocat","request-id":"1Ycqys8ANT1ZaUpxx2RVITwdjGl","time":"2020-03-03T16:21:27Z","user.login":"d055539","visibility":"public"}
{"fields.time":"2020-03-03T16:21:27Z","latency":1098646,"level":"debug","method":"GET","msg":"","remote":"100.100.2.101:36446","request":"/api/repos/gocat/hdbcc/builds/2/logs/1/1","request-id":"1Ycqys8ANT1ZaUpxx2RVITwdjGl","time":"2020-03-03T16:21:27Z"}
{"level":"debug","msg":"manager: updating step status","step.id":4,"step.name":"build","step.status":"success","time":"2020-03-03T16:21:29Z"}
{"level":"debug","msg":"manager: stage is complete. teardown","stage.id":2,"time":"2020-03-03T16:21:29Z"}
{"build.id":2,"build.number":2,"level":"debug","msg":"manager: build is finished, teardown","repo.id":33,"stage.id":2,"time":"2020-03-03T16:21:29Z"}
{"level":"debug","msg":"api: read access granted","name":"hdbcc","namespace":"gocat","request-id":"1YcqzMAllwsSL5LsKFLcVJgEe23","time":"2020-03-03T16:21:31Z","user.login":"d055539","visibility":"public"}
{"level":"debug","msg":"api: read access granted","name":"hdbcc","namespace":"gocat","request-id":"1YcqzLmPboENP55jkEmtTNe1CJZ","time":"2020-03-03T16:21:31Z","user.login":"d055539","visibility":"public"}
{"fields.time":"2020-03-03T16:21:31Z","latency":711324,"level":"debug","method":"GET","msg":"","remote":"100.100.2.101:36456","request":"/api/repos/gocat/hdbcc","request-id":"1YcqzLmPboENP55jkEmtTNe1CJZ","time":"2020-03-03T16:21:31Z"}
{"fields.time":"2020-03-03T16:21:31Z","latency":1211545,"level":"debug","method":"GET","msg":"","remote":"100.100.2.101:36446","request":"/api/repos/gocat/hdbcc/builds/2","request-id":"1YcqzMAllwsSL5LsKFLcVJgEe23","time":"2020-03-03T16:21:31Z"}
{"level":"debug","msg":"api: read access granted","name":"hdbcc","namespace":"gocat","request-id":"1YcqzOvXWFoJHGhrUuEcSkDYwJ2","time":"2020-03-03T16:21:31Z","user.login":"d055539","visibility":"public"}
{"fields.time":"2020-03-03T16:21:31Z","latency":924294,"level":"debug","method":"GET","msg":"","remote":"100.100.2.101:36446","request":"/api/repos/gocat/hdbcc/builds/2/logs/1/2","request-id":"1YcqzOvXWFoJHGhrUuEcSkDYwJ2","time":"2020-03-03T16:21:31Z"}
...

To pull private images you need to store your Docker registry credentials (your config.json file) in a secret [1] and then provide that secret to your pipeline [2].

[1] https://docs.drone.io/secret/repository/
[2] https://docs.drone.io/pipeline/kubernetes/syntax/images/#pulling-private-images
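
For example, roughly like this with the Drone CLI (the repository slug and secret name below are placeholders):

drone secret add --repository gocat/hdbcc \
  --name dockerconfig \
  --data @$HOME/.docker/config.json

and then in the .drone.yml:

image_pull_secrets:
- dockerconfig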

With the docs I got it working, but I had to create multiple orgsecrets because Drone serves multiple organizations in my case. I was expecting the image pull secret from the Helm chart to be used globally for all images in pipeline steps across all repositories, but this is obviously not the case.
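
For anyone finding this later, I created the orgsecrets with the CLI roughly like this (the organization name is a placeholder):

drone orgsecret add gocat dockerconfig @$HOME/.docker/config.json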

Therefore I’d like to propose that this secret is passed to the pods of the pipeline steps. Drone already creates temporary image pull secrets for the pod. With a pipeline like this:

kind: pipeline
type: kubernetes
name: default
image_pull_secrets:
- mysecret

we get a pod with such a temporary secret, generated from the orgsecret:

  imagePullSecrets:
  - name: drone-23hvcpt4ektkefbtb1ld

Can we add the image pull secret from the Helm chart here, e.g. like this:

  imagePullSecrets:
  - name: drone-23hvcpt4ektkefbtb1ld
  - name: drone-image-pull-secret-from-chart

so that we can omit the image_pull_secrets section in the pipeline?

The Helm chart requires the Drone server and the image pull secret used in the chart to be in the same namespace. So it should be possible for the Drone server to either reuse this secret in the pipeline pod, if it is running in the same namespace, or create a second temporary image pull secret as a copy of the chart secret in the target namespace.
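
For illustration, copying the chart secret into a pipeline's target namespace could look roughly like this (a sketch; the target namespace is a placeholder and jq is assumed to be available):

# strip the namespace-bound metadata and re-apply the secret in the target namespace
kubectl get secret drone-imagepull --namespace default -o json \
  | jq 'del(.metadata.namespace, .metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp, .metadata.selfLink)' \
  | kubectl apply --namespace my-pipeline-namespace -f -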

If this is wise and feasible, please let me know how I can help.

Global registry credentials can be provided to Drone through extensions. See https://docs.drone.io/extensions/registry/
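
An extension is wired to the runner with environment variables, along these lines in the runner's Helm values (a sketch; the endpoint and token are placeholders, and the exact variable names should be verified against the runner's configuration reference):

env:
  DRONE_REGISTRY_PLUGIN_ENDPOINT: http://registry-extension:3000
  DRONE_REGISTRY_PLUGIN_TOKEN: "shared-secret"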

> Therefore I’d like to propose that this secret is passed to the pods of the pipeline steps.

Many of our users execute pipelines in different namespaces [1]. We cannot make the assumption that every pipeline is running in the same namespace as the runner, which means we cannot assume the secret will be available to all pipelines.

[1] https://docs.drone.io/pipeline/kubernetes/syntax/metadata/