Downstream Error: unable to get latest build for

I’m trying to get a successful build in repoA to trigger a build in repoB. I’m certain that plugins/downstream is what I need. However, like others, I cannot get it to work no matter what I try. The build output states it cannot get the latest build for the repo, which is generally attributed to auth problems. I have copied my auth token from my user settings and put it in a secret called drone_token. I’ve since tried all sorts of combinations of secret / token key values in the .drone.yml config, and nothing I try will trigger a build. Here is a list of commits detailing every attempt I’ve made to get a build to trigger. :sob:

Can someone offer me some suggestions on how I can debug this further please?

I hate to “bump” this, but I’m really struggling to get a build of repoB to run after a successful build of repoA.

I run Drone in a Docker container behind an nginx proxy (also in a Docker container) that is secured with SSL certs (handled by yet another Docker container). The images I use in other steps can talk to the internet (I’ve published to npm, for example), yet I cannot for the life of me get a build to trigger using drone-downstream. My current .drone.yml file looks like this (I cut it down because some of the steps take their sweet time):

kind: pipeline
type: docker
name: default

steps:
  - name: install dependencies
    image: node
    commands:
      - yarn

  - name: trigger
    image: plugins/downstream
    settings:
      token:
        from_secret: downstream_token
      fork: true
      repositories:
        - jackw/drone-test

Both the repo that contains this .drone.yml and the drone-test repo have a downstream_token secret. If I remove the repository from the list, the trigger step succeeds. I cannot find any logs related to the failures.
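For anyone comparing configs: the plugins/downstream docs also describe a server setting that tells the plugin which Drone server to talk to, which matters on self-hosted installs. A sketch of the trigger step with that setting added (drone.HOST is a placeholder for the real server address):

```yaml
  - name: trigger
    image: plugins/downstream
    settings:
      # self-hosted installs need to point the plugin at their own server
      server: https://drone.HOST
      token:
        from_secret: downstream_token
      fork: true
      repositories:
        - jackw/drone-test
```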

Are there any other ways to trigger a master build of another repo? Is there any way to get Drone to log more than access logs?
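As an alternative to the plugin, a pipeline step can drive the Drone CLI directly; the downstream plugin essentially calls the same build-restart endpoint. A rough sketch, assuming the drone/cli image and the same downstream_token secret (untested here, and it would hit the same networking problem if that is the real cause):

```yaml
  - name: trigger
    image: drone/cli
    environment:
      DRONE_SERVER: https://drone.HOST
      DRONE_TOKEN:
        from_secret: downstream_token
    commands:
      # restart the latest build of the downstream repo
      - drone build restart jackw/drone-test
```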

So it appears to be a docker networking issue from what I can tell. I added the following step to the pipeline:

  - name: curl
    image: byrnedo/alpine-curl
    commands:
      - ping -c 1 HOST
      - curl -s
      - curl --connect-timeout 5 https://drone.HOST

which results in:

+ ping -c 1 HOST
PING HOST ( 56 data bytes
64 bytes from seq=0 ttl=64 time=0.031 ms

--- ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.031/0.031/0.031 ms
+ curl -s // <- this is the same IP as HOST
+ curl --connect-timeout 5 https://drone.HOST
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
curl: (28) Connection timed out after 5001 milliseconds

I’m using custom networks with the drone container:

  drone:
    image: drone/drone:1.4
    container_name: drone
    restart: unless-stopped
    volumes:
      - ./config-files/new-drone:/var/lib/drone
      - /var/run/docker.sock:/var/run/docker.sock:ro
    expose:
      - "80"
    env_file: ./config-files/drone.env
    environment:
      - VIRTUAL_HOST=drone.HOST
      - VIRTUAL_PORT=80
    networks:
      - productivity
      - webproxy

From what I’ve read, this doesn’t seem to be Drone specific; it appears to be Docker-in-Docker DNS related. Apparently using custom networks solves it, but I’m already doing that. :thinking:
productivity allows drone to talk to postgres. webproxy is used for all nginx services that have a frontend.

Can anyone suggest how / where to go to ask for help with this?

Drone creates a new user-defined network for each pipeline execution. So in your example, your pipeline containers will not be attached to productivity or webproxy (or whatever network the Drone server itself is attached to). If you would like to attach additional networks to your pipeline steps, you can configure your agent / runner accordingly.
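For reference, a sketch of what that configuration might look like in the compose file above, assuming the DRONE_RUNNER_NETWORKS option (a comma-separated list of pre-existing network names that the runner will attach to every pipeline step):

```yaml
    environment:
      # hypothetical values; use the networks your steps actually need
      - DRONE_RUNNER_NETWORKS=productivity,webproxy
```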

Amazing. All working now. Thank you sooooo much for this Brad. :clap: I was beginning to give up hope!