Multiple build machines

Howdy!
I really like what I’m reading about Drone, but the documentation is confusing. Lots of good documentation seems to have been removed and now only exists in some old tags.

I have some questions I would be grateful to get answers for:

  1. Can I use multiple build servers with Drone? Where can I read more about it?
  2. Can the build servers be heterogeneous? For example, on GitLab CI it’s possible to attach a macOS machine with the “shell” executor and build software natively on macOS. If yes, where can I read more about it?
  3. I see that support for Kubernetes is coming soon.
  4. Any idea what the Enterprise pricing is going to look like? We looked at other CIs, such as CircleCI, and they seem very expensive considering that we want to host our own build servers.

Thank you very much!
Regards,
Damian Kaczmarek

I really like what I’m reading about Drone, but the documentation is confusing. Lots of good documentation seems to have been removed and now only exists in some old tags.

I think this is valid feedback. Would you mind providing some specific examples of items missing from docs.drone.io so that we can prioritize and address them?

Can the build servers be heterogeneous

Yes, the upcoming 0.8 release of Drone supports linux/arm and linux/arm64 build environments. Because 0.8 is not yet released, the feature is only documented in the GitHub issue comments: https://github.com/drone/drone/issues/1767#issuecomment-316429910
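
Based on that issue, the setup looks roughly like this (a sketch only; 0.8 is unreleased, so the image name and variables below are assumptions taken from the issue thread). You start an agent on the arm machine that advertises its platform, then pin the pipeline to that platform in .drone.yml:

# agent on the arm64 machine (values assumed, see the issue above)
drone-agent:
  image: drone/agent:0.8
  command: agent
  environment:
    - DRONE_SERVER=<server>
    - DRONE_SECRET=<secret>
    - DRONE_PLATFORM=linux/arm64

# .drone.yml: request the matching platform
platform: linux/arm64

pipeline:
  build:
    image: alpine
    commands:
      - uname -m   # prints aarch64 when running on an arm64 agent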

For example, on GitLab CI it’s possible to attach a macOS machine

Because Drone is a container-centric build system, and macOS does not have native container support, we are unable to offer native macOS build environments at this time. There are some workarounds you can use, for example: https://www.fwd.cloud/commit/post/drone-ios-deployments/

I see that support for Kubernetes is coming soon

It is already possible to run Drone on Kubernetes, and there are a number of Kubernetes and Helm plugins available. Some are listed at plugins.drone.io, while others can be found via Google search.

The ‘coming soon’ refers to documentation coming soon, not the feature coming soon :slight_smile:
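
For example, here is a minimal sketch of running the server as a Kubernetes Deployment (the manifest below is an illustration with assumed values, not an official example; the env vars mirror the docker-compose install docs):

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: drone-server
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: drone-server
    spec:
      containers:
        - name: drone-server
          image: drone/drone:0.7
          ports:
            - containerPort: 8000          # the server listens on 8000
          env:
            - name: DRONE_HOST             # assumed: public address of this server
              value: http://drone.example.com
            - name: DRONE_SECRET           # shared secret, same value the agents use
              value: <secret>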

Any ideas how the Enterprise pricing is gonna look like?

You can find enterprise pricing at https://drone.io/enterprise/

We looked at other CIs such as CircleCI and they seem very expensive considering that we want to host our own build servers.

The Drone community edition is free and open source, so if pricing is a concern you always have the option of running it and using the public community support channels.

I forgot to address your very first question:

Yes, it is possible to connect multiple build agents to the Drone server. If you look at the official installation documentation you will see the agent configuration in docker-compose format. You provide each agent with your Drone server address and shared secret:

  drone-agent:
    image: drone/drone:0.7
    command: agent
    restart: always
    depends_on:
      - drone-server
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=ws://<server>/ws/broker
      - DRONE_SECRET=<secret>

You can start as many agent containers as you want on as many machines as you want. All you need to do is make sure they have the correct server address and secret.
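
For example, on each additional machine you could start an agent directly with docker run, using the same image and variables as the compose snippet above (<server> and <secret> are placeholders):

docker run -d --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e DRONE_SERVER=ws://<server>/ws/broker \
  -e DRONE_SECRET=<secret> \
  drone/drone:0.7 agent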

Thanks for the answer. So basically, to find the old documentation you have to google a Drone-related question and follow the old links, which is how I found http://readme.drone.io/setup/config/workers/

Trying to find the same information on the current website led me nowhere.

Another problem I just encountered: I tried to follow the caching guide at http://readme.drone.io/0.4/usage/caching/, which was the only one I found. I was getting some “unmarshal” error, but only on the server; when running drone exec locally everything worked fine.

Caching was moved to external plugins and is no longer a core drone feature.

Here are some plugins you can use:
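
For example, a volume-based cache plugin can be wired into the pipeline like this (a sketch; drillster/drone-volume-cache is one community plugin, and the cached paths here are assumptions):

pipeline:
  restore-cache:
    image: drillster/drone-volume-cache
    restore: true
    mount:
      - ./node_modules
    volumes:
      - /tmp/cache:/cache

  build:
    image: node:6
    commands:
      - npm install

  rebuild-cache:
    image: drillster/drone-volume-cache
    rebuild: true
    mount:
      - ./node_modules
    volumes:
      - /tmp/cache:/cache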

Thanks. It looks like drone is very extensible! Congratulations.

I think I know why I had the problem with one config working locally and not on the server. Somehow I had drone-cli 0.5 instead of 0.7. Were there any changes to how services work in Drone 0.7?

On 0.5, running drone exec, I had:

services:
  mongo:
    image: mongo:3.2

which was accessible from my node:6 pipeline on localhost via the default mongo port 27017. When running drone exec with 0.7 that’s no longer the case.

I found the answer on a page cached by google http://webcache.googleusercontent.com/search?q=cache:SaE6T5FyxmUJ:docs.drone.io/mongodb-example/+&cd=3&hl=en&ct=clnk&gl=us

It appears that one should use the name of the service as the mongo hostname instead of localhost; in my case that’s mongo.

There is a mongo example in the docs you can reference as well: http://docs.drone.io/mongodb-example/

oops, sorry, didn’t realize you were answering your own question :slight_smile: that is what I get for scrolling to the bottom …

FYI, that example seems to be broken:

» drone --version
drone version 0.7.0

» drone exec
2017/09/14 21:20:31 Cannot unmarshal 'map[mongo --host mongo --eval "{ ping:1 }"]' of type map[interface {}]interface {} into a string value

I fixed the YAML, but it does not seem to work reliably. Sometimes both pings pass, sometimes only one of them does.

pipeline:
  ping:
    image: mongo:3.2
    commands:
      - sleep 1
      - 'mongo --host mongo --eval "{ ping: 1 }"'
  ping2:
    image: mongo:3.2
    commands:
      - sleep 1
      - 'mongo --host mongo --eval "{ ping: 1 }"'

services:
  mongo:
    image: mongo:3.2
    command: [ --smallfiles ]

» drone exec
2017-09-15T02:27:37.162+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=c65dbfc0f2cd
2017-09-15T02:27:37.162+0000 I CONTROL  [initandlisten] db version v3.2.16
2017-09-15T02:27:37.162+0000 I CONTROL  [initandlisten] git version: 056bf45128114e44c5358c7a8776fb582363e094
2017-09-15T02:27:37.162+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1t  3 May 2016
2017-09-15T02:27:37.162+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2017-09-15T02:27:37.162+0000 I CONTROL  [initandlisten] modules: none
2017-09-15T02:27:37.162+0000 I CONTROL  [initandlisten] build environment:
2017-09-15T02:27:37.162+0000 I CONTROL  [initandlisten]     distmod: debian81
2017-09-15T02:27:37.162+0000 I CONTROL  [initandlisten]     distarch: x86_64
2017-09-15T02:27:37.162+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2017-09-15T02:27:37.162+0000 I CONTROL  [initandlisten] options: { storage: { mmapv1: { smallFiles: true } } }
2017-09-15T02:27:37.174+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=8G,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2017-09-15T02:27:37.330+0000 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
2017-09-15T02:27:37.330+0000 I CONTROL  [initandlisten] 
2017-09-15T02:27:37.330+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2017-09-15T02:27:37.330+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2017-09-15T02:27:37.330+0000 I CONTROL  [initandlisten] 
2017-09-15T02:27:37.434+0000 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2017-09-15T02:27:37.434+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2017-09-15T02:27:37.434+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
+ sleep 4
+ mongo --host mongo --eval "{ ping: 1 }"
MongoDB shell version: 3.2.16
connecting to: mongo:27017/test
2017-09-15T02:27:42.197+0000 I NETWORK  [initandlisten] connection accepted from 172.23.0.3:45476 #1 (1 connection now open)
1
2017-09-15T02:27:42.199+0000 I NETWORK  [conn1] end connection 172.23.0.3:45476 (0 connections now open)
+ sleep 4
+ mongo --host mongo --eval "{ ping: 1 }"
MongoDB shell version: 3.2.16
connecting to: mongo:27017/test
2017-09-15T02:27:47.920+0000 W NETWORK  [thread1] Failed to connect to 104.239.207.44:27017, in(checking socket for error after poll), reason: errno:113 No route to host
2017-09-15T02:27:47.921+0000 E QUERY    [thread1] Error: couldn't connect to server mongo:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:231:14
@(connect):1:6

exception: connect failed
2017/09/14 21:27:50 drone_step_1 : exit code 1

This is weird, as it appears mongo resolves to different IPs. Here is the pipeline I used to check:

pipeline:
  ping:
    group: pinging
    image: podnov/network-utils
    commands:
      - ping mongo -c 5
  ping2:
    group: pinging
    image: podnov/network-utils
    commands:
      - ping mongo -c 5

services:
  mongo:
    image: mongo:3.2
    command: [ --smallfiles ]

Debugging DNS:

» drone exec
2017-09-15T02:34:29.741+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=e9b1e777c188
2017-09-15T02:34:29.741+0000 I CONTROL  [initandlisten] db version v3.2.16
2017-09-15T02:34:29.741+0000 I CONTROL  [initandlisten] git version: 056bf45128114e44c5358c7a8776fb582363e094
2017-09-15T02:34:29.741+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1t  3 May 2016
2017-09-15T02:34:29.741+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2017-09-15T02:34:29.741+0000 I CONTROL  [initandlisten] modules: none
2017-09-15T02:34:29.741+0000 I CONTROL  [initandlisten] build environment:
2017-09-15T02:34:29.741+0000 I CONTROL  [initandlisten]     distmod: debian81
2017-09-15T02:34:29.741+0000 I CONTROL  [initandlisten]     distarch: x86_64
2017-09-15T02:34:29.741+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2017-09-15T02:34:29.741+0000 I CONTROL  [initandlisten] options: { storage: { mmapv1: { smallFiles: true } } }
2017-09-15T02:34:29.753+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=8G,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2017-09-15T02:34:29.870+0000 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
2017-09-15T02:34:29.870+0000 I CONTROL  [initandlisten] 
2017-09-15T02:34:29.870+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2017-09-15T02:34:29.870+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2017-09-15T02:34:29.870+0000 I CONTROL  [initandlisten] 
2017-09-15T02:34:29.935+0000 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2017-09-15T02:34:29.935+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2017-09-15T02:34:29.935+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
+ ping mongo -c 5
PING mongo (104.239.207.44) 56(84) bytes of data.
64 bytes from 104.239.207.44 (104.239.207.44): icmp_seq=1 ttl=46 time=42.4 ms
+ ping mongo -c 5
PING mongo (172.23.0.2) 56(84) bytes of data.
64 bytes from drone_services_0.drone_default (172.23.0.2): icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from 104.239.207.44 (104.239.207.44): icmp_seq=2 ttl=46 time=41.4 ms
64 bytes from drone_services_0.drone_default (172.23.0.2): icmp_seq=2 ttl=64 time=0.057 ms
64 bytes from 104.239.207.44 (104.239.207.44): icmp_seq=3 ttl=46 time=41.1 ms
64 bytes from drone_services_0.drone_default (172.23.0.2): icmp_seq=3 ttl=64 time=0.060 ms
64 bytes from 104.239.207.44 (104.239.207.44): icmp_seq=4 ttl=46 time=41.3 ms
64 bytes from drone_services_0.drone_default (172.23.0.2): icmp_seq=4 ttl=64 time=0.066 ms
64 bytes from 104.239.207.44 (104.239.207.44): icmp_seq=5 ttl=46 time=41.0 ms

--- mongo ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4004ms
rtt min/avg/max/mdev = 41.033/41.474/42.414/0.521 ms
64 bytes from drone_services_0.drone_default (172.23.0.2): icmp_seq=5 ttl=64 time=0.056 ms

--- mongo ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4100ms
rtt min/avg/max/mdev = 0.050/0.057/0.066/0.010 ms

As you can see, the two containers resolve mongo to different IPs.

I built drone-cli 0.8 and I observe the same problem. I’m running Docker version 17.04.0-ce, build 78d1802.
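
One way to narrow this down (a sketch; the public 104.x address suggests the bare name mongo falls through to a wildcard DNS search domain on the host before the service container registers with Docker’s embedded DNS) is to dump the resolver state from a pipeline step, assuming the utility image ships these tools:

pipeline:
  dns-debug:
    image: podnov/network-utils
    commands:
      - cat /etc/resolv.conf   # look for wildcard search domains
      - nslookup mongo         # see which server answers, and with what address

services:
  mongo:
    image: mongo:3.2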

sleep 1 is likely not enough time for mongodb to initialize and begin accepting connections. The example works fine for me when giving it enough time to initialize.

this is documented in the official docs as well:

If you are unable to connect to the mongo container please make sure you are giving mongodb adequate time to initialize and begin accepting connections.

Below is the yaml I am testing with, which is consistently passing on every attempt.

pipeline:
  ping:
    image: mongo:3.0
    group: ping
    commands:
      - sleep 15
      - 'mongo --host mongo --eval "{ ping: 5 }"'

  ping2:
    image: mongo:3.0
    group: ping
    commands:
      - sleep 15
      - 'mongo --host mongo --eval "{ ping: 5 }"'

services:
  mongo:
    image: mongo:3.0
    command: [ --smallfiles ]
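
If you’d rather not hard-code a sleep, a polling loop is a common alternative (a sketch, not from the docs):

pipeline:
  ping:
    image: mongo:3.0
    commands:
      # poll until mongod accepts connections, then run the real query
      - until mongo --host mongo --eval "db.version()" >/dev/null 2>&1; do sleep 1; done
      - 'mongo --host mongo --eval "{ ping: 1 }"'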

I am going to close this thread as it has gotten a bit off topic.