1.0.0-rc.5 release notes

I have published 1.0.0-rc.5. This release candidate includes a number of bug fixes and improvements to our Kubernetes runtime. There are no breaking changes (database, yaml, or otherwise) if you are upgrading from rc.3 or rc.4 to rc.5.

Server and Runner

  • kubernetes runtime support for global secrets
  • kubernetes runtime support for global registry credentials
  • kubernetes runtime support for global resource limits
  • kubernetes runtime support for arm and arm64. See issue #2573.
  • change how we mount the .netrc file in kubernetes. See issue #31.
  • support for devices (regression from 0.8). See this thread.
  • support for global docker networks (regression from 0.8)
  • allow custom commit status messages (regression from 0.8)
  • metrics improvement: running and pending build counts should no longer include blocked builds
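For the arm and arm64 support mentioned above, a pipeline can target an architecture through the platform section of its yaml. A minimal sketch, assuming the 1.0 pipeline format (the build step itself is illustrative):

```yaml
kind: pipeline
name: default

# Schedule this pipeline on a linux/arm64 runtime.
platform:
  os: linux
  arch: arm64

steps:
- name: build
  image: golang
  commands:
  - go build ./...
```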

User Interface

Roadmap

These are some of the items I would like to address before snapshotting a final release, which means we can expect another release candidate by the end of next week:

  • improvements to documentation
  • improvements to the agent <> server rpc mechanism
  • finish migration utility
  • support for an agentless Docker runtime, similar to the agentless Kubernetes runtime
  • audit configuration parameters to ensure clear, accurate names are used
  • audit environment variables to ensure clear, accurate names are used

Other Notes

We have made some progress on the migration utility, which is now capable of migrating secrets and registry credentials. We will continue work on the migration utility in the rc.6 sprint. I expect rc.6 to be the final release candidate prior to the final release.

I will also begin to focus on preparing the source code for public release. I am targeting mid-February for both the 1.0-final release and the code release, and will post further updates when I have news to share.


> support for global docker networks

How can I make use of this? Where can I read more about this?

This is only supported in the Docker runtime, not the Kubernetes runtime (guessing this is related to your other question).
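A sketch of how a global network might be attached, assuming the agent accepts a network list via an environment variable; the variable name DRONE_RUNNER_NETWORKS is an assumption for illustration and is not confirmed in this thread:

```shell
# Create a shared network on the host (standard Docker CLI).
docker network create my-shared-network

# Hypothetical agent configuration: DRONE_RUNNER_NETWORKS is an
# assumed variable name; check the release documentation for the
# actual setting in your version.
docker run -d \
  -e DRONE_RPC_SERVER=https://drone.example.com \
  -e DRONE_RUNNER_NETWORKS=my-shared-network \
  drone/agent
```

With a setup along these lines, every pipeline container would join my-shared-network in addition to its per-build network.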


Any info on this? Do you set the global network on the Drone server and the running jobs inherit the network, or do you specify it in your drone.yml? I want to expose a running zelenium instance on the same host so the jobs don’t need to start and stop the nodes for each run.

Are these settings documented, and if not, what should I set?