Is there a clean way for a plugin to “stop” a pipeline without causing it to fail?
For example, in the case of a monorepo, we have a build pipeline that shouldn’t progress if there’s no need to continue (e.g., a plugin that detects if the given commit applies to its pipeline).
Some (probably bad) ideas:
- Returning a special error code (e.g., 64) from the step container (see the sketch below)
- A special when condition (though this is probably confusing):
when:
  # if this results in a non-zero exit code then the pipeline stops (without marking a failure)
  commands:
    - scripts/should_i_build.sh
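A sketch of what scripts/should_i_build.sh could contain, combining the two ideas (the path filter and the choice of 64 are illustrative):

```sh
#!/bin/sh
# Exit with the hypothetical "stop without failing" code (64) when the
# commit touches nothing under this pipeline's directory.
git diff --name-only HEAD~1 HEAD | grep -q '^services/my-service/' || exit 64
```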
> we have a build pipeline that shouldn’t progress if there’s no need to continue
In this case, are you detecting if the Pipeline should run based on files changed (subdirectory)? This could be an interesting use case for a configuration plugin. I discuss this briefly here.
Sorry, I only partially answered your question. I am sort of partial to a custom exit code. Can we pick a custom exit code > 256? I wonder how a basic POSIX shell and Docker (and Kubernetes) would handle a non-standard exit code outside the standard range.
Yes, essentially, but maybe not just for the files changed.
For example, in some cases, we compute a hash based on all of the dependencies for a single binary in a package, and if an image tagged with that hash exists, we know nothing has changed and therefore don’t need to build, publish, or deploy. This is especially useful in mono-repos with many potential images.
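For illustration, a minimal version of that check (the registry, image name, and the file list feeding the hash are all invented here):

```sh
#!/bin/sh
# Hash everything the binary depends on; the file list is illustrative.
HASH=$(cat go.mod go.sum cmd/my-service/*.go | sha256sum | cut -c1-12)

# If an image with this tag already exists, nothing has changed, so the
# build/publish/deploy steps can be skipped.
if docker manifest inspect "registry.example.com/my-service:$HASH" >/dev/null 2>&1; then
  echo "my-service:$HASH already exists; nothing to do"
  exit 64  # the hypothetical "stop without failing" exit code
fi
```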
Also, I’ve strongly considered moving this logic to our configuration plugin, thereby “dynamically” building a .drone.yml, which is certainly a solution, but I don’t necessarily want to mask the entire pipeline from developers. So my next thought was to have our configuration plugin be capable of doing something like the following:
1. Fetch the canonical drone.yml/jsonnet file based on its location (this is just for some global things that always occur or are built).
2. Read all files named .build.yml in the repo. These are custom YAML files that the configuration plugin understands, and there’s generally one per “project” in a repo. Each one carries information such as the ancillary drone.yml file for the custom configuration (e.g., a Go program), which is only loaded and built if we detect that a build is required (e.g., by doing the hash thing mentioned above for the dependencies of the package).
Actually, simpler than a bunch of random .build.yml files lying around, we could just have one that defines something like:
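Roughly this shape (go_package and drone_file are placeholder field names, nothing settled):

```yaml
# a single .build.yml at the repo root
projects:
  - go_package: ./cmd/service-a
    drone_file: cmd/service-a/.drone.yml
  - go_package: ./cmd/service-b
    drone_file: cmd/service-b/.drone.yml
```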
Of course, the weird part is that the server asks specifically for a Repo.Config file based on the repo settings, but the configuration plugin would essentially ignore this or have some other magical means of knowing. I guess we could add something special to .drone.yml that the configuration plugin understands:
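Say, an extra YAML document that the configuration plugin strips out before returning the real pipeline (field names are placeholders again):

```yaml
---
kind: metadata
projects:
  - go_package: ./cmd/service-a
    drone_file: cmd/service-a/.drone.yml
---
kind: pipeline
name: global
steps:
  # the global things that always run
  - name: global
    image: alpine
    commands:
      - ./scripts/global.sh  # hypothetical
```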
And if there’s a go_package entry, we know to check whether it needs to be built before loading its .drone.yml.
(I like this because there’s not much magic. The repo setting still points to a .drone.yml file, and the configuration plugin understands kind “metadata”.)
> For example, in some cases, we compute a hash based on all of the dependencies for a single binary in a package, and if an image tagged with that hash exists, we know nothing has changed and therefore don’t need to build, publish, or deploy. This is especially useful in mono-repos with many potential images.
This is a really interesting approach; I had not thought about it before.
I can understand why you might want to do this in the Pipeline, because you will have all the required data available (all files, dependencies, etc.). You could replicate this in a config plugin, but at that point you are replicating a lot of what Drone is doing in the pipeline.
> Read all files named .build.yml in the repo. These are custom YAML files that the configuration plugin understands, and there’s generally one per “project” in a repo.
Makes sense.
As for the error code, I only picked 64 because I believe that is in the range of “user defined” error codes.
Ah sorry, I didn’t see your suggestion. I tested an exit code > 256 (out of range) and the shell reported it as a 0 exit code; exit statuses are truncated to 8 bits, so values wrap modulo 256 and there is no usable code outside 0–255. That definitely does not work (so ignore my suggestion).
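For example, in a typical POSIX shell:

```sh
$ sh -c 'exit 256'; echo $?
0
$ sh -c 'exit 300'; echo $?
44
```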
Maybe we could also come up with some sort of when-clause plugin. I have been trying to think of how we could use the kubernetes-style approach to allow more types of custom objects. I have not thought much about this, but we might be able to do some interesting things:
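Something like this, perhaps (completely made up; just borrowing the multi-document kind: convention):

```yaml
---
kind: pipeline
name: default
steps:
  - name: build
    image: golang
    commands:
      - go build ./...
---
# a hypothetical new object type that a plugin could evaluate
kind: condition
name: changed
image: some-condition-plugin
```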
The only issue w/ this approach (given my use case) is that it’s a single condition unless there were a condition for every project, and that gets wasteful when each condition needs access to the code (if that makes any sense). Here’s a way it could work:
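Rough sketch (the per-step condition syntax here is invented):

```yaml
steps:
  - name: build-service-a
    image: golang
    commands:
      - go build ./cmd/service-a
    when:
      # hypothetical: run this image against the workspace; a non-zero
      # exit skips the step rather than failing the build
      condition:
        image: your-mono-repo-plugin
        settings:
          project: cmd/service-a
```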
Syntax could be improved, but the upside to this concept is that the conditional is evaluated within the workspace and you get the ability to have a custom plugin do the work. This combined w/ the error code approach could be powerful.
I feel like we might be on the same page. I was thinking, in my example, that the plugin would have access to the underlying workspace. This would define the custom condition:
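Something like this (made-up syntax, reusing the kind: convention from above):

```yaml
---
kind: condition
name: hash
image: your-mono-repo-plugin
```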
and this would trigger condition execution (each time it was encountered):
when:
  using_condition: hash
which would basically look something like:
docker run -v workspace:/drone/src your-mono-repo-plugin
If running the condition multiple times were slow, perhaps the plugin could write some sort of temporary file to disk that caches the result? The condition would still run multiple times, but it would be much faster once the cache file exists.
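E.g., inside the condition plugin, something like this (the cache path is illustrative, and scripts/should_i_build.sh is the check from earlier in the thread):

```sh
#!/bin/sh
# Cache the condition's verdict in the workspace so repeated
# evaluations within the same build are cheap.
cd /drone/src || exit 1
CACHE=.condition-result
if [ -f "$CACHE" ]; then
  exit "$(cat "$CACHE")"
fi
scripts/should_i_build.sh
RESULT=$?
echo "$RESULT" > "$CACHE"
exit "$RESULT"
```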