I'm trying to work out how to achieve the following:
When a commit is tagged "staging-*" on branch feature/BRANCHNAME, we want to build and publish a container image with the tag "staging-BRANCHNAME".
Currently the closest I can get is:
publish:
  image: plugins/docker
  repo: myorg/sample
  tags: staging-${DRONE_BRANCH}
  secrets: [ DOCKER_USERNAME, DOCKER_PASSWORD, DOCKER_EMAIL ]
  when:
    event: tag
    branch:
      include: [ staging-* ]
But when using tag events as the condition, the branch name is actually the name of the tag, not the name of the branch. Is there any way of doing this?
As an alternative, it could be an option to set a variable in an earlier step using bash and then refer to that variable in later steps (e.g. in the image tag name above), but I can't find any way of doing this either.
It is not possible to use branches in conjunction with tag events because git tags are tied to commits, not to branches. This is a limitation at the git level [1].
This is not possible. Each step in your pipeline is a separate unix process, which makes sharing variables between steps difficult. The only way to pass information between steps, at this time, is via disk. The drone workspace [2] is a volume that is shared between all build steps, and files you write to the workspace are available to subsequent steps.
It is up to the plugin author to decide if they want to read variables from disk, and how. The docker plugin can read tags from a .tags file in the root of your repository. You can therefore do something like this:
pipeline:
  setup:
    image: ubuntu
    commands:
      - echo "1.0.0" > .tags
  publish:
    image: plugins/docker
    repo: foo/bar
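The value written to .tags does not need to be hard-coded; any shell computation in the setup step works, since the docker plugin only reads the resulting file. A minimal sketch (the date-based tag below is purely illustrative):

```shell
# Sketch of a setup-step command: compute the image tag dynamically
# and write it to .tags for the docker plugin to pick up in a later step.
# The date-based tag is an illustration; any shell expression works here.
echo "staging-$(date +%Y%m%d)" > .tags
cat .tags
```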
[1] https://stackoverflow.com/questions/12754078/how-to-get-the-branch-from-tag-name-in-git
[2] http://docs.drone.io/workspace/
thanks @bradrydzewski, can you confirm the syntax for the docker plugin to read from a local .tags file, as I can't see it anywhere looking at the source on GitHub? Or will this happen automatically if the file exists?
okay, tested and can confirm this is the observed behaviour - really helpful, thanks @bradrydzewski
@bradrydzewski, would you consider a more general-purpose information-sharing solution? What I’m picturing is some way to “lift” values from the workspace into the pipeline context itself, probably as environment variables. This would allow the pipeline steps to refer to those values directly, rather than relying on an individual plugin to know that file-based information might be useful. For instance, to use the info in the command-line of a plugin that doesn’t have a general-purpose shell in its image, and thus can’t do any shell scripting as a precursor to the actual command.
Note that this is slightly different from [Feature Request] Setting environment variables as build steps and Pass environment variables to pipeline, because the information to be lifted is (or might be) generated as a byproduct of the pipeline step itself. (In my particular case, a main-line build uses npm version patch to bump the version number to a new value, and I need that version number elsewhere.)
I haven’t yet wrapped my head around the actual step-by-step, under-the-covers process of the pipeline, but I’m hoping it would be possible to get an agent to do something like use the Upload() RPC call to pass a .env-style file back to the server, and then update the pipeline’s environment variables for all subsequent steps. For this to even be feasible, though, it requires that subsequent steps aren’t evaluated/expanded until after the previous steps complete. I’m pretty sure that this is, in fact, the case, since variables like DRONE_BUILD_STATUS/CI_BUILD_STATUS can change as the pipeline progresses.
If this seems like a reasonable approach, I’ll figure out a concrete proposal and work on putting a PR together.
I would need to better understand the use case. If you look at the below yaml file, it would appear that this change would only remove a single line (per-step) but would add a decent amount of complexity under-the-hood for drone. Is this something that could be solved by thinking about your problem / use case differently? This could also introduce a new attack vector, where a malicious pull request could override variables at runtime to expose secrets (e.g. set HTTP_PROXY and capture my docker username/password)
pipeline:
  one:
    image: alpine
    commands:
      - echo "FOO=foo" >> .env
      - echo "BAR=bar" >> .env
  two:
    image: alpine
    shell: /bin/sh
    commands:
      - source .env
      - echo $FOO
      - echo $BAR
Information disclosure would be the biggest potential risk; I was imagining that “lifted” variables would either only be incorporated if they didn’t match a black-list (i.e. you can’t fundamentally reconfigure Drone using them), or that they would be prefixed to prevent anything like that from even being possible. (Which would make them somewhat less intuitive to use, but much, much safer.)
For example:
pipeline:
  one:
    image: alpine
    commands:
      - echo "FOO=bar" >> .env
  lift-environment:
    image: nyi/env-lifter # <== doesn't exist yet
    file: .env
    prefix: LIFTED # *must* be provided, and can't be DRONE...
  two:
    image: alpine
    commands:
      - echo $LIFTED_FOO # ==> prints out "bar"
I (personally) would probably avoid anything like automatically detecting/lifting a specific file at the end of each pipeline step… there’s no telling what random files a project might have floating around at the root of their tree, and since this is a new behavior, I think pipelines should have to explicitly opt-in if they want it.
The use case described in this thread is to avoid publishing a docker image if the tag originated from a specific branch. This information is not available to drone, but is available to a pipeline step with disk access to the cloned git repository, against which it can run git commands. One could therefore customize / wrap the docker plugin with a script that checks if the tag originated from a particular branch (via git branch --contains tags/&lt;tag&gt;) and either continue or abort. This is probably much simpler to implement, and does not require any significant changes to Drone core.
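A sketch of what such a wrapper entrypoint could look like. The feature/* naming convention and the final exec path are assumptions, not part of the real plugin; substitute the actual entrypoint of the image you are wrapping:

```shell
#!/bin/sh
# Sketch: publish only if the tagged commit is reachable from a
# feature/* branch, and derive the image tag from that branch name.
# Assumes DRONE_TAG is set and the commit sits on one feature/* branch.
set -e
TAG_BRANCH=$(git branch -a --contains "tags/${DRONE_TAG}" \
  | sed -n 's#.*feature/##p' | head -n1)
if [ -z "$TAG_BRANCH" ]; then
  echo "tag ${DRONE_TAG} is not on a feature/* branch; skipping publish"
  exit 0
fi
echo "staging-${TAG_BRANCH}" > .tags
# Hand off to the real plugin entrypoint (this path is hypothetical):
exec /bin/drone-docker
```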
And you make a very good point that any “file-unaware” plugin could be wrapped by something in order to add “file-aware” behavior. That’s certainly a reasonable workaround in most cases. Thanks!
Hi everybody. I’m trying to share variables between steps as in the subject, and also with the runner parent process.
In my case I need to convert the current time into a ${YEAR}/${MONTH}/${DAY} string to use as a DATE_PATH variable, which helps us organize some generated data in later steps (generate_data_on_dir, upload_data).
The use of .env/.tags files is not working as described before.
Is there any way to do that?
This is my data_organize_test pipeline:
kind: pipeline
type: docker
name: data_organize_test

steps:
- name: prepare_dir_struct
  image: alpine
  commands:
    - DATE_PATH=$(date '+%Y/%m/%d')
    - mkdir -p ${DATE_PATH}/${DRONE_BUILD_NUMBER}
- name: generate_data_on_specified_dir
  image: my_generate_data_image
  settings:
    - OUTPUT_DATA_DIR=${DATE_PATH}/${DRONE_BUILD_NUMBER}
- name: upload_data
  image: plugins/s3
  settings:
    bucket: my_out_bucket
    access_key: XXXXXXXXXXX
    secret_key: YYYYYYYYYYYYYYYYYYYYYy
    source: ${DATE_PATH}/${DRONE_BUILD_NUMBER}/**
    target: /
    path_style: true
    endpoint: http://minio:9000
The only way to share variables between pipeline steps is through the filesystem. You write the variables to a file and then read those variables from a file in a subsequent step.
For example:
kind: pipeline
name: default

clone:
  disable: true

steps:
- name: write
  pull: if-not-exists
  image: alpine
  commands:
    - echo LANG=en > .env
- name: read
  pull: if-not-exists
  image: alpine
  commands:
    - echo $LANG
    - source .env
    - echo $LANG
Here are the results:
[write:0] + echo LANG=en > .env
[read:0] + echo $LANG
[read:1]
[read:2] + source .env
[read:3] + echo $LANG
[read:4] en
The first step writes the variable to a file and the second step reads the variable. You are responsible for writing code to read the file. This is not done automatically, not even by plugins.
The use of .env/.tags files is not working as described before.
Is there any way to do that?
The .tags file is a special file that the Docker plugin reads. You mentioned it not working; however, I do not see the Docker plugin being used in your example.
I think perhaps the misunderstanding is the assumption that plugins automatically read variables from a file. They do not. The Docker plugin has special code in place to read the tags setting from a .tags file.
Ok @bradrydzewski, thank you a lot for your fast response.
If I have understood correctly, to do what I want I need to design my my_generate_data_image with this behaviour inside. I can do this because it is a self-made image, but I can’t rewrite the plugins/s3 image because it is not mine.
I would like to suggest this as a new feature for a future version of drone. Thank you very much again!