Hidden Variables and their Power

One of the biggest benefits of Harness is its templatizing ability. You can templatize almost anything in almost any place, but because of this there are so many potential variables that it can become difficult to know which ones work where, especially when the variable dropdown doesn’t offer every available option.

Based on my own usage of Harness and my experience helping companies stand up their instances, here are some hidden variables and the power they hold:

Important Note: If you type in one of the variables shown here and it does not appear in the dropdown list in Harness, do not assume that the variable won’t work.

Referencing Secrets
There are two main ways to reference secrets in your deployments.

The first, and more common, way is to create a Config Variable in the Service or Environment and then pass it in using ${serviceVariable.variableName}.
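As a minimal sketch (the variable name appPort is illustrative, not from the post), a Shell Script step in a Workflow might consume that variable like this. Harness substitutes the expression before the script runs; run outside Harness, it stays a literal string:

```shell
# Hypothetical Workflow Shell Script step; "appPort" is an illustrative
# Service Config Variable name. Harness replaces the quoted expression
# with the real value before executing the script.
APP_PORT='${serviceVariable.appPort}'
echo "Starting health check on port $APP_PORT"
```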

The other way is to add a reference to the secret directly by using ${secrets.getValue("secretName")}, which is useful for passing secrets through a script or command in a workflow. A good use case for this is if you need to pass a username or password to a CLI or API command.

An important note here is that the latter option, referencing the secret directly, can be placed in a remote manifest (e.g. in GitHub) and will still resolve in Harness at runtime.
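As a sketch of that pattern, a values file stored in a remote repo might look like the following (the secret name db_password and the rest of the fragment are illustrative):

```yaml
# Hypothetical values.yaml stored in a remote repo (e.g. GitHub); the
# secret name "db_password" is illustrative. Harness resolves the
# expression at runtime, so the plaintext secret never lives in Git.
database:
  username: app-user
  password: ${secrets.getValue("db_password")}
```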

Workflow Variable Overrides
One of the most common uses of Variable Overrides is in a Workflow. You create the Workflow Variable in the Workflow and then assign it to a part of the Workflow Phase by leveraging ${workflow.variables.variableName}.

However, that variable does not need to exist before you can reference it. For example, if you need to leverage an Artifact and want your team to always define a specific value at runtime, you can put ${workflow.variables.variableName} in the manifest, whether local in Harness or remote in Git, and then create the matching Workflow Variable later.
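For example, a manifest might reference a not-yet-created Workflow Variable like this (a sketch; imageTag and the repository URL are illustrative names, not from the post):

```yaml
# Hypothetical values.yaml (local in Harness or remote in Git); "imageTag"
# is an illustrative Workflow Variable that must later be created in the
# Workflow and supplied at runtime.
image:
  repository: registry.example.com/my-app
  tag: ${workflow.variables.imageTag}
```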

Another place to add a variable to a workflow, which is somewhat hidden, is for the service itself. The best place to look for this in the workflow is where the Service is defined (i.e. Rolling Workflow = pencil icon to the right of Rolling, Canary Workflow = pencil icon to the right of the phase when drilled into the desired phase). This is also where you can templatize the service in a Canary or Multi-Service Workflow as needed.

Keep in mind that these Workflow Variables can be passed into almost any phase in the workflow, which includes Scripts, Jira/ServiceNow Integration, HealthCheck Websites, Verification Requirements, etc.

Environment Variable Override
In some cases, we know that a Service needs a variable that is defined by the Environment, and that every Service deployed to that Environment should get the same value (e.g. a database password). You can set a Service Configuration Override for All Services and then reference it as ${environmentVariable.variableName} in the Workflow or Service.
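For example, assuming an All Services override named dbPassword has been defined on the Environment (the name and fragment below are illustrative), a values file could reference it like this:

```yaml
# Hypothetical values.yaml fragment; "dbPassword" is an illustrative
# Environment-level Service Configuration Override defined for All
# Services, so every Service deployed here gets the same value.
database:
  host: db.internal.example.com
  password: ${environmentVariable.dbPassword}
```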

Infrastructure Provisioners
These are well documented in our Infrastructure Provisioner docs (Terraform and CloudFormation). One thing worth noting, though, is Harness’s ability to pass in values at runtime when executing the Provisioner commands.
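As a rough sketch of that idea (illustrative names only, not the exact Provisioner schema), the Provisioner inputs can be mapped to expressions that Harness resolves at runtime:

```yaml
# Rough sketch, not the exact Harness schema; names are illustrative.
# Terraform input variables mapped to values resolved at runtime.
inputs:
  region: ${workflow.variables.region}
  environment_name: ${env.name}
```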

Pipeline Templatization
This one is very powerful, allowing you to leverage one Workflow for every stage in a Pipeline. First, you’ll need to templatize the Workflow (follow the steps in Workflow Variable Overrides if you want to templatize your Workflow completely) and make the Environment, Service, and Service Infrastructure templatized.

When you add the Workflow to the desired Pipeline, you’ll be asked to add the values for each templatized variable. Hardcode the variables that make sense for the Pipeline (e.g. hardcode the Environment for each stage if you want to promote your Service across each Environment in succession). Then, for the other options, add a relative variable (e.g. Service = ${svc}, Service Infrastructure = ${si}).

This allows you to supply that information at runtime, especially in a Trigger, so that one Trigger can execute one Pipeline that reuses one Workflow. Say goodbye to the need for snowflakes in your CD process!
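To make the shape of this concrete, here is a rough sketch of one templatized Workflow reused across two stages (illustrative names only, not the exact Harness Pipeline schema):

```yaml
# Rough sketch, not the exact Harness schema; names are illustrative.
stages:
  - name: Deploy to QA
    workflow: deploy-k8s            # one reusable templatized Workflow
    workflowVariables:
      Environment: qa               # hardcoded per stage for promotion
      Service: ${svc}               # supplied at runtime (e.g. by a Trigger)
      ServiceInfrastructure: ${si}
  - name: Deploy to Prod
    workflow: deploy-k8s
    workflowVariables:
      Environment: prod
      Service: ${svc}
      ServiceInfrastructure: ${si}
```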

The ability to provide values to your Workflows/Pipelines via a Trigger is an extremely powerful feature. Not only can values be provided by a Git-based Trigger (see our Git Trigger doc for more), but providing information through an API call or Webhook Trigger can be significantly more powerful.

When you set up your Trigger, set the Condition to On Webhook with a Custom payload type. Then, in the Action, point to the desired Workflow/Pipeline, and you will be prompted to fill in the values for the Trigger. Instead of selecting the values from the dropdown (similar to the Pipeline Templatization process above), type in the relative variable (e.g. Environment = ${env}). The Trigger output will then show key:value pairs where each value ends in _placeholder. Replace all of the placeholders with the desired values and you’ll be set to kick off the Trigger via cURL or webhook. (This is a great way to have a CI tool pass important information to the Workflow/Pipeline.)
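As a sketch, firing such a Trigger from a CI tool might look like this; the URL/token are placeholders, and the parameter keys and values are illustrative stand-ins for the _placeholder entries from the generated Trigger output:

```shell
# Hypothetical cURL call to a Custom Webhook Trigger; the URL and token
# are placeholders. The "parameters" keys match the relative variables
# (${env}, ${svc}, ${si}) used when templatizing the Workflow/Pipeline.
WEBHOOK_URL="https://app.harness.io/api/webhooks/YOUR_TOKEN_HERE"
PAYLOAD='{
  "application": "MyApp",
  "parameters": { "env": "qa", "svc": "order-service", "si": "qa-k8s" }
}'
echo "$PAYLOAD"
# Uncomment to actually fire the Trigger:
# curl -X POST -H "Content-Type: application/json" -d "$PAYLOAD" "$WEBHOOK_URL"
```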

Final Thoughts
Whenever someone looks at Harness, they almost inevitably limit their use to the features the Developers or DevOps Engineers are most interested in. Hopefully this post has helped expand the usability of Harness for you. If you’d like to hear about some use cases that could benefit you and your team, even though they might not be advertised, here is a short list to look through and consider:

Ephemeral Environment Management
PCI/PII Compliance
Feature Flag Management
Testing Suite/Script Execution
Integration with any API/CLI as needed

Hopefully this helps as you explore Harness’s extensibility. I’ll update this post as functionality improves.

Don’t forget to Comment/Like/Share!

