Hey all!
A while back I wrote a post on how to deploy CloudFront with Harness. Since then, Harness has released a really great feature called Custom Deployer, which allows for a more native approach to extending Harness's deployment capabilities.
This post will show how to get up and running with Harness Custom Deployment and CloudFront!
Here is the Git Repo with the required scripts, if needed.
Initial Setup
There are three basic requirements to make this process work: a Delegate that can run the AWS CLI, an artifact server, and AWS keys in the Harness Secrets Manager.
- Starting at your Harness Dashboard, go to Setup in the bottom left.
- Select Harness Delegates in the bottom right.
- Create a Delegate Profile with the following script:
apt-get update
# python and zip/unzip are needed by the AWS CLI installer below
apt-get install -y python zip unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
./aws/install
aws --version
- Once that is done, assign the Delegate Profile to a Delegate of choice. This will create an Implicit Selector for you to use later.
- The artifact server is where the main CloudFront artifact will come from. This example uses an S3 bucket that stores a ZIP artifact, but Artifactory, Nexus, and others can be used instead. To set up the S3 bucket as the artifact source, there needs to be an AWS Cloud Provider in Harness associated with a Delegate that has the correct S3 permissions.
- AWS Programmatic Access Key and Secret Key: the last piece of setup you'll need before building the workflow is to add the AWS programmatic keys to the Harness Secrets Manager.
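Once the keys are stored, Delegate scripts can reference them with Harness secret expressions. The secret names below are assumptions; use whatever names you chose in the Secrets Manager:

```shell
# Assumed secret names; replace with the names you used in the Secrets Manager
export AWS_ACCESS_KEY_ID=${secrets.getValue("aws_access_key")}
export AWS_SECRET_ACCESS_KEY=${secrets.getValue("aws_secret_key")}
```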
Custom Deployer Setup
- In Setup > Template Library there is a Template type called Custom Deployer (if you don't see it, ask Harness Support to turn the Feature Flag on).
- Add the Fetch Instance script into the appropriate section. Make sure to update the secret references to match the secrets you added to the Harness Secrets Manager.
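The original Fetch Instance script was shown as an image, so here is a hypothetical sketch of what it could look like for CloudFront. The secret names and the use of `aws cloudfront list-distributions` are assumptions; `${INSTANCE_OUTPUT_PATH}` is the variable Harness Custom Deployment provides for writing out the instance JSON:

```shell
# Hypothetical Fetch Instance sketch; secret names are assumptions
export AWS_ACCESS_KEY_ID=${secrets.getValue("aws_access_key")}
export AWS_SECRET_ACCESS_KEY=${secrets.getValue("aws_secret_key")}

# Write the distribution list to the path Harness reads instance JSON from
aws cloudfront list-distributions > "${INSTANCE_OUTPUT_PATH}"
```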
- If you run this command locally, you'll see the JSON structure that will be used to map information into the Host Attributes section.
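The original sample output was an image, so here is a purely illustrative example, assuming the Fetch Instance script wraps `aws cloudfront list-distributions` (all values are made up):

```json
{
  "DistributionList": {
    "Items": [
      {
        "Id": "E2EXAMPLE123",
        "DomainName": "d111111abcdef8.cloudfront.net",
        "Status": "Deployed"
      }
    ]
  }
}
```

In the Host Attributes section you would then map a field such as the hostname to `DomainName`, for example.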
- The first service command, called Download Artifact, is an Exec command whose script downloads the artifact.
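The Download Artifact script itself was shown as an image; here is a minimal sketch, assuming an Amazon S3 artifact source (the `${artifact.*}` expressions assume S3 artifact metadata) and the secret names from earlier:

```shell
# Hypothetical Download Artifact sketch; secret names are assumptions
export AWS_ACCESS_KEY_ID=${secrets.getValue("aws_access_key")}
export AWS_SECRET_ACCESS_KEY=${secrets.getValue("aws_secret_key")}

# Copy the ZIP artifact from the source bucket into the working directory
aws s3 cp "s3://${artifact.bucketName}/${artifact.artifactPath}" artifact.zip
```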
- The second service command, called Unzip and Upload, is also an Exec command; its script unzips the artifact and uploads the contents.
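Again a hedged sketch, assuming the artifact was downloaded as artifact.zip, the Service name matches the destination bucket (as described below), and a destination Service Config Variable holds the target folder:

```shell
# Hypothetical Unzip and Upload sketch; secret names are assumptions
export AWS_ACCESS_KEY_ID=${secrets.getValue("aws_access_key")}
export AWS_SECRET_ACCESS_KEY=${secrets.getValue("aws_secret_key")}

unzip -o artifact.zip -d site
# The Service name doubles as the destination bucket name in this setup
aws s3 sync site "s3://${service.name}/${serviceVariable.destination}"
```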
- The final Service Command should now contain both Exec steps.
- Create a new Service in Harness, and make sure that the name of the Service is the name of the destination bucket to deploy to (i.e. if the bucket that CloudFront is using as the Origin is called special-bucket, call the Service special-bucket), and add the correct Artifact Type.
- Add a Service Config Variable called destination and set it to the destination path for the CloudFront S3 destination folder.
Environment and Infrastructure Definition
The first thing to do is add a dummy Cloud Provider.
- Create an Environment in the Application and then add the Custom Infrastructure Definition.
- Create a Multi-Service Workflow and select the Environment that you configured in the last step.
- When the Workflow opens, there is a section in the Deployment Phase where a new Deployment step can be added.
- In the new Workflow phase, there is an existing Fetch Instance step. This is where the Service Command that was created before needs to be added.
Deploy and Dashboard
- Once the deployment goes out, there will be two main steps: the main deployment step and the Fetch Instance step.
- The Fetch Instance step should have at least one mapped instance coming out of it. When that happens, the host instance will be visible in the Service Dashboard.
Hope this helps!
Don’t forget to Like/Comment/Share!