Harness and CloudFront

Hey all!

I originally made a post on how to deploy CloudFront with Harness. Since that post, Harness has released a really great feature called Custom Deployer. This allows for a more native approach to extending the deployment capabilities of Harness.

This post will show how to get up and running with Harness Custom Deployment and CloudFront!

Here is the Git Repo with the required scripts, if needed.

Initial Setup

There are three basic requirements to make this process work:

Delegate Profile

  1. Starting at your Harness Dashboard, go to Setup in the bottom left:

  2. Select Harness Delegates in the bottom right:

  3. Create a Delegate Profile with the following script:

    # Install the tools the deployment scripts need (zip/unzip for the
    # artifact, curl to fetch the AWS CLI installer)
    apt-get update
    apt-get install -y python zip unzip curl
    # Install the AWS CLI v2 and verify it is on the PATH
    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
    unzip awscliv2.zip
    ./aws/install
    aws --version

  4. Once that is done, assign the Delegate Profile to a Delegate of choice:


    This will create an Implicit Selector for you to use later:

Artifact Server

  1. The artifact server is where the main CloudFront artifact will come from. This example uses an S3 bucket that stores a ZIP artifact, but Artifactory, Nexus, and others can be used instead. To set up the S3 bucket as the artifact source, there needs to be an AWS Cloud Provider in Harness associated with a Delegate that has the correct S3 permissions.

AWS Programmatic Access Key and Secret Key

  1. The last piece of setup you’ll need to do before we get the workflow built is to add the AWS programmatic access key and secret key to the Harness Secrets Manager:
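Scripts that run on the Delegate can then pull those keys with Harness secret expressions. A minimal sketch — the secret names `aws_access_key` and `aws_secret_key` here are placeholders for whatever names you chose when storing them:

```shell
# Export the stored keys so the AWS CLI on the Delegate can authenticate.
# The secret names below are hypothetical; use the names from your Secrets Manager.
export AWS_ACCESS_KEY_ID="${secrets.getValue("aws_access_key")}"
export AWS_SECRET_ACCESS_KEY="${secrets.getValue("aws_secret_key")}"
```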

Custom Deployer Setup

  1. In Setup > Template Library there is a Template type called Custom Deployer (if you don’t see it, ask Harness Support for the Feature Flag to be turned on)

  2. Add the Fetch Instance script into the appropriate section. Make sure to update the secret references to match the secrets you added to the Harness Secrets Manager.

  3. If you run this script locally, you’ll see output that looks similar to this:


    This structure will be used to get information into the Host Attributes section
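For CloudFront, a Fetch Instance script would typically query AWS (for example via `aws cloudfront list-distributions`) and emit JSON. As a purely illustrative sketch of the shape — the field names below are hypothetical, not the exact output of the repo’s script:

```json
{
  "Instances": [
    {
      "Id": "E1ABCDEF234567",
      "DomainName": "d111111abcdef8.cloudfront.net",
      "Status": "Deployed"
    }
  ]
}
```

In the Host Attributes section, you map the field that identifies the host (here `DomainName`) to the hostname.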

Service Commands

  1. The first service command, called Download Artifact, is an Exec command; its script is in the Git repo linked above.
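Conceptually, the command just pulls the ZIP from the source bucket onto the Delegate. A rough sketch, assuming the Harness S3 artifact expressions `${artifact.bucketName}` and `${artifact.key}` and that the AWS keys have already been exported as shown earlier:

```shell
# Hypothetical sketch: copy the ZIP artifact from the source S3 bucket
# to the Delegate's working directory.
aws s3 cp "s3://${artifact.bucketName}/${artifact.key}" artifact.zip
```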

  2. The second service command, called Unzip and Upload, is also an Exec command; its script is in the Git repo as well.
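Again conceptually, this command unpacks the artifact and pushes the contents to the CloudFront origin bucket. A sketch assuming the conventions used in this post — the Service name is the destination bucket, and `destination` is the Service Config Variable added in the Service Setup section:

```shell
# Hypothetical sketch: unpack the artifact and sync it to the
# CloudFront origin bucket/folder.
unzip -o artifact.zip -d unpacked
aws s3 sync unpacked/ "s3://${service.name}/${serviceVariable.destination}"
# Optionally invalidate cached paths so CloudFront serves the new content:
# aws cloudfront create-invalidation --distribution-id <YOUR_DIST_ID> --paths "/*"
```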

  3. The final Service Command setup should contain both of these Exec commands.

Service Setup

  1. Create a new Service in Harness, and make sure the name of the Service matches the destination bucket to deploy to (i.e. if the bucket that CloudFront is using as the Origin is called special-bucket, call the Service special-bucket), then add the correct Artifact Type

  2. Add a Service Config Variable called destination and set it to the destination path for the CloudFront S3 destination folder

The first thing to do is add a dummy Cloud Provider

Environment and Infrastructure Definition

  1. Create an Environment in the Application and then add the Custom Infrastructure Definition

Workflow

  1. Create a Multi-Service Workflow and select the Environment that you added or configured in the last step

  2. When the Workflow opens, there is a section in the Deployment Phase where a new Deployment step can be added


  3. In the new Workflow phase, a Fetch Instance step already exists. This is where the Service Commands that were created before need to be added


Deploy and Dashboard

  1. Once the deployment goes out, there will be two main steps: the main deployment step and then the Fetch Instance step


  2. The Fetch Instance step should have at least one mapped instance coming out of it. When that happens, the host instance will appear in the Service Dashboard

Hope this helps!

Don’t forget to Like/Comment/Share!
