Common Use Cases
DinD (Docker-in-Docker)
You can run Docker-in-Docker (dind) in a CI Stage. This is useful whenever you need to run Docker commands as part of your build process. For example, you can build images from two separate codebases in the same Pipeline: one using a step such as Build and Push an Image to Docker Registry and another using Docker commands in a Run step.
The Docker daemon, which manages building and running Docker images, can itself run inside a container in a setup known as Docker-in-Docker (dind). For the daemon to function correctly in this environment, its entrypoint must be executed. This is achieved by running it as a service container, which executes the entrypoint defined in its Dockerfile.
When building and pushing a Docker image, the docker command must interact with the Docker daemon to perform these actions. Sharing the /var/run path between the service container and the Run step exposes the daemon's socket to the Run step and makes this interaction possible. This method is preferred over mounting the host Docker socket, as it adheres to best practices for containerized environments.
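For reference, a minimal sketch of such a stage might look like the following. The step identifiers, connector reference, and image tags are placeholders, and the dind service is assumed to run as a Background step:
- stage:
    name: dind build
    type: CI
    spec:
      cloneCodebase: true
      # infrastructure omitted for brevity
      sharedPaths:
        - /var/run                           # exposes the daemon's socket to later steps
      execution:
        steps:
          - step:
              identifier: dind_service
              name: dind service
              type: Background
              spec:
                connectorRef: docker-connector   # placeholder connector
                image: docker:dind               # image entrypoint starts the Docker daemon
                privileged: true                 # dind requires a privileged container
          - step:
              identifier: docker_build
              name: docker build
              type: Run
              spec:
                connectorRef: docker-connector   # placeholder connector
                image: docker:latest
                command: |
                  # wait until the daemon is reachable through /var/run/docker.sock
                  while ! docker info > /dev/null 2>&1; do sleep 1; done
                  docker build -t my-image:latest .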
Storing Bazel cache to GCP bucket
When using Harness CI to store the Bazel cache in a GCP bucket, keep in mind that the cache step runs inside a Docker container. To make the cache visible to that container, the /home/ubuntu folder (the home directory) must be shared using the sharedPaths configuration. To configure this, take the following steps:
- Create a GCP bucket to store the Bazel cache.
- Create a sharedPaths configuration in the Harness CI pipeline that shares the /home/ubuntu folder between the cache step container and the host.
- In the save cache step, specify the path to the Bazel cache as /home/ubuntu/.cache/bazel.
- In the restore cache step, specify the path to the Bazel cache as /home/ubuntu/.cache/bazel and the GCP bucket as the source of the cache.
By following these steps, the Bazel cache will be stored in the GCP bucket, and the cache step will be able to access it within the Docker container via the shared /home/ubuntu folder.
It is also recommended to use cache keys to identify the cache and to set an expiration time for cached objects, to avoid storing stale data.
Note that this is a general guide; the exact implementation may vary depending on the pipeline setup. An example save cache step is shown below:
- step:
    identifier: "save_cache"
    type: "SaveCacheGCS"
    name: "save cache"
    spec:
      connectorRef: "gcp-connector"
      bucket: "bucket-name"
      key: "bazel"
      sourcePaths:
        - "/home/ubuntu/.cache/bazel"
      archiveFormat: "Tar"
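To complete the picture, the stage-level sharedPaths entry and a matching restore step might look roughly like this; the connector reference and bucket name are placeholders, and whether you keep a static key such as bazel or derive it from a checksum is up to your setup. Expiration of old cache objects is typically handled outside the pipeline, for example with a lifecycle rule on the GCS bucket.
spec:
  sharedPaths:
    - /home/ubuntu                         # home directory shared with the cache step containers
  execution:
    steps:
      - step:
          identifier: restore_cache
          type: RestoreCacheGCS
          name: restore cache
          spec:
            connectorRef: gcp-connector        # placeholder GCP connector
            bucket: bucket-name                # placeholder bucket
            key: bazel                         # must match the key used by the save step
            archiveFormat: Tar
            failIfKeyNotFound: false           # the first run has nothing to restore
      # ... build steps and the save cache step shown above go here ...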
Upload non-image artifacts to GCS Bucket / AWS S3
The pipeline given below is an example of how to use Harness CI to upload non-image artifacts, such as the pom.xml file, to a Google Cloud Storage (GCS) bucket.
The pipeline is composed of several steps:
- The first step creates a GCS bucket using a “Run” step with the command gsutil mb -p ci-play gs://harness-gcs-testing. This step also authenticates to GCP using the gcloud auth activate-service-account command, with the key file being a GCP secret key defined as a pipeline variable.
- The second step uploads the pom.xml file to the GCS bucket created in step 1. This is done using the “GCSUpload” step, specifying the connector, bucket name, source path, and target for the file.
- The third step verifies that the pom.xml file was successfully uploaded to the GCS bucket and then deletes the bucket. This is done using a “Run” step with the command gsutil cp gs://harness-gcs-testing/test /tmp/pom.xml to copy the file to a local directory, after which the step checks whether the copied file exists. If the file does not exist, the pipeline exits with an error.
- The pipeline is executed on KubernetesDirect infrastructure, and the codebase is cloned using the properties defined in the pipeline.
In this case, the paths /.config and /.gsutil are shared under the Shared Path configuration.
This is particularly useful when certain steps in the pipeline need access to files or folders created or modified by previous steps. For example, the “createBucket” and “verifyAndDeleteBucket” steps both need access to the GCP credentials stored in the shared path /.config, which the gcloud command uses to authenticate the service account and perform operations on the GCS bucket.
To use an S3 bucket instead of GCS, change the connector reference to an AWS connector, change the commands used in the “Run” steps to interact with S3 instead of GCS, and replace the “GCSUpload” steps with “S3Upload” steps (see the sketch after the pipeline below). Also make sure you have the correct permissions to access the S3 bucket, and set the correct endpoint if it is hosted in a private cloud.
stages:
  - stage:
      identifier: gcp_upload_success
      name: stage 1
      type: CI
      variables:
        - name: GCP_SECRET_KEY
          type: Secret
          value: account.test
      spec:
        sharedPaths:
          - /.config
          - /.gsutil
        execution:
          steps:
            - step:
                identifier: createBucket
                name: create bucket
                type: Run
                spec:
                  command: |
                    echo $GCP_SECRET_KEY > secret.json
                    cat secret.json
                    gcloud auth activate-service-account --key-file=secret.json
                    gsutil rm -r gs://harness-gcs-testing || true
                    gsutil mb -p ci-play gs://harness-gcs-testing
                  connectorRef: account.test
                  image: "google/cloud-sdk:alpine"
            - step:
                identifier: upload
                name: upload
                type: GCSUpload
                spec:
                  connectorRef: account.test
                  bucket: "harness-gcs-testing"
                  sourcePath: pom.xml
                  target: test
            - step:
                identifier: upload1
                name: upload1
                type: GCSUpload
                spec:
                  connectorRef: account.test
                  bucket: "harness-gcs-testing"
                  sourcePath: pom.xml
                  target: test
            - step:
                identifier: verifyAndDeleteBucket
                name: verify
                type: Run
                spec:
                  command: |
                    mkdir -p /tmp
                    echo $GCP_SECRET_KEY > /tmp/secret.json
                    gcloud auth activate-service-account --key-file=/tmp/secret.json
                    gsutil cp gs://harness-gcs-testing/test /tmp/pom.xml
                    echo "Deleting the bucket"
                    gsutil rm -r gs://harness-gcs-testing
                    echo "Checking whether file exists"
                    if [ ! -f /tmp/pom.xml ]; then
                      echo "No file present with name pom.xml"
                      echo "GCS upload failed!"
                      exit 1
                    fi
                  connectorRef: account.test
                  image: "google/cloud-sdk:alpine"
        infrastructure:
          type: KubernetesDirect
          spec:
            connectorRef: account.test
            namespace: harness-delegate
        cloneCodebase: true
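For the S3 variant, the equivalent of the upload step might look roughly like this; the connector reference, bucket name, and region are placeholders for your own AWS setup, and the “Run” steps would similarly switch from gsutil to the aws s3 CLI (for example, aws s3 mb and aws s3 cp):
- step:
    identifier: upload
    name: upload
    type: S3Upload
    spec:
      connectorRef: aws-connector        # placeholder AWS connector
      region: us-east-1                  # adjust to your bucket's region
      bucket: harness-s3-testing         # placeholder bucket name
      sourcePath: pom.xml
      target: test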
Save Cache in VM infrastructure
When using a VM infrastructure in Harness, there is a known issue where the save cache step is unable to access the /root/.gradle folder. This can be resolved by adding a shared path configuration to the pipeline.
To add a shared path for saving cache on a VM infrastructure, follow these steps:
- In your Harness pipeline, navigate to the “Infrastructure” section and select “VM” as the infrastructure type.
- Under the “Spec” section, add a “sharedPaths” field and specify the path that you would like to share; in this case, it is /root/.gradle (see the sketch below).
- Save the pipeline and re-run it to ensure that the cache is saved to the shared path.
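As a rough sketch, and assuming a Gradle build with a GCS-backed cache, the relevant part of the stage YAML could look like this; the pool name, connector reference, bucket, and cache key are placeholders:
- stage:
    name: build
    type: CI
    spec:
      cloneCodebase: true
      sharedPaths:
        - /root/.gradle                      # make the Gradle cache visible to the cache steps
      infrastructure:
        type: VM
        spec:
          type: Pool
          spec:
            poolName: my-vm-pool             # placeholder VM pool name
            os: Linux
      execution:
        steps:
          # ... build steps ...
          - step:
              identifier: save_cache
              name: save cache
              type: SaveCacheGCS
              spec:
                connectorRef: gcp-connector      # placeholder connector
                bucket: cache-bucket             # placeholder bucket
                key: gradle-{{ checksum "build.gradle" }}
                sourcePaths:
                  - /root/.gradle
                archiveFormat: Tar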
Note: Keep in mind that sharing paths may have security implications, so use this feature with caution.