This article describes the process for preparing a Kubernetes environment suitable for running ZeroNorth's integration Docker images, referred to as the Integration Container and the Integration Orchestrator. These two Docker images respectively provide:
- An automated SSDLC process via CI/CD-to-ZeroNorth integration
- The ability to perform ZeroNorth-orchestrated scans of private (on-prem) resources
Use the link at the bottom of this article for the ZIP archive with the necessary ZeroNorth files.
Prerequisites

- Provision a VM with:
  - helm version 2.15.2 or later
- A working Kubernetes cluster
- Optionally, a dedicated Kubernetes namespace
- A Kubernetes user, for doing the deployment, with sufficient privileges to create roles and service accounts
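The tooling prerequisites above can be sanity-checked from the VM's shell. These commands only inspect the local client binaries; they don't touch the cluster:

```shell
# Check the locally installed client versions against the prerequisites.
helm version --client     # should report v2.15.2 or later (Helm 2 syntax)
kubectl version --client  # kubectl must also be installed and on the PATH
```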
The VM size will depend on whether you intend to run just the Integration Container, just the Integration Orchestrator, or both. Refer to the respective KB articles for sizing information, which should be used to adjust the sizing required for the base Kubernetes environment.
For the Integration Container, the file system can be configured in one of two ways:

- Using Persistent Volume Claims (PVCs)
- Using an AWS S3 bucket
Using Persistent Volume Claims

- `code` is the PVC containing the code/build to be scanned (required if doing a code/build scan).
- `results` is the PVC where the I-C will write its results. Required.
- `certificates` is the PVC where any custom CA certs are stored (required if the I-C needs to make connections to resources with SSL certificates signed with custom CAs).
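As a sketch only, a claim backing the `results` volume could look like the following. The claim name and size here are illustrative assumptions; the actual names and sizes used by the deployment come from the ZeroNorth files in the ZIP archive:

```shell
# Illustrative PVC for the I-C "results" volume; name and size are examples only.
cat > results-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zeronorth-results        # hypothetical name, for illustration
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi               # size to suit your scan volume
EOF

# Review, then apply with: kubectl apply -f results-pvc.yaml
```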
Using an S3 Bucket
Add the following environment variables to your `env.local` (see the Configuration section below):

- `S3_PATH` - Set to the desired S3 path.

Refer to the I-C Readme in Docker Hub for more information on these variables.
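For illustration, such an entry might look like this in `env.local`; the bucket and path below are made-up examples, not values from the ZeroNorth documentation:

```shell
# env.local (excerpt) -- S3 mode instead of PVCs.
# The bucket/path value is an illustrative example; use your own.
S3_PATH=s3://example-bucket/zeronorth/scans
```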
The Integration Orchestrator requires a single PVC:

- `shared` is the PVC containing a location where code can be checked out from GitHub or Bitbucket. This needs to be a ReadWriteMany claim, because the location needs to be visible to both the Integration Orchestrator and the spawned code scanners (also called "runners").
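A ReadWriteMany claim for the `shared` volume could be sketched as follows. The name, size, and storage class are illustrative assumptions; note that ReadWriteMany requires a storage backend that supports it (typically file-based storage such as NFS, rather than a default block-storage class):

```shell
# Illustrative ReadWriteMany PVC for the I-O "shared" volume.
# Name, size, and storageClassName are examples only.
cat > shared-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zeronorth-shared         # hypothetical name
spec:
  accessModes:
    - ReadWriteMany              # required: I-O and runners both mount it
  storageClassName: nfs-client   # example: must be a class supporting RWX
  resources:
    requests:
      storage: 10Gi
EOF

# Review, then apply with: kubectl apply -f shared-pvc.yaml
```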
You must have a kubeconfig that points at your cluster, with a user that has enough permissions to create service accounts and roles. Test your connection to Kubernetes with `kubectl get nodes`.
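The permission requirement can also be checked up front with `kubectl auth can-i`, which asks the API server whether the current user may create the object kinds this procedure needs:

```shell
# Confirm which context/user you are operating as.
kubectl config current-context

# Each command prints "yes" or "no" for the current user.
kubectl auth can-i create serviceaccounts
kubectl auth can-i create roles
```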
Start by making a copy of the top-level `env.template` file and naming it `env.local`. This will contain the variables that will need to be set for the I-O and I-C to function.
You should only use the ZeroNorth production environment unless told otherwise.

```
# Your JWT can be generated by following the instructions in this article:
# https://support.zeronorth.io/hc/en-us/articles/115003679033
CYBRIC_ENVIRONMENT=production
CYBRIC_JWT=
CYBRIC_VERIFY_SSL=1

# Add your Dockerhub credentials. Your Dockerhub user must have access to pull
# the private ZeroNorth images. If you're unable to pull, reach out to ZeroNorth
# support: firstname.lastname@example.org
DOCKER_HUB_USERNAME=
DOCKER_HUB_PASSWORD=

QUIET_MODE=1

# Proxy configurations
HTTP_PROXY=
HTTPS_PROXY=
FTP_PROXY=
NO_PROXY=
```
The `env.local` file should also live at the top level, since it will be shared between the Integration Orchestrator deployment and the Integration Container deployment. Ensure that your `env.local` file is properly configured with correct values before continuing.
Using a Namespace
To use a namespace other than the default, add the `ZN_NAMESPACE` environment variable to your `env.local`, setting its value to the name of the desired namespace. The specified namespace must already exist. If not specified, the default namespace is used.
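As an example, with `zeronorth` standing in as a hypothetical namespace name:

```shell
# env.local (excerpt) -- "zeronorth" is an example namespace name.
# The namespace must already exist; setting ZN_NAMESPACE does not create it
# (e.g. create it first with: kubectl create namespace zeronorth).
ZN_NAMESPACE=zeronorth
```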
Let’s Do It
Run `make init` to deploy Helm's Tiller and create the Kubernetes secrets:

- `zeronorth-secrets` contains the values stored in `env.local`. This includes your `CYBRIC_JWT`, so make sure these values are correct. If you need to correct them, you can re-run `make init`.
- `zeronorth-dockerhub` contains your DockerHub credentials. These are the credentials used to pull ZeroNorth private images, so they need to be correct, and your DockerHub account must have access to these images.
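After running `make init`, the secrets can be verified from the command line. `kubectl describe secret` lists the keys (but not the values) so you can confirm everything from `env.local` was picked up:

```shell
# Confirm both secrets were created (add -n <namespace> if you set ZN_NAMESPACE).
kubectl get secret zeronorth-secrets zeronorth-dockerhub

# List the keys stored in zeronorth-secrets without printing their values.
kubectl describe secret zeronorth-secrets
```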
Contact email@example.com with any issues or questions.
The procedure will create 4 Kubernetes ServiceAccounts (SAs):

- `tiller` will be the SA that Helm's Tiller runs as.
- `zeronorth-ic` will be the SA that launched Integration Containers run as.
- `zeronorth-io` will be the SA that the Integration Orchestrator runs as.
- `zeronorth-runner` will be the SA that any runners spawned from the Integration Orchestrator will run as.

`helm_setup/accounts.yaml` contains the descriptors used to set up these service accounts and roles. If changes are required to the permissions assigned to these service accounts, they can be made here. Each change will require a run of `make init` to push the changes to Kubernetes.
The `make init` step only needs to be done once. It creates objects used by both the I-C and I-O deployments.
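Once `make init` has run, the four service accounts listed above can be confirmed with a quick filter:

```shell
# List the service accounts created by make init
# (add -n <namespace> if you configured ZN_NAMESPACE).
kubectl get serviceaccounts | grep -E 'tiller|zeronorth-(ic|io|runner)'
```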
Use the link below to download the ZIP archive you will need:
20 KB Download