Overview
This article describes the process for preparing a Kubernetes environment suitable for running ZeroNorth's integration Docker images, referred to as the Integration Container and the Integration Orchestrator. These two Docker images respectively provide:
- An automated SSDLC process via CI/CD-to-ZeroNorth integration
- The ability to perform ZeroNorth-orchestrated scans of private (on-prem) resources
Prerequisites
- Use the link at the bottom of this article for the ZIP archive with the necessary ZeroNorth files.
- Provision a VM with:
  - `make`
  - `sed`
  - `helm` version 2.15.2 or later
  - the `kubectl` command
- A working Kubernetes cluster
- Optionally, a dedicated Kubernetes namespace
- The Kubernetes user doing the deployment should have sufficient privileges to create roles and service accounts.
The VM size will depend on whether you intend to run just the Integration Container, just the Integration Orchestrator, or both. Refer to the respective KB articles for sizing information and adjust the sizing of the base Kubernetes environment accordingly.
Integration Container
For the Integration Container, the file system can be configured in one of two ways:
- Using Persistent Volume Claims (PVCs)
- Using an AWS S3 bucket
Using PVCs:
- `code` is the PVC containing the code/build to be scanned (required if doing a code/build scan).
- `results` is the PVC where the I-C will write its results (required).
- `certificates` is the PVC where any custom CA certs are stored (required if the I-C needs to make connections to resources with SSL certificates signed by custom CAs).
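For reference, here is a minimal sketch of what the `results` claim might look like. The size is a placeholder, and the command assumes the default namespace (add `-n <your-namespace>` otherwise):

```
# Illustrative only: a PVC named "results" with a placeholder size;
# adjust the storage request to match your expected scan output.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: results
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
```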
Using an S3 Bucket
Add the following environment variables to your `env.local` (see the Configuration section below):
- `S3_PATH` - set to the desired S3 path.
- Optionally, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_REGION`.
Refer to the I-C Readme in Docker Hub for more information on these variables.
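As a sketch, the S3-related entries in `env.local` might look like the following; every value shown is a placeholder:

```
# Placeholder values; substitute your own bucket path, credentials, and region.
S3_PATH=s3://my-zeronorth-bucket/scans
AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxx
AWS_REGION=us-east-1
```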
Integration Orchestrator
The Integration Orchestrator requires a single PVC:
- `shared` is the PVC containing a location where code can be checked out from GitHub or Bitbucket. This needs to be a ReadWriteMany claim, as the location needs to be visible to both the Integration Orchestrator and the spawned code scanners (also called "runners").
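A minimal sketch of such a claim, assuming your cluster offers an RWX-capable storage class (the class name and size below are placeholders):

```
# Illustrative only: the "shared" PVC must be ReadWriteMany so that the
# Integration Orchestrator and its runners can all mount it.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client   # placeholder; use any RWX-capable class
  resources:
    requests:
      storage: 10Gi
EOF
```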
You must have a kube config that points at your cluster, with a user that has sufficient permissions to create service accounts and roles.
Test your connection to Kubernetes with `kubectl get nodes`.
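A healthy connection returns a node listing along these lines (the names and versions shown are purely illustrative):

```
$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
node-1   Ready    master   30d   v1.16.3
node-2   Ready    <none>   30d   v1.16.3
```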
Configuration
Start by making a copy of the top-level `env.template` file and naming it `env.local`. This file will contain the variables that need to be set for the I-O and I-C to function.
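For example, from the top-level directory of the unpacked archive:

```
cp env.template env.local
```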
Sample `env.template`:
You should only use the ZeroNorth production environment unless told otherwise.

```
# Your JWT can be generated by following the instructions in this article:
# https://support.zeronorth.io/hc/en-us/articles/115003679033
CYBRIC_ENVIRONMENT=production
CYBRIC_JWT=
CYBRIC_VERIFY_SSL=1

# Add your Dockerhub credentials. Your Dockerhub user must have access to pull
# the private ZeroNorth images. If you're unable to pull, reach out to ZeroNorth
# support: support@zeronorth.io
DOCKER_HUB_USERNAME=
DOCKER_HUB_PASSWORD=

QUIET_MODE=1

# Proxy configurations
HTTP_PROXY=
HTTPS_PROXY=
FTP_PROXY=
NO_PROXY=
```
The `env.local` file should also live at the top level, since it will be shared between the Integration Orchestrator deployment and the Integration Container deployment.
Ensure that your `env.local` file is properly configured with correct values before continuing.
Using a Namespace
To use a namespace other than the default, add the `ZN_NAMESPACE` environment variable to your `env.local`, setting its value to the name of the desired namespace. The specified namespace must already exist; if not specified, the default namespace is used.
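For example, assuming a namespace named `zeronorth` (the name is illustrative):

```
# Create the namespace ahead of time, since it must already exist:
kubectl create namespace zeronorth

# Then add this line to env.local:
ZN_NAMESPACE=zeronorth
```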
Let’s Do It
Run `make init` to deploy Helm's Tiller and create the `zeronorth-dockerhub` and `zeronorth-secrets` Kubernetes secrets in `$ZN_NAMESPACE`:
- `zeronorth-secrets` contains the values stored in `env.local`. This includes your `CYBRIC_JWT`, so make sure these values are correct. If you need to correct them, you can re-run `make init`.
- `zeronorth-dockerhub` contains your DockerHub credentials. These are the credentials used to pull ZeroNorth private images, so they need to be correct, and your DockerHub account must have access to these images.
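To confirm that both secrets exist (the output below is illustrative; types and data counts may differ):

```
$ kubectl get secrets -n "$ZN_NAMESPACE"
NAME                  TYPE                             DATA   AGE
zeronorth-dockerhub   kubernetes.io/dockerconfigjson   1      1m
zeronorth-secrets     Opaque                           14     1m
```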
Contact support@zeronorth.io with any issues or questions.
The procedure will create four Kubernetes ServiceAccounts (SAs):
- `tiller` will be the SA that Helm's Tiller runs as.
- `zeronorth-ic` will be the SA that launched Integration Containers run as.
- `zeronorth-io` will be the SA that the Integration Orchestrator runs as.
- `zeronorth-runner` will be the SA that any runners spawned by the Integration Orchestrator run as.
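The service accounts can be verified with `kubectl`; for example:

```
# The four SAs listed above should appear in the deployment namespace
# (tiller's namespace may differ depending on your Helm setup):
kubectl get serviceaccounts -n "$ZN_NAMESPACE"
```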
The file `helm_setup/accounts.yaml` contains the descriptors used to set up these service accounts and roles. If changes are required to the permissions assigned to these service accounts, they can be made here. Each change requires a run of `make init` to push the changes to Kubernetes.
NOTE: The `make init` step only needs to be done once. It creates objects used by both the I-C and I-O deployments.
Use the link below to download the ZIP archive you will need:
(Attachment: ZIP archive, 20 KB)