Howdy, everyone!
Introduction
In this article, I will talk a little about how Harness NextGen behaves during pipeline execution, because at first glance the statuses that the Pods reach can lead to a misinterpretation of the real status of the execution.
First, Harness changed the nomenclature in NextGen: the term Workflow is no longer used, so in this context, whenever I refer to a Pipeline, I am referring to the complete execution.
To explain the execution, I will use a CI Pipeline that builds and runs unit tests on a codebase, uploads the artifact to Docker Hub, and then runs integration tests.
This is what my CI Pipeline looks like in NextGen.
We can see that it has two major stages:
- Build Test and Push
- Run Integration Test
For this explanation, let’s focus only on the first stage: Build Test and Push
This is what my stage execution looks like
My execution has 3 steps, 2 of them in parallel:
- Run Unit Tests
- Build and push image to Docker Hub
- Build and push image to ECR
Let’s run our Pipeline
One thing we can already notice just by looking at the execution of our first stage, Build Test and Push, is that in addition to the 3 steps we configured, 2 more were added. These are internal processes that Harness creates based on the Pipeline configuration and that are essential for the execution, so we don't need to worry about configuring them.
Now let’s look at the Kubernetes cluster where Harness will run the steps.
Here we can identify new-quickstart-new-0, which is an existing Delegate in that namespace, and harnessci-buildtestandpus-*, which is the Pod that Harness started in the namespace configured for executing the steps of the stage. Since the UI shows 5 steps, we see 5 containers in this Pod.
Pod reaches NotReady status after a few minutes of running
We notice that the Running status of the Pod disappears and it assumes the NotReady status, and over time the Ready container count decreases.
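To see why the count drops, it helps to know how the READY column that `kubectl get pods` prints is derived: it is simply the number of containers whose `ready` flag is true over the total container count. Here is a minimal Python sketch of that rule (not Harness or Kubernetes code; the container names are illustrative placeholders):

```python
def ready_column(container_ready_flags):
    """Return the 'x/y' READY string kubectl shows for a Pod."""
    ready = sum(1 for is_ready in container_ready_flags.values() if is_ready)
    return f"{ready}/{len(container_ready_flags)}"

# Five containers, one per step in the stage, all still running:
containers = {
    "step-1": True,
    "step-2": True,
    "step-3": True,
    "step-4": True,
    "step-5": True,
}
print(ready_column(containers))  # 5/5

# After one step finishes, its container is Terminated and no longer ready:
containers["step-1"] = False
print(ready_column(containers))  # 4/5
```

As each step completes, its container's ready flag flips to false, and the READY column counts down accordingly.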
Why does this happen?
This happens because each container in a Pod exposes, in its status, a group of properties including State and Ready:

```
State:          Terminated
  Reason:       Completed
  Exit Code:    0
  Started:      Tue, 01 Feb 2022 08:57:07 -0300
  Finished:     Tue, 01 Feb 2022 08:57:14 -0300
Ready:          False
```
These properties assume certain values according to the result of the container's execution, and Kubernetes interprets them to set the status of the Pod.
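The relationship between these per-container fields can be modeled in a few lines of Python (a simplified sketch, not the actual kubelet logic; the class and field names are just for illustration):

```python
from dataclasses import dataclass

@dataclass
class ContainerStatus:
    # Simplified model of the fields shown above.
    state: str        # "Running" | "Terminated" | "Waiting"
    reason: str = ""  # e.g. "Completed" when a container exits successfully
    exit_code: int = 0

    @property
    def ready(self) -> bool:
        # A container only reports Ready while it is running (and passing
        # any readiness probe); once Terminated it is never Ready again.
        return self.state == "Running"

step = ContainerStatus(state="Terminated", reason="Completed", exit_code=0)
print(step.ready)  # False — matching the status fragment above
```

Note that a successful exit (code 0, reason Completed) still leaves the container not Ready: readiness only ever applies to running containers.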
Outcome
Is this NotReady status expected for our execution, or is an error happening?
This is expected. Each step is created as a container in our Pod, and as soon as a step finishes, its container reaches the Completed state with a success exit code of 0 (if a step fails, the error is displayed in the UI as usual) and its Ready property is set to False. That is why our Pod assumes the NotReady status: over time, as the steps (containers) are executed and completed, they reach a final state.
Example of step-1 completed while running our Pipeline
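The Pod-level rule described above can be sketched the same way: a Pod's Ready condition holds only while every one of its containers is ready, so a single completed step is enough to flip the whole Pod to NotReady. A minimal, hypothetical sketch:

```python
def pod_ready(container_ready_flags):
    """The Pod Ready condition is True only if *all* containers are ready."""
    return all(container_ready_flags)

# All five step containers still running:
print(pod_ready([True, True, True, True, True]))   # True  -> Pod shows Running
# One step completed (Terminated, ready=False):
print(pod_ready([True, True, True, True, False]))  # False -> Pod shows NotReady
```

So NotReady here is just the aggregate of individual steps finishing, not a failure signal.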
Now we can better understand the reason for this status in our execution Pods.