How to output Kubernetes Pod Logs during/after the deployment

This article is intended for NextGen.


One of the biggest challenges when automating a code delivery process is giving the user visibility. With it, the user can understand what caused a deployment to fail without needing someone (usually a DevOps engineer) to step in and troubleshoot.
Fortunately, Harness natively offers many observability tools, covering everything from manifest generation to application rollout. Sometimes, however, you need the logs from inside the application’s container to find out what caused a crash. Because this is not available natively, this article covers how to achieve it.


  1. Add a Shell Script step after/parallel to the Rolling Deployment or rollback section.

  2. Include the code below in the Shell Script step and customize it as needed:


```shell
echo "Pods:"
kubectl get pods -n <+infra.namespace>

echo "Deployments:"
kubectl get deployments -n <+infra.namespace>

echo "Logs:"
# Collect only the pod names (not the full pod objects) matching the selector
pods=$(kubectl get pods -n <+infra.namespace> --selector=<your-selector> --output=jsonpath='{.items[*].metadata.name}')

for pod in $pods; do
    echo "Logs for pod $pod"
    kubectl -n <+infra.namespace> logs "$pod" --all-containers=true --since=5m
    # Also fetch logs from the previous container instances, useful after a crash/restart;
    # ignore the error if a pod has no previous instance
    kubectl -n <+infra.namespace> logs "$pod" --all-containers=true --since=5m --previous || true
done
```

Replace --selector=<your-selector> with a label selector that matches your application’s pods, then run your pipeline and check the results.
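For illustration, the jsonpath expression above returns the matching pod names as a single space-separated string, which the for loop then iterates over one name at a time. A minimal sketch of that behavior, using hypothetical pod names in place of real kubectl output:

```shell
# Simulated output of:
#   kubectl get pods --selector=app=my-app -o jsonpath='{.items[*].metadata.name}'
# (pod names here are hypothetical)
pods="my-app-7d4b9c-abcde my-app-7d4b9c-fghij"

# Unquoted $pods lets the shell split on spaces, yielding one pod name per iteration
for pod in $pods; do
    echo "Logs for pod $pod"
done
```

This is why the script collects `.metadata.name` rather than the full pod objects: the loop needs plain names it can pass to `kubectl logs`.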


If you have any suggestions for improving this article, or helpful, specific examples of permission-related issues that may be of use to others, please leave a comment; this document is intended to evolve over time.
If this article cannot resolve your issue, don’t hesitate to contact us here: – or through the Zendesk portal in Harness SaaS.
