Kubernetes Delegate CrashLoop


I'm new to Harness and am deploying a delegate to my Kubernetes cluster, but the pods created from the provided YAML manifest go into a continuous CrashLoopBackOff. The image pulls successfully, and my nodes appear to have enough resources.

Running `kubectl describe` on the delegate pods shows only a warning: "Back-off restarting failed container". The logs for the delegate pods show "Failed to start Watcher." and report that the pod cannot resolve "host.docker.internal". Looking at the YAML manifest, I see several entries containing "host.docker.internal:7143", for example in the `MANAGER_HOST_AND_PORT` environment variable.

Is that value supposed to point back to my Harness Manager? It looks like Harness auto-populated these values with "host.docker.internal:7143" — is that correct?
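For reference, here is roughly what the relevant part of the delegate manifest looks like in my copy (paraphrased from the generated YAML; only the entries I'm asking about are shown, and the surrounding fields may differ):

```yaml
# Excerpt from the delegate StatefulSet/Deployment spec (illustrative, not the full manifest)
env:
  - name: MANAGER_HOST_AND_PORT
    value: "https://host.docker.internal:7143"   # auto-populated by Harness?
```

My assumption is that this should resolve to the Harness Manager endpoint, but "host.docker.internal" only resolves from inside Docker Desktop, not from pods in my cluster.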