Helm Deploy Errors - GitHub Enterprise

I’m on the struggle bus here: I get the same result trying to deploy the Helm chart from both the helm/stable repository and the drone repository (the newer chart). Has anyone else run into this, or have an idea where to look? I have entered the server info for my GitHub Enterprise server and set all the other git-level variables, and this is always the result. I set the persistent volume option to false to rule out any DB issues.
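For reference, this is roughly the shape of the install I am attempting (a sketch only: the hostname, OAuth client values, and secret are placeholders, and the exact chart value keys may differ between chart versions):

```sh
# Rough shape of the install described above; placeholders throughout,
# and the value keys (env.*, persistentVolume.enabled) may vary by chart version.
helm install drone drone/drone \
  --namespace drone \
  --set env.DRONE_SERVER_HOST=drone.example.com \
  --set env.DRONE_SERVER_PROTO=https \
  --set env.DRONE_GITHUB_SERVER=https://github.example.com \
  --set env.DRONE_GITHUB_CLIENT_ID=<oauth-client-id> \
  --set env.DRONE_GITHUB_CLIENT_SECRET=<oauth-client-secret> \
  --set env.DRONE_RPC_SECRET=<shared-rpc-secret> \
  --set persistentVolume.enabled=false
```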

```
{"level":"info","msg":"main: internal scheduler enabled","time":"2020-03-09T18:05:48Z"}
{"acme":false,"host":"myHostHere","level":"info","msg":"starting the http server","port":":80","proto":"https","time":"2020-03-09T18:05:48Z","url":"myHostHere"}
{"interval":"30m0s","level":"info","msg":"starting the cron scheduler","time":"2020-03-09T18:05:48Z"}
interrupt received, terminating process
{"error":"context canceled","level":"fatal","msg":"program terminated","time":"2020-03-09T18:06:18Z"}
```

This message occurs when something external to Drone sends a SIGTERM or SIGINT to Drone (for example, if you run docker stop). This tells me Kubernetes (or someone on the host OS) is stopping your Drone instance. Since this is happening outside of Drone and is not within Drone’s control, I cannot say what would be causing this.
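If it helps narrow this down, a few generic kubectl checks will usually show who is terminating the pod (the namespace and pod names below are placeholders):

```sh
# Generic diagnostics for a pod that keeps being killed.
kubectl -n drone get pods                                          # restarts? CrashLoopBackOff?
kubectl -n drone describe pod <drone-pod>                          # Events: OOMKilled, failed probes, evictions
kubectl -n drone get events --sort-by=.metadata.creationTimestamp  # recent events in order
kubectl -n drone logs <drone-pod> --previous                       # logs from the previous (killed) container
```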

@bradrydzewski If I add DRONE_CRON_DISABLED: true to the manifest, the cron scheduler log line goes away and I receive the following message:

```
interrupt received, terminating process
{"error":"listen tcp :80: bind: permission denied","level":"fatal","msg":"program terminated","time":"2020-03-09T18:31:45Z"}
```

Does that give any other ideas?

Nope. In either case, "interrupt received, terminating process" can only appear when an external source sends a SIGINT or SIGTERM.

It looks like something is killing Drone as soon as it starts. The reason you see different logs when enabling or disabling cron is that doing so changes the initialization steps. In both instances Drone is being killed, just at different points in the initialization process, which yields slightly different logs. Either way, as Brad suggests, it looks like Drone is being killed.

Is something killing Drone when it tries to access :80? Perhaps something related to security? I see a permission denied error when it tries to bind to the port.

@ashwilliams1 I’m running an unrestricted pod security policy in my namespace and I don’t have resource limits (I’m nowhere near the resources of the node I am on)…

The error, a permission denied when binding to :80, comes from the Go standard library, which would lead me to believe there is a genuine permission problem in the environment. Perhaps the port is already in use?
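On Linux, binding a port below 1024 requires root or the NET_BIND_SERVICE capability, so a security policy that forces a non-root user would produce exactly this error. One way to confirm whether the environment, rather than Drone, is blocking the bind is to attempt the same bind from a throwaway pod. A sketch using busybox, assuming the same policy applies to pods created this way:

```sh
# Try the same bind outside of Drone. If the policy forces a non-root
# user, binding port 80 fails with the same "bind: permission denied".
kubectl -n drone run bindtest --image=busybox --restart=Never -- nc -l -p 80
kubectl -n drone logs bindtest       # a bind error here implicates the environment, not Drone
kubectl -n drone delete pod bindtest
```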

Also, please note that the Drone chart in Helm stable was officially deprecated.

You should instead be using drone/charts.
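For anyone following along, the maintained charts live in their own repository (URL per the drone/charts README):

```sh
# Add the maintained Drone chart repository and list its charts.
helm repo add drone https://charts.drone.io
helm repo update
helm search repo drone/
```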

Yes, this is the one I am referencing while we converse here. I had tried the deprecated chart as well, as it is what ships with Rancher.

I googled this error and, interestingly, came upon a thread where many of the commenters were also running Rancher. I wonder if there is any correlation: Permission denied on port 80 & 443 · Issue #17 · rawmind0/alpine-traefik · GitHub

I tried changing the port to 8080 in the service and in the ingress, and unfortunately hit the same issue… still looking for other things to try.
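(A note in case it helps someone else: changing the Service and Ingress ports only changes how traffic reaches the pod; the container itself still binds :80 unless Drone's listen address is changed. If I read the docs correctly, DRONE_SERVER_PORT controls that bind, so something like the sketch below should move it above 1024. The service.* value names are assumptions and may not match the chart.)

```sh
# Sketch: move Drone's actual listen port above 1024 so a non-root
# container can bind it. The service.* keys are assumed, not verified.
helm upgrade drone drone/drone \
  --namespace drone \
  --reuse-values \
  --set 'env.DRONE_SERVER_PORT=:8080' \
  --set service.port=80 \
  --set service.targetPort=8080
```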

Have you considered engaging Rancher support? The error message (permission denied when binding to the port) does not seem like a Drone problem to me, but a problem with the runtime environment and/or its security settings.

The bind permission denied issue was coming from the Rancher project (a Rancher construct that groups namespaces) I am using, which did not have a default pod security policy set. In Rancher you can set a default PSP at the cluster level, and ours was set to restricted, which blocks binding port 80. Setting an unrestricted pod security policy on the project (and thus inherited by my namespace) resolved the issue. Thank you @ashwilliams1 and @bradrydzewski for the help. Hopefully someone else running this in Rancher will find this useful at some point.
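For anyone curious what such a policy looks like, here is a minimal sketch of an unrestricted-style PodSecurityPolicy (illustrative only: Rancher ships its own built-in unrestricted policy and binds it through the project settings):

```sh
# Illustrative unrestricted-style PSP; Rancher's built-in policy differs
# in detail. RunAsAny lets the container run as root and bind :80.
cat <<'EOF' | kubectl apply -f -
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: drone-unrestricted
spec:
  privileged: false
  allowPrivilegeEscalation: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - '*'
EOF
```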

Awesome, glad to hear you got it figured out :tada: