In this article, we will explain how to install Drone on Kubernetes, using MetalLB as the load balancer.
MetalLB already provides the manifests we need in their installation guide:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
In addition, you need to deploy a ConfigMap for MetalLB. In mine, I just assign a range of IP addresses that MetalLB can hand out to services:
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.31.0.1-172.31.255.254
EOF
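Before moving on, it's worth a quick sanity check that MetalLB came up correctly. A minimal check (namespace and ConfigMap names per the manifests above) looks like:

```shell
# The controller and one speaker per node should all be Running
kubectl get pods -n metallb-system

# Confirm the address pool configuration was stored
kubectl get configmap config -n metallb-system -o yaml
```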
Installing the ingress
The next step is to deploy ingress-nginx, which will route incoming HTTP and HTTPS traffic to services inside the cluster. In this case, I also start by just deploying the manifests for ingress-nginx found in the installation guide:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
The default setup uses a NodePort service, meaning that HTTP and HTTPS get assigned to arbitrary ports on each node. However, you may want to serve on ports 80 and 443 on a static IP assigned by the load balancer we set up in the last step. To do this, you need to change the service type to LoadBalancer and optionally specify a static IP from the subnet that was set up for MetalLB:
# run this command to edit the deployed service manifest:
kubectl edit -n ingress-nginx svc/ingress-nginx-controller
...
spec:
-  type: NodePort
+  type: LoadBalancer
+  loadBalancerIP: 172.31.0.1 # static IP for ingress-nginx
...
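If you would rather make this change non-interactively (for example in a provisioning script), the same edit can be sketched with kubectl patch; the IP below is just the example address from the MetalLB pool above:

```shell
# Switch the ingress-nginx service to LoadBalancer and pin a static IP
kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  -p '{"spec": {"type": "LoadBalancer", "loadBalancerIP": "172.31.0.1"}}'
```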
After this change has been applied, you can test the ingress by connecting to it.
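For example, using the example IP above, a plain curl against the load balancer address should be answered by the controller's default backend (an HTTP 404 is expected for an unknown host, and it proves traffic is reaching nginx):

```shell
# Check that MetalLB assigned the external IP
kubectl get svc -n ingress-nginx ingress-nginx-controller

# An HTTP 404 from nginx here means the ingress is reachable
curl -i http://172.31.0.1/
```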
Deploying a Drone Helm Chart with ingress
Now, deploy the Drone Helm chart.
First, I took the chart's default values and changed them to enable the ingress. Adjust them to match your configuration (repository provider authentication and other settings), and save the file under a custom name; in my case, I used the name drone.vinicima.com.yaml:
image:
  repository: drone/drone
  tag: 1.9.0
  pullPolicy: IfNotPresent
## If you need to pull images from a private Docker image repository, pass in the name
## of a Kubernetes Secret that contains the needed secret. For more details, see:
## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
imagePullSecrets: []
# - name: "image-pull-secret"
nameOverride: ""
fullnameOverride: ""
# Drone server does not interact with the Kubernetes API server
automountServiceAccountToken: false
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
## Add extra annotations to the Drone server pods here. See below example for
## Prometheus scrape annotations.
##
podAnnotations: {}
# prometheus.io/scrape: "true"
# prometheus.io/port: "80"
updateStrategy: {}
service:
  type: ClusterIP
  port: 80
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - host: YOUR.FQDN.EXAMPLE
      paths:
        - "/"
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local
resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
## If you'd like to force the Drone server to run on a specific node or set of nodes,
## set a selector here.
##
nodeSelector: {}
tolerations: []
affinity: {}
## If you'd like to make additional files or volumes available to Drone, declare additional
## Volumes here per the Pod spec's "volumes" section.
## Ref: https://kubernetes.io/docs/concepts/storage/volumes/
##
extraVolumes: []
## If you have declared extra volumes, mount them here, per the Pod Container's
## "volumeMounts" section.
##
extraVolumeMounts: []
persistentVolume:
  ## If you are using SQLite as your DB for Drone, it is recommended to enable persistence. If
  ## enabled, the Chart will create a PersistentVolumeClaim to store its state in. If you are
  ## using a DB other than SQLite, set this to false to avoid allocating unused storage.
  ## If set to false, Drone will use an emptyDir instead, which is ephemeral.
  ##
  enabled: true
  ## Drone server data Persistent Volume access modes
  ## Must match those of existing PV or dynamic provisioner
  ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
  ##
  accessModes:
    - ReadWriteOnce
  ## Drone server data Persistent Volume annotations
  ##
  annotations: {}
  ## If you'd like to bring your own PVC for persisting Drone state, pass the name of the
  ## created + ready PVC here. If set, this Chart will not create the default PVC.
  ## Requires server.persistentVolume.enabled: true
  ##
  existingClaim: ""
  ## Drone server data Persistent Volume mount root path
  ##
  mountPath: /data
  ## Drone server data Persistent Volume size
  ##
  size: 8Gi
  ## Drone server data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  storageClass: ""
  ## Drone server data Persistent Volume Binding Mode
  ## If defined, volumeMode: <volumeMode>
  ## If empty (the default) or set to null, no volumeBindingMode spec is
  ## set, choosing the default mode.
  ##
  volumeMode: ""
  ## Subdirectory of Drone server data Persistent Volume to mount
  ## Useful if the volume's root directory is not empty
  ##
  subPath: ""
## If persistentVolume.enabled is set to false, Drone will mount an emptyDir instead of
## a PVC for any state that it needs to persist.
##
emptyDir:
  ## Total space to request for the emptyDir. An empty value here means no limit.
  sizeLimit: ""
## If you'd like to provide your own Kubernetes Secret object instead of passing your values
## in un-encrypted, pass in the name of a created + populated Secret in the same Namespace
## as the Drone server. All secrets within this configmap will be mounted as environment
## variables, with each key/value mapping to a corresponding environment variable on the
## Drone server.
##
extraSecretNamesForEnvFrom: []
# - my-drone-secrets
## The keys within the "env" map are mounted as environment variables on the Drone server pod.
## See the full reference of Drone server environment variables here:
## Ref: https://docs.drone.io/installation/reference/
##
env:
  ## REQUIRED: Set the user-visible Drone hostname, sans protocol.
  ## Ref: https://docs.drone.io/installation/reference/drone-server-host/
  ##
  DRONE_SERVER_HOST: "drone.vinicima.com"
  ## The protocol to pair with the value in DRONE_SERVER_HOST (http or https).
  ## Ref: https://docs.drone.io/installation/reference/drone-server-proto/
  ##
  DRONE_SERVER_PROTO: http
  ## REQUIRED: Set the secret token that the Drone server and its Runners will use
  ## to authenticate. This is commented out in order to leave you the ability to set the
  ## key via a separately provisioned secret (see existingSecretName above).
  ## Ref: https://docs.drone.io/installation/reference/drone-rpc-secret/
  ##
  # DRONE_RPC_SECRET:
  ## If you'd like to use a DB other than SQLite (the default), set a driver + DSN here.
  ## Ref: https://docs.drone.io/installation/storage/database/
  ##
  # DRONE_DATABASE_DRIVER:
  # DRONE_DATABASE_DATASOURCE:
  ## If you are going to store build secrets in the Drone database, it is suggested that
  ## you set a database encryption secret. This must be set before any secrets are stored
  ## in the database.
  ## Ref: https://docs.drone.io/installation/storage/encryption/
  ##
  # DRONE_DATABASE_SECRET:
  ## If you are using self-hosted GitHub or GitLab, you'll need to set this to true.
  ## Ref: https://docs.drone.io/installation/reference/drone-git-always-auth/
  ##
  # DRONE_GIT_ALWAYS_AUTH: false
  ## ===================================================================================
  ## Provider Directives (select ONE)
  ## -----------------------------------------------------------------------------------
  ## Select one provider (and only one). Refer to the corresponding documentation link
  ## before filling the values in. Also note that you can use the 'secretMounts' value
  ## if you'd rather keep secrets in a Kubernetes Secret instead of a ConfigMap.
  ## ===================================================================================
  ## GitHub-specific variables. See the provider docs here:
  ## Ref: https://docs.drone.io/installation/providers/github/
  ##
  # DRONE_GITHUB_CLIENT_ID:
  # DRONE_GITHUB_CLIENT_SECRET:
  ## GitLab-specific variables. See the provider docs here:
  ## Ref: https://docs.drone.io/installation/providers/gitlab/
  ##
  # DRONE_GITLAB_CLIENT_ID:
  # DRONE_GITLAB_CLIENT_SECRET:
  # DRONE_GITLAB_SERVER:
  ## Bitbucket Cloud-specific variables. See the provider docs here:
  ## Ref: https://docs.drone.io/installation/providers/bitbucket-cloud/
  ##
  # DRONE_BITBUCKET_CLIENT_ID:
  # DRONE_BITBUCKET_CLIENT_SECRET:
  ## Bitbucket Server-specific variables. See the provider docs here:
  ## Ref: https://docs.drone.io/installation/providers/bitbucket-server/
  ##
  # DRONE_GIT_USERNAME:
  # DRONE_GIT_PASSWORD:
  # DRONE_STASH_CONSUMER_KEY:
  # DRONE_STASH_PRIVATE_KEY:
  # DRONE_STASH_SERVER:
  ## Gitea-specific variables. See the provider docs here:
  ## Ref: https://docs.drone.io/installation/providers/gitea/
  ##
  # DRONE_GITEA_CLIENT_ID:
  # DRONE_GITEA_CLIENT_SECRET:
  # DRONE_GITEA_SERVER:
  ## Gogs-specific variables. See the provider docs here:
  ## Ref: https://docs.drone.io/installation/providers/gogs/
  ##
  # DRONE_GOGS_SERVER:
In the above values.yaml, we assume that drone.vinicima.com is a DNS name mapped to the IP address of the ingress. (In my setup, it’s mapped to my lab network’s public IP, which port-forwards traffic to the MetalLB-issued IP for the ingress.)
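You can verify that mapping before installing the chart. A quick check, assuming dig (from dnsutils/bind-utils) is available; nslookup works just as well:

```shell
# Should print the address that ultimately routes to the ingress
dig +short drone.vinicima.com
```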
helm repo add drone https://charts.drone.io
helm repo update
helm install drone drone/drone -f drone.vinicima.com.yaml
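Once the release is installed, you can check that the server pod and the ingress were created. The resource names below assume the release name "drone" used above; they may differ with a different release name:

```shell
# The Drone server pod should reach Running
kubectl get pods -l app.kubernetes.io/name=drone

# The ingress should list the host from the values file
kubectl get ingress drone-server
```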
Adding SSL support with cert-manager
Finally, let’s deploy cert-manager so we can issue TLS certificates and integrate them with our ingress setup.
As with the other steps, deploying cert-manager is as simple as just applying the manifest file found in the installation guide:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.2.0/cert-manager.yaml
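A quick check that the installation succeeded, using the namespace the manifest creates:

```shell
# The cert-manager, cainjector, and webhook pods should all be Running
kubectl get pods -n cert-manager
```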
With cert-manager installed, you can set up a cluster-wide ACME issuer to issue certificates for your domains. While you can use an HTTP-01 challenge to validate ownership of a domain name, I’m using Cloudflare as my DNS provider, so I set up my ClusterIssuer with a DNS-01 solver that automatically sets the needed TXT records for my domain names as I issue certificates for them.
kubectl -n cert-manager create secret generic cloudflare-api-token --from-literal=api-token=<SOME-API-TOKEN>
kubectl -n cert-manager apply -f - <<EOF
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: public-ca
spec:
  acme:
    email: user@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: public-ca-account-key
    solvers:
      - dns01:
          cloudflare:
            email: user@example.com
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
EOF
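Before issuing certificates, it is worth confirming the issuer registered its ACME account successfully:

```shell
# READY should be True once the ACME account is registered
kubectl get clusterissuer public-ca

# If it stays False, the Events section usually explains why
kubectl describe clusterissuer public-ca
```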
Now adding a TLS certificate for our cluster becomes trivial:
kubectl edit ingress/drone-server
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: drone-server
  annotations:
    kubernetes.io/ingress.class: nginx
+   cert-manager.io/cluster-issuer: public-ca
spec:
  rules:
    - host: drone.vinicima.com
      http:
        paths:
          - path: /
            backend:
              serviceName: drone-server
              servicePort: 80
+ tls:
+   - hosts:
+       - drone.vinicima.com
+     secretName: drone-server-tls
Once the change is saved, you can see that a certificate is being requested:
kubectl get certificaterequests
NAME READY AGE
drone-server-tls-pptqw False 42s
After waiting a while, if all goes well, it should become ready, and a certificate will be issued and automatically used by the ingress.
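To confirm end to end, you can check for the issued Certificate and its Secret (the names below follow the secretName set on the ingress above) and then hit the site over HTTPS:

```shell
# The Certificate and its Secret should both exist once issuance completes
kubectl get certificate drone-server-tls
kubectl get secret drone-server-tls

# Finally, the site should serve a valid certificate
curl -I https://drone.vinicima.com
```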