When testing Ansible roles, my systemd services fail to start up. This is the error I get:
TASK [memcached : Packages Present] ********************************************
changed: [localhost] => (item=[u'memcached', u'libmemcached'])
TASK [memcached : Service Enabled] *********************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Could not find the requested service memcached: host"}
My .drone.yml
pipeline:
  build:
    image: geerlingguy/docker-centos7-ansible:latest
    privileged: true
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    commands:
      - echo 'sslverify=0' >> /etc/yum.conf
      - yum install -y redhat-lsb-core python-devel openldap-devel git gcc gcc-c++ python2-pip
      - pip install -U pip tox
      - tox
My docker-compose.yml
version: '2'
services:
  drone-server:
    image: drone/drone:0.8
    ports:
      - 8000:8000
      - 9000
    volumes:
      - /var/lib/drone:/var/lib/drone/
      - /etc/ssl/certs/ca-bundle.crt:/etc/ssl/certs/ca-certificates.crt
    restart: always
    environment:
      - DRONE_OPEN=true
      - DRONE_HOST=https://example.server
      - DRONE_ADMIN=drone
      - DRONE_VOLUME=/etc/ssl/certs/ca-bundle.crt:/etc/ssl/certs/ca-certificates.crt
      - DRONE_GOGS_GIT_USERNAME=drone
      - DRONE_GOGS_GIT_PASSWORD=XXXXXXXX
      - DRONE_GOGS=true
      - DRONE_GOGS_URL=https://example.gogs
      - DRONE_SECRET=${DRONE_SECRET}
  drone-agent:
    image: drone/agent:0.8
    command: agent
    restart: always
    depends_on:
      - drone-server
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=drone-server:9000
      - DRONE_SECRET=${DRONE_SECRET}
      - DOCKER_API_VERSION=1.24
I’ve tried to do a memcached install manually, starting a base centos:7 docker container from my Fedora workstation, and the service starts as expected when run with --privileged. The Drone containers are running on a RHEL 7 host. I have already marked the repository as trusted in the Drone interface. I also attempted to add /sys/fs/cgroup:/sys/fs/cgroup:ro as a volume to pretty much everything, and to run the build agent as privileged.
I have no idea how to determine where this error is happening.
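For reference, the manual check described above can be sketched roughly like this (a sketch, assuming Docker on the workstation; the container name memcached-test is arbitrary, and systemctl only works here because /usr/sbin/init runs as PID 1):

```shell
# Start a base centos:7 container with systemd as PID 1.
docker run -d --privileged --name memcached-test \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  centos:7 /usr/sbin/init

# Install memcached, start the unit, and check its state.
docker exec memcached-test yum install -y memcached
docker exec memcached-test systemctl start memcached
docker exec memcached-test systemctl is-active memcached

# Clean up.
docker rm -f memcached-test
```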
I’ve tried to do a memcached install manually
The recommended way to run services is inside a docker container in the services section of the yaml [1]. I would therefore expect your configuration to look something like this:
pipeline:
  build:
    image: golang
    commands:
      - go test -v
+services:
+  cache: # memcache will be available at tcp://cache:11211
+    image: memcached
I also attempted to add /sys/fs/cgroup:/sys/fs/cgroup:ro
I am not sure I understand why this is required; I am not aware of any circumstances in which mounting cgroups is necessary, and it should probably be avoided for security reasons.
[1] http://docs.drone.io/services/
I’m using this to test my ansible roles; it is not actually testing or using memcached, that was just the example playbook I was testing with. I start an image and confirm the ansible role works as expected; it’s part of my CI process.
I’m using this to test my ansible roles
Sorry, I am not familiar with ansible and what testing ansible roles would require.
I start an image and confirm the ansible role works as expected; it’s part of my CI process.
In this case, it sounds like you may want to interact with the host machine’s Docker daemon to check whether the started container is running?
pipeline:
  build:
    image: docker
    commands:
      - docker ps # list running docker containers on the host
    volumes:
+     - /var/run/docker.sock:/var/run/docker.sock
Ansible is just config management, so I’m attempting to test config management actions originally written for deploying to RHEL/CentOS boxes. The idea is that the docker image starts, installs dependencies, and then executes the ansible playbook against localhost; in other words, it runs against the running docker image itself. Since my roles often configure some repos, install a package, and then configure and start the service, the docker image must have systemd.
This is actually a well-established pattern, used by Ansible Galaxy and TravisCI. I’m just trying to do it in Drone. I have no real need to launch sibling containers from within the CI process.
Wanted to post the answer, just in case anyone has the same type of problem. It turns out systemd must really take control of init: you cannot pass commands through and expect it to work like a non-systemd docker container.
---
pipeline:
  system:
    image: cyberpunkspike/docker-centos7-ansible:latest
    labels:
      com.amtrustna.it.infr.serv.system: "true"
    privileged: true
    cap_add:
      - SYS_ADMIN
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    init: /usr/lib/systemd/systemd
    detach: true
  exec:
    image: docker
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    privileged: true
    environment:
      - TERM=xterm-256color
    commands:
      - CONTAINER_ID="$(docker ps -qf "label=com.amtrustna.it.infr.serv.system")"
      - test -n "$CONTAINER_ID" || { echo "Container Not Found"; exit 1 ;}
      - docker exec -t "$CONTAINER_ID" sh -c "cd $PWD && tox"
This is what I had to do to get this working as expected. Basically, I start the docker image and override init with the systemd binary. You cannot use both init: and commands: simultaneously, so I detach: from the systemd image. Then I use the labels: I set on the first image to locate the running container with docker ps. Thus I’m able to docker exec into the systemd image. You must also give the second image the /var/run/docker.sock mount so it can access docker.