So I am trying to mount a volume on a postgres image so that it will create multiple databases. Locally, through docker-compose, this works. In the pipeline, the databases are not getting created, so I assume the volume is not being mounted.
Help?
Thanks!
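For context, the local docker-compose setup that works looks roughly like this (a sketch; the image tag, env vars, and paths are assumptions based on the pipeline config shared later in this thread):

```yaml
version: '3'
services:
  postgres:
    image: mdillon/postgis:9.6-alpine
    environment:
      POSTGRES_MULTIPLE_DATABASES: 'foo,moo,boo'
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    volumes:
      # ./scripts/... resolves on the host machine, which is why this works locally
      - ./scripts/docker/create-multiple-postgres-dbs:/docker-entrypoint-initdb.d
```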
I am not sure I understand the problem as described. It might help to provide more information, such as your configuration.
Sure. Here’s a sample:
pipeline:
  postgres:
    image: mdillon/postgis:9.6-alpine
    detach: true
    environment:
      POSTGRES_MULTIPLE_DATABASES: 'foo,moo,boo'
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - '5432:5432'
    volumes:
      - /go/src/github.com/momo/fofo/scripts/docker/create-multiple-postgres-dbs:/docker-entrypoint-initdb.d
  another_service:
    ....
When this image starts, I expect the databases to be created.
For reference, the script was taken from here.
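For readers without the link, the init script referenced here typically looks something like the sketch below. This assumes the commonly shared create-multiple-postgres-dbs convention: the entrypoint runs every script in /docker-entrypoint-initdb.d, and the script reads a comma-separated POSTGRES_MULTIPLE_DATABASES variable.

```shell
#!/bin/sh
# Sketch of an init script dropped into /docker-entrypoint-initdb.d.
# Creates one database per comma-separated name in POSTGRES_MULTIPLE_DATABASES.
set -e

create_database() {
  db="$1"
  echo "Creating database '$db'"
  psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<EOSQL
CREATE DATABASE "$db";
GRANT ALL PRIVILEGES ON DATABASE "$db" TO "$POSTGRES_USER";
EOSQL
}

if [ -n "$POSTGRES_MULTIPLE_DATABASES" ]; then
  # split the comma-separated list into words and create each database
  for db in $(echo "$POSTGRES_MULTIPLE_DATABASES" | tr ',' ' '); do
    create_database "$db"
  done
fi
```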
Exposing ports is not supported, and it is not required. All pipeline containers are members of a shared bridge network, similar to docker-compose, and can be accessed by hostname (the step name), as demonstrated here: http://docs.drone.io/postgres-example/
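For instance, a later pipeline step can reach the detached service by its step name (a sketch; the step name and credentials follow the config above, and the client image is an assumption):

```yaml
  test:
    image: postgres:9.6-alpine
    commands:
      # "postgres" is the step name of the detached service, resolvable on the shared network
      - psql -h postgres -U postgres -c 'SELECT 1;'
```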
This will not work the way you expect, because /go/src/github.com/... is a path inside the container. When you mount a volume, the source path is on the host machine, and the path you are trying to mount does not exist on the host machine.
I see. So what should the path be? And do services support volumes?
So what should the path be?
I offer enterprise support if you would like help researching and implementing a solution. See https://drone.io/enterprise/ for more information.
And do services support volumes?
Yes
Cool. We plan to get Enterprise, but wanted to get this sorted out first.
Thanks for the support!
The best way to share data between two containers is the workspace, because all containers have access to the workspace (e.g. /go/src/github.com/momo/fofo). I would therefore recommend altering the postgres entrypoint and command to copy the scripts from the workspace to the correct location. Once copied, you can execute the original entrypoint and command. Something like this:
pipeline:
  postgres:
    image: mdillon/postgis:9.6-alpine
    detach: true
    entrypoint: [ /bin/sh ]
    command: [ -c, "mv /go/src/github.com/momo/fofo/scripts/docker/create-multiple-postgres-dbs /docker-entrypoint-initdb.d; docker-entrypoint.sh postgres" ]
    environment:
      POSTGRES_MULTIPLE_DATABASES: 'foo,moo,boo'
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
Future versions of drone (0.9) will support data volumes using the below syntax. This will be much cleaner, but in the meantime, I recommend the above workaround.
pipeline:
  setup:
    image: alpine
    volumes: [ pginit:/docker-entrypoint-initdb.d ]
    commands:
      - cp scripts/docker/create-multiple-postgres-dbs/* /docker-entrypoint-initdb.d
  postgres:
    image: postgres
    detach: true
    volumes: [ pginit:/docker-entrypoint-initdb.d ]
    environment:
      POSTGRES_MULTIPLE_DATABASES: 'foo,moo,boo'
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres

volumes:
  pginit:
    driver: local
For anyone also looking for a simpler solution to this problem, we found this works:
pipeline:
  postgres:
    detach: true
    image: postgres:10.6
    commands:
      - cp ./scripts/docker/create-multiple-postgres-dbs/* /docker-entrypoint-initdb.d
      - /docker-entrypoint.sh postgres