Reputation: 1161
I want to create a tarball that contains an empty db (i.e. a series of empty tables) and a shell script launching a Postgres container that would connect to the empty db. The end product is a .tar.gz containing a copy of the db, a start script and a stop script. All of this is meant to work on macOS.
To create the db I started a Postgres server locally on my laptop and created a db called 'postgres-15year'. Using the 'DBeaver' database manager and 'psql' at the CLI I can see that the db is correctly created and functional.
I then created the following scripts:
start.sh
#!/bin/bash
echo $(pwd)
docker run --rm --name payments-15years -e POSTGRES_PASSWORD=docker -d -p 5432:5432 -v "$(pwd)/postgres-15year/data":/var/lib/postgresql/data:delegated postgres:11.6
stop.sh
#!/bin/bash
docker stop payments-15years
echo "docker stop payments-15years"
I then put all this in a directory that I would like to tarball.
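The packaging step itself can be sketched like this (the directory name payments-15years-db is an assumption; adjust to your layout):

```shell
#!/bin/bash
set -e
# Stage everything into one directory, then tarball it.
# 'payments-15years-db' is an illustrative name.
mkdir -p payments-15years-db/postgres-15year/data
# start.sh / stop.sh are the scripts shown above; placeholders are
# created here only so the sketch runs standalone.
[ -f start.sh ] || printf '#!/bin/bash\n' > start.sh
[ -f stop.sh ]  || printf '#!/bin/bash\n' > stop.sh
cp start.sh stop.sh payments-15years-db/
chmod +x payments-15years-db/start.sh payments-15years-db/stop.sh
tar -czf payments-15years-db.tar.gz payments-15years-db
```

The recipient would then run: tar -xzf payments-15years-db.tar.gz && cd payments-15years-db && ./start.sh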
So the user would receive the tarball, unpack it, run the start script, and be able to connect to a db that has a predefined structure and schema.
My challenge is that after running the start script 'docker ps' returns:
CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS          PORTS                    NAMES
14d2cba6abe2   postgres:11.6   "docker-entrypoint.s…"   44 seconds ago   Up 42 seconds   0.0.0.0:5432->5432/tcp   payments-15years
But when I then run 'psql' inside the running container:
docker exec -it payments-15years psql -U postgres
the database ('postgres-15year') is not listed.
Does anyone see what is wrong with my approach?
Upvotes: 0
Views: 1881
Reputation: 59936
Referring to your comment first:
I was able to create the DB using interactive. To populate the empty db with the proper schema I have a pg-dump file (.sql). I'm not sure how to make that file available to the container. Thoughts?
If you have a dump file, why bother running commands manually once the container is up?
Better to make it part of a Dockerfile; then you will not need to load the dump manually, the Docker image will take care of it:
FROM postgres
COPY mydb.sql /docker-entrypoint-initdb.d/
Now if you up the container the DB will be populated automatically.
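Concretely, the build-and-run step might be wrapped in a small script like this (the image tag mydb-image is made up; adjust as needed):

```shell
#!/bin/bash
set -e
# build.sh -- sketch of a build-and-run wrapper for the Dockerfile above.
cat > build.sh <<'EOF'
#!/bin/bash
set -e
# Build an image from the Dockerfile (FROM postgres + the .sql file),
# then start it; the entrypoint loads mydb.sql on first boot.
docker build -t mydb-image .
docker run --rm --name payments-15years \
  -e POSTGRES_PASSWORD=docker -d -p 5432:5432 mydb-image
EOF
chmod +x build.sh
```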
Initialization scripts
If you would like to do additional initialization in an image derived from this one, add one or more *.sql, *.sql.gz, or *.sh scripts under /docker-entrypoint-initdb.d (creating the directory if necessary). After the entrypoint calls initdb to create the default postgres user and database, it will run any *.sql files, run any executable *.sh scripts, and source any non-executable *.sh scripts found in that directory to do further initialization before starting the service.
Now, to come to your question: you can still ship this as a tarball, but tar does not deal with Docker volumes, while the Postgres Docker image keeps its data in a volume. So you need to follow the Dockerfile approach above to import the data on startup.
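If rebuilding an image is undesirable, the same init mechanism also works with a bind mount, so the tarball's start script could mount the dump read-only into /docker-entrypoint-initdb.d. A sketch (the dump file name mydb.sql is an assumption):

```shell
#!/bin/bash
set -e
# start-with-init.sh -- mount the dump into the init directory instead
# of baking it into an image.
cat > start-with-init.sh <<'EOF'
#!/bin/bash
set -e
# Init scripts only run when the data directory is empty, so delete
# ./postgres-15year/data first if you need to re-initialize.
docker run --rm --name payments-15years \
  -e POSTGRES_PASSWORD=docker -d -p 5432:5432 \
  -v "$(pwd)/mydb.sql":/docker-entrypoint-initdb.d/mydb.sql:ro \
  -v "$(pwd)/postgres-15year/data":/var/lib/postgresql/data:delegated \
  postgres:11.6
EOF
chmod +x start-with-init.sh
```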
Upvotes: 1
Reputation: 1141
Actually, when you ran the command with -v $(pwd)/postgres-15year/data:/var/lib/postgresql/data:delegated, according to the official documentation of the Postgres Docker image:
The -v /my/own/datadir:/var/lib/postgresql/data part of the command mounts the /my/own/datadir directory from the underlying host system as /var/lib/postgresql/data inside the container, where PostgreSQL by default will write its data files.
So you are just creating a data directory, not a database. You can check this with the inspect command:
docker container inspect <container_name>
In your scenario, inspecting payments-15years gives these volume details:
"Mounts": [
{
"Type": "bind",
"Source": "/Users/macair/postgres-15year/data",
"Destination": "/var/lib/postgresql/data",
"Mode": "delegated",
"RW": true,
"Propagation": "rprivate"
}
],
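To print just the mount information rather than the full JSON, docker inspect's --format flag takes a Go template (a sketch, wrapped as a small script):

```shell
#!/bin/bash
set -e
# show-mounts.sh -- list only the bind mounts of the container.
cat > show-mounts.sh <<'EOF'
#!/bin/bash
# One "source -> destination" line per mount.
docker inspect --format \
  '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' \
  payments-15years
EOF
chmod +x show-mounts.sh
```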
To create a database, run an interactive session as you did before and create the db manually, or try this:
docker run -it --rm --network some-network postgres psql -h some-postgres -U postgres
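For this particular case, the database could also be created in the already-running container and then populated from the dump; a sketch (the db name comes from the question, the dump file name mydb.sql is an assumption):

```shell
#!/bin/bash
set -e
# init-db.sh -- create the db in the running container, then load the dump.
cat > init-db.sh <<'EOF'
#!/bin/bash
set -e
# Create the (empty) database inside the running container...
docker exec -it payments-15years \
  psql -U postgres -c 'CREATE DATABASE "postgres-15year";'
# ...then feed the schema dump into it over stdin.
docker exec -i payments-15years \
  psql -U postgres -d postgres-15year < mydb.sql
EOF
chmod +x init-db.sh
```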
Upvotes: 1