Reputation: 2289
I am trying to set up a PostgreSQL container (https://hub.docker.com/_/postgres/). I have some data from a current PostgreSQL instance; I copied it from /var/lib/postgresql/data and want to mount it as a volume in a PostgreSQL container.
The PostgreSQL part of my docker-compose.yml file:
db:
  image: postgres:9.4
  ports:
    - 5432:5432
  environment:
    POSTGRES_PASSWORD: postgres
    POSTGRES_USER: postgres
    PGDATA: /var/lib/postgresql/data
  volumes:
    - /projects/own/docker_php/pgdata:/var/lib/postgresql/data
When I run docker-compose up I get this message:
db_1 | initdb: directory "/var/lib/postgresql/data" exists but is not empty
db_1 | If you want to create a new database system, either remove or empty
db_1 | the directory "/var/lib/postgresql/data" or run initdb
db_1 | with an argument other than "/var/lib/postgresql/data".
I also tried to build my own image based on the official one, so my Dockerfile is:
FROM postgres:9.4
COPY pgdata /var/lib/postgresql/data
But I got the same error. What am I doing wrong?
I also got a SQL dump using pg_dumpall and put it in /docker-entrypoint-initdb.d, but that file is executed every time I run docker-compose up.
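For context, the dump is wired in roughly like this (a sketch, not my exact setup; the host path and file name are placeholders):

db:
  image: postgres:9.4
  environment:
    POSTGRES_PASSWORD: postgres
    POSTGRES_USER: postgres
  volumes:
    - ./dump.sql:/docker-entrypoint-initdb.d/dump.sql:ro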
Upvotes: 66
Views: 92971
Reputation: 4578
To restore your data from an existing dumped.sql file, create your docker-compose.yml:
version: "3"
services:
postgres:
image: postgres
container_name: postgres
environment:
POSTGRES_DB: YOUR_DATABASE
POSTGRES_USER: your_username_if_needed
POSTGRES_PASSWORD: your_password_if_needed
expose:
- 5432
volumes:
- data:/var/lib/postgresql/data
volumes:
data: {}
Launch it and then kill it:
docker-compose up
docker-compose stop
Then migrate your data from that dump:
docker exec -i YOUR_CONTAINER_NAME psql -U your_username_if_needed -W -d YOUR_DATABASE < your_dump.sql
-W is only needed if you set a password.
You should now see the import running in your console. When it's done, you're good to go: docker-compose up
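If you don't have the dump file yet, here is a rough sketch of producing it from the existing instance (the user and database names are placeholders; add -W if that instance also requires a password):

pg_dump -U your_username_if_needed -d YOUR_DATABASE > your_dump.sql
# or, to dump all databases and roles in one file:
pg_dumpall -U your_username_if_needed > your_dump.sql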
Upvotes: 0
Reputation: 869
It looks like the PostgreSQL image is having issues with mounted volumes. FWIW, it's probably more of a PostgreSQL issue than Docker's, but that doesn't matter, because mounting disks is not a recommended way of persisting database files anyway.
You should be creating data-only Docker containers instead, like this:
postgres9:
  image: postgres:9.4
  ports:
    - 5432:5432
  volumes_from:
    - pg_data
  environment:
    POSTGRES_PASSWORD: postgres
    POSTGRES_USER: postgres
    PGDATA: /var/lib/postgresql/data/pgdata

pg_data:
  image: alpine:latest
  volumes:
    - /var/lib/postgresql/data/pgdata
  command: "true"
which I tested and it worked fine. You can read more about data-only containers here: Why Docker Data Containers (Volumes!) are Good
As for how to import initial data, you can either docker cp it into the data-only container of the setup (see the sketch below), or …
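A rough sketch of the docker cp route, assuming the data-only container created by Compose is named dockerphp_pg_data_1 (the prefix depends on your project directory) and the copied data sits at the host path from the question:

docker cp /projects/own/docker_php/pgdata/. dockerphp_pg_data_1:/var/lib/postgresql/data/pgdata

The trailing /. copies the directory contents rather than the directory itself; you may also need to fix ownership afterwards so the postgres user inside the server container can read the files.
Upvotes: 17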
Reputation: 15190
To build on irakli's answer, here's an updated solution: it uses the newer ("2") Docker Compose file format with a separate top-level volumes section.

version: '2'

services:
  postgres9:
    image: postgres:9.4
    expose:
      - 5432
    volumes:
      - data:/var/lib/postgresql/data

volumes:
  data: {}
Start Postgres database server:
$ docker-compose up
Show all tables in the database. In another terminal, talk to the container's Postgres:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c '\z'
It'll show nothing, as the database is blank. Create a table:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c 'create table beer()'
List the newly-created table:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c '\z'
Access privileges
Schema | Name | Type | Access privileges | Column access privileges
--------+-----------+-------+-------------------+--------------------------
public | beer | table | |
Yay! We've now started a Postgres database using a shared storage volume, and stored some data in it. Next step is to check that the data actually sticks around after the server stops.
Now, kill the Postgres server container:
$ docker-compose stop
Start up the Postgres container again:
$ docker-compose up
We expect that the database server will re-use the storage, so our very important data is still there. Check:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c '\z'
Access privileges
Schema | Name | Type | Access privileges | Column access privileges
--------+-----------+-------+-------------------+--------------------------
public | beer | table | |
We've successfully used a new-style Docker Compose file to run a Postgres database using an external data volume, and checked that it keeps our data safe and sound.
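If you're curious where Docker actually keeps that data, a quick sketch (the volume name here is a guess; Compose prefixes it with the project directory name, e.g. myproject_data):

$ docker volume ls
$ docker volume inspect myproject_data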
Finally, let's make sure backup and restore work. First, make a backup, storing our data on the host:
$ docker exec -it $(docker-compose ps -q postgres9 ) pg_dump -Upostgres > backup.sql
Zap our data from the guest database:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c 'drop table beer'
Restore our backup (stored on the host) into the Postgres container.
Note: use "exec -i", not "-it", otherwise you'll get an "input device is not a TTY" error.
$ docker exec -i $(docker-compose ps -q postgres9 ) psql -Upostgres < backup.sql
List the tables to verify the restore worked:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c '\z'
Access privileges
Schema | Name | Type | Access privileges | Column access privileges
--------+-----------+-------+-------------------+--------------------------
public | beer | table | |
To sum up, we've verified that we can start a database, the data persists after a restart, and we can restore a backup into it from the host.
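And if you ever want to start over from a clean slate and re-import that backup, a rough sketch (assumes the dump was taken as above, and that Postgres gets a few seconds to finish initialising before the restore):

$ docker-compose down -v        # -v also removes the named data volume
$ docker-compose up -d
$ docker exec -i $(docker-compose ps -q postgres9) psql -Upostgres < backup.sql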
Thanks Tomasz!
Upvotes: 77