Reputation: 120
On Windows, I connected to the Postgres Docker container from the local machine, but I can't see the tables that exist in the Postgres container. The data is not replicating locally. I followed this tutorial for running the Postgres container on Windows.
I managed to create the tables from a dump file:
$ docker volume create --name postgres-volume
$ docker run -p 5432:5432 --name postgres_db -e POSTGRES_PASSWORD=password -v postgres-volume:/var/lib/postgresql/data -d postgres
$ docker exec -it <container-id> bash -c "pg_dump -h <source-url> -U postgres -d postgres > /tmp/dump.sql"
$ docker exec -it <container-id> bash -c "psql -f /tmp/dump.sql -U postgres -d postgres"
Any help appreciated.
Upvotes: 0
Views: 2284
Reputation: 1417
Containers are meant to be isolated instances of a program/service. They are isolated both from the host and from subsequent spawns of the same image. They start off on an isolated island, with nothing in it (that they didn't bring themselves). Any data they generate is lost upon their death. They are also completely oblivious to any data on the host (for now). But sometimes we want their data to be persistent, or we want to "inject" our own data each time they start up. Such is your case with PostgreSQL: we want PostgreSQL to have our schema available each time it starts up, and it would also be great if it retained any changes we made or data we loaded.
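For illustration, a minimal sketch of that ephemerality; the container name throwaway_db and the password are made-up examples:
$ docker run --name throwaway_db -e POSTGRES_PASSWORD=password -d postgres
$ docker exec -it throwaway_db psql -U postgres -c "CREATE TABLE demo (id int);"   # give Postgres a few seconds to initialize first
$ docker rm -f throwaway_db
$ docker run --name throwaway_db -e POSTGRES_PASSWORD=password -d postgres
$ docker exec -it throwaway_db psql -U postgres -c "\dt"   # "Did not find any relations." -- the demo table died with the first container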
Enter docker volumes. They are a good way to manage persistent storage for containers. They are meant to be mounted in containers, letting them write their data (or read the data of prior instances), and that data will not be deleted when the container instance is deleted. Once you create a volume with docker volume create myvolume1, Docker creates a directory under /var/lib/docker/volumes/ (on Windows the default location is different, and it can be changed). You never have to be aware of the physical directory on your host; you only need to know the volume name myvolume1 (or whatever name you chose for it).
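If you're curious where a volume physically lives, docker itself can tell you:
$ docker volume create myvolume1
$ docker volume ls                  # volumes are listed by name, not by host path
$ docker volume inspect myvolume1   # the "Mountpoint" field is the directory on the host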
As we said, containers are, by default, completely isolated from the host, and specifically from its filesystem. This means that when a container starts up, it doesn't know what's on the host's filesystem, and when the container instance is deleted, the data it generated during its life perishes with it.
But that's different if we use docker volumes. Upon a container's start-up, we can mount data from "outside" into it. This data can be either the docker volume we spoke of earlier or a specific path we want (such as /home/me/somethingimport, which we manage ourselves). The latter isn't a docker volume but works just the same.
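As a sketch, the two forms side by side (same image as above; the host path is only an example you'd manage yourself):
# a named docker volume -- Docker manages where it lives:
$ docker run -v myvolume1:/var/lib/postgresql/data -e POSTGRES_PASSWORD=password -d postgres
# a host path ("bind mount") -- you manage the directory yourself:
$ docker run -v /home/me/somethingimport:/var/lib/postgresql/data -e POSTGRES_PASSWORD=password -d postgres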
The tutorial you linked talks about mounting both a path and a docker volume (in separate examples). This is done with the -v flag when you execute docker run. Because, when using docker on Windows, there is an issue with permissions on the PostgreSQL data directory on the host (which would be mounted into the container), they recommend using docker volumes.
This means you'll have to create your schema and load any data you need after you've started your PostgreSQL instance with a docker volume. Subsequent restarts of the container must use the same docker volume.
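For example, reusing the container name and dump file from your question (a sketch, assuming the dump is already at /tmp/dump.sql inside the container):
$ docker exec -it postgres_db bash -c "psql -f /tmp/dump.sql -U postgres -d postgres"
$ docker exec -it postgres_db psql -U postgres -d postgres -c "\dt"   # confirm the tables landed in the volume-backed instance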
docker volume create --name postgres-volume
docker run -p 5432:5432 --name postgres_db -e POSTGRES_PASSWORD=password -v postgres-volume:/var/lib/postgresql/data -d postgres
From the tutorial
These are the two important lines. The first creates a docker volume and the second starts a fresh PostgreSQL instance. Any changes you make to that instance's data (DML, DDL) will be saved in the docker volume postgres-volume. If you've previously spun up a container (for example, PostgreSQL) that uses that volume, it'll find the data just as it was left last time. In other words, what makes the second line a fresh instance is the fact that the docker volume is empty (it was just created). Subsequent instances of PostgreSQL will find the schema+data you loaded previously.
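A quick sketch of that persistence, reusing the names from the lines above:
$ docker rm -f postgres_db   # the container dies; the volume survives
$ docker run -p 5432:5432 --name postgres_db -e POSTGRES_PASSWORD=password -v postgres-volume:/var/lib/postgresql/data -d postgres
$ docker exec -it postgres_db psql -U postgres -d postgres -c "\dt"   # schema+data are still there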
Upvotes: 1