sebastian

Reputation: 488

How to keep a data store portable in Docker?

I am in the process of changing my development environment to Docker and I'm pretty happy so far, but I have one basic question. First, let me describe the setup I've landed on.

I'm using web development as the example environment.
I'm organizing every service in its own container (a rough compose sketch follows the list), so:

  1. PHP, which talks to a mysql container and has a data container (called app) for the source.
  2. nginx, which links to the PHP container and serves the files from the data container (app).
  3. app is basically the same container as PHP (to save space) and mounts my project folder into the container. app then serves the project folder to the other containers.
  4. then there is a mysql container, which has its own data container called data
  5. a phpmyadmin container that talks to the mysql container
  6. and finally there is data, the data container for the DB.
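
To make the layout concrete, a stripped-down compose file for this kind of setup could look roughly like the sketch below (compose v1 syntax; image names, paths, ports, and passwords are placeholders, not my actual file):

    # rough sketch only -- image names, paths, ports, and passwords are placeholders
    php:
      image: php:fpm
      links:
        - mysql
      volumes_from:
        - app

    nginx:
      image: nginx
      links:
        - php
      volumes_from:
        - app
      ports:
        - "8080:80"

    # data container for the source, mounting my project folder from the host
    app:
      image: php:fpm
      volumes:
        - .:/var/www/html
      command: "true"

    mysql:
      image: mysql
      volumes_from:
        - data
      environment:
        MYSQL_ROOT_PASSWORD: secret

    phpmyadmin:
      image: phpmyadmin/phpmyadmin
      links:
        - mysql
      environment:
        PMA_HOST: mysql
      ports:
        - "8081:80"

    # data container for the DB files
    data:
      image: mysql
      volumes:
        - /var/lib/mysql
      command: "true"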

I'm not sure the benefits of this are clear to everyone, so here they are (because you could also just put everything into one container...).
Mounting the project folder from my host machine into the Docker container lets me use my favorite editor and gives me continuous development.
Decoupling the database engine from its store gives me the freedom to change the engine but keep the data. (And of course I don't have to install any programming stuff apart from an editor and Docker.)

My goal is to have the whole setup highly portable, so having the latest version of my project code on the host system and not living inside a container is a huge plus. I am organizing the setup described above in a `docker-compose.yml` file inside my project folder. So I can just copy the whole project folder to a different machine, type `docker-compose up` and be up and running.
I actually have it in my Dropbox and can switch machines just like that. Pretty sweet.
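
In practice, bringing the environment up on a fresh machine boils down to something like this (paths are just examples; only Docker and docker-compose need to be installed):

    # paths are only examples
    cp -r ~/Dropbox/myproject ~/work/myproject   # or work directly from the Dropbox folder
    cd ~/work/myproject
    docker-compose up -d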

But there is one drawback. The DB store is not portable, as it lies somewhere in the VirtualBox file system. I tried mounting the data store into the host OS, but that doesn't really work. The files are there, but I get various errors when I try to read from or write to them.

I guess my question is whether there is a best practice for keeping the database store in sync (or at least highly portable) between different dev machines.

Upvotes: 1

Views: 1530

Answers (1)

BMitch

Reputation: 264821

I'd nix the data containers and switch over to named volumes. Data containers haven't been needed for quite a while, despite some outdated documentation indicating otherwise.

Named volumes let you select from a variety of volume drivers that make it possible to mount the data from outside sources (including NFS, gluster, and flocker). They also remove the requirement to pick a container that won't have a significant disk overhead, allow you to mount the folders at any location in each container, and separate container management from data management (so a docker rm -v $(docker ps -aq) doesn't nuke your data).
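
For example, with a reasonably current docker CLI, an NFS-backed named volume can be created with the built-in local driver; the server address, export path, and volume name below are only placeholders:

    # create a named volume backed by an NFS export (placeholder address/path)
    docker volume create --driver local \
      --opt type=nfs \
      --opt o=addr=192.168.1.100,rw \
      --opt device=:/exports/mysql-data \
      mysql-data

    # mount it into a container like any other named volume
    docker run -d -e MYSQL_ROOT_PASSWORD=secret -v mysql-data:/var/lib/mysql mysql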

A named volume is as easy to create as giving the volume a name on the docker run, e.g. docker run -v app-data:/app myapp. You can then list your volumes with docker volume ls.
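
Since the question is driven from a docker-compose.yml, the same idea there is a top-level volumes section (compose file format version 2 and up); a minimal sketch with an arbitrary volume name:

    version: "2"
    services:
      mysql:
        image: mysql
        environment:
          MYSQL_ROOT_PASSWORD: secret
        volumes:
          - db-data:/var/lib/mysql

    volumes:
      db-data: {}

Compose creates db-data on the first up and reuses it afterwards; the volume isn't removed when the containers are.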

Upvotes: 1
