flamey

Reputation: 2379

Docker commit - DB changes are not saved

I installed latest Docker CS, got a LAMP image from Docker Hub. I'm trying to create a DB in it and make a new image with that DB saved in it.

  1. Start the container: docker run --name mycontainer fauria/lamp. This starts the Ubuntu-based container and its Apache server; a MySQL server is also running in the container.
  2. Access container shell: docker exec -i -t mycontainer bash
  3. Create a DB and run a few MySQL commands on it: mysql -u root CREATE DATABASE mydbname; USE mydbname; CREATE FUNCTION ... ...
  4. Stop the container: docker stop mycontainer
  5. Create an image: docker commit `docker ps -l -q` mynickname/appname:v1
  6. Remove the container: docker container rm mycontainer

Now I expected that if I ran a container based on the new image, the database would already be there. But it's not:

docker run --name mycontainer --rm -p 80:80 -e LOG_STDOUT=true -e LOG_STDERR=true -e LOG_LEVEL=debug -v /home/username/dev/appname/www:/var/www/html mynickname/appname:v1

What am I missing?

Upvotes: 3

Views: 6058

Answers (4)

silver_mx

Reputation: 1382

In my case I am saving db schema changes by dumping the database to a SQL file. What I do is the following:

  1. I created a simple bash script that dumps the DB to /docker-entrypoint-initdb.d/dump.sql so that it is loaded as initialization data:

#!/bin/bash

mysqldump -u root cepheid_development --single-transaction --quick --lock-tables=false > /docker-entrypoint-initdb.d/dump.sql

  2. I copy the script to the container in my Dockerfile:

COPY docker_resources/dump_db.sh /tmp

  3. I run my script automatically from outside the container (command line or another script):

docker exec $container_id /tmp/dump_db.sh
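
Putting the steps together, a minimal Dockerfile sketch could look like this. It assumes the official mysql base image, whose entrypoint replays any .sql files found in /docker-entrypoint-initdb.d/ when a fresh data directory is first initialized; the base tag and paths are illustrative, not the answerer's exact setup:

```dockerfile
# Sketch only: base image tag and paths are assumptions.
FROM mysql:5.7

# The dump script writes to /docker-entrypoint-initdb.d/dump.sql, which the
# official mysql entrypoint imports on first initialization of the data dir.
COPY docker_resources/dump_db.sh /tmp/dump_db.sh
RUN chmod +x /tmp/dump_db.sh
```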

Upvotes: 0

Guido U. Draheim

Reputation: 3271

Having data committed into the image is a useful scenario for testing. When using a prefabricated base image, however, you may have to remove its VOLUME entries first; see docker-copyedit for that.
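
As an illustrative sketch (image names are assumptions), docker-copyedit can rewrite the base image to drop its VOLUME declarations; do this before creating the data, since anything already written into a VOLUME directory is lost at commit time:

```shell
# Illustrative only: strip VOLUME entries from the base image first,
# so that data created afterwards survives a docker commit.
./docker-copyedit.py FROM fauria/lamp INTO local/lamp-novolumes REMOVE ALL VOLUMES
docker run --name mycontainer local/lamp-novolumes
# ... create the database inside the container, then:
docker commit mycontainer mynickname/appname:v1
```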

Upvotes: 3

Abdullah Shah

Reputation: 780

This is not how databases work with containers. You have to persist the data on the host to retain it; containers are disposable, they come and go.

I would recommend persisting database-related data on the host through volumes. If a container crashes, the next container can take over its services and the data will be retained.

For example, in the official mysql image:

$ docker run --name some-mysql -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag

You can see they mount a host directory as a volume to persist the data, so it is retained if the container restarts.
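
A named volume, managed by Docker itself instead of a host path, works the same way; this variant is illustrative, not quoted from the official docs:

```shell
# Illustrative: let Docker manage the storage location instead of a host path.
docker volume create mysql-data
docker run --name some-mysql -v mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:5.7
```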

Cheers!

Upvotes: 2

NZD

Reputation: 1970

The reason is that /var/lib/mysql is listed as a VOLUME in the Dockerfile.

The changes you make are retained between docker stop <yourcontainer> and docker start <yourcontainer> commands. But when you commit a container, each directory marked as VOLUME in the Dockerfile is replaced with its original content. (This happens even if you haven't mounted an external volume to that directory.) See docker commit.

You can easily check that your other changes are kept in a commit by making changes somewhere outside the VOLUME directories. For instance, run date>/mydate inside the container and then commit it. When you then run a new container from that image, the file /mydate will still be there.

If you want to retain the database changes, you can do that by cloning the repo and removing the line VOLUME /var/lib/mysql from the Dockerfile. If you then build the new image and run it, your database changes will be retained when you commit the container.
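
A hedged sketch of that approach (the repository URL and image tag are assumptions, not verified details of the fauria/lamp image):

```shell
# Illustrative: clone the image's source, drop the VOLUME line, rebuild.
git clone https://github.com/fauria/docker-lamp.git
cd docker-lamp
sed -i '/^VOLUME .*\/var\/lib\/mysql/d' Dockerfile
docker build -t mynickname/lamp-novolume .
```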

Normally, in a production environment, you mount your database files either in a data container or on the host. This way the database data will be retained in the data container or on the host even if you commit the container.

Upvotes: 7
