praks5432

Reputation: 7792

Why is data persisting in mongo docker container?

From my understanding, new containers should not persist data after they die.

Right now I have two Dockerfiles. One is for my application and the other is to seed a database.

This is my project structure

/
 package.json
 Dockerfile
 docker-compose.yml
 mongo-seeding/
   Dockerfile
   seedDatabase.mjs

This is what my seedDatabase looks like

import mongodb from "mongodb";
import data from "./dummyData.mjs"; // the .mjs extension is required by Node's ESM loader

// Connection URL
const url = "mongodb://mongo:27017/poc";

async function mongoSeeder() {
  let client;
  try {
    // first check to see if the database exists
    client = await mongodb.MongoClient.connect(url);
    const db = client.db("poc");
    const questionCollection = await db.createCollection("question");
    const answerCollection = await db.createCollection("answer");
    await questionCollection.insertMany(data.questions);
    await answerCollection.insertMany(data.answers);
  } catch (e) {
    console.log(e);
  } finally {
    // close the connection so the seeding process can exit
    if (client) await client.close();
  }
}

mongoSeeder();

This is what the Dockerfile for seeding the database looks like

FROM node:10

WORKDIR /usr/src/app
COPY package*.json ./
COPY mongo-seeding mongo-seeding
RUN yarn add mongodb uuid faker

CMD ["yarn", "seed"]
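The CMD above assumes package.json defines a seed script. The exact command isn't shown in the question, but under Node 10 it would need the experimental ESM flag to run an .mjs file, so it presumably looks something like this:

```json
{
  "scripts": {
    "seed": "node --experimental-modules mongo-seeding/seedDatabase.mjs"
  }
}
```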

This is what the other Dockerfile looks like

FROM node:10

WORKDIR /usr/src/app
COPY package*.json ./
RUN yarn install
RUN yarn add mongodb uuid faker
RUN yarn global add nodemon babel-cli babel-preset-env
COPY . .
EXPOSE 3000
CMD ["yarn", "dev-deploy"]

This is what my docker-compose.yml looks like

version: "3"
services:
  mongo:
    container_name: mongo
    image: mongo
    ports:
      - "4040:27017"
    expose:
      - "4040"
  app:
    container_name: ingovguru
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - mongo
    links:
      - mongo
    expose:
      - "3000"
  seed:
    container_name: seed
    build:
      context: .
      dockerfile: ./mongo-seeding/Dockerfile
    links:
      - mongo
    depends_on:
      - mongo
      - app

Initially, I am seeding the database with 1000 questions and 200 answers.

Notice that I am not using any volumes, so nothing should be persisted.

I run

docker-compose build
docker-compose up

I jump into the mongo container, and I see that I have 1000 questions and 100 answers.

I then Ctrl-C and re-run.

I now have 2000 questions and 100 answers.

Why does this happen?

Upvotes: 3

Views: 3108

Answers (2)

Motakjuq

Reputation: 2259

The problem you have is that the mongo image defines some volumes in its Dockerfile. You can see them by running:

docker history --no-trunc mongo | grep VOLUME

If you only stop the mongo container (you can check with docker ps -a | grep mongo), its created volumes are kept. If you want to remove the container and its volumes, remove the stopped mongo container with docker rm CONTAINER_ID, or remove all the containers created by Compose with docker-compose down.
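The cleanup described above can be sketched as a short sequence of commands (CONTAINER_ID is a placeholder taken from the docker ps -a output):

```shell
# after Ctrl-C the containers are only stopped, not removed
docker ps -a | grep mongo

# remove the stopped container together with its anonymous volumes
docker rm -v CONTAINER_ID

# or tear down everything the compose file created
docker-compose down
```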

In your case, if you want to build and run the services, you have to do:

docker-compose build && docker-compose down && docker-compose up -d

Upvotes: 0

King Chung Huang

Reputation: 5644

Even though you are not declaring any volumes, the mongo image itself declares volumes for /data/db and /data/configdb in containers, which cover database data (see Dockerfile#L87). With Docker Compose, these anonymous volumes are re-used between invocations of docker-compose up and docker-compose down.
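For reference, the declaration at the line linked above in the mongo image's Dockerfile looks like this:

```dockerfile
VOLUME /data/db /data/configdb
```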

To remove the volumes when you bring down a stack, you need to use the -v, --volumes option. This will remove the volumes, giving you a clean database on the next up.

docker-compose down --volumes

docker-compose down documentation

-v, --volumes           Remove named volumes declared in the `volumes`
                        section of the Compose file and anonymous volumes
                        attached to containers.
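A typical clean-reseed cycle then becomes something like the following (the --renew-anon-volumes flag exists only in newer Compose releases, so treat that line as version-dependent):

```shell
# remove containers plus their named and anonymous volumes, then start fresh
docker-compose down --volumes
docker-compose up -d --build

# newer Compose versions can instead recreate anonymous volumes on up
docker-compose up --renew-anon-volumes
```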

Upvotes: 7
