CCCC

Reputation: 6461

Docker - volumes explanation

As far as I know, a volume in Docker is persistent data for a container, and it can map a local folder to a folder inside the container.

Earlier on, I was facing an Error: Cannot find module 'winston' issue in Docker, which is described in:
docker - Error: Cannot find module 'winston'

Someone told me in that post:

Remove volumes: - ./:/server from your docker-compose.yml. It overrides the whole directory that contains node_modules in the container.

After I removed volumes: - ./:/server, the above problem was solved.

However, another problem occurred:
[solved but want explanation]nodemon --legacy-watch src/ not working in Docker

I solved the above issue by adding back volumes: - ./:/server, but I don't know the reason for it.

Question

  1. What is the cause and explanation for the above two issues?
  2. What happens between build and volumes, and what is the relationship between build and volumes in docker-compose.yml?

Dockerfile

FROM node:lts-alpine

RUN npm install --global sequelize-cli nodemon

WORKDIR /server

COPY package*.json ./

RUN npm install

COPY . . 

EXPOSE 3030

CMD ["npm", "run", "dev"]

docker-compose.yml

version: '2.1'

services:
  test-db:
    image: mysql:5.7
    ...
  test-web:
    environment:
      - NODE_ENV=local
      - PORT=3030
    build: .           <------------------------ It takes Dockerfile in current directory
    command: >
      ./wait-for-db-redis.sh test-db npm run dev
    volumes:
      - ./:/server     <------------------------ how and when does this line works?
    ports:
      - "3030:3030"
    depends_on:
      - test-db

Upvotes: 1

Views: 145

Answers (1)

David Maze

Reputation: 158908

When you don't have any volumes:, your container runs the code that's built into the image. This is good! But, the container filesystem is completely separate from the host filesystem, and the image contains a fixed copy of your application. When you change your application, after building and testing it in a non-Docker environment, you need to rebuild the image.

If you bind-mount a volume over the application directory (.:/server) then the contents of the host directory replace the image contents; any work you do in the Dockerfile gets completely ignored. This also means /server/node_modules in the container is ./node_modules on the host. If the host and container environments don't agree (MacOS host/Linux container; Ubuntu host/Alpine container; ...) there can be compatibility issues that cause this to break.

If you also mount an anonymous volume over the node_modules directory (/server/node_modules), then the node_modules directory from the image gets copied into the volume only the first time you run the container, and after that the volume content gets mounted into the container. If you update the image, the old volume contents take precedence (changes to package.json get ignored).
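As a sketch, that combination would look like this in your docker-compose.yml (the /server paths assume the WORKDIR /server from your Dockerfile):

services:
  test-web:
    build: .
    volumes:
      - ./:/server              # bind mount: the host source tree replaces /server in the container
      - /server/node_modules    # anonymous volume: keeps the node_modules installed in the image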

When the image is built only the contents of the build: block have an effect. There are no volumes: mounted, environment: variables aren't set, and the build environment isn't attached to networks:.
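As a rough illustration (the build arg here is hypothetical, just to show where a build-time value would have to go):

services:
  test-web:
    build:
      context: .                  # the only part that matters while the image is built
      # args:
      #   SOME_BUILD_ARG: value   # build-time values must be passed as build args, not environment:
    environment:
      - NODE_ENV=local            # only set when the container runs, not during the build
    volumes:
      - ./:/server                # only mounted when the container runs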


The upshot of this is that if you don't have volumes at all:

version: '3.8'
services:
  app:
    build: .
    ports: ['3000:3000']

It is completely disconnected from the host environment. You need to docker-compose build the image again if your code changes. On the other hand, you can docker push the built image to a registry and run it somewhere else, without needing a separate copy of Node or the application source code.

If you have a volume mount replacing the application directory then everything in the image build is ignored. I've seen some questions that take this to its logical extent and skip the image build entirely, just bind-mounting the host directory over an unmodified node image. There's not really a benefit to using Docker here, especially for a front-end application; install Node instead of installing Docker and use ordinary development tools.
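For comparison, those setups look roughly like this (a sketch of the pattern, not a recommendation):

services:
  app:
    image: node:lts-alpine      # unmodified base image, no build: at all
    working_dir: /server
    command: npm run dev
    volumes:
      - ./:/server              # everything, including node_modules, comes from the host
    ports:
      - "3030:3030"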

Upvotes: 2
