Sarthak Batra

Reputation: 597

Docker container not updating on code change

I have a Dockerfile to build my Node container; it looks as follows:

FROM node:12.14.0

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 4500

CMD ["npm", "start"]

Based on this Dockerfile, I am using Docker Compose to run this container and link it to a Mongo container so that it can refer to it as mongo-service. The docker-compose.yml looks as follows:

version: '3'
services:
    backend:
        container_name: docker-node-mongo-container
        restart: always
        build: .
        ports: 
            - '4700:4500'
        links: 
            - mongo-service

    mongo-service:
        container_name: mongo-container
        image: mongo
        ports: 
            - "27017:27017"

Expected behavior: Every time I make a change to the project on my local computer, I want docker-compose to restart the container so that the new changes are reflected.

Current behavior: To make the new changes show up, I have to run docker-compose down and then delete the images. I am guessing that it has to rebuild the images. How do I make it so that whenever I make a change, the Dockerfile builds a new image?

I understand that I need to use volumes. I am just failing to understand how. Could somebody please help me here?

Upvotes: 26

Views: 39513

Answers (4)

Kalpesh Chavda

Reputation: 11

This worked for me:

version: '3.8'

services:
  api_app:
    build:
      context: ./  # Path to the directory containing the Dockerfile
      dockerfile: Dockerfile  # Name of the Dockerfile
    ports:
      - "8080:8080"  # Map port 8080 on the host to port 8080 in the container
    volumes:
      - .:/var/www/api  # Mount the current directory to /var/www/api in the container
    environment:
      - APACHE_RUN_USER=${APACHE_RUN_USER}
      - APACHE_RUN_GROUP=${APACHE_RUN_GROUP}
      - APACHE_LOG_DIR=${APACHE_LOG_DIR}
      - APP_NAME=GPUBackend
      - APP_ENV=${APP_ENV}
      - APP_KEY=${APP_KEY}
    restart: unless-stopped  # Automatically restart the container unless it's stopped
    networks:
      - api_network  # Use a custom network (optional)

networks:
  api_network:
    driver: bridge  # Use the default bridge network
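
Note that the ${...} placeholders are substituted by Compose from your shell environment or from a .env file sitting next to the docker-compose.yml. As a minimal sketch (the values below are only placeholders, not something from this setup), such a file could look like:

# .env -- example values only, adjust to your own application
APACHE_RUN_USER=www-data
APACHE_RUN_GROUP=www-data
APACHE_LOG_DIR=/var/log/apache2
APP_ENV=local
APP_KEY=base64:replace-with-your-app-key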

Upvotes: 0

Slawa

Reputation: 1217

Since Docker Compose version 2.22 there's a way for Compose to detect file changes. Use a docker-compose.yml like this:

services:
  web:
    build: .
    command: npm start
    develop:
      watch:
        - action: sync
          path: ./web
          target: /src/web
          ignore:
            - node_modules/
        - action: rebuild
          path: package.json

And run like this:

docker compose up -d --build && docker compose watch

If you install concurrently, you can sync file changes and see the server logs in the same terminal:

node_modules/.bin/concurrently "docker compose watch" "docker compose logs web -f"

More details here: https://docs.docker.com/compose/file-watch/

Upvotes: 1

leandrofahur

Reputation: 99

In order for your Docker application to pick up code changes without a rebuild, you need to use Docker volumes.

Add something like this to your docker-compose.yml file:

version: '3'
services:
    backend:
        container_name: docker-node-mongo-container
        restart: always
        build: .
        ports: 
            - '4700:4500'
        links: 
            - mongo-service
        volumes:
            - .:/usr/src/app


    mongo-service:
        container_name: mongo-container
        image: mongo
        ports: 
            - "27017:27017"

The volumes entry is simply saying: "Hey, map the current folder outside the container (the dot) onto the working directory inside the container."
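
One caveat (an assumption about your setup, not something from the question): the bind mount only makes the new files visible inside the container; the Node process itself won't reload unless npm start runs a watcher such as nodemon. It is also common to add an anonymous volume so the container's installed node_modules isn't hidden by the host folder. A rough sketch, with a hypothetical entry point:

        volumes:
            - .:/usr/src/app
            - /usr/src/app/node_modules   # keep the dependencies installed inside the image
        command: npx nodemon server.js    # server.js is a placeholder; nodemon must be in your dependencies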

Upvotes: 9

David Maze

Reputation: 158837

When you make a change, you need to run docker-compose up --build. That will rebuild your image and restart containers as needed.

Docker has no facility to detect code changes, and it is not intended as a live-reloading environment. Volumes are not intended to hold code, and there are a couple of problems people run into attempting it (Docker file sync can be slow or inconsistent; putting a node_modules tree into an anonymous volume actively ignores changes to package.json; it ports especially badly to clustered environments like Kubernetes). You can use a host Node pointed at your Docker MongoDB for day-to-day development, and still use this Docker-based setup for deployment.
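
For example (the environment-variable name and connection string below are assumptions about how the app reads its configuration), the two workflows look roughly like:

# Rebuild the image and restart the containers after a code change
docker-compose up --build -d

# Or: run only MongoDB in Docker and run Node directly on the host for fast iteration
docker-compose up -d mongo-service
MONGO_URL=mongodb://localhost:27017/mydb npm start   # MONGO_URL is a hypothetical variable name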

Upvotes: 22
