Ewaren

Reputation: 1763

Clean AND practical way to handle node_modules in a Dockerized Node.js dev environment?

I have recently been trying to dockerize a MERN stack application (both for development and, later, for production), and the interaction between Node.js (especially its node_modules) and Docker puts me off a bit. Throughout this question, I will refer to the computer used for development as the "host machine".

TL;DR

Is there a reasonably practical way to dockerize a Node.js app without mounting your host machine's node_modules folder into the container during development?

My attempted approaches (and why none of them satisfies me)

I can think of 3 big advantages of using Docker (and its docker-compose utility) for development (I will refer to these as points 1, 2 and 3):

  1. It makes it easy to set up the dev environment on new machines, or to integrate new members into the project, since nothing has to be installed manually besides Docker itself.
  2. The app is easy to run for debugging while developing (just a quick docker-compose up and the db, the backend and the frontend are up and running).
  3. The runtime environment is consistent across all machines and the app's final production environment, since a Docker container is kind-of its own small Linux virtual machine.

The first 2 points pose no problem when dockerizing a Node.js application; however, I feel like the third one is harder to achieve in a dev environment because of how dependencies work in Node and how node_modules behaves. Let me explain:

This is my simplified project folder structure:

project
│   docker-compose.yml
│
└───node-backend
│   │   Dockerfile
│   │   package.json
│   │   server.js
│   │
│   └───src
│   │   │   ...
│   │
│   └───node_modules
│       │   ...
│
└───react-frontend
    │   ...

From what I have tried, and from articles and tutorials I have seen on the internet, there are basically 3 approaches to developing Node.js with Docker. In all 3 approaches, I assume I am using the following Dockerfile to describe my app's image:

# node-backend/Dockerfile
FROM node:lts-alpine
WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY . ./

EXPOSE 8000
CMD [ "npm", "start" ]

Conclusion

As you can see, I have yet to find an approach that combines all of the advantages I am looking for: one that is easy to set up and use in development; that respects point 3, because that is an important part of the philosophy and purpose of containerization; and that doesn't make synchronizing node_modules between the container and the host machine a headache, while preserving all of the IDE's functionality (IntelliSense, linting, etc.).

Is there something I am completely missing? What are your solutions to this problem?

Upvotes: 29

Views: 12408

Answers (2)

Marcos

Reputation: 1435

Approach 2 + 3

Your docker-compose file will look exactly like in your Approach 2, but you attach VSCode to the container as you mentioned in Approach 3.

This is easy to do with a devcontainer configuration file.

My current setup is below. I have one command aliased to run start.sh; it builds the containers (if necessary) and spins them up with VSCode attached, with all the dev tools.

├── backstage     # app code
├── db            # init.sql configuration for the database
├── dev.sh        # script to switch between services to attach to and enable debug mode
├── docker-compose.yml
├── Dockerfile           # backend
├── Dockerfile.frontend  # frontend
├── Dockerfile.other     # other service
├── postgres.conf
├── README.md
└── start.sh             # script to replace backend container initial command adding important developer aliases and enabling debug mode

docker-compose.yml

version: "3.9"
services:
  db:
    image: ...
    environment:
      - ...
    volumes:
      - ./db:/docker-entrypoint-initdb.d/
      - pgdata:/var/lib/postgresql/data:rw
  web:
    image: ...
    build: .
    environment:
      - ...
    command: bash -c "chmod +x /start.sh && /start.sh"
    volumes:
      - ./backstage/:/backstage      # app code, hot reloading
      - ./.git/:/.git                # git with submodules
      - ~/.config/:/root/.config     # my git creds
      - ./start.sh:/start.sh         # see below
      - ./.vscode/launch.json:/backstage/.vscode/launch.json
        # my own debug config, mounted in so it overrides my teammates'
        # (who don't develop in a container) without checking anything into git
    ports:
      - ...
    depends_on:
      - db
  frontend:
    image: ...
    build:
      context: .
      dockerfile: Dockerfile.frontend
    command: bash -c "cd path/to/app && yarn start:dev"
    ports:
      - ...
    volumes:
      - ./backstage/:/backstage/     
      - ./.git/:/.git                
      - ~/.config/:/root/.config
      - /backstage/apps/prime/frontend/node_modules
      - /backstage/node_modules # prevent host from overwriting container packages
    depends_on:
      - web
  other:
    ...
  adminer:
    ... # cause why not
volumes:
  pgdata:

start.sh

echo "Container Start Script"

DEBUG_MODE=false

function runserver() {
    python path/to/manage.py runserver 0.0.0.0:...
}

# Add useful aliases to the container
echo "alias django='python -W i /backstage/.../manage.py'" >> /root/.bashrc
echo "alias pylint='\
black ... && \
flake8 ... && \
...etc'" >> /root/.bashrc
source /root/.bashrc

# keep the container alive even if Django crashes
while true; do
    # the count is >= 2 when the server is running (grep also matches itself)
    if [ "$(ps auxw | grep runserver | wc -l)" -ge 2 ] || [ "$DEBUG_MODE" = true ]; then
        sleep 1   # yes, this is hacky
    else
        runserver # but hey, it works
    fi
done
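For what it's worth, a slightly less hacky variant of that keep-alive loop could supervise the server command directly instead of polling ps. This is only a sketch: `supervise` and its restart limit are names I made up for illustration, and the real server command would replace the placeholder in the comment.

```shell
#!/usr/bin/env bash
# Sketch: restart a command when it exits, without grepping ps output
# (which matches the grep process itself). "supervise" is a made-up helper.
supervise() {
    local max_restarts=$1; shift
    local n=0
    while [ "$n" -lt "$max_restarts" ]; do
        "$@" && return 0        # clean exit: stop supervising
        n=$((n + 1))
        echo "command died, restart #$n" >&2
        sleep 1
    done
    return 1                    # gave up after max_restarts failures
}

# In start.sh this would be something like:
#   supervise 1000 python path/to/manage.py runserver 0.0.0.0:8000
```

The upside is that the loop reacts the moment the process exits, rather than on the next one-second poll.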

dev.sh

#!/bin/bash
# Shortcut to attach to multiple services
# Requires the devcontainer CLI

SCRIPTPATH="$( cd -- "$(dirname "$0")" >/dev/null 2>&1 ; pwd -P )"
DEBUG=0
SERVICE=0

for arg in "$@"; do
    case $arg in
        -d|--debug)
            DEBUG=1
            ;;
        web|frontend)
            SERVICE="$arg"
            ;;
        *)
            echo "Unknown option $arg"
            exit 1
            ;;
    esac
done

if [ "$SERVICE" != 0 ]; then
    echo "This will rebuild the dev container and it might take a minute or two."
    echo "To prevent this, just use this script without arguments and attach via VSCode."
    echo "Are you sure? y/n"
    read -r user_input
    if [ "$user_input" != "y" ]; then
        echo "Aborted"
        exit 0
    fi
    sed -i -E 's/"service":.*/"service": "'"$SERVICE"'",/' "$SCRIPTPATH/.devcontainer/devcontainer.json"
fi

if [ $DEBUG = 1 ]; then
    echo "[ DEBUG MODE ]"
    echo "Please run web server with debugger attached."
    sed -i -E "s,DEBUG_MODE=.*,DEBUG_MODE=true," "$SCRIPTPATH/start.sh"
else
    sed -i -E "s,DEBUG_MODE=.*,DEBUG_MODE=false," "$SCRIPTPATH/start.sh"
    # sed <-- hacky hacky, yayayay... i don't care
fi

devcontainer open "$SCRIPTPATH"

devcontainer.json

{
    "name": "Dev Container",
    "features": {
        "git": "latest" // it's vscode-suggested solution
// but I might change it in the future,
// it adds a ton of time and wasted space
// when building the dev container image for the first time
    },
    // extensions --> customize to taste
    "extensions": [
        ...
        // they'll be installed inside the container
    ],
    "dockerComposeFile": "../docker-compose.yml",
    "service": "web",
    "workspaceFolder": "/backstage",
    "postStartCommand": "git config --global --add safe.directory /backstage"
    // I think this command became necessary after the latest security vulnerability found in git
}

Upvotes: 0

Nikos

Reputation: 724

I would suggest looking at this project: https://github.com/BretFisher/node-docker-good-defaults. It supports both local and container node_modules by using a trick, but it is not compatible with some frameworks (e.g. Strapi):

  • The container's node_modules are placed one folder level up from the app folder, and removed from the app folder itself (Node's module-resolution algorithm recursively looks upward from the app's folder until it finds a node_modules directory)
  • The host's node_modules are placed inside the app folder, as usual (no changes there)
  • You additionally bind-mount the package.json + lock files specifically between the container (remember: one folder up from the app) and the host (app folder), so they are always kept in sync => when you docker exec npm install dep in the container, you only need to npm install dep on the host as well, and eventually rebuild your Docker image.
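The image side of that trick could look roughly like this. This is only a sketch based on my reading of that repo; paths such as /opt/node_app are illustrative:

```dockerfile
FROM node:lts-alpine

# Dependencies are installed one level ABOVE the app folder
WORKDIR /opt/node_app
COPY package*.json ./
RUN npm install && npm cache clean --force
# so that locally-installed CLIs (nodemon, etc.) stay on PATH
ENV PATH=/opt/node_app/node_modules/.bin:$PATH

# The app itself lives in a subfolder with NO node_modules of its own;
# Node's resolver walks up and finds /opt/node_app/node_modules.
WORKDIR /opt/node_app/app
COPY . .

EXPOSE 8000
CMD [ "npm", "start" ]
```

In development you then bind-mount your source over /opt/node_app/app; since the container's node_modules lives outside that folder, the mount never shadows it.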

Upvotes: 3
