Reputation: 20014
I have an app with the following services:
web/ - holds and runs a Python 3 Flask web server on port 5000. Uses sqlite3.
worker/ - has an index.js file which is a worker for a queue. The web server interacts with this queue using a JSON API over port 9730. The worker uses Redis for storage. The worker also stores data locally in the folder worker/images/.
Now this question only concerns the worker.
worker/Dockerfile
FROM node:0.12
WORKDIR /worker
COPY package.json /worker/
RUN npm install
COPY . /worker/
docker-compose.yml
redis:
image: redis
worker:
build: ./worker
command: npm start
ports:
- "9730:9730"
volumes:
- worker/:/worker/
links:
- redis
When I run docker-compose build, everything works as expected and all npm modules are installed in /worker/node_modules as I'd expect.
npm WARN package.json [email protected] No README data
> [email protected] install /worker/node_modules/pageres/node_modules/screenshot-stream/node_modules/phantom-bridge/node_modules/phantomjs
> node install.js
<snip>
But when I do docker-compose up, I see this error:
worker_1 | Error: Cannot find module 'async'
worker_1 | at Function.Module._resolveFilename (module.js:336:15)
worker_1 | at Function.Module._load (module.js:278:25)
worker_1 | at Module.require (module.js:365:17)
worker_1 | at require (module.js:384:17)
worker_1 | at Object.<anonymous> (/worker/index.js:1:75)
worker_1 | at Module._compile (module.js:460:26)
worker_1 | at Object.Module._extensions..js (module.js:478:10)
worker_1 | at Module.load (module.js:355:32)
worker_1 | at Function.Module._load (module.js:310:12)
worker_1 | at Function.Module.runMain (module.js:501:10)
Turns out none of the modules are present in /worker/node_modules (on host or in the container).
If, on the host, I npm install, then everything works just fine. But I don't want to do that. I want the container to handle dependencies.
What's going wrong here?
(Needless to say, all packages are in package.json.)
Upvotes: 295
Views: 228908
Reputation: 1
In case I haven't understood the problem: in my case, I don't want the node_modules folder or package-lock.json on my development machine. I want the container to create the node_modules folder, since I'm only going to work on the node application.
Upvotes: 0
Reputation: 61
In addition to FrederikNS's answer:
Newly installed npm packages will not end up in the container after a rebuild and restart.
To solve this run:
docker compose build --no-cache
docker compose up --force-recreate -V # <--- Notice the -V flag
The -V flag removes anonymous volumes.
volumes:
- ./worker/:/worker/
- /worker/node_modules
When using a data volume (or anonymous volume), Docker copies the node_modules folder from the built docker image into an anonymous volume.
However, anonymous volumes persist between builds and restarts. So the newly installed package ends up in your image when building, but because the anonymous volume still contains the old node_modules contents, it is overlaid in the container, hiding the new node_modules.
The -V flag deletes anonymous volumes, and a new anonymous volume is then recreated from the docker container, containing the newly installed package.
I created a Makefile that has all the docker commands I need, so that every time I need to make changes to packages, I don't forget to add the -V flag when bringing the container up again. Now all I need to do is run
npm install <package>
make build
make up
and the package is installed, the containers are rebuilt, and the containers are restarted with a newly created anonymous volume containing my newly installed packages.
COMPOSE_FILE := docker-compose.yml
# Base docker compose command
DC := docker compose -f $(COMPOSE_FILE)
.PHONY: build up
# Main targets
build:
$(DC) build --no-cache
up:
$(DC) up -d -V --force-recreate
Upvotes: 6
Reputation: 931
Another way to overcome this issue of the client container not finding / not having node_modules is the following:
Build a frontend image that can run on its own and push it to Docker Hub.
Then, when you do docker-compose up for your backend and db, it'll pull your frontend image from your specified Docker Hub repo.
This way, the node_modules issue is not a problem, because your frontend container is already running separately and you are just pulling it in via your docker-compose.yml.
Example directory structure:
my-app/
-docker-compose.yml
-- backend/
-- client/
-------Dockerfile
And in your docker-compose.yml
, you simply specify the build like this:
version: '3'
services:
frontend:
image: <your-docker-frontend-rep>/<project>:tag
I am still new to Docker, so please feel free to comment if I'm wrong. TY.
Upvotes: 0
Reputation: 5772
This happens because you have added your worker directory as a volume to your docker-compose.yml, as the volume is not mounted during the build.
When docker builds the image, the node_modules directory is created within the worker directory, and all the dependencies are installed there. Then at runtime the worker directory from outside docker is mounted into the docker instance (which does not have the installed node_modules), hiding the node_modules you just installed. You can verify this by removing the mounted volume from your docker-compose.yml.
A workaround is to use a data volume to store all the node_modules, as data volumes copy in the data from the built docker image before the worker directory is mounted. This can be done in the docker-compose.yml like this:
redis:
image: redis
worker:
build: ./worker
command: npm start
ports:
- "9730:9730"
volumes:
- ./worker/:/worker/
- /worker/node_modules
links:
- redis
I'm not entirely certain whether this imposes any issues for the portability of the image, but as it seems you are primarily using docker to provide a runtime environment, this should not be an issue.
If you want to read more about volumes, there is a nice user guide available here: https://docs.docker.com/userguide/dockervolumes/
EDIT: Docker has since changed its syntax to require a leading ./ for mounting files relative to the docker-compose.yml file.
Upvotes: 449
Reputation: 2093
With Yarn you can move the node_modules outside the volume by setting
# ./.yarnrc
--modules-folder /opt/myproject/node_modules
See https://www.caxy.com/blog/how-set-custom-location-nodemodules-path-yarn
Upvotes: 1
Reputation: 1745
I tried the most popular answers on this page but ran into an issue: the node_modules directory in my Docker instance would get cached in the named or unnamed mount point, and later would overwrite the node_modules directory that was built as part of the Docker build process. Thus, new modules I added to package.json would not show up in the Docker instance.
Fortunately I found this excellent page which explains what was going on and gives at least 3 ways to work around it: https://burnedikt.com/dockerized-node-development-and-mounting-node-volumes/
Upvotes: 2
Reputation: 263
You can just move node_modules
into a /
folder.
FROM node:0.12
WORKDIR /worker
COPY package.json /worker/
RUN npm install \
&& mv node_modules /node_modules
COPY . /worker/
Upvotes: 4
Reputation: 14791
The solution provided by @FrederikNS works, but I prefer to explicitly name my node_modules volume.
My project/docker-compose.yml
file (docker-compose version 1.6+) :
version: '2'
services:
frontend:
....
build: ./worker
volumes:
- ./worker:/worker
- node_modules:/worker/node_modules
....
volumes:
node_modules:
my file structure is :
project/
│── worker/
│ └─ Dockerfile
└── docker-compose.yml
It creates a volume named project_node_modules and reuses it every time I bring my application up.
My docker volume ls looks like this:
DRIVER VOLUME NAME
local project_mysql
local project_node_modules
local project2_postgresql
local project2_node_modules
Upvotes: 61
Reputation: 1568
If you don't use docker-compose you can do it like this:
FROM node:10
WORKDIR /usr/src/app
RUN npm install -g @angular/cli
COPY package.json ./
RUN npm install
EXPOSE 5000
CMD ng serve --port 5000 --host 0.0.0.0
Then you build it: docker build -t myname .
and you run it by adding two volumes, the second one without a source: docker run --rm -it -p 5000:5000 -v "$PWD":/usr/src/app/ -v /usr/src/app/node_modules myname
Upvotes: 1
Reputation: 330
If you want the node_modules
folder available to the host during development, you could install the dependencies when you start the container instead of during build-time. I do this to get syntax highlighting working in my editor.
# We're using a multi-stage build so that we can install dependencies during build-time only for production.
# dev-stage
FROM node:14-alpine AS dev-stage
WORKDIR /usr/src/app
COPY package.json ./
COPY . .
# `yarn install` will run every time we start the container. We're using yarn because it's much faster than npm when there's nothing new to install
CMD ["sh", "-c", "yarn install && yarn run start"]
# production-stage
FROM node:14-alpine AS production-stage
WORKDIR /usr/src/app
COPY package.json ./
RUN yarn install
COPY . .
Add node_modules
to .dockerignore
to prevent it from being copied when the Dockerfile
runs COPY . .
. We use volumes to bring in node_modules
.
**/node_modules
node_app:
container_name: node_app
build:
context: ./node_app
target: dev-stage # `production-stage` for production
volumes:
# For development:
# If node_modules already exists on the host, they will be copied
# into the container here. Since `yarn install` runs after the
# container starts, this volume won't override the node_modules.
- ./node_app:/usr/src/app
# For production:
#
- ./node_app:/usr/src/app
- /usr/src/app/node_modules
Upvotes: 4
Reputation: 2824
The node_modules folder is overwritten by the volume and no longer accessible in the container. I'm using the native module loading strategy to take the folder out of the volume:
/data/node_modules/ # dependencies installed here
/data/app/ # code base
Dockerfile:
COPY package.json /data/
WORKDIR /data/
RUN npm install
ENV PATH /data/node_modules/.bin:$PATH
COPY . /data/app/
WORKDIR /data/app/
The node_modules
directory is not accessible from outside the container because it is included in the image.
Upvotes: 62
Reputation: 1420
Given its simplicity, you can also ditch your Dockerfile entirely: just use a basic image and specify the command in your compose file:
version: '3.2'
services:
frontend:
image: node:12-alpine
volumes:
- ./frontend/:/app/
command: sh -c "cd /app/ && yarn && yarn run start"
expose: [8080]
ports:
- 8080:4200
This is particularly useful for me, because I just need the environment of the image but operate on my files outside the container, and I think this is what you want to do too.
Upvotes: 4
Reputation: 6205
I encountered the same problem. When the folder /worker is mounted to the container, all of its content will be synchronized (so the node_modules folder will disappear if you don't have it locally).
Due to npm packages that are incompatible across OSes, I could not just install the modules locally and then launch the container, so..
My solution was to wrap the source in a src folder, then link node_modules into that folder, using this index.js file. So, the index.js file is now the starting point of my application.
When I run the container, I mount the /app/src folder to my local src folder.
So the container folder looks something like this:
/app
/node_modules
/src
/node_modules -> ../node_modules
/app.js
/index.js
It is ugly, but it works.
Upvotes: 17
Reputation: 318
You can try something like this in your Dockerfile:
FROM node:0.12
WORKDIR /worker
CMD bash ./start.sh
Then you should use the Volume like this:
volumes:
- worker/:/worker:rw
The start script should be part of your worker repository and look like this:
#!/bin/sh
npm install
npm start
So the node_modules are part of your worker volume and get synchronized, and the npm scripts are executed when everything is up.
Upvotes: 2
Reputation: 523
In my opinion, we should not RUN npm install in the Dockerfile. Instead, we can start a container using bash to install the dependencies before running the formal node service:
docker run -it -v ./app:/usr/src/app your_node_image_name /bin/bash
root@247543a930d6:/usr/src/app# npm install
Upvotes: 4
Reputation: 1970
There is also a simple solution without mapping the node_modules directory into another volume: move the npm package installation into the final CMD command.
Disadvantage of this approach:
- npm install runs each time you start the container (switching from npm to yarn might also speed up this process a bit).
FROM node:0.12
WORKDIR /worker
COPY package.json /worker/
COPY . /worker/
CMD /bin/bash -c 'npm install; npm start'
redis:
image: redis
worker:
build: ./worker
ports:
- "9730:9730"
volumes:
- worker/:/worker/
links:
- redis
Upvotes: 9
Reputation: 9560
There's an elegant solution:
Just mount not the whole directory, but only the app directory. This way you won't have troubles with node_modules.
Example:
frontend:
build:
context: ./ui_frontend
dockerfile: Dockerfile.dev
ports:
- 3000:3000
volumes:
- ./ui_frontend/src:/frontend/src
Dockerfile.dev:
FROM node:7.2.0
#Show colors in docker terminal
ENV COMPOSE_HTTP_TIMEOUT=50000
ENV TERM="xterm-256color"
COPY . /frontend
WORKDIR /frontend
RUN npm install update
RUN npm install --global typescript
RUN npm install --global webpack
RUN npm install --global webpack-dev-server
RUN npm install --global karma protractor
RUN npm install
CMD npm run server:dev
Upvotes: 24
Reputation: 141
Installing node_modules in the container into a folder different from the project folder, and setting NODE_PATH to that node_modules folder, helped me (you need to rebuild the container).
I'm using docker-compose. My project file structure:
-/myproject
--docker-compose.yml
--nodejs/
----Dockerfile
docker-compose.yml:
version: '2'
services:
nodejs:
image: myproject/nodejs
build: ./nodejs/.
volumes:
- ./nodejs:/workdir
ports:
- "23005:3000"
command: npm run server
Dockerfile in nodejs folder:
FROM node:argon
RUN mkdir /workdir
COPY ./package.json /workdir/.
RUN mkdir /data
RUN ln -s /workdir/package.json /data/.
WORKDIR /data
RUN npm install
ENV NODE_PATH /data/node_modules/
WORKDIR /workdir
Upvotes: 8
Reputation: 9977
There are two separate requirements I see for node dev environments: mount your source code INTO the container, and mount the node_modules FROM the container (for your IDE). To accomplish the first, you do the usual mount, but not everything... just the things you need:
volumes:
- worker/src:/worker/src
- worker/package.json:/worker/package.json
- etc...
(The reason not to do - /worker/node_modules is that docker-compose will persist that volume between runs, meaning you can diverge from what is actually in the image, defeating the purpose of not just bind mounting from your host.)
The second one is actually harder. My solution is a bit hackish, but it works. I have a script to install the node_modules folder on my host machine, and I just have to remember to call it whenever I update package.json (or, add it to the make target that runs docker-compose build locally).
install_node_modules:
docker build -t building .
docker run -v `pwd`/node_modules:/app/node_modules building npm install
Upvotes: 5
Reputation: 501
I recently had a similar problem. You can install node_modules elsewhere and set the NODE_PATH environment variable.
In the example below I installed node_modules into /install:
FROM node:0.12
RUN ["mkdir", "/install"]
ADD ["./package.json", "/install"]
WORKDIR /install
RUN npm install --verbose
ENV NODE_PATH=/install/node_modules
WORKDIR /worker
COPY . /worker/
redis:
image: redis
worker:
build: ./worker
command: npm start
ports:
- "9730:9730"
volumes:
- worker/:/worker/
links:
- redis
Upvotes: 39
Reputation: 6291
Due to the way Node.js loads modules, node_modules can be anywhere in the path to your source code. For example, put your source at /worker/src and your package.json in /worker, so /worker/node_modules is where they're installed.
Upvotes: 12