Handling node modules with docker-compose - node.js

I am building a set of connected node services using docker-compose and can't figure out the best way to handle node modules. Here's what should happen in a perfect world:
Full install of node_modules in each container happens on initial build via each service's Dockerfile
Node modules are cached after the initial build -- i.e., npm install only runs again when package.json has changed
There is a clear method for installing new npm modules -- whether that requires a rebuild or there is an easier way
Right now, whenever I npm install --save some-module and subsequently run docker-compose build or docker-compose up --build, I end up with the module not actually being installed.
Here is one of the Dockerfiles
FROM node:latest
# Create app directory
WORKDIR /home/app/api-gateway
# Install app dependencies (and cache if package.json is unchanged)
COPY package.json .
RUN npm install
# Bundle app source
COPY . .
# Run the start command
CMD [ "npm", "run", "dev" ]
and here is the docker-compose.yml
version: '3'
services:
  users-db:
    container_name: users-db
    build: ./users-db
    ports:
      - '27018:27017'
    healthcheck:
      test: exit 0
  api-gateway:
    container_name: api-gateway
    build: ./api-gateway
    command: npm run dev
    volumes:
      - './api-gateway:/home/app/api-gateway'
      - /home/app/api-gateway/node_modules
    ports:
      - '3000:3000'
    depends_on:
      - users-db
    links:
      - users-db

It looks like this line might be overwriting your node_modules directory:
# Bundle app source
COPY . .
If you ran npm install on your host machine before running docker build to create the image, you have a node_modules directory on your host machine that is being copied into your container.
What I like to do to address this problem is copy the individual code directories and files only, e.g.:
# Copy each directory and file
COPY ./src ./src
COPY ./index.js ./index.js
If you have a lot of files and directories this can get cumbersome, so another method would be to add node_modules to your .dockerignore file. This way it gets ignored by Docker during the build.
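For reference, a minimal .dockerignore for that approach is a single line (this is just a sketch; add other entries such as npm-debug.log if you need them):
node_modules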

Related

Install Node dependencies in Debug Container

I am currently setting up a Docker container that will be used to debug a Node.js application. This container needs to support live-reloading (using nodemon) and needs to be a Linux container (my workstation is a Windows machine).
My current setup is the following:
Dockerfile.debug
FROM node:current-alpine
VOLUME /app
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production --registry=http://172.16.102.123:8182/repository/npm/
RUN npm install -g nodemon
ENV NODE_ENV=test
EXPOSE 8000
EXPOSE 9229
CMD [ "nodemon", "--inspect=0.0.0.0:9229", "--ignore", "dist/test/**/*.js", "dist/index.js" ]
docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.debug
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - 8000:8000
Everything works fine except the dependencies, because some of these are platform specific. That means it is not possible to simply mount the node_modules directory into the container (like I do with the rest of the codebase). I tried setting up my files in such a way that the dependencies are different for each platform, but I either end up with an empty node_modules directory or with the node_modules directory from the host (the current setup gives me an empty directory). Does anybody know how to fix my problem? I have looked at other solutions (like this one) but they did not work.

Node.js docker container not updating to changes in volume

I am trying to host a development environment on my Windows machine which hosts a frontend and backend container. So far I have only been working on the backend. All files are on the C Drive which is shared via Docker Desktop.
I have the following docker-compose file and Dockerfile, the latter is inside a directory called backend within the root directory.
Dockerfile:
FROM node:12.15.0-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
EXPOSE 5000
CMD [ "npm", "start" ]
docker-compose.yml:
version: "3"
services:
backend:
container_name: backend
build:
context: ./backend
dockerfile: Dockerfile
volumes:
- ./backend:/usr/app
environment:
- APP_PORT=80
ports:
- '5000:5000'
client:
container_name: client
build:
context: ./client
dockerfile: Dockerfile
volumes:
- ./client:/app
ports:
- '80:8080'
For some reason, when I make changes to my local files they are not reflected inside the container. I am testing this by slightly modifying the output of one of my files, but I have to rebuild the container each time to see the changes take effect.
I have worked with Docker in PHP applications before, and have basically done the same thing. So I am unsure why this is not working with my Node.js app. I am wondering if I am just missing something glaringly obvious as to why this is not working.
Any help would be appreciated.
The difference between node and PHP here is that php automatically picks up file system changes between requests, but a node server doesn't.
I think you'll see that the file changes get picked up if you restart node by bouncing the container with docker-compose down then up (no need to rebuild things!).
If you want node to pick up file system changes without needing to bounce the server you can use some of the node tooling. nodemon is one: https://www.npmjs.com/package/nodemon. Follow the installation instructions for local installation and update your start script to use nodemon instead of node.
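For example, the start script in package.json might end up looking something like this (a sketch; the entry file name here is an assumption):
"scripts": {
  "start": "nodemon server.js"
}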
Plus, I really do think you have a mistake in your Dockerfile: you need to copy the source code into your working directory. I'm assuming you got your initial recipe from here: https://dev.to/alex_barashkov/using-docker-for-nodejs-in-development-and-production-3cgp. That Dockerfile is below -- you missed a step (the COPY . . line)!
FROM node:10-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD [ "npm", "start" ]

docker container crashes with exited with code 139

I have this Dockerfile to build and run a Node application:
# ---- Base Node ----
FROM node:10 AS base
# Create app directory
WORKDIR /app
# ---- Dependencies ----
FROM base AS dependencies
COPY package.json ./
# install app dependencies including 'devDependencies'
RUN npm install
# ---- Copy Files/Build ----
FROM dependencies AS build
WORKDIR /app
COPY . /app
# Build the app
RUN npm run build
WORKDIR /app/dist
# install npm modules in dist
RUN npm install --only=production
# --- Release with Alpine ----
FROM node:10-alpine AS release
# Create app directory
WORKDIR /app
# optional
ENV NODE_ENV=development
ENV MONGO_HOST=mongodb://localhost/chronas-api
ENV MONGO_PORT=27017
ENV PORT=80
# copy app from build
COPY --from=build /app/dist/ ./
CMD ["node", "index.js"]
and using this docker-compose file
version: '3'
services:
  database:
    image: mongo
    container_name: mongo
    ports:
      - "27017:27017"
  app:
    build: .
    container_name: chronas_api
    ports:
      - "80:80"
      - "5858:5858"
    links:
      - database
    environment:
      - JWT_SECRET='placeholder'
      - MONGO_HOST=mongodb://database/chronas-api
      - APPINSIGHTS_INSTRUMENTATIONKEY='placeholder'
      - TWITTER_CONSUMER_KEY=placeholder
      - TWITTER_CONSUMER_SECRET=placeholder
      - TWITTER_CALLBACK_URL=placeholder
      - PORT=80
    depends_on:
      - database
    stdin_open: true
    tty: true
Whenever I try to write to MongoDB, the node container crashes with this error:
exited with code 139
Can anyone help? When I run the application with docker only (without compose) it works fine.
The issue may be caused by the fact that node:10-alpine is a tag whose underlying image changes with updates, so when you build the same app without compose it won't pull the most recent image, whereas docker-compose will pull from Docker Hub instead.
Images based on Alpine may have dependency issues that are quite hard to debug from one version to another; you can find some possibilities for this particular issue here.
I was using the tag node:8-alpine in my application, and I found out that the current latest node:8.15.1-alpine causes the Exited with code 139 issue that was not present in the previous image node:8.15.0-alpine. Downgrading may be the easiest way to solve this kind of issue; check whether you are using bcrypt too.
Another option is to use a Debian-based image, which is less likely to have this kind of issue (just consider that it's slightly bigger in size).
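If you want to try the downgrade/pin route, a sketch of what that could look like in the release stage (the exact patch version to pin is an assumption, not a known-good value):
# pin an exact patch release instead of the floating node:10-alpine tag
FROM node:10.15.3-alpine AS release
# ...or trade image size for stability with the Debian-based variant
# FROM node:10 AS release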

Docker with node bcrypt — invalid ELF header

I've tried every solution from this post and this post
I'm not finding a solution to get rid of the following error when running docker-compose up:
module.js:598
return process.dlopen(module, path._makeLong(filename));
^
Error: /code/node_modules/bcrypt/lib/binding/bcrypt_lib.node: invalid ELF header
Here's my latest attempt docker-compose.yml
version: "2"
services:
app:
build: ./client
ports:
- "3000:3000"
links:
- auth
volumes:
- ./client:/code
auth:
build: ./auth-service
ports:
- "3002:3002"
links:
- db
volumes:
- ./auth-service:/code
db:
...
And my auth service Dockerfile:
FROM node:7.7.1
EXPOSE 3002
WORKDIR /code
COPY package.json /code
RUN npm install
COPY . /code
CMD npm start
After trying each of the solutions from the above two links, I rebuilt the containers, and it always results in the same error.
Also worth noting, the service runs fine locally, when I don't use docker.
How do I get docker to work with bcrypt?
Update
I was able to get it working by doing the following:
finding the id of the container: docker ps
accessing the container: docker exec -t -i containerId /bin/bash
installing bcrypt: npm install bcrypt
This isn't ideal for portability, though.
I spent a few hours trying to solve this and in the end I came up with the following solution.
My compose file looks like this:
version: "3"
services:
server:
build:
context: ./server
volumes:
- ./server:/usr/src/app
- /usr/src/app/node_modules/
ports:
- 3050:3050
depends_on:
- db
command: ["nodemon", "./bin/www"]
The second volume mount there is the important one, as this gets around the local node_modules issue.
Just for reference, my Dockerfile looks like this:
FROM node
RUN npm install -g nodemon
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install
EXPOSE 3050
CMD ["nodemon", "./bin/www"]
I was struggling with this one a few times. Sadly, the .dockerignore solution won't work if you use volumes, since it only affects the COPY command and only when you build the image.
The only solution I found that I think makes the most sense is to mount as a volume only what you need. What does that mean? Separate your source code from the "configuration" files (such as package.json):
- src
-- morecode
-- morecode2
-- server.js
- index.js
- package.json
- jslint.json
- tsconfig.json
- .env
- .dockerignore
- ... etc
and put the volume only on the "src" folder. That way your builds will also be much faster, plus your node modules will be built and installed on the correct operating system. Don't forget to add node_modules to .dockerignore to prevent builds from taking unnecessarily long.
Do note that doing so will require a rebuild of the application every time you're adding a new package, but if you follow npm best practice and split the npm installation in your Dockerfile so it can be cached (a sketch of that pattern follows), it will be faster.
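A sketch of that caching pattern, reusing the auth service's Dockerfile from the question: the package file is copied and installed before the rest of the source, so the npm install layer is reused until package.json changes (the ./src path is an assumption about the layout).
FROM node:7.7.1
EXPOSE 3002
WORKDIR /code
# this layer is only rebuilt when package.json changes
COPY package.json /code
RUN npm install
# source changes only invalidate the layers below
COPY ./src /code/src
CMD npm start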
The reason this error occurs is that the node module bcrypt is first compiled on your original machine (specific to your OS), and when the image is built on Docker it cannot run, since the OS is no longer the same. Solution: create a .dockerignore file in your root folder and add node_modules to it.

Docker-compose: node_modules not present in a volume after npm install succeeds

I have an app with the following services:
web/ - holds and runs a python 3 flask web server on port 5000. Uses sqlite3.
worker/ - has an index.js file which is a worker for a queue. The web server interacts with this queue using a JSON API over port 9730. The worker uses redis for storage. The worker also stores data locally in the folder worker/images/
Now this question only concerns the worker.
worker/Dockerfile
FROM node:0.12
WORKDIR /worker
COPY package.json /worker/
RUN npm install
COPY . /worker/
docker-compose.yml
redis:
  image: redis
worker:
  build: ./worker
  command: npm start
  ports:
    - "9730:9730"
  volumes:
    - worker/:/worker/
  links:
    - redis
When I run docker-compose build, everything works as expected and all npm modules are installed in /worker/node_modules as I'd expect.
npm WARN package.json unfold#1.0.0 No README data
> phantomjs#1.9.2-6 install /worker/node_modules/pageres/node_modules/screenshot-stream/node_modules/phantom-bridge/node_modules/phantomjs
> node install.js
<snip>
But when I do docker-compose up, I see this error:
worker_1 | Error: Cannot find module 'async'
worker_1 | at Function.Module._resolveFilename (module.js:336:15)
worker_1 | at Function.Module._load (module.js:278:25)
worker_1 | at Module.require (module.js:365:17)
worker_1 | at require (module.js:384:17)
worker_1 | at Object.<anonymous> (/worker/index.js:1:75)
worker_1 | at Module._compile (module.js:460:26)
worker_1 | at Object.Module._extensions..js (module.js:478:10)
worker_1 | at Module.load (module.js:355:32)
worker_1 | at Function.Module._load (module.js:310:12)
worker_1 | at Function.Module.runMain (module.js:501:10)
Turns out none of the modules are present in /worker/node_modules (on host or in the container).
If on the host, I npm install, then everything works just fine. But I don't want to do that. I want the container to handle dependencies.
What's going wrong here?
(Needless to say, all packages are in package.json.)
This happens because you have added your worker directory as a volume to your docker-compose.yml, as the volume is not mounted during the build.
When docker builds the image, the node_modules directory is created within the worker directory, and all the dependencies are installed there. Then on runtime the worker directory from outside docker is mounted into the docker instance (which does not have the installed node_modules), hiding the node_modules you just installed. You can verify this by removing the mounted volume from your docker-compose.yml.
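A quick way to see this for yourself (a sketch; the image tag is arbitrary) is to build and run the image directly, without the compose file and therefore without the bind mount, and list the directory:
docker build -t worker-test ./worker
docker run --rm worker-test ls /worker/node_modules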
A workaround is to use a data volume to store all the node_modules, as data volumes copy in the data from the built docker image before the worker directory is mounted. This can be done in the docker-compose.yml like this:
redis:
  image: redis
worker:
  build: ./worker
  command: npm start
  ports:
    - "9730:9730"
  volumes:
    - ./worker/:/worker/
    - /worker/node_modules
  links:
    - redis
I'm not entirely certain whether this imposes any issues for the portability of the image, but as it seems you are primarily using docker to provide a runtime environment, this should not be an issue.
If you want to read more about volumes, there is a nice user guide available here: https://docs.docker.com/userguide/dockervolumes/
EDIT: Docker has since changed its syntax to require a leading ./ when mounting files relative to the docker-compose.yml file.
The node_modules folder is overwritten by the volume and is no longer accessible in the container. I'm using the native module loading strategy to take the folder out of the volume:
/data/node_modules/ # dependencies installed here
/data/app/ # code base
Dockerfile:
COPY package.json /data/
WORKDIR /data/
RUN npm install
ENV PATH /data/node_modules/.bin:$PATH
COPY . /data/app/
WORKDIR /data/app/
The node_modules directory is not accessible from outside the container because it is included in the image.
The solution provided by @FrederikNS works, but I prefer to explicitly name my node_modules volume.
My project/docker-compose.yml file (docker-compose version 1.6+):
version: '2'
services:
  frontend:
    ....
    build: ./worker
    volumes:
      - ./worker:/worker
      - node_modules:/worker/node_modules
    ....
volumes:
  node_modules:
my file structure is :
project/
│── worker/
│  └─ Dockerfile
└── docker-compose.yml
It creates a volume named project_node_modules and reuses it every time I bring my application up.
My docker volume ls output looks like this:
DRIVER VOLUME NAME
local project_mysql
local project_node_modules
local project2_postgresql
local project2_node_modules
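One caveat with a named volume (my note, not part of the answer above): it keeps its contents between runs, so after adding a new dependency to package.json you may need to refresh it, for example:
docker-compose down
docker volume rm project_node_modules
docker-compose up --build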
I recently had a similar problem. You can install node_modules elsewhere and set the NODE_PATH environment variable.
In the example below I installed node_modules into /install
worker/Dockerfile
FROM node:0.12
RUN ["mkdir", "/install"]
ADD ["./package.json", "/install"]
WORKDIR /install
RUN npm install --verbose
ENV NODE_PATH=/install/node_modules
WORKDIR /worker
COPY . /worker/
docker-compose.yml
redis:
  image: redis
worker:
  build: ./worker
  command: npm start
  ports:
    - "9730:9730"
  volumes:
    - worker/:/worker/
  links:
    - redis
There's an elegant solution:
Just mount only the app directory, not the whole project directory. This way you won't have trouble with node_modules.
Example:
frontend:
  build:
    context: ./ui_frontend
    dockerfile: Dockerfile.dev
  ports:
    - 3000:3000
  volumes:
    - ./ui_frontend/src:/frontend/src
Dockerfile.dev:
FROM node:7.2.0
#Show colors in docker terminal
ENV COMPOSE_HTTP_TIMEOUT=50000
ENV TERM="xterm-256color"
COPY . /frontend
WORKDIR /frontend
RUN npm install update
RUN npm install --global typescript
RUN npm install --global webpack
RUN npm install --global webpack-dev-server
RUN npm install --global karma protractor
RUN npm install
CMD npm run server:dev
UPDATE: Use the solution provided by @FrederikNS.
I encountered the same problem. When the folder /worker is mounted to the container, all of its content will be synchronized (so the node_modules folder will disappear if you don't have it locally).
Because some npm packages are OS-specific, I could not just install the modules locally and then launch the container, so...
My solution to this was to wrap the source in a src folder, then link node_modules into that folder, using this index.js file. So, the index.js file is now the starting point of my application.
When I run the container, I mount the /app/src folder to my local src folder.
So the container folder looks something like this:
/app
  /node_modules
  /src
    /node_modules -> ../node_modules
    /app.js
  /index.js
It is ugly, but it works.
Due to the way Node.js loads modules, node_modules can be anywhere in the path to your source code. For example, put your source at /worker/src and your package.json in /worker, so /worker/node_modules is where they're installed.
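A minimal Dockerfile sketch of that layout, reusing the question's paths (the ./src directory and the compose change below are my assumptions about how the project would be reorganized):
FROM node:0.12
WORKDIR /worker
COPY package.json /worker/
RUN npm install
# source now lives one level down, in /worker/src
COPY ./src /worker/src
CMD npm start
With this layout the compose file would bind mount only ./worker/src:/worker/src, so the /worker/node_modules created at build time is never hidden by the mount.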
There is also a simple solution that doesn't require mapping the node_modules directory into another volume: move the npm package installation into the final CMD command.
Disadvantage of this approach:
npm install runs each time you start the container (switching from npm to yarn might also speed up this process a bit).
worker/Dockerfile
FROM node:0.12
WORKDIR /worker
COPY package.json /worker/
COPY . /worker/
CMD /bin/bash -c 'npm install; npm start'
docker-compose.yml
redis:
  image: redis
worker:
  build: ./worker
  ports:
    - "9730:9730"
  volumes:
    - worker/:/worker/
  links:
    - redis
Installing node_modules in the container into a folder different from the project folder, and setting NODE_PATH to that node_modules folder, works for me (you need to rebuild the container).
I'm using docker-compose. My project file structure:
-/myproject
--docker-compose.yml
--nodejs/
----Dockerfile
docker-compose.yml:
version: '2'
services:
  nodejs:
    image: myproject/nodejs
    build: ./nodejs/.
    volumes:
      - ./nodejs:/workdir
    ports:
      - "23005:3000"
    command: npm run server
Dockerfile in nodejs folder:
FROM node:argon
RUN mkdir /workdir
COPY ./package.json /workdir/.
RUN mkdir /data
RUN ln -s /workdir/package.json /data/.
WORKDIR /data
RUN npm install
ENV NODE_PATH /data/node_modules/
WORKDIR /workdir
There are two separate requirements I see for node dev environments... mount your source code INTO the container, and mount the node_modules FROM the container (for your IDE). To accomplish the first, you do the usual mount, but not everything... just the things you need
volumes:
  - worker/src:/worker/src
  - worker/package.json:/worker/package.json
  - etc...
(The reason not to do - /worker/node_modules is that docker-compose will persist that volume between runs, meaning you can diverge from what is actually in the image, defeating the purpose of not just bind mounting from your host.)
The second one is actually harder. My solution is a bit hackish, but it works. I have a script to install the node_modules folder on my host machine, and I just have to remember to call it whenever I update package.json (or, add it to the make target that runs docker-compose build locally).
install_node_modules:
	docker build -t building .
	docker run -v `pwd`/node_modules:/app/node_modules building npm install
In my opinion, we should not RUN npm install in the Dockerfile. Instead, we can start a container with bash to install the dependencies before running the actual node service:
docker run -it -v ./app:/usr/src/app your_node_image_name /bin/bash
root#247543a930d6:/usr/src/app# npm install
You can also ditch your Dockerfile entirely because of its simplicity: just use a base image and specify the command in your compose file:
version: '3.2'
services:
  frontend:
    image: node:12-alpine
    volumes:
      - ./frontend/:/app/
    command: sh -c "cd /app/ && yarn && yarn run start"
    expose: [8080]
    ports:
      - 8080:4200
This is particularly useful for me, because I just need the environment of the image, but operate on my files outside the container and I think this is what you want to do too.
You can just move node_modules up into the / folder.
How it works: Node's module resolution walks up through parent directories, so code in /worker still finds the modules after they are moved to /node_modules.
FROM node:0.12
WORKDIR /worker
COPY package.json /worker/
RUN npm install \
&& mv node_modules /node_modules
COPY . /worker/
You can try something like this in your Dockerfile:
FROM node:0.12
WORKDIR /worker
CMD bash ./start.sh
Then you should use the volume like this:
volumes:
  - worker/:/worker:rw
The start script should be part of your worker repository and look like this:
#!/bin/sh
npm install
npm start
So node_modules is part of your worker volume and gets synchronized, and the npm scripts are executed once everything is up.
If you want the node_modules folder available to the host during development, you could install the dependencies when you start the container instead of during build-time. I do this to get syntax highlighting working in my editor.
Dockerfile
# We're using a multi-stage build so that we can install dependencies during build-time only for production.
# dev-stage
FROM node:14-alpine AS dev-stage
WORKDIR /usr/src/app
COPY package.json ./
COPY . .
# `yarn install` will run every time we start the container. We're using yarn because it's much faster than npm when there's nothing new to install
CMD ["sh", "-c", "yarn install && yarn run start"]
# production-stage
FROM node:14-alpine AS production-stage
WORKDIR /usr/src/app
COPY package.json ./
RUN yarn install
COPY . .
.dockerignore
Add node_modules to .dockerignore to prevent it from being copied when the Dockerfile runs COPY . . (we use volumes to bring in node_modules instead).
**/node_modules
docker-compose.yml
node_app:
  container_name: node_app
  build:
    context: ./node_app
    target: dev-stage # `production-stage` for production
  volumes:
    # For development:
    # If node_modules already exists on the host, they will be copied
    # into the container here. Since `yarn install` runs after the
    # container starts, this volume won't override the node_modules.
    - ./node_app:/usr/src/app
    # For production:
    #
    - ./node_app:/usr/src/app
    - /usr/src/app/node_modules
I tried the most popular answers on this page but ran into an issue: the node_modules directory in my Docker instance would get cached in the named or unnamed mount point, and later would overwrite the node_modules directory that was built as part of the Docker build process. Thus, new modules I added to package.json would not show up in the Docker instance.
Fortunately I found this excellent page which explains what was going on and gives at least 3 ways to work around it:
https://burnedikt.com/dockerized-node-development-and-mounting-node-volumes/
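One related workaround (my addition; it may or may not be among the ones on that page) is to ask docker-compose to recreate anonymous volumes instead of reusing the stale ones, which requires docker-compose 1.22 or newer:
docker-compose up --build --renew-anon-volumes
# short form
docker-compose up --build -V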
If you don't use docker-compose you can do it like this:
FROM node:10
WORKDIR /usr/src/app
RUN npm install -g @angular/cli
COPY package.json ./
RUN npm install
EXPOSE 5000
CMD ng serve --port 5000 --host 0.0.0.0
Then you build it: docker build -t myname . and you run it by adding two volumes, the second one without source: docker run --rm -it -p 5000:5000 -v "$PWD":/usr/src/app/ -v /usr/src/app/node_modules myname
With Yarn you can move the node_modules outside the volume by setting
# ./.yarnrc
--modules-folder /opt/myproject/node_modules
See https://www.caxy.com/blog/how-set-custom-location-nodemodules-path-yarn
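A rough sketch of how that could fit into an image build (the base image, paths, and entry point are assumptions; it relies on Yarn 1 reading the --modules-folder flag from .yarnrc):
FROM node:12
WORKDIR /opt/myproject
COPY package.json yarn.lock .yarnrc ./
# installs into /opt/myproject/node_modules per .yarnrc
RUN yarn install
WORKDIR /opt/myproject/app
COPY . .
# /opt/myproject/node_modules is a parent directory, so Node's module lookup still finds it
CMD ["node", "index.js"]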
