Node.js + Docker Compose: node_modules disappears

I'm attempting to use Docker Compose to bring together a number of Node.js apps in my development environment. I'm running into an issue, however, with node_modules.
Here's what's happening:
npm install is run as part of the Dockerfile.
I do not have node_modules in my local directory. (I shouldn't because the installation of dependencies should happen in the container, right? It seems to defeat the purpose otherwise, since I'd need to have Node.js installed locally.)
In docker-compose.yml, I'm setting up a volume with the source code.
docker-compose build runs fine.
When I docker-compose up, the node_modules directory disappears in the container — I'm assuming because the volume is mounted and I don't have it in my local directory.
How do I ensure that node_modules sticks around?
Dockerfile
FROM node:0.10.37
COPY package.json /src/package.json
WORKDIR /src
RUN npm install -g grunt-cli && npm install
COPY . /src
EXPOSE 9001
CMD ["npm", "start"]
docker-compose.yml
api:
  build: .
  command: grunt
  links:
    - elasticsearch
  ports:
    - "9002:9002"
  volumes:
    - .:/src
elasticsearch:
  image: elasticsearch:1.5

Due to the way Node.js loads modules (it walks up the directory tree looking for a node_modules folder), simply place node_modules higher in the source code path. For example, put your source at /app/src and your package.json in /app, so /app/node_modules is where npm installs the dependencies.
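As a rough sketch of that layout (adapted from the question's files; the /app paths follow the example above and are one possible choice, not a requirement), the Dockerfile installs dependencies one level above the directory that gets bind-mounted:
Dockerfile
FROM node:0.10.37
# install dependencies one level above the mounted source
COPY package.json /app/package.json
WORKDIR /app
RUN npm install -g grunt-cli && npm install
# the application itself lives one level down
COPY . /app/src
WORKDIR /app/src
EXPOSE 9001
CMD ["npm", "start"]
docker-compose.yml (only the changed parts of the api service shown)
api:
  build: .
  volumes:
    - .:/app/src
Because the bind mount only covers /app/src, the /app/node_modules baked into the image survives at run time, and require() finds it by walking up the directory tree from /app/src.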

I tried your fix, but the issue is that most people run npm install in the /usr/src/app directory, so that is where the node_modules folder ends up. As a result the node_modules folder ends up in both the /usr/src and /usr/src/app directories in the container and you end up with the same issue you started with.

Related

How to prevent a non-empty host folder from overriding a mounted container folder?

I am trying to compile my assets using Docker for a Laravel project.
So, I have created a service called npm which is built from the following Dockerfile:
FROM node:16-alpine as node
WORKDIR /usr/src
ADD ./resources ./resources
COPY ["package.json", "package-lock.json", "vite.config.js", "./"]
RUN npm install --global cross-env
RUN npm install
RUN npm run build
Also, I am using the following Docker-compose configuration
node:
  build:
    context: ./
    dockerfile: ./services/nodejs/Dockerfile
  working_dir: /var/www
  container_name: "nodejs"
  volumes:
    - ./:/var/www
  tty: true
  depends_on:
    - php
Although the service is built successfully, it seems that my host directory (which is non-empty) is overriding the content of my node container. So, eventually I end up with no "node_modules" directory and my compiled assets and resources get lost.
So, what should I do?
I think I could first copy the content of my host folder to the container, then delete the content of my host folder, run my scripts, and then copy everything back. But that seems very time-consuming. What is the best practice for cases like this? I'm sure I'm not the first one who has dockerized a full-stack Laravel project. Thanks in advance.
It doesn't look like you COPY your application code into the container. That leaves you in a position where you need to bind-mount things in, but since the host directory is missing some built parts, the final container image is incomplete.
There's an especially important detail here around using this container as part of your build sequence which I'll get back to.
The first thing to do is to make sure all of the things you need are in the image itself.
FROM node:16-alpine as node
WORKDIR /usr/src
RUN npm install --global cross-env
# run `npm install` first to avoid rebuilds if `package.json` is unchanged
COPY ["package.json", "package-lock.json", "vite.config.js", "./"]
RUN npm install
# then copy in the rest of the application
# (make sure `node_modules` and `build` are in `.dockerignore`)
COPY ./ ./
RUN npm run build
# and explain how to run the container (*)
CMD npm run start
Then in your docker-compose.yml file, you don't need to specify volumes: because the code is already in your image. You similarly don't need to declare a working_dir: because the Dockerfile sets WORKDIR already. This section of your Compose file can probably be trimmed down to
node:
  build:
    context: ./
    dockerfile: ./services/nodejs/Dockerfile
  tty: true # actually needed?
  depends_on:
    - php
I don't think you actually want this container, though, since
I am trying to compile my assets using Docker for a Laravel project.
In this case you can use this Dockerfile (and it must be the complete self-contained Dockerfile) as part of a multi-stage build. Put this Dockerfile at the start of your PHP application Dockerfile and give it a name, and then you can COPY the built assets from one stage to another.
# `AS somename` on the FROM line is important
FROM node:16-alpine AS node
...
RUN npm run build
# no specific need for a CMD here
FROM php:fpm
...
COPY --from=node /usr/src/build ./static/
Again, since this will get filled in at Docker image build time, it's important to not overwrite the image content with a volumes: mount.
If you do this then there's not a "Node container" at all, and you can just outright remove this from your docker-compose.yml file. For this setup you don't need a long-running server, you just need to have Node available to run a compilation step at build time, and including it in a Dockerfile stage is enough for that.
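For reference, a sketch of what remains in the Compose file once the assets are built inside the PHP image's multi-stage Dockerfile (the php service's Dockerfile path here is an assumption based on the layout above):
php:
  build:
    context: ./
    dockerfile: ./services/php/Dockerfile   # path assumed; the multi-stage file that starts with the node stage
  # no separate node service, and no volumes: mount hiding the assets built into the image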

Command in dockerfile doesn't take effect

Below is my dockerfile. After the dependencies are installed, I want to delete a specific binary (ffmpeg) from the node_modules folder on the container, and then reinstall it using the install.js file that exists under the same folder in node_modules.
FROM node:16-alpine
WORKDIR /web
COPY package.json package-lock.json ./
ARG NODE_ENV
ENV NODE_ENV ${NODE_ENV:-development}
ARG _ENV
ENV _ENV ${_ENV:-dev}
RUN npm install
RUN rm /web/node_modules/ffmpeg-static/ffmpeg
RUN node /web/node_modules/ffmpeg-static/install.js
COPY . .
EXPOSE 8081
ENTRYPOINT [ "npm" ]
I want the rm and node commands to take effect on the container after the startup and when all the dependencies are installed and copied to the container, but it doesn't happen. I feel like the commands after RUN are only executed during the build process, not after the container starts up.
After the container is up, when I ssh to it and execute the two commands above directly (RUN rm... and RUN node...), my changes are taking effect and everything works perfectly. I basically want these commands to automatically run after the container is up and running.
The docker-compose.yaml file looks like this:
version: '3'
services:
  serverless:
    build: web
    user: root
    ports:
      - '8081:8081'
    env_file:
      - env/web.env
    environment:
      - _ENV=dev
    command:
      - run
      - start
    volumes:
      - './web:/web'
      - './env/mount:/secrets/'
If this Dockerfile builds, it means you have a package-lock.json in your build context. That is evidence that npm install was executed locally in that directory, which means a node_modules folder exists locally and is being pulled in by the last COPY.
You can avoid this by creating a .dockerignore file that lists the files and directories (such as node_modules) you want excluded from the Docker build context, which makes sure they never get copied into your final image.
You can follow this example:
https://stackoverflow.com/a/43747867/3669093
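For reference, a minimal .dockerignore next to the Dockerfile might look like this (the exact entries are an assumption; add anything else that should stay out of the build context):
node_modules
npm-debug.log
.git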
Update
Dockerfile: replace the ENTRYPOINT with the following (make sure start.sh is executable, and drop the command: from docker-compose.yaml so it doesn't override this CMD)
COPY ./start.sh /start.sh
CMD ["/start.sh"]
start.sh
#!/bin/sh
rm /web/node_modules/ffmpeg-static/ffmpeg
node /web/node_modules/ffmpeg-static/install.js
exec npm run start
Try rm -rf /web/node_modules/ffmpeg-static/ffmpeg
I assume that is a directory, not a file.

Docker: node_modules symlink not working for typescript

I am working on containerizing an Express app written in TypeScript, but I am not able to link the node_modules installed inside the container. A volume is also mounted for development, but I still get an error in the editor (VS Code): Cannot find module 'typeorm' or its corresponding type declarations. The same goes for all dependencies.
volumes:
  - .:/usr/src/app
Dockerfile:
FROM node:16.8.0-alpine3.13 as builder
WORKDIR /usr/src/app
COPY package.json .
COPY transformPackage.js .
RUN ["node", "transformPackage"]
FROM node:16.8.0-alpine3.13
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app/package-docker.json package.json
RUN apk update && apk upgrade
RUN npm install --quiet && mv node_modules ../ && ln -sf ../node_modules node_modules
COPY . .
EXPOSE 3080
ENV NODE_PATH=./dist
RUN npm run build
CMD ["npm", "start"]
I have one workaround where I install the dependencies locally and then use those, but I need another solution where dependencies are installed only in the container and not on the host.
Thanks in advance.
Your first code section implies you are using docker-compose, and the build of the Dockerfile is probably done there as well.
The point is that the volume mappings in docker-compose are not available during the build phase of that same service.
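If the goal is to keep dependencies only in the container, one commonly used pattern (not part of this answer, shown only as a sketch using the paths from the question) is to let the bind mount cover the source while masking node_modules with an anonymous volume, so the modules installed at build time remain visible inside the container:
volumes:
  - .:/usr/src/app
  - /usr/src/app/node_modules
Note that this keeps the container working after the mount, but it does not by itself make the modules visible to an editor running on the host.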

Docker Compose node_modules in container empty after file change

I am trying to run a Node.js in a Docker container via Docker Compose.
The node_modules should be created in the image and the source code should be synced from the host.
Therefore, I use two volumes in docker-compose.yml: one for my project source and the other for the node_modules in the image.
Everything seems to be working. The node_modules are installed and nodemon starts the app. In my docker container I have a node_modules folder with all dependencies. On my host an empty node_modules is created (I am not sure if this is expected).
But when I change a file in the project, the nodemon process detects the change and restarts the app. Now the app crashes because it can't find its modules: the node_modules folder in the Docker container is now empty.
What am I doing wrong?
My folder structure looks like this
/
├── docker-compose.yml
├── project/
│ ├── package.json
│ ├── Dockerfile
docker-compose.yml
version: '3'
services:
  app:
    build: ./project
    volumes:
      - ./project/:/app/
      - /app/node_modules/
project/Dockerfile
# base image
FROM node:9
ENV APP_ROOT /app
# set working directory
RUN mkdir $APP_ROOT
WORKDIR $APP_ROOT
# install and cache app dependencies
COPY package.json $APP_ROOT
COPY package-lock.json $APP_ROOT
RUN npm install
# add app
COPY . $APP_ROOT
# start app
CMD ["npm", "run", "start"]
project/package.json
...
"scripts": {
"start": "nodemon index.js"
}
...
Mapping a volume makes files from the host available to the container, not the other way round.
You can fix your issue by running npm install as part of the CMD. You can achieve this by having a "startup" script (e.g. start.sh) that runs npm install && npm run start. The script should be copied into the container with a normal COPY command and made executable.
When you start your container, you should then see files in the node_modules folder on the host.
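A minimal sketch of that startup-script approach (the exact Dockerfile changes are an assumption, not spelled out in the answer):
project/start.sh
#!/bin/sh
# install dependencies into the bind-mounted project directory, then hand off to the app
npm install
exec npm run start
project/Dockerfile (replacing the CMD)
COPY start.sh $APP_ROOT
RUN chmod +x $APP_ROOT/start.sh
CMD ["./start.sh"]
With this in place, npm install runs each time the container starts, so node_modules is repopulated even after the bind mount takes effect.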
You could use either of these two solutions:
npm install on the host, and make a volume that the container can use which references the host node_modules folder.
Run npm install in the image/container build process, which can be a pain for a development setup since it will run npm install on every rebuild after you change some files.

How can I copy node_modules folder out of my docker container onto the build machine?

I am moving an application to a new build pipeline. On CI I am not able to install node to complete the NPM install step.
My idea is to move the npm install step into a Docker image that uses Node, install the node modules there, and then copy the node modules back to the host so another process can package up the application.
This is my Dockerfile:
FROM node:9
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY ./dashboard-interface/package.json /usr/src/app/
RUN npm install --silent --production
# Bundle app src
COPY node_modules ./dashboard-interface/node_modules #I thought this would copy the new node_modules back to the host
This runs fine and installs the node modules, but when I try to copy the node_modules directory back to the host I see an error saying:
COPY node_modules ./dashboard-interface/node_modules
COPY failed: stat /var/lib/docker/tmp/docker-builder718557240/node_modules: no such file or directory
So it's clear that the copy process cannot find the node_modules directory that it has just installed the node modules into.
According to the documentation of the COPY instruction, COPY copies files from the host (the build context) into the image, not the other way around.
If you want files from the container to be available outside the container, you can use volumes. A volume gives your container storage that is independent of the container itself, so you can also reuse it for other containers in the future.
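As a sketch of that volume-based idea (the image name and output path are assumptions), you can build the image without the failing last COPY, then run it once with a bind mount and copy node_modules out to the host:
docker build -t dashboard-deps .
docker run --rm -v "$PWD/dashboard-interface":/out dashboard-deps \
    cp -a /usr/src/app/node_modules /out/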
Let me try to solve the issue you are having.
Here is the Dockerfile
# Use alpine for a slimmer image
FROM node:9-alpine
RUN mkdir /app
WORKDIR /app
COPY /dashboard-folder/package.json .
RUN npm i --production
COPY node_modules ./root
This assumes your project structure is like so:
|root
| Dockerfile
|
\---dashboard-folder
package.json
where root is your working directory that will receive node_modules.
Build this image with docker build . -t name and then use it like so:
docker run -it --rm -v ${PWD}:/app/root NAME mv node_modules ./root
Should do the trick.
The simple and sure way is to use volume mapping. For example, the docker-compose YAML file will have a volumes section that looks like this:
....
volumes:
  - ./:/usr/src/app
  - /usr/src/app/node_modules
For the docker run command, use:
-v "$(pwd)":/usr/src/app
(the host side of -v must be an absolute path, so expand the current directory explicitly)
and in the Dockerfile, define:
VOLUME /usr/src/app
VOLUME /usr/src/app/node_modules
But confirm first that the run of npm install did create the node_modules directory on the host system.
The main reason you are hitting the problem depends on the OS running on your host. If your host is running Linux there will be no problem, but if your host is on Mac or Windows, then Docker is actually running inside a VM which is hidden from you, and hence the path cannot be mapped directly to the host system. Instead, you can use a volume.
