What causes yarn/npm install to not install any packages in a volume?
I have a Dockerfile with a RUN yarn install instruction (tested with both npm and Yarn), but the node_modules directory inside the container ends up empty. If I docker exec -it service_name bash and run the install command manually, the packages install correctly.
I noticed this after a refactor: previously I had a worker service that handled the installation and a second service that ran the development server. I decided to keep everything in a single docker-compose declaration instead, and the issue started then and has persisted ever since. I tried a full reset without success (down, removing containers, pruning, etc.).
The service in question, as declared in the docker-compose file:
node_dev:
  build:
    context: .
    dockerfile: ./.docker/dockerFiles/node.yml
  image: foobar/node_dev:latest
  container_name: node_dev
  working_dir: /home/node/app
  ports:
    - 8000:8000
    - 9000:9000
  environment:
    - NODE_ENV=development
    - GATSBY_WEBPACK_PUBLICPATH=/
  volumes:
    - ./foobar-blog-ui/:/home/node/app
    - ui_node_modules:/home/node/app/node_modules
    - ui_gatsbycli_node_module:/usr/local/lib/node_modules/gatsby-cli
    - ./.docker/scripts/wait-for-it.sh:/home/node/wait-for-it.sh
  command: /bin/bash -c '/home/node/wait-for-it.sh wordpress-reverse-proxy:80 -t 10 -- yarn start'
  depends_on:
    - mysql
    - wordpress
  networks:
    - foobar-wordpress-network
The related top-level volumes referenced by the service:
volumes:
  ui_node_modules:
  ui_gatsbycli_node_module:
Finally, the Dockerfile that generates the image:
FROM node:8.16.0-slim
ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}
WORKDIR /home/node/app
RUN apt-get update
RUN apt-get install -y rsync vim git libpng-dev libjpeg-dev libxi6 build-essential libgl1-mesa-glx
RUN yarn global add gatsby-cli
RUN yarn install
I also tried yarn install --force --no-lockfile, and tested both with and without the package.json and lock files present in the project root.
I find this quite odd; it is surely just a typo somewhere, but I haven't been able to spot it yet.
The host system is macOS Mojave.
I'd like to reiterate that if I docker exec -it service_name bash and run the npm/yarn install manually, node_modules is populated with the packages.
Before most of these tests, I also tried to reset everything with:
docker-compose stop
docker-compose rm -f
docker volume prune -f
docker network prune -f
And I have now also tested with:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
docker volume prune -f
docker volume rm $(docker volume ls -qf dangling=true)
docker network prune -f
docker system prune --all --force --volumes
rm -rf ./.docker/certs/ ./.docker/certs-data/ ./.docker/logs/nginx/ ./.docker/mysql/data
The build logs for the image in question:
Building node_dev
Step 1/8 : FROM node:8.16.0-slim
8.16.0-slim: Pulling from library/node
9fc222b64b0a: Pull complete
7d73b1e8f94b: Pull complete
1059045652d5: Pull complete
08cd60b80e4e: Pull complete
b7d875c65da4: Pull complete
Digest: sha256:0ec7ac448d11fa1d162fb6fd503ec83747c80dcf74bdf937b507b189b610756a
Status: Downloaded newer image for node:8.16.0-slim
---> 67857c9b26e1
Step 2/8 : ARG NODE_ENV=development
---> Running in da99a137d733
Removing intermediate container da99a137d733
---> 0f9b718d3f66
Step 3/8 : ARG NPM_TOKEN=3ea44a41-9293-4569-a235-a622ae216d60
---> Running in e339a4939029
Removing intermediate container e339a4939029
---> e47b42008bc3
Step 4/8 : ENV NODE_ENV=${NODE_ENV}
---> Running in fdc09147e9da
Removing intermediate container fdc09147e9da
---> 3b28ab5539d3
Step 5/8 : WORKDIR /home/node/app
---> Running in 44eef1d9293d
Removing intermediate container 44eef1d9293d
---> 2d07ecf3de2e
Step 6/8 : RUN echo "//registry.npmjs.org/:_authToken=$NPM_TOKEN" > .npmrc
---> Running in a47d5e22839b
Removing intermediate container a47d5e22839b
---> bd9f896846b7
Step 7/8 : RUN yarn global add gatsby-cli
---> Running in ca3e74d12df4
yarn global v1.15.2
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Installed "gatsby-cli#2.7.47" with binaries:
- gatsby
Done in 15.51s.
Removing intermediate container ca3e74d12df4
---> bc8d15985ad0
Step 8/8 : RUN yarn install --force --no-lockfile
---> Running in 3f0e35e5487b
yarn install v1.15.2
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Rebuilding all packages...
Done in 0.04s.
Removing intermediate container 3f0e35e5487b
---> 485b9e9bccba
Successfully built 485b9e9bccba
Successfully tagged foobar/node_dev:latest
I changed the service command to sleep 300s, ran docker exec -it into it, and listed /home/node/app/node_modules with ls -la, finding only:
.yarn-integrity
And when I cat .yarn-integrity I see:
{
  "systemParams": "linux-x64-57",
  "modulesFolders": [],
  "flags": [],
  "linkedModules": [],
  "topLevelPatterns": [],
  "lockfileEntries": {},
  "files": [],
  "artifacts": {}
}
You install the node packages at build time, and they install properly, as the logs show no errors. So the issue lies with docker-compose or with an existing volume.
One thing that can help you debug is to create a volume and then try with docker run instead of docker-compose:
docker volume create my_node_modules
docker run -it --rm -v my_node_modules:/home/node/app foobar/node_dev:latest bash -c "cd /home/node/app; npm list"
Now check the volume by attaching it to any other container to verify the behaviour:
docker run -it --rm -v my_node_modules:/home/test/ node:alpine ash -c "cd /home/test/;npm list"
Another option is to install the packages at run time, so that the installed packages end up in the named volume:
command: /bin/bash -c '/home/node/wait-for-it.sh wordpress-reverse-proxy:80 -t 10 -- yarn install && yarn start'
To verify whether the packages exist, try running without docker-compose:
docker run -it --rm foobar/node_dev:latest bash -c "cd /home/node/app; npm list"
In short: move yarn install to the entrypoint, or do not attach a volume for node_modules.
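A minimal sketch of the entrypoint approach, assuming a hypothetical docker-entrypoint.sh that is copied into the image and wired up with ENTRYPOINT in the Dockerfile:
#!/bin/bash
# docker-entrypoint.sh (hypothetical name): install dependencies into the mounted
# node_modules volume when the container starts, then hand off to the requested command
set -e
cd /home/node/app
yarn install
exec "$@"
With something like COPY docker-entrypoint.sh /usr/local/bin/ and ENTRYPOINT ["docker-entrypoint.sh"] in the Dockerfile, the compose command (yarn start) becomes the arguments that exec "$@" runs.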
I found a solution by taking advantage of Docker's layer caching through the COPY instruction, which I set to copy package.json, package-lock.json and yarn.lock. This caches the build step and only invalidates it when any of those files change, skipping the package reinstall otherwise.
To summarize, the service in the docker-compose file remains the same, with the volumes, following the best practices shared in the community for Node.js projects.
In the volumes we can see a bind mount, ./foobar-blog-ui/:/home/node/app, that mounts the app source code on the host into the app directory in the container. This enables a fast development environment, because changes made on the host are immediately reflected in the container, which would not be possible otherwise.
Finally, there is the named volume for node_modules, which I've called ui_node_modules. This one is tricky to understand: we start by bind-mounting the app source code, which includes the root where node_modules sits. When npm install runs, the node_modules directory is created in the container, but the bind mount we declared hides it. The named volume ui_node_modules solves this by persisting the contents of /home/node/app/node_modules inside the container, taking precedence over the bind mount that would otherwise hide them.
node_dev:
  build:
    context: .
    dockerfile: ./.docker/dockerFiles/node.yml
  image: foobar/node_dev:latest
  container_name: node_dev
  working_dir: /home/node/app
  ports:
    - 8000:8000
    - 9000:9000
  environment:
    - NODE_ENV=development
    - GATSBY_WEBPACK_PUBLICPATH=/
  volumes:
    - ./foobar-blog-ui/:/home/node/app
    - ui_node_modules:/home/node/app/node_modules
    - ui_gatsbycli_node_module:/usr/local/lib/node_modules/gatsby-cli
    - ./.docker/scripts/wait-for-it.sh:/home/node/wait-for-it.sh
  command: /bin/bash -c '/home/node/wait-for-it.sh wordpress-reverse-proxy:80 -t 10 -- yarn start'
  depends_on:
    - mysql
    - wordpress
  networks:
    - foobar-wordpress-network

volumes:
  ui_node_modules:
  ui_gatsbycli_node_module:
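If you want to double-check what actually ended up in the named volume, you can mount it into a throwaway container and list it; a quick sketch (the /mnt mount point is arbitrary):
docker run --rm -v ui_node_modules:/mnt node:8.16.0-slim ls /mnt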
The Dockerfile now includes the instructions to copy the package*.json and lock files, which is what lets Docker determine when the cache for that step or layer should be invalidated.
FROM node:8.16.0-slim
ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
# Copy only the dependency manifests so Docker can cache this layer
# (the destination must end with / because the wildcard can match several files)
COPY ./foobar-blog-ui/package*.json ./
COPY ./foobar-blog-ui/yarn.lock ./
RUN yarn global add gatsby-cli
RUN yarn install
RUN apt-get update
RUN apt-get install -y rsync vim git libpng-dev libjpeg-dev libxi6 build-essential libgl1-mesa-glx
EXPOSE 8000
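To pick this up, rebuild the image and recreate the service. Keep in mind that a named volume is only seeded from the image's content the first time it is mounted while empty, so a stale ui_node_modules volume may need to be removed first; roughly (the <project>_ prefix is docker-compose's default volume naming and is an assumption here):
docker-compose build node_dev
docker-compose down
docker volume rm <project>_ui_node_modules
docker-compose up -d node_dev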
Hope this helps someone else in the future!
Related
I am building a Node.js application using Postgres as the database, with Docker/docker-compose for the development environment. I was able to dockerise the environment, but one issue remains: the project does not refresh or show the latest changes via Nodemon. Basically, it is not detecting code changes.
I have a docker-compose.yaml file with the following content.
version: '3.8'
services:
  web:
    container_name: web-server
    build: .
    ports:
      - 4000:4000
    restart: always
This is my Dockerfile.
FROM node:14-alpine
RUN apk update && apk upgrade
RUN apk add nodejs
RUN rm -rf /var/cache/apk/*
COPY . /
RUN cd /; npm install;
RUN npm install -g nodemon
EXPOSE 4000
CMD ["npm", "run", "docker-dev"]
This is my docker-dev script in package.json file
"docker-dev": "nodemon -L --inspect=0.0.0.0 index.js"
When I spin up the environment with docker-compose up -d and go to localhost:4000 in the browser, I can see my application up and running. But when I make changes to the code and refresh the page, the new changes are not there. It seems like Nodemon is not working; I have to remove the Docker images and spin up the environment again to see the latest changes.
How can I fix it?
During development, I often use a raw node image and run it like this
docker run --rm -d -v $(pwd):/app --workdir /app -u $(id -u):$(id -g) -p 3000:3000 --entrypoint /bin/bash node -c "npm install && npm start"
The different parts of the command do the following
--rm remove the container when it exits
-d run detached
-v $(pwd):/app map the current directory to /app in the container
--workdir /app set the working directory in the container to /app
-u $(id -u):$(id -g) run the container with my UID and GID so any files created in the host directory will be owned by me
-p 3000:3000 map port 3000 to the host
--entrypoint /bin/bash set the entrypoint to bash
node run the official node image
-c "npm install && npm start" on container start, install npm packages and start the app
You can do something similar. If we replace a few things, this should match your project
docker run --rm -d -v $(pwd):/app --workdir /app -u $(id -u):$(id -g) -p 4000:4000 --entrypoint /bin/sh node:14-alpine -c "npm install && npm install -g nodemon && npm run docker-dev"
I've changed the entrypoint because Alpine doesn't have bash installed. The image is node:14-alpine. The port is 4000. And on start, you want it to install nodemon and run the docker-dev script.
I put the command in a shell script so I don't have to type it all out every time.
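Such a wrapper could be as simple as this (run-dev.sh is just a hypothetical name; it is exactly the command above):
#!/bin/bash
# run-dev.sh (hypothetical): start the dev container with the sources bind-mounted
docker run --rm -d \
  -v "$(pwd)":/app \
  --workdir /app \
  -u "$(id -u)":"$(id -g)" \
  -p 4000:4000 \
  --entrypoint /bin/sh \
  node:14-alpine \
  -c "npm install && npm install -g nodemon && npm run docker-dev"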
You are making your changes in your local file system, not in the Docker container's file system.
You need to use a volume if you want to see your changes reflected in real time.
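For example, a bind mount of the project directory into the web service would let nodemon see edits made on the host. This is only a sketch: it assumes the Dockerfile is changed to put the app under /app (e.g. WORKDIR /app) rather than copying it to the image root:
version: '3.8'
services:
  web:
    container_name: web-server
    build: .
    ports:
      - 4000:4000
    restart: always
    volumes:
      - .:/app              # host sources mounted into the container
      - /app/node_modules   # keep the image's node_modules from being hidden by the bind mount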
We have been using docker-compose for a year and had no problems. In the past week we have each started getting a weird permissions-related error:
at internal/main/run_main_module.js:17:47 : Error: EPERM: operation not permitted, open <PATH TO FILE>
It only happens when I switch branches.
Compose structure:
version: "2.4"
# template:
x-base: &base-service-template
env_file:
- ./.env
working_dir: /app/
volumes:
- ./src:/app/src:cached
services:
service1:
image: service1
<<: *base-service-template
service2:
image: service2
<<: *base-service-template
We are all working on OSX. We tried giving Docker permissions over the filesystem, and it still didn't work.
But there is something that works: restarting the daemon. However, I don't want to restart the daemon each time I switch branches.
Additional info:
The base Dockerfile of each service looks like this:
FROM node:12-alpine as builder
ENV TZ=Europe/London
RUN npm i npm#latest -g
RUN mkdir /app && chown node:node /app
WORKDIR /app
RUN apk add --no-cache python3 make g++ tini \
&& apk add --update tzdata
USER node
COPY package*.json ./
RUN npm install --no-optional && npm cache clean --force
ENV PATH /app/node_modules/.bin:$PATH
COPY . .
FROM builder as dev
USER node
CMD ["nodemon", "src/services/service/service.js"]
FROM builder as prod
USER node
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["node", "src/services/service/service.js"]
I target the dev stage so that we can leverage nodemon's code reload.
docker version: Docker version 19.03.13, build 4484c46d9d
docker-compose version: docker-compose version 1.27.4, build 40524192
So after trying some weird stuff, the answer I found was:
after doing this:
remember to restart the shell that you run your compose from.
An edit from the future: it still misbehaves sometimes, so this may not be a fix for everyone.
I am running a single Node.js script like so: docker run -it --rm -u node --name build_container -v $(pwd):/home/node/app -w "/home/node/app" node:lts bash -c "yarn install --no-cache --frozen-lockfile".
However, the script log displays info No lockfile found and, even weirder, a message saying that a package-lock.json was found. Yet the working directory contains no package-lock.
Any ideas what the issue could be?
I would suggest using your own Dockerfile to build your image, running the install during the build, like so:
Dockerfile
FROM node:12-alpine
# Create work directory
RUN mkdir -p /express
WORKDIR /express
# Bundle app sources
COPY . .
# Build app
RUN yarn install --prod --frozen-lockfile && yarn run build
EXPOSE 3000
# Set default NODE_ENV to production
ENV NODE_ENV production
ENTRYPOINT [ "yarn", "start" ]
.dockerignore
node_modules
build
dist
config
docs
*.log
.git
.vscode
And then build the image:
docker build -t <your image name> -f <Dockerfile (if omitted, ./Dockerfile in the build context is used)> <path to your code>
Once built, run it as you would any normal image, since everything is already baked in.
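For instance, with an arbitrary image name:
docker build -t my-app .
docker run --rm -p 3000:3000 my-app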
Consider the following development environment with Docker:
# /.env
CONTAINER_PROJECT_PATH=/projects/my_project
# /docker-compose.yml
version: '2'
services:
  web:
    restart: always
    build: ./docker/web
    working_dir: ${CONTAINER_PROJECT_PATH}
    ports:
      - "3000:3000"
    volumes:
      - .:${CONTAINER_PROJECT_PATH}
      - ./node_modules:${CONTAINER_PROJECT_PATH}/node_modules
# /docker/web/Dockerfile
FROM keymetrics/pm2:latest-alpine
FROM node:8.11.4
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
RUN mkdir -p /projects/my_project
RUN chown node:node /projects/my_project
# If you are going to change WORKDIR value,
# please also consider to change .env file
WORKDIR /projects/my_project
USER node
RUN npm install pm2 -g
CMD npm install && node /projects/my_project/src/index.js
How do I install a new module inside my container? npm install on the host won't work because node_modules belongs to the root user. Is there any way to run it from the container?
Edit: Is there a one-liner I could run from outside the container to install a module?
Assuming you don't mean to edit the Dockerfile, you can always execute a command in the running container with docker exec and the -it flags:
$ docker exec -it <container name> npm install my_module
First, get a shell inside the container:
$ docker exec -it <container name> /bin/bash
Then, inside the container:
$ npm install pm2 -g
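If you want the one-liner from outside the container that the edit asks for, docker-compose exec against the web service from the compose file above should work, e.g.:
docker-compose exec web npm install <module name>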
You either:
a) want to install pm2 globally, which you need to do as root, so place the install before USER node so that it runs as the default user (root), or
b) just want to use pm2 in your project, in which case drop the -g flag (which tells npm to install globally); it will then land in your project's node_modules and you can run it with npx pm2 or node_modules/.bin/pm2, or programmatically from your index.js (for the latter I'd suggest adding it to your package.json rather than installing it manually at all).
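A rough Dockerfile sketch of option (a); note it drops the custom NPM_CONFIG_PREFIX, since the global install now happens as root into npm's default prefix:
FROM node:8.11.4
RUN mkdir -p /projects/my_project && chown node:node /projects/my_project
WORKDIR /projects/my_project
# Install pm2 globally while still root, before dropping privileges
RUN npm install pm2 -g
USER node
CMD npm install && node /projects/my_project/src/index.js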
Yarn version: 0.21.2
Nodejs version: 6.9 and 4.7
When running yarn locally it works
When running npm install it works
When running yarn install in the Dockerfile (docker build .), it fails with: error Couldn't find a package.json file in "/root/.cache/yarn/npm-readable-stream-2.2.2-a9e6fec3c7dda85f8bb1b3ba7028604556fc825e"
I have absolutely no idea why.
Step 16 : RUN yarn install
---> Running in 917c2b1b57fb
yarn install v0.21.2
[1/4] Resolving packages...
[2/4] Fetching packages...
error Couldn't find a package.json file in "/root/.cache/yarn/npm-readable-stream-2.2.2-a9e6fec3c7dda85f8bb1b3ba7028604556fc825e"
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
The command '/bin/sh -c yarn install' returned a non-zero code: 1
Yarn is installed this way in the Dockerfile:
RUN curl -o- -L https://yarnpkg.com/install.sh | bash -s -- --version 0.21.2
ENV PATH /root/.yarn/bin:$PATH
When you build a Docker image, no files are copied into the image automatically, even though they are part of the Docker build context (this also depends on your .dockerignore configuration, if you have one). To add files from the build context to your image, you must do so explicitly with instructions such as ADD or COPY.
Below is an example Dockerfile:
WORKDIR /app
COPY ["yarn.lock", "package.json", "./"]
# Install app dependencies
RUN yarn install --check-files --non-interactive --ignore-optional --frozen-lockfile
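A minimal complete version of that fragment might look like this; the base image, final COPY and start command are assumptions for illustration:
FROM node:12-alpine
WORKDIR /app
# Copy only the dependency manifests first so this layer stays cached
COPY ["yarn.lock", "package.json", "./"]
# Install app dependencies
RUN yarn install --check-files --non-interactive --ignore-optional --frozen-lockfile
# Now copy the rest of the sources
COPY . .
CMD ["node", "index.js"]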