New packages in package.json are not showing up in the Docker container - node.js

I am using Docker with Docker Compose and these are my files:
#DOCKERFILE
FROM mhart/alpine-node
# Create app directory
RUN mkdir -p /home/app
# Bundle app source
COPY . /home/app
# From now on we work in /home/app
WORKDIR /home/app
# Install yarn and node modules
RUN echo -e 'http://dl-cdn.alpinelinux.org/alpine/edge/main\nhttp://dl-cdn.alpinelinux.org/alpine/edge/community\nhttp://dl-cdn.alpinelinux.org/alpine/edge/testing' > /etc/apk/repositories \
    && apk add --no-cache yarn \
    && yarn
EXPOSE 8080
This is the docker-compose file for dev:
app:
  build: .
  command: yarn start:dev
  environment:
    NODE_ENV: development
  ports:
    - '8080:8080'
  volumes:
    - .:/home/app
    - /home/app/node_modules
The problem I am having is that this setup seems to work only once: no matter which new module I add to package.json, when I run docker-compose build it will not install the new package.
The reason I am using the volumes is that nodemon would not work without .:/home/app; but if the node modules are not installed on the host, it fails, which is why I need /home/app/node_modules. I suspect this could be the cause of my error, but I am not sure how to work around it.

I solved this by moving my source code inside a src directory.
This means my docker-compose.yml file now looks like this:
app:
  build: .
  command: yarn start:dev
  environment:
    NODE_ENV: development
  ports:
    - '8080:8080'
  volumes:
    - ./src:/home/app/src
Since I am no longer mounting the whole directory containing node_modules, new modules are installed correctly.

The package.json should be copied into the app directory, and npm install should be invoked in the Dockerfile before the line that copies the app bundle.
#DOCKERFILE
FROM mhart/alpine-node
# Create app directory
RUN mkdir -p /home/app
WORKDIR /home/app
# Install app dependencies
COPY package.json /home/app
RUN npm install
# Bundle app source
COPY . /home/app
# Install yarn and node modules
RUN echo -e 'http://dl-cdn.alpinelinux.org/alpine/edge/main\nhttp://dl-cdn.alpinelinux.org/alpine/edge/community\nhttp://dl-cdn.alpinelinux.org/alpine/edge/testing' > /etc/apk/repositories \
    && apk add --no-cache yarn \
    && yarn
EXPOSE 8080
If any new dependency is registered in package.json, it will be installed when the docker build command is invoked.
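The reason this ordering matters is Docker's layer cache: the npm install layer is only re-run when the copied package.json has changed. As a host-side illustration of that "did package.json change?" check, here is a minimal sketch; the .pkg.sha stamp file and the /tmp/pkg_demo path are inventions for this example, not anything Docker itself uses:

```shell
#!/bin/sh
# Sketch: decide whether `docker-compose build` needs to be re-run
# by comparing a stored hash of package.json with the current one.
set -e
demo=/tmp/pkg_demo
mkdir -p "$demo"
cd "$demo"
printf '{"dependencies":{"express":"^4.17.0"}}\n' > package.json

stamp=.pkg.sha
current=$(sha256sum package.json | cut -d' ' -f1)
if [ ! -f "$stamp" ] || [ "$(cat "$stamp")" != "$current" ]; then
  echo "package.json changed -> re-run: docker-compose build"
  printf '%s\n' "$current" > "$stamp"
else
  echo "dependencies unchanged -> docker-compose up is enough"
fi
```

Running it a second time without touching package.json takes the "unchanged" branch, mirroring how docker build hits the cached install layer.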

Related

docker-compose volume started acting weird, no permissions on files after switching branches

We have been using docker-compose for a year and had no problems. In the past week we each started getting a weird error related to permissions:
at internal/main/run_main_module.js:17:47 : Error: EPERM: operation not permitted, open <PATH TO FILE>
It only happens when I switch branches.
compose structure:
version: "2.4"
# template:
x-base: &base-service-template
  env_file:
    - ./.env
  working_dir: /app/
  volumes:
    - ./src:/app/src:cached
services:
  service1:
    image: service1
    <<: *base-service-template
  service2:
    image: service2
    <<: *base-service-template
We are all working on macOS. We tried giving Docker permissions over the filesystem, and it still didn't work.
BUT there is something that works: restarting the daemon. However, I don't want to restart the daemon each time I switch branches.
additional info:
The base Dockerfile of each service looks like this:
FROM node:12-alpine as builder
ENV TZ=Europe/London
RUN npm i npm@latest -g
RUN mkdir /app && chown node:node /app
WORKDIR /app
RUN apk add --no-cache python3 make g++ tini \
    && apk add --update tzdata
USER node
COPY package*.json ./
RUN npm install --no-optional && npm cache clean --force
ENV PATH /app/node_modules/.bin:$PATH
COPY . .

FROM builder as dev
USER node
CMD ["nodemon", "src/services/service/service.js"]

FROM builder as prod
USER node
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["node", "src/services/service/service.js"]
I run the dev stage so that we can leverage nodemon's code reload.
docker version: Docker version 19.03.13, build 4484c46d9d
docker-compose version: docker-compose version 1.27.4, build 40524192
So after trying some weird stuff, an answer I found was: remember to restart the shell that you run your compose from.
An edit from the future: it still misbehaves sometimes, so this may not be a fix for everyone.

Why can't yarn.lock be found in my Docker container?

I am running a single Node.js script like so: docker run -it --rm -u node --name build_container -v $(pwd):/home/node/app -w "/home/node/app" node:lts bash -c "yarn install --no-cache --frozen-lockfile".
However, the script log displays info No lockfile found, and, even weirder, a message saying that a package-lock.json was found. However, the working directory has no package-lock.json.
Any ideas what the issue could be?
I would suggest using your own Dockerfile to build your image, running the install inside the build, like so:
Dockerfile
FROM node:12-alpine
# Create work directory
RUN mkdir -p /express
WORKDIR /express
# Bundle app sources
COPY . .
# Build app
RUN yarn install --prod --frozen-lockfile && yarn run build
EXPOSE 3000
# Set default NODE_ENV to production
ENV NODE_ENV production
ENTRYPOINT [ "yarn", "start" ]
.dockerignore
node_modules
build
dist
config
docs
*.log
.git
.vscode
And then build the image:
docker build -t <your image name> -f <Dockerfile (if omitted, the Dockerfile in the local folder is used)> <path to your code>
Once built, run it as you would a normal image, as everything is already baked in.

Node.js error in docker

I am using the node:latest image and get:
ModuleBuildError: Module build failed: ModuleBuildError: Module build failed: Error: spawn /hobover_web_client/node_modules/pngquant-bin/vendor/pngquant ENOENT.
Dockerfile
FROM node:latest
# set working directory
RUN mkdir -p /hobover_web_client
WORKDIR /hobover_web_client
ENV NPM_CONFIG_LOGLEVEL=warn
COPY package.json yarn.lock /hobover_web_client/
# install app dependencies
RUN rm -rf node_modules/ && yarn install --ignore-scripts && yarn global add babel babel-cli webpack nodemon pngquant optipng recjpeg
ADD . /hobover_web_client
In docker-compose.yml:
version: '2'
services:
  hobover_web_client:
    container_name: hobover_web_client
    build: ./hobover_web_client
    command: yarn start
    ports:
      - "8080:8080"
    volumes:
      - ./hobover_web_client:/hobover_web_client
      - /hobover_web_client/node_modules
The build works successfully, but up causes an error.
How can I fix this, given that it works without Docker?
Your issue is the mounting of the app and node_modules into the same directory. When you use the line below in docker-compose:
- ./hobover_web_client:/hobover_web_client
you are overshadowing the existing node_modules. So you need to use NODE_PATH to relocate your packages. Change your Dockerfile to the below:
FROM node:latest
# set working directory
RUN mkdir -p /hobover_web_client /node_modules
WORKDIR /hobover_web_client
ENV NPM_CONFIG_LOGLEVEL=warn NODE_PATH=/node_modules
COPY package.json yarn.lock /hobover_web_client/
# install app dependencies
RUN yarn install --ignore-scripts && yarn global add babel babel-cli webpack nodemon pngquant optipng recjpeg
ADD . /hobover_web_client
Change your compose file to the below:
version: '2'
services:
  hobover_web_client:
    container_name: hobover_web_client
    build: ./hobover_web_client
    command: yarn start
    ports:
      - "8080:8080"
    volumes:
      - ./hobover_web_client:/hobover_web_client
      - /node_modules
So now /node_modules goes to an anonymous volume, which you don't strictly need and can remove, because the path is inside a different folder, outside the bind mount.
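Since the relocated /node_modules lives outside the bind-mounted folder, the anonymous volume can indeed be dropped entirely. A sketch of what the trimmed compose file might then look like (assuming the NODE_PATH Dockerfile change above is in place):

```yaml
version: '2'
services:
  hobover_web_client:
    container_name: hobover_web_client
    build: ./hobover_web_client
    command: yarn start
    ports:
      - "8080:8080"
    volumes:
      # only the app source is bind-mounted; packages stay in /node_modules
      # inside the image and are resolved via NODE_PATH
      - ./hobover_web_client:/hobover_web_client
```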

Docker Compose gulp is not found in $PATH

I have just started learning docker-compose and I am using a nodejs image. I want to install gulp to create some tasks and have one of them working in the background.
When I run: docker-compose run --rm -d server gulp watch-less
I get this error: ERROR: oci runtime error: container_linux.go:247: starting container process caused "exec: \"gulp\": executable file not found in $PATH"
Here are my files:
# Dockerfile
FROM node:6.10.2
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install --quiet
COPY . /usr/src/app
CMD ["npm", "start"]
# docker-compose.yml
version: "2"
services:
  server:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - ./:/usr/src/app
I also have a .dockerignore to ignore the node_modules folder and the npm-debug.log
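A minimal .dockerignore matching that description might look like this (a sketch, since the actual file isn't shown):

```
node_modules
npm-debug.log
```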
EDIT:
When I run docker-compose run --rm server npm install package-name I don't have any problem and the package is installed.
Try adding a global gulp install in the Dockerfile:
# Dockerfile
FROM node:6.10.2
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm install -g gulp
COPY package.json /usr/src/app
RUN npm install --quiet
COPY . /usr/src/app
CMD ["npm", "start"]
I have found a solution that works, though maybe it is not the best one. I created a script that runs a gulp task in the package.json, and if I run:
docker-compose run --rm server npm run gulp_task
it works and does what it has to do.
I find that just referencing gulp via
./node_modules/.bin/gulp
instead of directly works fine; ./node_modules/.bin isn't on the PATH by default. Another option would be to add that directory to the PATH.
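As a sketch of that second option, applied to the Dockerfile from the question: adding node_modules/.bin to PATH at build time lets docker-compose run --rm server gulp watch-less resolve gulp without a global install.

```dockerfile
# Dockerfile (sketch, based on the one in the question)
FROM node:6.10.2
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN npm install --quiet
# make locally-installed CLIs (gulp, nodemon, ...) resolvable by name
ENV PATH=/usr/src/app/node_modules/.bin:$PATH
COPY . /usr/src/app
CMD ["npm", "start"]
```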

Docker - Override content of linked volume

I have a simple Node.js Docker container.
docker-compose.yml:
app:
  build: ./dockerfiles/app
  volumes:
    - /Users/home/work/app:/usr/app
Dockerfile:
FROM node:6.7-slim
COPY package.json /tmp
RUN cd /tmp && npm install
RUN mkdir -p /usr/app
WORKDIR /usr/app
CMD ["node", "./src/app.js"]
What I want to achieve is a container where I have package.json and the installed node modules (npm install). The part where I copy package.json and install modules inside the container is pretty straightforward, but the problem occurs when I want to use these node_modules inside the linked app. I can't find any way to copy /tmp/node_modules into /usr/app/node_modules.
Is there any Docker way to do that? If not, can I tell my Node app to look for node_modules somewhere else than in the root directory?
You can achieve what you want by changing the CMD used when starting the container, either in your Dockerfile, or in the docker-compose.yml file.
Instead of just starting node ./src/app.js, you want to do two things:
1. Copy the node_modules over.
2. Start Node.
Using the docker-compose.yml, I would do the following:
app:
  build: ./dockerfiles/app
  volumes:
    - /Users/home/work/app:/usr/app
  command: >
    bash -c "
    rm -rf /usr/app/node_modules
    && cp -R /tmp/node_modules /usr/app/node_modules
    && node ./src/app.js
    "
This deletes the existing node modules on the mapped-in volume, then copies in the ones from the container, and finally starts the node app. This happens every time the container is started.
As @schovi has mentioned, in order not to override the contents of node_modules within the container with the contents of node_modules on the host machine, it is necessary to create another internal volume in the docker-compose.yml file:
volumes:
  - ${APP_PATH}:/usr/app
  - /usr/app/node_modules
Doing that makes it safe to copy the files from /tmp/node_modules into /usr/app/node_modules using these instructions:
FROM node
# Node modules
COPY *.json /tmp/
RUN cd /tmp && yarn
# App
RUN mkdir -p /usr/app
WORKDIR /usr/app
RUN cp -a /tmp/node_modules /usr/app/node_modules
ENV NODE_ENV docker
CMD ["run-node", "src/app.js"]
However, I would first create the app folder and install node_modules directly in it, considerably reducing the cache layers and increasing the build speed.
# Always mind the version
FROM node:12.8.1
# Node modules
RUN mkdir -p /usr/app
WORKDIR /usr/app
# Mind that dot (the workdir)
COPY package*.json ./
RUN yarn
ENV NODE_ENV docker
CMD ["run-node", "src/app.js"]
I hope it helps! :D
What helped me is the following usage of volumes:
volumes:
  - ${APP_PATH}:/usr/app
  # Empty node_modules directory
  - /usr/app/node_modules
Then in Dockerfile:
FROM node
# Node modules
COPY *.json /tmp/
RUN cd /tmp && yarn
ENV NODE_PATH /tmp/node_modules:${NODE_PATH}
# App
RUN mkdir -p /usr/app
WORKDIR /usr/app
ENV NODE_ENV docker
CMD ["run-node", "src/app.js"]
This allows me to keep node_modules in another directory, and the app will look for them there.
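The NODE_PATH mechanism this relies on can be demonstrated without Docker at all. In the sketch below, the module name hello and the /tmp/nm_demo path are made up for the demonstration:

```shell
#!/bin/sh
# Demonstrate NODE_PATH resolution with a throwaway module.
set -e
mkdir -p /tmp/nm_demo/node_modules/hello
printf "module.exports = 'hi from NODE_PATH';\n" > /tmp/nm_demo/node_modules/hello/index.js
# resolve require('hello') from a directory that has no node_modules of its own
cd /tmp
NODE_PATH=/tmp/nm_demo/node_modules node -e "console.log(require('hello'))"
```

Node falls back to the directories listed in NODE_PATH when a require() cannot be satisfied from a local node_modules tree, which is exactly why the Dockerfile above can keep the packages in /tmp/node_modules.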
