Docker x NodeJS - Issue with node_modules

I'm a web developer currently working on a Next.js project (a framework for server-side rendering React). I'm using Docker on this project, and I discovered an issue when I add or remove dependencies. When I add a dependency, build my project, and bring it up with docker-compose, the new dependency isn't added to my Docker image. I have to clean my Docker system with docker system prune to reset everything; only then can I build and start the project, and the dependency finally shows up in the container.
I use a Dockerfile to configure my image and different docker-compose files for different configurations depending on my environment. Here is my configuration:
Dockerfile
FROM node:10.13.0-alpine
# SET environment variables
ENV NODE_VERSION 10.13.0
ENV YARN_VERSION 1.12.3
# Install Yarn
RUN apk add --no-cache --virtual .build-deps-yarn curl \
&& curl -fSLO --compressed "https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz" \
&& tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ \
&& ln -snf /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn \
&& ln -snf /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg \
&& rm yarn-v$YARN_VERSION.tar.gz \
&& apk del .build-deps-yarn
# Create app directory
RUN mkdir /website
WORKDIR /website
ADD package*.json /website/
# Install app dependencies
RUN yarn install
# Build source files
COPY . /website/
RUN yarn run build
docker-compose.yml (dev env)
version: "3"
services:
app:
container_name: website
build:
context: .
ports:
- "3000:3000"
- "3332:3332"
- "9229:9229"
volumes:
- /website/node_modules/
- .:/website
command: yarn run dev 0.0.0.0 3000
environment:
SERVER_URL: https://XXXXXXX.com
Here are the commands I use to run my Docker environment:
docker-compose build --no-cache
docker-compose up
I suppose something is wrong in my Docker configuration, but I can't spot it. Do you have an idea to help me?
Thanks!

Your volumes are not set up to do what you intend. The current setup below means that you are overriding the contents of the website directory in the container with your local . directory.
volumes:
  - /website/node_modules/
  - .:/website
I'm sure your intention is to map your local directory into the container first, and then override node_modules with the original contents of the image's node_modules directory, i.e. /website/node_modules/.
Changing the order of your volumes like below should solve the issue.
volumes:
  - .:/website
  - /website/node_modules/

You are explicitly telling Docker you want this behavior. When you say:
volumes:
  - /website/node_modules/
You are telling Docker you don't want to use the node_modules directory that's baked into the image. Instead, it should create an anonymous volume to hold the node_modules directory (which has some special behavior on its first use) and persist the data there, even if other characteristics like the underlying image change.
That means if you change your package.json and rebuild the image, Docker will keep using the volume version of your node_modules directory. (Similarly, the bind mount of .:/website means everything else in the last half of your Dockerfile is essentially ignored.)
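If you keep the anonymous volume, you have to discard it explicitly after changing package.json. Two standard docker-compose commands do this (--renew-anon-volumes requires docker-compose 1.24 or newer):
# remove the containers plus the named and anonymous volumes declared for them
docker-compose down --volumes
# or recreate anonymous volumes instead of reusing their old contents
docker-compose up --build --renew-anon-volumes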
I would remove the volumes: block in this setup to respect the program that's being built in the image. (I'd also suggest moving the command: to a CMD line in the Dockerfile.) Develop and test your application without using Docker, and build and deploy an image once it's essentially working, but not before.
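Moving the command into the image might look like this, a minimal sketch with the dev command taken from the compose file above:
# at the end of the Dockerfile, replacing `command:` in docker-compose.yml
CMD ["yarn", "run", "dev", "0.0.0.0", "3000"]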

Related

Docker is not writing to the defined volumes

I am new to Docker and created the following files in a large Node project folder:
Dockerfile
# syntax=docker/dockerfile:1
FROM node:16
# Update npm
RUN npm install --global npm
# WORKDIR automatically creates missing folders
WORKDIR /opt/app
# https://stackoverflow.com/a/42019654/15443125
VOLUME /opt/app
RUN useradd --create-home --shell /bin/bash app
COPY . .
RUN chown -R app /opt/app
USER app
ENV NODE_ENV=production
RUN npm install
# RUN npx webpack
CMD [ "sleep", "180" ]
docker-compose.yml
version: "3.9"
services:
app:
build:
context: .
ports:
- "3000:3000"
volumes:
- ./dist/dockerVolume/app:/opt/app
And I run this command:
docker compose up --force-recreate --build
It builds the image and starts a container, and I added a sleep to make sure the container stays up for at least 3 minutes. When I open a console in that container and run cd /opt/app && ls, I can verify that there are a lot of files. project/dist/dockerVolume/app gets created by Docker, but nothing is ever written to it.
There are no errors or warnings or other indications that something isn't set up correctly.
What am I missing?
First you should move the VOLUME declaration to the end of the Dockerfile, because:
If any build steps change the data within the volume after it has been declared, those changes will be discarded. (Documentation)
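Applied to the Dockerfile above, the tail of the file would then look something like this (a sketch keeping only the relevant lines):
WORKDIR /opt/app
COPY . .
RUN chown -R app /opt/app
USER app
ENV NODE_ENV=production
RUN npm install
# declare the volume only after /opt/app has been fully populated
VOLUME /opt/app
CMD [ "sleep", "180" ]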
After this you will face the issue of how bind mounts and Docker volumes work. Unfortunately, if you use a bind mount, the contents of the host directory always replace whatever is already in the container at that path. Files will only appear in the host directory if they were created at runtime by the container.
Also see:
Docker docs: bind mounts
Docker docs: volumes
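A quick way to see this bind-mount behavior in isolation (a hypothetical demo, not from the question):
mkdir empty-dir
# node:16 has plenty of files under /usr/share, but the empty bind mount
# hides them all, so this prints nothing
docker run --rm -v "$(pwd)/empty-dir:/usr/share" node:16 ls /usr/share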
To solve the issue, you could use any of these workarounds, depending on your use case:
Use volumes in your docker-compose.yml file instead of bind mounts (Documentation)
Create the files you want to run on the host instead of in the image, and bind mount them into the container.
Use a shell script in the container that creates the necessary files (if they are missing) when the container is starting, so the bind mount is already initialized and the changes persist, and after that starts your processes; see the sketch below.
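A minimal sketch of that last workaround, assuming the image builds its files into a separate seed directory such as /opt/app-seed (the paths are assumptions, not from the question):
#!/bin/sh
# entrypoint.sh: seed the bind-mounted /opt/app on first start, then run the app
if [ ! -f /opt/app/package.json ]; then
  cp -r /opt/app-seed/. /opt/app/
fi
exec "$@"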

Docker /dist output not mounted into host directory

I have recently added Docker to my JavaScript monorepo to build and serve a particular package. Everything is working great; however, I did not succeed in making the contents under ./packages/common/dist available to the host under ./common-dist, which is one of my requirements.
When running docker-compose up, the directory common-dist is indeed created on the host, but the files built under packages/common/dist in the container never appear; the folder stays empty.
docker-compose.yml
version: "3"
services:
nodejs:
image: nodejs
container_name: app_nodejs
build:
context: .
dockerfile: Dockerfile
restart: unless-stopped
ports:
- "8080:8080"
volumes:
- ./common-dist:/app/packages/common/dist
Dockerfile
FROM node:12-alpine
# Install mozjpeg system dependencies
# #see https://github.com/imagemin/imagemin-mozjpeg/issues/1#issuecomment-52784569
RUN apk --update add \
    build-base \
    autoconf \
    automake \
    libtool \
    pkgconf \
    nasm
WORKDIR /app
COPY . .
RUN yarn install
RUN yarn run common:build
RUN ls /app/packages/common/dist # -> Yip, all files are there!
# CMD ["node", "/app/packages/common/dist/index.js"]
$ docker-compose build
$ docker-compose up # -> ./common-dist appears, but remains empty
Could this be related to some permission issues or am I lacking an understanding of what docker-compose actually does here?
Many thanks in advance!
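This looks like the same bind-mount behavior described in the previous answer: the empty ./common-dist directory on the host replaces the dist files baked into the image. One possible workaround (a sketch, not from the original thread) is to drop the volume and copy the build output out of a throwaway container instead:
docker-compose build
# create (but don't start) a container from the built image, with no mounts
docker create --name dist-tmp nodejs
docker cp dist-tmp:/app/packages/common/dist/. ./common-dist/
docker rm dist-tmp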

Docker / docker-compose workflow: angular changes not being reflected

When I make changes to my app source code and rebuild my docker images, the changes are not being reflected in the updated containers. I have:
Checked that the changes are being pulled to the remote machine correctly
Cleared the browser cache and double checked with different browsers
Checked that the development build files are not being pulled onto the remote machine by mistake
Banged my head against a number of nearby walls
Every time I pull new code from the repo or make a local change, I do the following in order to do a fresh rebuild:
sudo docker ps -a
sudo docker rm <container-id>
sudo docker image prune -a
sudo docker-compose build --no-cache
sudo docker-compose up -d
But despite all that, the changes do not make it through. I simply don't know how it isn't working, as the output during the build appears to be using the local files. Where can it be getting the old files from? I've checked and double-checked that the local source has changed.
Docker-compose:
version: '3'
services:
  angular:
    build: angular
    depends_on:
      - nodejs
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - web-root:/usr/share/nginx/html
      - ./dhparam:/etc/ssl/certs
      - ./nginx-conf/prod:/etc/nginx/conf.d
    networks:
      - app-net
  nodejs:
    build: nodejs
    ports:
      - "8080:8080"
volumes:
  certbot-etc:
  certbot-var:
  web-root:
Angular Dockerfile:
FROM node:14.2.0-alpine AS build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/
RUN apk update && apk add --no-cache bash git
RUN npm install
COPY . /app
RUN ng build --outputPath=./dist --configuration=production
### prod ###
FROM nginx:1.17.10-alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
I found it. I followed a tutorial to get HTTPS working, and that's where the named volumes came in. It's a two-step process, and it needed all those named volumes for the first step, but the web-root volume is what was screwing things up; deleting it solved my problem. At least I understand Docker volumes better now...
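For anyone hitting the same thing, clearing the stale named volumes might look like this (a sketch; down --volumes removes the named volumes declared in the compose file, including web-root):
docker-compose down --volumes
docker-compose up -d --build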

How do I populate a volume in a docker-compose.yaml

I am starting to write my first docker-compose.yml file to set up a combination of services that make up my application (all Node.js). One of the services (web-server; bespoke, not Express) has both a large set of modules it needs and an even larger set of bower_components.
In order to provide separation of concerns, and so I can control the versioning more closely, I want to create two named volumes which hold the node_modules and bower_components, and mount those volumes onto the relevant directories of the web-server service.
The question that is confusing me is how I get these two volumes populated on service startup. There are two reasons for my confusion:
The behaviour of docker-compose with the -d flag versus the docker run command with the -d flag: the web service obviously needs to keep running (and indeed needs to be restarted if it fails), whereas the container that might populate one or other of the volumes runs once as the whole application is brought up with the docker-compose up command. Can I control this?
A running service versus the build commands of that service. Could I actually use a Dockerfile to run npm install and bower install? In particular, if I change the source code of the web application but the modules and bower_components don't change, will this build step be near-instantaneous because of a cached result?
I have been unable to find examples of this sort of behaviour, so I am puzzled as to how to go about doing it. Can someone help?
I did something like that, without Bower but with Node.js tools like Sass, Hall, live reload, Jasmine...
I used npm for all installation inside the npm project (no global installs).
For that, the official node image works quite well; I only had to set the PATH to app/node_modules/.bin. So my Dockerfile looks like this (very simple):
FROM node:7.5
ENV PATH /usr/src/app/node_modules/.bin/:$PATH
My docker-compose.yml file is :
version: '2'
services:
  mydata:
    image: busybox
    stdin_open: true
    volumes:
      - .:/usr/src/app
  node:
    build: .
    image: mynodecanvassvg
    working_dir: /usr/src/app
    stdin_open: true
    volumes_from:
      - mydata
  sass:
    depends_on:
      - node
    image: mynodecanvassvg
    working_dir: /usr/src/app
    volumes_from:
      - mydata
    #entrypoint: "node-sass -w -r -o public/css src/scss"
    stdin_open: true
  jasmine:
    depends_on:
      - node
    image: mynodecanvassvg
    working_dir: /usr/src/app
    volumes_from:
      - mydata
    #entrypoint: "jasmine-node --coffee --autoTest tests/coffee"
    stdin_open: true
  live:
    depends_on:
      - node
    image: mynodecanvassvg
    working_dir: /usr/src/app
    volumes_from:
      - mydata
    ports:
      - 35729:35729
    stdin_open: true
I had some trouble with the entrypoints, which all need a terminal to display their results while running. So I use stdin_open: true to keep each container active, and then docker exec -it on each container to get the watch services running.
And of course I launch docker-compose with -d to keep it alive as a daemon.
Next you have to put your npm package.json in your app folder (next to the Dockerfile and docker-compose.yml) and run npm update to fetch and install the modules.
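For reference, the exec step described above might look like this (the container names are assumptions; compose prefixes them with the project name):
docker-compose up -d
docker exec -it myproject_sass_1 node-sass -w -r -o public/css src/scss
docker exec -it myproject_jasmine_1 jasmine-node --coffee --autoTest tests/coffee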
I'll start with the standard way first
2. Dockerfile
Using a Dockerfile avoids trying to work out how to set up docker-compose service dependencies or external build scripts to get volumes populated and working before a docker-compose up.
A Dockerfile can be set up so that only changes to bower.json and package.json trigger a reinstall of node_modules or bower_components.
The command that installs first will, at some point, have to invalidate the second command's cache, so the order you put them in matters. Whichever updates least, or is significantly slower, should go first. You may need to manually install bower globally if you want to run the bower command first.
If you are worried about npm versioning, look at using yarn and a yarn.lock file. Yarn will speed things up a little, too. Bower can just pin specific versions, as it doesn't have the same submodule versioning issues npm does.
File Dockerfile
FROM mhart/alpine-node:6.9.5
RUN npm install bower -g
WORKDIR /app
COPY package.json /app/
RUN npm install --production
COPY bower.json /app/
RUN bower install
COPY / /app/
CMD ["node", "server.js"]
File .dockerignore
node_modules/
bower_components/
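If you take the yarn suggestion above, the dependency layers might look like this instead (a sketch, assuming a committed yarn.lock):
COPY package.json yarn.lock /app/
RUN yarn install --production --frozen-lockfile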
All of this is supported via a docker-compose build: stanza.
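For completeness, a minimal compose file wiring that Dockerfile up via a build: stanza (the port mapping is an assumption):
version: "3"
services:
  app:
    build:
      context: .
    ports:
      - "3000:3000"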
1. Docker Compose + Volumes
The easiest/quickest way to populate a volume is by defining a VOLUME in the Dockerfile after the directory has been populated in the image. This works via compose too. I'd question the point of using a volume when the image already has the required content, though...
Any other methods of population will require some custom build scripts outside of compose. One option would be to docker run a container with the required volume attached and populate it with npm/bower install.
docker run \
  --volume myapp_bower_components:/bower_components \
  --volume "$(pwd)/bower.json:/bower.json" \
  mhart/alpine-node:6.9.5 \
  sh -c 'npm install bower -g && bower install'
and
docker run \
  --volume myapp_node_modules:/node_modules \
  --volume "$(pwd)/package.json:/package.json" \
  mhart/alpine-node:6.9.5 \
  npm install --production
Then you will be able to mount the populated volumes on your app container:
docker run \
  --volume myapp_bower_components:/bower_components \
  --volume myapp_node_modules:/node_modules \
  --publish 3000:3000 \
  my/app
You'd probably need to come up with some sort of versioning scheme for the volume name as well so you could roll back. Sounds like a lot of effort for something an image already does for you.
Or possibly look at rocker, which provides an alternate docker build system and lets you do all the things Docker devs rail against, like mounting a directory during a build. Again this is stepping outside of what Docker Compose supports.

How do I point a docker image to my .m2 directory for running maven in docker on a mac?

When you look at the Dockerfile for a maven build it contains the line:
VOLUME /root/.m2
Now this would be great if that were where my .m2 repository lived on my Mac - but it isn't - it's in
/Users/myname/.m2
Now I could do:
But then the Linux implementation in Docker wouldn't know to look there. I want to map the Linux location to the Mac location, and have that as part of my vagrant init. Kind of like:
ln /root/.m2 /Users/myname/.m2
My question is: How do I point a docker image to my .m2 directory for running maven in docker on a mac?
How do I point a docker image to my .m2 directory for running maven in docker on a mac?
You would rather point a host folder (like /Users/myname/.m2) to a container folder (not to an image).
See "Mount a host directory as a data volume":
In addition to creating a volume using the -v flag you can also mount a directory from your Docker daemon’s host into a container.
$ docker run -d -P --name web -v /Users/myname/.m2:/root/.m2 training/webapp python app.py
This command mounts the host directory, /Users/myname/.m2, into the container at /root/.m2.
If the path /root/.m2 already exists inside the container’s image, the /Users/myname/.m2 mount overlays but does not remove the pre-existing content.
Once the mount is removed, the content is accessible again.
This is consistent with the expected behavior of the mount command.
To share the .m2 folder during the build step, you can override the localRepository value in settings.xml.
Here is the Dockerfile snippet I used to share my local .m2 repository in docker.
FROM maven:3.5-jdk-8 as BUILD
RUN echo \
    "<settings xmlns='http://maven.apache.org/SETTINGS/1.0.0' \
    xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' \
    xsi:schemaLocation='http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd'> \
    <localRepository>/root/Users/myname/.m2/repository</localRepository> \
    <interactiveMode>true</interactiveMode> \
    <usePluginRegistry>false</usePluginRegistry> \
    <offline>false</offline> \
    </settings>" \
    > /usr/share/maven/conf/settings.xml;
COPY . /usr/src/app
RUN mvn --batch-mode -f /usr/src/app/pom.xml clean package
FROM openjdk:8-jre
EXPOSE 8080 5005
COPY --from=BUILD /usr/src/app/target /opt/target
WORKDIR /opt/target
ENV _JAVA_OPTIONS '-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005'
ENV swarm.http.port 8080
CMD ["java", "-jar", "app-swarm.jar"]
Here are the Dockerfiles and docker-compose file for an example project containing one Spring service plus any other services:
Spring-service Dockerfile
FROM maven:3.5-jdk-8-alpine
WORKDIR /app
COPY . src
CMD cd src ; mvn spring-boot:run
docker-compose.yml
version: '3'
services:
account-service:
build:
context: ./
dockerfile: Dockerfile
ports:
- "8080:8080"
volumes:
- "${HOME}/.m2:/root/.m2"
Here in the docker-compose file we bind mount our local .m2 repository into the container's /root/.m2.
