Installing Node modules in Docker: why are they disappearing? - node.js

I'm trying to create my first node Docker image. It's for a hubot. Here are the basics of the Dockerfile:
FROM ubuntu:14.04
VOLUME /opt
COPY package.json /opt/hubot/
RUN apt-get update && apt-get -y install build-essential nodejs python
RUN npm install -g npm
WORKDIR /opt/hubot/
RUN npm install --prefix /opt/hubot/
COPY app /opt/hubot/app
The problem is that node_modules doesn't exist after the build is over, even though I can see it being placed in the expected location during the build step:
make[1]: Entering directory `/opt/hubot/node_modules/aws2js/node_modules/mime-magic'
So, I know each Dockerfile instruction runs in its own intermediate container, which is why combining "apt-get update && install" into a single RUN is necessary. But each step's changes are committed as an image layer, otherwise the installed apt packages wouldn't still be there at the end. How can I persist the node_modules the same way?

Changes made to a VOLUME path by build steps after the VOLUME declaration do not persist into the image, so everything npm install writes under /opt during the build is discarded. From the Docker documentation:
Data Volumes
A data volume is a specially-designated directory within one or more containers that bypasses the Union File System to provide several useful features for persistent or shared data:
Data volumes can be shared and reused between containers
Changes to a data volume are made directly
Changes to a data volume will not be included when you update an image
Volumes persist until no containers use them
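In other words, because VOLUME /opt comes before npm install, everything the later RUN steps write under /opt is thrown away. A minimal sketch of a fix, reusing the Dockerfile from the question (untested): drop the VOLUME line, or declare it only after everything under /opt has been written, so the node_modules end up committed in an image layer:
FROM ubuntu:14.04
RUN apt-get update && apt-get -y install build-essential nodejs python
RUN npm install -g npm
COPY package.json /opt/hubot/
WORKDIR /opt/hubot/
RUN npm install --prefix /opt/hubot/
COPY app /opt/hubot/app
# Declare the volume last (if it is needed at all); the files already at
# /opt become the volume's initial contents when a container is started.
VOLUME /opt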

Related

Syncing node_modules in docker container with host machine

I would like to dockerize my React application and I have one question about doing so. I would like to install node_modules in the container and then have them synced to the host, so that I can run the npm commands in the container rather than on the host machine. I got this working, except that the node_modules folder synced to my computer is empty while it is populated in the container. This is a problem because my IDE shows "not installed" warnings, since the node_modules folder on the host machine is empty.
docker-compose.yml:
version: '3.9'
services:
  frontend:
    build:
      dockerfile: Dockerfile
      context: ./frontend
    volumes:
      - /usr/src/app/node_modules
      - ./frontend:/usr/src/app
Dockerfile:
FROM node:18-alpine3.15
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install && \
mkdir -p node_modules/.cache && \
chmod -R 777 node_modules/.cache
COPY ./ ./
CMD npm run start
I would appreciate any tips and/or help.
You can't really "share" node_modules like this because there are OS-specific steps which happen during installation. Some modules have compilation steps which need to target the host machine. Other modules have bin declarations which are symlinked, and symlinks cannot be "mounted" or shared between a host and a container. Even different versions of node cannot share node_modules without rebuilding.
If you are wanting to develop within docker, you have two options:
Editing inside a container with VSCode (maybe other editors do this too?). I've tried this before and it's not very fun and is kind of tedious - it doesn't quite work the way you want.
Edit files on your host machine which are mounted inside docker. Nodemon/webpack/etc will see the changes and rebuild accordingly.
I recommend #2 - I've seen it used at many companies and it's a very "standard" way to do development. This does require that you do an npm install on the host machine - don't get bogged down trying to avoid an extra npm install.
If you want to make installs and builds faster, your best bet is to mount your npm cache directory into your docker container. You will need to find the npm cache location on both your host and your docker container by running npm get cache in both places. You can do this on your docker container by doing:
docker run --rm -it <your_image> npm get cache
You would mount the cache folder like you would any other volume. You can run a separate install in both docker and on your host machine - the files will only be downloaded once, and all compilations and symlinking will happen correctly.
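For example, here is a rough sketch assuming the default cache locations (~/.npm on a Linux/macOS host and /root/.npm in the official node image - verify both with npm get cache as described above):
# Reuse the host's npm cache inside the container; packages are downloaded
# once, while compilation and symlinking still happen per environment.
docker run --rm -it \
  -v "$HOME/.npm":/root/.npm \
  -v "$(pwd)":/usr/src/app \
  -w /usr/src/app \
  node:18-alpine3.15 npm install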

How to avoid npm build node.js packages every time when creating a Docker image via Circle CI?

I deploy my Node.Js app via AWS ECS Docker container using Circle CI.
However, each time I build a new image it runs npm build (because it's in my Dockerfile) and downloads and builds all the node modules again every time. Then it uploads a new image to the AWS ECS repository.
As my environment stays the same I don't want it to build those packages every time. So do you think it is possible for Docker to actually update an existing image rather than building a new one from scratch with all the modules every time? Is this generally a good practice?
I was thinking the following workflow:
Check if there are any new Node packages compared to the previous image
If yes, run npm build
If not, just keep the old node_modules folder, don't run build and simply update the code
Deploy
What would be the best way to do that?
Here's my Dockerfile
FROM node:12.18.0-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . .
COPY package.json package-lock.json* ./
RUN npm install
RUN npm install pm2 -g
EXPOSE 3000
CMD [ "pm2-runtime", "ecosystem.config.js"]
My Circle CI workflow (from the ./circleci/config.yml):
workflows:
  version: 2.1
  test:
    jobs:
      - test
      - aws-ecr/build-and-push-image:
          create-repo: true
          no-output-timeout: 10m
          repo: 'stage-instance'
Move the COPY . . line after the RUN npm install line.
The way Docker's layer caching works, it will skip re-running a RUN line if it knows that it's already run it. So given this Dockerfile fragment:
FROM node:12.18.0-alpine
WORKDIR /usr/src/app
COPY package.json package-lock.json* ./
RUN npm install
Docker keeps track of the hashes of the files it COPYs in. When it gets to the RUN line, if the image up to this point is identical to one it's previously built, it will also skip over the RUN line.
If you have COPY . . first, then if any file in your source tree changes, it will invalidate the layer cache for everything afterwards. If you only copy package.json and the lock file first, then npm install only gets re-run if either of those two files change.
(CircleCI may or may not perform the same layer caching, but "install dependencies, then copy the application in" is a typical Docker-level optimization.)
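Putting it together with the Dockerfile from the question, the reordered version would look something like this (same commands, only the order changes):
FROM node:12.18.0-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Copy only the dependency manifests first so the npm install layer stays
# cached until they change.
COPY package.json package-lock.json* ./
RUN npm install
RUN npm install pm2 -g
# Copy the rest of the source last; editing application code no longer
# invalidates the cached npm install layer.
COPY . .
EXPOSE 3000
CMD [ "pm2-runtime", "ecosystem.config.js"]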

How should the package-lock.json file be generated for Node.js docker apps?

The Node.js docker tutorial (https://nodejs.org/en/docs/guides/nodejs-docker-webapp/) specifies that the npm install should be run on the host prior to starting docker to generate the package-lock.json file.
How should this file be generated when npm/node are not available on the host?
How should the package-lock.json get updated when new dependencies are added to the package.json?
npm specifies that the package-lock.json file should be checked in to source control. When npm install is run via Docker, it generates the package-lock.json file in the container - which is not where it would be checked out from source control. The obvious workaround would be to copy the file from the container to the host whenever it is updated, but it seems like there should be an easier solution.
I usually just create a temporary container to run npm inside instead of having to install node and npm on the host. Something like this:
docker run --rm -v "$(pwd)":/data -w /data -it node bash
and then inside bash I run npm init to generate a package.json and npm install to generate package-lock.json. You may want to use -u "$UID" so the files are owned by your host user too, or just chown them afterwards.
I do the same thing to install new packages, just npm install package inside bash on the temporary container.
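A non-interactive variant of the same idea is sketched below (--package-lock-only is my assumption for the "only refresh the lock file" case; a plain npm install works too). The file is created as root, so chown it on the host afterwards if needed:
docker run --rm -v "$(pwd)":/data -w /data \
  node npm install --package-lock-only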

How can you get Grunt livereload to work inside Docker?

I'm trying to use Docker as a dev environment in Windows.
The app I'm developing uses Node, NPM and Bower for setting up the dev tools, and Grunt for its task running, and includes a live reload so the app updates when the code changes. Pretty standard. It works fine outside of Docker, but inside Docker I keep running into the Grunt error "Fatal error: Unable to find local grunt." no matter how I try to set it up.
My latest effort involves installing all the npm and bower dependencies to an app directory in the image at build time, as well as copying the app's Gruntfile.js to that directory.
Then in Docker-Compose I create a Volume that is linked to the host app, and ask Grunt to watch that volume using Grunt's --base option. It still won't work. I still get the fatal error.
Here are the Docker files in question:
Dockerfile:
# Pull base image.
FROM node:5.1
# Setup environment
ENV NODE_ENV development
# Setup build folder
RUN mkdir /app
WORKDIR /app
# Build apps
#globals
RUN npm install -g bower
RUN echo '{ "allow_root": true }' > /root/.bowerrc
RUN npm install -g grunt
RUN npm install -g grunt-cli
RUN apt-get update
RUN apt-get install ruby-compass -y
#locals
ADD package.json /app/
ADD Gruntfile.js /app/
RUN npm install
ADD bower.json /app/
RUN bower install
docker-compose.yml:
angular:
  build: .
  command: sh /host_app/startup.sh
  volumes:
    - .:/host_app
  net: "host"
startup.sh:
#!/bin/bash
grunt --base /host_app serve
The only way I can actually get the app to run at all in Docker is to copy all the files over to the image at build time, create the dev dependencies there and then, and run Grunt against the copied files. But then I have to run a new build every time I change anything in my app.
There must be a way? My Django app is able to do a live reload in Docker no problems, as per Docker's own Django quick startup instructions. So I know live reload can work with Docker.
PS: I have tried leaving the Gruntfile on the Volume and using Grunt's --gruntfile option but it still crashes. I have also tried creating the dependencies at Docker-Compose time, in the shared Volume, but I run into npm errors to do with unpacking tars. I get the impression that the VM can't cope with the amount of data running over the shared file system and chokes, or maybe that the Windows file system can't store the Linux files properly. Or something.
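A pattern that often helps with "Unable to find local grunt" in a setup like this - similar to the docker-compose.yml in the node_modules-syncing question above - is to bind-mount the host source over the image's /app and layer an anonymous volume on /app/node_modules, so the modules installed at build time (including the local grunt) stay visible inside the container. A sketch only, with assumed paths and task name:
angular:
  build: .
  command: grunt serve
  volumes:
    # The host source masks /app, but the anonymous volume below keeps the
    # image-installed node_modules in place inside the container.
    - .:/app
    - /app/node_modules
  net: "host"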

Docker and node_modules - put them in a layer, or a volume?

I'm planning a Docker dev environment and am unsure whether running npm install as a cached layer is a good idea.
I understand that there are ways to optimize dockerfiles to avoid rebuilding node_modules unless package.json changes, however I don't want to completely rebuild node_modules every time package.json changes either. A fresh npm install takes over 5 minutes for us, and changes to package.json happen reasonably frequently. For someone reviewing pull requests and switching branches quite often, they could have to suffer through an infuriating amount of 5 minute npm installs each day.
Wouldn't it be better in cases like mine to somehow install node_modules into a volume so that it persists across builds, and small changes to package.json don't result in the entire dependency tree being rebuilt?
Yes. Don't rebuild node_modules over and over again. Just stick them in a data container and mount it read only. You can have a central process rebuild node_modules now and then.
As an added benefit, you get a much more predictable build because you can enforce that everyone uses the same node modules. This is critical if you want to be sure that you actually test the same thing that you're planning to put in production.
Something like this (untested!):
docker build -t my/module-container - <<END_DOCKERFILE
FROM busybox
RUN mkdir -p /usr/local/node
VOLUME /usr/local/node
END_DOCKERFILE
docker run --name=module-container my/module-container
docker run --rm --volumes-from=module-container \
  -v "$(pwd)/package.json":/usr/local/node/package.json \
  node /bin/bash -c "cd /usr/local/node; npm install"
At this point, the data container module-container will contain the modules specified by package.json in /usr/local/node/node_modules. It should now be possible to mount them in the production containers using --volumes-from=module-container.
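A possible usage sketch (the image, mount point, NODE_PATH and entry point are all assumptions, not part of the answer above):
# Run the app with the pre-built modules mounted read-only from the data
# container; NODE_PATH lets require() fall back to the shared volume since
# the bind-mounted app directory has no node_modules of its own.
docker run --rm --volumes-from=module-container:ro \
  -v "$(pwd)":/app -w /app \
  -e NODE_PATH=/usr/local/node/node_modules \
  node node server.js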
