Syncing node_modules in docker container with host machine - node.js

I would like to dockerize my React application, and I have one question about doing so. I would like to install node_modules in the container and then have them synced to the host, so that I can run the npm commands in the container rather than on the host machine. I achieved this, but the node_modules folder that is synced to my computer is empty, while it is filled in the container. This is an issue because I get "not installed" warnings in the IDE, since the node_modules folder on the host machine is empty.
docker-compose.yml:
version: '3.9'
services:
  frontend:
    build:
      dockerfile: Dockerfile
      context: ./frontend
    volumes:
      - /usr/src/app/node_modules
      - ./frontend:/usr/src/app
Dockerfile:
FROM node:18-alpine3.15
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install && \
    mkdir -p node_modules/.cache && \
    chmod -R 777 node_modules/.cache
COPY ./ ./
CMD npm run start
I would appreciate any tips and/or help.

You can't really "share" node_modules like this because there are certain OS-specific steps which happen during installation. Some modules have compilation steps which need to target the host machine. Other modules have bin declarations which are symlinked, and symlinks cannot be "mounted" or shared between a host and a container. Even different versions of Node cannot share node_modules without rebuilding.
If you want to develop within Docker, you have two options:
1. Edit inside a container with VSCode (maybe other editors do this too?). I've tried this before and it's not very fun and is kind of tedious; it doesn't quite work the way you want.
2. Edit files on your host machine which are mounted inside Docker. Nodemon/webpack/etc. will see the changes and rebuild accordingly.
I recommend #2; I've seen it used at many companies and it is a very "standard" way to do development. This does require that you do an npm install on the host machine, so don't get bogged down by trying to avoid an extra npm install.
If you want to make installs and builds faster, your best bet is to mount your npm cache directory into your Docker container. You will need to find the npm cache location on both your host and your Docker container by running npm get cache in both places. You can do this for your Docker container with:
docker run --rm -it <your_image> npm get cache
You would mount the cache folder like you would any other volume. You can run a separate install in both docker and on your host machine - the files will only be downloaded once, and all compilations and symlinking will happen correctly.
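For example, here is a minimal sketch of a one-off install that reuses the host's cache. The paths are the usual defaults (~/.npm on the host, /root/.npm in the official node images); they are assumptions here, so verify them with npm get cache on both sides first.
docker run --rm -it \
  -v "$HOME/.npm:/root/.npm" \
  -v "$(pwd):/usr/src/app" \
  -w /usr/src/app \
  node:18-alpine3.15 npm install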

Related

Is it necessary to copy dependencies in the Dockerfile when using containers for dev only?

I want to create a dev environment for a Node app with Docker. I have seen examples of Dockerfiles with configurations similar to the following:
FROM node:14-alpine
WORKDIR /express-api-server
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
I know that you can use volumes in docker-compose.yml to map host directories to directories in the containers, so you can make changes to your code, save data in a Mongo database, and preserve those changes locally when deleting or stopping the container.
My question is: if I want to use the container for dev purposes only, is there any benefit in copying package.json and package-lock.json and installing dependencies?
I can use volumes to map node_modules and the package files alongside the code, so there's no need for me to take those actions when building the image the first time.
Correct, both work; you just need to balance the pros and cons.
The most obvious advantage is that having one Dockerfile for dev and prod is easier and guarantees that the environment is the same.
I personally have a single Dockerfile for dev / test / prod for maximum coherence, and I mount a volume with the code and dependencies for dev.
When I run npm install, I do it on the host. It instantly restarts the project without needing to rebuild. Then, when I want to publish to prod, I do a docker build, which rebuilds everything, ignoring the mounts.
If you do it like me, check that the host Node.js version and Docker's Node.js version are the same.
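As a rough sketch of that setup (the service name, port, and nodemon start command are my own assumptions, not from the question), the dev compose file simply mounts the project, including the node_modules installed on the host, over the image's copy, while a production build uses the same Dockerfile with no mounts:
services:
  api:
    build: .
    command: npx nodemon index.js   # dev-only override; assumes nodemon is a devDependency
    ports:
      - "3000:3000"
    volumes:
      - .:/express-api-server       # code plus host-installed node_modules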

npm install 4x slower in docker container compared to host machine

I'm trying to provision a project locally that's using NodeJs with NPM.
I'm running npm install on my host machine (MacBook Pro Retina, 15-inch, Mid 2015) using nvm with node version 10.19:
added 2335 packages from 985 contributors and audited 916010 packages in 61.736s
When I run the same setup in Docker, the result is much slower. This is my docker-compose.yml file:
version: '3.4'
services:
  node:
    image: node:10.19-alpine
    container_name: node
    volumes:
      - .:/app/
      - npm-cache:/root/.npm
    working_dir: /app
    command: ["tail", "-f", "/dev/null"]
volumes:
  npm-cache:
    external: false
Then I execute:
docker-compose up -d node; docker exec -t node npm install
And the result is:
added 2265 packages from 975 contributors and audited 916010 packages in 259.895s
(I'm assuming the number of resulting packages is different due to a different platform).
I thought the speedy installation was achieved by having a local cache (that's why there is an extra volume for caching in the docker-compose) but then I ran:
$ npm cache clean --force && rm -rf ~/.npm && rm -rf node_modules
and the result for installation on the host machine is still consistently ~60 seconds.
When it comes to resources allocated to the Docker VM, that shouldn't be a problem either, judging by my Docker VM configuration.
I don't know where else to look, any help would be greatly appreciated.
Thanks
This slowdown is caused by sharing files between the container and your host machine.
To cope with it, you can give docker-sync a try.
This tool supports different strategies for automatically syncing files between a host machine and containers (including rsync).
However, beware that it has its own issues, such as occasional sync freezes.
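From memory, the setup is roughly a docker-sync.yml that declares a synced volume, which docker-compose.yml then mounts in place of the plain bind mount; treat the exact keys and strategy names below as assumptions and check the docker-sync documentation.
# docker-sync.yml (sketch)
version: "2"
syncs:
  app-sync:
    src: './'
    sync_strategy: 'native_osx'   # or 'rsync' / 'unison'
# docker-compose.yml then declares app-sync as an external volume
# and mounts it instead of the bind mount, e.g. - app-sync:/app:nocopy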
Here is how I got around the issue.
Create a base docker image using a similar Dockerfile.
FROM node:latest
RUN mkdir -p /node/app
COPY ./package.json /node/app/package.json
WORKDIR "/node/app"
RUN yarn install --network-timeout 100000
Then in your container, make this script the entry point:
#!/bin/bash
# Install dependencies in a directory that is NOT shared with the host,
# using the package.json from the mounted app at /srv.
mkdir -p /node/app
cp /srv/package.json /node/app
cd /node/app
yarn install
sleep 1
# Replace the app's node_modules with a symlink to the unshared install.
rm -f /srv/node_modules
ln -s /node/app/node_modules /srv/node_modules
cd /srv
sleep 1
yarn serve
This installs the npm modules in another directory that is not synced between the container and the host, and symlinks that directory into the app. This seems to work just fine until the underlying performance issue is properly resolved by Docker.
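For completeness, a minimal sketch of wiring that entrypoint up in compose; the service name and script path are my own placeholders:
services:
  web:
    build: .
    volumes:
      - .:/srv                        # host code mounted at /srv, as the script expects
      - ./entrypoint.sh:/entrypoint.sh
    entrypoint: ["/bin/bash", "/entrypoint.sh"]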

Is it possible to mount folder from container to host machine?

As an example, I have a simple Node.js / Typescript application defined as follows:
Dockerfile
FROM node:6.2
RUN npm install --global typings@1.3.1
COPY package.json /app/package.json
WORKDIR /app
RUN npm install
COPY typings.json /app/typings.json
RUN typings install
Node packages and typings are preinstalled into the image. The node_modules and typings folders are by default present only in the running container.
docker-compose.yml
node-app:
  ...
  volumes:
    - .:/app
    - /app/node_modules
    - /app/typings
I mount the current folder from the host into the container, which creates volumes from the existing folders in /app; those are mounted back into the container so the application can work with them. The problem is that I'd like to see the typings folder on the host system as a read-only folder (because some IDEs can show you type hints that can be found in this folder). From what I've tested, those folders (node_modules and typings) are created on the host machine after I run the container, but they are always empty. Is it possible to somehow see their contents (read-only preferably) from the container volumes, at least while the container is running?
You can't make a host directory read-only from Compose. Compose orchestrates containers, not the host system.
If you want to share directories with the host, create them on the host first and mount them as bind mounts (like you've done with .:/app).
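If the goal is just to let the IDE see what the container installed, one workaround (not from the answer above, just a common pattern) is to copy the folders out of the running container with docker cp; replace node-app with the actual container name shown by docker ps:
docker cp node-app:/app/typings ./typings
docker cp node-app:/app/node_modules ./node_modules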

Docker container files overwritten by host volume share

I am building an application in Python which has JavaScript files. I want to use browserify, so I want to install some Node modules which I can use in my JS files with require calls. I want these Node modules only in my container, not on the host machine. Here is the Docker setup for my Node-specific container.
### Dockerfile
FROM node:5.9.1
RUN npm install -g browserify
RUN mkdir /js_scripts
ADD package.json /js_scripts/
WORKDIR /js_scripts
RUN npm install # This installs required packages from package.json
RUN ls # lists the node_modules directory indicating successful install.
Now I want to share JS files from my host machine with this container, so that I can run the browserify main.js -o bundle.js command in the container. Here is my docker-compose.yml, which mounts host_js_files onto my js_scripts directory.
node:
  build: .
  volumes:
    - ./host_js_files:/js_scripts
Now when I run a container with docker-compose run node bash and ls in the js_scripts directory, I only see my JS files from the host volume; my node_modules directory is not visible. This makes sense, based on how volumes are set up in Docker.
However I want to have these node_modules in the container to successfully run browserify (which looks for these node modules). Is there a good way to do this without globally installing the node modules in the container or having to install the modules in the host machine?
Thanks for your input.
Containers should be stateless. If you destroy a container, all data inside it is destroyed. You can mount the node_modules directory as a volume to avoid downloading all the dependencies every time you create a new container.
See this example that installs browserify once:
### docker-compose.yml
node:
  image: node:5.9.1
  working_dir: /js_scripts
  command: npm install browserify
  volumes:
    - $PWD/host_js_files:/js_scripts
First, run docker-compose up and wait until all packages are installed. After that, run the browserify command as:
docker-compose run node /js_scripts/node_modules/.bin/browserify /js_scripts/main.js -o /js_scripts/bundle.js
It's a bad idea to mix your host files with Docker container files through folder sharing. When a container is removed, Docker deletes all of the container's data (everything inside the container except volumes), so Docker needs to know which files belong to the container and which belong to the host. You have two options for mixing host and container files together:
Put the container's files into a volume after the container has started. (A bad idea: the container's files will not be removed when the container is removed.)
Place your host scripts in a subfolder of /js_scripts, or declare each of your scripts separately:
-v ./host_js_files/script1.js:/js_scripts/script1.js
-v ./host_js_files/script2.js:/js_scripts/script2.js
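In docker-compose syntax, the subfolder approach might look like the sketch below; the src subfolder name is my own example, and browserify would then be pointed at /js_scripts/src/main.js:
node:
  build: .
  volumes:
    - ./host_js_files:/js_scripts/src   # node_modules in /js_scripts stays untouched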

How should I Accomplish a Better Docker Workflow?

Everytime I change a file in the nodejs app I have to rebuild the docker image.
This feels redundant and slows my workflow. Is there a proper way to sync the nodejs app files without rebuilding the whole image again, or is this a normal usage?
It sounds like you want to speed up the development process. In that case I would recommend mounting your directory into your container using the docker run -v option: https://docs.docker.com/engine/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume
Once you are done developing your program, build the image and start the container without the -v option.
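As a minimal sketch of that workflow (the image name and container path are placeholders, not from the question):
# during development: mount the source so edits show up immediately
docker run --rm -v "$(pwd)":/usr/src/app -p 3000:3000 my-node-image
# when you are done: bake the code into the image and run it without -v
docker build -t my-node-image .
docker run --rm -p 3000:3000 my-node-image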
What I ended up doing was:
1) Using volumes with the docker run command - so I could change the code without rebuilding the docker image every time.
2) I had an issue with node_modules being overwritten because a volume acts like a mount; I fixed it by relying on Node's module resolution, which walks up parent directories until it finds a node_modules folder.
Dockerfile:
FROM node:5.2
# Create our app directories
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN npm install -g nodemon
# This will cache npm install
# and persist the node_modules,
# even after we mount the volume (which overwrites /usr/src/app)
COPY package.json /usr/src/
RUN cd /usr/src && npm install
# Expose node's port
EXPOSE 3000
# Run the app
CMD nodemon server.js
Command-line:
to build:
docker build -t web-image .
to run:
docker run --rm -v $(pwd):/usr/src/app -p 3000:3000 --name web web-image
You could also have changed the instructions so that Docker looks in the directory specified by the build-context argument of docker build, finds the package.json file, copies just that into the current working directory of the container, then runs npm install, and only afterwards copies over everything else, like so:
# Specify base image
FROM node:alpine
WORKDIR /usr/app
# Install some dependencies
COPY ./package.json ./
RUN npm install
# Copy over everything else
COPY ./ ./
# Setup default command
CMD ["npm", "start"]
You can make as many changes to your source code as you want and it will not invalidate the cache for any of these steps.
The only time npm install will be executed again is if we make a change to that step or any step above it.
So unless you make a change to the package.json file, npm install will not be executed again.
We can test this by running docker build -t <tagname>/<project-name> .
Now, having made a change to the Dockerfile, you will see some steps re-run and eventually our successfully tagged and built image.
Docker detects the change to that step and re-runs every step after it, but not the npm install step.
The lesson here is that the order in which these instructions are placed in a Dockerfile does make a difference.
It's nice to segment out these operations to ensure you are only copying the bare minimum.
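On the theme of copying only the bare minimum, a .dockerignore file keeps the host's node_modules and other noise out of the build context entirely, so the final COPY stays small and the cache is not invalidated by files that should never be in the image. A typical sketch:
# .dockerignore
node_modules
npm-debug.log
.git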
