Docker container files overwritten by host volume share - node.js

I am building an application in Python that includes JavaScript files. I want to use Browserify, so I want to install some node modules that I can use in my JS files via require calls. I want these node modules only in my container, not on the host machine. Here is my Docker setup for the node-specific container.
### Dockerfile
```
FROM node:5.9.1
RUN npm install -g browserify
RUN mkdir /js_scripts
ADD package.json /js_scripts/
WORKDIR /js_scripts
RUN npm install  # installs the packages listed in package.json
RUN ls  # lists the node_modules directory, indicating a successful install
```
Now I want to share JS files from my host machine with this container, so that I can run the browserify main.js -o bundle.js command inside it. Here is my docker-compose.yml, which mounts host_js_files onto my /js_scripts directory.
```yaml
node:
  build: .
  volumes:
    - ./host_js_files:/js_scripts
```
Now when I run a container with docker-compose run node bash and ls in the /js_scripts directory, I only see my JS files from the host volume; the node_modules directory is not visible. This makes sense, given how volumes work in Docker.
However, I want these node_modules available in the container so browserify (which looks for them) can run successfully. Is there a good way to do this without installing the node modules globally in the container or having to install them on the host machine?
Thanks for your input.

Containers should be stateless. If you destroy a container, all data inside it is destroyed with it. You can mount the node_modules directory as a volume to avoid downloading all dependencies every time you create a new container.
See this example that installs browserify once:
### docker-compose.yml
```yaml
node:
  image: node:5.9.1
  working_dir: /js_scripts
  command: npm install browserify
  volumes:
    - $PWD/host_js_files:/js_scripts
```
First, run docker-compose up and wait until all packages are installed. After that, run the browserify command as:
```shell
docker-compose run node /js_scripts/node_modules/.bin/browserify /js_scripts/main.js -o /js_scripts/bundle.js
```
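If you'd rather keep the modules baked into the image instead of installing them at run time, a common variation (a sketch, not part of the answer above) is to layer an anonymous volume for node_modules on top of the bind mount, so the modules installed during the image build survive the host mount:

```yaml
# docker-compose.yml — hypothetical variation
node:
  build: .
  working_dir: /js_scripts
  volumes:
    - ./host_js_files:/js_scripts   # host JS files overlay /js_scripts
    - /js_scripts/node_modules      # anonymous volume preserves the image's modules
```

When the container is first created, Docker populates the anonymous volume from the image's /js_scripts/node_modules, so the bind mount above it no longer hides the installed packages.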

It's a bad idea to mix host files with Docker container files via folder sharing. When a container is removed, Docker deletes all of its data; Docker needs to know which files belong to the container and which to the host (it removes everything inside the container except volumes). You have two ways to mix host and container files together:
Copy the container's files into the volume after the container has started. (A bad idea: those files will not be removed when the container is removed.)
Place your host scripts in a subfolder of /js_scripts, or declare each of your scripts separately:
```
-v ./host_js_files/script1.js:/js_scripts/script1.js
-v ./host_js_files/script2.js:/js_scripts/script2.js
```
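The subfolder approach might look like this in compose form (a sketch; the src subdirectory name is an assumption, not from the answer):

```yaml
node:
  build: .
  volumes:
    - ./host_js_files:/js_scripts/src   # host scripts live in a subfolder,
                                        # leaving /js_scripts/node_modules intact
```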


Syncing node_modules in docker container with host machine

I would like to dockerize my React application, and I have one question about doing so. I would like to install node_modules in the container and have them synced to the host, so that I can run npm commands in the container rather than on the host machine. I got this working, but the node_modules folder synced to my computer is empty, while the one in the container is filled. This is an issue because my IDE shows "not installed" warnings, since the node_modules folder on the host machine is empty.
docker-compose.yml:
```yaml
version: '3.9'
services:
  frontend:
    build:
      dockerfile: Dockerfile
      context: ./frontend
    volumes:
      - /usr/src/app/node_modules
      - ./frontend:/usr/src/app
```
Dockerfile:
```
FROM node:18-alpine3.15
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install && \
    mkdir -p node_modules/.cache && \
    chmod -R 777 node_modules/.cache
COPY ./ ./
CMD npm run start
```
I would appreciate any tips and/or help.
You can't really "share" node_modules like this, because certain OS-specific steps happen during installation. Some modules have compilation steps that need to target the host machine. Other modules have bin declarations that are symlinked, and symlinks cannot be "mounted" or shared between a host and a container. Even different versions of node cannot share node_modules without rebuilding.
If you are wanting to develop within docker, you have two options:
Editing inside a container with VSCode (maybe other editors do this too?). I've tried this before and it's not very fun and is kind of tedious - it doesn't quite work the way you want.
Edit files on your host machine which are mounted inside docker. Nodemon/webpack/etc will see the changes and rebuild accordingly.
I recommend #2 - I've seen it used at many companies and is a very "standard" way to do development. This does require that you do an npm install on the host machine - don't get bogged down by trying to avoid an extra npm install.
If you want to make installs and builds faster, your best bet is to mount your npm cache directory into your Docker container. You will need to find the npm cache location on both your host and your Docker container by running npm get cache in both places. You can do this for your Docker container by running:
```shell
docker run --rm -it <your_image> npm get cache
```
You would mount the cache folder like you would any other volume. You can run a separate install in both docker and on your host machine - the files will only be downloaded once, and all compilations and symlinking will happen correctly.
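As a sketch of that idea, mounting the cache might look like this in compose form (the cache paths here are assumptions — verify them with npm get cache on your host and in the image; ~/.npm is the usual default on a Linux/macOS host, /root/.npm for the root user inside the node image):

```yaml
services:
  frontend:
    build:
      context: ./frontend
    volumes:
      - ~/.npm:/root/.npm        # share the npm download cache with the container
      - ./frontend:/usr/src/app
```

Each side still runs its own npm install, but package tarballs are downloaded only once.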

Syncing local code inside docker container without having container service running

I have created a docker image which has an executable node js app.
I have multiple modules that are independent of one another. These modules are created as packages inside Docker using the npm link command, and hence can be required in my node js index file.
The directory structure is as follows:
```
|- node_modules
|- src
   |- app
      |- index.js
   |- independent_modules
      |- some_independent_task
      |- some_other_independent_task
```
While building the image, I created an npm link for every independent module in the root node_modules. This creates a node_modules folder inside every independent module that is not present locally; it is created only inside the container.
I require these modules in src/app/index.js and proceed with my task.
This docker image does not use a server to keep the container running, hence the container stops when the process ends.
I build the image using
```shell
docker build -t demoapp .
```
To run the index.js in the dev environment I need to mount the local src directory to docker src directory to reflect the changes without rebuilding the image.
For mounting and running I use the command
```shell
docker run -v $(pwd)/src:/src demoapp node src/index.js
```
The problem here is that locally no dependencies are installed, i.e. there is no node_modules folder. So when the local directory is mounted into Docker, it replaces the container's directory with one that has no node_modules, and the dependencies installed inside Docker vanish.
I tried using .dockerignore to exclude the node_modules folder, but it didn't work. Keeping an empty node_modules locally doesn't work either.
I also tried using docker-compose to keep volumes synced while hiding node_modules, but I think this only syncs while the container keeps running, i.e. when it is started with some server process.
This is the docker-compose.yml I used
```yaml
# docker-compose.yml
version: "2"
services:
  demoapp_container:
    build: .
    image: demoapp
    volumes:
      - "./src:/src"
      - "/src/independent_modules/some_independent_task/node_modules"
      - "/src/independent_modules/some_other_independent_task/node_modules"
    container_name: demoapp_container
    command: echo 'ready'
    environment:
      - NODE_ENV=development
```
I read here that this should exclude node_modules from syncing, but it doesn't work for me either.
I need to execute this index.js each time in a fresh container, with the local code synced into the container's workdir while skipping the dependency folders, i.e. node_modules.
One more thing would be helpful if possible. Every time I run docker-compose up or docker-compose run it prints ready. Can I override the command in docker-compose with a command passed from the CLI, something like docker-compose run | {some command}?
You've defined a docker-compose file but you're not actually using it.
Since you use docker run, this is the command you should try:
```shell
docker run \
  -v $(pwd)/src:/src \
  -v "/src/independent_modules/some_independent_task/node_modules" \
  -v "/src/independent_modules/some_other_independent_task/node_modules" \
  demoapp \
  node src/index.js
```
If you want to use docker-compose, change command to node src/index.js. Then you can use docker-compose up instead of the whole docker run … invocation.
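As for overriding the command from the CLI: docker-compose run accepts a command after the service name, which replaces the command from the compose file. A sketch, using the service name from the question's compose file:

```shell
# anything after the service name overrides the compose file's `command:`
docker-compose run --rm demoapp_container node src/index.js
```

The --rm flag removes the one-off container when the process exits, which fits the "stopped container" workflow described in the question.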

docker-compose "up" vs. "run" yields different mounted volume

Update 2: I have created a sample project on GitHub to reproduce this issue. Upon further testing, the test case is slightly different from what I described in my original post.
I am including the contents of the README I wrote on the github repo below.
Use Case
One simple nodejs project with a Dockerfile.
One local NPM dependency used by the above project (copied to container via Dockerfile). The project refers to the dependency via a local path.
The nodejs project has one web route (/) that prints the version of the local npm dependency from its package.json. This is used to verify the results of the test case procedure.
docker-compose uses this volume technique to overlay the host machine's source tree on top of the container's source tree, and then overlays the node_modules from the container on top of that first volume.
Steps to Reproduce
Clone this repo.
Clean up any previous containers and images related to this repo's project via docker rm and docker rmi.
Check out the test2_run1 tag. This state represents the project using version 1.0.0 of the local NPM dependency.
Do a docker-compose build. All steps should run without any cache usage if step 2 was followed correctly.
Note the version of the local NPM dependency during the npm install command, e.g. +-- my-npm@1.0.0.
Do a docker-compose up. Browse to http://localhost:8000. The page should report version 1.0.0.
Stop the running containers. (Ctrl-C on the terminal from which the up command was issued.)
Check out the test2_run2 tag. This introduces a small change to the NPM's index.js file, and a version bump in its package.json to 1.0.1.
Do a docker-compose build. Only the instructions up to COPY ./my-npm ... should use a cache. (E.g., the docker output prints ---> Using cache for that instruction.) All subsequent steps should be run by docker. This is because the changes introduced in step 7 to the NPM package should have invalidated the cache for the COPY ./my-npm ... command, and, as a result, subsequent steps too. Confirm that during the npm install command, the new version of the NPM is printed in the summary tree output, e.g. +-- my-npm@1.0.1.
Do a docker-compose up. Browse to http://localhost:8000. The page should report version 1.0.1.
Expected behavior: Page in step 9 should report 1.0.1. That is, a change in the local npm should be reflected in the container via docker-compose up.
Actual behavior: Page in step 9 reports 1.0.0.
Note that docker itself is re-building images as expected. The observed issue is not that docker is re-using a cached image, as the output shows it re-running npm install and showing the new version of the local NPM dependency. The issue is that docker-compose is not seeing that the underlying images that comprise the dctest_service1 container have been updated.
In fact, running bash in the container allows us to see that the container has the updated my-npm module files, but the node_modules version is stale:
```shell
# docker exec -it dctest_service1_1 bash
app@6bf2671b75c6:~/service1$ grep version my-npm/package.json node_modules/my-npm/package.json
my-npm/package.json: "version": "1.0.1",
node_modules/my-npm/package.json: "version": "1.0.0"
app@6bf2671b75c6:~/service1$
```
Workaround: Use docker rm to remove the dctest_service1 container. Then re-run docker-compose up, which will re-create the container using the existing images. Notable in this step is that no underlying images are re-built. In re-creating the container, docker-compose seems to figure out to use the newer volume that has the updated node_modules.
See the output directory for the output printed during the first run (steps 4 and 5) and the second run (steps 8 and 9).
Original Post
I've got a nodejs Dockerfile based on this tutorial ("Lessons from Building a Node App in Docker"). Specifically, note that this tutorial uses a volume trick to mount the node_modules directory from the container itself to overlay the equivalent one from the host machine, e.g.:
```yaml
volumes:
  - .:/home/app/my-app
  - /home/app/my-app/node_modules
```
I am running into a problem where an update to package.json triggers npm install as expected (as opposed to using the Docker cache), but when starting the service with docker-compose up, the resulting container somehow ends up with an older version of the node_modules data: the newly added NPM package is missing from the directory. Yet, if I run the specified CMD by hand via docker-compose run --rm, then I do see the updated volume!
I can confirm this a few ways:
node_modules Timestamp
Container started via "up":
```
app@88614c5599b6:~/my-app$ ls -l
...
drwxr-xr-x 743 app app 28672 Dec 12 16:41 node_modules
```
Container started via "run":
```
app@bdcbfb37b4ba:~/my-app$ ls -l
...
drwxr-xr-x 737 app app 28672 Jan  9 02:25 node_modules
```
Different docker inspect "Mount" entry Id
Container started via "up":
```
"Name": "180b82101433ab159a46ec1dd0edb9673bcaf09c98e7481aed7a32a87a94e76a",
"Source": "/var/lib/docker/volumes/180b82101433ab159a46ec1dd0edb9673bcaf09c98e7481aed7a32a87a94e76a/_data",
"Destination": "/home/app/my-app/node_modules",
```
Container started via "run":
```
"Name": "8eb7454fb976830c389b54f9480b1128ab15de14ca0b167df8f3ce880fb55720",
"Source": "/var/lib/docker/volumes/8eb7454fb976830c389b54f9480b1128ab15de14ca0b167df8f3ce880fb55720/_data",
"Destination": "/home/app/my-app/node_modules",
```
HostConfig -> Binds
I am unsure if this is related, but I also did notice (also in docker inspect) that the Binds section under HostConfig differs between both cases:
Container started via "up":
```
"Binds": [
    "180b82101433ab159a46ec1dd0edb9673bcaf09c98e7481aed7a32a87a94e76a:/home/app/my-app/node_modules:rw",
    "/Volumes/my-mount/my-app:/home/app/my-app:rw"
],
```
Container started via "run":
```
"Binds": [
    "/Volumes/my-mount/my-app:/home/app/my-app:rw"
],
```
(Both show the host source mounted on the image, but only the "up" shows the secondary overlay volume with node_modules, which seems like another odd wrinkle.)
Theory
Per the docker-compose CLI reference:
If there are existing containers for a service, and the service's configuration or image was changed after the container's creation, docker-compose up picks up the changes by stopping and recreating the containers.
Thus, it appears docker-compose up does not think the configuration or image has changed. I am just not sure how to debug this to confirm. Of course, I could use --force-recreate to work around this, but I wish to understand what about my configuration is incorrect and causing the problem.
Update: If I do an explicit docker-compose build prior to docker-compose up, the problem still persists. Thus I am feeling less confident about this theory at the moment.
Here is the entire Dockerfile:
```
FROM node:6.9.1
RUN useradd --user-group --create-home --shell /bin/false app
ENV HOME=/home/app
ENV APP=my-app
ENV NPM_CONFIG_LOGLEVEL warn
RUN npm install --global gulp-cli
COPY ./package.json $HOME/$APP/package.json
RUN chown -R app:app $HOME/*
USER app
WORKDIR $HOME/$APP
RUN npm install && npm cache clean
USER root
COPY . $HOME/$APP
RUN chown -R app:app $HOME/* && chmod u+x ./node_modules/.bin/* ./bin/*
USER app
ENTRYPOINT ["/home/app/my-app/bin/entrypoint.sh"]
CMD ["npm", "run", "start:dev"]
```
The docker-compose command uses the docker-compose.yml file as the configuration file for your containerized service. By default it looks for this .yml config file in the directory where you run docker-compose.
So it could be that your docker-compose.yml file is not up to date, given that one of your volumes is not mounted when you run docker-compose up.
docker run ignores the docker-compose.yml file and just creates the container from the image, so there can be configuration differences between what the Dockerfile builds and what the docker-compose.yml file specifies.
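A practical way around a stale anonymous node_modules volume, consistent with the workaround described in the question (the container name is taken from the question and is an assumption about your setup):

```shell
# force compose to recreate the container, attaching a fresh node_modules volume
docker-compose up --force-recreate

# or remove the container together with its anonymous volumes, then bring it up again
docker rm -v dctest_service1_1
docker-compose up
```

The -v flag on docker rm deletes the anonymous volumes owned by the container, so the next docker-compose up repopulates node_modules from the freshly built image.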

Is it possible to mount folder from container to host machine?

As an example, I have a simple Node.js / Typescript application defined as follows:
Dockerfile
```
FROM node:6.2
RUN npm install --global typings@1.3.1
COPY package.json /app/package.json
WORKDIR /app
RUN npm install
COPY typings.json /app/typings.json
RUN typings install
```
Node packages and typings are preinstalled into the image; the node_modules and typings folders are by default present only in the running container.
docker-compose.yml
```yaml
node-app:
  ...
  volumes:
    - .:/app
    - /app/node_modules
    - /app/typings
```
I mount the current folder from the host into the container, which creates volumes from the existing folders under /app; those are mounted back into the container so the application can work with them. The problem is that I'd like to see the typings folder on the host system as a read-only folder (because some IDEs can show you type hints found in this folder). From what I've tested, those folders (node_modules and typings) are created on the host machine after I run the container, but they are always empty. Is it possible to somehow see their contents (read-only, preferably) from container volumes only while the container is running?
You can't make a host directory read-only from Compose; Compose orchestrates containers, not the host system.
If you want to share directories with the host, create them on the host first and mount them as bind volumes (like you've done with .:/app).
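One way to approximate what you want (a sketch; the docker cp step and the node-app image name are assumptions, not part of the answer above) is to copy the folder out of a built image, so the host gets a real, inspectable snapshot for the IDE:

```shell
# create a stopped throwaway container from the image,
# copy the typings folder to the host, then clean up
docker create --name tmp-node-app node-app
docker cp tmp-node-app:/app/typings ./typings
docker rm tmp-node-app
```

The host copy is a snapshot rather than a live mount, but it is enough for an IDE to resolve type hints.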

How can you get Grunt livereload to work inside Docker?

I'm trying to use Docker as a dev environment in Windows.
The app I'm developing uses Node, NPM and Bower for setting up the dev tools, and Grunt for its task running, including a live reload so the app updates when the code changes. Pretty standard. It works fine outside of Docker, but I keep running into the Grunt error Fatal error: Unable to find local grunt no matter how I try to run it inside Docker.
My latest effort involves installing all the npm and Bower dependencies into an app directory in the image at build time, as well as copying the app's Gruntfile.js to that directory.
Then in docker-compose I create a volume linked to the host app and ask Grunt to watch that volume using Grunt's --base option. It still won't work; I still get the fatal error.
Here are the Docker files in question:
Dockerfile:
```
# Pull base image.
FROM node:5.1

# Setup environment
ENV NODE_ENV development

# Setup build folder
RUN mkdir /app
WORKDIR /app

# Build apps
# globals
RUN npm install -g bower
RUN echo '{ "allow_root": true }' > /root/.bowerrc
RUN npm install -g grunt
RUN npm install -g grunt-cli
RUN apt-get update
RUN apt-get install ruby-compass -y

# locals
ADD package.json /app/
ADD Gruntfile.js /app/
RUN npm install
ADD bower.json /app/
RUN bower install
```
docker-compose.yml:
```yaml
angular:
  build: .
  command: sh /host_app/startup.sh
  volumes:
    - .:/host_app
  net: "host"
```
startup.sh:
```shell
#!/bin/bash
grunt --base /host_app serve
```
The only way I can actually get the app to run in Docker at all is to copy all the files into the image at build time, install the dev dependencies there and then, and run Grunt against the copied files. But then I have to run a new build every time I change anything in my app.
There must be a way? My Django app is able to do a live reload in Docker no problems, as per Docker's own Django quick startup instructions. So I know live reload can work with Docker.
PS: I have tried leaving the Gruntfile on the Volume and using Grunt's --gruntfile option but it still crashes. I have also tried creating the dependencies at Docker-Compose time, in the shared Volume, but I run into npm errors to do with unpacking tars. I get the impression that the VM can't cope with the amount of data running over the shared file system and chokes, or maybe that the Windows file system can't store the Linux files properly. Or something.
