Is it possible to mount a folder from a container to the host machine? - node.js

As an example, I have a simple Node.js / Typescript application defined as follows:
Dockerfile
FROM node:6.2
RUN npm install --global typings@1.3.1
COPY package.json /app/package.json
WORKDIR /app
RUN npm install
COPY typings.json /app/typings.json
RUN typings install
Node packages and typings are preinstalled into the image; the node_modules and typings folders are, by default, present only in the running container.
docker-compose.yml
node-app:
  ...
  volumes:
    - .:/app
    - /app/node_modules
    - /app/typings
I mount the current folder from the host into the container, and the two anonymous volumes preserve the existing /app/node_modules and /app/typings folders from the image so the application can still work with them. The problem is that I'd like to see the typings folder on the host system as a read-only folder (some IDEs can show you type hints found in this folder). From what I've tested, those folders (node_modules and typings) are created on the host machine after I run the container, but they are always empty. Is it possible to somehow see their contents (preferably read-only) from the container volumes, at least while the container is running?

You can't make a host directory read-only from Compose. Compose orchestrates containers, not the host system.
If you want to share directories with the host, create them on the host first and mount them as bind volumes (like you've done with .:/app).
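One workaround, if you just want to inspect the contents from the host, is to copy the folders out of the image once and bind-mount them afterwards. A minimal sketch, assuming the image is tagged node-app:
docker create --name tmp node-app
docker cp tmp:/app/typings ./typings
docker rm tmp
After that, replacing the anonymous volume with - ./typings:/app/typings in docker-compose.yml mounts the host copy back in; making it read-only on the host is then a matter of host permissions (e.g. chmod -R a-w ./typings).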

Related

Syncing node_modules in docker container with host machine

I would like to dockerize my React application, and I have one question about doing so. I would like to install node_modules in the container and then have them synced to the host, so that I can run the npm commands in the container rather than on the host machine. I achieved this, but the node_modules folder that is synced to my computer is empty, while it is filled in the container. This is an issue since I get "not installed" warnings in the IDE, because the node_modules folder on the host machine is empty.
docker-compose.yml:
version: '3.9'
services:
  frontend:
    build:
      dockerfile: Dockerfile
      context: ./frontend
    volumes:
      - /usr/src/app/node_modules
      - ./frontend:/usr/src/app
Dockerfile:
FROM node:18-alpine3.15
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install && \
    mkdir -p node_modules/.cache && \
    chmod -R 777 node_modules/.cache
COPY ./ ./
CMD npm run start
I would appreciate any tips and/or help.
You can't really "share" node_modules like this because there are certain OS-specific steps that happen during installation. Some modules have compilation steps which need to target the host machine. Other modules have bin declarations which are symlinked, and symlinks cannot be "mounted" or shared between a host and container. Even different versions of node cannot share node_modules without rebuilding.
If you are wanting to develop within docker, you have two options:
Editing inside a container with VSCode (maybe other editors do this too?). I've tried this before and it's not very fun and is kind of tedious - it doesn't quite work the way you want.
Edit files on your host machine which are mounted inside docker. Nodemon/webpack/etc will see the changes and rebuild accordingly.
I recommend #2 - I've seen it used at many companies and it's a very "standard" way to do development (see the sketch below). This does require that you do an npm install on the host machine - don't get bogged down by trying to avoid an extra npm install.
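As a sketch of option #2 applied to the compose file above (assuming npm install has been run on the host, so ./frontend/node_modules exists):
services:
  frontend:
    build:
      dockerfile: Dockerfile
      context: ./frontend
    volumes:
      - ./frontend:/usr/src/app   # host code and host-installed node_modules
    command: npm start            # CRA/webpack picks up host-side edits
Dropping the anonymous /usr/src/app/node_modules volume is deliberate: the host's node_modules gets used instead, which is also what stops the IDE warnings.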
If you want to make installs and builds faster, your best bet is to mount your npm cache directory into your docker container. You will need to find the npm cache location on both your host and your docker container by running npm get cache in both places. You can do this on your docker container by doing:
docker run --rm -it <your_image> npm get cache
You would mount the cache folder like you would any other volume. You can run a separate install in both docker and on your host machine - the files will only be downloaded once, and all compilations and symlinking will happen correctly.
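For example, a minimal sketch assuming the default locations (~/.npm on the host, /root/.npm inside a root-running node image; verify both with npm get cache):
docker run --rm -it \
    -v "$HOME/.npm:/root/.npm" \
    <your_image> npm install
The first install downloads packages into the shared cache; subsequent installs on either side reuse them.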

Is it necessary to copy dependencies in the Dockerfile when using containers for dev only?

I want to create a dev environment for a Node app with Docker. I have seen examples of Dockerfiles with configurations similar to the following:
FROM node:14-alpine
WORKDIR /express-api-server
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
I know that you can use volumes in docker-compose.yml to map host directories to container directories, so you can make changes in your code or save data in a Mongo database and preserve those changes locally when deleting or stopping the container.
My question is: if I want to use the container for dev purposes only, is there any benefit to copying package.json and package-lock.json and installing dependencies?
I can use volumes to map node_modules and the package files alongside the code, so there's no need to take those actions when building the image the first time.
Correct, both work.
You just need to balance the pros and cons.
The most obvious advantage is that having one Dockerfile for dev and prod is easier and guarantees that the environment is the same.
I personally have a single Dockerfile for dev / test / prod for maximum coherence, and I mount a volume with the code and dependencies for dev.
When I do npm install, I do it on the host. It instantly restarts the project without needing to rebuild. Then, when I want to publish to prod, I do a docker build, which rebuilds everything, ignoring the mounts.
If you do it like me, check that the host's Node.js version and Docker's Node.js version are the same.
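A sketch of that dev setup, reusing the Dockerfile from the question (the service name is an assumption):
services:
  api:
    build: .
    volumes:
      - .:/express-api-server   # code and host-installed node_modules, mounted for dev
For prod, docker build runs the same Dockerfile and ignores the mount.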

Syncing local code inside docker container without having container service running

I have created a Docker image which has an executable Node.js app.
I have multiple modules which are independent of each other. These modules are created as packages inside Docker using the npm link command, so they can be required in my Node.js index file.
The directory structure is as follows:
|- node_modules
|- src
   |- app
      |- index.js
   |- independent_modules
      |- some_independent_task
      |- some_other_independent_task
While building the image, I run npm link for every independent module into the root node_modules. This creates a node_modules folder inside every independent module, which is not present locally; it is only created inside the container.
I require these modules in src/app/index.js and proceed with my task.
This docker image does not use a server to keep the container running, hence the container stops when the process ends.
I build the image using
docker build -t demoapp .
To run index.js in the dev environment, I need to mount the local src directory to the container's src directory, so that changes are reflected without rebuilding the image.
For mounting and running I use the command
docker run -v $(pwd)/src:/src demoapp node src/index.js
The problem here is that locally no dependencies are installed, i.e. no node_modules folders are present. Hence, mounting the local directory into Docker replaces the container's directory with the empty one, and the dependencies installed inside Docker in node_modules vanish.
I tried using .dockerignore to avoid mounting the node_modules folder, but it didn't work. Keeping an empty node_modules locally doesn't work either.
I also tried using docker-compose to keep volumes synced and hide node_modules from them, but I think this only syncs while the container keeps running, e.g. when it runs a server.
This is the docker-compose.yml I used
# docker-compose.yml
version: "2"
services:
  demoapp_container:
    build: .
    image: demoapp
    volumes:
      - "./src:/src"
      - "/src/independent_modules/some_independent_task/node_modules"
      - "/src/independent_modules/some_other_independent_task/node_modules"
    container_name: demoapp_container
    command: echo 'ready'
    environment:
      - NODE_ENV=development
I read here that this should skip node_modules from syncing, but it also doesn't work for me.
I need to execute this index.js each time against a stopped Docker container, with the local code synced to the container's workdir while skipping the dependency folders, i.e. node_modules.
One more thing that would be helpful: every time I do docker-compose up or docker-compose run, it prints ready. Is there a way to override the command in docker-compose with a command passed from the CLI?
Something like docker-compose run | {some command}.
You've defined a docker-compose file but you're not actually using it.
Since you use docker run, this is the command you should try:
docker run \
    -v $(pwd)/src:/src \
    -v "/src/independent_modules/some_independent_task/node_modules" \
    -v "/src/independent_modules/some_other_independent_task/node_modules" \
    demoapp \
    node src/index.js
If you want to use the docker-compose file, change command to node src/index.js. Then you can use docker-compose up instead of the whole docker run ... invocation.
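For reference, the adjusted compose file keeps the same volumes and only changes the command:
# docker-compose.yml
version: "2"
services:
  demoapp_container:
    build: .
    image: demoapp
    volumes:
      - "./src:/src"
      - "/src/independent_modules/some_independent_task/node_modules"
      - "/src/independent_modules/some_other_independent_task/node_modules"
    command: node src/index.js
As for overriding from the CLI: docker-compose run already does that, e.g. docker-compose run demoapp_container node src/index.js runs your command instead of the one in the file.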

Docker container files overwritten by host volume share

I am building an application in Python which has JavaScript files. I want to use browserify, so I want to install some node modules which I can use in my js files with require calls. I want these node modules only in my container, not on the host machine. Here is my Docker setup for the Node-specific container.
### Dockerfile
FROM node:5.9.1
RUN npm install -g browserify
RUN mkdir /js_scripts
ADD package.json /js_scripts/
WORKDIR /js_scripts
RUN npm install # This installs required packages from package.json
RUN ls # lists the node_modules directory indicating successful install.
Now I want to share js files from my host machine with this container, so that I can run the browserify main.js -o bundle.js command in the container. Here is my docker-compose.yml, which mounts host_js_files to my js_scripts directory.
node:
  build: .
  volumes:
    - ./host_js_files:/js_scripts
Now when I run a container with docker-compose run node bash and ls in the /js_scripts directory, I only see my js files from the host volume; the node_modules directory is not visible. This makes sense, based on how volumes are set up in Docker.
However, I want to have these node_modules in the container to successfully run browserify (which looks for these modules). Is there a good way to do this without installing the node modules globally in the container or having to install them on the host machine?
Thanks for your input.
Containers should be stateless. If you destroy a container, all data inside it is destroyed. You can mount the node_modules directory as a volume to avoid downloading all dependencies every time you create a new container.
See this example that installs browserify once:
### docker-compose.yml
node:
  image: node:5.9.1
  working_dir: /js_scripts
  command: npm install browserify
  volumes:
    - $PWD/host_js_files:/js_scripts
First, you should run docker-compose up and wait until all packages are installed. After that, you can run the browserify command as:
docker-compose run node /js_scripts/node_modules/.bin/browserify /js_scripts/main.js -o /js_scripts/bundle.js
It's a bad idea to mix your host files with Docker container files via folder sharing. When a container is removed, Docker deletes all of its data; Docker needs to know which files belong to the container and which to the host (it removes everything inside the container except volumes). You have two ways to mix host and container files together:
Put container files into a volume after the container has started. (Bad idea: those files will not be removed when the container is removed.)
You can place your host scripts in a subfolder of /js_scripts, or declare each of your scripts separately:
-v ./host_js_files/script1.js:/js_scripts/script1.js
-v ./host_js_files/script2.js:/js_scripts/script2.js
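In docker-compose syntax, those per-file mounts would look like this (a sketch based on the compose file above):
node:
  image: node:5.9.1
  working_dir: /js_scripts
  volumes:
    - ./host_js_files/script1.js:/js_scripts/script1.js
    - ./host_js_files/script2.js:/js_scripts/script2.js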

Is it possible to ignore a subfolder (e.g. node_modules) in the mounted volume in a docker container?

I am now working on a Node.js project. There are too many small files in the node_modules folder. I want to move to a Docker-based development environment, so that the node_modules folder can be kept in the Docker image (and updated when necessary). At the same time, the source folder of the app should remain on the host. The following is the Dockerfile I hoped would work:
FROM node
MAINTAINER MrCoder
# To simplify the process, node_modules is installed and cached
ADD package.json /opt/app/
WORKDIR /opt/app
RUN npm install
# Will be mapped to the local app source folder
VOLUME /opt/app
# This is to test whether the local node_modules or the folder in the image is used
CMD ls node_modules
Apparently, when I add or change any file in the local node_modules folder, it is the one that gets listed.
No, you can't skip some files when mounting a volume. This is a general Linux (and every other system I know of) limitation, not Docker-related.
However, it's pretty easy to work around it: have a root-level index.js (or whatever) that does nothing but require('./src');. Then have /opt/app be provided by your container and mount /opt/app/src as an external volume.
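A minimal sketch of that layout (the image tag myapp is an assumption):
FROM node
ADD package.json /opt/app/
WORKDIR /opt/app
RUN npm install
ADD index.js /opt/app/    # does nothing but require('./src');
CMD ["node", "index.js"]
Build with docker build -t myapp . and run with docker run -v $(pwd)/src:/opt/app/src myapp; node_modules stays in the image and only src comes from the host.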
You could potentially use a bind mount, i.e.:
mkdir -p /node_modules
mount --bind /node_modules /usr/src/app/node_modules
after the volume declaration but before the npm install.
