I'm trying to use Docker with an Angular app, following these steps: https://mherman.org/blog/dockerizing-an-angular-app/
But I don't understand the trick with node_modules. I know we need to run docker with the parameter -v ${PWD}:/app to bind the host files into the container. But I don't get the part with the parameter -v /app/node_modules. How can Docker create the volume with a copy of /app/node_modules from the container if it's hidden by the host bind?
The link mentioned in your question already answers this:
Since we want to use the container version of the “node_modules” folder, we configured another volume: -v /app/node_modules. You should now be able to remove the local node_modules flavour.
Simply put: you want to use everything from the host (the complete code) except node_modules. When you mount the anonymous volume -v /app/node_modules, Docker populates it with the contents the image already has at that path, so this trick lets your container use the container's node_modules rather than the host's.
There are other factors as well: some node modules are host-dependent (native addons such as x509), so modules installed on a Windows host will not work inside a Linux container.
The first trick is to install the modules at image build time:
COPY package.json /app/package.json
RUN npm install
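In a full Dockerfile this fragment would sit below a base image and a working directory; a minimal sketch, assuming node:16 and /app:
# node:16 and /app are assumptions; adjust to your project
FROM node:16
WORKDIR /app
# Copy only the manifest first, so the install layer stays cached
# as long as package.json does not change
COPY package.json /app/package.json
RUN npm install
# Then copy the rest of the source on top
COPY . /app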
The second trick is applied when running the container:
docker run --name test -it -v ${PWD}:/app -v /app/node_modules -p 4201:4200 --rm test-app
After running your container, if you delete the host node_modules, the container will still work, because it is using the container's node_modules.
In simple words: remove the host node_modules and run
docker run --name test -it -v ${PWD}:/app -p 4201:4200 --rm test-app
It will fail to run, because the container's entire /app directory is overridden by the host bind.
The workaround is simple; it is the trick you missed:
docker run --name test -it -v ${PWD}:/app -v /app/node_modules -p 4201:4200 --rm test-app
The anonymous volume prevents the container's node_modules from being overridden.
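You can watch this mechanism at work: inspecting the running container's mounts shows the bind mount on /app plus an anonymous volume (named by a random hash) on /app/node_modules. A quick check, reusing the container name test from the command above:
docker inspect test --format '{{ json .Mounts }}'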
New to Docker. I wanted to create a simple Node.js app using Docker on my Ubuntu OS, without installing Node on the host first. But all the tutorials/YouTube videos first install Node on the host machine, then run and test the app, and then dockerize it. In other words, they all simply teach "how to dockerize an existing app".
If anyone can provide a little guidance on this, it would be really helpful.
First, create a directory for your source code and create something like the starter app from here in app.js. You have to make one change though: the hostname has to be 0.0.0.0 rather than 127.0.0.1.
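A minimal sketch of such an app.js, modelled on the Node.js "hello world" starter with the hostname change applied:
const http = require('http');

// 0.0.0.0 so the server accepts connections from outside the container
const hostname = '0.0.0.0';
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World\n');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});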
Then run an interactive node container using this command
docker run -it --rm -v $(pwd):/app -p 3000:3000 node /bin/bash
This container has your host directory mapped to /app, so if you do
cd /app
ls -al
you should see your app.js file.
Now you can run it using
node app.js
If you then switch back to the host, open a browser, and go to http://localhost:3000/, you should get a 'Hello world' response.
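Or check it from a second host terminal:
curl http://localhost:3000/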
Follow this Dockerfile:
# node:16 is an assumed base image; any recent Node base image works
FROM node:16

# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
docker build . -t <your username>/node-web-app
docker run -p 49160:8080 -d <your username>/node-web-app
You can write the package.json (and package-lock.json) files manually if you don't want to install node/npm on your local machine.
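If you go that route, a minimal hand-written package.json could be a sketch like this (the name is a placeholder; main matches the server.js used in the Dockerfile's CMD):
{
  "name": "node-web-app",
  "version": "1.0.0",
  "main": "server.js",
  "dependencies": {}
}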
Please follow this reasoning and tell me where I am wrong.
I want to develop a web app with Typescript. This is the folder structure:
src
- index.ts
package.json
tsconfig.json
I want to compile and run the app inside a container so that it is not environment-dependent.
I run docker build -t my_image . with the following Dockerfile in order to create an image:
FROM node:16
WORKDIR /app
COPY package.json /app
RUN npm install
CMD ["npx", "dev"]
This will create a node_modules folder inside the container.
Now I create a container like so:
docker create --name my_container -v $(pwd)/src/:/app/src my_image
I created a bind mount so that when I change files in src/ on the host, the same files change inside the container.
I start the container like so:
docker start -i my_container
Now everything is working. The problem, though, is the linters.
When I open a file from the host machine under src/ in my text editor, the linters do not work, because there is no node_modules installed on the host.
If I also run npm install on the host machine, I will have two different node_modules installations, and compilation might differ between the host and the container.
Is there a way to point the host node_modules to the container node_modules?
If I create a bind mount for node_modules like so
docker create -v $(pwd)/node_modules:/app/node_modules ...
the empty host directory will mask the container's node_modules, and I will lose everything that was installed and compiled in the container.
What am I doing wrong?
Thank you.
For those who might have a similar problem, this is what I have done and it is working.
First I build an image:
docker build -t my_image .
Then I copy the node_modules folder from a temporary container to my local folder:
docker create --name tmp_container my_image
docker cp tmp_container:/app/node_modules ./node_modules
docker rm tmp_container
Then I create the final container with a volume:
docker create -v $(pwd)/node_modules:/app/node_modules --name my_container my_image
And now I have exactly what was built and compiled in the container, in my local folder as well.
This can of course be extended to any folder with build artifacts. For example, in my case I needed another folder as well to make the linters work in my text editor; I just copied that folder exactly as I did with node_modules.
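Putting the steps together, a small helper script (a sketch reusing the same names as above) might look like:
#!/bin/sh
set -e

# Build the image; dependencies are installed inside it
docker build -t my_image .

# Copy node_modules out of a throwaway container
docker create --name tmp_container my_image
docker cp tmp_container:/app/node_modules ./node_modules
docker rm tmp_container

# Create the final container, bind-mounting the copied node_modules back in
docker create -v "$(pwd)/node_modules":/app/node_modules --name my_container my_image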
I'm new to Docker, so I visited the https://www.docker.com/101-tutorial tutorial and, following their advice, executed:
docker run -dp 80:80 docker/getting-started
This lets me read the manual via http://localhost:80. This works all right and I'm presented with the official Docker manual. I follow it. They say to prepare a Dockerfile like this:
FROM node:12-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
which I did. I built the image (docker build -t getting-started .) and tried to run it (docker run -dp 3000:3000 getting-started), but with no success. I checked with docker ps that it was not running. After some experimentation (with some "debugging" via an interactive docker launch, with /bin/sh as the initial command) I produced this Dockerfile, which works:
FROM node:alpine
WORKDIR /app
COPY . .
RUN yarn install --production
RUN yarn add express
RUN yarn add sqlite3
CMD ["node", "app/src/index.js"]
I'm quite impressed that without much knowledge about Docker, and with absolutely no knowledge about Node.js and Alpine Linux, I managed to fix the problem myself. Now I would like to understand what's going on here.
My question is as in the title: why do the original instructions from the official manual fail on my computer? Why do I have to install extra libraries, and why do my working files end up in /app/app rather than just /app?
My environment:
> cat /etc/lsb-release
DISTRIB_ID=ManjaroLinux
DISTRIB_RELEASE=20.2
DISTRIB_CODENAME=Nibia
DISTRIB_DESCRIPTION="Manjaro Linux"
> docker --version
Docker version 19.03.13-ce, build 4484c46d9d
My experiment with an interactive container:
> docker run -it my_image /bin/sh
/app # ls
Dockerfile app node_modules package.json yarn.lock
/app # ls app
package.json spec src yarn.lock
I'm trying to use nodemon inside docker container:
Dockerfile
FROM node:carbon
RUN npm install -g nodemon
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "nodemon" ]
Build/Run command
docker build -t tag/apt .
docker run -p 49160:8080 -v /local/path/to/apt:/usr/src/app -d tag/apt
Attaching a local volume to the container to watch for changes in the code results in the container's /usr/src/app being overridden, and nodemon complains that it can't find the node modules (any of them). How can I solve this?
In your Dockerfile, you run npm install after copying your package*.json files. A node_modules directory is correctly created in /usr/src/app and you're good to go.
When you mount your local directory on /usr/src/app, though, the contents of that directory inside the container are overridden with your local version of the node project, which apparently lacks the node_modules directory, causing the error you are experiencing.
You need to run npm install in the running container after you have mounted your directory. For example, you could run:
docker exec -ti <containername> npm install
Please note that you'll have to temporarily change your CMD instruction to something like
CMD ["sleep", "3600"]
in order to keep the container alive long enough to enter it.
This will cause a node_modules directory to be created in your local directory, and your container should run nodemon correctly (after switching back to your original CMD).
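Alternatively, instead of changing CMD, you could run npm install in a throwaway container that mounts the same directory (a sketch with the paths and tag from the question):
docker run --rm -v /local/path/to/apt:/usr/src/app tag/apt npm install
The trailing npm install overrides the image's CMD, so no Dockerfile change is needed, and the resulting node_modules lands in the mounted host directory.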
TL;DR: run npm install in a sub-folder and move the resulting node_modules folder to the root.
Try this configuration; it should help you.
FROM node:carbon
RUN npm install -g nodemon
WORKDIR /usr/src/app
COPY package*.json /usr/src/app/
RUN npm install && mv /usr/src/app/node_modules /node_modules
COPY . /usr/src/app
EXPOSE 8080
CMD [ "nodemon" ]
As the other answer said, even though you ran npm install in your WORKDIR, when you mount the volume the contents of the WORKDIR inside the container are temporarily replaced by your mounted folder, in which npm install was never run.
Since Node searches for required packages in several locations, a workaround is to move the installed node_modules folder to the root, which is one of those search paths.
Doing this, you can keep updating code until you require a new package, at which point the image needs another build.
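You can inspect Node's lookup order yourself; this one-liner (the module name express is only an example) prints the chain of node_modules directories Node walks up to the filesystem root, plus a few legacy global folders:
node -e "console.log(require.resolve.paths('express'))"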
I reference the Dockerfile from this docker sample project.
In a JavaScript or Node.js application, when we bind-mount the source files into a Docker container (either with the docker command or with docker-compose), we end up overriding the node_modules folder. To overcome this, use an anonymous volume: we provide only the destination folder path, as opposed to a bind volume where we specify a source:destination folder pair.
General syntax:
--volume <absolute path inside the container>[:<read-write access>]
An example docker run command
docker container run \
--rm \
--detach \
--publish 3000:3000 \
--name hello-dock-dev \
--volume $(pwd):/home/node/app \
--volume /home/node/app/node_modules \
hello-dock:dev
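Once the container is running, you can confirm that the anonymous volume was created; it appears under a randomly generated hash name:
docker volume ls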
For further reference, check this handbook by Farhan Hasin Chowdhury.
Maybe there's no need to mount the whole project. In this case, I would only mount the directory where I put all the source files, e.g. src/.
This way you won't have any problem with the node_modules/ directory.
Also, if you are using Windows you may need to add the -L (--legacy-watch) option to the nodemon command, as you can see in this issue. So it would be nodemon -L.
I am moving an application to a new build pipeline. On CI I am not able to install Node to complete the npm install step.
My idea is to move the npm install step into a Docker image that has Node, install the node modules there, and then copy the node modules back to the host so another process can package up the application.
This is my Dockerfile:
FROM node:9
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY ./dashboard-interface/package.json /usr/src/app/
RUN npm install --silent --production
# Bundle app src
COPY node_modules ./dashboard-interface/node_modules #I thought this would copy the new node_modules back to the host
This runs fine and installs the node modules, but when I try to copy the node_modules directory back to the host, I see an error saying:
COPY node_modules ./dashboard-interface/node_modules
COPY failed: stat /var/lib/docker/tmp/docker-builder718557240/node_modules: no such file or directory
So it's clear that the copy process cannot find the node_modules directory into which it has just installed the node modules.
According to the documentation of the COPY instruction, COPY copies files from the host into the image, not the other way around.
If you want files from the container to be available outside it, you can use volumes. Volumes give your container storage that is independent of the container itself, so you can also use it for other containers in the future.
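As a sketch of that idea with the question's paths (the tag dashboard-deps is an assumed name, and the failing final COPY line has to be dropped from the question's Dockerfile first): build the image, then run a container that bind-mounts a host directory and copies the installed modules into it.
docker build -t dashboard-deps .
docker run --rm -v "$(pwd)/out":/out dashboard-deps cp -r /usr/src/app/node_modules /out
Afterwards, the host directory out/ contains the node_modules that were installed inside the image.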
Let me try to solve the issue you are having.
Here is the Dockerfile
# Use alpine for slimer image
FROM node:9-alpine
RUN mkdir /app
WORKDIR /app
COPY /dashboard-folder/package.json .
RUN npm i --production
This assumes that your project structure is like so:
|root
| Dockerfile
|
\---dashboard-folder
package.json
Where root is your working directory that will receive node_modules.
Build this image with docker build . -t name and subsequently use it like so:
docker run -it --rm -v ${PWD}:/app/root name mv node_modules ./root
That should do the trick.
The simple and sure way is to do volume mapping. For example, the docker-compose YAML file would have a volumes section that looks like this:
….
volumes:
  - ./:/usr/src/app
  - /usr/src/app/node_modules
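A fuller docker-compose.yml sketch under the same assumptions (the service name app and the port are placeholders):
version: "3"
services:
  app:
    build: .
    ports:
      - "8080:8080"
    volumes:
      - ./:/usr/src/app
      - /usr/src/app/node_modules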
For the docker run command, use:
-v $(pwd):/usr/src/app
(docker run requires an absolute host path, hence $(pwd) rather than ./), and in the Dockerfile, define:
VOLUME /usr/src/app
VOLUME /usr/src/app/node_modules
But first confirm that running npm install actually created the node_modules directory on the host system.
The main reason you may or may not hit this problem is the OS running on your host. If your host runs Linux, there will be no problem; but if your host is on Mac or Windows, Docker actually runs inside a VM that is hidden from you, and hence the path cannot be mapped directly to the host system. Instead, you can use a volume.