Please follow this reasoning and tell me where I am wrong.
I want to develop a web app with TypeScript. This is the folder structure:
src
- index.ts
package.json
tsconfig.json
I want to compile and run the app inside a container so that it is not environment-dependent.
I run docker build -t my_image . with the following Dockerfile in order to create an image:
FROM node:16
WORKDIR /app
COPY package.json /app
RUN npm install
CMD ["npx", "dev"]
This will create a node_modules folder inside the container.
Now I create a container like so:
docker create --name my_container -v $(pwd)/src/:/app/src my_image
I created this bind mount so that when I change files in src/ on the host, the same files also change in the container.
I start the container like so:
docker start -i my_container
Now everything is working. The problem, though, is the linters.
When I open a file from src/ in my text editor on the host machine, the linters do not work because there is no node_modules installed on the host.
If I also run npm install on the host machine, I will have two different node_modules installations, and what gets compiled might differ between the host and the container.
Is there a way to point the host node_modules to the container node_modules?
If I create a docker volume for node_modules like so
docker create -v $(pwd)/node_modules:/app/node_modules ...
the empty host folder will hide everything that was installed and built in the container.
What am I doing wrong?
Thank you.
For those who might have a similar problem, this is what I did, and it works.
First I build an image:
docker build -t my_image .
Then I copy the node_modules folder from a temporary container to my local folder:
docker create --name tmp_container my_image
docker cp tmp_container:/app/node_modules ./node_modules
docker rm tmp_container
Then I create the final container with a volume:
docker create -v $(pwd)/node_modules:/app/node_modules --name my_container my_image
And now exactly what was installed and compiled in the container is also in my local folder.
This can of course be extended to any folder with build artifacts. For example, in my case I also needed another folder to make the linters work in my text editor; I copied it exactly as I did with node_modules.
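As an illustration, assuming the build also emits a dist folder (a hypothetical name here; use whatever your build actually produces), the same copy-out sequence works for it:
# copy both folders out of a throwaway container
docker create --name tmp_container my_image
docker cp tmp_container:/app/node_modules ./node_modules
docker cp tmp_container:/app/dist ./dist
docker rm tmp_container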
Related
I am trying to use Docker with an Angular app. I followed the steps here: https://mherman.org/blog/dockerizing-an-angular-app/
But I did not understand the trick with node_modules. I know that we need to run docker with the parameter -v ${PWD}:/app to bind the host files into the container. But I do not get the part with the parameter -v /app/node_modules. How can Docker create the volume with a copy of /app/node_modules from the container if it is hidden by the host bind?
The link mentioned in your question already answers it:
Since we want to use the container version of the “node_modules” folder, we configured another volume: -v /app/node_modules. You should now be able to remove the local node_modules flavour.
Put simply: you want to use everything from the host (the complete code) except node_modules. Declaring -v /app/node_modules creates an anonymous volume at that path, so it is not taken from the host bind and the container uses its own node_modules instead of the host's.
There are other reasons too: some node modules are host-dependent (native builds, e.g. x509), so a copy installed on a Windows host will not work inside a Linux container.
The first trick: install the modules at image build time.
COPY package.json /app/package.json
RUN npm install
The second trick: when running the container, add an anonymous volume for node_modules:
docker run --name test -it -v ${PWD}:/app -v /app/node_modules -p 4201:4200 --rm test-app
After the container is running, if you delete the host node_modules the container will still work, because it is using the container's node_modules.
To see the difference, remove the host node_modules and run
docker run --name test -it -v ${PWD}:/app -p 4201:4200 --rm test-app
It will fail to run, because the container's entire /app directory is overridden by the host.
The workaround is the simple trick you missed:
docker run --name test -it -v ${PWD}:/app -v /app/node_modules -p 4201:4200 --rm test-app
The anonymous volume prevents the container's node_modules from being overridden.
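If you use docker-compose, the same two mounts can be expressed in the volumes section. A sketch, assuming a service named web built from the current directory:
version: "3"
services:
  web:
    build: .
    ports:
      - "4201:4200"
    volumes:
      - .:/app               # bind mount the host source over /app
      - /app/node_modules    # anonymous volume keeps the container's node_modules
With this file, docker-compose up gives the same behaviour as the docker run command above.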
I'm trying to use nodemon inside a Docker container:
Dockerfile
FROM node:carbon
RUN npm install -g nodemon
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "nodemon" ]
Build/Run command
docker build -t tag/apt .
docker run -p 49160:8080 -v /local/path/to/apt:/usr/src/app -d tag/apt
Attaching a local volume to the container to watch for code changes overrides the container's contents, and nodemon complains that it can't find any of the node modules. How can I solve this?
In your Dockerfile, you are running npm install after copying your package*.json files. A node_modules directory gets correctly created in /usr/src/app and you're good to go.
When you mount your local directory on /usr/src/app, though, the contents of that directory inside your container are overridden with your local version of the node project, which apparently lacks the node_modules directory, causing the error you are experiencing.
You need to run npm install on the running container after you mounted your directory. For example you could run something like:
docker exec -ti <containername> npm install
Please note that you'll have to temporarily change your CMD instruction to something like:
CMD ["sleep", "3600"]
so that the container keeps running and you can execute commands inside it.
This will cause a node_modules directory to be created in your local directory and your container should run nodemon correctly (after switching back to your current CMD).
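Putting the steps together, a minimal sketch of that workflow (the container name my-nodemon is just an assumption):
# with CMD temporarily set to ["sleep", "3600"], build and start the container with the bind mount
docker build -t tag/apt .
docker run -d --name my-nodemon -p 49160:8080 -v /local/path/to/apt:/usr/src/app tag/apt
# install the modules into the mounted directory; they also appear on the host
docker exec -ti my-nodemon npm install
# then switch CMD back to ["nodemon"], rebuild, and run as before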
TL;DR: npm install in a sub-folder, while moving the node_modules folder to the root.
Try this config; it should help you.
FROM node:carbon
RUN npm install -g nodemon
WORKDIR /usr/src/app
COPY package*.json /usr/src/app/
RUN npm install && mv /usr/src/app/node_modules /node_modules
COPY . /usr/src/app
EXPOSE 8080
CMD [ "nodemon" ]
As the other answer said, even though npm install ran in your WORKDIR at build time, when you mount the volume the content of the WORKDIR is temporarily replaced by your mounted folder, in which npm install has not run.
Since Node searches for required packages in several locations, a workaround is to move the 'installed' node_modules folder to the root, which is one of its lookup paths.
This way you can still update code freely; only when you require a new package does the image need another build.
I referenced the Dockerfile from this Docker sample project.
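You can see those lookup locations yourself: Node walks up the directory tree and ends at /node_modules, which is why the moved folder is still found. A quick check from inside the running container (the container name is a placeholder):
# prints the node_modules lookup paths for the working directory /usr/src/app
docker exec -it <containername> node -e "console.log(module.paths)"
# the printed list walks up the tree and ends at '/node_modules'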
In a JavaScript or Node.js application, when we bind the source files into a Docker container with a bind volume (either with the docker command or with docker-compose), we end up overriding the node_modules folder. To overcome this, you need to use an anonymous volume. For an anonymous volume we provide only the destination folder path, whereas for a bind volume we specify a source:destination pair.
General syntax
--volume <container filesystem directory absolute path>[:<read-write access>]
An example docker run command
docker container run \
--rm \
--detach \
--publish 3000:3000 \
--name hello-dock-dev \
--volume $(pwd):/home/node/app \
--volume /home/node/app/node_modules \
hello-dock:dev
For further reference, check the handbook by Farhan Hasin Chowdhury.
Maybe there's no need to mount the whole project. In this case, I would only mount the directory where I put all the source files, e.g. src/.
This way you won't have any problem with the node_modules/ directory.
Also, if you are using Windows you may need to add the -L (--legacy-watch) option to the nodemon command, as you can see in this issue. So it would be nodemon -L.
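A sketch of that run command for the setup above, assuming the source lives under src/ and the image is tagged tag/apt:
# only the source directory is bind-mounted, so the image's /usr/src/app/node_modules stays visible
docker run -p 49160:8080 -v /local/path/to/apt/src:/usr/src/app/src -d tag/apt
Changes to package.json still require a rebuild, since only src/ is shared with the container.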
I am moving an application to a new build pipeline. On CI I am not able to install node to complete the NPM install step.
My idea is to move the npm install step into a Docker image that has Node, install the node modules there, and then copy the node modules back to the host so another process can package up the application.
This is my Dockerfile:
FROM node:9
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY ./dashboard-interface/package.json /usr/src/app/
RUN npm install --silent --production
# Bundle app src
COPY node_modules ./dashboard-interface/node_modules #I thought this would copy the new node_modules back to the host
This runs fine and installs the node modules, but when I try to copy the node_modules directory back to the host I see an error saying:
COPY node_modules ./dashboard-interface/node_modules
COPY failed: stat /var/lib/docker/tmp/docker-builder718557240/node_modules: no such file or directory
So it's clear that the copy process cannot find the node_modules directory into which it has just installed the node modules.
According to the documentation, the COPY instruction copies files from the host (the build context) into the image, not the other way around.
If you want files from the container to be available outside your container, you can use volumes. Volumes give your container storage that is independent of the container itself, so you can also reuse it for other containers in the future.
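If all you need is to get the installed folder back onto the host once, copying it out of a throwaway container also works. A sketch, assuming the broken COPY line is removed from the Dockerfile and the image is tagged my-build:
docker build -t my-build .
docker create --name tmp my-build
docker cp tmp:/usr/src/app/node_modules ./node_modules
docker rm tmp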
Let me try to solve the issue you are having.
Here is the Dockerfile
# Use alpine for slimer image
FROM node:9-alpine
RUN mkdir /app
WORKDIR /app
COPY /dashboard-folder/package.json .
RUN npm i --production
This assumes that your project structure is like so:
|root
| Dockerfile
|
\---dashboard-folder
package.json
where root is your working directory that will receive node_modules.
Build this image with docker build . -t name and subsequently use it like so:
docker run -it --rm -v ${PWD}:/app/root NAME mv node_modules ./root
Should do the trick.
The simple and sure way is to do volume mapping. For example, the docker-compose YAML file will have a volumes section that looks like this:
….
volumes:
- ./:/usr/src/app
- /usr/src/app/node_modules
For docker run command, use:
-v $(pwd):/usr/src/app
and on Dockerfile, define:
VOLUME /usr/src/app
VOLUME /usr/src/app/node_modules
But first confirm that running npm install did create the node_modules directory on the host system.
Whether you hit this problem mainly depends on the OS running on your host. If your host runs Linux there is no problem, but if your host is Mac or Windows, Docker actually runs inside a VM that is hidden from you, so container paths cannot always be mapped directly to the host system. In that case you can use a volume instead.
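To confirm what actually ended up mounted into a running container, you can inspect its mounts:
# lists the source and destination of every bind mount and volume attached to the container
docker inspect -f '{{ json .Mounts }}' <containername>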
I set up a Docker container running an Express app. Here is the Dockerfile:
FROM node:latest
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
RUN npm install -g nodemon
RUN npm install --global gulp
RUN npm i gulp
# Bundle app source
COPY . /usr/src/app
EXPOSE 3000
CMD [ "nodemon", "-L", "./bin/www" ]
As you can see, it uses the Node.js image and creates an app folder in my container, which holds my application. The image build runs npm install and installs my modules in the container (thanks to that, I don't have a node_modules folder in my local folder). I want to integrate Semantic UI, which uses gulp, so I installed gulp in my container, but the files created by Semantic are only present in the container; they are not in my local folder. How can I make the files created in the container available locally so I can modify them?
I thought that one of Docker's selling points was not having node_modules installed in your local app folder. Maybe I am wrong; if so, please correct me.
You can use a data volume to share files between the container and the host machine.
Something like this:
docker run ... -v /path/on/host:/path/in/container ...
or if you are using docker-compose, see volume configuration reference.
...
volumes:
- /path/on/host:/path/in/container
I am using docker-compose to build my image in my local environment, but I have been having a few issues.
It looks like docker-compose up does not work unless I have installed all the packages locally in my project, outside of Docker.
Here is the Scenario.
I have a simple index.js file with one package (express)
const app = require('express')();
app.get('/health', (req, res) => res.status(200).send('ok'));
app.listen(8080, () => console.log('magic happens here'));
A Dockerfile to build the image
FROM node:6.9.5
RUN mkdir -p /var/log/applications/test_dc
RUN mkdir /src
WORKDIR /src
COPY package.json /src
COPY . /src
RUN npm install
VOLUME ["/src", "/var/log/applications/test_dc"]
EXPOSE 8080
CMD ["npm", "start"]
And a docker-compose.yml file to start the project up (I have removed the linked images to keep it minimal)
version: "2.1"
services:
api:
build: .
container_name: test_dc
ports:
- "8080:8080"
volumes:
- .:/src
The idea is that anyone can clone the repository and run the docker-compose command to:
Build the image from the Dockerfile (Install deps and start up)
Spin up the containers.
Hit the health route.
This means I do not need to run npm install in the local project outside of docker.
This, however, fails on docker-compose up.
It is solved only when I run npm install locally.
Isn't this against the whole idea of Docker? That a project should be isolated from the rest of the system?
Is there something I am missing?
External volume mounts can mask image contents
You appear to be wanting your project code to live in /src, as seen in the Dockerfile:
RUN mkdir /src
WORKDIR /src
COPY package.json /src
COPY . /src
But in docker-compose.yml, you are mounting an external path as /src.
volumes:
- .:/src
The external mount will take precedence and obscure everything that was built into your image in the Dockerfile (as well as the volume that you defined in the Dockerfile).
Running startup tasks using a custom entrypoint
The usual way to handle this is by using an entrypoint script that runs your installer at container startup.
entrypoint.sh:
#!/bin/sh
# install dependencies into the (possibly bind-mounted) working directory, then run the CMD
npm install
exec "$@"
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
At container startup, the entrypoint script is run, with the external volume already mounted. Thus, npm install can run and install modules there. When that is done, the CMD is executed. In your case, that is npm start.
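A condensed sketch of how the entrypoint pieces fit into the question's Dockerfile (log directory and VOLUME lines omitted; the npm install at build time is kept so the image also works without the external mount):
FROM node:6.9.5
RUN mkdir /src
WORKDIR /src
COPY package.json /src
COPY . /src
RUN npm install
COPY entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
EXPOSE 8080
ENTRYPOINT ["/entrypoint.sh"]
CMD ["npm", "start"]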
Why you would want to use a custom entrypoint
One reason to use a container is to create isolation. That is, you want the image to be isolated and not rely on your external environment. So why would you want to use an external mount at all?
The answer is that during development you often want to avoid frequent rebuilds. Since images are immutable, every single change would require a rebuild and restart of the container. So every code change... yeah, it would be pretty slow.
A good compromise is to build your images to be fully contained, but during development you externally mount the code so you can make changes very quickly without a rebuild step. Once you are ready to release code, you can remove the external mount and do one final build. Since you're copying the files you were mounting into the container, it's all the same. And the npm install step in the build is the same one you run in the custom ENTRYPOINT.
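With docker-compose you can keep that split explicit: the base file describes the self-contained image, and a development-only override adds the external mount. A sketch, using the conventional docker-compose.override.yml file name and the paths from the question:
# docker-compose.override.yml - merged automatically by `docker-compose up` during development
version: "2.1"
services:
  api:
    volumes:
      - .:/src
For a release build, skip the override so only the code baked into the image is used, e.g. docker-compose -f docker-compose.yml up --build.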