Nodemon inside docker container - node.js

I'm trying to use nodemon inside docker container:
Dockerfile
FROM node:carbon
RUN npm install -g nodemon
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "nodemon" ]
Build/Run command
docker build -t tag/apt .
docker run -p 49160:8080 -v /local/path/to/apt:/usr/src/app -d tag/apt
Attaching a local volume to the container to watch for code changes results in an override, and nodemon complains that it can't find node modules (any of them). How can I solve this?

In your Dockerfile, you are running npm install after copying your package*.json files. A node_modules directory gets correctly created in /usr/src/app and you're good to go.
When you mount your local directory on /usr/src/app, though, the contents of that directory inside your container are overridden with your local version of the node project, which apparently lacks the node_modules directory, causing the error you are experiencing.
You need to run npm install in the running container after you have mounted your directory. For example, you could run something like:
docker exec -ti <containername> npm install
Please note that you'll have to temporarily change your CMD instruction to something like:
CMD ["sleep", "3600"]
in order to keep the container running long enough to enter it.
This will cause a node_modules directory to be created in your local directory, and your container should then run nodemon correctly (after switching back to your original CMD).
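Put together, the whole workflow might look like this (a sketch; the container name my-apt is an assumption, the other values follow the question):
# Temporarily set CMD ["sleep", "3600"] in the Dockerfile, then:
docker build -t tag/apt .
docker run -d --name my-apt -p 49160:8080 -v /local/path/to/apt:/usr/src/app tag/apt
# Populate node_modules inside the mounted directory:
docker exec -ti my-apt npm install
# Finally, switch CMD back to [ "nodemon" ], rebuild, and re-run.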

TL;DR: npm install in a sub-folder, while moving the node_modules folder to the root.
Try this config; it should help you.
FROM node:carbon
RUN npm install -g nodemon
WORKDIR /usr/src/app
COPY package*.json /usr/src/app/
RUN npm install && mv /usr/src/app/node_modules /node_modules
COPY . /usr/src/app
EXPOSE 8080
CMD [ "nodemon" ]
As the other answer said, even though npm install ran in your WORKDIR at build time, when you mount the volume the contents of the WORKDIR are replaced by your mounted folder, in which npm install was never run.
Since Node searches for required packages in several locations, a workaround is to move the 'installed' node_modules folder to the root, which is one of those locations.
This way you can still update your code freely; only when you require a new package does the image need another build.
I referenced the Dockerfile from this Docker sample project.
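If you want to verify this lookup behavior yourself, a quick check (a sketch, reusing the <containername> placeholder from the first answer) is:
# Print the node_modules directories Node walks when resolving require():
docker exec -ti <containername> node -e "console.log(module.paths)"
# The list ends with '/node_modules', which is why moving the folder to the root works.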

In a JavaScript or Node.js application, when we bind-mount the source files into a Docker container (either with the docker command or with docker-compose), we end up overriding the node_modules folder. To overcome this issue you need to use an anonymous volume. For an anonymous volume we provide only the destination folder path, whereas for a bind volume we specify a source:destination folder pair.
General syntax
--volume <container file system directory absolute path>:<read write access>
An example docker run command
docker container run \
--rm \
--detach \
--publish 3000:3000 \
--name hello-dock-dev \
--volume $(pwd):/home/node/app \
--volume /home/node/app/node_modules \
hello-dock:dev
For further reference, check this handbook by Farhan Hasin Chowdhury.
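If you use docker-compose instead, the equivalent configuration might look like this (a sketch; the service name and paths follow the hello-dock example above):
services:
  hello-dock:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - ./:/home/node/app            # bind mount for live code changes
      - /home/node/app/node_modules  # anonymous volume shields node_modules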

Maybe there's no need to mount the whole project. In this case, I would only mount the directory where I put all the source files, e.g. src/.
This way you won't have any problem with the node_modules/ directory.
Also, if you are using Windows you may need to add the -L (--legacy-watch) option to the nodemon command, as you can see in this issue. So it would be nodemon -L.
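Applied to the original question, the run command might become (a sketch, assuming all watched sources live under src/):
docker run -p 49160:8080 -v /local/path/to/apt/src:/usr/src/app/src -d tag/apt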


Run and compile code inside a Docker container

Please follow this reasoning and tell me where I am wrong.
I want to develop a web app with Typescript. This is the folder structure:
src
- index.ts
package.json
tsconfig.json
I want to compile and run the app inside a container so that is not environment-dependent.
I run docker build -t my_image . with the following Dockerfile in order to create an image:
FROM node:16
WORKDIR /app
COPY package.json /app
RUN npm install
CMD ["npx", "dev"]
This will create a node_modules folder inside the container.
Now I create a container like so:
docker create --name my_container -v $(pwd)/src/:/app/src my_image
I created a volume so that when I change files in /src on my host, the same files change in the container.
I start the container like so:
docker start -i my_container
Now everything is working. The problem, though, is the linters.
When I open a file from /src with my text editor on the host machine, the linters don't work because there is no node_modules installed on the host.
If I run npm install on the host machine as well, I will have two different node_modules installations, and what gets compiled might differ between the host and the container.
Is there a way to point the host node_modules to the container node_modules?
If I create a docker volume for node_modules like so
docker create -v $(pwd)/node_modules:/app/node_modules ...
I will delete all the compilation done in the container.
What am I doing wrong?
Thank you.
For those who might have a similar problem, this is what I have done and it is working.
First I build an image:
docker build -t my_image .
Then I copy the node_modules folder from a temporary container to my local folder:
docker create --name tmp_container my_image
docker cp tmp_container:/app/node_modules ./node_modules
docker rm tmp_container
Then I create the final container with a volume:
docker create -v $(pwd)/node_modules:/app/node_modules --name my_container my_image
And now I have exactly what was built and compiled in the container, in my local folder as well.
This can be of course extended to any folder with build artifacts. For example in my case I needed also another folder in order to make the linters work in my text editor. I just copy the folder exactly like I did with node_modules.
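For example, pulling a hypothetical dist/ build folder out works the same way (run it before the docker rm step):
docker cp tmp_container:/app/dist ./dist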

How can I copy node_modules folder out of my docker container onto the build machine?

I am moving an application to a new build pipeline. On CI I am not able to install Node to complete the npm install step.
My idea is to move the npm install step into a Docker image that uses Node, install the node modules, and then copy the node modules back to the host so another process can package up the application.
This is my Dockerfile:
FROM node:9
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY ./dashboard-interface/package.json /usr/src/app/
RUN npm install --silent --production
# Bundle app src
COPY node_modules ./dashboard-interface/node_modules #I thought this would copy the new node_modules back to the host
This runs fine and installs the node modules, but when I try to copy the node_modules directory back to the host I see an error saying:
COPY node_modules ./dashboard-interface/node_modules
COPY failed: stat /var/lib/docker/tmp/docker-builder718557240/node_modules: no such file or directory
So it's clear that the COPY instruction cannot find the node_modules directory into which it has just installed the node modules.
According to its documentation, the COPY instruction copies files from the host into the container, not the other way around.
If you want files from the container to be available outside your container, you can use volumes. Volumes give your container storage that is independent of the container itself, and you can reuse it for other containers in the future.
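For example, one way (a sketch; deps-image is a placeholder tag) is to drop the broken COPY line, build the image, and let the container copy the modules out through a bind mount:
docker build -t deps-image .
docker run --rm -v "$PWD":/out deps-image cp -r /usr/src/app/node_modules /out/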
Let me try to solve the issue you are having.
Here is the Dockerfile
# Use alpine for slimer image
FROM node:9-alpine
RUN mkdir /app
WORKDIR /app
COPY /dashboard-folder/package.json .
RUN npm i --production
This assumes that your project structure is like so:
|root
| Dockerfile
|
\---dashboard-folder
package.json
where root is your working directory that will receive node_modules.
Build this image with docker build . -t name and subsequently use it like so:
docker run -it --rm -v ${PWD}:/app/root name mv node_modules ./root
That should do the trick.
The simple and sure way is to do volume mapping. For example, the docker-compose YAML file will have a volumes section that looks like this:
...
volumes:
  - ./:/usr/src/app
  - /usr/src/app/node_modules
For the docker run command, use:
-v "$(pwd)":/usr/src/app -v /usr/src/app/node_modules
and in the Dockerfile, define:
VOLUME /usr/src/app
VOLUME /usr/src/app/node_modules
But confirm first that running npm install actually created the node_modules directory on the host system.
The main reason you're hitting the problem depends on the OS your host runs. If your host runs Linux there will be no problem, but on Mac or Windows Docker actually runs inside a VM that is hidden from you, so paths cannot be mapped directly to the host system. Instead, you can use a volume.

Using SemanticUI with Docker

I set up a Docker container running an Express app. Here is the Dockerfile:
FROM node:latest
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
RUN npm install -g nodemon
RUN npm install --global gulp
RUN npm i gulp
# Bundle app source
COPY . /usr/src/app
EXPOSE 3000
CMD [ "nodemon", "-L", "./bin/www" ]
As you can see, it uses the Node.js image and creates an app folder in my container that holds my application. At build time the container runs npm install and installs my modules inside the container (thanks to that, I don't have a node_modules folder in my local folder). I want to integrate SemanticUI, which uses gulp, so I installed gulp in my container, but the files created by Semantic are only present in my container; they are not in my local folder. How can I make the files created in my container appear locally so that I can modify them?
I thought that one of Docker's great features was not having node_modules installed in your local app folder. Maybe I am wrong; if so, please correct me.
You can use a data volume to share files between the container and the host machine.
Something like this:
docker run ... -v /path/on/host:/path/in/container ...
or if you are using docker-compose, see volume configuration reference.
...
volumes:
  - /path/on/host:/path/in/container
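For the SemanticUI case specifically, something like this might work (a sketch; semantic/ and semantic.json are the locations SemanticUI's gulp setup typically writes to, so verify them in your project):
volumes:
  - ./semantic:/usr/src/app/semantic
  - ./semantic.json:/usr/src/app/semantic.json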

How should I run thumbd as a service inside a Docker container?

I'd like to run thumbd as a service inside a Node Docker image! At the moment I'm just running it before I start my app, which is no use to me! Is there a way I could set up my Dockerfile to run it as an init.d service on startup without blocking any of my other Docker commands?
My Dockerfile goes as follows:
FROM node:6.2.0
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Thumbd
RUN npm install -g thumbd
RUN mkdir -p /var/log/
RUN echo "" > /var/log/thumbd.log
RUN thumbd server --aws_key=<KEY> --aws_secret=<SECRET> --sqs_queue=<QUEUE> --bucket=<BUCKET> --aws_region=us-west-1 --s3_acl=public-read
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD npm run build && npm start
It's probably easiest to run thumbd in its own container, since it works without direct links to your application. Docker likes to push the idea of a single process per container, too.
FROM node:6.2.0
# Thumbd
RUN set -uex; \
npm install -g thumbd; \
mkdir -p /var/log/; \
touch /var/log/thumbd.log
CMD thumbd server --aws_key=<KEY> --aws_secret=<SECRET> --sqs_queue=<QUEUE> --bucket=<BUCKET> --aws_region=us-west-1 --s3_acl=public-read
You can use Docker Compose to orchestrate running multiple containers in your project.
If you really want to run multiple processes in a container, use an init system like s6 or possibly supervisord.
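A minimal docker-compose.yml tying the two together might look like this (a sketch; Dockerfile.thumbd is an assumed file name for the thumbd Dockerfile above):
version: "3"
services:
  app:
    build: .
    ports:
      - "8080:8080"
  thumbd:
    build:
      context: .
      dockerfile: Dockerfile.thumbd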

docker build + private NPM (+ private docker hub)

I have an application which runs in a Docker container. It requires some private modules from the company's private NPM registry (Sinopia), and accessing these requires user authentication. The Dockerfile is FROM iojs:latest.
I have tried:
1) creating an .npmrc file in the project root; this actually makes no difference and npm seems to ignore it
2) using env variables for NPM_CONFIG_REGISTRY, NPM_CONFIG_USER etc., but the user doesn't get logged in.
Essentially, I seem to have no way of authenticating the user within the docker build process. I was hoping that someone might have run into this problem already (seems like an obvious enough issue) and would have a good way of solving it.
(To top it off, I'm using Automated Builds on Docker Hub (triggered on push) so that our servers can access a private Docker registry with the prebuilt images.)
Is there a good way of either:
1) injecting credentials for npm at build time (so I don't have to commit credentials to my Dockerfile), or
2) doing this another way that I haven't thought of?
I found a somewhat elegant-ish solution in creating a base image for your node.js / io.js containers (you/iojs):
log in to your private npm registry with the user you want to use for docker
copy the .npmrc file that this generates
Example .npmrc:
registry=https://npm.mydomain.com/
username=dockerUser
email=docker@mydomain.com
strict-ssl=false
always-auth=true
//npm.mydomain.com/:_authToken="someAuthToken"
create a Dockerfile that copies the .npmrc file appropriately.
Here's my Dockerfile (based on iojs:onbuild):
FROM iojs:2.2.1
MAINTAINER YourSelf
# Exclude the NPM cache from the image
VOLUME /root/.npm
# Create the app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Copy npm config
COPY .npmrc /root/.npmrc
# Install app
ONBUILD COPY package.json /usr/src/app/
ONBUILD RUN npm install
ONBUILD COPY . /usr/src/app
# Run
CMD [ "npm", "start" ]
Make all your node.js/io.js containers FROM you/iojs and you're good to go.
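Thanks to the ONBUILD triggers, a downstream project's Dockerfile then reduces to something like this (a sketch):
FROM you/iojs
EXPOSE 8080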
In 2020 we've got BuildKit available. You don't have to pass secrets via COPY or ENV anymore, as it's not considered safe.
Sample Dockerfile:
# syntax=docker/dockerfile:experimental
FROM node:13-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN --mount=type=ssh --mount=type=secret,id=npmrc,dst=$HOME/.npmrc \
yarn install --production --ignore-optional --frozen-lockfile
# More stuff...
Then, your build command can look like this:
docker build --no-cache --progress=plain --secret id=npmrc,src=/path-to/.npmrc .
For more details, check out: https://docs.docker.com/develop/develop-images/build_enhancements/#new-docker-build-secret-information
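On older Docker versions you may need to enable BuildKit explicitly via an environment variable (a sketch, reusing the build command above):
DOCKER_BUILDKIT=1 docker build --secret id=npmrc,src=/path-to/.npmrc .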
For those who are finding this via Google and are still looking for an alternative that doesn't leave your private npm tokens on your Docker images and containers:
We were able to get this working by running npm install prior to docker build (this lets you keep your .npmrc outside of your image/container). Once the private modules have been installed locally, you can copy your files across to the image as part of your build:
# Make sure the node_modules contain only the production modules when building this image
COPY . /usr/src/app
You also need to make sure that your .dockerignore file doesn't exclude the node_modules folder.
Once you have the folder copied into your image, the trick is to run npm rebuild instead of npm install. This will rebuild any native dependencies that are affected by differences between your build server and your Docker OS:
FROM nodesource/vivid:LTS
# For application location, default from nodesource is /usr/src/app
# Make sure the node_modules contain only the production modules when building this image
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN npm rebuild
CMD npm start
I would recommend not using a .npmrc file but instead use npm config set. This works like a charm and is much cleaner:
ARG AUTH_TOKEN_PRIVATE_REGISTRY
FROM node:latest
ARG AUTH_TOKEN_PRIVATE_REGISTRY
ENV AUTH_TOKEN_PRIVATE_REGISTRY=${AUTH_TOKEN_PRIVATE_REGISTRY}
WORKDIR /home/usr/app
RUN npm config set @my-scope:registry https://my.private.registry && npm config set '//my.private.registry/:_authToken' ${AUTH_TOKEN_PRIVATE_REGISTRY}
RUN npm ci
CMD ["bash"]
The BuildKit answer is correct, except it runs everything as root, which is considered bad security practice.
Here's a Dockerfile that works and uses the node user that the official node image sets up. Note that the secret mount has the uid parameter set; otherwise it mounts as root, which the node user can't read. Note also the COPY commands that chown to node:node.
FROM node:12-alpine
USER node
WORKDIR /home/node/app
COPY --chown=node:node package*.json ./
RUN --mount=type=secret,id=npm,target=./.npmrc,uid=1000 npm ci
COPY --chown=node:node index.js .
COPY --chown=node:node src ./src
CMD [ "node", "index.js" ]
@paul-s's should be the accepted answer now because it's more recent, IMO. Just as a complement: you mentioned you're using the docker/build-push-action action, so your workflow must be as follows:
- uses: docker/build-push-action@v3
  with:
    context: .
    # ... all other config inputs
    secret-files: |
      NPM_CREDENTIALS=./.npmrc
And then, of course, bind the .npmrc file from your Dockerfile using the ID you specified. In my case I'm using a Debian-based image (uids start from 1000). Anyways:
RUN --mount=type=secret,id=NPM_CREDENTIALS,target=<container-workdir>/.npmrc,uid=1000 \
npm install --only=production
