I created a Dockerfile for a Node.js project. The project contains a package.json with many scripts:
"scripts": {
"start": "npm run",
"server": "cd server && npm start",
"generator": "cd generator && npm start",
...
},
I need to run server and generator in my Docker image. How can I achieve this?
I tried:
CMD ls ; npm run server ; npm run generator
This won't find the package.json, because the shell form seems to run within /bin/sh -c.
CMD ["npm","run","server"] is also not working and is missing the 2nd command
The ls in the first try showed me that all files are in place (including the package.json).
For the sake of completeness, the project in question is https://github.com/seekwhencer/node-bilder-brause (not mine).
The current Dockerfile:
FROM node:14
# Create app directory
WORKDIR /usr/src/app
# Bundle app source
COPY ./* ./
RUN npm install
EXPOSE 3050
EXPOSE 3055
CMD ls ; npm run server ; npm run generator
The typical way to run multiple commands in a CMD or ENTRYPOINT is to write all of the commands to a file and then run that file. This is demonstrated in the Dockerfile below.
This Dockerfile also installs imagemagick, which is a dependency of the package the OP is trying to use. I also changed the base image to node:14-alpine because it is much smaller than node:14 but works just as well for this purpose.
FROM node:14-alpine
# Install system dependencies for this package.
RUN apk add --no-cache imagemagick
# Create app directory
WORKDIR /usr/src/app
# Bundle app source
COPY . .
RUN npm install \
# Install server and generator.
&& npm run postinstall \
# Write entrypoint.
&& printf "ls\nnpm run server\nnpm run generator\n" > entrypoint.sh
EXPOSE 3050
EXPOSE 3055
CMD ["/bin/sh", "entrypoint.sh"]
docker build --tag nodebilderbrause .
docker run --rm -it nodebilderbrause
The contents of entrypoint.sh are written in the Dockerfile. Here is what the file would look like.
ls
npm run server
npm run generator
I found another way; adding it for the sake of completeness and as a reference for myself:
https://www.npmjs.com/package/concurrently
adding
RUN npm install -g concurrently
enables:
CMD ["concurrently","npm:server", "npm:generator"]
If you need to run two separate processes, the typical approach is to run two separate containers. You can run both containers off the same image; it's very straightforward to override the command part of a container when you start it.
You need to pick something to be the default CMD. Given the package.json you show, for example, you can specify
CMD npm start
When you actually go to run the container, you can specify an alternate command: anything after the image name is taken as the command. (If you're using Docker Compose, specify command:.) A typical docker run setup might look like:
docker build -t bilder-brause .
docker network create photos
docker run \
--name server \
--net photos \
-d \
-p 3050:3050 \
bilder-brause \
npm run server
docker run \
--name generator \
--net photos \
-d \
-p 3055:3055 \
bilder-brause \
npm run generator
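If you prefer Compose, here is a rough sketch of a docker-compose.yml doing the same thing; the service names and the assumption that both services reuse the image built above are mine:
version: "3.8"
services:
  server:
    image: bilder-brause
    command: npm run server
    ports:
      - "3050:3050"
  generator:
    image: bilder-brause
    command: npm run generator
    ports:
      - "3055:3055"
Compose puts both services on a shared default network, which replaces the manual docker network create step.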
You could build separate images for the different components with separate EXPOSE and CMD directives if you really wanted:
FROM bilder-brause
EXPOSE 3050
CMD npm run server
Building these is a minor hassle; there is no way to specify in Compose that one local image is built FROM another so the ordering might not turn out correctly, for example.
Related
I am running a single Node.js script like so: docker run -it --rm -u node --name build_container -v $(pwd):/home/node/app -w "/home/node/app" node:lts bash -c "yarn install --no-cache --frozen-lockfile".
However, the script log displays the info message No lockfile found and, what is even weirder, a message saying that a package-lock.json was found, even though the working directory contains no package-lock.json.
Does anyone have an idea what the issue could be?
I would suggest using your own Dockerfile to build your image, running the install inside the build itself, like so:
Dockerfile
FROM node:12-alpine
# Create work directory
RUN mkdir -p /express
WORKDIR /express
# Bundle app sources
COPY . .
# Build app
RUN yarn install --prod --frozen-lockfile && yarn run build
EXPOSE 3000
# Set default NODE_ENV to production
ENV NODE_ENV production
ENTRYPOINT [ "yarn", "start" ]
.dockerignore
node_modules
build
dist
config
docs
*.log
.git
.vscode
And then build the image:
docker build -t <your image name> -f <path to Dockerfile (if omitted, ./Dockerfile in the build context is used)> <path to your code>
Once built, run it as you would any normal image, as everything is already baked in.
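For example, a hedged sketch of the build and run steps, assuming a hypothetical image name my-express-app and publishing the exposed port 3000:
docker build -t my-express-app .
docker run --rm -p 3000:3000 my-express-app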
I have a static website built with Gridsome. I'm experimenting with Docker containerization, and I came up with the following Dockerfile:
FROM node:14.2.0-alpine3.10
RUN apk update && apk upgrade
RUN apk --no-cache add git g++ gcc libgcc libstdc++ linux-headers make python yarn
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
USER node
RUN npm i --global gridsome
RUN yarn global add serve
COPY --chown=node:node gridsome-website/ /home/node/build/
RUN echo && ls /home/node/build/ && echo
WORKDIR /home/node/build
USER node
RUN npm cache clean --force
RUN npm clean-install
# My attempt to serve the website from within, build is successful but serve command doesn't work
CMD ~/.npm-global/bin/gridsome build && serve -d dist/
Gridsome uses port 8080 by default. After running the container via:
docker run --name my-website -d my-website-image
It doesn't fail, but I can't access my website at http://localhost:8080; the container stops execution right after I run it. I tried copying the dist/ folder from my container via:
$ docker cp my-website:/home/node/build/dist ./dist
then serving it manually from terminal using:
$ serve -d dist/
It works from the terminal, but why does it fail from within the container?
The point of Gridsome is that it produces a static site that you can host anywhere you can put files, right? You don't need Node.js or anything except a simple web server to host it.
If you want to produce a clean production Docker image to serve the static files built by Gridsome, then that's a good use-case for a multistage Dockerfile. In the first stage you build your Gridsome project. In the second and final stage you can go with a clean nginx image that contains your dist folder and nothing else.
It could be as simple as this:
FROM node:current AS builder
WORKDIR /build
COPY gridsome-website ./
RUN npm install && gridsome build
FROM nginx:stable
COPY --from=builder /build/dist /usr/share/nginx/html/
EXPOSE 80
After you build and run the image with 8080->80 port mapping, you can access your site at http://127.0.0.1:8080.
docker build -t my-website-image .
docker run -p 8080:80 my-website-image
It's probably a good idea to set up a .dockerignore file as well that ignores at least node_modules.
$ cat .dockerignore
gridsome-website/node_modules
I'm trying to use nodemon inside a Docker container:
Dockerfile
FROM node:carbon
RUN npm install -g nodemon
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "nodemon" ]
Build/Run command
docker build -t tag/apt .
docker run -p 49160:8080 -v /local/path/to/apt:/usr/src/app -d tag/apt
Attaching a local volume to the container to watch for changes in the code results in some kind of override, and nodemon complains that it can't find the node modules (any of them). How can I solve this?
In your Dockerfile, you are running npm install after copying your package*.json files. A node_modules directory gets correctly created in /usr/src/app and you're good to go.
When you mount your local directory on /usr/src/app, though, the contents of that directory inside your container are overridden with your local version of the node project, which apparently is lacking the node_modules directory, causing the error you are experiencing.
You need to run npm install on the running container after you mounted your directory. For example you could run something like:
docker exec -ti <containername> npm install
Please note that, in order to be able to enter the container, you'll have to temporarily change your CMD instruction to something like:
CMD ["sleep", "3600"]
This will cause a node_modules directory to be created in your local directory and your container should run nodemon correctly (after switching back to your current CMD).
TL;DR: run npm install in a sub-folder, then move the node_modules folder to the root.
Try this configuration; it should help you.
FROM node:carbon
RUN npm install -g nodemon
WORKDIR /usr/src/app
COPY package*.json /usr/src/app/
RUN npm install && mv /usr/src/app/node_modules /node_modules
COPY . /usr/src/app
EXPOSE 8080
CMD [ "nodemon" ]
As the other answer said, even though you have run npm install in your WORKDIR, when you mount the volume the content of the WORKDIR is temporarily replaced by your mounted folder, in which npm install has not been run.
As Node searches for required packages in several locations, a workaround is to move the 'installed' node_modules folder to the root, which is one of those locations.
Doing so, you can still update your code, until you require a new package, at which point the image needs another build.
I referenced the Dockerfile from this Docker sample project.
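If you want to see why moving node_modules to the root works, you can print Node's module lookup paths inside the running container; /node_modules should appear in the list. A quick check, with <containername> standing in for your container:
docker exec -ti <containername> node -e "console.log(module.paths)"
# expect something like: [ '/usr/src/app/node_modules', '/usr/src/node_modules', '/usr/node_modules', '/node_modules', ... ]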
In a JavaScript or Node.js application, when we bind the source files into a Docker container using a bind volume (either with the docker command or docker-compose), we end up overriding the node_modules folder. To overcome this issue, you need to use an anonymous volume. With an anonymous volume, we provide only the destination folder path, as opposed to a bind volume where we specify a source:destination folder path.
General syntax
--volume <container file system directory absolute path>:<read write access>
An example docker run command
docker container run \
--rm \
--detach \
--publish 3000:3000 \
--name hello-dock-dev \
--volume $(pwd):/home/node/app \
--volume /home/node/app/node_modules \
hello-dock:dev
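The same idea in docker-compose form might look roughly like this; the service name and paths simply mirror the docker run flags above:
version: "3.8"
services:
  hello-dock-dev:
    image: hello-dock:dev
    ports:
      - "3000:3000"
    volumes:
      # bind volume: mounts the project source into the container
      - .:/home/node/app
      # anonymous volume: shields node_modules from being overridden by the bind mount
      - /home/node/app/node_modules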
For further reference, you can check the handbook by Farhan Hasin Chowdhury.
Maybe there's no need to mount the whole project. In this case, I would only mount the directory where I put all the source files, e.g. src/.
This way you won't have any problem with the node_modules/ directory.
Also, if you are using Windows, you may need to add the -L (--legacy-watch) option to the nodemon command, as you can see in this issue. So it would be nodemon -L.
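A minimal sketch of that run command, reusing the paths and tag from the question (the src/ layout is an assumption):
# mount only the source directory so the image's node_modules stays intact
docker run -p 49160:8080 \
  -v /local/path/to/apt/src:/usr/src/app/src \
  -d tag/apt
On Windows you could additionally append nodemon -L after the image name to override the command, as described above.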
I'd like to run thumbd as a service inside a Node Docker image! At the moment I'm just running it before I start my app, which is no use to me! Is there a way I could set up my Dockerfile to run it as an init.d service on startup without blocking any of my other Docker commands?
My Dockerfile goes as follows:
FROM node:6.2.0
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Thumbd
RUN npm install -g thumbd
RUN mkdir -p /var/log/
RUN echo "" > /var/log/thumbd.log
RUN thumbd server --aws_key=<KEY> --aws_secret=<SECRET> --sqs_queue=<QUEUE> --bucket=<BUCKET> --aws_region=us-west-1 --s3_acl=public-read
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD npm run build && npm start
It's probably easiest to run thumbd in its own container, since it works without any direct link to your application. Docker likes to push the idea of a single process per container, too.
FROM node:6.2.0
# Thumbd
RUN set -uex; \
npm install -g thumbd; \
mkdir -p /var/log/; \
touch /var/log/thumbd.log
CMD thumbd server --aws_key=<KEY> --aws_secret=<SECRET> --sqs_queue=<QUEUE> --bucket=<BUCKET> --aws_region=us-west-1 --s3_acl=public-read
You can use Docker Compose to orchestrate running multiple containers in your project.
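A rough sketch of such a Compose file, assuming the application keeps its original Dockerfile and the thumbd image is built from a separate file named Dockerfile.thumbd (that file name is an assumption):
version: "3.8"
services:
  app:
    build: .
    ports:
      - "8080:8080"
  thumbd:
    # built from the thumbd-only Dockerfile shown above; its CMD runs by default
    build:
      context: .
      dockerfile: Dockerfile.thumbd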
If you really want to run multiple processes in a container, use an init system like s6 or possibly supervisord.
I'm containerizing a Node.js app. My Dockerfile looks like this:
FROM node:4-onbuild
ADD ./ /egp
RUN cd /egp \
&& apt-get update \
&& apt-get install -y r-base python-dev python-matplotlib python-pil python-pip \
&& ./init.R \
&& pip install wordcloud \
&& echo "ABOUT TO do NPM" \
&& npm install -g bower gulp \
&& echo "JUST FINISHED ALL INSTALLATION"
EXPOSE 5000
# CMD npm start > app.log
CMD ["npm", "start", ">", "app.log"]
When I DON'T use the Dockerfile, and instead run
docker run -it -p 5000:5000 -v $(pwd):/egp node:4-onbuild /bin/bash
I can then paste the value of the RUN command and it all works perfectly, and then execute the npm start command and I'm good to go. However, when I instead attempt docker build ., it seems to run into an endless loop attempting to install npm packages (and never displays my echo commands), until it crashes with an out-of-memory error. Where have I gone wrong?
EDIT
Here is a minimal version of the EGP folder that exhibits the same behavior: logging in and pasting the whole RUN command works, but docker build does not. It is a .tar.gz file (though the name might download without one of the dots):
http://orys.us/egpbroken
The node:4-onbuild image contains the following Dockerfile:
FROM node:4.4.7
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ONBUILD COPY package.json /usr/src/app/
ONBUILD RUN npm install
ONBUILD COPY . /usr/src/app
CMD [ "npm", "start" ]
The three ONBUILD commands run before your ADD or RUN commands are kicked off, and the endless loop appears to come from that npm install command. When you launch the container directly, the ONBUILD commands are skipped, since you didn't build a child image. Change your FROM line to:
FROM node:4
and you should have your expected results.