I'm containerizing a nodejs app. My Dockerfile looks like this:
FROM node:4-onbuild
ADD ./ /egp
RUN cd /egp \
&& apt-get update \
&& apt-get install -y r-base python-dev python-matplotlib python-pil python-pip \
&& ./init.R \
&& pip install wordcloud \
&& echo "ABOUT TO do NPM" \
&& npm install -g bower gulp \
&& echo "JUST FINISHED ALL INSTALLATION"
EXPOSE 5000
# CMD npm start > app.log
CMD ["npm", "start", ">", "app.log"]
When I DON'T use the Dockerfile, and instead run
docker run -it -p 5000:5000 -v $(pwd):/egp node:4-onbuild /bin/bash
I can then paste the value of the RUN command and it all works perfectly, and then execute the npm start command and I'm good to go. However, when I instead attempt docker build ., it seems to run into an endless loop installing npm packages (and never displays my echo output), until it crashes with an out-of-memory error. Where have I gone wrong?
EDIT
Here is a minimal version of the EGP folder that exhibits the same behavior: logging in and pasting the whole RUN command works, but docker build does not. It is a .tar.gz file (though the name might download without one of the dots):
http://orys.us/egpbroken
The node:4-onbuild image contains the following Dockerfile:
FROM node:4.4.7
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ONBUILD COPY package.json /usr/src/app/
ONBUILD RUN npm install
ONBUILD COPY . /usr/src/app
CMD [ "npm", "start" ]
The three ONBUILD commands run before your ADD or RUN commands are kicked off, and the endless loop appears to come from the npm install command they trigger. When you launch the container directly, the ONBUILD commands are skipped, since you didn't build a child image. Change your FROM line to:
FROM node:4
and you should have your expected results.
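A minimal sketch of the adjusted Dockerfile (apt-get/R/pip steps omitted for brevity); note that without the ONBUILD triggers nothing runs npm install automatically anymore, so add it to your own RUN step:
FROM node:4
ADD ./ /egp
RUN cd /egp \
    && npm install \
    && npm install -g bower gulp
EXPOSE 5000
CMD ["npm", "start"]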
I created a Dockerfile for a nodejs project. It contains a package.json with many scripts.
"scripts": {
"start": "npm run",
"server": "cd server && npm start",
"generator": "cd generator && npm start",
...
},
I need to run server and generator in my Docker image. How can I achieve this?
I tried:
CMD ls; npm run server; npm run generator — this won't find the package.json, because the shell form seems to run within /bin/sh -c.
CMD ["npm","run","server"] isn't working either, and it is missing the 2nd command.
The ls in the first try showed me that all files are in place (including package.json).
For the sake of completeness, the project in question is https://github.com/seekwhencer/node-bilder-brause (not mine).
The current Dockerfile:
FROM node:14
# Create app directory
WORKDIR /usr/src/app
# Bundle app source
COPY ./* ./
RUN npm install
EXPOSE 3050
EXPOSE 3055
CMD ls ; npm run server ; npm run generator
The typical way to run multiple commands in a CMD or ENTRYPOINT is to write all of the commands to a file and then run that file. This is demonstrated in the Dockerfile below.
This Dockerfile also installs imagemagick, which is a dependency of the package the OP is trying to use. I also changed the base image to node:14-alpine, because it is much smaller than node:14 but works just as well for this purpose.
FROM node:14-alpine
# Install system dependencies for this package.
RUN apk add --no-cache imagemagick
# Create app directory
WORKDIR /usr/src/app
# Bundle app source
COPY . .
RUN npm install \
# Install server and generator.
&& npm run postinstall \
# Write entrypoint.
&& printf "ls\nnpm run server\nnpm run generator\n" > entrypoint.sh
EXPOSE 3050
EXPOSE 3055
CMD ["/bin/sh", "entrypoint.sh"]
docker build --tag nodebilderbrause .
docker run --rm -it nodebilderbrause
The contents of entrypoint.sh are written in the Dockerfile. Here is what the file would look like:
ls
npm run server
npm run generator
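One caveat: if npm run server starts a long-running process, the generator line is never reached. A variant of entrypoint.sh that runs both in parallel (a sketch, assuming the two are meant to run concurrently) could be:
#!/bin/sh
set -e               # abort if anything fails
npm run server &     # launch the server in the background
npm run generator &  # launch the generator in the background
wait                 # keep the container alive while both run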
I found another way; adding it for the sake of completeness and as a reference for myself:
https://www.npmjs.com/package/concurrently
adding
RUN npm install -g concurrently
enables:
CMD ["concurrently","npm:server", "npm:generator"]
If you need to run two separate processes, the typical approach is to run two separate containers. You can run both containers off the same image; it's very straightforward to override the command part of a container when you start it.
You need to pick something to be the default CMD. Given the package.json you show, for example, you can specify
CMD npm start
When you actually go to run the container, you can specify an alternate command: anything after the image name is taken as the command. (If you're using Docker Compose, specify command:.) A typical docker run setup might look like:
docker build -t bilder-brause .
docker network create photos
docker run \
--name server \
--net photos \
-d \
-p 3050:3050 \
bilder-brause \
npm run server
docker run \
--name generator \
--net photos \
-d \
-p 3055:3055 \
bilder-brause \
npm run generator
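The Docker Compose equivalent would be a sketch like this (service names are assumptions; Compose attaches both services to a shared default network automatically):
services:
  server:
    build: .
    command: npm run server
    ports:
      - "3050:3050"
  generator:
    build: .
    command: npm run generator
    ports:
      - "3055:3055"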
You could build separate images for the different components, with separate EXPOSE and CMD directives, if you really wanted:
FROM bilder-brause
EXPOSE 3050
CMD npm run server
Building these is a minor hassle; there is no way to specify in Compose that one local image is built FROM another, so the build ordering might not turn out correctly, for example.
Dockerfile:
FROM node:${NODE_VERSION}-buster-slim
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
RUN apt-get update && \
apt-get install -qqy --no-install-recommends \
ca-certificates \
dumb-init \
build-essential && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ENV HOME=/home/node
WORKDIR $HOME/app
COPY --chown=node:node . .
RUN set -xe && \
chown -R node /usr/local/lib /usr/local/include /usr/local/share /usr/local/bin && \
npm install && npm cache clean --force
EXPOSE 4200
CMD ["node"]
docker-compose.yml:
webapp :
container_name : webapp
hostname : webapp
build :
dockerfile : Dockerfile
context : ${PWD}/app
image : webapp:development
command :
- npm install
- npm run start
volumes :
- ${PWD}/webapp:/app
networks :
- backend
ports :
- 4200:4200
restart : on-failure
tty : true
stdin_open : true
env_file :
- variables.env
I can run the image with
docker run webapp bash -c "npm install; npm run start"
but when I run the compose file it says
webapp | [dumb-init] npm install: No such file or directory
I tried prefixing the docker-compose command with "node", but I get the same error, just with node npm install: no such file or directory instead.
Can someone tell me where things are going wrong?
When you use the list form of command: in the docker-compose.yml file (or the JSON-array form of Dockerfile CMD) you are providing a list of words in a single command, not a list of separate commands. Once this gets combined with the ENTRYPOINT in the Dockerfile, the container command is
/usr/bin/dumb-init -- 'npm install' 'npm run start'
and when there isn't a /usr/bin/npm\ install file (including the space in the file name) you get that error.
Since you COPY the application code in the Dockerfile and run npm install there, you don't need to repeat this step at application start time. You should be able to delete the volumes: and command: part of the docker-compose.yml file to use what's built in to the image.
If you really need to repeat this command:, do it in exactly the form you specified in the docker run command, without list syntax:
command: bash -c 'npm install; npm run start'
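In the context of the docker-compose.yml above, the service definition would look something like this (other keys omitted):
webapp:
  image: webapp:development
  command: bash -c 'npm install; npm run start'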
I'm running a Node.js application that uses the html-pdf module, which in turn relies on phantomjs, to generate PDF files from HTML. The app runs within a Docker container.
Dockerfile:
FROM node:8-alpine
WORKDIR /mydirectory
# [omitted] git clone, npm install etc....
RUN npm install -g html-pdf --unsafe-perm
VOLUME /mydirectory
ENTRYPOINT ["node"]
Which builds an image just fine.
app.js
const witch = require('witch');
const pdf = require('html-pdf');
const phantomPath = witch('phantomjs-prebuilt', 'phantomjs');
function someFunction() {
pdf.create('some html content', { phantomPath: `${this._phantomPath}` });
}
// ... and then some other stuff that eventually calls someFunction()
And then call docker run <the image name> app.js
When someFunction gets called, the following error message is thrown:
Error: spawn /mydirectory/node_modules/phantomjs-prebuilt/lib/phantom/bin/phantomjs ENOENT
This happens both when deploying the container on a cloud linux server or locally on my machine.
I have tried adding RUN npm install -g phantomjs-prebuilt --unsafe-perm to the Dockerfile, to no avail (it makes docker build fail because the installation of html-pdf cannot validate the installation of phantomjs).
I'm also obviously not a fan of using the --unsafe-perm argument of npm install, so if anybody has a solution that bypasses that, it would be fantastic.
Any help is greatly appreciated!
This is what ended up working for me, in case this is helpful to anyone:
FROM node:8-alpine
WORKDIR /mydirectory
# [omitted] git clone, npm install etc....
ENV PHANTOMJS_VERSION=2.1.1
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
RUN apk update && apk add --no-cache fontconfig curl curl-dev && \
cd /tmp && curl -Ls https://github.com/dustinblackman/phantomized/releases/download/${PHANTOMJS_VERSION}/dockerized-phantomjs.tar.gz | tar xz && \
cp -R lib lib64 / && \
cp -R usr/lib/x86_64-linux-gnu /usr/lib && \
cp -R usr/share /usr/share && \
cp -R etc/fonts /etc && \
curl -k -Ls https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-${PHANTOMJS_VERSION}-linux-x86_64.tar.bz2 | tar -jxf - && \
cp phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/local/bin/phantomjs
USER node
RUN npm install -g html-pdf
VOLUME /mydirectory
ENTRYPOINT ["node"]
I had a similar problem; the only workaround for me was to download and copy PhantomJS manually. This is the example from my Dockerfile; it should be the last thing before the EXPOSE command. By the way, I use a node:10.15.3 image.
RUN wget -O /tmp/phantomjs-2.1.1-linux-x86_64.tar.bz2 https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2
RUN mkdir /tmp/phantomjs && mkdir -p /usr/local/lib/node_modules/phantomjs/lib/phantom/
RUN tar xvjf /tmp/phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /tmp/phantomjs
RUN mv /tmp/phantomjs/phantomjs-2.1.1-linux-x86_64/* /usr/local/lib/node_modules/phantomjs/lib/phantom/
RUN rm -rf /tmp/phantomjs-2.1.1-linux-x86_64.tar.bz2 && rm -rf /tmp/phantomjs
Don't forget to update your paths. It's only a workaround; I didn't have time to figure it out properly yet.
I came to this question in March 2021 and had the same issue dockerizing Highcharts: it worked on my machine but failed on docker run (the same spawn phantomjs error). In the end, the solution was to find a FROM node version that worked. This Dockerfile works using the latest Node Docker image and an almost-latest highcharts npm version (always pin specific npm versions):
FROM node:15.12.0
ENV ACCEPT_HIGHCHARTS_LICENSE YES
# see available versions of highcharts at https://www.npmjs.com/package/highcharts-export-server
RUN npm install highcharts-export-server@2.0.30 -g
EXPOSE 7801
# run the container using: docker run -p 7801:7801 -t CONTAINER_TAG
CMD [ "highcharts-export-server", "--enableServer", "1" ]
I have the following docker file
FROM ubuntu:14.04
#Install Node
RUN apt-get update -y
RUN apt-get upgrade -y
RUN apt-get install nodejs -y
RUN apt-get install nodejs-legacy -y
RUN apt-get install npm -y
RUN update-alternatives --install /usr/bin/node node /usr/bin/nodejs 10
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# COPY distribution
COPY dist dist
COPY package.json package.json
# Substitute dependencies from environment variables
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 8000
And here is the entrypoint script
#!/bin/sh
cp package.json /usr/src/app/dist/
cd /usr/src/app/dist/
echo "starting server"
exec npm start
When I run the image it fails with this error
sh: 1: http-server: not found
npm ERR! weird error 127
npm WARN This failure might be due to the use of legacy binary "node"
npm WARN For further explanations, please read
/usr/share/doc/nodejs/README.Debian
I tried various kinds of installation but still get the same error. I also checked that node_modules contains the http-server executable, and it does. I tried forcing 777 permissions on all the files, but I'm still running into the same error.
What could be the problem?
It looks like you're just missing an npm install call somewhere, so neither the node_modules directory nor any of its contents (like http-server) is present in the image. After the COPY package.json package.json line, if you add a RUN npm install line, that might be all you need.
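In other words, a sketch of just the affected part of the Dockerfile:
COPY dist dist
COPY package.json package.json
RUN npm install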
There are a few other things that could be simpler too, though: for example, you probably don't need an ENTRYPOINT script to run the app and copy package.json, since that's already done. Here's a simplified version of a Node Docker image I've been running with. I'm using the base Node images, which, I believe, are Linux-based, but you could probably keep the Ubuntu stuff if you wanted to, and it shouldn't be an issue.
FROM node:6.9.5
# Create non-root user to run app with
RUN useradd --user-group --create-home --shell /bin/bash my-app
# Set working directory
WORKDIR /home/my-app
COPY package.json ./
# Change user so that everything that's npm-installed belongs to it
USER my-app
# Install dependencies
RUN npm install --no-optional && npm cache clean
# Switch to root and copy over the rest of our code
# This is here, after the npm install, so that code changes don't trigger an un-caching
# of the npm install line
USER root
COPY .eslintrc index.js ./
COPY app ./app
RUN chown -R my-app:my-app /home/my-app
USER my-app
CMD [ "npm", "start" ]
It's good practice to create a specific user to own and run your code rather than using root, but, as I understand it, you need root to put files onto the image, hence the switching of users a couple of times here (which is what USER ... does).
I'll also note that I use this image with Docker Compose for local development, which is what the comment about code changes is referring to.
I am currently developing a Node backend for my application.
When dockerizing it (docker build .), the longest phase is RUN npm install. The RUN npm install instruction runs on every small server-code change, which impedes productivity through increased build time.
I found that running npm install where the application code lives and adding node_modules to the container with the ADD instruction solves this issue, but it is far from best practice. It kind of breaks the whole idea of dockerizing it, and it causes the image to weigh much more.
Any other solutions?
OK, so I found this great article about efficiency when writing a Dockerfile.
This is an example of a bad Dockerfile, adding the application code before running the RUN npm install instruction:
FROM ubuntu
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get -y install python-software-properties git build-essential
RUN add-apt-repository -y ppa:chris-lea/node.js
RUN apt-get update
RUN apt-get -y install nodejs
WORKDIR /opt/app
COPY . /opt/app
RUN npm install
EXPOSE 3001
CMD ["node", "server.js"]
By dividing the copy of the application into two COPY instructions (one for the package.json file and the other for the rest of the files) and running the npm install instruction before adding the actual code, any code change won't trigger the RUN npm install instruction; only changes to package.json will trigger it. A better-practice Dockerfile:
FROM ubuntu
MAINTAINER David Weinstein <david@bitjudo.com>
# install our dependencies and nodejs
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get -y install python-software-properties git build-essential
RUN add-apt-repository -y ppa:chris-lea/node.js
RUN apt-get update
RUN apt-get -y install nodejs
# use changes to package.json to force Docker not to use the cache
# when we change our application's nodejs dependencies:
COPY package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /opt/app && cp -a /tmp/node_modules /opt/app/
# From here we load our application's code in, therefore the previous docker
# "layer" thats been cached will be used if possible
WORKDIR /opt/app
COPY . /opt/app
EXPOSE 3000
CMD ["node", "server.js"]
This is where the package.json file is added, its dependencies are installed, and they are copied into the container's WORKDIR, where the app lives:
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /opt/app && cp -a /tmp/node_modules /opt/app/
To avoid the npm install phase on every docker build, just copy those lines and change /opt/app to the location where your app lives inside the container.
Weird! No one mentions multi-stage builds.
# ---- Base Node ----
FROM alpine:3.5 AS base
# install node
RUN apk add --no-cache nodejs-current tini
# set working directory
WORKDIR /root/chat
# Set tini as entrypoint
ENTRYPOINT ["/sbin/tini", "--"]
# copy project file
COPY package.json .
#
# ---- Dependencies ----
FROM base AS dependencies
# install node packages
RUN npm set progress=false && npm config set depth 0
RUN npm install --only=production
# copy production node_modules aside
RUN cp -R node_modules prod_node_modules
# install ALL node_modules, including 'devDependencies'
RUN npm install
#
# ---- Test ----
# run linters, setup and tests
FROM dependencies AS test
COPY . .
RUN npm run lint && npm run setup && npm run test
#
# ---- Release ----
FROM base AS release
# copy production node_modules
COPY --from=dependencies /root/chat/prod_node_modules ./node_modules
# copy app sources
COPY . .
# expose port and define CMD
EXPOSE 5000
CMD npm run start
Awesome tutorial here: https://codefresh.io/docker-tutorial/node_docker_multistage/
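One bonus of the named stages: docker build --target lets you build only up to a given stage, e.g. to run just the lint/test stage in CI (the image tags here are hypothetical):
docker build --target test -t chat:test .
docker build --target release -t chat:release .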
I've found that the simplest approach is to leverage Docker's copy semantics:
The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.
This means that if you first explicitly copy the package.json file and then run the npm install step, that layer can be cached, and then you can copy the rest of the source directory. If the package.json file has changed, it will be new, npm install will re-run, and the result will be cached for future builds.
A snippet from the end of a Dockerfile would look like:
# install node modules
WORKDIR /usr/app
COPY package.json /usr/app/package.json
RUN npm install
# install application
COPY . /usr/app
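With that layout, a rebuild after a source-only change reuses the cached npm install layer (the image tag is hypothetical):
docker build -t myapp .   # first build: runs npm install
# edit a source file (but not package.json), then:
docker build -t myapp .   # the npm install layer comes from the cache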
I imagine you may already know, but you could include a .dockerignore file in the same folder containing
node_modules
npm-debug.log
to avoid bloating your image when you push it to Docker Hub.
You don't need to use a tmp folder; just copy package.json to your container's application folder, do the install work, and copy all the files later.
COPY app/package.json /opt/app/package.json
RUN cd /opt/app && npm install
COPY app /opt/app
I wanted to use volumes, not COPY, and keep using Docker Compose, and I could do it by chaining the commands at the end:
FROM debian:latest
RUN apt -y update \
&& apt -y install curl \
&& curl -sL https://deb.nodesource.com/setup_12.x | bash - \
&& apt -y install nodejs
RUN apt -y update \
&& apt -y install wget \
build-essential \
net-tools
RUN npm install pm2 -g
RUN mkdir -p /home/services_monitor/ && touch /home/services_monitor/
RUN chown -R root:root /home/services_monitor/
WORKDIR /home/services_monitor/
CMD npm install \
&& pm2-runtime /home/services_monitor/start.json