How do I include my node_modules or specify an npm login / auth token for private npm modules?
It appears that GAE no longer lets the node_modules folder be included at all (see this issue), and there doesn't appear to be a hook to allow npm to log in or set a token.
If you include a .npmrc file local to the application you want to deploy, it will get copied into the app source and be used during the npm install. You can have a build step create this file or copy it from your home directory. See this npm article.
The .npmrc file should look like this:
//registry.npmjs.org/:_authToken=<token here>
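For example, a build step can generate this file from a secret environment variable (NPM_TOKEN here is an assumed variable name):
# write the registry auth token into .npmrc next to package.json before deploying
echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > .npmrc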
The Dockerfile I used looks like so:
# Use the base App Engine Docker image, based on debian jessie.
FROM gcr.io/google_appengine/base
# Install updates and dependencies
RUN apt-get update -y && apt-get install --no-install-recommends -y -q curl python build-essential git ca-certificates libkrb5-dev && \
apt-get clean && rm /var/lib/apt/lists/*_*
# Install the latest release of nodejs
RUN mkdir /nodejs && curl https://nodejs.org/dist/v6.2.1/node-v6.2.1-linux-x64.tar.gz | tar xvzf - -C /nodejs --strip-components=1
ENV PATH $PATH:/nodejs/bin
COPY . /app/
WORKDIR /app
# NODE_ENV to production so npm only installs needed dependencies
ENV NODE_ENV production
RUN npm install --unsafe-perm || \
  ((if [ -f npm-debug.log ]; then \
      cat npm-debug.log; \
    fi) && false)
# start
CMD ["npm", "start"]
Dockerfile:
ARG NODE_VERSION
FROM node:${NODE_VERSION}-buster-slim
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
RUN apt-get update && \
apt-get install -qqy --no-install-recommends \
ca-certificates \
dumb-init \
build-essential && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ENV HOME=/home/node
WORKDIR $HOME/app
COPY --chown=node:node . .
RUN set -xe && \
chown -R node /usr/local/lib /usr/local/include /usr/local/share /usr/local/bin && \
npm install && npm cache clean --force
EXPOSE 4200
CMD ["node"]
docker-compose.yml:
webapp:
  container_name: webapp
  hostname: webapp
  build:
    dockerfile: Dockerfile
    context: ${PWD}/app
  image: webapp:development
  command:
    - npm install
    - npm run start
  volumes:
    - ${PWD}/webapp:/app
  networks:
    - backend
  ports:
    - 4200:4200
  restart: on-failure
  tty: true
  stdin_open: true
  env_file:
    - variables.env
I can run the image with
docker run webapp bash -c "npm install; npm run start"
but when I run the compose file it says
webapp | [dumb-init] npm install: No such file or directory
I tried changing the docker-compose command to prefix it with "node", but I got the same error, just with node npm install: No such file or directory.
Can someone tell me where things are going wrong?
When you use the list form of command: in the docker-compose.yml file (or the JSON-array form of Dockerfile CMD) you are providing a list of words in a single command, not a list of separate commands. Once this gets combined with the ENTRYPOINT in the Dockerfile, the container command is
/usr/bin/dumb-init -- 'npm install' 'npm run start'
and when there isn't a file named npm install (including the space in the file name) on the $PATH, you get that error.
Since you COPY the application code in the Dockerfile and run npm install there, you don't need to repeat this step at application start time. You should be able to delete the volumes: and command: part of the docker-compose.yml file to use what's built in to the image.
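A sketch of the trimmed service, keeping your other settings (adjust as needed):
webapp:
  container_name: webapp
  build:
    dockerfile: Dockerfile
    context: ${PWD}/app
  image: webapp:development
  networks:
    - backend
  ports:
    - 4200:4200
  env_file:
    - variables.env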
If you really need to repeat this command:, do it in exactly the form you specified in the docker run command, without list syntax:
command: bash -c 'npm install; npm run start'
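or, equivalently, in list syntax where each word is its own element:
command: ["bash", "-c", "npm install; npm run start"]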
I'm running a Node.js application that uses the html-pdf module, which in turn relies on PhantomJS, to generate PDF files from HTML. The app runs within a Docker container.
Dockerfile:
FROM node:8-alpine
WORKDIR /mydirectory
# [omitted] git clone, npm install etc....
RUN npm install -g html-pdf --unsafe-perm
VOLUME /mydirectory
ENTRYPOINT ["node"]
Which builds an image just fine.
app.js
const witch = require('witch');
const pdf = require('html-pdf');
const phantomPath = witch('phantomjs-prebuilt', 'phantomjs');
function someFunction() {
pdf.create('some html content', { phantomPath });
}
// ... and then some other stuff that eventually calls someFunction()
And then I call docker run <the image name> app.js
When someFunction gets called, the following error message is thrown:
Error: spawn /mydirectory/node_modules/phantomjs-prebuilt/lib/phantom/bin/phantomjs ENOENT
This happens both when deploying the container on a cloud Linux server and when running it locally on my machine.
I have tried adding RUN npm install -g phantomjs-prebuilt --unsafe-perm to the Dockerfile, to no avail (this makes docker build fail because the installation of html-pdf cannot validate the installation of phantomjs).
I'm also obviously not a fan of using the --unsafe-perm argument of npm install, so if anybody has a solution that allows bypassing that, it would be fantastic.
Any help is greatly appreciated!
This is what ended up working for me, in case this is helpful to anyone:
FROM node:8-alpine
WORKDIR /mydirectory
# [omitted] git clone, npm install etc....
ENV PHANTOMJS_VERSION=2.1.1
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
RUN apk update && apk add --no-cache fontconfig curl curl-dev && \
cd /tmp && curl -Ls https://github.com/dustinblackman/phantomized/releases/download/${PHANTOMJS_VERSION}/dockerized-phantomjs.tar.gz | tar xz && \
cp -R lib lib64 / && \
cp -R usr/lib/x86_64-linux-gnu /usr/lib && \
cp -R usr/share /usr/share && \
cp -R etc/fonts /etc && \
curl -k -Ls https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-${PHANTOMJS_VERSION}-linux-x86_64.tar.bz2 | tar -jxf - && \
cp phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/local/bin/phantomjs
USER node
RUN npm install -g html-pdf
VOLUME /mydirectory
ENTRYPOINT ["node"]
I had a similar problem; the only workaround for me was to download and copy PhantomJS manually. This is an example from my Dockerfile; it should be the last thing before the EXPOSE command. Btw, I use a node:10.15.3 image.
RUN wget -O /tmp/phantomjs-2.1.1-linux-x86_64.tar.bz2 https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2
RUN mkdir /tmp/phantomjs && mkdir -p /usr/local/lib/node_modules/phantomjs/lib/phantom/
RUN tar xvjf /tmp/phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /tmp/phantomjs
RUN mv /tmp/phantomjs/phantomjs-2.1.1-linux-x86_64/* /usr/local/lib/node_modules/phantomjs/lib/phantom/
RUN rm -rf /tmp/phantomjs-2.1.1-linux-x86_64.tar.bz2 && rm -rf /tmp/phantomjs
Don't forget to update your paths. It's only a workaround; I haven't had time to figure out a cleaner fix yet.
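If you want to sanity-check the copied binary during the build, a step like this should work (assuming the paths above):
RUN /usr/local/lib/node_modules/phantomjs/lib/phantom/bin/phantomjs --version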
I came to this question in March 2021 and had the same issue dockerizing Highcharts: it worked on my machine but failed on docker run (same spawn phantomjs error). In the end, the solution was to find a FROM node version that worked. This Dockerfile works using the latest Node Docker image and an almost-latest highcharts npm version (always pin specific npm versions):
FROM node:15.12.0
ENV ACCEPT_HIGHCHARTS_LICENSE YES
# see available versions of highcharts at https://www.npmjs.com/package/highcharts-export-server
RUN npm install highcharts-export-server@2.0.30 -g
EXPOSE 7801
# run the container using: docker run -p 7801:7801 -t CONTAINER_TAG
CMD [ "highcharts-export-server", "--enableServer", "1" ]
I am trying to build an Angular application in Docker and run it as a container locally using Node.js.
I built the image using the Dockerfile below, but I am not sure what I am missing when running it. Can someone point it out?
Dockerfile:
FROM node:10.15.3
ENV HOME=/home
WORKDIR $HOME
RUN npm config set strict-ssl false \
&& npm config set proxy http://proxy.xxxxxx.com:8080
COPY package.json .
RUN npm install
The image was created successfully with the command below:
docker build -t example .
I am trying to run the image using the command below, but it does not work:
docker run -p 4201:4200 example
Your Dockerfile does not run/serve your application; in order to do that you have to:
install angular/cli
copy the app
run/serve the app
FROM node:10.15.3
RUN npm config set strict-ssl false \
&& npm config set proxy http://proxy.xxxxxx.com:8080
# get the app
WORKDIR /src
COPY . .
# install packages
RUN npm ci
RUN npm install -g @angular/cli
# start app
CMD ng serve --host 0.0.0.0
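With that Dockerfile you can then rebuild and run the image, publishing the port ng serve listens on (4200 by default), e.g.:
docker build -t example .
docker run -p 4201:4200 example
# the app should then be reachable at http://localhost:4201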
hope this helps.
A container needs a foreground process running to stay alive; otherwise, the container exits immediately.
For your case, you need to COPY your Node.js project into the container during docker build, and also start the project in CMD, e.g. CMD [ "npm", "start" ]. As long as the web server does not exit, your container will not exit either.
A good article for reference on how to dockerize a Node.js web app is here.
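A minimal sketch of that idea (paths and the start script are assumptions):
FROM node:10.15.3
WORKDIR /app
# install dependencies first so this layer is cached across code changes
COPY package.json .
RUN npm install
# copy the project into the image at build time
COPY . .
# npm start runs the server as a long-lived foreground process,
# which keeps the container alive
CMD ["npm", "start"]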
Just update your Dockerfile to achieve your goal; for more options see here:
# base image
FROM node:12.2.0
RUN npm config set strict-ssl false \
&& npm config set proxy http://proxy.xxxxxx.com:8080
# install chrome for protractor tests
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
RUN apt-get update && apt-get install -yq google-chrome-stable
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
RUN npm install
RUN npm install -g @angular/cli@7.3.9
# add app
COPY . /app
# start app
CMD ng serve --host 0.0.0.0
Give the following Dockerfile a shot as well!
FROM node:alpine
# get the app
WORKDIR /src
# copy package.json first so the install step can run and be cached
COPY package.json .
# install packages
RUN npm install
RUN npm install -g @angular/cli
COPY . .
# start app, binding to all interfaces so it is reachable from the host
CMD ["ng", "serve", "--host", "0.0.0.0"]
I have docker setup for a node.js web service which creates a data volume for node modules like this:
volumes:
- ./service:/usr/src/service
- /usr/src/service/node_modules
The solution with a data volume works fine. But with a data volume, the node_modules are not accessible or visible on my host (outside the container), which creates other restrictions. For example: I can not run tests in WebStorm, as they expect node_modules to be present.
Is there any alternative to using a data volume to mount node modules that also allows accessing them on the host?
The Dockerfile looks like:
FROM node:6.3.1-slim
ARG NPM_TOKEN
ARG service
ENV NODE_ENV development
RUN apt-get update && apt-get -y install build-essential git-core python-dev vim
# use nodemon for development
RUN npm install --global nodemon
COPY .npmrc /root/.npmrc
RUN mkdir -p /usr/src/$service
RUN mkdir -p /modules/$service/
COPY sources/$service/package.json /modules/$service/package.json
RUN cd /modules/$service/ && npm install && rm -f /root/.npmrc && mv /modules/$service/node_modules /usr/src/$service/node_modules
WORKDIR /usr/src/$service
COPY sources/$service /usr/src/$service
ENV service=${service}
COPY start_services.sh /root/start_services.sh
COPY .env /root/.env
COPY services.sh /root/services.sh
RUN ["chmod", "+x", "/root/start_services.sh"]
RUN ["chmod", "+x", "/root/services.sh"]
CMD /root/start_services.sh
Specify node_modules as an anonymous volume, so the bind mount does not hide the modules installed in the image:
volumes:
- .:/usr/src/service/
- /usr/src/service/node_modules
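If you also need node_modules on the host (e.g. so WebStorm can see them), one workaround is to copy them out of a built container (the container name here is a placeholder):
docker cp <container>:/usr/src/service/node_modules ./service/node_modules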
I am currently developing a Node backend for my application.
When dockerizing it (docker build .) the longest phase is the RUN npm install. The RUN npm install instruction runs on every small server code change, which impedes productivity through increased build time.
I found that running npm install where the application code lives and adding the node_modules to the container with the ADD instruction solves this issue, but it is far from best practice. It kind of breaks the whole idea of dockerizing it, and it causes the image to weigh much more.
Any other solutions?
Ok so I found this great article about efficiency when writing a Dockerfile.
This is an example of a bad Dockerfile, adding the application code before running the RUN npm install instruction:
FROM ubuntu
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get -y install python-software-properties git build-essential
RUN add-apt-repository -y ppa:chris-lea/node.js
RUN apt-get update
RUN apt-get -y install nodejs
WORKDIR /opt/app
COPY . /opt/app
RUN npm install
EXPOSE 3001
CMD ["node", "server.js"]
By dividing the copy of the application into 2 COPY instructions (one for the package.json file and the other for the rest of the files) and running the npm install instruction before adding the actual code, any code change won't trigger the RUN npm install instruction; only changes of the package.json will trigger it. A better-practice Dockerfile:
FROM ubuntu
MAINTAINER David Weinstein <david@bitjudo.com>
# install our dependencies and nodejs
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get -y install python-software-properties git build-essential
RUN add-apt-repository -y ppa:chris-lea/node.js
RUN apt-get update
RUN apt-get -y install nodejs
# use changes to package.json to force Docker not to use the cache
# when we change our application's nodejs dependencies:
COPY package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /opt/app && cp -a /tmp/node_modules /opt/app/
# From here we load our application's code in, therefore the previous docker
# "layer" thats been cached will be used if possible
WORKDIR /opt/app
COPY . /opt/app
EXPOSE 3000
CMD ["node", "server.js"]
This is where the package.json file is added, its dependencies are installed, and they are copied into the container's WORKDIR, where the app lives:
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /opt/app && cp -a /tmp/node_modules /opt/app/
To avoid the npm install phase on every docker build, just copy those lines and change /opt/app to the location where your app lives inside the container.
Weird! No one mentions multi-stage builds.
# ---- Base Node ----
FROM alpine:3.5 AS base
# install node
RUN apk add --no-cache nodejs-current tini
# set working directory
WORKDIR /root/chat
# Set tini as entrypoint
ENTRYPOINT ["/sbin/tini", "--"]
# copy project file
COPY package.json .
#
# ---- Dependencies ----
FROM base AS dependencies
# install node packages
RUN npm set progress=false && npm config set depth 0
RUN npm install --only=production
# copy production node_modules aside
RUN cp -R node_modules prod_node_modules
# install ALL node_modules, including 'devDependencies'
RUN npm install
#
# ---- Test ----
# run linters, setup and tests
FROM dependencies AS test
COPY . .
RUN npm run lint && npm run setup && npm run test
#
# ---- Release ----
FROM base AS release
# copy production node_modules
COPY --from=dependencies /root/chat/prod_node_modules ./node_modules
# copy app sources
COPY . .
# expose port and define CMD
EXPOSE 5000
CMD npm run start
Awesome tutorial here: https://codefresh.io/docker-tutorial/node_docker_multistage/
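A handy side effect of multi-stage builds: you can build a single stage with --target, e.g. to run only the test stage in CI (the image tags here are assumptions):
# build just up to the test stage
docker build --target test -t myapp:test .
# build the final release image
docker build --target release -t myapp:release .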
I've found that the simplest approach is to leverage Docker's copy semantics:
The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.
This means that if you first explicitly copy the package.json file and then run the npm install step, that layer can be cached, and then you can copy the rest of the source directory. If the package.json file has changed, the cache is invalidated, npm install re-runs, and the new layer is cached for future builds.
A snippet from the end of a Dockerfile would look like:
# install node modules
WORKDIR /usr/app
COPY package.json /usr/app/package.json
RUN npm install
# install application
COPY . /usr/app
I imagine you may already know, but you could include a .dockerignore file in the same folder containing
node_modules
npm-debug.log
to avoid bloating your image when you push to Docker Hub.
You don't need to use a tmp folder; just copy package.json to your container's application folder, do the install work, and copy all the files later.
COPY app/package.json /opt/app/package.json
RUN cd /opt/app && npm install
COPY app /opt/app
I wanted to use volumes, not copy, and keep using docker compose, and I could do it by chaining the commands at the end:
FROM debian:latest
RUN apt -y update \
&& apt -y install curl \
&& curl -sL https://deb.nodesource.com/setup_12.x | bash - \
&& apt -y install nodejs
RUN apt -y update \
&& apt -y install wget \
build-essential \
net-tools
RUN npm install pm2 -g
RUN mkdir -p /home/services_monitor/ && touch /home/services_monitor/
RUN chown -R root:root /home/services_monitor/
WORKDIR /home/services_monitor/
CMD npm install \
&& pm2-runtime /home/services_monitor/start.json
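The docker compose side of this would then bind-mount the project, so the CMD installs against the mounted sources at start time (service name and host path are assumptions):
services:
  services_monitor:
    build: .
    volumes:
      - ./services_monitor:/home/services_monitor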