Debian Image on Docker - How to install Node.js - node.js

I am writing a Dockerfile to run Node.js on a Debian server, but the build cannot be completed.
The Dockerfile looks like this:
FROM debian:9
RUN apt-get update -yq \
&& apt-get install curl gnupg -yq \
&& curl -sL https://deb.nodesource.com/setup_10.x | bash \
&& apt-get install nodejs -yq \
&& apt-get clean -y
ADD . /app/
WORKDIR /app
RUN npm install
EXPOSE 2368
VOLUME /app/logs
CMD npm run start
I execute the following instructions step by step
docker run --rm -it debian:latest
apt-get update
apt-get clean
apt-get install curl gnupg -yq
curl -sL https://deb.nodesource.com/setup_12.x | bash
The last line tries to install the lsb-release package but an error occurs. The following lines appear:
+ apt-get install -y lsb-release > /dev/null 2>&1
Error executing command, exiting
I execute the command
apt-get install -y lsb-release
The last lines are
Failed to fetch http://deb.debian.org/debian/pool/main/p/python3-defaults/python3-minimal_3.7.3-1_amd64.deb Bad header line Bad header data [IP: 151.101.122.133 80]
E: Failed to fetch http://deb.debian.org/debian/pool/main/p/python3.7/python3.7_3.7.3-2+deb10u1_amd64.deb Bad header line Bad header data [IP: 151.101.122.133 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
I have searched for a long time, but I do not understand why this package needs to be installed or why the installation fails.

I know this post is dated, but I recently ran into this problem and thought I would share the solution that worked for us.
We started with a Maven image based on Debian 11 / stable (Bullseye).
FROM maven:3.8.4-openjdk-17-slim
RUN apt-get update && \
apt-get install -yq --no-install-recommends \
openssl \
curl \
wget \
git \
gnupg \
# more stuff
RUN curl -fsSL https://deb.nodesource.com/setup_current.x | bash - && \
apt-get install -y nodejs \
build-essential && \
node --version && \
npm --version
We successfully updated to node.js version 17.
Ultimately, this GitHub repository from NodeSource was the most helpful.

It could be because you have obsolete PPA sources.
sudo rm -rf /var/lib/apt/lists/*
sudo rm -rf /etc/apt/sources.list.d/*
sudo apt-get update
and try installing.
Details HERE

Your Dockerfile now works perfectly for me on two different machines.
Maybe there was a problem with the server; the IP is different now:
curl -v http://deb.debian.org/debian/pool/main/p/python3-defaults/python3-minimal_3.7.3-1_amd64.deb -o test
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 151.101.246.133:80...
* Connected to deb.debian.org (151.101.246.133) port 80 (#0)

I hope this answer helps. I built a Node.js CentOS image based on the official Node.js Docker image. The following link shows how the official node image is constructed:
node Docker official image
The first part of the node image runs commands to create a "node" user; I can't stress enough what a good security practice it is to run your Node.js containers as a user other than "root". The second part is the part I believe will help you: in all that code there is a section where gpg keys are exchanged with a key server, and just after that, depending on your architecture, the Node.js binary is downloaded from the official Node.js site and prepared to run. I think your main problem is that the keys are not being imported; in that part of the image you should find the answer.
Also, the image has a part responsible for detecting which architecture you have, but most architectures are going to be "x64". I include my CentOS-based node image (based on the official node image linked above) so you can look at it:
FROM centos:centos8
RUN groupadd --gid 1000 node \
&& useradd --uid 1000 --gid node --shell /bin/bash --create-home node
# node install taken from the node oficial image
ENV NODE_VERSION=12.16.3
RUN set -ex \
&& for key in \
94AE36675C464D64BAFA68DD7434390BDBE9B9C5 \
FD3A5288F042B6850C66B31F09FE44734EB7990E \
71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 \
DD8F2338BAE7501E3DD5AC78C273792F7D83545D \
C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
B9AE9905FFD7803F25714661B63B535A4C206CA9 \
77984A986EBC2AA786BC0F66B01FBB92821C587A \
8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 \
4ED778F539E3634C779C87C6D7062848A1AB005C \
A48C2BEE680E841632CD4E44F07496B3EB3C1762 \
B9E2F5981AA6E0CD28160D9FF13993A75599653C \
; do \
gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || \
gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || \
gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; \
done \
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.xz" \
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
&& gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
&& grep " node-v$NODE_VERSION-linux-x64.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
&& tar -xJf "node-v$NODE_VERSION-linux-x64.tar.xz" -C /usr/local --strip-components=1 --no-same-owner \
&& rm "node-v$NODE_VERSION-linux-x64.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt \
&& ln -s /usr/local/bin/node /usr/local/bin/nodejs \
# smoke tests
&& node --version \
&& npm --version
CMD [ "node" ]
OTHER INFORMATION
Here I want to give you two other points that may help with your Dockerfile but don't directly answer your question (that's why I put them at the bottom):
I'm sure you have your reasons, but the official Node.js Docker image is actually based on Debian (unless you choose Alpine), so you may be able to solve your problem directly by using FROM node:<version_you_want>. I repeat, maybe you have a good reason to be doing it that way, but it doesn't hurt to give advice :)
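As a rough sketch of that suggestion (the node:10 tag is only chosen to match the setup_10.x script from the question; pick whichever version you need), the Dockerfile from the question could shrink to:
# The official image already contains node and npm, so no apt-get / NodeSource setup is needed
FROM node:10
WORKDIR /app
COPY . /app/
RUN npm install
EXPOSE 2368
VOLUME /app/logs
CMD npm run start
The official image also already creates the non-root "node" user mentioned above.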
It is not considered good practice (I will put the reference link after this paragraph) to use npm to start a Node.js image, for the following reasons:
The npm process starts node as a subprocess, so you have two processes running your application.
The npm process has a known (but not so well-known) problem called the "PID 1 Problem". As Bret Fisher, a Docker Captain and consultant, states in the following article:
I recommend calling the node binary directly, largely due to the “PID 1 Problem”... Node.js accepts and forwards signals like SIGINT and SIGTERM from the OS, which is important for proper shutdown of your app. Node.js leaves it up to your app to decide how to handle those signals, which means if you don’t write code or use a module to handle them, your app won’t shut down gracefully. It’ll ignore those signals and then be killed by Docker or Kubernetes after a timeout period.
It is better practice to run the node binary directly. As said in the article, npm doesn't handle SIGTERM/SIGINT signals, and node doesn't handle them by default either; the difference is that with node you can add code to handle those signals.
I include the node vs. npm issue link below; it comes in the last part of the article, which also covers many other good Node.js Docker practices :)
Keep Node.js rockin' in Docker
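To make the PID 1 point concrete, here is a minimal Dockerfile sketch; the server.js entrypoint is an assumption, so adjust it to whatever your "npm run start" script actually launches:
# shell form: node runs as a child of npm, and SIGTERM/SIGINT are not passed on to it
CMD npm run start
# exec form running node directly: node is PID 1 and receives the signals itself
CMD ["node", "server.js"]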
Hope this helps solve your doubts and improves your practices a little. If you or anybody else has questions, don't hesitate to put them in the comments and I'll be happy to help if I can.
Have a nice day!


apk not found error while changing to node-buster from Alpine base image

I have changed my Docker image from an Alpine base image to node:14.16-buster. While running the code I am getting an 'apk not found' error.
Sharing the code snippet:
FROM node:14.16-buster
# ========= steps for Oracle instant client installation (start) ===============
RUN apk --no-cache add libaio libnsl libc6-compat curl && \
cd /tmp && \
curl -o instantclient-basiclite.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip -SL && \
unzip instantclient-basiclite.zip && \
mv instantclient*/ /usr/lib/instantclient && \
rm instantclient-basiclite.zip
Can you please help here, what do I need to change?
The issue comes from the fact that you're changing your base image from an Alpine-based one to a Debian-based one.
Debian-based Linux distributions use apt as their package manager (Alpine uses apk).
That is the reason why you get apk not found. Use apt-get install instead, but keep in mind that the package names can differ and you might need to look them up. After all, apt is a different piece of software with its own capabilities.
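For example, the snippet from the question could look roughly like this on the Debian-based image (libaio1 and unzip are the usual Debian package names; libc6-compat is Alpine-specific and can simply be dropped, and libnsl is left out since it is not available in Buster and may not be needed):
FROM node:14.16-buster
# apt-get replaces apk; --no-install-recommends keeps the layer small
RUN apt-get update && \
apt-get install -y --no-install-recommends libaio1 curl unzip && \
cd /tmp && \
curl -o instantclient-basiclite.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip -SL && \
unzip instantclient-basiclite.zip && \
mv instantclient*/ /usr/lib/instantclient && \
rm instantclient-basiclite.zip && \
rm -rf /var/lib/apt/lists/*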
Buster images are based on Debian.
Debian doesn't support apk; the default package manager is APT.
For example you can do:
FROM node:15.14.0-buster-slim
RUN apt-get update && \
apt-get install -y \
curl \
jq \
git \
wget \
openssl \
bash \
tar \
net-tools && \
rm -rf /var/lib/apt/lists/*
RUN mkdir /app && \
chown node:node /app
apk is part of Alpine Linux; you would have to change the base image back to Alpine if you want to use apk.
The buster node images are Debian based. buster is the release name for Debian 10 (11 will be bullseye).
Debian uses APT for packaging. apt-get can be used from scripts
apt-get update && apt-get install libaio1 curl
libnsl2 is not available in Buster, but you might not need it

Install nodejs and npm in Dockerfile

The context
I have a Dockerfile to create an image that contains an Apache web server. However, I also want to build my website using the Dockerfile so that the build process isn't dependent on the developer's local environment. Note that the Docker container is only going to be used for local development, not for production.
The problem
I have this Dockerfile:
FROM httpd
RUN apt-get update -yq
RUN apt-get -yq install curl gnupg
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash
RUN apt-get update -yq
RUN apt-get install -yq \
dh-autoreconf=19 \
ruby=1:2.5.* \
ruby-dev=1:2.5.* \
nodejs
I build it:
sudo docker build --no-cache .
The build completes successfully, here is part of the output:
Step 9/15 : RUN curl -sL https://deb.nodesource.com/setup_12.x | bash
---> Running in e6c747221ac0
......
......
......
Removing intermediate container 5a07dd0b1e01
---> 6279003c1e80
Successfully built 6279003c1e80
However, when I run the image in a container using this:
sudo docker container run --rm -it --name=debug 6279003c1e80 /bin/bash
Then when doing apt-cache policy inside the container, it doesn't show the repository that should have been added with the curl command. Also when doing apt-cache policy nodejs it shows the old version is installed.
However when I then run the following inside the container:
curl -sL https://deb.nodesource.com/setup_12.x | bash
apt-cache policy
apt-cache policy nodejs
It shows me the repository is added and it shows the newer nodejs version is available.
So why does the curl command not seem to work when run with RUN inside the Dockerfile, but does work when run manually from a shell inside the container? And how can I get around this problem?
Updates
Note that to prevent caching issues I am using the --no-cache flag.
I also removed all containers and did sudo docker system prune and rebuild the image but without success.
I tried bundling everything in one RUN command as user "hmm" suggested (as this is best practice for apt commands):
RUN apt-get update -yq \
&& apt-get -yq install curl gnupg \
&& curl -sL https://deb.nodesource.com/setup_12.x | bash \
&& apt-get update -yq \
&& apt-get install -yq \
dh-autoreconf=19 \
ruby=1:2.5.* \
ruby-dev=1:2.5.* \
nodejs \
&& rm -rf /var/lib/apt/lists/*
You're likely running into issues with cached layers. There's a long section in the Dockerfile best practices documentation on using apt-get. Probably worth a read.
The gist is that Docker doesn't recognize any difference between the first and second RUN apt-get update, nor does it know that apt-get install depends on a fresh apt-get update layer.
The solution is to combine all of that into a single RUN command (recommended) or disable the cache during the build process (docker build --no-cache).
RUN apt-get update -yq \
&& apt-get -yq install curl gnupg ca-certificates \
&& curl -L https://deb.nodesource.com/setup_12.x | bash \
&& apt-get update -yq \
&& apt-get install -yq \
dh-autoreconf=19 \
ruby=1:2.5.* \
ruby-dev=1:2.5.* \
nodejs
Edit: Running your Dockerfile locally, I noticed no output from the curl command. After removing the -s flag (silent mode, which hides errors), you can see it's failing because curl cannot verify the server's SSL certificate:
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
The solution to that issue is to install ca-certificates before running curl. I've updated the RUN command above.
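If you want quiet output but still want errors to surface, a common pattern (my suggestion, not part of the original answer) is curl -fsSL, which stays silent on success but prints a message and exits non-zero on HTTP or TLS problems:
# -f: fail on HTTP errors, -s: silent progress, -S: still show error messages, -L: follow redirects
RUN apt-get update -yq \
&& apt-get -yq install curl gnupg ca-certificates \
&& curl -fsSL https://deb.nodesource.com/setup_12.x | bash -
Note that in a pipeline the RUN step's exit status comes from bash, so consider SHELL ["/bin/bash", "-o", "pipefail", "-c"] if you want the build to stop when curl itself fails.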

docker build error the folder you are executing pip from can no longer be found

I'm making a Dockerfile to install Python 3.8 on a centos7 base. Everything works fine until the pip3 command. The Dockerfile looks like this:
FROM centos:centos7
RUN RPM_LIST=" \
gcc \
make \
openssl-devel \
bzip2-devel \
libffi-devel" && \
yum install -y $RPM_LIST && \
curl -O https://www.python.org/ftp/python/3.8.2/Python-3.8.2.tgz && \
tar xvf Python-3.8.2.tgz && \
cd Python-3.8.2 && \
./configure && \
make && \
make install && \
rm -rf /Python-3.8.2* && \
yum remove -y $RPM_LIST && \
pip3 install retrying
The error is: The folder you are executing pip from can no longer be found.
I changed the last line to RUN pip3 install retrying and it started working, but it added an additional 300 MB to my image, which I can't afford.
Any suggestions on what I am missing here, or any alternative approaches?
This is what the working Dockerfile looks like:
FROM centos:centos7
RUN RPM_LIST=" \
gcc \
make \
openssl-devel \
bzip2-devel \
libffi-devel" && \
yum install -y $RPM_LIST && \
curl -O https://www.python.org/ftp/python/3.8.2/Python-3.8.2.tgz && \
tar xvf Python-3.8.2.tgz && \
cd Python-3.8.2 && \
./configure && \
make && \
make install && \
pip3 install retrying && \
yum remove -y $RPM_LIST && \
rm -rf /Python-3.8.2*
The reason is that the earlier cd Python-3.8.2 leaves the shell inside the source directory, and rm -rf /Python-3.8.2* deletes that directory while it is still the current working directory, so pip3 is then invoked from a folder that no longer exists. Moving the rm command after the pip3 call avoids this. Hope this information helps someone.
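A minimal shell sketch of the same effect (the path is illustrative):
mkdir /tmp/build && cd /tmp/build
rm -rf /tmp/build    # the current working directory is now gone
pip3 install retrying    # pip aborts with "The folder you are executing pip from can no longer be found."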
I just ran into The folder you are executing pip from can no longer be found.
My Dockerfile has not changed in months. It does not install pip.
It does force remove and make a folder just prior to running a pip install on a requirements file -- into which pip is supposed to install things.
The fix? I simply ran it again and it worked. Hmm...
So I am posting this here in case there is some mysterious timing issue about which folks wish to collect clues.
--- edit add --
Also, yesterday I ran into this docker issue, which required turning off docker's experimental gRPC feature in its preferences. Perhaps it could be related somehow?
Why does Serverless produce an Invalid Cross-device link Error when trying to package or deploy?

How to run Database services in Docker container?

I am trying to build a Docker image that installs Node and databases in the image.
The databases are installed, but the services are not running when I check the container logs.
FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl wget gnupg && \
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv D68FA50FEA312927 && \
echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list && \
curl -sL https://deb.nodesource.com/setup_8.x | bash - && \
apt-get update && \
apt-get install -y nodejs mongodb-org redis-server && \
node -v && \
npm -v
Please help with this issue; I am new to Docker.
It is best to run the database containers separately, e.g. one container for MongoDB and one for Redis, and then connect your application container to them (either by links (deprecated) or by creating and sharing a network, as discussed in this question). You also do not have to start from ubuntu:latest; you can start from a Node image such as node. Some orchestration, like docker-compose, makes the task of plugging these services together much easier; see this tutorial (the Postgres database in the article can easily be exchanged for MongoDB and Redis). Also consider reading the best practices for Dockerfile writing.
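A rough sketch of that approach with a user-defined network (container, network, and image names below are only examples):
docker network create app-net
docker run -d --name mongo --network app-net mongo:3.2
docker run -d --name redis --network app-net redis
# the application container can now reach the databases via the hostnames "mongo" and "redis"
docker run -d --name app --network app-net -e MONGO_URL=mongodb://mongo:27017/mydb my-node-app
With docker-compose the same wiring becomes a single file, as the linked tutorial shows.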
You need to actually start mongod, e.g. like
apt-get install -y nodejs mongodb-org redis-server && \
mongod --fork && \
node -v && \
npm -v
But bear in mind that mongo should be configured first and it requires some time to spin up.
As a side note it is considered a better practice to compose individual single-purpose docker images rather than pack both database and application in a single image.
Please read https://docs.docker.com/compose/overview/

Installing specific version of node.js and npm in ubuntu image with Dockerfile

I would like to know how I can update my custom Dockerfile to install Node v6.3.1 and npm v3.10.6 without breaking what is already in there.
Currently this is my custom file:
FROM ubuntu:16.10
MAINTAINER Fátima Alves
COPY . /my-software
WORKDIR /my-software
RUN apt-get update \
&& \
apt-get install -y \
python-dev \
tesseract-ocr
Thanks!
Update
Currently my dockerfile is like this:
FROM ubuntu:16.10
MAINTAINER Fátima Alves
COPY ./dist /my-software
COPY ./s3-config.json /my-software
COPY ./_* /my-software
COPY ./node_modules /my-software
WORKDIR /dataextractor
RUN apt-get update \
&& \
apt-get install -y \
curl
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash - \
&& apt-get install -y nodejs
And is returning:
The command '/bin/sh -c curl -sL https://deb.nodesource.com/setup_6.x | bash - && apt-get install -y nodejs' returned a non-zero code: 1
Perhaps I'm missing something?
You can just follow the usual Ubuntu install instructions, within the RUN statement of your Dockerfile:
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash - \
&& apt-get install -y nodejs
Docs
WHY
Because https://nodejs.org/en/download/package-manager/#debian-and-ubuntu-based-linux-distributions suggests doing the "curl pipe bash" anti-pattern, let's try to make that cleaner.
WHAT
Since containers are built from a definite OS and version, we don't need the universality of that bash script.
HOW
If we examine https://deb.nodesource.com/setup_6.x closely, we see that it really only does two things for Debian:
Add their public key to apt's keychain via apt-key add
Add their deb repo to a newly created file /etc/apt/sources.list.d/nodesource.list
Adding sources
The 2nd thing we can do really easily, simply by putting this in your Dockerfile:
COPY nodesource.list /etc/apt/sources.list.d/nodesource.list
Of course you'll need to create nodesource.list with content like:
deb https://deb.nodesource.com/node_6.x trusty main
deb-src https://deb.nodesource.com/node_6.x trusty main
Adding a trusted key
The 1st thing is a bit trickier to do "cleanly". I would rather add a keychain file to /etc/apt/trusted.gpg.d/ than modify the existing /etc/apt/trusted.gpg file (which is what apt-key add would do).
What they have at the URL https://deb.nodesource.com/gpgkey/nodesource.gpg.key is a public key, not a keychain. To get a keychain file, we can pipe it (not to apt-key, but to gpg) like so:
curl -s https://deb.nodesource.com/gpgkey/nodesource.gpg.key | \
gpg --import --no-default-keyring --keyring ./nodesource.gpg
That creates nodesource.gpg which we can utilize by putting this in your Dockerfile:
COPY nodesource.gpg /etc/apt/trusted.gpg.d/nodesource.gpg
Install as usual
The crazy spacing and \-terminated lines are what I use because I tend to have a lot of additional packages to install.
# Install software packages
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update -qq && apt-get clean
RUN apt-get install -qqy \
nodejs \
&& \
apt-get clean
You can see the complete Dockerfile at https://gist.github.com/RichardBronosky/f748563dc328b12b39cd864973fcb138#file-dockerfile
