How to run database services in a Docker container? - node.js

I am trying to build a Docker image that installs Node and several databases.
The databases install, but the services are not running when I check the container logs.
FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl wget gnupg && \
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv D68FA50FEA312927 && \
echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list && \
curl -sL https://deb.nodesource.com/setup_8.x | bash - && \
apt-get update && \
apt-get install -y nodejs mongodb-org redis-server && \
node -v && \
npm -v
Please help with this issue; I am new to Docker.

It is best to run the databases in separate containers, e.g. one container for MongoDB and one for Redis, and then connect your application container to them, either by links (deprecated) or by creating and sharing a network as discussed in this question. You also do not have to start from ubuntu:latest; you can start from an official Node image such as node. Some orchestration, like docker-compose, makes plugging these services together much easier; see this tutorial (the Postgres database in the article can easily be exchanged for MongoDB and Redis). Also consider reading the best practices for Dockerfile writing.
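As a rough sketch of that setup from the shell (the image tags, container names, and the app image are illustrative, not taken from the question):
# create a user-defined network shared by the app and its databases
docker network create app-net
# run each database in its own container on that network
docker run -d --name mongo --network app-net mongo:3.2
docker run -d --name redis --network app-net redis
# run the Node app in its own container; it can reach the databases by
# container name, e.g. mongodb://mongo:27017 and redis:6379
docker run -d --name app --network app-net -p 3000:3000 my-node-app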

You need to actually start mongod, e.g. like this:
apt-get install -y nodejs mongodb-org redis-server && \
mongod --fork && \
node -v && \
npm -v
But bear in mind that MongoDB needs to be configured first, and it takes some time to spin up.
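If you do start mongod in the same container, a rough sketch of waiting for it to come up (for example in an entrypoint script; the log path and the retry loop are illustrative):
# start mongod in the background; --fork requires a log destination
mongod --fork --logpath /var/log/mongod.log
# poll until mongod answers a ping before doing anything that needs it
until mongo --quiet --eval "db.adminCommand('ping')" > /dev/null 2>&1; do
  echo "waiting for mongod..."
  sleep 1
done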
As a side note, it is considered better practice to compose individual single-purpose Docker images rather than pack both the database and the application into a single image.
Please read https://docs.docker.com/compose/overview/

Related

Install nodejs and npm in Dockerfile

The context
I have a Dockerfile to create an image that contains an Apache web server. However, I also want to build my website using the Dockerfile so that the build process isn't dependent on a developer's local environment. Note that the Docker container is only going to be used for local development, not for production.
The problem
I have this Dockerfile:
FROM httpd
RUN apt-get update -yq
RUN apt-get -yq install curl gnupg
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash
RUN apt-get update -yq
RUN apt-get install -yq \
dh-autoreconf=19 \
ruby=1:2.5.* \
ruby-dev=1:2.5.* \
nodejs
I build it:
sudo docker build --no-cache .
The build completes successfully, here is part of the output:
Step 9/15 : RUN curl -sL https://deb.nodesource.com/setup_12.x | bash
---> Running in e6c747221ac0
......
......
......
Removing intermediate container 5a07dd0b1e01
---> 6279003c1e80
Successfully built 6279003c1e80
However, when I run the image in a container using this:
sudo docker container run --rm -it --name=debug 6279003c1e80 /bin/bash
Then, when running apt-cache policy inside the container, it doesn't show the repository that should have been added by the curl command. Likewise, apt-cache policy nodejs shows that the old version is installed.
However when I then run the following inside the container:
curl -sL https://deb.nodesource.com/setup_12.x | bash
apt-cache policy
apt-cache policy nodejs
It shows me the repository is added and it shows the newer nodejs version is available.
So why is it that when using the curl command using RUN inside the docker file it doesn't seem to work, but when doing it manually in the container from a shell then it does work? And how can I get around this problem?
Updates
Note that to prevent caching issues I am using the --no-cache flag.
I also removed all containers, ran sudo docker system prune, and rebuilt the image, but without success.
I tried bundling everything in one RUN command as user "hmm" suggested (as this is best practice for apt commands):
RUN apt-get update -yq \
&& apt-get -yq install curl gnupg \
&& curl -sL https://deb.nodesource.com/setup_12.x | bash \
&& apt-get update -yq \
&& apt-get install -yq \
dh-autoreconf=19 \
ruby=1:2.5.* \
ruby-dev=1:2.5.* \
nodejs \
&& rm -rf /var/lib/apt/lists/*
You're likely running into issues with cached layers. There's a long section in the Dockerfile best practices documentation on using apt-get. Probably worth a read.
The gist is that Docker doesn't recognize any difference between the first and second RUN apt-get update, nor does it know that apt-get install depends on a fresh apt-get update layer.
The solution is to combine all of that into a single RUN command (recommended) or disable the cache during the build process (docker build --no-cache).
RUN apt-get update -yq \
&& apt-get -yq install curl gnupg ca-certificates \
&& curl -L https://deb.nodesource.com/setup_12.x | bash \
&& apt-get update -yq \
&& apt-get install -yq \
dh-autoreconf=19 \
ruby=1:2.5.* \
ruby-dev=1:2.5.* \
nodejs
Edit: Running your Dockerfile locally, I noticed no output from the curl command. After removing the -s (silent) flag, you can see it is failing because it cannot verify the server's SSL certificate:
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
The solution to that issue is to install ca-certificates before running curl. I've updated the RUN command above.
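To confirm the fix, one way is to rebuild and smoke-test the image from the shell (the tag name here is just an example):
docker build --no-cache -t httpd-node .
# the NodeSource repository should now be listed and Node 12 installed
docker run --rm httpd-node bash -c "apt-cache policy nodejs && node --version"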

Debian Image on Docker - How to install Node.js

I am writing a Dockerfile to run Node.js on a Debian server, but the build does not complete.
The Dockerfile looks like this:
FROM debian:9
RUN apt-get update -yq \
&& apt-get install curl gnupg -yq \
&& curl -sL https://deb.nodesource.com/setup_10.x | bash \
&& apt-get install nodejs -yq \
&& apt-get clean -y
ADD . /app/
WORKDIR /app
RUN npm install
EXPOSE 2368
VOLUME /app/logs
CMD npm run start
I execute the following instructions step by step
docker run --rm -it debian:latest
apt-get update
apt-get clean
apt-get install curl gnupg -yq
curl -sL https://deb.nodesource.com/setup_12.x | bash
The last line tries to install the lsb-release package, but an error occurs. The following lines appear:
+ apt-get install -y lsb-release > /dev/null 2>&1
Error executing command, exiting
I execute the command
apt-get install -y lsb-release
The last lines are
Failed to fetch http://deb.debian.org/debian/pool/main/p/python3-defaults/python3-minimal_3.7.3-1_amd64.deb Bad header line Bad header data [IP: 151.101.122.133 80]
E: Failed to fetch http://deb.debian.org/debian/pool/main/p/python3.7/python3.7_3.7.3-2+deb10u1_amd64.deb Bad header line Bad header data [IP: 151.101.122.133 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
I have searched for a long time, but I do not know why this package needs to be installed, nor why it fails to install.
I know this post is dated, but I recently ran into this problem and thought I would share the solution that worked for us.
We started with a Maven image based on Debian 11 / stable (Bullseye).
FROM maven:3.8.4-openjdk-17-slim
RUN apt-get update && \
apt-get install -yq --no-install-recommends \
openssl \
curl \
wget \
git \
gnupg \
# more stuff
RUN curl -fsSL https://deb.nodesource.com/setup_current.x | bash - && \
apt-get install -y nodejs \
build-essential && \
node --version && \
npm --version
We successfully updated to Node.js version 17.
Ultimately, this GitHub repository from NodeSource was the most helpful.
This could be because you have obsolete source PPAs.
sudo rm -rf /var/lib/apt/lists/*
sudo rm -rf /etc/apt/sources.list.d/*
sudo apt-get update
and try installing.
Details HERE
Your Dockerfile now works perfectly for me on two different machines.
Maybe there was a problem with the server; the IP is different now:
curl -v http://deb.debian.org/debian/pool/main/p/python3-defaults/python3-minimal_3.7.3-1_amd64.deb -o test
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 151.101.246.133:80...
* Connected to deb.debian.org (151.101.246.133) port 80 (#0)
I hope this answer helps you. I managed to make a Node.js CentOS image based on the official Node.js Docker image. If you follow the next link, you can see how the official node image is constructed:
node Docker official image
The first part of the node image runs commands to create a "node" user; I can't stress enough what a good security practice it is to run your Node containers as a user other than "root". The second part is the one I believe is going to help you: in all that code there is a section where GPG keys are fetched from a key server, and right after that, depending on your architecture, the Node.js build is downloaded from the official Node.js site and prepared so it is available to run. I think your main problem is that the keys are not being imported; that part of the image should give you the answer.
Also, the image contains a part responsible for detecting which architecture you have, but most architectures are going to be "x64". I include my CentOS-based node image (based on the official node image linked above) so you can have a look at it:
FROM centos:centos8
RUN groupadd --gid 1000 node \
&& useradd --uid 1000 --gid node --shell /bin/bash --create-home node
# node install taken from the official node image
ENV NODE_VERSION=12.16.3
RUN set -ex \
&& for key in \
94AE36675C464D64BAFA68DD7434390BDBE9B9C5 \
FD3A5288F042B6850C66B31F09FE44734EB7990E \
71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 \
DD8F2338BAE7501E3DD5AC78C273792F7D83545D \
C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
B9AE9905FFD7803F25714661B63B535A4C206CA9 \
77984A986EBC2AA786BC0F66B01FBB92821C587A \
8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 \
4ED778F539E3634C779C87C6D7062848A1AB005C \
A48C2BEE680E841632CD4E44F07496B3EB3C1762 \
B9E2F5981AA6E0CD28160D9FF13993A75599653C \
; do \
gpg --batch --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys "$key" || \
gpg --batch --keyserver hkp://ipv4.pool.sks-keyservers.net --recv-keys "$key" || \
gpg --batch --keyserver hkp://pgp.mit.edu:80 --recv-keys "$key" ; \
done \
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.xz" \
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
&& gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
&& grep " node-v$NODE_VERSION-linux-x64.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
&& tar -xJf "node-v$NODE_VERSION-linux-x64.tar.xz" -C /usr/local --strip-components=1 --no-same-owner \
&& rm "node-v$NODE_VERSION-linux-x64.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt \
&& ln -s /usr/local/bin/node /usr/local/bin/nodejs \
# smoke tests
&& node --version \
&& npm --version
CMD [ "node" ]
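If it helps, a quick way to build and check an image like this from the shell (the tag is just an example):
docker build -t centos-node:12.16.3 .
# the smoke tests already run at build time; this re-checks the final image
docker run --rm centos-node:12.16.3 node --version
docker run --rm centos-node:12.16.3 npm --version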
OTHER INFORMATION
Here I want to give you two other points that may help with your Dockerfile but don't directly answer your question (that's why I put them at the bottom):
I believe you have your reasons, but the official Node.js Docker image is actually based on Debian (unless you choose Alpine), so you could solve your problem directly by using FROM node:<version_you_want>. I repeat, maybe you have a good reason to be doing it that way, but it doesn't hurt to give the advice :)
It is not considered good practice (the link to the reference article is below) to use "npm" to start a Node image, for the following reasons:
The npm process starts node as a subprocess, so you have two processes running your application.
The npm process has a known (if not widely known) problem called the "PID 1 problem". As Bret Fisher, a Docker captain and consultant, states in the following article:
I recommend calling the node binary directly, largely due to the “PID 1 Problem”... Node.js accepts and forwards signals like SIGINT and SIGTERM from the OS, which is important for proper shutdown of your app. Node.js leaves it up to your app to decide how to handle those signals, which means if you don’t write code or use a module to handle them, your app won’t shut down gracefully. It’ll ignore those signals and then be killed by Docker or Kubernetes after a timeout period.
It is better practice to run the "node" binary directly. As the article says, npm does not handle SIGTERM/SIGINT signals, and node does not handle them by default either; the difference is that with node you can add code to handle those signals.
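As a rough way to see the difference from a shell, you can time docker stop against a container whose main process ignores SIGTERM versus one that handles it (the image and container names below are hypothetical):
docker run -d --name app-npm my-app-npm-start
time docker stop app-npm     # typically waits the full 10s grace period, then SIGKILL
docker run -d --name app-node my-app-node-direct
time docker stop app-node    # returns quickly if the app handles SIGTERM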
The node vs. npm issue comes in the last part of the article, which also covers many other good Node.js Docker practices :)
Keep Node.js rockin' in Docker
I hope this helps clear up your doubts and improves your practices a little more. If you or anybody else has questions, feel free to put them in the comments and I'll be happy to help if I can.
Have a nice day!

How to build a Docker image using a Dockerfile which includes database creation

I am trying to build a Docker image using a Dockerfile.
The Dockerfile will also handle database creation.
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get update && apt-get install -y nodejs
RUN node -v
RUN npm -v
RUN apt-get install -y redis-server
RUN redis-server -v
RUN apt-get install -my wget gnupg
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv D68FA50FEA312927
RUN echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list
RUN apt-get update
RUN apt-get install -y mongodb-org
RUN mongodb -version
I am unable to start the Redis server after installing it in the Docker container.
You should expose the port for Redis:
EXPOSE 6379
And please remember that each RUN creates a new layer in your image; you can group all shell commands in one RUN directive. It should be something like this:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y curl wget gnupg && \
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv D68FA50FEA312927 && \
echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list && \
curl -sL https://deb.nodesource.com/setup_8.x | bash - && \
apt-get update && \
apt-get install -y nodejs redis-server mongodb-org && \
node -v && \
npm -v && \
mongod --version
EXPOSE 6379
And one more thing: the Docker way is to run only one process per container, so you should separate Redis, Mongo, and your other apps into different containers and run them with an orchestrator (such as Docker Swarm or Kubernetes) or just docker-compose.

Can't launch Chrome in a Docker Linux container

I have an ASP.NET Core application that uses the jsreport NuGet packages to run reports. I am attempting to deploy it with a Linux Docker container. I am having trouble getting Chrome to launch when I run a report. I am getting the error:
Failed to launch chrome! Running as root without --no-sandbox is not supported.
I have followed the directions on the .net local reporting page (https://jsreport.net/learn/dotnet-local) regarding docker, but I am still getting the error.
Here is my full docker file:
#use the .net core 2.1 runtime default image
FROM microsoft/dotnet:2.1-aspnetcore-runtime
#set the working directory to the server
WORKDIR /server
#copy all contents in the current directory to the container server directory
COPY . /server
#install node
RUN apt-get update -yq \
&& apt-get install curl gnupg -yq \
&& curl -sL https://deb.nodesource.com/setup_8.x | bash \
&& apt-get install nodejs -yq
#install jsreport-cli
RUN npm install jsreport-cli -g
#install chrome for jsreport linux
RUN apt-get update && \
apt-get install -y gnupg libgconf-2-4 wget && \
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \
sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' && \
apt-get update && \
apt-get install -y google-chrome-unstable --no-install-recommends
ENV chrome:launchOptions:executablePath google-chrome-unstable
ENV chrome:launchOptions:args --no-sandbox
#expose port 80
EXPOSE 80
CMD dotnet Server.dll
Is there another step that I am missing somewhere?
It's a little late, but maybe this can help someone else.
For me, the only change needed to fix this issue in the Docker container was to run Chrome in headless mode (so the cause was in the tests, not in the Dockerfile).
ChromeOptions options = new ChromeOptions().setHeadless(true);
WebDriver driver = new ChromeDriver(options);
Result: the tests now run successfully, without any errors.
Expanding on Pramod's answer, my own issues were only solved by running with both the --headless and --no-sandbox flags.
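If you want a quick sanity check inside the container itself, you can try launching Chrome once from a shell with those flags (the URL is just an example):
google-chrome-unstable --headless --no-sandbox --disable-gpu \
  --dump-dom https://example.com > /dev/null && echo "chrome launched ok"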

How do I connect to the localhost of a docker container (from inside the container)

I have a nodejs app that connects to a blockchain on the same server. Normally I use 127.0.0.1 + the port number (each chain gets a different port).
I decided to put the chain and the app in the same container, so that the frontend developers don't have to bother with setting up the chain.
However, when I build the image the chain should start, but when I run the image it isn't running. Furthermore, when I go into the container and try to run it manually, it says "besluitChain2#xxx.xx.x.2:PORT". So I thought that instead of 127.0.0.1 I needed to connect to the port on 127.0.0.2, but that doesn't seem to work.
I'm sure connecting like this isn't new and should work the same as with a database. Can anyone help? The first piece of advice I need is how to debug these images, because I have no idea where it goes wrong.
Here is my Dockerfile:
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y apt-utils
RUN apt-get install -y build-essential
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get install -y nodejs
ADD workfolder/app /root/applications/app
ADD .multichain /root/.multichain
RUN npm install \
&& apt-get upgrade -q -y \
&& apt-get dist-upgrade -q -y \
&& apt-get install -q -y wget curl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& cd /tmp \
&& wget http://www.multichain.com/download/multichain-1.0-beta-1.tar.gz \
&& tar -xvzf multichain-1.0-beta-1.tar.gz \
&& cd multichain-1.0-beta-1 \
&& mv multichaind multichain-cli multichain-util /usr/local/bin \
&& cd /tmp \
&& rm -Rf multichain*
RUN multichaind Chain -daemon
RUN cd /root/applications/app && npm install
CMD cd /root/applications/app && npm start
EXPOSE 8080
By the way, due to policies I can only connect to the server on port 80 to check if it works. When I run the Docker image I can go to my /api-docs, but not to any of the endpoints where I start interacting with the blockchain.
I decided to put the chain and the app in the same container
That was a mistake, I think.
Docker is not a virtual machine. It's a virtual application or process instance.
A Docker container runs a linux distro under the hood, but this is a detail that should be ignored when thinking about the purpose of Docker.
You should think of a Docker container as a single application process, not as a full virtual machine that generally runs multiple processes. This is evidenced by the way Docker shuts the container down once the main process exits (the process with PID 1).
I've got a longer post about this, here: https://derickbailey.com/2016/08/29/so-youre-saying-docker-isnt-a-virtual-machine/
Additionally, the RUN multichaind instruction in your Dockerfile doesn't run the chain in your image / container. It tells Docker to run this instruction during the build process.
A Dockerfile is a list of instructions for building an image. The wording here is important. An image is not executed, it is built. An image is a static, immutable template from which a Container is executed.
RUN multichaind Chain -daemon
By putting this RUN instruction in your image, you are temporarily starting the chain, but it is immediately halted (forcefully) when the image layer is done building. It will not remain running, because an image is not executed, it is built.
My advice is to put the chain in a separate image.
You'll have one image for the chain, and one for the node.js app.
You can use docker-compose to make it easier to run containers from both of these at the same time. Or you can run containers manually from them. Either way, you need two images.
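A minimal sketch of running the two images manually on a shared network (the image names, network name, and CHAIN_HOST variable are hypothetical; your app would need to read such a setting):
docker network create chain-net
# container 1: the multichain daemon, built from its own Dockerfile
docker run -d --name chain --network chain-net my-multichain-image
# container 2: the node.js app, reaching the chain by container name
docker run -d --name app --network chain-net -p 8080:8080 \
  -e CHAIN_HOST=chain my-node-app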
