fail to link redis container to node.js container in docker - node.js

I deployed a simple Redis-based Node.js application on the DigitalOcean cloud.
Here is the node.js app.
var express = require('express');
var app = express();

app.get('/', function(req, res) {
  res.send('hello world');
});

app.set('trust proxy', 'loopback');
app.listen(3000);

var redisClient = require('redis').createClient(6379, 'localhost');
redisClient.on('connect', function() {
  console.log('connect');
});
In order to deploy the application, I used one Node.js container and one Redis container, and linked the Node.js container to the Redis container.
The redis container could be obtained by
docker run -d --name redis -p 6379:6379 dockerfile/redis
and the node.js container is based on google/nodejs, whose Dockerfile is simply:
FROM google/nodejs
WORKDIR /src
EXPOSE 3000
CMD ["/bin/bash"]
My node.js image is named nodejs and built with
docker build -t nodejs Dockerfile_path
and the container is run by copying my host application files into the src folder of the container and linking the existing redis container:
docker run -it --rm -p 8080:3000 --name app -v node_project_path:/src --link redis:redis nodejs
Finally I got into the container successfully, installed the npm modules with npm install, and started the application with node app.js.
But I got an error saying:
Error: Redis connection to localhost:6379 failed - connect ECONNREFUSED
As the redis container exposes port 6379 and my nodejs container is linked to the redis container, connecting to the redis server at localhost:6379 from my node.js app is supposed to work. Why is it not working at all?

When you link the redis container to the node container, Docker will already modify the hosts file for you.
You should then be able to connect to the redis container via:
var redisClient = require('redis').createClient(6379,'redis'); // 'redis' is alias for the link -> what's in the hosts file.
From: https://docs.docker.com/userguide/dockerlinks/
$ sudo docker run -d -P --name web --link db:db training/webapp python app.py
This will link the new web container with the db container you created earlier. The --link flag takes the form:
--link name:alias
Where name is the name of the container we're linking to and alias is an alias for the link name. You'll see how that alias gets used shortly.
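If you want the same app.js to work both inside the linked container (where the alias redis resolves) and outside Docker, one option is to read the host from an environment variable. A minimal sketch, assuming a hypothetical REDIS_HOST variable that is not part of the original app:

```javascript
// REDIS_HOST is an assumed environment variable: leave it unset inside the
// linked container to fall back to the link alias 'redis'; set it to
// 'localhost' when running outside Docker.
var redisHostName = process.env.REDIS_HOST || 'redis';
var redisUrl = redisHostName + ':6379';
console.log('connecting to ' + redisUrl);
// var redisClient = require('redis').createClient(6379, redisHostName);
```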

Related

Application run failed after deploying dockerized image on Azure App Service

I am trying to deploy a dockerized React JS application (which uses nginx) on MS Azure App Service (Web application as Container/Web App), using Azure Container Registry for the same.
Here is my Dockerfile
FROM node:14.17.0 as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm ci --silent
RUN npm install react-scripts -g --silent
COPY . .
RUN npm run build
#prepare nginx
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
#fire up nginx
EXPOSE 80
CMD ["nginx","-g","daemon off;"]
I am able to run the image as a container on my local machine, and it works perfectly:
docker run -itd --name=ui-container -p 80:80 abc.azurecr.io:latest
But the problem starts after running the image on Azure App Service/Container Service, because Azure is not able to ping the port.
ERROR - Container didn't respond to HTTP pings on port: 80, failing site start. See container logs for debugging
This is the docker run command available on App service logs
docker run -d --expose=80 --name id_0_f8823503 -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITES_PORT=80 -e WEBSITE_SITE_NAME=id -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=id.azurewebsites.net -e WEBSITE_INSTANCE_ID=af26eeb17400cdb1a96c545117762d0fdf33cf24e01fb4ee2581eb015d557e50 -e WEBSITE_USE_DIAGNOSTIC_SERVER=False i.azurecr.io/ivoyant-datamapper
I see the reason is that there is no -p 80:80 in the above docker run command. I have tried multiple approaches to fix this, but nothing worked for me.
Tried adding
key: PORT value: 80 in configuration app settings
key: WEBSITES_PORT value: 80 in configuration app settings
App Service listens on ports 80/443 and forwards traffic to your container on port 80 by default. If your container is listening on port 80, there is no need to set the WEBSITES_PORT application setting, and the -p 80:80 parameter is not needed.
You would set WEBSITES_PORT to the port number of your container only if it listens on a different port number.
Now, why does App Service report that error? It might be that your app fails when starting. Try enabling the application logs to see if you get more info.
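For example, if the container listened on port 3000 instead, the setting could be applied with the Azure CLI. A sketch only; the resource group and web app names below are placeholders, not taken from the question:

```shell
# Placeholder names: substitute your own resource group and web app.
az webapp config appsettings set \
  --resource-group myResourceGroup \
  --name my-webapp \
  --settings WEBSITES_PORT=3000
```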
Replaced
FROM node:14.17.0 as build
with
FROM node:19-alpine as build
The application was deployed successfully in Azure with a success response. Using a non-alpine image caused the issue in Azure App Service.

how to connect to docker containers within the same network?

I am trying to connect my Node JS application to my MongoDB without using the IP address. The strategy I have read about is to put both containers on the same network.
My node JS application looks as below
var express = require('express');
var app = express();
var mongoose = require('mongoose');
var bodyParser = require('body-parser');
var PORT = 4000;

// REQUIRE MIDDLEWARE
var instantMongoCrud = require('express-mongo-crud'); // require the module

mongoose.connect('localhost:27017/user');

var options = { // specify options
  host: `localhost:${PORT}`
};

// USE AS MIDDLEWARE
app.use(bodyParser.json()); // add body parser
app.use(instantMongoCrud(options)); // use as middleware

app.listen(PORT, () => {
  console.log('started');
});
I have run my MongoDB database and attached it to a network called nodenetwork.
I then did a build of my docker application as below:
docker build -t sampleapp:v1 . && docker run --net container:mongodb sampleapp:v1
The app runs correctly, judging from the console output. However, I cannot access it via my browser.
I understand that this is because I must expose port 4000 when I do docker run, as I had done before.
However, the issue is that when I tried to run like this instead
docker build -t sampleapp:v1 . && docker run -p 4000:4000 --net container:mongodb sampleapp:v1
it throws: docker: Error response from daemon: conflicting options: port publishing and the container type network mode.
As such, my question is: how do I modify this command, and is this the best way?
You're telling your application that mongodb will be accessible on localhost, but it won't be: localhost is the application container itself, unless you run it with --net=host and your mongodb container exposes port 27017 or runs with --net=host as well.
You could create a docker network and connect both containers (mongodb and app) to it.
Example:
$ docker network create mynetwork --driver bridge
$ docker run -d --name mongodb --net=mynetwork bitnami/mongodb:latest
$ docker run -d --name app -p 4000:4000 --net=mynetwork sampleapp:v1
This will create a DNS record for mongodb with the same name of the container that will be resolvable from any container within your user-defined network.
Another solution will be to use docker-compose:
version: '3'
services:
  mongodb:
    image: bitnami/mongodb:latest
  app:
    build: ./
    ports:
      - 4000:4000
Place this in a file called docker-compose.yml in the same directory as the Dockerfile for sampleapp and run docker-compose up.
As docker-compose creates a user-defined network as well, you can access mongodb using the container DNS record.
In any of the cases, change your application to connect using the DNS provided by docker:
mongoose.connect('mongodb:27017/user');
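To keep the app runnable both locally and in the compose network, the host part of the connection string can come from an environment variable. A sketch; MONGO_HOST is a hypothetical variable name, not something the question's code uses:

```javascript
// Inside the user-defined network the service name 'mongodb' resolves via
// Docker's embedded DNS; locally you would set MONGO_HOST=localhost.
const MONGO_HOST = process.env.MONGO_HOST || 'mongodb';
const mongoUrl = 'mongodb://' + MONGO_HOST + ':27017/user';
console.log(mongoUrl);
// mongoose.connect(mongoUrl);
```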
Read more about docker-compose and user-defined networks here:
https://docs.docker.com/compose/overview/
https://docs.docker.com/network/bridge/

Dockerized NodeJS application is unable to invoke another dockerized SpringBoot API

I am running a SpringBoot application in a docker container and another VueJS application in another docker container using docker-compose.yml as follows:
version: '3'
services:
  backend:
    container_name: backend
    build: ./backend
    ports:
      - "28080:8080"
  frontend:
    container_name: frontend
    build: ./frontend
    ports:
      - "5000:80"
    depends_on:
      - backend
I am trying to invoke SpringBoot REST API from my VueJS application using http://backend:8080/hello and it is failing with GET http://backend:8080/hello net::ERR_NAME_NOT_RESOLVED.
Interestingly if I go into frontend container and ping backend it is able to resolve the hostname backend and I can even get the response using wget http://backend:8080/hello.
Even more interestingly, I added another SpringBoot application in docker-compose and from that application I am able to invoke http://backend:8080/hello using RestTemplate!!
My frontend/Dockerfile:
FROM node:9.3.0-alpine
ADD package.json /tmp/package.json
RUN cd /tmp && yarn install
RUN mkdir -p /usr/src/app && cp -a /tmp/node_modules /usr/src/app
WORKDIR /usr/src/app
ADD . /usr/src/app
RUN npm run build
ENV PORT=80
EXPOSE 80
CMD [ "npm", "start" ]
In my package.json I mapped script "start": "node server.js" and my server.js is:
const express = require('express')
const app = express()
const port = process.env.PORT || 3003
const router = express.Router()

app.use(express.static(`${__dirname}/dist`)) // set the static files location for the static html
app.engine('.html', require('ejs').renderFile)
app.set('views', `${__dirname}/dist`)

router.get('/*', (req, res, next) => {
  res.sendFile(`${__dirname}/dist/index.html`)
})

app.use('/', router)
app.listen(port)
console.log('App running on port', port)
Why is it not able to resolve hostname from the application but can resolve from the terminal? Am I missing any docker or NodeJS configuration?
Finally figured it out. Actually, there is no issue. When I run my frontend VueJS application in a Docker container and access it from the browser, the HTML and JS files are downloaded to the browser machine (my host), so the REST API call goes out from the host. From my host, the Docker container hostname (backend) cannot be resolved.
The solution is: Instead of using actual docker hostname and port number (backend:8080) I need to use my hostname and mapped port (localhost:28080) while making REST calls.
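One way to avoid hard-coding either address is to make the API base URL configurable. A sketch, where API_BASE is a hypothetical variable name:

```javascript
// In the browser the request originates from the host machine, so the
// default points at the host-mapped port 28080, not the service name
// 'backend', which only resolves inside the compose network.
const API_BASE = process.env.API_BASE || 'http://localhost:28080';
const helloUrl = API_BASE + '/hello';
console.log(helloUrl);
```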
I would suggest:
docker ps to get the names/Ids of running containers
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' BACKEND_CONTAINER_NAME to get backend container's IP address from the host.
Now put this IP in the front app and it should be able to connect to your backend.

Docker EXPOSE. Can't get it

These past two days I've been having trouble with Docker and I can't get it. According to the Docker docs, you can expose the ports on which a container will listen for connections with EXPOSE. So far, so good!
If my app listens on port 8080, I should expose my Docker container with EXPOSE 8080 and bind it to port 80 of the main host with docker run -p 80:8080.
Here is my Dockerfile:
# DOCKER-VERSION 0.0.1
FROM ubuntu:14.10
# make sure apt is up to date
RUN apt-get update
# install nodejs and npm
RUN apt-get install -y nodejs-legacy npm git git-core
ADD package.json /root/
ADD server.js /root/
# start script
ADD start.sh /root/
RUN chmod +x /root/start.sh
EXPOSE 8080
CMD ./root/start.sh
And my start.sh just runs cd /root/ & npm install & node server.js.
I have a simple express nodejs app:
var express = require('express');

// Constants
var PORT = 8080;

// App
var app = express();
app.get('/', function (req, res) {
  res.send('Hello world\n');
});

app.listen(PORT);
console.log('Running on http://localhost:' + PORT);
Here is how I build my Docker image: docker build -t app1 .
And how I launch my container: docker run -it -p 80:8080 --name app1 app1
What is really weird is that this is not working. To make it work I have to change EXPOSE 8080 to EXPOSE 80. I don't get it.
Any explanation?
Thanks for reading,
Tom
In your nodejs app, you have the instruction app.listen(PORT);, which tells nodejs to start a server listening for connections on the loopback interface on port PORT.
As a result, your app will only be able to see connections originating from localhost (the container itself).
You need to tell your app to listen on all interfaces on port PORT:
app.listen(PORT, "0.0.0.0");
This way it will see the connections originating from outside your Docker container.

How can I run Ghost in Docker with the google/node-runtime image?

I'm very new to Docker, Ghost and node really, so excuse any blatant ignorance here.
I'm trying to set up a Docker image/container for Ghost based on the google/nodejs-runtime image, but can't connect to the server when I run via Docker.
A few details: I'm on OS X, so I'm using boot2docker. I'm running Ghost as an npm module, configured to use port 8080 because that's what google/nodejs-runtime expects. This configuration runs fine outside of Docker when I use npm start. I also tried a simple "Hello, World" Express app on port 8080, which works from within Docker.
My directory structure looks like this:
my_app
  content/
  Dockerfile
  ghost_config.js
  package.json
  server.js
package.json
{
  "name": "my_app",
  "private": true,
  "dependencies": {
    "ghost": "0.5.2",
    "express": "3.x"
  }
}
Dockerfile
FROM google/nodejs-runtime
ghost_config.js
I changed all occurrences of port 2368 to 8080.
server.js
// This Ghost server works with npm start, but not with Docker
var ghost = require('ghost');
var path = require('path');
ghost({
config: path.join(__dirname, 'ghost_config.js')
}).then(function (ghostServer) {
ghostServer.start();
});
// This "Hello World" app works in Docker
// var express = require('express');
// var app = express();
// app.get('/', function(req, res) {
// res.send('Hello World');
// });
// var server = app.listen(8080, function() {
// console.log('Listening on port %d', server.address().port);
// });
I build my Docker image with docker build -t my_app ., then run it with docker run -p 8080 my_app, which prints this to the console:
> my_app# start /app
> node server.js
Migrations: Up to date at version 003
Ghost is running in development...
Listening on 127.0.0.1:8080
Url configured as: http://localhost:8080
Ctrl+C to shut down
docker ps outputs:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f4c7027f62f my_app:latest "/nodejs/bin/npm sta 23 hours ago Up About a minute 0.0.0.0:49165->8080/tcp pensive_lovelace
And boot2docker ip outputs:
The VM's Host only interface IP address is: 192.168.59.103
So I point my browser at 192.168.59.103:49165 and get nothing, and there is no output in the Docker logs. As I said above, running the "Hello World" app in the same server.js works fine.
Everything looks correct to me. The only odd thing I see is that sqlite3 fails during npm install in docker build:
[sqlite3] Command failed:
module.js:356
Module._extensions[extension](this, filename);
^
Error: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found
...
node-pre-gyp ERR! Testing pre-built binary failed, attempting to source compile
but the source compile appears to succeed just fine.
I hope I'm just doing something silly here.
In your Ghost config, change the related server host to 0.0.0.0 instead of 127.0.0.1:
server: {
host: '0.0.0.0',
...
}
PS: for the SQLite error, try this Dockerfile:
FROM phusion/baseimage:latest
# Set correct environment variables.
ENV HOME /root
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
# ...put your own build instructions here...
# Install Node.js and npm
ENV DEBIAN_FRONTEND noninteractive
RUN curl -sL https://deb.nodesource.com/setup | sudo bash -
RUN apt-get install -y nodejs
# Copy Project Files
RUN mkdir /root/webapp
WORKDIR /root/webapp
COPY app /root/webapp/app
COPY package.json /root/webapp/
RUN npm install
# Add runit service for Node.js app
RUN mkdir /etc/service/webapp
ADD deploy/runit/webapp.sh /etc/service/webapp/run
RUN chmod +x /etc/service/webapp/run
# Add syslog-ng Logentries config file
ADD deploy/syslog-ng/logentries.conf /etc/syslog-ng/conf.d/logentries.conf
# Expose Ghost port
EXPOSE 2368
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
Note I used phusion/baseimage instead of google/nodejs-runtime and installed node.js & npm with:
ENV DEBIAN_FRONTEND noninteractive
RUN curl -sL https://deb.nodesource.com/setup | sudo bash -
RUN apt-get install -y nodejs
In your Dockerfile, you need this command EXPOSE 8080.
But that only makes that port accessible outside the Docker container. When you run a container from that image you need to 'map' that port. For example:
$ docker run -d -t -p 80:8080 <imagename>
The -p 80:8080 directs port '8080' in the container to port '80' on the outside when it is running.
The syntax always confuses me (I think it is backwards).
