How to connect to Docker containers within the same network? - node.js

I am trying to connect my Node.js application to my MongoDB without using the IP address. The strategy I have read about is to put both containers on the same network.
My Node.js application looks like this:
var express = require('express');
var app = express();
var mongoose = require('mongoose');
var bodyParser = require('body-parser');
var PORT = 4000;

// REQUIRE MIDDLEWARE
var instantMongoCrud = require('express-mongo-crud'); // require the module

mongoose.connect('localhost:27017/user');

var options = { // specify options
    host: `localhost:${PORT}`
};

// USE AS MIDDLEWARE
app.use(bodyParser.json()); // add body parser
app.use(instantMongoCrud(options)); // use as middleware

app.listen(PORT, () => {
    console.log('started');
});
I have run my mongodb database container and attached it to a network called nodenetwork.
I then built and ran my application as below:
docker build -t sampleapp:v1 . && docker run --net container:mongodb sampleapp:v1
The app runs correctly according to the console output. However, I cannot access it via my browser.
I understand that this is because I must publish port 4000 when I do docker run, as I had done before.
However, the issue is that when I tried to run it like this instead:
docker build -t sampleapp:v1 . && docker run -p 4000:4000 --net container:mongodb sampleapp:v1
it throws: docker: Error response from daemon: conflicting options: port publishing and the container type network mode.
As such, my question is: how do I modify this command, and is this the best way?

You're telling your application that mongodb will be accessible on localhost, but it won't be: localhost inside the application container is the container itself, unless you run it with --net=host and your mongodb container is publishing port 27017 or running with --net=host as well.
You could create a docker network and connect both containers (mongodb and app) to it.
Example:
$ docker network create mynetwork --driver bridge
$ docker run -d --name mongodb --net=mynetwork bitnami/mongodb:latest
$ docker run -d --name app -p 4000:4000 --net=mynetwork sampleapp:v1
This creates a DNS record for the mongodb container, with the same name as the container, resolvable from any container within your user-defined network.
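Optionally, you can confirm that both containers ended up attached to the network; docker network inspect lists the connected containers:
$ docker network inspect mynetwork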
Another solution would be to use docker-compose:
version: '3'
services:
  mongodb:
    image: bitnami/mongodb:latest
  app:
    build: ./
    ports:
      - 4000:4000
Place this in a file called docker-compose.yml in the same directory as the Dockerfile for sampleapp and run docker-compose up.
As docker-compose creates a user-defined network as well, you can access mongodb using the container's DNS record.
In either case, change your application to connect using the DNS name provided by Docker:
mongoose.connect('mongodb://mongodb:27017/user');
Read more about docker-compose and user-defined networks here:
https://docs.docker.com/compose/overview/
https://docs.docker.com/network/bridge/

Related

How to configure port of React app fetch when deploying to ECS fargate cluster with two tasks

I have two docker images that communicate fine when deployed locally, but I'm not sure how to set up my app to correctly make fetch() calls from the React app to the correct port on the other app when they're both deployed as tasks on the same ECS cluster.
My react app uses a simple fetch('/fooService/' + barStr) type call, and my other app exposes a /fooService/{barStr} endpoint on port 8081.
For local deployment and local docker, I used setupProxy.js to specify a proxy:
const { createProxyMiddleware } = require("http-proxy-middleware");

module.exports = function(app) {
    app.use(createProxyMiddleware('/fooService', {
        target: 'http://fooImage:8081',
        changeOrigin: true
    }));
};
In ECS this seems to do nothing, though. I see the setupProxy run when the image starts up, but the requests from my react app go directly to {sameIPaddress}/fooService/{barStr}, ignoring the proxy specification entirely. I can see in the browser that the requests are being made over port 80. If these requests are made on port 8081 manually, they complete just fine, so the port is available and the service is running.
I've exposed port 8081 on the other task, and I can access it externally with no problem; I'm just unclear on how to point my react app at it, since I won't necessarily know which IP address the task will be assigned until it launches. If I use a relative path, I cannot specify the port.
What's the idiomatic way to specify the destination of my fetch requests in this context?
Edit: If it is relevant, here is how the docker images are configured. They are built automatically on dockerhub and work fine if I deploy them in my local docker instance.
docker-compose.yaml
version: "3.8"
services:
  fooImage:
    image: myname/foo-image:0.1
    build: ./
    container_name: server
    ports:
      - '8081:8081'
  barImage:
    image: myname/bar-image:0.1
    build: ./bar
    container_name: front
    ports:
      - '80:80'
    stdin_open: true
    tty: true
Dockerfile - foo image
#
# Build stage
#
FROM maven:3.8.5-openjdk-18-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean package
FROM openjdk:18-alpine
COPY --from=build /home/app/target/*.jar /usr/local/lib/app.jar
EXPOSE 8081
ENTRYPOINT ["java", "-jar", "/usr/local/lib/app.jar"]
Dockerfile - bar image
FROM node:17-alpine
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
(Screenshots of the ECS Foo and Bar task port mappings were attached here.)
The solution to this issue was to change the proxy target back to "http://localhost:8081". Per Amazon support:
"To resolve your issue quickly, you can try changing your proxy configuration from "http://server:8081" to "http://localhost:8081", and the proxy should work.
That's because when using Fargate with the awsvpc network mode, containers running in a Task share the same network namespace, which means containers can communicate with each other through localhost. (E.g. when the back-end container listens on port 8081, the front-end container can access it via localhost:8081.) And when using docker-compose, you can use the hostname to communicate with another container specified in the same docker-compose file. So proxying back-end traffic with "http://server:8081" in Fargate won't work and should be modified to "http://localhost:8081"."
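For this question's setup, that means the local setupProxy.js above only needs its target swapped; a minimal sketch (the same http-proxy-middleware call as in the question, with only the target changed per the support answer):

const { createProxyMiddleware } = require("http-proxy-middleware");

module.exports = function(app) {
    // In Fargate's awsvpc mode both containers share one network namespace,
    // so the back-end is reachable on localhost rather than by service name.
    app.use(createProxyMiddleware('/fooService', {
        target: 'http://localhost:8081', // was http://fooImage:8081 for local docker
        changeOrigin: true
    }));
};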

I can't connect to my mongo docker container by its name

I'm pretty new to docker but I'm having some issues getting a node app to connect to a mongo database running on a separate container.
I'm using the official mongo image
I run it using:
docker run --name some-mongo --network-alias some-mongo -d mongo
It's running on port 27017 by default. I can connect to it using the mongo shell:
mongo --host mongodb://172.17.0.2:27017
But I can't connect to it by name:
mongo --host mongodb://some-mongo:27017
Instead I get an error message about how it can't resolve the mongo host:
MongoDB shell version: 3.2.19
connecting to: mongodb://some-mongo:27017/test
2018-05-07T17:23:20.813-0400 I NETWORK [thread1] getaddrinfo("some-mongo") failed: Name or service not known
2018-05-07T17:23:20.813-0400 E QUERY [thread1] Error: couldn't initialize connection to host some-mongo, address is invalid :
connect#src/mongo/shell/mongo.js:223:14
#(connect):1:6
exception: connect failed
I've tried some docker-compose tutorials, but they're either too simple or they don't seem to work for me. I just want to connect a custom node app (not the official node image) to mongodb and some other dependencies.
Your approach does not alter your host's name resolution, so the mongo service will not be reachable by name just like that. Agreeing with @unm4sk, you should compose your application's services into a single compose file like this:
version: '2'
services:
  mongo:
    image: mongo
    expose:
      - "27017"
    [...]
  service_utilizing_mongo:
    [...]
    links:
      - mongo:mongo
Then your service_utilizing_mongo would have a DNS entry making the mongo service reachable via the alias mongo on the default 27017 port.
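From inside service_utilizing_mongo, a Node service would then connect through that alias; a minimal sketch (mongoose and the database name test are assumptions, not from the question):

var mongoose = require('mongoose');
// 'mongo' resolves to the linked mongo container inside the compose network.
mongoose.connect('mongodb://mongo:27017/test');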
You have to run your container publishing its port to your host machine:
docker run -p 27017:27017 --name some-mongo --network-alias some-mongo -d mongo
Then you can connect to MongoDB from your host machine, e.g. mongo --host mongodb://localhost:27017.
If you don't want to do this, you can connect to mongo through the docker container command:
docker exec -it some-mongo mongo

Dockerized NodeJS application is unable to invoke another dockerized SpringBoot API

I am running a SpringBoot application in a docker container and another VueJS application in another docker container using docker-compose.yml as follows:
version: '3'
services:
  backend:
    container_name: backend
    build: ./backend
    ports:
      - "28080:8080"
  frontend:
    container_name: frontend
    build: ./frontend
    ports:
      - "5000:80"
    depends_on:
      - backend
I am trying to invoke SpringBoot REST API from my VueJS application using http://backend:8080/hello and it is failing with GET http://backend:8080/hello net::ERR_NAME_NOT_RESOLVED.
Interestingly, if I go into the frontend container and ping backend, it is able to resolve the hostname backend and I can even get the response using wget http://backend:8080/hello.
Even more interestingly, I added another SpringBoot application to the docker-compose file, and from that application I am able to invoke http://backend:8080/hello using RestTemplate!
My frontend/Dockerfile:
FROM node:9.3.0-alpine
ADD package.json /tmp/package.json
RUN cd /tmp && yarn install
RUN mkdir -p /usr/src/app && cp -a /tmp/node_modules /usr/src/app
WORKDIR /usr/src/app
ADD . /usr/src/app
RUN npm run build
ENV PORT=80
EXPOSE 80
CMD [ "npm", "start" ]
In my package.json I mapped script "start": "node server.js" and my server.js is:
const express = require('express')
const app = express()
const port = process.env.PORT || 3003
const router = express.Router()

app.use(express.static(`${__dirname}/dist`)) // set the static files location for the static html
app.engine('.html', require('ejs').renderFile)
app.set('views', `${__dirname}/dist`)

router.get('/*', (req, res, next) => {
    res.sendFile(`${__dirname}/dist/index.html`)
})

app.use('/', router)
app.listen(port)
console.log('App running on port', port)
Why is it not able to resolve hostname from the application but can resolve from the terminal? Am I missing any docker or NodeJS configuration?
Finally figured it out. Actually, there is no issue. When I run my frontend VueJS application in a docker container and access it from the browser, the HTML and JS files are downloaded to the browser's machine, which is my host, so the REST API calls are made from the host machine. From my host, the docker container hostname (backend) does not resolve.
The solution: instead of using the internal docker hostname and port (backend:8080), I need to use my host's name and the mapped port (localhost:28080) when making REST calls.
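For illustration, a minimal sketch of the adjusted browser-side call (the /hello endpoint and the 28080 host mapping are the ones from the compose file above):

// The browser runs on the host, so it must use the host-mapped port
// (28080), not the compose-internal address backend:8080.
fetch('http://localhost:28080/hello')
    .then(res => res.text())
    .then(body => console.log(body));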
I would suggest:
docker ps to get the names/IDs of running containers
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' BACKEND_CONTAINER_NAME to get backend container's IP address from the host.
Now put this IP in the front app and it should be able to connect to your backend.

How to connect nodeJS docker container to mongoDB

I'm having problems connecting a nodeJS application, which is running as a docker container, to a mongoDB. Let me explain what I have done so far:
$ docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED       STATUS       PORTS       NAMES
3a3732cc1d90   mongo:3.4   "docker-entrypoint..."   3 weeks ago   Up 3 weeks   27017/tcp   mongo_live
As you can see, there is already a mongo docker container running.
Now I'm running my nodeJS application docker container (which is built from a meteorJS app):
$ docker run -it 0b422defbd59 /bin/bash
In this docker container I want to run the application by running:
$ node main.js
Now I'm getting the error
Error: MONGO_URL must be set in environment
I already tried to set MONGO_URL by setting:
ENV MONGO_URL mongodb://mongo_live:27017/
But this doesn't work:
MongoError: failed to connect to server [mongo_live:27017] on first connect
So my question is how to connect to a DB which is - as far as I understand - 'outside' of the running container. Alternatively, how do I set up a new DB in this container?
There are a couple of ways to do it.
Run your app in the same network namespace as your mongodb container:
docker run --net container:mongo_live your_app_docker_image
# then you can use mongodb in your localhost
$ ENV MONGO_URL mongodb://localhost:27017/
Also you can link the two containers:
docker run --link mongo_live:mongo_live your_app_image ..
# Now mongodb is accessible via the hostname mongo_live
Use the mongodb container's IP address:
docker inspect -f '{{.NetworkSettings.IPAddress}}' mongo_live
# you will get you container ip here
$ docker run -it 0b422defbd59 /bin/bash
# ENV MONGO_URL mongodb://[ip from previous command]:27017/
You can bind your mongodb port to your host and use the host's hostname in your app
You can use a docker network and run both apps in the same network
You could pass --add-host mongo_live:<ip of mongo container> to docker run for your application and then use mongo_live in your mongodb url
You can also use docker compose to make your life easier ;) (see the sketch below)
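For example, a minimal compose sketch for this setup (the app image name is hypothetical; the MONGO_URL value and the mongo:3.4 image come from the question):

version: '2'
services:
  mongo_live:
    image: mongo:3.4
  app:
    image: your_meteor_app_image # hypothetical image name
    environment:
      - MONGO_URL=mongodb://mongo_live:27017/
    depends_on:
      - mongo_live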
...
When you run containers, each container gets its own isolated network, so one container can't connect to another point-to-point by default.
There are 3 ways to connect containers:
Have a little fuss with low-level docker network magic
Connect containers through localhost. Each container must expose its ports on localhost (as your mongo_live does), and you need to add an entry to the hosts file: 127.0.0.1 mongo_live (this is the simplest way)
Use docker-compose. It is a convenient tool for running many containers together. (This is the right way)
Adding mongodb to the application container is not the docker way.
Please use the below snippet for your docker-compose.yml file, replacing the comments with your actual values. It should solve your problem.
version: '2'
services:
  db:
    build: <image for mongoDB>
    ports:
      - "27017:27017" # whatever port u r using
    environment:
      # you can specify mongo db username and stuff here
    volumes:
      # load default config for mongodb from here
      - "db-data-store:/data/db" # path depends on which image you use
    networks:
      - network
  nodejs:
    build: # image for node js
    expose:
      - # mention port for nodejs
    volumes:
      - # mount project code on container
    networks:
      - network
    depends_on:
      - db
networks:
  network:
    driver: bridge
Please use the below links for reference:
1) NodeJs Docker
2) MongoDb docker
3) docker-compose tutorial
Best of Luck

fail to link redis container to node.js container in docker

I have deployed a simple redis-based nodejs application on the DigitalOcean cloud.
Here is the node.js app.
var express = require('express');
var app = express();

app.get('/', function(req, res){
    res.send('hello world');
});

app.set('trust proxy', 'loopback')
app.listen(3000);

var redisClient = require('redis').createClient(6379, 'localhost');
redisClient.on('connect', function(err){
    console.log('connect');
})
In order to deploy the application, I used one node.js container and one redis container, and linked the node.js container with the redis container.
The redis container was started with:
docker run -d --name redis -p 6379:6379 dockerfile/redis
The node.js container is based on google/nodejs, and its Dockerfile is simply:
FROM google/nodejs
WORKDIR /src
EXPOSE 3000
CMD ["/bin/bash"]
My node.js image is named nodejs and built by:
docker build -t nodejs Dockerfile_path
The container is run by mounting my host application files into the /src folder in the container and linking the existing redis container:
docker run -it --rm -p 8080:3000 --name app -v node_project_path:/src --link redis:redis nodejs
Finally, I got into the container successfully, installed the npm modules with npm install, and started the application with node app.js.
But I got a error saying:
Error: Redis connection to localhost:6379 failed - connect ECONNREFUSED
Since the redis container exposes port 6379 and my nodejs container is linked to it, connecting to the redis server on localhost:6379 from my node.js app is supposed to be OK. Why is it not working at all?
When you link the redis container to the node container, docker already modifies the hosts file for you.
You should then be able to connect to the redis container via:
var redisClient = require('redis').createClient(6379,'redis'); // 'redis' is alias for the link -> what's in the hosts file.
From: https://docs.docker.com/userguide/dockerlinks/
$ sudo docker run -d -P --name web --link db:db training/webapp python app.py
This will link the new web container with the db container you created earlier. The --link flag takes the form:
--link name:alias
Where name is the name of the container we're linking to and alias is an alias for the link name. You'll see how that alias gets used shortly.
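Applied to the app in this question, only the createClient host changes; a sketch (the error handler is added for illustration):

var redisClient = require('redis').createClient(6379, 'redis'); // 'redis' = link alias
redisClient.on('connect', function(){
    console.log('connected to redis through the link alias');
});
redisClient.on('error', function(err){
    console.log('redis connection error', err);
});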
