Cannot access MongoDB inside Docker container - Node.js

I am currently containerizing my application with Docker. I have two images, one for my Node application and another for MongoDB, and I have linked the two using docker-compose. But whenever I try accessing MongoDB inside the container on port 27017, it displays this:

It looks like you are trying to access MongoDB over HTTP on the native driver port.

I have my MongoDB config as:
"url": "mongodb://mongoDB/astroDB"
My docker-compose file is:
version: '3.0'
services:
  web:
    image: astrobot-node
    build: .
    command: "yarn start"
    ports:
      - "80:3601"
    depends_on:
      - "mongo"
  mongo:
    image: mongo
    ports:
      - "27017:27017"
Can anyone tell me what's actually wrong? Thanks in advance.

"It looks like you are trying to access MongoDB over HTTP on the native driver port."
This is expected behavior if you send HTTP requests to the MongoDB port. What this says to me is that you have your network connectivity figured out, and that part is working correctly.
Next, you just need to use a MongoDB driver to talk to the database from the application instead of hitting it with curl or similar.
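For example, here is a minimal sketch using the official mongodb Node.js driver (the 3.x callback API is assumed; astroDB is the database from the question, and the mongo hostname matches the corrected service name discussed below):

const MongoClient = require('mongodb').MongoClient;

// 'mongo' is the docker-compose service name; Docker's embedded DNS
// resolves it to the MongoDB container on the shared network.
const url = 'mongodb://mongo:27017/astroDB';

MongoClient.connect(url, function (err, client) {
  if (err) throw err;
  const db = client.db('astroDB');
  // ... run queries against db here ...
  client.close();
});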

The standard MongoDB connection URL looks like this:
mongodb://[username:password@]host1[:port1][,...hostN[:portN]][/[defaultauthdb][?options]]
https://docs.mongodb.com/manual/reference/connection-string/
In your case, the mongoDB in your URL string indicates the DB host. To make it point to the MongoDB container, you should change it to "url": "mongodb://mongo/astroDB", because your service is named mongo, not mongoDB.
You can also achieve this by giving your container a static IP and writing that IP directly; you can find your container's network IP by running docker container inspect <container-name> to test it manually.
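For example, to print just the IP address (a command sketch; under docker-compose the actual container name is typically prefixed with the project, e.g. <project>_mongo_1, so substitute your own):

# Show the container's IP address on each network it is attached to
docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>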
EDIT
Here is a thread that should help you debug your problem.
https://forums.docker.com/t/how-mongodb-work-in-docker-how-to-connect-with-mongodb/44763/8

Related

My docker container failing to connect to mongodb database container [duplicate]

I'm new to docker. I'm trying to create a MongoDB container and a NodeJS container. My file looks like this:
version: '2'
services:
  backend:
    image: node:5.11-onbuild
    ports:
      - "3001:3001"
    volumes:
      - .:/code
    working_dir: "/code"
    links:
      - mongodb
  mongodb:
    image: mongo:3.3
    expose:
      - 27017
It should run npm install and then node ..
But docker-compose up ends up with [MongoError: connect ECONNREFUSED 127.0.0.1:27017] while running the node .. command.
I think this is because of the bind_ip = 127.0.0.1 in the file /etc/mongod.conf. Is this right?
I use boot2docker on a Win10 system.
How can I solve this problem so that node can connect to the MongoDB?
In your backend app, connect to mongodb:27017 instead of 127.0.0.1:27017, where 'mongodb' is the name of your service within docker-compose.yml.
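Not from the answers above, but a common companion pattern: pass the host in through an environment variable so the same code runs both inside and outside Docker. A minimal sketch (MONGO_HOST is a hypothetical variable name):

// Set MONGO_HOST=mongodb in the backend service's environment in
// docker-compose.yml; it falls back to localhost for local runs.
var host = process.env.MONGO_HOST || '127.0.0.1';
var url = 'mongodb://' + host + ':27017/mydb';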
I recently encountered a similar issue. I am running Docker Toolbox under Windows 10, and this is how it worked for me:
1) I had to verify which URL my default docker machine is using. This can be checked by running the docker-machine ls command. It will list the available machines:
NAME             ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
default          *        virtualbox   Running   tcp://192.168.99.100:1234           v17.06.0-ce
rancher-client   -        virtualbox   Stopped                                       Unknown
rancher-server   -        virtualbox   Stopped                                       Unknown
2) When running the mongodb image, specify the port mapping:
docker run -d -it -p 27017:27017 mongo
3) At that point, a valid mongo URL would look something like this:
var dbhost = 'mongodb://192.168.99.100:27017/test';
where 192.168.99.100 is the default machine URL from point 1).
Hope it helps someone.
Most likely, yes. 127.0.0.1 points to localhost inside the mongodb container, so it is not accessible from outside the container. Binding to 0.0.0.0 will probably work.
With the link you specified in the docker-compose.yml, your backend container should then be able to connect to the mongo container through mongodb:27017
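For reference, here is a minimal sketch of the relevant section of /etc/mongod.conf, assuming the YAML config format of MongoDB 2.6+ (the question's bind_ip = 127.0.0.1 is the older INI style):

net:
  # Listen on all interfaces so other containers can reach mongod;
  # if the port is published to the host, combine this with auth.
  bindIp: 0.0.0.0
  port: 27017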
You have to tell the container to use its own IP address instead of localhost.
For example, let's assume you generated scaffold code with Express; in routes/index.js you would write:
var mongodb = require('mongodb');

router.get('/thelist', function (req, res) {
  // Get a Mongo client to work with the Mongo server
  var MongoClient = mongodb.MongoClient;
  // Define where the MongoDB server is
  var url = 'mongodb://172.17.0.5:27017/dbname';
  // Connect to the server
  MongoClient.connect(url, function (err, db) {
    // ... query the database and send the response here ...
  });
});

where 172.17.0.5 is the $CONTAINER_IP.
You can find the container IP via:
$ docker inspect $CONTAINER_HOSTNAME | grep IPAddress
If it's still unclear, you can take a peek at my Docker NodeJS and MongoDB app.

Why does mongo crash inside of docker at random? [duplicate]

My server threw this today, which is a Node.js error I've never seen before:
Error: getaddrinfo EAI_AGAIN my-store.myshopify.com:443
    at Object.exports._errnoException (util.js:870:11)
    at errnoException (dns.js:32:15)
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:78:26)
I'm wondering if this is related to the DynDns DDOS attack which affected Shopify and many other services today. Here's an article about that.
My main question is what does dns.js do? What part of node is it a part of? How can I recreate this error with a different domain?
If you get this error with Firebase Cloud Functions, this is due to the limitations of the free tier (outbound networking only allowed to Google services).
Upgrade to the Flame or Blaze plans for it to work.
EAI_AGAIN is a DNS lookup timeout error, which means it is a network connectivity error or a proxy-related error.
My main question is what does dns.js do?
In brief, dns.js is the part of Node that resolves a domain name to an IP address.
Some more info:
http://www.codingdefined.com/2015/06/nodejs-error-errno-eaiagain.html
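To see dns.js at work, here is a minimal sketch; getaddrinfo errors such as EAI_AGAIN surface through this same callback (though this lookup normally succeeds unless your resolver is unreachable):

const dns = require('dns');

// dns.lookup is what the http/https modules call under the hood.
dns.lookup('example.com', function (err, address, family) {
  if (err) {
    console.error(err.code); // e.g. 'EAI_AGAIN' or 'ENOTFOUND'
    return;
  }
  console.log(address, 'IPv' + family);
});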
If you get this error from within a docker container, e.g. when running npm install inside of an alpine container, the cause could be that the network changed since the container was started.
To solve this, just stop and restart the container:
docker-compose down
docker-compose up
Source: https://github.com/moby/moby/issues/32106#issuecomment-578725551
As xerq's excellent answer explains, this is a DNS timeout issue.
I wanted to contribute another possible answer for those of you using Windows Subsystem for Linux - there are some cases where something seems to be askew in the client OS after Windows resumes from sleep. Restarting the host OS will fix these issues (it's also likely restarting the WSL service will do the same).
For those who perform thousands or millions of requests per day and need a solution to this issue:
It's quite normal to get getaddrinfo EAI_AGAIN errors when performing a lot of requests on your server. Node.js itself doesn't perform any DNS caching; it delegates everything DNS-related to the OS.
Keep in mind that every http/https request performs a DNS lookup. This can become quite expensive, so to avoid this bottleneck and the getaddrinfo errors, you can implement a DNS cache.
http.request (and https) accepts a lookup property, which defaults to dns.lookup():
http.get('http://example.com', { lookup: yourLookupImplementation }, response => {
  // do something here with response
});
I strongly recommend using an already-tested module instead of writing a DNS cache yourself, since you'll have to handle TTL correctly, among other things, to avoid hard-to-track bugs.
I personally use cacheable-lookup, which is the one that got uses (see its dnsCache option).
You can use it on specific requests:

const http = require('http');
const CacheableLookup = require('cacheable-lookup');
const cacheable = new CacheableLookup();

http.get('http://example.com', { lookup: cacheable.lookup }, response => {
  // Handle the response here
});
or globally
const http = require('http');
const https = require('https');
const CacheableLookup = require('cacheable-lookup');
const cacheable = new CacheableLookup();
cacheable.install(http.globalAgent);
cacheable.install(https.globalAgent);
NOTE: keep in mind that if a request is not performed through the Node.js http/https modules, using .install on the global agent won't have any effect on that request; for example, requests made using undici.
The OP's error specifies a host (my-store.myshopify.com).
The error I encountered is the same in all respects except that no domain is specified.
My solution may help others who are drawn here by the title "Error: getaddrinfo EAI_AGAIN"
I encountered the error when trying to serve a NodeJs & VueJs app from a different VM than the one where the code was originally developed.
The file vue.config.js read:

module.exports = {
  devServer: {
    host: 'tstvm01',
    port: 3030,
  },
};
When served on the original machine, the start-up output is:
App running at:
- Local: http://tstvm01:3030/
- Network: http://tstvm01:3030/
Using the same settings on a VM tstvm07 got me a very similar error to the one the OP describes:
INFO Starting development server...
10% building modules 1/1 modules 0 active
events.js:183
      throw er; // Unhandled 'error' event
      ^
Error: getaddrinfo EAI_AGAIN
    at Object._errnoException (util.js:1022:11)
    at errnoException (dns.js:55:15)
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:92:26)
If it ain't already obvious, changing vue.config.js to read ...
module.exports = {
  devServer: {
    host: 'tstvm07',
    port: 3030,
  },
};
... solved the problem.
I started getting this error (different stack trace though) after making a trivial update to my GraphQL API application that is operated inside a docker container. For whatever reason, the container was having difficulty resolving a back-end service being used by the API.
After poking around to see if some change had been made in the docker base image I was building from (node:13-alpine, incidentally), I decided to try the oldest computer science trick of rebooting... I stopped and started the docker container and all went back to normal.
Clearly, this isn't a meaningful solution to the underlying problem - I am merely posting this since it did clear up the issue for me without going too deep down rabbit holes.
I was having this issue with docker-compose. It turns out I had forgotten to add my custom isolated named network to one of my services, so it couldn't be found.
TL;DR: Make sure, in your compose file, you have your custom networks defined on both services that need to talk to each other.
My error looked like this: Error: getaddrinfo EAI_AGAIN minio-service. The error was coming from my server's backend when making a call to minio-service using the minio-service hostname. This tells me that minio-service's running service was not reachable by my server's running service. The way I fixed this issue was to change the minio-service in my docker-compose from this:
docker-compose.yml

version: "3.8"
# ...
services:
  server:
    # ...
    networks:
      my-network:
    # ...
  minio-service:
    # ... (missing networks: section)
# ...
networks:
  my-network:
To include my custom isolated named network, like this:
docker-compose.yml

version: "3.8"
# ...
services:
  server:
    # ...
    networks:
      my-network:
    # ...
  minio-service:
    # ...
    networks:
      my-network:
    # ...
# ...
networks:
  my-network:
More details on docker-compose networking can be found here.
This issue is related to the hosts file setup. Add the following line to your hosts file:
In Ubuntu (/etc/hosts):
127.0.0.1 localhost
In Windows (c:\windows\System32\drivers\etc\hosts):
127.0.0.1 localhost
In my case the problem was the Docker networks' IP allocation range; see this post for details.
As xerq correctly pointed out, here's some more reference:
http://www.codingdefined.com/2015/06/nodejs-error-errno-eaiagain.html
I got the same error; I solved it by updating the "hosts" file present under this location in Windows OS:
C:\Windows\System32\drivers\etc
Hope it helps!
In my case, while connected to a VPN, the error happens when running Ubuntu from inside Windows Terminal, but doesn't happen when opening Ubuntu directly from Windows (not from inside Windows Terminal).
I had the same problem with AWS and Serverless. I tried the eu-central-1 region and it didn't work, so I had to change it to us-east-2 for the example.
I was getting this error after I recently added a new network to my docker-compose file.
I initially had these services:
services:
  frontend:
    depends_on:
      - backend
    ports:
      - 3005:3000
  backend:
    ports:
      - 8005:8000
I decided to add a new network which hosts other services I wanted my frontend service to have access to, so I did this:
networks:
  moar:
    name: moar-network
    attachable: true
services:
  frontend:
    networks:
      - moar
    depends_on:
      - backend
    ports:
      - 3005:3000
  backend:
    ports:
      - 8005:8000
Unfortunately, the above made it so that my frontend service was no longer visible on the default network, only on the moar network. This meant that the frontend service could no longer proxy requests to backend, and therefore I was getting errors like:
Error occured while trying to proxy to: localhost:3005/graphql/
The solution is to add the default network to the frontend service's network list, like so:
networks:
  moar:
    name: moar-network
    attachable: true
services:
  frontend:
    networks:
      - moar
      - default # here
    depends_on:
      - backend
    ports:
      - 3005:3000
  backend:
    ports:
      - 8005:8000
Now we're peachy!
One last thing, if you want to see which services are running within a given network, you can use the docker network inspect <network_name> command to do so. This is what helped me discover that the frontend service was not part of the default network anymore.
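For example (a command sketch reusing the moar-network name from above; the --format template is optional):

# Full JSON; the "Containers" section lists everything attached
docker network inspect moar-network

# Print just the names of the attached containers
docker network inspect moar-network --format '{{range .Containers}}{{.Name}} {{end}}'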
Enabled Blaze and it still doesn't work?
Most probably you need to load .env from the right path; require('dotenv').config({ path: __dirname + './../.env' }) won't work (nor will any other path). Simply put the .env file in the functions directory, from which you deploy to Firebase.

Error: connect ECONNREFUSED 127.0.0.1:5432 docker-compose up

Not sure why I am getting the SequelizeConnectionRefusedError. I verified that I am able to run all my docker images locally, but when I try to run the docker-compose up command, I run into Error: connect ECONNREFUSED 127.0.0.1:5432.
Based on my understanding of your question, here are my assumptions:
You are using MacOS
Your Postgres server is running in the host OS instead of in another docker container.
With that being said, this is a common problem for MacOS users who want to connect their Docker containers to a Postgres server running on the host machine. As they are not in the same network, there is no way for your container to reach the Postgres server, and hence connecting to it via 127.0.0.1:5432 will definitely not work.
This would be trivial to solve on a Linux machine by adding network_mode: host, so that the containers run in the same network as the host machine and can therefore reach the Postgres server. However, due to the implementation of Docker on Mac, where the Docker host actually runs in a hidden VM on top of your MacOS, this solution will not work here.
Some suggestions:
Migrate your Postgres server to run in a docker container (in the same docker-compose file if you will). You can always do a port mapping in order to access it from your Postbird.
Or, if you still insist on running it locally on your MacOS, here is a workaround that involves creating another docker container in the same docker network and performing reverse SSH tunneling.
Here are the steps to migrate the Postgres server to using docker container
Update your docker-compose with a new db service:
db:
  image: postgres:10.5-alpine
  environment:
    POSTGRES_USER: $UDAGRAM_USERNAME
    POSTGRES_PASSWORD: $UDAGRAM_PASSWORD
    POSTGRES_DB: $UDAGRAM_DATABASE
  ports:
    - 35432:5432
  volumes:
    - <path where you want to persist your database data>:/var/lib/postgresql/data
You can now connect to your new postgres using Postbird at localhost:35432
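From another service in the same compose file, the app should instead connect to the service name db on the internal port 5432 (the published 35432 is only for tools on the host, like Postbird). A minimal sketch with the pg package, reusing the UDAGRAM_* variables from the snippet above:

const { Pool } = require('pg');

// 'db' resolves to the Postgres container inside the compose network,
// so the container-internal port 5432 applies here.
const pool = new Pool({
  host: 'db',
  port: 5432,
  user: process.env.UDAGRAM_USERNAME,
  password: process.env.UDAGRAM_PASSWORD,
  database: process.env.UDAGRAM_DATABASE,
});

pool.query('SELECT 1 AS ok', function (err, res) {
  if (err) throw err;
  console.log(res.rows); // [ { ok: 1 } ]
});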
EDIT 1
If you run your Postgres instance in AWS RDS, you will not need to make the changes above; instead, follow these steps:
Make sure that your network can reach the RDS endpoint at port 5432. A best practice here is to update the security group inbound rules to allow only port 5432 from only your IP address (how to do that is out of the scope of this answer but can easily be found from AWS documentation)
Update the value of UDAGRAM_HOST to be the RDS endpoint which can be found from the AWS RDS console.
