Why does mongo crash inside of docker at random? [duplicate] - node.js
My server threw this today, which is a Node.js error I've never seen before:
Error: getaddrinfo EAI_AGAIN my-store.myshopify.com:443
at Object.exports._errnoException (util.js:870:11)
at errnoException (dns.js:32:15)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:78:26)
I'm wondering if this is related to the DynDns DDOS attack which affected Shopify and many other services today. Here's an article about that.
My main question is what does dns.js do? What part of node is it a part of? How can I recreate this error with a different domain?
If you get this error with Firebase Cloud Functions, this is due to the limitations of the free tier (outbound networking only allowed to Google services).
Upgrade to the Flame or Blaze plans for it to work.
EAI_AGAIN is a DNS lookup timed-out error; it usually indicates a network connectivity or proxy-related problem.
My main question is what does dns.js do?
dns.js is the part of Node's standard library that resolves a domain name to an IP address (in brief).
Some more info:
http://www.codingdefined.com/2015/06/nodejs-error-errno-eaiagain.html
If you get this error from within a Docker container, e.g. when running npm install inside an Alpine container, the cause could be that the network changed since the container was started.
To solve this, stop and restart the container:
docker-compose down
docker-compose up
Source: https://github.com/moby/moby/issues/32106#issuecomment-578725551
As xerq's excellent answer explains, this is a DNS timeout issue.
I wanted to contribute another possible answer for those of you using Windows Subsystem for Linux - there are some cases where something seems to be askew in the client OS after Windows resumes from sleep. Restarting the host OS will fix these issues (it's also likely restarting the WSL service will do the same).
For those who perform thousands or millions of requests per day and need a solution to this issue:
It's quite normal to get getaddrinfo EAI_AGAIN errors when performing a lot of requests from your server. Node.js itself doesn't perform any DNS caching; it delegates everything DNS-related to the OS.
Keep in mind that every http/https request performs a DNS lookup. This can become quite expensive; to avoid this bottleneck and getaddrinfo errors, you can implement a DNS cache.
http.request (and https) accepts a lookup property which defaults to dns.lookup()
http.get('http://example.com', { lookup: yourLookupImplementation }, response => {
// do something here with response
});
I strongly recommend using an already tested module instead of writing a DNS cache yourself, since you'll have to handle TTLs correctly, among other things, to avoid hard-to-track bugs.
I personally use cacheable-lookup, which is the one the got library uses (see its dnsCache option).
You can use it on specific requests
const http = require('http');
const CacheableLookup = require('cacheable-lookup');
const cacheable = new CacheableLookup();
http.get('http://example.com', {lookup: cacheable.lookup}, response => {
// Handle the response here
});
or globally
const http = require('http');
const https = require('https');
const CacheableLookup = require('cacheable-lookup');
const cacheable = new CacheableLookup();
cacheable.install(http.globalAgent);
cacheable.install(https.globalAgent);
NOTE: keep in mind that if a request is not performed through Node's http/https modules, using .install on the global agent won't have any effect on that request, for example requests made using undici.
The OP's error specifies a host (my-store.myshopify.com).
The error I encountered is the same in all respects except that no domain is specified.
My solution may help others who are drawn here by the title "Error: getaddrinfo EAI_AGAIN"
I encountered the error when trying to serve a NodeJs & VueJs app from a different VM than the one where the code was originally developed.
The file vue.config.js read:
module.exports = {
  devServer: {
    host: 'tstvm01',
    port: 3030,
  },
};
When served on the original machine, the start-up output is:
App running at:
- Local: http://tstvm01:3030/
- Network: http://tstvm01:3030/
Using the same settings on a VM tstvm07 got me a very similar error to the one the OP describes:
INFO Starting development server...
10% building modules 1/1 modules 0 activeevents.js:183
throw er; // Unhandled 'error' event
^
Error: getaddrinfo EAI_AGAIN
at Object._errnoException (util.js:1022:11)
at errnoException (dns.js:55:15)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:92:26)
If it ain't already obvious, changing vue.config.js to read ...
module.exports = {
  devServer: {
    host: 'tstvm07',
    port: 3030,
  },
};
... solved the problem.
I started getting this error (different stack trace though) after making a trivial update to my GraphQL API application that is operated inside a docker container. For whatever reason, the container was having difficulty resolving a back-end service being used by the API.
After poking around to see if some change had been made in the docker base image I was building from (node:13-alpine, incidentally), I decided to try the oldest computer science trick of rebooting... I stopped and started the docker container and all went back to normal.
Clearly, this isn't a meaningful solution to the underlying problem - I am merely posting this since it did clear up the issue for me without going too deep down rabbit holes.
I was having this issue with docker-compose. It turns out I forgot to add my custom isolated named network to the service that couldn't be found.
TL;DR: Make sure, in your compose file, you have your custom networks defined on both services that need to talk to each other.
My error looked like this: Error: getaddrinfo EAI_AGAIN minio-service. The error was coming from my server's backend when it made a call to the minio service using the minio-service hostname. This tells me that minio-service was not reachable from my server's service. I fixed the issue by changing the minio-service in my docker-compose from this:
docker-compose.yml
version: "3.8"
# ...
services:
  server:
    # ...
    networks:
      my-network:
    # ...
  minio-service:
    # ... (missing networks: section)
# ...
networks:
  my-network:
To include my custom isolated named network, like this:
docker-compose.yml
version: "3.8"
# ...
services:
  server:
    # ...
    networks:
      my-network:
    # ...
  minio-service:
    # ...
    networks:
      my-network:
    # ...
# ...
networks:
  my-network:
More details on docker-compose networking can be found here.
This issue can be related to your hosts file setup.
Add the following line to your hosts file:
In Ubuntu: /etc/hosts
127.0.0.1 localhost
In windows: c:\windows\System32\drivers\etc\hosts
127.0.0.1 localhost
In my case the problem was the Docker network's IP allocation range; see this post for details.
#xerq pointed it out correctly; here's some more reference:
http://www.codingdefined.com/2015/06/nodejs-error-errno-eaiagain.html
I got the same error and solved it by updating the "hosts" file present under this location on Windows:
C:\Windows\System32\drivers\etc
Hope it helps!
In my case, while connected to a VPN, the error happens when running Ubuntu from inside Windows Terminal, but doesn't happen when opening Ubuntu directly from Windows (not from inside Windows Terminal).
I had the same problem with AWS and Serverless. I tried the eu-central-1 region and it didn't work, so I had to change it to us-east-2 for the example.
I was getting this error after I recently added a new network to my docker-compose file.
I initially had these services:
services:
  frontend:
    depends_on:
      - backend
    ports:
      - 3005:3000
  backend:
    ports:
      - 8005:8000
I decided to add a new network which hosts other services I wanted my frontend service to have access to, so I did this:
networks:
  moar:
    name: moar-network
    attachable: true
services:
  frontend:
    networks:
      - moar
    depends_on:
      - backend
    ports:
      - 3005:3000
  backend:
    ports:
      - 8005:8000
Unfortunately, the above meant that my frontend service was no longer attached to the default network, only to the moar network. This meant that the frontend service could no longer proxy requests to backend, so I was getting errors like:
Error occured while trying to proxy to: localhost:3005/graphql/
The solution is to add the default network to the frontend service's network list, like so:
networks:
  moar:
    name: moar-network
    attachable: true
services:
  frontend:
    networks:
      - moar
      - default # here
    depends_on:
      - backend
    ports:
      - 3005:3000
  backend:
    ports:
      - 8005:8000
Now we're peachy!
One last thing, if you want to see which services are running within a given network, you can use the docker network inspect <network_name> command to do so. This is what helped me discover that the frontend service was not part of the default network anymore.
Enabled Blaze and it still doesn't work?
Most probably you need to load .env from the right path; require('dotenv').config({ path: __dirname + './../.env' }) won't work (nor will any other path). Simply put the .env file in the functions directory, from which you deploy to Firebase.
Related
How to fix the ECONNRESET ioredis error in Adonisjs during deployment
When I tried to deploy AdonisJS to DigitalOcean or Azure, I got this error:
[ioredis] Unhandled error event: Error: read ECONNRESET at TCP.onStreamRead (internal/stream_base_commons.js:209:20)
My Adonis app requires Redis to run. I'm using a Redis instance from DigitalOcean. Here's my production config for Redis:
prod: {
  host: Env.get("REDIS_HOST"),
  port: Env.get("REDIS_PORT"),
  password: Env.get("REDIS_PASSWORD"),
  db: 0,
  keyPrefix: ""
},
If you are connecting your AdonisJS app to a Transport Layer Security (TLS) protected Redis instance, you need to add the tls host to your config. So your prod config should look like this:
prod: {
  host: Env.get("REDIS_HOST"),
  port: Env.get("REDIS_PORT"),
  password: Env.get("REDIS_PASSWORD"),
  db: 0,
  keyPrefix: "",
  tls: {
    host: Env.get("REDIS_HOST"),
  },
},
As a followup to my comment - my docker environment degraded to the point where I couldn't even connect via redis-cli to the vanilla dockerhub Redis image. I wound up cleaning out my docker environment by removing all containers, images, volumes, networks, etc., and then rebooting my mac. After rebuilding them this problem went away for me. I hate not knowing the "root cause" but have a theory. I had been playing with a few different Redis images including the vanilla standalone image from dockerhub and a cluster image from https://github.com/Grokzen/docker-redis-cluster. I was tweaking the build of the latter to add authentication. The theory is that there were residual processes fighting over the port from repeated builds and tear downs. I may have been impatient and hard-stopped the containers I was working on multiple times while debugging the dockerfile and docker-entrypoint.sh files. :) I know this answer isn't directly related to hosting on DO or Azure but since the symptom is the same, perhaps there is a networking conflict somewhere.
Cannot access mongodb inside docker container
I am currently containerizing my application inside Docker. I have two images, one for my Node application and another for MongoDB, linked using docker-compose. But whenever I try accessing MongoDB inside the container using port 27017, it displays this: ** It looks like you are trying to access MongoDB over HTTP on the native driver port. I have the MongoDB config as: "url": "mongodb://mongoDB/astroDB" My docker-compose file is:
version: '3.0'
services:
  web:
    image: astrobot-node
    build: .
    command: "yarn start"
    ports:
      - "80:3601"
    depends_on:
      - "mongo"
  mongo:
    image: mongo
    ports:
      - "27017:27017"
Can anyone tell me what's actually wrong? Thanks in advance.
But whenever I try accessing mongodb inside container using the port 27017. But it displays like this: ** It looks like you are trying to access MongoDB over HTTP on the native driver port.
This is expected behavior if you try to send HTTP requests to the MongoDB port. What this says to me is that you have your network connectivity figured out and that part is working correctly. Next you just need to use a MongoDB driver to talk to the database from the application, instead of hitting it with curl or similar.
The standard MongoDB connection URL should look like this:
mongodb://[username:password@]host1[:port1][,...hostN[:portN]][/[defaultauthdb][?options]]
https://docs.mongodb.com/manual/reference/connection-string/
In your case, the mongoDB in your URL string should indicate the DB host. To make it point to the MongoDB container you should change it to "url": "mongodb://mongo/astroDB", because your service is named mongo, not mongoDB. You can also achieve that by providing a static IP to your container and writing this IP directly; you can find your container's network IP by using docker container inspect <container-name-or-id> to test it manually.
EDIT: Here is a thread that should help you debug your problem.
https://forums.docker.com/t/how-mongodb-work-in-docker-how-to-connect-with-mongodb/44763/8
Docker request to own server
I have a docker instance running apache on port 80 and node.js+express running on port 3000. I need to make an AJAX request from the apache-served website to the node server running on port 3000. I don't know what is the appropriate url to use. I tried localhost but that resolved to the localhost of the client browsing the webpage (also the end user) instead of the localhost of the docker image. Thanks in advance for your help!
First, you should split your containers: it is good practice in Docker to have one container per process. Then you will need some tool for orchestration of these containers. You can start with docker-compose, as it is IMO the simplest one. It will launch all your containers and manage their network settings for you by default. So, imagine you have the following docker-compose.yml file for launching your apps:
docker-compose.yml
version: '3'
services:
  apache:
    image: apache
  node:
    image: node # or whatever
With such a simple configuration you will have the host names apache and node in your network. So from inside your node application, you will see apache as the apache host. Just launch it with docker-compose up.
make an AJAX request from the [...] website to the node server The JavaScript, HTML, and CSS that Apache serves up is all read and interpreted by the browser, which may or may not be running on the same host as the servers. Once you're at the browser level, code has no idea that Docker is involved with any of this. If you can get away with only sending links without hostnames <img src="/assets/foo.png"> that will always work without any configuration. Otherwise you need to use the DNS name or IP address of the host, in exactly the same way you would as if you were running the two services directly on the host without Docker.
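If the page must call the Node service without hard-coding a hostname, one option (my assumption, not part of the original answer) is to have Apache reverse-proxy a path to the Node container, so the browser can use relative URLs like /api/... . A hypothetical vhost fragment, assuming mod_proxy and mod_proxy_http are enabled and the Compose service is named node:
# Hypothetical fragment: forward /api/ to the Node service.
# Assumes mod_proxy + mod_proxy_http are enabled and the Node
# container is reachable as "node" on the Compose network.
ProxyPass        "/api/" "http://node:3000/"
ProxyPassReverse "/api/" "http://node:3000/"
With this in place, the browser only ever talks to Apache, and Apache forwards /api/ requests to the Node container over the Docker network.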
Resolving subdomain.localhost doesn't work with docker swarm
What I want to achieve:
- docker swarm on localhost
- dockerized reverse proxy which would forward subdomain.domain to container with app
- container with app
What I have done:
- changed /etc/hosts so that it now looks like:
  127.0.0.1 localhost
  127.0.0.1 subdomain.localhost
- set up traefik to forward word.beluga to a specific container
What is the problem:
- can't get to the container via subdomain. It works if I use the port, though curl gives different results for subdomain and port
The question:
- what is the problem? and why?
- how to debug it and find whether the problem is docker or network based? (how to check whether the request even got to my container)
I'll add that I have also tried to do it on docker-machine (virtualbox) but it wasn't working. So I moved to localhost, but as you can see it didn't help a lot. I am losing hope, so any hint would be appreciated. Thank you in advance.
There’s no such thing as subdomains of localhost. By near-universal convention, localhost resolves to the IPv4 address 127.0.0.1 and the IPv6 address ::1. You can still test virtual host with docker, but you will have to use port: curl -H Host:sub.localhost http://localhost:8000
Late to respond, but I was able to achieve this using Traefik 2.x's routers feature like so:
labels:
  - "traefik.http.routers.<unique name>.rule=Host(`subdomain.localhost`)"
in the docker-compose file:
version: '3.9'
services:
  app:
    image: myapp:latest
    labels:
      - "traefik.http.routers.myapp.rule=Host(`myapp.localhost`)"
  reverse-proxy:
    image: traefik:v2.4
    command: --api.insecure=true --providers.docker
    ports:
      - "80:80"
      # The Web UI (enabled by --api.insecure=true)
      - "9000:8080"
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock
I think the reason why it works is that Traefik intercepts everything on localhost and only then applies rules, so it's a Traefik-specific answer.