How to fix the ECONNRESET ioredis error in AdonisJS during deployment - node.js

When I try to deploy my AdonisJS app to DigitalOcean or Azure, I get this error:
[ioredis] Unhandled error event: Error: read ECONNRESET
at TCP.onStreamRead (internal/stream_base_commons.js:209:20)
My Adonis app requires Redis to run. I'm using a Redis instance from DigitalOcean. Here's my production config for Redis:
prod: {
  host: Env.get("REDIS_HOST"),
  port: Env.get("REDIS_PORT"),
  password: Env.get("REDIS_PASSWORD"),
  db: 0,
  keyPrefix: ""
},

If you are connecting your AdonisJS app to a Transport Layer Security (TLS) protected Redis instance, you need to add a tls block with the host to your config.
So your prod config should look like this:
prod: {
  host: Env.get("REDIS_HOST"),
  port: Env.get("REDIS_PORT"),
  password: Env.get("REDIS_PASSWORD"),
  db: 0,
  keyPrefix: "",
  tls: {
    host: Env.get("REDIS_HOST"),
  },
},

As a follow-up to my comment: my Docker environment degraded to the point where I couldn't even connect via redis-cli to the vanilla Docker Hub Redis image. I wound up cleaning out my Docker environment by removing all containers, images, volumes, networks, etc., and then rebooting my Mac. After rebuilding them, this problem went away for me.
I hate not knowing the "root cause" but have a theory. I had been playing with a few different Redis images, including the vanilla standalone image from Docker Hub and a cluster image from https://github.com/Grokzen/docker-redis-cluster. I was tweaking the build of the latter to add authentication. The theory is that there were residual processes fighting over the port from repeated builds and tear-downs. I may have been impatient and hard-stopped the containers I was working on multiple times while debugging the dockerfile and docker-entrypoint.sh files. :)
I know this answer isn't directly related to hosting on DO or Azure but since the symptom is the same, perhaps there is a networking conflict somewhere.

Error: getaddrinfo EAI_AGAIN - node.js

My server threw this today, which is a Node.js error I've never seen before:
Error: getaddrinfo EAI_AGAIN my-store.myshopify.com:443
at Object.exports._errnoException (util.js:870:11)
at errnoException (dns.js:32:15)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:78:26)
I'm wondering if this is related to the DynDns DDOS attack which affected Shopify and many other services today. Here's an article about that.
My main question is what does dns.js do? What part of node is it a part of? How can I recreate this error with a different domain?
If you get this error with Firebase Cloud Functions, this is due to the limitations of the free tier (outbound networking only allowed to Google services).
Upgrade to the Flame or Blaze plans for it to work.
EAI_AGAIN is a DNS lookup timeout error, which means it is a network connectivity or proxy-related error.
My main question is what does dns.js do?
In brief, dns.js is the internal Node module that resolves a domain name to an IP address.
Some more info:
http://www.codingdefined.com/2015/06/nodejs-error-errno-eaiagain.html
If you get this error from within a Docker container, e.g. when running npm install inside an Alpine container, the cause could be that the network changed since the container was started.
To solve this, just stop and restart the container:
docker-compose down
docker-compose up
Source: https://github.com/moby/moby/issues/32106#issuecomment-578725551
As xerq's excellent answer explains, this is a DNS timeout issue.
I wanted to contribute another possible answer for those of you using Windows Subsystem for Linux - there are some cases where something seems to be askew in the client OS after Windows resumes from sleep. Restarting the host OS will fix these issues (it's also likely restarting the WSL service will do the same).
For those who perform thousands or millions of requests per day and need a solution to this issue:
It's quite normal to get getaddrinfo EAI_AGAIN errors when performing a lot of requests on your server. Node.js itself doesn't perform any DNS caching; it delegates everything DNS-related to the OS.
Keep in mind that every http/https request performs a DNS lookup, which can become quite expensive. To avoid this bottleneck and the getaddrinfo errors, you can implement a DNS cache.
http.request (and https) accepts a lookup property which defaults to dns.lookup()
http.get('http://example.com', { lookup: yourLookupImplementation }, response => {
// do something here with response
});
I strongly recommend using an already-tested module instead of writing a DNS cache yourself, since you'll have to handle TTLs correctly, among other things, to avoid hard-to-track bugs.
I personally use cacheable-lookup, which is the one that got uses (see its dnsCache option).
You can use it on specific requests
const http = require('http');
const CacheableLookup = require('cacheable-lookup');
const cacheable = new CacheableLookup();
http.get('http://example.com', {lookup: cacheable.lookup}, response => {
// Handle the response here
});
or globally
const http = require('http');
const https = require('https');
const CacheableLookup = require('cacheable-lookup');
const cacheable = new CacheableLookup();
cacheable.install(http.globalAgent);
cacheable.install(https.globalAgent);
NOTE: keep in mind that if a request is not performed through the Node.js http/https modules, using .install on the global agent won't have any effect on that request (for example, requests made using undici).
The OP's error specifies a host (my-store.myshopify.com).
The error I encountered is the same in all respects except that no domain is specified.
My solution may help others who are drawn here by the title "Error: getaddrinfo EAI_AGAIN"
I encountered the error when trying to serve a Node.js & Vue.js app from a different VM than the one where the code was originally developed.
The file vue.config.js read:
module.exports = {
  devServer: {
    host: 'tstvm01',
    port: 3030,
  },
};
When served on the original machine, the start-up output is:
App running at:
- Local: http://tstvm01:3030/
- Network: http://tstvm01:3030/
Using the same settings on a VM tstvm07 got me a very similar error to the one the OP describes:
INFO Starting development server...
10% building modules 1/1 modules 0 activeevents.js:183
throw er; // Unhandled 'error' event
^
Error: getaddrinfo EAI_AGAIN
at Object._errnoException (util.js:1022:11)
at errnoException (dns.js:55:15)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:92:26)
If it ain't already obvious, changing vue.config.js to read ...
module.exports = {
  devServer: {
    host: 'tstvm07',
    port: 3030,
  },
};
... solved the problem.
I started getting this error (different stack trace though) after making a trivial update to my GraphQL API application that is operated inside a docker container. For whatever reason, the container was having difficulty resolving a back-end service being used by the API.
After poking around to see if some change had been made in the docker base image I was building from (node:13-alpine, incidentally), I decided to try the oldest computer science trick of rebooting... I stopped and started the docker container and all went back to normal.
Clearly, this isn't a meaningful solution to the underlying problem - I am merely posting this since it did clear up the issue for me without going too deep down rabbit holes.
I was having this issue on docker-compose. Turns out I forgot to add my custom isolated named network to the service that couldn't be found.
TL;DR: Make sure, in your compose file, you have your custom networks defined on both services that need to talk to each other.
My error looked like this: Error: getaddrinfo EAI_AGAIN minio-service. The error was coming from my server's backend when making a call to minio-service using the minio-service hostname. This tells me that the running minio-service was not reachable by my server's running service. I fixed the issue by changing the minio-service in my docker-compose from this:
docker-compose.yml
version: "3.8"
# ...
services:
  server:
    # ...
    networks:
      my-network:
    # ...
  minio-service:
    # ... (missing networks: section)
# ...
networks:
  my-network:
To include my custom isolated named network, like this:
docker-compose.yml
version: "3.8"
# ...
services:
  server:
    # ...
    networks:
      my-network:
    # ...
  minio-service:
    # ...
    networks:
      my-network:
    # ...
# ...
networks:
  my-network:
More details can be found in the Docker Compose networking documentation.
This issue can be related to your hosts file setup.
Add the following line to your hosts file:
On Ubuntu (/etc/hosts):
127.0.0.1 localhost
On Windows (C:\Windows\System32\drivers\etc\hosts):
127.0.0.1 localhost
In my case the problem was the Docker network's IP allocation range; see this post for details.
As xerq correctly pointed out, here's some more reference:
http://www.codingdefined.com/2015/06/nodejs-error-errno-eaiagain.html
I got the same error; I solved it by updating the hosts file present under this location on Windows:
C:\Windows\System32\drivers\etc
Hope it helps!
In my case, while connected to a VPN, the error happens when running Ubuntu from inside Windows Terminal, but doesn't happen when opening Ubuntu directly from Windows (not from inside Windows Terminal).
I had the same problem with AWS and Serverless. I tried the eu-central-1 region and it didn't work, so I had to change it to us-east-2 for the example.
I was getting this error after I recently added a new network to my docker-compose file.
I initially had these services:
services:
  frontend:
    depends_on:
      - backend
    ports:
      - 3005:3000
  backend:
    ports:
      - 8005:8000
I decided to add a new network which hosts other services I wanted my frontend service to have access to, so I did this:
networks:
  moar:
    name: moar-network
    attachable: true
services:
  frontend:
    networks:
      - moar
    depends_on:
      - backend
    ports:
      - 3005:3000
  backend:
    ports:
      - 8005:8000
Unfortunately, this made my frontend service no longer visible on the default network, only on the moar network. That meant the frontend service could no longer proxy requests to backend, so I was getting errors like:
Error occured while trying to proxy to: localhost:3005/graphql/
The solution is to add the default network to the frontend service's network list, like so:
networks:
  moar:
    name: moar-network
    attachable: true
services:
  frontend:
    networks:
      - moar
      - default # here
    depends_on:
      - backend
    ports:
      - 3005:3000
  backend:
    ports:
      - 8005:8000
Now we're peachy!
One last thing, if you want to see which services are running within a given network, you can use the docker network inspect <network_name> command to do so. This is what helped me discover that the frontend service was not part of the default network anymore.
Enabled Blaze and it still doesn't work?
Most probably you need to load .env from the right path; require('dotenv').config({ path: __dirname + './../.env' }) won't work (nor will any other path). Simply put the .env file in the functions directory from which you deploy to Firebase.


Developing Node.js Applications Inside Docker Container

I'm trying to set up my local development environment to the point that all my Node.js applications are developed inside a docker container. Our team works on Linux, macOS, and Windows so this should help us limit some of the issues that we see due to this.
We're using Sails.js for our Node framework, and I'm not sure if the issue is in my Docker setup, or an issue with Sails itself.
Here's my docker run command, which almost works:
docker run --rm -it -p 3000:3000 --name my-app-dev -v $PWD:/home/app -w /home/app -u node node:latest /bin/bash
This almost works, but the application we're developing needs access to the machine's localhost for some database applications (MongoDB and SQL Server) and for a RabbitMQ instance. SQL Server is on port 1433 (running in Docker), RabbitMQ is on port 5672 (also running in Docker), and MongoDB is on 27017, but not running in Docker.
When I run that Docker command and then start the application, I get an error saying that the application cannot connect to those localhost ports, which makes sense from what I've read because by default the docker container has its own localhost, which is where it would try to connect by default.
So, I added the following to the docker run command: --net=host, hoping to give the container access to my machine's localhost. This seems to get rid of the issue for RabbitMQ, but not MongoDB. There are two errors in the console for it:
2019-09-05 15:58:38.800 | error | error: Could not tear down the ORM hook. Error details: Error: Consistency violation: Attempting to tear down a datastore (`myMongoTable`) which is not currently registered with this adapter. This is usually due to a race condition in userland code (e.g. attempting to tear down the same ORM instance more than once), or it could be due to a bug in this adapter. (If you get stumped, reach out at http://sailsjs.com/support.)
at Object.teardown (/home/app/node_modules/sails-mongo/lib/index.js:390:19)
at /home/app/node_modules/waterline/lib/waterline.js:758:27
at /home/app/node_modules/waterline/node_modules/async/dist/async.js:3047:20
at eachOfArrayLike (/home/app/node_modules/waterline/node_modules/async/dist/async.js:1002:13)
at eachOf (/home/app/node_modules/waterline/node_modules/async/dist/async.js:1052:9)
at Object.eachLimit (/home/app/node_modules/waterline/node_modules/async/dist/async.js:3111:7)
at Object.teardown (/home/app/node_modules/waterline/lib/waterline.js:742:11)
at Hook.teardown (/home/app/node_modules/sails-hook-orm/index.js:246:30)
at Sails.wrapper (/home/app/node_modules/@sailshq/lodash/lib/index.js:3275:19)
at Object.onceWrapper (events.js:291:20)
at Sails.emit (events.js:203:13)
at Sails.emitter.emit (/home/app/node_modules/sails/lib/app/private/after.js:56:26)
at /home/app/node_modules/sails/lib/app/lower.js:67:11
at beforeShutdown (/home/app/node_modules/sails/lib/app/lower.js:45:12)
at Sails.lower (/home/app/node_modules/sails/lib/app/lower.js:49:3)
at Sails.wrapper [as lower] (/home/app/node_modules/@sailshq/lodash/lib/index.js:3275:19)
at whenSailsIsReady (/home/app/node_modules/sails/lib/app/lift.js:68:13)
at /home/app/node_modules/sails/node_modules/async/dist/async.js:3861:9
at /home/app/node_modules/sails/node_modules/async/dist/async.js:421:16
at iterateeCallback (/home/app/node_modules/sails/node_modules/async/dist/async.js:924:17)
at /home/app/node_modules/sails/node_modules/async/dist/async.js:906:16
at /home/app/node_modules/sails/node_modules/async/dist/async.js:3858:13
at /home/app/node_modules/sails/lib/app/load.js:261:22
at /home/app/node_modules/sails/node_modules/async/dist/async.js:421:16
at /home/app/node_modules/sails/node_modules/async/dist/async.js:1609:17
at /home/app/node_modules/sails/node_modules/async/dist/async.js:906:16
at /home/app/node_modules/sails/lib/app/load.js:186:25
at /home/app/node_modules/sails/node_modules/async/dist/async.js:3861:9
at /home/app/node_modules/sails/node_modules/async/dist/async.js:421:16
at iterateeCallback (/home/app/node_modules/sails/node_modules/async/dist/async.js:924:17)
at /home/app/node_modules/sails/node_modules/async/dist/async.js:906:16
at /home/app/node_modules/sails/node_modules/async/dist/async.js:3858:13
at afterwards (/home/app/node_modules/sails/lib/app/private/loadHooks.js:350:27)
at /home/app/node_modules/sails/node_modules/async/dist/async.js:3861:9
at /home/app/node_modules/sails/node_modules/async/dist/async.js:421:16
at iterateeCallback (/home/app/node_modules/sails/node_modules/async/dist/async.js:924:17)
at /home/app/node_modules/sails/node_modules/async/dist/async.js:906:16
at /home/app/node_modules/sails/node_modules/async/dist/async.js:3858:13
at /home/app/node_modules/sails/node_modules/async/dist/async.js:421:16
at iteratorCallback (/home/app/node_modules/sails/node_modules/async/dist/async.js:996:13)
at /home/app/node_modules/sails/node_modules/async/dist/async.js:906:16
at /home/app/node_modules/sails/lib/app/private/loadHooks.js:233:40
at processTicksAndRejections (internal/process/task_queues.js:75:11)
2019-09-05 15:58:38.802 | verbose | verbo: (The error above was logged like this because `sails.hooks.orm.teardown()` encountered an error in a code path where it was invoked without providing a callback.)
2019-09-05 15:58:38.808 | error | error: Failed to lift app: Error: Consistency violation: Unexpected error creating db connection manager:
MongoError: failed to connect to server [localhost:27017] on first connect [Error: connect ECONNREFUSED 127.0.0.1:27017
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1056:14) {
name: 'MongoError',
message: 'connect ECONNREFUSED 127.0.0.1:27017'
}]
at flaverr (/home/app/node_modules/flaverr/index.js:94:15)
at Function.module.exports.parseError (/home/app/node_modules/flaverr/index.js:371:12)
at Function.handlerCbs.error (/home/app/node_modules/machine/lib/private/help-build-machine.js:665:56)
at connectCb (/home/app/node_modules/sails-mongo/lib/private/machines/create-manager.js:130:22)
at connectCallback (/home/app/node_modules/sails-mongo/node_modules/mongodb/lib/mongo_client.js:428:5)
at /home/app/node_modules/sails-mongo/node_modules/mongodb/lib/mongo_client.js:335:11
at processTicksAndRejections (internal/process/task_queues.js:75:11)
at Object.error (/home/app/node_modules/sails-mongo/lib/index.js:268:21)
at /home/app/node_modules/machine/lib/private/help-build-machine.js:1514:39
at proceedToFinalAfterExecLC (/home/app/node_modules/parley/lib/private/Deferred.js:1153:14)
at proceedToInterceptsAndChecks (/home/app/node_modules/parley/lib/private/Deferred.js:913:12)
at proceedToAfterExecSpinlocks (/home/app/node_modules/parley/lib/private/Deferred.js:845:10)
at /home/app/node_modules/parley/lib/private/Deferred.js:303:7
at /home/app/node_modules/machine/lib/private/help-build-machine.js:952:35
at Function.handlerCbs.error (/home/app/node_modules/machine/lib/private/help-build-machine.js:742:26)
at connectCb (/home/app/node_modules/sails-mongo/lib/private/machines/create-manager.js:130:22)
at connectCallback (/home/app/node_modules/sails-mongo/node_modules/mongodb/lib/mongo_client.js:428:5)
at /home/app/node_modules/sails-mongo/node_modules/mongodb/lib/mongo_client.js:335:11
at processTicksAndRejections (internal/process/task_queues.js:75:11)
The first issue seems to be related to Sails.js and its sails-mongo ORM adapter. The second just seems to be an issue with connecting to the database. So I'm not sure if the first issue is a red herring whose underlying cause is the failed database connection.
If anyone has any suggestions for how to run a Sails.js app inside a Docker container with access to the machine's localhost and MongoDB, I'd love some help with this!
Along with --network host in the docker run command, you need to point the connection properties at the host's IP rather than localhost, since localhost inside a container (on the default bridge network) refers to the container itself. If you would like to keep the connection properties in the code consistent, you can have each developer set up a loopback alias in /etc/hosts, e.g. 127.0.0.1 my.host.com, and set the connection properties to that host name ("my.host.com"), e.g. my.host.com:27017 for MongoDB.
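As a tiny sketch of that "consistent connection properties" idea (DB_HOST and my.host.com are illustrative names, not from the answer above): read the database host from the environment, falling back to the loopback alias, so the same code works inside and outside the container.

```javascript
// Hypothetical config helper: the DB host comes from an env var,
// falling back to a loopback alias defined in /etc/hosts.
function mongoUrl(env = process.env) {
  const dbHost = env.DB_HOST || 'my.host.com';
  return `mongodb://${dbHost}:27017/myapp`;
}

console.log(mongoUrl({})); // mongodb://my.host.com:27017/myapp
```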
By default, Docker creates a bridge network and assigns any container attached and the host OS an IP address. Running ifconfig and searching for docker0 interface will show the IP address range that Docker uses for the network.
This is normally quite useful because it isolates any running Docker container from the local network ensuring that only ports that are explicitly opened to the local network are exposed avoiding any potential conflicts.
Sometimes though, there are cases where a Docker container might require access to services for the host.
There are two options to achieve this:
Obtain the host IP address from within the container:
# get the IP of the host computer from within the Docker container
/sbin/ip route | awk '/default/ { print $3 }'
You can attach the Docker container to the host network by running this command:
docker run --network="host"
If you are using docker-compose, you can add a network with host as driver.
This will attach the container to the host network, which allows the Docker container to reach any services running on the host via localhost. It also works for any other Docker containers that are running on the local network or have their ports exposed to the localhost.
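A sketch of the docker-compose variant described above (the service name app and the image are placeholders; network_mode: "host" is the standard Compose key for host networking, and it only applies on Linux hosts):

```yaml
services:
  app:
    image: node:latest
    # Share the host's network namespace so localhost inside the
    # container reaches services on the host. Published ports are
    # ignored in this mode.
    network_mode: "host"
```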

Error connecting to my redis server from my code

io.adapter(redis({ host: config.redisHost, port: config.redisPort }));
Both config.redisHost and config.redisPort are the correct values, but when I try to connect from my code I get the error:
'Error: Redis connection to 104.xxx.xx.xxx:6379 failed - connect ECONNREFUSED 104.xxx.xx.xxx:6379'
In redis.conf I've changed the bind to 0.0.0.0, as well as removing it completely, and I've also tried setting protected mode to off, just to try to get the connection working.
The IP and port are definitely correct. Does anyone know why I might not be able to access the server from my code? My code is just being run on my local machine, and the Redis server is running on a DigitalOcean Ubuntu 14.04 machine. The server is definitely running, and I can access redis-cli from the machine itself.
For some reason, it wasn't using redis.conf when I started the server; I had to point it at the config file explicitly on startup (e.g. redis-server /path/to/redis.conf), and now it's working.

Connection attempt failed when connecting to MongoDB deployment from mongo shell

First question and complete beginner, so apologies in advance for any silly mistakes.
I have created a server on Amazon Web Services and then linked that through the MongoDB Cloud Manager where I made a replica set.
I have been following the tutorial in the MongoDB Cloud Manager documentation but have become stuck on the final part, "Connect to a MongoDB Process".
It says "Cloud Manager provides a mongo shell command that you can use to connect to the MongoDB process if you are connecting from the system where the deployment runs". Can I not do this because the deployment is running on the Amazon server?
When I enter the mongo shell command this is what it reads:
MongoDB shell version: 3.0.4
connecting to: AM-0.amigodb.0813.mongodbdns.com:27001/AmigoMain_1
2015-08-07T18:41:56.806+0100 W NETWORK
Failed to connect to 52.18.23.14:27001 after 5000 milliseconds, giving up.
2015-08-07T18:41:56.809+0100 E QUERY
Error: couldn't connect to server AM-0.amigodb.0813.mongodbdns.com:27001 (52.18.23.14), connection attempt failed
at connect (src/mongo/shell/mongo.js:181:14)
at (connect):1:6 at src/mongo/shell/mongo.js:181
exception: connect failed
I followed the instructions for the security settings on Amazon Web Services, but I'm thinking that I may well have made a mistake.
Would greatly appreciate any help or where to go for answers.
Thanks,
Louis
Mongo by default only listens for connections on localhost. You'll need to edit mongod.conf and add your IP to the bindIp setting.
This is my mongod.conf file:
processManagement:
  fork: true
net:
  bindIp: 127.0.0.1
  port: 27017
storage:
  dbPath: "/data/db"
systemLog:
  destination: file
  path: "/var/log/mongod.log"
  logAppend: true
storage:
  journal:
    enabled: true
It also gave me the same error message when I changed my bindIp to 0.0.0.0
