Error: Redis connection to localhost:6379 failed - getaddrinfo EMFILE localhost:6379 - node.js

I am getting the below error.
Error: Redis connection to localhost:6379 failed - getaddrinfo EMFILE localhost:6379
at Object.exports._errnoException (util.js:870:11)
at errnoException (dns.js:32:15)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:78:26)
Node.js with MySQL db and redis concept is used.
There are too many requests fetching data from MySQL, so the data is cached in Redis for 2 minutes and kept in sync with the database. When a new request arrives it first checks Redis; if the data is found it is served from Redis, otherwise it is retrieved from MySQL, cached in Redis, and sent as the response. This cycle keeps repeating.
After roughly one or two hours the server crashes with the error above.
For now pm2 restarts the server automatically.
But I need to know the reason for the crash.
Redis installation followed the instructions from here.
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-redis
Please let me know how to solve the issue...
Redis Connection File Code
var Promise = require('bluebird');
var redisClient; // Global (Avoids Duplicate Connections)

module.exports =
{
    OpenRedisConnection: function ()
    {
        if (redisClient == null)
        {
            redisClient = require("redis").createClient(6379, 'localhost');
            redisClient.selected_db = 1;
        }
    },
    GetRedisMultiConnection: function ()
    {
        return require("redis").createClient(6379, 'localhost').multi();
    },
    IsRedisConnectionOpened: function ()
    {
        if (redisClient && redisClient.connected == true)
        {
            return true;
        }
        else
        {
            if (!redisClient)
                redisClient.end(); // End and open once more
            module.exports.OpenRedisConnection();
            return true;
        }
    }
};

What I usually do with code like this is write a very thin module that loads in the correct Redis driver and returns a valid handle with a minimum of fuss:
var Redis = require("redis");

module.exports = {
    open: function () {
        var client = Redis.createClient(6379, 'localhost');
        client.selected_db = 1;
        return client;
    },
    close: function (client) {
        client.quit();
    }
};
Any code in your Node application that needs a Redis handle acquires one on demand, with the understanding that it must close it no matter what happens. If you're not catching errors, or if you're catching errors but skipping the close, you'll "leak" open Redis handles and your app will eventually crash.
So, for example:
var Redis = require('./redis'); // Path to module defined above

function doStuff() {
    var redis = Redis.open();

    thing.action().then(function () {
        redis.ping();
    }).finally(function () {
        // This code runs no matter what, even if there's an exception or error
        Redis.close(redis);
    });
}
Due to the concurrent nature of Node, having a single Redis handle shared by many different parts of your code will cause trouble, and I'd strongly advise against that approach.
To expand on this template, you'd have a JSON configuration file that can override which port and host to connect to. That's really easy to require and use instead of the defaults here, and it means you don't have to hack around with any actual code when you deploy your application to another system.
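A minimal sketch of that idea, assuming a config.json next to the module (the file name and keys are illustrative, not from the original answer):

// config.json (hypothetical):
// { "redis": { "port": 6379, "host": "localhost", "db": 1 } }
var Redis = require('redis');
var config = require('./config.json');

module.exports = {
    open: function () {
        var client = Redis.createClient(config.redis.port, config.redis.host);
        client.selected_db = config.redis.db;
        return client;
    },
    close: function (client) {
        client.quit();
    }
};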
You can also expand on the wrapper module to keep active connections in a small pool to avoid closing and then immediately opening a new one. With a little bit of attention you can even check that these handles are in a sane state, such as not stuck in the middle of a MULTI transaction, before handing them out, by doing a PING and testing for an immediate response. This weeds out stale/dead connections as well.
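A rough sketch of such a pooled wrapper, using only a PING health check (names like pool and MAX_IDLE are illustrative assumptions, not part of the original answer):

var Redis = require('redis');

var pool = [];      // idle, already-opened clients
var MAX_IDLE = 5;   // arbitrary cap on pooled handles

function acquire(callback) {
    var client = pool.pop();
    if (!client) {
        // Nothing pooled: open a fresh handle (commands queue until it connects).
        return callback(Redis.createClient(6379, 'localhost'));
    }
    // Health check: a stale or dead handle won't answer PING promptly.
    client.ping(function (err) {
        if (err) {
            client.end(true);
            acquire(callback); // fall through to the next pooled client or a new one
        } else {
            callback(client);
        }
    });
}

function release(client) {
    if (client.connected && pool.length < MAX_IDLE) {
        pool.push(client);
    } else {
        client.quit();
    }
}

module.exports = { acquire: acquire, release: release };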

Related

Reusing Redis Connection: Socket Closed Unexpectedly - node-redis

First, let me tell you how I'm using the Redis connection in my NodeJS application:
I'm reusing one connection throughout the app using a singleton class.
class RDB {
    static async getClient() {
        if (this.client) {
            return this.client;
        }
        let startTime = Date.now();
        this.client = createClient({
            url: config.redis.uri
        });
        await this.client.connect();
        return this.client;
    }
}
For some reason that I don't know, from time to time my application crashes with the following error; this happens about once or twice a week:
Error: Socket closed unexpectedly
Now, my questions:
Is using Redis connections like this alright? Is there something wrong with my approach?
Why does this happen? Why is my socket closing unexpectedly?
Is there a way to catch this error (using my approach) or any other good practice for implementing Redis connections?
I solved this using the 'error' listener. Just listening to it saves the Node application from crashing.
client.on("error", function(error) {
console.error(error);
// I report it onto a logging service like Sentry.
});
I had a similar issue with the socket closing unexpectedly. It started when I upgraded node-redis from 3.x to 4.x, and it was gone after I upgraded my redis-server from 5.x to 6.x.
You should declare a private static member 'client' of the RDB class, like this:
private static client;
In a static method, you can't reference an instance via 'this'; you need to reference the static class member, like this:
RDB.client
It would also be better to check whether the client's connection is open, rather than simply checking whether the client exists (assuming you are using the 'redis' npm library), like this:
if (RDB.client && RDB.client.isOpen)
After the changes, your code should look like this:
class RDB {
    private static client;

    static async getClient() {
        if (RDB.client && RDB.client.isOpen) {
            return RDB.client;
        }
        RDB.client = createClient({
            url: config.redis.uri
        });
        await RDB.client.connect();
        return RDB.client;
    }
}
Note: the connect() method and isOpen property only exist in redis version ^4.0.0.
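Putting the two answers together, a sketch of the singleton with the error listener attached might look like this (config.redis.uri comes from the question; the console.error call is just a placeholder for your logging service):

import { createClient } from 'redis';

class RDB {
    private static client;

    static async getClient() {
        if (RDB.client && RDB.client.isOpen) {
            return RDB.client;
        }
        RDB.client = createClient({ url: config.redis.uri });
        // Without an 'error' listener an unexpected socket close would crash the process.
        RDB.client.on('error', (err) => console.error('Redis client error:', err));
        await RDB.client.connect();
        return RDB.client;
    }
}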

What's the default timeout of ioredis send command for any redis call

I am using ioredis with a Node application, and due to some issues in the cluster I started getting:
Too many Cluster redirections. Last error: Error: Connection is closed.
Because of this, all of my Redis calls failed, and only after a very long time, ranging from 1 sec to 130 secs.
Is there a default timeout in the ioredis library that it uses to fail a call after sending the command to the Redis server?
Are the very high failure times (around 100 secs) when sending commands to the Redis server caused by a large queue building up in Redis during the cluster failure?
Sample code :
this.getData = function (bucketName, userKey) {
    let cacheKey = cacheHelper.formCacheKey(userKey, bucketName);
    let serviceType = cacheHelper.getServiceType(bucketName, cacheConfig.service_config);
    let log_info = _.get(cacheConfig.service_config, 'logging_options.cache_info_level', true);
    let startTime = moment();
    let dataLength = null;
    return Promise.try(function () {
        validations([cacheKey], ['cache_key'], bucketName, serviceType, that.currentService);
        return cacheStore.get(serviceType, cacheKey);
    })
    .then(function (data) {
        dataLength = (data || '').length;
        return cacheHelper.uncompress(data);
    })
    .then(function (uncompressedData) {
        let endTime = moment();
        let responseTime = endTime.diff(startTime, 'milliseconds');
        if (!uncompressedData) {
            if (log_info) logger.consoleLog(bucketName, 'getData', 'miss', cacheKey, that.currentService,
                responseTime, dataLength);
        } else {
            if (log_info) logger.consoleLog(bucketName, 'getData', 'success', cacheKey, that.currentService,
                responseTime, dataLength);
        }
        return uncompressedData;
    })
    .catch(function (err) {
        let endTime = moment();
        let responseTime = endTime.diff(startTime, 'milliseconds');
        logger.error(bucketName, 'getData', err.message, userKey, that.currentService, responseTime);
        throw cacheResponse.error(err);
    });
};
Here
logger.error(bucketName, 'getData', err.message, userKey, that.currentService, responseTime);
started giving response times ranging from 1061 ms to 109939 ms.
Please provide some inputs.
As you can read from this ioredis issue, there isn't a per-command timeout configuration.
As suggested in the linked comment, you can use a Promise-based strategy as a workaround. Incidentally, this is the same strategy used by the ioredis-timeout plugin that wraps the original command in a Promise.race() method:
// code from the ioredis-timeout lib
return Promise.race([
    promiseDelay(ms, command, args),
    originCommand.apply(redis, args)
]);
So you can use the plugin, or this race-timeout technique, to add timeout functionality on top of the Redis client. Keep in mind that the underlying command will not be interrupted.
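For illustration, here is a rough sketch of that race technique applied to a single ioredis call (the helper name and the 500 ms value are arbitrary choices, not from the plugin):

const Redis = require('ioredis');
const redis = new Redis({ host: 'localhost', port: 6379 });

// Rejects if the command does not settle within `ms`; the command itself keeps running.
function withTimeout(promise, ms) {
    const timeout = new Promise((_, reject) =>
        setTimeout(() => reject(new Error('Redis command timed out')), ms)
    );
    return Promise.race([promise, timeout]);
}

// Usage: fail fast after 500 ms instead of waiting for the cluster to recover.
withTimeout(redis.get('some-key'), 500)
    .then((value) => console.log(value))
    .catch((err) => console.error(err.message));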
I was facing a similar issue which I have described in detail here: How to configure Node Redis client to throw errors immediately, when connection has failed? [READ DETAILS]
The fix was actually quite simple: just set enable_offline_queue to false. This was with Node Redis, so you'll have to figure out the equivalent for ioredis. Setting this to false makes all commands throw an error immediately, which you can handle in a catch block and continue, instead of waiting for some timeout.
Do keep in mind that, with enable_offline_queue set to false, the commands that you issue while there's some connection issue with the server will never be executed.
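For ioredis, the corresponding option is enableOfflineQueue (a sketch based on the ioredis documentation, not the exact setup from the answer):

const Redis = require('ioredis');

// With the offline queue disabled, commands issued while the connection is
// down fail immediately instead of being buffered until a (long) timeout.
const redis = new Redis({
    host: 'localhost',
    port: 6379,
    enableOfflineQueue: false,
});

redis.get('some-key')
    .then((value) => console.log(value))
    .catch((err) => {
        // Handle the failure right away and fall back to the primary data store.
        console.error('Redis unavailable:', err.message);
    });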

Node.js connectListener still called on socket error

I'm having a weird issue with a TCP client - I use socket.connect() to connect to the server instance. However, since the server is not running, I receive an error of ECONNREFUSED (so far so good).
I handle it using on('error') and set a timeout to try and reconnect in 10 seconds. This should continue to fail as long as the server is down, which is the case.
However, as soon as the server is running, it looks like all of the previous sockets are still active, so now I have several client sockets connected to the server.
I tried calling destroy() at the beginning of the on('error') handler function.
Any ideas how to deal with that?
Thanks!
EDIT: Code snippet:
var mySocket;
var self = this;
...
var onError = function (error) {
    mySocket.destroy(); // this does not change anything...
    console.log(error);
    // Wait 10 seconds and try to reconnect
    setTimeout(function () {
        console.log("reconnecting...");
        self.once('InitDone', function () {
            // do something
            console.log("init is done");
        });
        self.init();
    }, 10000);
};
Inside init function:
...
console.log("trying to connect");
mySocket = tls.connect(options, function () {
    console.log("connected!");
    self.emit('InitDone');
});
mySocket.setEncoding('utf8');
mySocket.on('error', onError);
...
The result of this is something like the following:
trying to connect
ECONNREFUSED
reconnecting...
trying to connect
ECONNREFUSED
reconnecting...
trying to connect
ECONNREFUSED
reconnecting...
--> Starting the server here
trying to connect
connected
init is done
connected
init is done
connected
init is done
connected
init is done
However, I would expect only one connection, since the previous sockets failed to connect. Hope this clarifies the question.
Thanks!

How do I shutdown a Node.js http(s) server immediately?

I have a Node.js application that contains an http(s) server.
In a specific case, I need to shut down this server programmatically. What I am currently doing is calling its close() function, but this does not help, as it waits for any kept-alive connections to finish first.
So, basically, this shuts down the server, but only after a minimum wait of 120 seconds. But I want the server to shut down immediately, even if this means breaking off currently handled requests.
What I can not do is a simple
process.exit();
as the server is only part of the application, and the rest of the application should remain running. What I am looking for is conceptually something such as server.destroy(); or something like that.
How could I achieve this?
PS: The keep-alive timeout for connections is usually required, so decreasing it is not a viable option.
The trick is to subscribe to the server's connection event, which gives you the socket of each new connection. You need to remember these sockets and, directly after having called server.close(), destroy each of them using socket.destroy().
Additionally, you need to listen for the socket's close event so you can remove it from the collection if it closes naturally because its keep-alive timeout runs out.
I have written a small sample application you can use to demonstrate this behavior:
// Create a new server on port 4000
var http = require('http');
var server = http.createServer(function (req, res) {
    res.end('Hello world!');
}).listen(4000);

// Maintain a hash of all connected sockets
var sockets = {}, nextSocketId = 0;
server.on('connection', function (socket) {
    // Add a newly connected socket
    var socketId = nextSocketId++;
    sockets[socketId] = socket;
    console.log('socket', socketId, 'opened');

    // Remove the socket when it closes
    socket.on('close', function () {
        console.log('socket', socketId, 'closed');
        delete sockets[socketId];
    });

    // Extend socket lifetime for demo purposes
    socket.setTimeout(4000);
});

// Count down from 10 seconds
(function countDown (counter) {
    console.log(counter);
    if (counter > 0)
        return setTimeout(countDown, 1000, counter - 1);

    // Close the server
    server.close(function () { console.log('Server closed!'); });

    // Destroy all open sockets
    for (var socketId in sockets) {
        console.log('socket', socketId, 'destroyed');
        sockets[socketId].destroy();
    }
})(10);
Basically, what it does is to start a new HTTP server, count from 10 to 0, and close the server after 10 seconds. If no connection has been established, the server shuts down immediately.
If a connection has been established and it is still open, it is destroyed.
If it had already died naturally, only a message is printed out at that point in time.
I found a way to do this without having to keep track of the connections or having to force them closed. I'm not sure how reliable it is across Node versions, or whether there are any negative consequences, but it seems to work perfectly fine for what I'm doing. The trick is to emit the "close" event using setImmediate right after calling the close method. This works like so:
server.close(callback);
setImmediate(function(){server.emit('close')});
At least for me, this ends up freeing the port so that I can start a new HTTP(S) service by the time the callback is called (which is pretty much instantly). Existing connections stay open. I'm using this to automatically restart the HTTPS service after renewing a Let's Encrypt certificate.
If you need to keep the process alive after closing the server, then Golo Roden's solution is probably the best.
But if you're closing the server as part of a graceful shutdown of the process, you just need this:
var server = require('http').createServer(myFancyServerLogic);
server.on('connection', function (socket) { socket.unref(); });
server.listen(80);

function myFancyServerLogic(req, res) {
    req.connection.ref();
    res.end('Hello World!', function () {
        req.connection.unref();
    });
}
Basically, the sockets your server uses will only keep the process alive while they're actually serving a request. While they're just sitting there idly (because of a keep-alive connection), a call to server.close() will let the process exit, as long as there's nothing else keeping it alive. If you need to do other things after the server closes, as part of your graceful shutdown, you can hook into process.on('beforeExit', callback) to finish your graceful shutdown procedures.
The https://github.com/isaacs/server-destroy library provides an easy way to destroy() a server with the behavior desired in the question (by tracking opened connections and destroying each of them on server destroy, as described in other answers).
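Rough usage, going by the library's README (worth verifying against the current version):

var http = require('http');
var enableDestroy = require('server-destroy');

var server = http.createServer(function (req, res) {
    res.end('Hello world!');
});
server.listen(4000);

// Adds a server.destroy() method that closes the server and
// destroys every connection it is currently tracking.
enableDestroy(server);

// Later, when an immediate shutdown is needed:
server.destroy(function () {
    console.log('Server and all connections closed.');
});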
As others have said, the solution is to keep track of all open sockets and close them manually. My node package killable can do this for you. An example (using express, but you can use killable on any http.Server instance):
var killable = require('killable');
var app = require('express')();
var server;

app.get('/', function (req, res, next) {
    res.send('Server is going down NOW!');
    server.kill(function () {
        // The server is down when this is called. That won't take long.
    });
});

server = app.listen(8080);
killable(server);
Yet another Node.js package that performs a shutdown by killing connections: http-shutdown, which seems reasonably maintained at the time of writing (Sept. 2016) and worked for me on Node.js 6.x.
From the documentation
Usage
There are currently two ways to use this library. The first is explicit wrapping of the Server object:
// Create the http server
var server = require('http').createServer(function (req, res) {
    res.end('Good job!');
});

// Wrap the server object with additional functionality.
// This should be done immediately after server construction, or before you start listening.
// Additional functionality needs to be added for http server events to properly shutdown.
server = require('http-shutdown')(server);

// Listen on a port and start taking requests.
server.listen(3000);

// Sometime later... shutdown the server.
server.shutdown(function () {
    console.log('Everything is cleanly shutdown.');
});
The second is implicitly adding prototype functionality to the Server object:
// .extend adds a .withShutdown prototype method to the Server object
require('http-shutdown').extend();

var server = require('http').createServer(function (req, res) {
    res.end('Good job!');
}).withShutdown(); // <-- Easy to chain. Returns the Server object

// Sometime later, shutdown the server.
server.shutdown(function () {
    console.log('Everything is cleanly shutdown.');
});
My best guess would be to kill the connections manually (i.e. forcibly close their sockets).
Ideally, this should be done by digging into the server's internals and closing its sockets by hand. Alternatively, one could run a shell command that does the same (provided the server has the proper privileges, etc.).
I have answered a variation of "how to terminate an HTTP server" many times on different Node.js support channels. Unfortunately, I couldn't recommend any of the existing libraries because they are lacking in one way or another. I have since put together a package that (I believe) handles all the cases expected of graceful HTTP server termination.
https://github.com/gajus/http-terminator
The main benefit of http-terminator is that:
it does not monkey-patch Node.js API
it immediately destroys all sockets without an attached HTTP request
it allows graceful timeout to sockets with ongoing HTTP requests
it properly handles HTTPS connections
it informs connections using keep-alive that server is shutting down by setting a connection: close header
it does not terminate the Node.js process
Usage:
import http from 'http';
import { createHttpTerminator } from 'http-terminator';

const server = http.createServer();

const httpTerminator = createHttpTerminator({
    server,
});

await httpTerminator.terminate();
const Koa = require('koa')
const app = new Koa()

let keepAlive = true

app.use(async (ctx) => {
    let url = ctx.request.url
    // destroy socket
    if (keepAlive === false) {
        ctx.response.set('Connection', 'close')
    }
    switch (url) {
        case '/restart':
            ctx.body = 'success'
            process.send('restart')
            break;
        default:
            ctx.body = 'world-----' + Date.now()
    }
})

const server = app.listen(9011)

process.on('message', (data, sendHandle) => {
    if (data == 'stop') {
        keepAlive = false
        server.close();
    }
})
process.exit(code); // code 0 for success and 1 for fail

How do you kill a redis client when there is no connection?

I have a valid server configuration in which redis can't be accessed, but the server can function correctly (I simply strip away features when redis can't be found).
However, I can't manage the connection errors well. I'd like to know when a connection attempt fails and shut down the client in that case.
I've found that the connection retries never stop. And quit() is actually swallowed when called ("Queueing quit for next server connection.").
Is there a way to kill the client in the case where no connection can be established?
var redis = require("redis"),
    client = redis.createClient();

client.on("error", function (err) {
    logme.error("Bonk. The worker framework cannot connect to redis, which might be ok on a dev server!");
    logme.error("Resque error : " + err);
    client.quit();
});
client.on("idle", function (err) {
    logme.error("Redis queue is idle. Shutting down...");
});
client.on("end", function (err) {
    logme.error("Redis is shutting down. This might be ok if you chose not to run it in your dev environment");
});
client.on("ready", function (err) {
    logme.info("Redis up! Now connecting the worker queue client...");
});
ERROR - Resque error : Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED
ERROR - Redis is shutting down. This might be ok if you chose not to run it in your dev environment
ERROR - Resque error : Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED
ERROR - Resque error : Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED
ERROR - Resque error : Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED
ERROR - Resque error : Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED
One thing that is interesting is the fact that the 'end' event gets emitted. Why?
For v3.1.2 of the library
The right way to control the client's reconnect behaviour is to use a retry_strategy.
Upon disconnection the client tries to reconnect according to the default behaviour, which can be overridden by providing a retry_strategy when creating the client.
Example usage of some fine-grained control, from the documentation:
var client = redis.createClient({
    retry_strategy: function (options) {
        if (options.error && options.error.code === 'ECONNREFUSED') {
            // End reconnecting on a specific error and flush all commands with
            // an individual error
            return new Error('The server refused the connection');
        }
        if (options.total_retry_time > 1000 * 60 * 60) {
            // End reconnecting after a specific timeout and flush all commands
            // with an individual error
            return new Error('Retry time exhausted');
        }
        if (options.attempt > 10) {
            // End reconnecting with built-in error
            return undefined;
        }
        // reconnect after
        return Math.min(options.attempt * 100, 3000);
    }
});
Ref: https://www.npmjs.com/package/redis/v/3.1.2
For the purpose of killing the client when the connection is lost, we could use the following retry_strategy.
var client = redis.createClient({
    retry_strategy: function (options) {
        return undefined;
    }
});
Update June 2022 (Redis v4.1.0)
The original answer was for an earlier version of Redis client. Since v4 things have changed in the client configuration. Specifically, the retry_strategy is now called reconnectStrategy and is nested under the socket configuration option for createClient.
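As a rough translation of the snippet above to the v4 options (based on the node-redis documentation; details may vary between 4.x releases):

const { createClient } = require('redis');

const client = createClient({
    socket: {
        reconnectStrategy: (retries) => {
            if (retries > 10) {
                // Returning an Error stops further reconnection attempts.
                return new Error('Retry attempts exhausted');
            }
            // Otherwise return the delay (in ms) before the next attempt.
            return Math.min(retries * 100, 3000);
        },
    },
});

client.on('error', (err) => console.error('Redis client error:', err));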
You might want to just forcibly end the connection to Redis on error with client.end(), rather than using client.quit(), which waits for all outstanding requests to complete and then sends the QUIT command, which, as you know, requires a working connection to Redis to succeed.
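For example, with the v3 client something along these lines drops the connection from inside the error handler (end(true) also rejects anything still queued; check the semantics for your client version):

client.on("error", function (err) {
    logme.error("Redis error: " + err);
    // quit() queues a QUIT command and needs a live connection;
    // end(true) tears the connection down right away and rejects
    // anything still waiting in the command queue.
    client.end(true);
});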
