I'm running a cluster of two RabbitMQ servers (it could be any number) and I have implemented failover where my app loops over the list of RabbitMQ instances and tries to reconnect when a connection drops.
If the RabbitMQ instance I'm trying to connect to is down, it takes about 60 seconds to time out before moving on to the next one, which is a very long time and causes unnecessarily long downtime. Is there a way to configure the timeout, or some other way to make it fail faster? The heartbeat takes care of detecting a failure on an existing connection, but the problem is the initial connect attempt.
Here is my code used for connecting:
connect(callback) {
  const self = this;
  amqp.connect(rabbitInstances[this.rabbitInstance] + "?heartbeat=10").then(conn => {
    conn.on("error", function(err) {
      setTimeout(() => self.reconnect(callback), 5000);
      return;
    });
    conn.on("close", function() {
      setTimeout(() => self.reconnect(callback), 5000);
      return;
    });
    connection = conn;
    whenConnected(callback);
  })
  .catch(err => {
    setTimeout(() => self.reconnect(callback), 5000);
  });
}

reconnect(callback) {
  this.rabbitInstance === (rabbitInstances.length - 1) ? this.rabbitInstance = 0 : this.rabbitInstance++;
  this.connect(callback);
}
I read the source code for amqplib and saw that the second argument to connect accepts an object containing ordinary socket options. I used that to impose, and verify, a 2-second connection timeout as follows:
const amqp = require('amqplib');

// The second argument is passed through as socket options; `timeout`
// caps how long the initial TCP connect may take.
const connection = await amqp.connect('amqp://localhost', {
  timeout: 2000,            // fail the connect attempt after 2 seconds
  servername: 'localhost',  // SNI hostname (relevant for TLS connections)
});
I am using version 0.5.3 of amqplib. The GitHub URL is https://github.com/squaremo/amqp.node.
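For reference, here is a minimal sketch (not the original code) of how that socket-options timeout could be combined with the failover loop from the question; rabbitInstances, the broker URLs, and the 2-second value are assumptions carried over from above:

const amqp = require('amqplib');

// Hypothetical list of broker URLs, as in the question.
const rabbitInstances = ['amqp://rabbit1', 'amqp://rabbit2'];

async function connectWithFailover(startIndex = 0) {
  for (let i = 0; i < rabbitInstances.length; i++) {
    const index = (startIndex + i) % rabbitInstances.length;
    try {
      // With the timeout socket option, an unreachable host fails after
      // roughly 2 seconds instead of ~60, so the loop moves on quickly.
      return await amqp.connect(rabbitInstances[index] + '?heartbeat=10', {
        timeout: 2000,
      });
    } catch (err) {
      console.error(`connect to ${rabbitInstances[index]} failed:`, err.message);
    }
  }
  throw new Error('all RabbitMQ instances are unreachable');
}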
I have a Node.js service that consumes messages from Kafka and processes them through various steps of transformation logic. During processing, the service uses Redis and Mongo for storage and caching. In the end, it sends the transformed message to another destination via UDP packets.
On startup, it starts consuming messages from Kafka; after a while, it crashes with the unhandled error ERR_CANNOT_SEND unable to send data (see the picture below).
Restarting the application resolves the issue temporarily.
I initially thought it might have to do with the forwarding through UDP sockets, but the forwarding destinations are reachable from the consumer!
I'd appreciate any help; I'm kind of stuck here.
Consumer code:
// ConsumerGroup comes from the kafka-node package; createChildLogger is an app-level helper.
const readFromKafka = ({host, topic, source}, transformationService) => {
  const logger = createChildLogger(`kafka-consumer-${topic}`);
  const options = {
    // connect directly to the kafka broker (instantiates a KafkaClient)
    kafkaHost: host,
    groupId: `${topic}-group`,
    protocol: ['roundrobin'], // ...and so on for the other kafka config.
  };
  logger.info(`starting kafka consumer on ${host} for ${topic}`);
  const consumer = new ConsumerGroup(options, [topic]);
  consumer.on('error', (err) => logger.error(err));
  consumer.on('message', async ({value, offset}) => {
    logger.info(`received ${topic}`, value);
    if (value) {
      const final = await transformationService([
        JSON.parse(Buffer.from(value, 'binary').toString()),
      ]);
      logger.info('Message received', {instanceID: final[0].instanceId, trace: final[1]});
    } else {
      logger.error(`invalid message: ${topic} ${value}`);
    }
    return;
  });
  consumer.on('rebalanced', () => {
    logger.info('consumer is rebalancing');
  });
  return consumer;
};
Consumer Service startup and error handling code:
// init is the async function used to initialise the cache and other config and components.
const init = async () => {
  // initialize cache, configs.
};

// startConsumer is the async function that connects to Kafka,
// and adds a callback for the onMessage listener which processes the message through the transformation service.
const startConsumer = async ({ ...config }) => {
  // calls to fetch info like topic, transformationService etc.
  // readFromKafka function defn pasted above
  readFromKafka({topicConfig}, transformationService);
};

init()
  .then(startConsumer)
  .catch((err) => {
    logger.error(err);
  });
Forwarding code through UDP sockets:
The following code throws the unhandled error intermittently; it seemed to work for the first few thousand messages, and then suddenly it crashes:
const dgram = require('dgram');

const udpSender = (msg, destinations) => {
  return Object.values(destinations)
    .map(({id, host, port}) => {
      return new Promise((resolve) => {
        dgram.createSocket('udp4').send(msg, 0, msg.length, port, host, (err) => {
          resolve({
            id,
            timestamp: Date.now(),
            logs: err || 'Sent successfully',
          });
        });
      });
    });
};
Based on our comment exchange, I believe the issue is just that you're running out of resources.
Throughout the lifetime of your app, every time you send a message you open up a brand new socket. However, you're not doing any cleanup after sending that message, and so that socket stays open indefinitely. Your open sockets then continue to pile up, consuming resources, until you eventually run out of... something. Perhaps memory, perhaps ports, perhaps something else, but ultimately your app crashes.
Luckily, the solution isn't too convoluted: just reuse existing sockets. In fact, you could reuse one socket for the entire application if you wanted, since socket.send handles queueing internally, so there's no need for any smart hand-offs. However, if you want a little more concurrency, here's a quick implementation of a round-robin pool of 10 sockets created in advance, which we grab from whenever we want to send a message:
const MAX_CONCURRENT_SOCKETS = 10;
var rrIndex = 0;

const rrSocketPool = (() => {
  var arr = [];
  for (let i = 0; i < MAX_CONCURRENT_SOCKETS; i++) {
    let sock = dgram.createSocket('udp4');
    arr.push(sock);
  }
  return arr;
})();

const udpSender = (msg, destinations) => {
  return Object.values(destinations)
    .map(({ id, host, port }) => {
      return new Promise((resolve) => {
        // Grab the next socket from the pool, round-robin style.
        var sock = rrSocketPool[rrIndex];
        rrIndex = (rrIndex + 1) % MAX_CONCURRENT_SOCKETS;
        sock.send(msg, 0, msg.length, port, host, (err) => {
          resolve({
            id,
            timestamp: Date.now(),
            logs: err || 'Sent successfully',
          });
        });
      });
    });
};
Be aware that this implementation is still naïve for a few reasons, mostly because there's still no error handling on the sockets themselves, only on their .send method. You should look at the docs for more info about catching events such as error events, especially if this is a production server that's supposed to run indefinitely. The error handling you've put inside your .send callback will only fire if an error occurs in a call to .send. If, between sends, while your sockets are idle, some system-level error outside of your control breaks a socket, that socket may emit an error event, which will go unhandled (like what's happening in your current implementation, with the intermittent errors you see prior to the fatal one). At that point the socket may be permanently unusable and should be replaced/reinstated or otherwise dealt with (or alternatively, just force the app to restart and call it a day, like I do :-) ). A minimal sketch of attaching error handlers to the pool is shown below.
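As a rough sketch (not part of the original answer), the bare pool construction above could be swapped for sockets that handle their own error events and replace themselves when they break; MAX_CONCURRENT_SOCKETS and rrSocketPool are the names carried over from the snippet above:

const dgram = require('dgram');

// Replaces the pool construction above: each socket gets an 'error'
// listener so an idle-time failure can't crash the process, and the
// broken socket is closed and swapped for a fresh one in place.
const rrSocketPool = [];

function createPooledSocket(index) {
  const sock = dgram.createSocket('udp4');
  sock.on('error', (err) => {
    console.error(`socket ${index} failed:`, err);
    try { sock.close(); } catch (_) { /* already closed */ }
    rrSocketPool[index] = createPooledSocket(index); // reinstate a fresh socket
  });
  return sock;
}

for (let i = 0; i < MAX_CONCURRENT_SOCKETS; i++) {
  rrSocketPool.push(createPooledSocket(i));
}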
I'd like to get the number of connections of a few servers running on my local machine.
I've successfully used server.getConnections() on a server created via net.createServer(), however I don't know how to use it on an already started server.
I tried obtaining the server instance by connecting to it, using net.connect(). The connection is created successfully and I get a new net.Socket object, however my understanding is that I actually need a net.Server in order to use getConnections().
So my question is, how do I get a net.Server instance of an already running server?
I realize my question is an XY problem, so apologies to all who tried to answer it.
I suspect the answer to the literal question, "how do I get an instance of an existing server", is: "you can't".
I should have added more details to the question, especially what I was trying to achieve.
My application is a load balancer / reverse proxy server. Initially I was able to use getConnections() because I would start the proxy server and a few dummy servers from the same script. However I wanted to make the dummy servers and the proxy separate from each other, so even though I did have complete control over them, I needed to pretend that I didn't actually own the servers.
The solution I found for my specific case, in the end, was to keep a list of the servers I can connect to (via the reverse proxy) and increment a connection counter every time I connect to a specific server:
const net = require('net');

let servers = [
  { port: 4000, connectionsCounter: 0 },
  { port: 5000, connectionsCounter: 0 },
  { port: 6000, connectionsCounter: 0 },
];

let myProxyServer = net.createServer((socket) => {
  // Open a connection to the first server in the list
  // (the actual piping of data between the two sockets is omitted here)
  const backend = net.connect(servers[0].port, () => {
    // Once connected, increment the connections counter
    servers[0].connectionsCounter++;
  });
  // When the client connection ends, decrement the counter
  socket.on('close', () => {
    servers[0].connectionsCounter--;
  });
});
I hope this will be helpful to someone.
If you just want to use the server, you can store it in a variable when you call net.createServer():
const my_server = net.createServer();
// do what you want with it; getConnections takes a callback
my_server.getConnections((err, count) => {
  console.log(`currently ${count} connection(s)`);
});
my_server.listen();
You can create a server with net.createServer() and then get your number of connections inside a server.on('connection', <callback>) handler:
server.on('connection', (socket) => {
  // someone connected
  console.log("New active connection");
  server.getConnections((err, count) => {
    if (err) {
      console.log(err);
    } else {
      console.log("Currently " + count + " active connection(s)");
    }
  });
});
I hope this complete example code helps you:
const net = require('net');
const uuid = require('uuid/v1');

const server = net.createServer((socket) => {
  socket.uuid = uuid();
  socket.on('data', (data) => {
    //const response = JSON.parse(data.toString('utf8'));
  });
  socket.on('error', (err) => {
    console.log('A client has left abruptly !');
    server.getConnections((err, count) => {
      if (err) {
        console.log(err);
      } else {
        console.log("Currently " + count + " active connection(s)");
      }
    });
  });
  socket.on('end', () => {
    console.log("A client has left");
    server.getConnections((err, count) => {
      if (err) {
        console.log(err);
      } else {
        console.log("Currently " + count + " active connection(s)");
      }
    });
  });
});

server.on('error', (err) => {
  // handle errors here
  console.log("Error:", err);
});

server.on('connection', (socket) => {
  // someone connected
  console.log("New active connection");
  server.getConnections((err, count) => {
    if (err) {
      console.log(err);
    } else {
      console.log("Currently " + count + " active connection(s)");
    }
  });
});

// port number.
server.listen(3000, () => {
  console.log('opened server on', server.address());
});
Or you can use netstat to get the number of connections; the node-netstat module (https://www.npmjs.com/package/node-netstat) works for this:
const netstat = require('node-netstat');

setInterval(function () {
  let count = 0;
  netstat({
    filter: {
      protocol: 'tcp',
      local: {port: 3000, address: '192.168.1.1'}
    }
  }, item => {
    // console.log(item);
    count++;
    console.log(count);
  });
}, 1000);
What your application is doing is a bit unclear.
Only the server socket can report how many connections it has; this is why, if you create the net.Server yourself, you can access that information from it.
If you want to connect to an application and query the number of clients connected to it, the application that you connect to needs to provide that information to you when you ask. This is not information that the socket provides - the application itself has to provide that information.
If you are writing the application that created the net.Server, you can create another net.Server on a different port that you can then connect to and query it for information about the other clients on its other sockets.
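As a rough sketch of that idea (not from the original answer, and with arbitrary port numbers), a second "admin" server can expose the main server's connection count:

const net = require('net');

// The main server whose connections we want to count.
const mainServer = net.createServer((socket) => {
  // ...handle application traffic here...
});
mainServer.listen(4000);

// A separate admin server: any client that connects receives the
// current connection count of the main server and is then disconnected.
const adminServer = net.createServer((socket) => {
  mainServer.getConnections((err, count) => {
    socket.end(err ? `error: ${err.message}\n` : `${count}\n`);
  });
});
adminServer.listen(4001);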
If you are trying to generically find the number of connections to a particular application that has a socket, that application needs to be able to tell you, or, as #root mentioned, you need to ask the OS the application is running on. That approach will be OS dependent and will likely require elevated privileges. But consider connecting to a socket on a router or IoT device: that application may not be running on any OS at all.
I am writing a simple port scanner using the core net module from Node.js. I am getting a 'Callback was already called' error with my code. Can you please spot where the error is coming from? Below is my code:
const net = require('net')
const async = require('async')

function findPortStatus(host, port, timeout, cb) {
  const socket = new net.Socket()
  socket.setTimeout(timeout, () => {
    // couldn't establish a connection because of timeout
    socket.destroy()
    return cb(null, null)
  })
  socket.connect(port, host, () => {
    // connection established
    return cb(null, port)
  })
  socket.on('error', (err) => {
    // couldn't establish a connection
    return cb(null, null)
  })
}
const funcs = []
for (let port = 0; port <= 80; port++) {
  funcs.push(function(callback) {
    findPortStatus('192.30.253.112', port, 4000, (err, port) => {
      if (!err) {
        return callback(null, port)
      }
    })
  })
}

async.parallel(funcs, (err, ports) => {
  if (err) {
    console.error(err.message)
  } else {
    for (let port of ports) {
      if (port) {
        console.log(port)
      }
    }
  }
})
Not sure if this is related, but you really should pass something to the callback when you call it. null,null isn't very useful for debugging. What I would suggest is timeout events in your context are probably not errors, but they are informative. You could just cb(null, 'timeout') or cb(null, {state: 'timedOut', port: port}) or something to better keep track of what worked and what didn't.
The most likely candidate for your actual error, though, is if your socket emits an error or timeout event after the connect event was already successful. Dropped connection or the like. If all you're looking for is a 'ping'-like functionality (across more than just ICMP obviously), then you should probably close the connection as soon as you get a connect and/or remove the other event listeners as part of the connect listener's handler.
Finally, the node docs suggest you not call socket.connect() directly, unless implementing a custom socket (which it doesn't seem like you are), but to use net.createConnection() instead; not sure that'll help you but it's worth noting.
It looks like the successfully connected sockets are subsequently timing out (which makes sense, as you connect but then do nothing with the connection, so it times out).
If you disconnect from a socket once you have recorded a successful connection, then that should clear up the error.
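To illustrate the suggestions above, here is a minimal sketch of one possible fix (an assumption, not the asker's final code): it guards the callback so it can fire at most once and destroys the socket as soon as the outcome is known.

const net = require('net')

function findPortStatus(host, port, timeout, cb) {
  const socket = new net.Socket()
  let done = false

  // Ensure the callback fires at most once, then tear the socket down
  // so a later 'timeout' or 'error' event can't trigger a second call.
  const finish = (result) => {
    if (done) return
    done = true
    socket.destroy()
    cb(null, result)
  }

  socket.setTimeout(timeout, () => finish(null))  // no answer in time
  socket.on('error', () => finish(null))          // connection refused, etc.
  socket.connect(port, host, () => finish(port))  // port is open
}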
I want my application (let's say a simple Node file for now) to work as-is even if Redis is not available. I'm not able to do it the correct way. This is what I've tried:
var redis = require('redis');
var redisClient = null;

var getRedisClient = function(){
  if (redisClient) {
    return redisClient;
  }
  try {
    redisClient = redis.createClient({connect_timeout: 5000, max_attempts: 1});
    redisClient.on("error", function(err) {
      console.error("Error connecting to redis", err);
      redisClient = null;
    });
    return redisClient;
  } catch (ex) {
    console.log("error initialising redis client " + ex);
    return null;
  }
};

try {
  var client = getRedisClient();
  console.log("done!");
} catch (ex) {
  console.log("Exception");
}
However, with this code my application exits if Redis is not available (it shouldn't, because I've not called process.exit() anywhere).
How can I solve this?
Checking for Successful Connection on Start
Using a promise, you could guarantee that at least initially, you were able to connect to redis without error within a specified time period:
const redis = require('redis');
const Promise = require('bluebird');

function getRedisClient(timeoutMs){
  return new Promise((resolve, reject) => {
    const redisClient = redis.createClient();
    const timer = setTimeout(() => reject('timeout'), timeoutMs);
    redisClient.on("ready", () => {
      clearTimeout(timer);
      resolve(redisClient);
    });
    redisClient.on("error", (err) => {
      clearTimeout(timer);
      reject(err);
    });
  });
}

const redisReadyTimeoutMs = 10000;
getRedisClient(redisReadyTimeoutMs)
  .then(redisClient => {
    // the client has connected to redis successfully
    return doSomethingUseful();
  }, error => {
    console.log("Unable to connect to redis", error);
  });
You Need Proper Error Handling
The redis client being non-null does NOT guarantee using it won't throw an error.
You could experience infrastructure misfortune, e.g. a crashed Redis process, an out-of-memory condition, or the network being down.
A bug in your code could cause an error, e.g. invalid or missing arguments to a Redis command.
You should be handling redis client errors as a matter of course.
DON'T null the Redis Client on Error
Nulling it won't gain you much, but it will force you to check for null every time you try to use it.
The redis client also has inbuilt reconnect and retry mechanisms that you'll miss out on if you null it after the first error. See the redis package docs, look for retry_strategy.
DO Wrap your redis client code with try .. catch ... or use .catch in your promise chain.
DO Make use of a retry_strategy.
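A minimal sketch of what that could look like (assuming the node_redis v2/v3 API with its retry_strategy option; the limits chosen here are arbitrary):

const redis = require('redis');

const client = redis.createClient({
  // Called on every lost connection; returning a number schedules
  // the next reconnection attempt after that many milliseconds.
  retry_strategy: (options) => {
    if (options.attempt > 20) {
      // Give up after 20 attempts and surface an error to the client.
      return new Error('Retry attempts exhausted');
    }
    return Math.min(options.attempt * 100, 3000);
  },
});

// Always handle 'error' so a dropped connection can't crash the process.
client.on('error', (err) => console.error('redis error:', err));
client.on('ready', () => console.log('redis ready'));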
Currently I'm using https://github.com/mranney/node_redis as my node redis client.
client.retry_delay is set to 250 ms by default.
I tried connecting to Redis and, once the connection was successful, I manually stopped the Redis server to see whether client.retry_delay works. But I didn't see it working.
The following log messages are logged on ready & end events on redisClients created using createClient:
[2012-03-30 15:13:05.498] [INFO] Development - Node Application is running on port 8090
[2012-03-30 15:13:08.507] [INFO] Development - Connection Successfully Established to '127.0.0.1' '6379'
[2012-03-30 15:16:33.886] [FATAL] Development - Connection Terminated to '127.0.0.1' '6379'
I didn't see the success message again (the ready event was not fired) when the server came back up.
Am I missing something? When will the retry constant be used? Is there a workaround to find out from Node whether a Redis server has come back up after a failure?
I can't reproduce this. Could you try this code, stop your redis server, and check the log output?
var client = require('redis').createClient();

client.on('connect'     , log('connect'));
client.on('ready'       , log('ready'));
client.on('reconnecting', log('reconnecting'));
client.on('error'       , log('error'));
client.on('end'         , log('end'));

function log(type) {
  return function() {
    console.log(type, arguments);
  };
}
Answer as of Feb 2020:
const redis = require('redis');

const log = (type, fn) => fn ? () => {
  console.log(`connection ${type}`);
} : console.log(`connection ${type}`);

// Option 1: One connection is enough per application
const client = redis.createClient('6379', "localhost", {
  retry_strategy: (options) => {
    const {error, total_retry_time, attempt} = options;
    if (error && error.code === "ECONNREFUSED") {
      log(error.code); // take actions or throw exception
    }
    if (total_retry_time > 1000 * 15) { // in ms, i.e. 15 sec
      log('Retry time exhausted'); // take actions or throw exception
    }
    if (options.attempt > 10) {
      log('10 attempts done'); // take actions or throw exception
    }
    console.log("Attempting connection");
    // reconnect after
    return Math.min(options.attempt * 100, 3000); // in ms
  },
});

client.on('connect', log('connect', true));
client.on('ready', log('ready', true));
client.on('reconnecting', log('reconnecting', true));
client.on('error', log('error', true));
client.on('end', log('end', true));
For a complete running example, clone node-cheat and run node connect-retry.js.
Adding to the answer above, one small change: the callback provided should be a function reference, not a call that executes the function. Something like below:
function redisCallbackHandler(message){
  console.log("Redis:" + message);
}

var redis = require("redis");
var redisclient = redis.createClient();

redisclient.on('connect', redisCallbackHandler);
redisclient.on('ready', redisCallbackHandler);
redisclient.on('reconnecting', redisCallbackHandler);
redisclient.on('error', redisCallbackHandler);
redisclient.on('end', redisCallbackHandler);