I am creating 5 connections to Service Bus and putting them in an array. Then, as new messages keep coming in, I take one connection from the array and use it to send the message. When I start the service and run a load test, it works fine. When I leave the service idle for some time and run the same load test again, it starts failing with this error:
connect ETIMEDOUT xxx.xxx.xxx.xxx
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1191:14)
I am not sure whether caching the connections and reusing them like this is a good approach and is what causes this issue, or whether something else is behind it.
const azure = require('azure-sb'); // assumption: the legacy azure-sb package, which provides createServiceBusService
// SERVICEBUS_CONNECTION_STRING and log are defined elsewhere in the application

let serviceBusConnectionArray = [];
let executed = false;
let count = 0;
const MAX_CONNECTIONS = 5;

class ServiceBus {
  static createConnections() {
    if (!executed) {
      for (let i = 0; i < MAX_CONNECTIONS; i++) {
        serviceBusConnectionArray.push(
          azure.createServiceBusService(SERVICEBUS_CONNECTION_STRING)
            .withFilter(new azure.ExponentialRetryPolicyFilter())
        );
      }
      executed = true;
    }
  }

  static getConnectionString() {
    ServiceBus.createConnections();
    if (count < MAX_CONNECTIONS) {
      return serviceBusConnectionArray[count++];
    } else {
      count = 0;
      return serviceBusConnectionArray[count];
    }
  }

  static putMessageToServiceBus(topicName, message) {
    return new Promise((resolve, reject) => {
      const serviceBusService = ServiceBus.getConnectionString();
      serviceBusService.sendTopicMessage(topicName, message, function (error) {
        if (error) {
          log.error('Error in putting message to service bus, message: %s', error.stack);
          return reject(error);
        }
        resolve('Message added');
      });
    });
  }
}
I am not sure which route I should take now to resolve these timeout errors.
Looking into the source code for azure-sdk-for-node, specifically these lines in order
servicebusservice.js#L455
servicebusservice.js#L496
serviceclient.js#L190
The SDK is just performing REST requests against the Service Bus REST API, so I don't think pooling those service objects really helps.
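For illustration, since each send is just an HTTP call, a single shared service object is typically enough; here is a minimal sketch, assuming the legacy azure-sb package and an environment variable holding the connection string (both assumptions, not from the question):
const azure = require('azure-sb'); // assumption: legacy package exposing createServiceBusService
const serviceBusService = azure
  .createServiceBusService(process.env.SERVICEBUS_CONNECTION_STRING) // assumed env variable
  .withFilter(new azure.ExponentialRetryPolicyFilter());

function sendToTopic(topicName, message) {
  return new Promise((resolve, reject) => {
    serviceBusService.sendTopicMessage(topicName, message, (error) =>
      error ? reject(error) : resolve('Message added')
    );
  });
}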
The timeout seems to be a genuine timeout at that point in time, raised by the request npm module used by the SDK.
You could probably try the newer SDK, which uses AMQP under the hood to connect to Service Bus. Note that this SDK is in preview.
As PramodValavala-MSFT mentioned regarding the @azure/service-bus SDK in the other answer, major version 7.0.0 of the @azure/service-bus SDK (which was previously in preview) relies on AMQP and has been released recently.
Each instance of ServiceBusClient represents a connection; all the methods on a ServiceBusClient instance use that same connection.
@azure/service-bus - 7.0.0
Samples for 7.0.0
Guide to migrate from @azure/service-bus v1 to v7
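A minimal sketch of sending through one shared connection with version 7 (the topic name and environment variable below are placeholders, not from the question):
const { ServiceBusClient } = require("@azure/service-bus");

// One ServiceBusClient = one AMQP connection, shared by every sender/receiver created from it
const sbClient = new ServiceBusClient(process.env.SERVICEBUS_CONNECTION_STRING);
const sender = sbClient.createSender("my-topic"); // placeholder topic name

async function putMessageToServiceBus(message) {
  await sender.sendMessages({ body: message });
}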
First, let me tell you how I'm using the Redis connection in my NodeJS application:
I'm re-using one connection throughout the app using a singleton class.
const { createClient } = require('redis'); // node-redis v4, which provides connect()

class RDB {
  static async getClient() {
    if (this.client) {
      return this.client;
    }
    this.client = createClient({
      url: config.redis.uri
    });
    await this.client.connect();
    return this.client;
  }
}
For some reason that I don't know, my application crashes from time to time with this error; it happens about once or twice a week:
Error: Socket closed unexpectedly
Now, my questions:
Is using Redis connections like this alright? Is there something wrong with my approach?
Why does this happen? Why is my socket closing unexpectedly?
Is there a way to catch this error (using my approach) or any other good practice for implementing Redis connections?
I solved this using the 'error' listener. Just listening for it saves the Node application from crashing.
client.on("error", function(error) {
  console.error(error);
  // I report it to a logging service like Sentry.
});
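In the singleton from the question, the listener could be attached right after createClient; a sketch, keeping the question's config.redis.uri:
this.client = createClient({ url: config.redis.uri });
this.client.on("error", (error) => {
  console.error("Redis client error:", error); // log/report instead of letting the process crash
});
await this.client.connect();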
I had a similar issue with the socket closing unexpectedly. The issue started when I upgraded node-redis from 3.x to 4.x, and it went away after I upgraded my redis-server from 5.x to 6.x.
You should declare a private static member 'client' on the RDB class, like this:
private static client;
In a static method there is no instance, so don't rely on 'this'; reference the static class member explicitly, like this:
RDB.client
It would also be better to check whether the client's connection is open rather than simply checking that the client exists (considering you are using the 'redis' npm library), like this:
if (RDB.client && RDB.client.isOpen)
After the changes, your code should look like this:
class RDB {
  private static client;

  static async getClient() {
    if (RDB.client && RDB.client.isOpen) {
      return RDB.client;
    }
    RDB.client = createClient({
      url: config.redis.uri
    });
    await RDB.client.connect();
    return RDB.client;
  }
}
Note: the connect() method and the isOpen property only exist in redis version ^4.0.0.
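A hypothetical usage example of the accessor, just to show that every caller ends up sharing the same underlying connection:
async function demo() {
  const client = await RDB.getClient(); // first call connects; subsequent calls reuse the open client
  await client.set("greeting", "hello");
  console.log(await client.get("greeting")); // "hello"
}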
I'm having a memory leak issue in a Node application. The application is subscribed to a topic in Redis and, on receiving a message, pops an item from a list using brpop. There are a number of instances of this application running in production, so at any point one instance might be blocking while waiting for a message on the Redis list. Here is the code snippet that consumes a message from Redis:
private doWork(): void {
  this.storage.subscribe("newRoom", (message: [any, any]) => {
    const [msg] = message;
    if (msg === "room") {
      return new Promise( async (resolve, reject) => {
        process.nextTick( async () => {
          const roomIdData = await this.storage.brpop("newRoomList"); // a promisified version of brpop with a timeout of 5s
          if (roomIdData) {
            const roomId = roomIdData[1];
            this.createRoom(roomId);
          }
        });
        resolve();
      });
    }
  });
}
I've tried debugging the memory leak using the Chrome debugger and observed too many closure objects getting created. I suspect it's due to this code, as I can see the Redis client object name in the closure objects, but I haven't been able to figure out how to fix it. I added process.nextTick but it didn't help. I'm using the node-redis client for connecting to Redis. I've attached an object retainer map screenshot from the Chrome debugger tool.
P.S. blk is the Redis client object name used exclusively for blocking commands, i.e. brpop.
Edit: I replaced brpop with rpop and we're seeing a significant drop in the memory growth rate, but now the distribution of messages between the workers has become skewed.
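For reference, a rough sketch of what the rpop-based variant mentioned in the edit might look like, assuming storage.rpop is a promisified wrapper like the one used for brpop and resolves to the popped value or null:
this.storage.subscribe("newRoom", async (message: [any, any]) => {
  const [msg] = message;
  if (msg === "room") {
    const roomId = await this.storage.rpop("newRoomList"); // non-blocking: resolves to null when the list is empty
    if (roomId) {
      this.createRoom(roomId);
    }
  }
});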
I am using ioredis with a Node application, and due to some issues in the cluster I started getting:
Too many Cluster redirections. Last error: Error: Connection is closed.
Because of this, all of my Redis calls failed, and only after a very long time, ranging from 1 sec to 130 secs.
Is there any default timeout that the ioredis library uses to fail a call after sending a command to the Redis server?
Are the higher failure times, in the range of 100 secs, when sending commands to the Redis server caused by a large queue building up due to the cluster failure?
Sample code:
this.getData = function(bucketName, userKey) {
  let cacheKey = cacheHelper.formCacheKey(userKey, bucketName);
  let serviceType = cacheHelper.getServiceType(bucketName, cacheConfig.service_config);
  let log_info = _.get(cacheConfig.service_config, 'logging_options.cache_info_level', true);
  let startTime = moment();
  let dataLength = null;
  return Promise.try(function() {
    validations([cacheKey], ['cache_key'], bucketName, serviceType, that.currentService);
    return cacheStore.get(serviceType, cacheKey);
  })
  .then(function(data) {
    dataLength = (data || '').length;
    return cacheHelper.uncompress(data);
  })
  .then(function(uncompressedData) {
    let endTime = moment();
    let responseTime = endTime.diff(startTime, 'milliseconds');
    if (!uncompressedData) {
      if (log_info) logger.consoleLog(bucketName, 'getData', 'miss', cacheKey, that.currentService,
        responseTime, dataLength);
    } else {
      if (log_info) logger.consoleLog(bucketName, 'getData', 'success', cacheKey, that.currentService,
        responseTime, dataLength);
    }
    return uncompressedData;
  })
  .catch(function(err) {
    let endTime = moment();
    let responseTime = endTime.diff(startTime, 'milliseconds');
    logger.error(bucketName, 'getData', err.message, userKey, that.currentService, responseTime);
    throw cacheResponse.error(err);
  });
};
Here,
logger.error(bucketName, 'getData', err.message, userKey, that.currentService, responseTime);
started giving response times in the range of 1061 ms to 109939 ms.
Please provide some inputs.
As you can read in this ioredis issue, there isn't a per-command timeout configuration.
As suggested in the linked comment, you can use a Promise-based strategy as a workaround. Incidentally, this is the same strategy used by the ioredis-timeout plugin, which wraps the original command in a Promise.race() call:
// code from the ioredis-timeout lib
return Promise.race([
  promiseDelay(ms, command, args),
  originCommand.apply(redis, args)
]);
So you can use the plugin, or this nice race-timeout technique, to add timeout functionality on top of the Redis client. Keep in mind that the underlying command will not be interrupted.
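A minimal sketch of that race-timeout idea, with a hypothetical withTimeout helper (not part of ioredis):
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((resolve, reject) => {
    timer = setTimeout(() => reject(new Error('Redis command timed out after ' + ms + 'ms')), ms);
  });
  // Whichever settles first wins; the underlying Redis command itself is not cancelled
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage: fail fast after 200 ms instead of waiting for the cluster to recover
// const data = await withTimeout(cacheStore.get(serviceType, cacheKey), 200);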
I was facing a similar issue, which I have described in detail here: How to configure Node Redis client to throw errors immediately, when connection has failed? [READ DETAILS]
The fix was actually quite simple: just set enable_offline_queue to false. This was with Node Redis, so you'll have to figure out the equivalent for ioredis. Setting it to false makes all commands throw an exception immediately, which you can handle in a catch block and continue, instead of waiting for some timeout.
Do keep in mind that, with enable_offline_queue set to false, any commands you issue while there is a connection issue with the server will never be executed.
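For illustration, the option looks roughly like this in Node Redis, and ioredis exposes a similarly named enableOfflineQueue option; treat the exact option names below as something to verify against the versions you use:
// Node Redis (v3-style options)
const client = require('redis').createClient({ enable_offline_queue: false });

// ioredis equivalent (commands fail immediately while the connection is down)
const Redis = require('ioredis');
const redis = new Redis({ enableOfflineQueue: false });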
Hi, I am using the new Node.js SDK to connect to Service Bus. What is the proper way to keep receiving messages for as long as my application is running? The example code shows two ways of listening for messages:
Method 1 - Receive Batch
const receiver = client.getReceiver();
try {
  for (let i = 0; i < 10; i++) {
    const messages = await receiver.receiveBatch(1, 5);
    if (!messages.length) {
      console.log("No more messages to receive");
      break;
    }
    console.log(`Received message #${i}: ${messages[0].body}`);
    await messages[0].complete();
  }
  await client.close();
} finally {
  await ns.close();
}
Method 2 - Streaming Listener
try {
  receiver.receive(onMessageHandler, onErrorHandler, { autoComplete: false });
  // Waiting long enough before closing the receiver to receive messages
  await delay(5000);
  await receiver.close();
  await client.close();
} finally {
  await ns.close();
}
I went with method 2 on startup and basically never close the client. But after a period of time the connection just stops working and messages don't get received anymore (they stay stuck in the queue).
What is the correct way to receive messages "forever"?
Re-establish a new client (open and close, e.g. every minute) with method 1, OR
Re-establish a new client (open and close, e.g. every minute) with method 2, OR
Some kind of polling system (how)?
I know this is a late reply, but... version 7 of the @azure/service-bus SDK offers a solution to tackle this specific problem of disconnects and reconnection.
The subscribe method (which is the equivalent of the receive method in your code snippet) can be leveraged; it runs forever and is capable of recovering from fatal errors as well.
You can refer to the receiveMessagesStreaming.ts sample code that uses version 7 of the @azure/service-bus SDK.
The latest version, 7.0.0, of @azure/service-bus has been released recently.
@azure/service-bus - 7.0.0
Samples for 7.0.0
Guide to migrate from @azure/service-bus v1 to v7
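A short sketch of what the subscribe-based receiver looks like in version 7 (the queue name and environment variable are placeholders, not from the question):
const { ServiceBusClient } = require("@azure/service-bus");

const sbClient = new ServiceBusClient(process.env.SERVICEBUS_CONNECTION_STRING);
const receiver = sbClient.createReceiver("my-queue"); // placeholder queue name

// subscribe() keeps pumping messages for the lifetime of the process and recovers from recoverable errors
receiver.subscribe({
  processMessage: async (message) => {
    console.log(`Received: ${message.body}`);
  },
  processError: async (args) => {
    console.error(`Error from source ${args.errorSource}:`, args.error);
  },
});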
I am getting the below error.
Error: Redis connection to localhost:6379 failed - getaddrinfo EMFILE localhost:6379
at Object.exports._errnoException (util.js:870:11)
at errnoException (dns.js:32:15)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:78:26)
The application uses Node.js with a MySQL database and Redis.
There are too many requests fetching data from MySQL, so data is cached for 2 minutes while staying synced with the db. When a new request arrives, the app checks Redis: if the data is found, it is served from Redis; otherwise it is retrieved from MySQL, cached in Redis, and sent as the response. This keeps happening.
After some time, probably 1 or 2 hours, the server crashes with the above error.
For now, pm2 is used, which restarts the server.
But I need to know the reason for it.
The Redis installation followed the instructions from here:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-redis
Please let me know how to solve the issue.
Redis Connection File Code
var Promise = require('bluebird');
var redisClient; // Global (Avoids Duplicate Connections)

module.exports =
{
  OpenRedisConnection: function()
  {
    if (redisClient == null)
    {
      redisClient = require("redis").createClient(6379, 'localhost');
      redisClient.selected_db = 1;
    }
  },
  GetRedisMultiConnection: function()
  {
    return require("redis").createClient(6379, 'localhost').multi();
  },
  IsRedisConnectionOpened: function()
  {
    if (redisClient && redisClient.connected == true)
    {
      return true;
    }
    else
    {
      if (!redisClient)
        redisClient.end(); // End and open once more
      module.exports.OpenRedisConnection();
      return true;
    }
  }
};
What I usually do with code like this is write a very thin module that loads in the correct Redis driver and returns a valid handle with a minimum of fuss:
var Redis = require("redis");

module.exports = {
  open: function() {
    var client = Redis.createClient(6379, 'localhost');
    client.selected_db = 1;
    return client;
  },
  close: function(client) {
    client.quit();
  }
};
Any code in your Node application that needs a Redis handle acquires one on demand, and it's also understood that the code must close it no matter what happens. If you're not catching errors, or you're catching errors and skipping the close, you'll "leak" open Redis handles and your app will eventually crash.
So, for example:
var Redis = require('./redis'); // Path to the module defined above

function doStuff() {
  var redis = Redis.open();
  thing.action().then(function() {
    redis.ping();
  }).finally(function() {
    // This code runs no matter what, even if there's an exception or error
    Redis.close(redis);
  });
}
Due to the concurrent nature of Node code, having a single Redis handle that's shared by many different parts of the code will be trouble, and I'd strongly advise against this approach.
To expand on this template, you'd have a JSON configuration file that can override which port and server to connect to. That's really easy to require and use instead of the defaults here, and it also means you don't have to hack around with any actual code when you deploy your application to another system; see the sketch below.
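A sketch of the config-driven variant, where the defaults can be overridden by a config file (config.json is an assumed name and shape):
var Redis = require("redis");
var config = {};
try {
  config = require("./config.json"); // e.g. { "port": 6380, "host": "redis.internal" }
} catch (e) {
  // No config file present: fall back to the defaults below
}

module.exports = {
  open: function() {
    var client = Redis.createClient(config.port || 6379, config.host || 'localhost');
    client.selected_db = 1;
    return client;
  },
  close: function(client) {
    client.quit();
  }
};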
You can also expand the wrapper module to keep active connections in a small pool, to avoid closing a handle and then immediately opening a new one. With a little bit of attention you can even check that these handles are in a sane state, such as not being stuck in the middle of a MULTI transaction, before handing them out, by doing a PING and testing for an immediate response. This weeds out stale/dead connections as well.
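A rough sketch of that idea, keeping a few handles around and PING-checking them before reuse (the pool size and helper names here are made up for illustration):
var Redis = require("redis");
var pool = [];

function acquire(callback) {
  var client = pool.pop();
  if (!client) {
    return callback(Redis.createClient(6379, 'localhost'));
  }
  client.ping(function(err, reply) {
    if (err || reply !== 'PONG') {
      client.end(true); // Stale or dead handle: discard it
      return callback(Redis.createClient(6379, 'localhost'));
    }
    callback(client); // Healthy handle, reuse it
  });
}

function release(client) {
  if (pool.length < 5) {
    pool.push(client); // Keep a small number of handles warm
  } else {
    client.quit();
  }
}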