Frequent timeout errors from Azure ServiceBus - node.js

Using the Azure SDK example code as inspiration, I have written a publishing function that sends a message to the specified queue:
import { ServiceBusClient, ServiceBusMessage } from "@azure/service-bus";

// SB_CONNECTION_STRING is loaded from configuration elsewhere
export const publisher = async (message: ServiceBusMessage, queueName: string) => {
  let sbClient, sender;
  try {
    sbClient = new ServiceBusClient(SB_CONNECTION_STRING);
    sender = sbClient.createSender(queueName);
    await sender.sendMessages([message]);
    await sender.close();
  } catch (err) {
    console.log(`[Service Bus] error sending message ${queueName}`, err);
    console.log("retrying message publish...");
    await publisher(message, queueName);
  } finally {
    await sbClient?.close();
  }
};
Most of the time this code works flawlessly, but occasionally the connection times out and I retry sending within the catch block, which seems to work every time.
The messages I'm sending are quite small:
{
  body: {
    type: PROCESS_FILE,
    data: { type: CURRENT, directory: PENDING_DIRECTORY }
  }
}
And here is an example of the log output, including the error thrown by the Azure SDK:
[08/03/2022 12:40:55.191] [LOG] [X] Processed task
[08/03/2022 12:40:55.346] [LOG] [X] Processed task
[08/03/2022 12:40:55.545] [LOG] [X] Processed task
[08/03/2022 12:41:27.840] [LOG] [Service Bus] error sending message local.importer.process { ServiceBusError: ETIMEDOUT: connect ETIMEDOUT 40.127.7.243:5671
    at translateServiceBusError (/usr/share/app/node_modules/@azure/service-bus/src/serviceBusError.ts:174:12)
    at MessageSender.open (/usr/share/app/node_modules/@azure/service-bus/src/core/messageSender.ts:304:31)
    at process._tickCallback (internal/process/next_tick.js:68:7)
  name: 'MessagingError',
  retryable: false,
  address: '40.127.7.243',
  code: 'GeneralError',
  errno: 'ETIMEDOUT',
  port: 5671,
  syscall: 'connect' }
[08/03/2022 12:41:27.840] [LOG] retrying message publish...
[08/03/2022 12:41:28.756] [LOG] [X] Processed task
I am not sure how to proceed. The Azure documentation recommends retrying the message in the case of a timeout, which I am doing, but the timeouts are so frequent that it concerns me.
Does any kind soul have some insight into this from previous experience? I am using "@azure/service-bus": "^7.3.0".
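For completeness, this is roughly how I understand the SDK's built-in retry policy can be tuned when constructing the client. This is only a sketch; the retry values below are placeholders I have not tuned, not settings from my code:

import { ServiceBusClient } from "@azure/service-bus";

// Sketch: pass retryOptions so the SDK itself retries transient failures
// (such as ETIMEDOUT) before surfacing an error. Values are illustrative.
const sbClient = new ServiceBusClient(SB_CONNECTION_STRING, {
  retryOptions: {
    maxRetries: 5,        // how many times the SDK retries a failed operation
    retryDelayInMs: 2000, // delay between attempts
    timeoutInMs: 60000,   // per-attempt timeout
  },
});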

Related

Can't run 2 individual guild and direct message "ready" events / discord.js v13

I'm intent on adding 2 "ready" events to my Discord bot. I think that should be fine, because I've added more than one "interactionCreate" event before. Basically, one event (1) will send a message to the guild and the other one (2) will send a message to the DMs. Here is my code (I've followed this discord guide on sending a message to a channel and this discord guide on sending a private message):
(1)
module.exports = {
  name: 'ready',
  once: true,
  async execute(client) {
    const channel = client.channels.cache.get('ChannelID');
    ...
    channel.send(`${todaySession.join(", ")}`);
    ...
  },
};
(2)
module.exports = {
  name: 'ready',
  once: true,
  async execute(client) {
    const user = await client.users.fetch('UserID');
    console.log(user);
    user.send(" ");
  },
};
Sometimes it works, but mostly it throws this error (there are 15 pages of it in total, so I'm obviously not going to post all of it here):
node:internal/process/promises:246
triggerUncaughtException(err, true /* fromPromise */);
^
<ref *1> Error: read ECONNRESET
at TCP.onStreamRead (node:internal/stream_base_commons:220:20) {
errno: -4077,
code: 'ECONNRESET',
I've done a fair amount of research, but it seems too complicated for me (I only casually understand parts of WebSocket). I'll put what I found down here in case you spot something I've missed:
1/ ECONNRESET usually means the internet connection dies somewhere along the way; there's not much the lib can do about it.
2/ the second one
3/ bot goes ded
My temporary solution is to put all of the events into one single "ready" event file (sketched below), which works like a charm. Still, is there any explanation for this, and how can we solve it for good?
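For reference, a minimal sketch of the combined handler I ended up with. ChannelID and UserID are placeholders as above, and todaySession comes from the code I elided in snippet (1):

module.exports = {
  name: 'ready',
  once: true,
  async execute(client) {
    // (1) send the guild message to a channel
    const channel = client.channels.cache.get('ChannelID');
    channel.send(`${todaySession.join(", ")}`);

    // (2) send the direct message
    const user = await client.users.fetch('UserID');
    user.send(" ");
  },
};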

RabbitMQ: Ack/Nack a message on a channel that is closed and reopened

I'm getting this error from the RabbitMQ server:
Channel closed by server: 406 (PRECONDITION-FAILED) with message "PRECONDITION_FAILED - unknown delivery tag 80"
This happens because the connection is lost during the consumer task, and at the end, when the message is acked/nacked, I get this error because I cannot ack a message on a different channel than the one I got it from.
Here is the code for the RabbitMQ connection:
async connect({ prefetch = 1, queueName }) {
  this.queueName = queueName;
  console.log(`[AMQP][${this.queueName}] | connecting`);
  return queue
    .connect(this.config.rabbitmq.connstring)
    .then(conn => {
      conn.once('error', err => {
        this.channel = null;
        if (err.message !== 'Connection closing') {
          console.error(
            `[AMQP][${this.queueName}] (evt:error) | ${err.message}`,
          );
        }
      });
      conn.once('close', () => {
        this.channel = null;
        console.error(
          `[AMQP][${this.queueName}] (evt:close) | reconnecting`,
        );
        this.connect({ prefetch, queueName: this.queueName });
      });
      return conn.createChannel();
    })
    .then(ch => {
      console.log(`[AMQP-channel][${this.queueName}] created`);
      ch.on('error', err => {
        console.error(
          `[AMQP-ch][${this.queueName}] (evt:error) | ${err.message}`,
        );
      });
      ch.on('close', () => {
        console.error(`[AMQP-ch][${this.queueName}] (evt:close)`);
      });
      this.channel = ch;
      return this.channel;
    })
    .then(ch => {
      return this.channel.prefetch(prefetch);
    })
    .then(ch => {
      return this.channel.assertQueue(this.queueName);
    })
    .then(async ch => {
      while (this.buffer.length > 0) {
        const request = this.buffer.pop();
        await request();
      }
      return this.channel;
    })
    .catch(error => {
      console.error(error);
      console.log(`[AMQP][${this.queueName}] reconnecting in 1s`);
      return this._delay(1000).then(() =>
        this.connect({ prefetch, queueName: this.queueName }),
      );
    });
}
async ack(msg) {
  try {
    if (this.channel) {
      console.log(`[AMQP][${this.queueName}] ack`);
      await this.channel.ack(msg);
    } else {
      console.log(`[AMQP][${this.queueName}] ack (buffer)`);
      this.buffer.push(() => {
        this.ack(msg);
      });
    }
  } catch (e) {
    console.error(`[AMQ][${this.queueName}] ack error: ${e.message}`);
  }
}
As you can see, after the connection is established a channel is created. When I get a connection issue, the channel is set to NULL, and after 1 second the connection retries, recreating a new channel.
To manage the offline period I'm using a buffer that collects all the ack calls made while the channel is NULL; after the connection is re-established I drain the buffer.
So basically I have to find a way to send an ACK after the connection is lost or the channel is closed, for whatever reason.
Thanks for any help
You cannot acknowledge a message once the channel is closed (whatever the reason). The broker will automatically re-deliver the same message to another consumer.
This is well documented in the RabbitMQ message confirmation section:
When Consumers Fail or Lose Connection: Automatic Requeueing
When manual acknowledgements are used, any delivery (message) that was not acked is automatically requeued when the channel (or connection) on which the delivery happened is closed. This includes TCP connection loss by clients, consumer application (process) failures, and channel-level protocol exceptions (covered below).
...
Due to this behavior, consumers must be prepared to handle redeliveries and otherwise be implemented with idempotence in mind. Redeliveries will have a special boolean property, redeliver, set to true by RabbitMQ. For first time deliveries it will be set to false. Note that a consumer can receive a message that was previously delivered to another consumer.
As the documentation suggests, you need to handle such issues on the consumer side by applying a message idempotency design pattern. In other words, your architecture should be ready to deal with message re-delivery caused by errors.
Alternatively, you can disable message acknowledgment and obtain an at-most-once delivery pattern. This implies that in case of errors you will have to deal with message loss.
Further reading on the matter:
https://bravenewgeek.com/you-cannot-have-exactly-once-delivery/
And the follow-up after Kafka introduced its new semantics:
https://bravenewgeek.com/you-cannot-have-exactly-once-delivery-redux/
There is no way to send an ACK if the connection is dropped or broken for some reason, because the connection happens at the socket level, and once it is closed there is no way to recreate it with the same socket.
When the connection drops, the message remains un-acked, so another consumer can process it, or it will be processed again by the disconnected consumer when it connects again.
In my opinion you are trying to solve a problem that is caused not by RabbitMQ but by the underlying socket implementation.
You could solve this by not managing a buffer of pending acks at all, and instead taking advantage of RabbitMQ's behaviour of re-delivering the last unprocessed message as soon as your listener connects again.
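To make that concrete, here is a rough standalone sketch of a consumer written this way with amqplib. processMessage, checkAlreadyProcessed and markProcessed are hypothetical helpers standing in for your own handler and idempotency store, and channel/queueName stand in for this.channel/this.queueName in your class:

// Sketch of an idempotent consumer: instead of buffering acks across
// reconnects, let RabbitMQ redeliver un-acked messages and make the
// handler safe to run twice.
channel.consume(queueName, async msg => {
  if (msg === null) return; // consumer was cancelled

  const id = msg.properties.messageId; // requires the producer to set messageId
  if (msg.fields.redelivered && (await checkAlreadyProcessed(id))) {
    channel.ack(msg); // already handled before the previous channel died
    return;
  }

  await processMessage(msg); // your actual work
  await markProcessed(id);   // record it for the idempotency check
  channel.ack(msg);          // ack on the same channel that delivered it
}, { noAck: false });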

RabbitMQ data lost on crash

I'm using RabbitMQ to store and retrieve data. I referred to this article. I have set the durable flag to true and the noAck flag to false (I need to keep the messages on the queue even after consuming).
I created these scenarios:
1. I updated stock data 3 times with the consumer off (inactive). Then I activated the consumer; it consumed all three messages from the queue. [Works fine.]
2. I again produced three messages (consumer inactive again), then I turned off the RabbitMQ server. When I restarted the server and activated the consumer, it doesn't seem to consume the data (have the messages that were on the queue been lost?).
Consumer:
connection.createChannel(function (error1, channel) {
  if (error1) {
    throw error1;
  }
  var queue = "updateStock2";
  channel.assertQueue(queue, {
    durable: true,
  });
  console.log(
    " [*] Waiting for stockData messages in %s. To exit press CTRL+C",
    queue
  );
  channel.consume(
    queue,
    function (data) {
      stock = JSON.parse(data.content.toString());
      console.log(" [x] Received Stock:", stock.name + " : " + stock.value);
    },
    {
      noAck: false,
    }
  );
});
Producer:
connection.createChannel(function (error1, channel) {
  if (error1) {
    throw error1;
  }
  var queue = "updateStock2";
  channel.assertQueue(queue, {
    durable: true,
  });
  channel.sendToQueue(queue, Buffer.from(data));
  console.log(" [x] Sent %s", data);
});
setTimeout(function () {
  connection.close();
  //process.exit(0);
}, 500);
Aren't they persistent? If the server crashes, are all the messages in the queue gone forever?
How do I retrieve data that was in the queue when the server crashed?
Thanks in advance.
Why were your messages lost?
I regret to say you did not declare {persistent: true} when sending the message. Check https://www.rabbitmq.com/tutorials/tutorial-two-javascript.html; you should use channel.sendToQueue(queue, Buffer.from(msg), {persistent: true});
Aren't they persistent?
Durable queues will be recovered on node boot, including messages in them published as persistent. Messages published as transient will be discarded during recovery, even if they were stored in durable queues.
Which middleware may be better for you?
If you want a middleware that can retain messages even after they have been consumed, you may need Kafka.
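A minimal sketch of the corrected producer, reusing the queue and the data variable from your snippet. Both pieces matter: the queue must be durable and each message must be marked persistent:

connection.createChannel(function (error1, channel) {
  if (error1) {
    throw error1;
  }
  var queue = "updateStock2";

  // The queue must be durable so its definition survives a broker restart...
  channel.assertQueue(queue, { durable: true });

  // ...and each message must be marked persistent so it is written to disk.
  channel.sendToQueue(queue, Buffer.from(data), { persistent: true });
  console.log(" [x] Sent %s", data);
});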

How to control commit of a consumed kafka message using kafka-node

I'm using Node with Kafka for the first time, using kafka-node. Consuming a message requires calling an external API, which might take even a second to respond. I want to survive sudden failures of my consumer, so that if a consumer fails, the consumer that replaces it will receive the same message whose processing was not completed.
I'm using kafka 0.10 and trying to use ConsumerGroup.
I thought of setting autoCommit: false in the options and committing the message only once its processing has completed (as I previously did with some Java code a while ago).
However, I can't figure out how to correctly commit the message only once it is done. How should I commit it?
Another worry I have is that, because of the callbacks, the next message seems to be read before the previous one has finished. I'm afraid that if message x+2 finishes before message x+1, the offset will be set at x+2, so in case of failure x+1 will never be re-executed.
Here is basically what I did so far:
var options = {
  host: connectionString,
  groupId: consumerGroupName,
  id: clientId,
  autoCommit: false
};

var kafka = require("kafka-node");
var ConsumerGroup = kafka.ConsumerGroup;

var consumerGroup = new ConsumerGroup(options, topic);

consumerGroup.on('connect', function() {
  console.log("Consuming Kafka %s, topic=%s", JSON.stringify(options), topic);
});

consumerGroup.on('message', function(message) {
  console.log('%s read msg Topic="%s" Partition=%s Offset=%d', this.client.clientId, message.topic, message.partition, message.offset);
  console.log(message.value);
  doSomeStuff(function() {
    // HOW TO COMMIT????
    consumerGroup.commit(function(err, data) {
      console.log("------ Message done and committed ------");
    });
  });
});

consumerGroup.on('error', function(err) {
  console.log("Error in consumer: " + err);
  close();
});

process.once('SIGINT', function () {
  close();
});

var close = function() {
  // SHOULD SEND 'TRUE' TO CLOSE ???
  consumerGroup.close(true, function(error) {
    if (error) {
      console.log("Consuming closed with error", error);
    } else {
      console.log("Consuming closed");
    }
  });
};
One thing you can do here is to have a retry mechanism for every message you process.
You can consult my answer on this thread:
https://stackoverflow.com/a/44328233/2439404
I consume messages from Kafka using kafka-consumer, batch them together using async/cargo and put them in async/queue (an in-memory queue). The queue takes a worker function as an argument, to which I am passing an async/retryable.
For your problem, you can just use retryable to do the processing of your messages.
https://caolan.github.io/async/docs.html#retryable
This may solve your problem.
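A rough sketch of that idea with the async library, simplified to skip the cargo batching. callMyExternalApi is a hypothetical stand-in for your external API call, and the retry values are placeholders:

var async = require("async");

// Handler for a single Kafka message; invokes the callback with an error
// if the external call fails so the wrapper can retry it.
function processMessage(message, callback) {
  // call the external API here and invoke callback(err) when done
  callMyExternalApi(message.value, callback);
}

// Retry each message up to 5 times, waiting 1s between attempts.
var retryableProcess = async.retryable({ times: 5, interval: 1000 }, processMessage);

// In-memory queue with concurrency 1, so messages are processed (and
// committed) strictly in order.
var workQueue = async.queue(retryableProcess, 1);

consumerGroup.on('message', function (message) {
  workQueue.push(message, function (err) {
    if (err) {
      console.log("Message failed after retries", err);
      return;
    }
    // Commit only after processing (and any retries) has finished.
    consumerGroup.commit(function (commitErr) {
      console.log("------ Message done and committed ------", commitErr || "");
    });
  });
});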

Websocket Message Receiving Failed Detection

Here is the ws library:
https://github.com/websockets/ws/blob/master/lib/WebSocket.js
Now, how can I use the send method in such a way that I can detect that sending the message failed? I have tried a callback and try/catch, but maybe I am missing something.
This is what I am doing now; it can send the message:
BulletinSenderHelper.prototype.sendMessage = function(bulletin, device) {
  var message = JSON.stringify({
    action: 'bulletin:add',
    data: bulletin.data
  });
  if (device.is_active) {
    logger.debug('Sending message to %s', device.id);
    // device.conn is the ws instance. Even though I check that the device is
    // active, sometimes sending still fails; I need to detect that.
    device.conn.send(message);
  } else {
    logger.debug('Client %s is inactive, queuing bulletin', device.id);
    this.queueBulletin(device, bulletin);
  }
};
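For reference, here is the direction I am considering based on my reading of the ws API: a sketch that checks readyState before sending and uses send's completion callback to detect a failed write (not verified in production):

var WebSocket = require('ws');

BulletinSenderHelper.prototype.sendMessage = function(bulletin, device) {
  var self = this;
  var message = JSON.stringify({
    action: 'bulletin:add',
    data: bulletin.data
  });
  var ws = device.conn;

  // Only OPEN sockets can send; ws exposes readyState for this check.
  if (device.is_active && ws.readyState === WebSocket.OPEN) {
    // ws.send accepts an optional callback that is called once the data is
    // flushed, or with an error if the write failed.
    ws.send(message, function(err) {
      if (err) {
        logger.debug('Send to %s failed, queuing bulletin: %s', device.id, err.message);
        self.queueBulletin(device, bulletin);
      }
    });
  } else {
    logger.debug('Client %s is inactive, queuing bulletin', device.id);
    self.queueBulletin(device, bulletin);
  }
};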