RabbitMQ messages processed before acknowledgement - node.js

I have an issue with my rabbit mq setup using node.
A brief gist of the code:
queue.subscribe({ ack: true }, function (msg) {
  // Do some processing
  queue.shift(false);
});
I expect messages to be read and applied serially: the second message should start processing only after the first message has been applied and acknowledged.
But this is not happening. I see that in some cases the second message starts processing before the first message is acknowledged. The default prefetch count is 1, so this should not happen.
What am I missing?

Related

Pulsar: Failure reason while sending message in DLQ

We are using Apache Pulsar's retry topic and dead letter topic to handle failures in event processing. In case of a processing failure, the event is retried 3 times via the retry topic, and if it still fails the message is added to the DLQ topic.
Is there a way to add a failure message to the event when adding it to the DLQ after 3 failures?
Example:
Event:
{
"messageId": "message-1234",
"data": {...}
}
I want this event to have an error field if it goes to the DLQ topic after all retries are exhausted. This will help in understanding the reason for failure while analyzing the DLQ.
Event in DLQ:
{
"messageId": "message-1234",
"data": {...},
"errorReason": "Error reason from exception"
}
When you call reconsumeLaterAsync(), you can pass the properties, which is a customized key-value pair list. The properties will be copied to the message that is sent to the DLQ. Version 2.10.0 or later is required.

Azure Queue GetMessagesAsync does not get results

I am trying to get 32 messages per request from an Azure Queue.
queue.ApproximateMessageCount;
This call returns 1509, telling me the connection is OK and the queue has records. I also checked the queue directly, and it really has 1509 records.
But when I try to retrieve records, I don't get any.
I do the following:
var messages = await queue.GetMessagesAsync(configuration.MessageBatchSize);
if (!messages.Any()) {
return;
}
It always goes into the if and returns.
What is going on here and what am I missing?
To do that, i.e. receive messages in batch mode, I use this kind of code:
var messages = await queueClient?.ReceiveBatchAsync(Max_Messages);
foreach (var message in messages)
{
    await dispatcher.Dispatch(message); // do something with each message
}
But for receiving messages with ReceiveBatchAsync, the queue has to be configured with the EnableBatchedOperations flag set to true.
The ApproximateMessageCount property represents the total number of messages in the queue at that particular moment. It does not mean that all of those messages (max 32 per pull) are ready to be dequeued; you can only use this property to infer how many messages are in the queue.
queue.ApproximateMessageCount;
If you cannot retrieve messages with GetMessagesAsync(numberOfMessages), it means that the messages are not available or are invisible to the current QueueClient.
var cloudQueueMessages = await cloudQueue.GetMessagesAsync(numberOfMessages);
You could try polling the queue after some time to see if messages have come back to the surface.
Note: be sure to set an adequate visibility timeout for any message being dequeued, to avoid indefinite starvation :)

How can you keep track of the last message you read in Azure Message receive

When we receive 100 messages in a queue, how can you keep track of the last message you read?
When do I delete that message? I mean to ask: at what point in the process can we actually call it off?
You should delete the message from the queue when you are done processing the message.
// Get the next message
CloudQueueMessage retrievedMessage = queue.GetMessage();

// Process the message in less than 30 seconds
// and then delete the message
queue.DeleteMessage(retrievedMessage);
If you fail to process the message within 30 seconds or don't call DeleteMessage, it will become visible again. You can change the timeout and the number of messages you take from the queue.
storage-dotnet-how-to-use-queues

RabbitMQ: how to limit consuming rate

I need to limit the rate of consuming messages from rabbitmq queue.
I have found many suggestions, but most of them offer to use the prefetch option. That option doesn't do what I need: even if I set prefetch to 1, the rate is about 6000 messages/sec, which is too many for the consumer.
I need to limit it to, for example, about 70 to 200 messages per second. This means consuming one message every 5-14 ms, with no simultaneous messages.
I'm using Node.JS with amqp.node library.
Implementing a token bucket might help:
https://en.wikipedia.org/wiki/Token_bucket
You can write a producer that publishes to the "token bucket queue" at a fixed rate, with a TTL on the messages (maybe expiring after a second?) or with a maximum queue length equal to your rate per second. Consumers that receive a "normal queue" message must also receive a "token bucket queue" message in order to process it, effectively rate-limiting the application.
NodeJS + amqplib Example:
var queueName = 'my_token_bucket';
rabbitChannel.assertQueue(queueName, { durable: true, messageTtl: 1000, maxLength: bucket.ratePerSecond });
writeToken();

function writeToken() {
  // Buffer.from replaces the deprecated new Buffer(...)
  rabbitChannel.sendToQueue(queueName, Buffer.from(new Date().toISOString()), { persistent: true });
  setTimeout(writeToken, 1000 / bucket.ratePerSecond);
}
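Independent of any broker, the token-bucket policy itself can be sketched in plain JavaScript; this is a minimal in-process illustration (the `TokenBucket` name and its API are made up for this sketch, not part of the answer above):

```javascript
// Minimal token bucket: holds up to `capacity` tokens, refilled
// continuously at `ratePerSecond`. A caller may process one message
// whenever tryRemove() hands out a token.
class TokenBucket {
  constructor(ratePerSecond, capacity) {
    this.rate = ratePerSecond;
    this.capacity = capacity;
    this.tokens = capacity; // start full
    this.last = 0;          // timestamp (ms) of the last refill
  }

  tryRemove(nowMs) {
    // Refill in proportion to elapsed time, capped at capacity.
    const elapsedSec = (nowMs - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.rate);
    this.last = nowMs;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;  // token granted: process one message
    }
    return false;   // bucket empty: caller must wait
  }
}
```

Using a dedicated token queue as in the answer moves exactly this state into RabbitMQ, so multiple consumers can share one bucket.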
I've already found a solution.
I use the nanotimer module from npm to schedule the delays.
I calculate delay = 1 / [messages_per_second] (in nanoseconds).
Then I consume messages with prefetch = 1.
I calculate the real delay as delay - [message_processing_time], and wait that long before sending the ack for the message.
It works perfectly. Thanks to all.
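A sketch of that delayed-ack scheme, assuming amqplib, with plain millisecond setTimeout standing in for nanotimer (the function names are illustrative):

```javascript
// Per-message time budget for a target rate.
function perMessageDelayMs(messagesPerSecond) {
  return 1000 / messagesPerSecond;
}

// How much longer to wait before acking, given how long processing took.
function remainingDelayMs(messagesPerSecond, processingMs) {
  return Math.max(0, perMessageDelayMs(messagesPerSecond) - processingMs);
}

// With prefetch = 1, delaying the ack throttles the consumer: the broker
// will not deliver the next message until the current one is acked.
function consumeThrottled(channel, queueName, messagesPerSecond, handler) {
  channel.prefetch(1);
  channel.consume(queueName, async (msg) => {
    const started = Date.now();
    await handler(msg);
    const wait = remainingDelayMs(messagesPerSecond, Date.now() - started);
    setTimeout(() => channel.ack(msg), wait);
  });
}
```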
See 'Fair Dispatch' in RabbitMQ Documentation.
For example in a situation with two workers, when all odd messages are heavy and even messages are light, one worker will be constantly busy and the other one will do hardly any work. Well, RabbitMQ doesn't know anything about that and will still dispatch messages evenly.
This happens because RabbitMQ just dispatches a message when the message enters the queue. It doesn't look at the number of unacknowledged messages for a consumer. It just blindly dispatches every n-th message to the n-th consumer.
In order to defeat that we can use the prefetch method with the value of 1. This tells RabbitMQ not to give more than one message to a worker at a time. Or, in other words, don't dispatch a new message to a worker until it has processed and acknowledged the previous one. Instead, it will dispatch it to the next worker that is not still busy.
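With amqplib, fair dispatch as described above comes down to one prefetch call before consuming; a minimal sketch (the queue name and handler are placeholders):

```javascript
// Fair dispatch: at most one unacknowledged message per consumer.
// RabbitMQ holds further deliveries until the previous one is acked,
// so a busy worker is skipped in favour of an idle one.
function startFairConsumer(channel, queueName, work) {
  channel.prefetch(1);
  channel.consume(queueName, async (msg) => {
    await work(msg);
    channel.ack(msg); // ack only after the work is done
  }, { noAck: false });
}
```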
I don't think RabbitMQ can provide this feature out of the box.
If you have only one consumer, the whole thing is pretty easy: you just let it sleep between consuming messages.
If you have multiple consumers, I would recommend using some "shared memory" to keep the rate. For example, you might have 10 consumers consuming messages. To keep a 70-200 messages/sec rate across all of them, each makes a call to Redis to check whether it is eligible to process a message; if yes, it updates Redis to show the other consumers that one message is currently being processed.
If you have no control over the consumer, implement option 1 or 2 and publish the messages back to Rabbit. This way the original consumer will consume messages at the desired pace.
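The multi-consumer variant can be sketched with a fixed one-second window; here an in-memory counter stands in for Redis (a real deployment would use something like INCR with an expiry so all consumers share the state):

```javascript
// Shared-rate idea: before processing, each consumer asks a central
// counter whether one of the maxPerSecond slots in the current
// one-second window is still free.
function makeRateLimiter(maxPerSecond) {
  let windowStart = 0; // start (ms) of the current window
  let used = 0;        // slots taken in the current window
  return function tryAcquire(nowMs) {
    if (nowMs - windowStart >= 1000) { // roll over to a new window
      windowStart = nowMs;
      used = 0;
    }
    if (used < maxPerSecond) {
      used += 1;
      return true; // eligible: process one message
    }
    return false;  // over the limit: wait or requeue
  };
}
```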
This is how I fixed mine, with just setTimeout.
I set mine to consume every 200 ms, which consumes 5 messages per second; mine also does an update if the record exists.
channel.consume(transactionQueueName, async (data) => {
  let dataNew = JSON.parse(data.content);
  const processedTransaction = await seperateATransaction(dataNew);
  // delay the ack to avoid duplicate entries - important, don't remove the setTimeout
  setTimeout(function () {
    channel.ack(data);
  }, 200);
});
Done

SysV message queue number increase

I have a scenario where:
1. There is a reader process and a writer process; these processes communicate through a SysV message queue.
2. The writer process is faster than the reader process, i.e. the writer writes messages into the queue faster than the reader reads them and empties the queue. For example, there are 8 messages in the queue (a single message queue), the reader is yet to read one message, and at that moment the writer tries to write (msgsnd) the 9th message into the queue.
3. What will happen? Will any of my messages get overwritten?
4. Or will the last or first message in the queue get overwritten?
5. Or will the entire queue be overwritten?
6. Or will the 9th message be lost?
7. How can I make sure that none of these scenarios happens, so that I will not lose any new incoming message and no existing message gets overwritten?
8. How can I handle this situation?
Regards
Regarding point 3, the msgsnd manpage says:
When msgsnd() fails, errno will be set to one among the following values:
...
EAGAIN The message can't be sent due to the msg_qbytes limit for the queue
and IPC_NOWAIT was specified in msgflg.
therefore no message will be overwritten. If you specify IPC_NOWAIT in the msgsnd() call, the call fails immediately with EAGAIN and the 9th message is not queued, so the writer must keep it and retry later (otherwise it is lost to the queue). If you do not specify IPC_NOWAIT, msgsnd() simply blocks until the reader frees up space in the queue, so no message is ever lost or overwritten.
