Azure Queue GetMessagesAsync does not get results

I'm trying to retrieve 32 messages per request from an Azure storage queue.
queue.ApproximateMessageCount;
This property returns 1509, telling me the connection is fine and the queue has messages. I also checked the queue directly and it really does contain 1509 messages.
But when I try to retrieve messages, I don't get any back.
I do the following:
var messages = await queue.GetMessagesAsync(configuration.MessageBatchSize);
if (!messages.Any()) {
    return;
}
It always enters the if and returns.
What is going on here and what am I missing?

To do that, receiving messages in batch mode, I use this kind of code:
var messages = await queueClient?.ReceiveBatchAsync(Max_Messages);
foreach (var message in messages)
{
    await dispatcher.Dispatch(message); // do something with each message
}
But for receiving messages with ReceiveBatchAsync, the queue has to be configured with the EnableBatchedOperations flag set to true.
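If that flag isn't set on your queue, here is a minimal sketch of creating it with the flag enabled (assuming the legacy Microsoft.ServiceBus SDK that ReceiveBatchAsync belongs to; the queue name is illustrative):
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
if (!namespaceManager.QueueExists("my-queue"))
{
    var description = new QueueDescription("my-queue")
    {
        EnableBatchedOperations = true // required for batched receives
    };
    namespaceManager.CreateQueue(description);
}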

The ApproximateMessageCount property represents the total number of messages in the queue at that particular moment; it does not mean they are all ready to be dequeued (a single pull returns at most 32 messages). You can use this property to infer how many messages are in the queue.
queue.ApproximateMessageCount;
If you could not retrieve any messages with GetMessagesAsync(numberOfMessages), it means no messages are currently visible to your QueueClient.
var cloudQueueMessages = await cloudQueue.GetMessagesAsync(numberOfMessages);
You could try polling the queue after some time to see whether messages have become visible again.
Also, be sure to set an adequate visibility timeout for any message being dequeued, to avoid indefinite starvation. :)
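For example, a minimal polling sketch, assuming the same WindowsAzure.Storage SDK as in the question (the 30-second visibility timeout and 5-second back-off are illustrative values):
// Ask for up to 32 messages and hide each retrieved one for 30 seconds.
var messages = (await queue.GetMessagesAsync(32, TimeSpan.FromSeconds(30), null, null)).ToList();
while (!messages.Any())
{
    // Nothing visible right now: another receiver may still hold the messages
    // under its visibility timeout. Back off and poll again.
    await Task.Delay(TimeSpan.FromSeconds(5));
    messages = (await queue.GetMessagesAsync(32, TimeSpan.FromSeconds(30), null, null)).ToList();
}
foreach (var message in messages)
{
    // ... process the message ...
    await queue.DeleteMessageAsync(message); // delete before the timeout expires
}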

Related

RateLimit and ibmmq module Node.js

I'm using the ibmmq module https://github.com/ibm-messaging/mq-mqi-nodejs.
I am trying to build an application that gets one message from a queue every 500 ms.
There is an option getLoopPollTimeMs, but it only applies when the queue is empty and messages arrive later.
I've tried to use the limiter package https://www.npmjs.com/package/limiter:
mq.Get(openQueue.ref as mq.MQObject, mqmd, gmo, this.getCB.bind(this));

async getCB(...) {
    const remainingMessages = await this.limiter.removeTokens(1);
    ...
}
So the application reads a message from the queue and processes it.
But at the same time it reads all the other messages, which then wait on the limiter, because the callback is asynchronous.
I need it to read the next message only when the previous one has been processed.
I've tried GetSync, but the limiter behaves absolutely incorrectly with it, and when the get is synchronous the other processes in the application stop working.
How can I get only one message from the queue? Is the only way to call mq.GetDone(hObj) every time in getCB and then reconnect with mq.Get to the queue again in setInterval? Any advice?
Upd: The approach with mq.GetDone(hObj) isn't working. The application reads one message, processes it, and then, while reading the second message from the queue, crashes with the error:
terminate called after throwing an instance of 'Napi::Error'
what(): GetDone: MQCC = MQCC_FAILED [2] MQRC = MQRC_HOBJ_ERROR [2019]
Aborted
The queue is closed, but getCB is still working.
As per the comments, it's possible to use tuning parameters; see https://github.com/ibm-messaging/mq-mqi-nodejs and lines 196-202 of https://github.com/ibm-messaging/mq-mqi-nodejs/blob/148b70db036c80f442adb34769d5d239a6f05b65/lib/mqi.js#L575
Again, as per the comments, you could use a combination of
mq.setTuningParameters({getLoopDelayTimeMs: 2000, maxConsecutiveGets: 1})
for a throttle limit of one message every 2 seconds.

AzureServiceBus - Publish messages in a guaranteed order

When publishing messages to a service bus topic, if I loop over 3 messages:
{ A, B, C }
and await SendAsync() each time, I'd expect them to be published to the topic in this order:
{ A, B, C }
public async Task PublishMessage(string topic, string json, string sessionId)
{
    var topicClient = new TopicClient(_connectionString, topic);
    var busMessage = new Message(Encoding.UTF8.GetBytes(json));
    busMessage.SessionId = sessionId;
    await topicClient.SendAsync(busMessage);
}
A number of employees have suggested this isn't guaranteed to be the case, and that in certain scenarios (e.g. large messages) the publishing order isn't preserved. I've never encountered such a scenario myself. Does ASB not guarantee publish ordering even when the sending of messages is awaited, as above?
This article https://devblogs.microsoft.com/premier-developer/ordering-messages-in-azure-service-bus/ uses this quote:
"While Azure Service Bus allows for a FIFO approach (First-In-First-Out), we cannot guarantee that messages are entered in the order we want them to be processed"
This all seems quite baffling to me, as I'd have assumed SendAsync() would only return a successful result once the message has been added to the topic. Do we really need to write layers of complexity around this to manage it?
Please note this only relates to the publishing of messages, we use SessionIds to handle consumption.
Even if you wait for one message to be sent and only then send the next one, FIFO is not guaranteed; there are too many possible causes of reordering. To ensure guaranteed ordering, you need to use session-enabled queues or subscriptions.
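On the consuming side, a minimal sketch of session-based ordered processing, assuming the same Microsoft.Azure.ServiceBus SDK as the question (the topic and subscription names are illustrative):
var client = new SubscriptionClient(_connectionString, "my-topic", "my-subscription");
// Within a single session, messages are handed to this callback one at a time,
// in the order they were enqueued.
client.RegisterSessionHandler(
    async (session, message, token) =>
    {
        Console.WriteLine($"{session.SessionId}: {Encoding.UTF8.GetString(message.Body)}");
        await session.CompleteAsync(message.SystemProperties.LockToken);
    },
    new SessionHandlerOptions(args => Task.CompletedTask)
    {
        MaxConcurrentSessions = 1, // one session at a time
        AutoComplete = false
    });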

RabbitMQ: how to limit consuming rate

I need to limit the rate of consuming messages from rabbitmq queue.
I have found many suggestions, but most of them propose using the prefetch option. That option doesn't do what I need: even with prefetch set to 1, the rate is about 6000 messages/sec. That is far too many for the consumer.
I need to limit it to roughly 70 to 200 messages per second. That means consuming one message every 5-14 ms, with no simultaneous messages.
I'm using Node.js with the amqp.node library.
Implementing a token bucket might help:
https://en.wikipedia.org/wiki/Token_bucket
You can write a producer that publishes to the "token bucket queue" at a fixed rate, either with a TTL on each message (maybe expiring after a second?) or with a maximum queue length equal to your rate per second. Consumers that receive a "normal queue" message must also receive a "token bucket queue" message before processing it, effectively rate-limiting the application.
NodeJS + amqplib Example:
var queueName = 'my_token_bucket';
rabbitChannel.assertQueue(queueName, {durable: true, messageTtl: 1000, maxLength: bucket.ratePerSecond});
writeToken();

function writeToken() {
    // Publish one token, then schedule the next so tokens appear at a fixed rate.
    rabbitChannel.sendToQueue(queueName, Buffer.from(new Date().toISOString()), {persistent: true});
    setTimeout(writeToken, 1000 / bucket.ratePerSecond);
}
I've already found a solution.
I use the nanotimer module from npm to calculate delays.
Then I calculate delay = 1 / [message_per_second] in nanoseconds.
Then I consume messages with prefetch = 1.
Then I calculate the real delay as delay - [processing_message_time].
Then I set a timeout equal to that real delay before sending the ack for the message.
It works perfectly. Thanks to all.
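For illustration, the same delay-then-ack idea sketched in C# with the RabbitMQ.Client library (the pattern is not tied to Node; the queue name and rate variables are illustrative):
// One unacknowledged message at a time; the ack releases the next delivery.
channel.BasicQos(0, prefetchCount: 1, global: false);
var delay = TimeSpan.FromMilliseconds(1000.0 / messagesPerSecond);
var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, ea) =>
{
    var started = DateTime.UtcNow;
    // ... process ea.Body here ...
    var remaining = delay - (DateTime.UtcNow - started);
    if (remaining > TimeSpan.Zero)
        Thread.Sleep(remaining); // pad processing time out to the target interval
    channel.BasicAck(ea.DeliveryTag, multiple: false);
};
channel.BasicConsume(queueName, autoAck: false, consumer);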
See 'Fair Dispatch' in RabbitMQ Documentation.
For example in a situation with two workers, when all odd messages are heavy and even messages are light, one worker will be constantly busy and the other one will do hardly any work. Well, RabbitMQ doesn't know anything about that and will still dispatch messages evenly.
This happens because RabbitMQ just dispatches a message when the message enters the queue. It doesn't look at the number of unacknowledged messages for a consumer. It just blindly dispatches every n-th message to the n-th consumer.
In order to defeat that we can use the prefetch method with the value of 1. This tells RabbitMQ not to give more than one message to a worker at a time. Or, in other words, don't dispatch a new message to a worker until it has processed and acknowledged the previous one. Instead, it will dispatch it to the next worker that is not still busy.
I don't think RabbitMQ can provide this feature out of the box.
If you have only one consumer, the whole thing is pretty easy: you just let it sleep between consuming messages.
If you have multiple consumers, I would recommend using some "shared memory" to keep the rate. For example, you might have 10 consumers consuming messages. To keep a 70-200 messages/sec rate across all of them, each one makes a call to Redis to check whether it is eligible to process a message; if yes, it updates Redis to show the other consumers that a message is currently being processed.
If you have no control over the consumer, implement option 1 or 2 and publish the messages back to Rabbit. That way the original consumer will consume messages at the desired pace.
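A sketch of that shared-counter idea using Redis from C# with StackExchange.Redis (the key naming and the fixed one-second window are my assumptions, not part of the answer):
// Shared per-second counter; every consumer calls this before processing a message.
bool TryAcquire(IDatabase redis, int maxPerSecond)
{
    var key = $"consume-rate:{DateTimeOffset.UtcNow.ToUnixTimeSeconds()}";
    var count = redis.StringIncrement(key);        // atomic across all consumers
    redis.KeyExpire(key, TimeSpan.FromSeconds(2)); // old windows expire on their own
    return count <= maxPerSecond;                  // false -> back off and retry
}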
This is how I fixed mine, with just setTimeout.
I set mine to consume a message every 200 ms, which works out to 5 messages per second; mine does an update if the record exists.
channel.consume(transactionQueueName, async (data) => {
    let dataNew = JSON.parse(data.content);
    const processedTransaction = await seperateATransaction(dataNew);
    // delay ack to avoid duplicate entry !important dont remove the settimeout
    setTimeout(function () {
        channel.ack(data);
    }, 200);
});
Done

Manually publish messages to dead-letter queue?

Why would someone want to do that? I have to unit-test the exception-handling mechanism in our application.
I presumed that the dead-letter queue is literally an Azure Service Bus queue, where I could publish messages using QueueClient:
string dlQ = @"sb://**.servicebus.windows.net/**/Subscriptions/DefaultSubscription/$DeadLetterQueue";
string connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
NamespaceManager _namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
QueueDescription qd = _namespaceManager.GetQueue(dlQ);
var queueClient = QueueClient.CreateFromConnectionString(connectionString, "DefaultSubscription/$DeadLetterQueue");
BrokeredMessage brokeredMessage = new BrokeredMessage("Message to PublishToDLQ");
try
{
    queueClient.Send(brokeredMessage);
}
catch (Exception)
{
}
But I get MessagingEntityNotFoundException. What could be wrong?
You would never want to publish directly to a dead letter queue. It's where poisoned messages that can't be processed are placed.
There are two ways of placing messages onto the dead-letter queue: the Service Bus itself dead-letters messages that have exceeded the maximum number of delivery attempts, and you can also explicitly dead-letter a message that you have received using the DeadLetter() method.
Create your messages with a very short TTL via the BrokeredMessage.TimeToLive property.
The Subscription must have EnableDeadLetteringOnMessageExpiration set to true.
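Putting those two pieces together, a rough sketch (legacy Microsoft.ServiceBus.Messaging SDK, as in the question; entity names are illustrative, and EnableDeadLetteringOnMessageExpiration is assumed to already be set on the subscription):
// Send a message that expires almost immediately...
var topicClient = TopicClient.CreateFromConnectionString(connectionString, "my-topic");
topicClient.Send(new BrokeredMessage("force me to the DLQ")
{
    TimeToLive = TimeSpan.FromSeconds(1)
});
// ...then read it back from the subscription's dead-letter sub-queue.
var dlqPath = SubscriptionClient.FormatDeadLetterPath("my-topic", "my-subscription");
var receiver = MessagingFactory.CreateFromConnectionString(connectionString)
                               .CreateMessageReceiver(dlqPath);
var deadLettered = receiver.Receive(TimeSpan.FromSeconds(30));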
Though late here, adding to the answers of @Mikee and @Ben Morris may help someone. You can make use of @Mikee's suggestion of using message.DeadLetter() or message.DeadLetterAsync() to dead-letter a message. Another option is to set a very short (or zero-second) TimeToLive to move the messages to the dead-letter queue.
After you do either of these and try to view the messages at the active end of the queue, you may sometimes still find the message there (which is what you are facing). The reason is that messages dead-lettered due to TTLExpiredException, HeaderSizeExceeded, or any other system-defined error, as well as messages dead-lettered manually via the DeadLetter() methods, are cleaned up periodically by an asynchronous "garbage collection" program. This does not happen immediately, as we might expect.
When you perform a Peek operation, you can still see the message in the active queue. You have to wait for the garbage collector to run, or you can perform a Receive operation, which forces the garbage collector to run first, moving the messages to the dead-letter queue before retrieval is done.

Azure Queue Storage - Mark messages as visible immediately after calling CloudQueue.GetMessages()

Problem:
I am reading messages from an Azure Storage Queue and then inserting them into a Storage Table using a Worker Role.
I want to read in messages but only process them if there are at least 100, to optimize the Storage Table batch insert. If there are fewer than 100 messages, I want to cancel the processing and make the messages immediately visible on the queue again for the next read.
Question:
Is it possible to mark a message which has just been read by CloudQueue.GetMessages(...) as visible again, without having to wait for its visibility timeout to expire?
Code: (in WorkerRole.cs)
public override void Run()
{
    while (true)
    {
        var messages = queue.GetMessages(100);
        if (messages.Count() >= 100)
        {
            // This will process, insert into a table, and delete from the queue
            ProcessMessages(messages);
        }
        else
        {
            //!!! MARK MESSAGES AS VISIBLE ON THE QUEUE
            System.Threading.Thread.Sleep(1000);
        }
    }
}
Thanks
You can check the queue's ApproximateMessageCount property, which will give you a rough idea of how many messages are waiting in the queue.
Also: you can set a message's invisibility timeout to something small (maybe 5-10 seconds?). After that period, the message becomes visible again. You can also modify the invisibility timeout to something shorter after you read a message.
Just remember that reading from the queue counts as a transaction, as does updating messages (e.g. updating the invisibility timeout).
Waiting for 100 messages may be a non-optimal optimization. Also, GetMessages() is limited to 32 messages, so it doesn't make sense to wait for 100. And transactions are really, really cheap (a penny per 100K transactions), so I don't necessarily see the value here.
Reset the visibility timeout to zero. That should do the trick.
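In code, that reset could look like this, using the same WindowsAzure.Storage SDK as the question (whether the service accepts a zero timeout here is an assumption worth verifying):
// Make the just-read messages visible to other readers again right away.
foreach (var message in messages)
{
    queue.UpdateMessage(message, TimeSpan.Zero, MessageUpdateFields.Visibility);
}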
