What is the order of messages delivered by QueueClient.Receive() - azure

In which order does QueueClient.Receive() deliver messages?
I have been running some tests, and from what I can see, a few of the messages (the top-most ones, I guess) are delivered over and over again if you don't Complete() them.
Is there a way to force it to deliver in a round-robin manner?

When you have a message that is delivered but not completed, it's expected for the message to show up on the queue again once the LockDuration expires and to be consumed again. You have to complete the message. If you don't, it will eventually go into the DLQ, but prior to that your consumer(s) will receive it multiple times.
QueueClient.Receive gets whatever is available on the broker (server). I'm not following the round-robin delivery idea, because it's a queue: you get what's in the queue. As a rule of thumb, I would suggest not relying on the order of messages.
That said, there's an ASB Sessions feature that can preserve and guarantee ordered delivery. In case you're looking for sessions, a similar question was asked before.
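For reference, here's a minimal sketch of the receive-and-complete loop using the classic WindowsAzure.ServiceBus SDK (the same one the question's QueueClient comes from); the connection string, queue path, and Process handler are placeholders:
using System;
using Microsoft.ServiceBus.Messaging;

var client = QueueClient.CreateFromConnectionString("<connection-string>", "myqueue");

while (true)
{
    // Default mode is PeekLock: the message is locked, not removed.
    BrokeredMessage message = client.Receive();
    if (message == null) continue;

    try
    {
        Process(message); // hypothetical handler

        // Without this call, the lock expires after LockDuration and the
        // broker redelivers the message (its DeliveryCount increments).
        message.Complete();
    }
    catch (Exception)
    {
        // Release the lock right away so the message can be retried.
        message.Abandon();
    }
}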

When you create the QueueClient you can specify the receive mode and set it to ReceiveAndDelete:
QueueClient.CreateFromConnectionString(connectionString, path, ReceiveMode.ReceiveAndDelete);
This removes the message from the queue as soon as you receive it, instead of requiring a call to Complete.
If you don't call Complete and don't use ReceiveAndDelete, the order will be:
Get a message (locking it for X seconds)
Get the next message in order (locking it for X seconds)
The first message's lock expires, so you get it again and re-lock it.
Same for the second message, and so on, forever.

Related

AWS SQS: Moving to dead letter queue when error happens in consumer

I have tried using npm packages like sqs-queue-parallel and sqs-consumer for consuming SQS messages in Node.
But now I need a mechanism where, when an error happens while processing a particular message, the message is moved to the dead letter queue.
As of now, it just keeps retrying the message up to the maximum receive count.
Is it possible, with some other npm package, to move a message directly to the dead letter queue whenever an error happens?
I know this is a bit late, but I think the OP is asking for a dynamic policy. That is:
on normal errors -> retry as per the redrive policy;
however, for certain failures you might know you can't recover even if you try a hundred times. In that case -> move the message directly to the dead letter queue.
How to do the latter is presumably what is being asked.
The answer is probably to manually copy the message to the dead letter queue (it behaves just like any other queue in that regard) and remove the message from the source queue afterwards, as sketched below.
I don't believe there's a 'special' way to do this.
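A minimal sketch of that manual move, shown here with the AWS SDK for .NET rather than an npm package (the queue URLs are placeholders; the equivalent sendMessage/deleteMessage calls exist in the JavaScript SDK too):
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

static async Task MoveToDlqAsync(IAmazonSQS sqs, Message message,
                                 string sourceQueueUrl, string dlqUrl)
{
    // Copy the body to the dead letter queue; it's a normal queue.
    await sqs.SendMessageAsync(new SendMessageRequest
    {
        QueueUrl = dlqUrl,
        MessageBody = message.Body
    });

    // Then delete the original so the redrive policy never retries it.
    await sqs.DeleteMessageAsync(new DeleteMessageRequest
    {
        QueueUrl = sourceQueueUrl,
        ReceiptHandle = message.ReceiptHandle
    });
}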
You can configure your SQS queue to move messages to your Dead Letter Queue after any number of failed message receives between 1 and 1000.
To have a message moved to the Dead Letter Queue after only one failed receive, modify your queue's configuration and set the "Maximum Receives" value to 1. This is part of your queue's "Redrive Policy".
See the following AWS documentation on configuring your queue:
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/SQSDeadLetterQueue.html
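If you'd rather set it programmatically than in the console, the same redrive policy can be applied with SetQueueAttributes; a sketch with the AWS SDK for .NET, where the URL and ARN are placeholders:
using System.Collections.Generic;
using Amazon.SQS;
using Amazon.SQS.Model;

var sqs = new AmazonSQSClient();
await sqs.SetQueueAttributesAsync(new SetQueueAttributesRequest
{
    QueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
    Attributes = new Dictionary<string, string>
    {
        // Move a message to the DLQ after a single failed receive.
        ["RedrivePolicy"] = "{\"maxReceiveCount\":\"1\"," +
            "\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789012:dead-messages\"}"
    }
});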
You don't need a new npm package for that; it happens automatically when you finish dealing with a message. For example, if you've been using sqs-consumer, when you finish with the message you call:
done() // removes the message from the queue (success)
or
done(err) // keeps the message in the queue (failure)
So, to move a message from your queue to a dead letter queue, you don't need to do anything else in your code; it's all done in the SQS console:
create a new queue
call it dead-messages (or whatever)
set the "Maximum Receives" value to 1 (that means after one call to "done(error)" the message will be removed from your queue and go to the dead letter queue)
refresh!
go back to your source queue (the original one)
go to "Configure Queue"
set the redrive policy
put in the name that you gave the dead letter queue
That's it! Good luck, and I have to say SQS is a great way to scale tasks.
I think the OP is asking if there is a way to move messages to the DLQ after A SINGLE FAILURE to process the message. Per these two pieces of SQS documentation, I see these two points:
If the source queue has a redrive policy with maxReceiveCount set to 5, and the consumer of the source queue receives a message 6 times without ever deleting it, Amazon SQS moves the message to the dead-letter queue (https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html).
The Maximum Receives value must be between 1 and 100 (https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-dead-letter-queue.html).
Which means that even if you set the Maximum Receives value to 1, your consumer would still receive the message AT LEAST TWICE.
I am not able to find any solution where you can move a failed message to the DLQ after a single failure. I would love to hear other people's thoughts on this.

Azure queue - return timed out messages to the head of the queue

When a message is retrieved from an Azure queue but not deleted from it, the message's visibility timeout expires and the message is (re)added to the end of the queue.
Is there a way to return such messages to the head of the queue instead?
When Azure queue messages re-appear, they don't necessarily get sent to the end of the queue. They just reappear, and at that point there's no real guarantee of order. A message doesn't even get moved from its current position; it simply becomes visible again. Azure Storage queues aren't set up for guaranteed ordering. So no, there's no way to force a message to appear at the head of the queue when it reappears after its invisibility timeout expires.
Also, check out this forum answer from Jai Haridas regarding queue message ordering. Specifically:
The messages in a queue today are sorted by visibility time, so the ordering of messages purely depends on when they are made visible. However, it is important for an app not to assume FIFO order or any specific order, as it may change in the future. You can only rely on the facts that 1) a message will become eligible based on its visibility timeout, and 2) message processing should be made idempotent and use the new UpdateMessage to save state.
UpdateMessage() allows you to modify the queue message (e.g. adding breadcrumbs), so the next time you start processing it, you can pick up at a point beyond "start." Note that you can also adjust the timeout value while the message is still in your possession and invisible, to allow you to keep working on it.
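For example, with the classic Microsoft.WindowsAzure.Storage SDK (queue is an existing CloudQueue, and the breadcrumb format is made up for illustration):
using System;
using Microsoft.WindowsAzure.Storage.Queue;

CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromMinutes(1));

// Record how far processing got and extend the invisibility window,
// so whoever picks this up next can resume instead of starting over.
msg.SetMessageContent("stage:thumbnail-done|" + msg.AsString); // hypothetical breadcrumb
queue.UpdateMessage(
    msg,
    TimeSpan.FromMinutes(1), // fresh visibility timeout
    MessageUpdateFields.Content | MessageUpdateFields.Visibility);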

Is it possible to get a message from an azure storage queue twice?

I know that if a worker fails to process a message off the queue, the message becomes visible again and you have to code against this (idempotency). But is it possible for a worker to dequeue a message twice? Based on my logging, I seem to be seeing this behavior and I'm not sure why. I'm even deleting the message in between going to get the next message, and it seems like I got it again.
Yes, you can dequeue same message twice. This can happen for two reasons:
Worker A dequeues Message B and the invisibility timeout expires. Message B becomes visible again and Worker C dequeues it, invalidating Worker A's pop receipt. Worker A finishes its work, goes to delete Message B, and an error is thrown. This is the most common case.
In certain conditions (very frequent queue polling) you can get the same message twice from a GetMessage call. This is a type of race condition that, while rare, does occur. Workers A and B are polling very quickly, hit the queue simultaneously, and both get the same message. This used to be much more common (in the SDK 1.0 time frame) under high-polling scenarios, but it has become much rarer in later storage updates (I can't recall seeing it recently).
That being said, if you only have one worker popping messages, then you are queueing the message twice; 1 and 2 only happen when you have more than one worker. A sketch of case 1 follows.
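A sketch of case 1 with the classic Microsoft.WindowsAzure.Storage SDK, assuming queue is an existing CloudQueue and DoLongRunningWork is a hypothetical handler that outlives the visibility timeout:
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

// Worker A: dequeue with a 30-second visibility timeout.
CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromSeconds(30));

DoLongRunningWork(msg); // hypothetical; takes longer than 30 seconds

try
{
    // By now the message has reappeared and may have been dequeued by
    // Worker C, which invalidates Worker A's pop receipt.
    queue.DeleteMessage(msg);
}
catch (StorageException e) when (e.RequestInformation.HttpStatusCode == 404)
{
    // Pop receipt no longer valid: another worker owns the message now.
}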
You shouldn't be able to dequeue it twice. And if I recall correctly, even deleting it twice shouldn't be possible, because the pop receipt changes after the second dequeue and lock.
As SilverNinja suggests, I'd look to see if perhaps the message was inadvertently queued twice.
Do you have more than one worker role?
It is possible (especially with processes that take a while) that the visibility timeout on the queue item could expire before your role has finished processing it. In this case another identical role could pick up the same message (which is effectively what you need to allow for: you do not want it to be a problem if the same message is processed multiple times).
At this point the first role will finish and delete the message, and then the other role that picked it up after the timeout will finish and attempt to delete it as well. Off the top of my head I don't recall exactly what happens when a role attempts to delete an already-deleted message.

Is there any issue with resending a message back to the Azure queue

I've got a scheduler and some workers in Azure. The scheduler puts messages into a queue and the workers pull those messages and work on them. I've now come into a scenario where I will need to move some data from table storage to our database once a certain threshold has been reached. These items need to be processed in order, oldest first; once that threshold is met, all the other items are processed in order. The current message that triggered the transfer needs to be stuffed at the end of the line and reprocessed.
So, to the meat of my question...
Is it fine to simply resend the message to the queue as is, or is there a potential for that to cause problems?
queueProvider.SendMessage(message);
A co-worker mentioned that he "thought he might have read something about needing to do something special." I haven't seen anything to confirm his suspicions yet, however, so I thought I would pose the question here just to be safe.
The short answer is that it is fine. If you have a CloudQueueMessage, you can just send it to any queue (it is just a REST request at the end of the day). Every time you call AddMessage(), a new message ID is created (it might have the same pop receipt, but that doesn't matter). That being said, there are some things you might want to take care of or investigate:
If you pop a message off one queue and push it to another queue (or the same one), you should delete the first message. Merely popping it only sets the invisibility timeout; the original will reappear soon, and you will then have identical message content in two places. So, if I pop a message and immediately push it again without deleting the original, I now have two messages in the queue with identical content (see the sketch below).
You can now update messages. This might be appropriate for you if you need ordering: you can indicate on the message itself, in metadata or content, what stage of processing it is in, and with a thoughtful implementation you get some ordering here.
It is recommended that all logic inside the consumer of the queue be idempotent, since a message can be picked up more than once. Keep in mind that the queue service guarantees that a message will be delivered AT LEAST ONCE, so you could end up duplicating messages with this approach.
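A sketch of the pop-push-delete sequence from the first point, using the raw storage SDK (the question's queueProvider wrapper presumably does something similar under the covers):
using Microsoft.WindowsAzure.Storage.Queue;

CloudQueueMessage msg = queue.GetMessage();

// Re-enqueue a copy at the back of the line...
queue.AddMessage(new CloudQueueMessage(msg.AsString)); // new message, new ID

// ...then delete the original so it doesn't reappear as a duplicate
// when its invisibility timeout expires.
queue.DeleteMessage(msg);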

Azure Queue unique message

I would like to make sure that I don't insert a message to the queue multiple times. Is there any ID/Name I can use to enforce uniqueness?
vtortola pretty much covered it, but I wanted to add a bit more detail on why it's at-least-once delivery.
When you read a queue item, it's not removed from the queue; instead, it becomes invisible but stays in the queue. That invisibility period defaults to 30 seconds (max: 2 hours). During that window, the code that retrieved the item has that much time to process whatever command was in the queue message and delete the queue item.
Assuming the queue item is deleted before the timeout is reached, all is well. However, once the timeout is reached, the queue item becomes visible again and the code holding it may no longer delete it. At that point, someone else can read the same queue message and re-process it.
Because a queue message can time out and re-appear:
Your queue processing must be idempotent: operations on a queue message must result in the same outcome no matter how many times the message is processed (such as rendering a thumbnail for a photo).
You need to think about timeout adjustments. You might find that commands are valid but processing is taking too long (maybe your 45-second thumbnail-rendering code worked just fine until someone uploaded a 25MP image).
You need to think about poison messages: those that will never process correctly. Maybe they cause an exception to be thrown, or have some invalid condition that causes the message processor to abort, which leads to the message eventually re-appearing in the queue. There's a property called DequeueCount; consider checking it upon reading a queue item and, if it equals, say, 3, pushing the message into a table or blob and sending yourself a notification to spend some time debugging that message offline, as sketched below.
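A sketch of that DequeueCount check with the classic storage SDK; poisonContainer, Process, and NotifyOperator are hypothetical:
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Queue;

CloudQueueMessage msg = queue.GetMessage();
if (msg != null)
{
    if (msg.DequeueCount >= 3)
    {
        // Likely poison: park the content durably for offline debugging
        // and delete it so it stops cycling through the queue.
        poisonContainer.GetBlockBlobReference(msg.Id).UploadText(msg.AsString);
        NotifyOperator(msg.Id); // hypothetical alerting hook
    }
    else
    {
        Process(msg); // hypothetical handler
    }
    queue.DeleteMessage(msg);
}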
More details on the low-level Get Messages REST API are here; this will give you more insight into Windows Azure queue message handling.
Azure queues don't ensure message order, nor message uniqueness. Messages will be processed "at least once", but nothing ensures a message won't be processed twice, so there is no "at most once" guarantee.
You should be ready to receive the same message twice. You can put an ID in the body of the message as part of your data.
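A minimal sketch of the ID-in-the-body approach; processedIds (a durable set keyed by your ID, e.g. a table) and Process are hypothetical:
using System;
using Microsoft.WindowsAzure.Storage.Queue;

// Producer: embed your own unique ID in the message body.
string id = Guid.NewGuid().ToString("N");
queue.AddMessage(new CloudQueueMessage(id + "|" + payload));

// Consumer: handle each ID at most once, even if the message arrives twice.
CloudQueueMessage msg = queue.GetMessage();
string msgId = msg.AsString.Split('|')[0];
if (processedIds.Add(msgId)) // hypothetical dedup store; Add returns false for duplicates
{
    Process(msg); // hypothetical handler
}
queue.DeleteMessage(msg);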
