Azure ServiceBus message reprocessing using MaxDeliveryCount - azure

I'm trying to use the ServiceBus subscription's MaxDeliveryCount to implement retry of message processing. Not sure it's the best idea, but I don't want to lose messages.
Scenario:
1. Having one topic with two subscribers, one sending (A) and the other (B) receiving messages, using Peek-Lock.
2. Both subscriptions have MaxDeliveryCount = 10 configured.
3. Both clients use SubscriptionClient.ReceiveBatch(50, TimeSpan.FromMilliseconds(50)) to get the messages from the subscription.
4. A sends 5 messages, having payloads "1", "2", ..., "5".
5. The first message ("1") fails to process on B and is abandoned (BrokeredMessage.Abandon()). Reason: for internal reasons, the app can't process this message right now.
6. It's not yet dead-lettered, since DeliveryCount < MaxDeliveryCount.
7. Next, since message "1" previously failed, only one message is requested, and it's expected to be message "1": SubscriptionClient.ReceiveBatch(1, TimeSpan.FromMilliseconds(50)).
8. After 2-3 repetitions of step 7, instead of receiving message "1", message "2" is received. Message "2" is also abandoned, since message "1" is expected.
9. Then message "3" is received. Message "3" is also abandoned, since message "1" is expected.
10. And so on.
It seems that, in this scenario, Service Bus is delivering the messages in a round-robin manner.
Is this the intended behavior of ServiceBus?
I am aware of the ongoing debate about whether Service Bus guarantees ordered delivery or not. For this application it's really important that messages are processed in the same order they are sent.
Any ideas how reprocessing of message "1" could be performed until DeliveryCount reaches MaxDeliveryCount, before processing message "2"?

Firstly, as Thomas shared in his comment, you could try setting the SupportOrdering property to true for the topic:
TopicDescription td = new TopicDescription("TopicTest");
td.SupportOrdering = true;
And if the subscription client receives a message and calls the Abandon method to abandon the lock on the peek-locked message, we can call the Receive method again to get the same message back.
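A minimal sketch of that receive/abandon/receive cycle, assuming the classic WindowsAzure.ServiceBus SDK (the connection string and entity names are placeholders):

using System;
using Microsoft.ServiceBus.Messaging;

var client = SubscriptionClient.CreateFromConnectionString(
    connectionString, "TopicTest", "SubscriptionB", ReceiveMode.PeekLock);

BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(5));
Console.WriteLine("Received '{0}', DeliveryCount={1}",
    message.GetBody<string>(), message.DeliveryCount);

// Abandon releases the peek-lock; the broker makes the message
// available again and increments its DeliveryCount.
message.Abandon();

// With SupportOrdering enabled, receiving again should return the
// same message, now with an incremented DeliveryCount.
BrokeredMessage retry = client.Receive(TimeSpan.FromSeconds(5));
Console.WriteLine("Received '{0}', DeliveryCount={1}",
    retry.GetBody<string>(), retry.DeliveryCount);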
On the other hand, if possible, you could try to combine the complete set of work steps, in a specific order, into a single message instead of splitting the steps across multiple messages. You could then control the processing order in your own code logic rather than rely on the Service Bus infrastructure to provide this guarantee; see the sketch below.
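For illustration, a minimal sketch of that approach (topicClient, subscriptionClient, and ProcessStep are assumed to exist; the semicolon-separated payload format is just an example):

using Microsoft.ServiceBus.Messaging;

// Sender: pack the ordered steps into a single message.
var steps = new[] { "1", "2", "3", "4", "5" };
topicClient.Send(new BrokeredMessage(string.Join(";", steps)));

// Receiver: process the steps in order within one delivery.
BrokeredMessage received = subscriptionClient.Receive();
foreach (var step in received.GetBody<string>().Split(';'))
    ProcessStep(step); // hypothetical per-step handler
received.Complete();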

Related

Google PubSub: drop nacked message after n retries

Is there a way to configure a pull subscription such that messages which caused an error and were nacked are re-queued (and thus redelivered) no more than n times?
Ideally, if the last processing attempt also fails, I would like to handle that case (for example, log that this message will no longer be processed and will be dropped).
Or is it perhaps possible to find out how many times a received message has already been delivered for processing?
I use node.js. I can see a lot of different options in the source code but am not sure how I should achieve the desired behaviour.
Update: Cloud Pub/Sub now supports Dead Letter Queues, which can be used to drop nacked messages after a configurable number of retries.
Original answer: there is no way in Google Cloud Pub/Sub to automatically drop messages that were redelivered some designated number of times. A message will stop being delivered once its retention deadline has passed (by default, seven days). Likewise, Pub/Sub does not keep track of, or report, the number of times a message was delivered.
If you want to handle these kinds of messages, you'd need to maintain persistent storage keyed by message ID that you could use to keep track of the delivery count. If the delivery count exceeds your desired threshold, you could write the message to a separate topic that you use as a dead letter queue, and then acknowledge the original message.
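For illustration, a rough sketch of that bookkeeping (in C#, to match the other snippets on this page, although the question uses Node.js). The ConcurrentDictionary stands in for the persistent store you would use in production, and ProcessAsync / ForwardToDeadLetterTopicAsync are hypothetical helpers:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

static readonly ConcurrentDictionary<string, int> DeliveryCounts =
    new ConcurrentDictionary<string, int>();
const int MaxAttempts = 5;

// Returns true to ack the message, false to nack it for redelivery.
static async Task<bool> HandleAsync(string messageId, byte[] payload)
{
    int attempt = DeliveryCounts.AddOrUpdate(messageId, 1, (id, n) => n + 1);
    if (attempt > MaxAttempts)
    {
        // Give up: publish to our own dead-letter topic, then ack the original.
        await ForwardToDeadLetterTopicAsync(messageId, payload); // hypothetical helper
        DeliveryCounts.TryRemove(messageId, out _);
        return true;
    }
    try
    {
        await ProcessAsync(payload); // hypothetical business logic
        DeliveryCounts.TryRemove(messageId, out _);
        return true;  // ack on success
    }
    catch (Exception)
    {
        return false; // nack: Pub/Sub redelivers, and the count above grows
    }
}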

What is the order of messages delivered by QueueClient.Receive()

In which order does QueueClient.Receive() deliver messages?
I have been running some tests, and from what I can see a few of the messages (the top-most ones, I guess) are delivered over and over again if you don't Complete() them.
Is there a way to force it deliver in a Round Robin manner?
When you have a message that is delivered but not completed, it's expected for the message to show up on the queue again once its LockDuration expires, and to be consumed again. You have to complete the message. If you don't, it will eventually go into the DLQ, but prior to that your consumer(s) will receive it multiple times.
QueueClient.Receive gets whatever is available on the broker (server). I'm not following the round-robin delivery idea, because it's a queue: you get what's in the queue. As a rule of thumb, I would suggest not relying on the order of messages.
That said, there's an ASB Sessions feature that can preserve and guarantee ordered delivery. In case you're looking for sessions, a similar question was asked before.
When you create the QueueClient you can specify the receive mode and set it to ReceiveAndDelete:
QueueClient.CreateFromConnectionString(connectionString, path, ReceiveMode.ReceiveAndDelete);
This will enable you to remove the message from the queue as soon as you receive it instead of having to call Complete.
If you don't call Complete and don't use ReceiveAndDelete, the sequence will be:
Get a message (locking it for X seconds).
Get the next message in order (locking it for X seconds).
The first message's lock expires, so you get it again and relock it.
The same happens for the second message, and so on, forever.
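To break that cycle under Peek-Lock, complete (or dead-letter) each message once you're done with it. A minimal sketch with the classic WindowsAzure.ServiceBus SDK (the connection string, queue name, and Process are placeholders):

using System;
using Microsoft.ServiceBus.Messaging;

var client = QueueClient.CreateFromConnectionString(connectionString, "myqueue"); // PeekLock is the default

while (true)
{
    BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(5));
    if (message == null)
        break;                // nothing left to read

    try
    {
        Process(message);     // hypothetical handler
        message.Complete();   // removes the message from the queue; no redelivery
    }
    catch (Exception)
    {
        message.Abandon();    // releases the lock immediately for redelivery
    }
}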

AWS SQS: Moving to dead letter queue when error happens in consumer

I have tried using npm packages like sqs-queue-parallel & sqs-consumer for consuming SQS messages in Node.js.
But now I need a mechanism whereby, when an error happens while processing a particular message, the message is moved to the dead letter queue.
As of now, it keeps retrying the message up to the maximum receive count.
Is it possible, with some other npm package, to move a message directly to the dead letter queue whenever an error happens?
I know this is a bit late, but I think the OP is asking for a dynamic policy. That is:
On normal errors: retry as per the redrive policy.
However, for certain failures you might know you can't recover even if you try a hundred times. In that case: move the message directly to the dead letter queue.
How to do the latter is presumably what is being asked.
The answer is probably to manually copy the message to the dead letter queue (it behaves just like any other queue in that regard) and remove the message from the source queue afterwards; see the sketch below.
I don't believe there's a 'special' way to do this.
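A sketch of that manual move, written with the AWS SDK for .NET for consistency with the other snippets on this page (the queue URLs are placeholders, and message is the Message you just received):

using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

static async Task MoveToDeadLetterAsync(IAmazonSQS sqs, Message message)
{
    var dlqUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/dead-messages";   // placeholder
    var sourceUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/source-queue"; // placeholder

    // Copy the body (and any attributes you need) to the dead letter queue...
    await sqs.SendMessageAsync(new SendMessageRequest
    {
        QueueUrl = dlqUrl,
        MessageBody = message.Body
    });

    // ...then delete the original so the redrive policy never retries it.
    await sqs.DeleteMessageAsync(sourceUrl, message.ReceiptHandle);
}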
You can configure your SQS queue to move messages to your Dead Letter Queue after any number of failed message receives between 1 and 1000.
To have a message moved to the Dead Letter Queue after only one failed receive, modify your queue's configuration and set the "Maximum Receives" value to 1. This is part of your queue's "Redrive Policy".
See the following AWS documentation on configuring your queue:
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/SQSDeadLetterQueue.html
You don't need a new npm package for that; it happens automatically when you finish dealing with a message. For example, if you are using sqs-consumer, when you finish with the message you call:
done() // removes the message from the queue (success)
or
done(err) // keeps the message in the queue
So, to move messages from your queue to a dead letter queue, you don't need to do anything else in your code, only in the SQS console:
1. Create a new queue.
2. Call it dead-messages (or whatever).
3. Set the "Maximum Receives" value to 1 (that means after one call to done(error) the message will be removed from your queue and go to the dead letter queue).
4. Refresh!
5. Go back to your source queue (the original one).
6. Go to Configure Queue.
7. Set the Redrive Policy.
8. Put in the name that you gave the dead letter queue.
That's it! Good luck, and I have to say SQS is a great way to scale tasks.
I think the OP is asking whether there is a way to move messages to the DLQ after A SINGLE FAILURE to process the message. Per these two SQS documentation pages, I see these two points:
if the source queue has a redrive policy with maxReceiveCount set to 5, and the consumer of the source queue receives a message 6 times without ever deleting it, Amazon SQS moves the message to the dead-letter queue (https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html).
Maximum receives value must be between 1 and 100 (https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-dead-letter-queue.html)
This means that even if you set the Maximum Receives value to 1, your consumer would still receive the message AT LEAST TWICE.
I am not able to find any solution where you can move a failed message to the DLQ after a single failure. I would love to hear other people's thoughts on this.

How does Azure Service Bus identify a duplicate message?

I understand that Azure Service Bus has a duplicate message detection feature which will remove messages it believes are duplicates of other messages. I'd like to use this feature to help protect against some duplicate delivery.
What I'm curious about is how the service determines two messages are actually duplicates:
What properties of the message are considered?
Is the content of the message considered?
If I send two messages with the same content, but different message properties, are they considered duplicates?
Duplicate detection looks at the MessageId property of the brokered message. So, if you set the message ID to something that should be unique per incoming message, duplicate detection can catch it. As far as I know, only the MessageId is used for detection. The contents of the message are NOT examined, so if two messages are sent with the same actual content but different message IDs, they will not be detected as duplicates.
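A quick sketch of that behavior with the classic WindowsAzure.ServiceBus SDK (the connection string and queue name are placeholders): enable RequiresDuplicateDetection on the entity, then send two messages with the same MessageId; the second is dropped.

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

var ns = NamespaceManager.CreateFromConnectionString(connectionString);
if (!ns.QueueExists("dedup-test"))
{
    // RequiresDuplicateDetection can only be set when the entity is created.
    ns.CreateQueue(new QueueDescription("dedup-test")
    {
        RequiresDuplicateDetection = true,
        DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10)
    });
}

var client = QueueClient.CreateFromConnectionString(connectionString, "dedup-test");
client.Send(new BrokeredMessage("payload A") { MessageId = "order-42" });
client.Send(new BrokeredMessage("payload B") { MessageId = "order-42" }); // silently dropped as a duplicate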
References:
MSDN Documentation: https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-queues-topics-subscriptions
If the scenario cannot tolerate duplicate processing, then additional logic is required in the application to detect duplicates, which can be achieved based upon the MessageId property of the message, which will remain constant across delivery attempts. This is known as Exactly Once processing.
There is also a Brokered Message Duplication Detection code sample on WindowsAzure.com that should be exactly what you are looking for as far as proving it out.
I also quickly tested this out and sent in 5 messages to a queue with RequiresDuplicateDetection set to true, all with the exact same content but different MessageIds. I then retrieved all five messages. I then did the reverse where I had matching MessageIds but different payloads, and only one message was retrieved.
In my case I had to apply ScheduledEnqueueTimeUtc on top of MessageId, because most of the time the first message had already been picked up by a worker before the subsequent duplicate messages arrived in the queue.
By adding ScheduledEnqueueTimeUtc we tell Service Bus to hold on to the message for some time before letting workers pick it up.
var message = new BrokeredMessage(json)
{
    MessageId = GetMessageId(input, extra)
};
// Delay for 30 seconds so the duplicate detection engine has enough
// time to reject duplicated messages before a worker picks this one up.
message.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddSeconds(30);
Another important property to consider when dealing with the RequiresDuplicateDetection property of an Azure Service Bus entity is DuplicateDetectionHistoryTimeWindow, the time frame within which a message with a duplicate message ID will be rejected.
The default duplicate detection history window is 10 minutes; the value can range between 20 seconds and 7 days.
Enabling duplicate detection helps keep track of the application-controlled MessageId of all messages sent into a queue or topic during a specified time window. If any new message is sent carrying a MessageId that has already been logged during the time window, the message is reported as accepted (the send operation succeeds), but the newly sent message is instantly ignored and dropped. No other parts of the message other than the MessageId are considered.

Azure Queue unique message

I would like to make sure that I don't insert a message to the queue multiple times. Is there any ID/Name I can use to enforce uniqueness?
vtortola pretty much covered it, but I wanted to add a bit more detail on why it's at-least-once delivery.
When you read a queue item, it's not removed from the queue; instead, it becomes invisible but stays in the queue. The invisibility period defaults to 30 seconds (max: 2 hours). During that window, the code that took the item off the queue must process whatever command was in the queue message and delete the queue item.
Assuming the queue item is deleted before the timeout period is reached, all is well. However: Once the timeout period is reached, the queue item becomes visible again, and the code holding the queue item may no longer delete it. In this case, someone else can read the same queue message and re-process that message.
Because a queue message can time out and re-appear:
Your queue processing must be idempotent - operations on a queue message must result in the same outcome no matter how many times the message is processed (such as rendering a thumbnail for a photo).
You need to think about timeout adjustments. You might find that commands are valid but processing is taking too long (maybe your 45-second thumbnail rendering code worked just fine until someone uploaded a 25MP image)
You need to think about poison messages - those that will never process correctly. Maybe they cause an exception to be thrown, or have some invalid condition that causes the message processor to abort processing, which leads to the message eventually re-appearing in the queue. There's a property called DequeueCount; consider checking that property upon reading a queue item and, if it's equal to, say, 3, push the message into a table or blob and send yourself a notification to spend some time debugging that message offline, as in the sketch below.
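A sketch of that DequeueCount check, assuming the classic WindowsAzure.Storage SDK (the connection string, queue name, and the SavePoisonMessage/Process helpers are placeholders):

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

var queue = CloudStorageAccount.Parse(connectionString)
    .CreateCloudQueueClient()
    .GetQueueReference("work-items");

CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(1)); // 1-minute invisibility window
if (message != null)
{
    if (message.DequeueCount >= 3)
    {
        SavePoisonMessage(message.AsString); // hypothetical: stash for offline debugging
        queue.DeleteMessage(message);        // get the poison message off the queue
    }
    else
    {
        Process(message.AsString);           // hypothetical handler
        queue.DeleteMessage(message);        // delete before the invisibility window expires
    }
}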
More details on the low-level get-queue REST API are here; this will give you more insight into Windows Azure queue message handling.
Azure queues don't ensure message order, nor message uniqueness. Messages will be processed "at least once", but nothing ensures a message won't be processed twice, so there is no "at most once" guarantee.
You should be ready to receive the same message twice. You can put an ID in the body of the message as part of your data.
