Azure Service Bus - cancelled scheduled messages getting re-queued

I'm using the latest Java bindings (v3.1.3) for Azure Service Bus: https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/servicebus
When I create a new queue client, schedule a message, and cancel it...
QueueClient sendClient = new QueueClient(new ConnectionStringBuilder(connectionString, queueName), ReceiveMode.PEEKLOCK);
long sequenceNumber = sendClient.scheduleMessage(message, instant);
...
sendClient.cancelScheduledMessage(sequenceNumber);
...the code appears to work as intended: the active message count goes to 0. But when the scheduled message reaches its scheduled enqueue time (I tested with 10 and 100 seconds in the future), the message sometimes gets re-queued with a new sequence number. I'm not getting any errors when scheduling or cancelling the messages. Is there something I can do to make sure cancelled messages don't get re-queued?

From my own testing, I found that cancelling a Service Bus message shortly after the scheduled message was sent to the queue does not always process the cancellation as expected. In general we're talking only a few seconds, but the behaviour is not entirely consistent.
My conclusion is that there is some latency between the scheduled message being queued and the cancellation of that same message being registered, which means that cancelling a scheduled message almost immediately after sending it to the queue will not always stop it from being processed.
Therefore, in my environment, I had to provide my own fallback: I check additional custom properties on the Service Bus message, so when it arrives back at my subscriber app, I use an if statement on the custom property to decide whether to ignore it and process nothing further.
This really caught me out for a little while, as my environment was rather complex and I assumed there was some issue in my code somewhere along the line. Once I factored in the above anomaly and could see how the Service Bus was responding to the scheduled message cancellation, I was able to overcome the issue.
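A rough sketch of that kind of fallback, using the same azure-servicebus 3.x Java client as the question; the property name workItemId, the in-memory set of cancelled ids, the queue name and connection string are placeholders for whatever your application actually uses:
import com.microsoft.azure.servicebus.ExceptionPhase;
import com.microsoft.azure.servicebus.IMessage;
import com.microsoft.azure.servicebus.IMessageHandler;
import com.microsoft.azure.servicebus.Message;
import com.microsoft.azure.servicebus.QueueClient;
import com.microsoft.azure.servicebus.ReceiveMode;
import com.microsoft.azure.servicebus.primitives.ConnectionStringBuilder;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class CancellationFallback {
    // Ids of work items this application has cancelled; in a real system this
    // would live in shared storage rather than in memory.
    private static final Set<Object> cancelledIds = ConcurrentHashMap.newKeySet();

    public static void main(String[] args) throws Exception {
        QueueClient client = new QueueClient(
                new ConnectionStringBuilder("<connection-string>", "myqueue"), ReceiveMode.PEEKLOCK);

        // Sender side: tag the scheduled message with an id the application controls.
        Message message = new Message("payload");
        Map<String, Object> properties = new HashMap<>();
        properties.put("workItemId", "order-42");
        message.setProperties(properties);
        long sequenceNumber = client.scheduleMessage(message, Instant.now().plusSeconds(100));

        // Broker-side cancellation is still attempted, but the cancellation is
        // also recorded locally as a safety net.
        client.cancelScheduledMessage(sequenceNumber);
        cancelledIds.add("order-42");

        // Receiver side: if a cancelled message slips through, drop it here.
        client.registerMessageHandler(new IMessageHandler() {
            @Override
            public CompletableFuture<Void> onMessageAsync(IMessage received) {
                Object id = received.getProperties().get("workItemId");
                if (id != null && cancelledIds.contains(id)) {
                    // Cancellation did not reach the broker in time; ignore the message.
                    return CompletableFuture.completedFuture(null);
                }
                System.out.println("Processing work item " + id);
                return CompletableFuture.completedFuture(null);
            }

            @Override
            public void notifyException(Throwable exception, ExceptionPhase phase) {
                exception.printStackTrace();
            }
        });

        // Keep the process alive long enough to observe the scheduled time passing.
        Thread.sleep(120_000);
        client.close();
    }
}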

You can schedule messages either by setting the ScheduledEnqueueTimeUtc property when sending a message through the regular send path, or explicitly with the ScheduleMessageAsync API. The latter immediately returns the scheduled message's SequenceNumber, which you can later use to cancel the scheduled message if needed.
The documentation for cancelScheduledMessageAsync describes it as cancelling the enqueuing of an already sent scheduled message, if it was not already enqueued. It is an asynchronous method returning a CompletableFuture which completes when the message is cancelled.
So I suggest using cancelScheduledMessageAsync to cancel the scheduled message.
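With the same client as in the question, the asynchronous schedule-and-cancel flow could look roughly like this (connection string and queue name are placeholders):
import com.microsoft.azure.servicebus.Message;
import com.microsoft.azure.servicebus.QueueClient;
import com.microsoft.azure.servicebus.ReceiveMode;
import com.microsoft.azure.servicebus.primitives.ConnectionStringBuilder;
import java.time.Instant;

public class ScheduleAndCancel {
    public static void main(String[] args) throws Exception {
        QueueClient sendClient = new QueueClient(
                new ConnectionStringBuilder("<connection-string>", "myqueue"), ReceiveMode.PEEKLOCK);

        // scheduleMessageAsync returns the sequence number as soon as the broker
        // has accepted the scheduled message.
        long sequenceNumber = sendClient
                .scheduleMessageAsync(new Message("payload"), Instant.now().plusSeconds(100))
                .get();

        // Waiting on the returned CompletableFuture makes sure the cancellation
        // has actually been acknowledged by the broker before the client closes.
        sendClient.cancelScheduledMessageAsync(sequenceNumber).get();

        sendClient.close();
    }
}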

Related

What happens to the messages being processed on functions running when we disable the function?

We are working with Azure Functions that are triggered on every message in a Service Bus queue. We are trying to solve a problem whereby we need to dynamically disable the function that is processing messages, so that it does not process any further messages, without losing any messages in the process.
We can disable the function in multiple ways (referring to the link), but the problem remains the same: we are unable to figure out what happens to the invocations already spawned when we disable it.
Since the function is Service Bus triggered, there is always a possibility that it is processing a message at the moment we disable it. Does that message get processed, is some sort of cancellation raised, or does the invocation just die with an exception?
It would be great if someone could direct me to some documentation on this. Thanks.
An Azure Service Bus triggered function will already have a lock on the message it is processing. If the function is terminated and the message was not completed or otherwise dispositioned, the lock will expire and the message will reappear on the queue. That's because the Functions runtime receives messages in PeekLock mode.
One factor to consider is the queue's MaxDeliveryCount. If a function is terminated during the last allowed processing attempt, the message will be dead-lettered because all processing attempts have been exhausted. That's standard Azure Service Bus behaviour.
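For illustration, the delivery count that drives this dead-lettering behaviour can be inspected with the plain azure-servicebus Java client (not the Functions runtime); the queue name and connection string below are placeholders:
import com.microsoft.azure.servicebus.ClientFactory;
import com.microsoft.azure.servicebus.IMessage;
import com.microsoft.azure.servicebus.IMessageReceiver;
import com.microsoft.azure.servicebus.ReceiveMode;
import com.microsoft.azure.servicebus.primitives.ConnectionStringBuilder;

public class DeliveryCountCheck {
    public static void main(String[] args) throws Exception {
        IMessageReceiver receiver = ClientFactory.createMessageReceiverFromConnectionStringBuilder(
                new ConnectionStringBuilder("<connection-string>", "myqueue"), ReceiveMode.PEEKLOCK);

        IMessage message = receiver.receive();
        if (message != null) {
            // DeliveryCount grows by one each time the lock expires (or the message
            // is abandoned) and the message is handed out again. Once it exceeds the
            // queue's MaxDeliveryCount, the broker dead-letters the message.
            System.out.println("Delivery count: " + message.getDeliveryCount());
            receiver.complete(message.getLockToken());
        }
        receiver.close();
    }
}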

Azure Web Jobs, Azure Service Bus Queue Trigger prevent message from getting deleted

I am looking into setting up a WebJob trigger to read messages from a Service Bus queue. What would be the best practice for implementing retry logic in case of errors in the downstream systems?
Would we be able to throw an exception so that the message will not be deleted from the queue and will be retried after certain time period?
Appreciate your feedback.
You don't need to define retry logic explicitly. When a message is de-queued from Service Bus, it becomes invisible on the queue for a certain period (the lock duration, 30 seconds by default; you can configure it). You try to process the message; if processing succeeds, you simply call BrokeredMessage.CompleteAsync, which marks the message as completed. If you have a problem downstream, you can abandon the message by calling BrokeredMessage.AbandonAsync. This unlocks the message and it appears back on the queue, where it will be picked up by a worker and processed again, until processing succeeds or the maximum delivery count is reached, after which the message is sent to the dead-letter queue.
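The same complete-or-abandon flow, sketched with the azure-servicebus Java client rather than the .NET BrokeredMessage API mentioned above; the downstream call, queue name and connection string are placeholders:
import com.microsoft.azure.servicebus.ClientFactory;
import com.microsoft.azure.servicebus.IMessage;
import com.microsoft.azure.servicebus.IMessageReceiver;
import com.microsoft.azure.servicebus.ReceiveMode;
import com.microsoft.azure.servicebus.primitives.ConnectionStringBuilder;

public class CompleteOrAbandon {
    public static void main(String[] args) throws Exception {
        IMessageReceiver receiver = ClientFactory.createMessageReceiverFromConnectionStringBuilder(
                new ConnectionStringBuilder("<connection-string>", "myqueue"), ReceiveMode.PEEKLOCK);

        IMessage message = receiver.receive();
        if (message != null) {
            try {
                callDownstreamSystem(message);             // placeholder for the real work
                receiver.complete(message.getLockToken()); // success: remove from the queue
            } catch (Exception e) {
                // Unlock the message; it reappears on the queue and is retried until
                // the queue's MaxDeliveryCount is reached, after which it is dead-lettered.
                receiver.abandon(message.getLockToken());
            }
        }
        receiver.close();
    }

    private static void callDownstreamSystem(IMessage message) {
        // placeholder for calling the downstream system
    }
}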

Azure Service Bus - Add a message to the queue in a deferred state

I'm wondering if it is possible to send a brokered message to a queue/topic where the message is already in a deferred state?
I'm asking this because I currently have a process that does the following ...
The process starts and a brokered message is sent to a queue (this triggers a function that records the message body as an entity in table storage with a 'Processing' status).
Additional work is done in the process
If we get to the end of the process without any issues, another brokered message is sent to the queue with a completion message (this triggers the same function that updates the entity in table storage with a 'Complete' status).
While this method is mostly working, it feels clunky and fragile. I would really like to be able to send a message to the queue and then have the final step make the message visible on the queue so it can be consumed by the function (Durable Function).
I thought about setting the ScheduledEnqueueTimeUtc, but I can't guarantee when the process will finish (I'm thinking worst case scenario here) so I'm not sure how long to set it.
I also looked at the Defer option for a BrokeredMessage but it seems this can only be set from the receiver and not be in a deferred state initially.
Is what I'm trying to do possible with Service Bus brokered messages? Could I set the scheduled enqueue time to some ridiculously long delay (e.g. 2 hours), and if that time is reached have the message automatically expire and move to the dead-letter queue? Should I send the initial message to the dead-letter queue and then, once the process is complete, retrieve it and resubmit it?
Has anyone had any experience with implementing a process like this ... send a start message and only process the message once a completion notification has been received? I need this to be as robust as possible as I'm dealing with financial transactions in this process.
Hopefully my explanation makes sense.
I'm wondering if it is possible to send a brokered message to a queue/topic where the message is already in a deferred state?
That's not possible. You can only delay a brand new message, not defer it. Deferring requires a message to be received first so that it has a SequenceNumber.
Using ScheduledEnqueueTimeUtc has its challenges: you send the message into the future, but you cannot cancel it once processing is over. Instead, you could leverage QueueClient.ScheduleMessageAsync(), which returns the SequenceNumber immediately. This way you can schedule the message far into the future, but also cancel it if processing finishes earlier.
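A rough sketch of that suggestion with the azure-servicebus Java client: schedule a long-stop message far in the future and cancel it by sequence number once the process finishes. The two-hour window, queue name and connection string are placeholders:
import com.microsoft.azure.servicebus.Message;
import com.microsoft.azure.servicebus.QueueClient;
import com.microsoft.azure.servicebus.ReceiveMode;
import com.microsoft.azure.servicebus.primitives.ConnectionStringBuilder;
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class ScheduleThenCancel {
    public static void main(String[] args) throws Exception {
        QueueClient client = new QueueClient(
                new ConnectionStringBuilder("<connection-string>", "myqueue"), ReceiveMode.PEEKLOCK);

        // Schedule the "process never completed" message far in the future.
        long sequenceNumber = client.scheduleMessageAsync(
                new Message("process-timed-out"), Instant.now().plus(2, ChronoUnit.HOURS)).get();

        runProcess();   // placeholder for the long-running work

        // The process finished before the deadline, so the scheduled message is
        // cancelled and never becomes visible on the queue.
        client.cancelScheduledMessageAsync(sequenceNumber).get();
        client.close();
    }

    private static void runProcess() {
        // placeholder for the actual work
    }
}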
I ended up solving this issue by keeping the process of sending two messages, but refactoring my durable function to record the messages in Table Storage, check that both messages have been received and if they have, add a new message to Azure Queue Storage. A second function listens to the queue which starts its process.
After much testing, this appears to be quite a robust solution. It then doesn't matter what order the two messages arrive in, or how long they take; as long as both of them have arrived, the second function will kick off.

How to prevent Azure webjob processing same message multiple times concurrently

I have an Azure WebJob project, that I am running locally on my dev machine. It is listening to an Azure Service Bus message queue. Nothing going on like Topics, just the most basic message queue.
It is receiving/processing the same message multiple times, launching twice immediately when the message is received, then intermittently whilst the message is being processed.
Questions:
How come I am receiving the same message multiple times instantly? It seems that it's re-fetching before a PeekLock is applied?
How come the message is being re-received even though it is still being processed? Can I set the PeekLock duration, or somehow lock the message so it is only processed once?
How can I ensure that each message on the queue is only processed once?
I want to be able to process multiple messages at once, just not the same message multiple times, so setting MaxConcurrentCalls to 1 does not seem to be my answer, or am I misunderstanding that property?
I am using an async function, simple injector and a custom JobActivator, so instead of a static void method, my Function signature is:
public async Task ProcessQueueMessage([ServiceBusTrigger("AnyQueue")] MediaEncoderQueueItem message, TextWriter log) {...}
Inside the job, it is moving some files around on a blob service, and calling (and waiting for) a media encoder from media services. So, whilst the web job itself is not doing a lot of processing, it takes quite a long time (15 minutes, for some files).
The app is launching, and when I post a message to the queue, it responds. However, it is receiving the message multiple times as soon as the message is received:
Executing: 'Functions.ProcessQueueMessage' - Reason: 'New ServiceBus message detected on 'MyQueue'.'
Executing: 'Functions.ProcessQueueMessage' - Reason: 'New ServiceBus message detected on 'MyQueue'.'
Additionally, whilst the task is running (and I see output from the Media Service functionality), it will get another "copy" from the queue.
Finally, after the task has completed, the job still intermittently processes the same message.
I suspect what's happening is the following:
The maximum lock duration is 5 minutes. If processing of the message finishes in under 5 minutes, the message is marked as completed and removed from the broker. Otherwise, if processing takes longer than 5 minutes, the lock is lost, the message reappears on the queue, and it is consumed again. You can verify this by looking at the DeliveryCount of your message.
To resolve that, you could renew the message lock just before it is about to expire using BrokeredMessage.RenewLockAsync().
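The call above is the .NET BrokeredMessage API; a rough equivalent with the azure-servicebus Java client is to renew the lock on a timer while the long-running job runs. The encoding call, renewal interval, queue name and connection string are placeholders:
import com.microsoft.azure.servicebus.ClientFactory;
import com.microsoft.azure.servicebus.IMessage;
import com.microsoft.azure.servicebus.IMessageReceiver;
import com.microsoft.azure.servicebus.ReceiveMode;
import com.microsoft.azure.servicebus.primitives.ConnectionStringBuilder;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class LongRunningHandler {
    public static void main(String[] args) throws Exception {
        IMessageReceiver receiver = ClientFactory.createMessageReceiverFromConnectionStringBuilder(
                new ConnectionStringBuilder("<connection-string>", "myqueue"), ReceiveMode.PEEKLOCK);

        IMessage message = receiver.receive();
        if (message != null) {
            // Renew the lock every 4 minutes so a 5-minute lock never expires
            // while the 15-minute encoding job is still running.
            ScheduledExecutorService renewer = Executors.newSingleThreadScheduledExecutor();
            renewer.scheduleAtFixedRate(() -> {
                try {
                    receiver.renewMessageLock(message);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }, 4, 4, TimeUnit.MINUTES);

            try {
                encodeMedia(message);                       // placeholder for the long-running work
                receiver.complete(message.getLockToken());
            } finally {
                renewer.shutdownNow();
            }
        }
        receiver.close();
    }

    private static void encodeMedia(IMessage message) {
        // placeholder for the media encoding call
    }
}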

Azure ServiceBus Retry Delay

I am using Microsoft Azure Service Bus for queue messages, with WCF for the subscriptions. I am trying to implement retry logic. I use Peek-Lock to view the message and then have to do some local processing on it. If that processing fails, I unlock the message so I can try processing it again. The problem is that I need a delay between processing attempts. Currently the message pops back into the queue and is processed again almost immediately; there needs to be about 2 minutes between attempts.
If you always have to wait 2 minutes before re-processing the message of that particular queue, you could try to configure the lock-timeout on the queue to be 2 minutes (plus the time you expect it will take you to process the message) and then just let the lock expire, instead of unlocking it. This has the downside that you would need to keep an eye on your processing time, and extend the lock's timeout if needed.
Another option could be to receive and complete the message, set a scheduled delivery 2 minutes into the future, and re-send the message. This has the downside that you need to consume and acknowledge the message, which involves certain risks (e.g. your process dies before you get a chance to re-send it).
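A rough sketch of that second option with the azure-servicebus Java client (the question uses WCF, so this only illustrates the pattern; names and the retry delay are placeholders). Sending the delayed copy before completing the original trades the risk of losing the message for the risk of a duplicate:
import com.microsoft.azure.servicebus.ClientFactory;
import com.microsoft.azure.servicebus.IMessage;
import com.microsoft.azure.servicebus.IMessageReceiver;
import com.microsoft.azure.servicebus.IMessageSender;
import com.microsoft.azure.servicebus.Message;
import com.microsoft.azure.servicebus.ReceiveMode;
import com.microsoft.azure.servicebus.primitives.ConnectionStringBuilder;
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class RetryWithDelay {
    public static void main(String[] args) throws Exception {
        ConnectionStringBuilder builder = new ConnectionStringBuilder("<connection-string>", "myqueue");
        IMessageReceiver receiver = ClientFactory.createMessageReceiverFromConnectionStringBuilder(
                builder, ReceiveMode.PEEKLOCK);
        IMessageSender sender = ClientFactory.createMessageSenderFromConnectionStringBuilder(builder);

        IMessage message = receiver.receive();
        if (message != null) {
            try {
                process(message);                           // placeholder for the local processing
                receiver.complete(message.getLockToken());
            } catch (Exception e) {
                // Schedule a copy 2 minutes into the future, then complete the original.
                Message retry = new Message(message.getBody());
                retry.setScheduledEnqueueTimeUtc(Instant.now().plus(2, ChronoUnit.MINUTES));
                sender.send(retry);
                receiver.complete(message.getLockToken());
            }
        }
        receiver.close();
        sender.close();
    }

    private static void process(IMessage message) {
        // placeholder for the local processing step
    }
}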
"If the message is Peeked in Peek Lock mode from a Queue then you don't have the receive context in the message. You can receive the message in Peek Lock mode, which will lock the message for the interval specified for the 'lock duration' property of the queue. Locked messages cannot be received until its lock expires. Thus, by setting the lock duration to 2 minutes and Receiving messages in Peek Lock mode will solve this issue.
You can either write custom code to update the Lock Duration property. Tools like Service Bus Explorer, Serverless360 etc provides options to update property using graphical user interface."
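If you go the custom-code route, and assuming a version of the azure-servicebus Java client that includes the management API (com.microsoft.azure.servicebus.management), updating the lock duration could look roughly like this; connection string and queue name are placeholders:
import com.microsoft.azure.servicebus.management.ManagementClient;
import com.microsoft.azure.servicebus.management.QueueDescription;
import com.microsoft.azure.servicebus.primitives.ConnectionStringBuilder;
import java.time.Duration;

public class UpdateLockDuration {
    public static void main(String[] args) throws Exception {
        ManagementClient managementClient =
                new ManagementClient(new ConnectionStringBuilder("<connection-string>"));

        // Fetch the current queue description, raise the lock duration to 2 minutes
        // (the maximum allowed is 5 minutes), and push the update back.
        QueueDescription queue = managementClient.getQueue("myqueue");
        queue.setLockDuration(Duration.ofMinutes(2));
        managementClient.updateQueue(queue);

        managementClient.close();
    }
}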
