Does setting visibilitytimeout to 7 days mean automatic deletion for an Azure queue?

It seems that the new Azure SDK extends the visibilitytimeout to <= 7 days. I know that by default, when I add a message to an Azure queue, its time to live is 7 days. If I get a message out and set the visibilitytimeout to 7 days, does that mean I don't need to delete the message if I don't care about message reliability? Will the message simply disappear after 7 days?
I want to take this approach because DeleteMessage is very slow. If I don't delete messages, does that have any impact on the performance of GetMessage?

Based on the documentation for Get Messages, I believe it is certainly possible to set the VisibilityTimeout to 7 days so that messages are fetched only once. However, I see some issues with this approach compared to simply deleting the message once processing is done:
What happens if you get the message, start processing it, and somehow the processing fails? If you set the visibility timeout to 7 days, the message will never appear in the queue again, so the work it was supposed to trigger never gets done.
Even though the message is hidden, it is still in the queue, so you keep incurring storage charges for it. The cost is trivial, but why keep a message you don't really need?
A lot of systems rely on the Approximate Messages Count property of a queue to check the health of the processes driven by the messages in that queue. Note that even though you make a message hidden, it is still in the queue and is therefore included in the total message count. So if you build a health check on this value and never delete messages, you will always find your system to be unhealthy.
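For reference, a minimal sketch of the usual receive/process/delete pattern with the classic storage client library is shown below; connectionString, the "workitems" queue name and the Process call are placeholders, and the point is that the message is hidden only long enough to process it, then deleted explicitly:

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;

    var account = CloudStorageAccount.Parse(connectionString);
    CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("workitems");

    // Hide the message only for as long as processing is expected to take, not 7 days.
    CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(5));
    if (message != null)
    {
        Process(message.AsString);    // hypothetical processing step
        queue.DeleteMessage(message); // explicit delete once processing succeeds
    }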
I'm curious to know why you find deleting messages to be very slow. In my experience this is quite fast. How are you monitoring message deletion?

Rather than hacking around the problem, I think you should drill into why the deletes are slow. Have you enabled logging and looked at the E2ELatency and ServerLatency numbers across all your queue operations? Ideally you shouldn't see a large difference between the two for any of your queue operations; if you do, it implies something is happening on the client that you should investigate further.
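As a quick client-side sanity check (in addition to the analytics logs), you could time the call yourself; this is only a sketch and assumes the queue and message variables from your existing code:

    var sw = System.Diagnostics.Stopwatch.StartNew();
    queue.DeleteMessage(message);
    sw.Stop();
    // Compare this client-observed time with the ServerLatency recorded in the
    // storage analytics logs; a large gap points at the client or the network.
    Console.WriteLine("DeleteMessage took {0} ms", sw.ElapsedMilliseconds);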
For more information on logging take a look at the following articles:
http://blogs.msdn.com/b/windowsazurestorage/archive/tags/analytics+2d00+logging+_2600_amp_3b00_+metrics/
http://msdn.microsoft.com/en-us/library/azure/hh343262.aspx
Information on client-side logging can also be found in this blog post: http://blogs.msdn.com/b/windowsazurestorage/archive/2013/09/07/announcing-storage-client-library-2-1-rtm.aspx
Please let me know what you find.
Jason

Related

How does message locking and lock renewal work in Azure Service Bus?

I'm getting a bit confused trying to nail down how peek lock works in Service Bus. In particular I'm using Microsoft.Azure.ServiceBus with Azure Functions and a ServiceBusTrigger.
From what I can make out, the time a message gets locked for is set on the queue itself; it defaults to 30 seconds and can be set to anything up to 5 minutes.
When a message is peeked from the queue this lock kicks in.
There is then a setting called maxAutoRenewDuration which when using Azure Functions is set in the host.json file under Extensions:ServiceBus:messageHandlerOptions. This allows the client to automatically request one or more extensions to a lock until the maxAutoRenewDuration is reached. Once you hit this limit a renewal won't be requested and the lock will be released.
Renewals are best effort and can't be guaranteed so ideally you try to come up with a design where messages are typically processed within the lock period specified on the queue.
Have I got this right so far?
Questions I still have are
Are there any limits on what maxAutoRenewDuration can be set to? One article I read seemed to suggest that it could be set to whatever I need to ensure my message is processed (link). The Microsoft documentation, though, states that the maximum value for this is also limited to 5 minutes (link).
The maxAutoRenewDuration is configurable in host.json, which maps to OnMessageOptions.MaxAutoRenewDuration. The maximum allowed for this setting is 5 minutes according to the Service Bus documentation
Which is correct? I know the default lock duration has a maximum of 5 minutes but it doesn't seem to make sense that this also applies to maxAutoRenewDuration?
I've read about a setting called MaxLockDuration in some articles (e.g. link). Is this just referring to the lock duration set on the queue itself?
Am I missing anything else? Are the lock duration set on the queue and the maxAutoRenewDuration in my code the main things I need to consider when dealing with locks and renewals?
Thanks
Alan
I understand your confusion. The official doc explanation of maxAutoRenewDuration seems wrong. There is already an open doc issue https://github.com/MicrosoftDocs/azure-docs/issues/62110 and also a reference https://github.com/Azure/azure-functions-host/issues/6500.
To pinpoint your questions:
#1: As noted above, there is an open doc issue.
#2: MaxLockDuration is a queue-side setting in Service Bus. It basically means that when you peek-lock a message from the queue, the message is locked for that consumer for that duration. So, unless you complete your message processing or renew the lock within that period, the lock is going to expire.
#3: sean-feldman's excellent explanation in the thread https://stackoverflow.com/a/60381046/13791953 should answer that.
What it does is extend the message lease with the broker, "re-locking" it for the competing consumer that is currently handling the message. MaxAutoRenewDuration should be set to the maximum processing time for which a lease could possibly be required.
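For what it's worth, the client-level setting in Microsoft.Azure.ServiceBus (which is what the host.json value maps to) accepts values well beyond 5 minutes; here is a minimal sketch, with ProcessMessageAsync and ExceptionReceivedHandler as hypothetical handlers:

    var options = new MessageHandlerOptions(ExceptionReceivedHandler)
    {
        // Client-side auto-renewal budget; it can exceed the queue's 5-minute
        // maximum lock duration (renewals are best effort, as noted above).
        MaxAutoRenewDuration = TimeSpan.FromMinutes(30),
        AutoComplete = false,
        MaxConcurrentCalls = 1
    };
    queueClient.RegisterMessageHandler(ProcessMessageAsync, options);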

Messages going to dead letter rather than active queue

I have configured service bus and I am sending messages to a topic in it.
I am observing strange behavior: my messages are going to the dead-letter queue and not the active queue.
I have checked the properties of my topic, such as auto delete on idle and default time to live, but I am not able to figure out the reason.
I tried turning off my listener on this topic, suspecting some code failure was causing the messages to go to the dead-letter queue, but I am still not able to figure out the reason.
Inspect the queue's MaxDeliveryCount. If the delivery count of the dead-lettered messages has reached that value, it's an indication that your code was failing to process the messages and they were dead-lettered for that reason. The reason is stated in the DeadLetterReason header. If that's the case, as suggested in the comments, log the failure reason in your code to understand what's happening.
Another angle to check is whether your message is getting aborted. This can happen when you use a library or abstraction on top of the Azure Service Bus client; an aborted message will eventually get dead-lettered as well. Just like in the first scenario, you'll need some logs to understand why this is happening.
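If it helps, here is a rough sketch (using Microsoft.Azure.ServiceBus, with placeholder entity names, inside an async method) of reading the DeadLetterReason from the subscription's dead-letter queue:

    using Microsoft.Azure.ServiceBus;
    using Microsoft.Azure.ServiceBus.Core;

    string dlqPath = EntityNameHelper.FormatDeadLetterPath(
        EntityNameHelper.FormatSubscriptionPath("mytopic", "mysubscription"));
    var receiver = new MessageReceiver(connectionString, dlqPath, ReceiveMode.PeekLock);

    Message msg = await receiver.ReceiveAsync();
    if (msg != null)
    {
        Console.WriteLine("DeadLetterReason: {0}", msg.UserProperties["DeadLetterReason"]);
        Console.WriteLine("Description: {0}", msg.UserProperties["DeadLetterErrorDescription"]);
        await receiver.AbandonAsync(msg.SystemProperties.LockToken); // leave it in the DLQ
    }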

Retrieving Azure Queue message length in C#

I noticed that CloudQueue.ApproximateMessageCount returns the number of messages including the expired ones. This is probably a bug. Is there any way to see how many messages are actually in the queue?
So after doing some digging around, I think I found the behavior you're talking about. From what I can tell, the messages are still in the queue when they expire but aren't retrievable. They seem to remain there for a short period of time and are then cleared out.
If I had to guess, it may be similar to a Service Bus queue in that expired messages are moved to some sort of dead-letter queue, except that with storage queues you can't access that dead-letter queue and it is cleared automatically after some period of time.
I'll update this answer if I find more.
Edit
I confirmed the behavior. Expired messages remain in the queue, but you can't interact with them; they eventually disappear without intervention.
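For completeness, retrieving the count looks roughly like this with the classic CloudQueue API; keep in mind the number includes invisible and not-yet-purged expired messages, so it is approximate by design:

    queue.FetchAttributes();                      // refresh the cached attributes first
    int? count = queue.ApproximateMessageCount;
    Console.WriteLine("Approximate message count: {0}", count ?? 0);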

Temporarily hiding a message in azure service bus queue/topic

I have a scenario where some messages depend on the completion of another message. So there is a precondition for a set of messages to be processed: another message should be processed first. That precondition message kicks off a long-running process which can take up to 30 minutes.
What I would like is to hide a message from all subscribers for, say, 5 minutes when I detect that the precondition is not complete. After 5 minutes it becomes available again, and it gets hidden for another 5 minutes if it still can't be processed, and so on.
I can see that sessions and deferral could be a solution, but I don't want to go that way, since it would require maintaining storage outside the queue to keep track of the deferred messages.
Another way could be to peek-lock the message and then leave it alone, letting the lock expire so that in due time it reappears in the queue.
Is there a better way of doing this?
There are a couple of ways to achieve this. When you get a message you can choose to defer it. This removes it from the active queue, and you will later have to ask for that specific message by its sequence number. For your scenario it may be possible to use scheduled messages instead (see below), which involves receiving the message and then scheduling a new one using the following:
http://msdn.microsoft.com/en-us/library/windowsazure/microsoft.servicebus.messaging.brokeredmessage.scheduledenqueuetimeutc.aspx
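A rough sketch of that receive-and-reschedule approach with the classic BrokeredMessage client might look like this (queueClient and PreconditionMet are placeholders; for a topic you would send the copy back to the topic so it fans out again):

    BrokeredMessage msg = queueClient.Receive();   // peek-lock by default
    if (msg != null && !PreconditionMet())
    {
        BrokeredMessage retry = msg.Clone();
        retry.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddMinutes(5); // hidden for ~5 minutes
        queueClient.Send(retry);   // enqueue the delayed copy
        msg.Complete();            // remove the original so only the scheduled copy remains
    }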

Controlling azure worker roles concurrency in multiple instance

I have a simple worker role in Azure that does some data processing on a SQL Azure database.
The worker basically adds data from a third-party data source to my database every 2 minutes. When I have two instances of the role, this work is obviously duplicated unnecessarily. I would like to have 2 instances for redundancy and the 99.95% uptime SLA, but I do not want them both processing at the same time, as they will just duplicate the same job. Is there a standard pattern for this that I am missing?
I know I could set flags in the database, but am hoping there is another easier or better way to manage this.
Thanks
As Mark suggested, you can use an Azure queue to post a message. You can have the worker role instance post a follow-up message to the queue as the last thing it does when processing the current message. That should deal with the issue Mark brought up regarding the need for a semaphore. In your queue message you can embed a timestamp marking when the message can be processed; when creating a new message, just add two minutes to the current time.
And... in case it's not obvious: in the event the worker role instance crashes before completing processing and fails to repost a new queue message, that's fine. In this case, the current queue message will simply reappear on the queue and another instance is then free to process it.
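A minimal sketch of that self-perpetuating "token" message, using the classic CloudQueue API with placeholder names, could look like this:

    CloudQueueMessage current = queue.GetMessage(TimeSpan.FromMinutes(5));
    if (current != null)
    {
        DoTheTwoMinuteJob();   // hypothetical processing step

        // Post the follow-up token, invisible for 2 minutes, then remove the current one.
        // If the role crashes between these two calls, the current message simply
        // reappears after its visibility timeout, as described above.
        queue.AddMessage(new CloudQueueMessage("token"),
                         timeToLive: null,
                         initialVisibilityDelay: TimeSpan.FromMinutes(2));
        queue.DeleteMessage(current);
    }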
There is not a super easy way to do this, I don't think.
You can use a semaphore as Mark mentioned, to record the start and stop of processing. Then you can have any number of instances running, each inspecting the semaphore record and only acting if the semaphore allows it.
However, the caveat here is: what happens if one of the instances crashes in the middle of processing and never releases the semaphore? You can implement a "timeout" value after which other instances will attempt to kick-start processing if there hasn't been an unlock for X amount of time.
Alternatively, you can use a third-party monitoring service like AzureWatch to watch for unresponsive instances in Azure and start a new instance if the number of "Ready" instances drops below 1. This can save you some money by not having to keep 2 instances up and running all the time, but there is a slight lag between when an instance fails and when a new one is started.
A semaphore as suggested would be the way to go, although I'd probably go with a simple timestamp heartbeat in blob storage.
The other thought is: how necessary is it? If your load can sustain being down for a few minutes, maybe just let the role recycle?
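If you go the blob-store route, a blob lease is a convenient ready-made "semaphore" with a built-in timeout; this is only a sketch with placeholder container/blob names, and the blob must already exist before it can be leased:

    CloudBlockBlob lockBlob = blobClient
        .GetContainerReference("locks")
        .GetBlockBlobReference("worker-lock");
    try
    {
        // Leases can be 15-60 seconds (or infinite); a short lease doubles as the
        // crash "timeout" the other answers mention.
        string leaseId = lockBlob.AcquireLease(TimeSpan.FromSeconds(60), null);
        try
        {
            RunProcessing();   // hypothetical job; renew the lease if it runs long
        }
        finally
        {
            lockBlob.ReleaseLease(AccessCondition.GenerateLeaseCondition(leaseId));
        }
    }
    catch (StorageException)
    {
        // Another instance holds the lease; skip this cycle.
    }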
A small catch with David's solution: re-posting the message to the queue happens as the last thing in the current execution, so that if the machine crashes along the way the current message expires and re-surfaces on the queue. That assumes the message was originally peeked and a de-queue operation is required to remove it from the queue. The de-queue must happen before inserting the new message into the queue; if the role crashes between these two operations, there will be no tokens left in the system and processing will come to a halt.
The ASB dup check sounds like a feasible approach, but it doesn't sound deterministic either, since the bus can only check for identical messages currently existing in a queue. If one of the messages comes in right after the previous one was de-queued, there is a chance of ending up with 2 processes running in parallel.
An alternative solution, if you can afford it, would be to never de-queue and just lease the message via Peek operations. You would have to ensure that the invisibility timeout never goes beyond the processing time in your worker role. As for creating the token in the first place, the same worker role startup strategy described before, combined with the ASB dup check, should work (since messages would never move from the queue).
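If I read that "never de-queue" idea as a storage queue scheme (the storage API only hides a message via Get, not Peek, so that is the assumption here), a sketch of holding the token by repeatedly extending its invisibility window might look like this; WorkIsDone and DoSomeWork are placeholders:

    CloudQueueMessage token = queue.GetMessage(TimeSpan.FromMinutes(2));
    if (token != null)
    {
        while (!WorkIsDone())
        {
            DoSomeWork();
            // Renew the "lease" before the invisibility window lapses, otherwise the
            // token reappears and another instance can pick it up.
            queue.UpdateMessage(token, TimeSpan.FromMinutes(2), MessageUpdateFields.Visibility);
        }
        // The token is never deleted, so it stays in the system for the next cycle.
    }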
