I have a console app that pushes a lot of messages in parallel to an Azure Storage Queue. There's a continuously triggered Azure WebJob that is invoked whenever a message is added to that queue. In one scenario, I added 300 items to the queue, 100 each from three different threads. Since there are only 300 messages in the queue, the WebJob should ideally be invoked only 300 times. But I could see it was invoked 308 times. What could be the reason for this?
Also note that the additional trigger count is not predictable; sometimes it may be 306, 310, etc.
I have tried posting the messages sequentially by removing Parallel.Invoke to check whether it is related to parallel processing, but the issue is still there. I am debugging the issue by running the WebJob project locally.
It is possible that exceptions occurred while processing some of the messages. If a message fails to process successfully, it will be retried. Each message has a dequeueCount that tracks how many times it has been delivered. If memory serves, the default for a queue trigger is five attempts.
Try running locally and watching the output window, or check the WebJob logs on your live Azure instance to see whether a message was retried.
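If it helps while debugging, the WebJobs SDK can bind queue metadata such as dequeueCount alongside the message body, so each invocation can log which delivery attempt it is (the queue name here is illustrative):

```csharp
public static void ProcessQueueMessage(
    [QueueTrigger("myqueue")] string message,
    int dequeueCount, // number of times this message has been delivered
    TextWriter log)
{
    // Any attempt > 1 means an earlier invocation failed or timed out.
    log.WriteLine($"Attempt {dequeueCount}: {message}");
}
```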
Related
I have a queue-triggered Azure Function which is triggered whenever a queue message appears in Azure Queue Storage.
My workflow is:
A user may schedule a task which needs to run after a few days at a particular time (the "execute at" time).
So I put a message in the Azure queue with a visibility timeout equal to the time difference between the current time and the "execute at" time of that task.
When the message becomes visible in the queue, it gets picked up by the Azure Function and executed.
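The enqueue step above can be sketched with the current Azure.Storage.Queues SDK (the connection string, queue name, and TTL value here are illustrative):

```csharp
// Sketch: delay-schedule a task by hiding the message until its execute-at time.
var queue = new QueueClient(connectionString, "scheduled-tasks");
TimeSpan delay = executeAt - DateTimeOffset.UtcNow;

// visibilityTimeout must be shorter than timeToLive; the default message TTL
// is 7 days, so a multi-day delay needs an explicitly longer TTL.
await queue.SendMessageAsync(
    messageText: taskId,
    visibilityTimeout: delay,
    timeToLive: TimeSpan.FromDays(14));
```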
I'm facing an intermittent issue when the queue message is supposed to become visible after a few days (< 7 days): somehow it gets dropped/removed from the queue, so it is never picked up by the function and the task still shows as pending.
I've gone through all the articles I could find on the internet and didn't find a solution to my problem.
The worst part is that it works fine for a few weeks, but every now and then the invisible queue messages suddenly disappear. (I use Azure Storage Explorer to check the number of invisible messages.)
So my WebJob runs on 10 instances, grabs 10 messages off the queue and processes them. From what I can tell in my own logs it completes, but the WebJob log never shows it finishing and the status continues to be "Running" even though it should be finished. This job does run for a while, about 45-60 minutes, since I'm syncing a ton of data on each call. I checked the process explorer and the thread says "Running", but when I look at the details I see the below:
Process Explorer Example Here
Not sure what to do to make the job change its status to "Success" and continue on with the next item in the queue.
Another related issue: I'm using a ServiceBusTrigger, but since the call takes more than 5 minutes to complete, the next instance of the job picks up the same item from the queue again, so then I have two processes running the same message off the queue. It keeps doing this every 5 minutes until I max out my available instance count, which is 10. Is there a way to stop this from happening? This may be related to the issue above.
In order to fix this, I had to complete the message at the start of the function:
public async Task SyncTest([ServiceBusTrigger("syncqueue")] BrokeredMessage message, TextWriter log)
{
    // Complete the message immediately so it is not redelivered
    // when processing outlives the PeekLock duration.
    message.Complete();

    // ... long-running sync work here ...
}
I have an Azure Function triggered off of a storage queue. The function fans out around 10,000 additional messages to another storage queue for another function (within the same function app) to execute. I'm seeing some strange behavior whenever it executes: the first function appears to be executed multiple times. I observed this by watching the queue it publishes to receive significantly more messages than expected.
I understand that the function should be coded defensively (i.e. expect to be executed multiple times), but this is happening consistently each time the first function executes. I don't think the repeated executions are due to it timing out or failing (according to App Insights).
Could it be that when the 10,000 messages get queued up the function is scaling out and that is somehow causing the original message to be executed multiple times?
The lock on the original message that triggers the first Azure Function to execute is likely expiring. This will cause the Queue to assume that processing the message failed, and it will then use that message to trigger the Function to execute again. You'll want to look into renewing the message lock while you're sending out 10,000 messages to the next Queue.
Also, since you're sending out 10,000 messages, you may need to redesign that part to scale out whatever massively parallel processing you're attempting to implement more efficiently. 10,000 is a very high number of messages to send from a single triggered event.
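One common redesign is a two-level fan-out, so no single invocation runs long enough to lose its message. This is a hypothetical sketch (function and queue names are illustrative): the first function sends 100 "chunk" messages, and a second function expands each chunk into its ~100 work items.

```csharp
// Sketch: first level of a two-level fan-out. Each of the 100 chunk messages
// triggers a second function that enqueues ~100 work items, keeping every
// invocation short relative to the message's invisibility window.
[FunctionName("FanOutChunks")]
public static async Task Run(
    [QueueTrigger("start")] string input,
    [Queue("chunks")] IAsyncCollector<string> chunks)
{
    for (int chunk = 0; chunk < 100; chunk++)
    {
        await chunks.AddAsync($"{input}:{chunk}");
    }
}
```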
I have an Azure WebJob project, that I am running locally on my dev machine. It is listening to an Azure Service Bus message queue. Nothing going on like Topics, just the most basic message queue.
It is receiving/processing the same message multiple times, launching twice immediately when the message is received, then intermittently whilst the message is being processed.
Questions:
How come I am receiving the same message multiple times instantly? It seems that it's re-fetched before a PeekLock is applied.
How come the message is being re-received even though it is still being processed? Can I set the PeekLock duration, or somehow lock the message so that it is only processed once?
How can I ensure that each message on the queue is only processed once?
I want to be able to process multiple messages at once, just not the same message multiple times, so setting MaxConcurrentCalls to 1 does not seem to be my answer, or am I misunderstanding that property?
I am using an async function, simple injector and a custom JobActivator, so instead of a static void method, my Function signature is:
public async Task ProcessQueueMessage([ServiceBusTrigger("AnyQueue")] MediaEncoderQueueItem message, TextWriter log) {...}
Inside the job, it is moving some files around on a blob service, and calling (and waiting for) a media encoder from media services. So, whilst the web job itself is not doing a lot of processing, it takes quite a long time (15 minutes, for some files).
The app is launching, and when I post a message to the queue, it responds. However, it is receiving the message multiple times as soon as the message is received:
Executing: 'Functions.ProcessQueueMessage' - Reason: 'New ServiceBus message detected on 'MyQueue'.'
Executing: 'Functions.ProcessQueueMessage' - Reason: 'New ServiceBus message detected on 'MyQueue'.'
Additionally, whilst the task is running (and I see output from the Media Services functionality), it will get another "copy" from the queue.
Finally, after the task has completed, it still intermittently processes the same message.
I suspect what's happening is the following:
The maximum LockDuration is 5 minutes. If the message is processed in under 5 minutes, it is marked as completed and removed from the broker. Otherwise, the message will re-appear (the lock on it was lost because processing took longer than 5 minutes) and will be consumed again. You can verify this by looking at the DeliveryCount of your message.
To resolve that, you can renew the message lock just before it's about to expire using BrokeredMessage.RenewLockAsync().
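A minimal renewal loop might look like the following, assuming the older BrokeredMessage API; DoLongRunningWorkAsync is a placeholder for the actual processing, and the 4-minute interval simply stays under the 5-minute lock:

```csharp
// Renew the PeekLock every 4 minutes so the 5-minute lock
// never expires while the long-running work is in flight.
var cts = new CancellationTokenSource();
var renewal = Task.Run(async () =>
{
    while (!cts.Token.IsCancellationRequested)
    {
        await Task.Delay(TimeSpan.FromMinutes(4), cts.Token);
        await message.RenewLockAsync();
    }
});

try
{
    await DoLongRunningWorkAsync(message); // placeholder for the real job
    message.Complete();
}
finally
{
    cts.Cancel(); // stops the renewal loop
}
```

Depending on your SDK version, the host may be able to do this for you: the ServiceBus WebJobs configuration exposes OnMessageOptions, whose AutoRenewTimeout keeps renewing the lock automatically.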
I have a WebJob which gets triggered when a user uploads a file to the blob storage - it is triggered by a queue storage message which is created once the upload is complete.
Depending on the purpose of the file, it will post messages to other queues to trigger processing jobs.
Some of these jobs are time critical, and run relatively quickly. In one case the processing takes about three seconds, and the user is waiting for the result.
However, because the minimum queue polling interval is 2 seconds, the time the user must wait for the two WebJobs to be invoked is generally doubling their wait time.
I tried combining the two WebJobs into one, hoping that when the first handler posts a queue message the corresponding processing handler would be immediately triggered, but in fact it consistently waits two seconds before picking up the message.
My question is, is there a way for me to tell my WebJob to check the queue triggers immediately from within the same WebJob if I know there is a message waiting? Or even better configure it to immediately check the queue triggers if I post to a queue from inside the WebJob?
Or would switching to a service bus queue improve the responsiveness to new messages?
Update
In the docs about using blob triggers, it says:
There is an exception for blobs that you create by using the Blob attribute. When the WebJobs SDK creates a new blob, it passes the new blob immediately to any matching BlobTrigger functions. Therefore if you have a chain of blob inputs and outputs, the SDK can process them efficiently. But if you want low latency running your blob processing functions for blobs that are created or updated by other means, we recommend using QueueTrigger rather than BlobTrigger.
http://azure.microsoft.com/en-gb/documentation/articles/websites-dotnet-webjobs-sdk-storage-blobs-how-to/
However, there is no mention of anything similar for queues, meaning that if you need really low latency in this scenario then blobs are better than queues, which seems wrong.
Update 2
I ended up working around this by pulling the orchestrating code out of the first WebJob and into the service layer of the application, and removing that WebJob. It was fast-running anyway, so separating it into its own WebJob was perhaps overkill. This means only the processing WebJob has to be triggered after the file upload.
Currently 2 seconds is the minimum time it will take for the SDK to poll for a new message. The SDK does exponential back-off polling, so you can configure MaxPollingInterval to keep the polling interval at 2 seconds always:
config.Queues.MaxPollingInterval = TimeSpan.FromSeconds(2);
For more details please see http://azure.microsoft.com/en-us/documentation/articles/websites-dotnet-webjobs-sdk-storage-queues-how-to/#config
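For context, here is a sketch of where that setting lives in a WebJobs SDK v1-style Program.Main; BatchSize is shown only to illustrate the related knobs, and both values are examples:

```csharp
var config = new JobHostConfiguration();

// Floor the exponential back-off: never wait more than 2 s between polls.
config.Queues.MaxPollingInterval = TimeSpan.FromSeconds(2);

// Messages fetched per poll (default 16, maximum 32).
config.Queues.BatchSize = 16;

var host = new JobHost(config);
host.RunAndBlock();
```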