I have a continuous Azure WebJob that is set to 'Always On'.
This WebJob is supposed to handle new messages being added to a storage queue.
I'm wondering what happens if, for some reason, the WebJob stops working while it is processing a queue trigger. In that case I would lose the message from the queue, and it wouldn't go to the poison queue.
How can I work around this?
I'm wondering what happens if, for some reason, the WebJob stops working while it is processing a queue trigger.
Firstly, if the WebJob exits while processing a message, you don't have to worry about losing it. When the WebJob picks a message up for processing, the message is hidden in the queue for a certain amount of time. If your WebJob shuts down while processing the message, the message becomes visible again once that time elapses, and it will be picked up again and run against the updated code. For the details, you could refer to this answer.
As for your requirement of being notified when the job is shutting down: you can use Graceful Shutdown and listen to the CancellationToken that is passed to your triggered function. Check its IsCancellationRequested property to detect that the WebJob is about to shut down. Here is sample code using the CancellationToken.
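(This is a minimal sketch, assuming a queue-triggered WebJobs SDK function; the "orders" queue name and the GetWorkItems/ProcessStepAsync helpers are illustrative, not from the original post.)

using System.IO;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static async Task ProcessQueueMessage(
    [QueueTrigger("orders")] string message,
    CancellationToken cancellationToken,
    TextWriter log)
{
    foreach (var step in GetWorkItems(message))
    {
        // If a graceful shutdown has been requested, throwing here fails the
        // invocation, so the queue message is not deleted and will become
        // visible again for the next run (against the updated code).
        cancellationToken.ThrowIfCancellationRequested();

        await ProcessStepAsync(step, cancellationToken);
    }

    await log.WriteLineAsync("Message processed completely.");
}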
Related
I have a queue-triggered Azure Function which is triggered whenever a queue message appears in Azure Queue Storage.
My workflow is:
A user may schedule a task which needs to run after a few days at a particular time (the execute-at time).
So I put a message in the Azure queue with a visibility timeout equal to the time difference between the current time and the execute-at time of that task.
When the message becomes visible in the queue, it gets picked up by the Azure Function and executed.
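(For reference, a minimal sketch of the enqueue step described above, assuming the Azure.Storage.Queues client; the "scheduled-tasks" queue name and the connectionString/executeAt/taskPayload variables are illustrative.)

using System;
using Azure.Storage.Queues;

QueueClient queue = new QueueClient(connectionString, "scheduled-tasks");
await queue.CreateIfNotExistsAsync();

// Hide the message until the task's execute-at time.
TimeSpan delay = executeAt - DateTimeOffset.UtcNow;

// timeToLive defaults to 7 days; -1 second means the message never expires,
// which is worth checking when the visibility timeout gets close to that window.
await queue.SendMessageAsync(taskPayload, visibilityTimeout: delay, timeToLive: TimeSpan.FromSeconds(-1));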
I'm facing an intermittent issue where a queue message is supposed to become visible after a few days (less than 7 days), but somehow it gets dropped/removed from the queue. So it is never picked up by the function, and the task still shows as pending.
I've gone through all the articles I could find on the internet and didn't find a solution to my problem.
The worst part is that it works fine for a few weeks, but every now and then the invisible queue messages suddenly disappear. (I use Azure Storage Explorer to check the number of invisible messages.)
We are working with Azure Functions that are triggered on every message in a Service Bus queue. We are trying to solve a problem whereby we need to dynamically disable a function on the function app so that it does not process messages any further, without losing any messages in the process.
We can disable the functions in multiple ways (referring to this link), but the problem remains the same: we are unable to figure out what happens to the function invocations that have already been spawned when we disable the function.
Since the function is Service Bus triggered, there is always a possibility that the function is processing a message at the moment we disable it. Does the message still get processed, is some sort of cancellation raised, or does the invocation just die with an exception?
It would be great if someone could direct me to some documentation or something. Thanks.
An Azure Service Bus triggered function will already have a lock on the message that is being processed. If the function is terminated and the message was not completed or otherwise dispositioned, the lock will expire and the message will reappear on the queue. That's because the Functions runtime receives messages in PeekLock mode.
One factor to consider is the queue's MaxDeliveryCount. If a function is terminated upon the last processing attempt, the message will be dead-lettered as all processing attempts have been exhausted. That's a standard Azure Service Bus behaviour.
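(For illustration, a minimal sketch of such a trigger; the "orders" queue, the connection setting name and the log message are assumptions rather than code from the question.)

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

[FunctionName("ProcessOrder")]
public static void ProcessOrder(
    [ServiceBusTrigger("orders", Connection = "ServiceBusConnection")] string message,
    int deliveryCount,   // how many delivery attempts this message has had so far
    ILogger log)
{
    log.LogInformation("Delivery attempt {Count}: {Body}", deliveryCount, message);

    // The message is held under a PeekLock while this method runs. It is only
    // completed when the method returns successfully; if the host is terminated
    // or an exception is thrown, the lock expires or the message is abandoned,
    // and it reappears until MaxDeliveryCount is exhausted and it is dead-lettered.
}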
I am looking into setting up a WebJob trigger to read messages from a Service Bus queue. What would be the best practice for implementing retry logic in case of any errors when calling the downstream systems?
Would we be able to throw an exception so that the message is not deleted from the queue and is retried after a certain time period?
Appreciate your feedback.
You don't need to define retry logic explicitly. When a message is dequeued from Service Bus, it becomes invisible in the queue for a certain time period (the lock duration, 30 seconds by default; you can configure it). You try to process the message; if it succeeds, you simply call BrokeredMessage.CompleteAsync, which marks the message as completed. If you have a problem downstream, you can abandon the message by calling BrokeredMessage.AbandonAsync. This unlocks the message and it appears back in the queue, where it will be picked up by a worker and processed again, until processing succeeds or the maximum retry limit is reached, after which the message is sent to the dead-letter queue.
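(A minimal sketch of that flow with the older Microsoft.ServiceBus.Messaging client, which is where BrokeredMessage lives; the "work-items" queue and the ProcessDownstreamAsync call are illustrative assumptions.)

using System;
using Microsoft.ServiceBus.Messaging;

var client = QueueClient.CreateFromConnectionString(connectionString, "work-items", ReceiveMode.PeekLock);
var options = new OnMessageOptions { AutoComplete = false };

client.OnMessageAsync(async message =>
{
    try
    {
        await ProcessDownstreamAsync(message);  // hypothetical downstream call
        await message.CompleteAsync();          // success: remove the message from the queue
    }
    catch (Exception)
    {
        // Downstream problem: release the lock so the message reappears and is
        // retried, until MaxDeliveryCount is reached and it is dead-lettered.
        await message.AbandonAsync();
    }
}, options);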
I have a console app that pushes a lot of messages in parallel to an Azure Storage queue. There's a continuously running, queue-triggered Azure WebJob that is invoked whenever a message gets added to that queue. In one of the scenarios, I added 300 items to the queue, 100 each from three different threads. Since there are only 300 messages in the queue, the WebJob should ideally be invoked only 300 times, but I can see it has been invoked 308 times. What could be the reason for this?
Also note that the additional trigger count is not predictable. Sometimes it may be 306 or 310 etc.
I have tried posting messages sequentially by removing Parallel.Invoke to check whether it's related to parallel processing, but the issue is still there. I am debugging the issue by running the WebJob project locally.
It is possible that exceptions occurred during processing of some messages. If a message fails to process successfully, it will be retried. Each message has a dequeueCount. If memory serves, the default for a Queue Trigger is five attempts.
Try running locally and watching the output window. Or, you can check the webjob logs on your live Azure instance to see if a message was retried.
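(A minimal sketch of watching the retry count, assuming the WebJobs SDK's ability to bind an int dequeueCount parameter; the "work-items" queue name is illustrative.)

using System.IO;
using Microsoft.Azure.WebJobs;

public static void ProcessQueueMessage(
    [QueueTrigger("work-items")] string message,
    int dequeueCount,   // how many times this message has been picked up
    TextWriter log)
{
    // A dequeueCount greater than 1 means an earlier attempt failed and the
    // message became visible again, producing an extra invocation.
    log.WriteLine($"Dequeue count {dequeueCount}: {message}");

    // ... processing; an unhandled exception causes a retry, up to
    // MaxDequeueCount (5 by default), after which the message is moved
    // to the poison queue.
}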
Subject says it all really :) Say I've got a pretty busy continuous Azure WebJob that is processing from an Azure queue:
public static void ProcessQueue([QueueTrigger("trigger")] Info info)
{ .... }
If I re-deploy the WebJob, I can see that any currently executing job seems to be aborted (I see a "Never Finished" status). Is that job replayed after I release, or is it lost forever?
Also, is there a nice way to make sure that no jobs are running when we deploy WebJobs, or is it up to the developer to code a solution for that (such as a config flag that is checked on every run)?
Thanks
When a WebJob that uses the WebJobs SDK picks up a message from a queue, it acquires it with a 10-minute lease. If the job process dies while processing the message, the lease expires after 10 minutes and the message goes back into the queue. If the WebJob is restarted, it will pick up that message again. The message is only deleted if the function completes successfully.
Therefore, if the job dies and restarts immediately, as in the case of a redeploy, it might take up to 10 minutes for the message to be picked up again. Also, because of this, it is recommended to either save state yourself or make the function idempotent.
In the WebJobs Dashboard you will see two invocations for the same message. One of them will be marked as Never Finished because the function execution never completed.
Unfortunately, there is no out-of-the-box solution to prevent jobs from running during a deploy. You would have to create your own logic that notifies (through a queue message?) that a deploy is about to start and then aborts the host. The host abort will wait for any existing function to stop and will prevent new ones from starting. However, this is a very tricky situation if you have multiple instances of the WebJob, because only one of them will get the notification.
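(To make that concrete, a rough sketch with the classic WebJobs SDK JobHost; the "deploy-notifications" control queue and the WaitForDeployNotification helper are hypothetical and would need your own implementation.)

using Microsoft.Azure.WebJobs;

static void Main()
{
    var config = new JobHostConfiguration();
    var host = new JobHost(config);
    host.Start();

    // Hypothetical helper: block until a "deploy is about to start" message
    // arrives on a control queue (each running instance would need to see it).
    WaitForDeployNotification("deploy-notifications");

    // Stop() waits for in-flight functions to finish and prevents new ones
    // from starting, draining this instance before the redeploy.
    host.Stop();
}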