My WebJob runs on 10 instances, grabs 10 messages off the queue, and processes them, as far as I can tell from my own logs. But the WebJob log never shows it finishing, and the status stays "Running" even though it should be done. The job runs for a while, about 45-60 minutes, since I'm syncing a ton of data for each call. I checked Process Explorer and the thread says "Running", but when I look at the details I see the following:
Process Explorer Example Here
I'm not sure what to do to make the job change its status to "Success" and move on to the next item in the queue.
Another related issue: I'm using a ServiceBusTrigger, but since the call takes more than 5 minutes to complete, the next instance of the job picks up the same item from the queue again, so then I have two processes working on the same message. This keeps happening every 5 minutes until I max out my available instance count, which is 10. Is there a way to stop this from happening? It may be related to the issue above.
In order to fix this, I had to add the following:
public async Task SyncTest([ServiceBusTrigger("syncqueue")] BrokeredMessage message, TextWriter log)
{
    // ... long-running sync work ...

    // Explicitly complete the message so it is removed from the queue instead of
    // having its lock expire and the message being delivered to another instance.
    message.Complete();
}
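For what it's worth, an alternative (or complement) to completing the message manually is to tell the SDK to keep renewing the message lock while the function runs, so the message does not reappear on the queue after the default lock duration expires. This is only a rough sketch against the WebJobs SDK 2.x JobHostConfiguration used with BrokeredMessage; the 2-hour value is just an example sized for a 45-60 minute job.

var config = new JobHostConfiguration();
config.UseServiceBus(new ServiceBusConfiguration
{
    MessageOptions = new Microsoft.ServiceBus.Messaging.OnMessageOptions
    {
        // Keep renewing the message lock for up to 2 hours so a 45-60 minute
        // sync does not cause the same message to be delivered to another instance.
        AutoRenewTimeout = TimeSpan.FromHours(2),
        MaxConcurrentCalls = 1
    }
});
var host = new JobHost(config);
host.RunAndBlock();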
I have a queue-triggered Azure Function that is triggered whenever a message appears in Azure Queue Storage.
My workflow is:
A user may schedule a task which needs to run after a few days, at a particular time (the "execute at" time).
So I put a message in the Azure queue with a visibility timeout equal to the difference between the current time and the task's "execute at" time (see the sketch below).
When the message becomes visible in the queue, it is picked up by the Azure Function and executed.
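For context, the delayed enqueue looks roughly like this. A minimal sketch using the Azure.Storage.Queues SDK; the queue name, connection string handling, and the explicit time-to-live are illustrative, not taken from the original post.

using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;

public static class DelayedEnqueue
{
    public static async Task ScheduleTaskAsync(string connectionString, string payload, DateTimeOffset executeAt)
    {
        var queue = new QueueClient(connectionString, "scheduled-tasks");   // queue name is a placeholder
        await queue.CreateIfNotExistsAsync();

        // The message stays invisible until executeAt. Note the default message
        // time-to-live is 7 days; passing -1 seconds makes the message never expire,
        // which matters if the delay gets close to 7 days.
        TimeSpan delay = executeAt - DateTimeOffset.UtcNow;
        await queue.SendMessageAsync(payload, visibilityTimeout: delay, timeToLive: TimeSpan.FromSeconds(-1));
    }
}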
I'm facing an intermittent issue where a queue message that was supposed to become visible after a few days (< 7 days) somehow got dropped/removed from the queue, so it was never picked up by the function and the task still shows as pending.
I've gone through all the articles I could find on the internet and haven't found a solution to my problem.
The worst part is that it works fine for a few weeks, but every now and then the invisible queue messages suddenly disappear (I use Azure Storage Explorer to check the number of invisible messages).
I have a WebJob in Azure, hosted on an App Service that isn't used for anything else. I'm currently deploying the WebJob from Visual Studio, but that will change in the future as it's not yet in production. It's a .NET Core 3.1 WebJob that compiles to an EXE, but that shouldn't matter for this question (and I'm aware of Azure Functions, but they are also not part of my question).
The WebJob is a continuous WebJob triggered by a queue. I have set it up to process messages in batches of 10. I have looked online, but only found unclear answers.
My question is: let's say I have 3 jobs running, and then I deploy a new version of the EXE. The deployment itself seems to work without problems, but what happens to the jobs that are already running? Will they run to completion, or will they fail and stop? I haven't managed to sort that out, so I wanted to ask here in case someone has helpful experience with this.
My queue-related config looks like this, in case it helps:
.ConfigureWebJobs(b =>
{
    b.AddAzureStorageCoreServices();
    b.AddAzureStorage(a =>
    {
        a.BatchSize = 10;                                // up to 10 messages are fetched and processed concurrently
        a.NewBatchThreshold = 5;                         // the next batch is fetched once 5 or fewer messages are still in flight
        a.MaxDequeueCount = 1;                           // a message goes to the poison queue after a single failed attempt
        a.MaxPollingInterval = TimeSpan.FromSeconds(20); // wait at most 20 seconds between polls when the queue is idle
    });
})
Thank you!
But what happens to the jobs that are running? Will they continue running to the end? Or will it fail and stop?
When a WebJob that uses the WebJobs SDK picks up a message from a queue, it acquires it with a 10-minute lease. If the job process dies while processing the message, the lease expires after 10 minutes and the message goes back onto the queue. If the WebJob is restarted, it will pick up that message again. The message is only deleted if the function completes successfully.
Therefore, if the job dies and restarts immediately, as in the case of a redeploy, it might take up to 10 minutes to pick up the message again. Also, because of this, it is recommended to either save state yourself or make the function idempotent (see the sketch below).
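To illustrate the idempotency advice, here is a minimal sketch of a queue-triggered function that records processed message IDs before doing its work; the IProcessedStore interface, the message shape, and the queue name are hypothetical, not part of the original posts.

using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public interface IProcessedStore           // hypothetical de-duplication store (e.g. a table)
{
    Task<bool> ExistsAsync(string id);
    Task MarkDoneAsync(string id);
}

public class SyncMessage
{
    public string Id { get; set; }         // unique id assigned by the sender
    public string Payload { get; set; }
}

public class Functions
{
    private readonly IProcessedStore _processed;

    public Functions(IProcessedStore processed) => _processed = processed;

    public async Task ProcessQueue([QueueTrigger("trigger")] SyncMessage message, TextWriter log)
    {
        // If a lease expired and the message was redelivered, skip work already done.
        if (await _processed.ExistsAsync(message.Id))
        {
            await log.WriteLineAsync($"Message {message.Id} already processed, skipping.");
            return;
        }

        // ... actual work, written so that repeating it is harmless ...

        await _processed.MarkDoneAsync(message.Id);
    }
}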
In the WebJobs Dashboard you will see two invocations for the same message. One of them will be marked as Never Finished because the function execution never completed.
Unfortunately, there is no out-of-the-box solution to prevent jobs from running during a deploy. You would have to create your own logic that notifies the job (through a queue message, perhaps) that a deploy is about to start and then aborts the host. The host abort will wait for any existing function to stop and will prevent new ones from starting. However, this is a tricky situation if you have multiple instances of the WebJob, because only one of them will get the notification.
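As a rough illustration of the "notify the job and abort the host" idea with the generic-host setup shown above: a function listening on a hypothetical control queue could ask the host to shut down, which stops the listeners and lets in-flight invocations finish. The queue name and the overall approach are a sketch of the suggestion, not an established pattern.

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Hosting;

public class DeployFunctions
{
    private readonly IHostApplicationLifetime _lifetime;

    public DeployFunctions(IHostApplicationLifetime lifetime) => _lifetime = lifetime;

    // Hypothetical control queue: the deployment pipeline drops a message here
    // just before pushing the new EXE.
    public Task OnDeployNotification([QueueTrigger("deploy-notifications")] string notification)
    {
        // Begins a graceful shutdown: listeners stop picking up new messages and
        // the host waits for running functions before the process exits.
        _lifetime.StopApplication();
        return Task.CompletedTask;
    }
}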
I have a console app that pushes a lot of messages in parallel to an Azure Storage queue. A continuous triggered Azure WebJob is invoked whenever a message is added to that queue. In one scenario I added 300 items to the queue, 100 each from three different threads. Since there are only 300 messages in the queue, the WebJob should ideally be invoked only 300 times, but I could see it was invoked 308 times. What could be the reason for this?
Also note that the number of extra invocations is not predictable; sometimes it's 306, sometimes 310, and so on.
I have tried posting the messages sequentially, removing Parallel.Invoke, to check whether it's related to parallel processing, but the issue is still there. I am debugging by running the WebJob project locally.
It is possible that exceptions occurred while processing some of the messages. If a message fails to process successfully, it is retried. Each message has a dequeueCount; if memory serves, the default for a queue trigger is five attempts.
Try running locally and watching the output window, or check the WebJob logs on your live Azure instance to see whether a message was retried.
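One way to confirm it is retries is to bind the message's dequeue count in the function and log it. A minimal sketch, assuming a WebJobs storage queue trigger; the queue name is just an example.

using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // dequeueCount is provided by the queue trigger binding: it is 1 on the first
    // delivery and higher when a message is retried after a failure or an expired
    // visibility timeout.
    public static void ProcessQueue(
        [QueueTrigger("trigger")] string message,
        int dequeueCount,
        TextWriter log)
    {
        if (dequeueCount > 1)
        {
            log.WriteLine($"Message is being retried (attempt {dequeueCount}): {message}");
        }

        // ... normal processing ...
    }
}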
Something strange is happening in one of my WebJobs and I cannot find what's wrong.
This WebJob is configured with a ServiceBusTrigger, so a function runs as soon as a message is received on a particular Service Bus queue. The function has an execution time of minutes or even hours, so I configured the JobHostConfiguration like this:
config.UseServiceBus(new ServiceBusConfiguration()
{
    MessageOptions = new Microsoft.ServiceBus.Messaging.OnMessageOptions()
    {
        MaxConcurrentCalls = 3,                   // at most 3 messages processed in parallel per instance
        AutoRenewTimeout = new TimeSpan(4, 0, 0), // keep renewing the message lock for up to 4 hours
        AutoComplete = true                       // complete the message automatically when the function returns
    }
});
If I understood correctly, this should guarantee that:
Each function is executed at most three times in parallel per instance (one execution per message).
The lock on the message is kept for up to 4 hours, which prevents messages from being requeued as long as processing takes less than 4 hours.
As soon as the function completes, the message is removed from the queue (even though that should be the default behavior anyway).
These messages contain jobs that must be executed for different users. In order to run them in parallel for different users, but as a singleton per user, inside the function I take a lease (a different one for each user) and keep it for 60 seconds, renewing it every 30 seconds with a background Task.
This seems to work: jobs are executed in parallel for different customers, and requests for the same user are serialized.
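For reference, the per-user lock described above could look roughly like the following. This is only a sketch that assumes the lease is a blob lease taken with the Azure.Storage.Blobs SDK; the original post doesn't say which kind of lease is used, and the blob naming is made up.

using System;
using System.Threading;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

public static class UserLock
{
    public static async Task RunAsSingletonAsync(BlobContainerClient container, string userId, Func<Task> job)
    {
        // One zero-length blob per user acts as the lock object.
        BlobClient blob = container.GetBlobClient($"locks/{userId}");
        if (!(await blob.ExistsAsync()).Value)
        {
            using var empty = new System.IO.MemoryStream();
            await blob.UploadAsync(empty);
        }

        // A 60-second lease; Acquire throws (409 Conflict) if another instance
        // already holds it, so the caller can retry the job later.
        BlobLeaseClient lease = blob.GetBlobLeaseClient();
        await lease.AcquireAsync(TimeSpan.FromSeconds(60));

        using var cts = new CancellationTokenSource();
        // Renew the lease every 30 seconds in the background while the job runs.
        Task renewal = Task.Run(async () =>
        {
            try
            {
                while (true)
                {
                    await Task.Delay(TimeSpan.FromSeconds(30), cts.Token);
                    await lease.RenewAsync();
                }
            }
            catch (OperationCanceledException) { /* job finished, stop renewing */ }
        });

        try
        {
            await job();
        }
        finally
        {
            cts.Cancel();
            await renewal;
            await lease.ReleaseAsync();
        }
    }
}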
Now, sometimes the WebJob simply stops triggering the function, even though the queue is full of messages, and I don't understand why.
As soon as I restart that webjob, everything runs as before.
Any ideas on why this happens?
What I noticed is that the App Service was (wrongly) in a stopped state, even though the WebJobs were running. Could that have been the problem?
The subject says it all, really :) Say I've got a pretty busy continuous Azure WebJob that is processing messages from an Azure Storage queue:
public static void ProcessQueue([QueueTrigger("trigger")] Info info)
{ .... }
If I redeploy the WebJob, I can see that any currently executing invocation seems to be aborted (it shows a "Never Finished" status). Is that job replayed after I release, or is it lost forever?
Also, is there a nice way to make sure that no jobs are running when we deploy WebJobs, or is it up to the developer to code a solution for that (such as a config flag that is checked on every run)?
Thanks
When a WebJob that uses the WebJobs SDK picks up a message from a queue, it acquires it with a 10-minute lease. If the job process dies while processing the message, the lease expires after 10 minutes and the message goes back onto the queue. If the WebJob is restarted, it will pick up that message again. The message is only deleted if the function completes successfully.
Therefore, if the job dies and restarts immediately, as in the case of a redeploy, it might take up to 10 minutes to pick up the message again. Also, because of this, it is recommended to either save state yourself or make the function idempotent.
In the WebJobs Dashboard you will see two invocations for the same message. One of them will be marked as Never Finished because the function execution never completed.
Unfortunately, there is no out-of-the-box solution to prevent jobs from running during a deploy. You would have to create your own logic that notifies the job (through a queue message, perhaps) that a deploy is about to start and then aborts the host. The host abort will wait for any existing function to stop and will prevent new ones from starting. However, this is a tricky situation if you have multiple instances of the WebJob, because only one of them will get the notification.
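One thing that does help with in-flight work during a redeploy is binding the host's CancellationToken in the function, so a long-running invocation can notice the shutdown and checkpoint or bail out cleanly instead of just being killed. A minimal sketch, assuming the WebJobs SDK queue trigger; it does not block the deploy itself, it only gives the function a chance to react.

using System.IO;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public class Functions
{
    public static async Task ProcessQueue(
        [QueueTrigger("trigger")] string info,
        TextWriter log,
        CancellationToken cancellationToken)   // signaled when the WebJob host shuts down (e.g. during a deploy)
    {
        for (int step = 0; step < 100; step++)
        {
            if (cancellationToken.IsCancellationRequested)
            {
                await log.WriteLineAsync($"Shutdown requested, stopping at step {step}.");
                // Throwing marks the invocation as failed, so the message is not
                // completed and will be retried once the new version is running.
                cancellationToken.ThrowIfCancellationRequested();
            }

            // ... one unit of work ...
            await Task.Delay(1000, CancellationToken.None);
        }
    }
}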