Azure WebJob deployment

I have a WebJob in Azure, hosted on an App Service that isn't used for anything else. I currently deploy the WebJob from Visual Studio, but that will change in the future as it's not yet in production. It's a .NET Core 3.1 console application that compiles to an EXE, but that shouldn't matter for this question (and I'm aware of Azure Functions, but they are not part of my question either).
The WebJob is a continuous WebJob triggered by a queue, and I have set it up to process batches of 10 messages concurrently. I have looked online, but the answers I found were unclear.
My question is: let's say I have 3 jobs running, and then I deploy a new version of the EXE file. The deployment itself seems to work without problems, but what happens to the jobs that are already running? Will they continue running to the end, or will they fail and stop? I haven't quite managed to sort that out, so I'm asking here in case someone has helpful experience with this.
My queue related config is like this, if that's helpful:
.ConfigureWebJobs(b =>
{
    b.AddAzureStorageCoreServices();
    b.AddAzureStorage(a =>
    {
        a.BatchSize = 10;                                 // process up to 10 queue messages per batch
        a.NewBatchThreshold = 5;                          // fetch the next batch once 5 or fewer are still in flight
        a.MaxDequeueCount = 1;                            // a message goes to the poison queue after 1 failed attempt
        a.MaxPollingInterval = TimeSpan.FromSeconds(20);  // poll the queue at most every 20 seconds when idle
    });
})
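For context, a ConfigureWebJobs block like the one above typically sits inside the generic host setup in Program.cs. A minimal sketch along these lines (not the exact code from the question; it assumes WebJobs SDK 3.x with the Microsoft.Azure.WebJobs.Extensions.Storage package):

using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

class Program
{
    static async Task Main()
    {
        var builder = new HostBuilder()
            .ConfigureWebJobs(b =>
            {
                // ... the AddAzureStorageCoreServices / AddAzureStorage block shown above ...
            });

        using (var host = builder.Build())
        {
            // Blocks until the host shuts down, e.g. when the WebJob is stopped or redeployed.
            await host.RunAsync();
        }
    }
}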
Thank you!

But what happens to the jobs that are running? Will they continue running to the end? Or will it fail and stop?
When a WebJob that uses the WebJobs SDK picks up a message from a queue, it acquires it with a 10-minute lease. If the job process dies while processing the message, the lease expires after 10 minutes and the message goes back into the queue. If the WebJob is restarted, it will pick up that message again. The message is only deleted if the function completes successfully.
Therefore, if the job dies and restarts immediately, as in the case of a redeploy, it might take up to 10 minutes for the message to be picked up again. Because of this, it is recommended to either save state yourself or make the function idempotent.
In the WebJobs Dashboard you will see two invocations for the same message. One of them will be marked as Never Finished because the function execution never completed.
Unfortunately, there is no out-of-the-box way to prevent jobs from running during a deploy. You would have to create your own logic that notifies (through a queue message?) that a deploy is about to start and then aborts the host. The host abort will wait for any running function to stop and will prevent new ones from starting. However, this gets tricky if you have multiple instances of the WebJob, because only one of them will get the notification.
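As a rough illustration of the idempotency recommendation above, the function can record which messages it has already handled and skip redelivered copies. A minimal sketch, assuming WebJobs SDK 3.x; the queue name, message type, and marker store are illustrative, not part of the question's code:

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

// Illustrative message type; the real one would match whatever the queue carries.
public class OrderMessage
{
    public string Id { get; set; }
}

// Hypothetical abstraction over a durable marker store (table storage, blob, database, ...).
public interface IProcessedMarkerStore
{
    Task<bool> ExistsAsync(string id);
    Task MarkProcessedAsync(string id);
}

public class Functions
{
    private readonly IProcessedMarkerStore _markers;

    // The marker store implementation would be registered via ConfigureServices in the host setup.
    public Functions(IProcessedMarkerStore markers) => _markers = markers;

    public async Task ProcessOrder([QueueTrigger("orders")] OrderMessage order, ILogger log)
    {
        // A redelivered copy of the same message (for example after a redeploy)
        // is recognized by a stable ID carried in the message and skipped.
        if (await _markers.ExistsAsync(order.Id))
        {
            log.LogInformation("Message {Id} already processed; skipping duplicate.", order.Id);
            return;
        }

        // ... the actual (side-effecting) work goes here ...

        await _markers.MarkProcessedAsync(order.Id);  // record completion last
    }
}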

Related

Will webjob scale-in kill "busy" instances?

Let's say I have a WebJob that consumes messages from a storage queue. I am planning a scale rule to scale out when my queue has too many messages waiting to be picked up.
Some of the messages will take a long time to process.
My question is: if scale-out happens and then the scale-in rule kicks in (when the number of messages in the queue decreases), will Azure wait until the messages have finished processing before killing the instances, or will it just kill the instances right away?
Update 01/20:
For auto-scaling, it doesn't matter; Azure will wait for the instance to finish its job.
But for manual scale-in, it does not wait; it kills the instance right away.
Original:
For manual scale-in, no, it does not kill the busy instance. Azure Web Apps / WebJobs use a load-balancing strategy across multiple instances, so the workload is balanced and the queue messages are routed to the 3 instances. That means the 3 instances work in parallel.
I didn't find any official documentation about this, but it's easy to test.
I set up a queue-triggered WebJob and uploaded it as a continuous WebJob in Azure. After scaling out, you can see that all 3 instances are working.
Note: instance id 940246 is the original one; the other 2 instance ids, 4c7ed0 and f3753c, appeared after scaling out.
In the Kudu site, all 3 instances are listed, and the WebJob logs show each of them picking up messages (screenshots omitted).
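One way to reproduce this kind of test is to log which instance picks up each message. App Service exposes the instance identity through the WEBSITE_INSTANCE_ID environment variable; a minimal sketch, with an illustrative queue name:

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class ScaleTestFunctions
{
    // "scale-test" is an illustrative queue name, not from the answer above.
    public void LogInstance([QueueTrigger("scale-test")] string message, ILogger log)
    {
        // WEBSITE_INSTANCE_ID identifies the App Service instance the WebJob is running on,
        // so after scaling out you can see messages being spread across the instances.
        var instanceId = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID") ?? "local";
        log.LogInformation("Instance {Instance} processed message: {Message}", instanceId, message);
    }
}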

Azure Continuous WebJob keeps running

My WebJob runs on 10 instances, grabs 10 messages off the queue and processes them, as far as I can tell from my own logs, but the WebJob log never shows it finishing and the status stays "Running" even though it should be finished. The job does run for a while, about 45-60 minutes, since I'm syncing a ton of data on each call. I checked the process explorer and the thread says "Running", but when I look at the details I see the following:
(Process Explorer screenshot omitted)
I'm not sure what to do to make the job change its status to "Success" and continue on with the next item in the queue.
Another related issue: I'm using a ServiceBusTrigger, but since the call takes more than 5 minutes to complete, the next instance of the job picks up the same item from the queue again, so I end up with 2 processes running the same message. It keeps doing this every 5 minutes until I max out my available instance count, which is 10. Is there a way to stop this from happening? It may be related to the issue above.
In order to fix this, I had to add the following:
public async Task SyncTest([ServiceBusTrigger("syncqueue")] BrokeredMessage message, TextWriter log)
{
    // Complete the message up front so its peek-lock doesn't expire (and the message
    // get redelivered to another instance) while the long-running sync is still executing.
    message.Complete();

    // ... long-running sync work goes here ...
}
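Completing the message up front works, but it also means that a failure later in the sync will not put the message back on the queue. If at-least-once processing matters more, another option (assuming WebJobs SDK 2.x with the older WindowsAzure.ServiceBus types, which the BrokeredMessage/TextWriter signature suggests) is to let the SDK keep renewing the peek-lock for the duration of the long-running work, roughly like this:

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.ServiceBus.Messaging;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();

        var serviceBusConfig = new ServiceBusConfiguration
        {
            MessageOptions = new OnMessageOptions
            {
                // Keep renewing the peek-lock for up to an hour so a 45-60 minute sync
                // is not redelivered to another instance while it is still being processed.
                AutoRenewTimeout = TimeSpan.FromMinutes(60),
                MaxConcurrentCalls = 1
            }
        };

        config.UseServiceBus(serviceBusConfig);

        new JobHost(config).RunAndBlock();
    }
}

The exact property names depend on the SDK version, so treat this as a sketch rather than a drop-in fix.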

Queue trigger in Azure apparently not clearing up after successful function run

I am very new to Azure, so I am not sure if my question is stated correctly, but I will do my best:
I have an app that sends data in the form (1.bin, 2.bin, 3.bin...), always in consecutive order, to a blob input container. When this happens it triggers an Azure Function via a QueueTrigger, and the output of the function (1output.bin, 2output.bin, 3output.bin...) is stored in a blob output container.
When the function crashes, Azure retries it 5 times before giving up. When it succeeds, it runs just once and that's it.
I am not sure what happened last week, but since then, after each successful run, the function sits idle for about 7 minutes and then starts the process again as if it were the first time. So, for example, the blob container receives 22.bin, the function processes 22.bin and generates 22output.bin, and it is supposed to stop after that, but after seven minutes it is processing 22.bin again.
I don't think it is the app, because each time the app sends data, even if it is the same data, it names it with the next number (in my example 23.bin). But that is not what happens: it just does 22.bin again, as if the trigger queue was not cleared after the successful run, and it keeps doing it over and over until I have to stop the function and make it crash in order to stop it.
Any idea why this is happening and what I can try in order to correct it is greatly appreciated. I am just starting to learn about all this.
One thing that could possibly be happening is that the function execution time is exceeding 5 minutes. Since this is a hard limit on the Consumption plan, the Functions runtime terminates the current execution and restarts the function host; because the execution never completed, the queue message is never deleted and it gets picked up and processed again.
One way to test this would be to create a Function app on a Standard App Service plan instead of the Consumption plan; a Function app on a dedicated plan does not have the same execution time limit. You can also log the function's start and end times to see whether it takes longer than 5 minutes to process a queue message.
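For the timing check suggested above, a rough sketch of a queue-triggered function that logs its start time and elapsed time (the queue name is illustrative):

using System;
using System.Diagnostics;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TimedQueueFunction
{
    // "input-queue" is an illustrative queue name, not from the question above.
    [FunctionName("TimedQueueFunction")]
    public static void Run([QueueTrigger("input-queue")] string item, ILogger log)
    {
        var started = DateTimeOffset.UtcNow;
        var stopwatch = Stopwatch.StartNew();
        log.LogInformation("Started processing {Item} at {Started}", item, started);

        // ... the existing processing that reads the input blob and writes the output blob ...

        stopwatch.Stop();
        log.LogInformation("Finished {Item} after {Elapsed}", item, stopwatch.Elapsed);
    }
}

If the elapsed time approaches 5 minutes, the Consumption plan's default functionTimeout (5 minutes in host.json, which can be raised to 10) is the likely culprit.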

Is Console.ReadKey() fine for an Azure WebJob?

At the risk of asking a stupidly simple question:
I have a console application that uses the ServiceStack framework to listen to a Redis queue. Eventually I want to publish it as a continuous Azure WebJob.
I've seen examples that use:
host.RunAndBlock();
However, I have absolutely zero need to use the Azure WebJobs SDK, so I'm just wondering: is there anything 'wrong' with just using:
Console.ReadKey();
To keep the console program running.
I assume you're talking about a continuous WebJob.
I don't think Console.ReadKey() will work. The call is likely to just blow up in the sandbox that WebJobs run in.
If you are able to do the redis queue listening on the main thread, that would be simplest.
If not, just use an infinite sleep loop in your main, e.g.:
for (;;)
{
    System.Threading.Thread.Sleep(5000);
}
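If you want the loop to exit cleanly when the WebJob is stopped, one possible variation is to watch for the graceful-shutdown file that Kudu announces through the WEBJOBS_SHUTDOWN_FILE environment variable. A sketch, assuming the Redis listener is started on another thread:

using System;
using System.IO;
using System.Threading;

class Program
{
    static void Main()
    {
        // ... start the ServiceStack Redis queue listener here ...

        // Kudu sets WEBJOBS_SHUTDOWN_FILE and creates that file when the WebJob is being
        // stopped; polling for it lets the process exit gracefully instead of being killed.
        var shutdownFile = Environment.GetEnvironmentVariable("WEBJOBS_SHUTDOWN_FILE");

        while (shutdownFile == null || !File.Exists(shutdownFile))
        {
            Thread.Sleep(5000);
        }

        // ... stop the listener / finish any in-flight work before returning ...
    }
}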

What happens to "in-progress" jobs when you deploy a webjob?

Subject says it all really :) Say I've got a pretty busy Azure continuous WebJob that is processing messages from an Azure queue:
public static void ProcessQueue([QueueTrigger("trigger")] Info info)
{ .... }
If I re-deploy the WebJob, I can see that any currently executing job seems to be aborted (I see a "Never Finished" status). Is that job replayed after I release, or is it lost forever?
Also, is there a nice way to make sure that no jobs are running when we deploy WebJobs, or is it up to the developer to code a solution for that (such as a config flag that is checked on every run)?
Thanks
When a WebJob that uses the WebJobs SDK picks up a message from a queue, it acquires it with a 10-minute lease. If the job process dies while processing the message, the lease expires after 10 minutes and the message goes back into the queue. If the WebJob is restarted, it will pick up that message again. The message is only deleted if the function completes successfully.
Therefore, if the job dies and restarts immediately, as in the case of a redeploy, it might take up to 10 minutes for the message to be picked up again. Because of this, it is recommended to either save state yourself or make the function idempotent.
In the WebJobs Dashboard you will see two invocations for the same message. One of them will be marked as Never Finished because the function execution never completed.
Unfortunately, there is no out-of-the-box way to prevent jobs from running during a deploy. You would have to create your own logic that notifies (through a queue message?) that a deploy is about to start and then aborts the host. The host abort will wait for any running function to stop and will prevent new ones from starting. However, this gets tricky if you have multiple instances of the WebJob, because only one of them will get the notification.
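Related to the host waiting for running functions: the WebJobs SDK can also pass a CancellationToken into a triggered function, and that token is signalled when the host shuts down. Long-running work can then stop at a safe point and rely on the redelivery behaviour described above instead of being killed mid-flight. A rough sketch, assuming a recent SDK with ILogger support ("trigger" matches the queue name from the question; the loop body is illustrative):

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class QueueFunctions
{
    public static async Task ProcessQueue(
        [QueueTrigger("trigger")] string info,
        ILogger log,
        CancellationToken cancellationToken)
    {
        foreach (var step in new[] { "step1", "step2", "step3" })  // placeholder for real units of work
        {
            // Throwing on shutdown means the function does not complete successfully, so the
            // message is not deleted and becomes visible again once its lease expires.
            cancellationToken.ThrowIfCancellationRequested();

            await Task.Delay(1000, cancellationToken);  // stands in for one unit of real work
            log.LogInformation("Completed {Step} of {Info}", step, info);
        }
    }
}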

Resources