Azure Function Deployment while Running

Let's say I have an Azure Function named func running. In the middle of a run of func, I deploy some new changes to it. Will func finish the current run and then pick up the new changes, or will the current run just end?

Maybe this can help you:
https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-zero-downtime-deployment
The reliable execution model of Durable Functions requires that orchestrations be deterministic, which creates an additional challenge to consider when you deploy updates. When a deployment contains changes to activity function signatures or orchestrator logic, in-flight orchestration instances fail. This situation is especially a problem for instances of long-running orchestrations, which might represent hours or days of work.
To prevent these failures from happening, you have two options:
Delay your deployment until all running orchestration instances have completed.
Make sure that any running orchestration instances use the existing versions of your functions (side-by-side versioning, sketched below).
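As an illustration of that second option, here is a minimal sketch of side-by-side versioning, assuming Durable Functions for .NET (in-process model); every function name here is hypothetical. The V1 orchestrator and activity stay deployed unchanged so in-flight instances can finish replaying deterministically, while new instances are started against the V2 names.

    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.DurableTask;

    public static class OrderOrchestrations
    {
        // V1 is frozen: in-flight orchestrations keep replaying this exact code.
        [FunctionName("ProcessOrder_V1")]
        public static async Task RunV1([OrchestrationTrigger] IDurableOrchestrationContext context)
        {
            await context.CallActivityAsync("ChargeCard_V1", context.GetInput<string>());
        }

        [FunctionName("ChargeCard_V1")]
        public static Task ChargeV1([ActivityTrigger] string orderId) => Task.CompletedTask;

        // New logic goes only into V2; clients start new instances by this name.
        [FunctionName("ProcessOrder_V2")]
        public static async Task RunV2([OrchestrationTrigger] IDurableOrchestrationContext context)
        {
            await context.CallActivityAsync("ChargeCard_V2", context.GetInput<string>());
        }

        [FunctionName("ChargeCard_V2")]
        public static Task ChargeV2([ActivityTrigger] string orderId) => Task.CompletedTask;
    }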

Related

Two Azure Functions share an In-Proc collection

I have two Azure Functions that I think of as a "Producer-Consumer" pair. One is an HttpTrigger-based function (the producer), which can be fired at random; it writes the input data into a static ConcurrentDictionary. The second is a TimerTrigger-based function (the consumer); it periodically reads data from the same ConcurrentDictionary and then does some processing.
Both functions are in the same .NET project (but in different classes). The in-memory data sharing through the static ConcurrentDictionary works perfectly when I run the application locally; I assume that locally they run under the same process. However, when I deploy these functions to Azure (they are in the same function app resource), I find that data sharing through the static ConcurrentDictionary is not working.
I am curious whether, in Azure, each function gets its own process (which would explain why they cannot share an in-process static collection). If that is the case, what are my options for making these two functions work as a proper producer-consumer pair? Would keeping both functions in the same class help?
The scenario is probably just the opposite of what is described in this post: https://stackoverflow.com/questions/62203987/do-azure-function-from-same-app-service-run-in-same-instance. Unlike the question in that post, I would like both functions to use the same static member of a static class instance.
I am sorry that I cannot experiment much, because deployment is done through an Azure DevOps pipeline and too many check-ins to the repository are inconvenient. As I mentioned, it works well locally, so I don't know how to recreate in my local environment what is happening in Azure so that I can try different options. Is there some configuration setting I am missing?
Don't do that. Use an Azure Storage queue, Event Grid, Service Bus, or something else that is reliable, but don't try to use a shared object. It will fail as soon as a scale-out happens or as soon as one of the processes dies. Think of functions as independent pieces and do not try to go against the framework.
Yes, it might work when you run the functions locally, but then you are running on a single machine and the runtime may use the same process; once deployed, that is no longer true.
If you really don't want to decouple your logic into a fully separated producer and consumer, then write a single function that uses an in-process queue or collection and have that function do the processing. Otherwise, the queue-based version can look like the sketch below.
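A minimal sketch of the queue-based version, assuming the in-process model with the Azure Storage queue bindings (Microsoft.Azure.WebJobs.Extensions.Storage); the queue name workitems is hypothetical:

    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Http;
    using Microsoft.Extensions.Logging;

    public static class ProducerConsumer
    {
        // Producer: enqueue the request body instead of writing to a static dictionary.
        [FunctionName("Producer")]
        public static async Task<IActionResult> Produce(
            [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
            [Queue("workitems")] IAsyncCollector<string> queue)
        {
            string body = await new StreamReader(req.Body).ReadToEndAsync();
            await queue.AddAsync(body);
            return new OkResult();
        }

        // Consumer: the runtime delivers each message reliably, across processes,
        // host recycles, and scaled-out instances.
        [FunctionName("Consumer")]
        public static void Consume([QueueTrigger("workitems")] string message, ILogger log)
        {
            log.LogInformation($"Processing: {message}");
        }
    }

Note that the timer disappears entirely: with a QueueTrigger the consumer runs whenever a message arrives, and the queue, unlike a static collection, survives restarts and scale-out.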

Recommended Azure service to replace Azure functions

We have a service running as an Azure Function (Event and Service Bus triggers) that we feel would be better served by a different model, because it takes a few minutes to run and loads a lot of objects into memory; it seems to load them on every invocation instead of keeping them in memory and thus performing better.
What is the best Azure service to move to, with the following goals in mind?
Easy to move and doesn't need too many code changes.
We have long-term goals of being able to run this on-prem (Kubernetes might help us here).
Appreciate your help.
To achieve the first goal:
Move your Azure Function code into a continuously running WebJob. It has no maximum execution time, and it can run continuously, caching objects in its context.
To achieve the second goal (on-premises):
You need to explain this better, but a WebJob can be run as a console program on-premises, and you can wrap it in a Docker container to move it from on-premises to any cloud. However, if you need to consume messages from an Azure Service Bus, you will need a hybrid on-premises/Azure approach, connecting your local server to the cloud with a VPN or ExpressRoute.
Regards.
There are a couple of ways to solve this issue, each requiring a slightly larger amount of change from where you are.
If you are just trying to separate out the heavy initial load, you can load it once into a Redis Cache instance and then reference it from there.
If you are concerned about how long your worker can run, then WebJobs (as explained above) can work. However, I'd suggest avoiding them, since they are not where Microsoft is putting its resources; rather, look at Durable Functions, where an orchestrator function can drive a worker function (see the sketch after this list). Even here, be careful: since Durable Functions retain history, after running for a very long time the history tables might get too large, so consider programming in something like restarting the orchestrator after, say, 50,000 runs (obviously the number will vary based on your case). Also see this.
If you add the constraint of portability, then you can run this function in a Docker image on an AKS cluster in Azure. This might not work well for durable functions (try it out, who knows :) ), but it will surely work for the worker functions (which would cost you the most compute anyway).
If you want to bring the workloads completely on-premises, then Azure Functions might not be a good choice. You can create an HTTP server using the platform of your choice (Node, Python, C#...) and have it invoke the worker routine. Then you can run this whole setup inside an image on an AKS cluster on-premises, and to the user it looks just like a load-balanced web server :) - You can decide whether to keep the data on Azure or bring it down on-premises as well, but beware of egress costs if you decide to move it out once you've moved it up.
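A minimal sketch of the orchestrator-drives-worker idea from the second point above, assuming Durable Functions for .NET (in-process model); the function names are hypothetical, and ContinueAsNew is one concrete mechanism for the "restart the orchestrator" trick that keeps the history tables small:

    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.DurableTask;

    public static class HeavyWorkOrchestration
    {
        [FunctionName("HeavyWorkOrchestrator")]
        public static async Task Run([OrchestrationTrigger] IDurableOrchestrationContext context)
        {
            var workItem = context.GetInput<string>();

            // The orchestrator drives the long-running worker activity.
            await context.CallActivityAsync("HeavyWorker", workItem);

            // Restart with a fresh history so the history table doesn't grow
            // without bound (the "eternal orchestration" pattern).
            context.ContinueAsNew(workItem);
        }

        [FunctionName("HeavyWorker")]
        public static Task DoWork([ActivityTrigger] string workItem)
        {
            // Hypothetical: load the large object graph and process it here.
            return Task.CompletedTask;
        }
    }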
It appears that the functions are affected by cold starts:
Serverless cold starts within Azure
Upgrading to the Premium plan would move your functions to pre-warmed instances, which should counter the problem you are experiencing:
Pre-warmed instances for Azure Functions
However, if you may eventually want to deploy your functions/triggers on-premises, you should spin them out as microservices and deploy them with containers.
Currently, the fastest way would probably be to deploy the containerized triggers via Azure Container Instances if you don't already have a Kubernetes cluster running. With some tweaking, you can deploy them on-premises later on.
There are a few options:
Move your function app to the Premium plan. But it will not help you much under heavy load and scale-out.
Issue: in that case you may still face cold-startup issues, and the problem will persist under heavy load.
Use Redis Cache; it will resolve most of your issues, since the main concern is the heavy loading (see the caching sketch after this list).
Issue: if your system is multi-tenant, your cache can become heavy over time.
Create small durable micro-functions. This is not quite the answer to your question, since you don't want a lot of changes, but it will resolve most of your issues.
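For the Redis option, a minimal caching sketch assuming StackExchange.Redis; the RedisConnectionString app setting and the key layout are hypothetical:

    using System;
    using StackExchange.Redis;

    public static class HeavyObjectCache
    {
        // One multiplexer per process, reused across invocations on the same instance.
        private static readonly Lazy<ConnectionMultiplexer> Connection =
            new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(
                Environment.GetEnvironmentVariable("RedisConnectionString")));

        public static string GetOrLoad(string key, Func<string> load)
        {
            IDatabase db = Connection.Value.GetDatabase();

            // Serve from the cache when possible ...
            RedisValue cached = db.StringGet(key);
            if (cached.HasValue)
                return cached;

            // ... otherwise pay the expensive load once and cache the result.
            string value = load();
            db.StringSet(key, value, TimeSpan.FromMinutes(30));
            return value;
        }
    }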

How many tasks can I execute inside an Azure Function?

I have code along these lines in my Azure Function:
var taskList = new List<Task>();
foreach (var ins in instances)
{
    // Kick off a deallocation for every instance without awaiting each in turn.
    taskList.Add(ins.DeallocateAsync());
}
// Block until all deallocation tasks have completed.
Task.WaitAll(taskList.ToArray());
And I'm wondering: how many parallel tasks can be spawned inside one Azure Function? Is the host tolerant of this model, or will it kill the process?
@Gaploid I recommend using Durable Functions in such scenarios; they maintain your state for long-running operations.
Also, as Microsoft recommends, Azure Functions should be short-running operations: if you run your function on the Consumption plan, you may find it a bit unresponsive to your calls, because Azure Functions cold-start after a specific idle time, and when the host restarts, your lengthy process is more complicated to boot back up.
More importantly, if you use orchestration, a.k.a. Durable Functions, you may find yourself saving some money, as Microsoft doesn't charge you while the orchestrator is awaiting background tasks.
Specifically, you can run as many threads/tasks as you need if you use the durable approach preferred by Microsoft, typically via the fan-out/fan-in pattern sketched below.
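A minimal fan-out/fan-in sketch, assuming Durable Functions for .NET (in-process model); DeallocateInstance is a hypothetical stand-in for the deallocation call in the question:

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.DurableTask;

    public static class DeallocateAll
    {
        [FunctionName("DeallocateAllOrchestrator")]
        public static async Task Run([OrchestrationTrigger] IDurableOrchestrationContext context)
        {
            var instanceIds = context.GetInput<string[]>();

            // Fan out: schedule one activity per instance. The runtime checkpoints
            // progress, so a host recycle doesn't lose in-flight work.
            var tasks = new List<Task>();
            foreach (var id in instanceIds)
            {
                tasks.Add(context.CallActivityAsync("DeallocateInstance", id));
            }

            // Fan in: resume only when every activity has completed.
            await Task.WhenAll(tasks);
        }

        [FunctionName("DeallocateInstance")]
        public static Task Deallocate([ActivityTrigger] string instanceId)
        {
            // Hypothetical: call the management SDK to deallocate the VM here.
            return Task.CompletedTask;
        }
    }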

Azure Logic apps take hours/days to run Azure Functions

Background
I have a set of logic apps that each call a set of function apps, which run in parallel.
Each logic app is triggered to start at a certain time during the night, with the start times staggered an hour apart.
The Azure functions are written using the async pattern and call external APIs.
Problem
Sometimes the logic apps run fine and complete their execution in a normal time period, and can do so for two or three days in a row.
However, sometimes they take hours or days, forcing me to cancel their run.
Can anybody shed any light on why this might be happening?
Notes
I'm using the latest NuGet packages of the Durable Functions extension.
When debugging, the functions always complete in a timely fashion.
I have noticed that the functions sometimes get stuck in a Pending state.
It appears you have at least two function apps that are configured with the same storage account and task hub name:
AzureConsumptionXXX
AzureComputeXXX
This causes the two function apps to steal messages from each other. If functions in one app do not exist in the other app, then it's very possible for orchestrations to get stuck in a Pending state like this.
The simplest way to mitigate this is to give each function app a unique task hub name, as in the host.json sketch below. Please see the Task Hubs documentation for more information: https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-task-hubs.
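For illustration, a host.json along these lines (Durable Functions 2.x schema) pins a distinct task hub per function app; the name MyUniqueTaskHub is hypothetical:

    {
      "version": "2.0",
      "extensions": {
        "durableTask": {
          // Hypothetical name; give each app sharing the storage account its own value.
          "hubName": "MyUniqueTaskHub"
        }
      }
    }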

Understanding how WebJobs run

I am struggling to answer this question for myself, so can anyone answer it for me? How, exactly, do WebJobs get run? By this I mean: does the Azure framework manage access to the WebJob, and are multiple instances run in separate processes?
I am under the impression that I get 12 or 16 parallel invocations by default. Is this correct? So if three messages are placed on a queue that my WebJob is triggered by, will they all run in parallel?
AFAIK, when you scale out to multiple instances, the WebJobs will run on the instances in parallel, in separate processes. But there are prerequisites for this:
The WebJob should be continuous, not manual/scheduled.
For this to happen correctly, you need to be running in Standard mode and have the Always On setting enabled.
Note: if you use TimerTrigger in your WebJob, it will not scale out. Refer to this article.
Behind the scenes, TimerTrigger uses the Singleton feature of the WebJobs SDK to ensure that only a single instance of your triggered function is running at any given time. When the JobHost starts up, a blob lease (the Singleton Lock) is taken for each of your TimerTrigger functions. This distributed lock ensures that only a single instance of your scheduled function is running at any time.
For more details about the issue, here are two similar posts to refer to: 1 and 2.
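To make the queue-side concurrency concrete, here is a sketch assuming the WebJobs SDK 2.x API; the queue name myqueue is hypothetical. The default BatchSize of 16 is what gives you up to 16 parallel invocations per instance:

    using System.IO;
    using Microsoft.Azure.WebJobs;

    class Program
    {
        static void Main()
        {
            var config = new JobHostConfiguration();

            // Up to 16 queue messages are fetched and processed in parallel per
            // instance by default; set explicitly here for clarity.
            config.Queues.BatchSize = 16;

            var host = new JobHost(config);
            host.RunAndBlock();
        }
    }

    public class Functions
    {
        // Each dequeued message can run concurrently, up to BatchSize per instance;
        // a continuous WebJob scaled out to N instances multiplies that by N.
        public static void ProcessQueueMessage([QueueTrigger("myqueue")] string message, TextWriter log)
        {
            log.WriteLine($"Processed: {message}");
        }
    }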
