Function in web app slower than standard function - Azure

Are functions created as a web app slower on first load than a regular function?
I have an HttpTrigger as a normal function (in a .csx file) and one in a web app, and the first response from the web app one feels slow. Subsequent calls are fine; it feels like the first one just woke it up.
With a normal function I haven't really experienced that.

I'm assuming that by "function in a web app" you mean the pre-compiled model using a web app project; please correct me if I'm wrong.
Cold-start performance will be better with the pre-compiled model, as there is no compilation step. What you may be observing is a comparison between a cold start (for the web-app-based version) and a warm host (for the CSX version): changes to a .csx file do not trigger a host reload, just a recompilation of your function, whereas redeploying a pre-compiled function does.
If you restart both sites and compare first-request performance, you should see better results with the pre-compiled version (results may vary, and the difference will be more noticeable if you have a larger number of functions).
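For reference, a minimal pre-compiled HTTP trigger looks something like the sketch below (the class-library model for in-process Azure Functions; the function name and response text are illustrative):

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class HelloFunction
{
    // Pre-compiled (class library) HTTP trigger: the assembly is built before
    // deployment, so there is no compilation step on the host at cold start.
    [FunctionName("HelloFunction")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
    {
        return new OkObjectResult("Hello from a pre-compiled function");
    }
}
```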

Related

Two Azure Functions share an In-Proc collection

I have two Azure Functions that I think of as a producer and a consumer. One is an HttpTrigger-based function (the producer) which can be fired at any time; it writes the input data into a static ConcurrentDictionary. The second is a timer-triggered function (the consumer); it periodically reads the data from the same ConcurrentDictionary that the producer wrote to and then does some processing.
Both functions are in the same .NET project (but in different classes). The in-memory data sharing through the static ConcurrentDictionary works perfectly when I run the application locally; I assume they then run in the same process. However, when I deploy these functions to Azure (within the same function app resource), the data sharing through the static ConcurrentDictionary stops working.
I am curious whether, once deployed, the two functions each run in their own process (which would explain why they cannot share an in-process static collection). If that is the case, what are my options to make these two functions work as a proper producer-consumer pair? Would keeping both functions in the same class help?
The scenario is roughly the opposite of the one described in this post: https://stackoverflow.com/questions/62203987/do-azure-function-from-same-app-service-run-in-same-instance. Unlike the question there, I would like both functions to use the same static member of a static class.
Unfortunately I cannot experiment much, because deployment goes through an Azure DevOps pipeline and pushing many test check-ins to the repository is inconvenient. As I mentioned, it works well locally, so I don't know how to reproduce in my local environment what happens in Azure in order to try different options. Is there some configuration I am missing?
Don't do that. Use an Azure queue, Event Grid, Service Bus, or something else that is reliable, but don't try to use a shared object. It will fail as soon as a scale-out happens or as soon as one of the processes dies. Think of functions as independent pieces and don't try to go against the framework.
Yes, it might work when you run the functions locally, but there you are running on a single machine and the runtime may use the same process; once deployed, that is no longer true.
If you really don't want to decouple your logic into a fully separated producer and consumer, then write a single function that uses an in-process queue or collection and have that one function deal with the processing. A sketch of the queue-based approach follows below.
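As a hedged sketch of the queue-based decoupling, using Azure Storage queue bindings (the queue name "work-items" and the function names are illustrative):

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class ProducerConsumer
{
    // Producer: instead of writing to a static dictionary, the HTTP trigger
    // drops the payload onto a durable storage queue via an output binding.
    [FunctionName("Producer")]
    [return: Queue("work-items")]
    public static async Task<string> Produce(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
    {
        using var reader = new StreamReader(req.Body);
        return await reader.ReadToEndAsync(); // the returned string becomes the queue message
    }

    // Consumer: a queue trigger replaces the timer + shared collection.
    [FunctionName("Consumer")]
    public static void Consume([QueueTrigger("work-items")] string message, ILogger log)
    {
        log.LogInformation($"Processing: {message}");
    }
}
```

Because the queue persists messages independently of any process, this keeps working across process restarts and scale-out, which the shared static collection cannot.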

Dealing with Long-Running Tasks in Azure

We are moving an on-premise solution into Azure, and a few services in the application are scheduled to run once every day.
I implemented this as a Web API, and whenever the HTTP call arrives the method fires without any trouble.
The problem is that the method behind this API is a heavyweight one which takes around 40-50 minutes to finish.
Since Azure's HTTP requests time out after 230 seconds, I am stuck.
I am calling the API from a timer-triggered Azure Function, and that part works fine.
But the 40-50 minute runtime is becoming a real challenge.
So how do I handle such a situation in Azure when there is a time-consuming method to execute (with or without an API in between)?
There can be many issues causing performance problems in Azure Functions. Try to debug with the help of the Azure Service Profiler or another profiling tool, and determine how long each part of the code takes to execute.
A few possible causes:
There might be an inefficient algorithm for fetching IDs or for the ADLS (Azure Data Lake Storage) operations.
If the await keyword is used in the function app code, also use .ConfigureAwait(false).
Enable automatic scaling on the Azure Function App.
Startup time also depends on the NuGet packages you're using, which may take a long time to load when the Azure Functions instance is created.
The ReadIDs and ReadData functions should be asynchronous.
Note: you may think all the functions are already async, but make sure each function definition has the async keyword and a Task return type; see the sketch below.
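To illustrate the last two points, here is a minimal sketch of an async timer-triggered function; ReadIDsAsync and ReadDataAsync are hypothetical stand-ins for the ReadIDs and ReadData functions mentioned above:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class DailyJob
{
    // The function is declared async and returns Task, so the runtime can
    // await it instead of blocking a thread for the whole run.
    [FunctionName("DailyJob")]
    public static async Task Run([TimerTrigger("0 0 2 * * *")] TimerInfo timer, ILogger log)
    {
        // ConfigureAwait(false) tells each continuation not to capture the
        // synchronization context, which can reduce scheduling overhead.
        var ids = await ReadIDsAsync().ConfigureAwait(false);
        var data = await ReadDataAsync(ids).ConfigureAwait(false);
        log.LogInformation($"Processed {data.Length} records");
    }

    // Hypothetical async stand-ins for the ReadIDs/ReadData calls above.
    private static Task<int[]> ReadIDsAsync() => Task.FromResult(new int[0]);
    private static Task<string[]> ReadDataAsync(int[] ids) => Task.FromResult(new string[0]);
}
```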

Azure Logic App that does heavy processing?

I want to create a custom Azure Logic App which does some heavy processing. I am reading as much as I can about this. I will describe what I wish to do as I currently understand it, and I am hoping someone can point out where my understanding is incorrect, or point out a more suitable approach.
What I want to do is take an application that runs a heavy computational process on a 3D mesh and turn it into a node to use in Azure Logic App flows.
What I am thinking so far, in a basic form, is this:
HTTP Trigger App: This logic app receives a reference to a 3D mesh to be processed; it saves the mesh to Azure Storage and passes that reference to the next logic app.
Mesh Computation Process App: This receives the Azure Storage reference to the 3D mesh. It launches a high-performance server with many CPUs and GPUs; the server downloads the mesh, processes it, and uploads the result back to Azure Storage. This app then passes the reference to the processed mesh to the next logic app, and finally shuts the high-performance server down so it doesn't consume resources unnecessarily.
Email notification App: This receives the Azure Storage reference to the processed mesh, then fires off an email with the download link to the user.
Is this possible? From what I've read so far it appears to be; I just want someone to verify this in case I've severely misunderstood something.
I am also hoping to get a little guidance on the mechanism for launching and shutting down the high-performance server within the 'Mesh Computation Process App'. The only place the Azure documentation mentions asynchronous, long-running task processing in Logic Apps is on this page:
https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-create-api-app
It says this requires launching an API App or a Web App to receive the Azure Logic App request and then report status back to Azure Logic Apps. I was wondering: is it possible to do this in a serverless manner? The 'Mesh Computation Process App' would fire off an Azure Function which spins up the high-performance server; another Azure Function would periodically ping that server for status until it completes, at which point a function triggers the server to shut down and signals to the 'Mesh Computation Process App' that it is complete, so it can continue to the next logic app. Is it possible to do it that way?
Any comments or guidance on how to better approach or think about this would be appreciated. This is the first time I've dug into Azure, so I am simultaneously trying to build a proper understanding of Azure and to design a system like this.
Should be possible. At the moment I'm not exactly sure if Logic Apps themselves can create all of those things for you, but it definitely can be done with Azure Functions, in a serverless manner.
For your second step, if I understand correctly, you want it to keep running only so that it can pass something along once the VM is done? You don't really need that. In serverless, try not to think in terms of long-running tasks; remember that everything is an event.
Putting something into Azure Blob Storage is an event you can react to, which removes the need for explicit linking:
Your first step saves the mesh to Azure Storage, and that's it; it doesn't need to do anything else.
Your second app triggers on the inserted blob and initiates processing.
The VM processes the mesh and puts the result back into storage.
The email app triggers on files landing in the "processed" folder.
Another app triggers on the same file and shuts down the VM.
This way you remove the long-running state management and the direct chaining of apps; instead, each app does only what it needs to do, and each triggers automatically on the results of the previous step. A sketch of one such trigger follows below.
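As a rough sketch of one link in that chain, assuming an Azure Function with a blob trigger (the container names and the VM-start step are illustrative):

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class MeshPipeline
{
    // Fires automatically when a mesh lands in the "incoming" container;
    // nothing upstream has to wait on it or poll for completion.
    [FunctionName("OnMeshUploaded")]
    public static void Run([BlobTrigger("incoming/{name}")] Stream mesh, string name, ILogger log)
    {
        log.LogInformation($"New mesh '{name}' ({mesh.Length} bytes); starting the processing VM");
        // Here you would start the high-performance VM (e.g. via the Azure
        // management SDK). The VM later writes its output to a "processed"
        // container, which triggers the e-mail and VM-shutdown functions
        // in exactly the same way.
    }
}
```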
If you do need some kind of state management/orchestration across all of your steps and still want to stay serverless, look into Durable Azure Functions. They are serverless, but the actions they take and the results they produce are stored in table storage, so an orchestration can be restored to the state it was in before an interruption. All of that is done for you automatically; it just changes a bit what exactly you can do inside the function for it to remain durable.
The state management you might actually want is something to keep track of all the VMs and reuse them, instead of spending time spinning them up and tearing them down. But don't complicate things too much for now.
https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-overview
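A minimal sketch of how a durable orchestration could model the same pipeline; the activity functions (StartVm, ProcessOnVm, StopVm) are hypothetical placeholders:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class MeshOrchestration
{
    // The orchestrator's progress is checkpointed to table storage, so the
    // flow survives restarts even though it may run for a long time overall.
    [FunctionName("ProcessMesh")]
    public static async Task<string> Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var meshUrl = context.GetInput<string>();
        var vmId = await context.CallActivityAsync<string>("StartVm", null);
        try
        {
            return await context.CallActivityAsync<string>("ProcessOnVm", meshUrl);
        }
        finally
        {
            // Runs whether processing succeeded or failed, so the VM is
            // never left running and consuming resources.
            await context.CallActivityAsync("StopVm", vmId);
        }
    }

    // Placeholder activities; real implementations would call the Azure
    // management APIs and the mesh-processing service.
    [FunctionName("StartVm")]
    public static Task<string> StartVm([ActivityTrigger] object input) => Task.FromResult("vm-1");

    [FunctionName("ProcessOnVm")]
    public static Task<string> ProcessOnVm([ActivityTrigger] string meshUrl) => Task.FromResult(meshUrl);

    [FunctionName("StopVm")]
    public static Task StopVm([ActivityTrigger] string vmId) => Task.CompletedTask;
}
```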
You still need to think about error handling, of course: what happens if your VM dies without uploading anything? You don't want work to silently go missing, so you can trigger dedicated flows to handle retries and errors, send different emails, and so on.

Azure Function reaching timeout without doing anything

I have an Azure Function app in Node.js with a couple of Queue-triggered functions.
These were working great, until I saw a couple of timeouts in my function logs.
From that point on, none of my triggered functions actually do anything. They keep timing out even before executing the first line of code, which is a context.log() statement that records the execution time.
What could be the cause of this?
Check your function app's storage account in the Azure portal; you'll likely see very high activity for file monitoring.
This is likely due to the interaction between Azure Files and requiring a large node_modules tree. Once the modules have been required once, functions execute quickly because modules are cached, but these initial timeouts can throw the function app into a timeout -> restart loop.
There's a lot of discussion on this, along with one possible improvement (using webpack on server-side modules), here.
Other possibilities:
decrease the number of node modules if possible
move to a dedicated plan instead of the consumption plan (it runs on a different file system which has better performance)
use C# or F#, which don't suffer from these limitations

First server call taking more time than subsequent calls in a Windows Azure cloud application?

I am working on a Windows Azure cloud service. The first time I click the login button it takes 6 to 7 seconds, but afterwards clicking the same login button takes 2 seconds. I am not able to understand why this happens: the server-side code is the same in both cases, yet subsequent calls are much faster than the first one.
"First-hit" delay is very common with ASP.NET applications. There is the overhead of JIT compilation, and various "pools" (database connections, threads, etc) may not be initialized. If you have an ASP.NET Web Forms application, each .aspx page is compiled the first time it is accessed, not when the server starts up. Also the various caching mechanisms (server or client) that make subsequent requests faster are not initialized on that first hit. And on the very first hit, any code in Application_Start will be run, setting up routing tables and doing any other initialization.
There are various things you can do to prevent your users from seeing this delay. The simplest is to write some kind of automated process that hits every page and run it after deploying a new release. There are also modules for IIS that will run code ahead of the Application_Start, when the site is actually deployed. Search for "ASP.NET warmup" to find those.
You may also experience delays after a period of inactivity, if your ASP.NET App Pool is recycled - this resets a bunch of things and causes start-up code to be run again on the next request. You can ameliorate this effect by setting up something to ping a page on your site frequently so that if the app pool is recycled it is warmed up again automatically, instead of on the next actual user request. Using an uptime monitoring service will work for this, or a Scheduled Task within the Azure ecosystem itself.
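A minimal sketch of such a warm-up process, assuming a small console app run after each deployment (the URLs are placeholders):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Simple warm-up: hit each page once after a deployment so JIT compilation,
// connection pools, and caches are primed before real users arrive. Running
// it on a schedule also keeps a recycled app pool warm.
class Warmup
{
    static async Task Main()
    {
        using var client = new HttpClient();
        var pages = new[] { "https://example.com/", "https://example.com/login" }; // placeholders
        foreach (var page in pages)
        {
            var response = await client.GetAsync(page);
            Console.WriteLine($"{page}: {(int)response.StatusCode}");
        }
    }
}
```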
