We are moving an on-premises solution into Azure, and the application includes a few services that are scheduled to run once every day.
I implemented one of them as a Web API, and whenever the HTTP call comes in, the method fires without any trouble.
But the problem is that the method behind this API is a heavyweight one which takes around 40-50 minutes to finish.
Since Azure HTTP requests time out after 230 seconds, I am really stuck.
I am calling the API from a timer-triggered Azure Function, and that part is working fine.
But the 40-50 minute runtime is becoming a real challenge.
So how do you handle such a situation in Azure when there is a time-consuming method to execute?
(Suggestions other than APIs are welcome as well.)
There can be many issues causing performance problems in Azure Functions. Try to debug with the help of the Azure Service Profiler or any other profiling tool to determine how long each line of code takes to execute.
A few possible reasons:
There might be an inefficient algorithm for fetching IDs or for the ADLS (Azure Data Lake Storage) operations.
If the await keyword is used in the Function App code, then also use .ConfigureAwait(false).
Enable automatic scaling in the Azure Function App.
It also depends on the NuGet packages you're using, which might take a long time when the Azure Functions instance is created.
The ReadIDs and ReadData functions should be asynchronous.
Note: you might assume all the functions are already async, but make sure each function definition has the async keyword and a Task return type.
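As a minimal sketch of that pattern (ReadData is taken from the point above; the HTTP call and URL are purely illustrative):

```csharp
// Minimal sketch: async all the way down, with a Task return type and
// ConfigureAwait(false) on awaited calls. The body is illustrative.
using System.Net.Http;
using System.Threading.Tasks;

public static class DataReader
{
    private static readonly HttpClient Http = new HttpClient();

    // Marked async and returning Task<string>, so callers can await it
    // instead of blocking a thread for the duration of the call.
    public static async Task<string> ReadData(string id)
    {
        // ConfigureAwait(false) avoids capturing the synchronization
        // context, which reduces overhead in library-style code.
        return await Http.GetStringAsync($"https://example.com/data/{id}")
                         .ConfigureAwait(false);
    }
}
```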
Related
I am currently working on supporting an old application which uses Logic Apps and Azure Functions.
The Logic Apps are on the Consumption plan, and they frequently time out due to long-running Azure Functions, which in turn call MS SQL Server using EF Core.
We don't want to spend much time on development, since the application will be sunset and migrated, so Azure Durable Functions, webhooks, and an event bus are not being considered.
Are there any other ways to solve this that require no major code changes?
We are planning to move from Consumption to Standard Logic Apps to increase the timeout from 2 minutes to 3.9 minutes.
Any pointers would be highly appreciated.
One alternative I found is to use an Until loop.
Until loops run until a specific condition is true; here the condition is a 200 OK response from the request. As the MS docs put it:
This loop action definition sends an HTTP request to the specified URL.
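A rough sketch of such an Until loop in the workflow definition language (the URL, count, and timeout are placeholders):

```json
"Until_succeeded": {
    "type": "Until",
    "expression": "@equals(outputs('HTTP')['statusCode'], 200)",
    "limit": {
        "count": 60,
        "timeout": "PT1H"
    },
    "actions": {
        "HTTP": {
            "type": "Http",
            "inputs": {
                "method": "GET",
                "uri": "https://example.com/job-status"
            }
        }
    }
}
```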
As @skin suggested, you can use webhooks or Durable Functions, which do exactly what you need (if development effort is not a concern). And, as you said, you can switch to Standard to increase the timeout to about 4 minutes.
We have a service running as an Azure Function (Event and Service Bus triggers) that we feel would be better served by a different model, because it takes a few minutes to run and loads a lot of objects into memory. It seems to reload them every time it gets called instead of keeping them in memory, which would perform better.
What is the best Azure service to move to, with the following goals in mind?
Easy to move to, without too many code changes.
We have a long-term goal of being able to run this on-prem (Kubernetes might help us here).
Appreciate your help.
To achieve the first goal:
Move your Azure Function code inside a continuously running WebJob. It has no maximum execution time, and it can run continuously, caching objects in its context.
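A minimal sketch of such a continuous WebJob (this assumes the WebJobs SDK 3.x with the Service Bus extension; the queue name, cache key, and loader are illustrative):

```csharp
// Minimal continuous WebJob sketch: the process stays alive, so anything
// cached in a static field survives across messages.
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static async Task Main()
    {
        var builder = new HostBuilder().ConfigureWebJobs(b =>
        {
            b.AddAzureStorageCoreServices();
            b.AddServiceBus(); // Microsoft.Azure.WebJobs.Extensions.ServiceBus
        });
        await builder.Build().RunAsync();
    }
}

public class Functions
{
    // Loaded once and reused for the life of the process.
    private static readonly ConcurrentDictionary<string, object> Cache = new();

    public static void ProcessMessage(
        [ServiceBusTrigger("myqueue")] string message)
    {
        var heavy = Cache.GetOrAdd("model", _ => LoadHeavyObjects());
        // ... process the message using the cached objects ...
    }

    private static object LoadHeavyObjects() => new object(); // placeholder
}
```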
To achieve the second goal (on-premises):
You need to explain this better, but a WebJob can be run as a console program on-premises. You can also wrap it in a Docker container to move it from on-premises to any cloud. However, if you need to consume messages from an Azure Service Bus, you will need a hybrid on-premises/Azure approach, connecting your local server to the cloud with a VPN or ExpressRoute.
Regards.
There are a couple of ways to solve this issue, each requiring a progressively larger change from where you are.
If you are just trying to separate out the heavy initial load, you can do it once in a Redis Cache instance and then reference it from there.
If you are concerned about how long your worker can run, then WebJobs (as explained above) can work. However, I'd suggest avoiding them, since that's not where Microsoft is putting its resources. Instead, look at Durable Functions, where an orchestrator function can drive a worker function; a sketch follows this list. Even here, be careful: since Durable Functions retain history, after running for a very long time the history tables might grow too large, so you should probably program in something like restarting the orchestrator after, say, 50,000 runs (obviously the number will vary based on your case). Also see this.
If you want to add the constraint of portability, you can run this function in a Docker image deployed to an AKS cluster in Azure. This might not work well for Durable Functions (try it out, who knows :) ), but it will surely work for the worker functions (which would cost you the most compute anyway).
If you want to bring the workloads completely on-prem, then Azure Functions might not be a good choice. You can create an HTTP server using the platform of your choice (Node, Python, C#, ...) and have it invoke the worker routine. Then you can run this whole setup inside an image on an AKS cluster on-prem, and to the user it looks just like a load-balanced web server :) - You can decide whether to keep the data in Azure or bring it on-prem as well, but beware of egress costs if you decide to move it back out once you've moved it up.
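A minimal sketch of that orchestrator-drives-worker pattern (this assumes the Durable Functions 2.x extension; the function names and hourly cadence are illustrative). ContinueAsNew restarts the orchestration with a fresh history, which keeps the history tables bounded:

```csharp
// Hedged sketch: an "eternal" orchestration that drives a worker activity,
// then restarts itself via ContinueAsNew so history never piles up.
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class Orchestrations
{
    [FunctionName("HeavyWorkOrchestrator")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Drive the long-running work as an activity function.
        await context.CallActivityAsync("HeavyWorker", null);

        // Sleep until the next run using a durable timer, then restart
        // the orchestration with a clean history.
        DateTime next = context.CurrentUtcDateTime.AddHours(1);
        await context.CreateTimer(next, CancellationToken.None);
        context.ContinueAsNew(null);
    }

    [FunctionName("HeavyWorker")]
    public static void HeavyWorker([ActivityTrigger] object input)
    {
        // ... the actual heavy lifting goes here ...
    }
}
```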
It appears that the functions are affected by cold starts:
Serverless cold starts within Azure
Upgrading to the Premium plan would move your functions to pre-warmed instances, which should counter the problem you are experiencing:
Pre-warmed instances for Azure Functions
However, if you may eventually want to deploy your functions/triggers on-prem, you should split them out as microservices and deploy them in containers.
Currently, the fastest route would probably be to deploy the containerized triggers via Azure Container Instances if you don't already have a Kubernetes cluster running. With some tweaking, you can deploy them on-prem later on.
There are a few options:
Move your Function App to the Premium plan. But it will not help you much under heavy load and scale-out.
Issue: in that case you will still face cold-start issues, and the problem will persist under heavy load.
Use Redis Cache; it will resolve most of your issues, since the main concern is the heavy loading (see the sketch after this list).
Issue: if your system is multi-tenant, the cache can grow very large over time.
Create small durable micro-functions. This is not quite the answer to your question, since you don't want lots of changes, but it would resolve most of your issues.
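A minimal sketch of the Redis approach (this assumes the StackExchange.Redis package; the connection string, key, and loader are placeholders):

```csharp
// Hedged sketch: load the heavy payload once, cache it in Redis, and let
// every later invocation read the cached copy instead of reloading it.
using System;
using StackExchange.Redis;

public static class HeavyPayloadCache
{
    private static readonly ConnectionMultiplexer Mux =
        ConnectionMultiplexer.Connect("localhost:6379"); // placeholder host

    public static string GetPayload()
    {
        IDatabase db = Mux.GetDatabase();
        RedisValue cached = db.StringGet("heavy:payload");
        if (cached.IsNullOrEmpty)
        {
            string loaded = LoadHeavyPayload(); // expensive, runs once
            db.StringSet("heavy:payload", loaded, TimeSpan.FromHours(1));
            return loaded;
        }
        return cached; // implicit RedisValue -> string conversion
    }

    private static string LoadHeavyPayload() => "big payload"; // placeholder
}
```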
Background
I have a set of logic apps that each call a set of function apps which run in parallel.
Each logic app is triggered to start at a certain time during the night, all staggered an hour apart.
The Azure Functions are written using the async pattern and call external APIs.
Problem
Sometimes the logic apps will run fine and complete their execution in a normal time period, and can do so for two or three days in a row.
However, sometimes they will take hours or days, forcing me to cancel their run.
Can anybody shed any light on why this might be happening?
Notes
I'm using the latest NuGet packages of the Durable Functions extension.
When debugging, the functions always complete in a timely fashion.
I have noticed that the functions sometimes get stuck in a Pending state.
It appears you have at least two function apps that are configured with the same storage account and task hub name:
AzureConsumptionXXX
AzureComputeXXX
This causes the two function apps to steal messages from each other. If functions in one app do not exist in the other app, then it's very possible for orchestrations to get stuck in a Pending state like this.
The simplest way to mitigate this is to give each function app a unique task hub name. Please see the Task Hubs documentation for more information: https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-task-hubs.
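For reference, the task hub name can be set per function app in host.json; a minimal sketch (the hub name is a placeholder, and this assumes the Durable Functions 2.x schema):

```json
{
    "version": "2.0",
    "extensions": {
        "durableTask": {
            "hubName": "MyUniqueTaskHub"
        }
    }
}
```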
I have an Azure Function app in Node.js with a couple of Queue-triggered functions.
These were working great, until I saw a couple of timeouts in my function logs.
From that point on, none of my triggered functions actually do anything. They just keep timing out before even executing the first line of code, which is a context.log() statement that logs the execution time.
What could be the cause of this?
Check your function's storage account in the Azure portal; you'll likely see very high activity for file monitoring.
This is likely due to the interaction between Azure Files and requiring a large node_modules tree. Once the modules have been required once, the functions execute quickly because the modules are cached, but these timeouts can throw the function app into a timeout -> restart loop.
There's a lot of discussion of this, along with one possible improvement (using webpack on server-side modules), here.
Other possibilities:
Decrease the number of node modules if possible.
Move to a Dedicated plan instead of the Consumption plan (it runs on a different file system, which has better performance).
Use C# or F#, which don't suffer from these limitations.
I have a C# console application which extracts a 15 GB Firebird database file at a server location into multiple files and loads the data from those files into a SQL Server database. The console application uses the System.Threading.Tasks.Parallel class to load the data from the files to the SQL Server database in parallel.
It is a weekly process and it takes 6 hours to complete.
What is the best option for moving this (console application) process to the Azure cloud - a WebJob, a Worker Role, or another cloud service?
How can the execution time (6 hours) be reduced after moving to the cloud?
How would the suggested option be implemented? Please provide pointers, code samples, etc.
Your detailed comments are very much appreciated.
Thanks
Bhanu.
Let me share some thoughts on this question of yours:
"What is best option to move this (console application) process to
azure cloud - WebJob or WorkerRole or Any other cloud service ?"
First, you can achieve the task with both a WebJob and a Worker Role, but I would suggest you go with a WebJob.
Pros of WebJobs:
Deployment time is quicker; you can turn your console app, without any changes, into a continuously running WebJob within minutes (https://azure.microsoft.com/en-us/documentation/articles/web-sites-create-web-jobs/)
Built-in timer support, whereas with a Worker Role you would need to handle scheduling on your own
Fault tolerance: when your WebJob fails, there is built-in resume logic
You might want to check out Azure Functions. You pay only for the processing time you use and there doesn't appear to be a maximum run time (unlike AWS Lambda).
They can be set up on a schedule or kicked off from other events.
If you are already doing work in parallel, you could break some of those parallel tasks out into separate Azure Functions. Aside from that, speeding things up would require specific knowledge of what you are trying to accomplish.
In the past, when I've tried to speed up work like this, I would start by emitting log messages during processing that record the current time or calculate durations (using the Stopwatch class), then find out which areas can be improved. The slowness may also be due to a bottleneck on the SQL Server side, so more investigation would be needed on your part. But the first step is always capturing metrics.
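A minimal sketch of that kind of instrumentation (the processing step is a placeholder):

```csharp
// Hedged sketch: time each processing step with Stopwatch so the slow
// spots show up in the logs.
using System;
using System.Diagnostics;
using System.Threading;

class TimingDemo
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        ProcessChunk(); // stand-in for one file-to-SQL load step
        sw.Stop();
        Console.WriteLine($"ProcessChunk took {sw.ElapsedMilliseconds} ms");
    }

    static void ProcessChunk() => Thread.Sleep(500); // placeholder work
}
```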
Since Azure Functions can scale out horizontally, you might want to first break the data from the files into smaller chunks and let the functions handle each chunk, then spin up multiple parallel processes for those chunks. Be sure not to spin up more than your SQL Server can handle.
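Since the question already uses the Parallel class, a throttled version of the chunked load might look roughly like this (ChunkFiles and LoadChunkToSql are hypothetical stand-ins):

```csharp
// Hedged sketch: load file chunks in parallel, with the degree of
// parallelism capped so SQL Server isn't overwhelmed by bulk inserts.
using System.Collections.Generic;
using System.Threading.Tasks;

class LoadDemo
{
    static void Main()
    {
        List<string> chunks = ChunkFiles(); // hypothetical chunking step
        var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };

        Parallel.ForEach(chunks, options, chunk =>
        {
            LoadChunkToSql(chunk); // one chunk per worker
        });
    }

    static List<string> ChunkFiles() =>
        new List<string> { "chunk1.csv", "chunk2.csv" };

    static void LoadChunkToSql(string chunk) { /* bulk insert here */ }
}
```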