I have a use case to implement multiple BlobTriggers in Azure Functions (using the Linux Consumption Plan). For example, in Azure Storage I would have 5 different clients with a directory structure like:
client1/file.txt
client2/file.txt
client3/file.txt
client4/file.txt
client5/file.txt
It's possible for both client1/file.txt and client2/file.txt to be dropped off in Azure Storage at the same time. To prevent race conditions and avoid exceeding the 1.5 GB memory limit, I would like the BlobTrigger for client1/file.txt to wait for the BlobTrigger for client2/file.txt to finish, or vice versa (the order doesn't matter here, just that both of them eventually execute).
Do I have to set up a queue process separately? Can I use the preview setting WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT to achieve this easily?
Edit: Would using Durable Functions be a better solution?
You should be able to do this by making sure the maximum scale-out value is set to 1, so that only one file is processed at a time. You can also change your pricing model from Consumption to an App Service plan. That way you can use the tier you want and have more memory available as well (depending on the tier you choose).
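For completeness, the separate-queue approach raised in the question also composes well with a scale-out cap of 1: the blob trigger only enqueues the blob name, and a single queue-triggered worker does the heavy processing one message at a time (with host.json set to "queues": {"batchSize": 1, "newBatchThreshold": 0}). Below is a minimal sketch using the Python v2 programming model; the uploads container, the pending-files queue, and the function names are illustrative assumptions, not part of the original setup.

```python
import azure.functions as func

app = func.FunctionApp()

# Lightweight blob trigger: it only records that a file arrived, so the heavy
# work can be serialized by the queue-triggered worker below.
@app.blob_trigger(arg_name="blob", path="uploads/{name}",
                  connection="AzureWebJobsStorage")
@app.queue_output(arg_name="msg", queue_name="pending-files",
                  connection="AzureWebJobsStorage")
def enqueue_file(blob: func.InputStream, msg: func.Out[str]) -> None:
    msg.set(blob.name)  # e.g. "uploads/client1/file.txt"

# Worker: with WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT=1 (one instance) and
# batchSize=1 / newBatchThreshold=0 in host.json (one message at a time),
# files are processed strictly one after another.
@app.queue_trigger(arg_name="msg", queue_name="pending-files",
                   connection="AzureWebJobsStorage")
def process_file(msg: func.QueueMessage) -> None:
    blob_path = msg.get_body().decode("utf-8")
    # ...download and process the file here, one at a time...
```

A nice side effect of the queue is that files dropped while another file is being processed simply wait in the queue instead of competing for memory.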
I'm trying to find the optimal cloud architecture to host a software on Microsoft Azure.
The scenario is the following:
A (containerised) REST API is exposed to users, through which they can submit POST and GET requests. POST requests trigger a backend that needs a robust configuration to operate properly, and GET requests are sent to fetch the result of the backend, if any. This component of the solution is currently hosted on an Azure App Service web app, which does the job perfectly.
The (containerised) backend (triggered by POST requests) performs heavy calculations for a short amount of time (typically 5-10 minutes are allotted for the calculation). This backend needs (at least) 4 cores and 16 GB of RAM, but the more the better.
The current configuration consists of the backend hosted together with the REST API on the App Service, with a plan that accommodates the backend's requirements. This is clearly not very cost-efficient, as the backend is idle ~90% of the time. On top of that, it's not really scalable despite an automatic scaling rule that spawns new instances based on CPU use: it's indeed possible that if several POST requests come in at the same time, they are handled by the same instance and make it crash due to a lack of memory.
Azure Functions doesn't seem to be an option: the serverless (Consumption plan) solution it proposes is restricted to 1.5 GB of RAM and doesn't have Docker support.
Azure Container Instances doesn't fit either, because first the maximum number of CPUs is 4 (which is really few for the needs here, although acceptable), and second there are cold starts of approximately 2 minutes (I imagine due to the creation of the container group, the image pull, and so on). Even though the process is async from a user perspective, high latency is not allowed, as the result is expected within 5-10 minutes, so cold starts are a problem.
Azure Batch, which at first glance appears to be a perfect fit (beefy configurations available, made for HPC, cost-effective, made for time-limited tasks, ...), seems to be slow too (it takes a couple of minutes to create a pool, and jobs don't run immediately when submitted).
Do you have any idea what I could use?
Thanks in advance!
Azure Functions
You could look at the Azure Functions Elastic Premium plan. EP3 has 4 cores, 14 GB of RAM and 250 GB of storage.
Premium plan hosting provides the following benefits to your functions:
Avoid cold starts with perpetually warm instances.
Virtual network connectivity.
Unlimited execution duration, with 60 minutes guaranteed.
Premium instance sizes: one core, two core, and four core instances.
More predictable pricing, compared with the Consumption plan.
High-density app allocation for plans with multiple function apps.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-premium-plan?tabs=portal
Batch Considerations
When designing an application that uses Batch, you must consider the possibility of Batch not being available in a region. It's possible to encounter a rare situation where there is a problem with the region as a whole, the entire Batch service in the region, or your specific Batch account.
If the application or solution using Batch always needs to be available, then it should be designed to either failover to another region or always have the workload split between two or more regions. Both approaches require at least two Batch accounts, with each account located in a different region.
https://learn.microsoft.com/en-us/azure/batch/high-availability-disaster-recovery
We have a service running as an Azure Function (Event and Service Bus triggers) that we feel would be better served by a different model, because it takes a few minutes to run and loads a lot of objects into memory, and it feels like it loads them on every invocation instead of keeping them in memory and thus performing better.
What is the best Azure service to move to, with the following goals in mind?
Easy to move and doesn't need too many code changes.
We have long-term goals of being able to run this on-prem (Kubernetes might help us here).
Appreciate your help.
To achieve the first goal:
Move your Azure Function code into a continuously running WebJob. It has no maximum execution time and it can run continuously, caching objects in its context.
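For illustration only, a continuous WebJob in Python can be as simple as a console loop that builds its cache once per process and then keeps consuming messages. The SERVICEBUS_CONNECTION setting and the work-items queue below are hypothetical names, and the azure-servicebus client library is assumed:

```python
import os
from azure.servicebus import ServiceBusClient

def load_expensive_objects() -> dict:
    # Done once per process start, not once per message.
    return {"model": "...load the large objects here..."}

CACHE = load_expensive_objects()

def handle(body: str) -> None:
    model = CACHE["model"]  # reuse the in-memory objects for every message
    ...

def main() -> None:
    client = ServiceBusClient.from_connection_string(os.environ["SERVICEBUS_CONNECTION"])
    with client:
        with client.get_queue_receiver(queue_name="work-items") as receiver:
            for message in receiver:      # blocks and streams messages as they arrive
                handle(str(message))
                receiver.complete_message(message)

if __name__ == "__main__":
    main()
```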
To achieve the second goal (on-premises):
You need to explain this better, but a WebJob can be run as a console program on-premises. You can also wrap it in a Docker container to move it from on-premises to any cloud, but if you need to consume messages from an Azure Service Bus, you will need a hybrid on-premises/Azure approach, connecting your local server to the cloud with a VPN or ExpressRoute.
Regards.
There are a couple of ways to solve this, each requiring a slightly larger amount of change from where you are.
If you are just trying to separate out the heavy initial load, then you can do it once, store the result in a Redis Cache instance, and then reference it from there.
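A rough sketch of that first option, assuming Azure Cache for Redis and the redis-py client (the key name, the connection settings and build_reference_data are placeholders):

```python
import json
import os
import redis

# Azure Cache for Redis typically listens on 6380 with TLS.
r = redis.Redis(
    host=os.environ["REDIS_HOST"],
    port=6380,
    password=os.environ["REDIS_KEY"],
    ssl=True,
)

def get_reference_data() -> dict:
    cached = r.get("reference-data")
    if cached is not None:
        return json.loads(cached)          # cheap path on every invocation
    data = build_reference_data()          # the expensive load, done once
    r.set("reference-data", json.dumps(data), ex=3600)  # refresh hourly
    return data

def build_reference_data() -> dict:
    ...
```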
If you are concerned about how long your worker can run, then WebJobs (as explained above) can work; however, that is something I'd suggest avoiding, since it's not where Microsoft is putting its resources. Rather, look at Durable Functions, where an orchestrator function can drive a worker function. (Even here, be careful: since durable functions retain history, after running for a very, very long time the history tables might get too large, so consider programming in something like restarting the orchestrator after, say, 50,000 runs - obviously the number will vary based on your case.) Also see this.
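As a sketch of the orchestrator-drives-worker idea (using the azure-functions-durable Python package; the process_batch activity name is made up, and the activity function and its function.json are omitted), an eternal orchestration that restarts itself with continue_as_new is the usual way to keep the history from growing without bound:

```python
import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    batch = context.get_input()

    # Hand the heavy lifting to a separate activity ("worker") function.
    next_batch = yield context.call_activity("process_batch", batch)

    # Restart as a fresh instance instead of looping forever inside one
    # instance, so the history tables stay small.
    context.continue_as_new(next_batch)

main = df.Orchestrator.create(orchestrator_function)
```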
If you want to add to this the constraint of portability, then you can run this function in a Docker image that can be run in an AKS cluster in Azure. This might not work well for durable functions (try it out, who knows :) ), but it will surely work for the worker functions (which would cost you the most compute anyway).
If you want to bring the workloads completely on-prem, then Azure Functions might not be a good choice. You can create an HTTP server using the platform of your choice (Node, Python, C#...) and have it invoke the worker routine. Then you can run this whole setup inside an image on an on-prem AKS cluster, and to the user it looks just like a load-balanced web server :) - you can decide whether you want to keep the data on Azure or bring it down on-prem as well, but beware of egress costs if you decide to move it out once you've moved it up.
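A bare-bones version of that last option, using only the Python standard library (worker_routine is a stand-in for whatever the Azure Function currently does):

```python
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def worker_routine(payload: dict) -> dict:
    # ...the same logic that lived in the Azure Function...
    return {"status": "done"}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        result = worker_routine(payload)
        body = json.dumps(result).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Behind the cluster's load balancer this looks like any other web server.
    ThreadingHTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```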
It appears that the functions are affected by cold starts:
Serverless cold starts within Azure
Upgrading to the Premium plan would move your functions to pre-warmed instances, which should counter the problem you are experiencing:
Pre-warmed instances for Azure Functions
However, if you potentially want to deploy your functions/triggers on-prem, you should spin them out as microservices and deploy them with containers.
Currently, the fastest way would probably be to deploy the containerized triggers via Azure Container Instances if you don't already have a Kubernetes cluster running. With some tweaking, you can deploy them on-prem later on.
There are a few options:
Move your function app to Premium. But it will not help you much under heavy load and scale-out.
Issue: in that case you will start facing cold-startup issues, and the problem will persist under heavy load.
Redis Cache: it will resolve most of your issues, as the main concern is heavy loading.
Issue: if your system is a multi-tenant system, then your cache can become heavy over time.
Create small micro durable functions. This is not quite the answer to your question, as you don't want lots of changes, but it will resolve most of your issues.
I have a Python Azure Function (Linux Consumption Plan) that is being set up to run multiple HttpTriggers at various times throughout the day. It's possible for more than one of these triggers to execute at or around the same time as the other triggers. To avoid exceeding the 1.5 GB memory limit, I'd like to make sure only one function invocation is allowed to run at a time. Is there any way to achieve this?
Edit: After doing a little research, would this setting allow me to avoid concurrent executions of my HttpTriggers: https://learn.microsoft.com/en-us/azure/azure-functions/functions-app-settings#website_max_dynamic_application_scale_out?
If I set WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT to 1, would that mean only one invocation could run at a time and the other HttpTriggers would wait?
Check out the full description of WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT here. It says that:
WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT sets a maximum number of instances that a function app can scale to.
This limit is not yet fully supported - it does work to limit your scale out, but there are some cases where it might not be completely foolproof. We're working on improving this.
I believe this is not what you are looking for.
Logically, this can be made possible if you choose the trigger times in such a way that they never collide within a day (24 hrs). Please note that this depends on the business requirements of the function app's HttpTriggers - what their required frequency is.
Another solution can be to have separate Consumption plans for different HttpTriggers. In fact, in this case you will get a monthly free grant of 1 million requests and 400,000 GB-s of resource consumption per month per subscription in pay-as-you-go pricing, across all function apps in that subscription.
As mentioned above, there can be different solutions for this; you need to choose the one that suits you best.
Is it possible to put a cap on the number of servers to which Azure Functions scale out? I have a Consumption plan, and basically I would like to set a cap on the amount of resources that Azure Functions can use.
The only solutions I found are:
Set a cap on the daily GB-sec (gigabyte-seconds) quota, after which the functions are stopped until the following day. This is definitely something I do not want, because I need to use some functions for online tasks.
In host.json, change the parameters http.maxConcurrentRequests and http.maxOutstandingRequests, which will affect the number of functions running concurrently. Is this the thing I should look into? Isn't this setting at the per-server level? My fear is that this won't end up capping resources, but instead will let Azure create more and more servers in order to keep up with the request load.
You can use the WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT app setting: The maximum number of instances that the function app can scale out to. Default is no limit.
Note: This setting is a preview feature - and only reliable if set to a value <= 5
Ref: https://learn.microsoft.com/en-us/azure/azure-functions/functions-app-settings#websitemaxdynamicapplicationscaleout
One thing to note is that timer-triggered functions are automatically singletons. In my case that was sufficient, as I can wake such a function up every minute and process a specific amount of data. Even if the function takes longer than expected, there's no risk that a second one will run concurrently.
More info: https://stackoverflow.com/a/53919048/4619705
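A minimal sketch of that pattern with the Python v2 programming model (the one-minute schedule, the function name and the batch size are illustrative only):

```python
import azure.functions as func

app = func.FunctionApp()

# NCRONTAB "0 * * * * *" = at second 0 of every minute. The runtime's singleton
# lock for timer triggers means an overrunning execution delays the next tick
# rather than running alongside it.
@app.schedule(schedule="0 * * * * *", arg_name="timer", run_on_startup=False)
def drain_work(timer: func.TimerRequest) -> None:
    if timer.past_due:
        # The previous run overran; this tick was late, not concurrent.
        pass
    process_next_batch(limit=100)

def process_next_batch(limit: int) -> None:
    # Pull a bounded amount of pending work and process it.
    ...
```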
Situation:
A user with a TB's worth of files in our Azure Blob Storage and gigabytes of data in our Azure databases decides to leave our services. At this point, we need to export all of his data into 2 GB packages and deposit them in blob storage for a short period (two weeks or so).
This should happen very rarely, and we're trying to cut costs. Where would it be optimal to implement a task that over the course of a day or two downloads the corresponding user's blobs (240 KB files) and zips them into the packages?
I've looked at a separate web app running a dedicated continuous WebJob, but WebJobs seem to shut down when the app unloads, and I need this to hibernate and not use resources when not up and running, so "Always On" is out. Plus, I can't seem to find a complete tutorial on how to implement the interface so that I can cancel the running task and such.
Our last resort is abandoning web apps (three of them) and running it all on a virtual machine, but this adds up to greater costs. Is there a method I've missed that could get the job done?
This sounds like a job for a serverless model on Azure Functions to me. You get the compute scale you need without paying for idle resources.
I don't believe that there are any time limits on running the function (unlike AWS Lambda), but even so you'll probably want to implement something to split the job up first so it can be processed in parallel (and to provide some resilience to failures). Queue these tasks up and trigger the function off the queue.
It's worth noting that they're still in 'preview' at the moment though.
Edit - have just noticed your comment on file size... that might be a problem, but in theory you should be able to use local storage rather than doing it all in memory.
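Building on that edit, here is a sketch of the split-and-queue idea as a queue-triggered function (Python v2 programming model) that builds one package per message on local temp storage and then uploads it. The export-packages queue, the user-files and exports containers, and the message shape are all assumptions for illustration:

```python
import os
import tempfile
import zipfile

import azure.functions as func
from azure.storage.blob import ContainerClient

app = func.FunctionApp()

@app.queue_trigger(arg_name="msg", queue_name="export-packages",
                   connection="AzureWebJobsStorage")
def build_package(msg: func.QueueMessage) -> None:
    # Each message describes one ~2 GB package worth of small blobs, e.g.
    # {"user": "abc", "package": 3, "blobs": ["a.dat", "b.dat", ...]}
    job = msg.get_json()
    conn = os.environ["AzureWebJobsStorage"]
    source = ContainerClient.from_connection_string(conn, "user-files")
    exports = ContainerClient.from_connection_string(conn, "exports")

    # Zip on local (temp) storage instead of holding everything in memory.
    with tempfile.NamedTemporaryFile(suffix=".zip", delete=False) as tmp:
        with zipfile.ZipFile(tmp, "w", zipfile.ZIP_DEFLATED) as archive:
            for name in job["blobs"]:
                archive.writestr(name, source.download_blob(name).readall())
        tmp_path = tmp.name

    with open(tmp_path, "rb") as data:
        exports.upload_blob(f"{job['user']}/package-{job['package']}.zip",
                            data, overwrite=True)
    os.remove(tmp_path)
```

Enumerating the user's blobs and enqueuing one message per package can be done up front by whatever kicks off the export, which also gives you the parallelism and retry behaviour mentioned above.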