Compare: Azure Functions vs Azure Batch

Can we use Azure Functions along with Azure Batch? Please advise.
I am working on a POC to decide which one to use for our background processes.

I too was in a similar dilemma till I tried both of them for my use case.
The major difference between the two is that Azure Functions has a hard timeout limit which you cannot exceed: on the Consumption plan the default is 5 minutes and the maximum is 10 minutes. If your script runs beyond that, the Functions runtime will kill it automatically.
Azure Batch, by contrast, is essentially a configuration of pools of VMs in which you can run long-running jobs without being bothered about execution time, and its low-priority VMs come at low cost too. The difference between Batch and plain Azure VMs is that Batch lets you configure periodic jobs out of the box, whereas on plain VMs you have to write your own code to make a job run periodically.
And yes, it is possible to use Functions with Azure Batch. You can expose your script as an HTTP trigger in a Function, which you can then call (GET/POST) from your Azure Batch VMs.
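For illustration, here is a minimal sketch of such an HTTP-triggered Function using the Python v2 programming model; the route name, payload shape, and processing step are assumptions, not anything from the original posts.

```python
# function_app.py -- minimal HTTP-triggered Function (Python v2 model).
# The route and payload fields below are hypothetical.
import json
import azure.functions as func

app = func.FunctionApp()

@app.route(route="process", auth_level=func.AuthLevel.FUNCTION)
def process(req: func.HttpRequest) -> func.HttpResponse:
    payload = req.get_json()  # e.g. {"input_url": "..."}
    # ... short-lived work goes here (it must finish within the timeout) ...
    return func.HttpResponse(json.dumps({"status": "done", "input": payload}),
                             mimetype="application/json")
```

A Batch task can then invoke it with a plain HTTP call, e.g. requests.post("https://<app>.azurewebsites.net/api/process?code=<function-key>", json={...}) from the task's script.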
Hope it helps.

Maybe we should expand this topic to Azure services for batch processing in general. I did come across an article from Microsoft that goes through these options in general (including WebJobs and Kubernetes).
But frankly, even after reading the article the confusion remains. For example, Azure Batch jobs can be scheduled, but I am not sure whether they can be triggered by other Azure services the way Azure WebJobs handles it. I get the feeling that Azure Batch is pitched where you need highly parallel compute at low cost, because none of the other options directly give you low-priority, low-cost compute instances. Correct me please!
#AzureBatch #AzureWebJobs #AzureAKS #AzureFunctions
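On the scheduling point above: Azure Batch does support recurring jobs natively via job schedules. A rough sketch with the azure-batch Python SDK; the account details, pool id, and interval are placeholders:

```python
# Hypothetical sketch: a recurring Azure Batch job via a job schedule.
from datetime import timedelta
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials
import azure.batch.models as batchmodels

creds = SharedKeyCredentials("<account-name>", "<account-key>")
client = BatchServiceClient(
    creds, batch_url="https://<account>.<region>.batch.azure.com")

# Run a job against an existing pool once every hour.
client.job_schedule.add(batchmodels.JobScheduleAddParameter(
    id="recurring-processing",
    schedule=batchmodels.Schedule(recurrence_interval=timedelta(hours=1)),
    job_specification=batchmodels.JobSpecification(
        pool_info=batchmodels.PoolInformation(pool_id="<existing-pool-id>"),
    ),
))
```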

Related

Azure Functions not Running Fast Enough

I have an Azure Function that reads jobs from a Storage queue. It executes these jobs and grabs more. Lately I have been getting more jobs for it to run and noticed that the queue is building up.
What can I do from an Azure perspective to get better performance out of this? Each job runs in its own little world, so adding a new instance, adding threads, or attaching to a "better" machine would all work fine.
A few things come to mind with the information provided:
For more raw power: host your Azure Function in a dedicated App Service plan instead of using the Consumption plan. You can scale up (better hardware) or out (more hardware). Be aware that this could in theory also be worse; I would give it a try. Or try the "Premium consumption plan" mentioned by Ken.
More parallelism: if your queue builds up even though you are not using most of your resources, try tuning the batchSize and newBatchThreshold configuration parameters (see the host.json sketch after this list).
Changed execution logic: depending on where most of your time is spent during function execution, Durable Functions might help. Based on your comments, you might also try caching the external data using a static variable or Azure Cache for Redis.
Look at the most common performance considerations
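For reference, those two settings live in host.json; in the v2+ runtime they sit under extensions.queues. The values below are illustrative, not recommendations:

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 32,
      "newBatchThreshold": 16
    }
  }
}
```

batchSize caps how many messages one instance pulls per fetch (maximum 32), and the host fetches the next batch once the number of messages still being processed drops to newBatchThreshold.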
Premium plan (Preview)
The Azure Functions Premium plan provides customers the same features and scaling mechanism used on the Consumption plan (based on number of events) with enhanced performance and VNET access. The Premium plan is billed on a per-second basis based on the number of vCPU-s and GB-s your premium functions consume.
In order to use the Azure Functions Premium Plan private preview your subscription needs to be added to an allowlist. Please apply for access via http://aka.ms/functionspremium.
More Info:
https://github.com/Azure/Azure-Functions/blob/master/functions-premium-plan/overview.md

Stopping VMs in Azure when CPU usage is lower than 10%

Is there a way, using runbooks in Azure, to script a check of CPU usage so that if the two-hour average is less than 10% the VM is shut down?
Has anyone got an example script?
I do not have example code, but I would start with Azure Monitor. Using Azure Monitor you can create alerts with specific criteria, such as CPU usage over a given time slice.
Create, view, and manage alerts using Azure Monitor
On alert, you can engage an Azure Automation webhook to perform the remediation action.
Webhook actions for alert rules
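If you would rather poll the metric yourself from a runbook, here is a rough Python sketch using the azure-mgmt SDKs; the subscription, resource group, and VM names are placeholders, and error handling is omitted:

```python
# Hypothetical runbook sketch: deallocate a VM whose average CPU over
# the last two hours is below 10%.
from datetime import datetime, timedelta, timezone
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.compute import ComputeManagementClient

SUB, RG, VM = "<subscription-id>", "<resource-group>", "<vm-name>"

cred = DefaultAzureCredential()
monitor = MonitorManagementClient(cred, SUB)
compute = ComputeManagementClient(cred, SUB)

vm_id = (f"/subscriptions/{SUB}/resourceGroups/{RG}"
         f"/providers/Microsoft.Compute/virtualMachines/{VM}")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=2)

# Pull hourly "Percentage CPU" averages for the two-hour window.
metrics = monitor.metrics.list(
    vm_id,
    timespan=f"{start.isoformat()}/{end.isoformat()}",
    interval="PT1H",
    metricnames="Percentage CPU",
    aggregation="Average",
)
points = [p.average
          for m in metrics.value
          for ts in m.timeseries
          for p in ts.data
          if p.average is not None]

if points and sum(points) / len(points) < 10:
    compute.virtual_machines.begin_deallocate(RG, VM).wait()
```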
Azure alert rules are probably the way to go here. Here's an end-to-end solution with that.
A simpler way to do it is using a tool like VMPower which isn't free but is inexpensive and works best when you need to do this across multiple VMs with different auto-stop configurations.

Worker Role vs Azure Batch

We have a scenario where on-prem systems upload a batch of PDFs/TIFFs and a service in the cloud processes them. At the same time, a queue is populated with metadata carrying processing instructions.
We are trying to decide whether a Worker Role or Azure Batch is the right choice for this. Our primary goals are:
needs to scale based on queue size
scale ramp-up time needs to be quick
of course, cost is another factor
You should not use classic PaaS cloud services for new workloads. There are various options depending on the complexity of the processing you need to do; the following are likely better choices than Azure Batch: Azure WebJobs and Virtual Machine Scale Sets. The first is easier to use and is based on the Web Apps technology, while the second is the (preview) way to provision scale-out VMs.
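To make the scale-set suggestion concrete, here is a hypothetical sketch that sizes a VM scale set from the Storage queue depth; the queue name, ratio, and caps are invented for illustration:

```python
# Hypothetical sketch: resize a VM scale set based on Storage queue depth.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.storage.queue import QueueClient

SUB, RG, VMSS = "<subscription-id>", "<resource-group>", "<scale-set-name>"

queue = QueueClient.from_connection_string("<storage-conn-str>", "processing-jobs")
depth = queue.get_queue_properties().approximate_message_count

# e.g. one VM per 50 queued messages, between 1 and 20 instances.
target = min(20, max(1, depth // 50))

compute = ComputeManagementClient(DefaultAzureCredential(), SUB)
vmss = compute.virtual_machine_scale_sets.get(RG, VMSS)
if vmss.sku.capacity != target:
    vmss.sku.capacity = target
    compute.virtual_machine_scale_sets.begin_create_or_update(RG, VMSS, vmss).wait()
```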
PaaS cloud services are classic now; you can use WebJobs with App Service instead. Another interesting option is Azure Stack, which is really good for hybrid cloud; as of now it is in technical preview. Refer to this link: https://azure.microsoft.com/en-us/overview/azure-stack

Azure WebJobs for Aggregation

I'm trying to figure out a solution for recurring data aggregation of several thousand remote XML and JSON data files, by using Azure queues and WebJobs to fetch the data.
Basically, an input endpoint URL of some sort would be called (with a data URL as a parameter) on an Azure website/app. It should trigger a WebJobs background job (or the job could run continuously and check the queue periodically for new work), fetch the data URL, and then call back an external endpoint URL on completion.
Now the main concern is the volume and its performance/scaling/pricing overhead. There will be around 10,000 URLs to be fetched every 10-60 minutes (most URLs will be fetched once every 60 minutes). With regards to this scenario of recurring high-volume background jobs, I have a couple of questions:
Is Azure WebJobs (or Workers?) the right option for background processing at this volume, and can it scale accordingly?
For this sort of volume, which Azure website tier would be most suitable (comparison at http://azure.microsoft.com/en-us/pricing/details/app-service/)? Or would only a Cloud Service or VM(s) work at this scale?
Any suggestions or tips are appreciated.
Yes, Azure WebJobs is an ideal solution for this. Azure WebJobs scale with your Web App (formerly Websites), so if you increase your web app instances, you also increase your web job instances. There are ways to prevent this, but that's the default behavior. You could also set up autoscale to automatically scale your web app based on CPU or other performance rules you specify.
It is also possible to scale your web job independently of your web front end (WFE) by deploying the web job to a web app separate from the web app where your WFE is deployed. This has the benefit of not taking up machine resources (CPU, RAM) that your WFE is using while giving you flexibility to scale your web job instances to the appropriate level. Not saying this is what you should do. You will have to do some load testing to determine if this strategy is right (or necessary) for your situation.
You should consider at least the Basic tier for your web app. That would allow you to scale out to 3 instances if you needed to and also removes the CPU and Network I/O limits that the Free and Shared plans have.
As for the queue, I would definitely suggest using the WebJobs SDK and let the JobHost (from the SDK) invoke your web job function for you instead of polling the queue. This is a really slick solution and frees you from having to write the infrastructure code to retrieve messages from the queue, manage message visibility, delete the message, etc. For a working example of this and a quick start on building your web job like this, take a look at the sample code the Azure WebJobs SDK Queues template punches out for you.
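The WebJobs SDK itself is .NET, but to illustrate what the JobHost saves you from writing, here is a rough Python sketch of the manual polling loop; the queue name and worker function are hypothetical:

```python
# Hypothetical sketch of the queue plumbing the JobHost otherwise handles.
import time
from azure.storage.queue import QueueClient

def fetch_and_callback(data_url: str) -> None:
    """Placeholder worker: fetch the data URL, then call the external endpoint."""
    ...

queue = QueueClient.from_connection_string("<storage-conn-str>", "fetch-jobs")

while True:
    # Messages stay invisible for 5 minutes while being processed.
    for msg in queue.receive_messages(messages_per_page=16, visibility_timeout=300):
        fetch_and_callback(msg.content)
        queue.delete_message(msg)  # delete only after successful processing
    time.sleep(5)  # idle back-off; the JobHost also handles retries for you
```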

How to build an auto-scaling Azure Cloud Service based on network usage?

Azure Cloud Services have auto-scale based on CPU / queue. We have a set of machines running an API for uploading and processing files. Although we moved the processing part to a Worker Role that scales depending on queue size, the servers still take care of uploads while responding to other operations like downloads.
Right now we're using more machines for the just-in-case scenario, but we want a way to scale that is cost-efficient while keeping a great upload experience for our users.
What would your approach be for creating a way to detect the network usage across all machines from the same Cloud Service and auto-scale if necessary?
I would:
1) Create metrics that calculate the amount of time it takes to download/upload a file.
2) Aggregate the metrics in some persistence layer (we have plenty in Azure)
3) Create a service that looks at those metrics
4) Check the thresholds
5) Use the Management Libraries for .NET to trigger scaling on the Cloud Service(s) affected.
This approach also scales with your solution. You can eventually separate the scaling part from the checking part and run them as two different services communicating asynchronously.
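As a rough illustration of steps 1 and 2, here is a hypothetical sketch that times a transfer and persists the sample to Table storage; the table name, schema, and instance id are invented, and the table is assumed to already exist:

```python
# Hypothetical sketch: record per-transfer timing samples (steps 1-2 above).
import time
from datetime import datetime, timezone
from azure.data.tables import TableClient

table = TableClient.from_connection_string("<storage-conn-str>", "netmetrics")

def record_transfer(instance_id: str, op: str, seconds: float, size_bytes: int) -> None:
    # One row per sample; partitioning by instance keeps aggregation queries cheap.
    table.create_entity({
        "PartitionKey": instance_id,
        "RowKey": datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S%f"),
        "Operation": op,
        "Seconds": seconds,
        "Bytes": size_bytes,
    })

start = time.monotonic()
# ... handle the upload here ...
record_transfer("WebRole_IN_0", "upload", time.monotonic() - start, 10_485_760)
```

A separate checker service can then aggregate these rows, compare them against your thresholds, and call the management API to scale (steps 3-5).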
We also have an old, now open-source project that does some of that for you, so you don't have to reinvent the wheel. It's called WASABi. Be careful though, as it is not maintained anymore, but as I said, you can use it as inspiration.
