Disabled Azure Function Billing

TLDR;
Assuming my Azure Function (Consumption plan) is disabled, would I still have to pay for request attempts sent to it (NotFound response)?
In Depth:
I have an Azure Function app (Consumption plan) with multiple HTTP triggers configured (myazfunc.azurewebsites.com/api/add, api/check, etc.).
As you know, Azure lets you change the state of each of these triggers (Disabled/Enabled).
If I disable one of them and then try to call it, I get a NotFound response (which is fine).
So, my question is, would I be billed for requests sent to this trigger even though it's disabled?
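For reference, the per-trigger Disabled/Enabled switch corresponds to the function's disabled state. A minimal sketch of a function.json for one of those HTTP triggers with the switch turned off (property names follow the v1 scripting model; on newer runtimes the AzureWebJobs.<FunctionName>.Disabled app setting is the usual mechanism, and the binding names here are illustrative):
function.json
{
  // "disabled": true is equivalent to toggling the trigger to Disabled in the portal;
  // calls to the HTTP endpoint then come back as NotFound, as described above.
  "disabled": true,
  "bindings": [
    {
      "name": "req",
      "type": "httpTrigger",
      "direction": "in",
      "authLevel": "function",
      "methods": [ "get", "post" ]
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    }
  ]
}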

With Azure Functions you only pay for compute resources while your functions are actually running. The Consumption plan is billed based on per-second resource consumption (GB-s) and the number of executions, so requests to a disabled trigger, which never executes, do not generate execution charges.
The Consumption plan dynamically adds and removes instances of the Functions host based on the number of incoming events.
For more info on Azure Functions, see the references below.
REFERENCES:
Azure Functions pricing

Related

Azure Functions not Running Fast Enough

I have an azure function that reads jobs from a storage queue. It then executes these jobs and grabs more. I have been getting more jobs for it to run lately and noticed that the queue is building up.
What can I do from an Azure Perspective to get better performance out of this? Each job runs in its own little world so adding a new instance or adding threads or attaching to a "better" machine would all work fine.
A few things come to mind based on the information provided:
For more pure power: Host your Azure Function in a dedicated App Service plan instead of using the consumption plan. You can scale up (better hardware) or out (more hardware). Be aware that this could also be worse in theory. I would give it a try. Or try the "premium consumption plan" mentioned by Ken.
More parallelism: If your queue builds up even though you are not using most of your resources, try playing with the configuration parameters batchSize and newBatchThreshold (see the host.json sketch after this list).
Changed execution logic: Depending on where most of your time is spent during function execution, durable functions might help. Based on your comments, you might also try to cache the external data using a static cache or Azure Redis Cache.
Look at the most common performance considerations
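As a sketch of the parallelism point above, the storage queue trigger's per-instance concurrency is controlled in host.json (v1 schema shown, values illustrative; in v2+ the same settings live under extensions.queues):
host.json
{
  "queues": {
    // Number of queue messages retrieved per batch (default 16, max 32).
    "batchSize": 32,
    // A new batch is fetched once the number of messages still being processed
    // drops to this threshold (default is batchSize / 2).
    "newBatchThreshold": 16
  }
}
Per instance, up to batchSize + newBatchThreshold messages can be in flight at once, so these two values together bound the per-instance parallelism.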
Premium plan (Preview)
Azure Functions Premium plan provides customers the same features and scaling mechanism used on the Consumption plan (based on number of events) with enhanced performance and VNET access. The Azure Functions Premium plan is billed on a per-second basis, based on the number of vCPU-s and GB-s your premium functions consume.
In order to use the Azure Functions Premium Plan private preview your subscription needs to be added to an allowlist. Please apply for access via http://aka.ms/functionspremium.
More Info:
https://github.com/Azure/Azure-Functions/blob/master/functions-premium-plan/overview.md

Understanding Azure Function app and Service Bus costs

I'm playing around with creating a Pub/Sub system using Azure Service Bus Topics and an Azure Function app where the functions are decorated with ServiceBusTrigger. I'm using the standard tier for Service Bus and Consumption Plan for the Function app.
The concept is working fine and will serve my purposes, however, I'm having difficulty understanding how this will affect my Azure costs.
Running just one function with that attribute seems to generate a couple of requests per second to the Service Bus. In the end, I expect to have quite a few Topics with Subscriptions and an Azure Function for each Subscription.
If I understand correctly, each of the requests that the ServiceBusTrigger generates counts against the number of Service Bus operations you get. With a few functions, I'll go over 13M ops pretty easily.
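(Rough arithmetic behind that estimate, assuming roughly two polling requests per second around the clock: 2 × 60 × 60 × 24 × 30 ≈ 5.2 million operations per month for a single function, so three or four such functions already exceed 13M.)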
Some of these subscribers aren't so time sensitive that they'd need to check the Subscription several times per second. Is there a way to change the frequency that ServiceBusTrigger checks the Subscription to reduce costs? Am I approaching this problem from entirely the wrong angle?

Is custom storage polling relevant to consumption plan Azure Functions?

How does the concept of storage queue polling apply when an Azure Function is hosted under the consumption plan?
I get the principle of polling with classic hosted WebJob functions, and I understand that the maximum polling interval of 1 minute can be overridden. However, in the case of consumption plan hosting there is no app-level memory-resident process, therefore I assume that Azure internals spin up a FunctionApp via some other trigger beyond my control.
The motivation for this question is that I am trying to understand typical E2E function invocation propagation delays when an Azure hosted WebApp adds a message to a storage queue. In my case the WebApp, StorageQueue and pre-compiled function DLL will run in the same Azure region.
I need to cap Azure Function invocation delays to under 10 seconds with an average of <3 seconds.
Unfortunately this isn't possible on the consumption plan with the current polling model, as we poll your trigger resource every 10s to determine if there are new events requiring a function instance to be loaded/started.
If your function app runs frequently enough that it always has active instances (a new queue message every 5 min, for example) you can get the invocation delays that you want, as the instances themselves handle the polling.
The worst case (no function instances running) is ~10s polling + ~5s instance startup time to process a new event.
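For completeness, the 1-minute maximum polling interval mentioned in the question applies to the queue trigger's own back-off once instances are running (or on a dedicated App Service plan), and it can be tightened in host.json. A sketch assuming the v1 schema (in v2+ this moves under extensions.queues and takes a time-span string such as "00:00:02"):
host.json
{
  "queues": {
    // Maximum interval between queue polls by a running host, in milliseconds
    // (the default maximum is one minute).
    "maxPollingInterval": 2000
  }
}
This does not change the ~10s platform-level polling described above when no instances are running.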

Running an Azure cloud service every n days

I have created an Azure service which is responsible for the tasks below:
(1) Access the blob containers and download the files from there.
(2) Extract some data from the downloaded files.
(3) Store the extracted data in an Azure SQL Server database.
I want to run this processing after every 7 days. Is there a way to achieve this? or can I use any other option than cloud service to achieve the above goal?
I would recommend using Azure Functions, as its timer-based processing (timer trigger) feature can fulfill your requirements.
Timer triggers call functions based on a schedule, one time or recurring.
Reference: Azure Functions timer trigger, Azure Functions Pricing
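A minimal function.json sketch of such a timer trigger (the binding name is illustrative). The NCRONTAB format is {second} {minute} {hour} {day} {month} {day-of-week}, and "every 7 days" maps most naturally onto a weekly schedule:
function.json
{
  "bindings": [
    {
      // Fires at 00:00:00 every Sunday, i.e. once every 7 days.
      "name": "weeklyTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 0 * * 0"
    }
  ]
}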
Another great advantage of using Azure Function for your scenario is its pricing model.
The Azure Functions Consumption plan is billed based on resource consumption and executions.
Consumption plan pricing includes a monthly free grant of 1 million requests and 400,000 GB-s of resource consumption per month.
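(As a rough sanity check with hypothetical numbers: a job that runs once every 7 days executes roughly 4-5 times a month; even if each run used the full 1.5 GB of memory for 10 minutes, that is about 4.5 × 1.5 GB × 600 s ≈ 4,000 GB-s and only a handful of executions, far below the monthly free grant.)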
Certainly not natively with the Cloud Service itself. I mean, you can obviously code it so it performs some task(s) and sleeps for 7 days, but you will pay for all of that time, which makes no sense.
You can use Azure WebJobs, Functions, or Scheduler for this purpose, or you can create a PowerShell/CLI script (or some other cron task / task scheduler) to turn on your Azure Cloud Service, wait for it to finish processing, and turn it off. But that seems like a lot of extra effort; I'd rather go with Scheduler or Functions.

Sending emails through Amazon SES with Azure Functions

The Problem:
So we are building a newsletter system for our app that must have the capacity to send 20k-40k emails, up to several times a day.
Tools of Preference:
Amazon SES - for pricing and scalability
Azure Functions - for serverless compute to send emails
Limitations of Amazon SES:
Amazon SES throttling with a max send rate - Amazon SES throttles sending through its service by imposing a maximum send rate. Right now, being out of the SES sandbox environment, our capacity is 14 emails/sec with a 50K daily email cap, but this limit can be increased via a support ticket.
Limitations of Azure Functions:
On a Consumption Plan, there's no way to limit how many instances of your Azure Function execute. Currently the scaling is handled internally by Azure, and thus the function can run on anywhere from just a few instances to hundreds of them.
From reading other posts on Azure Functions, there seems to be a "warm-up" period for Azure Functions, meaning the function may not execute as soon as it is triggered via one of the documented triggers.
Limitations of Azure Functions with SES:
The obvious problem would be Amazon SES throttling emails sent from Azure Functions, because the scaled-out execution rate of the Azure Function that sends the emails will be much higher than the allowed send rate for SES.
Due to the "warm-up" period of the Azure Function, messages may end up piling up in a queue before the function actually starts processing them at scale and sending out the emails, so there's a very high probability of hitting that send-rate limit.
Question:
How can we send emails via Azure Functions while staying under the X emails/second limit of SES? Is there a way to limit how many times an Azure Function can execute per time frame? So, let's say we don't want more than 30 instances of the Azure Function running per second?
Other thoughts:
Amazon SES might not like a customer whose implementation is constantly hitting up against that throttling limit. Amazon SES folks, can you please comment?
Azure Functions - as per documentation, the scaling of Azure Functions on a Consumption Plan is handled internally. But isn't there a way to put a manual "cap" on scaling? This seems like such a common requirement from a customer's point of view. The problem is not that Azure Functions can't handle the load, the problem is that other components of the system that interface with Azure Functions can't handle the load at the massive scale at which Azure Functions can handle it.
Thank you for your help.
If I understand your problem correctly, the easiest method is a custom queue throttling solution.
Basically, your Azure Function just receives the calls for all the mailing requests and queues them into a queue system (say Service Bus / Event Hub / IoT Hub). Then you can have another Azure Function that runs on an x-minute interval, pulls a maximum of y messages, and pushes them to SES. Your control point becomes that clock function, and since the queue system helps you track each message's delivery status (whether it has been sent to SES yet) and lets you pop the queue once done, you can ensure that every job is eventually processed.
You should be able to set maxConcurrentCalls to 1 within the host.json file for the function; this ensures that only one function execution is occurring at any given time on each instance, which should throttle your processing rate to something more agreeable, from AWS's perspective, in terms of sends per second:
host.json
{
  // The unique ID for this job host. Can be a lower-case GUID with dashes removed.
  // When running in Azure Functions, the id can be omitted and one gets generated automatically.
  "id": "9f4ea53c5136457d883d685e57164f08",
  // Configuration settings for 'serviceBus' triggers. (Optional)
  "serviceBus": {
    // The maximum number of concurrent calls to the callback the message
    // pump should initiate. The default is 16.
    "maxConcurrentCalls": 1
    // ...
  }
}
