The Problem:
So we are building a newsletter system for our app that needs to be able to send 20k-40k emails, up to several times a day.
Tools of Preference:
Amazon SES - for pricing and scalability
Azure Functions - for serverless compute to send emails
Limitations of Amazon SES:
Max send rate - Amazon SES throttles sending through its services by imposing a maximum send rate. Right now, having moved out of the SES sandbox, our capacity is 14 emails/second with a 50K daily email cap. This limit can be increased via a support ticket.
Limitations of Azure Functions:
On a Consumption Plan, there's no way to limit how many instances of your Azure Function execute. Scaling is currently handled internally by Azure, so the function can run on anywhere from a few instances to hundreds.
From reading other posts on Azure Functions, there seems to be a "warm-up" period for Azure Functions, meaning the function may not execute as soon as it is triggered via one of the documented triggers.
Limitations of Azure Functions with SES:
The obvious problem is that Amazon SES will throttle emails sent from Azure Functions, because the scaled-out execution of the function that sends the emails will far exceed the allowed SES send rate.
Due to the "warm-up" period of Azure Functions, messages may pile up in a queue before the function actually starts processing them at scale and sending out emails, so there's a very high probability of hitting that send-rate limit.
Question:
How can we send emails via Azure Functions while staying under the X emails/second limit of SES? Is there a way to limit how many times an Azure Function can execute per time frame? For example, say we don't want more than 30 instances of the Azure Function running per second?
Other thoughts:
Amazon SES might not like a customer whose implementation is constantly hitting the throttling limit. Amazon SES folks, can you please comment?
Azure Functions - as per the documentation, the scaling of Azure Functions on a Consumption Plan is handled internally. But isn't there a way to put a manual "cap" on scaling? This seems like such a common requirement from a customer's point of view. The problem is not that Azure Functions can't handle the load; the problem is that other components of the system that interface with Azure Functions can't handle the load at the massive scale Azure Functions can reach.
Thank you for your help.
If I understand your problem correctly, the easiest method is a custom queue throttling solution.
Basically, your Azure Function just receives all the mailing requests and queues them into a queuing system (say Service Bus / Event Hub / IoT Hub). You can then have another Azure Function that runs on an x-minute interval, pulls a maximum of y messages, and pushes them to SES. That clock function becomes your control point, and since the queuing system lets you track each message's delivery status (whether it has been sent to SES yet) and pop it off the queue once done, you can ensure every job is eventually processed.
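A minimal sketch of that clock function in Python (the queue name, app-setting names, and payload fields are placeholder assumptions; it uses the azure-servicebus and boto3 SDKs):
throttled_sender.py (timer-triggered)
import json
import os

import azure.functions as func
import boto3
from azure.servicebus import ServiceBusClient

# Tune so that (messages per run) / (run interval) stays under the SES send rate.
MAX_MESSAGES_PER_RUN = 25


def main(timer: func.TimerRequest) -> None:
    ses = boto3.client("ses", region_name=os.environ["SES_REGION"])
    sb = ServiceBusClient.from_connection_string(os.environ["SERVICEBUS_CONNECTION"])
    with sb, sb.get_queue_receiver(queue_name="outbound-email") as receiver:
        for msg in receiver.receive_messages(
            max_message_count=MAX_MESSAGES_PER_RUN, max_wait_time=5
        ):
            payload = json.loads(str(msg))
            ses.send_email(
                Source=payload["from"],
                Destination={"ToAddresses": [payload["to"]]},
                Message={
                    "Subject": {"Data": payload["subject"]},
                    "Body": {"Html": {"Data": payload["body"]}},
                },
            )
            # Complete (pop) the message only after SES accepts it, so a
            # failed send is retried on a later run.
            receiver.complete_message(msg)
The timer interval and batch size together become your sends-per-second knob, independent of how Functions chooses to scale.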
You should be able to set maxConcurrentCalls to 1 in the host.json file for the function; this ensures that only one function execution occurs at any given time, which should throttle your processing rate to something more agreeable from an AWS perspective in terms of sends per second:
host.json
{
    // The unique ID for this job host. Can be a lower case GUID
    // with dashes removed. When running in Azure Functions, the id can be
    // omitted, and one gets generated automatically.
    "id": "9f4ea53c5136457d883d685e57164f08",
    // Configuration settings for 'serviceBus' triggers. (Optional)
    "serviceBus": {
        // The maximum number of concurrent calls to the callback the message
        // pump should initiate. The default is 16.
        "maxConcurrentCalls": 1,
        ...
    }
}
Related
I am trying to use AWS services to implement a real-time email-sending feature for one of my projects. For example, someone uses my app to schedule a reminder, and then the email is sent to them at or near the time they scheduled.
I know of AWS services such as CloudWatch rules (cron) and DynamoDB Streams (TTL-based), but those are not a perfect fit for such a feature. Can anyone please suggest a better way to implement it?
Any type of guidance is acceptable.
-- Thanks in advance.
Imagine your service at huge scale. At such scale, there are likely to be multiple messages going off every minute. Therefore, you could create:
A database that stores the reminder times (this could be DynamoDB or an Amazon RDS database)
An AWS Lambda function that is configured to trigger every minute
When the Lambda function is triggered, it should check the database to retrieve all reminders that should be sent for this particular minute. It can then use Amazon Simple Email Service (SES) to send emails.
If the number of emails to be sent is really big, then rather than having the Lambda function call SES in series, it could put a message into an Amazon SQS queue for each email to be sent. The SQS queue could then trigger another Lambda function that sends the email via SES. This allows the emails to be sent in parallel.
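A rough sketch of that per-minute Lambda in Python with boto3 (the table name, key schema, and environment variable names are assumptions for illustration):
reminder_fanout.py
import datetime
import json
import os

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
sqs = boto3.client("sqs")

TABLE_NAME = os.environ["REMINDERS_TABLE"]  # partitioned by due minute
QUEUE_URL = os.environ["EMAIL_QUEUE_URL"]


def handler(event, context):
    # Key for "this particular minute", e.g. "2024-01-01T09:30".
    minute = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M")
    table = dynamodb.Table(TABLE_NAME)
    due = table.query(KeyConditionExpression=Key("due_minute").eq(minute))["Items"]
    for reminder in due:
        # One SQS message per email; a second Lambda subscribed to the queue
        # calls SES, so the actual sends happen in parallel.
        sqs.send_message(QueueUrl=QUEUE_URL,
                         MessageBody=json.dumps(reminder, default=str))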
TLDR;
Assuming my Azure Function (Consumption plan) is disabled, would I still have to pay for request attempts sent to it (NotFound response)?
In Depth:
I have an Azure Function app created (Consumption plan) with multiple HTTP triggers configured (myazfunc.azurewebsites.net/api/add, /api/check, etc.).
As you know, Azure allows you to change the state of each of these triggers (Disabled/Enabled).
If I disable one of them and then try to call it, I get a NotFound response (which is fine).
So, my question is, would I be billed for requests sent to this trigger even though it's disabled?
With Azure Functions, you pay only for compute resources while your functions are running. The Consumption plan is billed based on per-second resource consumption and the number of executions.
The Consumption plan dynamically adds and removes instances of the Functions host based on the number of incoming events.
For more info on Azure Functions, refer to the pricing page listed below.
REFERENCES:
Azure Functions pricing
I'm developing an Azure Function that executes several operations in Dynamics 365 CRM.
I don't fully understand how Azure Functions concurrency works.
I'm on a Consumption Plan, and my function app contains a function that is triggered by Service Bus messages.
When I tested it the first time, the Service Bus received around 200 messages and the app started processing many of them at the same time, generating a huge load of requests to Dynamics 365, which couldn't handle it.
So in the Azure Portal I managed to set the max number of instances to 1, but the function was still processing many messages at a time.
What's the best way to set a limit to that?
Using maxConcurrentCalls in host.json?
Using maxConcurrentSessions in host.json?
Using WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT in the app configs?
Also, what's the difference between setting maxConcurrentCalls to 10 with 1 function instance, versus setting it to 5 with 2 function instances?
maxConcurrentCalls is the attribute configured in host.json for the Azure Functions Service Bus Trigger.
By default, the Functions runtime processes multiple messages concurrently (the default value is 16). Set maxConcurrentCalls to 1 to have the runtime process only a single queue or topic message at a time.
Also, maxConcurrentCalls is the maximum number of concurrent calls to the callback that will be initiated per scaled instance.
maxConcurrentSessions - the maximum number of sessions handled concurrently per scaled instance.
This setting applies only to functions that receive a single message at a time.
If your requirement is that only one message be processed at a time per instance, you can use the above configuration in host.json.
If your requirement is singleton support for Functions, ensuring only one function instance runs at a time, then you need to configure the setting below.
WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT
This setting specifies the maximum number of instances the app can scale out to; by default there is no limit.
To keep the function from scaling out and run it on only one instance, set WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT to 1.
In addition to this setting, you need to set maxConcurrentCalls to 1.
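Putting the two together, a minimal sketch (assuming the Functions v2+ host.json layout with Service Bus extension 2.x/3.x, where the setting lives under messageHandlerOptions):
host.json
{
    "version": "2.0",
    "extensions": {
        "serviceBus": {
            "messageHandlerOptions": {
                // Process one Service Bus message at a time per instance.
                "maxConcurrentCalls": 1
            }
        }
    }
}
With WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT = 1 in the function app's application settings on top of this, the app stays on a single instance processing a single message at a time.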
A few references for more information:
Azure Function to process only single message at one time
Azure Functions - WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT and batchSize - how can I get the desired concurrency
azure servicebus maxConcurrentCalls totally ignored
The official documentation of host.json settings for the Azure Functions Service Bus trigger explains maxConcurrentCalls and maxConcurrentSessions
The official documentation explains WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT
I'm in need of some second opinions and guidance on how to use Azure Functions in combination with Azure Service Bus in the scenario described below. Coding is not an issue; it's about selecting the most appropriate method. Sadly, I have not found any good example of this online, so now I'm reaching out for some help.
Scenario
I have an ecommerce customer that sends a few thousand orders a day to an ERP system. Normal daily operations are not an issue, but we would like to make the solution more robust to handle, for example, "Black Friday" surges. Currently the website can hold x orders before the database is full and it is forced to close or send orders downstream. The website currently sends orders directly into the ERP system, and it is this part I want to decouple with Azure Service Bus queues. With this decoupling we can keep pushing new orders to the queue and consume them at our own pace in the ERP without flooding any system.
My thoughts about how to set this up
1. The website sends messages directly to the Service Bus queue. An Azure Function is bound to trigger on every new message in the queue and sends that message to the ERP system.
2. Same as above, except the website first sends the message to an Azure Function, which puts it into the queue.
3. The website sends messages to the queue as in point 1 or 2. Instead of binding a function to the queue, we set up a scheduled function. The function runs frequently and sends one message to the ERP system per run.
4. The website sends messages to the queue as in point 1 or 2. Here we do not send messages to the ERP system; instead, the ERP system is the one that reads the queue. I do not like this approach, but it's possible and easy for ERP users to administer.
Questions
If I go with point 1 or 2 above, should the function responsible for delivering messages to the ERP system send one order or multiple orders per trigger?
If I go with point 1 or 2, it should still be possible to flood the ERP system, since the functions will most likely trigger at the same time messages are put in?
If the ERP system is down and the queue grows, do I need a separate scheduled function to handle the queue until it is empty?
We do not have to discuss the dead letter queue here, that is another topic.
How would you approach this or if you have done a similar solution what method did you use?
Thank you for your guidance, much appreciated!
We've learned a lot in the past couple of years working with Azure Functions and Service Bus to solve scenarios similar to the one you describe. You're definitely on the right track in wanting to decouple in case of a surge. To give you some peace of mind about your choice of Azure Service Bus: we normally push hundreds of events a minute through our topics and subscriptions, and it holds up pretty well.
Let me just share some of the lessons we learned:
The number of concurrent incoming requests within the same second was one of our breaking points. A properly written website will easily accommodate multiple incoming requests, but we learned about "port exhaustion" related to outbound web requests to our Azure Function. Review the scope and lifetime of your web client and the limits of your app service plan / web server.
If you choose a consumption plan for your Azure Function, be aware that it sometimes takes a long time to start. Whatever is hitting the function will have to implement retries (probably a good practice anyway).
A Service Bus message has a size limit (which can be increased, but there's still a limit). We randomly hit it with one of our payloads that contained a bulk of information. Know the worst-case payload size you may encounter.
In the event something goes wrong and there are tens of thousands of messages in the queue, there is no easy way to query what's in there. Make sure you're fine with that; otherwise consider doing fast writes into a database that can be queried.
An Azure Function triggered by Service Bus can spawn multiple concurrent executions of the code (which is desired), up to a limit. Be aware of any limitations in the code that updates your ERP; you will have no control over when Service Bus triggers fire.
Be conscious of the function's storage account: functions with the same name will have their trigger settings and locks stepping on each other (e.g. dev vs. prod environments).
Connections to Azure Service Bus will sometimes fail; that's just the nature of services hosted in the cloud. It only happens occasionally and recovers after a few seconds.
Consider doing this:
Website -> Azure API Management Gateway -> Azure Function A -> Service Bus -> Azure Function B -> ERP
Azure API Management with AppInsights enabled is a nice extra layer that lets you secure, monitor, and route requests to your Function A. In cases where you need to route incoming requests to some emergency bucket, it's a life saver.
Consider allowing Function A to accept an array of your items. Enable AppInsights and add telemetry code to get a preview of throughput in terms of orders.
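A sketch of such a Function A in Python (the queue name, connection setting, and payload shape are assumptions), accepting either a single order or an array and writing one Service Bus message per order:
function_a.py (HTTP-triggered)
import json
import logging
import os

import azure.functions as func
from azure.servicebus import ServiceBusClient, ServiceBusMessage


def main(req: func.HttpRequest) -> func.HttpResponse:
    orders = req.get_json()
    if not isinstance(orders, list):
        orders = [orders]  # tolerate a single order object
    sb = ServiceBusClient.from_connection_string(os.environ["SERVICEBUS_CONNECTION"])
    with sb, sb.get_queue_sender(queue_name="orders") as sender:
        # One message per order keeps each payload well under the size limit.
        sender.send_messages([ServiceBusMessage(json.dumps(o)) for o in orders])
    logging.info("Enqueued %d orders", len(orders))  # throughput telemetry
    return func.HttpResponse(status_code=202)  # accepted, not yet in the ERP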
Give Function B a configurable timer trigger and an app configuration setting for the number of messages to process from the queue per run. This allows you to throttle the flow of data to your ERP. It may be debatable, as you won't be able to scale this function out to multiple instances, but I'm assuming the original concern was to control the pace. Also enable the same AppInsights, telemetry, logging, etc.
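The configurable part can be plain app configuration. For example (the setting names are assumptions), the timer schedule can be resolved from an app setting so the drain rate is tunable without redeploying, and the function body can read its per-run batch size the same way:
function.json (Function B)
{
    "bindings": [
        {
            "name": "timer",
            "type": "timerTrigger",
            "direction": "in",
            // Resolved from the app setting "ErpDrainSchedule",
            // e.g. "0 */1 * * * *" for once a minute.
            "schedule": "%ErpDrainSchedule%"
        }
    ]
}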
I'm hoping I don't draw too much criticism for this. We learned the hard way and eventually received some really good guidance from Azure architects and engineers.
I get the above message in my Amazon RDS instance's alerts section. How do I get notified by email when such violations are reported by RDS?
The most straight forward option for monitoring Amazon RDS (and any other AWS service for that matter) is Amazon CloudWatch, which provides a reliable, scalable, and flexible monitoring solution that you can start using within minutes and specifically includes Alarms:
[...] Alarms can automatically initiate actions on your behalf, based on parameters you specify. An alarm watches a single metric over a time period you specify, and performs one or more actions based on the value of the metric relative to a given threshold over a number of time periods. The action is a notification sent to an Amazon SNS topic or Auto Scaling policy. [...] [emphasis mine]
Amazon SNS in turn supports notifications over multiple transport protocols, among them Email and Email-JSON; see the respective FAQ What are the different delivery formats/transports for receiving notifications?:
[...] Customers can select one of the following transports as part of the subscription requests:
[...]
"Email", "Email-JSON" - Messages are sent to registered addresses as email. Email-JSON sends notifications as a JSON object, while Email sends text-based email.
The metric in question is the FreeStorageSpace RDS metric (see Amazon RDS Dimensions and Metrics for details on the available ones) as discussed in Scaling DB Instance Storage:
Important
We highly recommend that you constantly monitor the FreeStorageSpace RDS metric published in CloudWatch to ensure that your DB Instance has enough free storage space. For more information on monitoring RDS DB Instances, see Viewing DB Instance Metrics.
Accordingly, you'll need to create an alarm mirroring or approximating the threshold reported to you by AWS in the console, publish it to an SNS topic and subscribe to this topic via an email address of your choice.
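A minimal boto3 sketch of those steps (the instance identifier, threshold, and email address are placeholders):
create_storage_alarm.py
import boto3

sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

# 1. Create an SNS topic and subscribe an email address to it.
topic_arn = sns.create_topic(Name="rds-storage-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

# 2. Alarm when free storage drops below ~5 GiB (the metric unit is bytes).
cloudwatch.put_metric_alarm(
    AlarmName="rds-low-free-storage",
    Namespace="AWS/RDS",
    MetricName="FreeStorageSpace",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-db-instance"}],
    Statistic="Average",
    Period=300,               # 5-minute evaluation window
    EvaluationPeriods=1,
    Threshold=5 * 1024 ** 3,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=[topic_arn],
)
Note that the email subscription must be confirmed from the recipient's inbox before notifications are delivered.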