How to define multiple targets in CloudWatch Events with the Serverless Framework - Node.js

I am using the Serverless Framework (1.61.0). I have many scheduled events that sync data from another source. For instance, I sync Category entities within one Lambda function:
categories:
  handler: functions/scheduled/categories/index.default
  name: ${self:provider.stage}-categories-sync
  description: Sync categories
  events:
    - schedule:
        name: ${self:provider.stage}-moe-categories
        rate: rate(1 hour)
        enabled: true
        input:
          params:
            appId: moe
            mallCode: moe
            scheduled: true
For this worker alone I have another 15 scheduled events. Each one is created as a separate resource on CloudWatch, which adds up quickly. We are exceeding the CloudWatch Events limit even after raising it through a limit increase request to AWS.
Is there any way to define multiple targets for the same CloudWatch Events rule? Then, instead of creating
lambda_func_count (15) x event_count (15) x stage_count (dev, staging, prod) resources on CloudWatch, we could define one rule with multiple targets, one for each individual Lambda function.
Currently this is supported in the AWS console, but I couldn't find a way to achieve it with the Serverless Framework.

One way to mitigate this issue is to avoid using the same AWS account for all your stages. Take a look at AWS Organizations, which lets you create sub-accounts under a master account; with Serverless Framework Pro, even on the free tier, you can easily have specific stages deploy to specific AWS accounts. Each sub-account has its own set of resources that don't affect other accounts. You could take this further if you have other ways of splitting things across multiple accounts; perhaps you could break it up per Category?

Here is an example of a single CloudWatch rule with multiple targets (each either an AWS Lambda function or a Lambda alias):
"LCSCombinedKeepWarmRule2":{
"Type":"AWS::Events::Rule",
"Properties": {
"Description":"LCS Keep Functions Warm Rule",
"ScheduleExpression": "rate(3 minutes)",
"State":"ENABLED",
"Targets":[
{
"Arn":{"Fn::GetAtt":["CheckCustomer","Arn"]},
"Id":"CheckCustomerId"
},
{
"Arn":{"Fn::GetAtt":["PatchCustomerId","Arn"]},
"Id":"PatchCustomerId"
},
{
"Arn":{"Ref":"GetTierAttributes.Alias"},
"Id":"GetTierAttributes"
},
{
"Arn":{"Ref":"ValidateToken.Alias"},
"Id":"ValidateTokenId"
},
{
"Arn":{"Ref":"EventStoreRecVoucher.Alias"},
"Id":"EventStoreRecVoucherId"
}
]
}
},
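If you want to stay inside the Serverless Framework, a rule like the one above can be added as raw CloudFormation under the resources section of serverless.yml, together with a Lambda permission per target. A minimal sketch, assuming the logical ID CategoriesLambdaFunction (Serverless derives these IDs from your function keys, so verify yours in the generated CloudFormation template):

```yaml
resources:
  Resources:
    SharedSyncRule:
      Type: AWS::Events::Rule
      Properties:
        Description: One rule fanning out to several sync functions
        ScheduleExpression: rate(1 hour)
        State: ENABLED
        Targets:
          - Arn:
              Fn::GetAtt: [CategoriesLambdaFunction, Arn]
            Id: categories-sync
            Input: '{"params":{"appId":"moe","mallCode":"moe","scheduled":true}}'
    SharedSyncRuleCategoriesPermission:
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName:
          Fn::GetAtt: [CategoriesLambdaFunction, Arn]
        Action: lambda:InvokeFunction
        Principal: events.amazonaws.com
        SourceArn:
          Fn::GetAtt: [SharedSyncRule, Arn]
```

Each target needs its own AWS::Lambda::Permission so CloudWatch Events may invoke the function; add further entries under Targets (up to the per-rule target limit) for the other sync functions.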

Related

Why doesn't Azure Functions use other instances in the app service plan to process data?

I have an Azure Functions durable task that fans out into 12 smaller tasks. I am using a dedicated plan, my maxConcurrentActivityFunctions is currently set to 4, and I have a total of 3 instances (P3v2 - 4 cores) in the app service plan.
My understanding is that I should be able to process 12 concurrent tasks, and each instance should use all of its CPU to process the job, because the job is CPU-bound.
But in reality, scaling doesn't improve performance: all of the tasks go to a single instance. The other 2 instances just stay idle, while the main instance is being totally tortured and CPU usage sits at 100%.
I am sure they go to the same instance because I can see it in Log Analytics. Every log has the same host instance ID, and if I filter that host instance ID out, no logs are returned at all.
I also tested making 3 separate calls, with 4 tasks in each. That doesn't seem to use 3 instances either. The app service plan metrics look as if only 1 instance can be online at a time, despite 3 instances being available. The dashed line seems to mean "offline", because when I filter by instance, it just shows 0.
Here is the host.json file
{
  "version": "2.0",
  "functionTimeout": "01:00:00",
  "logging": {
    "logLevel": {
      "default": "Information"
    },
    "console": {
      "isEnabled": false
    },
    "applicationInsights": {
      "samplingSettings": {
        "excludedTypes": "Request",
        "isEnabled": true
      }
    }
  },
  "extensions": {
    "durableTask": {
      "hubName": "blah",
      "maxConcurrentActivityFunctions": 4,
      "maxConcurrentOrchestratorFunctions": 20
    }
  }
}
My expectation is that all 12 tasks should begin immediately, and all 3 instances should be busy processing the data, instead of only 1 instance running 4 concurrent tasks.
Am I doing anything wrong, or am I misunderstanding something here?
As far as I know, and as per the Microsoft documentation, multiple applications in the same app service plan share all the instances in your premium plan.
For example, if the app service plan is configured to run multiple VM instances, then all the apps in the plan run across those instances.
In your case there is only one application, and that application has many sub-units (functions), so the application is using only one instance.
If you want to use all the instances, try deploying multiple function apps into the same app service plan.
Also, you can use the scaling functionality, or configure autoscaling for the app service plan by default.

How to implement a pull-queue using Cloud Tasks in Node.js

I am trying to implement a pull queue using Cloud Tasks + the App Engine standard environment in Node.js; basically, I am trying to lease tasks from a queue. The problem is that I can only find examples in other languages, and there is no mention of creating or leasing tasks for pull queues in the GCP Node.js documentation.
Please tell me this is possible and that I do not need to introduce a different language into my project just to implement a pull-queue mechanism.
Here is a link to the equivalent Python documentation
--- edit ---
I managed to find a reference in the types that allowed me to do this:
import { v2beta2 } from "@google-cloud/tasks";

const client = new v2beta2.CloudTasksClient();

const [{ tasks }] = await client.leaseTasks({
  parent: client.queuePath(project, location, "my-pull-queue"),
  maxTasks: 100,
});
...but it is giving me some odd quota error:
Error: Failed to lease tasks from the my-pull-queue queue: 8
RESOURCE_EXHAUSTED: Quota exceeded for quota metric 'Alpha API
requests' and limit 'Alpha API requests per minute (should be 0 unless
whitelisted)' of service 'cloudtasks.googleapis.com' for consumer
'project_number:xxx'.
I can hardly find sources referencing this type of quota error, but it seems to stem from APIs that are not yet public and can only be used when access is granted explicitly (which would explain the whitelisting).
Another thing I find very odd is that there are two beta clients, v2beta2 and v2beta3, but only v2beta2 defines methods for leasing a task. Both beta APIs define types for creating a pull-queue task.
I just found this statement that pull-queues are not supported in Node.js.
https://github.com/googleapis/nodejs-tasks/issues/123#issuecomment-445090253

Blob-triggered Azure Function doesn't process only one blob at a time anymore

I have written a blob-triggered function that uploads data to a CosmosDB database using the Gremlin API, on Azure Functions version 2.0. Whenever the function is triggered, it reads the blob, extracts the relevant information, and then queries the database to upload the data.
However, when all files are uploaded to blob storage at the same time, the function processes all of them at the same time, which results in too many requests for the database to handle. To avoid this, I ensured that the Azure Function would only process one file at a time, by setting batchSize to 1 in the host.json file:
{
  "extensions": {
    "queues": {
      "batchSize": 1,
      "maxDequeueCount": 1,
      "newBatchThreshold": 0
    }
  },
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "version": "2.0"
}
This worked perfectly fine for 20 files at a time.
Now we are trying to process 300 files at a time, and this setting no longer seems to work: the function processes all the files at the same time again, and the database cannot handle all the requests.
What am I missing here? Is there some scaling issue I'm not aware of?
From here:
If you want to avoid parallel execution for messages received on one queue, you can set batchSize to 1. However, this setting eliminates concurrency as long as your function app runs only on a single virtual machine (VM). If the function app scales out to multiple VMs, each VM could run one instance of each queue-triggered function.
You need to combine this with the app setting WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT when you run on the Consumption plan.
Or, according to the docs, the better way would be through the Function property functionAppScaleLimit: https://learn.microsoft.com/en-us/azure/azure-functions/event-driven-scaling#limit-scale-out
WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT would work, of course.
You can also scale to multiple function app instances within one host; then you have fewer hosts and a higher FUNCTIONS_WORKER_PROCESS_COUNT per host. Cost implications depend on your plan.
Note that all workers within a host share resources, so this is recommended for more IO-bound workloads.
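For reference, the functionAppScaleLimit property mentioned in the linked docs can be set with the Azure CLI. This is a configuration command following the pattern from that documentation; the resource group and app name are placeholders:

```
az resource update --resource-type Microsoft.Web/sites \
  -g <RESOURCE_GROUP> -n <FUNCTION_APP_NAME>/config/web \
  --set properties.functionAppScaleLimit=1
```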

azure servicebus maxConcurrentCalls totally ignored

I have these settings in my host.json, but every time I run the function it runs in parallel, spawning many more threads than 1 (as many as there are messages in the queue):
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "prefetchCount": 1,
      "messageHandlerOptions": {
        "maxConcurrentCalls": 1
      }
    }
  }
}
my function
[FunctionName(nameof(ResourceEventProcessorFunction))]
public async Task Run(
    [ServiceBusTrigger("%TopicName%", "%SubscriptionName%", Connection = "ServiceBusConnection", IsSessionsEnabled = true)] Message message,
    IMessageSession messageSession,
    ILogger log)
Leveraging sessions
Since you are using sessions, you can use the same sessionId for all messages, and they will be processed in order by a single instance, regardless of the settings in your host.json.
https://learn.microsoft.com/en-us/azure/service-bus-messaging/message-sessions
Using Singleton attribute
If you can't use the sessionId for your purpose, you should try the [Singleton] attribute on your function. This will ensure that only one instance across all of your function instances will process the request.
We have this working successfully for WebJobs in production, and it should work just the same for Azure Functions. If you have dedicated app service plans, using this attribute should be enough. This is not recommended for a consumption plan.
[Singleton] does work on functions. The Azure Function host will create or wait for a lock in the Azure Storage account. The lock is the host ID which should be the same for all hosts of an app across all instances - so all instances share this lock and will only allow one execution to occur at a time.
To test this I put 1000 queue messages at once on a function with [Singleton]. The function would wake up, emit the invocation ID, sleep, and then emit the invocation ID. After processing all 1000 I looked at logs and never saw invocation IDs overlap. Only one invocation would happen globally at a time.
https://github.com/Azure/azure-functions-host/issues/912#issuecomment-419608830
[Singleton]
[FunctionName(nameof(ResourceEventProcessorFunction))]
public async Task Run(
    [ServiceBusTrigger("%TopicName%", "%SubscriptionName%", Connection = "ServiceBusConnection", IsSessionsEnabled = true)] Message message,
    IMessageSession messageSession,
    ILogger log)
In a consumption plan
Continuing the quote above:
With that said I think the recommendation is: [Singleton] isn't recommended for consumption hosted function plans. If you have a dedicated app service plan it's fine (as you are paying for the instance anyway). If you want to enforce [Singleton] like behavior in a consumption plan you are likely best to:
Set WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT to 1 so you never scale to more than one instance
Set the host.json file to only allow 1 concurrent execution at a time for that trigger (for instance a batch size of 1 for Azure Queues).
https://github.com/Azure/azure-functions-host/issues/912#issuecomment-419608830
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "prefetchCount": 1,
      "messageHandlerOptions": {
        "maxConcurrentCalls": 1
      }
    }
  }
}
Maybe you can set WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT to 1 to make the function run on only one instance at a time.
If you develop locally, you can set it in local.settings.json; if you work in the Azure portal, you can set it under Configuration -> Application settings.
Note:
1. If you set WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT to 1, your function will not scale out and can only run on one instance.
2. In addition to setting WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT, you still need to set maxConcurrentCalls to 1.
3. This setting is in preview. An app property for function max scale-out has been added and is the recommended way to limit scale-out.
For more details, you can refer to this official document.
So the problem was that every message had a different sessionId.
Disabling sessions on the subscription in Azure solved this problem.
Details below, for the bounty :D
The Azure docs don't exactly specify how to limit the thread count, but I dug a bit deeper.
There is a MessageReceivePump and a SessionReceivePump; one uses MaxConcurrentCalls, the other uses MaxConcurrentSessions and MaxConcurrentAcceptSessionCalls.
Be aware of this if you enable sessions on your subscription: MaxConcurrentCalls does not apply, and things only behave as expected when the session ID is the same.
When session IDs differ, try MaxConcurrentSessions or MaxConcurrentAcceptSessionCalls instead, but be aware there is little documentation about this.
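For session-enabled triggers on Functions v2, the session-side limits mentioned above live under sessionHandlerOptions in host.json rather than messageHandlerOptions. A hedged sketch; check your Microsoft.Azure.WebJobs.Extensions.ServiceBus version, as these property names have moved between extension versions:

```
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "sessionHandlerOptions": {
        "maxConcurrentSessions": 1
      }
    }
  }
}
```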

How to limit concurrent Azure Function executions

I've seen this problem expressed a lot but I've yet to find a working solution.
In short, I periodically have a large batch of processing operations to be done. Each operation is handled by an Azure Function and makes calls to a database. If too many functions run concurrently, the database is overloaded and I get timeout errors. So I want to limit the number of concurrent Azure Function invocations that run at any one time.
I've switched the function to be queue-triggered and tweaked the batchSize / newBatchThreshold / maxDequeueCount host.json settings in many ways based on what I've seen online. I've also set the WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT application setting to 1 in my function application settings to prevent more than one VM from being spawned.
Yet still, every time I fill that queue, multiple functions will spawn indiscriminately and my database will fall over.
How can I throttle the number of concurrent operations?
The problem ended up being a difference in host.json formatting between V1 and V2 Functions. Below is the correct configuration (using Microsoft.Azure.WebJobs.Extensions.Storage 3.0.1 or later). The following host.json configures a single function app to process queue messages sequentially:
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 1,
      "newBatchThreshold": 0
    }
  }
}
Setting App Setting WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT = 1 restricts a function app from dynamically scaling out beyond one instance.
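Those settings cap concurrency across function instances. If a single invocation itself fans out many database calls, an in-process limiter adds one more layer of protection. This is a generic sketch, not a Functions API; the names runWithLimit and limit are illustrative:

```javascript
// Run the given async task factories with at most `limit` in flight at once.
async function runWithLimit(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0; // index of the next task to start
  async function worker() {
    while (next < tasks.length) {
      const i = next++;              // claim a task index (single-threaded, so safe)
      results[i] = await tasks[i](); // run the task and store its result in order
    }
  }
  // Spawn `limit` workers (or fewer if there are fewer tasks) and wait for all.
  const workers = Array.from({ length: Math.min(limit, tasks.length) }, worker);
  await Promise.all(workers);
  return results;
}
```

You would wrap each database call in a zero-argument async function and pass the array to runWithLimit with whatever concurrency your database tolerates.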
