I have an HTTP-triggered Azure Durable Function. When I run this function, I see 502 Bad Gateway exceptions happening very frequently.
Looking at 'Diagnose and solve problems', I see this:
502 errors: The HTTP Trigger limits at https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook?tabs=in-process%2Cfunctionsv2&pivots=programming-language-csharp#trigger---limits---limits may result in 502 errors being logged if you hit these limits.
This is host.json in my Azure Function:
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  }
}
What am I missing? How do I resolve this?
As users @Skin and @thanzeel pointed out, your function is hitting limits such as the time-out; the same is mentioned in your last screenshot:
If you open the link specified in that screenshot, i.e. the MS doc on Azure Functions HTTP webhook limits, check the following for your Function App:
If it is hosted on the Consumption plan, increase its time-out value in the host.json file (see the sketch below) and check again.
If the HTTP request takes longer than the Consumption plan limit allows, go for the Durable Functions async pattern, or scale the hosting plan up to Premium or an App Service plan.
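For example, on the Consumption plan the default time-out is 5 minutes and the maximum is 10, so a host.json along these lines raises it to the ceiling (a minimal sketch to merge into your existing file):
{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}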
Another possible cause of this issue is hitting the length limits on the HTTP request or the URL.
I have Azure Functions developed in Node.js. When I create a cloud instance for the function app, it gets stuck in the deployment process with all the resources in OK status, while Microsoft.Web/serverfarms returns 429. The error message reads:
**"status"**: "Failed",
**"error"**: {
**"code"**: "429",
**"message"**: "App Service Plan Create operation is throttled for subscription <subcription_id>. Please contact support if issue persists.",
}
Please let me know what the possible solution for this might be.
It turns out you just need to create a new function app in a different region and delete all the related instances of the previously deployed Azure Function.
While answering Retrieve quota for Microsoft Azure App Service Storage, I stumbled upon the FileSystemUsage metric for the Microsoft.Web/sites resource type. As per the documentation, this metric should return the "Percentage of filesystem quota consumed by the app".
However, when I execute the Metrics - List REST API operation (and also check the Metrics blade in the Azure Portal) for my web app, the value is always returned as zero. I checked it against a number of web apps in my Azure subscriptions, and for all of them the result was zero. I am curious to know the reason for that.
In contrast, if I execute the App Service Plans - List Usages REST API operation, it returns the correct value. For example, if my App Service plan is S2, I get the following response back:
{
  "unit": "Bytes",
  "nextResetTime": "9999-12-31T23:59:59.9999999Z",
  "currentValue": 815899648,
  "limit": 536870912000, // 500 GB (50 GB/instance x max 10 instances)
  "name": {
    "value": "FileSystemStorage",
    "localizedValue": "File System Storage"
  }
},
Did I misunderstand FileSystemUsage for web apps? I would appreciate it if someone could explain the purpose of this metric. If it is indeed what is documented, then why does the API return a zero value?
This is the expected behavior; please check the Understand metrics doc:
Note
File System Usage is a new metric being rolled out globally, no data is expected unless your app is hosted in an App Service Environment.
So currently the File System Usage metric only works for apps hosted in an App Service Environment (ASE).
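If you need the value elsewhere in the meantime, the per-plan List Usages call from the question is the one that returns real data. A minimal C# sketch of that request (subscription, resource group and plan names are placeholders, and acquiring the bearer token is omitted):
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ListUsages
{
    static async Task Main()
    {
        // Placeholder IDs; substitute your own, plus a valid AAD token.
        var url = "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>"
                + "/providers/Microsoft.Web/serverfarms/<plan-name>/usages?api-version=2022-03-01";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "<access-token>");

        // Returns the FileSystemStorage entry shown in the question.
        var response = await client.GetAsync(url);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}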
I need to have only one instance of an Azure Function App running, and I put the following JSON in host.json.
But when a function gets triggered by a Service Bus queue, I can clearly see in the Live Metrics Stream in Application Insights that several servers get provisioned to serve the load. What am I missing to limit the running servers to only one?
{
  "version": "2.0",
  "WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT": 1,
  "maxConcurrentCalls": 2,
  "extensions": {
    "serviceBus": {
      "prefetchCount": 1,
      "messageHandlerOptions": {
        "maxConcurrentCalls": 2
      }
    }
  }
}
Why do you want to do that?
Remember that each instance can process multiple messages at the same time, so one instance does not mean one message at a time.
Anyway, you can go to your app settings and add the following:
WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT = 1
WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT is an application setting, not a host.json setting. I would also point out that it's somewhat less than 100% reliable, so if you need a hard guarantee of only 1 instance then you should use either a Premium plan or a standard App Service plan rather than a Consumption plan.
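Putting both points together, a sketch of the corrected setup: the Service Bus concurrency settings stay in host.json, while the scale-out cap moves to the application settings (values taken from the question):
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "prefetchCount": 1,
      "messageHandlerOptions": {
        "maxConcurrentCalls": 2
      }
    }
  }
}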
I am using Azure Logic App with Azure BLOB Storage trigger.
When a blob is created or modified in Azure Storage, I pull the content of that blob from Storage, do some transformations on the data, and push it back to Azure Storage as new blob content using the Create Content - Azure Blob Storage action of the Logic App.
With a large number of blobs inserted or updated in blob storage (for example 10,000 files), the Logic App gets triggered with multiple runs for these blobs as expected, but the subsequent Azure Blob actions fail with the following error:
{
  "statusCode": 429,
  "message": "Rate limit is exceeded. Try again in 16 seconds."
}
Has anyone faced a similar issue in Logic Apps? If yes, can you suggest what the possible reason and probable fix could be?
Thanks
Seems like you are hitting the rate limits on the Azure Blob Managed API.
Please refer to Jörgen Bergström's blog about this: http://techstuff.bergstrom.nu/429-rate-limit-exceeded-in-logic-apps/
Essentially he says you can set up multiple API connections that do the same thing and then randomize which connection is used in the logic app code view, which eliminates the rate-exceeding issue.
An example of this (I was using SQL connectors) is the set of API connections I set up for my logic app, shown below. You can do the same with a blob storage connection, using a similar naming convention, e.g. blob_1, blob_2, blob_3, and so on. You can create as many as you would like; I created 10 for mine:
You would then, in your logic app code view, replace all your current blob connection references, e.g.
@parameters('$connections')['blob']['connectionId']
where "blob" is your current blob API connection, with the following (rand is inclusive at the start and exclusive at the end, so rand(1,11) picks blob_1 through blob_10):
@parameters('$connections')[concat('blob_', rand(1,11))]['connectionId']
And then make sure to add all your "blob_" connections at the end of your code:
"blob_1": {
"connectionId": "/subscriptions/.../resourceGroups/.../providers/Microsoft.Web/connections/blob-1",
"connectionName": "blob-1",
"id": "/subscriptions/.../providers/Microsoft.Web/locations/.../managedApis/blob"
},
"blob_2": {
"connectionId": "/subscriptions/.../resourceGroups/.../providers/Microsoft.Web/connections/blob-2",
"connectionName": "blob-2",
"id": "/subscriptions/.../providers/Microsoft.Web/locations/.../managedApis/blob"
},
...
The logic app will then randomize which connection to use on each run, eliminating the 429 rate limit error.
Please check this doc: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-request-limits
For each Azure subscription and tenant, Resource Manager allows up to 12,000 read requests per hour and 1,200 write requests per hour.
You can check your remaining quota via the response headers of any Resource Manager call (First() needs a using System.Linq directive):
response.Headers.GetValues("x-ms-ratelimit-remaining-subscription-reads").First()
or
response.Headers.GetValues("x-ms-ratelimit-remaining-subscription-writes").First()
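If you do exhaust the quota, one way to cope is a small helper that backs off on 429 responses (a sketch, assuming an already-authenticated HttpClient):
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

static class ArmRetry
{
    public static async Task<HttpResponseMessage> GetWithRetryAsync(HttpClient client, string url)
    {
        while (true)
        {
            var response = await client.GetAsync(url);
            if (response.StatusCode != (HttpStatusCode)429)
                return response;

            // Honor Retry-After when Azure provides it, otherwise wait a default interval.
            var delay = response.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(15);
            await Task.Delay(delay);
        }
    }
}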
I have this setup on Azure.
1 Azure Service Bus
1 Azure SQL Database
1 Dynamic App Service Plan
1 Azure function
I'm writing messages to the Service Bus; my function is triggered when a message is received and writes to the database.
I have a huge number of messages to process, and I get this exception:
The request limit for the database is 90 and has been reached
I dig here on SO and in the docs and I found this answer from Paul Battum: https://stackoverflow.com/a/50769314/1026105
You can use the configuration settings in host.json to control the level of concurrency your functions execute at per instance and the max scaleout setting to control how many instances you scale out to. This will let you control the total amount of load put on your database.
What is the strategy to limit the function, since it can be limited on both:
the level of concurrency your functions execute at per instance
the number of instances
Thanks guys!
Look into using the Durable Functions extension for Azure Functions, where you can control the number of concurrent orchestrator and activity functions. You will need to change your design a little, but you will then get far better control over concurrency.
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "hubName": "MyFunctionHub",
      "maxConcurrentActivityFunctions": 10,
      "maxConcurrentOrchestratorFunctions": 10
    }
  },
  "functionTimeout": "00:10:00"
}
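To make that concrete, a minimal sketch of the reshaped design (function, queue and connection names are hypothetical): the Service Bus trigger only starts an orchestration, and the activity performs the database write, so the two maxConcurrent* settings above cap the load on the database per instance.
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using System.Threading.Tasks;

public static class ThrottledProcessing
{
    // The queue trigger no longer writes to the database directly;
    // it only hands the message over to an orchestration.
    [FunctionName("OnMessage")]
    public static Task OnMessage(
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] string message,
        [DurableClient] IDurableOrchestrationClient client)
        => client.StartNewAsync("ProcessMessage", instanceId: null, input: message);

    [FunctionName("ProcessMessage")]
    public static async Task ProcessMessage(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var message = context.GetInput<string>();
        // At most maxConcurrentActivityFunctions of these run at a time per instance.
        await context.CallActivityAsync("WriteToDatabase", message);
    }

    [FunctionName("WriteToDatabase")]
    public static Task WriteToDatabase([ActivityTrigger] string message)
    {
        // The actual database write would go here.
        return Task.CompletedTask;
    }
}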