Azure Function Proxy - Cold Startup - Error 429 Too Many Requests

I've set up a function app in Azure. I've added a proxy to the function (so I can assign it a different URI).
When the proxy and function have been torn down and it's time to wake them up, a single Postman/Insomnia request sometimes comes back with error code 429: Too Many Requests.
How do I stop this from happening?
For the time being, I've added a logic app to ping it every 5 mins.

This seems to be related to the latest host release, https://github.com/Azure/azure-functions-host/releases/tag/v3.0.15185
On the date of this release we started receiving a lot of 429s on functions that had been running for a long time.
We fixed it by adding the following to host.json:
"extensions": {
"http": {
"dynamicThrottlesEnabled": false
}
}
Doc: https://learn.microsoft.com/pt-br/azure/azure-functions/functions-bindings-http-webhook-output
My guess is that they've changed some default values.
EDIT:
We have been operating for a long time using BOTH the host.json update above and the pinned version stated by sanjo (https://stackoverflow.com/a/65311645/10585914).
You can follow the entire discussion here: https://github.com/Azure/azure-functions-host/issues/6984
And the PR: https://github.com/Azure/azure-functions-host/pull/6986

We are also experiencing 429s in our Azure Function and have been advised by MS to pin the Azure Functions runtime to a lower version by setting FUNCTIONS_EXTENSION_VERSION to 3.0.14916.0 instead of ~3.
We're still evaluating the "solution".
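For reference, this is roughly how the pinned version looks in the portal's Configuration > Application settings "Advanced edit" JSON (a minimal sketch; the slotSetting value is just the default):
[
  {
    "name": "FUNCTIONS_EXTENSION_VERSION",
    "value": "3.0.14916.0",
    "slotSetting": false
  }
]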

From Microsoft support, there are 2 workarounds:
Cassio's answer, which actually worked for us for a couple of hours but then stopped working. We had been getting very consistent 429s for multiple days, then a brief respite after the change, then they came back.
Update your FUNCTIONS_EXTENSION_VERSION app setting to the previous version (3.0.14916.0). This has worked again in the short time since we've changed it.
(Screenshot: App Setting Update)

I don't think your 5-minute ping is the problem, contrary to the answer from Hury Shen. We have recently begun receiving 429 responses any time our functions wake from a cold period. I don't know what has changed on the Azure side, but it is not good! One fix you could try is simply redeploying your function; we did this and it worked, at least for a time. I will report back if we find anything else.

It seems the error was caused by the logic app pinging the function every 5 minutes. Per my understanding, you scheduled the logic app to request the function in order to keep it awake.
If so, you do not need to create the logic app specifically to wake it up. You can choose the Premium plan for your function app when you create it.
Then go to the "Scale out" tab of your function app and set Always Ready Instances to 1. Your function will then have one instance always awake, so it will not cold start when a request comes in.
The Premium plan provides the same features and scaling mechanism as the Consumption plan (based on number of events) but with no cold start, so it will cost much more than the Consumption plan. You can refer to this page about function cost.
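If you manage the app through templates rather than the portal, the same setting can be expressed on the site configuration. A minimal ARM-style sketch, assuming the Microsoft.Web/sites siteConfig property minimumElasticInstanceCount is the one behind the "Always Ready Instances" slider (worth double-checking against the current schema):
{
  "properties": {
    "siteConfig": {
      "minimumElasticInstanceCount": 1
    }
  }
}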

Related

Azure Timer Function connection limits

I have an Azure Timer function that executes every 15 minutes. The function compiles data from 3 data sources (a WCF service, a REST endpoint, and Table Storage) and inserts the data into Cosmos DB. Where I am running into an issue is that after 7 or 8 executions of the function I get the "Host thresholds exceeded: [Connections]" error. Here is what is really strange: the function takes about 2 minutes to execute, yet the error doesn't show up in the logs until well after the function is done executing.
I have gone through all the connection limits documentation and understand it. Where I am a bit confused is when the limits matter. A single execution of my function does not come anywhere close to hitting the 600 active connection limit. Do the connection limits apply to an individual execution of the timer function, or are the limits cumulative over multiple executions?
Here is the real kicker, this function was running fine for two weeks until 07/22/2012. Nothing in the code has changed and it has not been redeployed.
Runtime is 3.1.3
Is your function on a Consumption Plan or in an App Service Plan?
From your description it just sounds like your code may be leaking connections and building up a large number of them over time.
Maybe this blog post can help in ensuring the right usage patterns? https://4lowtherabbit.github.io/blogs/2019/10/SNAT/
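The question's code is presumably .NET, but the pattern the blog post describes is language-agnostic: create your HTTP and Cosmos clients once, at static/module scope, and reuse them across executions instead of constructing new ones on every run. A minimal Python sketch of that pattern, assuming the requests and azure-cosmos packages; the endpoint, keys, and names are placeholders:
import os
import requests
import azure.functions as func
from azure.cosmos import CosmosClient

# Created once per host instance and reused across executions, so each
# timer run does not open (and leak) a fresh pool of outbound sockets.
http_session = requests.Session()
cosmos_client = CosmosClient(os.environ["COSMOS_ENDPOINT"], credential=os.environ["COSMOS_KEY"])
container = cosmos_client.get_database_client("mydb").get_container_client("items")

def main(mytimer: func.TimerRequest) -> None:
    # Reuse the shared session rather than calling requests.get directly,
    # which would open new connections on every execution.
    data = http_session.get("https://example.com/api/data").json()
    for item in data:
        # Assumes each item already carries an "id" property.
        container.upsert_item(item)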

How to find/cure source of function app throughput issues

I have an Azure function app triggered by an HttpRequest. The function app reads the request, tosses one copy of it into a storage table for safekeeping and sends another copy to a queue for further processing by another element of the system. I have a client running an ApacheBench test that reports approximately 148 requests per second processed. That rate of processing will not be enough for our expected load.
My understanding of function apps is that they should spawn as many instances as needed to handle the load sent to them. But this function app might not be scaling out quickly enough, as it's only handling those 148 requests per second. I need it to handle at least 200 requests per second.
I’m not 100% sure the problem is on my end, though. In analyzing the performance of my function app I found a LOT of 429 errors. What I found online, particularly https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-request-limits, suggests that these errors could be due to too many requests being sent from a single IP. Would several ApacheBench 10K and 20K request load tests within a given day cause the 429 error?
However, if that’s not it, if the problem is with my function app, how can I force my function app to spawn more instances more quickly? I assume this is the way to get more throughput per second. But I’m still very new at working with function apps so if there is a different way, I would more than welcome your input.
Maybe the Premium app service plan that’s in public preview would handle more throughput? I’ve thought about switching over to that and running a quick test but am unsure if I’d be able to switch back?
Maybe EventHub is something I need to investigate? Is that something that might increase my apparent throughput by catching more requests and holding on to them until the function app could accept and process them?
Thanks in advance for any assistance you can give.
You don't provide much context about your app, but here are a few steps you can take to improve things:
If you want more control, you need to use an App Service plan with Always On to avoid cold starts; you will also need to configure autoscaling, since you are responsible for scaling in this plan and autoscale is not enabled by default.
Your Azure Function must be fully async: you have external dependencies, so you don't want to block threads while calling them.
Look at the limits. You can tweak them in host.json (see the sketch after this list).
A 429 error means the function is too busy to process your request, so probably your table writes are not async and are blocking threads.
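A minimal sketch of the host.json HTTP settings referred to above (host.json v2 schema); the numbers are placeholders to tune against your own load, not recommendations:
{
  "version": "2.0",
  "extensions": {
    "http": {
      "maxOutstandingRequests": 200,
      "maxConcurrentRequests": 100,
      "dynamicThrottlesEnabled": false
    }
  }
}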
Function apps work very well and scale as advertised. The 429s could be because the requests are coming from a single IP and Azure may be treating them as a DDoS attack. You can do the following:
Azure DevOps Load Test
You can load test using one of the Azure services; I am fairly sure they have better criteria for handling IPs. See Azure DevOps Load Test.
Provision a VM in Azure
The way I normally do it is to provision a VM (Windows 10 Pro) in Azure and use JMeter to load test. I have used this method and it works fine. You can provision a couple of them and subdivide the load.
Use professional load testing services
If possible, you can use services like Loader.io. They use sophisticated algorithms to run the load test and provision a bunch of VMs to run the same test.
Use Application Insights
If you aren't already, you should be using Application Insights to get a better look from the server's perspective. Go to Live Metrics and see how many instances are provisioned to handle the load test. You can easily look into the events and error logs that arise, and drill into each associated dependency to investigate the problem.

Azure Functions: Application freezes - without any error message

I have an Azure Functions application which once in a while "freezes" and stops processing messages and timed events.
When this happens I do not see anything in the logs (AppInsight), neither exceptions nor any kind of unfamiliar traces.
The application has following functions:
One processing messages from a Service Bus topic subscription (belonging to another application)
One processing from an internal storage queue
One timer based function triggered every half hour
Four HTTP endpoints
Our production app runs fine. This is thanks to an internal dashboard (on a big screen in the office) which polls one of the HTTP endpoints every 5 minutes, thereby keeping the app alive.
Our test, stage, and preproduction apps stall after a while and stop processing messages and timer events.
This question is more or less the same as my previous question, but without the error message that was the focus then; there are far fewer error messages now, as our deployment has been fixed.
A more detailed analysis can be found in the GitHub issue.
On a Consumption plan, all triggers are registered in the host so that they can be handled, which leads to my functions being called at the right time. This part of the host also handles scalability.
I had two bugs:
Wrong deployment. Use zip-based deployment as described in the docs.
Malformed host.json. Comments are not valid JSON; Azure Functions tolerates them in most circumstances, but not all (see the sketch below).
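For reference, a minimal comment-free host.json that parses cleanly everywhere (the logging section is just an illustrative placeholder):
{
  "version": "2.0",
  "logging": {
    "logLevel": {
      "default": "Information"
    }
  }
}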
The sites now work as expected, both in terms of availability and scalability.
Thanks to the people in the Azure Functions team (Ling Toh, Fabio Cavalcante, David Ebbo) for helping me out with this.

Azure Function app periodically not firing on trigger or timer

I have an Azure Function app with 4 functions
one triggered on a timer every 24 hours
one triggered on events from IoT Hub
two others triggered on events from Service Bus as a result of the previous function
All functions work as expected when first deployed but after a period of time the functions stop running and the app appears to be scaled down with no servers online. At this point the functions are never triggered again unless I either restart the app, or drill into a function and view details of it (supposedly, forcing the function to start up).
I have the exact same code deployed to a different environment and it runs perfectly and has never encountered this issue. I've checked all the settings and configuration and can't see any material differences between the two.
This is really frustrating and is becoming a big issue. Any help would be much appreciated.
Function App is hosted in Australia Southeast.
This is the last execution (as of now)
10:45 PM UTC - Function started (Id=4d29555b-d3af-43d7-95e9-1a4a2d43dc46)
The event-triggered function should run every few minutes, as the IoT Hub it triggers from has a steady stream of events coming in. When I prod the function (or restart it) and it comes to life, it quickly churns through the backlog of messages queued in the IoT Hub.
I see the problem: you have comments in your host.json, which makes it invalid and throws off the parser at the scale controller level.
Admittedly, the error handling is quite poor here. But anyway, remove the commented-out logger and it should all work.

Azure Bot Service using over 1GB of data transfer out per day. Why? How can I stop that?

I created a QnA bot using the Azure Bot service, and now I'm seeing data transfers out of my subscription of over 1 GB a day! I cannot figure out why, but since it's billable, I'd like to know why and how I can stop it.
The bot isn't being used yet, so no one is sending queries to it. I'm confused how this is happening.
Here's a screenshot of the usage graph for the last hour, as well as a screenshot of the billing for the last few days showing the sudden jump in usage.
Is this normal?
If you add AzureWebJobsDisableHomepage with a value of true to the app settings, the data out will stop.
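A minimal sketch of that entry in the portal's Application settings "Advanced edit" JSON (the slotSetting value is just the default):
[
  {
    "name": "AzureWebJobsDisableHomepage",
    "value": "true",
    "slotSetting": false
  }
]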
The setting itself is documented here: https://github.com/Azure/azure-webjobs-sdk-script/wiki/Configuration-Settings (although it doesn't provide an explanation for how this setting affects a bot specifically)
The reasoning behind what is happening is a little complex. Azure Functions are not normally kept in memory and available all the time; there is a small spin-up time, which is not ideal for a bot. So, apparently, consumption plan bots have a job set up to ping them every 10 seconds (and by 'ping', I mean retrieve the root of the site). If you open the Log Stream, you'll see an HTTP GET request every 10 seconds. Adding AzureWebJobsDisableHomepage doesn't disable the request, but it changes the response status from "OK" to "NoContent".
This will be added to the Bot Service arm template soon (so future consumption plan bots do not automatically accrue these data usages).
