I've already faced a similar problem a few times:
An Azure Function with a ServiceBusTrigger fails, for some reason (misconfiguration, infrastructure issues, it doesn't really matter), to connect to Service Bus (so the failure happens at the trigger level), and this leads to two issues:
It tries to restart all the time, increasing CPU consumption
It generates literally millions of exceptions in AppInsights, which leads to quota exceedance
Practically every configuration error means significantly increased bills and requires thorough monitoring after every deployment, which is an annoying and error-prone way to operate.
So, my question: is there a way to set a delay between restart attempts of, for example, one second? And, in addition, is there a way to limit the number of restart attempts and then shut down the Function?
Establishing a connection to the broker to fetch messages is the responsibility of the Functions runtime, specifically the Scale Controller. That aspect is entirely abstracted away from customers and is not configurable. I suggest raising an issue with the Azure Functions team, likely in the Runtime repo.
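Until that changes, one partial mitigation for the quota problem is to cap exception telemetry on the client before it is sent. Below is a minimal sketch of an ITelemetryProcessor, assuming a hypothetical cap of 100 exceptions per minute; it limits the billing damage but does nothing about the restarts themselves.

using System;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Sketch: drop exception telemetry above a per-minute cap so a restart
// loop cannot burn through the App Insights quota. The cap value and the
// class name are illustrative assumptions, not a documented fix.
public class ExceptionCapProcessor : ITelemetryProcessor
{
    private const int MaxExceptionsPerMinute = 100; // assumed threshold
    private readonly ITelemetryProcessor _next;
    private DateTime _windowStart = DateTime.UtcNow;
    private int _count;

    public ExceptionCapProcessor(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        if (item is ExceptionTelemetry)
        {
            var now = DateTime.UtcNow;
            if (now - _windowStart >= TimeSpan.FromMinutes(1))
            {
                _windowStart = now; // start a new one-minute window
                _count = 0;
            }
            if (++_count > MaxExceptionsPerMinute)
                return; // swallow the excess exception telemetry
        }
        _next.Process(item); // pass everything else through
    }
}

In an ASP.NET Core app this can be registered with services.AddApplicationInsightsTelemetryProcessor<ExceptionCapProcessor>(); wiring a processor into the Functions host is possible but more involved, and the sketch above is not thread-safe as written.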
I've had a couple of random instances, within a one-hour period, of the Azure Search service returning a connection timeout. It is being called from a .NET Core web application running as an Azure App Service.
App Insights shows a dependency failure for the same time (a POST to /indexes('products')/docs/search.post.search?api-version=2019-05-06) with a response of "Faulted".
Any help/ideas on why this happened and how I can prevent it would be appreciated.
You could be attempting to retrieve too much data at once, or you may be getting throttled because of too much traffic. The reason for the timeout can't be determined without more context.
To avoid timeouts, reduce the response size, limit the number of requests, or otherwise address the root cause of the failures.
Also, consider implementing a retry mechanism with exponential backoff. See this thread for information: Azure Search .net SDK- How to use "FindFailedActionsToRetry"?
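If you don't want to pull in a full retry library, a minimal sketch of exponential backoff looks like this; the helper name and the 2s/4s/8s delay schedule are illustrative, and in practice you would catch only transient exception types rather than all of them.

using System;
using System.Threading.Tasks;

// Generic retry-with-exponential-backoff sketch, not tied to any SDK type.
static async Task<T> WithRetryAsync<T>(Func<Task<T>> operation, int maxAttempts = 4)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return await operation(); // e.g. the search call that timed out
        }
        catch (Exception) when (attempt < maxAttempts)
        {
            // Back off exponentially (2s, 4s, 8s, ...) before retrying.
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
        }
    }
}

You would then wrap the failing call, e.g. var results = await WithRetryAsync(() => RunSearchAsync()), where RunSearchAsync stands in for your actual search request.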
As Dan mentioned, it is recommended to use retries, since failures due to the network or many other causes can happen, and retries will help improve your app's availability. However, if you are seeing failures happen repeatedly, or need more information, then please open a support issue so the support team can investigate further.
We have a question about Azure: in some cases we see dead time when a request arrives at one of our App Services, or when Service Bus triggers, for example, one of our Azure Functions.
The image below shows an example:
AppInsight Example Image
We execute a request whose actual work runs in about 5 seconds, but Azure takes more than 30 seconds to start the execution. We have made a lot of optimizations in our apps, but we have no visibility into this delay.
Has anyone faced the same issue and found a solution? We believe it is a performance issue in the workers, but this also happens when the workers are under low memory and CPU load, so we don't know how to scale the resource horizontally and automatically when it shows no load.
This also happens in our Azure Functions (AZF), where we believe it's an issue between Service Bus and the AZF container. In these cases we found the AZF has higher CPU consumption, but we don't know why, because in the local environment we process a lot of messages with multithreading without any problem.
We have a long-running ASP.NET web app in Azure which has no real endpoints exposed – it serves a single functional purpose, primarily reading and manipulating database data; effectively it is a batched, scheduled task, triggered by a timer every 30 seconds.
The app runs fine most of the time, but we see occasional issues where the CPU load jumps close to the App Service Plan's maximum, instantaneously rather than gradually, and the app stops executing any further timer triggers. We cannot find anything in the executing code to account for it (no signs of deadlocks, etc., and all code paths have try/catch, so there should be no unhandled exceptions). More often than not we see errors getting a connection to the database, but it's not clear whether those are cause or symptom.
Note: this is the only resource within the App Service Plan. The Azure SQL database is in the same region and, whilst utilised by other apps, is very lightly used by them, and they exhibit none of the issues seen in the problem app.
It feels like this is infrastructure-related, but we have been unable to find anything to explain what is happening, so any suggestions for where we should be looking would be gratefully received. We have enabled basic Application Insights (not the SDK), but other than seeing the CPU load spike prior to the loss of app response, there is little information of interest, given our limited knowledge of how best to utilise Insights.
Based on your description, I can think of two troubleshooting steps. First, track the running status of your program in code: put a log entry at the beginning and end of your scheduled batch task to record the status of each run. If possible, also record request/response details, so you have a complete record of when each run happened and how it went.
Second, log just before the program starts its database operations, and whether the database connection succeeded. Ideally, record which business operation was running when the CPU load spiked, so you can analyze specifically what causes the database connection failures. A minimal sketch of this kind of logging follows.
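This is only a sketch; BatchRunner, RunBatchAsync, and the connection string are placeholders for your own code, not a known API.

using System;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Logging;

// Logs the start, end, and DB connection status of each scheduled run,
// so the logs show exactly where a run stalls or fails.
public class BatchRunner
{
    private readonly ILogger<BatchRunner> _log;
    private readonly string _connectionString;

    public BatchRunner(ILogger<BatchRunner> log, string connectionString)
    {
        _log = log;
        _connectionString = connectionString;
    }

    public async Task ExecuteAsync()
    {
        var runId = Guid.NewGuid();
        _log.LogInformation("Batch {RunId} starting at {Time}", runId, DateTime.UtcNow);
        try
        {
            _log.LogInformation("Batch {RunId}: opening database connection", runId);
            await using var conn = new SqlConnection(_connectionString);
            await conn.OpenAsync();
            _log.LogInformation("Batch {RunId}: connection opened", runId);

            await RunBatchAsync(conn);
            _log.LogInformation("Batch {RunId} finished at {Time}", runId, DateTime.UtcNow);
        }
        catch (Exception ex)
        {
            // Failures logged here surface even if inner code paths swallow errors.
            _log.LogError(ex, "Batch {RunId} failed", runId);
            throw;
        }
    }

    // Stand-in for the real batch work against the database.
    private static Task RunBatchAsync(SqlConnection conn) => Task.CompletedTask;
}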
Because you cannot reproduce the problem, you can only guess at the cause. If the two points above still don't reveal where the problem is, adjust your timer so the task runs once every 5 minutes instead of every 30 seconds.
I know that Web Apps will be rebooted during maintenance without notice.
But what about Functions?
During maintenance, does the current execution get stopped?
I think it is difficult to retry Timer-, HTTP-, and Event Hub-triggered Functions.
But I hope the Functions runtime will make my code retry after the maintenance finishes.
Your question has several parts, so:
Probably yes, Azure will stop routing requests to an instance which is about to get maintenance done. Because Function executions are short-lived (on Consumption Plan), that's relatively easy to do.
"Probably" - because this is not something they guarantee to you. Overall, Functions on Consumption Plan have no SLA, and host behavior details might change over time.
If stopping in the middle of function execution is a problem for your business case, you still need to handle it. Any instance can experience hardware failure at any time, including the least convenient time possible.
The observed behavior in the case of such a failure will differ per trigger type. E.g. an HTTP call will just fail with a 5xx code and the client is supposed to retry it. Queue-based triggers have a mechanism with locks, timeouts, and retry counts. Event Hub triggers will restart from the last checkpoint.
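For the queue case, the retry count is visible on the message itself. A sketch using the isolated worker model; the queue and connection names are assumptions:

using Azure.Messaging.ServiceBus;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class OrderHandler
{
    private readonly ILogger<OrderHandler> _log;
    public OrderHandler(ILogger<OrderHandler> log) => _log = log;

    [Function("ProcessOrder")]
    public void Run(
        [ServiceBusTrigger("orders", Connection = "ServiceBusConnection")]
        ServiceBusReceivedMessage message)
    {
        // DeliveryCount increments every time the message lock expires or
        // processing throws; once it passes the queue's MaxDeliveryCount,
        // Service Bus moves the message to the dead-letter queue.
        if (message.DeliveryCount > 1)
            _log.LogWarning("Redelivery #{Count} of {Id}", message.DeliveryCount, message.MessageId);

        // ... process the message; throwing here triggers another delivery ...
    }
}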
I might be wrong, but the whole point of serverless computing is that you don't have to worry about these things anymore. So I would trust Microsoft not to stop your function during maintenance. That's probably one of the reasons why a function can only run for a limited time period.
I have a Service Bus-triggered function that, when receiving a message from the queue, will do a simple DB call and then send out emails/SMS. Can I put more than 1000 messages in my Service Bus queue to trigger the function simultaneously without the runtime being affected?
My concern is that I queue up 1000+ messages to trigger my function all at the same time, say at 5:00 PM, to send out the emails/SMS. If they end up running later because there are so many running threads, the users receiving the emails/SMS won't get them until an hour after the designated time!
Is this a concern, and if so, is there a remedy?
FYI - I know I can make the function run asynchronously; would that make any difference in this scenario?
1000 messages is not a big number. If your email/SMS service can handle them quickly, the whole batch will be gone relatively fast. A few things to know, though:
Functions won't scale to 1000 parallel executions in this case. They will start with one instance doing ~16 parallel calls at a time, observe how fast the processing goes, then maybe add a second instance, wait again, and so on.
The exact scaling behavior is not publicly described and can change over time. Thus, YMMV, and you need to test against your specific scenario.
Yes, make the functions async whenever you can (see the sketch below this answer). I don't expect a huge boost in processing speed just because of that, but it certainly won't hurt.
Bottom line: your scenario doesn't sound like a problem for Functions, but if you need very short latency, you'll have to run a test before relying on it.
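For reference, an async version of the described handler might look like the following; LookupUserAsync, SendEmailAsync, and SendSmsAsync are placeholders for the real dependencies, not a known API.

using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Microsoft.Azure.Functions.Worker;

public class NotifyHandler
{
    [Function("NotifyUser")]
    public async Task Run(
        [ServiceBusTrigger("notifications", Connection = "ServiceBusConnection")]
        ServiceBusReceivedMessage message)
    {
        var user = await LookupUserAsync(message.Body.ToString()); // the simple db call

        // Awaiting I/O releases the thread, so one instance can keep its
        // ~16 concurrent invocations busy instead of blocking threads.
        await Task.WhenAll(SendEmailAsync(user), SendSmsAsync(user));
    }

    private static Task<string> LookupUserAsync(string id) => Task.FromResult(id); // stand-in
    private static Task SendEmailAsync(string user) => Task.CompletedTask;         // stand-in
    private static Task SendSmsAsync(string user) => Task.CompletedTask;           // stand-in
}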
I'm assuming you are talking about an Azure Service Bus binding to an Azure Function. There should be no issue with more than 1000 executions firing at roughly the same time. Functions are a serverless runtime and can scale out substantially if you are running under the Consumption plan. If you are running the functions in an App Service plan, you may be limited by that plan.
In your scenario you are probably more likely to overwhelm the downstream dependencies (the database and the SMS sending system) before you overwhelm the Azure Functions infrastructure.
The best thing to do is run some load tests and monitor the exceptions coming from the connections to the database and SMS systems.
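A load test can be as simple as enqueuing the full batch and watching the dequeue rate and dependency failures in App Insights. A sketch using Azure.Messaging.ServiceBus; the connection string and queue name are assumed:

using Azure.Messaging.ServiceBus;

// Enqueue 1000 messages in batches, then observe how fast the function
// drains the queue and whether downstream calls start failing.
await using var client = new ServiceBusClient("<service-bus-connection-string>");
ServiceBusSender sender = client.CreateSender("notifications");

ServiceBusMessageBatch batch = await sender.CreateMessageBatchAsync();
for (int i = 0; i < 1000; i++)
{
    var msg = new ServiceBusMessage($"user-{i}");
    if (!batch.TryAddMessage(msg))
    {
        await sender.SendMessagesAsync(batch); // batch is full: flush it
        batch = await sender.CreateMessageBatchAsync();
        batch.TryAddMessage(msg);
    }
}
await sender.SendMessagesAsync(batch); // flush the remainder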