We have a service which pings our EP1 Premium service, and yesterday we received 3 client-side timeout errors after 2 minutes of waiting. When opening the trace in App Insights, the requests that time out are not logged at all and show no trace of ever having been received on the Azure side, so they stay unanswered. Looking at the metrics provided in the Azure Functions app, I found that 1-2 minutes after the request has been sent, the app loses all its ability to work: its Total App Domains falls to 0, as do all connections, threads and so on, and this state lasts until the next request is received, thereby "skipping" the request that happened beforehand. This is a big issue, as I need to make sure requests get answered in a timely manner.
The client service sent HTTP requests to the Azure Functions app expecting an answer, only to time out while the Azure side has no record of ever receiving the request.
I believe this issue is related to the cold start behaviour of Azure Functions on the Consumption plan. The "skipping" mechanism is explained below:
Apps may scale to zero when idle, meaning some requests may have additional latency at startup. The consumption plan does have some optimizations to help decrease cold start time, including pulling from pre-warmed placeholder functions that already have the function host and language processes running. (https://learn.microsoft.com/en-us/azure/azure-functions/functions-scale#cold-start-behavior)
Please also consider having a look at this article, which explains the behaviour: https://azure.microsoft.com/en-us/blog/understanding-serverless-cold-start/
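One common mitigation, where an app can still scale to zero between requests, is a timer-triggered function that keeps the host warm. Below is a minimal sketch, assuming a Node.js function app on the v4 programming model; the function name and schedule are placeholders, not something from the original question.

    import { app, InvocationContext, Timer } from "@azure/functions";

    // Hypothetical keep-warm timer: fires every 5 minutes so the function host
    // and language worker stay loaded between real requests.
    app.timer("keepWarm", {
        schedule: "0 */5 * * * *", // NCRONTAB with seconds: every 5 minutes
        handler: async (timer: Timer, context: InvocationContext): Promise<void> => {
            context.log("keep-warm tick, isPastDue:", timer.isPastDue);
        },
    });

On the EP1 Premium plan from the question, always-ready instances are the intended fix for cold start, so a sketch like this is only a stopgap for plans that can scale to zero.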
Related
I am running a nodejs app on Heroku free tier.
Free tier is more than enough for occasional traffic, just a few times a month for a backend admin system.
However, if multiple REST API calls are made within 1 or 2 minutes, it encounters timeout errors. Actual scenario: my Node server receives WhatsApp replies coming from Twilio. Twilio uses a webhook to call the REST API on my Node server hosted on the Heroku free tier. About 10-20 WhatsApp replies are expected and could all come in within a minute. The Node server receives the data from Twilio and writes to Firestore for each WhatsApp reply.
I am reading Heroku's help document on how to deal with request timeouts: https://help.heroku.com/PFSOIDTR/why-am-i-seeing-h12-request-timeouts-high-response-times-in-my-app
Can I increase the number of web workers under the current Free unverified account?
I tried from the Heroku CLI and got the following response:
heroku ps:type worker=standard-2x
› Warning: heroku update available from 7.59.2 to 7.60.2.
▸ Type worker not found in process formation.
▸ Types: web
If I get my account verified by adding a credit card, how many web workers can I add to the existing free dyno?
If I stay on the Free tier without upgrading to, say, the Hobby tier: even if there is an unexpected spike in traffic, my dyno will just go to sleep once the free dyno hours are used up, and Heroku will not automatically charge my credit card for the excess traffic, right? I am very concerned because of news about Firebase users being charged thousands of dollars after an unexpected spike in traffic, with no way to cap or limit it.
If I am expecting up to 20 simultaneous/concurrent REST API calls to my Node server at Heroku within a minute, how many web workers should I add?
On a free account, you can only have one worker at a time. If you add more dynos, you will be charged.
When your free Heroku dyno sleeps because it is not being used, it has to wake up when it receives a new request. Waking up takes quite a bit of time and is likely longer than Twilio's 10 second timeout for HTTP responses. I think your app is probably failing not because of the spike in traffic, especially as you are using Node, which can handle multiple connections with ease, but because your sleeping dyno times out.
You might find some benefit in adding the same URL as the fallback URL for your number. That would allow Twilio to call your server and wake it up; if the first attempt fails because it timed out, the fallback URL will trigger a second attempt, which would hopefully succeed.
In reality, the best practice is to pay for an account that allows you to keep your dyno running the whole time.
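Independent of dyno sleep, it also helps to stay well inside Twilio's 10 second window by acknowledging the webhook immediately and doing the Firestore write afterwards. A minimal sketch, assuming an Express app and the firebase-admin SDK; the route and collection names are placeholders:

    import express from "express";
    import * as admin from "firebase-admin";

    admin.initializeApp(); // assumes credentials are supplied via the environment

    const app = express();
    app.use(express.urlencoded({ extended: false })); // Twilio posts form-encoded bodies

    app.post("/whatsapp-reply", (req, res) => {
        // Acknowledge right away with empty TwiML so Twilio does not time out,
        // then persist the reply in the background.
        res.type("text/xml").send("<Response></Response>");

        admin.firestore()
            .collection("whatsappReplies") // placeholder collection name
            .add({ from: req.body.From, body: req.body.Body, receivedAt: Date.now() })
            .catch((err) => console.error("Firestore write failed", err));
    });

    const port = Number(process.env.PORT) || 3000; // Heroku injects PORT
    app.listen(port, () => console.log(`listening on ${port}`));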
In some 1-5% of our requests, we are seeing slow communication between APIs (REST API requests). Both APIs are developed by us and hosted on Azure, each app service on its own app service plan in the same region, P1v2 tier.
What we are seeing in Application Insights is that POST or GET requests on the origin API can take a few seconds to execute, while the real execution time on the destination API is only a few milliseconds.
Examples (first line POST request on origin, second execution time on destination API): slow req 1, slow req 2
Our best guess is that the time difference is lost in communication between components. We don't have an explanation for it since the payload is really small and in most cases, communication takes less than 5 milliseconds.
We dismissed the possible explanation that it is due to component cold start, since it happens during constant load and no horizontal scaling was performed.
Do you have any idea what might cause it or how to do additional analysis in order to discover it?
If you're running multiple sites on the App Service plan, enable the "Always On" setting for your web app: All Settings > Application Settings > click Always On.
See here for details: https://azure.microsoft.com/en-us/documentation/articles/web-sites-configure/
When Always On is off, the site is shut down after 20 minutes of inactivity to free up resources for any additional websites that might be using the same App Service Plan.
The amount of information it needs to collect, process and then present requires some time and involves internal calls as well; that is why, depending on server load and usage, it takes around 6 to 7 seconds, sometimes even more.
To troubleshoot that latency, try these steps provided by Microsoft.
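If further analysis is needed, one option is to time the phases of the outbound call on the origin side so that connection setup can be separated from time spent on the destination API. The question does not say which stack the origin API uses; the sketch below assumes Node.js and uses a placeholder URL:

    import https from "https";

    // Logs DNS, TCP and TLS timings relative to the start of the request.
    // If the socket is reused from the agent's keep-alive pool, the connect
    // events will not fire, which is itself useful information.
    function timedGet(url: string): void {
        const started = Date.now();
        const req = https.get(url, (res) => {
            res.on("data", () => { /* drain the body */ });
            res.on("end", () =>
                console.log(`total ${Date.now() - started} ms, status ${res.statusCode}`));
        });
        req.on("socket", (socket) => {
            socket.on("lookup", () => console.log(`dns ${Date.now() - started} ms`));
            socket.on("connect", () => console.log(`tcp ${Date.now() - started} ms`));
            socket.on("secureConnect", () => console.log(`tls ${Date.now() - started} ms`));
        });
        req.on("error", (err) => console.error(err));
    }

    timedGet("https://destination-api.example.com/health"); // placeholder URL

If the slow 1-5% of requests consistently spend their extra time before the TLS handshake completes, the delay is in connection setup rather than on the destination API.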
I have deployed a .Net Core API to Azure as an App Service.
I have set the Always on feature to true.
When I log the requests, I see that Azure Always on requests are coming every 5 minutes.
I use the API over HTTPS, but the Always On requests are sent over HTTP. I don't know whether this is related.
For the first request, it is sometimes 10 seconds, but after the first request, it is around 100ms.
What is missing here?
I have logged the durations:
There are quite a few reasons why this might be the case:
You're connecting to resources that take time to connect to the first time
Some information is being cached and needs to be read the first time
There is initialization code present
Lazy instantiation of (static/singleton) instances
... other ...
Add some logging to your application, enable Application Insights if you haven't done so already, and try to find the culprit.
I recently deployed a Node.js backend service to Azure and have the following problem. The service becomes unresponsive after a certain amount of time and only comes back to life when an external request is sent. The problem is that it takes about 3 minutes for the container to start back up and actually answer the request. I'm running Node 14 LTS. I also added a health check yesterday, but Azure simply doesn't bother actually keeping the app alive; here is the metric from Azure:
I verified that Azure is actually trying to reach the correct endpoint, and it does. I also have "Always On" enabled. I also verified that the app itself is not crashing. I log every request, and all of a sudden requests are no longer received, which means the health endpoint doesn't respond either, but that does not result in a container restart. It just waits for an external request to appear and then decides to start everything back up, which takes too long.
I feel like it's some kind of configuration issue, because the app itself is not very complex and I never experienced crashes when doing local development.
The official documentation tells us that on the Free pricing tier you are currently using, Always On does not take effect.
How do I decrease the response time for the first request after idle time?
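Where Always On is not honoured, such as on the Free tier, one workaround is to have something outside the app request a health endpoint on a schedule, so the site never sits idle long enough to be unloaded. A minimal sketch of such a pinger, meant to run from a machine or job runner that is itself always on; the URL and interval are placeholders:

    import https from "https";

    // Hypothetical external keep-alive pinger: requests the health endpoint
    // every 5 minutes so the App Service is never idle long enough to unload.
    const HEALTH_URL = "https://my-backend.azurewebsites.net/health"; // placeholder

    setInterval(() => {
        https.get(HEALTH_URL, (res) => {
            res.resume(); // discard the body
            console.log(`${new Date().toISOString()} ping -> ${res.statusCode}`);
        }).on("error", (err) => console.error("ping failed", err));
    }, 5 * 60 * 1000);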
I have an Azure Functions application which once in a while "freezes" and stops processing messages and timed events.
When this happens I do not see anything in the logs (AppInsight), neither exceptions nor any kind of unfamiliar traces.
The application has following functions:
One processing messages from a Service Bus topic subscription (belonging to another application)
One processing from an internal storage queue
One timer based function triggered every half hour
Four HTTP endpoints
Our production app runs fine. This is due to an internal dashboard (on a big screen in the office), which polls one of the HTTP endpoints every 5 minutes, thereby keeping it alive.
Our test, stage and preproduction apps stall after a while and stop processing messages and timer events.
This question is more or less the same as my previous question, but without the error message that was the focus then. There are far fewer error messages now, as our deployment has been fixed.
A more detailed analysis can be found in the GitHub issue.
On a Consumption plan, all triggers are registered in the host so that they can be handled, which leads to my functions being called at the right time. This part of the host also handles scalability.
I had two bugs:
Wrong deployment. Do zip-based deployment as described in the docs.
Malformed host.json. Comments are not valid JSON; they happen to work in most circumstances in Azure Functions, but not all (a minimal, comment-free example is shown below).
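For illustration, a comment-free host.json can be as small as the following; any // or /* */ comments should be removed before deployment:

    {
        "version": "2.0"
    }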
The sites now work as expected, both in terms of availability and scalability.
Thanks to the people in the Azure Functions team (Ling Toh, Fabio Cavalcante, David Ebbo) for helping me out with this.