I have a Function App, and sometimes I notice that a function (HTTP trigger) takes longer to respond. I believe this is due to the cold start limitation.
Within my Function App, I also have a timer-triggered function that runs every minute.
Since the Function App is being triggered every minute, should the cold start limitation still affect it?
Thanks
When using the Consumption plan, container resources are deallocated after roughly 20 minutes of inactivity, meaning the next invocation results in a cold start (source). So you shouldn't see repeated cold starts if you're triggering the same function app every minute.
That said, cold starts can still occur when auto-scaling creates a new container instance to handle additional load.
If you choose the Premium plan instead, function apps run continuously, or nearly continuously, on perpetually warm instances, avoiding cold starts.
Related
I have a scenario where an Azure Function (HTTP trigger) calls an orchestrator function, which calls multiple entities/activities.
The azure function then waits for the orchestrator to finish all its subtasks.
The following image shows a simplified sequence diagram.
The problem:
Calls to the orchestrator or entity functions often (>20% of the time) take 15-20 s to start executing (marked red in the diagram).
What I tried so far:
Switched the service plan from Consumption (serverless) to Premium with a minimum of 5 instances (more than the number of activities/entities being called)
=> No cold start of new instances should occur
Called the http trigger many times in succession (but one after the other)
=> Some calls work, some don't, seems random
Gave it a few minutes between calls to the http trigger
When I run it locally with Azurite, it works flawlessly every time.
Why do the orchestrator or entity functions often take so much time to start executing when hosted in Azure?
Update:
Changed maxQueuePollingInterval down to 3 seconds. No change in behavior.
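For reference, this is roughly where that setting lives in host.json; with the Durable Functions 2.x extension it sits under storageProvider, while in 1.x it was placed directly under durableTask (the 3-second value below just mirrors the update above):

{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "maxQueuePollingInterval": "00:00:03"
      }
    }
  }
}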
I need to develop a process (e.g. an Azure Function App) that will load a file from FTP once a week, then perform ETL and push updates to another service, which takes a long time (around 100 minutes).
My question is whether a timer-triggered Azure Function App on the Consumption plan will work in this scenario, given that the maximum running time of an Azure Function is 10 minutes.
Update
My theory for using a timer-triggered function on the Consumption plan is that the timer wakes up every 4 minutes during a certain period (e.g. 5am-10am, Monday only), and within the function a status flag tells whether an existing run is in progress. If it is, the function continues the ongoing job; otherwise, it exits.
Is this doable, or are there any flaws?
I'm not sure what your exact scenario is, but I would consider one of the following options:
Option 1
Use durable functions. (Here is a C# example)
It will allow you to start your process and while you wait for different tasks to complete, your function won't actually be running.
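A minimal sketch of what that could look like (the schedule, function names, and activities such as DownloadFromFtp are placeholders for illustration, not your actual logic):

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class WeeklyEtl
{
    // Timer-triggered starter: kicks off the orchestration once a week (Monday, 5am).
    [FunctionName("WeeklyEtl_Start")]
    public static async Task Start(
        [TimerTrigger("0 0 5 * * 1")] TimerInfo timer,
        [DurableClient] IDurableOrchestrationClient starter)
    {
        await starter.StartNewAsync("WeeklyEtl_Orchestrator", null);
    }

    [FunctionName("WeeklyEtl_Orchestrator")]
    public static async Task RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Each activity stays under the per-invocation timeout; the orchestrator
        // itself is not running (or billed) while it waits for an activity to finish.
        string file = await context.CallActivityAsync<string>("DownloadFromFtp", null);
        string transformed = await context.CallActivityAsync<string>("TransformData", file);
        await context.CallActivityAsync("UploadToService", transformed);
    }
}

The long-running work is split into activities, so no single invocation has to run for the full 100 minutes.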
Option 2
In case durable functions don't suit your needs, you can try a combination of a timer-triggered function and ACI (Azure Container Instances) running your logic.
In a nutshell, your flow should look something like this:
Timer function is triggered
Call an API to create the ACI
End of timer function.
The service in the ACI starts its job
After the service is done, it calls an API to remove its own ACI.
But in any case, durable functions usually do the trick.
Let me know if something is unclear.
Good luck. :)
With the Consumption plan, an Azure Function can run for a maximum of 10 minutes; you still need to configure this in host.json.
You can go for the App Service plan, which has no time limit. Again, you need to configure the functionTimeout property in host.json.
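A host.json that raises the timeout to the Consumption-plan maximum of 10 minutes would look roughly like this (on an App Service plan the value can be higher, or -1 for no limit):

{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}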
For more details, see the following tutorial:
https://sps-cloud-architect.blogspot.com/2019/12/azure-data-load-etl-process-using-azure.html
I am writing an Azure Durable Function to process various bulk operations. The code can get 1000 operations in a file, and it breaks those down to call the same activity function 1000 times.
The problem is that this can flood an API that the activity function uses up to the point that our activity function gets a 429 - Too Many Requests from the API. We are thinking of reading the Retry-After header offered and putting the thread to sleep for that period of time.
In this case, we're wondering if Azure will bill us for the seconds we're waiting for. Also, would this time count towards the timeout for the Azure function?
First, use the Durable Function Timers, not Thread.Sleep(). Then, the following applies:
If your function app uses the Consumption plan, you will still be billed for any time and memory consumed by the abandoned activity function. By default, functions running in the Consumption plan have a timeout of five minutes. If this limit is exceeded, the Azure Functions host is recycled to stop all execution and prevent a runaway billing situation. The function timeout is configurable.
https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-timers
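As a rough sketch of the pattern (the "CallApi" activity and the idea of it returning the Retry-After value in seconds are assumptions for illustration, not your actual code):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class BulkProcessing
{
    [FunctionName("ProcessOperation")]
    public static async Task RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var operation = context.GetInput<string>();

        // The hypothetical "CallApi" activity returns the Retry-After value (in seconds)
        // when the API answers 429, or 0 when the call succeeded.
        int retryAfterSeconds = await context.CallActivityAsync<int>("CallApi", operation);

        while (retryAfterSeconds > 0)
        {
            // Durable timer: the orchestrator is unloaded while it waits, so the wait
            // itself is not billed the way a Thread.Sleep() inside an activity would be.
            DateTime resumeAt = context.CurrentUtcDateTime.AddSeconds(retryAfterSeconds);
            await context.CreateTimer(resumeAt, CancellationToken.None);

            retryAfterSeconds = await context.CallActivityAsync<int>("CallApi", operation);
        }
    }
}

Note that the function timeout applies to each activity execution, not to the time spent waiting on the durable timer.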
I created a function app (v1) with one function in it. The function is a Service Bus trigger.
After publishing it to Azure I see this strange behaviour.
Sometimes the function gets triggered and sometimes it doesn't. There are no errors in the logs at that time, but I see messages were put into the dead-letter queue with a "delivery count exceeded" error.
Now, if I try to resend the failed messages, they fail again. But if I go to the portal, refresh the function app and send the messages again, they get picked up and processed as usual.
Here's how I send messages:
using System.Text;
using Microsoft.Azure.ServiceBus;

var queueClient = new QueueClient(connectionString, queueName);
var bytes = Encoding.UTF8.GetBytes(msgStr);   // serialize the payload
var message = new Message(bytes);
await queueClient.SendAsync(message);         // send to the Service Bus queue
I don't see this issue when running the function locally.
New messages (<5) arrive every 15-20 minutes
I'm using Consumption plan
host.json is empty
Please help.
Cold Start
On the Consumption plan your function app goes to sleep after 10-15 minutes of being idle. Any incoming trigger then has to wake the function app up, which may take a couple of seconds; this delay is the cold start.
The problem in my opinion
Looking at the stats you shared, my opinion is that when Service Bus tries to deliver a message, the function app is sleeping and unable to receive it, so the delivery counts as failed. When you restart your function app, the instance is warmed up explicitly and can handle the messages for that window, but it goes back to sleep after the idle period.
Possible solutions
Keep the instance warm
Keep your function app warm all the time. You can simply create another timer-triggered function in the same function app that runs every 5 minutes or so (for example, pinging the function that handles your messages); it will keep the instance running all the time.
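A minimal sketch of such a keep-alive timer (the schedule and names are placeholders; just having a timer-triggered function in the same function app keeps the instance loaded):

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class KeepAlive
{
    // Fires every 5 minutes so the function app instance stays warm.
    [FunctionName("KeepAlive")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        log.LogInformation("Keep-alive ping at {time}", DateTime.UtcNow);
    }
}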
Increase retry count
You should also check the queue's max delivery count and increase it, so that by the time the function app wakes up, Service Bus is still able to deliver the messages.
Use an App Service plan
Another way is to use an App Service plan instead of the Consumption plan. This will keep the function app warm all the time to handle the messages. Considering your workload (<5 messages every 15-20 minutes), it could be a bit of an expensive option, since on the Consumption plan you get 1 million executions per month free of charge.
I am very new to Azure so I am not sure if my question is stated correctly but I will do my best:
I have an app that sends data in the form (1.bin, 2.bin, 3.bin...), always in consecutive order, to a blob input container. When this happens, it triggers an Azure Function via a QueueTrigger, and the output of the function (1output.bin, 2output.bin, 3output.bin...) is stored in a blob output container.
When a run fails, Azure retries it 5 times before giving up. When it succeeds, it runs just once and that's it.
I am not sure what happened last week, but since then, after each successful run the function is idle for about 7 minutes and then starts the process again as if it were the first time. So, for example, the blob container receives 22.bin, the function processes 22.bin and generates 22output.bin; it is supposed to stop after that, but after seven minutes it is processing 22.bin again.
I don't think it is the app, because each time the app sends data, even if it is the same data, it names it with the next number (in my example 23.bin). But that is not what happens: it just does 22.bin again, as if the trigger queue was not cleared after the successful run, and it keeps doing it over and over until I have to stop the function and make it crash in order to stop it.
Any idea why this is happening, and what I can try to correct it, is greatly appreciated. I am just starting to learn about all this stuff.
One thing that could possibly be happening is that the function execution time is exceeding 5 minutes. Since that is the default limit on the Consumption plan, the Functions runtime would terminate the current execution and restart the function host, after which the queue message becomes visible again and is picked up for another attempt.
One way to test this would be to create a function app using a Standard App Service plan instead of the Consumption plan. A function app created with the Standard plan does not have the same execution time limit. You can log the function start time and end time to see if it is taking longer than 5 minutes to finish processing a queue message.
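A minimal sketch of that kind of logging, assuming a queue-triggered function (the queue name and binding are placeholders for your actual setup):

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessBinFile
{
    [FunctionName("ProcessBinFile")]
    public static void Run(
        [QueueTrigger("input-files")] string fileName,   // "input-files" is a placeholder queue name
        ILogger log)
    {
        log.LogInformation("Started processing {file} at {time}", fileName, DateTime.UtcNow);

        // ... existing processing: read the blob and write the corresponding *output.bin ...

        log.LogInformation("Finished processing {file} at {time}", fileName, DateTime.UtcNow);
    }
}

If the two timestamps are consistently more than 5 minutes apart, the timeout is the likely cause of the repeated processing.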