I am very new to Azure, so I am not sure if my question is stated correctly, but I will do my best:
I have an app that sends data files (1.bin, 2.bin, 3.bin, ...), always in consecutive order, to a blob input container. When this happens, it triggers an Azure Function via a QueueTrigger, and the output of the function (1output.bin, 2output.bin, 3output.bin, ...) is stored in a blob output container.
When the function crashes, Azure retries it 5 times before giving up. When it succeeds, it runs just once and that's it.
I am not sure what happened last week, but since then, after each successful run the function sits idle for about 7 minutes and then starts the process again as if it were the first time. For example, the blob container receives 22.bin, the function processes it and generates 22output.bin, and it is supposed to stop after that, but after seven minutes it is processing 22.bin again.
I don't think it is the app, because each time the app sends data, even if it is the same data, it names the file with the next number (in my example, 23.bin). That is not what happens here: it just processes 22.bin again, as if the trigger queue was not cleared after the successful run, and it keeps doing this over and over until I have to stop the function and make it crash in order to stop it.
Any idea why this is happening, and what I can try to correct it, is greatly appreciated. I am just starting to learn about all this.
One thing that could possibly be happening is that the function execution time is exceeding 5 minutes. Since this is a hard limit, the Functions runtime would terminate the current execution and restart the function host. A terminated execution never completes its queue message, so the message becomes visible again and is picked up for another run, which would explain the repeated processing.
One way to test this would be to create a Function app using a Standard App Service plan instead of the Consumption plan. A Function app created with the Standard plan does not have an execution time limit. You can log the function start and end times to see if processing a queue message takes longer than 5 minutes.
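A minimal sketch of that logging, assuming a C# queue-triggered function; the queue name is a placeholder for yours:

```csharp
using System;
using System.Diagnostics;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessBlobMessage
{
    [FunctionName("ProcessBlobMessage")]
    public static void Run(
        [QueueTrigger("input-queue")] string message, ILogger log)
    {
        var sw = Stopwatch.StartNew();
        log.LogInformation("Started {Message} at {Time:o}", message, DateTime.UtcNow);

        // ... the existing processing that writes Noutput.bin goes here ...

        sw.Stop();
        log.LogInformation("Finished {Message} after {Elapsed}", message, sw.Elapsed);
    }
}
```

If the logged elapsed time creeps past the timeout, that is your culprit.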
Related
Because an HTTP-triggered Azure Function has a strict time limit of 230 seconds, I created an HTTP-triggered durable Azure Function. I find that when I trigger the durable function multiple times and the last run has not completed, the current run continues the last run until it is finished. This is a little confusing to me, because I only want each run to do the task of the current run, not replay the last unfinished run. So my questions are:
1. Is it by design for a durable function to make sure each run is completed (succeeded or failed)?
2. Can a durable function focus only on the current run, just like a normal HTTP-triggered Azure Function?
3. If the answer to 2) is no, is there any way to mitigate the time limit issue for a normal HTTP-triggered Azure Function?
Thanks a lot!
The durable function runs until it gets a result, that is, a successfully completed or failed message.
According to the Microsoft documentation:
Azure Functions times out after 230 seconds regardless of the functionTimeout setting you've configured in the settings.
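If the behavior you are seeing comes from reusing the same orchestration instance ID across triggers, then letting the runtime generate a fresh instance ID per request makes each run independent of the others. Below is a minimal sketch, assuming Durable Functions 2.x in C#; the orchestrator name "RunEtl" is a placeholder:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class HttpStart
{
    [FunctionName("HttpStart")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
        [DurableClient] IDurableOrchestrationClient starter)
    {
        // Passing null for the instance ID makes the runtime generate a new
        // GUID per request, so concurrent or repeated calls never share state.
        string instanceId = await starter.StartNewAsync("RunEtl", null);

        // Returns 202 immediately with status-query URLs; the client polls
        // instead of holding the HTTP connection open past the 230 s limit.
        return starter.CreateCheckStatusResponse(req, instanceId);
    }
}
```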
I have an Azure Timer function that executes every 15 minutes. The function compiles data from 3 data sources (a WCF service, a REST endpoint, and Table Storage) and inserts the data into CosmosDb. Where I am running into an issue is that after 7 or 8 executions of the function I get the "Host thresholds exceeded: [Connections]" error. Here is what is really strange: the function takes about 2 minutes to execute, yet the error doesn't show up in the logs until well after the function is done executing.
I have gone through all the connection limits documentation and understand it. Where I am a bit confused is when the limits matter. A single execution of my function does not come anywhere close to hitting the 600 active connection limit. Do the connection limits apply to an individual execution of the timer function, or are the limits cumulative over multiple executions?
Here is the real kicker: this function was running fine for two weeks until 07/22/2012. Nothing in the code has changed, and it has not been redeployed.
Runtime is 3.1.3
Is your function on a Consumption plan or an App Service plan?
From your description it just sounds like your code may be leaking connections and building up a large number of them over time.
Maybe this blog post can help in ensuring the right usage patterns? https://4lowtherabbit.github.io/blogs/2019/10/SNAT/
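If connections are indeed leaking, the usual fix is to create each client once and reuse it across executions, rather than newing it up inside the function body. A minimal sketch, assuming C# and the Microsoft.Azure.Cosmos SDK; the endpoint and setting names are placeholders:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class CompileData
{
    // Shared across executions on the same host; both clients are thread-safe.
    // Creating these per execution opens new sockets every run, and the
    // half-closed sockets pile up until the sandbox connection limit trips.
    private static readonly HttpClient Http = new HttpClient();
    private static readonly CosmosClient Cosmos =
        new CosmosClient(Environment.GetEnvironmentVariable("CosmosConnection"));

    [FunctionName("CompileData")]
    public static async Task Run(
        [TimerTrigger("0 */15 * * * *")] TimerInfo timer, ILogger log)
    {
        // Hypothetical endpoint standing in for the REST data source.
        var payload = await Http.GetStringAsync("https://example.com/data");
        log.LogInformation("Fetched {Length} bytes", payload.Length);

        // ... WCF and Table Storage reads plus the CosmosDb upsert (via the
        // shared Cosmos client above) go here ...
    }
}
```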
I need to develop a process (e.g. an Azure Function app) that will load a file from FTP once every week, then perform ETL and update another service, which takes a long time (about 100 minutes).
My question is whether a Timer-triggered Azure Function app on the Consumption plan will work in this scenario, given that the maximum running time of an Azure Function app is 10 minutes.
Update
My theory of using a Timer-triggered function with the Consumption plan is this: the timer is set to wake up every 4 minutes during a certain period (e.g. 5am - 10am, Monday only), and within the function a status flag tells whether an existing processing run is in progress. If it is, the function continues the ongoing job; otherwise, it exits.
Is this doable, or is there a flaw in it?
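Roughly, what I have in mind is something like this (a rough, hypothetical sketch: the blob names and chunk count are made up, and it assumes the ETL can be checkpointed into resumable chunks):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ResumableEtl
{
    private const int TotalChunks = 25; // ~4 minutes each -> ~100 minutes total

    [FunctionName("ResumableEtl")]
    public static async Task Run(
        [TimerTrigger("0 */4 5-9 * * MON")] TimerInfo timer, ILogger log)
    {
        var blob = new BlobContainerClient(
                Environment.GetEnvironmentVariable("AzureWebJobsStorage"), "etl-state")
            .GetBlobClient("checkpoint.txt");

        int checkpoint = 0;
        if (await blob.ExistsAsync())
            checkpoint = int.Parse((await blob.DownloadContentAsync()).Value.Content.ToString());

        // Finished: nothing in progress (the checkpoint would be reset when a
        // new weekly file arrives).
        if (checkpoint >= TotalChunks) return;

        // Do one resumable chunk of work, persist the new checkpoint, and exit
        // well before the timeout; the next timer tick picks up from there.
        await ProcessChunkAsync(checkpoint, log); // hypothetical
        await blob.UploadAsync(
            BinaryData.FromString((checkpoint + 1).ToString()), overwrite: true);
    }

    private static Task ProcessChunkAsync(int chunk, ILogger log) => Task.CompletedTask;
}
```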
I'm not sure what your exact scenario is, but I would consider one of the following options:
Option 1
Use durable functions. (Here is a C# example)
It will allow you to start your process, and while you wait for different tasks to complete, your function won't actually be running.
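A minimal sketch of that option, assuming Durable Functions in C#; the activity names stand in for your FTP download, transform, and upload steps:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class WeeklyEtl
{
    // A timer starts one orchestration per week.
    [FunctionName("WeeklyEtl_Start")]
    public static Task Start(
        [TimerTrigger("0 0 5 * * MON")] TimerInfo timer,
        [DurableClient] IDurableOrchestrationClient starter)
        => starter.StartNewAsync("WeeklyEtl", null);

    // The orchestrator is dehydrated between awaits, so only the individual
    // activities consume execution time; each must fit the plan's timeout.
    [FunctionName("WeeklyEtl")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        string localFile = await context.CallActivityAsync<string>("DownloadFromFtp", null);
        string transformed = await context.CallActivityAsync<string>("TransformData", localFile);
        await context.CallActivityAsync("UploadToService", transformed);
    }
}
```

If a single step cannot fit within the timeout, split it into smaller activities.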
Option 2
In case durable functions don't suit your needs, you can try a combination of a timer-triggered function and ACI (Azure Container Instances) with your logic.
In a nutshell, your flow should look something like this (a rough sketch in code follows the list):
Timer function is triggered
Call an API to create the ACI
End of timer function.
The service in the ACI starts its job
After the service is done, it calls an API to remove its own ACI.
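Here is a rough sketch of the "create the ACI" step, assuming the function calls the ARM REST API directly with the function app's managed identity; the subscription, resource group, region, and image are placeholders:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class StartEtlContainer
{
    private static readonly HttpClient Http = new HttpClient();

    [FunctionName("StartEtlContainer")]
    public static async Task Run(
        [TimerTrigger("0 0 5 * * MON")] TimerInfo timer, ILogger log)
    {
        const string url =
            "https://management.azure.com/subscriptions/<subscription-id>" +
            "/resourceGroups/<resource-group>/providers/Microsoft.ContainerInstance" +
            "/containerGroups/etl-job?api-version=2021-09-01";

        // Acquire an ARM token with the function app's managed identity.
        var credential = new DefaultAzureCredential();
        AccessToken token = await credential.GetTokenAsync(
            new TokenRequestContext(new[] { "https://management.azure.com/.default" }));

        // Container group definition: one container, never restarted, so it
        // runs the ETL once and stops.
        var body = @"{
          ""location"": ""westeurope"",
          ""properties"": {
            ""osType"": ""Linux"",
            ""restartPolicy"": ""Never"",
            ""containers"": [{
              ""name"": ""etl"",
              ""properties"": {
                ""image"": ""myregistry.azurecr.io/etl-job:latest"",
                ""resources"": { ""requests"": { ""cpu"": 1, ""memoryInGB"": 1.5 } }
              }
            }]
          }
        }";

        var request = new HttpRequestMessage(HttpMethod.Put, url)
        {
            Content = new StringContent(body, Encoding.UTF8, "application/json")
        };
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token.Token);

        var response = await Http.SendAsync(request);
        log.LogInformation("ACI create returned {Status}", (int)response.StatusCode);
        response.EnsureSuccessStatusCode();
    }
}
```

The last step (the container removing its own group) would be a similar DELETE call against the same URL.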
But anyway, durable functions usually do the trick.
Let me know if something is unclear.
Good luck. :)
With the Consumption plan, an Azure Function can run for a maximum of 10 minutes, but you still need to configure this in host.json.
You can go for the App Service plan, which has no time limit. Again, you need to configure the functionTimeout property in host.json.
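For reference, a minimal host.json raising the timeout to the Consumption-plan maximum (on an App Service plan the value can be raised further):

```json
{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}
```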
For more, see the following tutorial:
https://sps-cloud-architect.blogspot.com/2019/12/azure-data-load-etl-process-using-azure.html
I have data going from my system to an Azure IoT Hub. I timestamp the data packet when I send it. Then I have an Azure Function that is triggered by the IoT Hub. In the Azure Function I get the message, read the timestamp, and record how long it took the data to reach the function. I also have another program running on my system that listens for data on the IoT Hub and records that time too.
Most of the time, the time in the Azure Function is in milliseconds, but sometimes I see a large time for the Azure Function to be triggered (I conclude it is the trigger that is slow because the program that reads from the IoT Hub shows the data reached the hub quickly, with no delay).
Would anybody know the reasons why an Azure Function might trigger late?
Is this the same question that was asked here? https://github.com/Azure/Azure-Functions/issues/711
I'll copy/paste my answer for others to see:
Based on what I see in the logs and your description, I think the latency can be explained as being caused by a cold start of your function app process. If a function app goes idle for approximately 20 minutes, it is unloaded from memory, and any subsequent trigger will initiate a cold start.
Basically, the following sequence of events takes place:
The function app goes idle and is unloaded (this happened about 5 minutes before the trigger you mentioned).
You send the new event.
The event eventually gets noticed by our scale controller, which polls for events on a 10-second interval.
Our scale controller initiates a cold start of your function app. This can add a few more seconds depending on the content of your function app (it was about 6 seconds in this case).
So unfortunately this is a known behavior with the consumption plan. You can read up on this issue here: https://blogs.msdn.microsoft.com/appserviceteam/2018/02/07/understanding-serverless-cold-start/. The blog post also discusses some ways you can work around this if it's problematic for your scenario.
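One common mitigation, sketched here as an assumption rather than the blog's exact recommendation, is to keep the app from going idle with a cheap timer trigger that fires more often than the roughly 20-minute idle window:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class KeepWarm
{
    [FunctionName("KeepWarm")]
    public static void Run(
        [TimerTrigger("0 */10 * * * *")] TimerInfo timer, ILogger log)
    {
        // Doing nothing is enough; the trigger itself keeps the host loaded,
        // at the cost of a small amount of billed execution time.
        log.LogDebug("Keep-warm ping at {Now:o}", DateTime.UtcNow);
    }
}
```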
I have an SFTP trigger that sends the contents to an Azure Function. When the Logic App invokes the Function, in the designer view I observe that it fails after 9 minutes. When I look at the Function monitor, I observe that the function is still running. The function is C#. When the function completes, it logs the difference in DateTime between when it starts and ends. The time printed is about 300 seconds, or five minutes. I know this is the limit for the time a function can run.
This function runs in only 30 seconds on a VM on my five-year-old computer. Why is performance in Azure Functions so poor? Is there anything that can be done to make it perform better?
What are you trying to do in the function?
What is the processing time of the function if you test it in the Function app itself rather than from the Logic App?