Delay in Azure Function triggering off IoT Hub - azure

I have data going from my system to an Azure IoT Hub, and I timestamp each data packet when I send it. An Azure Function is triggered by the IoT Hub; in the function I read the message, extract the timestamp, and record how long the data took to reach the function. I also have another program running on my system that listens for data on the IoT Hub and records that time as well.
Most of the time the latency measured in the Azure Function is in the milliseconds, but sometimes I see a large delay before the function is triggered (I conclude it is the trigger that is slow because the program reading from the IoT Hub shows the data reached the hub quickly, with no delay).
Would anybody know the reasons why an Azure Function might trigger late?

Is this the same question that was asked here? https://github.com/Azure/Azure-Functions/issues/711
I'll copy/paste my answer for others to see:
Based on what I see in the logs and your description, I think the latency can be explained as being caused by a cold-start of your function app process. If a function app goes idle for approximately 20 minutes, then it is unloaded from memory and any subsequent trigger will initiate a cold start.
Basically, the following sequence of events takes place:
1. The function app goes idle and is unloaded (this happened about 5 minutes before the trigger you mentioned).
2. You send the new event.
3. The event is eventually noticed by our scale controller, which polls for events on a 10-second interval.
4. The scale controller initiates a cold start of your function app. This can add a few more seconds depending on the contents of your function app (it was about 6 seconds in this case).
So unfortunately this is a known behavior with the consumption plan. You can read up on this issue here: https://blogs.msdn.microsoft.com/appserviceteam/2018/02/07/understanding-serverless-cold-start/. The blog post also discusses some ways you can work around this if it's problematic for your scenario.
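One way to see whether your outliers line up with cold starts is to measure end-to-end latency yourself, as the question describes. Below is a minimal sketch of that measurement in Python; the function names are hypothetical and `measure_latency` stands in for whatever runs inside the triggered function body:

```python
import json
from datetime import datetime, timezone

def make_packet(payload: dict) -> str:
    """Attach a UTC send timestamp to the outgoing message body."""
    packet = dict(payload)
    packet["sentUtc"] = datetime.now(timezone.utc).isoformat()
    return json.dumps(packet)

def measure_latency(message_body: str) -> float:
    """Inside the triggered function, compare the embedded send
    timestamp to the current time. Returns end-to-end latency in
    seconds; a cold start typically shows up as an outlier of
    several seconds on an otherwise millisecond-level series."""
    packet = json.loads(message_body)
    sent = datetime.fromisoformat(packet["sentUtc"])
    return (datetime.now(timezone.utc) - sent).total_seconds()
```

Logging this value on every invocation makes it easy to correlate latency spikes with the roughly 20-minute idle window mentioned above.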

Related

Azure Timer Function connection limits

I have an Azure Timer function that executes every 15 minutes. The function compiles data from 3 data sources (a WCF service, a REST endpoint, and Table Storage) and inserts the data into Cosmos DB. Where I am running into an issue is that after 7 or 8 executions of the function I get the "Host thresholds exceeded: [Connections]" error. Here is what is really strange: the function takes about 2 minutes to execute, yet the error doesn't show in the logs until well after the function has finished executing.
I have gone through all the connection-limits documentation and understand it. Where I am a bit confused is when the limits apply. A single execution of my function does not come anywhere close to the 600 active-connection limit. Do the connection limits apply to an individual execution of the timer function, or are they cumulative across multiple executions?
Here is the real kicker: this function ran fine for two weeks until 07/22/2012. Nothing in the code has changed and it has not been redeployed.
Runtime is 3.1.3
Is your function on a Consumption plan or an App Service plan?
From your description it sounds like your code may be leaking connections and building up a large number of them over time.
Maybe this blog post can help in ensuring the right usage patterns? https://4lowtherabbit.github.io/blogs/2019/10/SNAT/
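The usual fix that post describes is to create outbound clients once and reuse them across invocations, rather than constructing a new one per run. Here is a small sketch of the pattern; `FakeHttpClient` is a hypothetical stand-in for a real client such as a `requests.Session` or .NET `HttpClient`, used so the instance count is observable:

```python
class FakeHttpClient:
    """Stand-in for an HTTP client; counts how many are created."""
    instances = 0

    def __init__(self):
        FakeHttpClient.instances += 1

    def get(self, url: str) -> str:
        return f"GET {url}"

# Module-level singleton: created once when the host loads the
# function, then shared by every invocation on that instance.
_client = FakeHttpClient()

def run_timer_function(url: str) -> str:
    # Anti-pattern would be FakeHttpClient() here, which opens a
    # fresh connection (and SNAT port) on every run and can
    # accumulate toward the connection limit over many executions.
    return _client.get(url)
```

With this shape, repeated executions reuse one client, which is exactly the behavior that avoids exhausting the cumulative connection budget across runs.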

Why IotHub events are delayed when stored in Time Series Insights?

I have a Time Series Insights Environment with an IoT Hub data source configured.
What I noticed is that there is a consistent 20-30 second delay between sending an event to IoT Hub and seeing it stored in TSI.
After I found this, I attached a Function trigger directly to the IoT Hub. Events were received immediately by the trigger, but TSI returned them 20-30 seconds later.
So, I have two questions:
Where does that delay come from?
Is there anything I can do about minimizing the delay?
Thanks!
There is an expected, measurable delay of up to 1 minute before you will see an event in TSI, and you cannot dial that up or down; it's just how the service works.
Just in case you haven't already, also make sure you've configured your SKU and capacity to support your use cases.

Azure Function - Event Hub Trigger stopped

I've got an Azure Function app in production with an Event Hub trigger; it's low throughput, with the function typically triggered only once daily. It's running on an S1 plan at the moment and has a few other functions, such as timer-triggered and HTTP-triggered ones.
It's been running fine but today it stopped being triggered by new messages until I restarted the app. All other functions were working just fine and responding to their associated triggers.
I've looked through App Insights and there are no reported errors or issues; it's just not doing anything.
Has anyone else had this issue or know of what may be causing it?
First of all, does your App Service have Always On enabled?
Second, have you tried testing your trigger locally, so you can be sure there are no issues with your Event Hub?
Personally, I have faced such issues when the Event Processor Host underlying the EventHubTrigger lost its lease because an additional processor was introduced. It is also possible that, since the function sees low throughput, it lost a lease and for some reason was not able to renew it:
As an instance of EventProcessorHost starts, it will acquire as many leases as possible and begin reading events. As the leases draw near expiration, EventProcessorHost will attempt to renew them by placing a reservation. If the lease is available for renewal the processor continues reading, but if it is not, the reader is closed and CloseAsync is called - this is a good time to perform any final cleanup for that partition.
https://blogs.msdn.microsoft.com/servicebus/2015/01/21/event-processor-host-best-practices-part-2/
Nonetheless, it is worth contacting support to make sure there were no other issues.
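The lease lifecycle quoted above can be sketched as a toy state machine; this is an illustration of the behavior described in the blog post, not the real Event Processor Host API:

```python
import time

class Lease:
    """Toy model of a partition lease: acquire, renew near
    expiration, or close the reader when renewal fails."""

    def __init__(self, duration_s: float):
        self.duration_s = duration_s
        self.expires_at = time.monotonic() + duration_s
        self.reader_open = True  # reading events while we hold the lease

    def expired(self) -> bool:
        return time.monotonic() >= self.expires_at

    def try_renew(self, still_available: bool) -> bool:
        """Renewal attempt as expiration approaches. If another
        processor has taken the lease (still_available=False), the
        reader is closed - the CloseAsync analogue - and this host
        stops reading that partition."""
        if still_available:
            self.expires_at = time.monotonic() + self.duration_s
            return True
        self.reader_open = False
        return False
```

In the low-throughput scenario from the question, the failure mode would correspond to `try_renew` returning `False` and the reader never reopening until the app is restarted.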

Azure Function app periodically not firing on trigger or timer

I have an Azure Function app with 4 functions
one triggered on a timer every 24 hours
one triggered on events from IoT Hub
two others triggered on events from Service Bus as a result of the previous function
All functions work as expected when first deployed, but after a period of time they stop running and the app appears to be scaled down with no servers online. At this point the functions are never triggered again unless I either restart the app or drill into a function and view its details (which, supposedly, forces the function to start up).
I have the exact same code deployed to a different environment and it runs perfectly and has never encountered this issue. I've checked all the settings and configuration and can't see any material differences between the two.
This is really frustrating and is becoming a big issue. Any help would be much appreciated.
Function App is hosted in Australia Southeast.
This is the last execution (as of now)
10:45 PM UTC - Function started (Id=4d29555b-d3af-43d7-95e9-1a4a2d43dc46)
The event-triggered function should run every few minutes, as the IoT Hub it triggers from has a steady stream of events coming in. When I prod the function (or restart it) and it comes to life, it quickly churns through the backlog of messages queued in the IoT Hub.
I see the problem: you have comments in your host.json, which make it invalid JSON and throw off the parser at the scale-controller level.
Admittedly, the error handling is quite poor here. But anyway, remove the commented out logger, and it should all work.
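The failure mode is easy to reproduce with any strict JSON parser: `//` comments are not part of the JSON grammar, so a parser that does not special-case them rejects the whole file. A minimal sketch (the commented-out `logger` section is a hypothetical example of the kind of line to remove):

```python
import json

# host.json with a commented-out logger section, as described above.
host_json_with_comment = """{
  "version": "2.0"
  // "logger": { "categoryFilter": { "defaultLevel": "Information" } }
}"""

try:
    json.loads(host_json_with_comment)
    valid = True
except json.JSONDecodeError:
    valid = False
# valid is False: a strict JSON parser rejects // comments,
# which is what trips up the scale controller's config parsing.
```

Deleting the commented line leaves plain JSON that parses cleanly, which matches the fix suggested in the answer.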

Queue trigger in Azure apparently not clearing up after successful function runs

I am very new to Azure so I am not sure if my question is stated correctly but I will do my best:
I have an app that sends data in the form (1.bin, 2.bin, 3.bin, ...), always in consecutive order, to a blob input container. When this happens it triggers an Azure Function via a QueueTrigger, and the output of the function (1output.bin, 2output.bin, 3output.bin, ...) is stored in a blob output container.
When the function crashes, the runtime retries 5 times before giving up. When it succeeds it runs just once, and that's it.
I am not sure what happened last week, but since then, after each successful run, the function sits idle for about 7 minutes and then starts the process again as if it were the first time. So, for example, the blob container receives 22.bin and the function processes it and generates 22output.bin; it is supposed to stop after that, but after seven minutes it is processing 22.bin again.
I don't think it is the app, because each time the app sends data, even the same data, it names it with the next number (23.bin in my example). But that is not what happens: the function just processes 22.bin again, as if the queue message were not cleared after the successful run, and it keeps doing so over and over until I have to stop the function and make it crash in order to stop it.
Any idea why this is happening, and what I can try in order to correct it, is greatly appreciated. I am just starting to learn about all this stuff.
One thing that could possibly be happening is that the function execution time is exceeding 5 minutes. Since this is a hard limit on the Consumption plan, the function runtime would terminate the current execution and restart the function host.
One way to test this would be to create a Function app using a Standard App Service plan instead of the Consumption plan; a Function app created with a Standard plan does not have an execution time limit. You can log the function's start and end times to see whether it is taking longer than 5 minutes to finish processing a queue message.
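The start/end logging suggested above can be done with a small decorator; this is a hypothetical helper, not part of the Functions SDK, and the 5-minute figure is the Consumption-plan default `functionTimeout`:

```python
import functools
import logging
import time

TIMEOUT_S = 5 * 60  # Consumption-plan default functionTimeout

def log_duration(func):
    """Log when a function starts and ends, and warn when a run
    exceeds the assumed Consumption-plan timeout."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        logging.info("start %s", func.__name__)
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.monotonic() - start
            logging.info("end %s after %.1fs", func.__name__, elapsed)
            if elapsed > TIMEOUT_S:
                logging.warning("%s ran %.1fs, past the %ds limit",
                                func.__name__, elapsed, TIMEOUT_S)
    return wrapper
```

Wrapping the queue-triggered handler with `@log_duration` makes it immediately visible in the logs whether individual messages are approaching the limit, which would explain the terminate-and-reprocess loop the question describes.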