I have an Azure Function that is triggered by Event Grid when a blob is created.
Based on the size of the blob (a PDF file), my Azure Function can take anywhere between 2 seconds and 600 seconds (10 minutes) to execute.
As per the Azure documentation, Event Grid retries delivery of the event if it does not receive a response from the endpoint (in this case, my Azure Function) within 30 seconds, and then backs off on the following schedule:
10 seconds
30 seconds
1 minute
5 minutes
10 minutes
30 minutes
1 hour
3 hours
6 hours
Every 12 hours up to 24 hours
I don't see any issues for the smaller files that I upload to storage: my Azure Function executes, Event Grid presumably receives the response within 30 seconds, and hence my function is executed only once.
Issue:
For larger files, my Azure Function is triggered by Event Grid (as expected) and execution starts. However, due to the large file size, the function runs for well over 30 seconds. Since Event Grid does not receive a success response from the endpoint (the function is still executing), it delivers the event again, and my function starts another instance for the same file. In this way the function executes several times for the same file.
How can I handle this situation? Can I change the retry mechanism of Event Grid for only this function, or is there a better way to handle this problem?
Any help would be greatly appreciated.
Azure expects a timely response (< 30 s) from Azure Function or webhook event handlers, and there seems to be no setting to increase this time limit. On receiving an event, instead of doing the actual long-running work, push a message to an Azure queue and let a queue-triggered function pick up messages from that queue and do the work. This lets you just enqueue the job and quickly return a response to Azure Event Grid within 30 seconds, and it also scales up your event handling: even if more blobs are uploaded in a burst, your application can handle it.
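As a rough sketch (assuming the in-process C# model with the Event Grid and Storage queue bindings; the queue name "pdf-jobs" and the function names are made up), the pair of functions could look something like this:

    using System.Threading.Tasks;
    using Microsoft.Azure.EventGrid.Models;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.EventGrid;
    using Microsoft.Extensions.Logging;

    public static class PdfFunctions
    {
        // Lightweight handler: respond to Event Grid quickly by only enqueuing the work.
        [FunctionName("OnBlobCreated")]
        public static void OnBlobCreated(
            [EventGridTrigger] EventGridEvent eventGridEvent,
            [Queue("pdf-jobs")] out string queueMessage,
            ILogger log)
        {
            // The blob details are in the event payload; just hand them off to the queue.
            queueMessage = eventGridEvent.Data.ToString();
            log.LogInformation("Enqueued work for {subject}", eventGridEvent.Subject);
        }

        // Worker: the long-running PDF processing happens here, decoupled from Event Grid.
        [FunctionName("ProcessPdf")]
        public static async Task ProcessPdf(
            [QueueTrigger("pdf-jobs")] string queueMessage,
            ILogger log)
        {
            log.LogInformation("Processing {msg}", queueMessage);
            // ... download the PDF referenced by the message and do the 2 s - 10 min work ...
            await Task.CompletedTask;
        }
    }

The Event Grid handler returns almost immediately because all it does is write a queue message, while the queue-triggered worker can run for as long as the hosting plan allows.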
I need to process a task queue and I wonder if Azure Queue will work for my case. Task execution involves querying a rate-limited API, and for that reason I want polling to happen every X seconds (it can be slower, but must not be faster than that). The Azure Function app would consume queue messages with a concurrency of 1.
In the host.json settings, maxPollingInterval can be configured. For the minimum interval, the documentation says:
Minimum is 00:00:00.100 (100 ms) and increments up to 00:01:00 (1 min)
Is there any way to force the required delay between polls?
The Azure queue may not meet your need. Here is the polling algorithm:
When a message is found, the runtime waits two seconds and then checks for another message.
When no message is found, it waits about four seconds before trying again.
After subsequent failed attempts to get a queue message, the wait time continues to increase until it reaches the maximum wait time (maxPollingInterval), which defaults to one minute.
So it does not poll the queue every X seconds.
You may consider using a timer-triggered function, which can be scheduled to run every X seconds; inside the function, you can write your logic to call the API.
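A minimal sketch of such a timer-triggered function in C# (the 5-second NCRONTAB schedule and the commented-out API call are placeholders for your own values):

    using System;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;

    public static class PollingFunction
    {
        // NCRONTAB "*/5 * * * * *" means every 5 seconds; adjust to your rate limit.
        [FunctionName("PollEveryFiveSeconds")]
        public static async Task Run(
            [TimerTrigger("*/5 * * * * *")] TimerInfo timer,
            ILogger log)
        {
            log.LogInformation("Timer fired at {time}", DateTime.UtcNow);

            // Claim one piece of work yourself (queue message, DB row, etc.) and call
            // the rate-limited API at most once per tick.
            // await CallRateLimitedApiAsync();   // placeholder for your own call
            await Task.CompletedTask;
        }
    }

By default a timer-triggered function does not start a new occurrence while the previous one is still executing, which also keeps the effective concurrency at 1.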
So suppose you have an application that lets users request a job. For example (hypothetical): a user uploads a video. An entry is made in an RDBMS with the URL of the video on blob storage, and the status is set to "Pending".
There is a recurring, timer-triggered function app that executes every 10 seconds or so, gets 10 pending jobs from the RDBMS, and performs some compression, etc.
The problem here is that as long as the number of requests stays at 10-30 videos per 10 seconds, we should be fine. But if the number of requests increases all of a sudden, say to 200 requests per 10 seconds, there will be a lot of jobs pending and the user would have to wait 10 times longer than usual to see the status change. How do you scale out the function app automatically in such a scenario? Does it have to be manual?
There's an easier way to get fan out and parallel processing through multiple concurrently running Azure Functions.
Add an Azure Service Bus Queue to your solution.
For each video that needs to be processed, enqueue a service bus message with the appropriate data you'll need to retrieve and process the video (like the BlobId).
Have your Azure Function triggered by a ServiceBusTrigger.
Azure will spin up additional instances of your Azure Function as the queue depth increases. It'll also scale in idle instances after there's no more data to process.
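A minimal C# sketch of the worker side, assuming the Service Bus trigger binding; the queue name "video-jobs" and the compression helper are made-up placeholders:

    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;

    public static class VideoWorker
    {
        [FunctionName("CompressVideo")]
        public static async Task Run(
            // The message body could simply be the BlobId your web app enqueued.
            [ServiceBusTrigger("video-jobs")] string message,
            ILogger log)
        {
            log.LogInformation("Compressing video for message {msg}", message);

            // Fetch the blob by id, compress it, then set the RDBMS status to "Done".
            // await CompressVideoAsync(message);   // placeholder for the real work
            await Task.CompletedTask;
        }
    }

With this shape, the web request only inserts the RDBMS row and enqueues a message, and the number of worker instances grows and shrinks with the queue depth instead of being tied to a fixed 10-second batch.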
I am using Azure Functions V1 with C#. I have a timer-triggered Azure Function that checks for some data in my database every second. If the data is found, I want to perform some operation on it. This operation can take 30 seconds to 5 minutes, depending on the operations happening on it.
When my timer-triggered function gets data and starts performing the operation, the function is not executed again until the first operation is completed. So even though the timer-triggered function is scheduled to execute every second, it does not get executed for the next 30 seconds if the operation in the previous iteration took 30 seconds. How can I solve this?
Can I call some other Azure Function from the current timer-triggered function that can take care of that 30-second operation, so that my timer-triggered function runs smoothly every second?
How can I call another Azure Function (custom function) from the current timer-triggered function?
Thanks,
You may need to consider Logic Apps for this scenario. Logic Apps are the serverless workflow offering from Azure. Use a recurrence trigger to schedule the job (an HTTP call), and it will trigger the Azure Function regardless.
https://learn.microsoft.com/en-us/azure/connectors/connectors-native-recurrence
If you want to trigger any external function, you may use HttpClient. See for example:
Azure Functions call http post inside function
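A rough sketch of that approach in C# (the URL, function key, and payload are placeholders you would replace with your own, typically read from app settings):

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    public static class FunctionCaller
    {
        // Reuse a single HttpClient across invocations instead of creating one per call.
        private static readonly HttpClient httpClient = new HttpClient();

        public static async Task TriggerWorkerAsync(string payload)
        {
            var content = new StringContent(payload, Encoding.UTF8, "application/json");

            // POST to the HTTP-triggered worker function; it performs the 30 s - 5 min job,
            // so the timer-triggered function itself can return quickly.
            var response = await httpClient.PostAsync(
                "https://<your-function-app>.azurewebsites.net/api/LongRunningWorker?code=<function-key>",
                content);
            response.EnsureSuccessStatusCode();
        }
    }

Bear in mind that awaiting the worker's HTTP response still ties up the caller for the duration of the worker's run; putting a queue between the two functions (as in the earlier answers) avoids that.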
I have data going from my system to an Azure IoT Hub. I timestamp the data packet when I send it. Then I have an Azure Function that is triggered by the IoT Hub. In the Azure Function I get the message, read the timestamp, and record how long it took the data to reach the function. I also have another program running on my system that listens for data on the IoT Hub and records that time too.
Most of the time, the latency measured in the Azure Function is in milliseconds, but sometimes I see a large delay before the Azure Function is triggered (I conclude it is this because the program that reads from the IoT Hub shows that the data reached the IoT Hub quickly and there was no delay).
Would anybody know the reasons why the Azure Function might be triggering late?
Is this the same question that was asked here? https://github.com/Azure/Azure-Functions/issues/711
I'll copy/paste my answer for others to see:
Based on what I see in the logs and your description, I think the latency can be explained as being caused by a cold-start of your function app process. If a function app goes idle for approximately 20 minutes, then it is unloaded from memory and any subsequent trigger will initiate a cold start.
Basically, the following sequence of events takes place:
The function app goes idle and is unloaded (this happened about 5 minutes before the trigger you mentioned).
You send the new event.
The event eventually gets noticed by our scale controller, which polls for events on a 10 second interval.
Our scale controller initiates a cold-start of your function app. This can add a few more seconds depending on the content of your function app (it was about 6 seconds in this case).
So unfortunately this is a known behavior with the consumption plan. You can read up on this issue here: https://blogs.msdn.microsoft.com/appserviceteam/2018/02/07/understanding-serverless-cold-start/. The blog post also discusses some ways you can work around this if it's problematic for your scenario.
I am very new to Azure so I am not sure if my question is stated correctly but I will do my best:
I have an app that sends data in the form (1.bin, 2.bin, 3.bin, ...), always in consecutive order, to a blob input container. When this happens it triggers an Azure Function via a QueueTrigger, and the output of the function (1output.bin, 2output.bin, 3output.bin, ...) is stored in a blob output container.
When Azure crashes, the program tries 5 times before giving up. When Azure succeeds, it runs just once and that's it.
I am not sure what happened last week, but since then, after each successful run, Functions sits idle for about 7 minutes and then starts the process again as if it were the first time. So, for example, the blob receives 22.bin, Functions processes 22.bin and generates 22output.bin; it is supposed to stop after that, but after seven minutes it is processing 22.bin again.
I don't think it is the app, because each time the app sends data, even if it is the same data, it names it with the next number (in my example, 23.bin). But that is not what happens; it just does 22.bin again, as if the trigger queue was not cleared after the successful run, and it keeps doing it over and over until I have to stop Functions and make it crash in order to stop it.
Any idea why this is happening, and what I can try in order to correct it, would be greatly appreciated. I am just starting to learn about all this stuff.
One thing that could possibly be happening is that the function execution time is exceeding 5 minutes. Since this is a hard limit on the Consumption plan, the Functions runtime would terminate the current execution and restart the function host; the queue message would then become visible again and be reprocessed.
One way to test this would be to create a Function App using a Standard App Service plan instead of a Consumption plan. A Function App created with a Standard plan does not have an execution time limit. You can log the function start time and end time to see if it is taking longer than 5 minutes to finish processing a queue message.
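A rough sketch of that kind of timing instrumentation (assuming the in-process C# model; the queue name "bin-jobs" and the commented-out processing call are placeholders for your existing code):

    using System;
    using System.Diagnostics;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;

    public static class TimedProcessing
    {
        [FunctionName("ProcessBin")]
        public static async Task Run(
            [QueueTrigger("bin-jobs")] string message,
            ILogger log)
        {
            var stopwatch = Stopwatch.StartNew();
            log.LogInformation("Started {msg} at {start:o}", message, DateTime.UtcNow);

            // await ProcessBlobAsync(message);   // your existing 22.bin -> 22output.bin work
            await Task.CompletedTask;

            stopwatch.Stop();
            log.LogInformation("Finished {msg} after {elapsed}", message, stopwatch.Elapsed);
        }
    }

If the logged duration regularly passes 5 minutes on the Consumption plan, that would explain the repeated processing of the same message.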