I created a Function App (v1) with one function in it. The function is a Service Bus trigger.
After publishing it to Azure I see this strange behaviour.
Sometimes the function gets triggered and sometimes it doesn't. There are no errors in the logs at that time, but I can see messages being put into the dead-letter queue with a "delivery count exceeded" error.
Now, if I try to resend the failed messages, they fail again. But if I go to the portal, refresh the function app, and send the messages again, they get picked up and processed as usual.
Here's how I send messages:
using System.Text;
using Microsoft.Azure.ServiceBus;

var client = new QueueClient(connectionString, queueName);
var bytes = Encoding.UTF8.GetBytes(msgStr);
var message = new Message(bytes);
await client.SendAsync(message);
I don't see this issue when running the function locally.
New messages (<5) arrive every 15-20 minutes.
I'm using the Consumption plan.
host.json is empty
Please help.
Cold Start
On the Consumption plan your function app goes to sleep after roughly 10-15 minutes of being idle. Any incoming trigger then wakes the function app up again, which can take a couple of seconds; this is called a cold start.
The problem in my opinion
Looking at the stats you shared, my take is that when Service Bus tries to deliver a message, the function app is asleep and the delivery fails; after enough failed attempts the message is dead-lettered with the "delivery count exceeded" error you are seeing. When you restart your function app the instance is warmed up explicitly and can handle the messages for that window, but it goes back to sleep after the idle period.
Possible solutions
Keep the instance warm
Keep your function app warm all the time. You can simply create another timer-triggered function in the same function app that fires every 5 minutes or so; it will keep the instance running all the time (see the sketch below).
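For illustration, a minimal sketch of such a keep-warm timer function (C#, shown against the newer in-process SDK; the name and the 5-minute CRON schedule are just placeholders, so adapt it to your v1 project):

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class KeepWarm
{
    // Fires every 5 minutes; the trigger firing is enough to keep the instance loaded.
    [FunctionName("KeepWarm")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        log.LogInformation($"Keep-warm ping at {DateTime.UtcNow:O}");
    }
}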
Increase retry count
Check the maximum delivery count on your queue and increase it to a larger number, so that by the time the function app wakes up, Service Bus is still able to deliver the messages instead of dead-lettering them (a sketch of changing it follows).
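As a rough sketch (assuming the Microsoft.Azure.ServiceBus SDK already used in the question; the value 20 is just an example), the queue's MaxDeliveryCount can be raised programmatically, or equivalently changed in the portal:

using Microsoft.Azure.ServiceBus.Management;

var management = new ManagementClient(connectionString);
var queue = await management.GetQueueAsync(queueName);
queue.MaxDeliveryCount = 20;   // default is 10 attempts before the message is dead-lettered
await management.UpdateQueueAsync(queue);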
Use an App Service plan
Another way is to use an App Service plan instead of the Consumption plan. This keeps the function app warm all the time so it can handle the messages. Considering your workload (<5 messages every 15-20 minutes) it could be a bit of an expensive option, since on the Consumption plan you get 1 million executions per month free of charge.
Related
I have a queue-triggered Azure Function which runs whenever a message appears in Azure Queue Storage.
My workflow is:
A user may schedule a task which needs to run after a few days, at a particular time (the "execute at" time).
So I put a message in the Azure queue with a visibility timeout equal to the time difference between the current time and the "execute at" time of that task (roughly as sketched below).
When the message becomes visible in the queue it gets picked up by the Azure Function and the task gets executed.
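The enqueue step looks roughly like this (a sketch with the Azure.Storage.Queues SDK; the queue name and the variables are placeholders):

using System;
using Azure.Storage.Queues;

var queue = new QueueClient(connectionString, "scheduled-tasks");
var delay = executeAt - DateTimeOffset.UtcNow;   // storage queues cap the visibility timeout at 7 days
await queue.SendMessageAsync(taskPayload, visibilityTimeout: delay);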
I'm facing an intermittent issue when the queue message is supposed to become visible after a few days (<7 days): somehow it gets dropped/removed from the queue, so it is never picked up by the function and the task still shows as pending.
I've gone through all the articles I could find on the internet and didn't find a solution to my problem.
The worst part is that it works fine for a few weeks, but every now and then the queue messages (the invisible ones) suddenly disappear. (I use Azure Storage Explorer to check the number of invisible messages.)
I am working with an Azure Function App in Python that has two functions, an HTTPTrigger and a QueueTrigger. In the QueueTrigger I call my custom code, which takes more than 10 minutes to process. I already raised the limit from 5 to 10 minutes in host.json with {"functionTimeout": "00:10:00"}. My question is: is there a way to extend the processing time by updating the queue message's content,
or its visibilityTimeout, or timeout? In other words, would the Function App's processing time be extended if you keep extending the message's invisibility until it is processed? See the Python API QueueService.update_message().
Are there any other serverless options for running long processes?
Updates the visibility timeout of a message. You can also use this
operation to update the contents of a message.
This operation can be used to continually extend the invisibility of a
queue message. This functionality can be useful if you want a worker
role to "lease" a queue message. For example, if a worker role calls
get_messages and recognizes that it needs more time to process a
message, it can continually extend the message's invisibility until it
is processed. If the worker role were to fail during processing,
eventually the message would become visible again and another worker
role could process it.
update_message(queue_name, message_id, pop_receipt, visibility_timeout, content=None, timeout=None)
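For the .NET readers in this thread, a minimal sketch of that lease-extension pattern with the Azure.Storage.Queues SDK (the queue name, connection string and workIsDone flag are placeholders; the Python update_message call above is the direct equivalent):

using System;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

var queue = new QueueClient(connectionString, "work-items");
QueueMessage msg = (await queue.ReceiveMessagesAsync(1, TimeSpan.FromMinutes(5))).Value[0];
string popReceipt = msg.PopReceipt;

while (!workIsDone)
{
    // ... process another chunk of the work ...
    // renew the "lease" before the current visibility timeout runs out
    var receipt = await queue.UpdateMessageAsync(msg.MessageId, popReceipt,
                                                 visibilityTimeout: TimeSpan.FromMinutes(5));
    popReceipt = receipt.Value.PopReceipt;
}
await queue.DeleteMessageAsync(msg.MessageId, popReceipt);

Note that this only keeps the queue message invisible to other consumers; it does not by itself lift the Functions host's functionTimeout, which is what the next answer addresses.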
If you need Functions that can run longer then 10 min, you need to switch to App Service Plan. There you can run Functions indefinitely: https://learn.microsoft.com/en-us/azure/azure-functions/functions-scale#timeout
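For example, assuming the v2+ host.json schema, the timeout can then be set to unbounded (or simply to a higher fixed value):

{
  "version": "2.0",
  "functionTimeout": "-1"
}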
Be aware, though, this isn't fully "serverless" any more in terms of scaling. App Service plan won't scale more or less indefinitely in the same way as consumption plan scales. Plus you pay the fixed price for the app service plan.
I have data going from my system to an Azure IoT Hub. I timestamp the data packet when I send it. Then I have an Azure Function that is triggered by the IoT Hub. In the function I read the message, extract the timestamp, and record how long it took the data to reach the function. I also have another program running on my system that listens for data on the IoT Hub and records that time too.
Most of the time the latency measured in the Azure Function is in milliseconds, but sometimes I see a large delay before the function is triggered (I conclude it is the trigger that is late because the program reading from the IoT Hub shows the data reached the hub quickly, with no delay).
Would anybody know the reasons why an Azure Function might trigger late?
Is this the same question that was asked here? https://github.com/Azure/Azure-Functions/issues/711
I'll copy/paste my answer for others to see:
Based on what I see in the logs and your description, I think the latency can be explained as being caused by a cold-start of your function app process. If a function app goes idle for approximately 20 minutes, then it is unloaded from memory and any subsequent trigger will initiate a cold start.
Basically, the following sequence of events takes place:
The function app goes idle and is unloaded (this happened about 5 minutes before the trigger you mentioned).
You send the new event.
The event eventually gets noticed by our scale controller, which polls for events on a 10 second interval.
Our scale controller initiates a cold-start of your function app. This can add a few more seconds depending on the content of your function app (it was about 6 seconds in this case).
So unfortunately this is a known behavior with the consumption plan. You can read up on this issue here: https://blogs.msdn.microsoft.com/appserviceteam/2018/02/07/understanding-serverless-cold-start/. The blog post also discusses some ways you can work around this if it's problematic for your scenario.
I have an Azure Function app with 4 functions
one triggered on a timer every 24 hours
one triggered on events from IoT Hub
two others triggered on events from Service Bus as a result of the previous function
All functions work as expected when first deployed but after a period of time the functions stop running and the app appears to be scaled down with no servers online. At this point the functions are never triggered again unless I either restart the app, or drill into a function and view details of it (supposedly, forcing the function to start up).
I have the exact same code deployed to a different environment and it runs perfectly and has never encountered this issue. I've checked all the settings and configuration and can't see any material differences between the two.
This is really frustrating and is becoming a big issue. Any help would be much appreciated.
Function App is hosted in Australia Southeast.
This is the last execution (as of now)
10:45 PM UTC - Function started (Id=4d29555b-d3af-43d7-95e9-1a4a2d43dc46)
The event triggered function should run every few minutes as the IoT Hub it's triggering from has a steady stream of events coming in. When I prod the function (or restart it) and it comes to life it quickly churns through a backlog of messages queued in the IoT Hub.
I see the problem: you have comments in your host.json, which makes it invalid and throws off the parser at the scale controller level.
Admittedly, the error handling is quite poor here. But anyway, remove the commented out logger, and it should all work.
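For illustration, a hypothetical host.json of the kind that causes this (not the asker's actual file); strict JSON has no comment syntax, so a parser that expects plain JSON will choke on it:

{
  "functionTimeout": "00:05:00"
  // "logger": { "categoryFilter": { "defaultLevel": "Information" } }
}

Deleting the commented-out line (or keeping such notes outside the file) leaves valid JSON that the scale controller can read.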
I am very new to Azure so I am not sure if my question is stated correctly but I will do my best:
I have an App that sends data in the form (1.bin, 2.bin, 3.bin...), always in consecutive order, to a blob input container. When this happens it triggers an Azure Function via a QueueTrigger, and the output of the function (1output.bin, 2output.bin, 3output.bin...) is stored in a blob output container.
When the run crashes, Azure tries 5 times before giving up. When it succeeds, it runs just once and that's it.
I am not sure what happened last week, but since then, after each successful run the function sits idle for about 7 minutes and then starts the process again as if it were the first time. So, for example, the blob container receives 22.bin, the function processes 22.bin and generates 22output.bin; it is supposed to stop after that, but after seven minutes it is processing 22.bin again.
I don't think it is the app, because each time the app sends data, even if it is the same data, it names it with the next number (23.bin in my example). But that is not what's happening: it is just doing 22.bin again, as if the trigger queue was not cleared after the successful run, and it keeps doing it over and over until I have to stop the function and force it to fail in order to stop it.
Any idea why this is happening, and what I can try in order to correct it, is greatly appreciated. I am just starting to learn about all this stuff.
One thing that could possibly be happening is that the function execution time is exceeding 5 minutes. Since this is a hard limit, the Functions runtime would terminate the current execution and restart the function host.
One way to test this would be to create a Function App on a Standard App Service plan instead of the Consumption plan; a Function App on a Standard plan does not have the execution time limit. You can also log the function start and end times to see whether it takes longer than 5 minutes to finish processing a queue message, roughly as sketched below.
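A rough sketch of that instrumentation (the function name, queue name and ProcessAsync stand in for the asker's existing code):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessBin
{
    [FunctionName("ProcessBin")]
    public static async Task Run([QueueTrigger("input-queue")] string item, ILogger log)
    {
        var started = DateTime.UtcNow;
        log.LogInformation($"Started processing {item} at {started:O}");

        await ProcessAsync(item);   // stand-in for the existing processing logic

        log.LogInformation($"Finished {item} after {DateTime.UtcNow - started}");
    }

    private static Task ProcessAsync(string item) => Task.CompletedTask;   // placeholder
}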