We have a durable function which is invoked from a Service Bus trigger, and it is working fine with Azure.
However, it appears as though the message doesn't get removed from Service Bus until the whole process has finished.
Is this by design? Is there a workaround? This will cause me problems if the message gets picked up twice.
Paul
Yes, that's by design. The runtime completes the message automatically when the function execution completes successfully - and that is not until the end of the orchestration, if it gets all the way through.
You could have a separate non-durable Azure Function with the Service Bus trigger, and get that to start a new instance of the durable function. The orchestration would run asynchronously, while the trigger function itself returns immediately (and marks the message as completed). You'd just need to ensure your durable function does the right thing (for you) in the event of a failure.
See https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-instance-management?tabs=csharp for example
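A minimal sketch of that pattern, assuming the in-process C# model with Durable Functions 2.x; the queue, connection, and orchestrator names here are placeholders:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;

public static class QueueStarter
{
    // The Service Bus message is completed as soon as this function returns
    // successfully; the orchestration keeps running on its own afterwards.
    [FunctionName("QueueStarter")]
    public static async Task Run(
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] string message,
        [DurableClient] IDurableOrchestrationClient starter,
        ILogger log)
    {
        // "ProcessMessage" is a hypothetical orchestrator function name.
        string instanceId = await starter.StartNewAsync("ProcessMessage", null, message);
        log.LogInformation("Started orchestration {InstanceId}.", instanceId);
    }
}
```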
I am running a Node app on Google Cloud Functions. It gets triggered by a webhook on GitHub.
Once the webhook gets triggered, my Google Cloud Function starts its function:
Function execution started
Normally, if nothing else happens, the function runs to completion: Function execution took 13 ms, finished with status code: 200.
But if the webhook gets triggered again while the first invocation is still running, my Google Cloud Function abandons the function that was running and says Function execution started again.
Is there a way for me to queue my functions, so that one execution finishes before the next one starts?
You would need a "queue" or broker mechanism in your flow. In GCP, this can be achieved by using two functions and a Pub/Sub topic (and subscription):
Function 1 (GitHub webhook listener):
This function listens for the GitHub webhook calls and publishes a Pub/Sub message to a topic.
Function 2 (Event handler):
This function subscribes to that topic in the "push" configuration. It is vital that, in the settings of this function, the max instances parameter is set to 1. This ensures that only one instance of your function is active at a time, so your events get processed one after the other, waiting for one message to complete processing before moving on to the next. Also important in this function is to carefully consider when to acknowledge the Pub/Sub message, since that is what allows the broker to send the next message.
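The asker's app is Node, but sticking with the C# used elsewhere on this page, here is a rough sketch of the publish step in Function 1, assuming the Google.Cloud.PubSub.V1 package; the github-events topic name and the ForwardAsync wrapper are hypothetical. Function 2 needs no special code for the ordering behaviour - the max instances = 1 limit is deployment configuration, not code.

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Google.Cloud.PubSub.V1;

public static class WebhookListener
{
    // Reads the raw webhook payload and hands it off to Pub/Sub; Function 2
    // then consumes these messages one at a time via its subscription.
    public static async Task ForwardAsync(Stream requestBody, string projectId)
    {
        using var reader = new StreamReader(requestBody);
        string payload = await reader.ReadToEndAsync();

        var topic = TopicName.FromProjectTopic(projectId, "github-events");
        PublisherClient publisher = await PublisherClient.CreateAsync(topic);
        await publisher.PublishAsync(payload);
        await publisher.ShutdownAsync(TimeSpan.FromSeconds(10));
    }
}
```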
We are working with Azure Functions, which are triggered on every message in a Service Bus queue. We are trying to solve a problem whereby we need to dynamically disable a function on the function app, so that it does not process messages any further, without losing any messages in the process.
We can disable the functions in multiple ways (referring to this link), but the problem remains the same: we are unable to figure out what happens to function executions already in flight when we disable the function.
Since the function is Service Bus triggered, there is always a possibility that it is processing a message when we disable it. Does that message still get processed? Is some sort of cancellation raised? Does the execution just die with an exception?
It would be great if someone could direct me to some documentation on this. Thanks.
An Azure Service Bus triggered function will already hold a lock on the message being processed. If the function is terminated and the message was not completed or otherwise dispositioned, the lock will expire and the message will reappear on the queue. That's because the Functions runtime receives messages in PeekLock mode.
One factor to consider is the queue's MaxDeliveryCount. If the function is terminated during the last processing attempt, the message will be dead-lettered, as all delivery attempts have been exhausted. That's standard Azure Service Bus behaviour.
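For illustration, the delivery count is also exposed to the function as trigger metadata, so you can observe which attempt you are on; a minimal sketch assuming the in-process C# model (queue and connection names are placeholders):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class Worker
{
    [FunctionName("Worker")]
    public static void Run(
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] string message,
        int deliveryCount, // trigger metadata: how many times this message has been delivered
        ILogger log)
    {
        // Once deliveryCount reaches the queue's MaxDeliveryCount and this
        // attempt also fails, Service Bus dead-letters the message.
        log.LogInformation("Delivery attempt {Count}.", deliveryCount);
    }
}
```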
I'm a bit confused regarding the EventHubTrigger for Azure functions.
I've got an IoT Hub, and am using its eventhub-compatible endpoint to trigger an Azure function that is going to process and store the received data.
However, if my function fails (= throws an exception), that message (or messages) being processed during that function call will get lost. I actually would expect the Azure function runtime to process the messages at a later time again. Specifically, I would expect this behavior because the EventHubTrigger is keeping checkpoints in the Function Apps storage account in order to keep track of where in the event stream it has to continue.
The documentation of the EventHubTrigger even states that
If all function executions succeed without errors, checkpoints are added to the associated storage account
But still, even when I deliberately throw exceptions in my function, the checkpoints get updated and the messages are not delivered again.
Is my understanding of the EventHubTrigger's documentation wrong, or is the EventHubTrigger's implementation (or its documentation) wrong?
This piece of documentation does seem confusing. I guess they mean errors in the Function App host itself, not in your code. An exception inside a function execution doesn't stop the processing and checkpointing progress.
The fact is that Event Hubs are not designed for individual message retries. The processor works in batches, and it can either mark the whole batch as processed (i.e. create a checkpoint after it), or retry the whole batch (e.g. if the process crashed).
See this forum question and answer.
If you still need to re-process failed events from Event Hubs (and errors don't happen too often), you could implement such a mechanism yourself, e.g. (see the sketch after this list):
Add an output Queue binding to your Azure Function.
Add a try-catch around the processing code.
If an exception is thrown, add the problematic event to the Queue.
Have another function with a Queue trigger to process those events.
Note that the downside of this is that you will lose the ordering guarantee provided by Event Hubs (since the Queue message will be processed later than its neighbors).
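A sketch of those four steps, assuming the in-process C# model; the hub, connection, and queue names, and the ProcessEvent helper, are placeholders:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class EventProcessor
{
    [FunctionName("EventProcessor")]
    public static void Run(
        [EventHubTrigger("myhub", Connection = "EventHubConnection")] string eventBody,
        [Queue("failed-events")] ICollector<string> retryQueue,
        ILogger log)
    {
        try
        {
            ProcessEvent(eventBody); // hypothetical processing logic
        }
        catch (Exception ex)
        {
            // Swallow the exception so checkpointing proceeds, but park the
            // event on a Storage queue where a Queue-triggered function
            // (with its usual retry semantics) can reprocess it.
            log.LogError(ex, "Processing failed; forwarding event to retry queue.");
            retryQueue.Add(eventBody);
        }
    }

    private static void ProcessEvent(string eventBody)
    {
        // Placeholder for the real work.
    }
}
```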
Quick fix: a retry policy would not help if the downstream system is down for a few hours, but you can call Process.GetCurrentProcess().Kill(); in your exception handling. This stops the checkpoint from moving forward. I have tested this with a Consumption-plan function app. You will not see anything in the logs, but I added an email notification to flag that something went wrong; to avoid data loss, I have killed the function instance.
Hope this helps.
I plan to write a blog post about this, and about the other part of the workflow, where I stop the function via a Logic App in the case of continuous failures on the downstream system.
Per Azure Functions Service Bus bindings:
Trigger behavior
...
PeekLock behavior - The Functions runtime receives a message in PeekLock mode and calls Complete on the message if the function finishes successfully, or calls Abandon if the function fails. If the function runs longer than the PeekLock timeout, the lock is automatically renewed.
I am assuming that when the Azure Function calls Complete on the message, the message is removed from the queue.
What should I do in my function if I want my function to spy on the message but never delete it?
Unsuccessful processing of a message (the function throwing an exception, or an explicit Abandon operation on the message) will not complete the message.
That said, I see a problem with this approach. You're not truly "spying" on the messages, but actively processing them. This means a given message will be re-delivered repeatedly and eventually end up in the dead-letter queue. If you want to spy, you should peek at the messages, but the Azure Service Bus trigger doesn't do that.
If you need a wiretap implementation, it's probably not a bad idea to use a topic with two subscriptions: one to consume the messages, and another that duplicates all the messages for your wiretap function (which perhaps does some sort of analysis or logging). Without understanding the full scope of what you're doing, it's hard to provide a definitive answer.
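For illustration, a sketch of that wiretap layout - two functions on the same topic, each bound to its own subscription (all names here are hypothetical):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TopicFunctions
{
    // The "real" consumer, on its own subscription.
    [FunctionName("Consumer")]
    public static void Consume(
        [ServiceBusTrigger("orders", "process", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        // Actual processing goes here; completing this message has no
        // effect on the copy delivered to the wiretap subscription.
    }

    // The wiretap: independently receives a copy of every message.
    [FunctionName("Wiretap")]
    public static void Spy(
        [ServiceBusTrigger("orders", "audit", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        log.LogInformation("Observed message: {Message}", message);
    }
}
```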
I'm working out a scenario where I post a message to an Azure Storage queue. For testing purposes I've developed a console app, where I get the message, update it with a try count, and delete it when the logic is done.
Now I'm trying to port my code to an Azure Function. One thing that seems very different is that, when the Azure Function is called, the message is deleted from the queue.
I find it hard to find any documentation on this specific subject and I feel I'm missing something with regard to the concept of combining these two.
My questions:
Am I right that when you trigger a function on a new queue item, the function takes the message and deletes it from the queue, even if the function fails?
If 1 is correct, how do you make sure that the message is retried, and eventually posted to a dead-letter (poison) queue for later processing?
The runtime only deletes the queue message when your function successfully processes it (i.e. no error has occurred). When the message is dequeued and passed to your function, it becomes invisible for a period of time (10 minutes). While your function is running, this invisibility is maintained. If your function fails, the message is not deleted; it remains in the queue in an invisible state. After the visibility timeout expires, the message becomes visible in the queue again for reprocessing.
The details of how core WebJobs SDK queue processing works can be found here. On that page, see the section "How to handle poison messages" which addresses your question. Basically you'll get all the right behaviors for free - retry handling, poison message handling, etc. :)
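To make that concrete, a minimal sketch assuming the in-process C# model; after the default five failed attempts (maxDequeueCount), the runtime moves the message to a queue named <originalqueue>-poison automatically:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class QueueFunctions
{
    [FunctionName("Worker")]
    public static void Run(
        [QueueTrigger("work-items")] string item,
        int dequeueCount, // how many times this message has been dequeued so far
        ILogger log)
    {
        log.LogInformation("Attempt {Count} for message: {Item}", dequeueCount, item);
        // If processing throws here, the message is not deleted; it reappears
        // after the visibility timeout and is retried, up to maxDequeueCount.
    }

    // Receives messages the runtime has given up on.
    [FunctionName("PoisonHandler")]
    public static void HandlePoison(
        [QueueTrigger("work-items-poison")] string item,
        ILogger log)
    {
        log.LogError("Poison message received: {Item}", item);
    }
}
```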