Azure IoT Hub can't trigger multiple Azure Functions - node.js

I have an Azure IoT Hub,
Spectraqual-free.azure-devices.net,
triggering a Function App,
https://spectralqual.azurewebsites.net (Consumption plan),
via Event Hub triggers on its functions, and the functions have bindings to Azure Table Storage:
https://spectralqualstorage.table.core.windows.net/
The problem is that the functions don't work smoothly; something goes wrong. From the function logs I can see that some of my functions were triggered but didn't receive the Event Hub message, and accordingly didn't update the bound Table Storage.
Others are not triggered at all, even though they were triggered successfully before.
I developed these functions in Visual Studio Code; after debugging and making sure they were free of errors, I pushed them to the cloud using GitHub sync. Functions that were triggered successfully before in Visual Studio Code I can't get to trigger again, and I'm sure the IoT Hub is receiving my messages. So the messages are delivered, but no trigger happens. Any help please!

If multiple Functions are bound to the same Event Hub, and you want each Function to process all events of that Hub, you have to put each Function into a dedicated consumer group.
Otherwise, the Functions will compete for events, and whichever one locks a given partition at a given time will be the only Function to get events from that partition.
Read more about consumer groups in the docs. Use the consumerGroup property of the binding to set its value for each Function.
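For illustration, a minimal function.json sketch of an Event Hub trigger binding pinned to a dedicated consumer group (the hub, connection, and group names here are placeholders):

```json
{
  "bindings": [
    {
      "type": "eventHubTrigger",
      "name": "eventHubMessages",
      "direction": "in",
      "eventHubName": "my-iot-hub-endpoint",
      "connection": "IoTHubConnectionString",
      "cardinality": "many",
      "consumerGroup": "function-a-group"
    }
  ]
}
```

Create one consumer group per function on the IoT Hub's Event Hub-compatible endpoint, and point each function's binding at its own group.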

Related

Azure Service Bus message receiver - loop vs stream

I'm working on standing up the Azure Service Bus messaging infrastructure for my team, and I'm trying to establish best practices for developing Service Bus message receivers. We are standing up a new service to consume the Service Bus messages; the start up script will instantiate the message receivers and start their message reception.
The pattern I'm setting up for my team is to extend a base receiver class and implement an abstract function that starts the message receiver in streaming fashion.
I'm curious if there are any notable differences between receiving messages using ServiceBusReceiver::subscribe vs ServiceBusReceiver::receiveMessages (stream vs loop)? I'm suggesting that my team uses ServiceBusReceiver::subscribe since it registers the reception forever and it seems to handle errors more gracefully.
I've noticed two differences between the stream and the loop:
ServiceBusReceiver::receiveMessages is asynchronous. This means that in my script I would need to run Promise.all or Promise.allSettled to start the receivers in parallel. Because of the limited error handling with loop-based message reception, I noticed that if the receiver hits an error, it halts message processing. This scenario would require our team to restart the service if any of the receivers hits an error, which is a con for our team.
The streaming method is synchronous, so my start-up script can register the subscriptions, save the return values, and close the subscriptions on shutdown.
If I refer to this object's properties in the ServiceBusReceiver::subscribe callback functions, I get an error that the property is undefined. It seems like the callback functions lose context of the object?
Thanks in advance
The intended way of receiving messages from the messaging services is definitely streaming, though both ways of receiving work just fine with the Service Bus JS SDK.
receiveMessages (the loop) is more for the convenience of users who just want to receive messages simply and don't want to deal with callbacks, handlers, etc.
Internally, receiveMessages also does streaming to receive the messages and waits for the given duration before returning the array of messages.
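For comparison, a minimal loop-style sketch with @azure/service-bus (the connection string and queue name are placeholders):

```typescript
import { ServiceBusClient } from "@azure/service-bus";

const client = new ServiceBusClient("<connection-string>");
const receiver = client.createReceiver("orders");

async function pollOnce(): Promise<void> {
  // Waits up to maxWaitTimeInMs, then returns whatever messages arrived.
  const messages = await receiver.receiveMessages(10, { maxWaitTimeInMs: 5000 });
  for (const message of messages) {
    console.log(message.body);
    await receiver.completeMessage(message); // settle so it isn't redelivered
  }
}
```

If this loop throws, nothing restarts it for you, which matches the restart concern described in the question.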
Hope that might clarify your doubts.
If I refer to this object's properties in the ServiceBusReceiver::subscribe callback functions, I get an error that the property is undefined. It seems like the callback functions lose context of the object?
You can perhaps use arrow functions. For reference, please check this part of an unrelated subscribe test...
https://github.com/Azure/azure-sdk-for-js/blob/d417e93b53450b2660c34965ffa177f3d4d2f947/sdk/servicebus/perf-tests/service-bus/test/subscribe.spec.ts#L72
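As an illustration of the arrow-function suggestion, here is a sketch where the subscribe callbacks keep the instance's this (the class, queue name, and connection string are hypothetical):

```typescript
import {
  ServiceBusClient,
  ServiceBusReceivedMessage,
  ProcessErrorArgs,
} from "@azure/service-bus";

class OrderReceiver {
  private client = new ServiceBusClient("<connection-string>");
  private receiver = this.client.createReceiver("orders");
  private processed = 0;

  start() {
    // Arrow functions capture `this` lexically, so `this.processed`
    // still refers to the OrderReceiver instance inside the callbacks.
    return this.receiver.subscribe({
      processMessage: async (message: ServiceBusReceivedMessage) => {
        this.processed += 1;
        console.log(`message #${this.processed}:`, message.body);
      },
      processError: async (args: ProcessErrorArgs) => {
        console.error("receive error:", args.error);
      },
    });
  }
}
```

With a plain function expression, this would instead be whatever the SDK invokes the callback with, which is why the property appears undefined.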

ServiceBus message delivery time reliable?

I'm working on creating an events system with Azure Service Bus. I find events generally hit reliably at the scheduled time I set them to run, so if event 'pop' is supposed to run at 12:30pm, it generally is delivered at that time to my receiver.
I wanted to know: is there a guarantee that events are always fired at the scheduled time, or is that more of a suggested time, where the system can get clogged and backlogged, causing longer queues to form?
There are quite a few differences between messages (which are handled with Service Bus) and events, as you can see in the article Choose between Azure messaging services - Event Grid, Event Hubs, and Service Bus.
An event is a lightweight notification of a condition or a state change. The publisher of the event has no expectation about how the event is handled. The consumer of the event decides what to do with the notification. Events can be discrete units or part of a series.
[...]
A message is raw data produced by a service to be consumed or stored elsewhere. The message contains the data that triggered the message pipeline.
It sounds like you need a reliable way to have a timer trigger execute at a specific time. Service Bus is not the correct service for that, since "the message enqueuing time does not mean that the message will be sent at that time. It will get enqueued, but the actual sending time depends on the queue's workload and its state." (see BrokeredMessage.ScheduledEnqueueTimeUtc Property).
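For illustration, a sketch of what scheduling looks like in the current JS SDK (the queue name and connection string are placeholders); the date passed is the requested enqueue time, not a guaranteed delivery time:

```typescript
import { ServiceBusClient } from "@azure/service-bus";

async function schedulePop(): Promise<void> {
  const client = new ServiceBusClient("<connection-string>");
  const sender = client.createSender("events");

  // Asks the broker to *enqueue* the message at 12:30; actual delivery
  // still depends on the queue's workload and state.
  await sender.scheduleMessages(
    { body: { event: "pop" } },
    new Date("2023-06-01T12:30:00Z")
  );

  await client.close();
}
```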
For handling the triggering in a reliable way, you could use services like Logic Apps (if you want to build it low-code/no-code) or Azure Functions (for a serverless solution with code).
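A minimal timer-trigger sketch, assuming the Node.js programming model where function.json carries the NCRONTAB schedule 0 30 12 * * * (12:30 every day):

```typescript
import { AzureFunction, Context } from "@azure/functions";

// Fires on the schedule defined in function.json; the runtime, not a
// queue's workload, is responsible for hitting the scheduled time.
const timerTrigger: AzureFunction = async (context: Context): Promise<void> => {
  context.log(`pop fired at ${new Date().toISOString()}`);
};

export default timerTrigger;
```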
If you're actually looking for events, consider Event Grid.

What happens if multiple Azure Function apps bind to the same storage queue for input

I have function apps running in two different regions for redundancy, i.e. there are two separate apps in the Azure portal (deployed from the same code). Both apps therefore have a function whose input binds to the same storage queue. Would all messages be delivered to both, or would the messages get split between the two?
I am using C#, dotnet core, and Functions 2.0.
You do not have to worry about it. The Functions runtime will lock the messages using the default storage queue behavior.
From the docs:
The queue trigger automatically prevents a function from processing a queue message multiple times; functions do not have to be written to be idempotent.
Now, I do know the docs are talking about one function that is scaling out, but the same applies to two functions with the same queue binding.
So
Would all messages be delivered to both or would the messages get split between the two?
The latter: messages will be split between the two.
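To see why they split, a sketch of the underlying storage queue semantics with @azure/storage-queue (the connection string and queue name are placeholders):

```typescript
import { QueueClient } from "@azure/storage-queue";

const queue = new QueueClient("<connection-string>", "jobs");

async function dequeueOnce(): Promise<void> {
  // A received message becomes invisible to every other consumer for
  // visibilityTimeout seconds -- this is the "lock" the Functions runtime
  // relies on, whichever app instance happens to receive first.
  const response = await queue.receiveMessages({
    numberOfMessages: 1,
    visibilityTimeout: 30,
  });
  for (const msg of response.receivedMessageItems) {
    // Deleting before the timeout expires prevents redelivery elsewhere.
    await queue.deleteMessage(msg.messageId, msg.popReceipt);
  }
}
```

Each message is therefore handed to one consumer at a time, regardless of how many apps poll the queue.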
Is anyone seeing anything different with this? I'm using Azure Service Bus queues, but it should be the same. I can see in our logs where the queue starts processing the item at almost exactly the same time, and it does it twice, which matches the number of function apps pointing to the same queue. It doesn't do it every time, but often enough that I can say it's not locking the message from being picked up multiple times.
This might be the case for a single function app that has logic to mitigate processing the same message twice.
However:
I have seen a scaled function app using a queue trigger sometimes get the same message on multiple instances.
You have to be prepared for that when scaling the function.

Azure Function Event Hub Trigger reliability

I'm a bit confused regarding the EventHubTrigger for Azure Functions.
I've got an IoT Hub, and am using its eventhub-compatible endpoint to trigger an Azure function that is going to process and store the received data.
However, if my function fails (= throws an exception), the message (or messages) being processed during that function call will get lost. I would actually expect the Azure Functions runtime to process the messages again at a later time. Specifically, I would expect this behavior because the EventHubTrigger keeps checkpoints in the Function App's storage account in order to keep track of where in the event stream it has to continue.
The documentation of the EventHubTrigger even states that
If all function executions succeed without errors, checkpoints are added to the associated storage account
But still, even when I deliberately throw exceptions in my function, the checkpoints will get updated and the messages will not get received again.
Is my understanding of the EventHubTriggers documentation wrong, or is the EventHubTriggers implementation (or its documentation) wrong?
This piece of documentation seems confusing indeed. I guess they mean errors of the Function App host itself, not of your code. An exception inside a function execution doesn't stop the processing and checkpointing progress.
The fact is that Event Hubs are not designed for individual message retries. The processor works in batches, and it can either mark the whole batch as processed (i.e. create a checkpoint after it), or retry the whole batch (e.g. if the process crashed).
See this forum question and answer.
If you still need to re-process failed events from Event Hub (and errors don't happen too often), you could implement such a mechanism yourself, e.g. (see the sketch after this list):
Add an output Queue binding to your Azure Function.
Add try-catch around processing code.
If exception is thrown, add the problematic event to the Queue.
Have another Function with Queue trigger to process those events.
Note that the downside of this is that you will lose the ordering guarantee provided by Event Hubs (since the Queue message will be processed later than its neighbors).
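A minimal sketch of that pattern, assuming an Event Hub trigger with cardinality "many" and a Queue output binding named "deadletter" in function.json (all names here are placeholders):

```typescript
import { AzureFunction, Context } from "@azure/functions";

const eventHubTrigger: AzureFunction = async (
  context: Context,
  events: unknown[]
): Promise<void> => {
  const failed: unknown[] = [];
  for (const event of events) {
    try {
      // ... process the event ...
    } catch (err) {
      context.log.error("processing failed, parking event for retry", err);
      failed.push(event);
    }
  }
  // Each array element becomes one queue message for the
  // queue-triggered retry function to pick up.
  context.bindings.deadletter = failed;
};

export default eventHubTrigger;
```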
Quick fix: since a retry policy would not help if the downstream system is down for a few hours, you can call Process.GetCurrentProcess().Kill(); in your exception handling. This stops the checkpoint from moving forward. I have tested this with a Consumption-plan function app. You will not see anything in the logs, but I added an email notification that something went wrong; to avoid data loss, I killed the function instance.
Hope this helps.
I will put up a blog post about this and the other part of the workflow, where I stop the function via a Logic App in case of continuous failure of the downstream system.

Azure Functions notification on failure

I have timer-triggered Azure functions running in production, but now I want to be notified if the function fails.
In my case, access to various connected services can cause crashes, and there are many of them to troubleshoot. The crash is the type of error I need notification for.
When the function does fail, the log entry indicates failure, so I wonder if there is a hook in the system that would allow me to cause the system to generate a notification.
I know that blob and queue bindings, for instance, support the creation of poison queue entries, but timer trigger binding doesn't say anything about any trigger outputs of that nature.
I see that functions can pass their $return status as input to other functions, but that operation is not explained in depth in the docs. Also, in that case, I need to write another function to process the error status, and I was looking for something built-in.
I have inquired with #AzureSupport about this, but their answer had nothing to do with Azure Functions; they referred me to DLL notification hooks, then recommended I file on UserVoice.
I'm sure there must be people here who have implemented some sort of error status notification. I prefer a solution that doesn't require code.
The recommended way to monitor and alert on failures is to use Application Insights, which now integrates fully with Azure Functions:
https://blogs.msdn.microsoft.com/appserviceteam/2017/04/06/azure-functions-application-insights/
Since all the logs are available in AppInsights, it's easy to monitor for failures and set up alerts based on your own criteria.
However, if you only care about alerting and not about things like monitoring, you could use Azure Monitor instead: https://learn.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-get-started
When the function does fail, the log entry indicates failure, so I wonder if there is a hook in the system that would allow me to cause the system to generate a notification.
...
I prefer a solution that doesn't require code.
This is a zero-code solution:
I poked #AzureFunctions once before on this topic, and the suggested response was to use Application Insights. It can handle alerts upon failure and can also use webhooks.
See the Azure Functions App Insights documentation on how to link your function app to App Insights. Then set up any alerts you want.
Unfortunately this hook doesn't exist.
Can you switch from a timer trigger to a queue trigger?
You can get retries (if you want them), and after the specified number of attempts the message is sent to a poison queue.
To schedule executions, you can add queue messages with a visibility timeout that matches your schedule.
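A sketch of the visibility-timeout scheduling idea with @azure/storage-queue (the connection string, queue name, and payload are placeholders):

```typescript
import { QueueClient } from "@azure/storage-queue";

const queue = new QueueClient("<connection-string>", "scheduled-work");

async function scheduleRun(): Promise<void> {
  // The message stays invisible for visibilityTimeout seconds, so the
  // queue-triggered function fires roughly at enqueue time + timeout.
  // The Functions queue trigger expects base64-encoded content by default.
  await queue.sendMessage(
    Buffer.from(JSON.stringify({ job: "nightly-report" })).toString("base64"),
    { visibilityTimeout: 3600 } // run in one hour
  );
}
```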
In order to get alerts on failure you have two options:
A timer trigger that scans the execution logs (via SFTP) for failures.
Wrap the whole function in a try/catch block, and in the catch block write a few lines to send yourself an email with the error details.
Hope this helps.
No code:
Go to your Azure portal.
From the menu, select Monitor.
Then select Add New Rule.
Then select your condition and action, and add the alert details.
