I am using the Simple Service Bus from Codeplex and have a handler that provides me with a message and an IMessageContext.
public void Handle(MyEnquiryMessage message, IMessageContext context)
I store both these in a list and let the handler complete. At some point in the future I do some processing and try to send a reply by taking the context that I stored and calling:
context.Endpoint.MessageBus.Reply(myResponse)
Unfortunately this throws an exception “Object reference not set to an instance of an object”. Is this asynchronous way of replying possible or can “reply” only be used within the handler?
I don't know Simple Service Bus but I would guess that your context is only valid in the handler. If you want to send back a response you need to gather all the data you need from the context and simply do a 'send' at that later stage.
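The suggestion above, copying what you need out of the context before the handler returns, could look roughly like this. This is a TypeScript sketch with invented property names and a mocked bus; the real Simple Service Bus API is C# and its types will differ:

```typescript
// Hypothetical shapes; the real Simple Service Bus types differ.
interface MessageContext {
  returnAddress: string;
  correlationId: string;
}

interface Bus {
  send(destination: string, body: unknown, correlationId: string): void;
}

// What we persist instead of the context object itself.
interface PendingReply {
  returnAddress: string;
  correlationId: string;
  enquiry: string;
}

const pending: PendingReply[] = [];

// Inside the handler: copy the routing data out of the context,
// because the context is only valid while Handle() is running.
function handle(enquiry: string, context: MessageContext): void {
  pending.push({
    returnAddress: context.returnAddress,
    correlationId: context.correlationId,
    enquiry,
  });
}

// Later, on our own schedule: a plain send using the saved data,
// instead of context.Endpoint.MessageBus.Reply.
function flushReplies(bus: Bus): number {
  let sent = 0;
  for (const p of pending.splice(0)) {
    bus.send(p.returnAddress, { answer: `re: ${p.enquiry}` }, p.correlationId);
    sent++;
  }
  return sent;
}
```

The key point is that only plain data (return address, correlation id) survives past the handler, not the live context object.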
Even so, it sounds a bit strange to perform the processing 'later' when it could probably be handled by another endpoint that processes a relevant message type. Without more information it is difficult to tell, but your design may not be optimal.
I'm working on standing up the Azure Service Bus messaging infrastructure for my team, and I'm trying to establish best practices for developing Service Bus message receivers. We are standing up a new service to consume the Service Bus messages; the start up script will instantiate the message receivers and start their message reception.
The pattern I'm setting up for my team is to extend a base receiver class and implement an abstract function that starts the message receiver in streaming fashion.
I'm curious if there are any notable differences between receiving messages using ServiceBusReceiver::subscribe vs ServiceBusReceiver::receiveMessages (stream vs loop)? I'm suggesting that my team uses ServiceBusReceiver::subscribe since it registers the reception forever and it seems to handle errors more gracefully.
I've noticed two differences between the stream vs loop:
ServiceBusReceiver::receiveMessages is asynchronous. This means that in my script I would need to run Promise.all or Promise.allSettled to start the receivers in parallel. Because of the loop reception's limited error handling, I noticed that if a receiver hits an error, it halts message processing. That would require our team to restart the service whenever any receiver hits an error, which is a con for us.
The streaming method is synchronous, so my start-up script can register the subscriptions, save the return values, and close the subscriptions on shutdown.
If I refer to this object's properties in the ServiceBusReceiver::subscribe callback functions, I get an error that the property is undefined. It seems like the callback functions lose context of the object?
Thanks in advance
Both ways of receiving work just fine with the Service Bus JS SDK, but streaming is definitely the intended way of receiving messages for the messaging services.
receiveMessages (loop) is more for the convenience of users who just want to receive messages simply and don't want to deal with callbacks, handlers, etc.
Internally, receiveMessages also does streaming to receive the messages and waits for the given duration before returning the array of messages.
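The relationship between the two styles can be shown with a toy in-memory receiver. All names here are invented for illustration; the real @azure/service-bus receiver is asynchronous and network-backed:

```typescript
type Handler<T> = (msg: T) => void;

// Toy receiver: messages are buffered until either polled (loop style)
// or a subscription handler is registered (stream style).
class ToyReceiver<T> {
  private buffer: T[] = [];
  private onMessage: Handler<T> | null = null;

  // Simulate a message arriving from the broker.
  push(msg: T): void {
    if (this.onMessage) {
      this.onMessage(msg); // stream style: delivered immediately
    } else {
      this.buffer.push(msg); // held until someone polls
    }
  }

  // Stream style: register once, receive forever.
  subscribe(handler: Handler<T>): void {
    this.onMessage = handler;
    for (const msg of this.buffer.splice(0)) handler(msg);
  }

  // Loop style: take up to maxCount buffered messages and return.
  receiveMessages(maxCount: number): T[] {
    return this.buffer.splice(0, maxCount);
  }
}
```

This mirrors the answer's point: the loop style is just a bounded drain over the same underlying delivery mechanism that the stream style taps continuously.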
Hope that might clarify your doubts.
If I refer to this object's properties in the ServiceBusReceiver::subscribe callback functions, I get an error that the property is undefined. It seems like the callback functions lose context of the object?
You can perhaps use arrow functions. For reference, please check this part of an unrelated subscribe test...
https://github.com/Azure/azure-sdk-for-js/blob/d417e93b53450b2660c34965ffa177f3d4d2f947/sdk/servicebus/perf-tests/service-bus/test/subscribe.spec.ts#L72
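The `undefined` property problem is the usual JavaScript `this` binding issue: passing a plain method as a bare callback detaches it from its object, while an arrow-function property (or `.bind(this)`) captures the instance lexically. A minimal TypeScript illustration, with a stand-in for `subscribe` invoking the callback:

```typescript
class Receiver {
  queueName = "orders";
  received: string[] = [];

  // Plain method: `this` is whatever the caller binds, which is
  // undefined when the method is passed around as a bare callback.
  processDetached(msg: string): void {
    this.received.push(`${this.queueName}: ${msg}`);
  }

  // Arrow-function property: `this` is captured from the instance,
  // so it survives being handed to subscribe() as a callback.
  processBound = (msg: string): void => {
    this.received.push(`${this.queueName}: ${msg}`);
  };
}

// Stand-in for ServiceBusReceiver::subscribe invoking our callback.
function deliver(callback: (msg: string) => void, msg: string): void {
  callback(msg);
}
```

The same fix applies whether the callback is `processMessage`, `processError`, or anything else you pass into `subscribe`.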
I'm a bit confused regarding the EventHubTrigger for Azure functions.
I've got an IoT Hub, and am using its eventhub-compatible endpoint to trigger an Azure function that is going to process and store the received data.
However, if my function fails (= throws an exception), that message (or messages) being processed during that function call will get lost. I actually would expect the Azure function runtime to process the messages at a later time again. Specifically, I would expect this behavior because the EventHubTrigger is keeping checkpoints in the Function Apps storage account in order to keep track of where in the event stream it has to continue.
The documentation of the EventHubTrigger even states that
If all function executions succeed without errors, checkpoints are added to the associated storage account
But still, even when I deliberately throw exceptions in my function, the checkpoints will get updated and the messages will not get received again.
Is my understanding of the EventHubTrigger's documentation wrong, or is the EventHubTrigger's implementation (or its documentation) wrong?
This piece of documentation does seem confusing. I guess they mean errors of the Function App host itself, not of your code. An exception inside a function execution doesn't stop the processing and checkpointing progress.
The fact is that Event Hubs are not designed for individual message retries. The processor works in batches, and it can either mark the whole batch as processed (i.e. create a checkpoint after it), or retry the whole batch (e.g. if the process crashed).
See this forum question and answer.
If you still need to re-process failed events from Event Hub (and errors don't happen too often), you could implement such mechanism yourself. E.g.
Add an output Queue binding to your Azure Function.
Add try-catch around processing code.
If exception is thrown, add the problematic event to the Queue.
Have another Function with Queue trigger to process those events.
Note that the downside of this is that you will lose the ordering guarantee provided by Event Hubs (since the Queue message will be processed later than its neighbors).
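The four steps above can be sketched roughly as follows. This is a TypeScript sketch with an in-memory stand-in for the output Queue binding; in a real Function App the binding would be declared in function.json or via attributes:

```typescript
interface Event {
  id: string;
  body: string;
}

// Stand-in for the output Queue binding.
const poisonQueue: Event[] = [];

// Steps 2 + 3: wrap processing in try/catch and divert failures to
// the queue instead of failing (and re-running) the whole batch.
function handleBatch(events: Event[], process: (e: Event) => void): void {
  for (const e of events) {
    try {
      process(e);
    } catch {
      poisonQueue.push(e); // step 3: park the problematic event
    }
  }
  // Returning normally lets the trigger checkpoint past the batch.
}

// Step 4: a separate queue-triggered function retries parked events.
function drainPoisonQueue(retry: (e: Event) => void): number {
  let retried = 0;
  for (const e of poisonQueue.splice(0)) {
    retry(e);
    retried++;
  }
  return retried;
}
```

Because the batch function always returns successfully, checkpointing proceeds, and only the genuinely problematic events take the slower queue-based retry path.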
Quick fix: a retry policy won't help if the downstream system is down for a few hours. You can call Process.GetCurrentProcess().Kill(); in your exception handling; this stops the checkpoint from moving forward. I have tested this with a consumption-based Function App. You will not see anything in the logs, so I added an email notification that something went wrong and that, to avoid data loss, I killed the function instance.
Hope this helps.
I will write a blog post about this and about the other part of the workflow, where I stop the function via a Logic App in case of continuous failures on the downstream system.
Per Azure Functions Service Bus bindings:
Trigger behavior
...
PeekLock behavior - The Functions runtime receives a message in PeekLock mode and calls Complete on the message if the function finishes successfully, or calls Abandon if the function fails. If the function runs longer than the PeekLock timeout, the lock is automatically renewed.
I am assuming that when the Azure Function calls Complete on the message, it will be removed from the queue.
What should I do in my function if I want my function to spy on the message but never delete it?
Unsuccessful processing of a message resulting in the function throwing an exception, or an explicit abandon operation on the message, will not complete the message.
Saying that, I see a problem with this approach. You're not truly "spying" on the messages, but actively processing those. Which means a given message will be re-delivered and eventually end up in the dead letter queue. If you want to spy, you should peek at the messages, but Azure Service Bus trigger doesn't do that.
If you need a wiretap implementation, it's probably not a bad idea to use a topic with two subscriptions: one to consume the messages, and another to duplicate all the messages for your wiretap function (which perhaps does some sort of analysis or logging). Without understanding the full scope of what you're doing, it is hard to provide an answer.
I currently am using Spring Integration to get messages off of a queue and send them to a service using a service activator. My issue is that the service I am calling requires a security context to be in place for the current thread. This can be setup by calling a no-argument method, handleAuthentication(), of another bean. I am wondering what the best way is to call this whenever a new message is received, prior to calling the service activator service? I was originally thinking I would chain together two service activators, with the first one calling handleAuthentication(), but this seems incorrect as handleAuthentication() does not require any information from the actual message.
Yes, your assumption about the security handling is correct. It is really just a side-effect aspect which should not be tied with the business logic.
Therefore we should use something which allows the same behavior to be applied uniformly across the program; in programming this is known as an Aspect.
For this purpose Spring Integration suggests a hook like MessageChannelInterceptor, where you can call your handleAuthentication() from the preReceive() callback, per your explanation.
Another trick can be achieved with the <request-handler-advice-chain> and MethodInterceptor implementation which should populate the SecurityContext into the current thread just before target service invocation.
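Either variant boils down to the same shape: a side-effect hook that runs before the business handler on the same thread, without looking at the message. A language-neutral sketch of that idea (TypeScript; the real code would be one of the Java interceptor/advice classes named above):

```typescript
type Handler<M> = (message: M) => void;

// Stand-in for the thread-bound SecurityContext.
let securityContext: string | null = null;

// Stand-in for the no-argument handleAuthentication() bean method.
function handleAuthentication(): void {
  securityContext = "authenticated";
}

// The advice/interceptor idea: wrap the service activator so the
// security side effect runs first, without touching the message.
function withAuthentication<M>(handler: Handler<M>): Handler<M> {
  return (message) => {
    handleAuthentication(); // pre-invoke hook, ignores the message
    try {
      handler(message);
    } finally {
      securityContext = null; // clear the "thread-local" afterwards
    }
  };
}
```

The business handler stays oblivious to security, which is exactly the separation the answer argues for.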
We are using Azure service bus via NServiceBus and I am facing a problem with deciding the correct architecture for dealing with long running tasks as a result of messages.
As is good practice, we don't want to block the message handler from returning by making it wait for long running processes (downloading a large file from a remote server), and actually doing so will cause the lock on the message to be lost with Azure SB. The plan is to respond by spawning a separate task and allow the message handler to return immediately.
However this means that the handler is now immediately available for the next message which will cause another task to be spawned and so on until the message queue is empty. What I'd like is some way to stop taking messages while we are processing (a limited number of) earlier messages. Is there an accepted pattern for this with NServiceBus and Azure Service Bus?
The following is what I'd kind of do if I was programming directly against the Azure SB
{
    while (true)
    {
        var message = bus.Next();
        message.Complete();
        // Do long running stuff here
    }
}
The verbs Next and Complete are probably wrong, but what happens under Azure is that Next takes a temporary lock on the message so that other consumers can no longer see it. Then you can decide if you really want to process the message and, if so, call Complete, which removes the message from the queue entirely; failing to do so causes the message to reappear on the queue after a period of time, as Azure assumes you crashed. As dirty as this code looks, it would achieve my goals (so why not do it?), as my consumer will only consume the next message when I'm available again (after the long-running task). Other consumers (other instances) can jump in if necessary.
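The hand-rolled loop above amounts to bounding your own concurrency: only fetch the next message once a processing slot is free. A toy sketch of that idea (TypeScript, synchronous for clarity; names are invented, and a real bus would enforce this via its own concurrency/prefetch settings):

```typescript
// Toy bounded consumer: takes the next message only when a slot is
// free, leaving other messages on the queue for other instances.
class BoundedConsumer<T> {
  private inFlight: T[] = [];
  public done: T[] = [];

  constructor(
    private readonly maxInFlight: number,
    private readonly queue: T[],
  ) {}

  // Take the next message only if a slot is free; otherwise leave it
  // on the queue (another consumer instance could pick it up).
  tryTakeNext(): T | undefined {
    if (this.inFlight.length >= this.maxInFlight) return undefined;
    const msg = this.queue.shift();
    if (msg !== undefined) this.inFlight.push(msg);
    return msg;
  }

  // The long-running work for msg has finished: free the slot.
  finish(msg: T): void {
    this.inFlight = this.inFlight.filter((m) => m !== msg);
    this.done.push(msg);
  }
}
```

With `maxInFlight` set to the number of downloads you are willing to run at once, the consumer naturally stops taking messages while its slots are busy, which is the behaviour the question is after.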
The problem is that NServiceBus adds a level of abstraction so that now handling a message is via a method on a handler class.
void Handle(NewFileMessage message)
{
    // Do work here
}
The problem is that Azure does not get the call to message.Complete() until after your work is done and the Handle method exits. This is why you need to keep the work short. However, exiting also signals that you are ready to handle another message. This is my Catch-22.
Downloading on a background thread is a good idea. You don't want to increase the lock duration, because that's a symptom, not the problem. Your download can easily take longer than the maximum lock duration (5 minutes) and then you're back to square one.
What you can do is have an orchestrating saga for the download. The saga can monitor the download process, and when the download is completed, the background process signals completion to the saga. If the download never finishes, you can have a timeout (or multiple timeouts) to indicate that, and a compensating action or retry, whatever works for your business case.
Documentation on Sagas should get you going: http://docs.particular.net/nservicebus/sagas/
In Azure Service Bus you can increase the lock duration of a message (default set to 30 seconds) in case the handling will take a long time.
But besides being able to increase the lock duration, needing to do so is generally an indication that your handler takes care of too much work, which could be divided over different handlers.
If it is critical that the file is downloaded, I would keep the download operation in the handler. That way, if the download fails, the message can be handled again and the download retried. If, however, you want to free up the handler instantly to handle more messages, I would suggest that you scale out the workers that perform the download task so that the system can cope with the demand.