Poller with backoff policy for temporarily unavailable pollable message source - spring-integration

I'm trying to implement a poller with a DynamicPeriodicTrigger that backs off (increases the duration between polls) when the pollable message source (e.g. an FTP server) becomes unavailable, a bit like what SimpleActiveIdleMessageSourceAdvice already does, except that the advice would need to catch the exception thrown during the poll. Unfortunately, the invoke method of AbstractMessageSourceAdvice is final, so I can't override it.
I also tried a different approach: catching the poll exception by having the poller forward it to an error-channel, where I increase the trigger's duration (that part works fine). The problem then is how to reset the trigger the next time a poll succeeds (i.e. the message source is available again). I can't simply reset the trigger in a downstream handler method, because the message source may have recovered while there is still no message available, in which case my downstream handler is never called and the trigger duration is never reset.
Thank you very much in advance for your expertise and your time.

Best regards

You don't have to override AbstractMessageSourceAdvice; as you can see, its invoke method is pretty trivial. Just copy it and add functionality as needed (just be sure to implement MessageSourceMutator so it's detected as a receive-only advice).
Maybe it's as simple as moving the invocation.proceed() to a protected non-final method.
If you come up with something that you think will be generally useful to the community, consider contributing it back to the framework.
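For illustration, here's a minimal sketch of that idea. It assumes Spring Integration 5.3+ (where MessageSourceMutator exists and the inherited invoke() is a default method you can copy and extend) and the Duration-based getDuration()/setDuration() accessors on DynamicPeriodicTrigger; the class name and the doubling policy are just examples, not framework code:

import java.time.Duration;

import org.aopalliance.intercept.MethodInvocation;
import org.springframework.integration.aop.MessageSourceMutator;
import org.springframework.integration.core.MessageSource;
import org.springframework.integration.util.DynamicPeriodicTrigger;
import org.springframework.messaging.Message;

// Sketch only: back off the poll interval while the source is unreachable,
// restore the original interval as soon as a poll completes without an
// exception (even a poll that returns no message means the source is back).
public class BackOffOnFailureAdvice implements MessageSourceMutator {

    private final DynamicPeriodicTrigger trigger;
    private final Duration initialPeriod;
    private final Duration maxPeriod;

    public BackOffOnFailureAdvice(DynamicPeriodicTrigger trigger, Duration maxPeriod) {
        this.trigger = trigger;
        this.initialPeriod = trigger.getDuration();
        this.maxPeriod = maxPeriod;
    }

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        try {
            Object result = invocation.proceed(); // the actual receive()
            this.trigger.setDuration(this.initialPeriod); // source reachable: reset
            return result;
        }
        catch (Exception ex) {
            Duration doubled = this.trigger.getDuration().multipliedBy(2);
            this.trigger.setDuration(doubled.compareTo(this.maxPeriod) > 0
                    ? this.maxPeriod : doubled); // poll failed: back off
            throw ex;
        }
    }

    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        return result; // nothing to mutate; the backoff logic lives in invoke()
    }
}

Register the same DynamicPeriodicTrigger instance with both the poller and this advice. Because a successful proceed() resets the duration even when it returns null, this also covers the "source recovered but no message available yet" case from the question.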

Related

Spring Integration TransactionSynchronizationFactory deleting file before Flow ends

Our TransactionSynchronizationFactory is deleting the source file even before the flow ends, and this causes a failure in the flow. After reading the file, we split(), make a WebClient gateway call, resequence(), and then aggregate(). Just after the aggregation, the TransactionSynchronizationFactory performs the commit. Why does it behave this way?
syncProcessor.setAfterCommitExpression(parser.parseExpression("payload.delete()"));
syncProcessor.setAfterRollbackExpression(parser.parseExpression("payload.delete()"));
return Pollers.fixedDelay(Duration.ofMinutes(pollInterval))
        .maxMessagesPerPoll(maxMessagesPerPoll)
        .transactionSynchronizationFactory(transactionSynchronizationFactory)
        .transactional(pseudoTransactionManager)
        .advice(loggingAdvice);
The transaction synchronization is tied to the thread that started the transaction. Whenever you leave that thread (when it unblocks, so to speak), the end of the transaction is triggered. Be sure that you don't shift a message to some other thread after that aggregate(), e.g. via an ExecutorChannel or QueueChannel.
In addition, I would look into some other solution where you are not tied to the transaction and threading model. Just store the file in the headers and, whenever you are done, call its delete()! There is no reason to deal with transactions for simple files.
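A minimal sketch of that suggestion, assuming the Java DSL and that FileReadingMessageSource populates the FileHeaders.ORIGINAL_FILE header (the directory and the elided middle of the flow are placeholders):

import java.io.File;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.file.FileHeaders;
import org.springframework.integration.file.FileReadingMessageSource;

@Configuration
public class FileCleanupConfig {

    @Bean
    public IntegrationFlow fileFlow() {
        FileReadingMessageSource source = new FileReadingMessageSource();
        source.setDirectory(new File("/tmp/in")); // hypothetical input directory

        return IntegrationFlows
                .from(source, e -> e.poller(p -> p.fixedDelay(5000)))
                // ... split(), WebClient gateway call, resequence(), aggregate() ...
                .handle((payload, headers) -> {
                    // the message source put the original File into this header;
                    // delete it here, at the very end of the flow, no transaction needed
                    File original = (File) headers.get(FileHeaders.ORIGINAL_FILE);
                    if (original != null) {
                        original.delete();
                    }
                    return null; // terminate the flow
                })
                .get();
    }
}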

Azure Durable Functions

OK, completely new to this. Essentially we want to be able to time out a session after 10 minutes; that's pretty easy.
We also want to wait for external user input -- essentially data from a multi-step form. Also pretty easy.
We want to be able to Task.WaitAny(WaitForExternalEvent("updatedata"), timeout).
But this is causing issues in the orchestration.
Individually these concepts work; however, we see Task.WaitAny unblock and consume the first "updatedata" event, and other "updatedata" events never reach the orchestration.
Is this expected behavior, are we mixing concepts in an invalid way, or is this a bug?
We might need to see some more of your code, but from what you've described here, I think the behavior you're seeing is what should be expected.
Your orchestration is "waiting" on the timeout or the external event. Once that external event is triggered, the orchestration moves forward, and even if something triggers that event again, the orchestration is no longer expecting/waiting on it.
Again, this is based on the sliver of code you've included in your question thus far. If you need to handle the event being broadcast into the orchestration multiple times, you would need a loop of some kind, as in the sketch below.
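Since this is C#/Durable Functions territory, the following plain-Java snippet only illustrates the control flow of such a loop -- the BlockingQueue stands in for WaitForExternalEvent("updatedata"), the deadline stands in for the durable timer, and none of the names are the actual Durable Functions API:

import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class EventLoopSketch {

    public static void main(String[] args) throws InterruptedException {
        // stands in for the stream of "updatedata" external events
        BlockingQueue<String> updateData = new LinkedBlockingQueue<>();
        updateData.add("step-1");
        updateData.add("step-2");

        // 10 minutes in the real scenario; shortened so the sketch terminates quickly
        Instant deadline = Instant.now().plus(Duration.ofSeconds(3));

        while (true) {
            long remainingMs = Duration.between(Instant.now(), deadline).toMillis();
            // race the next event against the remaining session time (the WaitAny)
            String update = updateData.poll(Math.max(remainingMs, 0), TimeUnit.MILLISECONDS);
            if (update == null) {
                System.out.println("session timed out");
                break; // the timer won the race
            }
            System.out.println("applied " + update);
            // loop back and re-arm the wait, so later events are not lost
        }
    }
}

The key point is the loop: every iteration re-arms the wait, so each subsequent "updatedata" event gets its own chance to be observed before the overall deadline.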

ServiceStack: How to make InMemoryTransientMessageService run in the background

What needs to be done to make InMemoryTransientMessageService run in a background thread? I publish things inside a service using
base.MessageProducer.Publish(new RequestDto());
and they are executed immediately inside the service request.
The project is self-hosted.
Here is a quick unit test showing the blocking of the current request instead of deferring it to the background:
https://gist.github.com/lmcnearney/5407097
There is nothing out of the box. You would have to build your own. Take a look at ServiceStack.Redis.Messaging.RedisMqHost -- most of what you need is there, and it is probably simpler to start from (one thread does everything) compared to ServiceStack.Redis.Messaging.RedisMqServer (one thread for queue listening, one for each worker). I suggest you take that class and adapt it to your needs.
A few pointers:
ServiceStack.Message.InMemoryMessageQueueClient does not implement WaitForNotifyOnAny(), so you will need an alternative way of getting the background thread to wait for incoming messages.
Closely related, the ServiceStack.Redis implementation uses topic subscriptions, which in this class are used to transfer the WorkerStatus.StopCommand, so you have to find an alternative way of getting the background thread to stop.
Finally, you may want to adapt ServiceStack.Redis.Messaging.RedisMessageProducer, as its Publish() method pushes the requested message to the queue and pushes the channel/queue name to the TopicIn queue. After reading the code you can see how the three points tie together; the sketch below shows the overall shape.
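ServiceStack is C#, so this plain-Java sketch only shows the overall shape those three pointers describe: a blocking queue gives the background thread something to wait on (in place of WaitForNotifyOnAny), and a poison-pill message replaces the Redis topic that carries WorkerStatus.StopCommand (all names are hypothetical):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class InMemoryMqHostSketch {

    private static final Object STOP = new Object(); // poison pill = stop command

    private final BlockingQueue<Object> queue = new LinkedBlockingQueue<>();
    private final Thread worker = new Thread(this::runLoop, "mq-worker");

    public void start() {
        worker.start();
    }

    public void publish(Object requestDto) {
        queue.add(requestDto); // returns immediately; nothing runs in the caller's request
    }

    public void stop() throws InterruptedException {
        queue.add(STOP);
        worker.join();
    }

    private void runLoop() {
        try {
            while (true) {
                Object msg = queue.take(); // blocks until a message arrives
                if (msg == STOP) {
                    return; // stop command received
                }
                handle(msg); // execute the service in the background
            }
        }
        catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void handle(Object dto) {
        System.out.println("processing " + dto);
    }
}

publish() just enqueues and returns, and the handler runs on the worker thread instead of blocking the service request.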
Hope this helps...

CQRS - When to send confirmation message?

Example: business rules state that the customer should get a confirmation message (email or similar) when an order has been placed.
Let's say that a NewOrderRegisteredEvent is dispatched from the domain and is picked up by an event listener that sends off the confirmation message. When that is done, some other event handler throws an exception or something else goes wrong, and the unit of work is rolled back. We've now sent the user a confirmation message for something that was rolled back.
What is the "cqrs" way of solving problems like this where you want to do something after a unit of work has been committed? Another complicating factor is replaying of events. I don't want old confirmation messages to be re-sent whenever I replay recorded events in order to build a new view / projection.
My best theory so far: I've just started to look into the fascinating world of CQRS and was wondering whether this is something that would be implemented as a saga? If a saga is like a state machine where each transition can only take place a single time, then I guess that would solve this problem? I just have a hard time visualizing how this will fit together with the command bus and domain events.
An Event should only occur after the transaction has been completed. If anything goes wrong and there's a rollback, then the event didn't occur from an external point of view. Therefore it shouldn't be published at all. Though an OrderRegistrationFailed event could be published if necessary.
You wouldn't want the mail to be sent unless the command has successfully been executed.
First, a few reasons why the command handler -- as proposed in another answer -- would be the wrong place: under some circumstances the command handler wouldn't be able to tell whether the command will eventually succeed or not. Having the command handler invoke the mail sending would also put process knowledge inside the command handler, which would break the SRP and couple business rules too tightly to the application layer.
The mail should be sent after the fact, i.e. from an event handler.
To prevent this handler from firing during replay, you can simply not register it. This works similarly to how you test your application: you only register the handlers that you actually need (see the sketch after this list).
Production system -> register all event handlers
Tests -> register only the tested event handlers
Replay -> register only the projection/denormalization handlers
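A minimal sketch of that registration scheme (all names hypothetical):

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class HandlerWiring {

    enum Mode { PRODUCTION, TEST, REPLAY }

    // a deliberately tiny event bus, just enough to show the idea
    static class EventBus {
        private final List<Consumer<Object>> handlers = new ArrayList<>();
        void register(Consumer<Object> handler) { handlers.add(handler); }
        void publish(Object event) { handlers.forEach(h -> h.accept(event)); }
    }

    static EventBus configure(Mode mode) {
        EventBus bus = new EventBus();
        // projection/denormalization handlers are safe to replay: always register
        bus.register(event -> {/* update the read model */});
        if (mode == Mode.PRODUCTION) {
            // side-effecting handlers (mail!) are registered in production only
            bus.register(event -> {/* send the confirmation mail */});
        }
        return bus;
    }
}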
Another - even more loosely coupled, though a bit more complex - possibility would be to have a Saga handle the NewOrderRegisteredEvent and issue a SendMail command to the appropriate bounded context (thanks, Yves Reynhout, for pointing this out in the question's comments).
There are two likely solutions:
1) The publishing of the event and the handling of the event (i.e. the email) are part of a single transaction. In this case, your transaction framework takes care of it for you. If the email fails, then the event is rolled back. You'll likely retry the command. This is conceptually clean and easy to think about. No event is finished publishing until everyone that has something to say about it has had their say. However practically speaking, this can be painful, as it typically involves distributed transactions. These are hard to come by. Can your email client enroll in the same transaction as the database which is holding your events?
2) The publishing of the event is transactional, but the event handlers each deal with transactions in their own way. The event handler which sends emails could keep track of which events it had seen. If it crashed, it would request old events and process them. You could make a business decision as to how big a deal it would be if people had missing or duplicate emails. (For money-related transactions, the answer is probably you shouldn't allow it.)
Solution (2) is typically what you see promoted in DDD/CQRS circles, as it's the more loosely coupled solution. Solution (1) is quite practical in a small system where the event store and the projections are in a single database and the projections don't change often. Solution (2) allows a diversity of event handlers to work in their own way. Solution (1) can cause lots of non-overlapping concerns to become entangled. In this case your order business rules don't complete until the many bizarre things that happen in emailing are taken care of. For one thing, it may slow you down quite a bit.
If the sending of the email were more interesting than "saw the event, sent the email", then you're right, you might have a saga or workflow on your hands. Email in large operations is often a complex system in its own right which you're unlikely to have to implement much of. You just need to be sure you put your email into a request queue of some sort (using approach (2)), and the email system is likely to do retries/batching/spam avoidance/working overnight/etc.
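As a small illustration of solution (2), here's a sketch of a mail handler that remembers which events it has already processed; in a real system the processed-id set would live in durable storage rather than memory, and all names are hypothetical:

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ConfirmationMailHandler {

    // would be durable in real life; in-memory only to keep the sketch short
    private final Set<String> processedEventIds = ConcurrentHashMap.newKeySet();

    public void on(NewOrderRegisteredEvent event) {
        // add() returns false if the id was already present: a redelivery or replay
        if (!processedEventIds.add(event.eventId())) {
            return; // skip the side effect instead of sending a duplicate mail
        }
        sendConfirmationMail(event.customerEmail(), event.orderId());
    }

    private void sendConfirmationMail(String to, String orderId) {
        // hand off to the mail infrastructure (request queue, SMTP gateway, ...)
        System.out.println("confirmation mail to " + to + " for order " + orderId);
    }

    public record NewOrderRegisteredEvent(String eventId, String customerEmail, String orderId) {}
}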

Patterns for idempotent operations on Azure?

Does anybody know patterns for designing idempotent operations on Azure, especially for Table Storage? The most common approach is to generate an operation id and cache it to detect repeated executions, but if I have dozens of workers processing operations, this approach becomes more complicated. :-))
Thanks
Ok, so you haven't provided an example, as requested by knightpfhor and codingoutloud. That said, here's one very common way to deal with idempotent operations: Push your needed actions to a Windows Azure queue. Then, regardless of the number of worker role instances you have, only one instance may work on a specific queue item at a time. When a queue message is read from the queue, it becomes invisible for the amount of time you specify.
Now: a few things can happen during processing of that message:
You complete processing after your timeout period. When you go to delete the message, you get an exception.
You realize you're running out of time, so you increase the queue message timeout (today, you must call the REST API to do this; one day it'll be included in the SDK).
Something goes wrong, causing an exception in your code before you ever get to delete the message. Eventually, the message becomes visible in the queue again (after the specified invisibility timeout period).
You complete processing before the timeout and successfully delete the message.
That deals with concurrency. For idempotency, that's up to you to ensure you can repeat an operation without side-effects. For example, you calculate someone's weekly pay, queue up a print job, and store the weekly pay in a Table row. For some reason, a failure occurs and you either don't ever delete the message or your code aborts before getting an opportunity to delete the message.
Fast-forward in time, and another worker instance (or maybe even the same one) re-reads this message. At this point, you should theoretically be able to simply re-perform the needed actions. If this isn't really possible in your case, you don't have an idempotent operation. However, there are a few mechanisms at your disposal to help you work around this:
Each queue message has a DequeueCount. You can use this to determine if the queue message has been processed before and, if so, take appropriate action (maybe examine the Table row for that employee, for example).
Maybe there are stages of your processing pipeline that can't be repeated. In that case: you now have the ability to modify the queue message contents while the queue message is still invisible to others and being processed by you. So imagine appending something like |SalaryServiceCalled, then a bit later appending |PrintJobQueued, and so on. Now, if you have a failure in your pipeline, you can figure out where you left off the next time you read your message (see the sketch below).
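Here's a sketch of both mechanisms using the Azure Storage Queue SDK for Java (method names are from azure-storage-queue 12.x, to the best of my knowledge); the processing stages and the breadcrumb format are invented for the example:

import java.time.Duration;

import com.azure.storage.queue.QueueClient;
import com.azure.storage.queue.QueueClientBuilder;
import com.azure.storage.queue.models.QueueMessageItem;
import com.azure.storage.queue.models.UpdateMessageResult;

public class PayrollWorker {

    public static void main(String[] args) {
        QueueClient queue = new QueueClientBuilder()
                .connectionString(System.getenv("AZURE_STORAGE_CONNECTION_STRING"))
                .queueName("payroll") // hypothetical queue name
                .buildClient();

        QueueMessageItem msg = queue.receiveMessage(); // now invisible to other workers
        if (msg == null) {
            return; // nothing queued right now
        }

        String body = msg.getBody().toString();
        String popReceipt = msg.getPopReceipt();

        // mechanism 1: DequeueCount > 1 means some worker saw this message before
        if (msg.getDequeueCount() > 1) {
            System.out.println("redelivery; stages already done: " + body);
        }

        if (!body.contains("|SalaryServiceCalled")) {
            // ... calculate the weekly pay ...
            body += "|SalaryServiceCalled"; // mechanism 2: breadcrumb in the message body
            UpdateMessageResult updated = queue.updateMessage(
                    msg.getMessageId(), popReceipt, body, Duration.ofSeconds(30));
            popReceipt = updated.getPopReceipt(); // each update invalidates the old receipt
        }

        if (!body.contains("|PrintJobQueued")) {
            // ... queue the print job ...
            body += "|PrintJobQueued";
            UpdateMessageResult updated = queue.updateMessage(
                    msg.getMessageId(), popReceipt, body, Duration.ofSeconds(30));
            popReceipt = updated.getPopReceipt();
        }

        queue.deleteMessage(msg.getMessageId(), popReceipt); // done: remove from the queue
    }
}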
Hope that helps. Kinda shooting in the dark here, not knowing more about what you're trying to achieve.
EDIT: I guess I should mention that I don't see the connection between idempotency and Table Storage. I think that's more of a concurrency issue, as idempotency would need to be dealt with whether using Table Storage, SQL Azure, or any other storage container.
I believe you can use a replay-log storage approach to solve this problem.
