Delayer transaction management - spring-integration

I want to make sure that a Delayer tied to a persistent MessageStore rolls back to the DB if an exception is thrown downstream of the Delayer after the delay has elapsed.
Will the transactional sub-element take care of this, or do I need a txAdvice?
<int:delayer id="abcDelayer"
default-delay="1000"
message-store="JDBCMessageStore">
<int:transactional/>
</int:delayer>

Quoting Reference Manual:
The <delayer> can be enriched with mutually exclusive sub-elements <transactional> or <advice-chain>. The List of these AOP Advices is applied to the proxied internal DelayHandler.ReleaseMessageHandler, which has the responsibility to release the Message, after the delay, on a Thread of the scheduled task. It might be used, for example, when the downstream message flow throws an Exception and the ReleaseMessageHandler's transaction will be rolled back. In this case the delayed Message will remain in the persistent MessageStore.

Related

Spring Integration TransactionSynchronizationFactory deleting file before Flow ends

Our TransactionSynchronizationFactory is deleting the source file even before the flow ends, and this is causing a failure in the flow. After reading the file, we split(), make a WebClient Gateway call, resequence() and then aggregate(). Just after the aggregation the TransactionSynchronizationFactory performs a commit. Can you suggest why this happens?
syncProcessor.setAfterCommitExpression(parser.parseExpression("payload.delete()"));
syncProcessor.setAfterRollbackExpression(parser.parseExpression("payload.delete()"));
return Pollers.fixedDelay(Duration.ofMinutes(pollInterval))
.maxMessagesPerPoll(maxMessagesPerPoll)
.transactionSynchronizationFactory(transactionSynchronizationFactory)
.transactional(pseudoTransactionManager)
.advice(loggingAdvice);
The transaction synchronization is tied to the thread that started the transaction. Whenever you leave that thread (e.g. hand the message off to another one), the transaction is completed. Be sure that you don't shift the message after that aggregate() to some other thread, e.g. via an ExecutorChannel or QueueChannel.
In addition, I would look into some other solution where you are not tied to the transaction and threading model. Just store the file in a header and, whenever you are done, call its delete()! There is no reason to deal with transactions for simple files.
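A minimal sketch of that header-based cleanup (assuming the File is available in a header; FileHeaders.ORIGINAL_FILE is used here, but any custom header you enrich yourself works the same way, and the class/method names below are illustrative):
import java.io.File;

import org.springframework.integration.file.FileHeaders;
import org.springframework.messaging.Message;

public class FileCleanupHandler {

    // Call this from the last service activator of the flow, after the aggregation.
    public void cleanUp(Message<?> message) {
        File original = message.getHeaders().get(FileHeaders.ORIGINAL_FILE, File.class);
        if (original != null && !original.delete()) {
            // log and move on; a failed delete should not fail the whole flow here
            System.err.println("Could not delete " + original);
        }
    }
}
Because the delete happens as an ordinary downstream step, it no longer depends on which thread completes the (pseudo) transaction.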

Poller with backoff policy for temporarily unavailable pollable message source

I'm trying to implement a Poller with a DynamicPeriodicTrigger which would back off (increase the duration between polls) if the pollable message source (e.g. FTP server) becomes unavailable, a bit like what is already done through SimpleActiveIdleMessageSourceAdvice, but the advice would need to be able to catch the exception thrown during the poll. Unfortunately the invoke method of AbstractMessageSourceAdvice is final, so I can't override it.
I also tried a different approach, which is to catch the poll exception by having the poller forward it to an error-channel, where I can increase the duration of the trigger (that part works ok). The problem in that case is how to reset the trigger the next time the poll succeeds (i.e. the message source is available again). I can't just reset the trigger in a downstream handler method because the message source may have recovered, but there could still be no message available (in which case my downstream handler method is never called to reset the duration of the trigger).
Thank you very much in advance for your expertise and your time.
Best Regards
You don't have to override AbstractMessageSourceAdvice; as you can see its invoke method is pretty trivial; just copy it and add functionality as needed (just be sure to implement MessageSourceMutator so it's detected as a receive-only advice).
Maybe it's as simple as moving the invocation.proceed() to a protected non-final method.
If you come up with something that you think will be generally useful to the community, consider contributing it back to the framework.
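Here is a rough sketch of such an advice (not framework code; all period values are made up). It is written as a plain MethodInterceptor for brevity; as noted above you would also implement MessageSourceMutator so the framework treats it as a receive-only advice. It assumes Spring Integration 5.1+, where DynamicPeriodicTrigger exposes getDuration()/setDuration(Duration).
import java.time.Duration;

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
import org.springframework.integration.util.DynamicPeriodicTrigger;

public class BackoffPollerAdvice implements MethodInterceptor {

    private final DynamicPeriodicTrigger trigger;

    private final Duration normalPeriod = Duration.ofSeconds(5);

    private final Duration maxPeriod = Duration.ofMinutes(5);

    public BackoffPollerAdvice(DynamicPeriodicTrigger trigger) {
        this.trigger = trigger;
    }

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        try {
            Object result = invocation.proceed();
            // the source is reachable again: restore the normal polling period
            this.trigger.setDuration(this.normalPeriod);
            return result;
        }
        catch (Exception e) {
            // the poll failed: double the period, capped at a maximum
            Duration doubled = this.trigger.getDuration().multipliedBy(2);
            this.trigger.setDuration(doubled.compareTo(this.maxPeriod) > 0 ? this.maxPeriod : doubled);
            throw e;
        }
    }
}
Restoring the duration in the success branch also addresses the reset concern: it happens on every successful poll, whether or not a message was actually returned.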

How to avoid concurrency on aggregates status using Rebus in a server cluster

I have a web service that uses Rebus as a Service Bus.
Rebus is configured as explained in this post.
The web service is load balanced across a two-server cluster.
These services run in a production environment, and each production machine sends commands to save the produced quantities and/or to update its state.
In the BL I've modelled an Aggregate Root for each machine, and it executes the commands emitted by the real machine. To preserve the correct status, the Aggregate needs to receive the commands in the same sequence as they were emitted and, since there is no concurrency for that machine, that is the same order in which they are saved on the bus.
E.g. machine XX sends the command 'add new piece done' and then the command 'Set stop for maintenance'. Executing these commands in sequence, Aggregate XX should end up in state 'Stop', but with multiple server/worker roles both commands could be executed at the same time against the same version of the Aggregate. This means that, depending on who saves the aggregate first, Aggregate XX can end up in state 'Stop' or 'Producing pieces'... which is not the same thing.
I've introduced a Service Bus to add scale-out as the number of machines grows, and resilience (if a server fails I only get a slowdown in processing commands).
Actually I'm using the name of the aggregate like a "topic" or "destinationAddress" with the IAdvancedApi, so the name of the aggregate is saved as the recipient of the transport. Then I've created a custom Transport class that:
1. does not remove the messages in progress but sets them to the state InProgress;
2. when retrieving messages, selects only those in a recipient that has none InProgress.
I'm wondering: is this the best way to guarantee that the bus executes the commands for an aggregate in the same sequence as they arrived?
The solution would be to have some kind of locking of your aggregate root, which needs to happen at the data store level.
E.g. by using optimistic locking (probably implemented with some kind of revision number or something like that), you would be sure that you would never accidentally overwrite another node's edits.
This would allow for your aggregate to either
a) accept the changes in either order (which is generally preferable – makes your system more tolerant), or
b) reject an invalid change
If the aggregate rejects the change, this could be implemented by throwing an exception. And then, in the Rebus handler that catches this exception, you can e.g. await bus.Defer(TimeSpan.FromSeconds(5), theMessage) which will cause it to be delivered again in five seconds.
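To make the idea concrete, here is a minimal, language-agnostic sketch of optimistic locking (shown in Java to match the rest of this page; all names are illustrative, there is no Rebus-specific code, and an in-memory map stands in for the real data store). The store only accepts a save when the revision the handler loaded is still current; a losing writer gets an exception and can defer/retry the command against the fresh state, exactly as described above.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class OptimisticAggregateStore<T> {

    public static final class Versioned<T> {
        public final T state;
        public final long revision;

        Versioned(T state, long revision) {
            this.state = state;
            this.revision = revision;
        }
    }

    private final Map<String, Versioned<T>> store = new ConcurrentHashMap<>();

    public Versioned<T> load(String aggregateId) {
        return this.store.getOrDefault(aggregateId, new Versioned<>(null, 0L));
    }

    // Succeeds only if nobody saved a newer revision since 'expectedRevision' was loaded.
    public void save(String aggregateId, T newState, long expectedRevision) {
        this.store.compute(aggregateId, (id, current) -> {
            long currentRevision = (current == null) ? 0L : current.revision;
            if (currentRevision != expectedRevision) {
                throw new IllegalStateException("Concurrent update of " + id
                        + ": expected revision " + expectedRevision
                        + " but found " + currentRevision);
            }
            return new Versioned<>(newState, expectedRevision + 1);
        });
    }
}
With a real database the same check is typically a WHERE revision = ? clause on the UPDATE; zero rows updated means another node won the race.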
You should never rely on message order in a service bus / queuing / messaging environment.
When you do find yourself in this position you may need to re-think your design. Firstly, a service bus is most certainly not an event store and attempting to use it like one is going to lead to pain and suffering :) --- not that you are attempting this but I thought I'd throw it in there.
As for your design, in order to manage this kind of state you may want to look at a process manager. If you are not generating those commands then even this will not help.
However, given your scenario it seems as though the calls are sequential but perhaps it is just your example. In any event, as mookid8000 said, you either want to:
discard invalid changes (with the appropriate feedback),
allow any order of messages as long as they are valid,
ignore out-of-sequence messages till later.
Hope that helps...
"exactly the same sequence as they were saved on the bus"
Just... why?
Would you rely on your HTTP server logs to know which command actually reached an aggregate first? No, because it is totally unreliable, just like it is with at-least-once delivery guarantees, and it's also irrelevant.
It is your event store and/or normal persistence state that should be the source of truth when it comes to knowing the sequence of events. The order of commands shouldn't really matter.
Assuming optimistic concurrency, if the aggregate is not allowed to transition from A to C then it should guard this invariant, and when a TransitionToStateC command hits it in the A state it will simply be rejected.
If on the other hand, A->C->B transitions are valid and that is the order received by your aggregate well that is what happened from the domain perspective. It really shouldn't matter which command was published first on the bus, just like it doesn't matter which user executed the command first from the UI.
"In my scenario the calls for a specific aggregate are absolutely
sequential and I must guarantee that are executed in the same order"
Why are you executing them asynchronously and potentially concurrently by publishing on a bus then? What you are basically saying is that calls are sequential and cannot be processed concurrently. That means everything should be synchronous because there is no potential benefit from parallelism.
Why:
executeAsync(command1)
executeAsync(command2)
executeAsync(command3)
When you want:
execute(command1)
execute(command2)
execute(command3)
You should have a single command message and the handler of this message executes multiple commands against the aggregate. Then again, in this case I'd just create a single operation on the aggregate that performs all the transitions.

How does group timeout work?

Aggregator is a passive component and the release logic is only triggered when a new message arrives. How then does the group timeout work?
Is it a scheduled task, similar to the reaper, that constantly monitors the state of the aggregator? Does that also mean it repeatedly evaluates the group-timeout-expression to determine the value of group-timeout, or is it evaluated once at the start? I am assuming, since there are some examples based on the size of the payload, that it must evaluate the group-timeout-expression repeatedly, but if that's the case, how often does that happen? Can that frequency of evaluation be controlled/modified? Along the same lines, if the aggregator is a POJO, is this group-timeout functionality flexible enough to specify the timeout from a POJO method?
Another interesting thing I noticed is that for my group-timeout-expression I was trying a SpEL expression and was passing payload or headers, but those apparently aren't available in the context. It seems the context within this group-timeout-expression points to a SimpleMessageGroup, which doesn't have payload or headers properties. So, how can I access payload or headers within the SpEL group-timeout-expression?
In fact, in my case I want the actual message (the wrapper around the payload) because my method signature expects an actual SI Message to be passed to it, not the payload.
Prior to Spring Integration 4.0, the aggregator was a passive component and you had to configure a MessageGroupStoreReaper to expire groups.
The group-timeout* attributes were added in 4.0; when a new message arrives, a task is scheduled to time out the group. If the next message arrives before the timeout, the task is cancelled, the new message is added to the group and, if the release doesn't happen, a new task is scheduled. The expression is re-evaluated each time (the example in the documentation looks at the group size).
Yes, the root object for expression evaluation is the message group.
Which "payload and headers" to you need? There are likely multiple messages in the group. You can easily access the first message in the group using one.payload or one.headers['foo'] (these expressions use group.getOne() which peeks at the first message in the group).
If you need to access a different message, you would need to write some code to iterate over the messages in the group. We don't currently, but it would be possible to make the newly arrived message available as a variable #newMessage or similar; feel free to open an 'Improvement' JIRA issue for that.
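For the scheduling part, a small hedged example in the style of the reference manual (the channel names are illustrative). Here a ten-second timeout task is scheduled only once the group already holds at least two messages, and the expression is re-evaluated each time a new message arrives:
<int:aggregator input-channel="in" output-channel="out"
        group-timeout-expression="size() ge 2 ? 10000 : -1"
        send-partial-result-on-expiry="true"/>
Inside the same expression you could also use one.payload or one.headers['foo'] to base the timeout on the first message in the group, as described above.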

EJB Singleton - "Transaction is not active" after the thread is finished

I've the following case:
When the asynchronous processing of the thread is finished, an exception is thrown at line 15 with the following message: Transaction is not active.
Notice that I set the transaction timeout because the error occurs only after several minutes of execution of the method "doAnything()". When execution takes one or two minutes, the error does not occur. However, setting the timeout did not help.
Any idea?
Thanks.
This bean is illegal -- you cannot start a new thread. Doing so goes behind the back of the container and you lose your transaction management, security management and more.
See this answer for details on how transaction propagation works under the covers.
See this answer for how you can use @Asynchronous instead of starting your own threads.
Note, even with @Asynchronous you cannot have a transaction that spans multiple threads. There are no TransactionManagers out there that can support it, and therefore the specs do not allow it.
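A rough sketch of the container-managed alternative mentioned above: let the container run the work on its own thread via @Asynchronous, and give that method its own transaction with REQUIRES_NEW, since the caller's transaction can never span into another thread. The bean and method names here are illustrative, not taken from the original question.
import java.util.concurrent.Future;

import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class AsyncWorker {

    @Asynchronous
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public Future<Void> doAnything() {
        // long-running work runs on a container-managed thread,
        // inside its own transaction started for this invocation
        return new AsyncResult<>(null);
    }
}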
