how to add try catch exception for spring integration flow to achieve nested transactions - spring-integration

How do I handle nested transactions in a Spring Integration flow? Basically, I have a process that fetches all the orders from the database and processes them order by order; when an exception is thrown for a single order, all the orders processed so far get rolled back.
IntegrationFlows.from("perOrder")
.filter(Order.class, order -> order.getItems().size() > 0)
.handle(orderHandler, "handle") /*someway i way want to add try/catch for this method here so that
if handle method throws exception, want to suppress for that order and mark as failure only for that order */
.get();
public class OrderHandler {

    @Transactional(propagation = Propagation.NESTED)
    public void handle(Order order) {
        // processing code
        // throws an exception in case of any validation failure
    }
}

For this purpose, we provide an adviceChain that can be injected into the endpoint of that handle():
.handle((GenericHandler<?>) (p, h) -> {
    throw new RuntimeException("intentional");
}, e -> e.advice(retryAdvice()))
You can inject any available Advice implementation there: https://docs.spring.io/spring-integration/docs/current/reference/html/#message-handler-advice-chain, including a TransactionInterceptor: https://docs.spring.io/spring-integration/docs/current/reference/html/#tx-handle-message-advice
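For the original requirement, where each order should commit or roll back on its own, a TransactionInterceptor with REQUIRES_NEW propagation can be applied right on the endpoint. A minimal sketch, assuming a transactionManager bean is available (the names here are illustrative):
.handle(orderHandler, "handle", e -> e.transactional(
        new TransactionInterceptorBuilder()
                .transactionManager(transactionManager) // assumed existing bean
                .propagation(Propagation.REQUIRES_NEW)  // each order gets its own transaction
                .build()))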
The best way to get try...catch semantics is with the ExpressionEvaluatingRequestHandlerAdvice. See its description in the docs and also its JavaDocs.
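A minimal sketch of that advice, assuming the failure should simply be routed away instead of propagated (the bean and channel names are illustrative):
@Bean
public ExpressionEvaluatingRequestHandlerAdvice failureAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setTrapException(true); // suppress the exception so the remaining orders keep flowing
    advice.setOnFailureExpressionString("payload"); // evaluated against the failed message
    advice.setFailureChannelName("failedOrders");   // the evaluation result is sent here
    return advice;
}
Applied to the endpoint, it wraps only that handler, giving the per-order try/catch asked for above:
.handle(orderHandler, "handle", e -> e.advice(failureAdvice()))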

Related

the right way to return a Single from a CompletionStage

I'm playing around with reactive flows using RxJava2, Micronaut, and Cassandra. I'm new to RxJava and not sure of the correct way to return a List of Person in the best async manner.
The data comes from a Cassandra DAO interface:
public interface PersonDAO {

    @Query("SELECT * FROM cass_drop.person;")
    CompletionStage<MappedAsyncPagingIterable<Person>> getAll();
}
which gets injected into a Micronaut controller:
return Single.just(personDAO.getAll().toCompletableFuture().get().currentPage())
        .subscribeOn(Schedulers.io())
        .map(people -> HttpResponse.ok(people));
OR
return Single.just(HttpResponse.ok())
        .subscribeOn(Schedulers.io())
        .map(it -> it.body(personDAO.getAll().toCompletableFuture().get().currentPage()));
OR switch to RxJava 3
return Single.fromCompletionStage(personDAO.getAll())
        .map(page -> HttpResponse.ok(page.currentPage()))
        .onErrorReturn(throwable -> HttpResponse.ok(Collections.emptyList()));
I'm not a pro at RxJava or Cassandra, but:
In your first and second examples, you are blocking the thread that executes the CompletionStage with get; even though you do it on the IO thread, I would not recommend doing so.
You are also using a Single, which can emit only one value or an error. Since you want to return a List, I would suggest going for at least an Observable.
Third point: the result from Cassandra is paginated. I don't know if that is intentional, but you list only the first page and miss the others.
I would try a solution like the one below. I kept using the IO thread (the operation may be costly in IO) and I iterate over the pages Cassandra fetches:
/* the main method of your controller */
@Get()
public Observable<Person> listPersons() {
    return next(personDAO.getAll()).subscribeOn(Schedulers.io());
}

private Observable<Person> next(CompletionStage<MappedAsyncPagingIterable<Person>> pageStage) {
    return Single.fromFuture(pageStage.toCompletableFuture())
            .flatMapObservable(personsPage -> {
                var o = Observable.fromIterable(personsPage.currentPage());
                if (!personsPage.hasMorePages()) {
                    return o;
                }
                return o.concatWith(next(personsPage.fetchNextPage()));
            });
}
If you ever plan to use Reactor instead of RxJava, you can give cassandra-java-driver-reactive-mapper a try.
The syntax is fairly simple, and the mapping code is generated at compile time.
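A minimal sketch of that alternative, assuming the DataStax Java driver 4.x object mapper is on the classpath; the DAO and controller below are illustrative, not taken from the question:
@Dao
public interface ReactivePersonDao {

    @Select
    MappedReactiveResultSet<Person> findAll(); // implements org.reactivestreams.Publisher<Person>
}
Since the result set is already a Publisher, the controller can adapt it without blocking:
@Get()
public Flowable<Person> listPersons() {
    return Flowable.fromPublisher(reactivePersonDao.findAll());
}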

Applying rejections in Aggregates in Spine Event Engine

I am trying to apply a rejection to the aggregate which threw it:
public class WalletAggregate extends Aggregate<WalletId, Wallet, Wallet.Builder> {

    @Assign
    MoneyCharged handle(ChargeMoney cmd) throws InsuffisientFunds {
        ...
    }

    @Apply
    void on(MoneyCharged event) {
        ...
    }

    @Apply
    void on(InsuffisientFunds event) {
        ...
    }
}
I can clearly see the MoneyCharged event applied, but when InsuffisientFunds is thrown, it is not applied. Am I missing something? How can I apply a thrown rejection to an aggregate?
Rejections don't work that way. A Rejection is more than just a negative-case Event. It is emitted when an illegal operation is attempted and the handler (in your case, the Aggregate) refuses (rejects) to execute the Command, and thus does not change its state.
In practice, you might want to consider creating a separate entity that would process Rejections.
If you want to store the failed actions of a user, a Projection would probably work best:
public class ChargeAttemptsProjection extends Projection<...> {

    @Subscribe
    void on(InsuffisientFunds rejection) {
        // Update the state of the Projection.
    }
}
If there is a flow for recovering from the faulty situation, a ProcessManager sounds like a better fit:
public class FailedTransactionRecovery extends ProcessManager<...> {

    @React
    YourOtherEvent on(InsuffisientFunds rejection) {
        // Start the recovery process.
    }
}
Last but not least, there's always the possibility to subscribe to the Rejection on the client to handle it gracefully on the UI.
See the JS reference documentation for more on client-side subscriptions.

spring-integration: how to deliver deferred details as SSE

I have a list of items which I want to retrieve and return as fast as possible.
For each item I also need to retrieve details, they may be returned a few seconds later.
I could of course create two different routes with HTTP gateways and request first the list, then the details. However, I would then have to wait until all the details have arrived. I want to send back the list immediately and then the details as soon as I get them.
UPDATE
Following Artem Bilan's advice, my flow returns a Flux as the payload, merging the list of items as a Mono with the processed items as a Flux.
Note that the example below simulates detail processing of the items by calling toUpperCase; my real use case requires routing and outgoing calls to get the details for each item:
@Bean
public IntegrationFlow sseFlow() {
    return IntegrationFlows
            .from(WebFlux.inboundGateway("/strings/sse")
                    .requestMapping(m -> m.produces(MediaType.TEXT_EVENT_STREAM_VALUE))
                    .mappedResponseHeaders("*"))
            .enrichHeaders(Collections.singletonMap("aHeader", new String[]{"foo", "bar"}))
            .transform("headers.aHeader")
            .<String[]>handle((p, h) -> {
                return Flux.merge(
                        Mono.just(p),
                        Flux.fromArray(p)
                                .map(t -> {
                                    return t.toUpperCase();
                                    // return detailsResolver.resolveDetail(t);
                                }));
            })
            .get();
}
That comes closer to my goal. When I request data from this flow using curl, I get the list of items immediately and the processed items slightly later:
λ curl http://localhost:8080/strings/sse
data:["foo","bar"]
data:FOO
data:BAR
While simply converting the string to uppercase works fine, I have difficulty making an outgoing call for the details using WebFlux.outboundGateway. The detailsResolver in the commented-out code above is defined as follows:
@MessagingGateway
public interface DetailsResolver {

    @Gateway(requestChannel = "itemDetailsFlow.input")
    Object resolveDetail(String item);
}

@Bean
IntegrationFlow itemDetailsFlow() {
    return f -> f.handle(WebFlux.<String>outboundGateway(m ->
            UriComponentsBuilder.fromUriString("http://localhost:3003/rest/path/")
                    .path(m.getPayload())
                    .build()
                    .toUri())
            .httpMethod(HttpMethod.GET)
            .expectedResponseType(JsonNode.class)
            .replyPayloadToFlux(false));
}
When I comment the detailsResolver call back in and comment out t.toUpperCase, the outboundGateway seems to be set up properly (the log says "Subscriber present, Demand signaled") but never gets a response (a breakpoint in ExchangeFunctions.exchange#91 is never reached).
I have ensured that the DetailsResolver itself is working by getting it as a bean from the context and invoking its method - that gives me a JsonNode response.
What can be the reason?
Yes, I wouldn't use toReactivePublisher() there, because you have the context of the current request; you need fluxes per request. I would use something like Flux.merge(Publisher<? extends I>... sources), where the first Flux is for the items and the second is for the details per item (something like a Tuple2).
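A sketch of that shape (the names are illustrative, and resolveDetail here stands for a hypothetical non-blocking call returning a Mono per item):
Flux.merge(
        Mono.just(items), // the full list, emitted immediately
        Flux.fromIterable(items)
                .flatMap(item -> resolveDetail(item) // hypothetical Mono-returning detail call
                        .map(detail -> Tuples.of(item, detail)))) // Tuple2 of item and its detail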
For this purpose you really can use something like this:
IntegrationFlows
        .from(WebFlux.inboundGateway("/sse")
                .requestMapping(m -> m.produces(MediaType.TEXT_EVENT_STREAM_VALUE)))
And your downstream flow should produce a Flux as the reply payload.
I have a sample like this in test cases:
@Bean
public IntegrationFlow sseFlow() {
    return IntegrationFlows
            .from(WebFlux.inboundGateway("/sse")
                    .requestMapping(m -> m.produces(MediaType.TEXT_EVENT_STREAM_VALUE))
                    .mappedResponseHeaders("*"))
            .enrichHeaders(Collections.singletonMap("aHeader", new String[] { "foo", "bar", "baz" }))
            .handle((p, h) -> Flux.fromArray((String[]) h.get("aHeader")))
            .get();
}

Using filter with a discard channel in Spring Integration DSL

I don't know if this question is about spring-integration, spring-integration-dsl or both, so I just added the 2 tags...
I spent a considerable amount of time today, first doing a simple flow with a filter:
StandardIntegrationFlow flow = IntegrationFlows.from(...)
        .filter(messagingFilter)
        .transform(transformer)
        .handle((m) -> {
            (...)
        })
        .get();
The messagingFilter is a very simple implementation of a MessageSelector. So far so good, not much time spent. But then I wanted to log a message in case the MessageSelector returned false, and this is where I got stuck.
After quite some time I ended up with this:
StandardIntegrationFlow flow = IntegrationFlows.from(...)
        .filter(messagingFilter, fs -> fs.discardFlow(i -> i.channel(discardChannel())))
        .transform(transformer)
        .handle((m) -> {
            (...)
        })
        .get();
(...)
public MessageChannel discardChannel() {
    MessageChannel channel = new MessageChannel() {

        @Override
        public boolean send(Message<?> message) {
            log.warn((String) ((Map<?, ?>) message.getPayload()).get("msg-failure"));
            return true;
        }

        @Override
        public boolean send(Message<?> message, long timeout) {
            return this.send(message);
        }
    };
    return channel;
}
This is both ugly and verbose, so the question is: what have I done wrong here, and how should I have done it in a better, cleaner, more elegant way?
Cheers.
Your problem is that you don't see that the Filter is an EIP implementation: the maximum it can do is send the discarded message to some channel. It isn't going to log anything, because that approach wouldn't be messaging-based any more.
The simplest way for your use case is something like:
.filter(messagingFilter, fs -> fs.discardFlow(df -> df
        .handle(message -> log.warn((String) ((Map<?, ?>) message.getPayload()).get("msg-failure")))))
That is your logic to just log; other people might do more complicated logic. So eventually you'll get used to the channel abstraction between endpoints.
I agree that the new MessageChannel() {} approach is wrong. The logging should indeed be done in a MessageHandler instead; that is the right level for this responsibility. Also don't forget that there is a LoggingHandler, which via the Java DSL can be used like this:
.filter(messagingFilter, fs -> fs.discardFlow(i -> i.log(message -> (String) ((Map<?, ?>) message.getPayload()).get("msg-failure"))))
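Putting the fragments together, a complete flow under the same assumptions (the payload is a Map carrying a "msg-failure" entry; the input channel name is illustrative) could look like this sketch:
@Bean
public IntegrationFlow filteringFlow() {
    return IntegrationFlows.from("someInputChannel")
            .filter(messagingFilter, fs -> fs.discardFlow(df -> df
                    .handle(message -> log.warn(
                            (String) ((Map<?, ?>) message.getPayload()).get("msg-failure")))))
            .transform(transformer)
            .handle((m) -> {
                // terminal processing of accepted messages
            })
            .get();
}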

Enriching in parallel after a split

This is a continuation of the shopping cart sample, where we have an external API that allows checkout from a shopping cart. To recap, we have a flow in which we create an empty shopping cart, add line item(s), and finally check out. All the operations above happen as enrichments through HTTP calls to an external service. We would like to add the line items concurrently (as part of the add line items call). Our current configuration looks like this:
@Bean
public IntegrationFlow fullCheckoutFlow() {
    return f -> f.channel("inputChannel")
            .transform(fromJson(ShoppingCart.class))
            .enrich(e -> e.requestChannel(SHOPPING_CART_CHANNEL))
            .split(ShoppingCart.class, ShoppingCart::getLineItems)
            .enrich(e -> e.requestChannel(ADD_LINE_ITEM_CHANNEL))
            .aggregate(aggregator -> aggregator
                    .outputProcessor(g -> g.getMessages()
                            .stream()
                            .map(m -> (LineItem) m.getPayload())
                            .map(LineItem::getName)
                            .collect(joining(", "))))
            .enrich(e -> e.requestChannel(CHECKOUT_CHANNEL))
            .<String>handle((p, h) -> "We have " + p + " line items!!");
}

@Bean
public IntegrationFlow addLineItem(Executor executor) {
    return f -> f.channel(MessageChannels.executor(ADD_LINE_ITEM_CHANNEL, executor).get())
            .handle(outboundGateway("http://localhost:8080/api/add-line-item", restTemplate())
                    .httpMethod(POST)
                    .expectedResponseType(String.class));
}

@Bean
public Executor executor(Tracer tracer, TraceKeys traceKeys, SpanNamer spanNamer) {
    return new TraceableExecutorService(newFixedThreadPool(10), tracer, traceKeys, spanNamer);
}
To add the line items in parallel, we are using an executor channel. However, the items still seem to be processed sequentially when viewed in Zipkin.
What are we doing wrong? The source for the whole project is on GitHub for reference.
Thanks!
First of all, the main feature of Spring Integration is the MessageChannel, yet it still isn't clear to me why people miss the .channel() operator in between endpoint definitions.
I mean that for your case it should be like:
.split(ShoppingCart.class, ShoppingCart::getLineItems)
.channel(c -> c.executor(executor()))
.enrich(e -> e.requestChannel(ADD_LINE_ITEM_CHANNEL))
Now about your particular problem.
Look, the ContentEnricher (.enrich()) is a request-reply component: http://docs.spring.io/spring-integration/reference/html/messaging-transformation-chapter.html#payload-enricher.
Therefore it sends a request to its requestChannel and waits for a reply. And this is done independently of the requestChannel type.
In raw Java, we can demonstrate such behavior with this code snippet:
for (Object item : items) {
    Data data = sendAndReceive(item);
}
Here you can see that ADD_LINE_ITEM_CHANNEL being an ExecutorChannel doesn't have much value, because we are blocked within the loop waiting for the reply anyway.
A .split() does exactly such a loop, but since it uses a DirectChannel by default, each iteration is done on the same thread. Therefore each item waits for the reply to the previous one.
That's why you should definitely parallelize exactly at the input of the .enrich(), just after the .split().
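Applied to the flow from the question, the fix is a single .channel() line after the .split(); a sketch of the relevant bean under that change:
@Bean
public IntegrationFlow fullCheckoutFlow(Executor executor) {
    return f -> f.channel("inputChannel")
            .transform(fromJson(ShoppingCart.class))
            .enrich(e -> e.requestChannel(SHOPPING_CART_CHANNEL))
            .split(ShoppingCart.class, ShoppingCart::getLineItems)
            .channel(c -> c.executor(executor)) // each line item is now enriched on its own thread
            .enrich(e -> e.requestChannel(ADD_LINE_ITEM_CHANNEL))
            .aggregate(a -> a.outputProcessor(g -> g.getMessages()
                    .stream()
                    .map(m -> ((LineItem) m.getPayload()).getName())
                    .collect(joining(", "))))
            .enrich(e -> e.requestChannel(CHECKOUT_CHANNEL))
            .<String>handle((p, h) -> "We have " + p + " line items!!");
}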
