Handling errors after a message splitter with direct channels - spring-integration

I'm working on a service which sends emails using the Spring Integration Java DSL.
I have a batch message which is split into a collection of individual messages that will be turned into emails.
The issue I am experiencing is that if one of these individual messages throws an error, the other messages in the batch are not processed.
Is there a way to configure the flow so that when a message throws an exception, the exception is handled gracefully and the next message in the batch is processed?
The following code achieves the functionality I would like, but I'm wondering if there is an easier / better way to achieve this, ideally in a single IntegrationFlow:
@Bean
public MessageChannel individualFlowInputChannel() {
    return MessageChannels.direct().get();
}

@Bean
public IntegrationFlow batchFlow() {
    return f -> f
            .split()
            .handle(message -> {
                try {
                    individualFlowInputChannel().send(message);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
}

@Bean
public IntegrationFlow individualFlow() {
    return IntegrationFlows.from(individualFlowInputChannel())
            .handle((payload, headers) -> {
                throw new RuntimeException("BOOM!");
            }).get();
}

You can add ExpressionEvaluatingRequestHandlerAdvice to the last handle() definition with its trapException option:
/**
 * If true, any exception will be caught and null returned.
 * Default false.
 * @param trapException true to trap Exceptions.
 */
public void setTrapException(boolean trapException) {
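For example, here is a minimal sketch of that suggestion applied to the flow from the question (the advice bean and the endpoint configurer lambda are illustrative additions, not from the original post):

@Bean
public ExpressionEvaluatingRequestHandlerAdvice trapExceptionAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setTrapException(true); // the exception is caught and null is returned
    return advice;
}

@Bean
public IntegrationFlow individualFlow() {
    return IntegrationFlows.from(individualFlowInputChannel())
            .handle((payload, headers) -> {
                throw new RuntimeException("BOOM!");
            }, e -> e.advice(trapExceptionAdvice())) // trapped here, so the splitter moves on
            .get();
}

With the exception trapped on the last handler, the manual try/catch around the send in batchFlow becomes unnecessary.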
On the other hand, since you are talking about sending emails, wouldn't it be better to do that in a separate thread for each split item? In that case the ExecutorChannel after .split() comes to the rescue!
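A sketch of that alternative, assuming some TaskExecutor bean is available (the executor parameter is a placeholder):

@Bean
public IntegrationFlow batchFlow(TaskExecutor taskExecutor) {
    return f -> f
            .split()
            .channel(c -> c.executor(taskExecutor)) // hand each split item to its own thread
            .handle(message -> individualFlowInputChannel().send(message));
}

A failure on one item's thread is then handled on that thread (ultimately reaching the error channel) rather than aborting the iteration over the remaining split items.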

Related

Configured errorChannel not called after aggregation

We are facing strange behavior in our integration flows: the errorChannel does not receive a message when an exception is thrown in a step after an aggregation.
This is the (reduced) flow:
@Bean
public StandardIntegrationFlow startKafkaInbound() {
    return IntegrationFlows.from(Kafka
                    .messageDrivenChannelAdapter(
                            kafkaConsumerFactory,
                            ListenerMode.record,
                            serviceProperties.getInputTopic().getName())
                    .errorChannel(errorHandler.getInputChannel()))
            .channel(nextChannel().getInputChannel())
            .get();
}
@Bean
public IntegrationFlow nextChannel() {
    return IntegrationFlows.from("next")
            .transform(Transformers.fromJson(MyObject.class)) // An exception here is sent to errorChannel
            .aggregate(aggregatorSpec ->
                    aggregatorSpec
                            .releaseStrategy(new MessageCountReleaseStrategy(100))
                            .sendPartialResultOnExpiry(true)
                            .groupTimeout(2000L)
                            .expireGroupsUponCompletion(true)
                            .correlationStrategy(message -> KafkaHeaderUtils.getOrDefault(message.getHeaders(), MY_CORRELATION_HEADER, "")))
            .transform(myObjectTransformer) // Exception here is not sent to errorChannel
            .channel(acknowledgeMyObjectFlow().getInputChannel())
            .get();
}
If we add an explicit channel which is not of type DirectChannel, the error handling works as expected. The working code looks like:
// ...
.aggregate(aggregatorSpec -> ...)
.channel(MessageChannels.queue())
.transform(myObjectTransformer) // Now the exception is sent to errorChannel
.channel(acknowledgeMyObjectFlow().getInputChannel())
// ...
We'd also like to mention that we have a very similar flow with an aggregation where the error handling works as expected (the exception is sent to the errorChannel).
So we were actually able to get the code running, but since error handling is a very critical part of the application, we'd really like to understand how we can ensure each error is sent to the configured channel, and why explicitly setting a QueueChannel leads to the wanted behavior.
Thanks in advance
You can add this
.enrichHeaders(headers -> headers.header(MessageHeaders.ERROR_CHANNEL, errorHandler.getInputChannel()))
before the aggregator.
The .channel(MessageChannels.queue()) is misleading over here, because the error is sent to the global errorChannel, which apparently is the same as your errorHandler.getInputChannel().
The problem is that .groupTimeout(2000L) is performed on a separate TaskScheduler thread, so when an error happens downstream, there is no knowledge of the try..catch in that Kafka.messageDrivenChannelAdapter.
Feel free to raise a GH issue, so we will think about populating that errorChannel into the message headers from the MessageProducerSupport, such as that Kafka.messageDrivenChannelAdapter. That way the error handling would be the same independently of the async nature of the downstream flow.
UPDATE
Please, try this as a solution:
.transform(Transformers.fromJson(MyObject.class)) // An exception here is sent to errorChannel
.enrichHeaders(headers -> headers.header(MessageHeaders.ERROR_CHANNEL, errorHandler.getInputChannel()))
.aggregate(aggregatorSpec ->
The enrichHeaders() should do the trick to determine the proper error channel to send the error to.
Plus, your myObjectTransformer has to be modified to throw:
throw new MessageTransformationException(source, "test");
The point is that there is logic like this when an exception is caught by the endpoint:
if (handler != null) {
    try {
        handler.handleMessage(message);
        return true;
    }
    catch (Exception e) {
        throw IntegrationUtils.wrapInDeliveryExceptionIfNecessary(message,
                () -> "Dispatcher failed to deliver Message", e);
    }
}
where:
if (!(ex instanceof MessagingException) ||
        ((MessagingException) ex).getFailedMessage() == null) {
    runtimeException = new MessageDeliveryException(message, text.get(), ex);
}
And then in the AbstractCorrelatingMessageHandler:
catch (MessageDeliveryException ex) {
    logger.warn(ex, () ->
            "The MessageGroup [" + groupId +
                    "] is rescheduled by the reason of: ");
    scheduleGroupToForceComplete(groupId);
}
That's how your exception does not reach the error channel.
You may also consider not throwing that MessageTransformationException yourself, since the wrapping handler already takes care of it:
protected Object handleRequestMessage(Message<?> message) {
    try {
        return this.transformer.transform(message);
    }
    catch (Exception e) {
        if (e instanceof MessageTransformationException) { // NOSONAR
            throw (MessageTransformationException) e;
        }
        throw new MessageTransformationException(message, "Failed to transform Message in " + this, e);
    }
}
UPDATE 2
OK. I see that you use Spring Boot, and that one does not register a respective ErrorHandler on the TaskScheduler used by the aggregator for the group timeout feature.
Please consider adding this bean to your configuration:
@Bean
TaskSchedulerCustomizer taskSchedulerCustomizer(ErrorHandler integrationMessagePublishingErrorHandler) {
    return taskScheduler -> taskScheduler.setErrorHandler(integrationMessagePublishingErrorHandler);
}
And then feel free to raise a GH issue for Spring Boot to make this customization a default in the auto-configuration.

Receive messages from a channel by some event spring integration dsl [duplicate]

I have a channel that stores messages. When new messages arrive, if the server has not yet processed all the messages still in the queue, I need to clear the queue (for example, by rerouting all its data into another channel). For this I used a router. But the problem is that when new messages arrive, not only the old ones but also the new ones are rerouted into the other channel. The new messages must remain in the queue. How can I solve this problem?
This is my code:
@Bean
public IntegrationFlow integerFlow() {
    return IntegrationFlows.from("input")
            .bridge(e -> e.poller(Pollers.fixedDelay(500, TimeUnit.MILLISECONDS, 1000).maxMessagesPerPoll(1)))
            .route(r -> {
                if (flag) {
                    return "mainChannel";
                } else {
                    return "garbageChannel";
                }
            })
            .get();
}

@Bean
public IntegrationFlow outFlow() {
    return IntegrationFlows.from("mainChannel")
            .handle(m -> {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(m.getPayload() + "\tmainFlow");
            })
            .get();
}

@Bean
public IntegrationFlow outGarbage() {
    return IntegrationFlows.from("garbageChannel")
            .handle(m -> System.out.println(m.getPayload() + "\tgarbage"))
            .get();
}
The flag value is changed through a @Gateway by pressing the "q" and "e" keys.
I would suggest you take a look at the purge() API of the QueueChannel:
/**
 * Remove any {@link Message Messages} that are not accepted by the provided selector.
 * @param selector The message selector.
 * @return The list of messages that were purged.
 */
List<Message<?>> purge(@Nullable MessageSelector selector);
This way, with a custom MessageSelector, you will be able to remove the old messages from the queue. Consult the timestamp message header for that. With the result of this method you can do whatever you need to do with the old messages.
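A minimal sketch of that approach, assuming the queue is the QueueChannel behind "input" and that anything older than five seconds counts as stale (the method name and the cutoff are illustrative):

public void purgeStaleMessages(QueueChannel queue, MessageChannel garbageChannel) {
    long cutoff = System.currentTimeMillis() - 5000; // arbitrary staleness threshold
    List<Message<?>> stale = queue.purge(message -> {
        Long timestamp = message.getHeaders().getTimestamp();
        return timestamp != null && timestamp >= cutoff; // accepted messages stay in the queue
    });
    stale.forEach(garbageChannel::send); // reroute only the purged, old messages
}

Because purge() removes only the messages rejected by the selector, the new messages remain in the queue, which is exactly what the question asks for.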

Thread safety in executor channel

I have a message producer which produces around 15 messages/second.
The consumer is a Spring Integration project which consumes from the message queue and does a lot of processing. I have used the ExecutorChannel to process messages in parallel, and then the flow passes through some common handler classes.
Please find below the snippet of code -
baseEventFlow() - We receive the message from the EMS queue and send it to a router.
router() - Based on the id of the message, the message is routed to a particular ExecutorChannel instance, each configured with a single-threaded Executor. Every ExecutorChannel has its own dedicated executor with only a single thread.
skwDefaultChannel(), gjsucaDefaultChannel(), rpaDefaultChannel() - All these ExecutorChannel beans are marked with @BridgeTo for the same channel, which starts the common flow.
uaEventFlow() - Here each message gets processed.
@Bean
public IntegrationFlow baseEventFlow() {
    return IntegrationFlows
            .from(Jms.messageDrivenChannelAdapter(Jms.container(this.emsConnectionFactory, this.emsQueue).get()))
            .wireTap(FLTAWARE_WIRE_TAP_CHNL)
            .route(router()).get();
}

public AbstractMessageRouter router() {
    return new AbstractMessageRouter() {

        @Override
        protected Collection<MessageChannel> determineTargetChannels(Message<?> message) {
            if (message.getPayload().toString().contains("\"id\":\"RPA")) {
                return Collections.singletonList(skwDefaultChannel());
            } else if (message.getPayload().toString().contains("\"id\":\"ASH")) {
                return Collections.singletonList(rpaDefaultChannel());
            } else if (message.getPayload().toString().contains("\"id\":\"GJS")
                    || message.getPayload().toString().contains("\"id\":\"UCA")) {
                return Collections.singletonList(gjsucaDefaultChannel());
            } else {
                return Collections.singletonList(new NullChannel());
            }
        }
    };
}

@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel skwDefaultChannel() {
    return MessageChannels.executor(SKW_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}

@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel gjsucaDefaultChannel() {
    return MessageChannels.executor(GJS_UCA_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}

@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel rpaDefaultChannel() {
    return MessageChannels.executor(RPA_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}

@Bean
public IntegrationFlow uaEventFlow() {
    return IntegrationFlows.from("uaDefaultChannel")
            .wireTap(UA_WIRE_TAP_CHNL)
            .transform(eventHandler, "parseEvent")
            .handle(uaImpl, "process")
            .get();
}
My concern is that in uaEventFlow() the common transform and handler methods are not thread safe, and that may cause issues. How can we ensure that we inject a new transformer and handler on every message invocation?
Should I change the scope of the transformer and handler beans to prototype?
Instead of bridging to a common flow, you should move the .transform() and .handle() to each of the upstream flows and add
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
to their @Bean definitions so each gets its own instance.
But, it's generally better to make your code thread-safe.
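A sketch of that suggestion for one of the upstream flows (the EventHandler type and the flow bean name are assumptions based on the question):

@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public EventHandler eventHandler() {
    return new EventHandler(); // a fresh instance per injection point
}

@Bean
public IntegrationFlow skwEventFlow() {
    return IntegrationFlows.from(skwDefaultChannel())
            .wireTap(UA_WIRE_TAP_CHNL)
            .transform(eventHandler(), "parseEvent") // this flow's own transformer instance
            .handle(uaImpl, "process")
            .get();
}

Repeating this pattern for gjsucaDefaultChannel() and rpaDefaultChannel() gives each single-threaded executor its own endpoint instances, so no state is shared across threads.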

Spring Integration: MessageSource doesn't honor errorChannel header

I have the following flow:
@Resource(name = S3_CLIENT_BEAN)
private MessageSource<InputStream> messageSource;

public IntegrationFlow fileStreamingFlow() {
    return IntegrationFlows.from(s3Properties.getFileStreamingInputChannel())
            .enrichHeaders(spec -> spec.header(ERROR_CHANNEL, S3_ERROR_CHANNEL, true))
            .handle(String.class, (fileName, h) -> {
                if (messageSource instanceof S3StreamingMessageSource) {
                    S3StreamingMessageSource s3StreamingMessageSource = (S3StreamingMessageSource) messageSource;
                    ChainFileListFilter<S3ObjectSummary> chainFileListFilter = new ChainFileListFilter<>();
                    chainFileListFilter.addFilters(...);
                    s3StreamingMessageSource.setFilter(chainFileListFilter);
                    return s3StreamingMessageSource.receive();
                }
                return messageSource.receive();
            }, spec -> spec
                    .requiresReply(false) // in case all messages got filtered out
            )
            .channel(s3Properties.getFileStreamingOutputChannel())
            .get();
}
I found that if s3StreamingMessageSource.receive() throws an exception, the error ends up in the error channel configured for the previous flow in the pipeline, not the S3_ERROR_CHANNEL that's configured for this flow. Not sure if it's related to this question.
The s3StreamingMessageSource.receive() is called from the SourcePollingChannelAdapter:
protected Message<?> receiveMessage() {
    return this.source.receive();
}
This one is called from the AbstractPollingEndpoint:
private boolean doPoll() {
    message = this.receiveMessage();
    ...
    this.handleMessage(message);
    ...
}
That handleMessage() does this:
this.messagingTemplate.send(getOutputChannel(), message);
So, that is definitely still far away from the .enrichHeaders(spec -> spec.header(ERROR_CHANNEL, S3_ERROR_CHANNEL, true)) mentioned downstream.
However, you can still catch an exception in that S3_ERROR_CHANNEL. Pay attention to the second argument of IntegrationFlows.from():
IntegrationFlows.from(s3Properties.getFileStreamingInputChannel(),
        e -> e.poller(Pollers.fixedDelay(...)
                .errorChannel(S3_ERROR_CHANNEL)))
Or, since your current configuration apparently relies on a global poller somewhere, configure an errorChannel there.
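For instance, a sketch of such a global default poller carrying the error channel (the one-second delay is an arbitrary placeholder):

@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata defaultPoller() {
    return Pollers.fixedDelay(1000)
            .errorChannel(S3_ERROR_CHANNEL) // errors thrown during the poll land here
            .get();
}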

Calls to gateway result never return to caller when successful

I am using Spring Integration DSL and have a simple Gateway:
@MessagingGateway(name = "eventGateway", defaultRequestChannel = "inputChannel")
public interface EventProcessorGateway {

    @Gateway(requestChannel = "inputChannel")
    public void processEvent(Message message);

}
My Spring Integration flow is defined as:
@Bean MessageChannel inputChannel() { return new DirectChannel(); }
@Bean MessageChannel errorChannel() { return new DirectChannel(); }
@Bean MessageChannel retryGatewayChannel() { return new DirectChannel(); }
@Bean MessageChannel jsonChannel() { return new DirectChannel(); }

@Bean
public IntegrationFlow postEvents() {
    return IntegrationFlows.from(inputChannel())
            .route("headers.contentType", m -> m.channelMapping(MediaType.APPLICATION_JSON_VALUE, "json"))
            .get();
}

@Bean
public IntegrationFlow retryGateway() {
    return IntegrationFlows.from("json")
            .gateway(retryGatewayChannel(), e -> e.advice(retryAdvice()))
            .get();
}

@Bean
public IntegrationFlow transformJsonEvents() {
    return IntegrationFlows
            .from(retryGatewayChannel())
            .transform(new JsonTransformer())
            .handle(new JsonHandler())
            .get();
}
The JsonTransformer is a simple AbstractTransformer that transforms the JSON data and passes it to the JsonHandler.
class JsonHandler extends AbstractMessageHandler {

    public void handleMessageInternal(Message message) throws Exception {
        // do stuff, return nothing if success, else throw Exception
    }

}
I call my gateway from code as such:
try {
    Message<List<EventRecord>> message = MessageBuilder.createMessage(eventList, new MessageHeaders(['contentType': contentType]))
    eventProcessorGateway.processEvent(message)
    logSuccess(eventList)
} catch (Exception e) {
    logError(eventList)
}
I want the entire call and processing to be synchronous, and any errors that occur to be caught so I can handle them appropriately. The call to the gateway works: the message gets sent through the Transformer to the Handler and is processed, and if an Exception occurs it bubbles back, is caught, and logError() is called. However, if the call is successful, the call to logSuccess() never occurs. It is as if execution stops/hangs after the Handler processes the message and never returns. I do not need to actually get any response; I am more concerned about whether something fails to process. Do I need to send something back to the initial EventProcessorGateway?
Your issue is here:
return IntegrationFlows.from("json")
.gateway(retryGatewayChannel(), e -> e.advice(retryAdvice()))
.get();
where that .gateway() is request/reply because it is a part of the main flow.
It is something similar to the <gateway> within <chain>.
So, even if your main flow is one-way, using .gateway() inside it requires some reply from your sub-flow, but this one:
.handle(new JsonHandler())
.get();
doesn't produce one, because it is a one-way MessageHandler.
On the other hand, even if you made the last one request-reply (an AbstractReplyProducingMessageHandler), it wouldn't help, because there is nothing to do with that reply after the mid-flow gateway: your main flow is one-way.
You must re-think your design a bit and try to get rid of that mid-flow gateway. I see that you are trying to apply some retry logic with retryAdvice().
How about moving it to the .handle(new JsonHandler()) endpoint instead of that wrong .gateway()?
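A sketch of that suggestion, reusing the beans from the question (the two flows collapse into one, with the retry advice on the final handler):

@Bean
public IntegrationFlow transformJsonEvents() {
    return IntegrationFlows.from("json")
            .transform(new JsonTransformer())
            .handle(new JsonHandler(), e -> e.advice(retryAdvice())) // retry the handler itself
            .get();
}

With no mid-flow gateway waiting for a reply, the one-way gateway call returns to the caller as soon as the handler completes, so logSuccess() is reached.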
