Spring-Integration Webflux exception handling

If an exception occurs in a spring-integration webflux flow, the exception itself (with stack trace) is sent back to the caller as the payload through MessagePublishingErrorHandler, which uses the error channel from the "errorChannel" header, not the default error channel.
How can I set up an error handler similar to WebExceptionHandler? I want to produce an HTTP status code and possibly a DefaultErrorAttributes object as the response.
Simply defining a flow that starts from the errorChannel doesn't work; the error message won't end up there. I tried to define my own fluxErrorChannel, but it appears that it is not used as the error channel either; the errors do not end up in my errorFlow:
@Bean
public IntegrationFlow fooRestFlow() {
    return IntegrationFlows.from(
            WebFlux.inboundGateway("/foo")
                    .requestMapping(r -> r.methods(HttpMethod.POST))
                    .requestPayloadType(Map.class)
                    .errorChannel(fluxErrorChannel()))
            .channel(bazFlow().getInputChannel())
            .get();
}
@Bean
public MessageChannel fluxErrorChannel() {
    return MessageChannels.flux().get();
}
@Bean
public IntegrationFlow errorFlow() {
    return IntegrationFlows.from(fluxErrorChannel())
            .transform(source -> source)
            .enrichHeaders(h -> h.header(HttpHeaders.STATUS_CODE, HttpStatus.BAD_GATEWAY))
            .get();
}
@Bean
public IntegrationFlow bazFlow() {
    return f -> f.split(Map.class, map -> map.get("items"))
            .channel(MessageChannels.flux())
            .<String>handle((p, h) -> {
                throw new RuntimeException();
            })
            .aggregate();
}
UPDATE
In MessagingGatewaySupport.doSendAndReceiveMessageReactive, the error channel I defined on the WebFlux.inboundGateway is never used to set the error channel; instead, the error channel is always the replyChannel, which is created here:
FutureReplyChannel replyChannel = new FutureReplyChannel();
Message<?> requestMessage = MutableMessageBuilder.fromMessage(message)
        .setReplyChannel(replyChannel)
        .setHeader(this.messagingTemplate.getSendTimeoutHeader(), null)
        .setHeader(this.messagingTemplate.getReceiveTimeoutHeader(), null)
        .setErrorChannel(replyChannel)
        .build();
The error channel is ultimately reset to the originalErrorChannelHeader in Mono.fromFuture, but that error channel is null in my case. Also, the onErrorResume lambda is never invoked:
return Mono.fromFuture(replyChannel.messageFuture)
        .doOnSubscribe(s -> {
            if (!error && this.countsEnabled) {
                this.messageCount.incrementAndGet();
            }
        })
        .<Message<?>>map(replyMessage ->
                MessageBuilder.fromMessage(replyMessage)
                        .setHeader(MessageHeaders.REPLY_CHANNEL, originalReplyChannelHeader)
                        .setHeader(MessageHeaders.ERROR_CHANNEL, originalErrorChannelHeader)
                        .build())
        .onErrorResume(t -> error ? Mono.error(t) : handleSendError(requestMessage, t));
How is this intended to work?

It's a bug; the ErrorMessage created for the exception by the error handler is sent to the errorChannel header (which has to be the replyChannel so the gateway gets the result). The gateway should then invoke the error flow (if present) and return the result of that.
https://jira.spring.io/browse/INT-4541
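For reference, the shape of the error flow the question is aiming for would stay roughly as posted; once the gateway routes the error message to it, the MessagingException payload can be mapped to a response body plus a status-code header. This is only a sketch based on the question's own errorFlow, and the payload mapping is an assumption:
@Bean
public IntegrationFlow errorFlow() {
    return IntegrationFlows.from(fluxErrorChannel())
            // map the MessagingException to a simple response body (assumed mapping)
            .<MessagingException, String>transform(ex ->
                    ex.getCause() != null ? ex.getCause().getMessage() : ex.getMessage())
            // the WebFlux inbound gateway renders the reply with this status code
            .enrichHeaders(h -> h.header(HttpHeaders.STATUS_CODE, HttpStatus.BAD_GATEWAY))
            .get();
}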

Related

Configured errorChannel not called after aggregation

We are facing a strange behavior in our integration flows where the errorChannel does not receive a message in case an exception is thrown in a step after an aggregation.
This is the (reduced) flow:
@Bean
public StandardIntegrationFlow startKafkaInbound() {
    return IntegrationFlows.from(Kafka
            .messageDrivenChannelAdapter(
                    kafkaConsumerFactory,
                    ListenerMode.record,
                    serviceProperties.getInputTopic().getName())
            .errorChannel(errorHandler.getInputChannel())
            )
            .channel(nextChannel().getInputChannel())
            .get();
}
@Bean
public IntegrationFlow nextChannel() {
    return IntegrationFlows.from("next")
            .transform(Transformers.fromJson(MyObject.class)) // An exception here is sent to errorChannel
            .aggregate(aggregatorSpec ->
                    aggregatorSpec
                            .releaseStrategy(new MessageCountReleaseStrategy(100))
                            .sendPartialResultOnExpiry(true)
                            .groupTimeout(2000L)
                            .expireGroupsUponCompletion(true)
                            .correlationStrategy(message -> KafkaHeaderUtils.getOrDefault(message.getHeaders(), MY_CORRELATION_HEADER, ""))
            )
            .transform(myObjectTransformer) // Exception here is not sent to errorChannel
            .channel(acknowledgeMyObjectFlow().getInputChannel())
            .get();
}
If we add an explicit channel that is not of type DirectChannel, the error handling works as expected. Working code looks like this:
// ...
.aggregate(aggregatorSpec -> ...)
.channel(MessageChannels.queue())
.transform(myObjectTransformer) // Now the exception is sent to errorChannel
.channel(acknowledgeMyObjectFlow().getInputChannel())
// ...
We'd also like to mention that we have a very similar flow with an aggregation where error handling works as expected (the exception is sent to the errorChannel).
So we were actually able to get the code running, but since error handling is a very critical part of the application, we'd really like to understand how we can ensure that each error is sent to the configured channel, and why explicitly setting a QueueChannel leads to the wanted behavior.
Thanks in advance.
You can add this
.enrichHeaders(headers -> headers.header(MessageHeaders.ERROR_CHANNEL, errorHandler.getInputChannel()))
before the aggregator.
The .channel(MessageChannels.queue()) is misleading here because the error is sent to the global errorChannel, which is apparently the same as your errorHandler.getInputChannel().
The problem is that .groupTimeout(2000L) is done on a separate TaskScheduler thread, and when an error happens downstream there is no knowledge of the try..catch in that Kafka.messageDrivenChannelAdapter.
Feel free to raise a GH issue, so we will think about populating that errorChannel into the message headers from the MessageProducerSupport, such as that Kafka.messageDrivenChannelAdapter. That way, error handling would be the same independently of the async nature of the downstream flow.
UPDATE
Please, try this as a solution:
.transform(Transformers.fromJson(MyDataObject.class)) // An exception here is sent to errorChannel
.enrichHeaders(headers -> headers.header(MessageHeaders.ERROR_CHANNEL, errorHandler.getInputChannel()))
.aggregate(aggregatorSpec ->
The enrichHeaders() should do the trick of determining the proper error channel to send the error to.
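For context, a rough sketch of the nextChannel() flow from the question with that header enricher in place (names are taken from the question; this is only an illustration, not a verified configuration):
@Bean
public IntegrationFlow nextChannel() {
    return IntegrationFlows.from("next")
            .transform(Transformers.fromJson(MyObject.class))
            // make the desired error channel visible to the async (group timeout) part of the flow
            .enrichHeaders(headers ->
                    headers.header(MessageHeaders.ERROR_CHANNEL, errorHandler.getInputChannel()))
            .aggregate(aggregatorSpec -> aggregatorSpec
                    .releaseStrategy(new MessageCountReleaseStrategy(100))
                    .sendPartialResultOnExpiry(true)
                    .groupTimeout(2000L)
                    .expireGroupsUponCompletion(true))
            .transform(myObjectTransformer)
            .channel(acknowledgeMyObjectFlow().getInputChannel())
            .get();
}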
Plus your MyDataObjectTransformer has to be modified to this:
throw new MessageTransformationException(source, "test");
The point is that there is logic like this when an exception is caught by the endpoint:
if (handler != null) {
    try {
        handler.handleMessage(message);
        return true;
    }
    catch (Exception e) {
        throw IntegrationUtils.wrapInDeliveryExceptionIfNecessary(message,
                () -> "Dispatcher failed to deliver Message", e);
    }
}
where:
if (!(ex instanceof MessagingException) ||
        ((MessagingException) ex).getFailedMessage() == null) {
    runtimeException = new MessageDeliveryException(message, text.get(), ex);
}
And then in the AbstractCorrelatingMessageHandler:
catch (MessageDeliveryException ex) {
    logger.warn(ex, () ->
            "The MessageGroup [" + groupId +
                    "] is rescheduled by the reason of: ");
    scheduleGroupToForceComplete(groupId);
}
That's how your exception does not reach the error channel.
You may consider not throwing that MessageTransformationException yourself; the logic in the wrapping handler is like this:
protected Object handleRequestMessage(Message<?> message) {
    try {
        return this.transformer.transform(message);
    }
    catch (Exception e) {
        if (e instanceof MessageTransformationException) { // NOSONAR
            throw (MessageTransformationException) e;
        }
        throw new MessageTransformationException(message, "Failed to transform Message in " + this, e);
    }
}
UPDATE 2
OK. I see that you use Spring Boot, and Spring Boot does not register a respective ErrorHandler on the TaskScheduler used by the aggregator for the group timeout feature.
Please consider adding this bean to your configuration:
@Bean
TaskSchedulerCustomizer taskSchedulerCustomizer(ErrorHandler integrationMessagePublishingErrorHandler) {
    return taskScheduler -> taskScheduler.setErrorHandler(integrationMessagePublishingErrorHandler);
}
And then feel free to raise a GH issue for Spring Boot to make this customization a default in the auto-configuration.

Spring Integration: MessageSource doesn't honor errorChannel header

I have the following flow:
@Resource(name = S3_CLIENT_BEAN)
private MessageSource<InputStream> messageSource;

public IntegrationFlow fileStreamingFlow() {
    return IntegrationFlows.from(s3Properties.getFileStreamingInputChannel())
            .enrichHeaders(spec -> spec.header(ERROR_CHANNEL, S3_ERROR_CHANNEL, true))
            .handle(String.class, (fileName, h) -> {
                if (messageSource instanceof S3StreamingMessageSource) {
                    S3StreamingMessageSource s3StreamingMessageSource = (S3StreamingMessageSource) messageSource;
                    ChainFileListFilter<S3ObjectSummary> chainFileListFilter = new ChainFileListFilter<>();
                    chainFileListFilter.addFilters(...);
                    s3StreamingMessageSource.setFilter(chainFileListFilter);
                    return s3StreamingMessageSource.receive();
                }
                return messageSource.receive();
            }, spec -> spec
                    .requiresReply(false) // in case all messages got filtered out
            )
            .channel(s3Properties.getFileStreamingOutputChannel())
            .get();
}
I found that if s3StreamingMessageSource.receive throws an exception, the error ends up in the error channel configured for the previous flow in the pipeline, not the S3_ERROR_CHANNEL that's configured for this flow. Not sure if it's related to this question.
The s3StreamingMessageSource.receive() is called from the SourcePollingChannelAdapter:
protected Message<?> receiveMessage() {
    return this.source.receive();
}
This one is called from the AbstractPollingEndpoint:
private boolean doPoll() {
    message = this.receiveMessage();
    ...
    this.handleMessage(message);
    ...
}
That handleMessage() does this:
this.messagingTemplate.send(getOutputChannel(), message);
So, that is definitely still far away from the mentioned .enrichHeaders(spec -> spec.header(ERROR_CHANNEL, S3_ERROR_CHANNEL, true)) downstream.
However, you can still catch an exception in that S3_ERROR_CHANNEL. Pay attention to the second argument of the IntegrationFlows.from():
IntegrationFlows.from(s3Properties.getFileStreamingInputChannel(),
        e -> e.poller(Pollers.fixedDelay(...)
                .errorChannel(S3_ERROR_CHANNEL)))
Or, according to your current configuration, you have a global poller somewhere, so configure an errorChannel there.
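If a global default poller is used, the errorChannel can be set on it instead; a minimal sketch (the fixed delay value here is just a placeholder):
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata defaultPoller() {
    return Pollers.fixedDelay(1000)
            // exceptions thrown while polling or handling are sent to this channel
            .errorChannel(S3_ERROR_CHANNEL)
            .get();
}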

Handling errors after a message splitter with direct channels

I'm working on a service which sends emails using the spring integration java dsl.
I have a batch message which is split into a collection of individual messages which will be turned into emails.
The issue I am experiencing is that if one of these individual messages throws an error, the other messages in the batch are not processed.
Is there a way to configure the flow so that when a message throws an exception, the exception is handled gracefully and the next message in the batch is processed?
The following code achieves the functionality I would like, but I'm wondering if there is an easier/better way to achieve this, ideally in a single IntegrationFlow:
@Bean
public MessageChannel individualFlowInputChannel() {
    return MessageChannels.direct().get();
}
@Bean
public IntegrationFlow batchFlow() {
    return f -> f
            .split()
            .handle(message -> {
                try {
                    individualFlowInputChannel().send(message);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
}
@Bean
public IntegrationFlow individualFlow() {
    return IntegrationFlows.from(individualFlowInputChannel())
            .handle((payload, headers) -> {
                throw new RuntimeException("BOOM!");
            })
            .get();
}
You can add ExpressionEvaluatingRequestHandlerAdvice to the last handle() definition with its trapException option:
/**
 * If true, any exception will be caught and null returned.
 * Default false.
 * @param trapException true to trap Exceptions.
 */
public void setTrapException(boolean trapException) {
On the other hand, since you are talking about sending emails, wouldn't it be better to do that in a separate thread for each split item? In this case an ExecutorChannel after .split() comes to the rescue!
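Applying the first suggestion to the question's code could look like the following sketch (the trapAdvice bean name is made up for illustration):
@Bean
public ExpressionEvaluatingRequestHandlerAdvice trapAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    // swallow the exception so the splitter keeps processing the remaining messages
    advice.setTrapException(true);
    return advice;
}

@Bean
public IntegrationFlow individualFlow() {
    return IntegrationFlows.from(individualFlowInputChannel())
            .handle((payload, headers) -> {
                throw new RuntimeException("BOOM!");
            }, e -> e.advice(trapAdvice()))
            .get();
}
With the exception trapped at the endpoint, the try/catch wrapper in batchFlow() should no longer be necessary; the splitter can send to individualFlowInputChannel() directly.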

Strange behavior when returning ObjectNode from IntegrationFlows

When an ObjectNode is passed from extractFramesFlow() and reaches httpCallbackFlow(), the HTTP request is performed successfully and the JSON-formatted payload is POSTed to the "call_back" URI specified.
@Bean
public IntegrationFlow extractFramesFlow() {
    return IntegrationFlows.from(extractFramesChannel())
            .handle(ObjectNode.class, (payload, headers) -> {
                payload = validateFields(payload);
                String path = payload.get("path").asText();
                try {
                    File moviePath = new File(path);
                    ArrayNode arrayNode = mapper.createArrayNode();
                    String imageType = payload.path("image_type").asText("JPG");
                    String prefix = payload.path("prefix").asText();
                    Tools.thumbnails(moviePath, payload.get("slice").asInt(), payload.get("scale").asInt(),
                            imageType, prefix, file -> arrayNode.add(file.toString()));
                    payload.set("files", arrayNode);
                } catch (IOException e) {
                    e.printStackTrace();
                }
                return payload;
            })
            .enrichHeaders(h -> h.header("errorChannel", "asyncErrorChannel", true))
            .<ObjectNode, Boolean>route(p -> !p.hasNonNull("id"),
                    m -> m.channelMapping("true", "httpCallbackFlow.input")
                            .channelMapping("false", "uploadToS3Channel"))
            .get();
}
@Bean
public IntegrationFlow httpCallbackFlow() {
    return f -> f.handle(Http.<JsonNode>outboundChannelAdapter(m -> m.getPayload().get("call_back").asText()));
}
However, when an ObjectNode is chained from the handleAsyncErrors() flow and reaches the same httpCallbackFlow(), we get an Exception which is caused by
org.springframework.web.client.RestClientException: Could not write request: no suitable HttpMessageConverter found for request type [com.fasterxml.jackson.databind.node.ObjectNode] and content type [application/x-java-serialized-object]
at org.springframework.web.client.RestTemplate$HttpEntityRequestCallback.doWithRequest(RestTemplate.java:811)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:594)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:572)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:493)
at org.springframework.integration.http.outbound.HttpRequestExecutingMessageHandler.handleRequestMessage(HttpRequestExecutingMessageHandler.java:382)
... 24 more
@Bean
public IntegrationFlow handleAsyncErrors() {
    return IntegrationFlows.from(asyncErrorChannel())
            .<MessagingException>handle((p, h) -> {
                ObjectNode objectNode = mapper.createObjectNode();
                objectNode.put("call_back", "http://some.test.uri");
                return objectNode;
            })
            .channel("httpCallbackFlow.input")
            .get();
}
I don't understand why we get this exception even though the message is handled by the exact same IntegrationFlow.
The message on the error flow has no contentType header.
It is an error message with a MessagingException payload, which has two properties: the cause and the failedMessage.
Presumably you have a content type on the main flow message. You can set the content type with a header enricher, or add
.<MessagingException, Message<?>>transform(p -> p.getFailedMessage())
before your existing error handler, to restore the headers from the failed message.
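Applied to the flow from the question, that suggestion would look roughly like this (a sketch only; the transform simply restores the failed message, and thereby its headers, before the existing handler builds the new payload):
@Bean
public IntegrationFlow handleAsyncErrors() {
    return IntegrationFlows.from(asyncErrorChannel())
            // replace the MessagingException payload with the original failed message,
            // restoring its headers (including contentType)
            .<MessagingException, Message<?>>transform(p -> p.getFailedMessage())
            .handle((p, h) -> {
                ObjectNode objectNode = mapper.createObjectNode();
                objectNode.put("call_back", "http://some.test.uri");
                return objectNode;
            })
            .channel("httpCallbackFlow.input")
            .get();
}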

Calls to gateway result never return to caller when successful

I am using Spring Integration DSL and have a simple Gateway:
@MessagingGateway(name = "eventGateway", defaultRequestChannel = "inputChannel")
public interface EventProcessorGateway {

    @Gateway(requestChannel = "inputChannel")
    public void processEvent(Message message);
}
My spring integration flow is defined as:
@Bean MessageChannel inputChannel() { return new DirectChannel(); }
@Bean MessageChannel errorChannel() { return new DirectChannel(); }
@Bean MessageChannel retryGatewayChannel() { return new DirectChannel(); }
@Bean MessageChannel jsonChannel() { return new DirectChannel(); }
@Bean
public IntegrationFlow postEvents() {
    return IntegrationFlows.from(inputChannel())
            .route("headers.contentType", m -> m.channelMapping(MediaType.APPLICATION_JSON_VALUE, "json"))
            .get();
}
@Bean
public IntegrationFlow retryGateway() {
    return IntegrationFlows.from("json")
            .gateway(retryGatewayChannel(), e -> e.advice(retryAdvice()))
            .get();
}
@Bean
public IntegrationFlow transformJsonEvents() {
    return IntegrationFlows
            .from(retryGatewayChannel())
            .transform(new JsonTransformer())
            .handle(new JsonHandler())
            .get();
}
The JsonTransformer is a simple AbstractTransformer that transforms the JSON data and passes it to the JsonHandler.
class JsonHandler extends AbstractMessageHandler {

    public void handleMessageInternal(Message message) throws Exception {
        // do stuff, return nothing if success else throw Exception
    }
}
I call my gateway from code as such:
try {
    Message<List<EventRecord>> message = MessageBuilder.createMessage(eventList, new MessageHeaders(['contentType': contentType]))
    eventProcessorGateway.processEvent(message)
    logSuccess(eventList)
} catch (Exception e) {
    logError(eventList)
}
I want the entire call and processing to be synchronous, and any errors that occur to be caught so I can handle them appropriately. The call to the gateway works: the message gets sent through the Transformer and to the Handler and is processed, and if an exception occurs it bubbles back, is caught, and logError() is called. However, if the call is successful, the call to logSuccess() never occurs. It is as though execution stops/hangs after the Handler processes the message and never returns. I do not need to actually get any response; I am more concerned about knowing when something fails to process. Do I need to send something back to the initial EventProcessorGateway?
Your issue is here:
return IntegrationFlows.from("json")
.gateway(retryGatewayChannel(), e -> e.advice(retryAdvice()))
.get();
where that .gateway() is request/reply, because it is a part of the main flow.
It is something similar to the <gateway> within a <chain>.
So, even if your main flow is one-way, using .gateway() inside it requires some reply from your sub-flow, but this one:
.handle(new JsonHandler())
.get();
doesn't do that, because it is a one-way MessageHandler.
On the other hand, even if you made the last one request-reply (an AbstractReplyProducingMessageHandler), it wouldn't help, because you don't know what to do with that reply after the mid-flow gateway; your main flow is one-way.
You should rethink your design a bit and try to get rid of that mid-flow gateway. I see that you are trying to add some retry logic with retryAdvice().
But how about moving it to the .handle(new JsonHandler()) instead of that wrong .gateway()?
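Applied to the flows from the question, that suggestion amounts to roughly the following sketch (the retryGateway() bean and retryGatewayChannel go away; retryAdvice() is assumed to be the same advice bean already referenced in the question):
@Bean
public IntegrationFlow transformJsonEvents() {
    return IntegrationFlows
            .from("json")
            .transform(new JsonTransformer())
            // apply the retry advice directly to the final one-way handler
            .handle(new JsonHandler(), e -> e.advice(retryAdvice()))
            .get();
}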
