Error "is a one-way 'MessageHandler'" for spring-integration aggregator DSL

I'm trying to test some stuff with spring-integration using the DSL. This is only a test so far, and the flow is simple:
- create some messages
- process (log) them in parallel
- aggregate them
- log the aggregate
Apart from the aggregator, it is working fine:
@Bean
public IntegrationFlow integrationFlow() {
    return IntegrationFlows
            .from(integerMessageSource(), c -> c.poller(Pollers.fixedRate(1, TimeUnit.SECONDS)))
            .channel(MessageChannels.executor(Executors.newCachedThreadPool()))
            .handle((GenericHandler<Integer>) (payload, headers) -> {
                System.out.println("\t delaying message:" + payload + " on thread "
                        + Thread.currentThread().getName());
                try {
                    Thread.sleep(2000);
                }
                catch (InterruptedException e) {
                    System.err.println(e.getMessage());
                }
                return payload;
            })
            .handle(this::logMessage)
            .aggregate(a ->
                    a.releaseStrategy(g -> g.size() > 10)
                            .outputProcessor(g ->
                                    g.getMessages()
                                            .stream()
                                            .map(e -> e.getPayload().toString())
                                            .collect(Collectors.joining(","))))
            .handle(this::logMessage)
            .get();
}
If I leave out the .aggregate(..) part, the sample works.
With the aggregator, I get the following exception:
Caused by: org.springframework.beans.factory.BeanCreationException: The 'currentComponent' (org.faboo.test.ParallelIntegrationApplication$$Lambda$9/1341404543#6fe1b4fb) is a one-way 'MessageHandler' and it isn't appropriate to configure 'outputChannel'. This is the end of the integration flow.
As far as I understand, it complains that there is no output from the aggregator?
The full source can be found here: GitHub

The problem is the handle() before the aggregator - it produces no result so there is nothing to aggregate...
.handle(this::logMessage)
.aggregate(a ->
Presumably logMessage(Message<?>) has a void return type.
If you want to log before the aggregator, use a wireTap, or change logMessage to return the Message<?> after logging.
.wireTap(sf -> sf.handle(this::logMessage))
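For the second option, here is a minimal sketch of such a pass-through logger (only the name this::logMessage comes from the question; the body is an assumption):
// Hypothetical pass-through logger: it logs and returns the message,
// so the next component in the flow (the aggregator) still receives it.
public Message<?> logMessage(Message<?> message) {
    System.out.println("received: " + message.getPayload()
            + " on thread " + Thread.currentThread().getName());
    return message; // a void return here would end the flow
}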

Related

Configured errorChannel not called after aggregation

We are facing strange behavior in our integration flows: the errorChannel does not receive a message when an exception is thrown in a step after an aggregation.
This is the (reduced) flow:
@Bean
public StandardIntegrationFlow startKafkaInbound() {
    return IntegrationFlows.from(Kafka
                    .messageDrivenChannelAdapter(
                            kafkaConsumerFactory,
                            ListenerMode.record,
                            serviceProperties.getInputTopic().getName())
                    .errorChannel(errorHandler.getInputChannel()))
            .channel(nextChannel().getInputChannel())
            .get();
}
@Bean
public IntegrationFlow nextChannel() {
    return IntegrationFlows.from("next")
            .transform(Transformers.fromJson(MyObject.class)) // An exception here is sent to errorChannel
            .aggregate(aggregatorSpec ->
                    aggregatorSpec
                            .releaseStrategy(new MessageCountReleaseStrategy(100))
                            .sendPartialResultOnExpiry(true)
                            .groupTimeout(2000L)
                            .expireGroupsUponCompletion(true)
                            .correlationStrategy(message ->
                                    KafkaHeaderUtils.getOrDefault(message.getHeaders(), MY_CORRELATION_HEADER, "")))
            .transform(myObjectTransformer) // Exception here is not sent to errorChannel
            .channel(acknowledgeMyObjectFlow().getInputChannel())
            .get();
}
If we add an explicit channel which is not of type DirectChannel, the error handling works as expected. Working code looks like:
// ...
.aggregate(aggregatorSpec -> ...)
.channel(MessageChannels.queue())
.transform(myObjectTransformer) // Now the exception is sent to errorChannel
.channel(acknowledgeMyObjectFlow().getInputChannel())
// ...
We'd also like to mention that we have a very similar flow with an aggregation where error handling works as expected (the exception is sent to the errorChannel).
So we were actually able to get the code running, but since error handling is a very critical part of the application, we'd really like to understand how we can ensure that every error is sent to the configured channel, and why explicitly setting a QueueChannel leads to the wanted behavior.
Thanks in advance
You can add this before the aggregator:
.enrichHeaders(headers -> headers.header(MessageHeaders.ERROR_CHANNEL, errorHandler.getInputChannel()))
The .channel(MessageChannels.queue()) is misleading here, because the error is sent to the global errorChannel, which apparently is the same as your errorHandler.getInputChannel().
The problem is that .groupTimeout(2000L) is performed on a separate TaskScheduler thread, and when an error happens downstream, there is no knowledge about the try..catch in that Kafka.messageDrivenChannelAdapter.
Feel free to raise a GH issue, so we will think about populating that errorChannel into message headers from the MessageProducerSupport, like that Kafka.messageDrivenChannelAdapter. That way, the error handling would be the same independently of the async nature of the downstream flow.
UPDATE
Please, try this as a solution:
.transform(Transformers.fromJson(MyDataObject.class)) // An exception here is sent to errorChannel
.enrichHeaders(headers -> headers.header(MessageHeaders.ERROR_CHANNEL, errorHandler.getInputChannel()))
.aggregate(aggregatorSpec ->
The enrichHeaders() should do the trick of determining a proper error channel to send the error to.
Plus, your MyDataObjectTransformer has to be modified to throw this:
throw new MessageTransformationException(source, "test");
The point is that there is logic like this when an exception is caught by the endpoint:
if (handler != null) {
    try {
        handler.handleMessage(message);
        return true;
    }
    catch (Exception e) {
        throw IntegrationUtils.wrapInDeliveryExceptionIfNecessary(message,
                () -> "Dispatcher failed to deliver Message", e);
    }
}
where:
if (!(ex instanceof MessagingException) ||
        ((MessagingException) ex).getFailedMessage() == null) {
    runtimeException = new MessageDeliveryException(message, text.get(), ex);
}
And then in the AbstractCorrelatingMessageHandler:
catch (MessageDeliveryException ex) {
    logger.warn(ex, () ->
            "The MessageGroup [" + groupId +
                    "] is rescheduled by the reason of: ");
    scheduleGroupToForceComplete(groupId);
}
That's how your exception does not reach the error channel.
You may consider not using that MessageTransformationException. The logic in the wrapping handler is like this:
protected Object handleRequestMessage(Message<?> message) {
    try {
        return this.transformer.transform(message);
    }
    catch (Exception e) {
        if (e instanceof MessageTransformationException) { // NOSONAR
            throw (MessageTransformationException) e;
        }
        throw new MessageTransformationException(message, "Failed to transform Message in " + this, e);
    }
}
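In other words, a plain exception thrown from your transformer is re-wrapped with the failed message attached, so the dispatcher does not turn it into a MessageDeliveryException and the aggregator does not swallow it. A minimal sketch of such a transformer, assuming it extends AbstractTransformer (the class name comes from the thread; the body is hypothetical):
// Hypothetical: throwing a plain RuntimeException is enough, because the
// wrapping handler above turns it into a MessageTransformationException
// carrying the failed message, which then bypasses the
// MessageDeliveryException branch shown earlier.
public class MyDataObjectTransformer extends AbstractTransformer {

    @Override
    protected Object doTransform(Message<?> message) {
        throw new RuntimeException("transformation failed"); // reaches the error channel
    }
}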
UPDATE 2
OK. I see that you use Spring Boot, and that one does not register a respective ErrorHandler on the TaskScheduler used in the aggregator for the group timeout feature.
Please, consider adding this bean to your configuration:
@Bean
TaskSchedulerCustomizer taskSchedulerCustomizer(ErrorHandler integrationMessagePublishingErrorHandler) {
    return taskScheduler -> taskScheduler.setErrorHandler(integrationMessagePublishingErrorHandler);
}
And then feel free to raise a GH issue for Spring Boot to make this customization a default one in the auto-configuration.

Spring Integration read and process a file without polling

I'm currently trying to write an integration flow that reads a CSV file and processes it in chunks (calls an API for enrichment), then writes it back out as a new CSV. I currently have an example working perfectly, except that it polls a directory. What I would like to do is pass the file path and file name to the integration flow in the headers and then just perform the operation on that one file.
Here is my code for the polling example, which works great except for the polling.
@Bean
@SuppressWarnings("unchecked")
public IntegrationFlow getUIDsFromTTDandOutputToFile() {
    Gson gson = new GsonBuilder().disableHtmlEscaping().create();
    return IntegrationFlows
            .from(Files.inboundAdapter(new File(inputFilePath))
                            .filter(getFileFilters())
                            .preventDuplicates(true)
                            .autoCreateDirectory(true),
                    c -> c.poller(Pollers.fixedRate(1000)
                            .maxMessagesPerPoll(1)))
            .log(Level.INFO, m -> "TTD UID 2.0 Integration Start")
            .split(Files.splitter())
            .channel(c -> c.executor(Executors.newFixedThreadPool(7)))
            .handle((p, h) -> new CSVUtils().csvColumnSelector((String) p, ttdColNum))
            .channel("chunkingChannel")
            .get();
}
@Bean
@ServiceActivator(inputChannel = "chunkingChannel")
public AggregatorFactoryBean chunker() {
    log.info("Initializing Chunker");
    AggregatorFactoryBean aggregator = new AggregatorFactoryBean();
    aggregator.setReleaseStrategy(new MessageCountReleaseStrategy(batchSize));
    aggregator.setExpireGroupsUponCompletion(true);
    aggregator.setGroupTimeoutExpression(new ValueExpression<>(100L));
    aggregator.setOutputChannelName("chunkingOutput");
    aggregator.setProcessorBean(new DefaultAggregatingMessageGroupProcessor());
    aggregator.setSendPartialResultOnExpiry(true);
    aggregator.setCorrelationStrategy(new CorrelationStrategyIml());
    return aggregator;
}
@Bean
public IntegrationFlow enrichFlow() {
    return IntegrationFlows.from("chunkingOutput")
            .handle((p, h) -> gson.toJson(new TradeDeskUIDRequestPayloadBean((Collection<String>) p)))
            .enrichHeaders(eh -> eh.async(false)
                    .header("accept", "application/json")
                    .header("contentType", "application/json")
                    .header("Authorization", "Bearer [TOKEN]"))
            .log(Level.INFO, m -> "Sending request of size " + batchSize + " to: " + TTD_UID_IDENTITY_MAP)
            .handle(Http.outboundGateway(TTD_UID_IDENTITY_MAP)
                    .requestFactory(
                            alliantPooledHttpConnection.get_httpComponentsClientHttpRequestFactory())
                    .httpMethod(HttpMethod.POST)
                    .expectedResponseType(TradeDeskUIDResponsePayloadBean.class)
                    .extractPayload(true))
            .log(Level.INFO, m -> "Writing response to output file")
            .handle((p, h) -> ((TradeDeskUIDResponsePayloadBean) p).printMappedBodyAsCSV2())
            .handle(Files.outboundAdapter(new File(outputFilePath))
                    .autoCreateDirectory(true)
                    .fileExistsMode(FileExistsMode.APPEND)
                    //.appendNewLine(true)
                    .fileNameGenerator(m -> m.getHeaders().getOrDefault("file_name", "outputFile") + "_out.csv"))
            .get();
}
public class CorrelationStrategyIml implements CorrelationStrategy {
    @Override
    public Object getCorrelationKey(Message<?> message) {
        return message.getHeaders().getOrDefault("", 1);
    }
}
@Component
public class CSVUtils {
    @ServiceActivator
    String csvColumnSelector(String inputStr, Integer colNum) {
        return StringUtils.commaDelimitedListToStringArray(inputStr)[colNum];
    }
}
private FileListFilter<File> getFileFilters() {
    ChainFileListFilter<File> cflf = new ChainFileListFilter<>();
    cflf.addFilter(new LastModifiedFileListFilter(30));
    cflf.addFilter(new AcceptOnceFileListFilter<>());
    cflf.addFilter(new SimplePatternFileListFilter(fileExtention));
    return cflf;
}
If you know the file, then there is no need for any special component from the framework. You just start your flow from a channel and send a message to it with a File object as the payload. That message is carried on to the splitter in your flow, and everything works OK.
If you really want a high-level API for this, you can expose a @MessagingGateway as the beginning of that flow, and the end user calls your gateway method with the desired file as an argument. The framework creates a message on your behalf and sends it to the message channel in the flow for processing; see the sketch after the links below.
See more info in docs about gateways:
https://docs.spring.io/spring-integration/docs/current/reference/html/messaging-endpoints.html#gateway
https://docs.spring.io/spring-integration/docs/current/reference/html/dsl.html#integration-flow-as-gateway
And also a DSL definition starting from some explicit channel:
https://docs.spring.io/spring-integration/docs/current/reference/html/dsl.html#java-dsl-channels
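A minimal sketch of that gateway approach (the interface, method, and "fileProcessingChannel" name are all hypothetical; only Files.splitter() and the downstream steps come from the question):
// Hypothetical gateway: calling process(file) wraps the File in a message
// and sends it to the channel that starts the flow.
@MessagingGateway
public interface FileProcessingGateway {

    @Gateway(requestChannel = "fileProcessingChannel")
    void process(File file);
}

@Bean
public IntegrationFlow fileProcessingFlow() {
    return IntegrationFlows.from("fileProcessingChannel")
            .split(Files.splitter()) // same splitter as in the polling variant
            // ... the rest of the existing processing steps ...
            .get();
}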

Spring-Integration Webflux exception handling

If an exception occurs in a spring-integration WebFlux flow, the exception itself (with stack trace) is sent back to the caller as the payload through MessagePublishingErrorHandler, which uses an error channel from the "errorChannel" header, not the default error channel.
How can I set up an error handler similar to WebExceptionHandler? I want to produce an HTTP status code and possibly a DefaultErrorAttributes object as the response.
Simply defining a flow that starts from the errorChannel doesn't work; the error message won't end up there. I tried to define my own fluxErrorChannel, but it appears that it is also not used as the error channel, and the errors do not end up in my errorFlow:
@Bean
public IntegrationFlow fooRestFlow() {
    return IntegrationFlows.from(
                    WebFlux.inboundGateway("/foo")
                            .requestMapping(r -> r.methods(HttpMethod.POST))
                            .requestPayloadType(Map.class)
                            .errorChannel(fluxErrorChannel()))
            .channel(bazFlow().getInputChannel())
            .get();
}
@Bean
public MessageChannel fluxErrorChannel() {
    return MessageChannels.flux().get();
}
@Bean
public IntegrationFlow errorFlow() {
    return IntegrationFlows.from(fluxErrorChannel())
            .transform(source -> source)
            .enrichHeaders(h -> h.header(HttpHeaders.STATUS_CODE, HttpStatus.BAD_GATEWAY))
            .get();
}
@Bean
public IntegrationFlow bazFlow() {
    return f -> f.split(Map.class, map -> map.get("items"))
            .channel(MessageChannels.flux())
            .<String>handle((p, h) -> {
                throw new RuntimeException();
            })
            .aggregate();
}
UPDATE
In MessagingGatewaySupport.doSendAndReceiveMessageReactive, my error channel defined on the WebFlux.inboundGateway is never used to set the error channel; rather, the error channel is always the replyChannel, which is created here:
FutureReplyChannel replyChannel = new FutureReplyChannel();
Message<?> requestMessage = MutableMessageBuilder.fromMessage(message)
        .setReplyChannel(replyChannel)
        .setHeader(this.messagingTemplate.getSendTimeoutHeader(), null)
        .setHeader(this.messagingTemplate.getReceiveTimeoutHeader(), null)
        .setErrorChannel(replyChannel)
        .build();
The error channel is ultimately reset to the originalErrorChannelHeader in Mono.fromFuture, but that error channel is null in my case. Also, the onErrorResume lambda is never invoked:
return Mono.fromFuture(replyChannel.messageFuture)
        .doOnSubscribe(s -> {
            if (!error && this.countsEnabled) {
                this.messageCount.incrementAndGet();
            }
        })
        .<Message<?>>map(replyMessage ->
                MessageBuilder.fromMessage(replyMessage)
                        .setHeader(MessageHeaders.REPLY_CHANNEL, originalReplyChannelHeader)
                        .setHeader(MessageHeaders.ERROR_CHANNEL, originalErrorChannelHeader)
                        .build())
        .onErrorResume(t -> error ? Mono.error(t) : handleSendError(requestMessage, t));
How is this intended to work?
It's a bug; the ErrorMessage created for the exception by the error handler is sent to the errorChannel header (which has to be the replyChannel so the gateway gets the result). The gateway should then invoke the error flow (if present) and return the result of that.
https://jira.spring.io/browse/INT-4541

Spring Integration: MessageSource doesn't honor errorChannel header

I have the following flow:
@Resource(name = S3_CLIENT_BEAN)
private MessageSource<InputStream> messageSource;

@Bean
public IntegrationFlow fileStreamingFlow() {
    return IntegrationFlows.from(s3Properties.getFileStreamingInputChannel())
            .enrichHeaders(spec -> spec.header(ERROR_CHANNEL, S3_ERROR_CHANNEL, true))
            .handle(String.class, (fileName, h) -> {
                if (messageSource instanceof S3StreamingMessageSource) {
                    S3StreamingMessageSource s3StreamingMessageSource = (S3StreamingMessageSource) messageSource;
                    ChainFileListFilter<S3ObjectSummary> chainFileListFilter = new ChainFileListFilter<>();
                    chainFileListFilter.addFilters(...);
                    s3StreamingMessageSource.setFilter(chainFileListFilter);
                    return s3StreamingMessageSource.receive();
                }
                return messageSource.receive();
            }, spec -> spec
                    .requiresReply(false)) // in case all messages got filtered out
            .channel(s3Properties.getFileStreamingOutputChannel())
            .get();
}
I found that if s3StreamingMessageSource.receive() throws an exception, the error ends up in the error channel configured for the previous flow in the pipeline, not the S3_ERROR_CHANNEL that's configured for this flow. Not sure if it's related to this question.
The s3StreamingMessageSource.receive() is called from the SourcePollingChannelAdapter:
protected Message<?> receiveMessage() {
    return this.source.receive();
}
This one is called from the AbstractPollingEndpoint:
private boolean doPoll() {
    message = this.receiveMessage();
    ...
    this.handleMessage(message);
    ...
}
That handleMessage() does this:
this.messagingTemplate.send(getOutputChannel(), message);
So, that is definitely still far away from the .enrichHeaders(spec -> spec.header(ERROR_CHANNEL, S3_ERROR_CHANNEL, true)) mentioned downstream.
However, you can still catch an exception in that S3_ERROR_CHANNEL. Pay attention to the second argument of IntegrationFlows.from():
IntegrationFlows.from(s3Properties.getFileStreamingInputChannel(),
        e -> e.poller(Pollers.fixedDelay(...)
                .errorChannel(S3_ERROR_CHANNEL)))
Or, if your current configuration relies on a global poller somewhere, configure an errorChannel there.
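A sketch of that global-poller variant (the delay value is an assumption; PollerMetadata.DEFAULT_POLLER is the conventional name for the default poller bean):
// Hypothetical default poller: every polling endpoint without its own
// poller picks this one up, including its errorChannel.
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata defaultPoller() {
    return Pollers.fixedDelay(1000)
            .errorChannel(S3_ERROR_CHANNEL)
            .get();
}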

Handling errors after a message splitter with direct channels

I'm working on a service which sends emails using the Spring Integration Java DSL.
I have a batch message which is split into a collection of individual messages which will be turned into emails.
The issue I am experiencing is that if one of these individual messages throws an error, the other messages in the batch are not processed.
Is there a way to configure the flow so that when a message throws an exception, the exception is handled gracefully and the next message in the batch is processed?
The following code achieves the functionality I would like, but I'm wondering if there is an easier / better way to achieve this, ideally in a single IntegrationFlow:
@Bean
public MessageChannel individualFlowInputChannel() {
    return MessageChannels.direct().get();
}
@Bean
public IntegrationFlow batchFlow() {
    return f -> f
            .split()
            .handle(message -> {
                try {
                    individualFlowInputChannel().send(message);
                }
                catch (Exception e) {
                    e.printStackTrace();
                }
            });
}
@Bean
public IntegrationFlow individualFlow() {
    return IntegrationFlows.from(individualFlowInputChannel())
            .handle((payload, headers) -> {
                throw new RuntimeException("BOOM!");
            })
            .get();
}
You can add ExpressionEvaluatingRequestHandlerAdvice to the last handle() definition with its trapException option:
/**
 * If true, any exception will be caught and null returned.
 * Default false.
 * @param trapException true to trap Exceptions.
 */
public void setTrapException(boolean trapException) {
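A sketch of wiring that advice into the flow from the question (the advice bean itself is new; the handler mirrors the individualFlow above):
@Bean
public ExpressionEvaluatingRequestHandlerAdvice trapExceptionAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setTrapException(true); // catch the exception, return null, keep the batch going
    return advice;
}

@Bean
public IntegrationFlow individualFlow() {
    return IntegrationFlows.from(individualFlowInputChannel())
            .handle((payload, headers) -> {
                throw new RuntimeException("BOOM!");
            }, e -> e.advice(trapExceptionAdvice())) // attach the advice to this handler
            .get();
}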
On the other hand, if you are talking about sending emails, wouldn't it be better to consider doing that in a separate thread for each split item? In this case, the ExecutorChannel after .split() comes to the rescue!
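For that variant, a minimal sketch (the pool size is an assumption): with an ExecutorChannel, a downstream exception is handled on the executor thread and published to the errorChannel, so the remaining split items keep flowing.
@Bean
public IntegrationFlow batchFlow() {
    return f -> f
            .split()
            // hand each split message to its own thread; one failing
            // message no longer blocks the rest of the batch
            .channel(MessageChannels.executor(Executors.newFixedThreadPool(10)))
            .handle(message -> individualFlowInputChannel().send(message));
}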
