Splitter aborts during exception without processing subsequent messages - spring-integration

I have a requirement to split messages and process them one by one. If any message fails, I would like to report it to an error channel and resume processing the next available messages.
I am using the spring cloud aws stream starter with 1.0.0-SNAPSHOT.
I wrote a sample program using a splitter:
@Bean
public MessageChannel channelSplitOne() {
    return new DirectChannel();
}

@StreamListener(INTERNAL_CHANNEL)
public void channelOne(String message) {
    if (message.equals("l")) {
        throw new RuntimeException("Error due to l");
    }
    System.out.println("Internal: " + message);
}

@Splitter(inputChannel = Sink.INPUT, outputChannel = INTERNAL_CHANNEL)
public List<Message> extractItems(Message<String> input) {
    return Arrays.stream(input.getPayload().split(""))
            .map(s -> MessageBuilder.withPayload(s).copyHeaders(input.getHeaders()).build())
            .collect(Collectors.toList());
}
When I send the message "Hello", the expectation is that 'H', 'e', and 'o' shall be processed, while 'l' shall be reported as an error. But here, after 'l', processing is not resumed.
Is there any way to achieve this?

You can do that, but with a @ServiceActivator instead of @StreamListener. The former has an adviceChain option where you can inject an ExpressionEvaluatingRequestHandlerAdvice: https://docs.spring.io/spring-integration/docs/5.0.4.RELEASE/reference/html/messaging-endpoints-chapter.html#expression-advice.
The problem is that the splitter is like a regular loop in Java, so to continue after an error we would somehow need to add a try...catch there. But that is not the splitter's responsibility. Therefore we have to move such logic into the place where the error actually occurs.
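A minimal sketch of that suggestion, reusing the INTERNAL_CHANNEL handler from the question; the advice bean name and the errorChannel destination are assumptions:

@Bean
public ExpressionEvaluatingRequestHandlerAdvice errorAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    // send failed messages to the (assumed) errorChannel instead of propagating the exception
    advice.setOnFailureExpressionString("payload");
    advice.setFailureChannelName("errorChannel");
    // swallow the exception so the splitter keeps iterating over the remaining items
    advice.setTrapException(true);
    return advice;
}

@ServiceActivator(inputChannel = INTERNAL_CHANNEL, adviceChain = "errorAdvice")
public void channelOne(String message) {
    if (message.equals("l")) {
        throw new RuntimeException("Error due to l");
    }
    System.out.println("Internal: " + message);
}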

Related

Spring Integration aws - KinesisMessageHandler Direct Channel

My message handler for publishing messages to the Kinesis stream is as follows:
@Bean
public MessageHandler kinesisMessageHandler(final AmazonKinesisAsync amazonKinesis,
        @Qualifier("successChannel") MessageChannel successChannel,
        @Qualifier("errorChannel") MessageChannel errorChannel) {
    KinesisMessageHandler kinesisMessageHandler = new KinesisMessageHandler(amazonKinesis);
    kinesisMessageHandler.setSync(false);
    kinesisMessageHandler.setOutputChannel(successChannel);
    kinesisMessageHandler.setFailureChannel(errorChannel);
    return kinesisMessageHandler;
}

@Bean(name = "errorChannel")
public MessageChannel errorChannel() {
    return MessageChannels.direct().get();
}

@Bean(name = "successChannel")
public MessageChannel successChannel() {
    return MessageChannels.direct().get();
}
The setSync flag is set to false so that the messages are processed asynchronously. Also, I have created separate IntegrationFlows to receive and process the Kinesis responses from the success and error channels.
@Bean
public IntegrationFlow successMessageIntegrationFlow(MessageChannel successChannel,
        MessageChannel inboundKinesisMessageChannel,
        MessageReceiverServiceActivator kinesisMessageReceiverServiceActivator) {
    return IntegrationFlows.from(successChannel).channel(inboundKinesisMessageChannel)
            .handle(kinesisMessageReceiverServiceActivator, "receiveMessage").get();
}

@Bean
public IntegrationFlow errorMessageIntegrationFlow(MessageChannel errorChannel,
        MessageChannel inboundKinesisErrorChannel,
        MessageReceiverServiceActivator kinesisErrorReceiverServiceActivator) {
    return IntegrationFlows.from(errorChannel).channel(inboundKinesisErrorChannel)
            .handle(kinesisErrorReceiverServiceActivator, "receiveMessage").get();
}
I wanted to know if you see any issues in using a DirectChannel to receive success and error responses from Kinesis and processing them with an IntegrationFlow. As far as I know, with a DirectChannel the producer blocks during send until the consumer finishes its work and returns control to the caller. Is it a correct assumption that here the producer is executed on a different thread pool by the AmazonKinesisAsyncClient, and that the producer will not wait for the IntegrationFlow to process the messages? Let me know if I need to implement it differently.
Your assumption about blocking is correct: control does not come back to the producing thread until the consumer finishes its work. So, if you have a limited number of threads in that Kinesis client, you need to be sure that you free them as soon as possible. You might consider having those callbacks delivered to a QueueChannel instead. They are asynchronous anyway, but that way they won't hold the Kinesis client's threads.
You still have a flaw in your flows: .channel(inboundKinesisMessageChannel). That means the same channel is in the middle of two different flows, and if it is a direct one, you end up with round-robin distribution. I would just remove it altogether.
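A sketch of the QueueChannel variant for the success side, assuming the bean names from the question; the queue capacity and poller interval are illustrative:

@Bean(name = "successChannel")
public MessageChannel successChannel() {
    // the Kinesis callback thread only enqueues the message and returns immediately
    return MessageChannels.queue(100).get();
}

@Bean
public IntegrationFlow successMessageIntegrationFlow(MessageChannel successChannel,
        MessageReceiverServiceActivator kinesisMessageReceiverServiceActivator) {
    return IntegrationFlows.from(successChannel)
            // a QueueChannel needs a poller on the consuming side
            .handle(kinesisMessageReceiverServiceActivator, "receiveMessage",
                    e -> e.poller(p -> p.fixedDelay(100)))
            .get();
}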

Java: MQTT MessageProducerSupport to Flux

I have a simple MQTT client that outputs received messages via an IntegrationFlow:
@Bean
public MqttPahoClientFactory mqttClientFactory() {
    DefaultMqttPahoClientFactory factory = new DefaultMqttPahoClientFactory();
    MqttConnectOptions options = new MqttConnectOptions();
    options.setServerURIs(new String[] { "tcp://test.mosquitto.org:1883" });
    factory.setConnectionOptions(options);
    return factory;
}

@Bean
public MessageProducerSupport mqttInbound() {
    MqttPahoMessageDrivenChannelAdapter adapter = new MqttPahoMessageDrivenChannelAdapter(
            "myConsumer",
            mqttClientFactory(),
            "/test/#");
    adapter.setCompletionTimeout(5000);
    adapter.setConverter(new DefaultPahoMessageConverter());
    adapter.setQos(1);
    return adapter;
}

@Bean
public IntegrationFlow mqttInFlow() {
    return IntegrationFlows.from(mqttInbound())
            .transform(p -> p + ", received from MQTT")
            .handle(logger())
            .get();
}

private LoggingHandler logger() {
    LoggingHandler loggingHandler = new LoggingHandler("INFO");
    loggingHandler.setLoggerName("siSample");
    return loggingHandler;
}
However, I need to pipe all received messages into a Flux for further processing:
public Flux<String> mqttChannel() {
    ...
    return mqttFlux;
}
How can I do that? The LoggingHandler receives all messages from the IntegrationFlow. Couldn't my Flux get its input in a similar fashion, by passing it somehow to the IntegrationFlow's handle function?
The MQTT example code is taken from https://github.com/spring-projects/spring-integration-samples/blob/master/basic/mqtt/src/main/java/org/springframework/integration/samples/mqtt/Application.java
Attempt: Following Artem Bilan's advice, I'm now trying to use toReactivePublisher() to convert my inbound IntegrationFlow to a Flux.
public Flux<String> mqttChannel() {
    Publisher<Message<Object>> flow = IntegrationFlows.from(mqttInbound())
            .toReactivePublisher();
    Flux<String> mqttFlux = Flux.from(flow)
            .log()
            .map(i -> "TESTING: Received a MQTT message");
    return mqttFlux;
}
Running the example, I get the following error:
10:14:39.541 [MQTT Call: myConsumer] ERROR o.s.i.m.i.MqttPahoMessageDrivenChannelAdapter - Unhandled exception for GenericMessage [payload=OFF,26.70,65.00,663,-62,192.168.2.100,0.026,25,4,6,7,933,278,27,4,1,0,1580496218,730573600,1800000,1980000,1580496218,730573600,10800000,11880000, headers={mqtt_receivedRetained=true, mqtt_id=0, mqtt_duplicate=false, id=3f7565aa-ff4f-c389-d8a9-712d4f06f1cb, mqtt_receivedTopic=/083B7036697886C41D2DF2FD919143EE/MasterBedroom/Sensor/, mqtt_receivedQos=0, timestamp=1602231279537}]
Conclusion: as soon as the first message arrives, it is handled incorrectly and an exception is thrown.
Please read this doc: https://docs.spring.io/spring-integration/docs/5.3.2.RELEASE/reference/html/reactive-streams.html#reactive-streams
It is not clear what you would like to achieve with that "my flux" and how that should look, but for your current configuration there are a couple of solutions.
You can use a FluxMessageChannel, which is already a Publisher, so you can simply use Flux.from() and subscribe to it for consuming data produced by the mentioned MqttPahoMessageDrivenChannelAdapter.
Another way is to use toReactivePublisher() on the IntegrationFlowBuilder to expose the whole flow as a reactive Publisher source. In this case, of course, you can't use the LoggingHandler, because it is one-way and ends your flow exactly there. You may consider using a log() operator instead: https://docs.spring.io/spring-integration/docs/5.3.2.RELEASE/reference/html/dsl.html#java-dsl-log
By the way, the FluxMessageChannel is publish-subscribe, so you can have it in the flow for those logs and also reference it externally for a Flux.from() subscription. All the subscribers to this channel are going to get the same message.
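A sketch of the FluxMessageChannel variant, reusing mqttInbound() from the question; the bean names and the payload mapping are illustrative:

@Bean
public FluxMessageChannel mqttChannel() {
    return new FluxMessageChannel();
}

@Bean
public IntegrationFlow mqttInFlow() {
    return IntegrationFlows.from(mqttInbound())
            .transform(p -> p + ", received from MQTT")
            .log("siSample") // log() lets the flow continue, unlike the one-way LoggingHandler
            .channel(mqttChannel())
            .get();
}

// elsewhere: subscribe to the same channel as a Flux
Flux<String> mqttFlux = Flux.from(mqttChannel())
        .map(message -> (String) message.getPayload());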

How to pass object associated with message past outbound channel adapter

I have the following:
[inbound channel adapter] -> ... -> foo -> [outbound channel adapter] -> bar
How can I write my spring-integration app so that foo can pass along an extra object that's not part of the message the [outbound channel adapter] is to consume, such that bar gets it?
My app basically receives messages from AWS SQS (using spring-integration-aws), does some filtering / transformations, then publishes a message to Apache Kafka (using spring-integration-kafka), and if and only if that succeeds, deletes the original message off the SQS queue.
For that reason, when I receive the SQS message, I want to hold onto the receipt handle / acknowledgement object, transform the rest of the message into the Kafka message to be published, and then if that succeeds, make use of that receipt handle / acknowledgement object to dequeue the original message.
So say I'm using this example code from the spring-integration-kafka docs:
@Bean
@ServiceActivator(inputChannel = "toKafka", outputChannel = "result")
public MessageHandler handler() throws Exception {
    KafkaProducerMessageHandler<String, String> handler =
            new KafkaProducerMessageHandler<>(kafkaTemplate());
    handler.setTopicExpression(new LiteralExpression("someTopic"));
    handler.setMessageKeyExpression(new LiteralExpression("someKey"));
    handler.setFailureChannel(failures());
    return handler;
}

@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}

@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, this.brokerAddress);
    // set more properties
    return new DefaultKafkaProducerFactory<>(props);
}
With the above, if I have a message message and some extra, unrelated info extra, what do I send to the toKafka channel such that handler will consume message, and if that was successful, the result channel will receive extra?
Outbound channel adapters don't produce output - they are one-way only and end the flow.
You can make toKafka a PublishSubscribeChannel and add a second service activator; by default, the second will only be called if the first is successful.
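A sketch of that suggestion, assuming the toKafka and result channel names from the question; the "extra" header name and the sync setting are assumptions:

@Bean
public MessageChannel toKafka() {
    // subscribers are called in order; if the first one throws, the second is skipped
    return new PublishSubscribeChannel();
}

@Bean
@ServiceActivator(inputChannel = "toKafka")
@Order(1)
public MessageHandler handler() throws Exception {
    KafkaProducerMessageHandler<String, String> handler =
            new KafkaProducerMessageHandler<>(kafkaTemplate());
    handler.setTopicExpression(new LiteralExpression("someTopic"));
    handler.setMessageKeyExpression(new LiteralExpression("someKey"));
    // wait for the send result so a failure prevents the second subscriber from running
    handler.setSync(true);
    return handler;
}

@ServiceActivator(inputChannel = "toKafka", outputChannel = "result")
@Order(2)
public Object forwardExtra(@Header("extra") Object extra) {
    // runs only after a successful Kafka send; passes the extra object to 'result'
    return extra;
}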

Is there a default output channel if a DSL flow ends with an endpoint?

The last element in the code for the following DSL flow is a service activator (the .handle method).
Is there a default output direct channel to which I can subscribe here? If I understand things correctly, the output channel must be present.
I know I can add .channel("name") at the end, but the question is what happens if it's not written explicitly.
Here is the code:
@SpringBootApplication
@IntegrationComponentScan
public class QueueChannelResearch {

    @Bean
    public IntegrationFlow lambdaFlow() {
        return f -> f.channel(c -> c.queue(50))
                .handle(System.out::println);
    }

    public static void main(String[] args) {
        ConfigurableApplicationContext ctx = SpringApplication.run(QueueChannelResearch.class, args);
        MessageChannel inputChannel = ctx.getBean("lambdaFlow.input", MessageChannel.class);
        for (int i = 0; i < 1000; i++) {
            inputChannel.send(MessageBuilder.withPayload("w" + i)
                    .build());
        }
        ctx.close();
    }
}
Another question is about the QueueChannel. The program hangs if I comment out handle() and completes if I uncomment it. Does that mean that handle() adds a default poller before it?
return f -> f.channel(c -> c.queue(50));
// .handle(System.out::println);
No, it doesn't work that way.
Just recall that an integration flow is a pipes-and-filters architecture, and the result of the current step is sent to the next one. Since you use .handle(System.out::println), there is no output from that println() call, therefore nothing is returned from which to build a Message to send to the next channel, if any. So the flow stops there. A void return type or a null return value is a signal for the service activator to stop the flow. Consider your .handle(System.out::println) as an <outbound-channel-adapter> in the XML configuration.
And yes: there are no default channels, unless you define one via the replyChannel header in advance. But again: your service method must return something valuable.
The output from a service activator is optional; that's why we didn't introduce an extra operator for the Outbound Channel Adapter.
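To illustrate the difference, a hypothetical flow fragment (the "next" channel name is made up):

// flow continues: the returned value becomes the payload of the next Message
.handle((payload, headers) -> payload.toString().toUpperCase())
.channel("next")

// flow ends here: a void handler acts as an outbound channel adapter
.handle(System.out::println)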
The question about the QueueChannel would be better handled in a separate SO thread. There is no default poller unless you declare one as a PollerMetadata.DEFAULT_POLLER. You might be using some library which declares that one for you.
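A minimal sketch of declaring such a default poller yourself (the fixed-delay value is illustrative):

@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata defaultPoller() {
    // picked up by any polling consumer that has no explicit poller of its own,
    // e.g. a .handle() subscribed to a QueueChannel
    return Pollers.fixedDelay(100).get();
}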

Handling errors after a message splitter with direct channels

I'm working on a service which sends emails using the Spring Integration Java DSL.
I have a batch message which is split into a collection of individual messages which will be turned into emails.
The issue I am experiencing is that if one of these individual messages throws an error, the other messages in the batch are not processed.
Is there a way to configure the flow so that when a message throws an exception, the exception is handled gracefully and the next message in the batch is processed?
The following code achieves the functionality I would like, but I'm wondering if there is an easier / better way to achieve this, ideally in a single IntegrationFlow:
@Bean
public MessageChannel individualFlowInputChannel() {
    return MessageChannels.direct().get();
}

@Bean
public IntegrationFlow batchFlow() {
    return f -> f
            .split()
            .handle(message -> {
                try {
                    individualFlowInputChannel().send(message);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
}

@Bean
public IntegrationFlow individualFlow() {
    return IntegrationFlows.from(individualFlowInputChannel())
            .handle((payload, headers) -> {
                throw new RuntimeException("BOOM!");
            }).get();
}
You can add an ExpressionEvaluatingRequestHandlerAdvice to the last handle() definition with its trapException option:
/**
 * If true, any exception will be caught and null returned.
 * Default false.
 * @param trapException true to trap Exceptions.
 */
public void setTrapException(boolean trapException) {
On the other hand, since you are talking about sending emails, wouldn't it be better to do that in a separate thread for each split item? In that case an ExecutorChannel after .split() comes to the rescue!
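A sketch of the single-flow variant combining both ideas; the advice bean, the executor, and the deliberately failing handler are illustrative:

@Bean
public ExpressionEvaluatingRequestHandlerAdvice trapAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    // catch the exception so the next split item is still processed
    advice.setTrapException(true);
    return advice;
}

@Bean
public IntegrationFlow batchFlow() {
    return f -> f
            .split()
            // optional: hand each split item to its own thread
            .channel(MessageChannels.executor(Executors.newCachedThreadPool()))
            .handle((payload, headers) -> {
                throw new RuntimeException("BOOM!");
            }, e -> e.advice(trapAdvice()));
}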
