Ignore messages from an MQ topic from a specific channel - spring-integration

There is an IBM MQ topic that accepts two types of messages: Orders and Shipments.
I have a Spring Boot subscriber app that is interested only in the Shipment message type.
Below is how I am routing the messages by channel. If the inbound message is neither of the above types, it is routed to an errorChannel that I have.
Here, if I do not define an orderChannel, the app throws an error saying there is no proper channel for the inbound message.
How do I silently ignore the messages of type Order here?
@Bean
@ServiceActivator(inputChannel = "routerChannel")
public HeaderValueRouter router() throws Exception {
    HeaderValueRouter router = new HeaderValueRouter(messageType);
    router.setChannelMapping(shipment, "shipmentChannel");
    router.setChannelMapping(order, "orderChannel");
    router.setDefaultOutputChannel(invalidHeaderValueChannel);
    return router;
}
Currently I have the below code snippet, which I need only to avoid the error when there is an Order message.
@ServiceActivator(inputChannel = "orderChannel")
public void getInboundOrderMessage(Message<?> message) throws Exception {
    logger.info("Inbound Order message...");
    String payload = (String) message.getPayload();
    logger.info("Order Header: {}, payload: \n{}", message.getHeaders(), payload);
}
Below is how I have the MsgDrivenChannelAdapter defined:
@MessageEndpoint
public class MsgDrivenChannelAdapter {

    private AbstractMessageListenerContainer messageListenerContainer;
    private DirectChannel inboundErrorChannel;
    private DirectChannel routerChannel;

    public MsgDrivenChannelAdapter(AbstractMessageListenerContainer pMessageListenerContainer,
            DirectChannel pInboundErrorChannel,
            DirectChannel pRouterChannel) {
        this.messageListenerContainer = pMessageListenerContainer;
        this.inboundErrorChannel = pInboundErrorChannel;
        this.routerChannel = pRouterChannel;
    }

    @Bean
    public IntegrationFlow jmsInboundFlow() throws Exception {
        return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(messageListenerContainer)
                .errorChannel(inboundErrorChannel))
                .channel(routerChannel)
                .get();
    }
}
Is there any way I can avoid this? Thanks in advance.

See this option on the router:
/**
 * When true (default), if a resolved channel key does not exist in the channel map,
 * the key itself is used as the channel name, which we will attempt to resolve to a
 * channel. Set to false to disable this feature. This could be useful to prevent
 * malicious actors from generating a message that could cause the message to be
 * routed to an unexpected channel, such as one upstream of the router, which would
 * cause a stack overflow.
 * @param channelKeyFallback false to disable the fall back.
 * @since 5.2
 */
public void setChannelKeyFallback(boolean channelKeyFallback) {
So, with that set to false, it does not fall back to order as a channel name.
The mapping then resolves to null and the logic goes like this:
if (!sent) {
    getDefaultOutputChannel();
    if (this.defaultOutputChannel != null) {
        this.messagingTemplate.send(this.defaultOutputChannel, message);
    }
    else {
        throw new MessageDeliveryException(message, "No channel resolved by router '" + this
                + "' and no 'defaultOutputChannel' defined.");
    }
}
If you want to just ignore the message and don't want that MessageDeliveryException, configure the defaultOutputChannel as the nullChannel.
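A minimal sketch of the router from the question, adjusted along those lines (the nullChannel bean is provided by the framework and silently discards anything sent to it):
@Bean
@ServiceActivator(inputChannel = "routerChannel")
public HeaderValueRouter router(@Qualifier("nullChannel") MessageChannel nullChannel) {
    HeaderValueRouter router = new HeaderValueRouter(messageType);
    router.setChannelMapping(shipment, "shipmentChannel");
    // no mapping for the order type; with the fallback disabled, an unmapped
    // key goes to the default output channel instead of being resolved by name
    router.setChannelKeyFallback(false);
    router.setDefaultOutputChannel(nullChannel);
    return router;
}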
But it is better to consider a messageSelector for the listener container, so that it does not pull messages it is not interested in from the topic at all.
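A minimal sketch of that idea, assuming the messageType value arrives as a JMS string property on the inbound message (the destination name here is hypothetical):
DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setConnectionFactory(connectionFactory);
container.setDestinationName("ORDERS.SHIPMENTS.TOPIC");
container.setPubSubDomain(true);
// the broker filters on the JMS property, so Order messages never reach the app
container.setMessageSelector("messageType = 'shipment'");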

This is how I made it work the way I wanted: I used a filter to accept only the particular message type.
@Bean
public IntegrationFlow jmsInboundFlow() throws Exception {
    return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(messageListenerContainer)
            .errorChannel(inboundErrorChannel))
            // "shipment".equals(...) avoids an NPE when the header is missing
            .filter(Message.class, m -> "shipment".equals(m.getHeaders().get("message_type")))
            .channel(routerChannel)
            .get();
}
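By default the filter silently drops non-matching messages, which is the behavior asked for here. If the dropped messages should still be observable, a discard channel can be configured (a sketch; the "discards" channel name is hypothetical):
.filter(Message.class,
        m -> "shipment".equals(m.getHeaders().get("message_type")),
        f -> f.discardChannel("discards"))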

Related

Spring Kafka outboundChannelAdapter's control does not return back in the integration flow

After a message is sent, it gets published to the Kafka topic, but the Message from the KafkaSuccessTransformer does not return to the REST controller. I am trying to return the message as-is if it was sent successfully, but nothing after the Kafka handler seems to be invoked.
@MessagingGateway
public interface MyGateway {

    @Gateway(requestChannel = "enrollChannel")
    Message<?> sendMsg(@Payload String payload);
}
------------------------
@RestController
public class Controller {

    MyGateway myGateway;

    @PostMapping
    public Message<?> send(@RequestBody String request) throws Exception {
        Message<?> resp = myGateway.sendMsg(request);
        log.info("I am back"); // control doesn't come to this point
        return resp;
    }
}
--------------------------
@Component
public class MyIntegrationFlow {

    KafkaSuccessTransformer stransformer;

    @Bean
    public MessageChannel enrollChannel() {
        return new DirectChannel();
    }

    @Bean
    public MessageChannel kafkaSuccessChannel() {
        return new DirectChannel();
    }

    @Bean
    public IntegrationFlow enrollIntegrationFlow() {
        return IntegrationFlows.from("enrollChannel")
                // another transformer which turns the string to Message<?>
                .handle(Kafka.outboundChannelAdapter(kafkaTemplate) // kafkaTemplate has the necessary config
                        .topic("topic1")
                        .messageKey(messageKeyFunction -> messageKeyFunction.getHeaders()
                                .get("key1"))
                        .sendSuccessChannel("kafkaSuccessChannel"))
                .get();
    }

    @Bean
    public IntegrationFlow successfulKafkaSends() {
        return f -> IntegrationFlows.from("kafkaSuccessChannel").transform(stransformer);
    }
}
--------------
@Component
public class KafkaSuccessTransformer {

    @Transformer
    public Message<?> transform(Message<?> message) {
        log.info("Message is sent to Kafka");
        return message; // control comes here but does not return to the REST controller
    }
}
Channel adapters are for one-way traffic; there is no result.
Add a publish-subscribe channel with two subflows; the second one can be just a bridge to nowhere - .bridge() ends the flow. It will then return the outbound message to the gateway.
See https://docs.spring.io/spring-integration/docs/current/reference/html/dsl.html#java-dsl-subflows
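A minimal sketch of that suggestion, built on the question's beans (kafkaTemplate and the channel names are assumed from the question):
@Bean
public IntegrationFlow enrollIntegrationFlow() {
    return IntegrationFlows.from("enrollChannel")
            .publishSubscribeChannel(c -> c
                    // first subflow: the existing one-way Kafka adapter
                    .subscribe(sf -> sf.handle(Kafka.outboundChannelAdapter(kafkaTemplate)
                            .topic("topic1")))
                    // second subflow: a bridge with no output channel; it sends the
                    // message to the replyChannel header, i.e. back to the gateway
                    .subscribe(sf -> sf.bridge()))
            .get();
}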
Per Artem:
Something is off in the configuration or code. The logic is like this: processSendResult(message, producerRecord, sendFuture, getSendSuccessChannel()). Then: getMessageBuilderFactory().fromMessage(message). So, the replyChannel header is present in this "success" message. Therefore that transform(stransformer) should really produce its return value to the replyChannel for the gateway at the beginning of the flow. The only remaining possibility is a problem in the KafkaSuccessTransformer code, if it does not copy the request message headers to the reply message. Please share its whole code.
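A minimal sketch of a header-preserving transformer, assuming the problem really is dropped request headers (the payload transformation here is hypothetical, just for illustration):
@Transformer
public Message<?> transform(Message<?> message) {
    return MessageBuilder.withPayload(message.getPayload().toString().toUpperCase()) // hypothetical transformation
            .copyHeaders(message.getHeaders()) // keeps replyChannel intact, so the reply reaches the gateway
            .build();
}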

Spring Integration AWS Kinesis, message aggregator, Release Strategy

This is a follow-up question to Spring Integration AWS RabbitMQ Kinesis.
I have the following configuration. I am noticing that when I send a message to the input channel named kinesisSendChannel for the first time, the aggregator and release strategy are invoked and messages are sent to Kinesis Streams. I put debug breakpoints at different places and could verify this behavior. But when I again publish messages to the same input channel, the release strategy and the outbound processor are not invoked and messages are not sent to Kinesis. I am not sure why the aggregator flow is invoked only the first time and not for subsequent messages. For testing purposes, the TimeoutCountSequenceSizeReleaseStrategy is set with a count of 1 and a time of 60 seconds. There is no specific MessageStore used. Could you help identify the issue?
@Bean(name = "kinesisSendChannel")
public MessageChannel kinesisSendChannel() {
    return MessageChannels.direct().get();
}

@Bean(name = "resultChannel")
public MessageChannel resultChannel() {
    return MessageChannels.direct().get();
}

@Bean
@ServiceActivator(inputChannel = "kinesisSendChannel")
public MessageHandler aggregator(TestMessageProcessor messageProcessor,
        MessageChannel resultChannel,
        TimeoutCountSequenceSizeReleaseStrategy timeoutCountSequenceSizeReleaseStrategy) {
    AggregatingMessageHandler handler = new AggregatingMessageHandler(messageProcessor);
    handler.setCorrelationStrategy(new ExpressionEvaluatingCorrelationStrategy("headers['foo']"));
    handler.setReleaseStrategy(timeoutCountSequenceSizeReleaseStrategy);
    handler.setOutputProcessor(messageProcessor);
    handler.setOutputChannel(resultChannel);
    return handler;
}

@Bean
@ServiceActivator(inputChannel = "resultChannel")
public MessageHandler kinesisMessageHandler1(@Qualifier("successChannel") MessageChannel successChannel,
        @Qualifier("errorChannel") MessageChannel errorChannel, final AmazonKinesisAsync amazonKinesis) {
    KinesisMessageHandler kinesisMessageHandler = new KinesisMessageHandler(amazonKinesis);
    kinesisMessageHandler.setSync(true);
    kinesisMessageHandler.setOutputChannel(successChannel);
    kinesisMessageHandler.setFailureChannel(errorChannel);
    return kinesisMessageHandler;
}
public class TestMessageProcessor extends AbstractAggregatingMessageGroupProcessor {

    @Override
    protected Object aggregatePayloads(MessageGroup group, Map<String, Object> defaultHeaders) {
        final PutRecordsRequest putRecordsRequest = new PutRecordsRequest().withStreamName("test-stream");
        final List<PutRecordsRequestEntry> putRecordsRequestEntry = group.getMessages().stream()
                .map(message -> (PutRecordsRequestEntry) message.getPayload()).collect(Collectors.toList());
        putRecordsRequest.withRecords(putRecordsRequestEntry);
        return putRecordsRequest; // return the assembled request, not the raw entry list
    }
}
I believe the problem is here: handler.setCorrelationStrategy(new ExpressionEvaluatingCorrelationStrategy("headers['foo']")). All your messages come with the same foo header, so all of them form the same message group. As long as you release the group but don't remove it, all new messages for that group are going to be discarded.
Please review the aggregator documentation to make yourself familiar with all the possible behavior: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#aggregator
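For example, if the intent is for each release to start a fresh group under the same correlation key (an assumption about the desired behavior, not stated in the question), the group can be removed on release:
AggregatingMessageHandler handler = new AggregatingMessageHandler(messageProcessor);
handler.setCorrelationStrategy(new ExpressionEvaluatingCorrelationStrategy("headers['foo']"));
handler.setReleaseStrategy(timeoutCountSequenceSizeReleaseStrategy);
// remove the group once it is released, so later messages with the same
// 'foo' header form a new group instead of being discarded
handler.setExpireGroupsUponCompletion(true);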

Java: MQTT MessageProducerSupport to Flux

I have a simple MQTT client that outputs received messages via an IntegrationFlow:
public MqttPahoClientFactory mqttClientFactory() {
    DefaultMqttPahoClientFactory factory = new DefaultMqttPahoClientFactory();
    MqttConnectOptions options = new MqttConnectOptions();
    options.setServerURIs(new String[] { "tcp://test.mosquitto.org:1883" });
    factory.setConnectionOptions(options);
    return factory;
}

public MessageProducerSupport mqttInbound() {
    MqttPahoMessageDrivenChannelAdapter adapter = new MqttPahoMessageDrivenChannelAdapter(
            "myConsumer",
            mqttClientFactory(),
            "/test/#");
    adapter.setCompletionTimeout(5000);
    adapter.setConverter(new DefaultPahoMessageConverter());
    adapter.setQos(1);
    return adapter;
}

public IntegrationFlow mqttInFlow() {
    return IntegrationFlows.from(mqttInbound())
            .transform(p -> p + ", received from MQTT")
            .handle(logger())
            .get();
}

private LoggingHandler logger() {
    LoggingHandler loggingHandler = new LoggingHandler("INFO");
    loggingHandler.setLoggerName("siSample");
    return loggingHandler;
}
I need to pipe all received messages into a Flux though for further processing.
public Flux<String> mqttChannel() {
    ...
    return mqttFlux;
}
How can I do that? The LoggingHandler receives all messages from the IntegrationFlow. Couldn't my Flux get its input in a similar fashion, by passing it somehow to the IntegrationFlow's handle function?
The MQTT example code is taken from https://github.com/spring-projects/spring-integration-samples/blob/master/basic/mqtt/src/main/java/org/springframework/integration/samples/mqtt/Application.java
Attempt: following Artem Bilan's advice, I am now trying to use toReactivePublisher to convert my inbound IntegrationFlow to a Flux.
public Flux<String> mqttChannel() {
    Publisher<Message<Object>> flow = IntegrationFlows.from(mqttInbound())
            .toReactivePublisher();
    Flux<String> mqttFlux = Flux.from(flow)
            .log()
            .map(i -> "TESTING: Received a MQTT message");
    return mqttFlux;
}
Running the example, I get the following error:
10:14:39.541 [MQTT Call: myConsumer] ERROR o.s.i.m.i.MqttPahoMessageDrivenChannelAdapter - Unhandled exception for GenericMessage [payload=OFF,26.70,65.00,663,-62,192.168.2.100,0.026,25,4,6,7,933,278,27,4,1,0,1580496218,730573600,1800000,1980000,1580496218,730573600,10800000,11880000, headers={mqtt_receivedRetained=true, mqtt_id=0, mqtt_duplicate=false, id=3f7565aa-ff4f-c389-d8a9-712d4f06f1cb, mqtt_receivedTopic=/083B7036697886C41D2DF2FD919143EE/MasterBedroom/Sensor/, mqtt_receivedQos=0, timestamp=1602231279537}]
Conclusion: as soon as the first message arrives, it is handled incorrectly and an exception is thrown.
Please, read this doc: https://docs.spring.io/spring-integration/docs/5.3.2.RELEASE/reference/html/reactive-streams.html#reactive-streams
It is not clear what you would like to achieve with that "my flux" and how that could look, but for your current configuration there are a couple of solutions.
You can use a FluxMessageChannel, which is already a Publisher, so you can simply use Flux.from() and subscribe to it for consuming data produced by the mentioned MqttPahoMessageDrivenChannelAdapter.
Another way is to use toReactivePublisher() on the IntegrationFlowBuilder to expose the whole flow as a reactive Publisher source. In this case, of course, you can't use the LoggingHandler, because it is one-way and ends your flow exactly there. You may consider using a log() operator instead, though: https://docs.spring.io/spring-integration/docs/5.3.2.RELEASE/reference/html/dsl.html#java-dsl-log
By the way, the FluxMessageChannel is publish-subscribe, so you can have it in the flow for those logs and also have it externally for a Flux.from() subscription. All the subscribers to this channel are going to get the same message.
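A minimal sketch of the FluxMessageChannel option, reusing mqttInbound() from the question (mqttMessageChannel is a hypothetical bean name, and the log() operator replaces the one-way LoggingHandler so the flow can continue into the channel):
@Bean
public FluxMessageChannel mqttMessageChannel() {
    return new FluxMessageChannel();
}

@Bean
public IntegrationFlow mqttInFlow() {
    return IntegrationFlows.from(mqttInbound())
            .transform(p -> p + ", received from MQTT")
            .log("siSample")
            .channel(mqttMessageChannel())
            .get();
}

// the Flux the question asks for, fed by the same channel
public Flux<String> mqttChannel() {
    return Flux.from(mqttMessageChannel())
            .map(message -> message.getPayload().toString());
}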

Receive messages from a channel by some event spring integration dsl [duplicate]

I have a channel that stores messages. When new messages arrive, if the server has not yet processed all the messages still in the queue, I need to clear the queue (for example, by rerouting all its data into another channel). For this I used a router. But the problem is that when a new message arrives, not only the old messages but also the new one are rerouted into the other channel. New messages must remain in the queue. How can I solve this problem?
This is my code:
@Bean
public IntegrationFlow integerFlow() {
    return IntegrationFlows.from("input")
            .bridge(e -> e.poller(Pollers.fixedDelay(500, TimeUnit.MILLISECONDS, 1000).maxMessagesPerPoll(1)))
            .route(r -> {
                if (flag) {
                    return "mainChannel";
                } else {
                    return "garbageChannel";
                }
            })
            .get();
}

@Bean
public IntegrationFlow outFlow() {
    return IntegrationFlows.from("mainChannel")
            .handle(m -> {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(m.getPayload() + "\tmainFlow");
            })
            .get();
}

@Bean
public IntegrationFlow outGarbage() {
    return IntegrationFlows.from("garbageChannel")
            .handle(m -> System.out.println(m.getPayload() + "\tgarbage"))
            .get();
}
The flag value is changed through a @Gateway by pressing the "q" and "e" keys.
I suggest you take a look at the purge() API of the QueueChannel:
/**
 * Remove any {@link Message Messages} that are not accepted by the provided selector.
 * @param selector The message selector.
 * @return The list of messages that were purged.
 */
List<Message<?>> purge(@Nullable MessageSelector selector);
This way, with a custom MessageSelector, you will be able to remove old messages from the queue; consult the standard timestamp message header for the age of each message. With the result of this method you can do whatever you need to do with the old messages.
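A minimal sketch along those lines (the five-second cutoff and the garbageChannel reference are hypothetical):
// keep only messages younger than the cutoff; purge() removes the rest
long cutoff = System.currentTimeMillis() - 5000;
List<Message<?>> purged = queueChannel.purge(
        message -> message.getHeaders().getTimestamp() >= cutoff);
// the returned list holds the removed (old) messages; reroute them as needed
purged.forEach(garbageChannel::send);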

Handling errors after a message splitter with direct channels

I'm working on a service that sends emails using the Spring Integration Java DSL.
I have a batch message that is split into a collection of individual messages, which will be turned into emails.
The issue I am experiencing is that if one of these individual messages throws an error, the other messages in the batch are not processed.
Is there a way to configure the flow so that when a message throws an exception, the exception is handled gracefully and the next message in the batch is processed?
The following code achieves the functionality I would like, but I'm wondering if there is an easier / better way to achieve this, ideally in a single IntegrationFlow:
@Bean
public MessageChannel individualFlowInputChannel() {
    return MessageChannels.direct().get();
}

@Bean
public IntegrationFlow batchFlow() {
    return f -> f
            .split()
            .handle(message -> {
                try {
                    individualFlowInputChannel().send(message);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
}

@Bean
public IntegrationFlow individualFlow() {
    return IntegrationFlows.from(individualFlowInputChannel())
            .handle((payload, headers) -> {
                throw new RuntimeException("BOOM!");
            }).get();
}
You can add ExpressionEvaluatingRequestHandlerAdvice to the last handle() definition with its trapException option:
/**
 * If true, any exception will be caught and null returned.
 * Default false.
 * @param trapException true to trap Exceptions.
 */
public void setTrapException(boolean trapException) {
On the other hand, since you are talking about sending emails, wouldn't it be better to do that in a separate thread for each split item? In that case an ExecutorChannel after the .split() comes to the rescue!
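A minimal sketch of the advice approach applied to the question's flow (the trapAdvice bean name is hypothetical):
@Bean
public ExpressionEvaluatingRequestHandlerAdvice trapAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    // swallow the exception so the splitter moves on to the next message
    advice.setTrapException(true);
    return advice;
}

@Bean
public IntegrationFlow individualFlow() {
    return IntegrationFlows.from(individualFlowInputChannel())
            .handle((payload, headers) -> {
                throw new RuntimeException("BOOM!");
            }, e -> e.advice(trapAdvice()))
            .get();
}
For the executor option, something like .split().channel(MessageChannels.executor(someTaskExecutor)) hands each split message to its own thread, so one failing email does not block its siblings.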
