Receive messages from a channel by some event with Spring Integration DSL [duplicate] - spring-integration

I have a channel that stores messages. When new messages arrive, if the server has not yet processed all the messages still sitting in the queue, I need to clear the queue (for example, by rerouting that data into another channel). For this I used a router. The problem is that when a new message arrives, not only the old messages but also the new ones get rerouted into the other channel. New messages must remain in the queue. How can I solve this problem?
This is my code:
@Bean
public IntegrationFlow integerFlow() {
    return IntegrationFlows.from("input")
            .bridge(e -> e.poller(Pollers.fixedDelay(500, TimeUnit.MILLISECONDS, 1000).maxMessagesPerPoll(1)))
            .route(r -> {
                if (flag) {
                    return "mainChannel";
                } else {
                    return "garbageChannel";
                }
            })
            .get();
}
@Bean
public IntegrationFlow outFlow() {
    return IntegrationFlows.from("mainChannel")
            .handle(m -> {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(m.getPayload() + "\tmainFlow");
            })
            .get();
}

@Bean
public IntegrationFlow outGarbage() {
    return IntegrationFlows.from("garbageChannel")
            .handle(m -> System.out.println(m.getPayload() + "\tgarbage"))
            .get();
}
The flag value is changed through a @Gateway by pressing the "q" and "e" keys.

I would suggest you take a look at the purge() API of the QueueChannel:
/**
 * Remove any {@link Message Messages} that are not accepted by the provided selector.
 * @param selector The message selector.
 * @return The list of messages that were purged.
 */
List<Message<?>> purge(@Nullable MessageSelector selector);
This way, with a custom MessageSelector, you will be able to remove the old messages from the queue. Consult the timestamp message header to decide which messages are old. The method returns the purged messages, so you can do whatever you need with them.
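For example, a minimal sketch (an assumption for illustration, not code from the question), where the queue channel is injected and "old" means older than a caller-supplied cutoff based on the timestamp header:
import java.util.List;

import org.springframework.integration.channel.QueueChannel;
import org.springframework.messaging.Message;

public class OldMessagePurger {

    private final QueueChannel input;

    public OldMessagePurger(QueueChannel input) {
        this.input = input;
    }

    // purge() removes the messages NOT accepted by the selector and returns them,
    // so the selector accepts only the messages recent enough to keep in the queue
    public List<Message<?>> purgeOlderThan(long cutoffMillis) {
        long now = System.currentTimeMillis();
        return this.input.purge(message -> {
            Long timestamp = message.getHeaders().getTimestamp();
            return timestamp != null && now - timestamp < cutoffMillis;
        });
    }
}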

Related

Ignore message from a MQ topic from a specific channel

There is an IBM MQ topic which accepts two types of messages, Orders and Shipments.
I have a Spring Boot subscriber app which is interested in subscribing only to the Shipment message type.
Below is how I am routing the messages. If the inbound message is of neither of the above types, it is sent to the errorChannel that I have.
If I do not have an orderChannel, the app throws an error saying there is no proper channel for the inbound message.
How do I silently ignore the messages of type Order here?
@Bean
@ServiceActivator(inputChannel = "routerChannel")
public HeaderValueRouter router() throws Exception {
    HeaderValueRouter router = new HeaderValueRouter(messageType);
    router.setChannelMapping(shipment, "shipmentChannel");
    router.setChannelMapping(order, "orderChannel");
    router.setDefaultOutputChannel(invalidHeaderValueChannel);
    return router;
}
Currently I have the below code snippet, which I need only to avoid the error when there is an Order message.
@ServiceActivator(inputChannel = "orderChannel")
public void getInboundOrderMessage(Message<?> message) throws Exception {
    logger.info("Inbound Order message...");
    String payload = (String) message.getPayload();
    logger.info("Order Header: {}, payload: \n{}", message.getHeaders(), payload);
}
Below is how I have the MsgDrivenChannelAdapter defined
@MessageEndpoint
public class MsgDrivenChannelAdapter {

    private AbstractMessageListenerContainer messageListenerContainer;
    private DirectChannel inboundErrorChannel;
    private DirectChannel routerChannel;

    public MsgDrivenChannelAdapter(AbstractMessageListenerContainer pMessageListenerContainer,
            DirectChannel pInboundErrorChannel,
            DirectChannel pRouterChannel) {
        this.messageListenerContainer = pMessageListenerContainer;
        this.inboundErrorChannel = pInboundErrorChannel;
        this.routerChannel = pRouterChannel;
    }

    @Bean
    public IntegrationFlow jmsInboundFlow() throws Exception {
        return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(messageListenerContainer)
                .errorChannel(inboundErrorChannel))
                .channel(routerChannel)
                .get();
    }
}
Is there any way I can avoid this? Thanks in advance.
See this option on the router:
/**
 * When true (default), if a resolved channel key does not exist in the channel map,
 * the key itself is used as the channel name, which we will attempt to resolve to a
 * channel. Set to false to disable this feature. This could be useful to prevent
 * malicious actors from generating a message that could cause the message to be
 * routed to an unexpected channel, such as one upstream of the router, which would
 * cause a stack overflow.
 * @param channelKeyFallback false to disable the fall back.
 * @since 5.2
 */
public void setChannelKeyFallback(boolean channelKeyFallback) {
So, with channelKeyFallback set to false, the router does not fall back to order as a channel name.
The mapping then resolves to null and the logic goes like this:
if (!sent) {
    getDefaultOutputChannel();
    if (this.defaultOutputChannel != null) {
        this.messagingTemplate.send(this.defaultOutputChannel, message);
    }
    else {
        throw new MessageDeliveryException(message, "No channel resolved by router '" + this
                + "' and no 'defaultOutputChannel' defined.");
    }
}
If you just want to ignore such messages and do not want that MessageDeliveryException, configure the defaultOutputChannel as the nullChannel.
It is better, though, to consider a message selector on the listener container, so that it does not pull messages from the topic it is not interested in.
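A hedged sketch of the router from the question adjusted accordingly (field names are taken from the question; "nullChannel" is the framework-provided bean):
@Bean
@ServiceActivator(inputChannel = "routerChannel")
public HeaderValueRouter router() {
    HeaderValueRouter router = new HeaderValueRouter(messageType);
    router.setChannelKeyFallback(false);               // do not treat the header value as a channel name
    router.setChannelMapping(shipment, "shipmentChannel");
    // no mapping for "order": with the fallback disabled it resolves to no channel
    router.setDefaultOutputChannelName("nullChannel"); // and is silently discarded
    return router;
}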
This is how I made it work as I wanted: I used a filter to accept only the particular message type.
@Bean
public IntegrationFlow jmsInboundFlow() throws Exception {
    return IntegrationFlows.from(Jms.messageDrivenChannelAdapter(messageListenerContainer)
            .errorChannel(inboundErrorChannel))
            .filter(Message.class, m -> m.getHeaders().get("message_type").equals("shipment"))
            .channel(routerChannel)
            .get();
}

Thread safety in executor channel

I have a message producer which produces around 15 messages/second.
The consumer is a Spring Integration project which consumes from the message queue and does a lot of processing. I have used an executor channel to process messages in parallel, and the flow then passes through some common handler classes.
Please find the code snippet below:
baseEventFlow() - We receive the message from the EMS queue and send it to a router.
router() - Based on the id of the message, a particular ExecutorChannel instance is selected. Each ExecutorChannel is configured with a single-threaded executor, so every channel has its own dedicated executor with only one thread.
skwDefaultChannel(), gjsucaDefaultChannel(), rpaDefaultChannel() - All the ExecutorChannel beans are marked with @BridgeTo for the same channel, which starts the common flow.
uaEventFlow() - Here each message gets processed.
@Bean
public IntegrationFlow baseEventFlow() {
    return IntegrationFlows
            .from(Jms.messageDrivenChannelAdapter(Jms.container(this.emsConnectionFactory, this.emsQueue).get()))
            .wireTap(FLTAWARE_WIRE_TAP_CHNL)
            .route(router()).get();
}

public AbstractMessageRouter router() {
    return new AbstractMessageRouter() {

        @Override
        protected Collection<MessageChannel> determineTargetChannels(Message<?> message) {
            if (message.getPayload().toString().contains("\"id\":\"RPA")) {
                return Collections.singletonList(skwDefaultChannel());
            } else if (message.getPayload().toString().contains("\"id\":\"ASH")) {
                return Collections.singletonList(rpaDefaultChannel());
            } else if (message.getPayload().toString().contains("\"id\":\"GJS")
                    || message.getPayload().toString().contains("\"id\":\"UCA")) {
                return Collections.singletonList(gjsucaDefaultChannel());
            } else {
                return Collections.singletonList(new NullChannel());
            }
        }

    };
}

@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel skwDefaultChannel() {
    return MessageChannels.executor(SKW_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}

@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel gjsucaDefaultChannel() {
    return MessageChannels.executor(GJS_UCA_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}

@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel rpaDefaultChannel() {
    return MessageChannels.executor(RPA_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}

@Bean
public IntegrationFlow uaEventFlow() {
    return IntegrationFlows.from("uaDefaultChannel")
            .wireTap(UA_WIRE_TAP_CHNL)
            .transform(eventHandler, "parseEvent")
            .handle(uaImpl, "process").get();
}
My concern is that in uaEventFlow() the common transformer and handler methods are not thread safe, which may cause issues. How can we ensure that a new transformer and handler are injected on every message invocation?
Should I change the scope of the transformer and handler beans to prototype?
Instead of bridging to a common flow, you should move the .transform() and .handle() to each of the upstream flows and add @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE) to their @Bean definitions so each flow gets its own instance.
But it is generally better to simply make your code thread-safe.
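A minimal sketch of that idea, assuming hypothetical EventTransformer and EventProcessor classes standing in for the question's eventHandler and uaImpl beans:
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public EventTransformer eventTransformer() {
    return new EventTransformer(); // prototype: a fresh instance per injection point
}

@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public EventProcessor eventProcessor() {
    return new EventProcessor();
}

@Bean
public IntegrationFlow skwEventFlow() {
    return IntegrationFlows.from(skwDefaultChannel())
            .wireTap(UA_WIRE_TAP_CHNL)
            .transform(eventTransformer(), "parseEvent") // each upstream flow gets its own prototype
            .handle(eventProcessor(), "process")
            .get();
}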

How to consume all messages from channel with Spring Integration Java DSL?

I'm trying to define a flow for a single-threaded handler. Messages come in great number and the handler is slow (it's inefficient to process them one by one). So I want to make the handler consume all messages available in the channel at once (or wait until a few messages accumulate) with Java DSL. If there are no messages in the channel and the handler has processed a previous group it should wait for a certain period of time (timeout "a") for a few messages to accumulate in the channel. But if messages keep coming, the handler MUST consume them after a certain period of time from the previous execution (timeout "b"). Therefore time intervals between handler executions should be no more than "b" (unless no messages arrive in the channel).
There is no reason to make multiple instances of that sort of handler: it generates data for interfaces. The code below describes some basic configuration. My problem is that I'm not able to come up with debouncing (the timeout "b") and releasing the group once the handler execution is completed.
@Configuration
public class SomeConfig {

    private AtomicBoolean someHandlerBusy = new AtomicBoolean(false);

    @Bean
    StandardIntegrationFlow someFlow() {
        return IntegrationFlows
                .from("someChannel")
                .aggregate(aggregatorSpec -> aggregatorSpec
                        // The only rule to release a group:
                        // wait 500 ms after the last message and have a free someHandler
                        .groupTimeout(500)
                        .sendPartialResultOnExpiry(true)      // if 500 ms expired - send the group
                        .expireGroupsUponCompletion(true)     // the group should be filled again
                        .correlationStrategy(message -> true) // one group key, all messages in one group
                        .releaseStrategy(message -> false)    // never release messages, only by timeout
                        // Send messages one by one. This is not part of this task.
                        // I just want to know how to do that. Like a splitter.
                        //.outputProcessor(MessageGroup::getMessages)
                )
                .handle("someHandler")
                .get();
    }
}
I have the solution with plain Java (kotlin) code: https://pastebin.com/mti3Y5tD
UPDATE
The configuration below does not clear the group. The group keeps growing and growing, and eventually it fails with an error.
Error:
*** java.lang.instrument ASSERTION FAILED ***: "!errorOutstanding" with message transform method call failed at JPLISAgent.c line: 844
Configuration:
@Configuration
public class InterfaceHandlerConfigJava {

    @Bean
    MessageChannel interfaceAggregatorFlowChannel() {
        return MessageChannels.publishSubscribe("interfaceAggregatorFlowChannel").get();
    }

    @EventListener(ApplicationReadyEvent.class)
    public void initTriggerPacket(ApplicationReadyEvent event) {
        MessageChannel channel = event.getApplicationContext().getBean("interfaceAggregatorFlowChannel", MessageChannel.class);
        channel.send(MessageBuilder.withPayload(new InterfaceHandler.HandlerReadyMessage()).build());
    }

    @Bean
    StandardIntegrationFlow someFlow(InterfaceHandler interfaceHandler) {
        long lastMessageTimeout = 10L;
        return IntegrationFlows
                .from("interfaceAggregatorFlowChannel")
                .aggregate(aggregatorSpec -> aggregatorSpec
                        .groupTimeout(messageGroup -> {
                            if (haveInstance(messageGroup, InterfaceHandler.HandlerReadyMessage.class)) {
                                System.out.println("case HandlerReadyMessage");
                                if (haveInstance(messageGroup, DbChangeStreamConfiguration.InitFromDbMessage.class)) {
                                    System.out.println("case InitFromDbMessage");
                                    return 0L;
                                } else if (messageGroup.size() > 1) {
                                    long groupCreationTimeout =
                                            messageGroup.getTimestamp() + 500L - System.currentTimeMillis();
                                    long timeout = Math.min(groupCreationTimeout, lastMessageTimeout);
                                    System.out.println("case messageGroup.size() > 1, timeout: " + timeout);
                                    return timeout;
                                }
                            }
                            System.out.println("case Handler NOT ReadyMessage");
                            return null;
                        })
                        .sendPartialResultOnExpiry(true)
                        .expireGroupsUponCompletion(true)
                        .expireGroupsUponTimeout(true)
                        .correlationStrategy(message -> true)
                        .releaseStrategy(message -> false)
                )
                .handle(interfaceHandler, "handle")
                .channel("interfaceAggregatorFlowChannel")
                .get();
    }

    private boolean haveInstance(MessageGroup messageGroup, Class<?> clazz) {
        for (Message<?> message : messageGroup.getMessages()) {
            if (clazz.isInstance(message.getPayload())) {
                return true;
            }
        }
        return false;
    }
}
I want to highlight that this flow is a cycle: there is an IN but no OUT. Messages go into the IN, and the handler emits a HandlerReadyMessage at the end.
Maybe there should be some thread-breaker channel?
FINAL VARIANT
Since the aggregator and the handler should not block each other and should not end up in a stack overflow, they should run in different threads. In the configuration below this is achieved with queue channels. It seems that publish-subscribe channels do not run their subscribers in different threads (at least when there is only one subscriber).
@Configuration
public class InterfaceHandlerConfigJava {

    // acts as a thread breaker too
    @Bean
    MessageChannel interfaceAggregatorFlowChannel() {
        return MessageChannels.queue("interfaceAggregatorFlowChannel").get();
    }

    @Bean
    MessageChannel threadBreaker() {
        return MessageChannels.queue("threadBreaker").get();
    }

    @EventListener(ApplicationReadyEvent.class)
    public void initTriggerPacket(ApplicationReadyEvent event) {
        MessageChannel channel = event.getApplicationContext().getBean("interfaceAggregatorFlowChannel", MessageChannel.class);
        channel.send(MessageBuilder.withPayload(new InterfaceHandler.HandlerReadyMessage()).build());
    }

    @Bean
    StandardIntegrationFlow someFlow(InterfaceHandler interfaceHandler) {
        long lastMessageTimeout = 10L;
        return IntegrationFlows
                .from("interfaceAggregatorFlowChannel")
                .aggregate(aggregatorSpec -> aggregatorSpec
                        .groupTimeout(messageGroup -> {
                            if (haveInstance(messageGroup, InterfaceHandler.HandlerReadyMessage.class)) {
                                System.out.println("case HandlerReadyMessage");
                                if (haveInstance(messageGroup, DbChangeStreamConfiguration.InitFromDbMessage.class)) {
                                    System.out.println("case InitFromDbMessage");
                                    return 0L;
                                } else if (messageGroup.size() > 1) {
                                    long groupCreationTimeout =
                                            messageGroup.getTimestamp() + 500L - System.currentTimeMillis();
                                    long timeout = Math.min(groupCreationTimeout, lastMessageTimeout);
                                    System.out.println("case messageGroup.size() > 1, timeout: " + timeout);
                                    return timeout;
                                }
                            }
                            System.out.println("case Handler NOT ReadyMessage");
                            return null;
                        })
                        .sendPartialResultOnExpiry(true)
                        .expireGroupsUponCompletion(true)
                        .expireGroupsUponTimeout(true)
                        .correlationStrategy(message -> true)
                        .releaseStrategy(message -> false)
                        .poller(pollerFactory -> pollerFactory.fixedRate(1))
                )
                .channel("threadBreaker")
                .handle(interfaceHandler, "handle", spec -> spec.poller(meta -> meta.fixedRate(1)))
                .channel("interfaceAggregatorFlowChannel")
                .get();
    }

    private boolean haveInstance(MessageGroup messageGroup, Class<?> clazz) {
        for (Message<?> message : messageGroup.getMessages()) {
            if (clazz.isInstance(message.getPayload())) {
                return true;
            }
        }
        return false;
    }
}
It's not clear what you mean by timeout "b", but you can use a .groupTimeoutExpression(...) to dynamically determine the group timeout, as sketched below.
You don't need to worry about sending messages one by one: when the output processor returns a collection of Message<?>, they are sent one at a time.
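A hedged sketch of that idea, expressing both timeouts as a SpEL expression evaluated against the message group (the 500 ms and 10 ms values are assumptions standing in for the "a" and "b" timeouts from the question):
.aggregate(a -> a
        .correlationStrategy(m -> true)
        .releaseStrategy(g -> false)
        .sendPartialResultOnExpiry(true)
        .expireGroupsUponCompletion(true)
        // re-evaluated as each message arrives: wait up to 10 ms more,
        // but never beyond 500 ms after the group was created
        .groupTimeoutExpression(
                "T(java.lang.Math).min(timestamp + 500 - T(java.lang.System).currentTimeMillis(), 10)"))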

Handling errors after a message splitter with direct channels

I'm working on a service which sends emails using the Spring Integration Java DSL.
I have a batch message which is split into a collection of individual messages which will be turned into emails.
The issue I am experiencing is that if one of these individual messages throws an error, the other messages in the batch are not processed.
Is there a way to configure the flow so that when a message throws an exception, the exception is handled gracefully and the next message in the batch is processed?
The following code achieves the functionality I would like, but I'm wondering if there is an easier or better way to achieve this, ideally in a single IntegrationFlow:
@Bean
public MessageChannel individualFlowInputChannel() {
    return MessageChannels.direct().get();
}

@Bean
public IntegrationFlow batchFlow() {
    return f -> f
            .split()
            .handle(message -> {
                try {
                    individualFlowInputChannel().send(message);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
}

@Bean
public IntegrationFlow individualFlow() {
    return IntegrationFlows.from(individualFlowInputChannel())
            .handle((payload, headers) -> {
                throw new RuntimeException("BOOM!");
            }).get();
}
You can add an ExpressionEvaluatingRequestHandlerAdvice to the last handle() definition, with its trapException option:
/**
 * If true, any exception will be caught and null returned.
 * Default false.
 * @param trapException true to trap Exceptions.
 */
public void setTrapException(boolean trapException) {
On the other hand, since you are talking about sending emails, wouldn't it be better to do that in a separate thread for each split item? In that case an ExecutorChannel after the .split() comes to the rescue!
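A hedged sketch combining both suggestions (the executor size and the email-sending lambda are placeholders, not code from the question):
@Bean
public IntegrationFlow batchFlow() {
    return f -> f
            .split()
            .channel(c -> c.executor(Executors.newFixedThreadPool(4))) // each split item on its own thread
            .handle((payload, headers) -> {
                // send the individual email here
                return null;
            }, e -> e.advice(trapExceptionAdvice()));
}

@Bean
public ExpressionEvaluatingRequestHandlerAdvice trapExceptionAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setTrapException(true); // a failed item is swallowed, so the next one is still processed
    return advice;
}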

Calls to gateway result never return to caller when successful

I am using Spring Integration DSL and have a simple Gateway:
@MessagingGateway(name = "eventGateway", defaultRequestChannel = "inputChannel")
public interface EventProcessorGateway {

    @Gateway(requestChannel = "inputChannel")
    void processEvent(Message message);

}
My Spring Integration flow is defined as:
@Bean MessageChannel inputChannel() { return new DirectChannel(); }
@Bean MessageChannel errorChannel() { return new DirectChannel(); }
@Bean MessageChannel retryGatewayChannel() { return new DirectChannel(); }
@Bean MessageChannel jsonChannel() { return new DirectChannel(); }

@Bean
public IntegrationFlow postEvents() {
    return IntegrationFlows.from(inputChannel())
            .route("headers.contentType", m -> m.channelMapping(MediaType.APPLICATION_JSON_VALUE, "json"))
            .get();
}

@Bean
public IntegrationFlow retryGateway() {
    return IntegrationFlows.from("json")
            .gateway(retryGatewayChannel(), e -> e.advice(retryAdvice()))
            .get();
}

@Bean
public IntegrationFlow transformJsonEvents() {
    return IntegrationFlows
            .from(retryGatewayChannel())
            .transform(new JsonTransformer())
            .handle(new JsonHandler())
            .get();
}
The JsonTransformer is a simple AbstractTransformer that transforms the JSON data and passes it to the JsonHandler.
class JsonHandler extends AbstractMessageHandler {

    public void handleMessageInternal(Message message) throws Exception {
        // do stuff, return nothing if success else throw Exception
    }
}
I call my gateway from code as such:
try {
Message<List<EventRecord>> message = MessageBuilder.createMessage(eventList, new MessageHeaders(['contentType': contentType]))
eventProcessorGateway.processEvent(message)
logSuccess(eventList)
} catch (Exception e) {
logError(eventList)
}
I want the entire call and processing to be synchronous, and any errors that occur to be caught so I can handle them appropriately. The call to the gateway works: the message is sent through the transformer and on to the handler, gets processed, and if an exception occurs it bubbles back, is caught, and logError() is called. However, if the call is successful, logSuccess() is never reached. It is as if execution stops/hangs after the handler processes the message and never returns. I do not actually need a response; I am mostly concerned about knowing when something fails to process. Do I need to send something back to the initial EventProcessorGateway?
Your issue is here:
return IntegrationFlows.from("json")
        .gateway(retryGatewayChannel(), e -> e.advice(retryAdvice()))
        .get();
where that .gateway() is request/reply because it is part of the main flow.
It is similar to a <gateway> within a <chain>.
So, even though your main flow is one-way, using a .gateway() inside it requires some reply from your sub-flow, but this one:
.handle(new JsonHandler())
.get();
doesn't produce any reply, because it is a one-way MessageHandler.
On the other hand, even if you made the last handler request/reply (an AbstractReplyProducingMessageHandler), it wouldn't help, because you don't know what to do with that reply after the mid-flow gateway; your main flow is one-way.
You need to re-think your design a bit and get rid of that mid-flow gateway. I see that you are trying to add some retry logic with retryAdvice().
But how about moving it to the .handle(new JsonHandler()) instead of that wrong .gateway()?
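A hedged sketch of that suggestion, reusing the bean names from the question (the mid-flow gateway and retryGatewayChannel are dropped, and the retry advice is applied directly to the handling endpoint):
@Bean
public IntegrationFlow transformJsonEvents() {
    return IntegrationFlows
            .from("json")
            .transform(new JsonTransformer())
            .handle(new JsonHandler(), e -> e.advice(retryAdvice())) // retry wraps the one-way handler itself
            .get();
}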
