Multiple IntegrationFlows attached to the same request channel in Gateway method - spring-integration

Given that I have an application which uses Spring Integration and I define a gateway:
@Component
@MessagingGateway
public interface SmsGateway {

    @Gateway(requestChannel = CHANNEL_SEND_SMS)
    void sendSms(SendSmsRequest request);
}

public interface IntegrationChannels {

    String CHANNEL_SEND_SMS = "channelSendSms";
}
I also attach an IntegrationFlow to the CHANNEL_SEND_SMS channel:
@Bean
public IntegrationFlow sendSmsFlow() {
    return IntegrationFlows.from(CHANNEL_SEND_SMS)
            .transform(...)
            .handle(...)
            .get();
}
Whenever I call the sendSms gateway method from business code, sendSmsFlow is executed as expected.
However, when I attach another IntegrationFlow to the same CHANNEL_SEND_SMS channel, e.g.
@Bean
public IntegrationFlow differentFlow() {
    return IntegrationFlows.from(CHANNEL_SEND_SMS)
            .transform(...)
            .handle(...)
            .get();
}
then this differentFlow is not executed.
Why does it behave this way?
Is there any solution to make it work for both flows?

The default channel type is DirectChannel, and messages are distributed to its multiple subscribed handlers in a round-robin fashion by default.
Declare CHANNEL_SEND_SMS as a PublishSubscribeChannel if you want each flow to get every message.
This will only work with a void gateway method; if there is a return type, you will get the first reply (or a random one if there is any async downstream processing) and the others will be discarded.
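For example, a minimal sketch of that declaration, reusing the constant from the question (the bean name must match the channel name used by the gateway and both flows):
@Bean(name = CHANNEL_SEND_SMS)
public MessageChannel channelSendSms() {
    // pub/sub instead of the default DirectChannel, so every subscriber gets a copy
    return new PublishSubscribeChannel();
}
With this bean in place, both sendSmsFlow and differentFlow are subscribers of the same publish-subscribe channel and each of them receives every SendSmsRequest.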

Related

If two IntegrationFlows are passed to one common MessageHandler class, is it thread-safe? (Spring Integration DSL)

I have two IntegrationFlows, and both receive messages from Apache Kafka.
In the first IntegrationFlow, Consumer1 (concurrency=4) reads topic_1 on the input channel.
In the second IntegrationFlow, Consumer2 (concurrency=4) reads topic_2 on the input channel.
Both IntegrationFlows send their messages to an output channel where one common class, MyMessageHandler, is specified, like this:
@Bean
public IntegrationFlow sendFromQueueFlow1(MyMessageHandler message) {
    return IntegrationFlows
            .from(Kafka
                    .messageDrivenChannelAdapter(consumerFactory1, "topic_1")
                    .configureListenerContainer(configureListenerContainer_priority1)
            )
            .handle(message)
            .get();
}

@Bean
public IntegrationFlow sendFromQueueFlow2(MyMessageHandler message) {
    return IntegrationFlows
            .from(Kafka
                    .messageDrivenChannelAdapter(consumerFactory2, "topic_2")
                    .configureListenerContainer(configureListenerContainer_priority2)
            )
            .handle(message)
            .get();
}
The MyMessageHandler class has a method that passes messages on to another service:
class MyMessageHandler {

    protected void handleMessageInternal(Message<?> message) {
        String postResponse = myService.send(message); // remote service call
        msgsStatisticsService.sendMessage(message, postResponse);
        // *******
    }
}
Inside each IntegrationFlow, 4 consumer threads are working (a total of 8 threads), and they all go into the one MyMessageHandler class, into one method, send().
What problems could there be?
Do the two IntegrationFlows see each other when they pass a message to the one common class? Do I need to provide thread safety in the MyMessageHandler class? Do I need to mark the send() method as synchronized?
But what if we add a third IntegrationFlow, so that only one IntegrationFlow passes messages through itself to the MyMessageHandler class? Would it be thread-safe then? Example:
@Bean
public IntegrationFlow sendFromQueueFlow1() {
    return IntegrationFlows
            .from(Kafka
                    .messageDrivenChannelAdapter(consumerFactory1, "topic_1")
                    .configureListenerContainer(configureListenerContainer_priority1)
            )
            .channel(SOME_CHANNEL())
            .get();
}

@Bean
public IntegrationFlow sendFromQueueFlow2() {
    return IntegrationFlows
            .from(Kafka
                    .messageDrivenChannelAdapter(consumerFactory2, "topic_2")
                    .configureListenerContainer(configureListenerContainer_priority2)
            )
            .channel(SOME_CHANNEL())
            .get();
}

@Bean
public MessageChannel SOME_CHANNEL() {
    DirectChannel channel = new DirectChannel();
    return channel;
}

@Bean
public IntegrationFlow sendALLFromQueueFlow(MyMessageHandler message) {
    return IntegrationFlows
            .from(SOME_CHANNEL())
            .handle(message)
            .get();
}
You need to make your handler code thread-safe.
Using synchronized on the whole method would effectively disable the concurrency.
It's better to use thread-safe techniques: no mutable fields, or limited synchronized blocks just around the critical code.
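For illustration, a sketch of the stateless style. It assumes MyMessageHandler extends AbstractMessageHandler (as its handleMessageInternal signature suggests) and that the collaborator types MyService and MsgsStatisticsService, derived here from the question's field names, are themselves thread-safe:
class MyMessageHandler extends AbstractMessageHandler {

    private final MyService myService;                         // no mutable state,
    private final MsgsStatisticsService msgsStatisticsService; // only final collaborators

    MyMessageHandler(MyService myService, MsgsStatisticsService msgsStatisticsService) {
        this.myService = myService;
        this.msgsStatisticsService = msgsStatisticsService;
    }

    @Override
    protected void handleMessageInternal(Message<?> message) {
        // only local variables here, so all 8 consumer threads can run this concurrently
        String postResponse = myService.send(message);
        msgsStatisticsService.sendMessage(message, postResponse);
    }
}
No synchronized is needed as long as nothing in this method touches shared mutable state.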

Spring Integration one channel for multiple producers and consumers

I have this direct channel:
@Bean
public DirectChannel emailingChannel() {
    return MessageChannels
            .direct("emailingChannel")
            .get();
}
Can I define multiple flows for the same channel like this:
@Bean
public IntegrationFlow flow1FromEmailingChannel() {
    return IntegrationFlows.from("emailingChannel")
            .handle("myService", "handler1")
            .get();
}

@Bean
public IntegrationFlow flow2FromEmailingChannel() {
    return IntegrationFlows.from("emailingChannel")
            .handle("myService", "handler2")
            .get();
}
EDIT
@Service
public class MyService {

    public void handler1(Message<String> message) {
        ...
    }

    public void handler2(Message<List<String>> message) {
        ...
    }
}
Each flow's handle(...) method manipulates a different payload data type, but the goal is the same, i.e. reading data from the channel and calling the relevant handler. I would like to avoid many if...else checks on the data type in a single handler.
Additional question: what happens when multiple threads call the same channel (no matter its type: Direct, PubSub or Queue) at the same time (since by default a @Bean has singleton scope)?
Thanks a lot
With a direct channel, messages will be distributed to the consumers in a round-robin fashion.
With a queue channel, only one consumer will get each message; the distribution will be based on their respective pollers.
With a pub/sub channel, both consumers will get each message.
You need to provide more information, but it sounds like you need to add a payload-type router to your flow to direct the messages to the right consumer.
EDIT
When the handler methods are in the same class you don't need two flows; the framework will examine the methods and, as long as there is no ambiguity, will call the method that matches the payload type.
.handle(myServiceBean())
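For example, a minimal sketch of that single flow, reusing the beans from the question (the flow bean name emailingFlow is just illustrative):
@Bean
public IntegrationFlow emailingFlow(MyService myService) {
    return IntegrationFlows.from("emailingChannel")
            // the framework resolves handler1 (String payload) or handler2 (List<String> payload) at runtime
            .handle(myService)
            .get();
}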

Dynamically create a chain of ordered IntegrationFlows

I'm creating a series of processes that use Spring Integration explicitly using the Java DSL. Each of these processes does different things, but they share some of the same processing logic.
Example:
get
process
deduplicate
emit
I would essentially like to create a chain of pre- and post-processing integration flows that can be enabled/disabled via configuration/profiles.
Example:
get
preprocess flow 1 (if enabled)
...
preprocess flow n (if enabled)
process
postprocess flow 1 (if enabled)
...
postprocess flow n (if enabled)
emit
I'm pretty sure this doesn't exist yet in Spring Integration, but I thought I'd ask. The only thing I could think of would be to create a bean that creates direct message channels on the fly and that, during configuration, I could give to each of the integration flows so they can obtain their "from" and "channel" message channels.
Example:
@Configuration
public class BaseIntegrationConfiguration {

    @Bean
    public MessageChannel preProcessMessageChannel() {
        return MessageChannels.direct().get();
    }

    @Bean
    public MessageChannel processMessageChannel() {
        return MessageChannels.direct().get();
    }

    @Bean
    public MessageChannel postProcessMessageChannel() {
        return MessageChannels.direct().get();
    }

    @Bean
    public MessageChannel emitMessageChannel() {
        return MessageChannels.direct().get();
    }

    @Bean
    public IntegrationFlow getDataFlow(MessageChannel preProcessMessageChannel) {
        return IntegrationFlows
                .from(/* some inbound channel adapter */)
                // do other flow stuff
                .channel(preProcessMessageChannel)
                .get();
    }

    @Bean
    public IntegrationFlowChainMessageChannelGenerator preProcessFlowGenerator(
            MessageChannel preProcessMessageChannel,
            MessageChannel processMessageChannel) {
        IntegrationFlowChainMessageChannelGenerator generator = new IntegrationFlowChainMessageChannelGenerator();
        generator.startWith(preProcessMessageChannel);
        generator.endWith(processMessageChannel);
        return generator;
    }

    @Bean
    public IntegrationFlow processFlow(
            MessageChannel processMessageChannel,
            MessageChannel postProcessMessageChannel) {
        return IntegrationFlows
                .from(processMessageChannel)
                // do other flow stuff
                .channel(postProcessMessageChannel)
                .get();
    }

    @Bean
    public IntegrationFlowChainMessageChannelGenerator postProcessFlowGenerator(
            MessageChannel postProcessMessageChannel,
            MessageChannel emitMessageChannel) {
        IntegrationFlowChainMessageChannelGenerator generator = new IntegrationFlowChainMessageChannelGenerator();
        generator.startWith(postProcessMessageChannel);
        generator.endWith(emitMessageChannel);
        return generator;
    }
}
@Configuration
@Order(1)
@Profile("PreProcessFlowOne")
public class PreProcessOneIntegrationConfiguration {

    @Bean
    public IntegrationFlow preProcessFlowOne(IntegrationFlowChainMessageChannelGenerator preProcessFlowGenerator) {
        return IntegrationFlows
                .from(preProcessFlowGenerator.getSourceChannel())
                // flow specific behavior here
                .channel(preProcessFlowGenerator.getDestinationChannel())
                .get();
    }
}

@Configuration
@Order(2)
@Profile("PreProcessFlowTwo")
public class PreProcessTwoIntegrationConfiguration {

    @Bean
    public IntegrationFlow preProcessFlowTwo(IntegrationFlowChainMessageChannelGenerator preProcessFlowGenerator) {
        return IntegrationFlows
                .from(preProcessFlowGenerator.getSourceChannel())
                // flow specific behavior here
                .channel(preProcessFlowGenerator.getDestinationChannel())
                .get();
    }
}

@Configuration
@Order(1)
@Profile("PostProcessFlowOne")
public class PostProcessOneIntegrationConfiguration {

    @Bean
    public IntegrationFlow postProcessFlowOne(IntegrationFlowChainMessageChannelGenerator postProcessFlowGenerator) {
        return IntegrationFlows
                .from(postProcessFlowGenerator.getSourceChannel())
                // flow specific behavior here
                .channel(postProcessFlowGenerator.getDestinationChannel())
                .get();
    }
}

@Configuration
@Order(2)
@Profile("PostProcessFlowTwo")
public class PostProcessTwoIntegrationConfiguration {

    @Bean
    public IntegrationFlow postProcessFlowTwo(IntegrationFlowChainMessageChannelGenerator postProcessFlowGenerator) {
        return IntegrationFlows
                .from(postProcessFlowGenerator.getSourceChannel())
                // flow specific behavior here
                .channel(postProcessFlowGenerator.getDestinationChannel())
                .get();
    }
}
The idea here is that each invocation of getDestinationChannel() would create a new channel and bridge the output of the last generated channel to the configured endWith channel, and each invocation of getSourceChannel() would return the last created destination channel or, if there is none, the startWith channel.
As I write and think about this, I'm starting to think there is probably a better way, but I thought I'd put this out there for some input.
Thank you.
It's not currently supported directly in the DSL, but the routing slip might satisfy your needs.
If your get, dedup, etc. are individual flows, you can initialize the routing slip at the start of the initial flow and either include, or not, the input channels for the pre/post-processing step(s) in the list, in between the channels for the main flows.
Although there is not yet first class support in the DSL, you can use a header enricher to set up the routing slip. The header name is IntegrationMessageHeaderAccessor.ROUTING_SLIP.
EDIT
Actually, don't maintain the header yourself; scroll down the reference manual chapter about routing slip to see how to configure the HeaderEnricher using Java.
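For orientation, a sketch along the lines of the reference manual's Java example; the input channel name and the channel names in the slip are placeholders for your own flows:
@Bean
@Transformer(inputChannel = "routingSlipHeaderChannel")
public HeaderEnricher routingSlipHeaderEnricher() {
    return new HeaderEnricher(Collections.singletonMap(
            IntegrationMessageHeaderAccessor.ROUTING_SLIP,
            new RoutingSlipHeaderValueMessageProcessor(
                    "preProcessOneChannel", "preProcessTwoChannel", "processChannel", "emitChannel")));
}
Each routing-slip entry can be a channel name, a RoutingSlipRouteStrategy bean name or a SpEL expression, so the per-profile enable/disable decision can be made when this list is built.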

Time-limited aggregation with publish-subscribe in Spring Integration

I am trying to implement the following using Spring Integration with the DSL and lambdas:
Given a message, send it to N consumers (via publish-subscribe). Wait for a limited time and return all results that have arrived from the consumers (<= N) during that interval.
Here is an example configuration I have so far:
@Configuration
@EnableIntegration
@IntegrationComponentScan
@ComponentScan
public class ExampleConfiguration {

    @Bean(name = PollerMetadata.DEFAULT_POLLER)
    public PollerMetadata poller() {
        return Pollers.fixedRate(1000).maxMessagesPerPoll(1).get();
    }

    @Bean
    public MessageChannel publishSubscribeChannel() {
        return MessageChannels.publishSubscribe(splitterExecutorService()).applySequence(true).get();
    }

    @Bean
    public ThreadPoolTaskExecutor splitterExecutorService() {
        final ThreadPoolTaskExecutor executorService = new ThreadPoolTaskExecutor();
        executorService.setCorePoolSize(3);
        executorService.setMaxPoolSize(10);
        return executorService;
    }

    @Bean
    public DirectChannel errorChannel() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel requestChannel() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel channel1() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel channel2() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel collectorChannel() {
        return new DirectChannel();
    }

    @Bean
    public TransformerChannel1 transformerChannel1() {
        return new TransformerChannel1();
    }

    @Bean
    public TransformerChannel2 transformerChannel2() {
        return new TransformerChannel2();
    }

    @Bean
    public IntegrationFlow errorFlow() {
        return IntegrationFlows.from(errorChannel())
                .handle(m -> System.err.println("[" + Thread.currentThread().getName() + "] " + m.getPayload()))
                .get();
    }

    @Bean
    public IntegrationFlow channel1Flow() {
        return IntegrationFlows.from(publishSubscribeChannel())
                .transform("1: "::concat)
                .transform(transformerChannel1())
                .channel(collectorChannel())
                .get();
    }

    @Bean
    public IntegrationFlow channel2Flow() {
        return IntegrationFlows.from(publishSubscribeChannel())
                .transform("2: "::concat)
                .transform(transformerChannel2())
                .channel(collectorChannel())
                .get();
    }

    @Bean
    public IntegrationFlow splitterFlow() {
        return IntegrationFlows.from(requestChannel())
                .channel(publishSubscribeChannel())
                .get();
    }

    @Bean
    public IntegrationFlow collectorFlow() {
        return IntegrationFlows.from(collectorChannel())
                .resequence(r -> r.releasePartialSequences(true), null)
                .aggregate(a -> a.sendPartialResultOnExpiry(true).groupTimeout(500), null)
                .get();
    }
}
TransformerChannel1 and TransformerChannel2 are sample consumers and have been implemented with just a sleep to emulate delay.
The message flow is:
splitterFlow -> channel1Flow \
             -> channel2Flow / -> collectorFlow
Everything seems to work as expected, but I see warnings like:
Reply message received but the receiving thread has already received a reply
which is to be expected, given that a partial result was returned.
Questions:
Overall, is this a good approach?
What is the right way to gracefully handle or discard those delayed messages?
How to deal with exceptions? Ideally I'd like to send them to errorChannel, but am not sure where to specify this.
Yes, the solution looks good; it fits the Scatter-Gather pattern, whose implementation has been provided since version 4.1.
On the other hand, there is one more option on the aggregator since that version, too: expire-groups-upon-timeout, which is true by default. With this option set to false you would be able to meet your requirement to discard all those late messages. Unfortunately the DSL doesn't support it yet, so it won't help even if you upgrade your project to Spring Integration 4.1.
Another option for those "Reply message received but the receiving thread has already received a reply" warnings is the spring.integraton.messagingTemplate.throwExceptionOnLateReply = true property in a spring.integration.properties file under the META-INF of one of your jars.
Anyway, I think that Scatter-Gather is the best solution for your use case.
You can find here how to configure it with Java config.
UPDATE
What about exceptions and the error channel?
Since you already deal with throwExceptionOnLateReply, I guess you send messages to the requestChannel via a @MessagingGateway. The latter has an errorChannel option. On the other side, the PublishSubscribeChannel has an errorHandler option, for which you can use a MessagePublishingErrorHandler with your errorChannel as its default channel.
BTW, don't forget that the framework provides an errorChannel bean and an endpoint on it for the LoggingHandler. So please consider whether you really need to override that. The default errorChannel is a PublishSubscribeChannel, hence you can simply add your own subscribers to it.
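For example, a sketch of that errorHandler option applied to the publishSubscribeChannel() bean from the question, assuming the existing errorChannel() and splitterExecutorService() beans:
@Bean
public MessageChannel publishSubscribeChannel() {
    PublishSubscribeChannel channel = new PublishSubscribeChannel(splitterExecutorService());
    channel.setApplySequence(true);
    // route exceptions thrown on the executor threads to the errorChannel bean
    MessagePublishingErrorHandler errorHandler = new MessagePublishingErrorHandler();
    errorHandler.setDefaultErrorChannel(errorChannel());
    channel.setErrorHandler(errorHandler);
    return channel;
}
Exceptions thrown in the subscribed flows then end up on errorChannel and are picked up by errorFlow().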

Spring Integration 4 - configuring a LoadBalancingStrategy in Java DSL

I have a simple Spring Integration 4 Java DSL flow which uses a DirectChannel's LoadBalancingStrategy to round-robin Message requests to a number of possible REST Services (i.e. calls a REST service from one of two possible service endpoint URIs).
How my flow is currently configured:
@Bean(name = "test.load.balancing.ch")
public DirectChannel testLoadBalancingCh() {
    LoadBalancingStrategy loadBalancingStrategy = new RoundRobinLoadBalancingStrategy();
    DirectChannel directChannel = new DirectChannel(loadBalancingStrategy);
    return directChannel;
}

@Bean
public IntegrationFlow testLoadBalancing0Flow() {
    return IntegrationFlows.from("test.load.balancing.ch")
            .handle(restHandler0())
            .channel("test.result.ch")
            .get();
}

@Bean
public IntegrationFlow testLoadBalancing1Flow() {
    return IntegrationFlows.from("test.load.balancing.ch")
            .handle(restHandler1())
            .channel("test.result.ch")
            .get();
}

@Bean
public HttpRequestExecutingMessageHandler restHandler0() {
    return createRestHandler(endpointUri0, 0);
}

@Bean
public HttpRequestExecutingMessageHandler restHandler1() {
    return createRestHandler(endpointUri1, 1);
}

private HttpRequestExecutingMessageHandler createRestHandler(String uri, int order) {
    HttpRequestExecutingMessageHandler handler = new HttpRequestExecutingMessageHandler(uri);
    // handler configuration goes here..
    handler.setOrder(order);
    return handler;
}
My configuration works, but I am wondering whether there is a simpler/better way of configuring the flow using Spring Integration's Java DSL?
Cheers,
PM
First of all, the RoundRobinLoadBalancingStrategy is the default one for a DirectChannel,
so you can get rid of the testLoadBalancingCh() bean definition altogether.
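For example, with that bean removed the flow definitions can stay exactly as they are; the DSL registers a DirectChannel (round-robin by default) for the unresolved channel name:
@Bean
public IntegrationFlow testLoadBalancing0Flow() {
    return IntegrationFlows.from("test.load.balancing.ch") // auto-registered DirectChannel
            .handle(restHandler0())
            .channel("test.result.ch")
            .get();
}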
Further, to avoid duplicating .channel("test.result.ch") you can configure it on the HttpRequestExecutingMessageHandler via setOutputChannel().
On the other hand, your configuration is so simple that I don't see a reason to use the DSL; you can achieve the same with plain annotation configuration:
@Bean(name = "test.load.balancing.ch")
public DirectChannel testLoadBalancingCh() {
    return new DirectChannel();
}

@Bean(name = "test.result.ch")
public DirectChannel testResultCh() {
    return new DirectChannel();
}

@Bean
@ServiceActivator(inputChannel = "test.load.balancing.ch")
public HttpRequestExecutingMessageHandler restHandler0() {
    return createRestHandler(endpointUri0, 0);
}

@Bean
@ServiceActivator(inputChannel = "test.load.balancing.ch")
public HttpRequestExecutingMessageHandler restHandler1() {
    return createRestHandler(endpointUri1, 1);
}

private HttpRequestExecutingMessageHandler createRestHandler(String uri, int order) {
    HttpRequestExecutingMessageHandler handler = new HttpRequestExecutingMessageHandler(uri);
    // handler configuration goes here..
    handler.setOrder(order);
    handler.setOutputChannel(testResultCh());
    return handler;
}
On the other hand, there is the MessageChannels builder factory that lets you simplify the loadBalancer for your case:
@Bean(name = "test.load.balancing.ch")
public DirectChannel testLoadBalancingCh() {
    return MessageChannels.direct()
            .loadBalancer(new RoundRobinLoadBalancingStrategy())
            .get();
}
However, I guess you want to avoid duplication within the DSL flow definition to stay DRY, but that isn't possible right now, because an IntegrationFlow is a linear chain that ties endpoints together while bypassing the boilerplate code for the standard object creation.
As you see, to achieve round-robin we have to duplicate at least the inputChannel in order to subscribe several MessageHandlers to the same channel. We do that in XML, via annotations and, of course, from the DSL.
I'm not sure it would be so useful for real applications to provide a hook to configure several handlers using a single .handle() for the same round-robin channel, because the further downstream flow may not be as simple as your .channel("test.result.ch").
Cheers
