Spring Integration: one channel for multiple producers and consumers

I have this direct channel:
@Bean
public DirectChannel emailingChannel() {
    return MessageChannels
            .direct("emailingChannel")
            .get();
}
Can I define multiple flows for the same channel like this:
@Bean
public IntegrationFlow flow1FromEmailingChannel() {
    return IntegrationFlows.from("emailingChannel")
            .handle("myService", "handler1")
            .get();
}

@Bean
public IntegrationFlow flow2FromEmailingChannel() {
    return IntegrationFlows.from("emailingChannel")
            .handle("myService", "handler2")
            .get();
}
EDIT
@Service
public class MyService {

    public void handler1(Message<String> message) {
        // ...
    }

    public void handler2(Message<List<String>> message) {
        // ...
    }

}
Each flow's handle(...) method manipulates a different payload data type, but the goal is the same, i.e. reading data from the channel and calling the relevant handler. I would like to avoid many if...else checks on the data type in a single handler.
Additional question: what happens when multiple threads send to the same channel (no matter its type: direct, pub/sub or queue) at the same time (since by default a @Bean has singleton scope)?
Thanks a lot

With a direct channel, messages will be distributed to the consumers round-robin.
With a queue channel, only one consumer will get each message; the distribution will be based on their respective pollers.
With a pub/sub channel, both consumers will get each message.
You need to provide more information, but it sounds like you need to add a payload-type router to your flow to direct the messages to the right consumer.
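For illustration, a minimal sketch of such a routing flow, assuming the MyService bean from the question (the instanceof predicate is just one way to discriminate the two payload types):

@Bean
public IntegrationFlow routingFlow() {
    return IntegrationFlows.from("emailingChannel")
            // String payloads go to handler1, anything else (here, List<String>) to handler2
            .<Object, Boolean>route(p -> p instanceof String, m -> m
                    .subFlowMapping(true, sf -> sf.handle("myService", "handler1"))
                    .subFlowMapping(false, sf -> sf.handle("myService", "handler2")))
            .get();
}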
EDIT
When the handler methods are in the same class you don't need two flows; the framework will examine the methods and, as long as there is no ambiguity, will call the method that matches the payload type:
.handle(myServiceBean())
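In context, that single flow could look like the following sketch (assuming myServiceBean() is the @Bean for the MyService class above):

@Bean
public IntegrationFlow emailingFlow() {
    return IntegrationFlows.from("emailingChannel")
            // the framework selects handler1 or handler2 based on the payload type
            .handle(myServiceBean())
            .get();
}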

Related

Spring Integration AWS - KinesisMessageHandler Direct Channel

My message handler for publishing messages to the Kinesis stream is as follows:
@Bean
public MessageHandler kinesisMessageHandler(final AmazonKinesisAsync amazonKinesis,
        @Qualifier("successChannel") MessageChannel successChannel,
        @Qualifier("errorChannel") MessageChannel errorChannel) {
    KinesisMessageHandler kinesisMessageHandler = new KinesisMessageHandler(amazonKinesis);
    kinesisMessageHandler.setSync(false);
    kinesisMessageHandler.setOutputChannel(successChannel);
    kinesisMessageHandler.setFailureChannel(errorChannel);
    return kinesisMessageHandler;
}

@Bean(name = "errorChannel")
public MessageChannel errorChannel() {
    return MessageChannels.direct().get();
}

@Bean(name = "successChannel")
public MessageChannel successChannel() {
    return MessageChannels.direct().get();
}
The setSync flag is set to false so that the messages are processed asynchronously. Also, I have created separate IntegrationFlows to receive and process the Kinesis response from the success and error channels:
@Bean
public IntegrationFlow successMessageIntegrationFlow(MessageChannel successChannel,
        MessageChannel inboundKinesisMessageChannel,
        MessageReceiverServiceActivator kinesisMessageReceiverServiceActivator) {
    return IntegrationFlows.from(successChannel).channel(inboundKinesisMessageChannel)
            .handle(kinesisMessageReceiverServiceActivator, "receiveMessage").get();
}

@Bean
public IntegrationFlow errorMessageIntegrationFlow(MessageChannel errorChannel,
        MessageChannel inboundKinesisErrorChannel,
        MessageReceiverServiceActivator kinesisErrorReceiverServiceActivator) {
    return IntegrationFlows.from(errorChannel).channel(inboundKinesisErrorChannel)
            .handle(kinesisErrorReceiverServiceActivator, "receiveMessage").get();
}
I wanted to know if you see any issues in using a direct channel to receive success and error responses from Kinesis and processing them with an IntegrationFlow. As far as I know, with a direct channel the producer blocks during send until the consumer finishes its work and returns control to the producer. Is it a correct assumption that here the producer is executed on a different thread pool by the AmazonKinesisAsyncClient, so it will not wait for the IntegrationFlow to process the messages? Let me know if I need to implement it differently.
Your assumption about blocking is correct: control does not come back to the producing thread until the consumer finishes. So, if you have a limited number of threads in that Kinesis client, you need to be sure that you free them as soon as possible. You might consider using queue channels for those callbacks instead: they are asynchronous anyway, but then they won't hold the Kinesis client threads.
You still have a flaw in your flows: .channel(inboundKinesisMessageChannel). That means the same channel in the middle of two different flows, and if it is a direct one, you end up with round-robin distribution. I would just remove it altogether.
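A sketch of the queue-channel suggestion for the success side (same bean names as the question; the queue capacity and poller interval are arbitrary assumptions), with the intermediate .channel(...) removed:

@Bean(name = "successChannel")
public MessageChannel successChannel() {
    // queue channel: the Kinesis callback thread only enqueues and returns immediately
    return MessageChannels.queue(100).get();
}

@Bean
public IntegrationFlow successMessageIntegrationFlow(MessageChannel successChannel,
        MessageReceiverServiceActivator kinesisMessageReceiverServiceActivator) {
    return IntegrationFlows.from(successChannel)
            // a poller is required to consume from a queue channel
            .handle(kinesisMessageReceiverServiceActivator, "receiveMessage",
                    e -> e.poller(Pollers.fixedDelay(100)))
            .get();
}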

Spring Integration - 2 MessageHandlers each handle half of the messages

I am using the Spring Integration framework, with a transformer:
inboundChannel -> Kafka consumer
outboundChannel -> database JDBC writer
@Bean
public DirectChannel inboundChannel() {
    return new DirectChannel();
}

@Bean
public DirectChannel outboundChannel() {
    return new DirectChannel();
}

@Bean
@Transformer(inputChannel = "inboundChannel", outputChannel = "outboundChannel")
public JsonToObjectTransformer jsonToObjectTransformer() {
    return new JsonToObjectTransformer(Item.class);
}

@Bean
@ServiceActivator(inputChannel = "outboundChannel")
public MessageHandler jdbcmessageHandler() {
    JdbcMessageHandler jdbcMessageHandler = new ...
    return ...;
}

@Bean
@ServiceActivator(inputChannel = "inboundChannel")
public MessageHandler kafkahandler() {
    return new ...;
}
In both handlers I override
public void handleMessage(Message<?> message)
The problem: if there are N messages in Kafka in total, then each handleMessage() is invoked exactly N/2 times!
I assumed that each handler would be invoked N times, because each handler is linked to a different channel and there are N messages in total.
What am I missing?
(If I disable the Kafka handler, the second handler gets all N messages.)
Update:
I need each subscriber to get all the messages from the same channel (the Kafka handler will do something with the raw data, and the JDBC handler will push the transformed data).
See what you have:
@Transformer(inputChannel = "inboundChannel", ...)
@ServiceActivator(inputChannel = "inboundChannel")
Both of them are subscribers to the same inboundChannel, which is a DirectChannel. Such a channel dispatches with a round-robin LoadBalancingStrategy by default, therefore you see N/2 in your Kafka handler: its service activator deals only with every second message sent to that channel.
Please, see more info in docs: https://docs.spring.io/spring-integration/reference/html/core.html#channel-configuration-directchannel
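Given the Update (both subscribers must see every message), one option is to declare the shared channel as publish-subscribe instead of direct; a minimal sketch:

@Bean
public MessageChannel inboundChannel() {
    // a pub/sub channel delivers a copy of each message to every subscriber
    return new PublishSubscribeChannel();
}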

Spring Integration Flow DSL with SQS and Reactive

How can I set up a reactive flow using the DSL for the following steps:
Receive an SQS message using SqsMessageDrivenChannelAdapter
Validate the JSON message (JsonSchemaValidator class with a validate method)
Transform the JSON to objects
Pass the objects to a service activator (BusinessService: business logic, state machine)
Persist the objects with an R2DBC outbound adapter
I was looking at this: https://github.com/spring-projects/spring-integration/blob/master/spring-integration-core/src/test/java/org/springframework/integration/dsl/reactivestreams/ReactiveStreamsTests.java
In the above example, dedicated flows are created that return a Publisher, and in the tests the Publishers are subscribed to. However, my flow will be triggered when the SqsMessageDrivenChannelAdapter brings a message into a channel.
How do I achieve a reactive flow configuration for steps 1 to 5 of the scenario above?
Update: sample code added
@Bean
public IntegrationFlow importFlow() {
    return IntegrationFlows.from(sqsInboundChannel())
            .handle((payload, messageHeaders) -> jsonSchemaValidator.validate(payload.toString()))
            .transform(Transformers.fromJson(Entity.class))
            .handle((payload, messageHeaders) -> businessService.process((Entity) payload))
            .handle(Jpa.outboundAdapter(this.entityManagerFactory)
                            .entityClass(Entity.class)
                            .persistMode(PersistMode.PERSIST),
                    ConsumerEndpointSpec::transactional)
            .get();
}
@Bean
public MessageProducer sqsMessageDrivenChannelAdapter() {
    SqsMessageDrivenChannelAdapter sqsMessageDrivenChannelAdapter =
            new SqsMessageDrivenChannelAdapter(asyncSqsClient, queueName);
    sqsMessageDrivenChannelAdapter.setAutoStartup(true);
    sqsMessageDrivenChannelAdapter.setOutputChannel(sqsInboundChannel());
    return sqsMessageDrivenChannelAdapter;
}

@Bean
public MessageChannel sqsInboundChannel() {
    return MessageChannels.flux().get();
}
Update 2: moved JPA to a different thread using an executor channel
@Bean
public IntegrationFlow importFlow() {
    return IntegrationFlows.from(sqsInboundChannel())
            .handle((payload, messageHeaders) -> jsonSchemaValidator.validate(payload.toString()))
            .transform(Transformers.fromJson(Entity.class))
            .handle((payload, messageHeaders) -> businessService.process((Entity) payload))
            .channel(persistChannel())
            .handle(Jpa.outboundAdapter(this.entityManagerFactory)
                            .entityClass(Entity.class)
                            .persistMode(PersistMode.PERSIST),
                    ConsumerEndpointSpec::transactional)
            .get();
}

@Bean
public MessageProducer sqsMessageDrivenChannelAdapter() {
    SqsMessageDrivenChannelAdapter sqsMessageDrivenChannelAdapter =
            new SqsMessageDrivenChannelAdapter(asyncSqsClient, queueName);
    sqsMessageDrivenChannelAdapter.setAutoStartup(true);
    sqsMessageDrivenChannelAdapter.setOutputChannel(sqsInboundChannel());
    return sqsMessageDrivenChannelAdapter;
}

@Bean
public MessageChannel sqsInboundChannel() {
    return MessageChannels.flux().get();
}

@Bean
public MessageChannel persistChannel() {
    return MessageChannels.executor(Executors.newCachedThreadPool()).get();
}
You probably need to make yourself more familiar with what we have so far for Reactive Streams in Spring Integration: https://docs.spring.io/spring-integration/docs/current/reference/html/reactive-streams.html#reactive-streams
The sample you show with that test class is not relevant to your use case at all. In those tests we cover some API we expose in Spring Integration; they are more like unit tests. They have nothing to do with a whole flow.
Your use case is really just a full black-box flow starting with the SQS listener and ending in R2DBC. Therefore there is no point in converting part of your flow into a Publisher and then bringing it back into another part of the flow: you are not going to somehow track and subscribe to that Publisher yourself.
You may consider placing a FluxMessageChannel between endpoints in your flow, but it still does not make sense for your use case. It won't be fully reactive as you expect, simply because org.springframework.cloud.aws.messaging.listener.SimpleMessageListenerContainer does not block on the consumer thread waiting for back-pressure from downstream.
The only really reactive part of your flow is the R2DBC outbound channel adapter, but it probably does not bring you much value because the source of the data is not reactive.
As I said: you can try to place a channel(channels -> channels.flux()) just after the SqsMessageDrivenChannelAdapter definition to start a reactive flow from that point. At the same time, try to set maxNumberOfMessages to 1 to make the adapter wait for free space downstream before pulling the next message from SQS.
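A sketch of that last suggestion (same bean names as the question; whether one message per poll is acceptable for your throughput is an assumption to verify):

@Bean
public MessageProducer sqsMessageDrivenChannelAdapter() {
    SqsMessageDrivenChannelAdapter adapter =
            new SqsMessageDrivenChannelAdapter(asyncSqsClient, queueName);
    // one message per poll, so the container roughly follows the pace of the
    // downstream flow instead of flooding it
    adapter.setMaxNumberOfMessages(1);
    // sqsInboundChannel() is already a FluxMessageChannel in the question,
    // which provides the reactive hand-off described above
    adapter.setOutputChannel(sqsInboundChannel());
    return adapter;
}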

Thread safety in executor channel

I have a message producer which produces around 15 messages/second.
The consumer is a Spring Integration project which consumes from the message queue and does a lot of processing. I have used executor channels to process messages in parallel, and then the flow passes through some common handler classes.
Please find below a snippet of the code:
baseEventFlow() - we receive the message from the EMS queue and send it to a router.
router() - based on the id of the message, a particular ExecutorChannel instance is selected; every ExecutorChannel has its own dedicated executor with only a single thread.
skwDefaultChannel(), gjsucaDefaultChannel(), rpaDefaultChannel() - all the ExecutorChannel beans are marked with @BridgeTo for the same channel, which starts the common flow.
uaEventFlow() - here each message gets processed.
@Bean
public IntegrationFlow baseEventFlow() {
    return IntegrationFlows
            .from(Jms.messageDrivenChannelAdapter(Jms.container(this.emsConnectionFactory, this.emsQueue).get()))
            .wireTap(FLTAWARE_WIRE_TAP_CHNL)
            .route(router()).get();
}

public AbstractMessageRouter router() {
    return new AbstractMessageRouter() {

        @Override
        protected Collection<MessageChannel> determineTargetChannels(Message<?> message) {
            if (message.getPayload().toString().contains("\"id\":\"RPA")) {
                return Collections.singletonList(skwDefaultChannel());
            }
            else if (message.getPayload().toString().contains("\"id\":\"ASH")) {
                return Collections.singletonList(rpaDefaultChannel());
            }
            else if (message.getPayload().toString().contains("\"id\":\"GJS")
                    || message.getPayload().toString().contains("\"id\":\"UCA")) {
                return Collections.singletonList(gjsucaDefaultChannel());
            }
            else {
                return Collections.singletonList(new NullChannel());
            }
        }

    };
}

@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel skwDefaultChannel() {
    return MessageChannels.executor(SKW_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}

@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel gjsucaDefaultChannel() {
    return MessageChannels.executor(GJS_UCA_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}

@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel rpaDefaultChannel() {
    return MessageChannels.executor(RPA_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}

@Bean
public IntegrationFlow uaEventFlow() {
    return IntegrationFlows.from("uaDefaultChannel")
            .wireTap(UA_WIRE_TAP_CHNL)
            .transform(eventHandler, "parseEvent")
            .handle(uaImpl, "process").get();
}
My concern is that in uaEventFlow() the common transform and handle methods are not thread-safe, and this may cause issues. How can we ensure that a new transformer and handler are injected for every message invocation?
Should I change the scope of the transformer and handler beans to prototype?
Instead of bridging to a common flow, you should move the .transform() and .handle() into each of the upstream flows and add
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
to their @Bean definitions so each flow gets its own instance.
But it's generally better to make your code thread-safe.
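For one of the flows, that could look like the following sketch (assuming eventHandler() and uaImpl() are @Bean methods for your EventHandler and UaImpl classes; with prototype scope each call to those methods yields a fresh instance):

@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public EventHandler eventHandler() {
    return new EventHandler();
}

@Bean
public IntegrationFlow skwFlow() {
    return IntegrationFlows.from(skwDefaultChannel())
            .wireTap(UA_WIRE_TAP_CHNL)
            // this flow gets its own transformer/handler instances
            .transform(eventHandler(), "parseEvent")
            .handle(uaImpl(), "process")
            .get();
}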

Multiple IntegrationFlows attached to the same request channel in Gateway method

Given I have an application which uses Spring Integration, and I define a gateway:
@Component
@MessagingGateway
public interface SmsGateway {

    @Gateway(requestChannel = CHANNEL_SEND_SMS)
    void sendSms(SendSmsRequest request);

}

public interface IntegrationChannels {

    String CHANNEL_SEND_SMS = "channelSendSms";

}
I also attach an IntegrationFlow to the CHANNEL_SEND_SMS channel:
@Bean
public IntegrationFlow sendSmsFlow() {
    return IntegrationFlows.from(CHANNEL_SEND_SMS)
            .transform(...)
            .handle(...)
            .get();
}
Whenever I call the sendSms gateway method from business code, sendSmsFlow is executed as expected.
When I want to attach another IntegrationFlow to the same CHANNEL_SEND_SMS channel, e.g.
@Bean
public IntegrationFlow differentFlow() {
    return IntegrationFlows.from(CHANNEL_SEND_SMS)
            .transform(...)
            .handle(...)
            .get();
}
then this differentFlow is not executed.
Why does it behave this way?
Is there any solution to make it work for both flows?
The default channel type is DirectChannel, and messages are distributed to its multiple subscribers in round-robin fashion by default.
Declare CHANNEL_SEND_SMS as a PublishSubscribeChannel if you want each flow to get every message.
This will only work with a void gateway method; if there is a return type, you will get the first reply (or a random one, if there is any async downstream processing) and the others will be discarded.
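A sketch of that channel declaration (bean name taken from the question's IntegrationChannels constant; this works here because sendSms returns void):

@Bean(name = IntegrationChannels.CHANNEL_SEND_SMS)
public MessageChannel channelSendSms() {
    // every subscribed flow receives its own copy of each message
    return MessageChannels.publishSubscribe().get();
}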
