Thread safety in executor channel - spring-integration

I have a message producer which produces around 15 messages/second.
The consumer is a Spring Integration project which consumes from the message queue and does a lot of processing. I have used an ExecutorChannel to process messages in parallel, and then the flow passes through a common handler class.
Please find below the snippet of code:
baseEventFlow() - We receive the message from the EMS queue and send it to a router.
router() - Based on the id of the message, a particular ExecutorChannel instance is selected, each configured with a single-threaded Executor. Every ExecutorChannel has its own dedicated executor with only a single thread.
skwDefaultChannel(), gjsucaDefaultChannel(), rpaDefaultChannel() - All the ExecutorChannel beans are marked with @BridgeTo for the same channel, which starts the common flow.
uaEventFlow() - Here each message gets processed.
@Bean
public IntegrationFlow baseEventFlow() {
    return IntegrationFlows
            .from(Jms.messageDrivenChannelAdapter(
                    Jms.container(this.emsConnectionFactory, this.emsQueue).get()))
            .wireTap(FLTAWARE_WIRE_TAP_CHNL)
            .route(router())
            .get();
}
public AbstractMessageRouter router() {
    return new AbstractMessageRouter() {

        @Override
        protected Collection<MessageChannel> determineTargetChannels(Message<?> message) {
            if (message.getPayload().toString().contains("\"id\":\"RPA")) {
                return Collections.singletonList(skwDefaultChannel());
            }
            else if (message.getPayload().toString().contains("\"id\":\"ASH")) {
                return Collections.singletonList(rpaDefaultChannel());
            }
            else if (message.getPayload().toString().contains("\"id\":\"GJS")
                    || message.getPayload().toString().contains("\"id\":\"UCA")) {
                return Collections.singletonList(gjsucaDefaultChannel());
            }
            else {
                return Collections.singletonList(new NullChannel());
            }
        }

    };
}
@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel skwDefaultChannel() {
    return MessageChannels.executor(SKW_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}

@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel gjsucaDefaultChannel() {
    return MessageChannels.executor(GJS_UCA_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}

@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel rpaDefaultChannel() {
    return MessageChannels.executor(RPA_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}
@Bean
public IntegrationFlow uaEventFlow() {
    return IntegrationFlows.from("uaDefaultChannel")
            .wireTap(UA_WIRE_TAP_CHNL)
            .transform(eventHandler, "parseEvent")
            .handle(uaImpl, "process")
            .get();
}
My concern is that in uaEventFlow() the common transform and handler methods are not thread-safe, and this may cause issues. How can we ensure that we inject a new transformer and handler for every message invocation?
Should I change the scope of the transformer and handler beans to prototype?

Instead of bridging to a common flow, you should move the .transform() and .handle() into each of the upstream flows and add
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
to their @Bean definitions so that each flow gets its own instance.
But it's generally better to make your code thread-safe.
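For illustration, a minimal sketch of the prototype-scope approach, assuming the transformer bean is some EventHandler class (the type name and the skwEventFlow name are assumptions, not taken from the question):

@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public EventHandler eventHandler() {
    // a fresh instance is returned for each call through the configuration proxy
    return new EventHandler();
}

@Bean
public IntegrationFlow skwEventFlow() {
    // the common steps moved into this upstream flow; eventHandler() now resolves
    // to a dedicated prototype instance for this flow only
    return IntegrationFlows.from(skwDefaultChannel())
            .wireTap(UA_WIRE_TAP_CHNL)
            .transform(eventHandler(), "parseEvent")
            .handle(uaImpl, "process")
            .get();
}

Note that prototype scope gives one instance per flow definition, not one per message; if you truly need isolation per message, making the transformer stateless (thread-safe) is the simpler option.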

Spring Integration resequencer does not release the last group of messages

I have the following configuration:
@Bean
public IntegrationFlow messageFlow(JdbcMessageStore groupMessageStore, TransactionSynchronizationFactory syncFactory,
        TaskExecutor te, ThreadPoolTaskScheduler ts, RealTimeProcessor processor) {
    return IntegrationFlows
            .from("inputChannel")
            .handle(processor, "handleInputMessage", consumer -> consumer
                    .taskScheduler(ts)
                    .poller(poller -> poller
                            .fixedDelay(pollerFixedDelay)
                            .receiveTimeout(pollerReceiveTimeout)
                            .maxMessagesPerPoll(pollerMaxMessagesPerPoll)
                            .taskExecutor(te)
                            .transactional()
                            .transactionSynchronizationFactory(syncFactory)))
            .resequence(s -> s.messageStore(groupMessageStore)
                    .releaseStrategy(new TimeoutCountSequenceSizeReleaseStrategy(50, 30000)))
            .channel("sendingChannel")
            .handle(processor, "sendMessage")
            .get();
}
If I send a single batch of e.g. 100 messages to the inputChannel, it works as expected until the inputChannel is empty. Once the inputChannel becomes empty, it also stops processing the messages that were waiting for resequencing. As a result, there are always a couple of messages left in the groupMessageStore, even after the configured release timeout.
I'm guessing it's because the poller is configured only for the inputChannel, and if there are no messages there it never gets to the resequencer (so it never calls canRelease on the release strategy).
But if I try adding a separate poller for the resequencer, I get the following error: "A poller should not be specified for endpoint since channel x is a SubscribableChannel (not pollable)".
Is there a different way to configure it so that the last group of messages is always released?
The release strategy is passive and needs something to trigger it to be called.
Add .groupTimeout(...) to release the partial sequence after the specified time elapses.
EDIT
@SpringBootApplication
public class So67993972Application {

    private static final Logger log = LoggerFactory.getLogger(So67993972Application.class);

    public static void main(String[] args) {
        SpringApplication.run(So67993972Application.class, args);
    }

    @Bean
    IntegrationFlow flow(MessageGroupStore mgs) {
        return IntegrationFlows.from(MessageChannels.direct("input"))
                .resequence(e -> e.messageStore(mgs)
                        .groupTimeout(5_000)
                        .sendPartialResultOnExpiry(true)
                        .releaseStrategy(new TimeoutCountSequenceSizeReleaseStrategy(50, 2000)))
                .channel(MessageChannels.queue("output"))
                .get();
    }

    @Bean
    MessageGroupStore mgs() {
        return new SimpleMessageStore();
    }

    @Bean
    public ApplicationRunner runner(MessageChannel input, QueueChannel output, MessageGroupStore mgs) {
        return args -> {
            MessagingTemplate template = new MessagingTemplate(input);
            log.info("Sending");
            template.send(MessageBuilder.withPayload("foo")
                    .setHeader(IntegrationMessageHeaderAccessor.CORRELATION_ID, "bar")
                    .setHeader(IntegrationMessageHeaderAccessor.SEQUENCE_NUMBER, 2)
                    .setHeader(IntegrationMessageHeaderAccessor.SEQUENCE_SIZE, 2)
                    .build());
            log.info(output.receive(10_000).toString());
            Thread.sleep(1000);
            log.info(mgs.getMessagesForGroup("bar").toString());
        };
    }

}

Vaadin and receiving email asynchronously

My Vaadin 14 application should receive emails in the background. If emails with a certain subject have been received, the user should be informed about this via PUSH message on the UI.
For the email handling itself I implemented the mail/message handling with Spring Integration, and that works. Two beans (an IntegrationFlow and a ServiceActivator) are declared via @Configuration and @Bean annotations in the Spring application context like so:
@Configuration
public class EmailReceiver {

    @Bean
    public HeaderMapper<MimeMessage> mailHeaderMapper() {
        return new DefaultMailHeaderMapper();
    }

    @Bean
    public IntegrationFlow imapMailFlow() {
        IntegrationFlow flow = IntegrationFlows
                .from(Mail.imapInboundAdapter("imaps://user:pass@imap.ionos.de/INBOX")
                                .userFlag("testSIUserFlag")
                                .javaMailProperties(new Properties()),
                        e -> e.autoStartup(true)
                                .poller(p -> p.fixedDelay(5000)))
                .transform(Mail.toStringTransformer())
                .channel(MessageChannels.queue("imapChannel"))
                .get();
        return flow;
    }

    @Bean(name = PollerMetadata.DEFAULT_POLLER)
    public PollerMetadata defaultPoller() {
        PollerMetadata pollerMetadata = new PollerMetadata();
        pollerMetadata.setTrigger(new PeriodicTrigger(1000));
        return pollerMetadata;
    }

    @Bean
    @ServiceActivator(inputChannel = "imapChannel")
    public MessageHandler processNewEmail() {
        MessageHandler messageHandler = new MessageHandler() {

            @Override
            public void handleMessage(org.springframework.messaging.Message<?> message) throws MessagingException {
                System.out.println("new email received");
            }

        };
        return messageHandler;
    }

}
See also here: https://docs.spring.io/spring-integration/docs/current/reference/html/mail.html#mail-java-dsl-configuration
With such a @Configuration annotated class, the emails are received in the background of the Vaadin app. Check.
But how can I integrate a callback into a Vaadin view in the method EmailReceiver.processNewEmail?
@Bean
@ServiceActivator(inputChannel = "imapChannel")
public MessageHandler processNewEmail(UI ui) {
This always throws an error at application start: "Scope vaadin-ui is not active for the current thread; consider defining a scoped proxy for this bean."
There is an example of asynchronous updates with Vaadin: https://vaadin.com/docs/v14/flow/advanced/tutorial-push-access.
In contrast to this, I have to create a @Bean for the @ServiceActivator handling. As soon as that is the case, there is always the error "There is no UI available. The UI scope is not active."
If I move the method processNewEmail() into a separate class, I still cannot reference a Vaadin UI:
@MessageEndpoint
class EmailMessageHandler {

    private UI ui;

    public EmailMessageHandler(UI ui) {
        this.ui = ui;
    }

    @Bean
    @ServiceActivator(inputChannel = "imapChannel")
    public MessageHandler processNewEmail() {
        MessageHandler messageHandler = new MessageHandler() {

            @Override
            public void handleMessage(org.springframework.messaging.Message<?> message) throws MessagingException {
                System.out.println("new email received" + message);
            }

        };
        return messageHandler;
    }

}
How can I combine Vaadin asynchronous handling and Spring-Integration Email/ServiceActivator processing?
The point is that your mail-receiving functionality is a singleton per application. On the other hand, you are going to have as many UIs as there are users making HTTP requests to your application. So you need some intermediary to which emails are dumped and from which the UI can fetch them when a request happens.
You already have that imapChannel as a QueueChannel, so you can take it from your UI-scoped code and call its receive() API to pull the next message. The only problem is that it is a queue: once one caller receives a message, other callers won't see it. That may be OK for you for now, but it is better to think about something that behaves like a topic in messaging terms. A good, easy-to-use candidate is Reactor's Sinks.Many: https://projectreactor.io/docs/core/release/reference/#sinks
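As a rough sketch (assuming Reactor is on the classpath; the bean and handler names here are illustrative, not taken from the question), the service activator could emit each mail into a Sinks.Many, and every UI-scoped view could subscribe to it and push updates via ui.access():

@Bean
public Sinks.Many<String> emailSink() {
    // multicast: every currently subscribed UI receives each emitted email
    return Sinks.many().multicast().onBackpressureBuffer();
}

@Bean
@ServiceActivator(inputChannel = "imapChannel")
public MessageHandler processNewEmail(Sinks.Many<String> emailSink) {
    return message -> emailSink.tryEmitNext(message.getPayload().toString());
}

And in a @Push-enabled, UI-scoped Vaadin view:

Disposable subscription = emailSink.asFlux()
        .subscribe(body -> ui.access(() -> Notification.show("New email: " + body)));
// dispose the subscription in onDetach() to avoid leaking detached UIs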

Multiple IntegrationFlows attached to the same request channel in Gateway method

Given an application which uses Spring Integration, I define a gateway:
@Component
@MessagingGateway
public interface SmsGateway {

    @Gateway(requestChannel = CHANNEL_SEND_SMS)
    void sendSms(SendSmsRequest request);

}

public interface IntegrationChannels {

    String CHANNEL_SEND_SMS = "channelSendSms";

}
I also attach IntegrationFlow to CHANNEL_SEND_SMS channel:
@Bean
public IntegrationFlow sendSmsFlow() {
    return IntegrationFlows.from(CHANNEL_SEND_SMS)
            .transform(...)
            .handle(...)
            .get();
}
Whenever I call the sendSms gateway method from business code, sendSmsFlow is executed as expected.
When I want to attach another IntegrationFlow to the same CHANNEL_SEND_SMS channel, e.g.
@Bean
public IntegrationFlow differentFlow() {
    return IntegrationFlows.from(CHANNEL_SEND_SMS)
            .transform(...)
            .handle(...)
            .get();
}
then this differentFlow is not executed.
Why does it behave this way?
Is there any solution to make it work for both flows?
The default channel type is DirectChannel, and messages are distributed to its subscribers in a round-robin fashion by default.
Declare CHANNEL_SEND_SMS as a PublishSubscribeChannel if you want each flow to get every message.
This will only work with a void gateway method; if there is a return type, you will get the first reply (or a random one if there is any async downstream processing) and the others will be discarded.
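A minimal sketch of that declaration (assuming the channel is defined as a bean under the CHANNEL_SEND_SMS name):

@Bean(name = IntegrationChannels.CHANNEL_SEND_SMS)
public MessageChannel channelSendSms() {
    // both sendSmsFlow and differentFlow subscribe to this channel and each receives every message
    return new PublishSubscribeChannel();
}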

Calls to gateway result never return to caller when successful

I am using Spring Integration DSL and have a simple Gateway:
@MessagingGateway(name = "eventGateway", defaultRequestChannel = "inputChannel")
public interface EventProcessorGateway {

    @Gateway(requestChannel = "inputChannel")
    public void processEvent(Message message);

}
My Spring Integration flow is defined as:
@Bean MessageChannel inputChannel() { return new DirectChannel(); }
@Bean MessageChannel errorChannel() { return new DirectChannel(); }
@Bean MessageChannel retryGatewayChannel() { return new DirectChannel(); }
@Bean MessageChannel jsonChannel() { return new DirectChannel(); }

@Bean
public IntegrationFlow postEvents() {
    return IntegrationFlows.from(inputChannel())
            .route("headers.contentType", m -> m.channelMapping(MediaType.APPLICATION_JSON_VALUE, "json"))
            .get();
}

@Bean
public IntegrationFlow retryGateway() {
    return IntegrationFlows.from("json")
            .gateway(retryGatewayChannel(), e -> e.advice(retryAdvice()))
            .get();
}

@Bean
public IntegrationFlow transformJsonEvents() {
    return IntegrationFlows
            .from(retryGatewayChannel())
            .transform(new JsonTransformer())
            .handle(new JsonHandler())
            .get();
}
The JsonTransformer is a simple AbstractTransformer that transforms the JSON data and passes it to the JsonHandler.
class JsonHandler extends AbstractMessageHandler {

    public void handleMessageInternal(Message message) throws Exception {
        // do stuff, return nothing if success else throw Exception
    }

}
I call my gateway from code as such:
try {
    Message<List<EventRecord>> message = MessageBuilder.createMessage(eventList, new MessageHeaders(['contentType': contentType]))
    eventProcessorGateway.processEvent(message)
    logSuccess(eventList)
} catch (Exception e) {
    logError(eventList)
}
I want the entire call and processing to be synchronous, and any errors that occur to be caught so I can handle them appropriately. The call to the gateway works: the message gets sent through the Transformer and on to the Handler and is processed, and if an Exception occurs it bubbles back, is caught, and logError() is called. However, if the call is successful, logSuccess() is never reached. It is as if execution stops/hangs after the Handler processes the message and never returns. I do not actually need any response; I am more concerned about whether something fails to process. Do I need to send something back to the initial EventProcessorGateway?
Your issue is here:
return IntegrationFlows.from("json")
        .gateway(retryGatewayChannel(), e -> e.advice(retryAdvice()))
        .get();
where that .gateway() is request/reply, because it is part of the main flow.
It is something similar to the <gateway> within <chain>.
So, even if your main flow is one-way, using .gateway() inside it requires some reply from your sub-flow, but this one:
.handle(new JsonHandler())
.get();
doesn't do that, because it is a one-way MessageHandler.
On the other hand, even if you made the last one request-reply (an AbstractReplyProducingMessageHandler), it wouldn't help, because you don't know what to do with that reply after the mid-flow gateway: your main flow is one-way.
You should rethink your design a bit and try to get rid of that mid-flow gateway. I see that you are trying to add retry logic with retryAdvice().
But how about moving it to the .handle(new JsonHandler()) instead of that misplaced .gateway()?
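For illustration, a sketch of that suggestion, assuming the existing retryAdvice() bean: the mid-flow gateway and retryGatewayChannel are dropped and the advice is applied to the handler endpoint directly.

@Bean
public IntegrationFlow transformJsonEvents() {
    return IntegrationFlows.from("json")
            .transform(new JsonTransformer())
            // retries are handled by the advice on this endpoint; no reply is required
            .handle(new JsonHandler(), e -> e.advice(retryAdvice()))
            .get();
}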

Time-limited aggregation with publish-subscribe in Spring Integration

I am trying to implement the following using Spring Integration with the DSL and lambdas:
Given a message, send it to N consumers (via publish-subscribe). Wait for a limited time and return all the results that have arrived from consumers (<= N) during that interval.
Here is an example configuration I have so far:
@Configuration
@EnableIntegration
@IntegrationComponentScan
@ComponentScan
public class ExampleConfiguration {

    @Bean(name = PollerMetadata.DEFAULT_POLLER)
    public PollerMetadata poller() {
        return Pollers.fixedRate(1000).maxMessagesPerPoll(1).get();
    }

    @Bean
    public MessageChannel publishSubscribeChannel() {
        return MessageChannels.publishSubscribe(splitterExecutorService()).applySequence(true).get();
    }

    @Bean
    public ThreadPoolTaskExecutor splitterExecutorService() {
        final ThreadPoolTaskExecutor executorService = new ThreadPoolTaskExecutor();
        executorService.setCorePoolSize(3);
        executorService.setMaxPoolSize(10);
        return executorService;
    }

    @Bean
    public DirectChannel errorChannel() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel requestChannel() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel channel1() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel channel2() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel collectorChannel() {
        return new DirectChannel();
    }

    @Bean
    public TransformerChannel1 transformerChannel1() {
        return new TransformerChannel1();
    }

    @Bean
    public TransformerChannel2 transformerChannel2() {
        return new TransformerChannel2();
    }

    @Bean
    public IntegrationFlow errorFlow() {
        return IntegrationFlows.from(errorChannel())
                .handle(m -> System.err.println("[" + Thread.currentThread().getName() + "] " + m.getPayload()))
                .get();
    }

    @Bean
    public IntegrationFlow channel1Flow() {
        return IntegrationFlows.from(publishSubscribeChannel())
                .transform("1: "::concat)
                .transform(transformerChannel1())
                .channel(collectorChannel())
                .get();
    }

    @Bean
    public IntegrationFlow channel2Flow() {
        return IntegrationFlows.from(publishSubscribeChannel())
                .transform("2: "::concat)
                .transform(transformerChannel2())
                .channel(collectorChannel())
                .get();
    }

    @Bean
    public IntegrationFlow splitterFlow() {
        return IntegrationFlows.from(requestChannel())
                .channel(publishSubscribeChannel())
                .get();
    }

    @Bean
    public IntegrationFlow collectorFlow() {
        return IntegrationFlows.from(collectorChannel())
                .resequence(r -> r.releasePartialSequences(true), null)
                .aggregate(a -> a.sendPartialResultOnExpiry(true)
                        .groupTimeout(500), null)
                .get();
    }

}
TransformerChannel1 and TransformerChannel2 are sample consumers and have been implemented with just a sleep to emulate delay.
The message flow is:
splitterFlow -> channel1Flow \
-> channel2Flow / -> collectorFlow
Everything seems to work as expected, but I see warnings like:
Reply message received but the receiving thread has already received a reply
which is to be expected, given that partial result was returned.
Questions:
Overall, is this a good approach?
What is the right way to gracefully service or discard those delayed messages?
How to deal with exceptions? Ideally I'd like to send them to errorChannel, but am not sure where to specify this.
Yes, the solution looks good. It fits the Scatter-Gather pattern, for which an implementation has been provided since version 4.1.
On the other hand, since that version there is one more option for the aggregator: expire-groups-upon-timeout, which is true by default. With this option set to false you would be able to achieve your requirement of discarding all those late messages. Unfortunately, the DSL doesn't support it yet, so it won't help even if you upgrade your project to Spring Integration 4.1.
Another option for those "Reply message received but the receiving thread has already received a reply" warnings is spring.integration.messagingTemplate.throwExceptionOnLateReply = true, set in a spring.integration.properties file under META-INF of one of your jars.
Anyway, I think that Scatter-Gather is the best solution for your use case.
You can find here how to configure it with Java config.
UPDATE
What about exceptions and error channel?
Since you already deal with throwExceptionOnLateReply, I guess you send a message to the requestChannel via a @MessagingGateway. The latter has an errorChannel option. On the other hand, the PublishSubscribeChannel has an errorHandler option, for which you can use a MessagePublishingErrorHandler with your errorChannel as its default.
By the way, don't forget that the framework provides an errorChannel bean and an endpoint on it for the LoggingHandler, so please consider whether you really need to override that. The default errorChannel is a PublishSubscribeChannel, hence you can simply add your own subscribers to it.
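As a sketch of wiring that errorHandler option (reusing the errorChannel() and splitterExecutorService() beans from the question; the channel is constructed directly here rather than via the DSL spec):

@Bean
public MessageChannel publishSubscribeChannel() {
    PublishSubscribeChannel channel = new PublishSubscribeChannel(splitterExecutorService());
    channel.setApplySequence(true);
    MessagePublishingErrorHandler errorHandler = new MessagePublishingErrorHandler();
    errorHandler.setDefaultErrorChannel(errorChannel());
    // exceptions thrown on the executor threads are routed to errorChannel
    channel.setErrorHandler(errorHandler);
    return channel;
}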
