I am using an (already set up) Spring Integration workflow to which I am trying to add a service-activator that basically counts messages.
In this example I log the payload of the message, but only for debugging purposes.
<bean id="messageCounterActivator" class="com.my.company.activators.MessageCounterActivator"/>
<bean id="jmsQueue2" class="com.ibm.mq.jms.MQQueue" depends-on="jmsConnectionFactory">...</bean>
<int:channel id="channelMQ_MQ" ></int:channel>
<int:service-activator input-channel="channelMQ_MQ" method="countMessage" ref="messageCounterActivator"/>
<int-jms:message-driven-channel-adapter id="jmsIn" channel="channelMQ_MQ" destination="jmsQueue"/>
<int-jms:outbound-channel-adapter channel="channelMQ_MQ"
id="jmsOut2"
destination="jmsQueue2"
connection-factory="connectionFactoryCaching2"
delivery-persistent="true"
explicit-qos-enabled="true"
session-transacted="true" />
And from the activator bean:
public Message<?> countMessage(Message<?> message) {
LOG.debug("=> " + message.getPayload());
someCounter.increment();
return message;
}
Once this bridge is up, it works well: all messages are routed from A to B (MQ Series => MQ Series), and the counter is updated... but only every 2 messages!
MessageCounterActivator - => message 1
MessageCounterActivator - => message 3
MessageCounterActivator - => message 5
MessageCounterActivator - => message 7
MessageCounterActivator - => message 9
MessageCounterActivator - => message 11
MessageCounterActivator - => message 13
MessageCounterActivator - => message 15
MessageCounterActivator - => message 17
MessageCounterActivator - => message 19
This is very odd to me. I suspect some embedded activator/listener is competing with my service-activator (I read in some posts here that when two activators, including logging ones, are active at once, you may see this kind of behavior). Maybe it's because I have two channel adapters? I just don't see who the competitor is here.
Any idea how to get my activator working the right way?
I would say your counting logic is not part of the main flow. So it is better to look into the wire-tap pattern and consider using an <int:outbound-channel-adapter> instead, since you are not going to return anything from that counting method.
Another approach for calling different endpoints with the same message is an <int:publish-subscribe-channel>.
Right now you declare a DirectChannel with that <int:channel id="channelMQ_MQ">, which by default uses a round-robin distribution strategy across its subscribers. That's exactly why you see that odd-even behavior for your subscribers.
Not sure how you missed it, but you do indeed have two subscribers on this direct channel:
<int:service-activator input-channel="channelMQ_MQ"
<int-jms:outbound-channel-adapter channel="channelMQ_MQ"
See more in the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations-directchannel
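For example, a wire-tap sketch could look like the following (the countingChannel id is made up here; everything else reuses the beans from your question). The tap copies each message to a side channel, the counter becomes a one-way endpoint there, and the main channel keeps a single subscriber:

<int:channel id="channelMQ_MQ">
    <int:interceptors>
        <!-- copy every message to the counting side channel -->
        <int:wire-tap channel="countingChannel"/>
    </int:interceptors>
</int:channel>

<int:channel id="countingChannel"/>

<!-- one-way endpoint: countMessage could then simply return void -->
<int:outbound-channel-adapter channel="countingChannel"
    ref="messageCounterActivator" method="countMessage"/>

With this, only the <int-jms:outbound-channel-adapter> subscribes to channelMQ_MQ, so the round-robin between subscribers disappears. The <int:publish-subscribe-channel> variant would instead deliver every message to both of your current subscribers.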
Related
How do I handle errors thrown in the Spring Integration flow below, just by logging the error and continuing with the next record? I should also be able to send an email only once, regardless of whether 1 or more records failed. Is there something equivalent to Spring Batch's StepListener? Thanks.
return IntegrationFlows.from(jdbcMessageSource(), p -> p.poller(pollerSpec()))
.enrichHeaders(Collections.singletonMap(MessageHeaders.ERROR_CHANNEL, appErrorChannel()))
.split()
.channel(c -> c.executor(Executors.newCachedThreadPool()))
.transform(transformer, "transform")
.enrichHeaders(headerEnricherSpec -> headerEnricherSpec.header(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE))
.handle(Http.outboundGateway(url)
.httpMethod(HttpMethod.POST)
.expectedResponseType(String.class)
.requestFactory(requestFactory))
.get();
@Bean
public MessageChannel appErrorChannel() {
return new DirectChannel();
}
@Bean("appErrorFlow")
public IntegrationFlow appErrorFlow() {
// @formatter:off
return IntegrationFlows.from(appErrorChannel())
.log(Level.ERROR, message -> " Failed Message " + message.getPayload())
.aggregate(a -> a.correlationStrategy(m -> !m.getHeaders().isEmpty()))
.handle(Http.outboundGateway(mailURL)
.httpMethod(HttpMethod.POST))
.get();
// @formatter:on
}
Exception:
2022-01-13 06:41:31,702 [Worker-1 ] WARN o.s.i.c.MessagePublishingErrorHandler - 005006f2dee5b673 Error message was not delivered.
org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel 'unknown.channel.name'.; nested exception is org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=ErrorMessage [payload=*******Removed On Purpose*******]
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:76)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:317)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:187)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:166)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:109)
at org.springframework.integration.channel.MessagePublishingErrorHandler.handleError(MessagePublishingErrorHandler.java:96)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.lambda$execute$0(ErrorHandlingTaskExecutor.java:60)
at org.springframework.cloud.sleuth.instrument.async.TraceRunnable.run(TraceRunnable.java:64)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:139)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:106)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:72)
... 11 common frames omitted
I believe the closest equivalent to StepListener is a ChannelInterceptor.
See the IntegrationFlowDefinition.intercept() operator, which you can place between endpoints in that flow.
However, according to your splitter configuration, it doesn't look like you are going to lose anything; errors are just going to be logged. That c.executor() does the trick of shifting the work for each record onto a thread pool, so an error in one record cannot affect the rest of the records.
Anyway, it sounds like it would be better for you to have a custom error channel set into the headers just before that split() - .enrichHeaders(Collections.singletonMap(MessageHeaders.ERROR_CHANNEL, myErrorChannel())).
Then you have an aggregator subscribed to this channel, where you aggregate the errors for the batch and emit a single message to send the email. This happens only once per batch, or not at all if there are no errors.
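A rough sketch of that suggestion (the batchErrorChannel name, the correlation expression, and the release settings below are illustrative assumptions and would need to be adapted to your batch semantics):

@Bean
public MessageChannel batchErrorChannel() {
    return new DirectChannel();
}

@Bean
public IntegrationFlow batchErrorFlow() {
    // in the main flow, right before .split():
    //   .enrichHeaders(h -> h.header(MessageHeaders.ERROR_CHANNEL, batchErrorChannel(), true))
    return IntegrationFlows.from(batchErrorChannel())
            .log(Level.ERROR, m -> "Failed record: " + m.getPayload())
            // collect all errors of one batch into a single group; this correlation assumes
            // the splitter's correlationId header is still reachable via the failed message
            .aggregate(a -> a
                    .correlationExpression("payload.failedMessage.headers['correlationId']")
                    .groupTimeout(30_000)
                    .sendPartialResultOnExpiry(true))
            // a single email per batch, however many records failed
            .handle(Http.outboundGateway(mailURL).httpMethod(HttpMethod.POST))
            .get();
}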
I have the prefetch size set to 1 (jms.prefetchPolicy.all=1 in the URL). In the web console I can see that the prefetch is 1 for all of my consumers. One consumer got stuck and there were 67 messages on its dispatch queue (see my screenshot).
Could you help me understand how this could happen? I've read plenty of articles on this, and my understanding is that the dispatch queue size should be at most the prefetch size.
I use following configuration to consume messages from queue:
ConnectionFactory getActiveMQConnectionFactory() {
// Configure the ActiveMQConnectionFactory
ActiveMQConnectionFactory activeMQConnectionFactory = new ActiveMQConnectionFactory();
activeMQConnectionFactory.setBrokerURL(brokerUrl);
activeMQConnectionFactory.setUserName(user);
activeMQConnectionFactory.setPassword(password);
activeMQConnectionFactory.setNonBlockingRedelivery(true);
// Configure the redeliver policy and the dead letter queue
RedeliveryPolicy redeliveryPolicy = new RedeliveryPolicy();
redeliveryPolicy.setInitialRedeliveryDelay(initialRedeliveryDelay);
redeliveryPolicy.setRedeliveryDelay(redeliveryDelay);
redeliveryPolicy.setUseExponentialBackOff(useExponentialBackOff);
redeliveryPolicy.setMaximumRedeliveries(maximumRedeliveries);
RedeliveryPolicyMap redeliveryPolicyMap = activeMQConnectionFactory.getRedeliveryPolicyMap();
redeliveryPolicyMap.put(new ActiveMQQueue(thumbnailQueue), redeliveryPolicy);
activeMQConnectionFactory.setRedeliveryPolicy(redeliveryPolicy);
return activeMQConnectionFactory;
}
public IntegrationFlow createThumbnailFlow(String concurrency, CreateThumbnailReceiver receiver) {
return IntegrationFlows.from(
Jms.messageDrivenChannelAdapter(
Jms.container(getActiveMQConnectionFactory(), thumbnailQueue)
.concurrency(concurrency)
.sessionTransacted(true)
.get()
))
.transform(new JsonToObjectTransformer(CreateThumbnailRequest.class, jsonObjectMapper()))
.handle(receiver)
.get();
}
The problem was caused by a version difference between the broker (5.14.5) and the client (5.15.3). After upgrading the broker, the dispatch queue contains at most 2 messages, as expected.
My program does the following at a high level:
Task 1
get the data from the System X
the Java DSL split
post the data to the System Y
post the reply data to the X
the Java DSL aggregate
Task 2
get the data from the System X
the Java DSL split
post the data to the System Y
post the reply data to the X
the Java DSL aggregate
...
The problem is that when one of the "post the data to the System Y" sub-tasks fails, the error message is correctly sent back to the System X, but after that no other sub-tasks or tasks are executed.
My error handler does this:
...
Message<String> newMessage = MessageBuilder.withPayload("error occurred")
.copyHeadersIfAbsent(message.getPayload().getFailedMessage().getHeaders()).build();
...
// set some extra headers, etc.
...
return newMessage;
What could be the problem?
Edit:
I debugged Spring Integration. In the error situation, only the first error message reaches the method AbstractCorrelatingMessageHandler.handleMessageInternal. The other messages, successful and failing, never reach that method.
If there are no errors, all the messages reach the method and finally the group is released.
What could be wrong in my program?
Edit 2:
This is now working. I added an advice to the Http.outboundGateway:
.handle(Http.outboundGateway(...,
c -> c.advice(myAdvice()))
and the myAdvice bean
@Bean
private Advice myAdvice() {
return new MyAdvice();
}
and the MyAdvice class
public class MyAdvice<T> extends AbstractRequestHandlerAdvice {

    @SuppressWarnings("unchecked")
    @Override
    protected Object doInvoke(final ExecutionCallback callback, final Object target, final Message<?> message)
            throws Exception {
        ...
        try {
            result = (MessageBuilder<T>) callback.execute();
        } catch (final MessageHandlingException e) {
            // take the exception cause for the new payload
        }
        // return a new message with the old headers (including the replyChannel header)
        // and result.payload, or the exception cause, as the payload
    }
}
There is nothing wrong with your program. That's exactly how a regular loop works in Java: to catch an exception for each iteration and continue with the remaining items, you definitely need a try..catch inside the loop. So, something similar needs to be applied here for the splitter. It can be achieved with an ExpressionEvaluatingRequestHandlerAdvice, with an ExecutorChannel as the output from the splitter, or with a gateway call via a service activator on the splitter's output channel.
Since the story is about an aggregator afterwards, you still need to finish the group somehow, and that can only be done with some error compensation message emitted from the error handling back to the aggregator's input channel. In this case you need to make sure you copy the request headers from the failedMessage of the MessagingException thrown to the error flow. After aggregation of the group you will need to separate the messages with errors from the normal ones. That can be done with a special payload, or you may just use the exception as the payload, to properly distinguish errors from normal messages in the final result from the aggregator.
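A minimal sketch of the ExpressionEvaluatingRequestHandlerAdvice route (the bean name and the failure expression are only assumptions about what your compensation payload should look like):

@Bean
public ExpressionEvaluatingRequestHandlerAdvice httpErrorCompensationAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    // on failure this expression is evaluated against the request message;
    // #exception is the thrown exception
    advice.setOnFailureExpressionString("#exception.cause?.message ?: 'error occurred'");
    // return the expression result as the reply instead of re-throwing, so the request
    // headers (including the splitter's sequence details) flow on to the aggregator
    advice.setReturnFailureExpressionResult(true);
    advice.setTrapException(true);
    return advice;
}

It is attached the same way as the advice in your Edit 2: .handle(Http.outboundGateway(...), c -> c.advice(httpErrorCompensationAdvice())). Downstream of the aggregator you can then tell failed items apart by their payload, as described above.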
I am working on a Spring application which will receive around 500 XML messages per minute. The XML configuration below only allows around 60 messages per minute to be processed; the rest of the messages are stored in the queue (persisted in the DB) and retrieved at a rate of 60 messages per minute.
I have tried reading the documentation from multiple sources, but I am still not clear on the role of the poller combined with the task executor. My understanding of why 60 messages per minute are processed currently is that the fixed-delay value in the poller configuration is set to 10 (so it polls 6 times per minute) and max-messages-per-poll is set to 10, so 6x10=60 messages are processed per minute.
Please advise if my understanding is not correct, and help me modify the XML configuration to process the incoming messages at a higher rate.
The role of the task executor is unclear too - does pool-size="50" mean that 50 threads will run in parallel to process the messages polled by the poller?
What I want, in short, is:
JdbcChannelMessageStore is used to store the incoming XML messages in the database (INT_CHANNEL_MESSAGE table). This is required so that in case of a server restart the messages are still stored in the table and not lost.
Incoming messages should be processed in parallel, but in a controlled/limited way. Based on the capacity of the system processing these messages, I would like to limit how many messages the system processes in parallel.
As this configuration will be used on multiple servers in a cluster, any server can pick up any message, so there should be no conflict where the same message is processed by two servers. Hopefully that is handled by Spring Integration.
Apologies if this has been answered elsewhere but after reading numerous posts I still don't understand how this works.
Thanks in advance.
<!-- Message Store configuration start -->
<!-- JDBC message store configuration -->
<bean id="store" class="org.springframework.integration.jdbc.store.JdbcChannelMessageStore">
<property name="dataSource" ref="dataSource"/>
<property name="channelMessageStoreQueryProvider" ref="queryProvider"/>
<property name="region" value="TX_TIMEOUT"/>
<property name="usingIdCache" value="true"/>
</bean>
<bean id="queryProvider" class="org.springframework.integration.jdbc.store.channel.MySqlChannelMessageStoreQueryProvider" />
<int:transaction-synchronization-factory
id="syncFactory">
<int:after-commit expression="#store.removeFromIdCache(headers.id.toString())" />
<int:after-rollback expression="#store.removeFromIdCache(headers.id.toString())" />
</int:transaction-synchronization-factory>
<task:executor id="pool" pool-size="50" queue-capacity="100" rejection-policy="CALLER_RUNS" />
<int:poller id="messageStorePoller" fixed-delay="10"
receive-timeout="500" max-messages-per-poll="10" task-executor="pool"
default="true" time-unit="SECONDS">
<int:transactional propagation="REQUIRED"
synchronization-factory="syncFactory" isolation="READ_COMMITTED"
transaction-manager="transactionManager" />
</int:poller>
<bean id="transactionManager"
class="org.springframework.batch.support.transaction.ResourcelessTransactionManager" />
<!-- 1) Store the message in persistent message store -->
<int:channel id="incomingXmlProcessingChannel">
<int:queue message-store= "store" />
</int:channel>
<!-- 2) Check in, Enrich the headers, Check out -->
<!-- (This is the entry point for WebService requests) -->
<int:chain input-channel="incomingXmlProcessingChannel" output-channel="incomingXmlSplitterChannel">
<int:claim-check-in message-store="simpleMessageStore" />
<int:header-enricher >
<int:header name="CLAIM_CHECK_ID" expression="payload"/>
<int:header name="MESSAGE_ID" expression="headers.id" />
<int:header name="IMPORT_ID" value="XML_IMPORT"/>
</int:header-enricher>
<int:claim-check-out message-store="simpleMessageStore" />
</int:chain>
Added after response from Artem:
Thanks Artem. So, on every poll, which happens after a fixed delay of 10 seconds (as per the config above), the task executor will check the task queue and, if possible (and required), start a new task? And each pollingTask (thread) will receive 10 messages, as per the maxMessagesPerPoll config, from the message store (queue).
In order to achieve a higher processing rate for incoming messages, should I reduce the fixedDelay on the poller so that more threads can be started by the task executor? If I set the fixedDelay to 2 seconds, a new thread will be started to process 10 messages, and roughly 30 such threads will be started in a minute, processing roughly 300 incoming messages per minute.
Sorry for asking too much in one question - just wanted to explain the complete problem.
The main logic is behind this class:
private final class Poller implements Runnable {
private final Callable<Boolean> pollingTask;
Poller(Callable<Boolean> pollingTask) {
this.pollingTask = pollingTask;
}
@Override
public void run() {
AbstractPollingEndpoint.this.taskExecutor.execute(() -> {
int count = 0;
while (AbstractPollingEndpoint.this.initialized
&& (AbstractPollingEndpoint.this.maxMessagesPerPoll <= 0
|| count < AbstractPollingEndpoint.this.maxMessagesPerPoll)) {
try {
if (!Poller.this.pollingTask.call()) {
break;
}
count++;
}
catch (Exception e) {
if (e instanceof MessagingException) {
throw (MessagingException) e;
}
else {
Message<?> failedMessage = null;
if (AbstractPollingEndpoint.this.transactionSynchronizationFactory != null) {
Object resource = TransactionSynchronizationManager.getResource(getResourceToBind());
if (resource instanceof IntegrationResourceHolder) {
failedMessage = ((IntegrationResourceHolder) resource).getMessage();
}
}
throw new MessagingException(failedMessage, e);
}
}
finally {
if (AbstractPollingEndpoint.this.transactionSynchronizationFactory != null) {
Object resource = getResourceToBind();
if (TransactionSynchronizationManager.hasResource(resource)) {
TransactionSynchronizationManager.unbindResource(resource);
}
}
}
}
});
}
}
As you can see, the taskExecutor is responsible for spinning the pollingTask up to maxMessagesPerPoll times in a single thread. The other threads from the pool only get involved if the current polling task runs longer than the next schedule. But all the messages in one poll are processed in the same thread, not in parallel.
That is how it works. Since you are asking too much in one SO question, I hope this information is enough for you to figure out your next steps.
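So, under the assumption that you simply want more polling tasks handed to the pool per minute, the knob to turn is the delay (and, if needed, max-messages-per-poll). The values below are for illustration only, not a recommendation:

<!-- illustrative values: poll every second, still 10 messages per poll; each poll is
     processed in a single pool thread, so parallelism comes from overlapping polls,
     not from splitting one poll across threads -->
<int:poller id="messageStorePoller" fixed-delay="1"
    receive-timeout="500" max-messages-per-poll="10" task-executor="pool"
    default="true" time-unit="SECONDS">
    <int:transactional propagation="REQUIRED"
        synchronization-factory="syncFactory" isolation="READ_COMMITTED"
        transaction-manager="transactionManager" />
</int:poller>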
My integration context is as follows :
<int:channel id="fileInboundChannelAdapter"/>
<int-file:inbound-channel-adapter directory="${directory}" channel="fileInboundChannelAdapter" auto-startup="false" >
<int:poller fixed-rate="5000" max-messages-per-poll="1" />
</int-file:inbound-channel-adapter>
And I am manually sending a message to this channel after some condition is met:
@Resource(name = "fileInboundChannelAdapter")
private MessageChannel messageChannel;
Inside some method
Message<File> fileMessage = MessageBuilder.withPayload(fileObject).build();
boolean success = messageChannel.send(fileMessage, 1000 * 60);
At this line, messageChannel.send() does not return even after the timeout is exceeded, no other request is served, and the server needs to be restarted.
You need to share the subscriber for that fileInboundChannelAdapter channel. With that we can try to understand what's going on. Also take a look at the logs to figure out the issue on your side.
The timeout parameter (1000 * 60 in your case) has no effect for a DirectChannel:
protected boolean doSend(Message<?> message, long timeout) {
try {
return this.getRequiredDispatcher().dispatch(message);
}
catch (MessageDispatchingException e) {
String description = e.getMessage() + " for channel '" + this.getFullChannelName() + "'.";
throw new MessageDeliveryException(message, description, e);
}
}
So, it looks like your subscriber just blocks the calling thread somehow...
We need to see its code.
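To illustrate the point with a made-up subscriber (not your code): because a DirectChannel dispatches on the caller's thread, a handler like the following would keep messageChannel.send() from ever returning, regardless of the timeout argument.

// hypothetical example only - a subscriber that blocks the dispatching thread
@ServiceActivator(inputChannel = "fileInboundChannelAdapter")
public void handle(File file) throws InterruptedException {
    // anything that never returns (an endless wait, a lock that is never released, ...)
    // blocks the send() caller, because the DirectChannel invokes this method directly
    // on the sending thread
    new CountDownLatch(1).await();
}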