Spring Integration message channel hangs even after timeout is exceeded - spring-integration

My integration context is as follows:
<int:channel id="fileInboundChannelAdapter"/>
<int-file:inbound-channel-adapter directory="${directory}"
        channel="fileInboundChannelAdapter" auto-startup="false">
    <int:poller fixed-rate="5000" max-messages-per-poll="1"/>
</int-file:inbound-channel-adapter>
And I am manually sending a message to this channel after some condition is met:
@Resource(name = "fileInboundChannelAdapter")
private MessageChannel messageChannel;
Inside some method:
Message<File> fileMessage = MessageBuilder.withPayload(fileObject).build();
boolean success = messageChannel.send(fileMessage, 1000 * 60);
At this line, messageChannel.send() does not return even after the timeout is exceeded; no other requests are served, and the server has to be restarted.

You need to share the subscriber for that fileInboundChannelAdapter channel. Having that, we can try to understand what's going on. Also take a look at the logs to investigate the issue from your side.
The timeout param (1000 * 60 in your case) has no effect for a DirectChannel:
protected boolean doSend(Message<?> message, long timeout) {
    try {
        return this.getRequiredDispatcher().dispatch(message);
    }
    catch (MessageDispatchingException e) {
        String description = e.getMessage() + " for channel '" + this.getFullChannelName() + "'.";
        throw new MessageDeliveryException(message, description, e);
    }
}
So, it looks like your subscriber just blocks the calling thread somehow...
Need to see its code.
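For contrast, here is a minimal standalone sketch of my own (not the asker's configuration) of a channel type for which the send timeout does apply: a bounded QueueChannel gives up and returns false instead of hanging.

import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.support.MessageBuilder;

public class QueueChannelTimeoutDemo {
    public static void main(String[] args) {
        // Bounded queue: capacity 1, so the second send cannot complete.
        QueueChannel channel = new QueueChannel(1);
        channel.send(MessageBuilder.withPayload("first").build());
        // Honors the timeout: returns false after ~1 second on the full queue,
        // unlike DirectChannel, which dispatches on the calling thread and
        // simply ignores the timeout argument.
        boolean sent = channel.send(MessageBuilder.withPayload("second").build(), 1000);
        System.out.println("sent = " + sent); // false
    }
}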

Related

Run Spring Integration flow concurrently for each Ftp file

I have an integration flow configured using the Java DSL which pulls files from an FTP server using Ftp.inboundChannelAdapter, then transforms each file to a JobRequest; then I have a .handle() method which triggers my batch job. Everything is working as required, but the process runs sequentially for each file inside the FTP folder.
I added currentThreadName in my transformer endpoint and it was printing the same thread name for each file.
Here is what I have tried till now:
1. Task executor bean
@Bean
public TaskExecutor taskExecutor() {
    return new SimpleAsyncTaskExecutor("Integration");
}
2. Integration flow
@Bean
public IntegrationFlow integrationFlow(JobLaunchingGateway jobLaunchingGateway) throws IOException {
    return IntegrationFlows.from(Ftp.inboundAdapter(myFtpSessionFactory)
                    .remoteDirectory("/bar")
                    .localDirectory(localDir.getFile()),
                c -> c.poller(Pollers.fixedRate(1000).taskExecutor(taskExecutor()).maxMessagesPerPoll(20)))
            .transform(fileMessageToJobRequest(importUserJob(step1())))
            .handle(jobLaunchingGateway)
            .log(LoggingHandler.Level.WARN, "headers.id + ': ' + payload")
            .route(JobExecution.class, j -> j.getStatus().isUnsuccessful() ? "jobFailedChannel" : "jobSuccessfulChannel")
            .get();
}
3. I also read in another SO thread that I need an ExecutorChannel, so I configured one, but I don't know how to inject this channel into my Ftp.inboundAdapter; from the logs I see that the channel is always integrationFlow.channel#0, which I guess is a DirectChannel:
@Bean
public MessageChannel inputChannel() {
    return new ExecutorChannel(taskExecutor());
}
I don't know what I'm missing here; I might not have properly understood the Spring messaging system, as I'm very new to Spring and Spring Integration.
Any help is appreciated.
Thanks
You can simply inject the ExecutorChannel into the flow and the framework will apply it to the SourcePollingChannelAdapter. So, having that inputChannel defined as a bean, you just add:
.channel(inputChannel())
before your .transform(fileMessageToJobRequest(importUserJob(step1()))).
See more in docs: https://docs.spring.io/spring-integration/docs/current/reference/html/dsl.html#java-dsl-channels
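A sketch of the resulting flow, reusing the beans from the question as-is (myFtpSessionFactory, fileMessageToJobRequest(), and so on are assumed to exist as posted):

@Bean
public IntegrationFlow integrationFlow(JobLaunchingGateway jobLaunchingGateway) throws IOException {
    return IntegrationFlows.from(Ftp.inboundAdapter(myFtpSessionFactory)
                    .remoteDirectory("/bar")
                    .localDirectory(localDir.getFile()),
                c -> c.poller(Pollers.fixedRate(1000).maxMessagesPerPoll(20)))
            // Hand-off point: everything below runs on the ExecutorChannel's threads.
            .channel(inputChannel())
            .transform(fileMessageToJobRequest(importUserJob(step1())))
            .handle(jobLaunchingGateway)
            .log(LoggingHandler.Level.WARN, "headers.id + ': ' + payload")
            .route(JobExecution.class, j -> j.getStatus().isUnsuccessful() ? "jobFailedChannel" : "jobSuccessfulChannel")
            .get();
}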
On the other hand, to process your files in parallel according to your .taskExecutor(taskExecutor()) configuration, you just need to set .maxMessagesPerPoll() to 1 instead of 20. The logic in the AbstractPollingEndpoint is like this:
this.taskExecutor.execute(() -> {
    int count = 0;
    while (this.initialized && (this.maxMessagesPerPoll <= 0 || count < this.maxMessagesPerPoll)) {
        if (pollForMessage() == null) {
            break;
        }
        count++;
    }
So, tasks do run in parallel, but a single polling task keeps processing messages sequentially until it reaches that maxMessagesPerPoll, which is 20 in your current case. There is also some explanation in the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/messaging-endpoints.html#endpoint-pollingconsumer
The maxMessagesPerPoll property specifies the maximum number of messages to receive within a given poll operation. This means that the poller continues calling receive() without waiting, until either null is returned or the maximum value is reached. For example, if a poller has a ten-second interval trigger and a maxMessagesPerPoll setting of 25, and it is polling a channel that has 100 messages in its queue, all 100 messages can be retrieved within 40 seconds. It grabs 25, waits ten seconds, grabs the next 25, and so on.
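Concretely (a sketch of my own, assuming you keep the executor on the poller rather than switching to the ExecutorChannel), the poller line from the question would become:

c -> c.poller(Pollers.fixedRate(1000)
        .taskExecutor(taskExecutor())
        // One message per polling task, so tasks can overlap on the executor
        // instead of one task draining 20 files sequentially.
        .maxMessagesPerPoll(1))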

How to handle errors after message has been handed off to QueueChannel?

I have 10 RabbitMQ queues, called event.q.0, event.q.1, <...>, event.q.9. Each of these queues receives messages routed from the event.consistent-hash exchange. I want to build a fault-tolerant solution that will consume messages for a specific event in a sequential manner, since ordering is important. For this I have set up a flow that listens to those queues and routes messages based on event ID to a specific worker flow. Worker flows work based on queue channels, so that should guarantee FIFO order for an event with a specific ID. I have come up with the following setup:
@Bean
public IntegrationFlow eventConsumerFlow(RabbitTemplate rabbitTemplate, Advice retryAdvice) {
    return IntegrationFlows
            .from(Amqp.inboundAdapter(new SimpleMessageListenerContainer(rabbitTemplate.getConnectionFactory()))
                    .configureContainer(c -> c
                            .adviceChain(retryAdvice)
                            .addQueueNames(queueNames)
                            .prefetchCount(amqpProperties.getPreMatch().getDefinition().getQueues().getEvent().getPrefetch()))
                    .messageConverter(rabbitTemplate.getMessageConverter()))
            .<Event, String>route(e -> String.format("worker-input-%d", e.getId() % numberOfWorkers))
            .get();
}
private Advice deadLetterAdvice() {
    return RetryInterceptorBuilder
            .stateless()
            .maxAttempts(3)
            .recoverer(recoverer())
            .backOffPolicy(backOffPolicy())
            .build();
}

private ExponentialBackOffPolicy backOffPolicy() {
    ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setInitialInterval(1000);
    backOffPolicy.setMultiplier(3.0);
    backOffPolicy.setMaxInterval(15000);
    return backOffPolicy;
}

private MessageRecoverer recoverer() {
    return new RepublishMessageRecoverer(rabbitTemplate, "error.exchange.dlx");
}
@PostConstruct
public void init() {
    for (int i = 0; i < numberOfWorkers; i++) {
        flowContext.registration(workerFlow(MessageChannels.queue(String.format("worker-input-%d", i), queueCapacity).get()))
                .autoStartup(false)
                .id(String.format("worker-flow-%d", i))
                .register();
    }
}
private IntegrationFlow workerFlow(QueueChannel channel) {
    return IntegrationFlows
            .from(channel)
            .<Object, Class<?>>route(Object::getClass, m -> m
                    .resolutionRequired(true)
                    .defaultOutputToParentFlow()
                    .subFlowMapping(EventOne.class, s -> s.handle(oneHandler))
                    .subFlowMapping(EventTwo.class, s -> s.handle(anotherHandler)))
            .get();
}
Now, when, let's say, an error happens in eventConsumerFlow, the retry mechanism works as expected, but when an error happens in workerFlow, the retry doesn't work anymore and the message doesn't get sent to the dead letter exchange. I assume this is because once the message is handed off to the QueueChannel, it gets acknowledged automatically. How can I make the retry mechanism work in workerFlow as well, so that if an exception happens there, it can retry a couple of times and send the message to the DLX when the retries are exhausted?
If you want resiliency, you shouldn't be using queue channels at all; the messages will be acknowledged immediately after they are put in the in-memory queue, and if the server crashes, those messages will be lost.
You should configure a separate adapter for each queue if you want no message loss.
That said, to answer the general question, any errors on downstream flows (including after a queue channel) will be sent to the errorChannel defined on the inbound adapter.
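To illustrate that last point, here is a minimal sketch of my own (the channel name and handler body are placeholders, not from the original answer): set an error channel on the inbound adapter and consume it in a dedicated flow.

@Bean
public IntegrationFlow errorHandlingFlow() {
    // "amqpErrors" is a placeholder name, assumed to be set via
    // .errorChannel("amqpErrors") on the Amqp.inboundAdapter(...) above.
    return IntegrationFlows
            .from("amqpErrors")
            .handle(message -> {
                // Downstream failures arrive as ErrorMessages whose payload is a
                // MessagingException carrying the original failed message.
                MessagingException ex = (MessagingException) message.getPayload();
                // inspect ex.getFailedMessage(), republish to the DLX, etc.
            })
            .get();
}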

Spring Integration: How to increase processing of incoming messages

I am working on a Spring application which will receive around 500 XML messages per minute. The XML configuration below only allows processing of around 60 messages per minute; the rest of the messages are stored in the queue (persisted in the DB) and retrieved at the rate of 60 messages per minute.
I have tried reading documentation from multiple sources but am still not clear on the role of the poller combined with the task executor. My understanding of why 60 messages per minute are processed currently is that the fixed-delay value in the poller configuration is set to 10 seconds (so it polls 6 times per minute) and max-messages-per-poll is set to 10, so 6 x 10 = 60 messages are processed per minute.
Please advise if my understanding is not correct, and help me modify the XML configuration to achieve a higher processing rate for incoming messages.
The role of the task executor is unclear too: does pool-size="50" mean that 50 threads will run in parallel to process the messages polled by the poller?
What I want, in its entirety, is:
JdbcChannelMessageStore is used to store the incoming XML messages in the database (the INT_CHANNEL_MESSAGE table). This is required so that in case of a server restart the messages are still stored in the table and not lost.
Incoming messages should be processed in parallel, but in a controlled/limited amount. Based on the capacity of the system processing these messages, I would like to limit how many messages the system processes in parallel.
As this configuration will be used on multiple servers in a cluster, any server can pick up any message, so it should not cause a conflict where the same message is processed by two servers. Hopefully that is handled by Spring Integration.
Apologies if this has been answered elsewhere, but after reading numerous posts I still don't understand how this works.
Thanks in advance.
<!-- Message Store configuration start -->
<!-- JDBC message store configuration -->
<bean id="store" class="org.springframework.integration.jdbc.store.JdbcChannelMessageStore">
<property name="dataSource" ref="dataSource"/>
<property name="channelMessageStoreQueryProvider" ref="queryProvider"/>
<property name="region" value="TX_TIMEOUT"/>
<property name="usingIdCache" value="true"/>
</bean>
<bean id="queryProvider" class="org.springframework.integration.jdbc.store.channel.MySqlChannelMessageStoreQueryProvider" />
<int:transaction-synchronization-factory id="syncFactory">
<int:after-commit expression="#store.removeFromIdCache(headers.id.toString())" />
<int:after-rollback expression="#store.removeFromIdCache(headers.id.toString())" />
</int:transaction-synchronization-factory>
<task:executor id="pool" pool-size="50" queue-capacity="100" rejection-policy="CALLER_RUNS" />
<int:poller id="messageStorePoller" fixed-delay="10"
receive-timeout="500" max-messages-per-poll="10" task-executor="pool"
default="true" time-unit="SECONDS">
<int:transactional propagation="REQUIRED"
synchronization-factory="syncFactory" isolation="READ_COMMITTED"
transaction-manager="transactionManager" />
</int:poller>
<bean id="transactionManager"
class="org.springframework.batch.support.transaction.ResourcelessTransactionManager" />
<!-- 1) Store the message in persistent message store -->
<int:channel id="incomingXmlProcessingChannel">
<int:queue message-store="store" />
</int:channel>
<!-- 2) Check in, Enrich the headers, Check out -->
<!-- (This is the entry point for WebService requests) -->
<int:chain input-channel="incomingXmlProcessingChannel" output-channel="incomingXmlSplitterChannel">
<int:claim-check-in message-store="simpleMessageStore" />
<int:header-enricher >
<int:header name="CLAIM_CHECK_ID" expression="payload"/>
<int:header name="MESSAGE_ID" expression="headers.id" />
<int:header name="IMPORT_ID" value="XML_IMPORT"/>
</int:header-enricher>
<int:claim-check-out message-store="simpleMessageStore" />
</int:chain>
Added after response from Artem:
Thanks Artem. So, on every poll, which happens after a fixed delay of 10 seconds (as per the config above), the task executor will check the task queue and, if possible (and required), start a new task? And each polling task (thread) will receive "10" messages, as per the maxMessagesPerPoll config, from the message store (queue).
In order to achieve a higher processing rate for incoming messages, should I reduce the fixedDelay on the poller so that more threads can be started by the task executor? If I set the fixedDelay to 2 seconds, a new thread will be started to process 10 messages, and roughly 30 such threads will be started per minute, processing roughly 300 incoming messages per minute.
Sorry for asking too much in one question - just wanted to explain the complete problem.
The main logic is behind this class:
private final class Poller implements Runnable {

    private final Callable<Boolean> pollingTask;

    Poller(Callable<Boolean> pollingTask) {
        this.pollingTask = pollingTask;
    }

    @Override
    public void run() {
        AbstractPollingEndpoint.this.taskExecutor.execute(() -> {
            int count = 0;
            while (AbstractPollingEndpoint.this.initialized
                    && (AbstractPollingEndpoint.this.maxMessagesPerPoll <= 0
                        || count < AbstractPollingEndpoint.this.maxMessagesPerPoll)) {
                try {
                    if (!Poller.this.pollingTask.call()) {
                        break;
                    }
                    count++;
                }
                catch (Exception e) {
                    if (e instanceof MessagingException) {
                        throw (MessagingException) e;
                    }
                    else {
                        Message<?> failedMessage = null;
                        if (AbstractPollingEndpoint.this.transactionSynchronizationFactory != null) {
                            Object resource = TransactionSynchronizationManager.getResource(getResourceToBind());
                            if (resource instanceof IntegrationResourceHolder) {
                                failedMessage = ((IntegrationResourceHolder) resource).getMessage();
                            }
                        }
                        throw new MessagingException(failedMessage, e);
                    }
                }
                finally {
                    if (AbstractPollingEndpoint.this.transactionSynchronizationFactory != null) {
                        Object resource = getResourceToBind();
                        if (TransactionSynchronizationManager.hasResource(resource)) {
                            TransactionSynchronizationManager.unbindResource(resource);
                        }
                    }
                }
            }
        });
    }
}
As you can see, the taskExecutor is responsible for spinning the pollingTask up to maxMessagesPerPoll times in one thread. The other threads from the pool only get involved when the current polling task runs longer than the next scheduled poll. But all the messages in one poll are processed in the same thread, not in parallel.
That is how it works. Since you are asking too much in one SO question, I hope this information is enough to figure out your next steps.
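As a rough illustration of the direction this points in (my own sketch, not part of the original answer): dropping maxMessagesPerPoll to 1 makes each polling task handle a single message, so concurrency comes from overlapping polls on the "pool" executor, bounded by its pool size. A Java DSL equivalent of such a poller might look like this:

@Bean
public PollerMetadata messageStorePoller(TaskExecutor pool) {
    return Pollers.fixedDelay(2000)      // shorter delay than the original 10 seconds
            .maxMessagesPerPoll(1)       // one message per polling task
            .taskExecutor(pool)          // overlapping tasks run on the 50-thread pool
            .get();
}

The same tuning in the XML above would be max-messages-per-poll="1" with a smaller fixed-delay.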

transaction with jms:inbound-channel-adapter

I want to use jms:inbound-channel-adapter to read a JMS message and apply some processing; if the processing throws an exception, I want the broker to keep the message.
<int-jms:inbound-channel-adapter
        id="jmsAdapter"
        session-transacted="true"
        destination="destination"
        connection-factory="cachedConnectionFactory"
        channel="inboundChannel"
        auto-startup="false">
    <int:poller fixed-delay="100"/>
</int-jms:inbound-channel-adapter>
I looked at the code of JmsTemplate.doReceive:
Message message = doReceive(consumer, timeout);
if (session.getTransacted()) {
    // Commit necessary - but avoid commit call within a JTA transaction.
    if (isSessionLocallyTransacted(session)) {
        // Transacted session created by this template -> commit.
        JmsUtils.commitIfNecessary(session);
    }
}
else if (isClientAcknowledge(session)) {
    // Manually acknowledge message, if any.
    if (message != null) {
        message.acknowledge();
    }
}
So the message is acknowledged (committed) directly after it is read.
How can I do this?
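For what it's worth, a commonly documented alternative (my own sketch, not an answer from this thread) is the message-driven adapter: there the listener container's transacted session spans the downstream flow, so an exception thrown during processing rolls the message back to the broker.

@Bean
public IntegrationFlow transactedJmsFlow(ConnectionFactory connectionFactory) {
    return IntegrationFlows
            .from(Jms.messageDrivenChannelAdapter(connectionFactory)
                    .destination("destination")
                    .configureListenerContainer(c -> c.sessionTransacted(true)))
            .handle(message -> {
                // Placeholder processing: a RuntimeException thrown here makes
                // the container roll back, and the broker redelivers the message.
                System.out.println("processing " + message.getPayload());
            })
            .get();
}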

ExecutorService: calling future.get(long, TimeUnit) does not cause the queued Callable to run

I am trying to implement an asynchronous DNS resolver by calling all the routines that perform a DNS query in a separate thread using a ThreadPoolExecutor.
I define a Callable object like this:
public class SocketAddressCreator extends DnsCallable<String, InetSocketAddress> {

    private static final Logger log = Logger.getLogger(SocketAddressCreator.class);

    private int port;

    public SocketAddressCreator(String host, int port) {
        super(host);
        this.port = port;
    }

    public InetSocketAddress call() throws Exception {
        log.info("Starting to resolve. Host is: " + target + " .Port is: " + port);
        long start = System.currentTimeMillis();
        InetSocketAddress addr = new InetSocketAddress(target, port);
        log.info("Time waiting: " + (System.currentTimeMillis() - start));
        return addr;
    }
}
Basically the callable object will attempt to resolve the hostname into an InetAddress.
Then I define an ExecutorService:
executor = new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>(), new ThreadFactory() {
            public Thread newThread(Runnable r) {
                Thread t = Executors.defaultThreadFactory().newThread(r);
                t.setName("DnsResolver");
                t.setDaemon(true);
                return t;
            }
        });
And I submit the Callable task:
..............
Future<V> f = executor.submit(task);
try {
    log.info("Query will be made");
    log.info("Queue size: " + executor.getQueue().size());
    result = f.get(timeout, TimeUnit.MILLISECONDS);
    log.info("Queue size: " + executor.getQueue().size());
    log.info("Query is finished");
} catch (TimeoutException e) {
    boolean isCancelled = f.cancel(true);
    log.info("Task was cancelled: " + isCancelled);
    log.info("Queue size: " + executor.getQueue().size());
    ..........
}
..............
Then I watched the logs produced by my program, and they are quite strange.
This is what I see when there is a timeout in resolving the DNS:
DnsResolver : Queue size: 1
DnsResolver : Task was cancelled: true
DnsResolver : Queue size: 1
So after submitting my Callable object but before calling future.get(long, TimeUnit), the queue size is 1. That's OK for me.
However, after I catch the TimeoutException and cancel the Future, the queue size is the same (one). In my program there is only one thread which submits the Callable tasks to the ExecutorService, and the same thread also retrieves the results.
More than that, there is an even stranger issue here: the Callable.call() method is not called, because if it were called I would get this log message:
log.info("Starting to resolve. Host is: " + target + " .Port is: " + port);
So how is it possible for the future.get(long, TimeUnit) method to throw a TimeoutException when the Callable is never called?
The following calls that make DNS queries:
1/ new InetSocketAddress(String, int) - name lookup
2/ InetAddress.getByName(String) - name lookup
3/ InetAddress.getHostName() - reverse name lookup
are NON-INTERRUPTIBLE blocking calls!
As I said before, I use a thread pool composed of a single thread. I did not realize that it is necessary to have multiple threads.
So if I catch the TimeoutException from the future.get(long, TimeUnit) call, and after that I try to cancel the task in progress by calling future.cancel(boolean)... I do not stop the single running thread from what it is doing.
I tried to simulate a long-running DNS query, so I modified resolv.conf like this:
nameserver X.X.X.X // this address does not have a valid DNS server!
options timeout:30
I want the DNS client to block for some time before returning a negative/positive response.
I have done load testing on my application and... it's a total disaster! That is because I have a single thread that resolves these DNS queries, and calling future.get(long, TimeUnit) does not make it stop!
Of course, I can increase the thread pool size. I have done that and it fixes my issue.
But... it seems silly to have more than one thread in my pool to resolve these DNS queries, because there is only one thread that submits the Callables and the same thread also retrieves the results.
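For completeness, here is a minimal self-contained sketch (my own illustration of the effect described above, with a busy loop standing in for the non-interruptible DNS call): a single-thread pool is stuck on a task that ignores interrupts; get(timeout) throws, cancel(true) reports success, but the only worker thread keeps running and queued tasks still wait, just like in the logs above.

import java.util.concurrent.*;

public class StuckWorkerDemo {

    public static void main(String[] args) throws Exception {
        // Same shape as the question's executor: exactly one worker thread.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>());
        // Stands in for new InetSocketAddress(host, port): blocks and ignores interrupts.
        Callable<String> nonInterruptible = () -> {
            long end = System.currentTimeMillis() + 5000;
            while (System.currentTimeMillis() < end) {
                // busy loop, deliberately not checking Thread.interrupted()
            }
            return "resolved";
        };
        Future<String> first = pool.submit(nonInterruptible);
        Future<String> second = pool.submit(() -> "queued behind the stuck task");
        try {
            second.get(1, TimeUnit.SECONDS); // times out: 'second' never started
        } catch (TimeoutException e) {
            // cancel() succeeds, but the worker is still busy with 'first';
            // the cancelled task also stays in the queue until the worker drains it.
            System.out.println("cancelled: " + second.cancel(true)); // true
            System.out.println("queue size: " + pool.getQueue().size()); // 1
        }
        pool.shutdownNow(); // interrupt is ignored; the busy loop runs to completion
    }
}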
