I am using a JmsOutboundGateway with a MessageChannelPartitionHandler for a partitioned batch job. I would like to be able to interrupt this code in the handle method:
Message<Collection<StepExecution>> message = messagingGateway.receive(replyChannel);
From another thread I tried calling interrupt() on the thread blocked in receive(). What is the best way to add this functionality?
CLARIFICATION:
The interruption does work. However, each partition thread that is still waiting for a response is left with the stack trace below. These threads remain blocked, which removes them from the thread pool and makes them unavailable for subsequent partitioned jobs.
Thread: springbatch.partitioned.jms.taskExecutor-1
priority: 5, daemon: false, threadId: 823, threadState: TIMED_WAITING

- waiting on <0xf483c36> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.springframework.integration.jms.JmsOutboundGateway.obtainReplyFromContainer(JmsOutboundGateway.java:865)
org.springframework.integration.jms.JmsOutboundGateway.doSendAndReceiveAsync(JmsOutboundGateway.java:809)
org.springframework.integration.jms.JmsOutboundGateway.sendAndReceiveWithContainer(JmsOutboundGateway.java:649)
org.springframework.integration.jms.JmsOutboundGateway.handleRequestMessage(JmsOutboundGateway.java:580)
org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:134)
org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:73)
org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:115)
org.springframework.integration.dispatcher.UnicastingDispatcher.access$000(UnicastingDispatcher.java:52)
org.springframework.integration.dispatcher.UnicastingDispatcher$1.run(UnicastingDispatcher.java:97)
org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:52)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:744)
How can I access these threads so I can interrupt them as well?
Interrupting the thread will work; however, the interrupt is caught (and the interrupt bit re-set in the QueueChannel).
After this, the received message is null so the partition handler will throw a MessageTimeoutException instead.
There is currently no way to determine whether the timeout is because of a real timeout or the interrupt.
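A minimal sketch of that behavior in plain java.util.concurrent (this mirrors how the QueueChannel swallows InterruptedException, restores the interrupt flag, and returns null; it is not the Spring source itself). The only way to distinguish an interrupt from a genuine timeout is to check the thread's interrupt status after receive() returns null:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class InterruptVsTimeout {

    // Sketch of a QueueChannel-style receive: swallow the interrupt,
    // restore the flag, and return null -- same as on a real timeout.
    static <T> T receive(LinkedBlockingQueue<T> queue, long timeoutMs) {
        try {
            return queue.poll(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // re-set the interrupt bit
            return null;
        }
    }

    public static void main(String[] args) throws Exception {
        LinkedBlockingQueue<String> replies = new LinkedBlockingQueue<>();
        Thread worker = new Thread(() -> {
            String reply = receive(replies, 60_000);
            // reply is null either way; the interrupt flag is the only hint
            System.out.println("reply=" + reply
                    + " interrupted=" + Thread.currentThread().isInterrupted());
        });
        worker.start();
        Thread.sleep(200);
        worker.interrupt();   // wakes the poll() immediately
        worker.join();
    }
}
```

After a null receive, `Thread.currentThread().isInterrupted()` being true tells the caller it was an interrupt rather than a timeout, so the partition handler could rethrow instead of raising a MessageTimeoutException.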
Related
I am new to Spring Integration, and we have created an SI flow containing a splitter and aggregator, and also a recipient-list-router and aggregator.
Today, while reviewing the code, I got confused about how the aggregator cleans up its store if an exception occurs mid-flow.
I am worried about the scenario where an exception in the middle of the flow leaves a stale state object in the system.
I checked the Spring Integration docs but had no luck (https://docs.spring.io/spring-integration/docs/2.0.0.RC1/reference/html/aggregator.html). The only relevant topic I can see is "Managing State in an Aggregator: MessageGroupStore", but that covers the case where the application shuts down.
I also googled this and found https://dzone.com/articles/spring-integration-robust, but was not able to follow much of it. I will come back if I find a solution.
I am using the out-of-the-box splitter, recipient-list-router, and aggregator. I would expect these patterns to have a mechanism for handling this common scenario.
Can you please guide me? For example:
<int:recipient-list-router input-channel="inputChannel"
default-output-channel="nullChannel">
<int:recipient channel="aInputChannel" />
<int:recipient channel="bInputChannel" />
</int:recipient-list-router>
<int:service-activator ref="aHandler"
input-channel="aInputChannel" output-channel="aggregatorOutputChannel" />
<!-- we have exception in the bHandler -->
<int:service-activator ref="bHandler"
input-channel="bInputChannel" output-channel="aggregatorOutputChannel" />
<int:aggregator input-channel="aggregatorOutputChannel"
output-channel="outputChannel" />
OR
<int-file:splitter id="splitile"
charset="UTF-8" apply-sequence="true" iterator="false"
input-channel="inputChannel"
output-channel="bTransformerChannel" />
<!-- consider we have exception at 4th chunk -->
<int:service-activator ref="transform"
input-channel="bTransformerChannel" output-channel="aggregatorOutputChannel" />
<int:aggregator input-channel="aggregatorOutputChannel"
output-channel="outputChannel" />
Yes; the aggregator is a "passive" component by default - all actions are taken when a message arrives.
To time out stale groups you can use a reaper or, with more recent versions, a group-timeout.
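As a sketch of the group-timeout approach (attribute names are from the Spring Integration aggregator XML namespace, available in SI 4.0+; the channel names reuse the example above, and discardChannel is a hypothetical channel you would declare yourself):

```xml
<int:aggregator input-channel="aggregatorOutputChannel"
                output-channel="outputChannel"
                group-timeout="30000"
                send-partial-result-on-expiry="false"
                discard-channel="discardChannel" />
```

With this, a group that never completes (e.g. because bHandler threw) is expired after 30 seconds and its messages are discarded rather than left in the store; set send-partial-result-on-expiry="true" if you prefer to release the partial group downstream instead.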
I am using spring-rabbit-1.7.3.RELEASE.jar
I have defined a SimpleMessageListenerContainer in my xml with shutdownTimeout parameter.
<bean id="aContainer"
      class="org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer">
    <property name="connectionFactory" ref="rabbitConnectionFactory" />
    <property name="queueNames" value="aQueue" />
    <property name="adviceChain" ref="retryAdvice" />
    <property name="acknowledgeMode" value="AUTO" />
    <property name="shutdownTimeout" value="900000" />
</bean>
When my service shuts down and there are still messages in "aQueue", I expect that the shutdownTimeout would allow the messages to get processed. But this doesn't seem to happen.
On further investigation I found out that the await() method defined in SimpleMessageListenerContainer is always returning true.
this.cancellationLock.await(Long.valueOf(this.shutdownTimeout), TimeUnit.MILLISECONDS);
I would like to understand how the logic for await works, how it acquires lock and what additional configuration is required at my end to make the code work.
It waits for the in-flight consumers, i.e. those that are busy processing already-fetched but not-yet-acknowledged messages. Nobody is going to poll fresh messages from the queue during shutdown.
The ActiveObjectCounter awaits on all of its internal CountDownLatches being released, and that happens when we reach:
public void handleShutdownSignal(String consumerTag, ShutdownSignalException sig) {
So it may really be the case that all your consumers (private volatile int concurrentConsumers = 1; by default) are cancelled and released within that shutdownTimeout.
But again: nobody is going to poll new messages from the broker once the state is shutdown.
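A simplified sketch of that latch-based idea (this is not the actual ActiveObjectCounter source; the class and method names here are illustrative): each active consumer registers a CountDownLatch, and await() succeeds only if every latch is released within the timeout.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Illustrative stand-in for the ActiveObjectCounter pattern used during
// container shutdown: track each busy consumer with a latch, release the
// latch when the consumer finishes, and have shutdown await them all.
public class ActiveCounterSketch {

    private final Map<Object, CountDownLatch> locks = new ConcurrentHashMap<>();

    public void add(Object consumer) {
        locks.put(consumer, new CountDownLatch(1));
    }

    public void release(Object consumer) {
        CountDownLatch latch = locks.remove(consumer);
        if (latch != null) {
            latch.countDown();
        }
    }

    // Returns true only if every registered consumer was released
    // before the overall deadline (the shutdownTimeout) elapsed.
    public boolean await(long timeout, TimeUnit unit) throws InterruptedException {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        for (CountDownLatch latch : locks.values()) {
            long remaining = deadline - System.nanoTime();
            if (remaining <= 0 || !latch.await(remaining, TimeUnit.NANOSECONDS)) {
                return false;
            }
        }
        return true;
    }
}
```

If a consumer never calls release() (for example, the broker connection hangs in a delivery), await() burns the whole timeout and returns false, which matches seeing the container block for the full shutdownTimeout.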
How can I initiate a build process with Apache Ant that is executed in parallel on multiple threads? That is, I am searching for something that corresponds to '-j' in GNU Make.
You can execute an Ant build target that contains the parallel task (see the Ant docs).
Assuming you have a macro called 'dbpurge' that takes a 'file' argument, the following example runs about 40 calls to dbpurge with a thread count (the jobs count in GNU Make) of 4.
<parallel threadCount="4">
<dbpurge file="db/one" />
<dbpurge file="db/two" />
<dbpurge file="db/three" />
<dbpurge file="db/four" />
<dbpurge file="db/five" />
<dbpurge file="db/six" />
<dbpurge file="db/seven" />
<dbpurge file="db/eight" />
<!-- repeated about 40 times -->
</parallel>
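For comparison, the same bounded-parallelism idea in plain Java (ParallelPurge and the stubbed task body are hypothetical stand-ins for the dbpurge macro; the fixed pool of 4 threads plays the role of threadCount="4"):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelPurge {

    // Submits one task per database; at most 4 run concurrently.
    // shutdown() stops new submissions, awaitTermination() waits for
    // all queued tasks to finish -- like </parallel> joining its children.
    public static int purgeAll(List<String> databases) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger done = new AtomicInteger();
        for (String db : databases) {
            pool.submit(() -> {
                // ... the actual purge work for 'db' would go here ...
                done.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
        return done.get();
    }
}
```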
I am using Spring Integration in my project. I have a stored procedure that inserts a row and doesn't return any result. If I use int-jdbc:stored-proc-outbound-gateway, the flow simply terminates and won't continue to the next channel.
DEBUG [org.springframework.jdbc.core.JdbcTemplate.doInCallableStatement] CallableStatement.execute() returned 'false'
DEBUG [org.springframework.jdbc.core.JdbcTemplate.doInCallableStatement] CallableStatement.getUpdateCount() returned 0
DEBUG [org.springframework.jdbc.core.JdbcTemplate.extractReturnedResults] CallableStatement.getUpdateCount() returned -1
My requirement is to continue the flow even if the stored procedure doesn't return any result. What is the best way to handle this?
UPDATE
After Artem's response, I have configured stored procedure outbound channel adapter in the following manner:
<int:service-activator ref="msgHandler" method="buildRequestBasedDataSource" input-channel="PQPutUserBAInformation-SPCall2" output-channel="PQPutUserBAInformation-publishSubscribeChannel"/>
<!-- PQPutUserBAInformation Channel -->
<int:publish-subscribe-channel id="PQPutUserBAInformation-publishSubscribeChannel" />
<int-jdbc:stored-proc-outbound-channel-adapter
id="PQPutUserBAInformation-AWD-StoredProcedure2"
channel="PQPutUserBAInformation-publishSubscribeChannel"
data-source="routingDataSource"
stored-procedure-name="ZSPPQINSERTUSERIDBA"
ignore-column-meta-data="true"
use-payload-as-parameter-source = "false" >
<int-jdbc:sql-parameter-definition name="P_USERID" direction="IN" type="VARCHAR" />
<int-jdbc:parameter name="P_USERID" expression="#xpath(payload, '//CurrentUserID')" />
</int-jdbc:stored-proc-outbound-channel-adapter>
<!-- Service Activator to build the Message from the Stored Procedure ResultSet -->
<int:service-activator input-channel="PQPutUserBAInformation-publishSubscribeChannel" ref="msgHandler" method="buildMessageFromExtSysResponse" />
Consider using a stored-proc-outbound-channel-adapter instead.
To continue the flow after that adapter, use a publish-subscribe-channel as its input, and add one more subscriber to carry on downstream.
Another way to achieve the same behavior is a recipient-list-router.
I have a REST call to make inside a splitter, and then I aggregate. I planned to parallelize this REST call, so I introduced a task executor as per this link. Parallelism now works, but only sometimes; other times it does not. My guess is that the aggregator is not waiting for all the threads to finish, but I am not sure what exactly the problem is. Can you help me here?
<int:enricher input-channel="som" output-channel="inputChannel" />
<int:splitter input-channel="inputChannel" output-channel="opchannel" ref="customersplitter" />
<int:channel id="opchannel">
    <int:dispatcher task-executor="exec" />
</int:channel>
<task:executor id="exec" pool-size="4" queue-capacity="10" />
<int:enricher input-channel="opchannel" output-channel="aggregatorChan" />
<int:aggregator input-channel="aggregatorChan" />
For simplicity I didn't expand the enrichers, but the flow is the same.