JMS connections exhausted using WebSphere MQ - spring-integration

I have configured a CachingConnectionFactory that wraps an MQTopicConnectionFactory and an MQQueueConnectionFactory, with the cache size set to 10 for each.
These are then used in several jms:outbound-channel-adapter or jms:message-driven-channel-adapter elements as part of the various Spring Integration workflows in my application.
Once in a while the connection count on the MQ channel reaches the maximum allowed (about 1000), at which point the process stops functioning. This is a serious problem for a production application.
Bringing the application down does not reduce the connection count, so it looks like orphaned connections on the MQ side? I am not sure if I am missing anything in my Spring JMS / SI configuration that could resolve this issue; any help would be highly appreciated.
I would also like to log connection open and close events from the application, but I don't see a way to do that.
<bean id="mqQcf" class="com.ibm.mq.jms.MQQueueConnectionFactory">
//all that it needs host/port/ queue manager /channel
</bean>
<bean id="qcf" class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref=" mqQcf "/>
<property name="sessionCacheSize" value="10"/>
</bean>
<bean id="mqTcf" class="com.ibm.mq.jms.MQTopicConnectionFactory">
//all that it needs host/port/ queue manager /channel
</bean>
<bean id="tcf" class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref=" mqTcf "/>
<property name="sessionCacheSize" value="10"/>
</bean>
//Qcf and tcf are than used in spring integration configuration as required

You really need to show your configuration, but the Spring CachingConnectionFactory only creates a single connection that is shared for all sessions. Turning on INFO logging for the CCF category emits this log entry when a new connection is created:
if (logger.isInfoEnabled()) {
    logger.info("Established shared JMS Connection: " + this.target);
}
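To log connection creation from the application, as the question asks, raising the level for that category is enough. A minimal sketch, assuming log4j 1.x is the logging backend (the logger name is an assumption derived from the package the factory lives in):
<!-- Hypothetical log4j.xml fragment: surfaces the
     "Established shared JMS Connection" message quoted above -->
<logger name="org.springframework.jms.connection">
    <level value="INFO"/>
</logger>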
EDIT:
There's nothing in your config that stands out. As I said, each CCF will have at most 1 connection open at a time.
One possibility, if you have idle times, is that the network (a switch or firewall) might be silently dropping connections without telling the client or server. The next time the client tries to use its connection it will fail and create a new one but the server may never find out that the old one is dead.
Typically, for such situations, enabling heartbeats or keepalives would keep the connection active (or at least allow the server to know it's dead).

I was debugging a similar issue in my application, with a growing open output count in MQ even though only one connection is opened by the connection factory.
The open output count in MQ Explorer is the number of connection handles created by the IBM MQ classes. Per the IBM documentation, a Session object encapsulates an IBM MQ connection handle, which therefore defines the transactional scope of the session.
Since the session cache size was 10 in my application, 10 IBM MQ connection handles were created (one for each session) and stayed open for days, with the handle state inactive.
More info can be found at:
https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.dev.doc/q031960_.htm
As Gary Russell mentioned, Spring doesn't provide a way to configure timeouts for these idle connections. IBM provides built-in properties on MQConnectionFactory that can be used to set up reconnect timeouts.
More info can be found at:
https://www.ibm.com/developerworks/community/blogs/messaging/entry/simplify_your_wmq_jms_client_with_automatic_client_reconnection19?lang=en
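As a rough sketch of what that looks like in the factory definition (the property names are assumptions derived from the IBM setters setClientReconnectOptions and setClientReconnectTimeout; requires an MQ 7.x+ client and the Spring util namespace):
<bean id="mqQcf" class="com.ibm.mq.jms.MQQueueConnectionFactory">
    <!-- host / port / queue manager / channel as before -->
    <property name="clientReconnectOptions">
        <!-- enable automatic client reconnection -->
        <util:constant static-field="com.ibm.msg.client.wmq.WMQConstants.WMQ_CLIENT_RECONNECT"/>
    </property>
    <!-- give up reconnecting after 10 minutes (value is in seconds) -->
    <property name="clientReconnectTimeout" value="600"/>
</bean>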
Reconnect-on-exception is true by default for the CCF, so care should be taken if IBM throws an exception after the timeout interval. I am not sure whether there is a maximum number of reconnect attempts before the CCF throws an exception.

Related

Messages "in delivery" status not released on ActiveMQ Artemis broker

I have set up two ActiveMQ Artemis brokers (version 2.17) in a master-slave configuration with a shared file system for HA. During testing with heavy traffic I would stop the master broker and see that the slave takes over and starts forwarding messages to consumers. After a while
I can see that a number of messages are stuck in queues. The "stuck" messages are reported as "in delivery" in the Artemis UI when queried.
My issue is that even when I restart the master broker these messages are not delivered to the consumer and remain stuck, even though more
messages keep arriving on the same queue and the queue has consumers. My assumption was that it had to do with previous connections
set up by consumers still remaining active because they were not acknowledged.
So I tried to set <connection-ttl-override> on the broker, and on the client connection string
((tcp://host1:61616,tcp://host2:61616)?ha=true&connectionTtl=60000&reconnectAttempts=-1), but that did not seem to have any effect,
since the connection was not closed and the messages were not released.
For consuming messages I am using the Artemis JMS Spring client with a CachingConnectionFactory, but I also tried a JmsPoolConnectionFactory
to no avail.
<bean id="connectionFactory"
class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory">
<bean class="org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory">
<property name="user" value="${spring.artemis.user}" />
<property name="password" value="${spring.artemis.password}" />
<property name="brokerURL" value="${spring.artemis.brokerUrl}" />
</bean>
</property>
<property name="cacheConsumers" value="false"/>
</bean>
<int-jms:message-driven-channel-adapter
connection-factory="connectionFactory"
destination="myQueue"
message-converter="messageConverter"
channel="inputChannel"
concurrent-consumers="${processing.poolsize}"
max-concurrent-consumers="${max.processing.poolsize}"
error-channel="errorChannel"
acknowledge="transacted"
/>
The only remedy for this problem seems to be restarting the consumer app, which unblocks the messages, but that is not a desirable option.
How can I resolve this issue? Is there a way to release the messages without manual intervention or restarting the consumer app?
Using variable concurrency with the CachingConnectionFactory could be the issue.
When an "idle" consumer is returned to the cache, the broker doesn't know the consumer is no longer active and still sends messages to it.
The caching factory is really only needed on the producer side (JmsTemplate) to avoid creating a connection and session for each send.
It's best not to use the CCF at all with variable concurrency; either configure it to not cache consumers, or use fixed concurrency. A sketch of the split is below.
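A minimal sketch of that split, reusing the question's property placeholders (the bean names here are made up for illustration): the raw factory feeds the message-driven adapter, and a CCF is kept only for any JmsTemplate producers.
<bean id="rawConnectionFactory" class="org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory">
    <property name="user" value="${spring.artemis.user}" />
    <property name="password" value="${spring.artemis.password}" />
    <property name="brokerURL" value="${spring.artemis.brokerUrl}" />
</bean>
<!-- caching only where it pays off: the producer side -->
<bean id="producerConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
    <property name="targetConnectionFactory" ref="rawConnectionFactory"/>
</bean>
<!-- the listener container creates and destroys consumers as concurrency scales -->
<int-jms:message-driven-channel-adapter
    connection-factory="rawConnectionFactory"
    destination="myQueue"
    channel="inputChannel"
    concurrent-consumers="${processing.poolsize}"
    max-concurrent-consumers="${max.processing.poolsize}"
    acknowledge="transacted"
/>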

Queue job instances in Spring Batch

We have a job running in Spring Batch each weekday, triggered from another system. Sometimes there are several instances of the job to run on the same day, each one triggered from the other system.
Every job runs for about an hour, and if there are several job instances to run we experience some problems with the data.
We would like to optimize this step as follows: if no job instance is running, start a new one; if a job instance is already running, put the new one in a queue.
Each job instance must be COMPLETED before the next one is triggered. If one fails, the next one must wait.
The job parameters are an incrementer and a timestamp.
I've Googled a bit but can't find anything useful.
So I wonder if this is doable: can job instances be queued in Spring Batch?
If so, how do I do it? I have looked into Spring Integration and the job-launching-gateway, but I don't really see how to implement it; I guess I don't understand how it works. I have tried to read about these things but I still don't understand.
Maybe I have the wrong versions of Spring Batch? Maybe I am missing something?
If you need more information from me please let me know!
Thank you!
We are using spring-core and spring-beans 3.2.5, spring-batch-integration 1.2.2, spring-integration-core 3.0.5, and spring-integration-file, -http, -sftp, -stream 2.0.3.
Well, if you are happy to have Spring Integration in your application alongside Spring Batch, it really would be a great idea to leverage the job-launching-gateway capability.
Right: you can place your tasks into a queue, essentially a QueueChannel.
The endpoint that polls that channel can be configured with max-messages-per-poll="1" to take only one task at a time from the internal queue.
When you have polled one message, send it into the job-launching-gateway and, at the same time, send a command to the Control Bus component to stop that polling endpoint, so it does not touch other messages in the queue until the current job finishes. When the job is COMPLETED, send one more control message to start the polling endpoint again. A sketch is shown below.
Be sure that you use all the Spring Integration modules in the same version: spring-integration-core 3.0.5, and spring-integration-file, -http, -sftp, -stream 3.0.5 as well.
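A rough sketch of that arrangement (the channel and endpoint names are made up for illustration, and the batch-int namespace prefix and nested poller are assumptions based on spring-batch-integration's XML support):
<int:channel id="jobRequests">
    <!-- launch requests wait here while a job is running -->
    <int:queue capacity="100"/>
</int:channel>
<batch-int:job-launching-gateway id="jobLaunchingEndpoint"
        request-channel="jobRequests" reply-channel="jobStatuses">
    <int:poller fixed-delay="1000" max-messages-per-poll="1"/>
</batch-int:job-launching-gateway>
<int:control-bus input-channel="controlChannel"/>
<!-- send '@jobLaunchingEndpoint.stop()' to controlChannel after polling a task,
     and '@jobLaunchingEndpoint.start()' when the job completes -->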
If you still require an answer: one could use a ThreadPoolTaskExecutor with a core size of 1, a max size of 1, and whatever queue size you desire.
i.e.
<bean id="jobLauncherTaskExecutor"
class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
<property name="corePoolSize" value="1" />
<property name="maxPoolSize" value="1" />
<property name="queueCapacity" value="200" />
</bean>
and then pass that to the SimpleJobLauncher
i.e.
<bean id="jobLauncher" class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
<property name="taskExecutor" ref="jobLauncherTaskExecutor" />
</bean>

Spring Integration Mail Inbound Channel Adapter configured for POP3 access and using a poller configuration hangs after running for some time

<int:channel id="emailInputChannel"/>
<!-- Email Poller. Only one poller thread -->
<task:executor id="emailPollingExecutor" pool-size="1" />
<int-mail:inbound-channel-adapter id="pop3EmailAdapter" store-uri="pop3://${pop3.user}:${pop3.pwd}#${pop3.server.host}/Inbox"
channel="emailInputChannel" should-delete-messages="true" auto-startup="true" java-mail-properties="javaMailProperties">
<int:poller max-messages-per-poll="1" fixed-delay="${email.poller.delay}" task-executor="emailPollingExecutor"/>
</int-mail:inbound-channel-adapter>
<!-- Java Mail POP3 properties -->
<util:properties id="javaMailProperties">
<beans:prop key="mail.debug">true</beans:prop>
<beans:prop key="mail.pop3.port">${pop3.server.port}</beans:prop>
</util:properties>
This application polls for emails whose attachments contain the data to process. The email attachments arrive a few times a day, typically, and are relatively sporadic. Since the files contain data for bulk load, we resorted to this configuration with a single poller for the inbound POP3 mail adapter; having multiple pollers caused duplicate poller invocations to pull the same email while another poller was processing it.
With this configuration, however, the single poller hangs after some time with no indication of the problem in the logs. Please review what is wrong with this configuration. Also, is there an alternative way to trigger the email adapter (e.g. cron at a periodic interval)? I am using Spring Integration 2.1.
A hung poller is most likely caused by the thread being stuck in user code. I see you have mail.debug=true; if that shows no activity, then a hung thread is probably the cause. Use jstack to take a thread dump.
Yes, you can use a cron expression, but that's unlikely to change things; a sketch is below.
2.1 is extremely old, but I still think a hung thread is the cause.
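For reference, switching the adapter's poller to a cron trigger is just a different trigger attribute (the expression below, firing every five minutes, is made up for illustration):
<int-mail:inbound-channel-adapter id="pop3EmailAdapter" store-uri="pop3://${pop3.user}:${pop3.pwd}@${pop3.server.host}/Inbox"
        channel="emailInputChannel" should-delete-messages="true" auto-startup="true" java-mail-properties="javaMailProperties">
    <!-- poll at most one message, at second 0 of every fifth minute -->
    <int:poller cron="0 0/5 * * * *" max-messages-per-poll="1" task-executor="emailPollingExecutor"/>
</int-mail:inbound-channel-adapter>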

Is there a configurable failure mechanism that can throw an exception if nodes go down from grid?

GridGain has a failover SPI mechanism for handling failure of jobs on nodes.
However, we would like to configure a failure mechanism that throws an exception even when one of the configured data nodes goes down.
How can we do this?
Are you trying to prevent failover for your tasks and throw an exception if a node that was in process of executing a job fails? (I'm not sure I understood you correctly, so please correct me if I'm wrong)
If I'm right, the easiest way is to configure NeverFailoverSpi, like this:
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
...
<property name="failoverSpi">
<bean class="org.apache.ignite.spi.failover.never.NeverFailoverSpi"/>
</property>
</bean>
Another option is to use the IgniteCompute.withNoFailover() method. It's useful if you want to disable failover for a small subset of tasks, but still use the default mechanisms for others. Here is an example:
IgniteCompute compute = ignite.compute().withNoFailover();
// Tasks executed with this compute instance will never fail over.
compute.execute(MyTask1.class, "arg");

Using Control bus to stop message-driven-channel-adapter that uses transactional session

My requirement is to use a transactional session with a message-driven-channel-adapter (JmsMessageDrivenEndpoint). I was able to set up the configuration by using sessionTransacted = true on the DefaultMessageListenerContainer.
Workflow: receive a message -> call the service activator -> service activator calls a DAO class.
On a successful commit to the database, commit() is called by the Spring framework, and on any runtime exception rollback() is called, which works just fine. When a rollback happens, the JMS broker sends the message back to my application.
For a specific type of exception in the DAO I want to add a message header (i.e. a redelivery time) so that the JMS broker will not send the message again right away. How can I do that?
For another specific type of exception in the DAO I want to use the control bus to stop the endpoint (message-driven-channel-adapter) and, before stopping it, roll back the previous transaction. How can I do that?
Can anyone help me out?
Using the Control Bus to start/stop endpoints is straightforward:
<int:control-bus input-channel="controlChannel"/>
<int-jms:message-driven-channel-adapter id="jmsInboundEndpoint"/>
<int:transformer input-channel="stopJmsInboundEndpointChannel"
        output-channel="controlChannel"
        expression="'@jmsInboundEndpoint.stop()'"/>
Or you can send the same command string to controlChannel from any place in your code.
But that alone does not mean the last transaction will be rolled back. It depends on your 'unit of work' (in other words, the behaviour of your service).
However, at the same time as you send the stop command, you can mark the current transaction for rollback:
TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
Your other question, about adding a message header, goes against messaging semantics.
If you change the message it becomes a new one, and you can't roll a message back to the queue with new info attached.
Of course, you can do it anyway and produce a new message, but then you should resend it, not roll back. You have to commit the transaction and send that new message somewhere (perhaps to the same queue); it will be a new message for the broker as well as for your application. So in this case you have to commit the transaction.
Not sure this is entirely clear or that my answer goes the right way, but I hope it helps a bit.
You cannot modify the message (add a header) before a rollback. You could, of course, requeue it as a new message after catching the exception. Some brokers (e.g. ActiveMQ) provide a back-off redelivery policy after a rollback; that might be a better solution if your broker supports it, and a sketch is below.
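For ActiveMQ 5.x, that back-off behaviour is configured on the client's connection factory; a minimal sketch, with illustrative broker URL and delay values:
<bean id="amqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL" value="tcp://localhost:61616"/>
    <property name="redeliveryPolicy">
        <bean class="org.apache.activemq.RedeliveryPolicy">
            <!-- wait 5s before the first redelivery, then back off exponentially -->
            <property name="initialRedeliveryDelay" value="5000"/>
            <property name="useExponentialBackOff" value="true"/>
            <property name="backOffMultiplier" value="2"/>
            <!-- after 6 failed deliveries the message goes to the DLQ -->
            <property name="maximumRedeliveries" value="6"/>
        </bean>
    </property>
</bean>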
You can use the control bus to stop the container, but you will probably have to do it asynchronously (invoke the stop on another thread, e.g. by using an ExecutorChannel on the control bus). Otherwise, depending on your environment, the stop may wait for the container thread to exit, so you shouldn't execute the stop on the container thread itself.
Best thing to do is experiment.
Thanks Gary and Artem. The solution is working. I am using the below configuration:
<jms:message-driven-channel-adapter id="jmsMessageDrivenChannelAdapter" connection-factory="connectionFactory"
destination="destination" transaction-manager="jmsTransactionManager" channel="serviceChannel" error-channel="ultimateErrorChannel" />
<si:service-activator input-channel="ultimateErrorChannel" output-channel="controlChannel">
    <bean class="play.spring.integration.TestErrorHandler">
        <property name="adapterNeedToStop" value="jmsMessageDrivenChannelAdapter" />
        <property name="exceptionWhenNeedToStop" value="play.spring.integration.ShutdownException" />
    </bean>
</si:service-activator>
<si:channel id="controlChannel">
    <si:dispatcher task-executor="controlBusExecutor" />
</si:channel>
<task:executor id='controlBusExecutor' pool-size='10' queue-capacity='50' />
<si:control-bus input-channel="controlChannel" />
Now my question is: if I want to stop multiple inbound adapters, how can I send a single message to the control bus for all of these adapters?
I am going to study SpEL; I would appreciate it if someone already knows. A possible sketch follows below.
Thanks
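One possible approach (an assumption, not from the thread): fan a single trigger message out into several command strings with a splitter, since the control bus executes one command per message. The channel and second adapter name here are made up:
<si:splitter input-channel="stopAllAdaptersChannel" output-channel="controlChannel"
        expression="{'@jmsMessageDrivenChannelAdapter.stop()', '@anotherAdapter.stop()'}"/>
<!-- each element of the SpEL inline list becomes its own message,
     so the control bus stops the adapters one by one -->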
