JMS gateway unable to refresh MQ connection - spring-integration

I have an inbound gateway listening to MQ for messages. Whenever the MQ server is brought down, I lose the connection, and in the logs I see that my inbound gateway tries to refresh the connection every 5 seconds. But once the MQ server is back up, I still see the same error in the log. Can you please let me know if I'm missing anything in the config, or whether anything needs to be done on the MQ server?
Spring Config
<bean id="cachingConnectionFactory1" class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="mqConnectionFactory1"/>
<property name="reconnectOnException" value="true"/>
</bean>
<bean id="mqConnectionFactory1" class="org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter">
<property name="targetConnectionFactory">
<bean class="com.ibm.mq.jms.MQQueueConnectionFactory">
<property name="hostName" value="${mq.hostname.1}"/>
<property name="port" value="${mq.port}"/>
<property name="queueManager" value="${mq.queuemanager.1}"/>
<property name="transportType" value="${mq.transporttype}"/>
<property name="channel" value="${mq.channel}"/>
</bean>
</property>
<property name="username" value="${mq.username}"/>
</bean>
<int-jms:message-driven-channel-adapter
id="mqEnterpriseRequestAdapter1"
connection-factory="cachingConnectionFactory1"
destination="xyzQueue"
concurrent-consumers="2"
max-concurrent-consumers="5"
cache-level="5"
idle-consumer-limit="5"
max-messages-per-task="2"
channel="xyzReceive"/>
Error Log:
[2015-09-19 01:52:56,267] WARN [org.springframework.jms.listener.DefaultMessageListenerContainer#4-494492] (DefaultMessageListenerContainer.java:842) - Setup of JMS message listener invoker failed for destination 'queue:///queuename' - trying to recover. Cause: MQJMS2002: failed to get message from MQ queue; nested exception is com.ibm.mq.MQException: MQJE001: Completion Code 2, Reason 2019
[2015-09-19 01:52:51,292] WARN [org.springframework.jms.listener.DefaultMessageListenerContainer#4-494037] (DefaultMessageListenerContainer.java:842) - Setup of JMS message listener invoker failed for destination 'queue:///queuename' - trying to recover. Cause: MQJMS2002: failed to get message from MQ queue; nested exception is com.ibm.mq.MQException: MQJE001: Completion Code 2, Reason 2019
[2015-09-19 01:52:51,263] WARN [org.springframework.jms.listener.DefaultMessageListenerContainer#4-494488] (DefaultMessageListenerContainer.java:842) - Setup of JMS message listener invoker failed for destination 'queue:///queuename' - trying to recover. Cause: MQJMS2002: failed to get message from MQ queue; nested exception is com.ibm.mq.MQException: MQJE001: Completion Code 2, Reason 2019
[2015-09-19 01:52:46,291] WARN [org.springframework.jms.listener.DefaultMessageListenerContainer#4-494033] (DefaultMessageListenerContainer.java:842) - Setup of JMS message listener invoker failed for destination 'queue:///queuename' - trying to recover. Cause: MQJMS2002: failed to get message from MQ queue; nested exception is com.ibm.mq.MQException: MQJE001: Completion Code 2, Reason 2019
[2015-09-19 01:52:46,262] WARN [org.springframework.jms.listener.DefaultMessageListenerContainer#4-494485] (DefaultMessageListenerContainer.java:842) - Setup of JMS message listener invoker failed for destination 'queue:///queuename' - trying to recover. Cause: MQJMS2002: failed to get message from MQ queue; nested exception is com.ibm.mq.MQException: MQJE001: Completion Code 2, Reason 2019

I found the answer to your question by googling MQJE001: Completion Code 2, Reason 2019
The answer is on IBM's support site.
Reason code 2019 usually occurs after a connection broken error (reason code 2009) occurs. You would see a JMSException with reason code 2009 preceding reason code 2019 in the SystemOut.log.
Reason code 2009 indicates that the connection to the MQ queue manager is no longer valid, usually due to a network or firewall issue.
Reason code 2019 errors will occur when invalid connections remain in the connection pool after the reason code 2009 error occurs. The next time that the application tries to use one of these connections, the reason code 2019 occurs.
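The mechanism can be illustrated with a small stand-alone sketch (plain Java, toy classes - no MQ or Spring involved): a pool that keeps serving a connection that has already died keeps failing until the broken entry is evicted, which is exactly what reconnectOnException / resetConnection() are for.

```java
import java.util.ArrayDeque;
import java.util.Deque;

class ToyConnection {
    boolean broken; // set to true when the "network" drops (reason 2009)

    String get() {
        if (broken) {
            throw new IllegalStateException("reason 2019: stale connection");
        }
        return "message";
    }
}

class ToyPool {
    private final Deque<ToyConnection> cache = new ArrayDeque<>();

    ToyConnection borrow() {
        return cache.isEmpty() ? new ToyConnection() : cache.pop();
    }

    void release(ToyConnection c) {
        cache.push(c);
    }

    void reset() { // the moral equivalent of CachingConnectionFactory.resetConnection()
        cache.clear();
    }
}

public class StalePoolDemo {
    public static void main(String[] args) {
        ToyPool pool = new ToyPool();
        ToyConnection conn = pool.borrow();
        pool.release(conn);           // connection is now cached
        conn.broken = true;           // the MQ server goes down: reason 2009

        ToyConnection stale = pool.borrow(); // the cache serves the dead connection
        try {
            stale.get();
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // reason 2019, again and again...
            pool.release(stale);      // ...because the dead connection goes back in
        }

        pool.reset();                 // evict stale entries
        System.out.println(pool.borrow().get()); // a fresh connection works
    }
}
```

This prints the "reason 2019" line first, then "message" once the cache has been cleared.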

Resolved after following the post below and making the corresponding config changes:
http://forum.spring.io/forum/spring-projects/integration/jms/89532-defaultmessagelistenercontainer-cachingconnectionfactory-tomcat-and-websphere-mq
<bean id="mqContainer1" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="cachingConnectionFactory1" />
<property name="exceptionListener" ref="cachingConnectionFactory1" />
<property name="destinationName" value="${mq.requestqueue}" />
<property name="maxConcurrentConsumers" value="x"/>
<property name="concurrentConsumers" value="x"/>
<property name="maxMessagesPerTask" value="x"/>
<property name="idleConsumerLimit" value="x"/>
</bean>
<int-jms:message-driven-channel-adapter
id="mqEnterpriseRequestAdapter1"
container="mqContainer1"
channel="mqMessageReceive"/>
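For reference, a hypothetical Java-config equivalent of the fix above (bean names kept from the XML; a sketch, not verified against a broker). The key point is that CachingConnectionFactory is itself a javax.jms.ExceptionListener whose onException() resets the cached connection, so wiring it as the container's exceptionListener lets it evict the stale connection that causes reason 2019.

```java
import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Configuration
public class MqListenerConfig {

    @Bean
    public CachingConnectionFactory cachingConnectionFactory1(ConnectionFactory mqConnectionFactory1) {
        CachingConnectionFactory ccf = new CachingConnectionFactory(mqConnectionFactory1);
        ccf.setReconnectOnException(true); // recreate the connection after reason 2009
        return ccf;
    }

    @Bean
    public DefaultMessageListenerContainer mqContainer1(CachingConnectionFactory cachingConnectionFactory1) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(cachingConnectionFactory1);
        // The key line: the caching factory hears connection exceptions and
        // evicts its stale connection instead of re-serving it (reason 2019).
        container.setExceptionListener(cachingConnectionFactory1);
        container.setDestinationName("xyzQueue");
        container.setConcurrentConsumers(2);
        container.setMaxConcurrentConsumers(5);
        return container;
    }
}
```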

Related

Error on JMS Connection using spring integration

We are using a JMS message-driven adapter with an error-channel to transfer messages from an MQ server:
<int:publish-subscribe-channel id="processChannel1" />
<int:logging-channel-adapter channel="processChannel1" logger-name="log_channel" level="ERROR" expression="payload" />
<int:service-activator input-channel="processChannel1" ref="channelUtils" output-channel="error1" method="treatment" />
<bean id="myListener" class="org.springframework.jms.listener.DefaultMessageListenerContainer" >
<property name="autoStartup" value="false" />
<property name="connectionFactory" ref="connectionFactoryCaching" />
<property name="destination" ref="jmsQueue" />
<property name="maxMessagesPerTask" value="1" />
<property name="receiveTimeout" value="1" />
<property name="backOff" ref="fixedBackOff" />
<property name="sessionTransacted" value="true"/>
<property name="errorHandler" ref="connectionJmsHandler"/>
</bean>
<int-jms:message-driven-channel-adapter id="jmsIn" container="myListener" channel="channelMQ_RMQ" error-channel="processChannel1"/>
When there is an error on the connection, the error-channel is not called, because it is not a "message error".
The connectionJmsHandler bean is also not called...
Following log message as ERROR :
2021-03-16 16:47:05.050 [myListener-5669] ERROR o.s.j.l.DefaultMessageListenerContainer - Could not refresh JMS Connection for destination 'queue://QM1/QUEUE.IN.3?CCSID=819&persistence=2&targetClient=1&priority=0' - retrying using FixedBackOff{interval=5000, currentAttempts=0, maxAttempts=1}. Cause: JMSWMQ0018: Failed to connect to queue manager 'QM1' with connection mode 'Client' and host name '10.118.121.78(8081)'.; nested exception is com.ibm.mq.MQException: JMSCMQ0001: WebSphere MQ call failed with compcode '2' ('MQCC_FAILED') reason '2538' ('MQRC_HOST_NOT_AVAILABLE').
2021-03-16 16:47:10.055 [myListener-5669] ERROR o.s.j.l.DefaultMessageListenerContainer - Could not refresh JMS Connection for destination 'queue://QM1/QUEUE.IN.3?CCSID=819&persistence=2&targetClient=1&priority=0' - retrying using FixedBackOff{interval=5000, currentAttempts=1, maxAttempts=1}. Cause: JMSWMQ0018: Failed to connect to queue manager 'QM1' with connection mode 'Client' and host name '10.118.121.78(8081)'.; nested exception is com.ibm.mq.MQException: JMSCMQ0001: WebSphere MQ call failed with compcode '2' ('MQCC_FAILED') reason '2538' ('MQRC_HOST_NOT_AVAILABLE').
2021-03-16 16:47:10.055 [myListener-5669] ERROR o.s.j.l.DefaultMessageListenerContainer - Stopping container for destination 'queue://QM1/QUEUE.IN.3?CCSID=819&persistence=2&targetClient=1&priority=0': back-off policy does not allow for further attempts.
Is there a way to call a specific treatment when having this kind of errors on JMS connection?
Thanks for your help
Regards,
Eric
You can add an ExceptionListener to the DefaultMessageListenerContainer and handle connection exceptions there.
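A minimal Java sketch of that suggestion (bean and queue names taken from the question; an outline, not a drop-in): register a javax.jms.ExceptionListener on the container, so connection-level failures - which never become Messages and therefore never reach the error-channel - still get custom treatment.

```java
import javax.jms.ConnectionFactory;
import javax.jms.ExceptionListener;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ConnectionErrorConfig {

    public DefaultMessageListenerContainer myListener(ConnectionFactory connectionFactoryCaching) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactoryCaching);
        container.setDestinationName("QUEUE.IN.3");
        container.setSessionTransacted(true);
        // Invoked for connection problems such as MQRC_HOST_NOT_AVAILABLE,
        // not for message-processing errors:
        container.setExceptionListener(connectionListener());
        return container;
    }

    private ExceptionListener connectionListener() {
        return e -> {
            // alert, trigger fail-over, or stop/restart the container here
            System.err.println("JMS connection error: " + e.getMessage());
        };
    }
}
```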

outbound-gateway executes the MV command very slowly

After we upgraded Spring Integration from 4.2.13 to 5.3.1, the SFTP outbound gateway would often take more than 30 seconds to execute the mv command.
We use an inbound-streaming-channel-adapter to fetch the file and then an outbound-gateway to move it to the backup folder; below is our XML snippet:
<int:channel id="input">
<int:queue />
</int:channel>
<int:channel id="output">
<int:queue />
<int:interceptors>
<int:wire-tap channel="successHistory"/>
</int:interceptors>
</int:channel>
<int-sftp:inbound-streaming-channel-adapter id="sftInboundAdapter"
session-factory="cachingSftpSessionFactory"
channel="input"
remote-file-separator="/"
remote-directory="/home/box">
<int:poller fixed-delay="2000" max-messages-per-poll="1"/>
</int-sftp:inbound-streaming-channel-adapter>
<int:chain id="chain1" input-channel="input" output-channel="output">
<int:poller fixed-delay="1000"/>
<int:stream-transformer charset="UTF-8"/>
<int:header-enricher>
<int:error-channel ref="error" overwrite="true"/>
<int:header name="originalPayload" expression="payload"/>
</int:header-enricher>
<int-sftp:outbound-gateway session-factory="cachingSftpSessionFactory"
id="sftpOutboundGateway"
command="mv"
expression="headers.file_remoteDirectory+'/'+headers.file_remoteFile"
rename-expression="headers.file_remoteDirectory+'/backup/'+headers.file_remoteFile"
>
<int-sftp:request-handler-advice-chain>
<ref bean="gatewayLogger"/>
</int-sftp:request-handler-advice-chain>
</int-sftp:outbound-gateway>
<int:transformer expression="headers.originalPayload"/>
</int:chain>
<jms:outbound-channel-adapter channel="output" connection-factory="tibcoEmsConnectionFactory" destination="topic"/>
<bean id="sftpSessionFactory"
class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
<property name="host" value="${sftp.host}"/>
<property name="port" value="${sftp.port}"/>
<property name="user" value="${sftp.user}"/>
<property name="password" value="${sftp.password}"/>
<property name="allowUnknownKeys" value="true"/>
<property name="timeout" value="300000"/>
</bean>
<bean id="cachingSftpSessionFactory"
class="org.springframework.integration.file.remote.session.CachingSessionFactory">
<constructor-arg ref="sftpSessionFactory"/>
<constructor-arg value="2"/>
<property name="sessionWaitTimeout" value="300000"/>
</bean>
The logs generated by the gateway advice are as follows; the rename (mv) operation took more than 30 seconds:
2020-07-07 12:20:16 INFO [task-scheduler-8] gatewayLogger - ''int-sftp:outbound-gateway' with id='sftpOutboundGateway''#1346093219 - before: {file_remoteHostPort=0.0.0.0, fileName=20200707115747609.xml, errorChannel=bean 'error', file_remoteDirectory=/home/box, originalPayload=<?xml version="1.0" encoding="UTF-8"?>
2020-07-07 12:20:48 INFO [task-scheduler-8] gatewayLogger - ''int-sftp:outbound-gateway' with id='sftpOutboundGateway''#1346093219 - after: org.springframework.integration.support.MessageBuilder#153944c0
As we use a chain for message processing, and the session is only released by the stream transformer, if the gateway runs too long then messages pile up in the queue and their sessions can't be released. That causes messages to get stuck, and the adapter uses up all the sessions in the cache.
It is caused by org.springframework.integration.file.remote.RemoteFileUtils#makeDirectories, which is a static synchronized method: when there are many concurrent (S)FTP move operations and the network is slow, all AbstractRemoteFileOutboundGateway#mv requests queue behind the class-wide lock and appear to be very slow.
The method signature is as below:
public static synchronized <F> void makeDirectories(String path, Session<F> session, String remoteFileSeparator,
Log logger) throws IOException {
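The effect of that signature can be demonstrated with a toy, self-contained example (plain Java, no Spring): a static synchronized method serializes all callers class-wide, so N threads each holding the lock for some delay take roughly N times that delay in total.

```java
public class SerializedCallsDemo {

    // Stand-in for RemoteFileUtils#makeDirectories: the class-wide lock means
    // only one caller at a time can be inside, JVM-wide.
    static synchronized void slowRemoteCall(long delayMs) throws InterruptedException {
        Thread.sleep(delayMs); // stands in for a slow network round-trip
    }

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                try {
                    slowRemoteCall(100);
                } catch (InterruptedException ignored) {
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // With 4 threads at 100 ms each, this prints roughly 400 ms, not 100 ms.
        System.out.println("elapsed ~" + elapsedMs + " ms");
    }
}
```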
I think the problem is really in how you use CachingSessionFactory. Your cache size of <constructor-arg value="2"/> is too low, so there is a high chance of contention for cached sessions.
You use this session factory in the <int-sftp:inbound-streaming-channel-adapter>, which opens a session and keeps it out of the cache until the <int:stream-transformer>. But that already happens on another thread, because your input channel is a QueueChannel. This frees the <int-sftp:inbound-streaming-channel-adapter> thread to go and take another session (if any) from the cache. So, when the <int-sftp:outbound-gateway>'s turn comes, there are probably no sessions left in the cache.
Please explain why your cache is so low, and why you use a QueueChannel right after an inbound polling channel adapter. Not related, but why do you use a QueueChannel for the output destination as well?
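A sketch of that remedy (bean names taken from the question; the generic type assumes the JSch-based SFTP support in Spring Integration 5.x): give the session cache enough slots that the streaming adapter, the chain, and the mv gateway are not all competing for two sessions.

```java
// Sketch; sftpSessionFactory is the DefaultSftpSessionFactory bean from the question.
CachingSessionFactory<ChannelSftp.LsEntry> cachingSftpSessionFactory =
        new CachingSessionFactory<>(sftpSessionFactory, 10); // was 2
cachingSftpSessionFactory.setSessionWaitTimeout(300_000);
```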
I think Spring Integration 5.3.1 has a bug in int-sftp:outbound-gateway, as we can easily reproduce the SFTP gateway taking a long time to execute the mv command on a certain machine (our production).
However, after we replaced the gateway with our own activator, the mv command executed very fast.
we replaced:
<int-sftp:outbound-gateway session-factory="cachingSftpSessionFactory"
id="sftpOutboundGateway"
command="mv"
expression="headers.file_remoteDirectory+'/'+headers.file_remoteFile"
rename-expression="headers.file_remoteDirectory+'/backup/'+headers.file_remoteFile"
/>
with:
<int:header-enricher>
<int:header name="PATH_FROM" expression="headers.file_remoteDirectory+'/'+headers.file_remoteFile"/>
<int:header name="PATH_TO" expression="headers.file_remoteDirectory+'/backup/'+headers.file_remoteFile"/>
</int:header-enricher>
<int:service-activator ref="remoteFileRenameActivator"/>
and here is the source code of our remoteFileRenameActivator:
@ServiceActivator
public Message<?> moveFile(Message<?> message,
        @Header("PATH_FROM") String pathFrom,
        @Header("PATH_TO") String pathTo) throws IOException {
    try (Session<?> session = sessionFactory.getSession()) {
        LOGGER.debug(contextName + " " + session + " is moving file from " + pathFrom + " to " + pathTo);
        session.rename(pathFrom, pathTo);
    }
    return message;
}
The reasons why we think this is a bug:
1. We upgraded Spring Integration from 4.2.13 to 5.3.1; we didn't have this problem in 4.2.13.
2. After we replaced the gateway's mv command with our own implementation, mv execution was no longer a bottleneck.
3. The issue was still there after we changed the QueueChannel to a DirectChannel and increased the session quantity.
4. Running the rename command from a command-line client is also very fast.

Configuring Service Discovery for Azure in Hazelcast

The Problem:
I'm setting up the Hazelcast auto-discovery feature for Azure environments, following the GitHub document: https://github.com/hazelcast/hazelcast-azure
But I am getting the following error:
Exception in thread "main" com.hazelcast.config.InvalidConfigurationException: Invalid configuration
at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.loadDiscoveryStrategies(DefaultDiscoveryService.java:147)
.....
com.hazelcast.core.server.StartServer.main(StartServer.java:46)
Caused by: com.hazelcast.config.properties.ValidationException: There is no discovery strategy factory to create 'DiscoveryStrategyConfig{properties={group-name=****, client-secret=****, subscription-id=****, client-id=****, tenant-id=****, cluster-id=****}, className='com.hazelcast.azure.AzureDiscoveryStrategy', discoveryStrategyFactory=null}' Is it a typo in a strategy classname? Perhaps you forgot to include implementation on a classpath?
at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.buildDiscoveryStrategy(DefaultDiscoveryService.java:186)
...
I have tried the following properties:
<azure enabled="true">
<client-id>****</client-id>
<client-secret>****</client-secret>
<tenant-id>****</tenant-id>
<subscription-id>****</subscription-id>
<cluster-id>****</cluster-id>
<group-name>****</group-name>
</azure>
Since this was not working, I even tried using a discovery strategy with the following code snippet:
<discovery-strategies>
<!-- class equals to the DiscoveryStrategy not the factory! -->
<discovery-strategy enabled="true" class="com.hazelcast.azure.AzureDiscoveryStrategy">
<properties>
<property name="client-id">****</property>
<property name="client-secret">****</property>
<property name="tenant-id">****</property>
<property name="subscription-id">****</property>
<property name="cluster-id">****</property>
<property name="group-name">****</property>
</properties>
</discovery-strategy>
</discovery-strategies>
I even switched the value of class from:
class="com.hazelcast.azure.AzureDiscoveryStrategy"
to
class="com.hazelcast.azure.AzureDiscoveryStrategyFactory"
I have tried the above with Hazelcast versions 3.12.2, 3.12.1, 3.12 and 3.11.4, and all give the same result.
Please suggest what I am doing incorrectly and what else is required.

TcpDiscoverySpi returns HTTP 401 - [Apache Ignite Cluster, installed on Azure Kubernetes Services]

EXCEPTION during IgniteClient startup (Spring Boot job):
TcpDiscoverySpi - Failed to get registered addresses from IP finder on start
Server returned HTTP response code: 401 for URL https://
I've created the service account and read the token.
Then I put the value of the token attribute inside a file.
Then I tried to connect with ignitevisorcmd, but there seems to be an error I'm not able to identify.
Snippet of my "ignite-config.xml":
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!--
Enables Kubernetes IP finder with default settings.
-->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="MasterUrl" value="https://XXXX-aks-001-ignitecluster-XXXXXXXX.hcp.westeurope.azmk8s.io"/>
<property name="AccountToken" value="C:\Users\XXXXXXXX\Desktop\TOP\token"/>
<property name="ServiceName" value="ignite"/>
</bean>
</property>
</bean>
</property>
What's wrong?
Check whether you have an RBAC-enabled cluster. If you do, you will have to give your Ignite pod permission to access the endpoints.
https://kubernetes.io/docs/admin/authorization/rbac/

Retry on SFTP Permission errors

I am maintaining an existing Spring Integration application which is polling a third-party SFTP server for files. It occasionally throws permission or 'not found' errors, which I suspect are caused by transient problems at the remote end. I would like the application to retry on getting these errors, as it will probably resolve the issue. (I also have a requirement to "retry on any problems", which should cover this case.)
e.g.
org.springframework.messaging.MessagingException: Problem occurred while synchronizing remote to local directory; nested exception is org.springframework.messaging.MessagingException: Failure occurred while copying from remote to local directory; nested exception is org.springframework.core.NestedIOException: failed to read file mypath/myfile.csv; nested exception is 3: Permission denied
at [snip]
Caused by: org.springframework.messaging.MessagingException: Failure occurred while copying from remote to local directory; nested exception is org.springframework.core.NestedIOException: failed to read file mypath/myfile.csv; nested exception is 3: Permission denied
at [snip]
Caused by: 3: Permission denied
at com.jcraft.jsch.ChannelSftp.throwStatusError(ChannelSftp.java:2846) [snip]
After extensive googling and going round in circles, I am still unable to figure out how to do this with Spring Integration. Here is the existing config:
<bean id="myAcceptOnceFilter" class="org.springframework.integration.sftp.filters.SftpPersistentAcceptOnceFileListFilter">
<constructor-arg index="0" ref="myLocalFileStore"/>
<constructor-arg index="1" name="prefix" value="myprefix_"/>
<property name="flushOnUpdate" value="true"/>
</bean>
<bean id="myCompositeFilter" class="org.springframework.integration.file.filters.CompositeFileListFilter">
<constructor-arg>
<list>
<bean class="org.springframework.integration.sftp.filters.SftpSimplePatternFileListFilter">
<constructor-arg value="myprefix" />
</bean>
<ref bean="myAcceptOnceFilter"/>
</list>
</constructor-arg>
</bean>
<int-sftp:inbound-channel-adapter id="myInboundChannel"
session-factory="mySftpSessionFactory"
channel="myDownstreamChannel"
remote-directory="blah"
filter="myCompositeFilter"
local-directory="blah"
auto-create-local-directory="true"
>
<int:poller fixed-rate="10000" max-messages-per-poll="-1">
<int:transactional transaction-manager="transactionManager" synchronization-factory="syncFactory" />
</int:poller>
</int-sftp:inbound-channel-adapter>
EDIT: I think the problem lies in myCompositeFilter. It doesn't look like rollback() is being called inside myAcceptOnceFilter when the exception is thrown. If I simply use myAcceptOnceFilter without the composite then the code works as intended (i.e. rollback() is called). Question is now: how do I continue to use a CompositeFilter which calls rollback on all its children?
I've looked into putting a retry advice inside the poller (EDIT: I now know this is irrelevant):
<bean id="retryAdvice" class="org.springframework.integration.handler.advice.RequestHandlerRetryAdvice"/>
<int:poller fixed-rate="10000" max-messages-per-poll="-1">
<int:advice-chain>
<tx:advice transaction-manager="transactionManager"/>
<int:ref bean="retryAdvice"/>
</int:advice-chain>
</int:poller>
...but this throws a warning that
This advice org.springframework.integration.handler.advice.RequestHandlerRetryAdvice can only be used for MessageHandlers
In short, I'm stuck. Any help on getting it to retry on this kind of sftp exception would be very gratefully received. Thanks!
EDIT: Added in mention of SftpPersistentAcceptOnceFileListFilter.
EDIT: Added discussion of CompositeFileLIstFilter, which now looks like the location of the problem.
The retry advice is for consuming endpoints (push-retrying).
It's not clear why you need to add retry here - the poller will inherently retry on the next poll.
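For completeness, a sketch of where RequestHandlerRetryAdvice does apply - on a consuming endpoint's handler via a request-handler-advice-chain, never in a poller's advice chain (Spring Retry on the classpath is assumed; a sketch, not tested):

```java
// Hypothetical bean definition for an outbound endpoint's advice.
RequestHandlerRetryAdvice retryAdvice = new RequestHandlerRetryAdvice();
RetryTemplate retryTemplate = new RetryTemplate();
retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3)); // up to 3 attempts
retryAdvice.setRetryTemplate(retryTemplate);
// Reference this bean from a <request-handler-advice-chain> on an outbound
// endpoint (e.g. an <int-sftp:outbound-gateway>), not from <int:poller>.
```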
