I want to prevent the retry from my service activator in case any exception is thrown.
Below is my configuration:
<http:inbound-gateway id="inboundGW"
request-channel="getCustInfo" supported-methods="GET"
error-channel="errorHandlerRouterChannel"
path="/Messages/{cust}" reply-channel="msgRetrival">
<http:header name="cust" expression="#pathVariables.cust" />
</http:inbound-gateway>
<int:service-activator input-channel="getCustInfo"
output-channel="msgRetrival" ref="createService" method="processCustInfo">
<int:request-handler-advice-chain>
<ref bean="retryWithBackoffAdviceSession"></ref>
</int:request-handler-advice-chain>
</int:service-activator>
<bean id="retryWithBackoffAdviceSession" class="org.springframework.integration.handler.advice.RequestHandlerRetryAdvice">
<property name="retryTemplate">
<bean class="org.springframework.retry.support.RetryTemplate">
<property name="retryPolicy">
<bean class="org.springframework.retry.policy.SimpleRetryPolicy">
<property name="maxAttempts" value="1" />
</bean>
</property>
</bean>
</property>
<property name="recoveryCallback">
<bean class="org.springframework.integration.handler.advice.ErrorMessageSendingRecoverer">
<constructor-arg ref="errorHandlerRouterChannel"/>
</bean>
</property>
</bean>
The exception thrown from my service activator doesn't get caught by the error-channel specified on the http:inbound-gateway.
The Spring Integration config file was loaded twice in the deployment descriptor. After fixing this, the issue was resolved.
Your question is not clear.
If you want to throw the exception after retries are exhausted, remove the recoveryCallback.
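For illustration, here is a hedged sketch of the same advice with the recoveryCallback removed, so the final exception is rethrown to the caller and can reach the gateway's error-channel (note that SimpleRetryPolicy counts the initial call, so maxAttempts="1" means a single attempt and no retries at all):
<bean id="retryWithBackoffAdviceSession"
      class="org.springframework.integration.handler.advice.RequestHandlerRetryAdvice">
    <property name="retryTemplate">
        <bean class="org.springframework.retry.support.RetryTemplate">
            <property name="retryPolicy">
                <!-- maxAttempts includes the first attempt, so 1 disables retries entirely -->
                <bean class="org.springframework.retry.policy.SimpleRetryPolicy">
                    <property name="maxAttempts" value="1"/>
                </bean>
            </property>
        </bean>
    </property>
    <!-- no recoveryCallback: once attempts are exhausted the exception propagates to the caller -->
</bean>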
I have a QueueChannel which is the output channel for my aggregator. The code works just fine on a single-node cluster. The moment I deploy it on a 2-node cluster, the partition step is not completed and remains in the STARTED state forever. Looking at PartitionStep, it seems it is waiting in the receive method and never marks the step COMPLETED. Below is the configuration I am using.
<bean id="jndiTemplate" class="org.springframework.jndi.JndiTemplate">
<property name="environment">
<props>
<prop key="java.naming.factory.initial">weblogic.jndi.WLInitialContextFactory</prop>
<prop key="java.naming.provider.url">${oag_wl_nodes_url}</prop>
</props>
</property>
</bean>
<bean id="connectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiTemplate" ref="jndiTemplate" />
<property name="jndiName" value="jmsConnectionFactory" />
</bean>
<int:channel id="requestsChannel" />
<bean id="reqQueue" class="org.springframework.jndi.JndiObjectFactoryBean"
lazy-init="true" scope="prototype">
<property name="jndiTemplate">
<ref bean="jndiTemplate" />
</property>
<property name="jndiName">
<value>masterRequestQueue</value>
</property>
</bean>
<int-jms:outbound-channel-adapter
connection-factory="connectionFactory" channel="requestsChannel" destination="reqQueue" />
<int:channel id="replyChannel" />
<bean id="replyQueue" class="org.springframework.jndi.JndiObjectFactoryBean"
lazy-init="true" scope="prototype">
<property name="jndiTemplate">
<ref bean="jndiTemplate" />
</property>
<property name="jndiName">
<value>masterReplyQueue</value>
</property>
</bean>
<int-jms:message-driven-channel-adapter connection-factory="connectionFactory" destination="replyQueue"
channel="replyChannel" acknowledge="transacted" />
<int:channel id="aggregatedReplyChannel">
<int:queue />
</int:channel>
<int:aggregator ref="partitionHandler"
input-channel="replyChannel" output-channel="aggregatedReplyChannel" />
You are using a common reply queue replyQueue; that won't work because the two instances are competing for the replies and the reply might be consumed by the non-master node.
You need a separate reply destination for each.
That said, I don't see you setting up the replyTo at all on the outbound message, so I assume your replyTo is hard-coded on the consumer side.
You might find it easier to use an outbound gateway instead of a pair of channel adapters; the gateway will create a temporary reply queue (by default) for each request.
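For illustration, a hedged sketch of that alternative using the beans already defined above (the gateway replaces both the outbound adapter and the message-driven adapter; with no reply-destination configured it creates a temporary reply queue for each request):
<int-jms:outbound-gateway request-channel="requestsChannel"
                          request-destination="reqQueue"
                          reply-channel="replyChannel"
                          connection-factory="connectionFactory"/>
The replies then come back on replyChannel without the two nodes competing for a shared reply queue.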
I have a Spring XD source module which pulls a file from S3 and splits it line by line. My Spring config is below. I have 3 containers and 1 admin server, and I now see duplicate messages being processed by each container, since each of them downloads its own copy.
I can solve this by setting the S3 source module's deployment count to 1, but then my message processing slows down. Any inputs on how to solve this?
<int:poller fixed-delay="${fixedDelay}" default="true">
<int:advice-chain>
<ref bean="pollAdvise"/>
</int:advice-chain>
</int:poller>
<bean id="pollAdvise"
</bean>
<bean id="credentials" class="org.springframework.integration.aws.core.BasicAWSCredentials">
<property name="accessKey" value="#{encryptedDatum.decryptBase64Encoded('${accessKey}')}"/>
<property name="secretKey" value="${secretKey}"/>
</bean>
<bean id="clientConfiguration" class="com.amazonaws.ClientConfiguration">
<property name="proxyHost" value="${proxyHost}"/>
<property name="proxyPort" value="${proxyPort}"/>
<property name="preemptiveBasicProxyAuth" value="false"/>
</bean>
<bean id="s3Operations" class="org.springframework.integration.aws.s3.core.CustomC1AmazonS3Operations">
<constructor-arg index="0" ref="credentials"/>
<constructor-arg index="1" ref="clientConfiguration"/>
<property name="awsEndpoint" value="s3.amazonaws.com"/>
<property name="temporaryDirectory" value="${temporaryDirectory}"/>
<property name="awsSecurityKey" value="${awsSecurityKey}"/>
</bean>
<bean id="encryptedDatum" class="abc"/>
<!-- aws-endpoint="https://s3.amazonaws.com" -->
<int-aws:s3-inbound-channel-adapter aws-endpoint="s3.amazonaws.com"
bucket="${bucket}"
s3-operations="s3Operations"
credentials-ref="credentials"
file-name-wildcard="${fileNameWildcard}"
remote-directory="${remoteDirectory}"
channel="splitChannel"
local-directory="${localDirectory}"
accept-sub-folders="false"
delete-source-files="true"
archive-bucket="${archiveBucket}"
archive-directory="${archiveDirectory}">
</int-aws:s3-inbound-channel-adapter>
<int-file:splitter input-channel="splitChannel" output-channel="output" markers="false" charset="UTF-8">
<int-file:request-handler-advice-chain>
<bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
<property name="onSuccessExpression" value="payload.delete()"/>
</bean>
</int-file:request-handler-advice-chain>
</int-file:splitter>
<int:channel id="output"/>
[Updated]
I added the idempotency as you suggested, with a metadata store. But since my XD is running in a 3-container cluster with Rabbit, will a SimpleMetadataStore work? I think I should use a Redis/Mongo metadata store. If I use a Mongo/Redis metadata store, how can I evict/remove the entries, since they will pile up over time?
<int:idempotent-receiver id="expressionInterceptor" endpoint="output"
metadata-store="store"
discard-channel="nullChannel"
throw-exception-on-rejection="false"
key-expression="payload"/>
<bean id="store" class="org.springframework.integration.metadata.SimpleMetadataStore"/>
I can suggest you take a look at the Idempotent Receiver.
With that you can use a shared MetadataStore and not accept duplicate files.
The <idempotent-receiver> should be configured for your <int-file:splitter>. And yes: with the discard logic to avoid duplicate messages.
UPDATE
"But since my XD is running in a 3-container cluster with Rabbit, will a SimpleMetadataStore work?"
That doesn't matter, because you start the stream from the S3 MessageSource, so you should filter files already there. Therefore you need an external shared MetadataStore.
"If I use a Mongo/Redis metadata store, how can I evict/remove the entries, since they will pile up over time?"
That's correct; it is a side effect of the Idempotent Receiver logic. Not sure how that is a problem for you if you use a database...
You can clean the collection/keys with some periodic task. Maybe once a week...
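A minimal sketch of swapping the SimpleMetadataStore for a shared, Redis-backed one (this assumes spring-integration-redis is on the classpath; the redisConnectionFactory bean name is illustrative):
<!-- shared store visible to all three containers; redisConnectionFactory is assumed to exist elsewhere -->
<bean id="store" class="org.springframework.integration.redis.metadata.RedisMetadataStore">
    <constructor-arg ref="redisConnectionFactory"/>
</bean>
The existing <int:idempotent-receiver> can keep referring to the same store id; only the implementation behind it changes.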
In my app the message comes in on the input queue and is then sent to the output queue. We do this through Spring Integration. My requirement is: if there is a problem connecting to the output queue, it should try reconnecting 3 times with a delay of 30 seconds and, if it finally fails, log the exception. Can you please help on how to achieve this? My config file:
<bean id="mqQcfParent" class="com.ibm.mq.jms.MQQueueConnectionFactory">
<property name="transportType">
<util:constant static-field="com.ibm.mq.jms.JMSC.MQJMS_TP_CLIENT_MQ_TCPIP"/>
</property>
<property name="hostName" value="${mq.out.hostname}"/>
<property name="channel" value="${mq.out.channel}"/>
<property name="port" value="${mq.out.port}"/>
</bean>
<bean id="remoteConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="mqQcfParent"/>
<property name="sessionCacheSize" value="${mq.out.cacheSize}"/>
<property name="cacheProducers" value="true"/>
<property name="cacheConsumers" value="true"/>
</bean>
<bean id="inQueue" class="com.ibm.mq.jms.MQQueue">
<property name="baseQueueName" value="${mq.in.queue}"/>
</bean>
<bean id="aircraftAssignQueue" class="com.ibm.mq.jms.MQQueue">
<property name="baseQueueName" value="${mq.out.aircraftAssignQueue}"/>
</bean>
<bean id="failureQueue" class="com.ibm.mq.jms.MQQueue">
<property name="baseQueueName" value="${mq.out.failureQueue}"/>
</bean>
<bean id="messageListenerContainerParent" class="org.springframework.jms.listener.DefaultMessageListenerContainer" abstract="true">
<property name="destination" ref="inQueue"/>
<property name="sessionTransacted" value="true"/>
<property name="maxConcurrentConsumers" value="${mq.in.max.consumer}"/>
<property name="concurrentConsumers" value="${mq.in.min.consumer}"/>
<property name="receiveTimeout" value="5000"/>
<property name="recoveryInterval" value="60000"/>
<property name="autoStartup" value="true"/>
</bean>
<bean id="messageListenerContainerCDC" parent="messageListenerContainerParent">
<property name="connectionFactory">
<bean parent="remoteConnectionFactory">
<property name="targetConnectionFactory">
<bean parent="mqQcfParent">
<property name="hostName" value="${mq.in.cdc.hostname}"/>
<property name="channel" value="${mq.in.cdc.channel}"/>
<property name="port" value="${mq.in.cdc.port}"/>
</bean>
</property>
</bean>
</property>
</bean>
<bean id="messageListenerContainerPDC" parent="messageListenerContainerParent">
<property name="connectionFactory">
<bean parent="remoteConnectionFactory">
<property name="targetConnectionFactory">
<bean parent="mqQcfParent">
<property name="hostName" value="${mq.in.pdc.hostname}"/>
<property name="channel" value="${mq.in.pdc.channel}"/>
<property name="port" value="${mq.in.pdc.port}"/>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
<context:property-placeholder location="config/application.properties"/>
<import resource="jms-listener-container-config.xml"/>
<!-- Get Input Messages -->
<int-jms:message-driven-channel-adapter id="msgInCDC" channel="toRoute" container="messageListenerContainerCDC" error-channel="errorChannel" acknowledge="transacted"/>
<int-jms:message-driven-channel-adapter id="msgInPDC" channel="toRoute" container="messageListenerContainerPDC" error-channel="errorChannel" acknowledge="transacted"/>
<!-- Route Messages Depending on Root Element -->
<int:channel id="toAircraftAssign"/>
<int:channel id="toDiversionChange"/>
<int:channel id="toFlightCreate"/>
<int:channel id="toFlightPlanRelease"/>
<int:channel id="toGateChange"/>
<int:channel id="toPositionReport"/>
<int:channel id="toScheduleChange"/>
<int-xml:xpath-router id="flightUpdateRouter" input-channel="toRoute" default-output-channel="errorChannel" evaluate-as-string="true">
<int-xml:xpath-expression expression="name(/*)"/>
<int-xml:mapping value="AircraftAssignment" channel="toAircraftAssign"/>
<int-xml:mapping value="DiversionChangesUpdate" channel="toDiversionChange"/>
<int-xml:mapping value="CreateFlight" channel="toFlightCreate"/>
<int-xml:mapping value="FlightPlanRelease" channel="toFlightPlanRelease"/>
<int-xml:mapping value="GateChange" channel="toGateChange"/>
<int-xml:mapping value="PositionReportUpdate" channel="toPositionReport"/>
<int-xml:mapping value="ScheduleChangesUpdate" channel="toScheduleChange"/>
</int-xml:xpath-router>
<int-jms:outbound-channel-adapter id="aircraftAssignMsgOut" channel="toAircraftAssign" connection-factory="remoteConnectionFactory" destination="aircraftAssignQueue"/>
<int-jms:outbound-channel-adapter id="diversionChangeMsgOut" channel="toDiversionChange" connection-factory="remoteConnectionFactory" destination="diversionChangeQueue"/>
<int-jms:outbound-channel-adapter id="flightCreateMsgOut" channel="toFlightCreate" connection-factory="remoteConnectionFactory" destination="flightCreateQueue"/>
<int-jms:outbound-channel-adapter id="flightPlanReleaseMsgOut" channel="toFlightPlanRelease" connection-factory="remoteConnectionFactory" destination="flightPlanReleaseQueue"/>
<int-jms:outbound-channel-adapter id="gateChangeMsgOut" channel="toGateChange" connection-factory="remoteConnectionFactory" destination="gateChangeQueue"/>
<int-jms:outbound-channel-adapter id="positionReportMsgOut" channel="toPositionReport" connection-factory="remoteConnectionFactory" destination="positionReportQueue"/>
<int-jms:outbound-channel-adapter id="scheduleChangeMsgOut" channel="toScheduleChange" connection-factory="remoteConnectionFactory" destination="scheduleChangeQueue"/>
<!-- Error Handling -->
<int-jms:outbound-channel-adapter id="errMsgOut" channel="errorChannel" connection-factory="remoteConnectionFactory" destination="failureQueue"/>
<!-- Logger -->
<int:wire-tap pattern="to*" order="7" channel="wireTapChannel"/>
<int:logging-channel-adapter id="wireTapChannel" level="debug" logger-name="WIRETAP"/>
<!-- Logger -->
<!--
<int:wire-tap pattern="to*" order="0" channel="loggerChannel"/>
-->
<!--
<int:logging-channel-adapter id="loggerChannel" level="DEBUG" expression="'YYYY'"/>
-->
</beans>
You don't show any Spring Integration configuration but, presuming you are using a JMS outbound channel adapter, you can add a retry advice with an appropriately configured SimpleRetryPolicy.
However, if the same broker is being used for the inbound queue too, that session will be broken and the message redelivered anyway; so you might be better off setting the retry policy in the broker.
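If you do add the advice, a hedged sketch applied to one of the adapters from the question (three attempts in total with a fixed 30-second back-off; once exhausted, the recoverer sends the failure to errorChannel, which the configuration above already routes to the failureQueue, or you could point it at a logging channel instead; adjust maxAttempts if you want three retries after the first failure):
<int-jms:outbound-channel-adapter id="aircraftAssignMsgOut" channel="toAircraftAssign"
        connection-factory="remoteConnectionFactory" destination="aircraftAssignQueue">
    <int-jms:request-handler-advice-chain>
        <bean class="org.springframework.integration.handler.advice.RequestHandlerRetryAdvice">
            <property name="retryTemplate">
                <bean class="org.springframework.retry.support.RetryTemplate">
                    <property name="retryPolicy">
                        <!-- 3 attempts in total -->
                        <bean class="org.springframework.retry.policy.SimpleRetryPolicy">
                            <property name="maxAttempts" value="3"/>
                        </bean>
                    </property>
                    <property name="backOffPolicy">
                        <!-- 30 seconds between attempts -->
                        <bean class="org.springframework.retry.backoff.FixedBackOffPolicy">
                            <property name="backOffPeriod" value="30000"/>
                        </bean>
                    </property>
                </bean>
            </property>
            <property name="recoveryCallback">
                <!-- after the last failure, send an ErrorMessage to errorChannel -->
                <bean class="org.springframework.integration.handler.advice.ErrorMessageSendingRecoverer">
                    <constructor-arg ref="errorChannel"/>
                </bean>
            </property>
        </bean>
    </int-jms:request-handler-advice-chain>
</int-jms:outbound-channel-adapter>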
I have a file dropped at the FTP location which should be picked up by the ftp-inbound-adapter. This file is saved to a local directory, which is in turn polled by a Spring file-inbound-adapter. The fileNameGenerator bean is used in the file-inbound-adapter and decides the destination dynamically. I have also posted another question about the file in the local directory not being deleted. This is the problem I am facing.
This is my entire configuration:
<util:properties id="someid" location="classpath:config/config.properties"/>
<mvc:annotation-driven />
<context:component-scan base-package="com.dms" />
<bean
class="org.springframework.web.servlet.view.InternalResourceViewResolver">
<property name="prefix">
<value>/WEB-INF/</value>
</property>
<property name="suffix">
<value>.jsp</value>
</property>
</bean>
<context:property-placeholder location="classpath:config/jdbc.properties,classpath:config/config.properties,classpath:config/ftp.properties"/>
<bean id="dataSource"
class="org.springframework.jdbc.datasource.DriverManagerDataSource"
>
<property name="driverClassName" value="${jdbc.driverClassName}"/>
<property name="url" value="${jdbc.url}"/>
<property name="username" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
</bean>
<bean id="sessionFactory"
class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
<property name="dataSource">
<ref bean="dataSource" />
</property>
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">${hibernate.dialect}</prop>
<prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
<prop key="hibernate.format_sql">${hibernate.format_sql}</prop>
<prop key="hibernate.generate_statistics">${hibernate.generate_statistics}</prop>
</props>
</property>
<property name="packagesToScan">
<list>
<value>com.dms.entity</value>
</list>
</property>
</bean>
<tx:annotation-driven />
<bean id="transactionManager"
class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
<bean id="multipartResolver"
class="org.springframework.web.multipart.commons.CommonsMultipartResolver">
<!-- setting maximum upload size -->
<property name="maxUploadSize" value="10485760" />
</bean>
<!-- scheduler to pickup temp folder files to permanent location -->
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="triggers">
<list>
<ref bean="simpleTrigger" />
</list>
</property>
</bean>
<bean id="dmsFilesDetectionJob" class="com.dms.scheduler.job.DMSFilesDetectionJob">
</bean>
<bean id="dmsFilesDetectionJobDetail" class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
<property name="targetObject" ref="dmsFilesDetectionJob" />
<property name="targetMethod" value="pollTempFolder" />
<property name="concurrent" value="false" />
</bean>
<bean id="simpleTrigger" class="org.springframework.scheduling.quartz.CronTriggerBean">
<property name="jobDetail" ref="dmsFilesDetectionJobDetail" />
<!-- <property name="cronExpression" value="1 * * * * ?" /> -->
<property name="cronExpression" value="0 0/1 * * * ?" />
</bean>
<bean id="fileNameGenerator" class="com.dms.util.FileNameGenerator"/>
<int-file:inbound-channel-adapter id="filesIn" directory="file:${paths.root}" channel="abc" filter="compositeFilter" >
<int:poller id="poller" fixed-delay="5000" />
</int-file:inbound-channel-adapter>
<int:channel id="abc"/>
<bean id="compositeFilter" class="org.springframework.integration.file.filters.CompositeFileListFilter">
<constructor-arg>
<list>
<!-- Ensures that the file is whole before processing it -->
<bean class="com.dms.util.CustomFileFilter"/>
<!-- Ensures files are picked up only once from the directory -->
<bean class="org.springframework.integration.file.filters.AcceptOnceFileListFilter" />
</list>
</constructor-arg>
</bean>
<int-file:outbound-channel-adapter channel="abc" id="filesOut"
directory-expression="#outPathBean.getPath()"
delete-source-files="true" filename-generator="fileNameGenerator" />
<bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter">
<property name="messageConverters">
<list>
<ref bean="jsonMessageConverter"/>
</list>
</property>
</bean>
<bean id="jsonMessageConverter" class="org.springframework.http.converter.json.MappingJackson2HttpMessageConverter">
<!-- <property name="prefixJson" value="false"/> -->
<!-- <property name="objectMapper">
<bean class="com.dms.util.HibernateAwareObjectMapper" />
</property> -->
<property name="supportedMediaTypes" value="application/json"/>
</bean>
<bean id="ftpClientFactory"
class="org.springframework.integration.ftp.session.DefaultFtpSessionFactory">
<property name="host" value="${ftp.ip}"/>
<property name="port" value="${ftp.port}"/>
<property name="username" value="${ftp.username}"/>
<property name="password" value="${ftp.password}"/>
<property name="clientMode" value="0"/>
<property name="fileType" value="2"/>
<property name="bufferSize" value="100000"/>
</bean>
<int-ftp:outbound-channel-adapter id="ftpOutbound"
channel="ftpChannel"
session-factory="ftpClientFactory"
charset="UTF-8"
remote-file-separator="/"
auto-create-directory="true"
remote-directory="."
use-temporary-file-name="true"
auto-startup="true"
/>
<int-ftp:inbound-channel-adapter id="ftpInbound"
channel="ftpChannel"
session-factory="ftpClientFactory"
charset="UTF-8"
local-directory="file:${paths.root}"
delete-remote-files="true"
temporary-file-suffix=".writing"
remote-directory="."
filename-pattern="${file.char}*${file.char}"
preserve-timestamp="true"
auto-startup="true">
<int:poller fixed-rate="1000"/>
</int-ftp:inbound-channel-adapter>
<int:channel id="ftpChannel" />
This is the error I am getting
18:02:34.655 E|LoggingHandler |org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel 'org.springframework.web.context.WebApplicationContext:/DMS/DMS-dispatcher.ftpChannel'.
This exception does not appear every time.
As you can see, I have added auto-startup="true" and have used unique ids for both the channels and the adapters. Please let me know what is wrong here!
Thanks
I just had to deal with this but with a file inbound-channel-adapter. The issue is intermittent, and only at startup. I think adapters with pollers can start pulling in messages before Spring Integration has fully initialized.
My fix is to disable the adapter at startup. The details of the adapter are not so important, other than it has an id and is set to not autostart:
<!--
Read files from an "inbox" directory, placing them on an "inbox" channel...
-->
<int-file:inbound-channel-adapter id="inboxScanner"
directory="$import{inbox}"
auto-create-directory="true"
channel="fileInbox"
prevent-duplicates="false"
auto-startup="false">
<int:poller fixed-rate="$import{inbox.scan.rate.seconds}"
time-unit="SECONDS"
max-messages-per-poll="$import{inbox.max.imports.per.scan}"/>
</int-file:inbound-channel-adapter>
I then tap into Spring's application lifecycle events, and once the application context is finished being created (or refreshed), I tell the adapter to start:
<!--
Only start the scanner after the application has finished initializing...
-->
<int-event:inbound-channel-adapter event-types="org.springframework.context.event.ContextRefreshedEvent"
channel="contextRefreshEvents"/>
<int:publish-subscribe-channel id="contextRefreshEvents"/>
<int:outbound-channel-adapter channel="contextRefreshEvents"
expression="#inboxScanner.start()" />
The "event" components are from spring-integration-event.
First of all, your integration flow isn't clear. But I can see that your issue comes from this:
<int-ftp:inbound-channel-adapter id="ftpInbound" channel="ftpChannel"
There is really no one subscribed to that ftpChannel SubscribableChannel.
So, that adapter starts its work and sends a message to that channel, but... Dispatcher has no subscribers.
Try to fix that issue and figure out how to go ahead.
EDIT
Not sure what you found in my answer that was so bad it forced you to downvote it. Anyway.
There was a phase problem before but, as you can see, it has been fixed in version 4.1.
So, to get an immediate fix right now, set the phase on the inbound channel adapter:
phase="0x7fffffff" <!-- Integer.MAX_VALUE -->
By default the phase is 0, so the inbound channel adapter may start before the outbound channel adapter has subscribed.
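Applied to the adapter from the question it would look something like this (other attributes abridged):
<int-ftp:inbound-channel-adapter id="ftpInbound"
        channel="ftpChannel"
        session-factory="ftpClientFactory"
        local-directory="file:${paths.root}"
        remote-directory="."
        phase="0x7fffffff"
        auto-startup="true">
    <int:poller fixed-rate="1000"/>
</int-ftp:inbound-channel-adapter>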
Or just upgrade to the latest Spring Integration!
I have set up a simple file integration that reads files from a directory and transfers them via SFTP to a remote system. It is working fine; however, I need to handle archiving after a successful transfer and retrying up to n times after an unsuccessful transfer. Any advice on how this can be done using Spring Integration components?
<bean id="sftpSessionFactory" class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
<property name="host" value="sftp.host"/>
<property name="port" value="22"/>
<property name="user" value="user1"/>
<property name="password" value="pass1"/>
</bean>
<file:inbound-channel-adapter channel="fileInputChannel"
directory="C:/FileServer"
prevent-duplicates="true"
filename-pattern="*.pdf">
<integration:poller fixed-rate="5000"/>
</file:inbound-channel-adapter>
<sftp:outbound-channel-adapter id="sftpOutboundAdapter"
session-factory="sftpSessionFactory"
channel="fileInputChannel"
charset="UTF-8"
remote-directory=".">
<sftp:request-handler-advice-chain>
<bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
<property name="onSuccessExpression" value="payload.renameTo('C:/succeeded/' + payload.name)"/>
<property name="successChannel" ref="afterSuccessDeleteChannel"/>
<property name="onFailureExpression" value="payload.renameTo('C:/failed/' + payload.name)"/>
<property name="failureChannel" ref="afterFailRenameChannel" />
</bean>
<bean id="retryAdvice" class="org.springframework.integration.handler.advice.RequestHandlerRetryAdvice">
<property name="retryTemplate" ref="retryTemplate"/>
</bean>
</sftp:request-handler-advice-chain>
</sftp:outbound-channel-adapter>
Thanks
See the retry-and-more sample. It shows how to use a retry advice and how to take different actions based on success/failure.
For both, you would declare the retry advice after the expression-evaluating advice, so the expression-evaluating advice is only invoked once retries are exhausted.
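The retryTemplate referenced in the question's advice chain isn't shown; a minimal sketch, assuming up to three attempts with a fixed 5-second back-off (both values are placeholders), would be:
<bean id="retryTemplate" class="org.springframework.retry.support.RetryTemplate">
    <property name="retryPolicy">
        <!-- give up after 3 attempts in total -->
        <bean class="org.springframework.retry.policy.SimpleRetryPolicy">
            <property name="maxAttempts" value="3"/>
        </bean>
    </property>
    <property name="backOffPolicy">
        <!-- wait 5 seconds between attempts -->
        <bean class="org.springframework.retry.backoff.FixedBackOffPolicy">
            <property name="backOffPeriod" value="5000"/>
        </bean>
    </property>
</bean>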