I have an integration that starts with a standard database query, and it updates the state in the database to indicate that the integration has completed successfully. It works.
But if the data cannot be processed and an exception is raised, the state is not updated. In that case I would like to update my database row with a 'KO' state so the same row won't fail over and over.
Is there a way to provide a second query to execute when the integration fails?
It seems to me that this is a very standard way of doing things, but I couldn't find a simple way to do it. I could catch the exception in every step of the integration and update the database, but that creates coupling, so there should be another solution.
I did a lot of Google searching but could not find anything, although I'm pretty sure the answer is out there.
Just in case, here is my XML configuration for the database query (nothing fancy):
<int-jdbc:inbound-channel-adapter auto-startup="true" data-source="datasource"
query="select * FROM MyTable where STATE='ToProcess')"
channel="stuffTransformerChannel"
update="UPDATE MyTable SET STATE='OK' where id in (:id)"
row-mapper="myRowMapper" max-rows-per-poll="1">
<int:poller fixed-rate="1000">
<int:transactional />
</int:poller>
</int-jdbc:inbound-channel-adapter>
I'm using spring-integration version 4.0.0.RELEASE
Since you are within a transaction, it is normal behaviour that a rollback is performed and your DB returns to the clean state.
And it is a classical pattern to deal with that data at the application level in such a case, rather than with some built-in tool. That's why we don't provide any on-error-update option: it can't cover every use case.
Since you are going to update the row anyway, you should do something on the rollback event, and do it within a new transaction. However, it should happen on the same thread, to prevent the second polling task from fetching the same row.
For this purpose we provide a transaction-synchronization-factory feature:
<int-jdbc:inbound-channel-adapter max-rows-per-poll="1">
<int:poller fixed-rate="1000" max-messages-per-poll="1">
<int:transactional synchronization-factory="syncFactory"/>
</int:poller>
</int-jdbc:inbound-channel-adapter>
<int:transaction-synchronization-factory id="syncFactory">
<int:after-rollback channel="stuffErrorChannel"/>
</int:transaction-synchronization-factory>
<int-jdbc:outbound-channel-adapter data-source="datasource"
query="UPDATE MyTable SET STATE='KO' where id in (:payload[id])"
channel="stuffErrorChannel">
<int-jdbc:request-handler-advice-chain>
<tx:advice id="requiresNewTx">
<tx:attributes>
<tx:method name="handle*Message" propagation="REQUIRES_NEW"/>
</tx:attributes>
</tx:advice>
</int-jdbc:request-handler-advice-chain>
</int-jdbc:outbound-channel-adapter>
Hope I am clear.
I have a spring-integration flow that starts with a file inbound-channel-adapter activated by a transactional poller (tx is handled by Atomikos).
The text in the file is processed and the message goes down through the flow until it gets sent to one of the JMS queues (JMS outbound-channel-adapter).
In the middle, there are some database writes within a nested transaction.
The system is meant to run 24/7.
It happens that the single-message flow progressively slows down, and when I investigated, I found that the stage responsible for the increasing delay is the read from the filesystem.
Below is the first portion of the integration flow:
<logging-channel-adapter id="logger" level="INFO"/>
<transaction-synchronization-factory id="sync-factory">
<after-commit expression="payload.delete()" channel="after-commit"/>
</transaction-synchronization-factory>
<service-activator input-channel="after-commit" output-channel="nullChannel" ref="tx-after-commit-service"/>
<!-- typeb inbound from filesystem -->
<file:inbound-channel-adapter id="typeb-file-inbound-adapter"
auto-startup="${fs.typeb.enabled:true}"
channel="typeb-inbound"
directory="${fs.typeb.directory.in}"
filename-pattern="${fs.typeb.filter.filenamePattern:*}"
prevent-duplicates="${fs.typeb.preventDuplicates:false}" >
<poller id="poller"
fixed-delay="${fs.typeb.polling.millis:1000}"
max-messages-per-poll="${fs.typeb.polling.numMessages:-1}">
<transactional synchronization-factory="sync-factory"/>
</poller>
</file:inbound-channel-adapter>
<channel id="typeb-inbound">
<interceptors>
<wire-tap channel="logger"/>
</interceptors>
</channel>
I read something about issues related to the prevent-duplicates option, which stores a list of seen files, but that is not the case here because I turned it off.
I don't think it is related to the filter (filename-pattern), because the expression I use in my config (*.RCV) is cheap to apply and the input folder does not contain many files (fewer than 100) at any one time.
Still, there is something that gradually makes the read from the filesystem slower and slower over time, from a few millis to over 3 seconds within a few days of up-time.
Any hints?
You should remove or move files after they have been processed; otherwise the whole directory has to be rescanned on every poll.
In newer versions, you can use a WatchServiceDirectoryScanner which is more efficient.
But it's still best practice to clean up old files.
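For reference, a minimal sketch of that option applied to your adapter could look like the following; the use-watch-service and watch-events attributes are available from Spring Integration 4.2 onwards (please check them against the XSD of your version):
<file:inbound-channel-adapter id="typeb-file-inbound-adapter"
    channel="typeb-inbound"
    directory="${fs.typeb.directory.in}"
    filename-pattern="${fs.typeb.filter.filenamePattern:*}"
    use-watch-service="true"
    watch-events="CREATE">
    <poller fixed-delay="${fs.typeb.polling.millis:1000}">
        <transactional synchronization-factory="sync-factory"/>
    </poller>
</file:inbound-channel-adapter>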
Finally I got the solution.
The issue was related to the specific version of Spring I was using (4.3.4), which is affected by a bug I had not yet discovered.
The problem concerns DefaultConversionService and its use of the converterCache (see https://jira.spring.io/browse/SPR-14929 for details).
Upgrading to a more recent version resolved it.
So I think I need to extend the current redis-sink provided in spring-xd to write into a Redis capped list, rather than creating a new one. Unfortunately it seems to get worse, as I will have to go deeper into spring-integration and further back into spring-data (spring-data-redis), because the whole redis-sink seems to be based on the generic pub/sub abstraction over Redis. Or is there some type of handler that can be defined once the message arrives at the channel handler?
In order to have the effect of a capped list when I push data to Redis, I need to execute both a Redis "push" and then an "rtrim", as outlined here: http://redis.io/topics/data-types-intro. If I am to build a custom spring-integration / spring-data module, I believe I see support for the "ltrim" but not the "rtrim" operation here: http://docs.spring.io/spring-data/redis/docs/1.7.0.RC1/api/
Any advice on how/where to start, or an easier approach, would be appreciated.
Actually, even Redis doesn't have such an RTRIM command. We don't need it, because we can achieve the same behavior with negative indexes for LTRIM:
start and end can also be negative numbers indicating offsets from the end of the list, where -1 is the last element of the list, -2 the penultimate element and so on.
I think you should use <redis:store-outbound-channel-adapter> and add something like this into its configuration:
<int-redis:request-handler-advice-chain>
<beans:bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
<beans:property name="onSuccessExpression" value="#redisTemplate.boundListOps(${keyExpression}).trim(1, -1)"/>
</beans:bean>
</int-redis:request-handler-advice-chain>
This removes the oldest element in the Redis list.
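For completeness, a sketch of how the whole adapter might be wired up (untested; the channel name, the key and the redisTemplate bean name are assumptions you would adapt to your own setup):
<int-redis:store-outbound-channel-adapter id="cappedListAdapter"
    channel="redisStoreChannel"
    collection-type="LIST"
    redis-template="redisTemplate"
    key="myCappedList">
    <int-redis:request-handler-advice-chain>
        <beans:bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
            <beans:property name="onSuccessExpression"
                value="#redisTemplate.boundListOps('myCappedList').trim(1, -1)"/>
        </beans:bean>
    </int-redis:request-handler-advice-chain>
</int-redis:store-outbound-channel-adapter>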
I am building a system that calls many different web services, and I wish to generate a report of all the errors returned after calling them.
For that, I use an <int:aggregator> to aggregate messages from the error channel, but I don't know which release-strategy to use, since I would like to aggregate all messages from the error channel. So how can I configure <int:aggregator> to aggregate all messages?
<int:aggregator
correlation-strategy-expression="'${error.msg.correlation.key}'"
input-channel="ws.rsp.error.channel"
output-channel="outboundMailChannel"
ref="errorAggregator"
method="generateErrorReport"
release-strategy-expression="false"
group-timeout="2000"
expire-groups-upon-completion="true"/>
<int:service-activator
input-channel="outboundMailChannel"
ref="errorMsgAgregatedActivator"
method="handleMessage"
/>
And the activator:
@ServiceActivator
public void handleMessage(Message<Collection<Object>> errorList) {
Collection<Object> payload = errorList.getPayload();
System.out.println("error list: " + payload.toString());
}
Thanks.
Aggregation either needs an appropriate release strategy, or you can simply use release-strategy-expression="false" (never release), and use a group-timeout to release whatever's in the group after some time.
You may want to use a constant correlation, correlation-strategy-expression="'foo'", and set expire-groups-upon-completion="true" so a new group starts with the next message.
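A minimal sketch of that configuration, reusing the channels and bean from your snippet (the constant key and the timeout are placeholders), could look like this; note that send-partial-result-on-expiry="true" is needed here, if I recall correctly, so that the timed-out group is sent to the output channel rather than discarded:
<int:aggregator
    input-channel="ws.rsp.error.channel"
    output-channel="outboundMailChannel"
    ref="errorAggregator"
    method="generateErrorReport"
    correlation-strategy-expression="'foo'"
    release-strategy-expression="false"
    group-timeout="2000"
    send-partial-result-on-expiry="true"
    expire-groups-upon-completion="true"/>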
I have the below configuration for my jdbc-inbound-adapter:
<si-jdbc:inbound-channel-adapter id="jdbcInboundAdapter"
channel="queueChannel" data-source="myDataSource"
auto-startup="true"
query="SELECT * FROM STAGE_TABLE WHERE STATUS='WAITING' FOR UPDATE SKIP LOCKED"
update="UPDATE STAGE_TABLE SET STATUS='IN_PROGRESS' WHERE ID IN (:Id)"
max-rows-per-poll="100" row-mapper="rowMapper"
update-per-row="true">
<si:poller fixed-rate="5000">
<si:advice-chain>
<ref bean="txAdvice"/>
<ref bean="inboundAdapterConfiguration"/>
</si:advice-chain>
</si:poller>
</si-jdbc:inbound-channel-adapter>
<tx:advice id="txAdvice">
<tx:attributes>
<tx:method name="get*" read-only="false"/>
<tx:method name="*"/>
</tx:attributes>
</tx:advice>
My question is: would both the select and update statements be executed in the same transaction?
The spring-integration documentation does not clearly specify what happens with the transaction when an advice-chain is used. (I am using spring-integration-jdbc-2.2.0.RC2.jar.)
Please see section 18.1.1 Poller Transaction Support:
http://docs.spring.io/spring-integration/docs/2.0.0.RC1/reference/html/transactions.html
You are using a very old version (and only a release candidate at that, not a GA release). The current version is 3.0.2.RELEASE and the latest 2.2.x is 2.2.6.RELEASE. Please upgrade to one of those.
http://projects.spring.io/spring-integration/
Yes, it's all done in the same transaction.
Are you referring to the old documentation too? The current documentation says
"A very important feature of the poller for JDBC usage is the option to wrap the poll operation in a transaction,..".
The transaction is started, the message source is invoked (to get the results) and the downstream flow all occurs in the transaction (unless you hand off to another thread, in which case the transaction will commit at that time).
In fact, the update is done right after the select (and before the message is sent), but the commit doesn't occur until the downstream flow is complete.
Your channel is called queueChannel; if it really is a queue channel, that means that the transaction will commit as soon as the message is stored in the queue.
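If you want the commit to cover the whole downstream flow instead, a minimal sketch (keeping your queueChannel bean name purely for illustration) is to declare it as a plain direct channel, i.e. without a queue element:
<!-- no <si:queue/> sub-element: a direct channel keeps the downstream flow
     on the poller's thread, so it all runs within the poller's transaction -->
<si:channel id="queueChannel"/>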
If you feel documentation improvements are required, please open a JIRA Issue.
We have a clustered environment with 2 nodes on Oracle WebLogic 10.3.6 server, and load balancing is round-robin.
I have a service which goes and gets messages from an external system and puts them in the database (Oracle DB).
I am using a jdbc-inbound-adapter to convert these messages and pass them to the channels.
To have a message processed only once, I am planning to add a column (NODE_NAME) to the DB table: the service which gets the message from the external system also populates that column with the node name (weblogic.Name). If I then specify the NODE_NAME in the SELECT query of the jdbc-inbound-adapter, each message would be processed only once.
i.e. if Service1 (on Node1) saves the message in the DB, then inbound-adapter1 (on Node1) passes the message to the channel.
Example:
<si-jdbc:inbound-channel-adapter id="jdbcInboundAdapter"
channel="queueChannel" data-source="myDataSource"
auto-startup="true"
query="SELECT * FROM STAGE_TABLE WHERE STATUS='WAITING' and NODE_NAME = '${weblogic.Name}'"
update="UPDATE STAGE_TABLE SET STATUS='IN_PROGRESS' WHERE ID IN (:Id)"
max-rows-per-poll="100" row-mapper="rowMapper"
update-per-row="true">
<si:poller fixed-rate="5000">
<si:advice-chain>
<ref bean="txAdvice"/>
<ref bean="inboundAdapterConfiguration"/>
</si:advice-chain>
</si:poller>
</si-jdbc:inbound-channel-adapter>
Is this a good design?
A second approach would be to use the SELECT below in the jdbc-inbound-adapter, but I am guessing this would fail since I am using an Oracle database.
SELECT * FROM TABLE WHERE STATUS='WAITING' FOR UPDATE SKIP LOCKED
It would be great if someone could point me in the right direction.
Actually, FOR UPDATE SKIP LOCKED is precisely an Oracle feature - https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2060739900346201280
If you are in doubt, here is the code from Spring Integration: https://github.com/spring-projects/spring-integration/blob/master/spring-integration-jdbc/src/main/java/org/springframework/integration/jdbc/store/channel/OracleChannelMessageStoreQueryProvider.java#L39