inbound sftp channel adapter custom filter not accepting same file again - spring-integration

I have a very simple custom filter for an inbound SFTP channel adapter: it just checks whether the file extension is in a list of accepted extensions. If so, it returns true and the file should be processed.
The first time a file is processed, everything works fine. But if the same file is dropped on my SFTP server again, it reaches the filter, which returns true (meaning the file is accepted), yet the message is never put on the downstream queue. Here is what my sample config looks like:
<int-sftp:inbound-channel-adapter id="sftpAdapter"
        channel="ftpChannel"
        session-factory="sftpSessionFactory"
        local-directory="c:\\temp"
        remote-directory="//test//inbound"
        remote-file-separator="/"
        auto-create-local-directory="true"
        delete-remote-files="true"
        filter="customfilter"
        preserve-timestamp="true">
    <int:poller cron="0/5 * * * * *" max-messages-per-poll="1"/>
</int-sftp:inbound-channel-adapter>

That's because there is one more FileListFilter in the AbstractInboundFileSynchronizingMessageSource:
private volatile FileListFilter<File> localFileListFilter = new AcceptOnceFileListFilter<File>();
Since your filter="customfilter" already takes care of the duplicate logic, you should configure a local-filter that accepts everything:
<int-sftp:inbound-channel-adapter id="sftpAdapter"
        channel="ftpChannel"
        ....
        local-filter="acceptAllFileFilter"/>

<bean id="acceptAllFileFilter" class="org.springframework.integration.file.filters.AcceptAllFileListFilter"/>
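Why the second drop never reaches the queue can be reproduced without Spring: an accept-once filter remembers every name it has already passed, so a later poll for the same file is silently swallowed. A minimal standalone sketch of that semantic (the class below is a hypothetical stand-in, not Spring's actual AcceptOnceFileListFilter):

```java
import java.util.HashSet;
import java.util.Set;

// Simplified stand-in for an accept-once local filter:
// it passes a file name only the first time it is seen.
class AcceptOnceFilter {
    private final Set<String> seen = new HashSet<>();

    boolean accept(String fileName) {
        // Set.add() returns false if the name was already present,
        // so a re-dropped file is rejected on the second poll.
        return seen.add(fileName);
    }
}

public class FilterDemo {
    public static void main(String[] args) {
        AcceptOnceFilter localFilter = new AcceptOnceFilter();
        System.out.println(localFilter.accept("report.csv")); // first poll
        System.out.println(localFilter.accept("report.csv")); // same file again
    }
}
```

Swapping in an accept-all local filter, as the answer suggests, removes exactly this memory.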

Related

Spring Integration File Inbound Adapter Scan Directory Each Poll

I would like to enhance my current file inbound channel adapter so that it re-scans the directory and refreshes the file listing in the queue on each poll.
Below is the XML config for my current file inbound channel adapter:
<int-file:inbound-channel-adapter id="hostFilesOut" channel="hostFileOutChannel"
        directory="${hostfile.dir.out}" prevent-duplicates="false"
        filename-regex="${hostfile.out.filename-regex}">
    <int:poller id="poller"
            cron="${poller.cron:0,4,8,12,16,20,24,28,32,36,40,44,48,52,56 * * * * *}"
            max-messages-per-poll="1" />
</int-file:inbound-channel-adapter>
I have tried to create a custom scanner to read the files. However, plugging the scanner into the file inbound channel adapter causes the cron configuration to stop working.
Can someone give me advice on this, or is there another way to achieve the same goal?
Thank you.
The FileReadingMessageSource already has such an option:
/**
 * Optional. Set this flag if you want to make sure the internal queue is
 * refreshed with the latest content of the input directory on each poll.
 * <p>
 * By default this implementation will empty its queue before looking at the
 * directory again. In cases where order is relevant it is important to
 * consider the effects of setting this flag. The internal
 * {@link java.util.concurrent.BlockingQueue} that this class is keeping
 * will more likely be out of sync with the file system if this flag is set
 * to <code>false</code>, but it will change more often (causing expensive
 * reordering) if it is set to <code>true</code>.
 *
 * @param scanEachPoll
 *            whether or not the component should re-scan (as opposed to not
 *            rescanning until the entire backlog has been delivered)
 */
public void setScanEachPoll(boolean scanEachPoll) {
However, I'm surprised that we don't expose that option in the XML configuration, although it has been there since day one: https://jira.spring.io/browse/INT-583.
Here is a Doc on the matter.
As a workaround you can create a FileReadingMessageSource bean and use it as a ref in the <int:inbound-channel-adapter>. Another way to proceed is annotation or Java DSL configuration; you can find some samples in the Doc mentioned above.
For the XML support, please raise a JIRA and we will add such an XSD definition. Also, don't hesitate to contribute on the matter!
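The behavioural difference the Javadoc describes can be sketched in plain Java. The class below is a hypothetical stand-in, not the real FileReadingMessageSource; its internal queue is ordered by file name so the "reordering" effect of scanEachPoll is visible:

```java
import java.util.HashSet;
import java.util.List;
import java.util.PriorityQueue;
import java.util.Queue;
import java.util.Set;
import java.util.function.Supplier;

// Hypothetical sketch of the scanEachPoll semantics: the source keeps an
// internal ordered queue of file names fed from a directory listing.
class PollingSource {
    private final Queue<String> toBeReceived = new PriorityQueue<>();
    private final Set<String> alreadyQueued = new HashSet<>();
    private final Supplier<List<String>> directoryListing; // stands in for the scanner
    private boolean scanEachPoll;

    PollingSource(Supplier<List<String>> directoryListing) {
        this.directoryListing = directoryListing;
    }

    void setScanEachPoll(boolean scanEachPoll) {
        this.scanEachPoll = scanEachPoll;
    }

    String receive() {
        // Default: re-list the directory only once the backlog is drained.
        // scanEachPoll=true: refresh the listing on every poll, so a newly
        // arrived file can be reordered ahead of the existing backlog.
        if (scanEachPoll || toBeReceived.isEmpty()) {
            for (String file : directoryListing.get()) {
                if (alreadyQueued.add(file)) {
                    toBeReceived.add(file);
                }
            }
        }
        return toBeReceived.poll();
    }
}
```

With the default, a file arriving mid-backlog waits until the queue empties; with scanEachPoll set, it is picked up (and sorted in) on the very next poll.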

Spring Integration File Outbound Channel Adapter and file last modified date

I'm trying to make a File Outbound Channel Adapter write a file whose last-modified attribute is set to a custom value instead of the current system time.
According to the documentation (http://docs.spring.io/spring-integration/docs/4.3.11.RELEASE/reference/html/files.html#file-timestamps), I'm supposed to set the preserve-timestamp attribute to true on the outbound adapter and set the file_setModified header to the desired timestamp in the messages.
Anyway, I have made several attempts without success.
This is a code snippet to show what I'm doing right now:
<int:inbound-channel-adapter channel="msg.channel" expression="'Hello'">
    <int:poller fixed-delay="1000"/>
</int:inbound-channel-adapter>

<int:header-enricher input-channel="msg.channel" output-channel="msgEnriched.channel">
    <int:header name="file_setModified" expression="new Long(1473897600)"/>
</int:header-enricher>

<int-file:outbound-channel-adapter id="msgEnriched.channel"
        preserve-timestamp="true"
        directory="/tmp/foo"/>
What's wrong with that?
(Using Spring Integration 4.3.11.)
The timestamp value is overridden if your payload is a File:
Object timestamp = requestMessage.getHeaders().get(FileHeaders.SET_MODIFIED);
...
if (payload instanceof File) {
    resultFile = handleFileMessage((File) payload, tempFile, resultFile);
    timestamp = ((File) payload).lastModified();
}
...
if (this.preserveTimestamp) {
    if (timestamp instanceof Number) {
        resultFile.setLastModified(((Number) timestamp).longValue());
    }
}
To avoid that override and really benefit from file_setModified, you should convert the File from the <int:inbound-channel-adapter> to its InputStream:
<transformer expression="new java.io.FileInputStream(payload)"/>
before the <int-file:outbound-channel-adapter>.
The documentation warns about that though:
For File payloads, this will transfer the timestamp from the inbound file to the outbound (regardless of whether a copy was required)
UPDATE
I have just tested your use case, and in my /tmp/out directory all the files have the proper custom last-modified time.
What am I missing?
Maybe that 1473897600 (which, taken as milliseconds, falls in 1970) is wrong for your operating system?
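One detail worth checking in this use case: File.setLastModified() takes milliseconds since the epoch, so a header value of 1473897600 lands in January 1970, while 1473897600 * 1000 gives the intended date in September 2016. A small standalone illustration:

```java
import java.io.File;
import java.io.IOException;

public class TimestampDemo {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("demo", ".txt");
        f.deleteOnExit();

        // setLastModified() expects MILLISECONDS since the epoch.
        // Passing 1473897600 directly would set a date in January 1970;
        // multiplying by 1000 yields the intended 2016-09-15.
        f.setLastModified(1473897600L * 1000);
        System.out.println(f.lastModified());
    }
}
```

The same unit mismatch applies to the file_setModified header, since it ends up in exactly this call.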
UPDATE
OK! The problem is that preserve-timestamp isn't propagated to the target component when that XML is parsed: https://jira.spring.io/browse/INT-4324
The workaround for your use case is:
<int:outbound-channel-adapter id="msgEnriched.channel">
    <bean class="org.springframework.integration.file.FileWritingMessageHandler">
        <constructor-arg value="/tmp/foo"/>
        <property name="preserveTimestamp" value="true"/>
        <property name="expectReply" value="false"/>
    </bean>
</int:outbound-channel-adapter>
instead of that <int-file:outbound-channel-adapter> definition.

inbound-channel-adapter - How to update row field on failure?

I have an integration that starts with a standard database query and updates a state column to indicate that the row has been processed successfully. It works.
But if the data cannot be processed and an exception is raised, the state is not updated as intended. I would like to update my database row with a 'KO' state in that case, so the same row won't fail over and over.
Is there a way to provide a second query to execute when the integration fails?
It seems to me a very standard way of doing things, but I couldn't find a simple way to do it. I could catch the exception at every step of the integration and update the database, but that creates coupling, so there should be another solution.
I tried a lot of Google searches but could not find anything, although I'm pretty sure the answer is out there.
Just in case, here is my XML configuration for the database query (nothing fancy):
<int-jdbc:inbound-channel-adapter auto-startup="true" data-source="datasource"
        query="select * FROM MyTable where STATE='ToProcess'"
        channel="stuffTransformerChannel"
        update="UPDATE MyTable SET STATE='OK' where id in (:id)"
        row-mapper="myRowMapper" max-rows-per-poll="1">
    <int:poller fixed-rate="1000">
        <int:transactional />
    </int:poller>
</int-jdbc:inbound-channel-adapter>
I'm using spring-integration version 4.0.0.RELEASE
Since you are within a transaction, it is normal behaviour that a rollback occurs and your DB returns to its clean state.
It is a classical pattern to deal with such data at the application level in a case like this, rather than with some built-in tool. That's why we don't provide any on-error-update option: it couldn't cover every use case.
Since you are going to update the row anyway, you should react to the rollback event and do the update within a new transaction. However, it must happen on the same thread, to prevent the next polling task from fetching the same row.
For this purpose we provide a transaction-synchronization-factory feature:
<int-jdbc:inbound-channel-adapter max-rows-per-poll="1">
    <int:poller fixed-rate="1000" max-messages-per-poll="1">
        <int:transactional synchronization-factory="syncFactory"/>
    </int:poller>
</int-jdbc:inbound-channel-adapter>

<int:transaction-synchronization-factory id="syncFactory">
    <int:after-rollback channel="stuffErrorChannel"/>
</int:transaction-synchronization-factory>

<int-jdbc:outbound-channel-adapter
        query="UPDATE MyTable SET STATE='KO' where id in (:payload[id])"
        channel="stuffErrorChannel">
    <int-jdbc:request-handler-advice-chain>
        <tx:advice id="requiresNewTx">
            <tx:attributes>
                <tx:method name="handle*Message" propagation="REQUIRES_NEW"/>
            </tx:attributes>
        </tx:advice>
    </int-jdbc:request-handler-advice-chain>
</int-jdbc:outbound-channel-adapter>
Hope that is clear.
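Stripped of Spring and JDBC, the pattern above boils down to: roll the main transaction back, then let the same thread mark the row 'KO' in a separate unit of work so the poller stops re-fetching it. A hypothetical in-memory sketch (all names invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the compensating-update pattern: the polling
// transaction rolls back on failure, then the same thread marks the row
// 'KO' in a separate unit of work (conceptually REQUIRES_NEW).
class StatePoller {
    final Map<Integer, String> table = new HashMap<>(); // id -> STATE

    void poll(int id, Runnable processing) {
        String previous = table.get(id);
        try {
            table.put(id, "OK"); // optimistic update inside the transaction
            processing.run();    // downstream integration work
        } catch (RuntimeException ex) {
            table.put(id, previous); // rollback restores the original state
            table.put(id, "KO");     // after-rollback compensating update,
                                     // done on the same thread
        }
    }
}
```

The transaction-synchronization-factory plus the REQUIRES_NEW advice in the XML above is the Spring Integration way of wiring exactly this catch branch.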

aggregator release strategy depend on another service activator running

I understand how aggregating based on size works, but I also want the release strategy to depend on whether another step in the pipeline is still running. The idea is that I move files to a certain dir "source", aggregate enough files, then move them from "source" to "stage", and then process the staged files. While this process is running I don't want to put more files into "stage", but I do want to continue adding files to the "source" folder (that part is handled by a dispatcher channel connected to a file inbound adapter before the aggregator).
<int:aggregator id="filesBuffered"
        input-channel="sourceFilesProcessed"
        output-channel="stagedFiles"
        release-strategy-expression="size() == 10"
        correlation-strategy-expression="'mes-group'"
        expire-groups-upon-completion="true"/>

<int:channel id="stagedFiles" />

<int:service-activator input-channel="stagedFiles"
        output-channel="readyForMes"
        ref="moveToStage"
        method="move" />
So, as you can see, I don't want to release the aggregated messages while an existing instance of the moveToStage service activator is running.
I thought about making the stagedFiles channel a queue channel, but that doesn't seem right, because I want the files passed to moveToStage as a Collection, not one file at a time, which I assume is what a queue channel would give me. Instead I want to reach a threshold, e.g. 10 files, pass those to stagedFiles so moveToStage can process them, and in the meantime have the aggregator keep aggregating files and release them all once that step is done.
Thanks
I suggest you have a flag as an AtomicBoolean bean, flip it from your moveToStage#move, and check its state in the release strategy:
release-strategy-expression="size() >= 10 and #stagingFlag.get()"
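How that expression interacts with the flag can be sketched in plain Java. Only stagingFlag and moveToStage come from the answer; everything else below is hypothetical scaffolding:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the release strategy "size() >= 10 and #stagingFlag.get()":
// the aggregator keeps buffering while moveToStage is busy.
class StagingDemo {
    final AtomicBoolean stagingFlag = new AtomicBoolean(true); // true = stage is free
    final java.util.List<String> group = new java.util.ArrayList<>();

    boolean canRelease() {
        // Mirrors the SpEL release-strategy-expression
        return group.size() >= 10 && stagingFlag.get();
    }

    void moveToStage(List<String> files) {
        stagingFlag.set(false);    // block further releases while moving
        try {
            // ... copy the files from "source" to "stage" here ...
        } finally {
            stagingFlag.set(true); // stage is free again, releases may resume
        }
    }
}
```

Note the `>=` rather than `==` in the expression: while the flag is false the group can grow past 10, and an exact-size check would then never fire.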

Spring Integration: create dynamic directories using ftp:outbound-adapter

We would like to be able to change the FTP directory on a channel after the channel has been created. In our particular use case, the subdirectory for an FTP put is determined at runtime. For example, we have daily reports uploaded by users, and they should be stored on the FTP server in day-wise folders, e.g. test/reports/27-11-2012/abc.pdf, test/reports/28-11-2012/abc.pdf, etc.
Somewhat like this:
<int-ftp:outbound-channel-adapter id="ftpOutbound"
        channel="ftpChannel"
        remote-directory="remoteDirectoryPath"
        session-factory="ftpClientFactory" />
remoteDirectoryPath should be appended at runtime.
Can anybody give us a solution?
Use remote-directory-expression
#beanName.method() is currently not available in this expression; you will need to use SpEL for the directory generation...
"'test' + T(java.io.File).separator + new java.text.SimpleDateFormat('yyyyMMdd').format(new java.util.Date())"
You can assign a directory/path at runtime in the ftp:outbound-channel-adapter. I am copying my configuration here; you can check it out. This is working for me.
XML side:
<int-ftp:outbound-channel-adapter id="ftpOutboundAdapter"
        session-factory="ftpSessionFactory"
        channel="sftpOutboundChannel"
        remote-directory-expression="#targetDir.get()"
        remote-filename-generator="fileNameGenerator"/>

<bean id="targetDir" class="java.util.concurrent.atomic.AtomicReference">
    <constructor-arg value="D:\PATH\"/>
</bean>
In this block, remote-directory-expression="#targetDir.get()" is what sets the directory/path at runtime.
Java side:
AtomicReference<String> targetDir = (AtomicReference<String>) appContext.getBean("targetDir", AtomicReference.class);
targetDir.set("E:\\PATH\\");
This way you can set your path.
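Putting both answers together, the runtime day-wise directory from the question can be computed with SimpleDateFormat and published through the AtomicReference that the remote-directory-expression reads. The paths are assumed from the question; note the lowercase 'dd', since an uppercase 'DD' would mean day-of-year:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.concurrent.atomic.AtomicReference;

public class RemoteDirDemo {
    public static void main(String[] args) {
        // 'dd-MM-yyyy' matches folders like test/reports/27-11-2012;
        // 'DD' is day-of-year, a common SimpleDateFormat pitfall.
        SimpleDateFormat dayFolder = new SimpleDateFormat("dd-MM-yyyy");

        // Stands in for the "targetDir" bean the expression #targetDir.get() reads.
        AtomicReference<String> targetDir = new AtomicReference<>("test/reports/");
        targetDir.set("test/reports/" + dayFolder.format(new Date()) + "/");

        System.out.println(targetDir.get());
    }
}
```

A scheduled job (or the code that produces the daily report) would call targetDir.set(...) once per day, and every subsequent FTP put lands in that day's folder.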
