int-sftp:inbound-channel-adapter -- delete local file if not valid - spring-integration

I am using int-sftp:inbound-channel-adapter to download valid files from an SFTP server to a local directory. I need to delete the local file if it is rejected by my custom filter. Can this be achieved via configuration, or do I need to implement it in code? If so, is there a sample out there?

You would need to do the deletion in your custom filter. Use File.delete().
But, of course, it would be better to use a custom remote filter, instead of a custom local filter, to avoid fetching the invalid file (unless you need to look at the contents).
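If you do need to look at the contents, a minimal sketch of a local filter that deletes rejected files could look like this (the class name and validation logic are placeholders):

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import org.springframework.integration.file.filters.FileListFilter;

public class DeletingValidationFilter implements FileListFilter<File> {

    @Override
    public List<File> filterFiles(File[] files) {
        List<File> accepted = new ArrayList<>();
        for (File file : files) {
            if (isValid(file)) {
                accepted.add(file);
            }
            else {
                file.delete(); // discard the rejected local copy
            }
        }
        return accepted;
    }

    private boolean isValid(File file) {
        return file.length() > 0; // placeholder for your validation logic
    }
}

Reference a bean of this type from the local-filter attribute of the int-sftp:inbound-channel-adapter.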

Related

Add prefix or suffix to filename while streaming the file using GET gateway

I want to add a temporary prefix or suffix to the file name while streaming the file from a remote directory using SFTP.
I have tried adding temporaryFileSuffix to the outboundGateway while streaming the file, but it does not add any suffix. Later I checked and found it is documented that:
"Set the temporary suffix to use when transferring files to the remote system."
.handle(Sftp.outboundGateway(sftpSessionFactory(), GET,
        "payload.remoteDirectory + payload.filename")
    .options(STREAM)
    .temporaryFileSuffix("_reading"))
Do I need to rename the file using the rename gateway, or is there a better way to do it?
Your question is not clear - do you mean you want to copy it with a temporary name locally? Or, do you mean you want to rename it on the remote server before copying?
If the former, use the localFilenameGeneratorExpression.
If the latter, you would have to use the MV gateway first.
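For the former case, a minimal sketch could look like the following (the local directory and suffix are just examples; note it uses a plain GET rather than STREAM, since a streamed file is never written locally):

.handle(Sftp.outboundGateway(sftpSessionFactory(), GET,
        "payload.remoteDirectory + payload.filename")
    .localDirectory(new File("/tmp/downloads"))
    .localFilenameExpression("#remoteFileName + '_reading'"))

The #remoteFileName variable is made available to the local-filename expression, so the file lands locally with the extra suffix.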

Azure logic app: how to check whether a specific file on an SFTP site has changed

I need to check an SFTP site to which multiple files are uploaded into one folder. I am writing logic apps to process the files; each logic app handles one file, because each file format is different. The problem is that the SFTP trigger can only detect a change to ANY file in the folder. So if a file changes, the logic app for that file will run, but the OTHER logic apps will run as well, which is not desired.
I have tried using a recurrence trigger followed by an SFTP get-file-content-by-path action, but that fails if the specified file does not exist. What I want is for the logic app to simply quit, or better, not be triggered at all.
How do I trigger a logic app only if a particular file is updated/uploaded?
In your logic app you can use Dynamic Content and Expressions to do the following:
decodeBase64(triggerOutputs()['headers']['x-ms-file-name-encoded'])
Hope it helps!
I tried my Azure FTP site with a condition checking whether the file name equals abc.txt, and got the same input: the expression result was always false.
Then I checked the run details and found that the file name in OUTPUTS wasn't abc.txt; it was encoded with Base64.
In my test, abc.txt was encoded to YWJjLnR4dA==, so I changed the file name in the Logic App condition to YWJjLnR4dA== and it worked.
So you can check your run history to get the encoded file name, or encode your file name with Base64 yourself.
Hope this helps; if you still have other questions, please let me know.
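If you want to compute the encoded value yourself, a small standard-library Java snippet does it:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

String encoded = Base64.getEncoder()
        .encodeToString("abc.txt".getBytes(StandardCharsets.UTF_8));
System.out.println(encoded); // prints YWJjLnR4dA==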

File dates using SFTP and FTP connectors in Logic Apps

I need to process files in an order based on the file modified/created date. I'm using a logic app to process the files, but I cannot get at the date property using the List or Get operations of either the SFTP connector or the FTP connector.
Any thoughts on how this can be accomplished?
Any access to source code so I can make a tweak or two?
The current SFTP and FTP connectors do not return the modified date/time. If you could choose one of the following, do you have a preference? Not making any promises, but we are investigating the best way to resolve this and light up this scenario:
1. Add a FileModifiedDateTime property for each file returned.
2. Provide a parameter to sort the ListFiles result. The property would still not be exposed, but the files would be sorted as required by the client, so you don't have to check the time of each file to see which is earliest.

Spring integration: download file for each update on it from FTPS server

I am using Spring Integration.
On the remote FTPS server, the same file is updated every time; the source system is not able to create a new file for each update.
For every update of the file, I need to download it and process it.
I need help creating a filter that listens for every file update.
Use an FtpPersistentAcceptOnceFileListFilter in the filter attribute and a FileSystemPersistentAcceptOnceFileListFilter in the local-filter attribute.
These filters, as well as persisting state so it lives beyond the current execution, also compare the modified time.
However, you need to be very careful updating a file server-side, rather than replacing it with a new one for each update. It is entirely possible (and even likely) to fetch a partially updated file (fetch the file before the writer has finished updating it).
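A minimal sketch of configuring those two filters as beans (using an in-memory SimpleMetadataStore here; swap in a persistent ConcurrentMetadataStore implementation if the state must survive a restart):

import org.springframework.integration.file.filters.FileSystemPersistentAcceptOnceFileListFilter;
import org.springframework.integration.ftp.filters.FtpPersistentAcceptOnceFileListFilter;
import org.springframework.integration.metadata.SimpleMetadataStore;

SimpleMetadataStore store = new SimpleMetadataStore();

// remote side: re-accept a file whenever its modified time changes
FtpPersistentAcceptOnceFileListFilter remoteFilter =
        new FtpPersistentAcceptOnceFileListFilter(store, "ftp-");

// local side: the same idea for the downloaded copy
FileSystemPersistentAcceptOnceFileListFilter localFilter =
        new FileSystemPersistentAcceptOnceFileListFilter(store, "local-");

Reference these beans from the filter and local-filter attributes of the inbound channel adapter.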

Avoid overwriting blobs in Azure

If I upload a file to an Azure blob container where a file with that name already exists, it overwrites the file. How do I avoid this overwrite? Below I describe the scenario:
Step 1 - upload a file "abc.jpg" to an Azure container called, say, "filecontainer"
Step 2 - once it is uploaded, try uploading a different file with the same name to the same container
Output - it overwrites the existing file with the latest upload
My requirement - I want to avoid this overwrite, as different people may upload files having the same name to my container.
Please help.
P.S.
- I do not want to create different containers for different users
- I am using the REST API with Java
Windows Azure Blob Storage supports conditional headers, which you can use to prevent overwriting of blobs. You can read more about conditional headers here: http://msdn.microsoft.com/en-us/library/windowsazure/dd179371.aspx.
Since you want a blob not to be overwritten, you would need to specify the If-None-Match conditional header and set its value to *. This causes the upload operation to fail with a Precondition Failed (412) error if the blob already exists.
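A minimal Java sketch of such a conditional Put Blob request, assuming a pre-signed SAS URL (the ... parts stand in for your SAS token) so that request signing is out of scope:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://myaccount.blob.core.windows.net/filecontainer/abc.jpg?sv=...&sig=..."))
        .header("x-ms-blob-type", "BlockBlob")
        .header("If-None-Match", "*") // fail if the blob already exists
        .PUT(HttpRequest.BodyPublishers.ofFile(Path.of("abc.jpg")))
        .build();

HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
// 201 Created on success; 412 Precondition Failed if the blob exists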
Another idea would be to check for the blob's existence just before uploading (by fetching its properties); however, I would not recommend this approach, as it may lead to concurrency issues: another client could create the blob between your check and your upload.
You have no control over the name your users upload their files with. You do, however, have control over the name you store those files with. The standard way is to generate a GUID and name each file accordingly. The chance of a conflict is practically zero.
A simple sketch (with hypothetical saveMapping and uploadBlob helpers) looks like this:
// generate a GUID and rename the file the user uploaded with it
String blobName = UUID.randomUUID().toString();
// store the original file name together with the GUID in a database or what-have-you
saveMapping(originalFileName, blobName);
// upload the file to blob storage using the name generated above
uploadBlob(blobName, fileStream);
Hope that helps.
Let me put it this way:
step one - user X uploads file "abc1.jpg" and you save it in a local folder XYZ
step two - user Y uploads another file with the same name "abc1.jpg", and now you save it again in the same local folder XYZ
What do you do now?
With this I am illustrating that your question does not relate to Azure in any way!
Just do not rely on original file names when saving files, wherever you are saving them. Generate random names (GUIDs, for example) and "attach" the original name as metadata.
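Continuing the REST sketch above (imports as in that sketch, plus java.util.UUID), attaching the original name as user-defined metadata is just one more header on the Put Blob request; the x-ms-meta-* key name is your choice:

String blobName = UUID.randomUUID().toString(); // random stored name
HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://myaccount.blob.core.windows.net/filecontainer/" + blobName + "?sv=...&sig=..."))
        .header("x-ms-blob-type", "BlockBlob")
        .header("x-ms-meta-originalname", "abc1.jpg") // original name travels with the blob
        .PUT(HttpRequest.BodyPublishers.ofFile(Path.of("abc1.jpg")))
        .build();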
