Spring Integration FTP Java DSL

I am trying to transfer remote files from an FTP server to a local directory. At the moment it works for the initial transfer, and again if the local file is deleted, but I would like it to pick up remote file changes based on the last-modified timestamp. I have read around trying to create a custom filter, but can't find much information on doing this via the Java DSL.
@Bean
public IntegrationFlow ftpInboundFlow() {
    return IntegrationFlows
            .from(s -> s
                            .ftp(this.ftpSessionFactory())
                            .preserveTimestamp(true)
                            .remoteDirectory(ftpData.getRemoteDirectory())
                            .localDirectory(new File(ftpData.getLocalDirectory())),
                    e -> e.id("ftpInboundAdapter").autoStartup(true))
            .channel(MessageChannels.publishSubscribe())
            .get();
}

This has been fixed only recently: https://jira.spring.io/browse/INT-4232.
Until that fix is released, you have no choice but to delete local files after processing.
You have to use FtpPersistentAcceptOnceFileListFilter anyway, because of https://jira.spring.io/browse/INT-4115.
There is nothing specific to the Java DSL here.
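For reference, a minimal sketch of wiring that filter into the question's adapter via the Java DSL; the in-memory SimpleMetadataStore is an assumption for illustration, and in production you would plug in a persistent ConcurrentMetadataStore implementation so duplicates are still filtered after a restart:
@Bean
public IntegrationFlow ftpInboundFlow() {
    return IntegrationFlows
            .from(s -> s
                            .ftp(this.ftpSessionFactory())
                            .preserveTimestamp(true)
                            // remember processed files in the metadata store, keyed with the "ftp-" prefix
                            .filter(new FtpPersistentAcceptOnceFileListFilter(
                                    new SimpleMetadataStore(), "ftp-"))
                            .remoteDirectory(ftpData.getRemoteDirectory())
                            .localDirectory(new File(ftpData.getLocalDirectory())),
                    e -> e.id("ftpInboundAdapter").autoStartup(true))
            .channel(MessageChannels.publishSubscribe())
            .get();
}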
UPDATE
Can you point me towards how to delete local files via the Java DSL?
The FtpInboundFileSynchronizingMessageSource already produces a message with the local file as its payload. In addition, there are some headers:
.setHeader(FileHeaders.RELATIVE_PATH,
        file.getAbsolutePath()
                .replaceFirst(Matcher.quoteReplacement(
                        this.directory.getAbsolutePath() + File.separator), ""))
.setHeader(FileHeaders.FILENAME, file.getName())
.setHeader(FileHeaders.ORIGINAL_FILE, file)
When you're done with the file downstream, you can delete it via a regular File.delete() operation. That can be done, for example, using an ExpressionEvaluatingRequestHandlerAdvice:
@Bean
public Advice deleteFileAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setOnSuccessExpressionString("headers[file_originalFile].delete()");
    return advice;
}
...
.<String>handle((p, h) -> ..., e -> e.advice(deleteFileAdvice()))
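For completeness, a hedged sketch of how that advice might be attached to the last handler of a downstream flow; the channel name and doProcess() are placeholders, not from the original answer:
@Bean
public IntegrationFlow processFileFlow() {
    return IntegrationFlows
            .from("ftpInboundChannel") // hypothetical channel fed by the FTP adapter
            .<File>handle((p, h) -> doProcess(p), // doProcess() is a placeholder
                    e -> e.advice(deleteFileAdvice())) // deletes the local file on success
            .get();
}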

Spring AMQP - How to convert body of message to my custom type?

I have a Spring Boot project (let's call it project A) which uses JmsTemplate to send and receive messages over RabbitMQ, and it works very well. I am not able to change anything in this project.
In another project (let's call it project B) I want to use Spring AMQP, because this is a new project. But when project A sends a message, project B receives the body of the Message as a byte array. I defined a RabbitListener to listen to the queue populated by project A, like below:
@RabbitListener(queues = "theQueueName")
public void listen(org.springframework.amqp.core.Message incomingMessage) {
    System.out.println("Message read from myQueue : " + new String(incomingMessage.getBody()));
}
The body of the incoming message is a byte array, but I need to convert it to the custom type used in project A.
What should I do to receive the body as that type?
The ability to inject Spring’s message abstraction is particularly useful to benefit from all the information stored in the transport-specific message without relying on the transport-specific API. The following example shows how to do so:
@RabbitListener(queues = "myQueue")
public void processOrder(org.springframework.messaging.Message<Order> order) {
    ...
}
https://docs.spring.io/spring-amqp/docs/current/reference/html/#async-annotation-driven-enable-signature
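Applied to the question, a hedged sketch, assuming project B defines its own Order class and a Jackson2JsonMessageConverter is configured on the listener container factory so the JSON body can be converted:
@RabbitListener(queues = "theQueueName")
public void listen(org.springframework.messaging.Message<Order> message) {
    Order order = message.getPayload(); // body converted to project B's own Order type
    System.out.println("Order read from queue: " + order);
}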

Spring Integration DSL - composition with gateway blocks thread

I am trying to learn how to build IntegrationFlows as units and join them up.
I set up a very simple processing integration flow:
IntegrationFlow processingFlow = f -> f
        .<String>handle((p, h) -> process(p))
        .log();
flowContext.registration(processingFlow)
        .id("testProcessing")
        .autoStartup(false)
        .register();
Processing is very simple:
public String process(String process) {
    return process + " has been processed";
}
Then I compose a flow from a source, using .gateway() to join the source to the processing:
MessageChannel beginningChannel = MessageChannels.direct("beginning").get();
StandardIntegrationFlow composedFlow = IntegrationFlows
        .from(beginningChannel)
        .gateway(processingFlow)
        .log()
        .get();
flowContext.registration(composedFlow)
        .id("testComposed")
        .autoStartup(false)
        .addBean(processingFlow)
        .register();
Then I start the flow and send a couple of messages:
composedFlow.start();
beginningChannel.send(MessageBuilder.withPayload("first string").build());
beginningChannel.send(MessageBuilder.withPayload("second string").build());
The logging handler confirms the handle method has been called for the first message, but the main thread then sits idle, and the second message is never processed.
Is this not the correct way to compose integration flows from building blocks? Doing so with channels requires registering the channel as a bean, and I'm trying to do all of this dynamically.
It must be logAndReply() in the processingFlow; see their JavaDocs for the difference. A log() at the end of a flow makes it one-way. That's why you are blocked: the gateway waits for a reply, but there is none according to your current flow definition. Unfortunately we can't detect that at the framework level: there might be cases where you indeed don't return a reply, because of some routing or filtering logic. The gateway can be configured with a reply timeout; by default it is infinite.
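In other words, the sub-flow should end with logAndReply(), which logs the message and still returns it as the reply to the gateway:
IntegrationFlow processingFlow = f -> f
        .<String>handle((p, h) -> process(p))
        .logAndReply();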

Azure Logic App, Cant get data from CreateFile Function

So I've noticed some strange behavior which I would like to share and see if anyone has had a similar problem.
We are using an on-prem solution where we pick up a file or an HTTP event request, map it to an outgoing XML XSD/schema, and then create the file on-prem later.
The problem was that the system where we save the file does not cooperate well with the logic app; the logic app sometimes fails because the system takes the file before the logic app can finish writing the full content.
The system receiving the files only reads .xml files, so we thought we should first create the files under a temp name, let the logic app finish writing them, and then rename them.
This solution sounded quite simple before we actually started applying it to the logic app.
If we take the FileSystem connector's Rename File function and use the "Name" parameter from the on-prem Create File action, we get:
{
    "statusCode": 404,
    "message": "Resource not found"
}
A 404 message that the resource is not found, which complicates a lot of things. I've checked the privileges on the account; that should not be an issue.
What we also tried is listing all files in the folder, creating a foreach, and then adding a rule and the Rename File function. This makes it work, but the logic app does not cope well with receiving a lot of files at once with that solution.
So Rename File works when it's inside a foreach loop and we extract the file names in a list from the root folder or a normal folder.
But why does it not work when just using the Rename function directly? Is this perhaps a bug in the Logic App Rename File function?
So after discussing with Microsoft support on Azure, they have actually confirmed that there is a bug with the Create File function.
It looks like all the data and information is actually lost during that function; the support technicians do not know why that is happening, but they have had similar cases reported by other people.
I have not stumbled across any of those posts, but I will post how we solved the problem with a workaround.
FYI, the support team has escalated the case so that the Azure developers can look into it, because it's not just the "name" tag that is lost from Create a File: all the valuable options are actually lost.
So first we initialize a variable, and then we actually set the variable's value in two steps before we create the file:
The name is set to a temp name plus a GUID.
The next step is creating the file with the temp name produced by the "Set Variable Temp FileName" action.
In the Rename File function we use the path where we store the temp file and append \"FILENAME",
and add the "New Name" which we want to use.
This proved to work, but it is a workaround; support confirmed that you should be able to just use Rename File after creating the file with a temp name, changing it to the desired name.
But since Create a File does not pass on any information at all from this list, we have to initialize variables to make it work.
If anyone has stumbled on the same problem, where the backend system reads the files before the logic app manages to finish creating them, this workaround worked well for me.
Hope it helps!
We recently had the same issue, and the workaround of renaming the file also failed.
The cause seems to be that the Azure On-Prem Gateway creates a file (or renames a file), then releases its lock before checking that the file exists. In the gap between releasing the lock and checking that the file exists, the file may be picked up (deleted), thus causing Logic Apps to think the step failed (reporting a 404 error), and thus the confusion.
Our workaround was to create a Windows service hosted on the file servers (so it could respond to file changes before anything else on the network). This service has a configuration file which accepts a list of paths and file filters, and it uses a FileSystemWatcher to monitor for new or renamed files. When it detects a match, it takes out a read lock on the file. This ensures it's not blocked by anything writing to the file (i.e. it doesn't have to wait for the On-Prem Gateway's write action to complete before obtaining its own lock), but whilst our service holds its lock the file can't be deleted (so the consumer can't remove the file, buying time for the On-Prem Gateway to perform its post-write read and report success). Our service releases its own lock after a defined period (we've gone with 30 seconds, though you could likely get away with much less). At that point, the consumer can successfully consume the file.
Basic code for the file watch & locking logic below:
using System;
using System.IO;
using System.Diagnostics;
using System.Threading.Tasks;

namespace AzureFileGatewayHelper
{
    public class Interceptor : IDisposable
    {
        readonly object lockable = new object();
        bool disposed = false;
        readonly FileSystemWatcher watcher;
        readonly int lockTimeInMS;

        public Interceptor(string path, string filter, int lockTimeInSeconds)
        {
            lockTimeInMS = lockTimeInSeconds * 1000;
            watcher = new FileSystemWatcher();
            watcher.Path = path;
            watcher.Filter = filter;
            watcher.NotifyFilter = NotifyFilters.LastAccess
                                 | NotifyFilters.LastWrite
                                 | NotifyFilters.FileName
                                 | NotifyFilters.DirectoryName;
            watcher.Created += OnIntercept;
            watcher.Renamed += OnIntercept;
        }

        public Interceptor(InterceptorConfigElement config)
            : this(config.Path, config.Filter, config.TimeToLockInSeconds)
        {
            Debug.WriteLine($"Loaded config {config.Key}: Path: '{config.Path}'; Filter: '{config.Filter}'; LockTime: '{config.TimeToLockInSeconds}'.");
        }

        public void Start()
        {
            watcher.EnableRaisingEvents = true;
        }

        public void Stop()
        {
            if (watcher != null)
                watcher.EnableRaisingEvents = false;
        }

        // Take a read lock (FileShare.ReadWrite lets the gateway keep writing,
        // but without FileShare.Delete the file can't be removed) and hold it
        // for the configured period.
        private async void OnIntercept(object source, FileSystemEventArgs e)
        {
            using (var fs = new FileStream(e.FullPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            {
                Debug.WriteLine($"Locked: {e.FullPath} {e.ChangeType}");
                await Task.Delay(lockTimeInMS);
            }
            Debug.WriteLine($"Unlocked {e.FullPath} {e.ChangeType}");
        }

        public void Dispose()
        {
            if (disposed) return;
            lock (lockable)
            {
                if (disposed) return;
                Stop();
                watcher?.Dispose();
                disposed = true;
            }
        }
    }
}

Spring Integration - AMQP Inferred Types In Java DSL?

I have been working on a "paved road" for setting up asynchronous messaging between two microservices using AMQP. We want to promote the use of separate domain objects for each service, which means that each service must define its own copy of any objects passed across the queue.
We are using Jackson2JsonMessageConverter on both the producer and the consumer side, and we are using the Java DSL to wire the flows to/from the queues.
I am sure there is a way to do this, but it is escaping me: I want the consumer side to ignore the __TypeId__ header that is passed from the producer, as the consumer may have a different representation of that event (and it will likely be in a different Java package).
It appears there was work done such that, when using the @RabbitListener annotation, an inferred argument type is derived and will override the header information. This is exactly what I would like to do, but via the Java DSL. I have not yet found a clean way to do this, and maybe I am just missing something obvious. It seems it would be fairly straightforward to derive the type when using the following DSL:
return IntegrationFlows
        .from(Amqp.inboundAdapter(factory, queueRemoteTaskStatus())
                .concurrentConsumers(10)
                .errorHandler(errorHandler)
                .messageConverter(messageConverter))
        .channel(channelRemoteTaskStatusIn())
        .handle(listener, "handleRemoteTaskStatus")
        .get();
However, this results in a ClassNotFoundException. The only way I have found to get around this, so far, is to set a custom message converter, which requires an explicit definition of the type.
public class ForcedTypeJsonMessageConverter extends Jackson2JsonMessageConverter {

    ForcedTypeJsonMessageConverter(final Class<?> forcedType) {
        setClassMapper(new ClassMapper() {

            @Override
            public void fromClass(Class<?> clazz, MessageProperties properties) {
                // this class is only used for inbound marshalling
            }

            @Override
            public Class<?> toClass(MessageProperties properties) {
                return forcedType;
            }
        });
    }
}
I would really like this type to be derived, so the developer does not have to deal with it.
Is there an easier way to do this?
The simplest way is to configure the Jackson converter's DefaultJackson2JavaTypeMapper with type id mapping (setIdClassMapping()).
On the sending system, map foo:com.one.Foo; on the receiving system, map foo:com.two.Foo.
The __TypeId__ header then carries foo, and the receiving system will map it to its own representation of a Foo.
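A hedged sketch of that receiving-side configuration; the bean style and the "foo" token follow the example above:
@Bean
public Jackson2JsonMessageConverter messageConverter() {
    Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
    DefaultJackson2JavaTypeMapper typeMapper = new DefaultJackson2JavaTypeMapper();
    Map<String, Class<?>> idClassMapping = new HashMap<>();
    idClassMapping.put("foo", com.two.Foo.class); // the producer maps "foo" to com.one.Foo
    typeMapper.setIdClassMapping(idClassMapping);
    converter.setJavaTypeMapper(typeMapper);
    return converter;
}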
EDIT
Another option would be to add an afterReceivePostProcessor to the inbound channel adapter's listener container; it could change the __TypeId__ header.
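A hedged sketch of that alternative, assuming the adapter is built from an explicitly defined container; the replacement type id is an assumption for illustration:
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(factory);
container.setQueues(queueRemoteTaskStatus());
container.setAfterReceivePostProcessors(m -> {
    // overwrite the producer's type id with the consumer's own class name
    m.getMessageProperties().setHeader("__TypeId__", "com.two.Foo");
    return m;
});
// then: IntegrationFlows.from(Amqp.inboundAdapter(container))...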

Spring Integration DSL Same message to both channels

We have a requirement where we need the same message (payload) to be processed in two different channels. We were under the impression that using a PublishSubscribeChannel would handle this by delivering a copy of the message to both channels. However, we found that each subscriber was executed one after the other, and if we make any changes to the payload in one channel, it affects the payload in the other channel as well.
@Bean
public IntegrationFlow bean1() {
    return IntegrationFlows
            .from("Channel1")
            .handle(MyMessage.class, (payload, header) -> obj1.method1(payload))
            .channel(MessageChannels.publishSubscribe("subscribableChannel").get())
            .get();
}

@Bean
public IntegrationFlow bean21() {
    return IntegrationFlows
            .from("subscribableChannel")
            .handle(MyMessage.class, (payload, header) -> obj2.method2(payload, header))
            .channel("nullChannel")
            .get();
}

@Bean
public IntegrationFlow bean22() {
    return IntegrationFlows
            .from("subscribableChannel")
            .handle(MyMessage.class, (payload, header) -> obj3.method3(payload))
            .channel("nullChannel")
            .get();
}
In the above example, if I make changes to the payload in bean21, it affects the input payload passed to bean22.
My requirement is to pass the same payload to bean21 and bean22 and to execute them in parallel. Can you please advise how to accomplish that?
That's correct. Spring Integration is just Java, and there is no magic in copying the payload between different messages; it is really the same object in memory. Now imagine you have plain Java and would like to call two different methods with the same Foo object, and then you modify that object in one method. What happens in the other one? Right: it sees the modifications to the object.
To achieve your goal of definitely copying the object to a new instance, you have to ensure it yourself. For example, implement the Cloneable interface on your class, provide a copy constructor, or use any other possible solution to create a new object.
At the beginning of one subscriber's flow you should perform that clone operation; you will then have a new object, without impacting the other subscriber.
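For example, a minimal sketch of bean21 with a defensive copy at the start of its flow; the MyMessage copy constructor is an assumption:
@Bean
public IntegrationFlow bean21() {
    return IntegrationFlows
            .from("subscribableChannel")
            .<MyMessage, MyMessage>transform(MyMessage::new) // clone before mutating
            .handle(MyMessage.class, (payload, header) -> obj2.method2(payload, header))
            .channel("nullChannel")
            .get();
}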
See more info in this JIRA: https://jira.spring.io/browse/INT-2979
