Sometimes No Messages When Obtaining Input Stream from SFTP Outbound Gateway
This is follow up question to
Use SFTP Outbound Gateway to Obtain Input Stream
The problem I was having in the previous question appears to be that I was not closing the stream; that is now done in the int:service-activator shown below. However, once I added the int:service-activator, I was also forced to add an int:poller.
Since adding the int:poller, I have noticed that the messages received when attempting to obtain the stream are sometimes null. I have found that a workaround is to simply retry. I have tested with different files, and it seems that small files are adversely affected while large files are not. So, if I had to guess, there must be a race condition where the int:service-activator is closing the session before I call getInputStream(), but I was hoping someone could confirm whether that is actually what is going on, and whether there is a better solution than simply retrying.
Thanks!
Here is the outbound gateway configuration:
<int-ftp:outbound-gateway session-factory="ftpClientFactory"
request-channel="inboundGetStream" command="get" command-options="-stream"
expression="payload" remote-directory="/" reply-channel="stream">
</int-ftp:outbound-gateway>
<int:channel id="stream">
<int:queue/>
</int:channel>
<int:poller default="true" fixed-rate="50" />
<int:service-activator input-channel="stream"
expression="payload.toString().equals('END') ? headers['file_remoteSession'].close() : null" />
Here is the source where I obtain the InputStream:
public InputStream openFileStream(final int retryCount, final String filename, final String directory)
        throws Exception {
    InputStream is = null;
    for (int i = 1; i <= retryCount; ++i) {
        if (inboundGetStream.send(MessageBuilder.withPayload(directory + "/" + filename).build(), ftpTimeout)) {
            is = getInputStream();
            if (is != null) {
                break;
            } else {
                logger.info("Failed to obtain input stream so attempting retry " + i + " of " + retryCount);
                Thread.sleep(ftpTimeout);
            }
        }
    }
    return is;
}
private InputStream getInputStream() {
    Message<?> msgs = stream.receive(ftpTimeout);
    if (msgs == null) {
        return null;
    }
    return (InputStream) msgs.getPayload();
}
Update: I’ll go ahead and accept the only answer, as it helped just enough to find the solution.
The accepted answer to the original question was confusing because it answered a Java question with an XML configuration solution that, while it explained the problem, didn’t really provide the necessary Java technical solution. This follow-up question/answer clarifies what is going on within spring-integration and suggests what is needed to solve it.
Final solution: to obtain the stream and keep it for later use, I had to create a bean that holds the stream together with its session; the session is obtained from the message header.
Note, error checking and getter/setter is left out for brevity:
Use the same XML config as in the question above, but eliminate the poller and service-activator elements, as they are unnecessary and were causing the errors.
Create a new class SftpStreamSession to hold necessary references:
public class SftpStreamSession {

    private final Session<?> session;

    private final InputStream inputStream;

    public SftpStreamSession(Session<?> session, InputStream inputStream) {
        this.session = session;
        this.inputStream = inputStream;
    }

    public void close() throws IOException {
        inputStream.close();
        session.close();
    }
}
Change the openFileStream method to return an SftpStreamSession:
public SftpStreamSession openFileStream(final String filename, final String directory) throws Exception {
    SftpStreamSession sss = null;
    if (inboundGetStream.send(MessageBuilder.withPayload(directory + "/" + filename).build(), ftpTimeout)) {
        Message<?> msgs = stream.receive(ftpTimeout);
        InputStream is = (InputStream) msgs.getPayload();
        MessageHeaders mH = msgs.getHeaders();
        Session<?> session = (Session<?>) mH.get("file_remoteSession");
        sss = new SftpStreamSession(session, is);
    }
    return sss;
}
First of all, you don't need payload.toString().equals('END'), because it looks like you don't use <int-file:splitter> in your logic.
Second, you don't need that ugly <service-activator>, because you have full access to the message in your code. You can simply obtain that file_remoteSession header, cast it to Session<?>, and call its .close() at the end of your logic.
Yes, there is a race condition, but it happens in your code.
Look: you have the stream QueueChannel. From the beginning you had one consumer, stream.receive(ftpTimeout). But now you have introduced that <int:service-activator input-channel="stream">, and therefore one more competing consumer. Having such a small polling interval (fixed-rate="50") indeed leads to unexpected behavior.
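The competing-consumer effect described above can be sketched with a plain BlockingQueue (the class and method names here are hypothetical, just to illustrate the mechanism): when two consumers poll the same queue, each message goes to exactly one of them, so the other consumer sees null for that message.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class CompetingConsumersDemo {

    // Two consumers polling the same queue: the message is delivered to
    // exactly one of them, so the other one receives null -- which is what
    // the extra service-activator poller does to stream.receive().
    public static String[] drain(BlockingQueue<String> queue) {
        String a = queue.poll(); // consumer A (e.g. your stream.receive())
        String b = queue.poll(); // consumer B (e.g. the service-activator's poller)
        return new String[] { a, b };
    }

    public static void main(String[] args) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
        queue.add("payload");
        String[] result = drain(queue);
        System.out.println(result[0] + " / " + result[1]);
    }
}
```

Which of the two consumers wins depends on timing, which is why small files (fully streamed before the next poll) failed and large files did not.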
In a Controller-Service-Datalayer architecture, I'm searching for a way to verify that my controller methods perform exactly one call to the service layer like this:
@DeleteMapping(value = "/{id}")
public ResponseEntity<String> deleteBlubber(@PathVariable("id") long blubberId) {
    service.deleteBlubber(blubberId);
    return new ResponseEntity<>("ok", HttpStatus.OK);
}
This should not be allowed:
@DeleteMapping(value = "/{id}")
public ResponseEntity<String> deleteBlubber(@PathVariable("id") long blubberId) {
    service.deleteOtherStuffFirst(); // Opens first transaction
    service.deleteBlubber(blubberId); // Opens second transaction - DANGER!
    return new ResponseEntity<>("ok", HttpStatus.OK);
}
As you can see from the comments, the reason for this is to make sure that each request is handled in one transaction (that is started in the service layer), not multiple transactions.
It seems that ArchUnit can only check metadata of classes and methods, not what's actually going on inside a method. I would need to count the calls to the service classes, which does not seem to be possible in ArchUnit.
Any idea if this might be possible? Thanks!
With JavaMethod.getMethodCallsFromSelf() you have access to all method calls of a given method. This can be used inside a custom ArchCondition like this:
methods()
    .that().areDeclaredInClassesThat().areAnnotatedWith(Controller.class)
    .should(new ArchCondition<JavaMethod>("call exactly one service method") {
        @Override
        public void check(JavaMethod item, ConditionEvents events) {
            List<JavaMethodCall> serviceCalls = item.getMethodCallsFromSelf().stream()
                    .filter(call -> call.getTargetOwner().isAnnotatedWith(Service.class))
                    .toList();
            if (serviceCalls.size() != 1) {
                String message = serviceCalls.stream()
                        .map(JavaMethodCall::getDescription)
                        .collect(joining(" and "));
                events.add(SimpleConditionEvent.violated(item, message));
            }
        }
    })
The last element in the code for the following DSL flow is a Service Activator (the .handle() method).
Is there a default output channel to which I can subscribe here? If I understand things correctly, an output channel must be present.
I know I can add .channel("name") at the end, but the question is what happens if it's not written explicitly.
Here is the code:
@SpringBootApplication
@IntegrationComponentScan
public class QueueChannelResearch {

    @Bean
    public IntegrationFlow lambdaFlow() {
        return f -> f.channel(c -> c.queue(50))
                .handle(System.out::println);
    }

    public static void main(String[] args) {
        ConfigurableApplicationContext ctx = SpringApplication.run(QueueChannelResearch.class, args);
        MessageChannel inputChannel = ctx.getBean("lambdaFlow.input", MessageChannel.class);
        for (int i = 0; i < 1000; i++) {
            inputChannel.send(MessageBuilder.withPayload("w" + i).build());
        }
        ctx.close();
    }
}
Another question is about QueueChannel. The program hangs if I comment out handle() and completes if I uncomment it. Does that mean that handle() adds a default Poller before it?
return f -> f.channel(c -> c.queue(50));
// .handle(System.out::println);
No, it doesn't work that way.
Just recall that an integration flow follows a pipes-and-filters architecture, and the result of the current step is sent to the next one. Since you use .handle(System.out::println), there is no output from that println() method call, therefore nothing is returned to build a Message to send to the next channel, if any. So the flow stops there. A void return type or a null return value is a signal for the service activator to stop the flow. Consider your .handle(System.out::println) the equivalent of an <outbound-channel-adapter> in the XML configuration.
And yes: there are no default channels, unless you define one in advance via the replyChannel header. But again: your service method must return something valuable.
The output from a service activator is optional; that's why we didn't introduce an extra operator for the Outbound Channel Adapter.
The question about QueueChannel would be better handled in a separate SO thread. There is no default poller unless you declare one as PollerMetadata.DEFAULT_POLLER. You might be using some library which declares that one for you.
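For reference, a default poller can be declared by registering a PollerMetadata bean under the well-known name PollerMetadata.DEFAULT_POLLER; endpoints that consume from a pollable channel without an explicit poller will pick it up. This is a minimal configuration sketch (the class name and the 100 ms interval are arbitrary choices, not from the question):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.scheduling.PollerMetadata;
import org.springframework.scheduling.support.PeriodicTrigger;

@Configuration
public class PollerConfig {

    // Spring Integration looks up this bean name for endpoints that need
    // a poller but have none configured explicitly.
    @Bean(name = PollerMetadata.DEFAULT_POLLER)
    public PollerMetadata defaultPoller() {
        PollerMetadata poller = new PollerMetadata();
        poller.setTrigger(new PeriodicTrigger(100)); // poll every 100 ms
        return poller;
    }
}
```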
I have a requirement to split messages and process them one by one. If any of the messages fails, I would like to report it to the error channel and resume processing the next available messages.
I am using the Spring Cloud AWS stream starter with 1.0.0-SNAPSHOT.
I wrote a sample program using a splitter:
@Bean
public MessageChannel channelSplitOne() {
    return new DirectChannel();
}

@StreamListener(INTERNAL_CHANNEL)
public void channelOne(String message) {
    if (message.equals("l")) {
        throw new RuntimeException("Error due to l");
    }
    System.out.println("Internal: " + message);
}

@Splitter(inputChannel = Sink.INPUT, outputChannel = INTERNAL_CHANNEL)
public List<Message> extractItems(Message<String> input) {
    return Arrays.stream(input.getPayload().split(""))
            .map(s -> MessageBuilder.withPayload(s).copyHeaders(input.getHeaders()).build())
            .collect(Collectors.toList());
}
When I send the message Hello, the expectation is that
'H', 'e', and 'o' will be processed, but 'l' will be reported as an error.
However, after the first 'l', processing is not resumed.
Is there any way to achieve this?
You can do that, but with @ServiceActivator instead of @StreamListener. The first one has an adviceChain option where you can inject an ExpressionEvaluatingRequestHandlerAdvice: https://docs.spring.io/spring-integration/docs/5.0.4.RELEASE/reference/html/messaging-endpoints-chapter.html#expression-advice.
The problem is that the splitter is like a regular loop in Java, so to continue after an error we would need to somehow add a try...catch there. But that is not the splitter's responsibility, so we have to move such logic to the place where the error actually happens.
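The "loop with a try...catch" idea can be sketched in plain Java (class and method names here are hypothetical): this is the continue-on-error behavior that wrapping the downstream handler with an advice restores, moved out of the splitter itself.

```java
import java.util.ArrayList;
import java.util.List;

public class ResumeOnErrorDemo {

    // Processes each split item, catching failures per item instead of
    // letting one failure abort the whole loop.
    public static List<String> processAll(String input) {
        List<String> processed = new ArrayList<>();
        for (String s : input.split("")) {
            try {
                if (s.equals("l")) {
                    throw new RuntimeException("Error due to l");
                }
                processed.add(s); // success path
            } catch (RuntimeException e) {
                // report to an error channel here instead of stopping the loop
            }
        }
        return processed;
    }
}
```

For "Hello" this yields "H", "e", "o", with both 'l' items diverted to error handling, which matches the behavior asked for in the question.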
I'm using the Spring Integration Zip extension and it appears that I'm losing headers I've added upstream in the flow. I'm guessing that they are being lost in UnZipResultSplitter.splitUnzippedMap() as I don't see anything that explicitly copies them over.
I seem to recall that this is not unusual with splitters but I can't determine what strategy one should use in such a case.
Yep!
It looks like a bug.
The splitter contract is like this:
if (item instanceof Message) {
    builder = this.getMessageBuilderFactory().fromMessage((Message<?>) item);
}
else {
    builder = this.getMessageBuilderFactory().withPayload(item);
    builder.copyHeaders(headers);
}
So, if the split items are already messages, as in the case of our UnZipResultSplitter, the message is used as is, without copying headers from upstream.
Please, raise a JIRA ticket (https://jira.spring.io/browse/INTEXT) on the matter.
Meanwhile let's consider some workaround:
public class MyUnZipResultSplitter {

    public List<Message<Object>> splitUnzipped(Message<Map<String, Object>> unzippedEntries) {
        final List<Message<Object>> messages = new ArrayList<>(unzippedEntries.getPayload().size());
        for (Map.Entry<String, Object> entry : unzippedEntries.getPayload().entrySet()) {
            final String path = FilenameUtils.getPath(entry.getKey());
            final String filename = FilenameUtils.getName(entry.getKey());
            final Message<Object> splitMessage = MessageBuilder.withPayload(entry.getValue())
                    .setHeader(FileHeaders.FILENAME, filename)
                    .setHeader(ZipHeaders.ZIP_ENTRY_PATH, path)
                    .copyHeaders(unzippedEntries.getHeaders())
                    .build();
            messages.add(splitMessage);
        }
        return messages;
    }
}
I use jms:message-driven-channel-adapter and jms:outbound-channel-adapter in my project to get and put messages from/to IBM MQ. I need to get timestamps before and after each put and get. How can I achieve this? Please advise.
Please see my question updated below:
We need the time taken for each put and get operation. So what I believed is, if I could get the timestamps in the following way, I would be able to achieve what I wanted:
1) At jms:message-driven-channel-adapter: note the timestamp before and after each get -> derive the time taken for each get.
2) At jms:outbound-channel-adapter: note the timestamp before and after each put -> derive the time taken for each put.
Please advise.
Thanks.
Well, it isn't quite clear what you want to have, because you can always just use System.currentTimeMillis().
On the other hand, Spring Integration maps the JMSTimestamp property of the JmsMessage to the jms_timestamp header of the incoming Message in the <jms:message-driven-channel-adapter>.
Another point: each Spring Integration Message has its own timestamp header.
So, if you switch on something like this:
<wire-tap channel="logger"/>
<logging-channel-adapter id="logger" log-full-message="true"/>
you will always see the timestamp for each message in the logs, before it is sent to the channel.
UPDATE
OK. Thanks. Now it is clearer.
Well, for the outbound part (put, in your case), one solution lies with a custom ChannelInterceptor:
public class PutTimeInterceptor extends ChannelInterceptorAdapter {

    private final Log logger = LogFactory.getLog(this.getClass());

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        logger.info("preSend time [" + System.currentTimeMillis() + "] for: " + message);
        return message;
    }

    @Override
    public void postSend(Message<?> message, MessageChannel channel, boolean sent) {
        logger.info("postSend time [" + System.currentTimeMillis() + "] for: " + message);
    }
}
<channel id="putToJmsChannel">
    <interceptors>
        <bean class="com.my.proj.int.PutTimeInterceptor"/>
    </interceptors>
</channel>
<jms:outbound-channel-adapter channel="putToJmsChannel"/>
Keep in mind that a ChannelInterceptor isn't stateful, so you have to correlate the put time manually for each message.
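One way to do that correlation, sketched here with plain Java (the PutTimer class is hypothetical, not part of Spring Integration), is to record the start time keyed by the message id in preSend and compute the delta in postSend:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PutTimer {

    private final Map<String, Long> startTimes = new ConcurrentHashMap<>();

    // Call from preSend: remember when this message entered the channel.
    public void start(String messageId) {
        startTimes.put(messageId, System.currentTimeMillis());
    }

    // Call from postSend: elapsed time for this message, or -1 if unknown.
    // The entry is removed so the map does not grow without bound.
    public long elapsed(String messageId) {
        Long start = startTimes.remove(messageId);
        return start == null ? -1 : System.currentTimeMillis() - start;
    }
}
```

In a real interceptor you would key by message.getHeaders().getId().toString() so the two callbacks can find each other's data.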
Another option is <jms:request-handler-advice-chain>, where you implement a custom AbstractRequestHandlerAdvice:
public class PutTimeRequestHandlerAdvice extends AbstractRequestHandlerAdvice {

    private final Log logger = LogFactory.getLog(this.getClass());

    @Override
    protected Object doInvoke(ExecutionCallback callback, Object target, Message<?> message) throws Exception {
        long before = System.currentTimeMillis();
        Object result = callback.execute();
        logger.info("Put time: [" + (System.currentTimeMillis() - before) + "] for: " + message);
        return result;
    }
}
These are for the put only.
You can't derive the execution time for the get part, because it is a MessageListener, which is an event-driven component. When the message is in the queue, you just receive it, and that's all. There is no hook for when the listener starts retrieving a message from the queue.
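What you can measure on the receive side, using the jms_timestamp header mentioned earlier, is how long the message waited before the listener got it. A small sketch (class and method names are illustrative only); note this approximates queue dwell time, not the duration of the receive call, and assumes the producer and consumer clocks are reasonably in sync:

```java
public class QueueDwellTime {

    // JMSTimestamp is set by the sending JMS client, so the difference
    // between the receive time and that header approximates the time the
    // message spent in the queue.
    public static long dwellMillis(long jmsTimestampHeader, long receivedAtMillis) {
        return receivedAtMillis - jmsTimestampHeader;
    }
}
```

Inside a listener you would call it as dwellMillis((Long) headers.get("jms_timestamp"), System.currentTimeMillis()).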