Create SFTP reply-channel to reply with errors or messages that were unsuccessfully sent - spring-integration

I am using the Java DSL to configure an SFTP outbound flow.
Gateway:
@MessagingGateway
public interface SftpGateway {

    @Gateway(requestChannel = "sftp-channel")
    void sendFiles(List<Message> messages);
}
Config:
@Bean
public IntegrationFlow sftpFlow(DefaultSftpSessionFactory sftpSessionFactory) {
    return IntegrationFlows
            .from("sftp-channel")
            .split()
            .handle(Sftp.outboundAdapter(sftpSessionFactory, FileExistsMode.REPLACE)
                    .useTemporaryFileName(false)
                    .remoteDirectory(REMOTE_DIR_TO_CREATE)
                    .autoCreateDirectory(true))
            .get();
}

@Bean
public DefaultSftpSessionFactory sftpSessionFactory() {
...
}
How can I configure the flow to make my gateway reply with the messages that failed?
In other words, I want my gateway to be able to return the list of failed messages, not void.
I marked the gateway with
@MessagingGateway(errorChannel = "errorChannel")
and wrote an error flow:
@Bean
public IntegrationFlow errorFlow() {
    return IntegrationFlows.from("errorChannel")
            .handle(new GenericHandler<MessagingException>() {
                public Message handle(MessagingException payload, Map headers) {
                    System.out.println(payload.getFailedMessage().getHeaders());
                    return payload.getFailedMessage();
                }
            })
            .get();
}

@Bean
public MessageChannel errorChannel() {
    return MessageChannels.direct().get();
}
In case of an error (e.g. no connection to the SFTP server), I get only one error (the payload of the first message in the list).
Where should I put an Advice to aggregate all the messages?

This is not really a question about the Spring Integration Java DSL.
This is mostly a design and architecture task.
Currently you don't have any choice, because you use Sftp.outboundAdapter(), which is one-way and therefore produces no reply. And your SftpGateway is ready for that behavior with its void return type.
If you have a downstream error, you can only throw it, or catch it and send it to some error channel.
According to your request:
"I want my gateway to be able to return the list of failed messages, not void."
I'd say it depends. Actually, it is just a return from your gateway, so returning an empty list from the gateway may mean that there were no errors.
Since Java doesn't provide multi-return capabilities, we have no choice but to do something in our flow that builds a single message to return; as we decided, a list of failed messages.
Since you have .split() there, you should look into .aggregate() to build a single reply.
The Aggregator correlates with the Splitter easily enough, via the default applySequence = true.
To send to the aggregator, I'd suggest taking a look at ExpressionEvaluatingRequestHandlerAdvice on the Sftp.outboundAdapter() endpoint (the second param of .handle()). With that you can send both good and bad messages to the same .aggregate() flow. Then you can iterate over the resulting list to remove the good results. The result after that can be sent back to the SftpGateway using the replyChannel header.
I understand that it sounds a bit complicated, but what you want doesn't exist out of the box. You need to think and experiment yourself to figure out what can be achieved.
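To make that a bit more concrete, here is a rough, untested sketch of the wiring described above. The channel name sftpResultChannel, the expressions and the filtering step are assumptions for illustration only (the advice setter names also differ slightly across Spring Integration versions); note in particular that the advice's result message does not automatically carry the splitter's sequence headers, so the aggregator may need a custom correlation strategy instead of the default shown here.

@Bean
public ExpressionEvaluatingRequestHandlerAdvice sftpResultAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    // Expressions are evaluated against the request message; #exception is available on failure.
    advice.setOnSuccessExpressionString("payload");
    advice.setOnFailureExpressionString("#exception");
    advice.setSuccessChannelName("sftpResultChannel"); // hypothetical channel name
    advice.setFailureChannelName("sftpResultChannel");
    advice.setTrapException(true); // don't rethrow, so every split message produces a result
    return advice;
}

@Bean
public IntegrationFlow sftpFlow(DefaultSftpSessionFactory sftpSessionFactory) {
    return IntegrationFlows
            .from("sftp-channel")
            .split()
            .handle(Sftp.outboundAdapter(sftpSessionFactory, FileExistsMode.REPLACE)
                            .useTemporaryFileName(false)
                            .remoteDirectory(REMOTE_DIR_TO_CREATE)
                            .autoCreateDirectory(true),
                    e -> e.advice(sftpResultAdvice())) // the "second param of .handle()"
            .get();
}

@Bean
public IntegrationFlow sftpResultFlow() {
    return IntegrationFlows
            .from("sftpResultChannel")
            .aggregate() // default correlation; see the note above
            .<List<Object>>handle((results, headers) -> results.stream()
                    .filter(Exception.class::isInstance) // keep only the failures
                    .collect(Collectors.toList()))
            .get();
}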

Related

Spring Integration AWS - KinesisMessageHandler Direct Channel

My message handler for publishing messages to the Kinesis stream is as follows:
public MessageHandler kinesisMessageHandler(final AmazonKinesisAsync amazonKinesis,
        @Qualifier("successChannel") MessageChannel successChannel,
        @Qualifier("errorChannel") MessageChannel errorChannel) {
    KinesisMessageHandler kinesisMessageHandler = new KinesisMessageHandler(amazonKinesis);
    kinesisMessageHandler.setSync(false);
    kinesisMessageHandler.setOutputChannel(successChannel);
    kinesisMessageHandler.setFailureChannel(errorChannel);
    return kinesisMessageHandler;
}

@Bean(name = "errorChannel")
public MessageChannel errorChannel() {
    return MessageChannels.direct().get();
}

@Bean(name = "successChannel")
public MessageChannel successChannel() {
    return MessageChannels.direct().get();
}
The setSync flag is set to false so that the messages are processed asynchronously. Also, I have created separate IntegrationFlows to receive and process the Kinesis responses from the success and error channels.
public IntegrationFlow successMessageIntegrationFlow(MessageChannel successChannel,
        MessageChannel inboundKinesisMessageChannel,
        MessageReceiverServiceActivator kinesisMessageReceiverServiceActivator) {
    return IntegrationFlows.from(successChannel)
            .channel(inboundKinesisMessageChannel)
            .handle(kinesisMessageReceiverServiceActivator, "receiveMessage")
            .get();
}

@Bean
public IntegrationFlow errorMessageIntegrationFlow(MessageChannel errorChannel,
        MessageChannel inboundKinesisErrorChannel,
        MessageReceiverServiceActivator kinesisErrorReceiverServiceActivator) {
    return IntegrationFlows.from(errorChannel)
            .channel(inboundKinesisErrorChannel)
            .handle(kinesisErrorReceiverServiceActivator, "receiveMessage")
            .get();
}
I wanted to know if you see any issues in using a DirectChannel to receive success and error responses from Kinesis and processing them with an IntegrationFlow. As far as I know, with a DirectChannel the producer blocks during send until the consumer finishes its work and returns control back to the caller. Is it a correct assumption that here the producer is executed on a different thread pool by the AmazonKinesisAsyncClient, and that the producer will not wait for the IntegrationFlow to process the messages? Let me know if I need to implement this differently.
Your assumption about blocking is correct: control does not come back to the producing (Kinesis client callback) thread until the consuming flow finishes. So, if you have a limited number of threads in that Kinesis client, you need to be sure that you free them as soon as possible. You might consider sending those callbacks to a queue channel instead. They are asynchronous anyway, but then they won't hold up the Kinesis client threads.
You also have a flaw in your flows: .channel(inboundKinesisMessageChannel). That means the same channel in the middle of two different flows, and if it is a direct one, you end up with round-robin distribution. I would just remove it altogether.
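For illustration, a minimal sketch of those two suggestions (not a verified configuration; the poller interval is an assumption, the other names come from the question): make the callback channels queue-based so the Kinesis client threads are released right away, and drop the shared channel from the middle of the flows.

@Bean(name = "successChannel")
public MessageChannel successChannel() {
    return MessageChannels.queue().get(); // callbacks are just enqueued; the Kinesis thread returns immediately
}

@Bean
public IntegrationFlow successMessageIntegrationFlow(MessageChannel successChannel,
        MessageReceiverServiceActivator kinesisMessageReceiverServiceActivator) {
    return IntegrationFlows.from(successChannel)
            // no .channel(inboundKinesisMessageChannel) in the middle
            .handle(kinesisMessageReceiverServiceActivator, "receiveMessage",
                    e -> e.poller(p -> p.fixedDelay(100))) // queue channels need a poller
            .get();
}

The error side would be changed the same way.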

Reply Channel for Messaging Gateway using Java DSL

I have a REST API which receives a POST request from a client application.
@Autowired
private EdiTranslationEvents.TransformToString transformToString;

@PostMapping("/testPost")
@ResponseStatus(HttpStatus.OK)
public String testPostLogging(@RequestBody Student student) {
    log.info("In controller....");
    System.out.println(this.transformToString.objectToInputGateway(student));
    return "testPost logging";
}
As you can see, in the controller, I have an autowired messaging gateway and I am using it to send data to a channel.
@Configuration
@EnableIntegration
public class EdiTranslationEvents {

    @Component
    @MessagingGateway
    public interface TransformToString {

        @Gateway(requestChannel = "inputObjectChannel")
        String objectToInputGateway(Student student);
    }

    @Bean
    public IntegrationFlow inputObjectString() {
        return IntegrationFlows.from(inputObjectChannel())
                .transform(Transformers.objectToString())
                .log(LoggingHandler.Level.DEBUG, "com.dash.Logger")
                .get();
    }
}
When I send data to the REST API, the API just hangs. The gateway does not return anything. I have the return type specified, and I was assuming that the gateway creates a temporary reply channel and sends the response to that channel.
However, I am not doing anything in the DSL configuration for creating or managing a reply. So, how do I send a reply back to the reply channel from the DSL flow?
Your current flow does not return a value; you are simply logging the message.
A terminating .log() ends the flow.
Delete the .log() element so the result of the transform will automatically be routed back to the gateway.
Or add a .bridge() (a bridge to nowhere) after the log and it will bridge the output to the reply channel.
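In code, the second option would look roughly like this (a sketch of the flow from the question with .bridge() added; the first option is simply the same flow with the .log() line deleted):

@Bean
public IntegrationFlow inputObjectString() {
    return IntegrationFlows.from(inputObjectChannel())
            .transform(Transformers.objectToString())
            .log(LoggingHandler.Level.DEBUG, "com.dash.Logger")
            .bridge() // "bridge to nowhere": forwards the result to the gateway's reply channel
            .get();
}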

Java: MQTT MessageProducerSupport to Flux

I have a simple MQTT Client that outputs received messages via IntegrationFlow:
public MqttPahoClientFactory mqttClientFactory() {
    DefaultMqttPahoClientFactory factory = new DefaultMqttPahoClientFactory();
    MqttConnectOptions options = new MqttConnectOptions();
    options.setServerURIs(new String[] { "tcp://test.mosquitto.org:1883" });
    factory.setConnectionOptions(options);
    return factory;
}

public MessageProducerSupport mqttInbound() {
    MqttPahoMessageDrivenChannelAdapter adapter = new MqttPahoMessageDrivenChannelAdapter(
            "myConsumer",
            mqttClientFactory(),
            "/test/#");
    adapter.setCompletionTimeout(5000);
    adapter.setConverter(new DefaultPahoMessageConverter());
    adapter.setQos(1);
    return adapter;
}

public IntegrationFlow mqttInFlow() {
    return IntegrationFlows.from(mqttInbound())
            .transform(p -> p + ", received from MQTT")
            .handle(logger())
            .get();
}

private LoggingHandler logger() {
    LoggingHandler loggingHandler = new LoggingHandler("INFO");
    loggingHandler.setLoggerName("siSample");
    return loggingHandler;
}
I need to pipe all received messages into a Flux though for further processing.
public Flux<String> mqttChannel() {
...
return mqttFlux;
}
How can I do that? The loggingHandler receives all messages from the IntegrationFlow. Couldn't my Flux get its input in a similar fashion, by passing it somehow to the IntegrationFlow's handle function?
The MQTT example code is taken from https://github.com/spring-projects/spring-integration-samples/blob/master/basic/mqtt/src/main/java/org/springframework/integration/samples/mqtt/Application.java
Attempt: Following Artem Bilan's advice, I'm now trying to use toReactivePublisher() to convert my inbound IntegrationFlow to a Flux.
public Flux<String> mqttChannel() {
    Publisher<Message<Object>> flow = IntegrationFlows.from(mqttInbound())
            .toReactivePublisher();
    Flux<String> mqttFlux = Flux.from(flow)
            .log()
            .map(i -> "TESTING: Received a MQTT message");
    return mqttFlux;
}
Running the example, I get the following error:
10:14:39.541 [MQTT Call: myConsumer] ERROR o.s.i.m.i.MqttPahoMessageDrivenChannelAdapter - Unhandled exception for GenericMessage [payload=OFF,26.70,65.00,663,-62,192.168.2.100,0.026,25,4,6,7,933,278,27,4,1,0,1580496218,730573600,1800000,1980000,1580496218,730573600,10800000,11880000, headers={mqtt_receivedRetained=true, mqtt_id=0, mqtt_duplicate=false, id=3f7565aa-ff4f-c389-d8a9-712d4f06f1cb, mqtt_receivedTopic=/083B7036697886C41D2DF2FD919143EE/MasterBedroom/Sensor/, mqtt_receivedQos=0, timestamp=1602231279537}]
Conclusion: as soon as the first message arrives, it is handled incorrectly and an exception is thrown.
Please read this doc: https://docs.spring.io/spring-integration/docs/5.3.2.RELEASE/reference/html/reactive-streams.html#reactive-streams
It is not clear what you would like to achieve with that "my Flux" and how it should look, but for your current configuration there are a couple of solutions.
You can use a FluxMessageChannel, which is already a Publisher, so you can simply use Flux.from() and subscribe to it to consume data produced by the mentioned MqttPahoMessageDrivenChannelAdapter.
Another way is to use toReactivePublisher() on the IntegrationFlowBuilder to expose the whole flow as a reactive Publisher source. In this case, of course, you can't use the LoggingHandler, because it is a one-way component and ends your flow right there. You may consider using the log() operator instead: https://docs.spring.io/spring-integration/docs/5.3.2.RELEASE/reference/html/dsl.html#java-dsl-log
By the way, the FluxMessageChannel is publish-subscribe, so you can have it in the flow for those logs and also use it externally for a Flux.from() subscription. All the subscribers to this channel are going to get the same message.
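A sketch of the second option, based on the beans from the question (declared as @Bean methods so the framework registers and starts the flow; the generics may need adjusting for your Spring Integration version):

@Bean
public Publisher<Message<String>> mqttPublisher() {
    return IntegrationFlows.from(mqttInbound())
            .log(LoggingHandler.Level.INFO, "siSample") // non-terminating log() operator instead of handle(logger())
            .transform(p -> p + ", received from MQTT")
            .toReactivePublisher();
}

@Bean
public Flux<String> mqttChannel(Publisher<Message<String>> mqttPublisher) {
    return Flux.from(mqttPublisher)
            .map(Message::getPayload); // unwrap each Spring message into its String payload
}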

Spring Integration one channel for multiple producers and consumers

I have this direct channel:
@Bean
public DirectChannel emailingChannel() {
    return MessageChannels
            .direct("emailingChannel")
            .get();
}
Can I define multiple flows for the same channel like this:
@Bean
public IntegrationFlow flow1FromEmailingChannel() {
    return IntegrationFlows.from("emailingChannel")
            .handle("myService", "handler1")
            .get();
}

@Bean
public IntegrationFlow flow2FromEmailingChannel() {
    return IntegrationFlows.from("emailingChannel")
            .handle("myService", "handler2")
            .get();
}
EDIT
@Service
public class MyService {

    public void handler1(Message<String> message) {
        ....
    }

    public void handler2(Message<List<String>> message) {
        ....
    }
}
Each flow's handle(...) method manipulates a different payload data type, but the goal is the same, i.e. reading data from the channel and calling the relevant handler. I would like to avoid many if...else checks of the data type in a single handler.
Additional question: what happens when multiple threads call the same channel (no matter its type: Direct, PubSub or Queue) at the same time (since by default a @Bean has singleton scope)?
Thanks a lot
With a direct channel messages will be round-robin distributed to the consumers.
With a queue channel only one consumer will get each message; the distribution will be based on their respective pollers.
With a pub/sub channel both consumers will get each message.
You need to provide more information but it sounds like you need to add a payload type router to your flow to direct the messages to the right consumer.
EDIT
When the handler methods are in the same class, you don't need two flows; the framework will examine the methods and (as long as there is no ambiguity) will call the method that matches the payload type.
.handle(myServiceBean())
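A sketch of that single-flow variant, assuming MyService is the bean shown in the EDIT above:

@Bean
public IntegrationFlow emailingFlow(MyService myService) {
    return IntegrationFlows.from("emailingChannel")
            .handle(myService) // the framework picks handler1 or handler2 based on the payload type
            .get();
}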

Spring Integration 4 asynchronous request/response

I am trying to write a simple message flow using Spring Integration v4's DSL APIs which would look like this:
        -> in.ch  -> Processing -> JmsGatewayOut -> JMS_OUT_QUEUE
Gateway
        <- out.ch <- Processing <- JmsGatewayIn  <- JMS_IN_QUEUE
With the request/response being asynchronous, when I inject a message via the initial Gateway, the message goes all the way to JMS_OUT_QUEUE. Beyond this message flow, a reply message is put back into JMS_IN_QUEUE, which is then picked up by JmsGatewayIn. At this point, the message is processed and placed into out.ch (I know the response gets to out.ch because I have a logger interceptor there which logs the message being placed there), but the Gateway never receives the response.
Instead of a response, the system outside of this message flow, which picked up the message from JMS_OUT_QUEUE and placed the response in JMS_IN_QUEUE, receives a javax.jms.MessageFormatException: MQJMS1061: Unable to deserialize object on its own JmsOutboundGateway (from looking at the logs, I think it is failing to deserialize a JMS reply object).
I have clearly not got something configured correctly but I don't know exactly what. Does anyone know what I am missing?
Working with spring-integration-core-4.0.3.RELEASE, spring-integration-jms-4.0.3.RELEASE, spring-integration-java-dsl-1.0.0.M2, spring-jms-4.0.6.RELEASE.
My Gateway is configured as follows:
@MessagingGateway
public interface WsGateway {

    @Gateway(requestChannel = "in.ch", replyChannel = "out.ch",
             replyTimeout = 45000)
    AResponse process(ARequest request);
}
My Integration flow is configured as follows:
@Configuration
@EnableIntegration
@IntegrationComponentScan
@ComponentScan
public class IntegrationConfig {

    @Bean(name = "in.ch")
    public DirectChannel inCh() {
        return new DirectChannel();
    }

    @Bean(name = "out.ch")
    public DirectChannel outCh() {
        return new DirectChannel();
    }

    @Autowired
    private MQQueueConnectionFactory mqConnectionFactory;

    @Bean
    public IntegrationFlow requestFlow() {
        return IntegrationFlows.from("in.ch")
                .handle("processor", "processARequest")
                .handle(Jms.outboundGateway(mqConnectionFactory)
                        .requestDestination("JMS_OUT_QUEUE")
                        .correlationKey("JMSCorrelationID"))
                .get();
    }

    @Bean
    public IntegrationFlow responseFlow() {
        return IntegrationFlows.from(Jms.inboundGateway(mqConnectionFactory)
                        .destination("JMS_IN_QUEUE"))
                .handle("processor", "processAResponse")
                .channel("out.ch")
                .get();
    }
}
Thanks for any help on this,
PM.
First of all, your configuration is bad:
Since you start the flow from WsGateway#process, you really should wait for the reply there.
The gateway's request/reply capability is based on a TemporaryReplyChannel, which is placed in the headers as a non-serializable value.
As long as you wait for the reply on that gateway, there is actually no reason to provide the replyChannel, unless you are going to do some publish-subscribe logic on the reply.
As you send a message to the JMS queue, you should understand that the consumer part might be a separate remote application, and the latter might know nothing about your out.ch.
The JMS request/reply capability is really based on the JMSCorrelationID, but that isn't enough: one more thing needed here is the ReplyTo JMS header. Hence, if you are going to send the reply from the consumer, you should really just rely on the JmsGatewayIn stuff.
So I'd change your code to this:
@MessagingGateway
public interface WsGateway {

    @Gateway(requestChannel = "in.ch", replyTimeout = 45000)
    AResponse process(ARequest request);
}

@Configuration
@EnableIntegration
@IntegrationComponentScan
@ComponentScan
public class IntegrationConfig {

    @Bean(name = "in.ch")
    public DirectChannel inCh() {
        return new DirectChannel();
    }

    @Autowired
    private MQQueueConnectionFactory mqConnectionFactory;

    @Bean
    public IntegrationFlow requestFlow() {
        return IntegrationFlows.from("in.ch")
                .handle("processor", "processARequest")
                .handle(Jms.outboundGateway(mqConnectionFactory)
                        .requestDestination("JMS_OUT_QUEUE")
                        .replyDestination("JMS_IN_QUEUE"))
                .handle("processor", "processAResponse")
                .get();
    }
}
Let me know if this is appropriate for you, or try to explain why you use two-way gateways for one-way cases. Maybe Jms.outboundAdapter() and Jms.inboundAdapter() would be a better fit for you?
UPDATE
How to use <header-channels-to-string> from Java DSL:
.enrichHeaders(e -> e.headerChannelsToString())
