Spring Integration AWS (SQS) to trigger Spring Integration flow - spring-integration

I have to listen to a queue using a Spring Integration flow and Spring Integration AWS (SQS). Once a message is received from the queue, it should trigger an integration flow. Below is what I am trying; everything seems fine, but after a test message is received it does not trigger any integration flow. Please let me know where I am going wrong:
UPDATED as per comment from Artem
Adapter for SQS:
@Bean
public MessageProducerSupport sqsMessageDrivenChannelAdapter() {
    SqsMessageDrivenChannelAdapter adapter = new SqsMessageDrivenChannelAdapter(amazonSQSAsync, "Main");
    adapter.setOutputChannel(inputChannel());
    adapter.setAutoStartup(true);
    adapter.setMessageDeletionPolicy(SqsMessageDeletionPolicy.NEVER);
    adapter.setMaxNumberOfMessages(10);
    adapter.setVisibilityTimeout(5);
    return adapter;
}
Channel configured:
@Bean
public MessageChannel inputChannel() {
    return new DirectChannel();
}
Now the main integration flow trigger point:
@Bean
public IntegrationFlow inbound() {
    return IntegrationFlows.from("inputChannel").transform(i -> "TEST_FLOW").get();
}
I appreciate any kind of help.

The sqsMessageDrivenChannelAdapter() must be declared as a @Bean.
The inbound() must be declared as a @Bean.
This one does not make sense at all: IntegrationFlows.from(MessageChannels.queue()). What is the point of starting the flow from an anonymous channel? Who is going to produce messages to that channel, and how?
Make yourself familiar with different channels: https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations
Pay attention that a QueueChannel must be consumed via a polling endpoint.
Right, there is a default poller auto-configured by Spring Boot, but it is based on a single thread in the TaskScheduler and has a polling period of 10 millis.
I wouldn't recommend handing off SQS messages to a QueueChannel: when the consumer fails, you lose the data. It is better to process those messages in the consumer's thread.
Otherwise your intention is not clear in the provided code.
Can you, please, share with us what error you get or anything else?
You can also turn on DEBUG logging level for org.springframework.integration to see how your messages are processed.
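For reference, a minimal sketch of consuming a QueueChannel via a polling endpoint, in the style of the question's config (the channel name and handler here are illustrative, not from the question):

@Bean
public IntegrationFlow polledQueueFlow() {
    return IntegrationFlows.from(MessageChannels.queue("buffer"))
            // a QueueChannel is passive, so an explicit poller must drive consumption
            .handle(message -> System.out.println(message.getPayload()),
                    e -> e.poller(Pollers.fixedDelay(1000).maxMessagesPerPoll(10)))
            .get();
}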

Related

Spring integration: how to specify a custom task executor for QueueChannel

I'm writing a Spring Integration application which receives messages from an input system, transforms them and sends them to an output system. The connection to the output system is not always available. The messages from the input system can come at any moment. If they come while the output system is not available, they shouldn't be lost; they should eventually be sent once the output system is available again. So I store the messages from the input system in a QueueChannel:
@Configuration
class TcInFlowConfiguration {

    @Bean
    fun tcInFlow(
        @Qualifier(TC_MESSAGE_LISTENER) listener: MessageProducerSupport,
        @Qualifier(TC_MESSAGE_CHANNEL) messageChannel: MessageChannel
    ): IntegrationFlow {
        return IntegrationFlow
            .from(listener)
            .transform { msg: ByteArray -> RamsesTcFrame.deserialize(msg) }
            .channel(messageChannel)
            .get()
    }

    @Bean
    @Qualifier(TC_MESSAGE_CHANNEL)
    fun tcMessageChannel(): MessageChannel {
        return MessageChannels.queue().get()
    }
}
The app receives an API call to open/close the connection to the output system, so I create and remove the output integration flow programmatically via IntegrationFlowContext:
val outFlow = IntegrationFlow
    .from(TC_MESSAGE_CHANNEL)
    .handle(createMessageSender())
    .get()
integrationFlowContext.registration(outFlow).register()
When the messages are polled from the queue to be processed by the outFlow, the default Spring task scheduler is used (I see "scheduling-1" as the thread name in logs). The problem is that I have multiple independent integration flows in the app with multiple queue channels, so they all get mixed up by being processed by the same scheduler. What I want is to process each flow in its own dedicated thread, so the flows won't block each other. How can I achieve this?
Spring Boot v3.0.2, Spring v6.0.4
I tried setting a task scheduler for my QueueChannel:
val queueChannel = MessageChannels.queue().get()
queueChannel.setTaskScheduler(taskScheduler)
It didn't have any effect; the taskScheduler seems simply not to be used by the QueueChannel implementation.
I tried using ExecutorChannel instead of QueueChannel which supports setting a custom Executor. Unfortunately, ExecutorChannel doesn't buffer messages in memory, so if there are no subscribers to the channel the messages are lost.
Finally, I tried defining a poller in the outFlow to poll the messages from the QueueChannel:
IntegrationFlow
    .from(TC_MESSAGE_CHANNEL)
    .handle(createMessageSender()) { e -> e.poller(Pollers.fixedDelay(10).taskExecutor(taskExecutor)) }
    .get()
This didn't work either. After the connection to the output system is closed and the outFlow is removed, the intermediate channel created by the poller remains in the Spring context. So when a new message arrives in the QueueChannel, it goes to that intermediate channel, which is a subscribable channel without subscribers, so the message is lost.
That's correct. The QueueChannel is just a buffer for messages. It really only matters how you consume messages from there, and the common way is to use a PollingConsumer, as you do with that e.poller(). It is also correct to configure a taskExecutor() if you don't want your messages to be consumed on a TaskScheduler's thread. Not sure what you mean by an "intermediate channel", since it doesn't look like you have one declared in your outFlow: you have that .from(TC_MESSAGE_CHANNEL) and then immediately a handle() with a proper poller. So, there is no extra channel in between or after, unless you do something else in your createMessageSender().
I would suggest not having a dynamic flow, but rather a singleton one for that output system. The QueueChannel can be configured with a persistent message store, and the poller can be transactional. So, if there is no connection to the target system, the transaction is rolled back and the message remains in the store: https://docs.spring.io/spring-integration/docs/current/reference/html/system-management.html#message-store.
You can also just stop() the polling consumer for that handle() when there is no connection, so no messages are polled from the queue at that moment.
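A minimal sketch of that singleton-flow idea, in Java (the dedicated executor, the transaction manager, and the "tcOutEndpoint" id are assumptions for illustration, not from the question):

@Bean
public IntegrationFlow outFlow(MessageChannel tcMessageChannel,
        PlatformTransactionManager transactionManager) {
    return IntegrationFlow.from(tcMessageChannel)
            .handle(createMessageSender(),
                    e -> e.id("tcOutEndpoint")
                            // a dedicated single-thread executor, so this flow
                            // does not share the common TaskScheduler threads
                            .poller(Pollers.fixedDelay(10)
                                    .taskExecutor(Executors.newSingleThreadExecutor())
                                    .transactional(transactionManager)))
            .get();
}

With the id set, the polling consumer can be stopped and restarted on your connection open/close API calls, e.g. applicationContext.getBean("tcOutEndpoint", Lifecycle.class).stop().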

SI subscription to multiple MQTT topics

I'm trying to learn how to handle MQTT messages in Spring Integration.
I have created a converter that subscribes with a single MqttPahoMessageDrivenChannelAdapter per MQTT topic for consuming and converting the messages.
The problem is that our data provider is planning to "speed up" publishing messages on his side. So instead of having a few (<= 10) topics, each of which has messages with about 150 fields, it is planned to publish each of those fields to a separate MQTT topic.
This means my converter would have to consume ca. 1000 MQTT topics, but I do not know:
Whether Spring Integration is still a good choice for it. Because, AFAIK, the mentioned adapter uses the Paho MqttClient, which consumes the messages from all of the topics it is subscribed to in one single thread, and creating 1000 instances of those adapters is overkill.
Whether, if we stick with Spring Integration and use the provided components, it would be a good idea to create a single inbound adapter for all of the fields that previously were in the messages of one topic, moving the conversion away from the adapter bean to a separate (converting) bean connected to the adapter via an executor channel, thus executing the conversion of those fields in parallel on some thread pool.
Thanks in advance for your answers!
I think your idea makes sense.
For that purpose you need to implement a pass-through MqttMessageConverter that provides the MqttMessage as the payload and the topic as a header:
public class PassThroughMqttMessageConverter implements MqttMessageConverter {

    @Override
    public Message<?> toMessage(String topic, MqttMessage mqttMessage) {
        return MessageBuilder.withPayload(mqttMessage)
                .setHeader(MqttHeaders.RECEIVED_TOPIC, topic)
                .build();
    }

    @Override
    public Object fromMessage(Message<?> message, Class<?> targetClass) {
        return null;
    }

    @Override
    public Message<?> toMessage(Object payload, MessageHeaders headers) {
        return null;
    }

}
So you will indeed be able to perform the target conversion downstream, after the mentioned ExecutorChannel, in a custom transformer.
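A possible wiring sketch for that idea, assuming the adapter is configured with the pass-through converter above (the pool size and the converter class are illustrative):

@Bean
public IntegrationFlow mqttInFlow(MqttPahoMessageDrivenChannelAdapter adapter) {
    return IntegrationFlows.from(adapter)
            // hand off to a thread pool right after the single Paho client thread
            .channel(c -> c.executor(Executors.newFixedThreadPool(8)))
            // hypothetical converter doing the real payload conversion in parallel
            .<MqttMessage, Object>transform(m -> MyFieldConverter.convert(m))
            .channel("converted")
            .get();
}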
You may also consider implementing a custom MqttPahoClientFactory (an extension of the DefaultMqttPahoClientFactory may work as well) and providing a custom ScheduledExecutorService to be injected into the MqttClient you are going to create in getClientInstance().
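For example, a sketch of such a factory, assuming a Paho client version whose constructor accepts a ScheduledExecutorService (the pool size is arbitrary):

public class CustomMqttPahoClientFactory extends DefaultMqttPahoClientFactory {

    private final ScheduledExecutorService executor =
            Executors.newScheduledThreadPool(10);

    @Override
    public MqttClient getClientInstance(String url, String clientId) throws MqttException {
        // let Paho use our executor instead of creating its own internal one
        return new MqttClient(url, clientId, new MemoryPersistence(), executor);
    }

}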

How to make the call to service activator transactional after split

I am using following to define my integration flow:
@Bean
public IntegrationFlow pollingFlow(MessageSource<Object> jdbcMessageSource) {
    return IntegrationFlows.from(jdbcMessageSource,
            c -> c.poller(Pollers.fixedRate(250, TimeUnit.MILLISECONDS)
                    .maxMessagesPerPoll(1)
                    .transactional()))
            .split()
            .channel(taskSourceChannel())
            .get();
}
I would like to make the call to the service activator that reads from taskSourceChannel transactional. Also, I want to use the following with my transaction:
@Bean
public TransactionSynchronizationFactory transactionSynchronizationFactory() {
    ExpressionEvaluatingTransactionSynchronizationProcessor syncProcessor =
            new ExpressionEvaluatingTransactionSynchronizationProcessor();
    syncProcessor.setAfterCommitChannel(successChannel());
    syncProcessor.setAfterRollbackChannel(failureChannel());
    return new DefaultTransactionSynchronizationFactory(syncProcessor);
}
The taskSourceChannel is an executor channel.
@Bean
public MessageChannel taskSourceChannel() {
    return new ExecutorChannel(executor());
}
How can I add transaction support after the split while using the TransactionSynchronizationFactory? I don't want to make the polling transactional. The only solution I can think of is putting a transactional annotation on the activator, but that won't solve my problem: I would like to make it applicable to any service activator that uses this channel.
Your question is not so clear, but you definitely need to consider adding the transaction to the service activator. Although you don't show what the subscriber for that taskSourceChannel is, you need to make sure not to have several subscribers on it.
Nevertheless, I think your point is to apply a TX to the service activator on this taskSourceChannel and everything after that one.
For this purpose Spring Integration provides a TransactionHandleMessageAdvice. See more info in the Reference Manual: https://docs.spring.io/spring-integration/reference/html/messaging-endpoints-chapter.html#tx-handle-message-advice.
The TransactionSynchronizationFactory is only used from the AbstractPollingEndpoint implementations. However, you can still utilize it in your transactional context by relying on TransactionSynchronizationManager.registerSynchronization().
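A minimal sketch of applying such an advice to a service activator on taskSourceChannel (the "taskService" bean and its "process" method are illustrative):

@Bean
public IntegrationFlow taskFlow(PlatformTransactionManager transactionManager) {
    return IntegrationFlows.from(taskSourceChannel())
            .handle("taskService", "process",
                    // 'true' produces a TransactionHandleMessageAdvice, so the
                    // transaction wraps handleMessage() and everything downstream
                    e -> e.advice(new TransactionInterceptorBuilder(true)
                            .transactionManager(transactionManager)
                            .build()))
            .get();
}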

How to execute a success action tied to the inbound flow after message processing via DirectChannel

This question is more of a design question than a real problem. Given the following basic flow:
@Bean
public DirectChannel getFileToSftpChannel() {
    return new DirectChannel();
}

@Bean
public IntegrationFlow sftpOutboundFlow() {
    return IntegrationFlows.from(getFileToSftpChannel())
            .handle(Sftp.outboundAdapter(this.sftpSessionFactory)
                    .useTemporaryFileName(false)
                    .remoteDirectory("test"))
            .get();
}

@Bean
public IntegrationFlow filePollingInboundFlow() {
    return from(s -> s.file(new File("path")).patternFilter("*.ext"),
            e -> e.poller(fixedDelay(60, SECONDS)))
            .channel(getFileToSftpChannel())
            .get();
}
There is an inbound file polling flow which publishes messages via a DirectChannel to an outbound SFTP flow uploading the file.
After the entire flow finishes, I want to execute a "success" action: move the original file (locally) to an archive folder.
Using the DirectChannel, I understand that the upload will happen in the same thread as the file polling.
In other words, the file poller blocks until the upload completes (or an error message is returned, which is then pushed to the error channel).
Knowing this, I want to place the 'success' action (= moving the original file) on the inbound flow. Things I already know about and don't want to use:
Another 'handle' on the sftpOutboundFlow. Reason: moving the file is tied to the inbound flow, not the outbound flow. For example, if I later introduced a second producer (e.g. a JMS inbound flow) publishing to the same channel, there would be no 'file' to be moved.
Adding an interceptor on the DirectChannel and using 'afterSendCompletion'. Reason: same as above, I want the logic to be tied to the inbound flow.
Adding transaction semantics on the inbound flow and reacting on 'commit'. Reason: as all of this is non-transactional (file system/SFTP based), I want to avoid using this.
Another thing I tried was adding a 'handle' on the inbound flow. However, I learned that since the inbound flow has no real 'reply', the handle is executed before the message is sent, so this doesn't work, as the move has to be executed after successful processing of the message.
Question in short: what is the standard way of executing an action supplied by the producer (= inbound flow) after the message was successfully processed by a consumer (= outbound flow) via the DirectChannel?
Well, the standard way to do something similar is a transaction, and that's why some time ago we introduced the PseudoTransactionManager. The XML sample for a similar task looks like:
<int-file:inbound-channel-adapter id="realTx" channel="txInput" auto-startup="false"
        directory="${java.io.tmpdir}/si-test2">
    <int:poller fixed-rate="500">
        <int:transactional synchronization-factory="syncFactory"/>
    </int:poller>
</int-file:inbound-channel-adapter>

<bean id="transactionManager" class="org.springframework.integration.transaction.PseudoTransactionManager"/>

<int:transaction-synchronization-factory id="syncFactory">
    <int:after-commit expression="payload.delete()"/>
</int:transaction-synchronization-factory>
As you see, we remove the file at the end of the transaction, which is really raised after your transfer to SFTP.
I'd say it is the best way to stay tied to only the producer.
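A rough Java DSL equivalent of that XML, under the assumption of the same file-polling inbound flow as above (the delete expression and bean names are illustrative):

@Bean
public TransactionSynchronizationFactory syncFactory() {
    ExpressionEvaluatingTransactionSynchronizationProcessor processor =
            new ExpressionEvaluatingTransactionSynchronizationProcessor();
    // runs only after the "transaction" commits, i.e. after the SFTP upload succeeded
    processor.setAfterCommitExpression(
            new SpelExpressionParser().parseExpression("payload.delete()"));
    return new DefaultTransactionSynchronizationFactory(processor);
}

@Bean
public IntegrationFlow filePollingInboundFlow() {
    return from(s -> s.file(new File("path")).patternFilter("*.ext"),
            e -> e.poller(fixedDelay(60, SECONDS)
                    .transactional(new PseudoTransactionManager())
                    .transactionSynchronizationFactory(syncFactory())))
            .channel(getFileToSftpChannel())
            .get();
}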
Another way is to introduce one more channel before getFileToSftpChannel() and apply ChannelInterceptor.afterSendCompletion, which will be invoked at the end too, for the same single-thread reason. With this approach you should just bridge all your producers, with their specific DirectChannels, to that single getFileToSftpChannel() for the SFTP adapter.
So, it's up to you what to choose. You have a good argument from the architectural perspective to divide the logic by responsibility levels, but as you see there is not that much choice...
Any other ideas are welcome!
You can try something like the following:
@Bean
public DirectChannel getFileToSftpChannel() {
    DirectChannel directChannel = new DirectChannel();
    directChannel.addInterceptor(new ChannelInterceptorAdapter() {

        @Override
        public void afterSendCompletion(final Message<?> message,
                final MessageChannel channel, final boolean sent, final Exception ex) {
            if (ex == null) {
                new Archiver().archive((File) message.getPayload());
            }
        }

    });
    return directChannel;
}

JMS Outbound Gateway response read throttling

I have a JMS outbound gateway which sends messages out via a request queue and receives messages in via a response queue. I would like to know the simplest way to throttle the consumption of messages off the response queue. I have tried setting a Poller on the outbound gateway but, when I set it, the response messages are not consumed at all. Can a Poller be used in outbound gateways for the purpose of message consumption throttling? If so, how? If not, how can I best throttle response message consumption instead?
My stack is:
o.s.i:spring-integration-java-dsl:1.0.0.RC1
o.s.i:spring-integration-jms:4.0.4.RELEASE
My IntegrationConfig class:
@Configuration
@EnableIntegration
public class IntegrationConfig {

    ...

    @Bean
    public IntegrationFlow testFlow() {
        return IntegrationFlows
                .from("test.request.ch")
                .handle(Jms.outboundGateway(connectionFactory)
                        .receiveTimeout(45000)
                        .requestDestination("REQUEST_QUEUE")
                        .replyDestination("RESPONSE_QUEUE")
                        .correlationKey("JMSCorrelationID"), e -> {
                    e.requiresReply(true);
                    // when this poller is set, response messages are not consumed at all...
                    e.poller(Pollers.fixedRate(1000).maxMessagesPerPoll(2));
                })
                .handle("testService", "testMethod")
                .channel("test.response.ch")
                .get();
    }

    ...
}
Cheers,
PM
Since you are going to fetch messages from the response queue, the .poller() doesn't help you.
We need a poller only if our endpoint's input channel (in your case test.request.ch) is a PollableChannel. See the docs on the matter.
There is a .replyContainer() option on the Jms.outboundGateway for you. With that you can configure the concurrentConsumers option to achieve better throughput on the response queue.
Otherwise the JmsOutboundGateway creates a MessageConsumer for each request message.
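A sketch of that option, assuming the same gateway as in the question (the concurrency numbers are arbitrary):

@Bean
public IntegrationFlow testFlow() {
    return IntegrationFlows
            .from("test.request.ch")
            .handle(Jms.outboundGateway(connectionFactory)
                    .receiveTimeout(45000)
                    .requestDestination("REQUEST_QUEUE")
                    .replyDestination("RESPONSE_QUEUE")
                    .correlationKey("JMSCorrelationID")
                    // one long-lived listener container for all replies instead of
                    // a MessageConsumer per request; concurrency bounds consumption
                    .replyContainer(c -> c.concurrentConsumers(1).maxConcurrentConsumers(2)))
            .handle("testService", "testMethod")
            .channel("test.response.ch")
            .get();
}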
