Spring Integration Jms.outboundGateway x Firewalls - spring-integration

The flow below works fine when the network firewall is open, but if we close it and the JMS broker cannot be reached, the timeout never fires and the caller process gets stuck. It seems the gateway timer does not start because the thread never returned... Can you suggest the best way to handle this unhappy scenario?
@Bean
public IntegrationFlow request() {
    return IntegrationFlows.from(requestChannel)
            .handle(Jms.outboundGateway(this.connectionFactory)
                    .requestDestination(requestQueue)
                    .extractReplyPayload(false)
                    .correlationKey("JMSCorrelationID")
                    .receiveTimeout(5000L))
            .channel(replyChannel)
            .get();
}

See the receiveTimeout JavaDocs:
/**
 * Set the max timeout value for the MessageConsumer's receive call when
 * waiting for a reply. The default value is 5 seconds.
 * @param receiveTimeout The receive timeout.
 */
public void setReceiveTimeout(long receiveTimeout) {
So, it is really about the reply, which only comes into play after we have already sent the request. Moreover, you explicitly set it to those default 5 seconds anyway. So, the blocking problem is somewhere else.
It would be great if you could share some DEBUG (or even TRACE) logs for org.springframework.integration so we can see what is going on when you send a message to the flow without JMS broker access.
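For reference, assuming the application runs on Spring Boot (otherwise the same category can be raised in your Logback/Log4j configuration), that logging can be turned on with:

logging.level.org.springframework.integration=DEBUG

or, for the most detail while reproducing the hang:

logging.level.org.springframework.integration=TRACE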

Related

Spring integration: how to specify a custom task executor for QueueChannel

I'm writing a Spring Integration application which receives messages from an input system, transforms them and sends them to an output system. The connection to the output system is not always available. The messages from the input system can come at any moment. If they come while the output system is not available, they shouldn't be lost and should eventually be sent when the output system becomes available. So I store the messages from the input system in a QueueChannel:
@Configuration
class TcInFlowConfiguration {

    @Bean
    fun tcInFlow(
        @Qualifier(TC_MESSAGE_LISTENER) listener: MessageProducerSupport,
        @Qualifier(TC_MESSAGE_CHANNEL) messageChannel: MessageChannel
    ): IntegrationFlow {
        return IntegrationFlow
            .from(listener)
            .transform { msg: ByteArray -> RamsesTcFrame.deserialize(msg) }
            .channel(messageChannel)
            .get()
    }

    @Bean
    @Qualifier(TC_MESSAGE_CHANNEL)
    fun tcMessageChannel(): MessageChannel {
        return MessageChannels.queue().get()
    }
}
The app receives an API call to open/close the connection to the output system, so I create and remove the output integration flow programmatically via IntegrationFlowContext:
val outFlow = IntegrationFlow
    .from(TC_MESSAGE_CHANNEL)
    .handle(createMessageSender())
    .get()
integrationFlowContext.registration(outFlow).register()
When the messages are polled from the queue to be processed by the outFlow, the default Spring task executor is used (I see "scheduling-1" as a thread name in the logs). The problem is that I have multiple independent integration flows in the app with multiple queue channels, so they all get mixed up by being processed by the same task executor. What I want is to process each flow in its own dedicated thread, so the flows won't block each other. How can I achieve this?
Spring Boot v3.0.2, Spring v6.0.4
I tried setting a task scheduler for my QueueChannel:
val queueChannel = MessageChannels.queue().get()
queueChannel.setTaskScheduler(taskScheduler)
It didn't have any effect; the taskScheduler seems simply not to be used by the QueueChannel implementation.
I tried using ExecutorChannel instead of QueueChannel which supports setting a custom Executor. Unfortunately, ExecutorChannel doesn't buffer messages in memory, so if there are no subscribers to the channel the messages are lost.
Finally, I tried defining a poller in the outFlow to poll the messages from the QueueChannel:
IntegrationFlow
    .from(TC_MESSAGE_CHANNEL)
    .handle(createMessageSender()) { e -> e.poller(Pollers.fixedDelay(10).taskExecutor(taskExecutor)) }
    .get()
This didn't work either. After the connection to the output system is closed and the outFlow is removed, the intermediate channel created by the poller remains in the Spring context. So when a new message arrives in the QueueChannel it goes to that intermediate channel, which is a subscribable channel without subscribers, so the message is lost.
That's correct. The QueueChannel is just a buffer for messages. It really only matters how you consume messages from there, and the common way is to use a PollingConsumer like you do with that e.poller(). It is also correct to configure a taskExecutor() if you don't want your messages to be consumed on a TaskScheduler's thread. I'm not sure what you mean by an "intermediate channel", since it doesn't look like you have one declared in your outFlow: you have that .from(TC_MESSAGE_CHANNEL) and then immediately a handle() with a proper poller, so there is no extra channel in between or after, unless you do something else in your createMessageSender().
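For illustration only, here is a minimal Java sketch of such a poller with its own dedicated thread (createMessageSender() and TC_MESSAGE_CHANNEL are the names from the question; the single-thread executor is an assumption):

// Dynamic registration as in the question, but the polling consumer gets its own
// single-thread executor, so it does not compete with other flows for the shared
// TaskScheduler threads.
IntegrationFlow outFlow = IntegrationFlow
        .from(TC_MESSAGE_CHANNEL)
        .handle(createMessageSender(),
                e -> e.poller(Pollers.fixedDelay(10)
                        .taskExecutor(Executors.newSingleThreadExecutor())))
        .get();

integrationFlowContext.registration(outFlow).register();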
I would suggest not having a dynamic flow, but rather a singleton one for that output system. The QueueChannel can be configured with a persistent message store and the poller can be transactional. So, if there is no connection to the target system, the transaction is rolled back and the message remains in the store: https://docs.spring.io/spring-integration/docs/current/reference/html/system-management.html#message-store.
You also can just stop() the polling consumer for that handle() when there is no connection, so no messages are polled from the queue at that moment.
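A rough Java sketch of that suggestion, assuming a ChannelMessageStore bean (for example a JDBC-backed one) and a transaction manager are available; the bean names and the endpoint id are illustrative, not taken from the question:

@Bean
public QueueChannelSpec tcMessageChannel(ChannelMessageStore messageStore) {
    // Persistent buffer instead of an in-memory queue.
    return MessageChannels.queue(messageStore, "tcMessages");
}

@Bean
public IntegrationFlow tcOutFlow(MessageHandler messageSender,
                                 PlatformTransactionManager transactionManager) {
    return IntegrationFlow
            .from("tcMessageChannel")
            .handle(messageSender,
                    e -> e.id("tcOutEndpoint")
                            // Roll back on failure so the message stays in the store.
                            .poller(Pollers.fixedDelay(100).transactional(transactionManager)))
            .get();
}

When the output system is unavailable, the endpoint can simply be stopped and later restarted, for example via applicationContext.getBean("tcOutEndpoint", Lifecycle.class).stop(), instead of removing the whole flow.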

How to stop the polling of @InboundChannelAdapter when Kafka goes down to prevent data loss?

I am using Spring Cloud Data Flow.
@Bean
@InboundChannelAdapter(channel = TbeSource.PR1, poller = @Poller(fixedDelay = "2"))
public Supplier<Product> getProductSource(ProductBuilder dataAccess) {
    return () -> dataAccess.getNext();
}
If Kafka suddenly goes down, how can we stop this polling behaviour to prevent data loss?
While testing, even when Kafka is down, data is continuously read from the database and the adapter continuously tries to send the records to Kafka.
The expected behaviour is to stop the data polling once Kafka goes down.
Is there any possible way to do that?
The @Poller of the @InboundChannelAdapter can be configured with an errorChannel:
/**
 * @return The bean name of the default error channel
 * for the underlying {@code MessagePublishingErrorHandler}.
 * @since 4.3.3
 */
String errorChannel() default "";
So, whenever an exception happens downstream of that TbeSource.PR1 channel, it is delivered to the provided error channel, where you can build some error-handling flow.
Over there you could apply the logic to stop the SourcePollingChannelAdapter created for that @InboundChannelAdapter and Supplier combination. In this case the bean id is like this: [CONFIGURATION_CLASS_BEAN_NAME.getProductSource.inboundChannelAdapter]. See here for more info: https://docs.spring.io/spring-integration/reference/html/configuration.html#annotations_on_beans. As it states, you can also just use an @EndpointId instead to simplify the dependency injection routine.
Make sure you rethrow the exception to let the DB transaction roll back and avoid data loss!
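For illustration only, here is a sketch following those suggestions; the error channel name, the @EndpointId value and the BeanFactory lookup are assumptions, not code from the question:

@Bean
@EndpointId("productSourceEndpoint")
@InboundChannelAdapter(channel = TbeSource.PR1,
        poller = @Poller(fixedDelay = "2", errorChannel = "pollerErrors"))
public Supplier<Product> getProductSource(ProductBuilder dataAccess) {
    return () -> dataAccess.getNext();
}

@Autowired
private BeanFactory beanFactory; // used to look up the adapter by its endpoint id

@ServiceActivator(inputChannel = "pollerErrors")
public void stopOnKafkaFailure(ErrorMessage errorMessage) {
    // Stop the polling adapter so no more rows are read from the database
    // while Kafka is down; it can be start()-ed again later.
    this.beanFactory.getBean("productSourceEndpoint", Lifecycle.class).stop();
    // Rethrow so the DB transaction rolls back and the current record is not lost.
    throw new MessagingException("Stopping polling: downstream send failed",
            errorMessage.getPayload());
}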

spring-integration amqp reject message if not processed

I need to solve this scenario. I have two AMQP consumers, each set to fetch one message.
@Bean
public IntegrationFlow jmsPrimaryFlow() {
    return IntegrationFlows.from(
                    Amqp.inboundGateway(taskManager().getPrimaryMessageListenerContainer())
                            .errorChannel(errorChannel()))
            .channel(taskChannel())
            .get();
}

@Bean
public IntegrationFlow jmsSecondaryFlow() {
    return IntegrationFlows.from(
                    Amqp.inboundGateway(taskManager().getSecondaryMessageListenerContainer())
                            .errorChannel(errorChannel())
                            .autoStartup(false))
            .channel(taskChannel())
            .get();
}
taskChannel is a QueueChannel, but it allows only one message to be consumed at a time, so there is no parallel processing.
How can I reject a message after some timeout if another message takes too long to process, so that the rejected message is returned to the queue and processed by another node? I mean that those two consumers prefetch two messages but only one can be processed at a time, so how can I release the second prefetched message if the first one takes too long?
Your question is not clear. You could set a capacity limit (say 1) on the queue channel and set a sendTimeout on the gateway. Then, if the queue is full, attempts to add messages will fail after the timeout. However, using a queue channel in this scenario is dangerous - you can lose messages if the server fails because messages are ack'd as soon as they are deposited in the queue.
If you use a RendezvousChannel instead, the producer will block waiting for the consumer to receive the message.
But bear in mind, even this single message can be lost if the server crashes after the handoff.
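For illustration, the two channel flavours mentioned above could look like this (the capacity of 1 and the bean names are assumptions; the send timeout itself would be configured on the inbound endpoint as described):

@Bean
public MessageChannel taskChannel() {
    // Option 1: bounded buffer; once full, a send fails after the configured
    // send timeout and the AMQP message is rejected/requeued to the broker.
    return new QueueChannel(1);
}

@Bean
public MessageChannel rendezvousTaskChannel() {
    // Option 2: zero-capacity handoff; the producer blocks until a consumer
    // actually receives the message.
    return new RendezvousChannel();
}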

JMS Outbound Gateway response read throttling

I have a JMS Outbound Gateway which sends messages out via a request queue and receives messages in via a response queue. I would like to know the simplest way to throttle the consumption of messages off the response queue. I have tried setting a Poller on the Outbound Gateway but, when I set it, the response messages are not consumed at all. Can a Poller be used on Outbound Gateways for the purpose of message consumption throttling? If so, how? If not, how can I best throttle response message consumption instead?
My stack is:
o.s.i:spring-integration-java-dsl:1.0.0.RC1
o.s.i:spring-integration-jms:4.0.4.RELEASE
My IntegrationConfig.class:
@Configuration
@EnableIntegration
public class IntegrationConfig {
    ...
    @Bean
    public IntegrationFlow testFlow() {
        return IntegrationFlows
                .from("test.request.ch")
                .handle(Jms.outboundGateway(connectionFactory)
                        .receiveTimeout(45000)
                        .requestDestination("REQUEST_QUEUE")
                        .replyDestination("RESPONSE_QUEUE")
                        .correlationKey("JMSCorrelationID"), e -> {
                            e.requiresReply(true);
                            // when this poller is set, response messages are not consumed at all...
                            e.poller(Pollers.fixedRate(1000).maxMessagesPerPoll(2));
                        })
                .handle("testService", "testMethod")
                .channel("test.response.ch")
                .get();
    }
    ...
}
Cheers,
PM
Since you are going to fetch messages from the response queue, the .poller() doesn't help you.
A poller is needed only when the endpoint's input channel (in your case test.request.ch) is a PollableChannel. See the docs on the matter.
There is a .replyContainer() option on the Jms.outboundGateway for you. With that you can configure the concurrentConsumers options to achieve better throughput on the response queue.
Otherwise the JmsOutboundGateway creates a MessageConsumer for each request message.
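A hedged sketch of that replyContainer() option; the exact spec methods shown here follow later DSL versions and are an assumption for the old 1.0.0.RC1 stack, and the concurrency values are illustrative:

@Bean
public IntegrationFlow testFlow() {
    return IntegrationFlows
            .from("test.request.ch")
            .handle(Jms.outboundGateway(connectionFactory)
                    .receiveTimeout(45000)
                    .requestDestination("REQUEST_QUEUE")
                    .replyDestination("RESPONSE_QUEUE")
                    .correlationKey("JMSCorrelationID")
                    // A listener container is used for replies instead of a new
                    // MessageConsumer per request; its concurrency caps how fast
                    // replies are pulled off RESPONSE_QUEUE.
                    .replyContainer(c -> c.concurrentConsumers(1)
                            .maxConcurrentConsumers(2)))
            .handle("testService", "testMethod")
            .channel("test.response.ch")
            .get();
}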

Spring JMS Outbound Gateway receive timeout being ignored

I have a Spring Integration flow which sends a message out via a JMS Outbound Gateway configured with a receive timeout of 45 seconds. I am trying to test the receive timeout by sending a message out in a setup where the message is never consumed on the other side (so a response never comes back). However, when I run the test, the message is placed in the outbound queue but the Outbound Gateway's receive timeout never occurs (after 45 seconds). Any ideas why this might (not) be happening?
My stack is:
o.s.i:spring-integration-java-dsl:1.0.0.M3
o.s.i:spring-integration-jms:4.0.4.RELEASE
My IntegrationConfig.class:
@Configuration
@EnableIntegration
public class IntegrationConfig {
    ...
    @Bean
    public IntegrationFlow testFlow() {
        return IntegrationFlows
                .from("test.request.ch")
                .handle(Jms.outboundGateway(connectionFactory)
                        .receiveTimeout(45000)
                        .requestDestination("REQUEST_QUEUE")
                        .replyDestination("RESPONSE_QUEUE")
                        .correlationKey("JMSCorrelationID"))
                .handle("testService", "testMethod")
                .channel("test.response.ch")
                .get();
    }
    ...
}
In terms of JMS configuration, the connection factory used is a standard CachingConnectionFactory which targets an MQConnectionFactory.
Thanks in advance for any help on this.
PM
--- UPDATE ---
I have turned on debugging and I can see that when the timeout occurs the following message is logged:
AbstractReplyProducingMessageHandler - handler 'org.springframework.integration.jms.JmsOutboundGateway#0' produced no reply for request Message: [Payload byte[835]][...]
I just need to find out how to capture this event in the flow.
--- UPDATE 2 ---
The message being sent out has an ERROR_CHANNEL header set on it, to which I would expect the timeout exception to be routed, but this routing does not happen.
Is it possible that the CachingConnectionFactory is handling the exception and not passing it back to the flow?
To make it work you need to add a second lambda to the .handle() with the Jms spec:
.handle(Jms.outboundGateway(connectionFactory)
                .receiveTimeout(45000)
                .requestDestination("REQUEST_QUEUE")
                .replyDestination("RESPONSE_QUEUE")
                .correlationKey("JMSCorrelationID"),
        e -> e.requiresReply(true))
By default, AbstractReplyProducingMessageHandler doesn't require a reply even when the receiveTimeout is exhausted, and we can see that from the logs you have shown.
However, I see that we should revise all MessageHandlerSpecs, because the XML support changes requires-reply to true by default for some components.
Feel free to raise a JIRA issue on the matter and we'll address it soon, because the GA release for the Java DSL is planned in a week or two: https://spring.io/blog/2014/10/31/spring-integration-java-dsl-1-0-rc1-released
Thank you for the attention to this stuff!
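For completeness, and only as a sketch: with requiresReply(true) the gateway throws an exception when the 45-second receiveTimeout expires. Assuming that exception does get routed to the channel referenced by the message's errorChannel header (which depends on how the flow is started), it could be captured with an error flow like the hypothetical one below; the channel name "jmsTimeoutErrors" and recoverFromTimeout() are made up for illustration:

@Bean
public IntegrationFlow jmsTimeoutErrorFlow() {
    return IntegrationFlows
            .from("jmsTimeoutErrors") // the channel named in the ERROR_CHANNEL header
            .handle(MessagingException.class, (ex, headers) -> {
                // ex.getFailedMessage() is the request that produced no reply in time.
                return recoverFromTimeout(ex.getFailedMessage());
            })
            .get();
}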
