I need to solve this scenario. I have two AMQP consumers, each set to prefetch one message.
@Bean
public IntegrationFlow jmsPrimaryFlow() {
    return IntegrationFlows.from(
            Amqp.inboundGateway(taskManager().getPrimaryMessageListenerContainer())
                    .errorChannel(errorChannel()))
            .channel(taskChannel())
            .get();
}
@Bean
public IntegrationFlow jmsSecondaryFlow() {
    return IntegrationFlows.from(
            Amqp.inboundGateway(taskManager().getSecondaryMessageListenerContainer())
                    .errorChannel(errorChannel())
                    .autoStartup(false))
            .channel(taskChannel())
            .get();
}
taskChannel is a QueueChannel, but it allows only one message to be consumed at a time, so there is no parallel processing.
How can I reject one message after some timeout if another message took too long to process, so that the rejected message is returned to the queue and picked up by another node? What I mean is: the two consumers prefetch two messages, but only one can be processed at a time, so how do I release the second prefetched message if the first one takes too long to process?
Your question is not clear. You could set a capacity limit (say 1) on the queue channel and set a sendTimeout on the gateway. Then, if the queue is full, attempts to add messages will fail after the timeout. However, using a queue channel in this scenario is dangerous - you can lose messages if the server fails because messages are ack'd as soon as they are deposited in the queue.
If you use a RendezvousChannel instead, the producer will block waiting for the consumer to receive the message.
But bear in mind, even this single message can be lost if the server crashes after the handoff.
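A minimal sketch of that suggestion, reusing the beans from the question. It assumes requestTimeout() on the gateway spec maps to the send timeout of MessagingGatewaySupport, and the 5-second value is illustrative. When the bounded channel is full, the send fails with an exception that flows to the errorChannel; whether the broker then re-queues the message depends on how that error flow handles it.

@Bean
public PollableChannel taskChannel() {
    // capacity 1: only one message can be buffered; a second send blocks
    // until the timeout and then fails
    return new QueueChannel(1);
    // alternative: return new RendezvousChannel(); // zero capacity, blocking handoff
}

@Bean
public IntegrationFlow jmsPrimaryFlow() {
    return IntegrationFlows.from(
            Amqp.inboundGateway(taskManager().getPrimaryMessageListenerContainer())
                    .errorChannel(errorChannel())
                    .requestTimeout(5000))   // send timeout used when handing the message downstream
            .channel(taskChannel())
            .get();
}

The secondary flow would be configured the same way, keeping its autoStartup(false).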
I'm writing a Spring Integration application which receives messages from an input system, transforms them and sends them to an output system. The connection to the output system is not always available. The messages from the input system can come at any moment. If they come in while the output system is not available, they shouldn't be lost and should eventually be sent when the output system becomes available. So I store the messages from the input system in a QueueChannel:
@Configuration
class TcInFlowConfiguration {

    @Bean
    fun tcInFlow(
        @Qualifier(TC_MESSAGE_LISTENER) listener: MessageProducerSupport,
        @Qualifier(TC_MESSAGE_CHANNEL) messageChannel: MessageChannel
    ): IntegrationFlow {
        return IntegrationFlow
            .from(listener)
            .transform { msg: ByteArray -> RamsesTcFrame.deserialize(msg) }
            .channel(messageChannel)
            .get()
    }

    @Bean
    @Qualifier(TC_MESSAGE_CHANNEL)
    fun tcMessageChannel(): MessageChannel {
        return MessageChannels.queue().get()
    }
}
The app receives an API call to open/close the connection to the output system, so I create and remove the output integration flow programmatically via IntegrationFlowContext:
val outFlow = IntegrationFlow
    .from(TC_MESSAGE_CHANNEL)
    .handle(createMessageSender())
    .get()
integrationFlowContext.registration(outFlow).register()
When the messages are polled from the queue to be processed by the outFlow, the default Spring task executor is used (I see "scheduling-1" as the thread name in the logs). The problem is that I have multiple independent integration flows in the app with multiple queue channels, so they all get mixed up by being processed by the same task executor. What I want is to process each flow in its own dedicated thread, so the flows won't block each other. How can I achieve this?
Spring Boot v3.0.2, Spring v6.0.4
I tried setting a task scheduler for my QueueChannel:
val queueChannel = MessageChannels.queue().get()
queueChannel.setTaskScheduler(taskScheduler)
It didn't have any effect; the taskScheduler simply seems not to be used by the QueueChannel implementation.
I tried using an ExecutorChannel instead of a QueueChannel, since it supports setting a custom Executor. Unfortunately, an ExecutorChannel doesn't buffer messages in memory, so if there are no subscribers to the channel the messages are lost.
Finally, I tried defining a poller in the outFlow to poll the messages from the QueueChannel:
IntegrationFlow
    .from(TC_MESSAGE_CHANNEL)
    .handle(createMessageSender()) { e -> e.poller(Pollers.fixedDelay(10).taskExecutor(taskExecutor)) }
    .get()
This didn't work either. After the connection to the output system is closed and the outFlow is removed, the intermediate channel created by the poller remains in the Spring context. So when a new message arrives in the QueueChannel it goes to that intermediate channel, which is a subscribable channel without subscribers, so the message is lost.
That's correct. The QueueChannel is just a buffer for messages. What really matters is how you consume messages from it, and the common way is to use a PollingConsumer, as you do with that e.poller(). It is also correct to configure a taskExecutor() if you don't want your messages to be consumed on a TaskScheduler's thread. I'm not sure what you mean by an "intermediate channel", since it doesn't look like you have one declared in your outFlow: you have that .from(TC_MESSAGE_CHANNEL) and then immediately a handle() with a proper poller, so there is no extra channel in between or after, unless you do something else in your createMessageSender().
I would suggest not having a dynamic flow, but rather a singleton one for that output system. The QueueChannel can be configured with a persistent message store and the poller can be transactional. Then, if there is no connection to the target system, the transaction is rolled back and the message remains in the store: https://docs.spring.io/spring-integration/docs/current/reference/html/system-management.html#message-store.
You can also just stop() the polling consumer for that handle() when there is no connection, so no messages are polled from the queue at that moment.
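A sketch of that suggestion, in Java (the Kotlin equivalent is analogous): a singleton outbound flow polling the queue channel on its own executor, with an endpoint id so the polling consumer can be stopped and started as the connection to the output system goes away and comes back. createMessageSender() and TC_MESSAGE_CHANNEL come from the question; the id, the executor and the delay are illustrative.

@Bean
public IntegrationFlow tcOutFlow() {
    return IntegrationFlow.from(TC_MESSAGE_CHANNEL)
            .handle(createMessageSender(), e -> e
                    .id("tcOutboundEndpoint")                       // bean name of the polling consumer
                    .poller(Pollers.fixedDelay(10)
                            // dedicated thread: this flow no longer shares the TaskScheduler
                            .taskExecutor(Executors.newSingleThreadExecutor())))
            .get();
}

// on connection open/close, instead of registering/removing the flow:
// applicationContext.getBean("tcOutboundEndpoint", Lifecycle.class).start();
// applicationContext.getBean("tcOutboundEndpoint", Lifecycle.class).stop();

For the persistent variant, the queue channel would be backed by a message store (for example a JdbcChannelMessageStore) and the poller marked transactional, as described in the linked documentation.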
I have to listen to a queue using a Spring Integration flow and Spring Integration AWS (SQS). Once a message is received from the queue, it should trigger an integration flow. Below is what I am trying; everything looks fine, but after receiving a test message it does not trigger any integration flow. Please let me know where I am going wrong:
UPDATED as per comment from Artem
Adapter for SQS.
@Bean
public MessageProducerSupport sqsMessageDrivenChannelAdapter() {
    SqsMessageDrivenChannelAdapter adapter = new SqsMessageDrivenChannelAdapter(amazonSQSAsync, "Main");
    adapter.setOutputChannel(inputChannel());
    adapter.setAutoStartup(true);
    adapter.setMessageDeletionPolicy(SqsMessageDeletionPolicy.NEVER);
    adapter.setMaxNumberOfMessages(10);
    adapter.setVisibilityTimeout(5);
    return adapter;
}
The channel it writes to:
@Bean
public MessageChannel inputChannel() {
    return new DirectChannel();
}
Now the main integration flow trigger point:
@Bean
public IntegrationFlow inbound() {
    return IntegrationFlows.from("inputChannel").transform(i -> "TEST_FLOW").get();
}
Appreciate any type of help.
The sqsMessageDrivenChannelAdapter() must be declared as a @Bean.
The inbound() must be declared as a @Bean.
This one does not make sense at all: IntegrationFlows.from(MessageChannels.queue()). What is the point of starting the flow from an anonymous channel? Who is going to produce messages to that channel, and how?
Make yourself familiar with the different channels: https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations
Pay attention that a QueueChannel must be consumed via a polling endpoint.
Right, there is a default poller auto-configured by Spring Boot, but it is based on a single thread in the TaskScheduler and has a polling period of 10 millis.
I wouldn't recommend handing off SQS messages to a QueueChannel: when the consumer fails, you lose the data. It is better to process those messages in the consumer thread.
Otherwise your intention is not clear in the provided code.
Can you please share with us what error you get, or anything else?
You can also turn on DEBUG logging level for org.springframework.integration to see how your messages are processed.
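For reference, a minimal wiring that makes the triggering visible, assuming the adapter and DirectChannel beans from the update above; the .log() operator and the terminal .handle() are illustrative additions:

@Bean
public IntegrationFlow inbound() {
    return IntegrationFlows.from(inputChannel())              // the same DirectChannel the adapter writes to
            .log(LoggingHandler.Level.INFO, "sqs.inbound")    // shows every message entering the flow
            .transform(i -> "TEST_FLOW")
            .handle(m -> System.out.println("Processed: " + m.getPayload()))  // terminal consumer
            .get();
}

Note that a transform() as the last step of a flow has nowhere to send its result unless the message carries a replyChannel header, so a terminal handle() (or an explicit output channel) is generally needed.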
I have a queue receiver, which reads messages from the queue and processes them (does some processing and inserts data into an Azure table or retrieves data).
What I observed is that any exception thrown by my processing method (SendResponseAsync()) results in a retry, i.e. redelivery of the message, up to the default 10 times.
Can this behavior be customized, i.e. retry only for certain exceptions and ignore others? For example, if there is a network issue it makes sense to retry, but if it is a BadArgumentException (poison message) I may not want to retry.
Since retry is taken care of by the Service Bus client library, can we customize this behavior?
This is the code at the receiver end
public MessagingServer(QueueConfiguration config)
{
    this.requestQueueClient = QueueClient.CreateFromConnectionString(config.ConnectionString, config.QueueName);
    this.requestQueueClient.OnMessageAsync(this.DispatchReplyAsync);
}

private async Task DispatchReplyAsync(BrokeredMessage message)
{
    await this.SendResponseAsync(message);
}
I am using
MessageProducerSupport messageProducer =
        Jms.messageDriverChannelAdapter(jmsConnectionFactory, TransactedMessageListenerContainer.class)
                .destination(queue)
                .get();
to consume messages from an ActiveMQ queue.
This is the first part of my IntegrationFlow, and then multiple stages occur (transform, route, handle, ...) within a transaction.
It is there to handle messages from upstream.
In order to get the ACK from the Spring Integration pipeline, I used Jms.inboundGateway(jmsConnectionFactory, TransactedMessageListenerContainer.class), which doesn't break the existing flow, and everything works.
When I set the replyTo header of the upstream message, I would assume Spring Integration would send the object produced by the last successful stage of the IntegrationFlow back to the replyTo queue.
Is my approach correct?
Is it possible to achieve such a use case?
Yes, that's correct and should work by its (Messaging Gateway) premise.
The Jms.inboundGateway() is based on the ChannelPublishingJmsMessageListener with the expectReply = true and there is a code:
private Destination getReplyDestination(javax.jms.Message request, Session session) throws JMSException {
    Destination replyTo = request.getJMSReplyTo();
    ....
    return replyTo;
}
to obtain a replyTo from the request.
All of that works well if the last MessageHandler in your flow is an AbstractReplyProducingMessageHandler and really returns something to be produced to the replyChannel from the headers.
If you aren't sure about your case, share the end of your flow and the place where you would like to send a reply.
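As a rough sketch of such a flow (the destination name and the trivial stages are placeholders; TransactedMessageListenerContainer is the custom container class from the question), the object returned by the last, reply-producing handler is what ends up on the JMSReplyTo destination:

@Bean
public IntegrationFlow requestReplyFlow(ConnectionFactory jmsConnectionFactory) {
    return IntegrationFlows.from(
            Jms.inboundGateway(jmsConnectionFactory, TransactedMessageListenerContainer.class)
                    .destination("requests.queue"))                // placeholder request destination
            .transform(String.class, String::toUpperCase)          // stand-in for the real transform/route stages
            // the return value of this last, reply-producing handler is sent back
            // to the JMSReplyTo destination taken from the request message
            .handle(String.class, (payload, headers) -> payload + " : processed")
            .get();
}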
I have a JMS outbound gateway which sends messages out via a request queue and receives messages in via a response queue. I would like to know the simplest way to throttle the consumption of messages off the response queue. I have tried setting a Poller on the outbound gateway but, when I set it, the response messages are not consumed at all. Can a Poller be used on outbound gateways for the purpose of throttling message consumption? If so, how? If not, how can I best throttle response message consumption instead?
My stack is:
o.s.i:spring-integration-java-dsl:1.0.0.RC1
o.s.i:spring-integration-jms:4.0.4.RELEASE
My IntegrationConfig class:
@Configuration
@EnableIntegration
public class IntegrationConfig {
    ...
    @Bean
    public IntegrationFlow testFlow() {
        return IntegrationFlows
                .from("test.request.ch")
                .handle(Jms.outboundGateway(connectionFactory)
                        .receiveTimeout(45000)
                        .requestDestination("REQUEST_QUEUE")
                        .replyDestination("RESPONSE_QUEUE")
                        .correlationKey("JMSCorrelationID"), e -> {
                            e.requiresReply(true);
                            e.poller(Pollers.fixedRate(1000).maxMessagesPerPoll(2)); // when this poller is set, response messages are not consumed at all...
                        })
                .handle("testService", "testMethod")
                .channel("test.response.ch").get();
    }
    ...
}
Cheers,
PM
Since you are going to fetch messages from the response queue, the .poller() doesn't help you.
We need a poller when the endpoint's input channel (in your case test.request.ch) is a PollableChannel. See the docs on the matter.
There is a .replyContainer() option on the Jms.outboundGateway for you. With that you can configure the concurrentConsumers option to achieve better throughput on the response queue.
Otherwise the JmsOutboundGateway creates a MessageConsumer for each request message.
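A sketch of that option applied to the testFlow above, assuming the replyContainer() customizer in this DSL version exposes the listener-container concurrency settings (the numbers are illustrative):

@Bean
public IntegrationFlow testFlow() {
    return IntegrationFlows
            .from("test.request.ch")
            .handle(Jms.outboundGateway(connectionFactory)
                    .receiveTimeout(45000)
                    .requestDestination("REQUEST_QUEUE")
                    .replyDestination("RESPONSE_QUEUE")
                    .correlationKey("JMSCorrelationID")
                    // one shared listener container for replies instead of a consumer per request;
                    // the consumer count effectively caps how fast replies are pulled off the queue
                    .replyContainer(spec -> spec
                            .concurrentConsumers(1)
                            .maxConcurrentConsumers(2)),
                    e -> e.requiresReply(true))   // no poller: test.request.ch is not a PollableChannel
            .handle("testService", "testMethod")
            .channel("test.response.ch")
            .get();
}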