spring-integration: dispatch queued messages to selective consumer

I have a spring integration flow which produces messages that should be kept around waiting for an appropriate consumer to come along and consume them.
@Bean
public IntegrationFlow messagesPerCustomerFlow() {
    return IntegrationFlows
            .from(WebFlux.inboundChannelAdapter("/messages/{customer}")
                    .requestMapping(r -> r
                            .methods(HttpMethod.POST)
                    )
                    .requestPayloadType(JsonNode.class)
                    .headerExpression("customer", "#pathVariables.customer")
            )
            .channel(messagesPerCustomerQueue())
            .get();
}

@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerSpec poller() {
    return Pollers.fixedRate(100);
}

@Bean
public QueueChannel messagesPerCustomerQueue() {
    return MessageChannels.queue()
            .get();
}
The messages in the queue should be delivered as server-sent events via http as shown below.
The PublisherSubscription is just a holder for the Publisher and the IntegrationFlowRegistration; the latter is used to destroy the dynamically created flow when it is no longer needed. (Note that the incoming message for the GET has no content, which is not handled properly at the moment by the WebFlux integration, hence a small workaround is necessary to get access to the path variable shoved into the customer header):
@Bean
public IntegrationFlow eventMessagesPerCustomer() {
    return IntegrationFlows
            .from(WebFlux.inboundGateway("/events/{customer}")
                    .requestMapping(m -> m.produces(TEXT_EVENT_STREAM_VALUE))
                    .headerExpression("customer", "#pathVariables.customer")
                    .payloadExpression("''") // needed to make handle((p, h) -> ...) work
            )
            .log()
            .handle((p, h) -> {
                String customer = h.get("customer").toString();
                PublisherSubscription<JsonNode> publisherSubscription =
                        subscribeToMessagesPerCustomer(customer);
                return Flux.from(publisherSubscription.getPublisher())
                        .map(Message::getPayload)
                        .doFinally(signalType ->
                                publisherSubscription.unsubscribe());
            })
            .get();
}
The above request for server-sent events dynamically registers a flow which subscribes to the queue channel on demand with a selective consumer, realized by a filter with throwExceptionOnRejection(true). According to the documentation for the message handler chain, that should ensure that the message is offered to all consumers until one accepts it.
public PublisherSubscription<JsonNode> subscribeToMessagesPerCustomer(String customer) {
    IntegrationFlowBuilder flow = IntegrationFlows.from(messagesPerCustomerQueue())
            .filter("headers.customer=='" + customer + "'",
                    filterEndpointSpec -> filterEndpointSpec.throwExceptionOnRejection(true));
    Publisher<Message<JsonNode>> messagePublisher = flow.toReactivePublisher();
    IntegrationFlowRegistration registration = integrationFlowContext.registration(flow.get())
            .register();
    return new PublisherSubscription<>(messagePublisher, registration);
}
This construct works in principle, but with the following issues:
Messages sent to the queue while there are no subscribers at all lead to a MessageDeliveryException: Dispatcher has no subscribers for channel 'application.messagesPerCustomerQueue'
Messages sent to the queue while no matching subscriber is present yet lead to an AggregateMessageDeliveryException: All attempts to deliver Message to MessageHandlers failed.
What I want is that the message remains in the queue and is repeatedly offered to all subscribers until it is either consumed or expires (a proper selective consumer). How can I do that?

note that the incoming message for the GET has no content, which is not handled properly ATM by the Webflux integration
I don't understand this concern.
The WebFluxInboundEndpoint works with this algorithm:
if (isReadable(request)) {
    ...
}
else {
    return (Mono<T>) Mono.just(exchange.getRequest().getQueryParams());
}
A GET request really goes to the else branch, and the payload of the message to send is a MultiValueMap of the query parameters. Also, we recently fixed the problem for POST together with you, which has already been released in version 5.0.5: https://jira.spring.io/browse/INT-4462
Dispatcher has no subscribers
This can't happen on a QueueChannel in principle. There is no dispatcher there at all; it is just a queue, and the sender offers a message to be stored. You are missing something else to share with us. But let's call things by their proper names: the messagesPerCustomerQueue is not a QueueChannel in your application.
UPDATE
Regarding:
What I want is that the message remains in the queue and is repeatedly offered to all subscribers until it is either consumed or expires (a proper selective consumer)
The only option we see is a PollableJmsChannel based on embedded ActiveMQ, to honor a TTL for messages. As a consumer of this queue you should have a PublishSubscribeChannel with setMinSubscribers(1), so that the MessagingTemplate throws a MessageDeliveryException when there are no subscribers yet. This way the JMS transaction is rolled back and the message returns to the queue for the next polling cycle.
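A minimal sketch of that arrangement, assuming an ActiveMQ connectionFactory bean and a hypothetical messagesPerCustomer destination name (the transactional poller on the bridge is what lets a failed hand-off roll back to the broker):
@Bean
public MessageChannel messagesPerCustomerQueue(ConnectionFactory connectionFactory) {
    // JMS-backed channel: messages stay on the broker and can carry a TTL
    return Jms.pollableChannel(connectionFactory)
            .destination("messagesPerCustomer")
            .get();
}

@Bean
public PublishSubscribeChannel subscribersChannel() {
    PublishSubscribeChannel channel = new PublishSubscribeChannel();
    // fail (and roll back the JMS transaction) while no subscriber is registered yet
    channel.setMinSubscribers(1);
    return channel;
}

@Bean
public JmsTransactionManager jmsTransactionManager(ConnectionFactory connectionFactory) {
    return new JmsTransactionManager(connectionFactory);
}

@Bean
public IntegrationFlow bridgeToSubscribersFlow(ConnectionFactory connectionFactory) {
    return IntegrationFlows.from(messagesPerCustomerQueue(connectionFactory))
            .bridge(e -> e.poller(Pollers.fixedRate(100)
                    .transactional(jmsTransactionManager(connectionFactory))))
            .channel(subscribersChannel())
            .get();
}
The dynamically registered selective-consumer flows would then subscribe to subscribersChannel() instead of the queue channel directly.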
The problem with an in-memory QueueChannel is that there is no transactional redelivery: once a message has been polled from that queue, it is lost.
Another, similarly transactional, option is a JdbcChannelMessageStore for the QueueChannel, although this way we don't have TTL functionality...
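And a minimal sketch of that JDBC-backed variant, assuming a dataSource bean and the standard INT_CHANNEL_MESSAGE schema (the PostgreSQL query provider is just an example; pick the one matching your database):
@Bean
public JdbcChannelMessageStore jdbcChannelMessageStore(DataSource dataSource) {
    JdbcChannelMessageStore messageStore = new JdbcChannelMessageStore(dataSource);
    // choose the query provider matching your database
    messageStore.setChannelMessageStoreQueryProvider(new PostgresChannelMessageStoreQueryProvider());
    return messageStore;
}

@Bean
public QueueChannel messagesPerCustomerQueue(JdbcChannelMessageStore jdbcChannelMessageStore) {
    // messages are persisted in the INT_CHANNEL_MESSAGE table until polled
    return MessageChannels.queue(jdbcChannelMessageStore, "messagesPerCustomer")
            .get();
}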

Related

Spring Integration - Control retry logic in http outboundAdapter

I configured a route in Spring Integration 5.4.4 that reads from an AMQP queue and writes to an HTTP outbound adapter.
I'm not able to control retries when, for example, I programmatically declare a wrong HTTP hostname for the HTTP outbound adapter (cause: java.net.UnknownHostException).
This seems to generate infinite retries (the message is not acked on the RabbitMQ container), even though I configured the RetryTemplate logic in the amqpInboundAdapter.
My goal is: requeue the message N times when the HTTP outbound adapter fails; after that, discard the message and don't requeue it again.
Code here:
Spring Integration route
public IntegrationFlow route(AmqpInboundChannelAdapterSMLCSpec amqpInboundChannelAdapterSMLCSpec) {
    return IntegrationFlows
            .from(amqpInboundChannelAdapterSMLCSpec)
            .filter(validJsonFilter())
            .enrichHeaders(h -> h.header("X-Insert-Key", outboundHttpConfig.outboundHttpToken))
            .enrichHeaders(h -> h.header("Content-Encoding", "gzip"))
            .enrichHeaders(h -> h.header("Content-Type", "application/json"))
            .handle(Http.outboundChannelAdapter(outboundHttpConfig.outboundHttpUrl)
                    .mappedRequestHeaders("X-Insert-Key")
                    .httpMethod(HttpMethod.POST)
            )
            .get();
}
AmqpInboundChannelAdapterSMLCSpec
public AmqpInboundChannelAdapterSMLCSpec gatewayEventInboundAmqpAdapter(ConnectionFactory connectionFactory) {
    RetryTemplate retryTemplate = new RetryTemplate();
    exceptionClassifierRetryPolicy.setPolicyMap(exceptionPolicy);
    retryTemplate.setBackOffPolicy(new ExponentialBackOffPolicy());
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(1));
    retryTemplate.setThrowLastExceptionOnExhausted(true);
    return Amqp
            .inboundAdapter(connectionFactory, rabbitConfig.inboundQueue())
            .configureContainer(c -> c
                    .concurrentConsumers(3)
                    .maxConcurrentConsumers(5)
                    .receiveTimeout(2000)
                    .alwaysRequeueWithTxManagerRollback(false)
            )
            .retryTemplate(retryTemplate);
}
Any ideas?
Thanks a lot
requeue the message for N times in case http outbound adapter is in error, discard the message otherwise and don't requeue again.
When you use retry on the AMQP MessageListenerContainer, there is no requeue: the retry is done in memory, without round trips to the broker.
Anyway, what you have done so far is OK. The only thing you are missing is a RejectAndDontRequeueRecoverer configured on that Amqp.inboundAdapter() to decide what to do with an AMQP message when all the retry attempts are exhausted.
Unfortunately, the direct MessageRecoverer configuration for the channel adapter was only added in version 5.5: https://docs.spring.io/spring-integration/docs/5.5.0-M3/reference/html/whats-new.html#x5.5-amqp.
For the current version it has to be done via the recoveryCallback(RecoveryCallback<?> recoveryCallback) option and the respective delegation:
.recoveryCallback(context -> {
    org.springframework.amqp.core.Message messageToReject =
            (org.springframework.amqp.core.Message) RetrySynchronizationManager.getContext()
                    .getAttribute(AmqpMessageHeaderErrorMessageStrategy.AMQP_RAW_MESSAGE);
    throw new ListenerExecutionFailedException("Retry Policy Exhausted",
            new AmqpRejectAndDontRequeueException(context.getLastThrowable()), messageToReject);
})
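Put together with the adapter spec from your question, a sketch could look like this (only the recoveryCallback part is new; the rest is your configuration):
return Amqp
        .inboundAdapter(connectionFactory, rabbitConfig.inboundQueue())
        .configureContainer(c -> c
                .concurrentConsumers(3)
                .maxConcurrentConsumers(5)
                .receiveTimeout(2000))
        .retryTemplate(retryTemplate)
        // reject and do not requeue once the retry attempts are exhausted
        .recoveryCallback(context -> {
            org.springframework.amqp.core.Message messageToReject =
                    (org.springframework.amqp.core.Message) RetrySynchronizationManager.getContext()
                            .getAttribute(AmqpMessageHeaderErrorMessageStrategy.AMQP_RAW_MESSAGE);
            throw new ListenerExecutionFailedException("Retry Policy Exhausted",
                    new AmqpRejectAndDontRequeueException(context.getLastThrowable()), messageToReject);
        });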

What is it for redisQueueInboundGateway.setReplyChannelName

I just want to ask what the redisQueueInboundGateway.setReplyChannelName is for.
I got a log B and then a log A.
1. My question is: in what situation will the log C be printed, given that I set channel C as the reply channel on the RedisQueueInboundGateway?
2. The doc at "https://docs.spring.io/spring-integration/reference/html/redis.html#redis-queue-inbound-gateway" seems incorrect about class names and class explanations, for example:
2.1 The 'RedisOutboundChannelAdapter' is actually named 'RedisPublishingMessageHandler'.
2.2 The 'RedisQueueOutboundChannelAdapter' is actually named 'RedisQueueMessageDrivenEndpoint'.
2.3 The explanation of the Redis Queue Outbound Gateway is an exact copy of the Redis Queue Inbound Gateway.
#GetMapping("test")
public void test() {
this.teller.test("testing 1");
#Gateway(requestChannel = "inputA")
void test(String transaction);
#Bean("A")
PublishSubscribeChannel getA() {
return new PublishSubscribeChannel();
}
#Bean("B")
PublishSubscribeChannel getB() {
return new PublishSubscribeChannel();
}
#Bean("C")
PublishSubscribeChannel getC() {
return new PublishSubscribeChannel();
}
#ServiceActivator(inputChannel = "A")
void aTesting(Message message) {
System.out.println("A");
System.out.println(message);
}
#ServiceActivator(inputChannel = "B")
String bTesting(Message message) {
System.out.println("B");
System.out.println(message);
return message.getPayload() + "Basdfasdfasdfadsfasdf";
}
#ServiceActivator(inputChannel = "C")
void cTesting(Message message) {
System.out.println("C");
System.out.println(message);
}
#ServiceActivator(inputChannel = "inputA")
#Bean
RedisQueueOutboundGateway getRedisQueueOutboundGateway(RedisConnectionFactory connectionFactory) {
val redisQueueOutboundGateway = new RedisQueueOutboundGateway(Teller.CHANNEL_CREATE_INVOICE, connectionFactory);
redisQueueOutboundGateway.setReceiveTimeout(5);
redisQueueOutboundGateway.setOutputChannelName("A");
redisQueueOutboundGateway.setSerializer(new GenericJackson2JsonRedisSerializer(new ObjectMapper()));
return redisQueueOutboundGateway;
}
#Bean
RedisQueueInboundGateway getRedisQueueInboundGateway(RedisConnectionFactory connectionFactory) {
val redisQueueInboundGateway = new RedisQueueInboundGateway(Teller.CHANNEL_CREATE_INVOICE, connectionFactory);
redisQueueInboundGateway.setReceiveTimeout(5);
redisQueueInboundGateway.setRequestChannelName("B");
redisQueueInboundGateway.setReplyChannelName("C");
redisQueueInboundGateway.setSerializer(new GenericJackson2JsonRedisSerializer(new ObjectMapper()));
return redisQueueInboundGateway;
}
Your concern is not clear.
2.1
There is a component (the pattern name) and there is a class behind it that covers the logic.
Sometimes they are not the same.
So, the Redis Outbound Channel Adapter is covered by the RedisPublishingMessageHandler, because there is a ConsumerEndpointFactoryBean to consume messages from the input channel and a RedisPublishingMessageHandler to handle them. In other words, the framework creates two beans to make such a Redis interaction work. In fact, all the outbound channel adapters (and gateways) are handled the same way: an endpoint plus a handler. Together they are called an adapter or a gateway, depending on the type of interaction.
2.2
I don't see any such misleading naming in the docs.
2.3
That's not true.
See difference:
Spring Integration introduced the Redis queue outbound gateway to perform request and reply scenarios. It pushes a conversation UUID to the provided queue,
Spring Integration 4.1 introduced the Redis queue inbound gateway to perform request and reply scenarios. It pops a conversation UUID from the provided queue
All the inbound gateways are supplied with an optional replyChannel to track the replies. It is not where this type of gateway sends something; it is quite the opposite: the place from which this inbound gateway takes a reply message to send back into Redis. The inbound gateway is initiated externally, in our case by a request message in the configured Redis list. When your integration flow does its work, it sends a reply message to this gateway. In most cases this is done automatically, using the replyChannel header from the message. But if you would like to track that reply, you add a PublishSubscribeChannel as that replyChannel option on the inbound gateway, and both your service activator and the gateway get the same message.
The behavior behind that replyChannel option is explained in the Messaging Gateway chapter: https://docs.spring.io/spring-integration/reference/html/messaging-endpoints.html#gateway-default-reply-channel
You are probably right about this section in the docs https://docs.spring.io/spring-integration/reference/html/redis.html#redis-queue-inbound-gateway: those requestChannel and replyChannel descriptions are really a copy of the text from the Outbound Gateway section. That has to be fixed. Feel free to raise a GH issue so we don't forget to address it.
The C logs are going to be printed when a message is sent into that C channel, but again: if you want reply correlation to work for the Redis Inbound Gateway, that channel has to be a PublishSubscribeChannel. Otherwise just omit it, and your String bTesting(Message message) method will send its result to the channel from the replyChannel header.

Poll on HttpOutboundGateway

@Bean
public HttpMessageHandlerSpec documentumPolledInbound() {
    return Http
            .outboundGateway("/fintegration/getReadyForReimInvoices/Cz", restTemplate)
            .expectedResponseType(String.class)
            .errorHandler(new DefaultResponseErrorHandler());
}
How do I poll the above to get the payload for further processing?
The HTTP client endpoint is not pollable, but event-driven. So, just as you would usually call some REST service from curl, for example, the same thing happens here. You have some .handle(), I guess, for this documentumPolledInbound(), and there is some message channel to send a message to in order to trigger this endpoint to call your REST service.
It is not clear how you are going to handle the response, but there is really a way to trigger an event periodically to call this endpoint.
For that purpose we can only mock an inbound channel adapter, which can be configured with some trigger policy:
@Bean
public IntegrationFlow httpPollingFlow() {
    return IntegrationFlows
            .from(() -> "", e -> e.poller(p -> p.fixedDelay(10, TimeUnit.SECONDS)))
            .handle(documentumPolledInbound())
            .get();
}
This way we send a message with an empty string as the payload to the channel for the handle(), and we do that every 10 seconds.
Since your HttpMessageHandlerSpec doesn't care about the inbound payload, it really doesn't matter what we return from the MessageSource in the polling channel adapter.
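If the response should be processed further, one option (just a sketch; the downstream handler here is hypothetical) is to continue the flow after the gateway:
@Bean
public IntegrationFlow httpPollingFlow() {
    return IntegrationFlows
            .from(() -> "", e -> e.poller(p -> p.fixedDelay(10, TimeUnit.SECONDS)))
            .handle(documentumPolledInbound())
            // the gateway reply (a String body here) continues down the flow
            .handle((payload, headers) -> {
                System.out.println("Received invoices payload: " + payload);
                return null; // end of flow
            })
            .get();
}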

Spring Integration JMS assured message delivery using DSL

I am trying to create a flow (1) in which a message is received from a TCP adapter (which can be a client or a server) and sent to an ActiveMQ broker.
Another flow (2) picks the message from the required queue and sends it to the destination:
TCP(client/server) ==(1)==> ActiveMQ Broker ==(2)==> HTTP Outbound adapter
I want to ensure that, in case my message is not delivered to the required destination, it re-attempts to send the message.
My current flow (1) to the broker is:
IntegrationFlow flow = IntegrationFlows
        .from(Tcp.inboundAdapter(Tcp.netServer(Integer.parseInt("1234"))
                        .serializer(customSerializer)
                        .deserializer(customSerializer)
                        .id("server")
                        .soTimeout(5000))
                .id(hostConnection.getConnectionNumber() + "adapter"))
        .channel(directChannel())
        .wireTap("tcpInboundMessageLogChannel")
        .channel(directChannel())
        .handle(Jms.outboundAdapter(activeMQConnectionFactory)
                .destination("jmsInbound"))
        .get();

this.flowContext.registration(flow).id("outflow").register();
and my flow (2) from the broker to the HTTP outbound adapter:
flow = IntegrationFlows
        .from(Jms.messageDrivenChannelAdapter(activeMQConnectionFactory)
                .destination("jmsInbound"))
        .channel(directChannel())
        .handle(Http.outboundChannelAdapter(hostConnection.getUrl())
                .httpMethod(HttpMethod.POST)
                .expectedResponseType(String.class)
                .mappedRequestHeaders("abc"))
        .get();

this.flowContext.registration(flow).id("inflow").register();
Issue:
In case of any exception during delivery, for example when my destination URL is not working, it re-attempts to send the message.
After an unsuccessful attempt it retries 7 times, i.e. the max attempts are 7.
If the attempts are still not successful, the message is sent to ActiveMQ.DLQ (dead letter queue) and is not re-attempted again, as it has been dequeued from the actual queue and sent to ActiveMQ.DLQ.
So, I want a scenario where no message is lost and messages are processed in order.
First: I believe that you can configure jmsInbound for infinite retries:
/**
 * Configuration options for a messageConsumer used to control how messages are re-delivered when they
 * are rolled back.
 * May be used server side on a per destination basis via the Broker RedeliveryPlugin
 *
 * @org.apache.xbean.XBean element="redeliveryPolicy"
 *
 */
public class RedeliveryPolicy extends DestinationMapEntry implements Cloneable, Serializable {
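For example, a minimal sketch of applying such a policy on the client side, assuming activeMQConnectionFactory is an ActiveMQConnectionFactory (the broker URL and backoff values are placeholders):
@Bean
public ActiveMQConnectionFactory activeMQConnectionFactory() {
    ActiveMQConnectionFactory connectionFactory =
            new ActiveMQConnectionFactory("tcp://localhost:61616");
    RedeliveryPolicy redeliveryPolicy = connectionFactory.getRedeliveryPolicy();
    // -1 means redeliver forever; tune the backoff so the broker is not hammered
    redeliveryPolicy.setMaximumRedeliveries(RedeliveryPolicy.NO_MAXIMUM_REDELIVERIES);
    redeliveryPolicy.setInitialRedeliveryDelay(1000);
    redeliveryPolicy.setUseExponentialBackOff(true);
    redeliveryPolicy.setMaximumRedeliveryDelay(30000);
    return connectionFactory;
}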
On the other hand, you can configure the .handle(Http.outboundChannelAdapter(...)) with a RequestHandlerRetryAdvice for similar retry behavior, but inside the application, without round trips to JMS and back: https://docs.spring.io/spring-integration/docs/5.0.6.RELEASE/reference/html/messaging-endpoints-chapter.html#retry-advice
Here is a sample of how it can be configured from the Java DSL perspective:
@Bean
public IntegrationFlow errorRecovererFlow() {
    return IntegrationFlows.from(Function.class, "errorRecovererFunction")
            .handle((GenericHandler<?>) (p, h) -> {
                throw new RuntimeException("intentional");
            }, e -> e.advice(retryAdvice()))
            .get();
}

@Bean
public RequestHandlerRetryAdvice retryAdvice() {
    RequestHandlerRetryAdvice requestHandlerRetryAdvice = new RequestHandlerRetryAdvice();
    requestHandlerRetryAdvice.setRecoveryCallback(new ErrorMessageSendingRecoverer(recoveryChannel()));
    return requestHandlerRetryAdvice;
}

@Bean
public MessageChannel recoveryChannel() {
    return new DirectChannel();
}
The RequestHandlerRetryAdvice can be configured with a RetryTemplate to apply something like an AlwaysRetryPolicy. See the Spring Retry project for more info: https://github.com/spring-projects/spring-retry
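For example, the retryAdvice() bean above could be configured like this (an always-retry policy with exponential backoff is just one possible combination):
@Bean
public RequestHandlerRetryAdvice retryAdvice() {
    RetryTemplate retryTemplate = new RetryTemplate();
    // keep retrying the HTTP call until it eventually succeeds
    retryTemplate.setRetryPolicy(new AlwaysRetryPolicy());
    ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setMaxInterval(30000);
    retryTemplate.setBackOffPolicy(backOffPolicy);

    RequestHandlerRetryAdvice requestHandlerRetryAdvice = new RequestHandlerRetryAdvice();
    requestHandlerRetryAdvice.setRetryTemplate(retryTemplate);
    return requestHandlerRetryAdvice;
}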

Multithreaded JMS transaction-enabled consumer hangs up

My requirements are stated below:
I have to develop a wrapper service on top of a queue, so I was looking at message queues like ActiveMQ, Apollo, and Kafka, but decided to proceed with ActiveMQ to match our use cases. Now the requirements are as follows:
1) A RESTful API through which different publishers will publish to a queue; the queue is selected based on the clientId.
2) Consumers will consume messages through a RESTful API in batches; say a consumer asks for something like "give me 10 messages from the queue".
Now the service should provide 10 messages if there are 10 messages, or fewer (possibly zero) accordingly. After receiving the messages, the client processes them and sends back an acknowledgement through a different RESTful URI. Upon receiving that acknowledgement, the MQService should commit or roll back the messages on the queue.
In order to do this in the MQService layer, I have used a cache, where I keep the JMS connection and session objects until the acknowledgement is received or the TTL expires.
In order to retrieve messages in batches and send them back to the client, I have created a multi-threaded consumer, so that for a batch request of 5 messages the service layer creates 5 threads, each having a different connection and session object (as stated in ActiveMQ multiple consumers on a queue: http://activemq.apache.org/multiple-consumers-on-a-queue.html).
Basic use case:
MQ (BROKER) [A] --> Wrapper (MQService) [B] --> Client [C]
Note: [B] is a RESTful service having a JMS consumer implemented in it. It keeps the connection and session objects in a cache.
[C] requests [B] to give 3 messages.
[B] must fetch 3 messages if available in the queue, wrap them in a batchmsgFormat and send them to [C].
[C] processes the messages and sends a success/failed acknowledgement to [B] through the /send-ack URI.
Upon receiving the ack from [C], [B] commits the JMS session and closes the session and connection objects. It also evicts them from the cache.
The above workflow is working fine with single-message fetching.
But the queue hangs on JMS MessageConsumer.receive() when I try to fetch messages with multiple consumers using multithreading. ...
Here is the JMS consumer code in the MQService layer:
----------------------------------------------
public BatchMessageFormat getConsumeMsg(final String clientId, final Integer batchSize) throws Exception {
    BatchMessageFormat batchmsgFormat = new BatchMessageFormat();
    List<MessageFormat> msgdetails = new ArrayList<MessageFormat>();
    List<Future<MessageFormat>> futuremsgdetails = new ArrayList<Future<MessageFormat>>();
    if (batchSize != null) {
        Integer msgCount = getMsgCount(clientId, batchSize);
        // spawn one consumer thread per expected message
        for (int batchconnect = 0; batchconnect < msgCount; batchconnect++) {
            FutureTask<MessageFormat> task = new FutureTask<MessageFormat>(new Callable<MessageFormat>() {
                @Override
                public MessageFormat call() throws Exception {
                    MessageFormat msg = consumeBatchMsg(clientId, batchSize);
                    return msg;
                }
            });
            futuremsgdetails.add(task);
            Thread t = new Thread(task);
            t.start();
        }
        // wait for all consumer threads to finish
        for (Future<MessageFormat> msg : futuremsgdetails) {
            msgdetails.add(msg.get());
        }
        batchmsgFormat.setMsgDetails(msgdetails);
    }
    return batchmsgFormat;
}
Message fetching:
private MessageFormat consumeBatchMsg(String clientId, Integer batchSize) throws JMSException, IOException {
    MessageFormat msgFormat = new MessageFormat();
    Connection qC = ConnectionUtil.getConnection();
    qC.start();
    Session session = qC.createSession(true, -1);
    Destination destination = createQueue(clientId, session);
    MessageConsumer consumer = session.createConsumer(destination);
    Message message = consumer.receive(2000);
    if (message != null && message instanceof TextMessage) {
        TextMessage textMessage = (TextMessage) message;
        msgFormat.setMessageID(textMessage.getJMSMessageID());
        msgFormat.setMessage(textMessage.getText());
        // keep the connection/session open until the client acknowledges
        CacheObject cacheValue = new CacheObject();
        cacheValue.setConnection(qC);
        cacheValue.setSession(session);
        cacheValue.setJmsQueue(destination);
        MQCache.instance().add(textMessage.getJMSMessageID(), cacheValue);
    }
    consumer.close();
    return msgFormat;
}
Acknowledgement and session closing:
public String getACK(String clientId, String msgId, String ack) throws JMSException {
    if (MQCache.instance().get(msgId) != null) {
        Connection connection = MQCache.instance().get(msgId).getConnection();
        Session session = MQCache.instance().get(msgId).getSession();
        Destination destination = MQCache.instance().get(msgId).getJmsQueue();
        MessageConsumer consumer = session.createConsumer(destination);
        if (ack.equalsIgnoreCase("SUCCESS")) {
            session.commit();
        } else {
            session.rollback();
        }
        session.close();
        connection.close();
        MQCache.instance().evictCache(msgId);
        return "Accepted";
    } else {
        return "Rejected";
    }
}
Has anyone worked on a similar scenario, or can you please throw some light on this? Is there any other way to implement this batch message fetching as well as the client failure handling?
Try setting the prefetch limit to 0, as below:
ConnectionFactory connectionFactory
= new ActiveMQConnectionFactory("tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=0");
I'll give a few pointers to help code this logic better.
I'm assuming you are using pure JMS 1.1 as much as possible. Ensure that you have one place where you get the connection from the pool or create a connection. You need not do that inside a thread. You can do that outside. Sessions must be created inside a thread and shouldn't be shared. This will impact the logic in the function consumeBatchMsg().
Secondly, it's simpler to use one thread to consume all the messages of the given batchSize. I see that you are using a transacted session, so you can do one commit after getting all the messages of the batchSize, as sketched below.
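A minimal sketch of that single-threaded variant, reusing the ConnectionUtil, createQueue and MessageFormat types from your code; one transacted session receives the whole batch and is committed (or rolled back) once, after the acknowledgement:
private List<MessageFormat> consumeBatch(String clientId, int batchSize) throws JMSException {
    List<MessageFormat> batch = new ArrayList<>();
    Connection connection = ConnectionUtil.getConnection();
    connection.start();
    // one transacted session for the whole batch; commit/rollback it on acknowledgement
    Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
    Destination destination = createQueue(clientId, session);
    MessageConsumer consumer = session.createConsumer(destination);
    for (int i = 0; i < batchSize; i++) {
        Message message = consumer.receive(2000);
        if (!(message instanceof TextMessage)) {
            break; // queue drained (or unexpected message type)
        }
        TextMessage textMessage = (TextMessage) message;
        MessageFormat msgFormat = new MessageFormat();
        msgFormat.setMessageID(textMessage.getJMSMessageID());
        msgFormat.setMessage(textMessage.getText());
        batch.add(msgFormat);
    }
    consumer.close();
    // cache connection + session under a batch id until the client acks,
    // then session.commit() or session.rollback() and close both
    return batch;
}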
If you really want to take the complicated route of having multiple consumers on a queue (probably slightly better performance), you can use a CountDownLatch or CyclicBarrier from Java, set to batchSize, as the trigger. Once all the threads have received their messages, they can commit and close the sessions in their respective threads. Never let a session instance go out of the context of the thread that created it.
