How to consume all messages from channel with Spring Integration Java DSL? - spring-integration

I'm trying to define a flow for a single-threaded handler. Messages arrive in large numbers and the handler is slow, so it's inefficient to process them one by one. I want the handler to consume all messages available in the channel at once (or wait until a few messages accumulate), using the Java DSL. If there are no messages in the channel and the handler has processed the previous group, it should wait for a certain period of time (timeout "a") for a few messages to accumulate in the channel. But if messages keep coming, the handler MUST consume them within a certain period of time after its previous execution (timeout "b"). Therefore the time intervals between handler executions should be no more than "b" (unless no messages arrive in the channel at all).
There is no reason to run multiple instances of this sort of handler: it generates data for interfaces. The code below shows my basic configuration. My problem is that I'm not able to come up with the debouncing (timeout "b") and with releasing the group only once the handler execution has completed.
@Configuration
public class SomeConfig {
private AtomicBoolean someHandlerBusy = new AtomicBoolean(false);
@Bean
StandardIntegrationFlow someFlow() {
return IntegrationFlows
.from("someChannel")
.aggregate(aggregatorSpec -> aggregatorSpec
//The only rule to release a group:
//wait 500ms after last message and have a free someHandler
.groupTimeout(500)
.sendPartialResultOnExpiry(true) //if 500ms expired - send group
.expireGroupsUponCompletion(true) //group should be filled again
.correlationStrategy(message -> true) //one group key, all messages in one group
.releaseStrategy(message -> false) //never release messages, only with timeout
//Send messages one by one. This is not part of this task.
//I just want to know how to do that. Like splitter.
//.outputProcessor(MessageGroup::getMessages)
)
.handle("someHandler")
.get();
}
}
I have a solution in plain code (Kotlin): https://pastebin.com/mti3Y5tD
UPDATE
The configuration below does not clear the group. The group keeps growing and growing, and eventually it fails with an error.
Error:
*** java.lang.instrument ASSERTION FAILED ***: "!errorOutstanding" with message transform method call failed at JPLISAgent.c line: 844
Configuration:
@Configuration
public class InterfaceHandlerConfigJava {
@Bean
MessageChannel interfaceAggregatorFlowChannel() {
return MessageChannels.publishSubscribe("interfaceAggregatorFlowChannel").get();
}
@EventListener(ApplicationReadyEvent.class)
public void initTriggerPacket(ApplicationReadyEvent event) {
MessageChannel channel = event.getApplicationContext().getBean("interfaceAggregatorFlowChannel", MessageChannel.class);
channel.send(MessageBuilder.withPayload(new InterfaceHandler.HandlerReadyMessage()).build());
}
@Bean
StandardIntegrationFlow someFlow(
InterfaceHandler interfaceHandler
) {
long lastMessageTimeout = 10L;
return IntegrationFlows
.from("interfaceAggregatorFlowChannel")
.aggregate(aggregatorSpec -> aggregatorSpec
.groupTimeout(messageGroup -> {
if (haveInstance(messageGroup, InterfaceHandler.HandlerReadyMessage.class)) {
System.out.println("case HandlerReadyMessage");
if (haveInstance(messageGroup, DbChangeStreamConfiguration.InitFromDbMessage.class)) {
System.out.println("case InitFromDbMessage");
return 0L;
} else if (messageGroup.size() > 1) {
long groupCreationTimeout =
messageGroup.getTimestamp() + 500L - System.currentTimeMillis();
long timeout = Math.min(groupCreationTimeout, lastMessageTimeout);
System.out.println("case messageGroup.size() > 1, timeout: " + timeout);
return timeout;
}
}
System.out.println("case Handler NOT ReadyMessage");
return null;
})
.sendPartialResultOnExpiry(true)
.expireGroupsUponCompletion(true)
.expireGroupsUponTimeout(true)
.correlationStrategy(message -> true)
.releaseStrategy(message -> false)
)
.handle(interfaceHandler, "handle")
.channel("interfaceAggregatorFlowChannel")
.get();
}
private boolean haveInstance(MessageGroup messageGroup, Class<?> clazz) {
for (Message<?> message : messageGroup.getMessages()) {
if (clazz.isInstance(message.getPayload())) {
return true;
}
}
return false;
}
}
I want to highlight that this flow is a cycle: there is an IN but no OUT. Messages go to the IN, and the handler emits a HandlerReadyMessage at the end.
Maybe there should be some thread-breaking channel?
FINAL VARIANT
Since the aggregator and the handler should not block each other (and should not cause a stack overflow), they have to run on different threads. In the configuration below this is achieved with queue channels. It seems that publish-subscribe channels do not run their subscribers on different threads (at least when there is only one subscriber).
@Configuration
public class InterfaceHandlerConfigJava {
// acts as thread breaker too
@Bean
MessageChannel interfaceAggregatorFlowChannel() {
return MessageChannels.queue("interfaceAggregatorFlowChannel").get();
}
@Bean
MessageChannel threadBreaker() {
return MessageChannels.queue("threadBreaker").get();
}
@EventListener(ApplicationReadyEvent.class)
public void initTriggerPacket(ApplicationReadyEvent event) {
MessageChannel channel = event.getApplicationContext().getBean("interfaceAggregatorFlowChannel", MessageChannel.class);
channel.send(MessageBuilder.withPayload(new InterfaceHandler.HandlerReadyMessage()).build());
}
@Bean
StandardIntegrationFlow someFlow(
InterfaceHandler interfaceHandler
) {
long lastMessageTimeout = 10L;
return IntegrationFlows
.from("interfaceAggregatorFlowChannel")
.aggregate(aggregatorSpec -> aggregatorSpec
.groupTimeout(messageGroup -> {
if (haveInstance(messageGroup, InterfaceHandler.HandlerReadyMessage.class)) {
System.out.println("case HandlerReadyMessage");
if (haveInstance(messageGroup, DbChangeStreamConfiguration.InitFromDbMessage.class)) {
System.out.println("case InitFromDbMessage");
return 0L;
} else if (messageGroup.size() > 1) {
long groupCreationTimeout =
messageGroup.getTimestamp() + 500L - System.currentTimeMillis();
long timeout = Math.min(groupCreationTimeout, lastMessageTimeout);
System.out.println("case messageGroup.size() > 1, timeout: " + timeout);
return timeout;
}
}
System.out.println("case Handler NOT ReadyMessage");
return null;
})
.sendPartialResultOnExpiry(true)
.expireGroupsUponCompletion(true)
.expireGroupsUponTimeout(true)
.correlationStrategy(message -> true)
.releaseStrategy(message -> false)
.poller(pollerFactory -> pollerFactory.fixedRate(1))
)
.channel("threadBreaker")
.handle(interfaceHandler, "handle", spec -> spec.poller(meta -> meta.fixedRate(1)))
.channel("interfaceAggregatorFlowChannel")
.get();
}
private boolean haveInstance(MessageGroup messageGroup, Class<?> clazz) {
for (Message<?> message : messageGroup.getMessages()) {
if (clazz.isInstance(message.getPayload())) {
return true;
}
}
return false;
}
}

It's not clear what you mean by timeout "b", but you can use a .groupTimeoutExpression(...) to dynamically determine the group timeout.
You don't need to worry about sending messages one by one; when the output processor returns a collection of Message<?>, they are sent one at a time.
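For example, here is a minimal sketch using the channel and handler names from the question (the concrete timeout values are placeholders). The timeout function is re-evaluated each time a message is added to the group, so returning a short quiet-period timeout gives the debounce (timeout "a"), while capping it against the group's creation time bounds the overall wait (timeout "b"):
@Bean
StandardIntegrationFlow debouncedFlow() {
    long quietPeriod = 100L; // timeout "a": wait this long after the last message
    long maxInterval = 500L; // timeout "b": never wait longer than this from group creation
    return IntegrationFlows
            .from("someChannel")
            .aggregate(a -> a
                    .correlationStrategy(message -> true) // one group for all messages
                    .releaseStrategy(group -> false)      // release only via the group timeout
                    .groupTimeout(group -> Math.max(0L, Math.min(
                            quietPeriod,
                            group.getTimestamp() + maxInterval - System.currentTimeMillis())))
                    .sendPartialResultOnExpiry(true)
                    .expireGroupsUponCompletion(true))
            .handle("someHandler")
            .get();
}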

Related

How to poll for multiple files at once with Spring Integration with WebFlux?

I have the configuration below for file monitoring using Spring Integration and WebFlux.
It works well, but if I drop in 100 files it will pick up one file at a time, with a 10-second gap between the "Received a notification of new file" log messages.
How do I poll for multiple files at once, so I don't have to wait 1000 seconds for all my files to finally register?
@Configuration
@EnableIntegration
public class FileMonitoringConfig {
private static final Logger logger =
LoggerFactory.getLogger(FileMonitoringConfig.class.getName());
#Value("${monitoring.folder}")
private String monitoringFolder;
#Value("${monitoring.polling-in-seconds:10}")
private int pollingInSeconds;
@Bean
Publisher<Message<Object>> myMessagePublisher() {
return IntegrationFlows.from(
Files.inboundAdapter(new File(monitoringFolder))
.useWatchService(false),
e -> e.poller(Pollers.fixedDelay(pollingInSeconds, TimeUnit.SECONDS)))
.channel(myChannel())
.toReactivePublisher();
}
@Bean
Function<Flux<Message<Object>>, Publisher<Message<Object>>> myReactiveSource() {
return flux -> myMessagePublisher();
}
@Bean
FluxMessageChannel myChannel() {
return new FluxMessageChannel();
}
@Bean
@ServiceActivator(
inputChannel = "myChannel",
async = "true",
reactive = @Reactive("myReactiveSource"))
ReactiveMessageHandler myMessageHandler() {
return new ReactiveMessageHandler() {
@Override
public Mono<Void> handleMessage(Message<?> message) throws MessagingException {
return Mono.fromFuture(doHandle(message));
}
private CompletableFuture<Void> doHandle(Message<?> message) {
return CompletableFuture.runAsync(
() -> {
logger.info("Received a notification of new file: {}", message.getPayload());
File file = (File) message.getPayload();
});
}
};
}
}
The inbound channel adapter polls a single data record from the source per poll cycle.
Consider adding maxMessagesPerPoll(-1) to your poller() configuration.
See more in the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-adapter-namespace-inbound
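For instance, a minimal sketch of the suggested change, reusing the publisher bean from the question (everything else stays the same; maxMessagesPerPoll(-1) drains all available files in each poll cycle instead of emitting a single file per poll):
@Bean
Publisher<Message<Object>> myMessagePublisher() {
    return IntegrationFlows.from(
            Files.inboundAdapter(new File(monitoringFolder))
                    .useWatchService(false),
            e -> e.poller(Pollers.fixedDelay(pollingInSeconds, TimeUnit.SECONDS)
                    .maxMessagesPerPoll(-1))) // -1 = no per-poll limit
            .channel(myChannel())
            .toReactivePublisher();
}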

Spring Integration resequencer does not release the last group of messages

I have the following configuration:
@Bean
public IntegrationFlow messageFlow(JdbcMessageStore groupMessageStore, TransactionSynchronizationFactory syncFactory, TaskExecutor te, ThreadPoolTaskScheduler ts, RealTimeProcessor processor) {
return IntegrationFlows
.from("inputChannel")
.handle(processor, "handleInputMessage", consumer -> consumer
.taskScheduler(ts)
.poller(poller -> poller
.fixedDelay(pollerFixedDelay)
.receiveTimeout(pollerReceiveTimeout)
.maxMessagesPerPoll(pollerMaxMessagesPerPoll)
.taskExecutor(te)
.transactional()
.transactionSynchronizationFactory(syncFactory)))
.resequence(s -> s.messageStore(groupMessageStore)
.releaseStrategy(new TimeoutCountSequenceSizeReleaseStrategy(50, 30000)))
.channel("sendingChannel")
.handle(processor, "sendMessage")
.get();
}
If I send a single batch of e.g. 100 messages to the inputChannel, it works as expected until the inputChannel becomes empty. Once the inputChannel is empty, it also stops processing the messages that were still waiting to be resequenced. As a result there are always a couple of messages left in the groupMessageStore, even after the configured release timeout.
I'm guessing it's because the poller is configured only for the inputChannel, and if there are no messages there it never gets to the resequencer (so canRelease is never called on the release strategy).
But if I try adding a separate poller for the resequencer, I get the following error: A poller should not be specified for endpoint since channel x is a SubscribableChannel (not pollable).
Is there a different way to configure it so that the last group of messages is always released?
The release strategy is passive and needs something to trigger it to be called.
Add .groupTimeout(...) to release the partial sequence after the specified time elapses.
EDIT
@SpringBootApplication
public class So67993972Application {
private static final Logger log = LoggerFactory.getLogger(So67993972Application.class);
public static void main(String[] args) {
SpringApplication.run(So67993972Application.class, args);
}
@Bean
IntegrationFlow flow(MessageGroupStore mgs) {
return IntegrationFlows.from(MessageChannels.direct("input"))
.resequence(e -> e.messageStore(mgs)
.groupTimeout(5_000)
.sendPartialResultOnExpiry(true)
.releaseStrategy(new TimeoutCountSequenceSizeReleaseStrategy(50, 2000)))
.channel(MessageChannels.queue("output"))
.get();
}
@Bean
MessageGroupStore mgs() {
return new SimpleMessageStore();
}
@Bean
public ApplicationRunner runner(MessageChannel input, QueueChannel output, MessageGroupStore mgs) {
return args -> {
MessagingTemplate template = new MessagingTemplate(input);
log.info("Sending");
template.send(MessageBuilder.withPayload("foo")
.setHeader(IntegrationMessageHeaderAccessor.CORRELATION_ID, "bar")
.setHeader(IntegrationMessageHeaderAccessor.SEQUENCE_NUMBER, 2)
.setHeader(IntegrationMessageHeaderAccessor.SEQUENCE_SIZE, 2)
.build());
log.info(output.receive(10_000).toString());
Thread.sleep(1000);
log.info(mgs.getMessagesForGroup("bar").toString());
};
}
}

Receive messages from a channel by some event spring integration dsl [duplicate]

I have a channel that stores messages. When new messages arrive, if the server has not yet processed all the messages (those still in the queue), I need to clear the queue (for example, by rerouting all its data into another channel). For this I used a router. But the problem is that when a new message arrives, not only the old messages but also the new ones are rerouted into the other channel. New messages must remain in the queue. How can I solve this problem?
This is my code:
@Bean
public IntegrationFlow integerFlow() {
return IntegrationFlows.from("input")
.bridge(e -> e.poller(Pollers.fixedDelay(500, TimeUnit.MILLISECONDS, 1000).maxMessagesPerPoll(1)))
.route(r -> {
if (flag) {
return "mainChannel";
} else {
return "garbageChannel";
}
})
.get();
}
@Bean
public IntegrationFlow outFlow() {
return IntegrationFlows.from("mainChannel")
.handle(m -> {
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println(m.getPayload() + "\tmainFlow");
})
.get();
}
@Bean
public IntegrationFlow outGarbage() {
return IntegrationFlows.from("garbageChannel")
.handle(m -> System.out.println(m.getPayload() + "\tgarbage"))
.get();
}
The flag value is changed through a @Gateway by pressing the "q" and "e" keys.
I would suggest you take a look at the purge() API of the QueueChannel:
/**
* Remove any {@link Message Messages} that are not accepted by the provided selector.
* @param selector The message selector.
* @return The list of messages that were purged.
*/
List<Message<?>> purge(@Nullable MessageSelector selector);
This way, with a custom MessageSelector, you will be able to remove the old messages from the queue. Consult the timestamp message header to determine the age of each message. With the result of this method you can do whatever you need to do with the old messages.
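A minimal sketch, assuming a one-second age cutoff and the garbage channel from the question; purge() removes (and returns) every message that the selector does not accept:
public void purgeOldMessages(QueueChannel input, MessageChannel garbageChannel) {
    long cutoff = System.currentTimeMillis() - 1_000L; // anything older than 1s counts as "old"
    List<Message<?>> purged = input.purge(message -> {
        Long timestamp = message.getHeaders().getTimestamp();
        return timestamp != null && timestamp >= cutoff; // accept (keep) only new-enough messages
    });
    purged.forEach(garbageChannel::send); // reroute the purged (old) messages
}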

Thread safety in executor channel

I have a message producer which produces around 15 messages/second.
The consumer is a Spring Integration project which consumes from the message queue and does a lot of processing. I have used an executor channel to process messages in parallel, and then the flow passes through some common handler classes.
Please find the code snippet below:
baseEventFlow() - We receive the message from the EMS queue and send it to a router.
router() - Based on the id of the message, a particular ExecutorChannel instance is selected. Every ExecutorChannel has its own dedicated single-threaded executor.
skwDefaultChannel(), gjsucaDefaultChannel(), rpaDefaultChannel() - All the ExecutorChannel beans are marked with @BridgeTo for the same channel, which starts the common flow.
uaEventFlow() - Here each message gets processed.
@Bean
public IntegrationFlow baseEventFlow() {
return IntegrationFlows
.from(Jms.messageDrivenChannelAdapter(Jms.container(this.emsConnectionFactory, this.emsQueue).get()))
.wireTap(FLTAWARE_WIRE_TAP_CHNL)
.route(router()).get();
}
public AbstractMessageRouter router() {
return new AbstractMessageRouter() {
@Override
protected Collection<MessageChannel> determineTargetChannels(Message<?> message) {
if (message.getPayload().toString().contains("\"id\":\"RPA")) {
return Collections.singletonList(skwDefaultChannel());
}else if (message.getPayload().toString().contains("\"id\":\"ASH")) {
return Collections.singletonList(rpaDefaultChannel());
} else if (message.getPayload().toString().contains("\"id\":\"GJS")
|| message.getPayload().toString().contains("\"id\":\"UCA")) {
return Collections.singletonList(gjsucaDefaultChannel());
} else {
return Collections.singletonList(new NullChannel());
}
}
};
}
@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel skwDefaultChannel() {
return MessageChannels.executor(SKW_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}
@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel gjsucaDefaultChannel() {
return MessageChannels.executor(GJS_UCA_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}
@Bean
@BridgeTo("uaDefaultChannel")
public MessageChannel rpaDefaultChannel() {
return MessageChannels.executor(RPA_DEFAULT_CHANNEL_NAME, Executors.newFixedThreadPool(1)).get();
}
@Bean
public IntegrationFlow uaEventFlow() {
return IntegrationFlows.from("uaDefaultChannel")
.wireTap(UA_WIRE_TAP_CHNL)
.transform(eventHandler, "parseEvent")
.handle(uaImpl, "process").get();
}
My concern is that in uaEventFlow() the common transformer and handler methods are not thread-safe, which may cause issues. How can we ensure that we inject a new transformer and handler for every message?
Should I change the scope of the transformer and handler beans to prototype?
Instead of bridging to a common flow, you should move the .transform() and .handle() to each of the upstream flows and add
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
to their #Bean definitions so each gets its own instance.
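A minimal sketch of that restructuring (the EventHandler type is an assumption; the channel and handler names come from the question). With prototype scope, each flow bean that calls eventHandler() gets its own instance:
@Bean
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public EventHandler eventHandler() {
    return new EventHandler();
}

// repeat this flow bean for gjsucaDefaultChannel() and rpaDefaultChannel()
@Bean
public IntegrationFlow skwFlow() {
    return IntegrationFlows.from(skwDefaultChannel())
            .transform(eventHandler(), "parseEvent") // prototype bean method: fresh instance per call
            .handle(uaImpl, "process")
            .get();
}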
But, it's generally better to make your code thread-safe.

How to re-queue message when spring integration configuration includes a priority channel

I have a Spring Integration configuration that utilizes a priority channel. When an item is read from that channel, local resources are checked at that point in time, and if the resources are not available to process the item, I would like to requeue the message so that another machine picks it up. Originally, I wrongly threw an exception thinking that a requeue would occur, but as was answered in my other question, this does not work, since the priority channel executes on a different thread than the listener container.
I thought about placing a filter right after the inbound channel adapter and throwing an exception if resources are not available at that time, but at that instant an accurate assessment of resources cannot be made, because resource availability at that time does not match what will be available when the message is eventually selected based on priority.
My next thought is to place a filter after the priority channel and before the service activator, directing messages that cannot be handled with current resources to the discard-channel, which is defined as an outbound channel adapter that sends the message back to the original queue. Are there pitfalls to this approach?
EDIT 20150917:
Per Gary's advice, I have moved to RabbitMQ 3.5.x in order to take advantage of the built-in priority queues. I now have a problem tracking the number of attempts, as it appears my original message is placed back on the queue rather than my modified message. I have updated the code blocks to reflect the current setup.
EDIT 20150922:
I am updating this post to reflect the final proof-of-concept code base that I created. I am not a Spring Integration expert by any means, so please keep that in mind, as well as the fact that this test code is not production-ready. My original intent was to have messages resubmitted and retried a certain number of times if a particular exception was thrown. This can be accomplished using the StatefulRetryOperationsInterceptor. But to experiment further, I wanted to be able to set/increment a header on failure and then have something in my flow that could react to that value. That was accomplished by using an extension of RepublishMessageRecoverer that overrides additionalHeaders(). This object is then used to configure the RetryOperationsInterceptor.
One other minor thing: I wanted to reduce some of the default Spring Integration logging when my signal exception was thrown, so I needed to name my error channel "errorChannel" in order to replace the Spring Integration default. I also needed to create a custom ErrorHandler to assign to the ListenerContainer, whose default handler logs everything at ERROR level.
Here is my current setup:
Spring Integration 4.2.0.RELEASE
Spring AMQP 1.5.0.RELEASE
RabbitMQ 3.5.x
Configuration
@Autowired
public void setSpringIntegrationConfigHelper (SpringIntegrationHelper springIntegrationConfigHelper) {
this.springIntegrationConfigHelper = springIntegrationConfigHelper;
}
@Bean
public String priorityPOCQueueName() {
return "poc.priority";
}
@Bean
public Queue priorityPOCQueue(RabbitAdmin rabbitAdmin) {
boolean durable = true;
boolean exclusive = false;
boolean autoDelete = false;
//Adding the x-max-priority argument is what signals RabbitMQ that this is a priority queue. Must be Rabbit 3.5.x
Map<String,Object> arguments = new HashMap<String, Object>();
arguments.put("x-max-priority", 5);
Queue queue = new Queue(priorityPOCQueueName(),
durable,
exclusive,
autoDelete,
arguments);
rabbitAdmin.declareQueue(queue);
return queue;
}
@Bean
public Binding priorityPOCQueueBinding(RabbitAdmin rabbitAdmin) {
Binding binding = new Binding(priorityPOCQueueName(),
DestinationType.QUEUE,
"amq.direct",
priorityPOCQueue(rabbitAdmin).getName(),
null);
rabbitAdmin.declareBinding(binding);
return binding;
}
@Bean
public AmqpTemplate priorityPOCMessageTemplate(ConnectionFactory amqpConnectionFactory,
@Qualifier("priorityPOCQueueName") String queueName,
@Qualifier("jsonMessageConverter") MessageConverter messageConverter) {
RabbitTemplate template = new RabbitTemplate(amqpConnectionFactory);
template.setChannelTransacted(false);
template.setExchange("amq.direct");
template.setQueue(queueName);
template.setRoutingKey(queueName);
template.setMessageConverter(messageConverter);
return template;
}
@Autowired
@Qualifier("priorityPOCQueue")
public void setPriorityPOCQueue(Queue priorityPOCQueue) {
this.priorityPOCQueue = priorityPOCQueue;
}
@Bean
public MessageRecoverer miTestMessageRecoverer(final AmqpTemplate priorityPOCMessageTemplate) {
return new MessageRecoverer() {
@Override
public void recover(org.springframework.amqp.core.Message msg, Throwable t) {
StringBuilder sb = new StringBuilder();
sb.append("Firing Test Recoverer: ").append(t.getClass().getName()).append(" Message Count: ")
.append(msg.getMessageProperties().getMessageCount())
.append(" ID: ").append(msg.getMessageProperties().getMessageId())
.append(" DeliveryTag: ").append(msg.getMessageProperties().getDeliveryTag())
.append(" Redilivered: ").append(msg.getMessageProperties().isRedelivered());
logger.debug(sb.toString());
PriorityMessage m = new PriorityMessage(5);
m.setId(randomGenerator.nextLong(10L, 1000000L));
priorityPOCMessageTemplate.convertAndSend(m , new SimulateErrorHeaderPostProcessor(Boolean.FALSE, m.getPriority()));
}
};
}
@Bean
public RepublishMessageRecoverer miRepublishRecoverer(final AmqpTemplate priorityPOCMessageTemplate) {
class MiRecoverer extends RepublishMessageRecoverer {
public MiRecoverer(AmqpTemplate errorTemplate) {
super(errorTemplate);
this.setErrorRoutingKeyPrefix("");
}
@Override
protected Map<? extends String, ? extends Object> additionalHeaders(
org.springframework.amqp.core.Message message, Throwable cause) {
Map<String, Object> map = new HashMap<>();
if (message.getMessageProperties().getHeaders().containsKey("jmattempts") == false) {
map.put("jmattempts", 0);
} else {
Integer count = Integer.valueOf(message.getMessageProperties().getHeaders().get("jmattempts").toString());
map.put("jmattempts", ++count);
}
return map;
}
} ;
return new MiRecoverer(priorityPOCMessageTemplate);
}
@Bean
public StatefulRetryOperationsInterceptor inadequateResourceInterceptor(@Qualifier("priorityPOCMessageTemplate") AmqpTemplate priorityPOCMessageTemplate
, @Qualifier("priorityMessageKeyGenerator") PriorityMessageKeyGenerator priorityMessageKeyGenerator
, @Qualifier("miTestMessageRecoverer") MessageRecoverer messageRecoverer
, @Qualifier("miRepublishRecoverer") RepublishMessageRecoverer miRepublishRecoverer) {
StatefulRetryInterceptorBuilder b = RetryInterceptorBuilder.stateful();
return b.maxAttempts(2)
.backOffOptions(2000L, 1.0D, 4000L)
.messageKeyGenerator(priorityMessageKeyGenerator)
.recoverer(miRepublishRecoverer)
.build();
}
#Bean(name="exec.priorityPOC")
TaskExecutor taskExecutor() {
ThreadPoolTaskExecutor e = new ThreadPoolTaskExecutor();
e.setCorePoolSize(1);
e.setQueueCapacity(1);
return e;
}
/* @Bean(name="poc.priorityChannel")
public MessageChannel pocPriorityChannel() {
PriorityChannel c = new PriorityChannel(new PriorityComparator());
c.setComponentName("poc.priorityChannel");
c.setBeanName("poc.priorityChannel");
return c;
}
*/
#Bean(name="poc.inputChannel")
public MessageChannel pocPriorityChannel() {
DirectChannel c = new DirectChannel();
c.setComponentName("poc.inputChannel");
c.setBeanName("poc.inputChannel");
return c;
}
#Bean(name="poc.inboundChannelAdapter") //make this a unique name
public AmqpInboundChannelAdapter amqpInboundChannelAdapter(#Qualifier("exec.priorityPOC") TaskExecutor taskExecutor
, #Qualifier("errorChannel") MessageChannel pocErrorChannel
, #Qualifier("inadequateResourceInterceptor") StatefulRetryOperationsInterceptor inadequateResourceInterceptor) {
org.aopalliance.aop.Advice[] adviceChain = new org.aopalliance.aop.Advice[]{inadequateResourceInterceptor};
int concurrentConsumers = 1;
AmqpInboundChannelAdapter a = springIntegrationConfigHelper.createInboundChannelAdapter(taskExecutor
, pocPriorityChannel(), new Queue[]{priorityPOCQueue}, concurrentConsumers, adviceChain
, new PocErrorHandler());
a.setErrorChannel(pocErrorChannel);
return a;
}
@Transformer(inputChannel = "poc.inputChannel", outputChannel = "poc.procesPoc")
public Message<PriorityMessage> incrementAttempts(Message<PriorityMessage> msg) {
//I stopped using this in the POC.
return msg;
}
@ServiceActivator(inputChannel="poc.procesPoc")
public void procesPoc(@Header(SimulateErrorHeaderPostProcessor.ERROR_SIMULATE_HEADER_KEY) Boolean simulateError
, @Headers Map<String, Object> headerMap
, PriorityMessage priorityMessage) throws InterruptedException {
if (isFirstMessageReceived == false) {
//Thread.sleep(15000); //Cause a bit of a backup so we can see prioritizing in action.
isFirstMessageReceived = true;
}
Integer retryAttempts = 0;
if (headerMap.containsKey("jmattempts")) {
retryAttempts = Integer.valueOf(headerMap.get("jmattempts").toString());
}
logger.debug("Received message with priority: " + priorityMessage.getPriority() + ", simulateError: " + simulateError + ", Current attempts count is "
+ retryAttempts);
if (simulateError && retryAttempts < PriorityMessage.MAX_MESSAGE_RETRY_COUNT) {
logger.debug(" Simulating an error and re-queue'ng. Current attempt count is " + retryAttempts);
throw new AnalyzerNonAdequateResourceException();
} else if (simulateError && retryAttempts > PriorityMessage.MAX_MESSAGE_RETRY_COUNT) {
logger.debug(" Max attempt count exceeded");
}
}
/**************************************************************************************************
*
* Error Channel
*
**************************************************************************************************/
//Note that we want to override default Spring error channel, so the name of the bean must be errorChannel
#Bean(name="errorChannel")
public MessageChannel pocErrorChannel() {
DirectChannel c = new DirectChannel();
c.setComponentName("errorChannel");
c.setBeanName("errorChannel");
return c;
}
@ServiceActivator(inputChannel="errorChannel")
public void pocHandleError(Message<MessagingException> message) throws Throwable {
MessagingException me = message.getPayload();
logger.error("pocHandleError: error encountered: " + me.getCause().getClass().getName());
SortedMap<String, Object> sorted= new TreeMap<>();
sorted.putAll(me.getFailedMessage().getHeaders());
if (me.getCause() instanceof AnalyzerNonAdequateResourceException) {
logger.debug("Headers: " + sorted.toString());
//Let this message get requeued
throw me.getCause();
}
Message<?> failedMsg = me.getFailedMessage();
Object o = failedMsg.getPayload();
StringBuilder sb = new StringBuilder();
if (o != null) {
sb.append("AnalyzerErrorHandler: Failed Message Type: ")
.append(o.getClass().getCanonicalName()).append(". toString: ").append(o.toString());
logger.error(sb.toString());
}
//The first level sometimes brings back either MessagingHandlingException or
//MessagingTransformationException which may contain a subcause
Exception e = (Exception)me.getCause();
int i = 0;
sb.delete(0, sb.length());
sb.append("AnalyzerErrorHandler nested messages: ");
while (e != null && i++ < 10) {
sb.append(System.lineSeparator()).append(" ")
.append(e.getClass().getCanonicalName()).append(": ")
.append(e.getMessage());
e = (Exception) e.getCause(); //advance to the nested cause; otherwise the loop logs the same exception ten times
}
if (i > 0) {
logger.error(sb.toString());
}
//Don't want a message to recycle
throw new AmqpRejectAndDontRequeueException(e);
}
/**
* This gets set on the ListenerContainer. The default handler on the listener
* container logs everything with full stack trace. We don't want to do that
* for our known resource exception
*/
public static class PocErrorHandler implements ErrorHandler {
@Override
public void handleError(Throwable t) {
Throwable cause = t.getCause();
if (cause != null) {
while (cause.getCause() != null) {
cause = cause.getCause();
}
} else {
cause = t;
}
if (cause instanceof AnalyzerNonAdequateResourceException) {
logger.info(AnalyzerNonAdequateResourceException.class.getName() + ": not enough resources to process the item.");
return;
}
else {
logger.error("POC Listener Exception", t);
}
}
}
SpringIntegrationHelper
protected ConnectionFactory connectionFactory;
protected MessageConverter messageConverter;
@Autowired
public void setConnectionFactory (ConnectionFactory connectionFactory) {
this.connectionFactory = connectionFactory;
}
@Autowired
public void setMessageConverter(@Qualifier("jsonMessageConverter") MessageConverter messageConverter) {
this.messageConverter = messageConverter;
}
public AmqpInboundChannelAdapter createInboundChannelAdapter(TaskExecutor taskExecutor
, MessageChannel outputChannel, Queue[] queues, int concurrentConsumers
, org.aopalliance.aop.Advice[] adviceChain,
ErrorHandler errorHandler) {
SimpleMessageListenerContainer listenerContainer =
new SimpleMessageListenerContainer(connectionFactory);
//AUTO is default, but setting it anyhow.
listenerContainer.setAcknowledgeMode(AcknowledgeMode.AUTO);
listenerContainer.setAutoStartup(true);
listenerContainer.setConcurrentConsumers(concurrentConsumers);
listenerContainer.setMessageConverter(messageConverter);
listenerContainer.setQueues(queues);
//listenerContainer.setChannelTransacted(false);
listenerContainer.setErrorHandler(errorHandler);
listenerContainer.setPrefetchCount(1);
listenerContainer.setTaskExecutor(taskExecutor);
listenerContainer.setDefaultRequeueRejected(true);
if (adviceChain != null && adviceChain.length > 0) {
listenerContainer.setAdviceChain(adviceChain);
}
AmqpInboundChannelAdapter a = new AmqpInboundChannelAdapter(listenerContainer);
a.setMessageConverter(messageConverter);
a.setAutoStartup(true);
a.setHeaderMapper(MyAmqpHeaderMapper.createPassAllHeaders());
a.setOutputChannel(outputChannel);
return a;
}
It's not clear why you want to use a PriorityChannel in this context; why not use a priority queue in RabbitMQ? That way, you can run your flow on the container thread.
Sending the message to the back of the queue yourself would work, but there is a risk of message loss.
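A minimal sketch of the RabbitMQ-side approach, assuming the exchange and routing key from the question's template; the priority is set per message at publish time and the broker delivers higher-priority messages first, so no PriorityChannel (and no extra thread handoff) is needed on the consumer side:
void sendWithPriority(RabbitTemplate template, Object payload, int priority) {
    template.convertAndSend("amq.direct", "poc.priority", payload, message -> {
        message.getMessageProperties().setPriority(priority); // 0 .. x-max-priority (5 here)
        return message;
    });
}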
