How to poll for multiple files at once with Spring Integration and WebFlux?

I have the following configuration below for file monitoring using Spring Integration and WebFlux.
It works well, but if I drop in 100 files it will pick up one file at a time with a 10 second gap between the "Received a notification of new file" log messages.
How do I poll for multiple files at once, so I don't have to wait 1000 seconds for all my files to finally register?
@Configuration
@EnableIntegration
public class FileMonitoringConfig {

    private static final Logger logger =
            LoggerFactory.getLogger(FileMonitoringConfig.class.getName());

    @Value("${monitoring.folder}")
    private String monitoringFolder;

    @Value("${monitoring.polling-in-seconds:10}")
    private int pollingInSeconds;

    @Bean
    Publisher<Message<Object>> myMessagePublisher() {
        return IntegrationFlows.from(
                        Files.inboundAdapter(new File(monitoringFolder))
                                .useWatchService(false),
                        e -> e.poller(Pollers.fixedDelay(pollingInSeconds, TimeUnit.SECONDS)))
                .channel(myChannel())
                .toReactivePublisher();
    }

    @Bean
    Function<Flux<Message<Object>>, Publisher<Message<Object>>> myReactiveSource() {
        return flux -> myMessagePublisher();
    }

    @Bean
    FluxMessageChannel myChannel() {
        return new FluxMessageChannel();
    }

    @Bean
    @ServiceActivator(
            inputChannel = "myChannel",
            async = "true",
            reactive = @Reactive("myReactiveSource"))
    ReactiveMessageHandler myMessageHandler() {
        return new ReactiveMessageHandler() {

            @Override
            public Mono<Void> handleMessage(Message<?> message) throws MessagingException {
                return Mono.fromFuture(doHandle(message));
            }

            private CompletableFuture<Void> doHandle(Message<?> message) {
                return CompletableFuture.runAsync(
                        () -> {
                            logger.info("Received a notification of new file: {}", message.getPayload());
                            File file = (File) message.getPayload();
                        });
            }
        };
    }
}

The Inbound Channel Adapter polls a single data record from the source per poll cycle.
Consider adding maxMessagesPerPoll(-1) to your poller() configuration.
See more in docs: https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-adapter-namespace-inbound
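For example, a minimal sketch of that change applied to the myMessagePublisher() bean from the question (only the maxMessagesPerPoll(-1) call is new):

@Bean
Publisher<Message<Object>> myMessagePublisher() {
    return IntegrationFlows.from(
                    Files.inboundAdapter(new File(monitoringFolder))
                            .useWatchService(false),
                    e -> e.poller(Pollers.fixedDelay(pollingInSeconds, TimeUnit.SECONDS)
                            // -1 = unbounded: emit every file found in the directory per poll cycle
                            .maxMessagesPerPoll(-1)))
            .channel(myChannel())
            .toReactivePublisher();
}

With this in place, one poll cycle drains all 100 files instead of emitting a single file every 10 seconds.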

Related

Spring integration Java DSL http outbound

How to resend the same or a modified message from an outbound HTTP call in case of specific client error responses like 400, 413, etc.
@Bean
private IntegrationFlow myChannel() {
    IntegrationFlowBuilder builder =
            IntegrationFlows.from(queue)
                    .handle(//http post method config
                            ...
                            .expectedResponseType(String.class))
                    .channel(MessageChannels.publishSubscribe(channel2));
    return builder.get();
}

@Bean
private IntegrationFlow defaultErrorChannel() {
}
EDIT: Added an endpoint configurer to the handle() method
@Bean
private IntegrationFlow myChannel() {
    IntegrationFlowBuilder builder =
            IntegrationFlows.from(queue)
                    .handle(//http post method config
                            ...
                            .expectedResponseType(String.class),
                            e -> e.advice(myRetryAdvice()))
                    .channel(MessageChannels.publishSubscribe(channel2));
    return builder.get();
}

@Bean
public Advice myRetryAdvice() {
    ... // set custom retry policy
}
Custom retry policy:
class InternalServerExceptionClassifierRetryPolicy extends ExceptionClassifierRetryPolicy {

    public InternalServerExceptionClassifierRetryPolicy() {
        final SimpleRetryPolicy simpleRetryPolicy = new SimpleRetryPolicy();
        simpleRetryPolicy.setMaxAttempts(2);
        this.setExceptionClassifier(new Classifier<Throwable, RetryPolicy>() {

            @Override
            public RetryPolicy classify(Throwable classifiable) {
                if (classifiable instanceof HttpServerErrorException) {
                    // Retry specifically for 500 and 504
                    HttpStatus status = ((HttpServerErrorException) classifiable).getStatusCode();
                    if (status == HttpStatus.INTERNAL_SERVER_ERROR
                            || status == HttpStatus.GATEWAY_TIMEOUT) {
                        return simpleRetryPolicy;
                    }
                }
                return new NeverRetryPolicy();
            }
        });
    }
}
EDIT 2: Override open() to modify the original message
RequestHandlerRetryAdvice retryAdvice = new RequestHandlerRetryAdvice() {

    @Override
    public <T, E extends Throwable> boolean open(RetryContext retryContext, RetryCallback<T, E> callback) {
        Message<String> originalMsg =
                (Message) retryContext.getAttribute(ErrorMessageUtils.FAILED_MESSAGE_CONTEXT);
        Message<String> updatedMsg = //some updated message
        retryContext.setAttribute(ErrorMessageUtils.FAILED_MESSAGE_CONTEXT, updatedMsg);
        return super.open(retryContext, callback);
    }
};
See the RequestHandlerRetryAdvice: https://docs.spring.io/spring-integration/reference/html/messaging-endpoints.html#message-handler-advice-chain. You configure a RetryPolicy that marks those HttpClientErrorExceptions as retryable, and the framework re-sends for you.
The Java DSL lets you configure it via the second handle() argument, the endpoint configurer: .handle(..., e -> e.advice(myRetryAdvice)): https://docs.spring.io/spring-integration/reference/html/dsl.html#java-dsl-endpoints
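For completeness, a minimal sketch of what the myRetryAdvice() bean could look like, wiring the custom policy above into a RequestHandlerRetryAdvice (the RetryTemplate wiring here is an assumption, not the original poster's code):

@Bean
public Advice myRetryAdvice() {
    RetryTemplate retryTemplate = new RetryTemplate();
    // retry only the statuses the classifier policy above allows
    retryTemplate.setRetryPolicy(new InternalServerExceptionClassifierRetryPolicy());
    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    advice.setRetryTemplate(retryTemplate);
    return advice;
}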

Spring Kafka outboundChannelAdapter's control does not return back in the integration flow

After a message is sent, it gets published to the Kafka topic, but the Message from KafkaSuccessTransformer does not return to the REST controller. I am trying to return the message as-is if it was sent successfully, but nothing after the Kafka handler seems to be invoked.
@MessagingGateway
public interface MyGateway {

    @Gateway(requestChannel = "enrollChannel")
    Message<?> sendMsg(@Payload String payload);
}
------------------------
@RestController
public class Controller {

    @Autowired
    MyGateway myGateway;

    @PostMapping
    public Message<?> send(@RequestBody String request) throws Exception {
        Message<?> resp = myGateway.sendMsg(request);
        log.info("I am back"); // control doesn't come to this point
        return resp;
    }
}
--------------------------
@Component
public class MyIntegrationFlow {

    @Autowired
    KafkaSuccessTransformer stransformer;

    @Bean
    public MessageChannel enrollChannel() {
        return new DirectChannel();
    }

    @Bean
    public MessageChannel kafkaSuccessChannel() {
        return new DirectChannel();
    }

    @Bean
    public IntegrationFlow enrollIntegrationFlow() {
        return IntegrationFlows.from("enrollChannel")
                //another transformer which turns the string to Message<?>
                .handle(Kafka.outboundChannelAdapter(kafkaTemplate) //kafkaTemplate has the necessary config
                        .topic("topic1")
                        .messageKey(messageKeyFunction -> messageKeyFunction.getHeaders()
                                .get("key1"))
                        .sendSuccessChannel("kafkaSuccessChannel"))
                .get();
    }

    @Bean
    public IntegrationFlow successfulKafkaSends() {
        return f -> IntegrationFlows.from("kafkaSuccessChannel").transform(stransformer);
    }
}
--------------
@Component
public class KafkaSuccessTransformer {

    @Transformer
    public Message<?> transform(Message<?> message) {
        log.info("Message is sent to Kafka");
        return message; //control comes here but does not return to REST controller
    }
}
Channel adapters are for one-way traffic; there is no result.
Add a publishSubscribe channel with two subflows; the second one can be just a bridge to nowhere - .bridge() ends the flow. It will then return the outbound message to the gateway.
See https://docs.spring.io/spring-integration/docs/current/reference/html/dsl.html#java-dsl-subflows
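A minimal sketch of that suggestion, applied to the enrollIntegrationFlow() from the question (bean, channel, and topic names are taken from the question; the publishSubscribeChannel arrangement is the suggested addition, not confirmed code):

@Bean
public IntegrationFlow enrollIntegrationFlow() {
    return IntegrationFlows.from("enrollChannel")
            .publishSubscribeChannel(s -> s
                    // first subflow: the one-way Kafka adapter
                    .subscribe(f -> f.handle(Kafka.outboundChannelAdapter(kafkaTemplate)
                            .topic("topic1")
                            .sendSuccessChannel("kafkaSuccessChannel")))
                    // second subflow: a bridge to nowhere; it ends the flow and
                    // returns the outbound message to the gateway
                    .subscribe(f -> f.bridge()))
            .get();
}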
Per Artem:
Something is off in the configuration or code. The logic is like this: processSendResult(message, producerRecord, sendFuture, getSendSuccessChannel()). Then: getMessageBuilderFactory().fromMessage(message). So the replyChannel header is present in this "success" message, and that transform(stransformer) should really produce its return to the replyChannel for the gateway at the start of the flow. The only remaining possibility is that the KafkaSuccessTransformer code does not copy the request message headers into the reply message. Please share its whole code.
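If that turns out to be the issue, a sketch of a transformer that preserves the request headers (hypothetical code, since the actual KafkaSuccessTransformer was not shared; MessageBuilder is org.springframework.integration.support.MessageBuilder):

@Transformer
public Message<?> transform(Message<?> message) {
    log.info("Message is sent to Kafka");
    // copyHeaders() keeps the replyChannel header, so the gateway gets its reply
    return MessageBuilder.withPayload(message.getPayload())
            .copyHeaders(message.getHeaders())
            .build();
}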

Watch remote directory for added files and stream it for reading data over SFTP

I want to watch a remote directory for newly added (or not yet read) CSV files. Once files are identified, they should be read in order of the timestamp embedded in the file name. Each file should be read by streaming rather than copying it to the local machine. While a file is being read, append _reading to its name, and append _read once it has been read. The files are read over the SFTP protocol, and I am planning to use Spring Integration SFTP. If an error occurs while reading a file, or the data in the file is not as expected, I want to move that file into a sub-directory.
I have tried polling the remote directory and reading one CSV file at a time. Once a file is read, I remove it from the directory.
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-sftp</artifactId>
    <version>5.1.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-core</artifactId>
    <version>5.0.6.RELEASE</version>
</dependency>
Spring Boot version 2.0.3.RELEASE
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(hostname);
    factory.setPort(22);
    factory.setUser(username);
    factory.setPassword(password);
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<ChannelSftp.LsEntry>(factory);
}

@Bean
public MessageSource<InputStream> sftpMessageSource() {
    SftpStreamingMessageSource messageSource = new SftpStreamingMessageSource(template());
    messageSource.setRemoteDirectory(path);
    messageSource.setFilter(compositeFilters());
    return messageSource;
}

public CompositeFileListFilter compositeFilters() {
    return new CompositeFileListFilter()
            .addFilter(new SftpRegexPatternFileListFilter(".*csv"));
}

@Bean
public SftpRemoteFileTemplate template() {
    return new SftpRemoteFileTemplate(sftpSessionFactory());
}

@Bean
public IntegrationFlow sftpOutboundListFlow() {
    return IntegrationFlows.from(this.sftpMessageSource(), e -> e.poller(Pollers.fixedDelay(5, TimeUnit.SECONDS)))
            .handle(Sftp.outboundGateway(template(), NLST, path).options(Option.RECURSIVE))
            .filter(compositeFilters())
            .transform(sorter())
            .split()
            .handle(Sftp.outboundGateway(template(), GET, "headers['file_remoteDirectory'] + headers['file_remoteFile']").options(STREAM))
            .transform(csvToPojoTransformer())
            .handle(service())
            .handle(Sftp.outboundGateway(template(), MV, "headers['file_remoteDirectory'] + headers['file_remoteFile'] + '_read'"))
            .handle(after())
            .get();
}

@Bean
public MessageHandler sorter() {
    return new MessageHandler() {

        @Override
        public void handleMessage(Message<?> message) throws MessagingException {
            List<String> fileNames = (List<String>) message.getPayload();
            Collections.sort(fileNames);
        }
    };
}

@Bean
public MessageHandler csvToPojoTransformer() {
    return new MessageHandler() {

        @Override
        public void handleMessage(Message<?> message) throws MessagingException {
            InputStream streamData = (InputStream) message.getPayload();
            convertStreamtoObject(streamData, Class.class);
        }
    };
}

public List<?> convertStreamtoObject(InputStream inputStream, Class clazz) {
    HeaderColumnNameMappingStrategy ms = new HeaderColumnNameMappingStrategy();
    ms.setType(clazz);
    Reader reader = new InputStreamReader(inputStream);
    CsvToBean cb = new CsvToBeanBuilder(reader)
            .withType(clazz)
            .withMappingStrategy(ms)
            .withSkipLines(0)
            .withSeparator('|')
            .withThrowExceptions(true)
            .build();
    return cb.parse();
}

@Bean
public MessageHandler service() {
    return new MessageHandler() {

        @Override
        public void handleMessage(Message<?> message) throws MessagingException {
            List<Class> csvDataAsListOfPojo = (List<Class>) message.getPayload();
            // use this
        }
    };
}

@Bean
public ExpressionEvaluatingRequestHandlerAdvice after() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setSuccessChannelName("success.input");
    advice.setOnSuccessExpressionString("payload + ' was successful'");
    advice.setFailureChannelName("failure.input");
    advice.setOnFailureExpressionString("payload + ' was bad, with reason: ' + #exception.cause.message");
    advice.setTrapException(true);
    return advice;
}

@Bean
public IntegrationFlow success() {
    return f -> f.handle(System.out::println);
}

@Bean
public IntegrationFlow failure() {
    return f -> f.handle(System.out::println);
}
Updated Code
For complex scenarios (list, move, fetch, remove, etc.), you should use SFTP remote file gateways instead.
The SFTP outbound gateway provides a limited set of commands that let you interact with a remote SFTP server:
ls (list files)
nlst (list file names)
get (retrieve a file)
mget (retrieve multiple files)
rm (remove file(s))
mv (move and rename file)
put (send a file)
mput (send multiple files)
Or use the SftpRemoteFileTemplate directly from your code.
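For instance, a minimal sketch of using the template directly (the template() bean and path come from the question; the file name is made up for illustration):

// list the names in the remote directory
String[] names = template().execute(session -> session.listNames(path));
// mark one file as read, per the naming convention in the question
template().rename(path + "/data.csv", path + "/data.csv_read");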
EDIT
In response to your comments, you need something like this:
Inbound Channel Adapter (with poller) - returns directory name
LS Gateway
Filter (remove any files already fetched)
Transformer (sort the list)
Splitter
GET Gateway(stream option)
Transformer (csv to POJO)
Service (process POJO)
If you add
RM Gateway
after your service (to remove the remote file), you don't need the filter step.
You might find the Java DSL simpler to assemble this flow...
@Bean
public IntegrationFlow flow() {
    return IntegrationFlows.from(() -> "some/dir", e -> e.poller(...))
            .handle(...) // LS Gateway
            .filter(...)
            .transform(sorter())
            .split()
            .handle(...) // GET Gateway
            .transform(csvToPojoTransformer())
            .handle(myService())
            .get();
}

How to route using message headers in Spring Integration DSL Tcp

I have 2 server-side services and I would like to route messages to them using message headers, where remote clients put a service identification into the field type.
Is the code snippet from the server-side config the correct way? It throws a cast exception indicating that route() sees only the payload, not the message headers. Also, all the examples in the Spring Integration manual show only payload-based decisions.
@Bean
public IntegrationFlow serverFlow( // common flow for all my services, currently 2
        TcpNetServerConnectionFactory serverConnectionFactory,
        HeartbeatServer heartbeatServer,
        FeedServer feedServer) {
    return IntegrationFlows
            .from(Tcp.inboundGateway(serverConnectionFactory))
            .<Message<?>, String>route((m) -> m.getHeaders().get("type", String.class),
                    (routeSpec) -> routeSpec
                            .subFlowMapping("heartbeat", subflow -> subflow.handle(heartbeatServer::processRequest))
                            .subFlowMapping("feed", subflow -> subflow.handle(feedServer::consumeFeed)))
            .get();
}
Client side config:
@Bean
public IntegrationFlow heartbeatClientFlow(
        TcpNetClientConnectionFactory clientConnectionFactory,
        HeartbeatClient heartbeatClient) {
    return IntegrationFlows.from(heartbeatClient::send, e -> e.poller(Pollers.fixedDelay(Duration.ofSeconds(5))))
            .enrichHeaders(c -> c.header("type", "heartbeat"))
            .log()
            .handle(outboundGateway(clientConnectionFactory))
            .handle(heartbeatClient::receive)
            .get();
}

@Bean
public IntegrationFlow feedClientFlow(
        TcpNetClientConnectionFactory clientConnectionFactory) {
    return IntegrationFlows.from(FeedClient.MessageGateway.class)
            .enrichHeaders(c -> c.header("type", "feed"))
            .log()
            .handle(outboundGateway(clientConnectionFactory))
            .get();
}
And as usual here is the full demo project code, ClientConfig and ServerConfig.
There is no standard way to send headers over raw TCP. You need to encode them into the payload somehow (and extract them on the server side).
The framework provides a mechanism to do this for you, but it requires extra configuration.
See the documentation.
Specifically...
The MapJsonSerializer uses a Jackson ObjectMapper to convert between a Map and JSON. You can use this serializer in conjunction with a MessageConvertingTcpMessageMapper and a MapMessageConverter to transfer selected headers and the payload in JSON.
I'll try to find some time to create an example of how to use it.
But, of course, you can roll your own encoding/decoding.
EDIT
Here's an example configuration to use JSON to convey message headers over TCP...
@SpringBootApplication
public class TcpWithHeadersApplication {

    public static void main(String[] args) {
        SpringApplication.run(TcpWithHeadersApplication.class, args);
    }

    // Client side

    public interface TcpExchanger {

        public String exchange(String data, @Header("type") String type);
    }

    @Bean
    public IntegrationFlow client(@Value("${tcp.port:1234}") int port) {
        return IntegrationFlows.from(TcpExchanger.class)
                .handle(Tcp.outboundGateway(Tcp.netClient("localhost", port)
                        .deserializer(jsonMapping())
                        .serializer(jsonMapping())
                        .mapper(mapper())))
                .get();
    }

    // Server side

    @Bean
    public IntegrationFlow server(@Value("${tcp.port:1234}") int port) {
        return IntegrationFlows.from(Tcp.inboundGateway(Tcp.netServer(port)
                        .deserializer(jsonMapping())
                        .serializer(jsonMapping())
                        .mapper(mapper())))
                .log(Level.INFO, "exampleLogger", "'Received type header:' + headers['type']")
                .route("headers['type']", r -> r
                        .subFlowMapping("upper",
                                subFlow -> subFlow.transform(String.class, p -> p.toUpperCase()))
                        .subFlowMapping("lower",
                                subFlow -> subFlow.transform(String.class, p -> p.toLowerCase())))
                .get();
    }

    // Common

    @Bean
    public MessageConvertingTcpMessageMapper mapper() {
        MapMessageConverter converter = new MapMessageConverter();
        converter.setHeaderNames("type");
        return new MessageConvertingTcpMessageMapper(converter);
    }

    @Bean
    public MapJsonSerializer jsonMapping() {
        return new MapJsonSerializer();
    }

    // Console

    @Bean
    @DependsOn("client")
    public ApplicationRunner runner(TcpExchanger exchanger,
            ConfigurableApplicationContext context) {
        return args -> {
            System.out.println("Enter some text; if it starts with a lower case character,\n"
                    + "it will be uppercased by the server; otherwise it will be lowercased;\n"
                    + "enter 'quit' to end");
            Scanner scanner = new Scanner(System.in);
            String request = scanner.nextLine();
            while (!"quit".equals(request.toLowerCase())) {
                if (StringUtils.hasText(request)) {
                    String result = exchanger.exchange(request,
                            Character.isLowerCase(request.charAt(0)) ? "upper" : "lower");
                    System.out.println(result);
                }
                request = scanner.nextLine();
            }
            scanner.close();
            context.close();
        };
    }
}

How to re-queue message when spring integration configuration includes a priority channel

I have a Spring Integration configuration that utilizes a priority channel. When an item is read from that channel, local resources are checked at that point in time, and if the resources are not available to process the item, I would like to requeue the message so that another machine picks it up. Originally, I wrongly threw an exception thinking that a requeue would occur, but as was answered in my other question, this is not going to work since the priority channel executes in another thread than the listener container.
I thought about placing a filter right after the inbound channel adapter and throwing an exception if resources are not available at that time, but at that instant an accurate assessment of resources cannot be made, because resource availability then does not match what will be available when the message is selected based upon priority.
My next thought is to place a filter after the priority channel and before the service activator, and direct messages that cannot be handled by current resources to the discard-channel, which is defined as an outbound channel adapter that sends the message back to the original queue. Are there pitfalls to this approach?
EDIT 20150917:
Per Gary's advice, I have moved to RabbitMQ 3.5.x in order to take advantage of the built-in priority queues. I now have a problem tracking the number of attempts, as it appears my original message is placed back on the queue rather than my modified message. I have updated the code blocks to reflect the current setup.
EDIT 20150922:
I am updating this post to reflect the final proof-of-concept code base that I created. I am not a Spring Integration expert by any means, so please keep that in mind, as well as the fact that this test code is not production ready. My original intent was to have messages resubmitted and retried a certain number of times if a particular exception was thrown. That can be accomplished using the StatefulRetryOperationsInterceptor. But to experiment further, I wanted to be able to set/increment a header on failure and then have something in my flow that could react to that value. That was accomplished by using an extension of the RepublishMessageRecoverer that overrides additionalHeaders(). This object is then used to configure the RetryOperationsInterceptor.
One other minor thing: I wanted to reduce some of the default Spring Integration logging when my signal exception was thrown, so I needed to make sure I named my error channel "errorChannel" to replace the Spring Integration default. I also needed to create a custom ErrorHandler to assign to the ListenerContainer, replacing the default one, which logs everything at ERROR level.
Here is my current setup:
Spring Integration 4.2.0.RELEASE
Spring AMQP 1.5.0.RELEASE
RabbitMQ 3.5.x
Configuration
@Autowired
public void setSpringIntegrationConfigHelper(SpringIntegrationHelper springIntegrationConfigHelper) {
    this.springIntegrationConfigHelper = springIntegrationConfigHelper;
}

@Bean
public String priorityPOCQueueName() {
    return "poc.priority";
}

@Bean
public Queue priorityPOCQueue(RabbitAdmin rabbitAdmin) {
    boolean durable = true;
    boolean exclusive = false;
    boolean autoDelete = false;
    //Adding the x-max-priority argument is what signals RabbitMQ that this is a priority queue. Must be Rabbit 3.5.x
    Map<String, Object> arguments = new HashMap<String, Object>();
    arguments.put("x-max-priority", 5);
    Queue queue = new Queue(priorityPOCQueueName(),
            durable,
            exclusive,
            autoDelete,
            arguments);
    rabbitAdmin.declareQueue(queue);
    return queue;
}

@Bean
public Binding priorityPOCQueueBinding(RabbitAdmin rabbitAdmin) {
    Binding binding = new Binding(priorityPOCQueueName(),
            DestinationType.QUEUE,
            "amq.direct",
            priorityPOCQueue(rabbitAdmin).getName(),
            null);
    rabbitAdmin.declareBinding(binding);
    return binding;
}

@Bean
public AmqpTemplate priorityPOCMessageTemplate(ConnectionFactory amqpConnectionFactory,
        @Qualifier("priorityPOCQueueName") String queueName,
        @Qualifier("jsonMessageConverter") MessageConverter messageConverter) {
    RabbitTemplate template = new RabbitTemplate(amqpConnectionFactory);
    template.setChannelTransacted(false);
    template.setExchange("amq.direct");
    template.setQueue(queueName);
    template.setRoutingKey(queueName);
    template.setMessageConverter(messageConverter);
    return template;
}

@Autowired
@Qualifier("priorityPOCQueue")
public void setPriorityPOCQueue(Queue priorityPOCQueue) {
    this.priorityPOCQueue = priorityPOCQueue;
}
@Bean
public MessageRecoverer miTestMessageRecoverer(final AmqpTemplate priorityPOCMessageTemplate) {
    return new MessageRecoverer() {

        @Override
        public void recover(org.springframework.amqp.core.Message msg, Throwable t) {
            StringBuilder sb = new StringBuilder();
            sb.append("Firing Test Recoverer: ").append(t.getClass().getName()).append(" Message Count: ")
                    .append(msg.getMessageProperties().getMessageCount())
                    .append(" ID: ").append(msg.getMessageProperties().getMessageId())
                    .append(" DeliveryTag: ").append(msg.getMessageProperties().getDeliveryTag())
                    .append(" Redelivered: ").append(msg.getMessageProperties().isRedelivered());
            logger.debug(sb.toString());
            PriorityMessage m = new PriorityMessage(5);
            m.setId(randomGenerator.nextLong(10L, 1000000L));
            priorityPOCMessageTemplate.convertAndSend(m, new SimulateErrorHeaderPostProcessor(Boolean.FALSE, m.getPriority()));
        }
    };
}

@Bean
public RepublishMessageRecoverer miRepublishRecoverer(final AmqpTemplate priorityPOCMessageTemplate) {
    class MiRecoverer extends RepublishMessageRecoverer {

        public MiRecoverer(AmqpTemplate errorTemplate) {
            super(errorTemplate);
            this.setErrorRoutingKeyPrefix("");
        }

        @Override
        protected Map<? extends String, ? extends Object> additionalHeaders(
                org.springframework.amqp.core.Message message, Throwable cause) {
            Map<String, Object> map = new HashMap<>();
            if (message.getMessageProperties().getHeaders().containsKey("jmattempts") == false) {
                map.put("jmattempts", 0);
            } else {
                Integer count = Integer.valueOf(message.getMessageProperties().getHeaders().get("jmattempts").toString());
                map.put("jmattempts", ++count);
            }
            return map;
        }
    }

    return new MiRecoverer(priorityPOCMessageTemplate);
}
@Bean
public StatefulRetryOperationsInterceptor inadequateResourceInterceptor(@Qualifier("priorityPOCMessageTemplate") AmqpTemplate priorityPOCMessageTemplate
        , @Qualifier("priorityMessageKeyGenerator") PriorityMessageKeyGenerator priorityMessageKeyGenerator
        , @Qualifier("miTestMessageRecoverer") MessageRecoverer messageRecoverer
        , @Qualifier("miRepublishRecoverer") RepublishMessageRecoverer miRepublishRecoverer) {
    StatefulRetryInterceptorBuilder b = RetryInterceptorBuilder.stateful();
    return b.maxAttempts(2)
            .backOffOptions(2000L, 1.0D, 4000L)
            .messageKeyGenerator(priorityMessageKeyGenerator)
            .recoverer(miRepublishRecoverer)
            .build();
}

@Bean(name = "exec.priorityPOC")
TaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor e = new ThreadPoolTaskExecutor();
    e.setCorePoolSize(1);
    e.setQueueCapacity(1);
    return e;
}

/*
@Bean(name = "poc.priorityChannel")
public MessageChannel pocPriorityChannel() {
    PriorityChannel c = new PriorityChannel(new PriorityComparator());
    c.setComponentName("poc.priorityChannel");
    c.setBeanName("poc.priorityChannel");
    return c;
}
*/

@Bean(name = "poc.inputChannel")
public MessageChannel pocPriorityChannel() {
    DirectChannel c = new DirectChannel();
    c.setComponentName("poc.inputChannel");
    c.setBeanName("poc.inputChannel");
    return c;
}

@Bean(name = "poc.inboundChannelAdapter") //make this a unique name
public AmqpInboundChannelAdapter amqpInboundChannelAdapter(@Qualifier("exec.priorityPOC") TaskExecutor taskExecutor
        , @Qualifier("errorChannel") MessageChannel pocErrorChannel
        , @Qualifier("inadequateResourceInterceptor") StatefulRetryOperationsInterceptor inadequateResourceInterceptor) {
    org.aopalliance.aop.Advice[] adviceChain = new org.aopalliance.aop.Advice[] { inadequateResourceInterceptor };
    int concurrentConsumers = 1;
    AmqpInboundChannelAdapter a = springIntegrationConfigHelper.createInboundChannelAdapter(taskExecutor
            , pocPriorityChannel(), new Queue[] { priorityPOCQueue }, concurrentConsumers, adviceChain
            , new PocErrorHandler());
    a.setErrorChannel(pocErrorChannel);
    return a;
}
@Transformer(inputChannel = "poc.inputChannel", outputChannel = "poc.procesPoc")
public Message<PriorityMessage> incrementAttempts(Message<PriorityMessage> msg) {
    //I stopped using this in the POC.
    return msg;
}

@ServiceActivator(inputChannel = "poc.procesPoc")
public void procesPoc(@Header(SimulateErrorHeaderPostProcessor.ERROR_SIMULATE_HEADER_KEY) Boolean simulateError
        , @Headers Map<String, Object> headerMap
        , PriorityMessage priorityMessage) throws InterruptedException {
    if (isFirstMessageReceived == false) {
        //Thread.sleep(15000); //Cause a bit of a backup so we can see prioritizing in action.
        isFirstMessageReceived = true;
    }
    Integer retryAttempts = 0;
    if (headerMap.containsKey("jmattempts")) {
        retryAttempts = Integer.valueOf(headerMap.get("jmattempts").toString());
    }
    logger.debug("Received message with priority: " + priorityMessage.getPriority() + ", simulateError: " + simulateError + ", Current attempts count is "
            + retryAttempts);
    if (simulateError && retryAttempts < PriorityMessage.MAX_MESSAGE_RETRY_COUNT) {
        logger.debug(" Simulating an error and re-queue'ng. Current attempt count is " + retryAttempts);
        throw new AnalyzerNonAdequateResourceException();
    } else if (simulateError && retryAttempts > PriorityMessage.MAX_MESSAGE_RETRY_COUNT) {
        logger.debug(" Max attempt count exceeded");
    }
}
/**************************************************************************************************
 *
 * Error Channel
 *
 **************************************************************************************************/

//Note that we want to override the default Spring error channel, so the name of the bean must be errorChannel
@Bean(name = "errorChannel")
public MessageChannel pocErrorChannel() {
    DirectChannel c = new DirectChannel();
    c.setComponentName("errorChannel");
    c.setBeanName("errorChannel");
    return c;
}

@ServiceActivator(inputChannel = "errorChannel")
public void pocHandleError(Message<MessagingException> message) throws Throwable {
    MessagingException me = message.getPayload();
    logger.error("pocHandleError: error encountered: " + me.getCause().getClass().getName());
    SortedMap<String, Object> sorted = new TreeMap<>();
    sorted.putAll(me.getFailedMessage().getHeaders());
    if (me.getCause() instanceof AnalyzerNonAdequateResourceException) {
        logger.debug("Headers: " + sorted.toString());
        //Let this message get requeued
        throw me.getCause();
    }
    Message<?> failedMsg = me.getFailedMessage();
    Object o = failedMsg.getPayload();
    StringBuilder sb = new StringBuilder();
    if (o != null) {
        sb.append("AnalyzerErrorHandler: Failed Message Type: ")
                .append(o.getClass().getCanonicalName()).append(". toString: ").append(o.toString());
        logger.error(sb.toString());
    }
    //The first level sometimes brings back either MessagingHandlingException or
    //MessagingTransformationException which may contain a subcause
    Exception e = (Exception) me.getCause();
    int i = 0;
    sb.delete(0, sb.length());
    sb.append("AnalyzerErrorHandler nested messages: ");
    Throwable nested = e;
    while (nested != null && i++ < 10) {
        sb.append(System.lineSeparator()).append("    ")
                .append(nested.getClass().getCanonicalName()).append(": ")
                .append(nested.getMessage());
        nested = nested.getCause(); //walk the cause chain instead of logging the same exception ten times
    }
    if (i > 0) {
        logger.error(sb.toString());
    }
    //Don't want a message to recycle
    throw new AmqpRejectAndDontRequeueException(e);
}

/**
 * This gets set on the ListenerContainer. The default handler on the listener
 * container logs everything with a full stack trace. We don't want to do that
 * for our known resource exception.
 */
public static class PocErrorHandler implements ErrorHandler {

    @Override
    public void handleError(Throwable t) {
        Throwable cause = t.getCause();
        if (cause != null) {
            while (cause.getCause() != null) {
                cause = cause.getCause();
            }
        } else {
            cause = t;
        }
        if (cause instanceof AnalyzerNonAdequateResourceException) {
            logger.info(AnalyzerNonAdequateResourceException.class.getName() + ": not enough resources to process the item.");
            return;
        }
        else {
            logger.error("POC Listener Exception", t);
        }
    }
}
SpringIntegrationHelper
protected ConnectionFactory connectionFactory;
protected MessageConverter messageConverter;
#Autowired
public void setConnectionFactory (ConnectionFactory connectionFactory) {
this.connectionFactory = connectionFactory;
}
#Autowired
public void setMessageConverter(#Qualifier("jsonMessageConverter") MessageConverter messageConverter) {
this.messageConverter = messageConverter;
}
public AmqpInboundChannelAdapter createInboundChannelAdapter(TaskExecutor taskExecutor
, MessageChannel outputChannel, Queue[] queues, int concurrentConsumers
, org.aopalliance.aop.Advice[] adviceChain,
ErrorHandler errorHandler) {
SimpleMessageListenerContainer listenerContainer =
new SimpleMessageListenerContainer(connectionFactory);
//AUTO is default, but setting it anyhow.
listenerContainer.setAcknowledgeMode(AcknowledgeMode.AUTO);
listenerContainer.setAutoStartup(true);
listenerContainer.setConcurrentConsumers(concurrentConsumers);
listenerContainer.setMessageConverter(messageConverter);
listenerContainer.setQueues(queues);
//listenerContainer.setChannelTransacted(false);
listenerContainer.setErrorHandler(errorHandler);
listenerContainer.setPrefetchCount(1);
listenerContainer.setTaskExecutor(taskExecutor);
listenerContainer.setDefaultRequeueRejected(true);
if (adviceChain != null && adviceChain.length > 0) {
listenerContainer.setAdviceChain(adviceChain);
}
AmqpInboundChannelAdapter a = new AmqpInboundChannelAdapter(listenerContainer);
a.setMessageConverter(messageConverter);
a.setAutoStartup(true);
a.setHeaderMapper(MyAmqpHeaderMapper.createPassAllHeaders());
a.setOutputChannel(outputChannel);
return a;
}
It's not clear why you want to use a PriorityChannel in this context; why not use a priority queue in RabbitMQ? That way, you can run your flow on the container thread.
Sending the message to the back of the queue yourself would work, but there is a risk of message loss.
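If you do move the prioritization to the broker, a minimal sketch (using the priorityPOCMessageTemplate bean from the question; the payload variable is hypothetical) of publishing with a per-message priority so RabbitMQ does the ordering:

// send with an explicit priority between 0 and the queue's x-max-priority (5 above)
priorityPOCMessageTemplate.convertAndSend(payload, message -> {
    message.getMessageProperties().setPriority(5);
    return message;
});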
