FTP - Using Spring Integration task-scheduler process stops after certain period - spring-integration

When the jar is started standalone on a Unix machine, the task-scheduler thread stops listening after some time, although it works fine on a Windows machine. The application also works on Linux at startup, but after running for a while it stops. Please let me know if there is any way to avoid this issue.
@Bean
@InboundChannelAdapter(value = "inputChannel", poller = @Poller(fixedDelay = "1000", maxMessagesPerPoll = "1"))
public MessageSource<?> receive() {
    FtpInboundFileSynchronizingMessageSource messageSource =
            new FtpInboundFileSynchronizingMessageSource(synchronizer());
    File temp = new File(TEMP_FOLDER);
    messageSource.setLocalDirectory(temp);
    messageSource.setAutoCreateLocalDirectory(true);
    return messageSource;
}

private AbstractInboundFileSynchronizer<FTPFile> synchronizer() {
    AbstractInboundFileSynchronizer<FTPFile> fileSynchronizer = new FtpInboundFileSynchronizer(sessionFactory());
    fileSynchronizer.setRemoteDirectory(ftpFileLocation);
    fileSynchronizer.setDeleteRemoteFiles(false);
    Pattern pattern = Pattern.compile(".*\\.xml$");
    FtpRegexPatternFileListFilter ftpRegexPatternFileListFilter = new FtpRegexPatternFileListFilter(pattern);
    fileSynchronizer.setFilter(ftpRegexPatternFileListFilter);
    return fileSynchronizer;
}

@Bean(name = "sessionFactory")
public SessionFactory<FTPFile> sessionFactory() {
    DefaultFtpSessionFactory sessionFactory = new DefaultFtpSessionFactory();
    sessionFactory.setHost(ftpHostName);
    sessionFactory.setUsername(ftpUserName);
    sessionFactory.setPassword(ftpPassWord);
    return sessionFactory;
}

@Bean(name = "inputChannel")
public PollableChannel inputChannel() {
    return new QueueChannel();
}

@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata defaultPoller() {
    PollerMetadata pollerMetadata = new PollerMetadata();
    pollerMetadata.setTrigger(new PeriodicTrigger(100));
    return pollerMetadata;
}

@ServiceActivator(inputChannel = "inputChannel")
public void transferredFilesFromFTP(File payload) {
    callWork(payload);
}

There is no reason to have one poller immediately after another; you don't need that QueueChannel.
It would be really interesting to know what that magic callWork(payload); code does. Doesn't it block for a long time? Even though it looks like a void method (with nothing to return and wait for), it may contain thread-starving code that steals all the threads from the default TaskScheduler (10 by default).
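If callWork() can block for a long time, one mitigation (a minimal sketch, not from the original answer; the bean override and the pool size of 20 are assumptions) is to enlarge that default TaskScheduler so one stuck poll cannot starve every other poller:
// Sketch: replace the default 10-thread Spring Integration TaskScheduler.
// Requires org.springframework.integration.context.IntegrationContextUtils
// and org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler.
@Bean(name = IntegrationContextUtils.TASK_SCHEDULER_BEAN_NAME)
public ThreadPoolTaskScheduler taskScheduler() {
    ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
    scheduler.setPoolSize(20); // arbitrary; size it for the number of concurrent pollers
    return scheduler;
}
A cleaner fix is to hand the blocking work off to a dedicated executor so the scheduler thread returns immediately.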
This looks fully related to your other question: Spring Integration ftp Thread process.

Related

spring batch api request on wait

I have written a simple Spring Batch project where:
an API to execute the job returns the job ID on job launch;
reading/processing/writing from/to the DB runs as multithreaded parallel processing
(the job is launched asynchronously so that the job ID is available in advance and the status can be polled from another API request);
an API polls the status of the job for the job ID passed.
The polling API works smoothly if the job step's throttle limit is 7 or less.
However, if the throttle limit is more than 7, job execution continues but the polling API waits until the reader/processor releases.
I have also tried a simple API which simply returns a String instead of polling, but that waits too.
A sample of the code is shown below:
@Configuration
@EnableBatchProcessing
public class SpringBatchConfig {

    private int core = 200;

    @Bean
    public Job job() throws Exception {
        return jobBuilderFactory.get(SC_Constants.JOB)
                .incrementer(new RunIdIncrementer())
                .listener(new Listener(transDAO))
                .start(step3_processRecords())
                .build();
    }

    @Bean
    public ThreadPoolTaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor threadPoolTaskExecutor = new ThreadPoolTaskExecutor();
        threadPoolTaskExecutor.setCorePoolSize(this.core);
        threadPoolTaskExecutor.setMaxPoolSize(this.core);
        threadPoolTaskExecutor.setQueueCapacity(this.core);
        threadPoolTaskExecutor.setThreadNamePrefix("threadExecutor");
        return threadPoolTaskExecutor;
    }

    @Bean
    @StepScope
    public JdbcPagingItemReader<Transaction> itemReader(...) {
        JdbcPagingItemReader<Transaction> itemReader = new JdbcPagingItemReader<Transaction>();
        ...
        return itemReader;
    }

    @Bean
    @StepScope
    public ItemProcessor<Transaction,Transaction> processor() {
        return new Processor();
    }

    @Bean
    @StepScope
    public ItemWriter<Transaction> writer(...) {
        return new Writer();
    }

    @Bean
    public Step step3_processRecords() throws Exception {
        return stepBuilderFactory.get(SC_Constants.STEP_3_PROCESS_RECORDS)
                .<Transaction,Transaction>chunk(this.chunk)
                .reader(itemReader(null,null,null))
                .processor(processor())
                .writer(writer(null,null,null))
                .taskExecutor(taskExecutor())
                .throttleLimit(20)
                .build();
    }
}
The class that extends DefaultBatchConfigurer has the following:
@Override
public JobLauncher getJobLauncher() {
    SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
    jobLauncher.setJobRepository(jobRepository);
    SimpleAsyncTaskExecutor exec = new SimpleAsyncTaskExecutor();
    exec.setConcurrencyLimit(concurrency_limit);
    jobLauncher.setTaskExecutor(exec);
    return jobLauncher;
}
Edit: polling API code snippet:
@POST
@Consumes(MediaType.APPLICATION_JSON)
@Path("/getJobStatus")
public Response getJobStatus(@RequestBody String body) {
    JSONObject jsonObject = new JSONObject(body);
    Long jobId = jsonObject.getLong("jobId");
    jobExecution = jobExplorer.getJobExecution(jobId);
    batchStatus = jobExecution.getStatus();
    write_count = jobExecution.getStepExecutions().iterator().next().getWriteCount();
    responseDto.setJob_id(jobId);
    responseDto.setWrite_count(write_count);
    responseDto.setStatus(batchStatus.name());
    return Response.ok(responseDto).build();
}
Second edit: sharing a snippet of the job repository setup; it uses a Postgres JDBC job repository.
@Component
public class SpringBatchConfigurer extends DefaultBatchConfigurer {
    ...
    @PostConstruct
    public void initialize() {
        try {
            BasicDataSource dataSource = new BasicDataSource();
            dataSource.setDriverClassName(driverClassName);
            dataSource.setUsername(username);
            dataSource.setPassword(password);
            dataSource.setUrl(dsUrl + "?currentSchema=public");
            dataSource.setInitialSize(3);
            dataSource.setMinIdle(1);
            dataSource.setMaxIdle(3);
            dataSource.addConnectionProperty("maxConnLifetimeMillis", "30000");
            this.transactionManager = new DataSourceTransactionManager(dataSource);
            JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
            factory.setDataSource(dataSource);
            factory.setTransactionManager(transactionManager);
            factory.afterPropertiesSet();
            this.jobRepository = factory.getObject();
            SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
            jobLauncher.setJobRepository(jobRepository);
            jobLauncher.afterPropertiesSet();
            this.jobLauncher = jobLauncher;
        } catch (Exception e) {
            throw new BatchConfigurationException(e);
        }
    }
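One detail worth checking here (an observation of mine, not from the thread): Commons DBCP's BasicDataSource caps the pool at 8 connections by default (maxTotal in DBCP2, maxActive in DBCP 1.x), which lines up with a throttle limit of 7 being the largest value that works. With more worker threads than pooled connections, the chunk transactions can hold every connection while the polling API's jobExplorer call blocks waiting for one. A sketch, assuming DBCP2 (these two lines would join the other setters above):
// Assumption: org.apache.commons.dbcp2.BasicDataSource (default maxTotal = 8).
// Size the pool above the step's throttle limit so batch worker threads
// cannot hold every connection while the polling API waits for one.
dataSource.setMaxTotal(30); // comfortably above throttleLimit(20)
dataSource.setMaxIdle(30);  // keep idle connections instead of churning back to 3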
Third edit: I tried creating the executor as a local variable inside the step. Polling works, but now job execution is not happening: no threads are created and no processing occurs.
@Bean
public Step step3_processRecords() throws Exception {
    ThreadPoolTaskExecutor threadPoolTaskExecutor = new ThreadPoolTaskExecutor();
    threadPoolTaskExecutor.setCorePoolSize(this.core_size);
    threadPoolTaskExecutor.setMaxPoolSize(this.max_pool_size);
    threadPoolTaskExecutor.setQueueCapacity(this.queue_capacity);
    threadPoolTaskExecutor.setThreadNamePrefix("threadExecutor");
    return stepBuilderFactory.get("step3")
            .<Transaction,Transaction>chunk(this.chunk)
            .reader(itemReader(null,null,null))
            .processor(processor())
            .writer(writer(null,null,null))
            .taskExecutor(threadPoolTaskExecutor)
            .throttleLimit(20)
            .build();
}
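A likely reason the locally created executor never runs anything: a ThreadPoolTaskExecutor constructed with new is not a Spring bean, so the container never calls afterPropertiesSet() on it and the backing ThreadPoolExecutor is never created. Initializing it explicitly should fix that; a sketch of the missing call, placed right after the setters above:
threadPoolTaskExecutor.setThreadNamePrefix("threadExecutor");
threadPoolTaskExecutor.initialize(); // builds the underlying ThreadPoolExecutor; Spring only does this automatically for beans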

spring batch getting stuck in parallel processing where works fine in serial processing

I am quite new to Spring Batch. I ran Spring Batch with a single thread, and now I need to add multithreading in a step. With the configuration below, parallel processing hangs after some time, with no trace on the console after it processes some records. Earlier, for the single thread, I used JdbcCursorItemReader and then switched to JdbcPagingItemReader for a thread-safe reader.
The reader reads entries from a Postgres DB; then the processor (which calls another REST web service and returns the response to the writer) and the writer (which creates a new file and updates status data in the DB) can execute in parallel.
@Bean
public Job job(JobBuilderFactory jobBuilderFactory,
        StepBuilderFactory stepBuilderFactory,
        ItemReader<OrderRequest> itemReader,
        ItemProcessor<OrderRequest, OrderResponse> dataProcessor,
        ItemWriter<OrderResponse> fileWriter, JobExecutionListener jobListener,
        ItemReadListener<OrderRequest> stepItemReadListener,
        SkipListener<OrderRequest, OrderResponse> stepSkipListener, TaskExecutor taskExecutor) {
    Step step1 = stepBuilderFactory.get("Process-Data")
            .<OrderRequest, OrderResponse>chunk(10)
            .listener(stepItemReadListener)
            .reader(itemReader)
            .processor(dataProcessor)
            .writer(fileWriter)
            .faultTolerant()
            .processorNonTransactional()
            .skipLimit(5)
            .skip(CustomException.class)
            .listener(stepSkipListener)
            .taskExecutor(taskExecutor)
            .throttleLimit(5)
            .build();
    return jobBuilderFactory.get("Batch-Job")
            .incrementer(new RunIdIncrementer())
            .listener(jobListener)
            .start(step1)
            .build();
}

@StepScope
@Bean
public JdbcPagingItemReader<OrderRequest> jdbcPagingItemReader(@Qualifier("postgresDataSource") DataSource dataSource,
        @Value("#{jobParameters[customerId]}") String customerId, OrderRequestRowMapper rowMapper) {
    // reading database records using JDBC in a paging fashion
    JdbcPagingItemReader<OrderRequest> reader = new JdbcPagingItemReader<>();
    reader.setDataSource(dataSource);
    reader.setFetchSize(1000);
    reader.setRowMapper(rowMapper);
    // Sort Keys
    Map<String, Order> sortKeys = new HashMap<>();
    sortKeys.put("OrderRequestID", Order.ASCENDING);
    // Postgres implementation of a PagingQueryProvider using database specific features.
    PostgresPagingQueryProvider queryProvider = new PostgresPagingQueryProvider();
    queryProvider.setSelectClause("*");
    queryProvider.setFromClause("FROM OrderRequest");
    queryProvider.setWhereClause("CUSTOMER = '" + customerId + "'");
    queryProvider.setSortKeys(sortKeys);
    reader.setQueryProvider(queryProvider);
    return reader;
}

@StepScope
@Bean
public SynchronizedItemStreamReader<OrderRequest> itemReader(JdbcPagingItemReader<OrderRequest> jdbcPagingItemReader) {
    return new SynchronizedItemStreamReaderBuilder<OrderRequest>().delegate(jdbcPagingItemReader).build();
}

@Bean
public TaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setCorePoolSize(5);
    taskExecutor.setMaxPoolSize(5);
    taskExecutor.setQueueCapacity(0);
    return taskExecutor;
}

@StepScope
@Bean
ItemProcessor<OrderRequest, OrderResponse> dataProcessor() {
    return new BatchDataFileProcessor();
}

@StepScope
@Bean
ItemWriter<OrderResponse> fileWriter() {
    return new BatchOrderFileWriter();
}

@StepScope
@Bean
public ItemReadListener<OrderRequest> stepItemReadListener() {
    return new StepItemReadListener();
}

@Bean
public JobExecutionListener jobListener() {
    return new JobListener();
}

@StepScope
@Bean
public SkipListener<OrderRequest, OrderResponse> stepSkipListener() {
    return new StepSkipListener();
}
What is the problem with the multithreading configuration here?
The batch works fine processing a single record at a time when using JdbcCursorItemReader and no TaskExecutor bean:
@StepScope
@Bean
public JdbcCursorItemReader<OrderRequest> jdbcCursorItemReader(@Qualifier("postgresDataSource") DataSource dataSource,
        @Value("#{jobParameters[customerId]}") String customerId, OrderRequestRowMapper rowMapper) {
    return new JdbcCursorItemReaderBuilder<OrderRequest>()
            .name("jdbcCursorItemReader")
            .dataSource(dataSource)
            .queryArguments(customerId)
            .sql(CommonConstant.FETCH_QUERY)
            .rowMapper(rowMapper)
            .saveState(true)
            .build();
}
After changing the TaskExecutor as follows, it is working now:
@Bean
public TaskExecutor taskExecutor() {
    SimpleAsyncTaskExecutor taskExecutor = new SimpleAsyncTaskExecutor();
    taskExecutor.setConcurrencyLimit(concurrencyLimit);
    return taskExecutor;
}
I didn't understand what the problem was with the earlier configuration.
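One plausible explanation (an assumption on my part, not confirmed in the thread): with corePoolSize = maxPoolSize = 5 and queueCapacity = 0, the ThreadPoolTaskExecutor has no slack at all, so any submission made while all five workers are busy is rejected (the default RejectedExecutionHandler is AbortPolicy). SimpleAsyncTaskExecutor never rejects because it spawns a new thread per task. A sketch that keeps the pooled executor but removes the rejection:
// Hedged sketch: either give the pool a queue, or run overflow tasks on the
// submitting thread instead of throwing TaskRejectedException.
// Requires java.util.concurrent.ThreadPoolExecutor for CallerRunsPolicy.
@Bean
public TaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setCorePoolSize(5);
    taskExecutor.setMaxPoolSize(5);
    taskExecutor.setQueueCapacity(10); // was 0: no buffering at all
    taskExecutor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
    return taskExecutor;
}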

How to get a file daily via SFTP using Spring Integration with Java config?

I need to get a file daily via SFTP. I would like to use Spring Integration with Java config. The file is generally available at a specific time each day. The application should try to get the file near that time each day. If the file is not available, it should continue to retry for x attempts. After x attempts, it should send an email to let the admin know that the file is still not available on the SFTP site.
One option is to use SftpInboundFileSynchronizingMessageSource. In the MessageHandler, I can kick off a job to process the file. However, I really don't need synchronization with the remote file system; after all, it is a scheduled delivery of the file. Plus, I need to delay at most 15 minutes before the next retry, and polling every 15 minutes seems a bit of overkill for a daily file. I suppose I could use this approach, but I would need some mechanism to send an email after a certain time has elapsed with no file received.
The other option seems to be using get of the SFTP Outbound Gateway. But the only examples I can find seem to be XML config.
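For reference, the gateway's get command can be declared in Java config as well; a minimal sketch (the channel names, local directory, and the use of the message payload as the remote path are illustrative assumptions):
// Sketch: SFTP outbound gateway "get" in Java config. A message arriving on
// sftpGetChannel carries the remote file path as its payload; the fetched
// File is sent on to sftpReplyChannel.
@Bean
@ServiceActivator(inputChannel = "sftpGetChannel")
public MessageHandler getGateway(SessionFactory<LsEntry> sftpSessionFactory) {
    SftpOutboundGateway gateway = new SftpOutboundGateway(sftpSessionFactory, "get", "payload");
    gateway.setLocalDirectory(new File("sftp-inbound"));
    gateway.setOutputChannelName("sftpReplyChannel");
    return gateway;
}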
Update
Adding code after using the help provided in Artem Bilan's answer below:
Configuration class:
@Bean
@InboundChannelAdapter(autoStartup = "true", channel = "sftpChannel", poller = @Poller("pollerMetadata"))
public SftpInboundFileSynchronizingMessageSource sftpMessageSource(ApplicationProperties applicationProperties, PropertiesPersistingMetadataStore store) {
    SftpInboundFileSynchronizingMessageSource source =
            new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer(applicationProperties));
    source.setLocalDirectory(new File("ftp-inbound"));
    source.setAutoCreateLocalDirectory(true);
    FileSystemPersistentAcceptOnceFileListFilter local = new FileSystemPersistentAcceptOnceFileListFilter(store, "test");
    source.setLocalFilter(local);
    source.setCountsEnabled(true);
    return source;
}

@Bean
public PollerMetadata pollerMetadata() {
    PollerMetadata pollerMetadata = new PollerMetadata();
    List<Advice> adviceChain = new ArrayList<Advice>();
    adviceChain.add(retryCompoundTriggerAdvice());
    pollerMetadata.setAdviceChain(adviceChain);
    pollerMetadata.setTrigger(compoundTrigger());
    return pollerMetadata;
}

@Bean
public RetryCompoundTriggerAdvice retryCompoundTriggerAdvice() {
    return new RetryCompoundTriggerAdvice(compoundTrigger(), secondaryTrigger());
}

@Bean
public CompoundTrigger compoundTrigger() {
    CompoundTrigger compoundTrigger = new CompoundTrigger(primaryTrigger());
    return compoundTrigger;
}

@Bean
public Trigger primaryTrigger() {
    return new CronTrigger("*/60 * * * * *");
}

@Bean
public Trigger secondaryTrigger() {
    return new PeriodicTrigger(10000);
}

@Bean
@ServiceActivator(inputChannel = "sftpChannel")
public MessageHandler handler(PropertiesPersistingMetadataStore store) {
    return new MessageHandler() {

        @Override
        public void handleMessage(Message<?> message) throws MessagingException {
            System.out.println(message.getPayload());
            store.flush();
        }

    };
}
RetryCompoundTriggerAdvice class:
public class RetryCompoundTriggerAdvice extends AbstractMessageSourceAdvice {

    private final CompoundTrigger compoundTrigger;

    private final Trigger override;

    private int count = 0;

    public RetryCompoundTriggerAdvice(CompoundTrigger compoundTrigger, Trigger overrideTrigger) {
        Assert.notNull(compoundTrigger, "'compoundTrigger' cannot be null");
        this.compoundTrigger = compoundTrigger;
        this.override = overrideTrigger;
    }

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        return true;
    }

    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        if (result == null && count <= 5) {
            count++;
            this.compoundTrigger.setOverride(this.override);
        }
        else {
            this.compoundTrigger.setOverride(null);
            if (count > 5) {
                // send email
            }
            count = 0;
        }
        return result;
    }

}
Since Spring Integration 4.3 there is a CompoundTrigger:
* A {@link Trigger} that delegates the {@link #nextExecutionTime(TriggerContext)}
* to one of two Triggers. If the {@link #setOverride(Trigger) override} trigger is
* {@code null}, the primary trigger is invoked; otherwise the override trigger is
* invoked.
With the combination of CompoundTriggerAdvice:
* An {@link AbstractMessageSourceAdvice} that uses a {@link CompoundTrigger} to adjust
* the poller - when a message is present, the compound trigger's primary trigger is
* used to determine the next poll. When no message is present, the override trigger is
* used.
it can be used to achieve your task:
The primaryTrigger can be a CronTrigger to run the task only once a day.
The override can be a PeriodicTrigger with the desired short period for retries.
For the retry logic, you can use one more Advice for the poller, or just extend that CompoundTriggerAdvice to add counting logic and eventually send an email.
Since there is no file, there is no message to kick off the flow, so we have no choice but to dance around the poller infrastructure.

spring batch integration file poller

I am trying to build a Spring Batch application that starts a job only after a file comes into a directory. For that I need a file poller and something like the snippet found in the Spring reference manual:
public class FileMessageToJobRequest {

    private Job job;

    private String fileParameterName;

    public void setFileParameterName(String fileParameterName) {
        this.fileParameterName = fileParameterName;
    }

    public void setJob(Job job) {
        this.job = job;
    }

    @Transformer
    public JobLaunchRequest toRequest(Message<File> message) {
        JobParametersBuilder jobParametersBuilder = new JobParametersBuilder();
        jobParametersBuilder.addString(fileParameterName, message.getPayload().getAbsolutePath());
        return new JobLaunchRequest(job, jobParametersBuilder.toJobParameters());
    }

}
I would like to manage everything with configuration classes, but I can't really figure out how to make it work.
Your question isn't clear; it would be better to start from something that works, or your own PoC or attempt at the task.
But anyway, it looks like you would like to avoid XML configuration and stay with Java and annotation configuration only.
For this purpose I suggest you take a look into the Reference Manual and find this sample in the File Support chapter, too:
@Bean
@InboundChannelAdapter(value = "fileInputChannel", poller = @Poller(fixedDelay = "1000"))
public MessageSource<File> fileReadingMessageSource() {
    FileReadingMessageSource source = new FileReadingMessageSource();
    source.setDirectory(new File(INBOUND_PATH));
    source.setFilter(new SimplePatternFileListFilter("*.txt"));
    return source;
}
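From there, the polled File messages can be converted into launch requests with your FileMessageToJobRequest and handed to a JobLaunchingGateway. A sketch, assuming spring-batch-integration and the Spring Integration Java DSL are on the classpath (the channel wiring and parameter name are illustrative; if you use this flow, drop the @InboundChannelAdapter annotation from the message-source bean so the file is not consumed twice):
// Sketch: file -> JobLaunchRequest -> JobLaunchingGateway.
@Bean
public FileMessageToJobRequest fileMessageToJobRequest(Job job) {
    FileMessageToJobRequest transformer = new FileMessageToJobRequest();
    transformer.setJob(job);
    transformer.setFileParameterName("input.file.name"); // assumed parameter name
    return transformer;
}

@Bean
public JobLaunchingGateway jobLaunchingGateway(JobLauncher jobLauncher) {
    return new JobLaunchingGateway(jobLauncher);
}

@Bean
public IntegrationFlow filePollingFlow(FileMessageToJobRequest fileMessageToJobRequest,
        JobLaunchingGateway jobLaunchingGateway) {
    return IntegrationFlows.from(fileReadingMessageSource(),
                    c -> c.poller(Pollers.fixedDelay(1000)))
            .transform(fileMessageToJobRequest)
            .handle(jobLaunchingGateway)
            .get();
}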

How to re-queue message when spring integration configuration includes a priority channel

I have a Spring Integration configuration that uses a priority channel. When an item is read from that channel, local resources are checked at that point in time, and if the resources are not available to process the item, I would like to requeue the message so that another machine picks it up. Originally, I wrongly threw an exception thinking that a requeue would occur, but as was answered in my other question, this does not work because the priority channel executes on a different thread from the listener container.
I thought about placing a filter right after the inbound channel adapter and throwing an exception if resources are not available at that time, but at that instant an accurate assessment of resources cannot be made: resource availability at that time does not match what will be available when the message is eventually selected based upon priority.
My next thought is to place a filter after the priority channel and before the service activator, and to direct messages that cannot be handled by current resources to the discard channel, which is defined as an outbound channel adapter that sends the message back to the original queue. Are there pitfalls to this approach?
EDIT 20150917:
Per Gary's advice, I have moved to RabbitMQ 3.5.x in order to take advantage of the built-in priority queues. I now have a problem tracking the number of attempts, as it appears my original message is placed back on the queue rather than my modified message. I have updated the code blocks to reflect the current setup.
EDIT 20150922:
I am updating this post to reflect the final proof-of-concept code base that I created. I am not a Spring Integration expert by any means, so please keep that in mind, as well as the fact that this test code is not production ready. My original intent was to have messages resubmitted and retried a certain number of times if a particular exception was thrown. That can be accomplished using the StatefulRetryOperationsInterceptor. But to experiment further, I wanted to be able to set/increment a header on failure and then have something in my flow that could react to that value. That was accomplished by using an extension of RepublishMessageRecoverer that overrides additionalHeaders(). This object is then used to configure the RetryOperationsInterceptor.
One other minor thing: I wanted to reduce some of the default Spring Integration logging when my signal exception was thrown, so I needed to name my error channel "errorChannel" in order to replace the Spring Integration default. I also needed to create a custom ErrorHandler to assign to the ListenerContainer, whose default logs everything at ERROR level.
Here is my current setup:
Spring Integration 4.2.0.RELEASE
Spring AMQP 1.5.0.RELEASE
RabbitMQ 3.5.x
Configuration
@Autowired
public void setSpringIntegrationConfigHelper(SpringIntegrationHelper springIntegrationConfigHelper) {
    this.springIntegrationConfigHelper = springIntegrationConfigHelper;
}

@Bean
public String priorityPOCQueueName() {
    return "poc.priority";
}

@Bean
public Queue priorityPOCQueue(RabbitAdmin rabbitAdmin) {
    boolean durable = true;
    boolean exclusive = false;
    boolean autoDelete = false;
    // Adding the x-max-priority argument is what signals RabbitMQ that this is
    // a priority queue. Must be Rabbit 3.5.x.
    Map<String, Object> arguments = new HashMap<String, Object>();
    arguments.put("x-max-priority", 5);
    Queue queue = new Queue(priorityPOCQueueName(),
            durable,
            exclusive,
            autoDelete,
            arguments);
    rabbitAdmin.declareQueue(queue);
    return queue;
}

@Bean
public Binding priorityPOCQueueBinding(RabbitAdmin rabbitAdmin) {
    Binding binding = new Binding(priorityPOCQueueName(),
            DestinationType.QUEUE,
            "amq.direct",
            priorityPOCQueue(rabbitAdmin).getName(),
            null);
    rabbitAdmin.declareBinding(binding);
    return binding;
}

@Bean
public AmqpTemplate priorityPOCMessageTemplate(ConnectionFactory amqpConnectionFactory,
        @Qualifier("priorityPOCQueueName") String queueName,
        @Qualifier("jsonMessageConverter") MessageConverter messageConverter) {
    RabbitTemplate template = new RabbitTemplate(amqpConnectionFactory);
    template.setChannelTransacted(false);
    template.setExchange("amq.direct");
    template.setQueue(queueName);
    template.setRoutingKey(queueName);
    template.setMessageConverter(messageConverter);
    return template;
}

@Autowired
@Qualifier("priorityPOCQueue")
public void setPriorityPOCQueue(Queue priorityPOCQueue) {
    this.priorityPOCQueue = priorityPOCQueue;
}

@Bean
public MessageRecoverer miTestMessageRecoverer(final AmqpTemplate priorityPOCMessageTemplate) {
    return new MessageRecoverer() {

        @Override
        public void recover(org.springframework.amqp.core.Message msg, Throwable t) {
            StringBuilder sb = new StringBuilder();
            sb.append("Firing Test Recoverer: ").append(t.getClass().getName()).append(" Message Count: ")
                .append(msg.getMessageProperties().getMessageCount())
                .append(" ID: ").append(msg.getMessageProperties().getMessageId())
                .append(" DeliveryTag: ").append(msg.getMessageProperties().getDeliveryTag())
                .append(" Redelivered: ").append(msg.getMessageProperties().isRedelivered());
            logger.debug(sb.toString());
            PriorityMessage m = new PriorityMessage(5);
            m.setId(randomGenerator.nextLong(10L, 1000000L));
            priorityPOCMessageTemplate.convertAndSend(m, new SimulateErrorHeaderPostProcessor(Boolean.FALSE, m.getPriority()));
        }

    };
}
@Bean
public RepublishMessageRecoverer miRepublishRecoverer(final AmqpTemplate priorityPOCMessageTemplate) {
    class MiRecoverer extends RepublishMessageRecoverer {

        public MiRecoverer(AmqpTemplate errorTemplate) {
            super(errorTemplate);
            this.setErrorRoutingKeyPrefix("");
        }

        @Override
        protected Map<? extends String, ? extends Object> additionalHeaders(
                org.springframework.amqp.core.Message message, Throwable cause) {
            Map<String, Object> map = new HashMap<>();
            if (message.getMessageProperties().getHeaders().containsKey("jmattempts") == false) {
                map.put("jmattempts", 0);
            } else {
                Integer count = Integer.valueOf(message.getMessageProperties().getHeaders().get("jmattempts").toString());
                map.put("jmattempts", ++count);
            }
            return map;
        }

    }

    return new MiRecoverer(priorityPOCMessageTemplate);
}
@Bean
public StatefulRetryOperationsInterceptor inadequateResourceInterceptor(
        @Qualifier("priorityPOCMessageTemplate") AmqpTemplate priorityPOCMessageTemplate,
        @Qualifier("priorityMessageKeyGenerator") PriorityMessageKeyGenerator priorityMessageKeyGenerator,
        @Qualifier("miTestMessageRecoverer") MessageRecoverer messageRecoverer,
        @Qualifier("miRepublishRecoverer") RepublishMessageRecoverer miRepublishRecoverer) {
    StatefulRetryInterceptorBuilder b = RetryInterceptorBuilder.stateful();
    return b.maxAttempts(2)
            .backOffOptions(2000L, 1.0D, 4000L)
            .messageKeyGenerator(priorityMessageKeyGenerator)
            .recoverer(miRepublishRecoverer)
            .build();
}

@Bean(name = "exec.priorityPOC")
TaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor e = new ThreadPoolTaskExecutor();
    e.setCorePoolSize(1);
    e.setQueueCapacity(1);
    return e;
}
/*
@Bean(name = "poc.priorityChannel")
public MessageChannel pocPriorityChannel() {
    PriorityChannel c = new PriorityChannel(new PriorityComparator());
    c.setComponentName("poc.priorityChannel");
    c.setBeanName("poc.priorityChannel");
    return c;
}
*/

@Bean(name = "poc.inputChannel")
public MessageChannel pocPriorityChannel() {
    DirectChannel c = new DirectChannel();
    c.setComponentName("poc.inputChannel");
    c.setBeanName("poc.inputChannel");
    return c;
}

@Bean(name = "poc.inboundChannelAdapter") // make this a unique name
public AmqpInboundChannelAdapter amqpInboundChannelAdapter(@Qualifier("exec.priorityPOC") TaskExecutor taskExecutor,
        @Qualifier("errorChannel") MessageChannel pocErrorChannel,
        @Qualifier("inadequateResourceInterceptor") StatefulRetryOperationsInterceptor inadequateResourceInterceptor) {
    org.aopalliance.aop.Advice[] adviceChain = new org.aopalliance.aop.Advice[] { inadequateResourceInterceptor };
    int concurrentConsumers = 1;
    AmqpInboundChannelAdapter a = springIntegrationConfigHelper.createInboundChannelAdapter(taskExecutor,
            pocPriorityChannel(), new Queue[] { priorityPOCQueue }, concurrentConsumers, adviceChain,
            new PocErrorHandler());
    a.setErrorChannel(pocErrorChannel);
    return a;
}

@Transformer(inputChannel = "poc.inputChannel", outputChannel = "poc.procesPoc")
public Message<PriorityMessage> incrementAttempts(Message<PriorityMessage> msg) {
    // I stopped using this in the POC.
    return msg;
}
@ServiceActivator(inputChannel = "poc.procesPoc")
public void procesPoc(@Header(SimulateErrorHeaderPostProcessor.ERROR_SIMULATE_HEADER_KEY) Boolean simulateError,
        @Headers Map<String, Object> headerMap,
        PriorityMessage priorityMessage) throws InterruptedException {
    if (isFirstMessageReceived == false) {
        //Thread.sleep(15000); // Cause a bit of a backup so we can see prioritizing in action.
        isFirstMessageReceived = true;
    }
    Integer retryAttempts = 0;
    if (headerMap.containsKey("jmattempts")) {
        retryAttempts = Integer.valueOf(headerMap.get("jmattempts").toString());
    }
    logger.debug("Received message with priority: " + priorityMessage.getPriority() + ", simulateError: " + simulateError
            + ", Current attempts count is " + retryAttempts);
    if (simulateError && retryAttempts < PriorityMessage.MAX_MESSAGE_RETRY_COUNT) {
        logger.debug(" Simulating an error and re-queue'ng. Current attempt count is " + retryAttempts);
        throw new AnalyzerNonAdequateResourceException();
    } else if (simulateError && retryAttempts > PriorityMessage.MAX_MESSAGE_RETRY_COUNT) {
        logger.debug(" Max attempt count exceeded");
    }
}
/**************************************************************************************************
 *
 * Error Channel
 *
 **************************************************************************************************/

// Note that we want to override the default Spring error channel, so the name of the bean must be errorChannel
@Bean(name = "errorChannel")
public MessageChannel pocErrorChannel() {
    DirectChannel c = new DirectChannel();
    c.setComponentName("errorChannel");
    c.setBeanName("errorChannel");
    return c;
}

@ServiceActivator(inputChannel = "errorChannel")
public void pocHandleError(Message<MessagingException> message) throws Throwable {
    MessagingException me = message.getPayload();
    logger.error("pocHandleError: error encountered: " + me.getCause().getClass().getName());
    SortedMap<String, Object> sorted = new TreeMap<>();
    sorted.putAll(me.getFailedMessage().getHeaders());
    if (me.getCause() instanceof AnalyzerNonAdequateResourceException) {
        logger.debug("Headers: " + sorted.toString());
        // Let this message get requeued
        throw me.getCause();
    }
    Message<?> failedMsg = me.getFailedMessage();
    Object o = failedMsg.getPayload();
    StringBuilder sb = new StringBuilder();
    if (o != null) {
        sb.append("AnalyzerErrorHandler: Failed Message Type: ")
            .append(o.getClass().getCanonicalName()).append(". toString: ").append(o.toString());
        logger.error(sb.toString());
    }
    // The first level sometimes brings back either MessagingHandlingException or
    // MessagingTransformationException which may contain a subcause
    Exception e = (Exception) me.getCause();
    int i = 0;
    sb.delete(0, sb.length());
    sb.append("AnalyzerErrorHandler nested messages: ");
    Throwable nested = e;
    while (nested != null && i++ < 10) {
        sb.append(System.lineSeparator()).append("    ")
            .append(nested.getClass().getCanonicalName()).append(": ")
            .append(nested.getMessage());
        nested = nested.getCause(); // walk the cause chain (the original never advanced it)
    }
    if (i > 0) {
        logger.error(sb.toString());
    }
    // Don't want a message to recycle
    throw new AmqpRejectAndDontRequeueException(e);
}

/**
 * This gets set on the ListenerContainer. The default handler on the listener
 * container logs everything with a full stack trace. We don't want to do that
 * for our known resource exception.
 */
public static class PocErrorHandler implements ErrorHandler {

    @Override
    public void handleError(Throwable t) {
        Throwable cause = t.getCause();
        if (cause != null) {
            while (cause.getCause() != null) {
                cause = cause.getCause();
            }
        } else {
            cause = t;
        }
        if (cause instanceof AnalyzerNonAdequateResourceException) {
            logger.info(AnalyzerNonAdequateResourceException.class.getName() + ": not enough resources to process the item.");
            return;
        }
        else {
            logger.error("POC Listener Exception", t);
        }
    }
}
SpringIntegrationHelper
protected ConnectionFactory connectionFactory;

protected MessageConverter messageConverter;

@Autowired
public void setConnectionFactory(ConnectionFactory connectionFactory) {
    this.connectionFactory = connectionFactory;
}

@Autowired
public void setMessageConverter(@Qualifier("jsonMessageConverter") MessageConverter messageConverter) {
    this.messageConverter = messageConverter;
}

public AmqpInboundChannelAdapter createInboundChannelAdapter(TaskExecutor taskExecutor,
        MessageChannel outputChannel, Queue[] queues, int concurrentConsumers,
        org.aopalliance.aop.Advice[] adviceChain,
        ErrorHandler errorHandler) {
    SimpleMessageListenerContainer listenerContainer =
            new SimpleMessageListenerContainer(connectionFactory);
    // AUTO is default, but setting it anyhow.
    listenerContainer.setAcknowledgeMode(AcknowledgeMode.AUTO);
    listenerContainer.setAutoStartup(true);
    listenerContainer.setConcurrentConsumers(concurrentConsumers);
    listenerContainer.setMessageConverter(messageConverter);
    listenerContainer.setQueues(queues);
    //listenerContainer.setChannelTransacted(false);
    listenerContainer.setErrorHandler(errorHandler);
    listenerContainer.setPrefetchCount(1);
    listenerContainer.setTaskExecutor(taskExecutor);
    listenerContainer.setDefaultRequeueRejected(true);
    if (adviceChain != null && adviceChain.length > 0) {
        listenerContainer.setAdviceChain(adviceChain);
    }
    AmqpInboundChannelAdapter a = new AmqpInboundChannelAdapter(listenerContainer);
    a.setMessageConverter(messageConverter);
    a.setAutoStartup(true);
    a.setHeaderMapper(MyAmqpHeaderMapper.createPassAllHeaders());
    a.setOutputChannel(outputChannel);
    return a;
}
It's not clear why you want to use a PriorityChannel in this context; why not use a priority queue in RabbitMQ? That way, you can run your flow on the container thread.
Sending the message to the back of the queue yourself would work, but there is a risk of message loss.
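If you do re-publish to the back of the queue yourself, one way to narrow the loss window (a sketch of mine, not part of the original answer; Spring AMQP 1.5-era API) is to enable publisher confirms and watch the confirm callback:
// Sketch: with publisher confirms on, an unconfirmed re-publish can be
// detected and retried or alerted on instead of being silently lost.
CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
cf.setPublisherConfirms(true);
RabbitTemplate template = new RabbitTemplate(cf);
template.setConfirmCallback((correlationData, ack, cause) -> {
    if (!ack) {
        // the broker did not confirm the re-published message; handle it here
    }
});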
