TransactionSynchronizationFactory in combination with JmsTransactionManager not working - spring-integration

I have a case where I read a file and convert its content to a String. I then split the string into multiple payloads and send those payloads individually to a queue. I want to use a JmsTransactionManager so that either all messages are sent or none at all.
When the transaction is successful I want to move the file to an archive folder, otherwise move it to a failed folder. I have read that I can use a TransactionSynchronizationFactory to accomplish this, but in combination with a JmsTransactionManager the file is not moved. If I use a PseudoTransactionManager, the file is moved, but I lose my JMS transaction.
I have made a simplified version to reproduce the issue. (The content of the file in this case is a simple comma separated list of values.)
@Bean
public IntegrationFlow fileInboundAdaptor() {
return IntegrationFlows
.from(s -> s.file(new File(INBOUND_PATH))
.patternFilter("*.txt"),
e -> e.poller(Pollers.fixedDelay(5000)
.transactionSynchronizationFactory(transactionSynchronizationFactory())
.transactional(new JmsTransactionManager(connectionFactory))
)
)
.transform(Transformers.fileToString())
.split(s -> s.applySequence(false).get().getT2().setDelimiters(","))
.handle((GenericHandler<String>) (payload, headers) -> {
jmsTemplate.send("SOME_QUEUE", (Session session) -> session.createTextMessage(payload));
return payload;
})
.channel(MessageChannels.queue("fileReadingResultChannel"))
.get();
}
The transactionSynchronizationFactory looks like this:
@Bean
public TransactionSynchronizationFactory transactionSynchronizationFactory() {
ExpressionParser parser = new SpelExpressionParser();
ExpressionEvaluatingTransactionSynchronizationProcessor syncProcessor
= new ExpressionEvaluatingTransactionSynchronizationProcessor();
syncProcessor.setBeanFactory(applicationContext.getAutowireCapableBeanFactory());
syncProcessor.setAfterCommitExpression(parser.parseExpression(
"payload.renameTo(new java.io.File('test/archive' " +
" + T(java.io.File).separator + 'ARCHIVE-' + payload.name))"));
syncProcessor.setAfterRollbackExpression(parser.parseExpression(
"payload.renameTo(new java.io.File('test/fail' " +
" + T(java.io.File).separator + 'FAILED-' + payload.name))"));
return new DefaultTransactionSynchronizationFactory(syncProcessor);
}
So my question is: does the TransactionSynchronizationFactory only work with a PseudoTransactionManager, or is it supposed to work with a JmsTransactionManager as well?
Solution
I needed to set transactionSynchronization on the JmsTransactionManager. Something like this:
@Bean
public JmsTransactionManager transactionManager() {
JmsTransactionManager jmsTransactionManager = new JmsTransactionManager(connectionFactory);
jmsTransactionManager.setTransactionSynchronization(AbstractPlatformTransactionManager.SYNCHRONIZATION_ON_ACTUAL_TRANSACTION);
return jmsTransactionManager;
}
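With that bean in place, the poller would then reference it instead of creating the manager inline. A sketch of just the changed poller configuration, assuming the rest of the flow stays as shown above:
e -> e.poller(Pollers.fixedDelay(5000)
        .transactionSynchronizationFactory(transactionSynchronizationFactory())
        .transactional(transactionManager()))  // the configured manager with synchronization enabled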

Well, I think your issue is here:
/**
* Create a new JmsTransactionManager for bean-style usage.
* <p>Note: The ConnectionFactory has to be set before using the instance.
* This constructor can be used to prepare a JmsTemplate via a BeanFactory,
* typically setting the ConnectionFactory via setConnectionFactory.
* <p>Turns off transaction synchronization by default, as this manager might
* be used alongside a datastore-based Spring transaction manager like
* DataSourceTransactionManager, which has stronger needs for synchronization.
* Only one manager is allowed to drive synchronization at any point of time.
* @see #setConnectionFactory
* @see #setTransactionSynchronization
*/
public JmsTransactionManager() {
setTransactionSynchronization(SYNCHRONIZATION_NEVER);
}
So, you have to switch it on manually with setTransactionSynchronization(AbstractPlatformTransactionManager.SYNCHRONIZATION_ALWAYS);
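In code that is simply (a sketch; the Solution above uses SYNCHRONIZATION_ON_ACTUAL_TRANSACTION, which also works here because the poller starts a real JMS transaction):
JmsTransactionManager transactionManager = new JmsTransactionManager(connectionFactory);
// enable synchronization so the TransactionSynchronizationFactory callbacks run
transactionManager.setTransactionSynchronization(
        AbstractPlatformTransactionManager.SYNCHRONIZATION_ALWAYS);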

Related

Spring Integration read and process a file without polling

I'm currently trying to write an integration flow that reads a CSV file and processes it in chunks (calling an API for enrichment), then writes it back out as a new CSV. I currently have an example working perfectly, except that it polls a directory. What I would like to do is pass the file path and file name to the integration flow in the headers and then just perform the operation on that one file.
Here is my code for the polling example that works great except for the polling.
@Bean
@SuppressWarnings("unchecked")
public IntegrationFlow getUIDsFromTTDandOutputToFile() {
Gson gson = new GsonBuilder().disableHtmlEscaping().create();
return IntegrationFlows
.from(Files.inboundAdapter(new File(inputFilePath))
.filter(getFileFilters())
.preventDuplicates(true)
.autoCreateDirectory(true),
c -> c
.poller(Pollers.fixedRate(1000)
.maxMessagesPerPoll(1)
)
)
.log(Level.INFO, m -> "TTD UID 2.0 Integration Start" )
.split(Files.splitter())
.channel(c -> c.executor(Executors.newFixedThreadPool(7)))
.handle((p, h) -> new CSVUtils().csvColumnSelector((String) p, ttdColNum))
.channel("chunkingChannel")
.get();
}
@Bean
@ServiceActivator(inputChannel = "chunkingChannel")
public AggregatorFactoryBean chunker() {
log.info("Initializing Chunker");
AggregatorFactoryBean aggregator = new AggregatorFactoryBean();
aggregator.setReleaseStrategy(new MessageCountReleaseStrategy(batchSize));
aggregator.setExpireGroupsUponCompletion(true);
aggregator.setGroupTimeoutExpression(new ValueExpression<>(100L));
aggregator.setOutputChannelName("chunkingOutput");
aggregator.setProcessorBean(new DefaultAggregatingMessageGroupProcessor());
aggregator.setSendPartialResultOnExpiry(true);
aggregator.setCorrelationStrategy(new CorrelationStrategyIml());
return aggregator;
}
@Bean
public IntegrationFlow enrichFlow() {
return IntegrationFlows.from("chunkingOutput")
.handle((p, h) -> gson.toJson(new TradeDeskUIDRequestPayloadBean((Collection<String>) p)))
.enrichHeaders(eh -> eh.async(false)
.header("accept", "application/json")
.header("contentType", "application/json")
.header("Authorization", "Bearer [TOKEN]")
)
.log(Level.INFO, m -> "Sending request of size " + batchSize + " to: " + TTD_UID_IDENTITY_MAP)
.handle(Http.outboundGateway(TTD_UID_IDENTITY_MAP)
.requestFactory(
alliantPooledHttpConnection.get_httpComponentsClientHttpRequestFactory())
.httpMethod(HttpMethod.POST)
.expectedResponseType(TradeDeskUIDResponsePayloadBean.class)
.extractPayload(true)
)
.log(Level.INFO, m -> "Writing response to output file" )
.handle((p, h) -> ((TradeDeskUIDResponsePayloadBean) p).printMappedBodyAsCSV2())
.handle(Files.outboundAdapter(new File(outputFilePath))
.autoCreateDirectory(true)
.fileExistsMode(FileExistsMode.APPEND)
//.appendNewLine(true)
.fileNameGenerator(m -> m.getHeaders().getOrDefault("file_name", "outputFile") + "_out.csv")
)
.get();
}
public class CorrelationStrategyIml implements CorrelationStrategy {
@Override
public Object getCorrelationKey(Message<?> message) {
return message.getHeaders().getOrDefault("", 1);
}
}
@Component
public class CSVUtils {
@ServiceActivator
String csvColumnSelector(String inputStr, Integer colNum) {
return StringUtils.commaDelimitedListToStringArray(inputStr)[colNum];
}
}
private FileListFilter<File> getFileFilters(){
ChainFileListFilter<File> cflf = new ChainFileListFilter<>();
cflf.addFilter(new LastModifiedFileListFilter(30));
cflf.addFilter(new AcceptOnceFileListFilter<>());
cflf.addFilter(new SimplePatternFileListFilter(fileExtention));
return cflf;
}
If you know the file, then there is no need for any special component from the framework. You just start your flow from a channel and send a message to it with the File object as the payload. That message will be carried on to the splitter in your flow and everything will work OK.
If you really want a high-level API on the matter, you can expose a @MessagingGateway as the beginning of that flow, and the end user then calls your gateway method with the desired file as an argument. The framework will create a message on your behalf and send it to the message channel in the flow for processing (a sketch of this approach follows the doc links below).
See more info in docs about gateways:
https://docs.spring.io/spring-integration/docs/current/reference/html/messaging-endpoints.html#gateway
https://docs.spring.io/spring-integration/docs/current/reference/html/dsl.html#integration-flow-as-gateway
And also a DSL definition starting from some explicit channel:
https://docs.spring.io/spring-integration/docs/current/reference/html/dsl.html#java-dsl-channels
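As a minimal sketch of the gateway approach (the interface and channel names here are assumptions, not from the original post):
// Caller-facing gateway; the framework builds a Message with the File as payload.
@MessagingGateway
public interface FileProcessingGateway {

    @Gateway(requestChannel = "fileInChannel")
    void process(File file);
}

// The flow now starts from the channel instead of a polled directory.
@Bean
public IntegrationFlow fileProcessingFlow() {
    return IntegrationFlows.from("fileInChannel")
            .split(Files.splitter())
            // ... the rest of the original flow (chunking, enrichment, output)
            .get();
}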

How to restrict transaction boundaries for JdbcPollingChannelAdapter

I have a JdbcPollingChannelAdapter defined as following:
@Bean
public MessageSource<Object> jdbcMessageSource(DataSource dataSource) {
JdbcPollingChannelAdapter jdbcPollingChannelAdapter = new JdbcPollingChannelAdapter(dataSource,
"SELECT * FROM common_task where due_at <= NOW() and retries < order by due_at ASC FOR UPDATE SKIP LOCKED");
jdbcPollingChannelAdapter.setMaxRowsPerPoll(1);
jdbcPollingChannelAdapter.setUpdateSql("Update common_task set retries = :retries, due_at = due_at + interval '10 minutes' WHERE ID = (:id)");
jdbcPollingChannelAdapter.setUpdatePerRow(true);
jdbcPollingChannelAdapter.setRowMapper(this::mapRow);
jdbcPollingChannelAdapter.setUpdateSqlParameterSourceFactory(this::updateParamSource);
return jdbcPollingChannelAdapter;
}
The integration flow for this:
@Bean
public IntegrationFlow pollingFlow(MessageSource<Object> jdbcMessageSource) {
return IntegrationFlows.from(jdbcMessageSource,
c -> c.poller(Pollers.fixedRate(250, TimeUnit.MILLISECONDS)
.maxMessagesPerPoll(1)
.transactional()))
.split()
.channel(taskSourceChannel())
.get();
}
The service activator is defined as
@ServiceActivator(inputChannel = "taskSourceChannel")
public void doSomething(FooTask event) {
//do something but ** not ** within the transaction of the poller.
}
The poller in the integration flow is defined as transactional. Based on my understanding, this will:
1. Execute the select query and the update query in a transaction.
2. Also execute the doSomething() method in the same transaction.
Goal: I would like 1 but not 2. I would like the select and update to run in a transaction to make sure both happen, but I don't want to execute doSomething() in the same transaction. In case of an exception in doSomething(), I still want to persist the updates made during polling. How can I achieve this?
This is done via a simple thread shift: you just need to leave the polling thread, let it commit the transaction, and continue processing on a separate thread.
Given your logic with the .split(), it's even better to hand work off to new threads after splitting, so the items are also processed by doSomething() in parallel.
The goal can simply be achieved with an ExecutorChannel. Since you already have that taskSourceChannel(), just replace it with an ExecutorChannel backed by a managed ThreadPoolTaskExecutor.
See more info in the Reference Manual: https://docs.spring.io/spring-integration/reference/html/messaging-channels-section.html#channel-configuration-executorchannel
And its Javadocs.
The simple Java Configuration variant is like this:
@Bean
public MessageChannel taskSourceChannel() {
return new ExecutorChannel(executor());
}
@Bean
public Executor executor() {
return new ThreadPoolTaskExecutor();
}

Remote directory for sftp outbound gateway with DSL

I'm having an issue with the SFTP outbound gateway using DSL.
I want to use an outbound gateway to send a file, then continue my flow.
The problem is that I have an exception telling me:
IllegalArgumentException: 'remoteDirectoryExpression' is required
I saw that I can use a RemoteFileTemplate where I can set the SFTP session factory plus the remote directory information, but the directory I want is defined in my flow by the value put in the header just before the launch of the batch.
@Bean
public IntegrationFlow orderServiceFlow() {
return f -> f
.handleWithAdapter(h -> h.httpGateway("myUrl")
.httpMethod(HttpMethod.GET)
.expectedResponseType(List.class)
)
.split()
.channel(batchLaunchChannel());
}
@Bean
public DirectChannel batchLaunchChannel() {
return MessageChannels.direct("batchLaunchChannel").get();
}
@Bean
public IntegrationFlow batchLaunchFlow() {
return IntegrationFlows.from(batchLaunchChannel())
.enrichHeaders(h -> h
.headerExpression("oiCode", "payload")
)
.transform((GenericTransformer<String, JobLaunchRequest>) message -> {
JobParameters jobParameters = new JobParametersBuilder()
.addDate("exec_date", new Date())
.addString("oiCode", message)
.toJobParameters();
return new JobLaunchRequest(orderServiceJob, jobParameters);
})
.handle(new JobLaunchingMessageHandler(jobLauncher))
.enrichHeaders(h -> h
.headerExpression("jobExecution", "payload")
)
.handle((p, h) -> {
//Logic to retrieve file...
return new File("");
})
.handle(Sftp.outboundGateway(sftpSessionFactory,
AbstractRemoteFileOutboundGateway.Command.PUT,
"payload")
)
.get();
}
I don't see how I can tell my outbound gateway which directory to use depending on what is in my header.
Sftp.outboundGateway() has an overloaded version that accepts a RemoteFileTemplate. So, you need to declare an SftpRemoteFileTemplate bean and configure its remoteDirectoryExpression:
/**
* Set the remote directory expression used to determine the remote directory to which
* files will be sent.
* @param remoteDirectoryExpression the remote directory expression.
*/
public void setRemoteDirectoryExpression(Expression remoteDirectoryExpression) {
This one can be a FunctionExpression:
setRemoteDirectoryExpression(m -> m.getHeaders().get("remoteDireHeader"))
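For example, a minimal sketch of such a bean (the header name "remoteDireHeader" mirrors the expression above and is just an assumption):
@Bean
public SftpRemoteFileTemplate sftpRemoteFileTemplate() {
    SftpRemoteFileTemplate template = new SftpRemoteFileTemplate(sftpSessionFactory);
    // resolve the remote directory from a message header at send time
    template.setRemoteDirectoryExpression(
            new FunctionExpression<Message<?>>(m -> m.getHeaders().get("remoteDireHeader")));
    return template;
}
It can then be passed to the gateway: Sftp.outboundGateway(sftpRemoteFileTemplate(), AbstractRemoteFileOutboundGateway.Command.PUT, "payload").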
Before I got an answer, I came up with this solution, but I'm not sure it's a good one.
// flow //
.handle((p, h) -> {
//Logic to retrieve file...
return new File("");
})
.handle(
Sftp.outboundGateway(
remoteFileTemplate(new SpelExpressionParser().parseExpression("headers['oiCode']")),
AbstractRemoteFileOutboundGateway.Command.PUT,
"payload")
)
.handle(// next steps //)
.get();
public RemoteFileTemplate remoteFileTemplate(Expression directory) throws Exception {
RemoteFileTemplate template = new SftpRemoteFileTemplate(sftpSessionFactory);
template.setRemoteDirectoryExpression(directory);
template.setAutoCreateDirectory(true);
template.afterPropertiesSet();
return template;
}
But this provokes a warning because of an exception thrown by ExpressionUtils:
java.lang.RuntimeException: No beanFactory

How to get a file daily via SFTP using Spring Integration with Java config?

I need to get a file daily via SFTP. I would like to use Spring Integration with Java config. The file is generally available at a specific time each day. The application should try to get the file near that time each day. If the file is not available, it should continue to retry for x attempts. After x attempts, it should send an email to let the admin know that the file is still not available on the SFTP site.
One option is to use SftpInboundFileSynchronizingMessageSource. In the MessageHandler, I can kick off a job to process the file. However, I really don't need synchronization with the remote file system; after all, it is a scheduled delivery of the file. Plus, I need to delay at most 15 minutes for the next retry, and polling every 15 minutes seems a bit of overkill for a daily file. I guess that I could use this but would need some mechanism to send an email after a certain time has elapsed and no file has been received.
The other option seems to be using the get command of the SFTP outbound gateway, but the only examples I can find use XML config.
Update
Adding code after using help provided by Artem Bilan's answer below:
Configuration class:
@Bean
@InboundChannelAdapter(autoStartup="true", channel = "sftpChannel", poller = @Poller("pollerMetadata"))
public SftpInboundFileSynchronizingMessageSource sftpMessageSource(ApplicationProperties applicationProperties, PropertiesPersistingMetadataStore store) {
SftpInboundFileSynchronizingMessageSource source =
new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer(applicationProperties));
source.setLocalDirectory(new File("ftp-inbound"));
source.setAutoCreateLocalDirectory(true);
FileSystemPersistentAcceptOnceFileListFilter local = new FileSystemPersistentAcceptOnceFileListFilter(store,"test");
source.setLocalFilter(local);
source.setCountsEnabled(true);
return source;
}
@Bean
public PollerMetadata pollerMetadata() {
PollerMetadata pollerMetadata = new PollerMetadata();
List<Advice> adviceChain = new ArrayList<Advice>();
adviceChain.add(retryCompoundTriggerAdvice());
pollerMetadata.setAdviceChain(adviceChain);
pollerMetadata.setTrigger(compoundTrigger());
return pollerMetadata;
}
@Bean
public RetryCompoundTriggerAdvice retryCompoundTriggerAdvice() {
return new RetryCompoundTriggerAdvice(compoundTrigger(), secondaryTrigger());
}
@Bean
public CompoundTrigger compoundTrigger() {
CompoundTrigger compoundTrigger = new CompoundTrigger(primaryTrigger());
return compoundTrigger;
}
@Bean
public Trigger primaryTrigger() {
return new CronTrigger("*/60 * * * * *");
}
@Bean
public Trigger secondaryTrigger() {
return new PeriodicTrigger(10000);
}
@Bean
@ServiceActivator(inputChannel = "sftpChannel")
public MessageHandler handler(PropertiesPersistingMetadataStore store) {
return new MessageHandler() {
@Override
public void handleMessage(Message<?> message) throws MessagingException {
System.out.println(message.getPayload());
store.flush();
}
};
}
RetryCompoundTriggerAdvice class:
public class RetryCompoundTriggerAdvice extends AbstractMessageSourceAdvice {
private final CompoundTrigger compoundTrigger;
private final Trigger override;
private int count = 0;
public RetryCompoundTriggerAdvice(CompoundTrigger compoundTrigger, Trigger overrideTrigger) {
Assert.notNull(compoundTrigger, "'compoundTrigger' cannot be null");
this.compoundTrigger = compoundTrigger;
this.override = overrideTrigger;
}
@Override
public boolean beforeReceive(MessageSource<?> source) {
return true;
}
@Override
public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
if (result == null && count <= 5) {
count++;
this.compoundTrigger.setOverride(this.override);
}
else {
this.compoundTrigger.setOverride(null);
if (count > 5) {
//send email
}
count = 0;
}
return result;
}
}
Since Spring Integration 4.3 there is CompoundTrigger:
* A {@link Trigger} that delegates the {@link #nextExecutionTime(TriggerContext)}
* to one of two Triggers. If the {@link #setOverride(Trigger) override} trigger is
* {@code null}, the primary trigger is invoked; otherwise the override trigger is
* invoked.
With the combination of CompoundTriggerAdvice:
* An {@link AbstractMessageSourceAdvice} that uses a {@link CompoundTrigger} to adjust
* the poller - when a message is present, the compound trigger's primary trigger is
* used to determine the next poll. When no message is present, the override trigger is
* used.
it can be used to achieve your task:
The primaryTrigger can be a CronTrigger to run the task only once a day.
The override could be a PeriodicTrigger with the desired short retry period.
The retry logic can be implemented with one more Advice for the poller, or by extending that CompoundTriggerAdvice to add counting logic and eventually send an email.
Since there is no file, there is no message to kick off the flow, so we have no choice but to dance around the poller infrastructure.

Lost headers when using UnZipResultSplitter

I'm using the Spring Integration Zip extension and it appears that I'm losing headers I've added upstream in the flow. I'm guessing that they are being lost in UnZipResultSplitter.splitUnzippedMap() as I don't see anything that explicitly copies them over.
I seem to recall that this is not unusual with splitters but I can't determine what strategy one should use in such a case.
Yep!
It looks like a bug.
The splitter contract is like this:
if (item instanceof Message) {
builder = this.getMessageBuilderFactory().fromMessage((Message<?>) item);
}
else {
builder = this.getMessageBuilderFactory().withPayload(item);
builder.copyHeaders(headers);
}
So, if those split items are already messages, as in the case of our UnZipResultSplitter, we just use the message as is without copying headers from upstream.
Please, raise a JIRA ticket (https://jira.spring.io/browse/INTEXT) on the matter.
Meanwhile, let's consider a workaround:
public class MyUnZipResultSplitter {
public List<Message<Object>> splitUnzipped(Message<Map<String, Object>> unzippedEntries) {
final List<Message<Object>> messages = new ArrayList<Message<Object>>(unzippedEntries.getPayload().size());
for (Map.Entry<String, Object> entry : unzippedEntries.getPayload().entrySet()) {
final String path = FilenameUtils.getPath(entry.getKey());
final String filename = FilenameUtils.getName(entry.getKey());
final Message<Object> splitMessage = MessageBuilder.withPayload(entry.getValue())
.setHeader(FileHeaders.FILENAME, filename)
.setHeader(ZipHeaders.ZIP_ENTRY_PATH, path)
.copyHeaders(unzippedEntries.getHeaders())
.build();
messages.add(splitMessage);
}
return messages;
}
}
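For completeness, a hedged sketch of wiring the workaround into a flow (the channel name is an assumption; some upstream step, such as the zip extension's unzip transformer, is assumed to have produced the Map payload that MyUnZipResultSplitter expects):
@Bean
public IntegrationFlow unzippedFlow() {
    return IntegrationFlows.from("unzippedChannel")
            // invoke the custom splitter method so upstream headers are copied onto each item
            .split(new MyUnZipResultSplitter(), "splitUnzipped")
            // ... downstream processing that relies on those headers
            .get();
}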