How to write a unit test for a connection factory in Spring WebFlux - Mockito

@Component
public class DisplayOverrideRowMapper {

    public DisplayOverride mapRow(Row row, RowMetadata meta) {
        return new DisplayOverride(
                row.get("store_auth_override_id", Integer.class),
                row.get("item_no", String.class),
                row.get("item_dsc", String.class),
                toForceOnlineOffline(row.get("display_override_type", String.class)),
                row.get("billing_division_no", String.class),
                row.get("store_no", String.class),
                row.get("in_store_flg", Boolean.class),
                row.get("pickup_flg", Boolean.class),
                row.get("delivery_flg", Boolean.class),
                row.get("ship_flg", Boolean.class),
                row.get("reason_cd", Integer.class),
                row.get("effective_dt", LocalDate.class),
                row.get("expiration_dt", LocalDate.class),
                row.get("last_update_ts", LocalDate.class),
                row.get("last_update_user_id", String.class));
    }

    private DisplayOverrideType toForceOnlineOffline(String displayOverrideType) {
        switch (displayOverrideType) {
            case "offline":
                return DisplayOverrideType.FORCE_OFFLINE;
            case "online":
                return DisplayOverrideType.FORCE_ONLINE;
            default:
                return null;
        }
    }
}
Below is the func method I want to write a unit test for:
@Autowired
public OverridesRepositoryHelperImpl(DisplayOverrideRowMapper displayOverrideRowMapper,
                                     ConnectionFactory connectionFactory) {
    this.displayOverrideRowMapper = displayOverrideRowMapper;
    this.connectionFactory = connectionFactory;
}

@Override
public Flux<DisplayOverride> func(DisplayOverrideRequest displayOverrideRequest) {
    StringBuilder sql = new StringBuilder(SELECT_GET_OVERRIDE);
    String results = buildWhereClauseForGet(displayOverrideRequest);
    sql.append(results);
    return Flux.from(connectionFactory.create())
            .flatMap(c ->
                    Flux.from(c.createStatement(sql.toString()).execute())
                            .map(result -> result.map((row, meta) -> displayOverrideRowMapper.mapRow(row, meta)))
                            .flatMap(p -> Flux.from(p))
                            .doFinally(st -> c.close())
            );
}
Can anyone help me write a unit test for this func method? I wrote the test below, but it does not cover all the lines of func: when I stub the connectionFactory.create() method, the actual code shows no coverage.
When I use the debugger, it reaches the line Mockito.when(connectionFactory.create()).thenReturn((Publisher) Flux.just(connection)...), but it does not go any further inside func.
@MockBean
DisplayOverrideRowMapper displayOverrideRowMapper;

@MockBean
ConnectionFactory connectionFactory;

@MockBean
Connection connection;

@MockBean
Result result;

Flux<DisplayOverride> displayOverrideResponseFlux = Flux.just(displayOverrideResponse); // I already created this so you don’t need to worry about it

Mockito.when(displayOverrideRowMapper.mapRow(any(), any())).thenCallRealMethod();
Mockito.when(connectionFactory.create()).thenReturn((Publisher) Flux.just(connection).flatMap(
        c -> Flux.just(result)
                .map(result -> result.map((row, meta) -> displayOverrideRowMapper.mapRow(row, meta)))
                .flatMap(p -> Flux.from(p)).doFinally(signalType -> c.close())));

OverridesRepositoryHelperImpl overridesRepositoryHelper = new OverridesRepositoryHelperImpl(displayOverrideRowMapper, connectionFactory);
Flux<DisplayOverride> resultFlux = overridesRepositoryHelper.func(displayOverrideRequest);
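One approach that usually gets func fully covered is to stub every link in the R2DBC chain (Connection, Statement, Result) instead of pre-building the mapped Flux inside the connectionFactory.create() stub; that way the real body of func does the mapping and the coverage shows up. The sketch below is only an outline under a few assumptions: it uses plain Mockito (@Mock with MockitoExtension) rather than @MockBean, Statement is an extra io.r2dbc.spi.Statement mock, StepVerifier comes from reactor-test, static imports of the Mockito matchers are assumed as in your snippet, and displayOverrideResponse / displayOverrideRequest are the objects you already build in your test.

@ExtendWith(MockitoExtension.class)
class OverridesRepositoryHelperImplTest {

    @Mock DisplayOverrideRowMapper displayOverrideRowMapper;
    @Mock ConnectionFactory connectionFactory;
    @Mock Connection connection;
    @Mock Statement statement;
    @Mock Result result;

    // reuse the request/response objects you already create in your current test
    DisplayOverride displayOverrideResponse;
    DisplayOverrideRequest displayOverrideRequest;

    @Test
    @SuppressWarnings({"unchecked", "rawtypes"})
    void funcMapsEveryRow() {
        // stub the whole chain so the real func() body runs:
        // create() -> createStatement() -> execute() -> map() -> close()
        Mockito.when(connectionFactory.create()).thenReturn((Publisher) Mono.just(connection));
        Mockito.when(connection.createStatement(anyString())).thenReturn(statement);
        Mockito.when(statement.execute()).thenReturn((Publisher) Flux.just(result));
        Mockito.when(result.map(any(BiFunction.class))).thenReturn(Flux.just(displayOverrideResponse));
        Mockito.when(connection.close()).thenReturn(Mono.empty());

        OverridesRepositoryHelperImpl helper =
                new OverridesRepositoryHelperImpl(displayOverrideRowMapper, connectionFactory);

        // subscribing via StepVerifier is what actually drives the reactive pipeline
        StepVerifier.create(helper.func(displayOverrideRequest))
                .expectNext(displayOverrideResponse)
                .verifyComplete();
    }
}

The key point is that connectionFactory.create() should emit only the mocked Connection; the createStatement/execute/map work must be left to func itself, otherwise those lines are never executed and never show up as covered.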

Related

Spring Integration Java DSL HTTP outbound

How do I resend the same (or a modified) message from an outbound HTTP call in case of specific client error responses such as 400, 413, etc.?
@Bean
private IntegrationFlow myChannel() {
    IntegrationFlowBuilder builder =
            IntegrationFlows.from(queue)
                    .handle(//http post method config)
                    ...
                    .expectedResponseType(String.class))
                    .channel(MessageChannels.publishSubscribe(channel2));
    return builder.get();
}

@Bean
private IntegrationFlow defaultErrorChannel() {
}
EDIT: Added the endpoint configurer to the handle method
@Bean
private IntegrationFlow myChannel() {
    IntegrationFlowBuilder builder =
            IntegrationFlows.from(queue)
                    .handle(//http post method config
                            ...
                            .expectedResponseType(String.class),
                            e -> e.advice(myRetryAdvice()))
                    .channel(MessageChannels.publishSubscribe(channel2));
    return builder.get();
}

@Bean
public Advice myRetryAdvice() {
    ... // set custom retry policy
}
Custom Retry policy:
class InternalServerExceptionClassifierRetryPolicy extends ExceptionClassifierRetryPolicy {

    public InternalServerExceptionClassifierRetryPolicy() {
        final SimpleRetryPolicy simpleRetryPolicy = new SimpleRetryPolicy();
        simpleRetryPolicy.setMaxAttempts(2);
        this.setExceptionClassifier(new Classifier<Throwable, RetryPolicy>() {
            @Override
            public RetryPolicy classify(Throwable classifiable) {
                if (classifiable instanceof HttpServerErrorException) {
                    // retry specifically for 500 and 504
                    if (((HttpServerErrorException) classifiable).getStatusCode() == HttpStatus.INTERNAL_SERVER_ERROR
                            || ((HttpServerErrorException) classifiable).getStatusCode() == HttpStatus.GATEWAY_TIMEOUT) {
                        return simpleRetryPolicy;
                    }
                    return new NeverRetryPolicy();
                }
                return new NeverRetryPolicy();
            }
        });
    }
}
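To plug the policy above into the advice referenced by e.advice(myRetryAdvice()), one possible wiring is the following sketch (assuming spring-retry's RetryTemplate and Spring Integration's RequestHandlerRetryAdvice):

@Bean
public Advice myRetryAdvice() {
    RetryTemplate retryTemplate = new RetryTemplate();
    // only 500/504 responses are retried; everything else fails immediately
    retryTemplate.setRetryPolicy(new InternalServerExceptionClassifierRetryPolicy());

    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    advice.setRetryTemplate(retryTemplate);
    return advice;
}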
EDIT 2: Override open() to modify the original message
RequestHandlerRetryAdvice retryAdvice = new RequestHandlerRetryAdvice() {
    @Override
    public <T, E extends Throwable> boolean open(RetryContext retryContext, RetryCallback<T, E> callback) {
        Message<String> originalMsg =
                (Message) retryContext.getAttribute(ErrorMessageUtils.FAILED_MESSAGE_CONTEXT);
        Message<String> updatedMsg = //some updated message
        retryContext.setAttribute(ErrorMessageUtils.FAILED_MESSAGE_CONTEXT, updatedMsg);
        return super.open(retryContext, callback);
    }
};
See the RequestHandlerRetryAdvice: https://docs.spring.io/spring-integration/reference/html/messaging-endpoints.html#message-handler-advice-chain. You configure a RetryPolicy to check for those HttpClientErrorExceptions, and the framework will re-send for you.
The Java DSL lets us configure it via the second handle() argument, the endpoint configurer: .handle(..., e -> e.advice(myRetryAdvice)): https://docs.spring.io/spring-integration/reference/html/dsl.html#java-dsl-endpoints

How to poll for multiple files at once with Spring Integration with WebFlux?

I have the configuration below for file monitoring using Spring Integration and WebFlux.
It works well, but if I drop in 100 files it picks them up one at a time, with a 10-second gap between the "Received a notification of new file" log messages.
How do I poll for multiple files at once, so I don't have to wait 1000 seconds for all my files to finally register?
@Configuration
@EnableIntegration
public class FileMonitoringConfig {

    private static final Logger logger =
            LoggerFactory.getLogger(FileMonitoringConfig.class.getName());

    @Value("${monitoring.folder}")
    private String monitoringFolder;

    @Value("${monitoring.polling-in-seconds:10}")
    private int pollingInSeconds;

    @Bean
    Publisher<Message<Object>> myMessagePublisher() {
        return IntegrationFlows.from(
                        Files.inboundAdapter(new File(monitoringFolder))
                                .useWatchService(false),
                        e -> e.poller(Pollers.fixedDelay(pollingInSeconds, TimeUnit.SECONDS)))
                .channel(myChannel())
                .toReactivePublisher();
    }

    @Bean
    Function<Flux<Message<Object>>, Publisher<Message<Object>>> myReactiveSource() {
        return flux -> myMessagePublisher();
    }

    @Bean
    FluxMessageChannel myChannel() {
        return new FluxMessageChannel();
    }

    @Bean
    @ServiceActivator(
            inputChannel = "myChannel",
            async = "true",
            reactive = @Reactive("myReactiveSource"))
    ReactiveMessageHandler myMessageHandler() {
        return new ReactiveMessageHandler() {
            @Override
            public Mono<Void> handleMessage(Message<?> message) throws MessagingException {
                return Mono.fromFuture(doHandle(message));
            }

            private CompletableFuture<Void> doHandle(Message<?> message) {
                return CompletableFuture.runAsync(
                        () -> {
                            logger.info("Received a notification of new file: {}", message.getPayload());
                            File file = (File) message.getPayload();
                        });
            }
        };
    }
}
The inbound channel adapter polls a single data record from the source per poll cycle.
Consider adding maxMessagesPerPoll(-1) to your poller() configuration, as sketched below.
See more in the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-adapter-namespace-inbound
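Applied to the configuration above, the poller change could look like this (a sketch of the single bean that changes):

@Bean
Publisher<Message<Object>> myMessagePublisher() {
    return IntegrationFlows.from(
                    Files.inboundAdapter(new File(monitoringFolder))
                            .useWatchService(false),
                    // maxMessagesPerPoll(-1) lets each poll pick up every file that is
                    // currently available instead of just one per 10-second cycle
                    e -> e.poller(Pollers.fixedDelay(pollingInSeconds, TimeUnit.SECONDS)
                            .maxMessagesPerPoll(-1)))
            .channel(myChannel())
            .toReactivePublisher();
}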

Streaming from remote SFTP directories and sub-directories with Spring Integration

I am using the Spring Integration streaming inbound channel adapter to get a stream from a remote SFTP server and process every line of the content.
I use:
IntegrationFlows.from(Sftp.inboundStreamingAdapter(template)
                .filter(remoteFileFilter)
                .remoteDirectory("test_dir"),
        e -> e.id("sftpInboundAdapter")
                .autoStartup(true)
                .poller(Pollers.fixedDelay(fetchInt)))
        .handle(Files.splitter(true, true))
        ....
This works now, but I can only get files from the test_dir directory itself; I need to recursively get files from this directory and its sub-directories and parse every line.
I noticed that the non-streaming inbound channel adapter, Sftp.inboundAdapter(sftpSessionFactory).scanner(...), can scan sub-directories, but I didn't see anything similar for the streaming inbound channel adapter.
So, how can I implement 'recursively get files from a directory' with the streaming inbound channel adapter?
Thanks.
You can use two outbound gateways - the first doing ls -R (recursive list); split the result and use a second gateway configured with mget -stream to get each file.
EDIT
@SpringBootApplication
public class So60987851Application {
public static void main(String[] args) {
SpringApplication.run(So60987851Application.class, args);
}
@Bean
IntegrationFlow flow(SessionFactory<LsEntry> csf) {
return IntegrationFlows.from(() -> "foo", e -> e.poller(Pollers.fixedDelay(5_000)))
.handle(Sftp.outboundGateway(csf, Command.LS, "payload")
.options(Option.RECURSIVE, Option.NAME_ONLY)
// need a more robust metadata store for persistence, unless the files are removed
.filter(new SftpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "test")))
.split()
.log()
.enrichHeaders(headers -> headers.headerExpression("fileToRemove", "'foo/' + payload"))
.handle(Sftp.outboundGateway(csf, Command.GET, "'foo/' + payload")
.options(Option.STREAM))
.split(new FileSplitter())
.log()
// instead of a filter, we can remove the remote file.
// but needs some logic to wait until all lines read
// .handle(Sftp.outboundGateway(csf, Command.RM, "headers['fileToRemove']"))
// .log()
.get();
}
@Bean
CachingSessionFactory<LsEntry> csf(DefaultSftpSessionFactory sf) {
return new CachingSessionFactory<>(sf);
}
@Bean
DefaultSftpSessionFactory sf() {
DefaultSftpSessionFactory sf = new DefaultSftpSessionFactory();
sf.setHost("10.0.0.8");
sf.setUser("gpr");
sf.setPrivateKey(new FileSystemResource(new File("/Users/grussell/.ssh/id_rsa")));
sf.setAllowUnknownKeys(true);
return sf;
}
}
It works for me; this is my full code:
@Configuration
public class SftpIFConfig {
@InboundChannelAdapter(value = "sftpMgetInputChannel",
poller = @Poller(fixedDelay = "5000"))
public String filesForMGET(){
return "/upload/done";
}
@Bean
public IntegrationFlow sftpMGetFlow(SessionFactory<ChannelSftp.LsEntry> csf) {
return IntegrationFlows.from("sftpMgetInputChannel")
.handle(Sftp.outboundGateway(csf,
AbstractRemoteFileOutboundGateway.Command.LS, "payload")
.options(AbstractRemoteFileOutboundGateway.Option.RECURSIVE, AbstractRemoteFileOutboundGateway.Option.NAME_ONLY)
//Persistent file list filter using the server's file timestamp to detect if we've already 'seen' this file.
.filter(new SftpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "test")))
.split()
.log(message -> "file path -> "+message.getPayload())
.enrichHeaders(headers -> headers.headerExpression("fileToRemove", "'/upload/done/' + payload"))
.log(message -> "Header file info -> " + message.getHeaders())
.handle(Sftp.outboundGateway(csf, AbstractRemoteFileOutboundGateway.Command.GET, "'/upload/done/' + payload")
.options(AbstractRemoteFileOutboundGateway.Option.STREAM))
.split(new FileSplitter())
.log(message -> "File content -> "+message.getPayload())
.get();
}
@Bean
CachingSessionFactory<ChannelSftp.LsEntry> csf(DefaultSftpSessionFactory sf) {
return new CachingSessionFactory<>(sf);
}
@Bean
DefaultSftpSessionFactory sf() {
DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory();
factory.setHost("0.0.0.0");
factory.setPort(2222);
factory.setAllowUnknownKeys(true);
factory.setUser("xxxx");
factory.setPassword("xxx");
return factory;
}
}
Like dsillman2000 commented, I also felt this answer could use more explanation.
After figuring it out based on the examples here, here is my extended example that works for me, with extracted variables and methods that (hopefully) clearly say what the individual parts are or do.
Dependencies: mostly org.springframework.integration:spring-integration-sftp:2.6.6
package com.example.sftp.incoming;
import com.jcraft.jsch.ChannelSftp;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.dsl.HeaderEnricherSpec;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.file.filters.AbstractFileListFilter;
import org.springframework.integration.file.remote.gateway.AbstractRemoteFileOutboundGateway;
import org.springframework.integration.file.remote.session.CachingSessionFactory;
import org.springframework.integration.file.remote.session.SessionFactory;
import org.springframework.integration.file.splitter.FileSplitter;
import org.springframework.integration.metadata.SimpleMetadataStore;
import org.springframework.integration.sftp.dsl.Sftp;
import org.springframework.integration.sftp.dsl.SftpOutboundGatewaySpec;
import org.springframework.integration.sftp.filters.SftpPersistentAcceptOnceFileListFilter;
import org.springframework.integration.sftp.session.DefaultSftpSessionFactory;
import org.springframework.messaging.Message;
import java.util.function.Consumer;
import java.util.function.Function;
@Configuration
public class SftpIncomingRecursiveConfiguration {
private static final String BASE_REMOTE_FOLDER_TO_GET_FILES_FROM = "/tmp/sftptest/";
private static final String REMOTE_FOLDER_PATH_AS_EXPRESSION = "'" + BASE_REMOTE_FOLDER_TO_GET_FILES_FROM + "'";
private static final String INBOUND_CHANNEL_NAME = "sftpGetInputChannel";
#Value("${demo.sftp.host}")
private String sftpHost;
#Value("${demo.sftp.user}")
private String sftpUser;
#Value("${demo.sftp.password}")
private String sftpPassword;
#Value("${demo.sftp.port}")
private String sftpPort;
#Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
factory.setHost(sftpHost);
factory.setPort(Integer.parseInt(sftpPort));
factory.setUser(sftpUser);
factory.setPassword(sftpPassword);
factory.setAllowUnknownKeys(true);
return new CachingSessionFactory<>(factory);
}
// poll for new files every 500ms
@InboundChannelAdapter(value = INBOUND_CHANNEL_NAME, poller = @Poller(fixedDelay = "500"))
public String filesForSftpGet() {
return BASE_REMOTE_FOLDER_TO_GET_FILES_FROM;
}
@Bean
public IntegrationFlow sftpGetFlow(SessionFactory<ChannelSftp.LsEntry> sessionFactory) {
return IntegrationFlows
.from(INBOUND_CHANNEL_NAME)
.handle(listRemoteFiles(sessionFactory))
.split()
.log(logTheFilePath())
.enrichHeaders(addAMessageHeader())
.log(logTheMessageHeaders())
.handle(getTheFile(sessionFactory))
.split(splitContentIntoLines())
.log(logTheFileContent())
.get();
}
private SftpOutboundGatewaySpec listRemoteFiles(SessionFactory<ChannelSftp.LsEntry> sessionFactory) {
return Sftp.outboundGateway(sessionFactory,
AbstractRemoteFileOutboundGateway.Command.LS, REMOTE_FOLDER_PATH_AS_EXPRESSION)
.options(AbstractRemoteFileOutboundGateway.Option.RECURSIVE, AbstractRemoteFileOutboundGateway.Option.NAME_ONLY)
.filter(onlyFilesWeHaveNotSeenYet())
.filter(onlyTxtFiles());
}
/* Persistent file list filter using the server's file timestamp to detect if we've already 'seen' this file.
Without it, the program would report the same file over and over again. */
private SftpPersistentAcceptOnceFileListFilter onlyFilesWeHaveNotSeenYet() {
return new SftpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "keyPrefix");
}
private AbstractFileListFilter<ChannelSftp.LsEntry> onlyTxtFiles() {
return new AbstractFileListFilter<>() {
@Override
public boolean accept(ChannelSftp.LsEntry file) {
return file.getFilename().endsWith(".txt");
}
};
}
private Function<Message<Object>, Object> logTheFilePath() {
return message -> "### File path: " + message.getPayload();
}
private Consumer<HeaderEnricherSpec> addAMessageHeader() {
return headers -> headers.headerExpression("fileToRemove", REMOTE_FOLDER_PATH_AS_EXPRESSION + " + payload");
}
private Function<Message<Object>, Object> logTheMessageHeaders() {
return message -> "### Header file info: " + message.getHeaders();
}
private SftpOutboundGatewaySpec getTheFile(SessionFactory<ChannelSftp.LsEntry> sessionFactory) {
return Sftp.outboundGateway(sessionFactory, AbstractRemoteFileOutboundGateway.Command.GET,
REMOTE_FOLDER_PATH_AS_EXPRESSION + " + payload")
.options(AbstractRemoteFileOutboundGateway.Option.STREAM);
}
private FileSplitter splitContentIntoLines() {
return new FileSplitter();
}
private Function<Message<Object>, Object> logTheFileContent() {
return message -> "### File content line: '" + message.getPayload() + "'";
}
}
EDIT: Please note, there is a difference here. The other example uses a poller to generate the message with the remote file path from "filesForMGET" over and over again, and that message payload (file path) gets used as an argument to "ls". I'm hard-coding it here, ignoring the message content from the poller.

How to route using message headers in Spring Integration DSL Tcp

I have 2 server-side services and I would like to route messages to them using message headers, where remote clients put the service identification into a type field.
Is the code snippet from the server-side config below the correct way? It throws a cast exception indicating that route() sees only the payload, not the message headers. Also, all the examples in the Spring Integration manual show only payload-based decisioning.
@Bean
public IntegrationFlow serverFlow( // common flow for all my services, currently 2
TcpNetServerConnectionFactory serverConnectionFactory,
HeartbeatServer heartbeatServer,
FeedServer feedServer) {
return IntegrationFlows
.from(Tcp.inboundGateway(serverConnectionFactory))
.<Message<?>, String>route((m) -> m.getHeaders().get("type", String.class),
(routeSpec) -> routeSpec
.subFlowMapping("hearbeat", subflow -> subflow.handle(heartbeatServer::processRequest))
.subFlowMapping("feed", subflow -> subflow.handle(feedServer::consumeFeed)))
.get();
}
Client side config:
@Bean
public IntegrationFlow heartbeatClientFlow(
TcpNetClientConnectionFactory clientConnectionFactory,
HeartbeatClient heartbeatClient) {
return IntegrationFlows.from(heartbeatClient::send, e -> e.poller(Pollers.fixedDelay(Duration.ofSeconds(5))))
.enrichHeaders(c -> c.header("type", "heartbeat"))
.log()
.handle(outboundGateway(clientConnectionFactory))
.handle(heartbeatClient::receive)
.get();
}
@Bean
public IntegrationFlow feedClientFlow(
TcpNetClientConnectionFactory clientConnectionFactory) {
return IntegrationFlows.from(FeedClient.MessageGateway.class)
.enrichHeaders(c -> c.header("type", "feed"))
.log()
.handle(outboundGateway(clientConnectionFactory))
.get();
}
And as usual here is the full demo project code, ClientConfig and ServerConfig.
There is no standard way to send headers over raw TCP. You need to encode them into the payload somehow (and extract them on the server side).
The framework provides a mechanism to do this for you, but it requires extra configuration.
See the documentation.
Specifically...
The MapJsonSerializer uses a Jackson ObjectMapper to convert between a Map and JSON. You can use this serializer in conjunction with a MessageConvertingTcpMessageMapper and a MapMessageConverter to transfer selected headers and the payload in JSON.
I'll try to find some time to create an example of how to use it.
But, of course, you can roll your own encoding/decoding.
EDIT
Here's an example configuration to use JSON to convey message headers over TCP...
@SpringBootApplication
public class TcpWithHeadersApplication {
public static void main(String[] args) {
SpringApplication.run(TcpWithHeadersApplication.class, args);
}
// Client side
public interface TcpExchanger {
public String exchange(String data, @Header("type") String type);
}
@Bean
public IntegrationFlow client(@Value("${tcp.port:1234}") int port) {
return IntegrationFlows.from(TcpExchanger.class)
.handle(Tcp.outboundGateway(Tcp.netClient("localhost", port)
.deserializer(jsonMapping())
.serializer(jsonMapping())
.mapper(mapper())))
.get();
}
// Server side
@Bean
public IntegrationFlow server(@Value("${tcp.port:1234}") int port) {
return IntegrationFlows.from(Tcp.inboundGateway(Tcp.netServer(port)
.deserializer(jsonMapping())
.serializer(jsonMapping())
.mapper(mapper())))
.log(Level.INFO, "exampleLogger", "'Received type header:' + headers['type']")
.route("headers['type']", r -> r
.subFlowMapping("upper",
subFlow -> subFlow.transform(String.class, p -> p.toUpperCase()))
.subFlowMapping("lower",
subFlow -> subFlow.transform(String.class, p -> p.toLowerCase())))
.get();
}
// Common
@Bean
public MessageConvertingTcpMessageMapper mapper() {
MapMessageConverter converter = new MapMessageConverter();
converter.setHeaderNames("type");
return new MessageConvertingTcpMessageMapper(converter);
}
@Bean
public MapJsonSerializer jsonMapping() {
return new MapJsonSerializer();
}
// Console
@Bean
@DependsOn("client")
public ApplicationRunner runner(TcpExchanger exchanger,
ConfigurableApplicationContext context) {
return args -> {
System.out.println("Enter some text; if it starts with a lower case character,\n"
+ "it will be uppercased by the server; otherwise it will be lowercased;\n"
+ "enter 'quit' to end");
Scanner scanner = new Scanner(System.in);
String request = scanner.nextLine();
while (!"quit".equals(request.toLowerCase())) {
if (StringUtils.hasText(request)) {
String result = exchanger.exchange(request,
Character.isLowerCase(request.charAt(0)) ? "upper" : "lower");
System.out.println(result);
}
request = scanner.nextLine();
}
scanner.close();
context.close();
};
}
}

How do I prevent ListenerExecutionFailedException: Listener threw exception

What do I need to do to prevent the following exception, which is presumably thrown by RabbitMQ, when I have 2 instances of a Spring Boot application both running Spring Batch jobs in parallel?
org.springframework.amqp.rabbit.listener.exception.ListenerExecutionFailedException: Listener threw exception
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.wrapToListenerExecutionFailedExceptionIfNeeded(AbstractMessageListenerContainer.java:877)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:787)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:707)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$001(SimpleMessageListenerContainer.java:98)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$1.invokeListener(SimpleMessageListenerContainer.java:189)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.invokeListener(SimpleMessageListenerContainer.java:1236)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.executeListener(AbstractMessageListenerContainer.java:688)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.doReceiveAndExecute(SimpleMessageListenerContainer.java:1190)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.receiveAndExecute(SimpleMessageListenerContainer.java:1174)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$1200(SimpleMessageListenerContainer.java:98)
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1363)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.springframework.messaging.MessageDeliveryException: failed to send Message to channel 'amqpLaunchSpringBatchJobFlow.channel#0'; nested exception is jp.ixam_drive.batch.service.JobExecutionRuntimeException: Failed to start job with name ads-insights-import and parameters {accessToken=<ACCESS_TOKEN>, id=act_1234567890, classifier=stats, report_run_id=1482330625184792, job_request_id=32}
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:449)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:373)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:105)
at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:171)
at org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter.access$400(AmqpInboundChannelAdapter.java:45)
at org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter$1.onMessage(AmqpInboundChannelAdapter.java:95)
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:784)
... 10 common frames omitted
Caused by: jp.ixam_drive.batch.service.JobExecutionRuntimeException: Failed to start job with name ads-insights-import and parameters {accessToken=<ACCESS_TOKEN>, id=act_1234567890, classifier=stats, report_run_id=1482330625184792, job_request_id=32}
at jp.ixam_drive.facebook.SpringBatchLauncher.launchJob(SpringBatchLauncher.java:42)
at jp.ixam_drive.facebook.AmqpBatchLaunchIntegrationFlows.lambda$amqpLaunchSpringBatchJobFlow$1(AmqpBatchLaunchIntegrationFlows.java:71)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:148)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:121)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:89)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:423)
... 18 common frames omitted
Caused by: org.springframework.batch.core.repository.JobInstanceAlreadyCompleteException: A job instance already exists and is complete for parameters={accessToken=<ACCESS_TOKEN>, id=act_1234567890, classifier=stats, report_run_id=1482330625184792, job_request_id=32}. If you want to run this job again, change the parameters.
at org.springframework.batch.core.repository.support.SimpleJobRepository.createJobExecution(SimpleJobRepository.java:126)
at sun.reflect.GeneratedMethodAccessor193.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:333)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:99)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:282)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.batch.core.repository.support.AbstractJobRepositoryFactoryBean$1.invoke(AbstractJobRepositoryFactoryBean.java:172)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:213)
at com.sun.proxy.$Proxy125.createJobExecution(Unknown Source)
at org.springframework.batch.core.launch.support.SimpleJobLauncher.run(SimpleJobLauncher.java:125)
at jp.ixam_drive.batch.service.JobOperationsService.launch(JobOperationsService.java:64)
at jp.ixam_drive.facebook.SpringBatchLauncher.launchJob(SpringBatchLauncher.java:37)
... 24 common frames omitted
Both instances run the following code in parallel to execute the Spring Batch jobs:
@Configuration
@Conditional(AmqpBatchLaunchCondition.class)
@Slf4j
public class AmqpAsyncAdsInsightsConfiguration {
@Autowired
ObjectMapper objectMapper;
@Value("${batch.launch.amqp.routing-keys.async-insights}")
String routingKey;
@Bean
public IntegrationFlow amqpOutboundAsyncAdsInsights(AmqpTemplate amqpTemplate) {
return IntegrationFlows.from("async_ads_insights")
.<JobParameters, byte[]>transform(SerializationUtils::serialize)
.handle(Amqp.outboundAdapter(amqpTemplate).routingKey(routingKey)).get();
}
@Bean
public IntegrationFlow amqpAdsInsightsAsyncJobRequestFlow(FacebookMarketingServiceProvider serviceProvider,
JobParametersToApiParametersTransformer transformer, ConnectionFactory connectionFactory) {
return IntegrationFlows.from(Amqp.inboundAdapter(connectionFactory, routingKey))
.<byte[], JobParameters>transform(SerializationUtils::deserialize)
.<JobParameters, ApiParameters>transform(transformer)
.<ApiParameters>handle((payload, header) -> {
String accessToken = (String) header.get("accessToken");
String id = (String) header.get("object_id");
FacebookMarketingApi api = serviceProvider.getApi(accessToken);
String reportRunId = api.asyncRequestOperations().getReportRunId(id, payload.toMap());
ObjectNode objectNode = objectMapper.createObjectNode();
objectNode.put("accessToken", accessToken);
objectNode.put("id", id);
objectNode.put("report_run_id", reportRunId);
objectNode.put("classifier", (String) header.get("classifier"));
objectNode.put("job_request_id", (Long) header.get("job_request_id"));
return serialize(objectNode);
}).channel("ad_report_run_polling_channel").get();
}
@SneakyThrows
private String serialize(JsonNode jsonNode) {
return objectMapper.writeValueAsString(jsonNode);
}
}
@Configuration
@Conditional(AmqpBatchLaunchCondition.class)
@Slf4j
public class AmqpBatchLaunchIntegrationFlows {
@Autowired
SpringBatchLauncher batchLauncher;
@Value("${batch.launch.amqp.routing-keys.job-launch}")
String routingKey;
@Bean(name = "batch_launch_channel")
public MessageChannel batchLaunchChannel() {
return MessageChannels.executor(Executors.newSingleThreadExecutor()).get();
}
@Bean
public IntegrationFlow amqpOutbound(AmqpTemplate amqpTemplate,
#Qualifier("batch_launch_channel") MessageChannel batchLaunchChannel) {
return IntegrationFlows.from(batchLaunchChannel)
.<JobParameters, byte[]>transform(SerializationUtils::serialize)
.handle(Amqp.outboundAdapter(amqpTemplate).routingKey(routingKey)).get();
}
@Bean
public IntegrationFlow amqpLaunchSpringBatchJobFlow(ConnectionFactory connectionFactory) {
return IntegrationFlows.from(Amqp.inboundAdapter(connectionFactory, routingKey))
.handle(message -> {
String jobName = (String) message.getHeaders().get("job_name");
byte[] bytes = (byte[]) message.getPayload();
JobParameters jobParameters = SerializationUtils.deserialize(bytes);
batchLauncher.launchJob(jobName, jobParameters);
}).get();
}
}
@Configuration
@Slf4j
public class AsyncAdsInsightsConfiguration {
@Value("${batch.core.pool.size}")
public Integer batchCorePoolSize;
@Value("${ixam_drive.facebook.api.ads-insights.async-poll-interval}")
public String asyncPollInterval;
@Autowired
ObjectMapper objectMapper;
@Autowired
private DataSource dataSource;
@Bean(name = "async_ads_insights")
public MessageChannel adsInsightsAsyncJobRequestChannel() {
return MessageChannels.direct().get();
}
@Bean(name = "ad_report_run_polling_channel")
public MessageChannel adReportRunPollingChannel() {
return MessageChannels.executor(Executors.newFixedThreadPool(batchCorePoolSize)).get();
}
@Bean
public IntegrationFlow adReportRunPollingLoopFlow(FacebookMarketingServiceProvider serviceProvider) {
return IntegrationFlows.from(adReportRunPollingChannel())
.<String>handle((payload, header) -> {
ObjectNode jsonNode = deserialize(payload);
String accessToken = jsonNode.get("accessToken").asText();
String reportRunId = jsonNode.get("report_run_id").asText();
try {
AdReportRun adReportRun = serviceProvider.getApi(accessToken)
.fetchObject(reportRunId, AdReportRun.class);
log.debug("ad_report_run: {}", adReportRun);
return jsonNode.set("ad_report_run", objectMapper.valueToTree(adReportRun));
} catch (Exception e) {
log.error("failed while polling for ad_report_run.id: {}", reportRunId);
throw new RuntimeException(e);
}
}).<JsonNode, Boolean>route(payload -> {
JsonNode adReportRun = payload.get("ad_report_run");
return adReportRun.get("async_percent_completion").asInt() == 100 &&
"Job Completed".equals(adReportRun.get("async_status").asText());
}, rs -> rs.subFlowMapping(true,
f -> f.transform(JsonNode.class,
source -> {
JobParametersBuilder jobParametersBuilder = new JobParametersBuilder();
jobParametersBuilder
.addString("accessToken", source.get("accessToken").asText());
jobParametersBuilder.addString("id", source.get("id").asText());
jobParametersBuilder
.addString("classifier", source.get("classifier").asText());
jobParametersBuilder
.addLong("report_run_id", source.get("report_run_id").asLong());
jobParametersBuilder
.addLong("job_request_id", source.get("job_request_id").asLong());
return jobParametersBuilder.toJobParameters();
}).channel("batch_launch_channel"))
.subFlowMapping(false,
f -> f.transform(JsonNode.class, this::serialize)
.<String>delay("delay", asyncPollInterval, c -> c.transactional()
.messageStore(jdbcMessageStore()))
.channel(adReportRunPollingChannel()))).get();
}
@SneakyThrows
private String serialize(JsonNode jsonNode) {
return objectMapper.writeValueAsString(jsonNode);
}
@SneakyThrows
private ObjectNode deserialize(String payload) {
return objectMapper.readerFor(ObjectNode.class).readValue(payload);
}
@Bean
public JdbcMessageStore jdbcMessageStore() {
JdbcMessageStore jdbcMessageStore = new JdbcMessageStore(dataSource);
return jdbcMessageStore;
}
@Bean
public JobParametersToApiParametersTransformer jobParametersToApiParametersTransformer() {
return new JobParametersToApiParametersTransformer() {
@Override
protected ApiParameters transform(JobParameters jobParameters) {
ApiParameters.ApiParametersBuilder builder = ApiParameters.builder();
MultiValueMap<String, String> multiValueMap = new LinkedMultiValueMap<>();
String level = jobParameters.getString("level");
if (!StringUtils.isEmpty(level)) {
multiValueMap.set("level", level);
}
String fields = jobParameters.getString("fields");
if (!StringUtils.isEmpty(fields)) {
multiValueMap.set("fields", fields);
}
String filter = jobParameters.getString("filter");
if (filter != null) {
try {
JsonNode jsonNode = objectMapper.readTree(filter);
if (jsonNode != null && jsonNode.isArray()) {
List<ApiFilteringParameters> filteringParametersList = new ArrayList<>();
List<ApiSingleValueFilteringParameters> singleValueFilteringParameters = new ArrayList<>();
ArrayNode arrayNode = (ArrayNode) jsonNode;
arrayNode.forEach(node -> {
String field = node.get("field").asText();
String operator = node.get("operator").asText();
if (!StringUtils.isEmpty(field) && !StringUtils.isEmpty(operator)) {
String values = node.get("values").asText();
String[] valuesArray = !StringUtils.isEmpty(values) ? values.split(",") : null;
if (valuesArray != null) {
if (valuesArray.length > 1) {
filteringParametersList.add(ApiFilteringParameters
.of(field, Operator.valueOf(operator), valuesArray));
} else {
singleValueFilteringParameters.add(ApiSingleValueFilteringParameters
.of(field, Operator.valueOf(operator), valuesArray[0]));
}
}
}
});
if (!filteringParametersList.isEmpty()) {
builder.filterings(filteringParametersList);
}
if (!singleValueFilteringParameters.isEmpty()) {
builder.filterings(singleValueFilteringParameters);
}
}
} catch (IOException e) {
throw new UncheckedIOException(e);
}
}
String start = jobParameters.getString("time_ranges.start");
String end = jobParameters.getString("time_ranges.end");
String since = jobParameters.getString("time_range.since");
String until = jobParameters.getString("time_range.until");
if (!StringUtils.isEmpty(start) && !StringUtils.isEmpty(end)) {
builder.timeRanges(ApiParameters.timeRanges(start, end));
} else if (!StringUtils.isEmpty(since) && !StringUtils.isEmpty(until)) {
builder.timeRange(new TimeRange(since, until));
}
String actionBreakdowns = jobParameters.getString("action_breakdowns");
if (!StringUtils.isEmpty(actionBreakdowns)) {
multiValueMap.set("action_breakdowns", actionBreakdowns);
}
String attributionWindows = jobParameters.getString("action_attribution_windows");
if (attributionWindows != null) {
try {
multiValueMap
.set("action_attribution_windows",
objectMapper.writeValueAsString(attributionWindows.split(",")));
} catch (JsonProcessingException e) {
e.printStackTrace();
}
}
builder.multiValueMap(multiValueMap);
String pageSize = jobParameters.getString("pageSize");
if (!StringUtils.isEmpty(pageSize)) {
builder.limit(pageSize);
}
return builder.build();
}
};
}
}
Here is how the messages flow:
1. channel[async_ads_insights] ->IntegrationFlow[amqpOutboundAsyncAdsInsights]->[AMQP]->IntegrationFlow[amqpAdsInsightsAsyncJobRequestFlow]->channel[ad_report_run_polling_channel]->IntegrationFlow[adReportRunPollingLoopFlow]-IF END LOOP->channel[batch_launch_channel] ELSE -> channel[ad_report_run_polling_channel]
2. channel[batch_launch_channel] -> IntegrationFlow[amqpOutbound]-> IntegrationFlow[amqpLaunchSpringBatchJobFlow]
3. Spring Batch Job is launched.
The exception isn't thrown immediately after both instances are started, but after a while. Launching Spring Batch jobs does succeed at first, but then launches start to fail with "A job instance already exists and is complete for...".
The job retrieves Facebook Ads results.
I would appreciate your insights into what is causing the error above.
I also have the configuration below, which does not use AMQP and works without any problem, but it only supports one instance.
@Configuration
@Conditional(SimpleBatchLaunchCondition.class)
@Slf4j
public class SimpleBatchLaunchIntegrationFlows {
@Autowired
SpringBatchLauncher batchLauncher;
@Autowired
DataSource dataSource;
@Bean(name = "batch_launch_channel")
public MessageChannel batchLaunchChannel() {
return MessageChannels.queue(jdbcChannelMessageStore(), "batch_launch_channel").get();
}
@Bean
public ChannelMessageStoreQueryProvider channelMessageStoreQueryProvider() {
return new MySqlChannelMessageStoreQueryProvider();
}
@Bean
public JdbcChannelMessageStore jdbcChannelMessageStore() {
JdbcChannelMessageStore channelMessageStore = new JdbcChannelMessageStore(dataSource);
channelMessageStore.setChannelMessageStoreQueryProvider(channelMessageStoreQueryProvider());
channelMessageStore.setUsingIdCache(true);
channelMessageStore.setPriorityEnabled(true);
return channelMessageStore;
}
@Bean
public IntegrationFlow launchSpringBatchJobFlow(#Qualifier("batch_launch_channel")
MessageChannel batchLaunchChannel) {
return IntegrationFlows.from(batchLaunchChannel)
.handle(message -> {
String jobName = (String) message.getHeaders().get("job_name");
JobParameters jobParameters = (JobParameters) message.getPayload();
batchLauncher.launchJob(jobName, jobParameters);
}, e->e.poller(Pollers.fixedRate(500).receiveTimeout(500))).get();
}
}
Refer to the Spring Batch documentation. When launching a new instance of a job, the job parameters must be unique.
A common solution is to add a dummy parameter with a UUID or similar, but Batch also provides a strategy, e.g. to increment a numeric parameter on each launch.
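For the dummy-parameter approach, a minimal sketch (applied wherever the launcher builds the JobParameters, e.g. inside your SpringBatchLauncher.launchJob) might look like this; run.uuid is just an illustrative parameter name:

// copy the incoming parameters and add a throwaway value so each launch is unique
JobParameters uniqueParameters = new JobParametersBuilder(jobParameters)
        .addString("run.uuid", UUID.randomUUID().toString())
        .toJobParameters();
jobLauncher.run(job, uniqueParameters);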
EDIT
There is a certain class of exceptions whose members are considered irrecoverable (fatal), for which it makes no sense to attempt redelivery.
Examples include MessageConversionException - if we can't convert the message the first time, we probably can't convert it on a redelivery. The ConditionalRejectingErrorHandler is the mechanism by which we detect such exceptions and cause them to be permanently rejected (and not redelivered).
Other exceptions cause the message to be redelivered by default - there is another property, defaultRequeueRejected, which can be set to false to permanently reject all failures (not recommended).
You can customize the error handler by subclassing its DefaultExceptionStrategy - override isUserCauseFatal(Throwable cause) to scan the cause tree to look for a JobInstanceAlreadyCompleteException and return true (cause.getCause().getCause() instanceof ...).
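A sketch of that customization (to be set as the error handler on the inbound adapter's listener container) could look like the following; treat it as an outline rather than the exact implementation:

@Bean
public ErrorHandler amqpErrorHandler() {
    return new ConditionalRejectingErrorHandler(
            new ConditionalRejectingErrorHandler.DefaultExceptionStrategy() {

                @Override
                protected boolean isUserCauseFatal(Throwable cause) {
                    // walk the cause tree; a completed job instance will never succeed
                    // on redelivery, so reject the message permanently
                    while (cause != null) {
                        if (cause instanceof JobInstanceAlreadyCompleteException) {
                            return true;
                        }
                        cause = cause.getCause();
                    }
                    return false;
                }
            });
}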
I think it was triggered by the "Spring Batch job already running" error.
That still indicates you have somehow received a second message with the same parameters; it's a different error because the original job is still running; that message is rejected (and requeued) but on subsequent deliveries you get the already completed exception.
So, I still say the root cause of your problem is duplicate requests, but you can avoid the behavior with a customized error handler in the channel adapter's listener container.
I suggest you log the duplicate message so you can figure out why you are getting them.
