Spring Integration migration from RabbitMQ to Redis to share application events - spring-integration

We are migrating from RabbitMQ to Redis in our microservice applications.
Here is our service activator:
@ServiceActivator(inputChannel = ApplicationEventChannelNames.REMOTE_CHANNEL)
public void handleApplicationEvent(@Header(value = ApplicationEventHeaders.APPLICATION_EVENT) final ApplicationEvent event,
                                   @Payload Object message) {
    ...
}
Initially we had a problem where we were losing the application event in the SimpleMessageConverter. We solved it by implementing a CustomRedisMessageConverter: in the fromMessage method we put the application event into the payload, and in the toMessage method we retrieve it from the payload and create new message headers containing the application event.
@Override
public Object fromMessage(Message<?> message, Class<?> targetClass) {
    if (message.getHeaders().get(ApplicationEventHeaders.APPLICATION_EVENT) != null) {
        Map<String, Object> map = new HashMap<>();
        map.put("headers", ((ApplicationEvent) message.getHeaders().get(ApplicationEventHeaders.APPLICATION_EVENT)).getName());
        map.put("payload", message.getPayload());
        GenericMessage<Map<String, Object>> msg = new GenericMessage<>(map, message.getHeaders());
        return super.fromMessage(msg, targetClass);
    }
    return super.fromMessage(message, targetClass);
}
@Override
public Message<?> toMessage(Object payload, MessageHeaders headers) {
    try {
        final Map<String, ?> message = new ObjectMapper().readValue((String) payload, new TypeReference<Map<String, ?>>() {});
        if (message.get("headers") != null) {
            final Map<String, Object> messageHeaders = new HashMap<>(headers);
            messageHeaders.put(ApplicationEventHeaders.APPLICATION_EVENT, new ApplicationEvent((String) message.get("headers")));
            return super.toMessage(message.get("payload"), new MessageHeaders(messageHeaders));
        }
    } catch (JsonProcessingException exception) {
        /* Intentionally left blank */
    }
    return super.toMessage(payload, headers);
}
We are wondering if there is a better approach for doing this.
Lastly, the payload in the service activator arrives as a LinkedHashMap, but we want it to be an object of its original type. With RabbitMQ this was handled automatically.
Is there any way to do the same with Redis? Or do we have to use headers to keep track of the payload type and convert it back into an object manually?
UPDATE - REDIS Configuration
@Bean
public RedisInboundChannelAdapter applicationEventInboundChannelAdapter(@Value(value = "${com.xxx.xxx.xxx.integration.spring.topic}") String topic,
        MessageChannel applicationEventRemoteChannel,
        RedisConnectionFactory connectionFactory) {
    final RedisInboundChannelAdapter inboundChannelAdapter = new RedisInboundChannelAdapter(connectionFactory);
    inboundChannelAdapter.setTopics(topic);
    inboundChannelAdapter.setOutputChannel(applicationEventRemoteChannel);
    inboundChannelAdapter.setErrorChannel(errorChannel());
    inboundChannelAdapter.setMessageConverter(new CustomRedisMessageConverter());
    return inboundChannelAdapter;
}
@ServiceActivator(inputChannel = "errorChannel")
public void processError(MessageHandlingException exception) {
    try {
        logger.error(
                "Could not process {}, got exception: {}",
                exception.getFailedMessage().getPayload(),
                exception.getMessage());
        logger.error(
                ExceptionUtils.readStackTrace(exception));
    } catch (Throwable throwable) {
        logger.error(
                "Got {} during processing with message: {} ",
                MessageHandlingException.class.getSimpleName(),
                exception);
    }
}
@Bean
@ServiceActivator(inputChannel = ApplicationEventChannelNames.LOCAL_CHANNEL)
public RedisPublishingMessageHandler redisPublishingMessageHandler(@Value(value = "${com.xxx.xxx.xxx.integration.spring.topic}") String topic,
        RedisConnectionFactory redisConnectionFactory) {
    final RedisPublishingMessageHandler redisPublishingMessageHandler = new RedisPublishingMessageHandler(redisConnectionFactory);
    redisPublishingMessageHandler.setTopic(topic);
    redisPublishingMessageHandler.setSerializer(new Jackson2JsonRedisSerializer<>(String.class));
    redisPublishingMessageHandler.setMessageConverter(new CustomRedisMessageConverter());
    return redisPublishingMessageHandler;
}
/*
 * MessageChannel
 */
@Bean
public MessageChannel errorChannel() {
    return new DirectChannel();
}

Redis does not support message headers, so you have to embed them into the body. See EmbeddedJsonHeadersMessageMapper, which can be supplied to the org.springframework.integration.support.converter.SimpleMessageConverter on both sides.
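For illustration, here is a minimal sketch of that wiring. It assumes the SimpleMessageConverter constructor that accepts an inbound and an outbound message mapper (verify against your Spring Integration version), and the header pattern shown is only an example:

import org.springframework.integration.support.converter.SimpleMessageConverter;
import org.springframework.integration.support.json.EmbeddedJsonHeadersMessageMapper;

@Bean
public SimpleMessageConverter redisMessageConverter() {
    // Embeds the selected headers into a JSON envelope around the payload so they
    // survive the trip through Redis, which has no notion of message headers.
    EmbeddedJsonHeadersMessageMapper mapper =
            new EmbeddedJsonHeadersMessageMapper(ApplicationEventHeaders.APPLICATION_EVENT);
    // Assumption: the same mapper serves as both the inbound and the outbound message mapper.
    return new SimpleMessageConverter(mapper, mapper);
}

Set this converter on both the RedisInboundChannelAdapter and the RedisPublishingMessageHandler in place of the CustomRedisMessageConverter, so the headers travel inside the JSON body and are restored on the receiving side.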

Related

How to get Azure Service Bus message id when sending a message to a topic using Spring Integration

After I send a message to a topic on Azure Service Bus using Spring Integration I would like to get the message id Azure generates. I can do this using JMS. Is there a way to do this using Spring Integration? The code I'm working with:
@Service
public class ServiceBusDemo {

    private static final String OUTPUT_CHANNEL = "topic.output";
    private static final String TOPIC_NAME = "my_topic";

    @Autowired
    TopicOutboundGateway messagingGateway;

    public String send(String message) {
        // How can I get the Azure message id after sending here?
        this.messagingGateway.send(message);
        return message;
    }

    @Bean
    @ServiceActivator(inputChannel = OUTPUT_CHANNEL)
    public MessageHandler topicMessageSender(ServiceBusTopicOperation topicOperation) {
        DefaultMessageHandler handler = new DefaultMessageHandler(TOPIC_NAME, topicOperation);
        handler.setSendCallback(new ListenableFutureCallback<>() {
            @Override
            public void onSuccess(Void result) {
                System.out.println("Message was sent successfully to service bus.");
            }

            @Override
            public void onFailure(Throwable ex) {
                System.out.println("There was an error sending the message to service bus.");
            }
        });
        return handler;
    }

    @MessagingGateway(defaultRequestChannel = OUTPUT_CHANNEL)
    public interface TopicOutboundGateway {
        void send(String text);
    }
}
You could use a ChannelInterceptor to get the message headers:
public class CustomChannelInterceptor implements ChannelInterceptor {

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        // The key of the message-id header is not stable; add logic here to check which header key should be used.
        // ref: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/azure-spring-cloud-starter-servicebus#support-for-service-bus-message-headers-and-properties
        String messageId = message.getHeaders().get("message-id-header-key").toString();
        return ChannelInterceptor.super.preSend(message, channel);
    }
}
Then, in the configuration, set this interceptor on your channel:
@Bean(name = OUTPUT_CHANNEL)
public BroadcastCapableChannel pubSubChannel() {
    PublishSubscribeChannel channel = new PublishSubscribeChannel();
    channel.setInterceptors(Arrays.asList(new CustomChannelInterceptor()));
    return channel;
}

Spring Integration redelivery via errorChannel rethrow with JmsTransactionManager doesn't honor maximumRedeliveries

Related to SO question: Spring Integration Java DSL using JMS retry/redelivery
Using a transacted poller and JmsTransactionManager on a connectionFactory with maximumRedeliveries set to 3 results in a doubling of the actual redelivery attempts.
How can I get this to honor the redelivery settings of the connection factory?
My connectionFactory is built as:
@Bean(name = "spring-int-connection-factory")
ActiveMQConnectionFactory jmsConnectionFactory() {
    return buildConnectionFactory(
            brokerUrl,
            DELAY_2_SECS,
            MAX_REDELIVERIES,
            "spring-int");
}

public static ActiveMQConnectionFactory buildConnectionFactory(String brokerUrl, Long retryDelay, Integer maxRedeliveries, String clientIdPrefix) {
    ActiveMQConnectionFactory amqcf = new ActiveMQConnectionFactory();
    amqcf.setBrokerURL(brokerUrl);
    amqcf.setClientIDPrefix(clientIdPrefix);
    if (maxRedeliveries != null) {
        if (retryDelay == null) {
            retryDelay = 500L;
        }
        RedeliveryPolicy rp = new org.apache.activemq.RedeliveryPolicy();
        rp.setInitialRedeliveryDelay(retryDelay);
        rp.setRedeliveryDelay(retryDelay);
        rp.setMaximumRedeliveries(maxRedeliveries);
    }
    return amqcf;
}
My flow with the poller is as follows:
@Bean
public IntegrationFlow flow2(@Qualifier("spring-int-connection-factory") ConnectionFactory connectionFactory) {
    IntegrationFlow flow = IntegrationFlows.from(
            Jms.inboundAdapter(connectionFactory)
                    .configureJmsTemplate(t -> t.receiveTimeout(1000).sessionTransacted(true))
                    .destination(INPUT_DIRECT_QUEUE),
            e -> e.poller(Pollers
                    .fixedDelay(5000)
                    .transactional()
                    .errorChannel("customErrorChannel")
                    .maxMessagesPerPoll(2))
    ).handle(this.msgHandler).get();
    return flow;
}
My errorChannel handler simply re-throws, which causes JMS redelivery to happen.
When I run this with the handler set to always throw an exception, I see that the message handler actually receives the message 7 times (1 initial and 6 redeliveries).
I expected only 3 redeliveries according to my connectionFactory config.
Any ideas what is causing the doubling of attempts and how to mitigate it?
This works fine for me - stops at 4...
@SpringBootApplication
public class So51792909Application {

    private static final Logger logger = LoggerFactory.getLogger(So51792909Application.class);

    public static void main(String[] args) {
        SpringApplication.run(So51792909Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(JmsTemplate template) {
        return args -> {
            for (int i = 0; i < 1; i++) {
                template.convertAndSend("foo", "test");
            }
        };
    }

    @Bean
    public IntegrationFlow flow(ConnectionFactory connectionFactory) {
        return IntegrationFlows.from(Jms.inboundAdapter(connectionFactory)
                        .destination("foo"), e -> e
                        .poller(Pollers
                                .fixedDelay(5000)
                                .transactional()
                                .maxMessagesPerPoll(2)))
                .handle((p, h) -> {
                    System.out.println(h.get("JMSXDeliveryCount"));
                    try {
                        Thread.sleep(2000);
                    }
                    catch (InterruptedException e1) {
                        Thread.currentThread().interrupt();
                    }
                    throw new RuntimeException("foo");
                })
                .get();
    }

    @Bean
    public JmsTransactionManager transactionManager(ConnectionFactory cf) {
        return new JmsTransactionManager(cf);
    }

    @Bean
    public ActiveMQConnectionFactory amqCF() {
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
        RedeliveryPolicy rp = new RedeliveryPolicy();
        rp.setMaximumRedeliveries(3);
        cf.setRedeliveryPolicy(rp);
        return cf;
    }

    public CachingConnectionFactory connectionFactory() {
        return new CachingConnectionFactory(amqCF());
    }

    @JmsListener(destination = "ActiveMQ.DLQ")
    public void listen(String in) {
        logger.info(in);
    }
}

ServiceActivator does not receive message from ImapIdleChannelAdapter

ServiceActivator does not receive messages from ImapIdleChannelAdapter...
JavaMail logs a successful FETCH, but the MIME messages do not get delivered to the SA endpoint. I want to understand what is wrong in my code.
A7 FETCH 1:35 (ENVELOPE INTERNALDATE RFC822.SIZE FLAGS BODYSTRUCTURE)
* 1 FETCH (ENVELOPE ("Fri....
Code snippet below:
@Autowired
EmailConfig emailCfg;

@Bean
public SubscribableChannel mailChannel() {
    return MessageChannels.direct().get();
}

@Bean
public ImapIdleChannelAdapter getMailAdapter() {
    ImapMailReceiver mailReceiver = new ImapMailReceiver(emailCfg.getImapUrl());
    mailReceiver.setJavaMailProperties(javaMailProperties());
    mailReceiver.setShouldDeleteMessages(false);
    mailReceiver.setShouldMarkMessagesAsRead(true);
    ImapIdleChannelAdapter imapIdleChannelAdapter = new ImapIdleChannelAdapter(mailReceiver);
    imapIdleChannelAdapter.setOutputChannel(mailChannel());
    imapIdleChannelAdapter.setAutoStartup(true);
    imapIdleChannelAdapter.afterPropertiesSet();
    return imapIdleChannelAdapter;
}

@ServiceActivator(inputChannel = "mailChannel")
public void receive(String mail) {
    log.warn(mail);
}

private Properties javaMailProperties() {
    Properties javaMailProperties = new Properties();
    javaMailProperties.setProperty("mail.imap.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
    javaMailProperties.setProperty("mail.imap.socketFactory.fallback", "false");
    javaMailProperties.setProperty("mail.store.protocol", "imaps");
    javaMailProperties.setProperty("mail.debug", "true");
    javaMailProperties.setProperty("mail.imap.ssl", "true");
    return javaMailProperties;
}
The problem was due to wrong bean initialization. Full version that works OK:
@Slf4j
@Configuration
@EnableIntegration
public class MyMailAdapter {

    @Autowired
    EmailConfig emailCfg;

    @Bean
    public SubscribableChannel mailChannel() {
        log.info("Channel ready");
        return MessageChannels.direct().get();
    }

    @Bean
    public ImapMailReceiver receiver() {
        ImapMailReceiver mailReceiver = new ImapMailReceiver(emailCfg.getImapUrl());
        mailReceiver.setJavaMailProperties(javaMailProperties());
        mailReceiver.setShouldDeleteMessages(false);
        mailReceiver.setShouldMarkMessagesAsRead(true);
        return mailReceiver;
    }

    @Bean
    public ImapIdleChannelAdapter adapter() {
        ImapIdleChannelAdapter imapIdleChannelAdapter = new ImapIdleChannelAdapter(receiver());
        imapIdleChannelAdapter.setOutputChannel(mailChannel());
        imapIdleChannelAdapter.afterPropertiesSet();
        return imapIdleChannelAdapter;
    }

    @ServiceActivator(inputChannel = "mailChannel")
    public void receive(Message<MimeMessage> mail) throws MessagingException {
        log.info(mail.getPayload().toString());
    }

    private Properties javaMailProperties() {
        Properties javaMailProperties = new Properties();
        javaMailProperties.setProperty("mail.imap.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
        javaMailProperties.setProperty("mail.imap.socketFactory.fallback", "false");
        javaMailProperties.setProperty("mail.store.protocol", "imaps");
        javaMailProperties.setProperty("mail.debug", "true");
        javaMailProperties.setProperty("mail.imap.ssl", "true");
        return javaMailProperties;
    }
}
I don't know exactly what's wrong with your code, but I can suggest a few approaches that may help.
First, I suggest using the Java DSL with Java-based configuration. It gives you a nice way to specify the flow of your integration application directly (and to avoid simple mistakes). For example, for a splitter and a service activator:
@Bean
public IntegrationFlow yourFlow(AbstractMessageSplitter splitter,
                                MessageHandler handler) {
    return
        IntegrationFlows
            .from(CHANNEL)
            .split(splitter)
            .handle(handler).get();
}
Second, it's generally a bad idea to narrow the message type to String directly. Try something like this (why String?):
@ServiceActivator(inputChannel = "mailChannel")
public void receive(Message<?> message) {
    /* (String) message.getPayload() */
}
Maybe that's not the case here, but it's worth checking.

Spring Integration - Handling stale sftp sessions

I have implemented the following scenario:
A queueChannel holding Messages in the form of byte[]
A MessageHandler, polling the queue channel and uploading files over sftp
A Transformer, listening to the errorChannel and sending the payload extracted from the failed message back to the queueChannel (intended as an error handler for failed messages so nothing gets lost)
If the sftp server is online, everything works as expected.
If the sftp server is down, the error message that arrives at the transformer is:
org.springframework.messaging.MessagingException: Failed to obtain pooled item; nested exception is java.lang.IllegalStateException: failed to create SFTP Session
The transformer cannot do anything with this, since the payload's failedMessage is null, and the transformer itself throws an exception. The transformer loses the message.
How can I configure my flow so that the transformer gets the right message, with the payload of the file that failed to upload?
My Configuration:
@Bean
public MessageChannel toSftpChannel() {
    final QueueChannel channel = new QueueChannel();
    channel.setLoggingEnabled(true);
    return channel;
}
@Bean
public MessageChannel toSplitter() {
    return new PublishSubscribeChannel();
}

@Bean
@ServiceActivator(inputChannel = "toSftpChannel", poller = @Poller(fixedDelay = "10000", maxMessagesPerPoll = "1"))
public MessageHandler handler() {
    final SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
    handler.setRemoteDirectoryExpression(new LiteralExpression(sftpRemoteDirectory));
    handler.setFileNameGenerator(message -> {
        if (message.getPayload() instanceof byte[]) {
            return (String) message.getHeaders().get("name");
        } else {
            throw new IllegalArgumentException("byte[] expected in Payload!");
        }
    });
    return handler;
}

@Bean
public SessionFactory<LsEntry> sftpSessionFactory() {
    final DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    final Properties jschProps = new Properties();
    jschProps.put("StrictHostKeyChecking", "no");
    jschProps.put("PreferredAuthentications", "publickey,password");
    factory.setSessionConfig(jschProps);
    factory.setHost(sftpHost);
    factory.setPort(sftpPort);
    factory.setUser(sftpUser);
    if (sftpPrivateKey != null) {
        factory.setPrivateKey(sftpPrivateKey);
        factory.setPrivateKeyPassphrase(sftpPrivateKeyPassphrase);
    } else {
        factory.setPassword(sftpPasword);
    }
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}
@Bean
@Splitter(inputChannel = "toSplitter")
public DmsDocumentMessageSplitter splitter() {
    final DmsDocumentMessageSplitter splitter = new DmsDocumentMessageSplitter();
    splitter.setOutputChannelName("toSftpChannel");
    return splitter;
}

@Transformer(inputChannel = "errorChannel", outputChannel = "toSftpChannel")
public Message<?> errorChannelHandler(ErrorMessage errorMessage) throws RuntimeException {
    Message<?> failedMessage = ((MessagingException) errorMessage.getPayload())
            .getFailedMessage();
    return MessageBuilder.withPayload(failedMessage)
            .copyHeadersIfAbsent(failedMessage.getHeaders())
            .build();
}

@MessagingGateway
public interface UploadGateway {
    @Gateway(requestChannel = "toSplitter")
    void upload(@Payload List<byte[]> payload, @Header("header") DmsDocumentUploadRequestHeader header);
}
Thanks..
Update
@Bean(PollerMetadata.DEFAULT_POLLER)
@Transactional(propagation = Propagation.REQUIRED, isolation = Isolation.READ_COMMITTED)
PollerMetadata poller() {
    return Pollers
            .fixedRate(5000)
            .maxMessagesPerPoll(1)
            .receiveTimeout(500)
            .taskExecutor(taskExecutor())
            .transactionSynchronizationFactory(transactionSynchronizationFactory())
            .get();
}

@Bean
@ServiceActivator(inputChannel = "toMessageStore", poller = @Poller(PollerMetadata.DEFAULT_POLLER))
public BridgeHandler bridge() {
    BridgeHandler bridgeHandler = new BridgeHandler();
    bridgeHandler.setOutputChannelName("toSftpChannel");
    return bridgeHandler;
}
The null failedMessage is a bug; see INT-4421.
I would not recommend using a QueueChannel for this scenario. If you use a direct channel, you can configure a retry advice to attempt redeliveries. When the retries are exhausted (if so configured), the exception will be thrown back to the calling thread.
Add the advice to the SftpMessageHandler's adviceChain property.
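As an example, here is a minimal sketch of a retry advice wired in through the @ServiceActivator annotation's adviceChain attribute, assuming toSftpChannel is switched to a DirectChannel (so no poller is needed); the retryAdvice bean name and the retry settings are only illustrative:

@Bean
public RequestHandlerRetryAdvice retryAdvice() {
    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3)); // illustrative: at most 3 attempts
    advice.setRetryTemplate(retryTemplate);
    return advice;
}

@Bean
@ServiceActivator(inputChannel = "toSftpChannel", adviceChain = "retryAdvice")
public MessageHandler handler() {
    final SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
    // ... same handler configuration as above; the advice retries a failed upload
    // and, once retries are exhausted, lets the exception propagate to the caller.
    return handler;
}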
EDIT
You can work around the "missing" failedMessage by inserting a bridge between the pollable channel and the sftp adapter:
@Bean
@ServiceActivator(inputChannel = "toSftpChannel", poller = @Poller(fixedDelay = "5000", maxMessagesPerPoll = "1"))
public BridgeHandler bridge() {
    BridgeHandler bridgeHandler = new BridgeHandler();
    bridgeHandler.setOutputChannelName("toRealSftpChannel");
    return bridgeHandler;
}

@Bean
@ServiceActivator(inputChannel = "toRealSftpChannel")
public MessageHandler handler() {
    final SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
    handler.setRemoteDirectoryExpression(new LiteralExpression("foo"));
    handler.setFileNameGenerator(message -> {
        if (message.getPayload() instanceof byte[]) {
            return (String) message.getHeaders().get("name");
        }
        else {
            throw new IllegalArgumentException("byte[] expected in Payload!");
        }
    });
    return handler;
}

Spring Integration TcpInboundGateway sending conditional reply

I have configured a TcpInboundGateway to receive requests from clients; my configuration is below. With this configuration every client request is responded to, but what I want is for a response to be sent back only if a certain condition is true, not every time. What changes need to be made to the configuration?
@SpringBootApplication
@IntegrationComponentScan
public class SpringIntegrationApplication extends SpringBootServletInitializer {

    public static void main(String[] args) throws IOException {
        ConfigurableApplicationContext ctx = SpringApplication.run(SpringIntegrationApplication.class, args);
        System.in.read();
        ctx.close();
    }

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(SpringIntegrationApplication.class);
    }

    private static Class<SpringIntegrationApplication> applicationClass = SpringIntegrationApplication.class;

    @Bean
    TcpNetServerConnectionFactory cf() {
        TcpNetServerConnectionFactory connectionFactory = new TcpNetServerConnectionFactory(8765);
        return connectionFactory;
    }

    @Bean
    TcpInboundGateway tcpGate() {
        TcpInboundGateway gateway = new TcpInboundGateway();
        gateway.setConnectionFactory(cf());
        gateway.setRequestChannel(requestChannel());
        return gateway;
    }

    @Bean
    public MessageChannel requestChannel() {
        return new DirectChannel();
    }

    @MessageEndpoint
    public class Echo {
        @ServiceActivator(inputChannel = "requestChannel")
        public byte[] echo(byte[] in, @SuppressWarnings("deprecation") @Header("ip_address") String ip) {
            byte[] rawbytes = gosDataSerivce.byteArrayToHex(in, ip); // Process bytes and return the result
            return rawbytes;
        }
    }
}
Not sure where your problem is, but you can simply return null from your echo() method. In that case the ServiceActivatingHandler doesn't mind and stops its work, because requiresReply = false.
On the other side, the TcpInboundGateway doesn't care about a null either:
Message<?> reply = this.sendAndReceiveMessage(message);
if (reply == null) {
    if (logger.isDebugEnabled()) {
        logger.debug("null reply received for " + message + " nothing to send");
    }
    return false;
}
That is possible because of the replyTimeout option on the MessagingTemplate used in the background. By default it is 1 second; after that, sendAndReceiveMessage() just returns null to the caller.
You can adjust this option on the TcpInboundGateway.
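Putting both pieces together, a minimal sketch (shouldReply() is a hypothetical condition, and the timeout value is only illustrative):

@ServiceActivator(inputChannel = "requestChannel")
public byte[] echo(byte[] in, @Header("ip_address") String ip) {
    byte[] rawbytes = gosDataSerivce.byteArrayToHex(in, ip);
    // Returning null means "no reply": the gateway simply sends nothing back to the client.
    return shouldReply(rawbytes) ? rawbytes : null;
}

@Bean
TcpInboundGateway tcpGate() {
    TcpInboundGateway gateway = new TcpInboundGateway();
    gateway.setConnectionFactory(cf());
    gateway.setRequestChannel(requestChannel());
    gateway.setReplyTimeout(2000); // how long to wait for a reply before giving up (1 second by default)
    return gateway;
}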
