Spring Integration - Request-Reply Implementation

I am new to Spring Integration and new to Stack Overflow. I am looking for some help in understanding Spring Integration as it relates to a request-reply pattern. From reading on the web, I am thinking that I should be using a Service Activator to enable this type of use case.
I am using JMS to facilitate the sending and receiving of XML-based messages. Our underlying implementation is IBM WebSphere MQ.
I am also using Spring Boot (version 1.3.6.RELEASE) and attempting to use a pure annotation-based configuration approach (if that is possible). I have searched the web and found some examples, but nothing so far that helps me understand how it all fits together. The Spring Integration documentation is excellent, but I am still struggling with how all the pieces fit together. I apologize in advance if there is something out there that I missed. I treat posting here as a last resort.
Here is what I have for my configuration:
package com.daluga.spring.integration.configuration;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.mq.jms.MQQueue;
import com.ibm.msg.client.wmq.WMQConstants;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.IntegrationComponentScan;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.Destination;
import javax.jms.JMSException;
//import com.ibm.msg.client.services.Trace;
@Configuration
public class MQConfiguration {
private static final Logger LOGGER = LoggerFactory.getLogger(MQConfiguration.class);
@Value("${host-name}")
private String hostName;
@Value("${port}")
private int port;
@Value("${channel}")
private String channel;
@Value("${time-to-live}")
private int timeToLive;
@Autowired
@Qualifier("MQConnectionFactory")
ConnectionFactory connectionFactory;
@Bean(name = "jmsTemplate")
public JmsTemplate provideJmsTemplate() {
JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory);
jmsTemplate.setExplicitQosEnabled(true);
jmsTemplate.setTimeToLive(timeToLive);
jmsTemplate.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
return jmsTemplate;
}
#Bean(name = "MQConnectionFactory")
public ConnectionFactory connectionFactory() {
CachingConnectionFactory ccf = new CachingConnectionFactory();
//Trace.setOn();
try {
MQConnectionFactory mqcf = new MQConnectionFactory();
mqcf.setHostName(hostName);
mqcf.setPort(port);
mqcf.setChannel(channel);
mqcf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
ccf.setTargetConnectionFactory(mqcf);
ccf.setSessionCacheSize(2);
} catch (JMSException e) {
throw new RuntimeException(e);
}
return ccf;
}
#Bean(name = "requestQueue")
public Destination createRequestQueue() {
Destination queue = null;
try {
queue = new MQQueue("REQUEST.QUEUE");
} catch (JMSException e) {
throw new RuntimeException(e);
}
return queue;
}
#Bean(name = "replyQueue")
public Destination createReplyQueue() {
Destination queue = null;
try {
queue = new MQQueue("REPLY.QUEUE");
} catch (JMSException e) {
throw new RuntimeException(e);
}
return queue;
}
#Bean(name = "requestChannel")
public QueueChannel createRequestChannel() {
QueueChannel channel = new QueueChannel();
return channel;
}
#Bean(name = "replyChannel")
public QueueChannel createReplyChannel() {
QueueChannel channel = new QueueChannel();
return channel;
}
}
And here is my Service class:
package com.daluga.spring.integration.service;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.stereotype.Service;
@Service
public class MyRequestReplyService {
private static final Logger LOGGER = LoggerFactory.getLogger(MyRequestReplyService.class);
@ServiceActivator(inputChannel = "replyChannel")
public void sendAndReceive(String requestPayload) {
// How to get replyPayload
}
}
So, at this point, I am not quite sure how to glue all this together. I don't understand how to wire my request and reply queues to the service activator to make this all work.
The service I am calling (JMS/WebSphere MQ based) uses the typical message ID and correlation ID so that I can properly tie the request to the corresponding response.
Can anyone provide me any guidance on how to get this to work? Please let me know what additional information I can provide to make this clear.
Thanks in advance for your help!
Dan

Gateways provide request/reply semantics.
Instead of using a JmsTemplate directly, you should be using Spring Integration's built-in JMS Support.
@Bean
@ServiceActivator(inputChannel="requestChannel")
public MessageHandler jmsOutGateway() {
JmsOutboundGateway outGateway = new JmsOutboundGateway();
// set properties
outGateway.setOutputChannel(replyChannel());
return outGateway;
}
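To make that concrete, here is a minimal sketch of what the "// set properties" part could look like, wired against the beans from your configuration (the connection factory, REQUEST.QUEUE and REPLY.QUEUE). The setters are the standard ones on spring-integration-jms's JmsOutboundGateway; the RequestReplyGateway interface is a hypothetical entry point, and this assumes requestChannel is a DirectChannel (with your QueueChannel definition you would also need a poller on the service activator):
// JmsOutboundGateway comes from spring-integration-jms
// (org.springframework.integration.jms.JmsOutboundGateway)
@Bean
@ServiceActivator(inputChannel = "requestChannel")
public MessageHandler jmsOutGateway(
        @Qualifier("MQConnectionFactory") ConnectionFactory connectionFactory,
        @Qualifier("requestQueue") Destination requestQueue,
        @Qualifier("replyQueue") Destination replyQueue,
        @Qualifier("replyChannel") MessageChannel replyChannel) {
    JmsOutboundGateway outGateway = new JmsOutboundGateway();
    outGateway.setConnectionFactory(connectionFactory);
    outGateway.setRequestDestination(requestQueue);
    outGateway.setReplyDestination(replyQueue);
    // correlate replies on JMSCorrelationID, matching the service you are calling
    outGateway.setCorrelationKey("JMSCorrelationID");
    outGateway.setReceiveTimeout(30000);
    outGateway.setOutputChannel(replyChannel);
    return outGateway;
}

// hypothetical gateway interface: calling send() publishes to requestChannel and
// blocks until the correlated reply comes back from the outbound gateway
@MessagingGateway(defaultRequestChannel = "requestChannel")
public interface RequestReplyGateway {
    String send(String requestXml);
}
With something like this in place, the gateway interface already gives you a synchronous request/reply call, so the service activator on replyChannel is only needed if you want extra processing of the reply.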
If you want to roll your own, change the service activator method to return a reply type and use one of the template's sendAndReceive() or convertSendAndReceive() methods.
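A rough sketch of that roll-your-own variant, using the jmsTemplate, requestQueue and replyQueue beans from the question (the UUID-based correlation and the TextMessage cast are assumptions about your message format, and you would want a receiveTimeout configured on the template, since receiveSelected() otherwise blocks indefinitely). Note it uses convertAndSend() plus receiveSelected() rather than sendAndReceive(), because sendAndReceive() creates a temporary reply queue, whereas the service here replies to a fixed REPLY.QUEUE using the correlation ID:
@ServiceActivator(inputChannel = "requestChannel", outputChannel = "replyChannel")
public String sendAndReceive(String requestPayload) throws JMSException {
    final String correlationId = UUID.randomUUID().toString();
    // send the request with a correlation ID the remote service will echo back
    jmsTemplate.convertAndSend(requestQueue, requestPayload, message -> {
        message.setJMSCorrelationID(correlationId);
        return message;
    });
    // block until the correlated reply arrives (or the template's receiveTimeout expires)
    javax.jms.Message reply = jmsTemplate.receiveSelected(replyQueue,
            "JMSCorrelationID = '" + correlationId + "'");
    return reply == null ? null : ((TextMessage) reply).getText();
}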
The sample app uses XML configuration but should provide some additional guidance.

Related

ActiveMQ STOMP over Websockets Remote Host name

I'm trying to connect to an ActiveMQ instance from node.js using STOMP.js which connects via STOMP over websockets. My broker has a security policy enforced by a BrokerFilter:
package com.mycompany.queues.security;
import com.mycompany.domain.businessObjects.User;
import com.mycompany.domain.services.UserService;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.activemq.broker.Broker;
import org.apache.activemq.broker.BrokerFilter;
import org.apache.activemq.broker.ConnectionContext;
import org.apache.activemq.broker.region.Subscription;
import org.apache.activemq.command.ConnectionInfo;
import org.apache.activemq.command.ConsumerInfo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class BrokerAuthentication extends BrokerFilter {
private static final Logger log = LoggerFactory.getLogger(BrokerAuthentication.class);
private UserService userService;
private List<String> noAuthIPs;
private static final Pattern IP_PATTERN = Pattern.compile(".*://([0-9A-Za-z\\.]*).*");
private static final Pattern CUSTOMER_ID_PATTERN = Pattern.compile(".*customer_id\\s*=\\s*(\\d+)\\s*.*");
public BrokerAuthentication(Broker broker, UserService userService, List<String> noAuthIPs) {
super(broker);
this.userService = userService;
this.noAuthIPs = noAuthIPs;
}
@Override
public void addConnection(ConnectionContext context, ConnectionInfo info) throws Exception {
if (requiresAuth(context)) {
//...
}
super.addConnection(context, info);
}
//...
private boolean requiresAuth(ConnectionContext context) {
String remoteAddress = context.getConnection().getRemoteAddress();
Matcher matcher = IP_PATTERN.matcher(remoteAddress);
if (matcher.matches()) {
String ip = matcher.group(1);
if (noAuthIPs.contains(ip)) {
return false;
}
} else {
log.info("IP not in no auth list " + remoteAddress);
}
return true;
}
}
where I've omitted some irrelevant stuff. noAuthIPs is set in a config XML file and should include localhost.
When I attempt to connect to the broker from the same machine, I'm getting this error logged by requiresAuth:
IP not in no auth list StompSocket_814158251
Digging through the ActiveMQ source code, it seems there are 29 different implementations of the Connection interface, but I'm having trouble finding one that could possibly give me StompSocket_814158251 as the remote address.
I've tried grepping for StompSocket in the node library on GitHub, and drew a blank.
I can't just add that specific string to my "allowed hosts" because of the random numbers at the end, and it's obviously not secure to try and add some catch-all workaround like matching StompSocket against a regex just because I don't understand it.
Where is this weird remote address coming from, and how can I configure my auth around this behaviour?
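(As a sanity check, the string clearly can't match IP_PATTERN, which expects a ://-style address; that's why requiresAuth() falls into the else branch and keeps requiring auth. The tcp:// address below is just a made-up example.)
Pattern ipPattern = Pattern.compile(".*://([0-9A-Za-z\\.]*).*");
Matcher tcp = ipPattern.matcher("tcp://127.0.0.1:54321"); // hypothetical TCP remote address
System.out.println(tcp.matches() ? tcp.group(1) : "no match"); // prints 127.0.0.1
Matcher stomp = ipPattern.matcher("StompSocket_814158251");
System.out.println(stomp.matches()); // false, so requiresAuth() logs and returns true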
Thanks in advance for any help.
EDIT:
My ActiveMQ version is 5.11.1
My connection configuration for the node Stomp.js client:
const { AMQ_HOST, AMQ_PORT, AMQ_USERNAME, AMQ_PWD } = process.env;
//...
new Client({
brokerURL: `ws://${AMQ_HOST}:${AMQ_PORT}/stomp`,
connectHeaders: {
login: AMQ_USERNAME,
passcode: AMQ_PWD
},
debug: function (str) {
logger.info(str);
},
reconnectDelay: 2,
heartbeatIncoming: 4000,
heartbeatOutgoing: 4000
});
where in my .env file I have
AMQ_HOST = localhost
AMQ_PORT = 61614

How to configure Message Bus In Liferay 7?

I want to use Liferay Message bus in DXP. I have written the following code.
DemoSender.java
package demo.sender.portlet;
import demo.sender.constants.DemoSenderPortletKeys;
import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;
import com.liferay.portal.kernel.messaging.Message;
import com.liferay.portal.kernel.messaging.MessageBus;
import com.liferay.portal.kernel.messaging.MessageBusUtil;
import com.liferay.portal.kernel.portlet.bridges.mvc.MVCPortlet;
import javax.portlet.ActionRequest;
import javax.portlet.ActionResponse;
import javax.portlet.Portlet;
import org.osgi.framework.BundleContext;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
/**
* @author parth.ghiya
*/
@Component(
immediate = true,
property = {
"com.liferay.portlet.display-category=category.sample",
"com.liferay.portlet.instanceable=true",
"javax.portlet.display-name=demo-sender Portlet",
"javax.portlet.init-param.template-path=/",
"javax.portlet.init-param.view-template=/view.jsp",
"javax.portlet.name=" + DemoSenderPortletKeys.DemoSender,
"javax.portlet.resource-bundle=content.Language",
"javax.portlet.security-role-ref=power-user,user"
},
service = Portlet.class
)
public class DemoSenderPortlet extends MVCPortlet {
@Activate
protected void activate(BundleContext bundleContext) {
_bundleContext = bundleContext;
}
public void sendMessage(
ActionRequest actionRequest, ActionResponse actionResponse) {
if (_log.isInfoEnabled()) {
_log.info("Sending message to DE Echo service");
}
Message message = new Message();
message.setDestinationName("MyEchoDestination");
message.setPayload("Hello World!");
message.setResponseDestinationName("MyEchoResponse");
_messageBus.sendMessage(message.getDestinationName(), message);
}
private static final Log _log = LogFactoryUtil.getLog(DemoSenderPortlet.class);
private BundleContext _bundleContext;
@Reference
private MessageBus _messageBus;
}
DemoReceiver.java
package demo.receiver.portlet;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;
import com.liferay.portal.kernel.messaging.BaseMessageListener;
import com.liferay.portal.kernel.messaging.Message;
import com.liferay.portal.kernel.messaging.MessageBus;
import com.liferay.portal.kernel.messaging.MessageListener;
@Component(
immediate = true, property = {"destination.name=MyEchoDestination"},
service = MessageListener.class
)
public class DemoReceiverPortlet extends BaseMessageListener {
@Override
protected void doReceive(Message message) throws Exception {
if (_log.isInfoEnabled()) {
_log.info("Received: " + message);
}
String payload = (String)message.getPayload();
if (_log.isInfoEnabled()) {
_log.info("Message payload: " + payload);
}
/*
String responseDestinationName = message.getResponseDestinationName();
if ((responseDestinationName != null) &&
(responseDestinationName.length() > 0)) {
Message responseMessage = new Message();
responseMessage.setDestinationName(responseDestinationName);
responseMessage.setResponseId(message.getResponseId());
//This is just for demo purposes
responseMessage.setPayload(payload);
_messageBus.sendMessage(
message.getResponseDestinationName(), responseMessage);
}
*/
}
private static final Log _log = LogFactoryUtil.getLog(DemoReceiverPortlet.class);
@Reference
private volatile MessageBus _messageBus;
}
The problem is that my doReceive method is never getting called.
What configuration needs to be further added?
Regards
P.S.: in DemoSender, I send a message on a button click.
Edit # 1
I added the configurator code as follows.
package demo.receiver.portlet;
import java.util.Dictionary;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;
import org.osgi.service.component.ComponentContext;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Reference;
import com.liferay.portal.kernel.concurrent.DiscardOldestPolicy;
import com.liferay.portal.kernel.concurrent.RejectedExecutionHandler;
import com.liferay.portal.kernel.concurrent.ThreadPoolExecutor;
import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;
import com.liferay.portal.kernel.messaging.Destination;
import com.liferay.portal.kernel.messaging.DestinationConfiguration;
import com.liferay.portal.kernel.messaging.DestinationFactory;
import com.liferay.portal.kernel.messaging.MessageBus;
import com.liferay.portal.kernel.util.HashMapDictionary;
@Component(
enabled = false, immediate = true,
service = DemoReceiverConfigurator.class
)
public class DemoReceiverConfigurator {
@Activate
protected void activate(ComponentContext componentContext) {
_bundleContext = componentContext.getBundleContext();
System.out.println("===demo===");
Dictionary<String, Object> properties =
componentContext.getProperties();
DestinationConfiguration destinationConfiguration =
new DestinationConfiguration(DestinationConfiguration.DESTINATION_TYPE_PARALLEL,"MyEchoDestination");
destinationConfiguration.setMaximumQueueSize(200);
RejectedExecutionHandler rejectedExecutionHandler =
new DiscardOldestPolicy() {
@Override
public void rejectedExecution(
Runnable runnable, ThreadPoolExecutor threadPoolExecutor) {
if (_log.isWarnEnabled()) {
_log.warn(
"The current thread will handle the request " +
"because the audit router's task queue is at " +
"its maximum capacity");
}
super.rejectedExecution(runnable, threadPoolExecutor);
}
};
destinationConfiguration.setRejectedExecutionHandler(
rejectedExecutionHandler);
Destination destination = _destinationFactory.createDestination(
destinationConfiguration);
Dictionary<String, Object> destinationProperties =
new HashMapDictionary<>();
destinationProperties.put("destination.name", destination.getName());
_destinationServiceRegistration = _bundleContext.registerService(
Destination.class, destination, destinationProperties);
}
@Deactivate
protected void deactivate() {
if (_destinationServiceRegistration != null) {
Destination destination = _bundleContext.getService(
_destinationServiceRegistration.getReference());
_destinationServiceRegistration.unregister();
destination.destroy();
}
_bundleContext = null;
}
@Reference(unbind = "-")
protected void setMessageBus(MessageBus messageBus) {
}
private static final Log _log = LogFactoryUtil.getLog(
DemoReceiverConfigurator.class);
private volatile BundleContext _bundleContext;
@Reference
private DestinationFactory _destinationFactory;
private volatile ServiceRegistration<Destination>
_destinationServiceRegistration;
}
But my activate method isn't getting called. I have enabled = false in my message listener class and enabled = false, immediate = true in my Configurator class.
Don't know what I am missing.
Often in OSGi, this seemingly obvious configuration is enough. In this case, though, it obviously isn't: Liferay now knows about the message you're sending and the one you're interested in receiving, but the MessageBus doesn't know about the destination that needs to be created.
It seems obvious: if there is a listener for a particular message, there probably needs to be a destination. But what type will it be? Parallel processing? How many parallel handlers? Synchronous? Queued? This is what you'll need to configure.
While a quick search didn't find documentation on how to do this, you can use this configurator as an example for creating the missing link.
The MessageBus documentation was improved a few days ago; have a look at the following page: https://dev.liferay.com/develop/tutorials/-/knowledge_base/7-0/message-bus

How can @MessagingGateway be configured with Spring Cloud Stream MessageChannels?

I have developed asynchronous Spring Cloud Stream services, and I am trying to develop an edge service that uses #MessagingGateway to provide synchronous access to services that are async by nature.
I am currently getting the following stack trace:
Caused by: org.springframework.messaging.core.DestinationResolutionException: no output-channel or replyChannel header available
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutput(AbstractMessageProducingHandler.java:355)
at org.springframework.integration.handler.AbstractMessageProducingHandler.produceOutput(AbstractMessageProducingHandler.java:271)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutputs(AbstractMessageProducingHandler.java:188)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:115)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
... 47 common frames omitted
My @MessagingGateway:
@EnableBinding(AccountChannels.class)
@MessagingGateway
public interface AccountService {
@Gateway(requestChannel = AccountChannels.CREATE_ACCOUNT_REQUEST, replyChannel = AccountChannels.ACCOUNT_CREATED, replyTimeout = 60000, requestTimeout = 60000)
Account createAccount(@Payload Account account, @Header("Authorization") String authorization);
}
If I consume the message on the reply channel via a @StreamListener, it works just fine:
@HystrixCommand(commandKey = "acounts-edge:accountCreated", fallbackMethod = "accountCreatedFallback", commandProperties = {@HystrixProperty(name = "execution.isolation.strategy", value = "SEMAPHORE")}, ignoreExceptions = {ClientException.class})
@StreamListener(AccountChannels.ACCOUNT_CREATED)
public void accountCreated(Account account, @Header(name = "spanTraceId", required = false) String traceId) {
try {
if (log.isInfoEnabled()) {
log.info(new StringBuilder("Account created: ").append(objectMapper.writeValueAsString(account)).toString());
}
} catch (JsonProcessingException e) {
log.error(e.getMessage(), e);
}
}
On the producer side, I am configuring requiredGroups to ensure that multiple consumers can process the message, and correspondingly, the consumers have matching group configurations.
Consumer:
spring:
cloud:
stream:
bindings:
create-account-request:
binder: rabbit1
contentType: application/json
destination: create-account-request
requiredGroups: accounts-service-create-account-request
account-created:
binder: rabbit1
contentType: application/json
destination: account-created
group: accounts-edge-account-created
Producer:
spring:
cloud:
stream:
bindings:
create-account-request:
binder: rabbit1
contentType: application/json
destination: create-account-request
group: accounts-service-create-account-request
account-created:
binder: rabbit1
contentType: application/json
destination: account-created
requiredGroups: accounts-edge-account-created
The bit of code on the producer side that processes the request and sends the response:
accountChannels.accountCreated().send(MessageBuilder.withPayload(accountService.createAccount(account)).build());
I can debug and see that the request is received and processed, but when the response is sent to the reply channel, that's when the error occurs.
To get the @MessagingGateway working, what configuration and/or code am I missing? I know I'm combining Spring Integration and Spring Cloud Stream, so I'm not sure if using them together is causing the issues.
It's a good question and really a good idea. But it isn't going to work so easily.
First of all, we have to establish that a gateway means request/reply, and therefore correlation. In @MessagingGateway this is available via the replyChannel header, in the form of a TemporaryReplyChannel instance. Even if you have an explicit replyChannel = AccountChannels.ACCOUNT_CREATED, the correlation is done only via the mentioned header and its value. The trouble is that this TemporaryReplyChannel is not serializable and can't be transferred over the network to the consumer on the other side.
Luckily Spring Integration provides a solution for us. It is part of the HeaderEnricher and its headerChannelsToString option, backed by the HeaderChannelRegistry:
Starting with Spring Integration 3.0, a new sub-element <int:header-channels-to-string/> is available; it has no attributes. This converts existing replyChannel and errorChannel headers (when they are a MessageChannel) to a String and stores the channel(s) in a registry for later resolution when it is time to send a reply, or handle an error. This is useful for cases where the headers might be lost; for example when serializing a message into a message store or when transporting the message over JMS. If the header does not already exist, or it is not a MessageChannel, no changes are made.
https://docs.spring.io/spring-integration/docs/5.0.0.RELEASE/reference/html/messaging-transformation-chapter.html#header-enricher
But in this case you have to introduce an internal channel from the gateway to the HeaderEnricher, and only the latter sends the message to AccountChannels.CREATE_ACCOUNT_REQUEST. That way the replyChannel header is converted to a string representation and is able to travel over the network. On the consumer side, when you send a reply, you should ensure that you transfer that replyChannel header as well, as it is. So, when the message arrives at AccountChannels.ACCOUNT_CREATED on the producer side, where we have that @MessagingGateway, the correlation mechanism is able to convert the channel identifier back to the proper TemporaryReplyChannel and correlate the reply to the waiting gateway call.
The only problem here is that your producer application must be the single consumer in the group for AccountChannels.ACCOUNT_CREATED - we have to ensure that only one instance in the cloud is operating at a time, because only that one instance has the TemporaryReplyChannel in its memory.
More info about gateway: https://docs.spring.io/spring-integration/docs/5.0.0.RELEASE/reference/html/messaging-endpoints-chapter.html#gateway
UPDATE
Some code for help:
@EnableBinding(AccountChannels.class)
@MessagingGateway
public interface AccountService {
@Gateway(requestChannel = AccountChannels.INTERNAL_CREATE_ACCOUNT_REQUEST, replyChannel = AccountChannels.ACCOUNT_CREATED, replyTimeout = 60000, requestTimeout = 60000)
Account createAccount(@Payload Account account, @Header("Authorization") String authorization);
}
@Bean
public IntegrationFlow headerEnricherFlow() {
return IntegrationFlows.from(AccountChannels.INTERNAL_CREATE_ACCOUNT_REQUEST)
.enrichHeaders(headerEnricher -> headerEnricher.headerChannelsToString())
.channel(AccountChannels.CREATE_ACCOUNT_REQUEST)
.get();
}
UPDATE
Some simple application to demonstrate the PoC:
@EnableBinding({ Processor.class, CloudStreamGatewayApplication.GatewayChannels.class })
@SpringBootApplication
public class CloudStreamGatewayApplication {
interface GatewayChannels {
String REQUEST = "request";
@Output(REQUEST)
MessageChannel request();
String REPLY = "reply";
@Input(REPLY)
SubscribableChannel reply();
}
private static final String ENRICH = "enrich";
@MessagingGateway
public interface StreamGateway {
@Gateway(requestChannel = ENRICH, replyChannel = GatewayChannels.REPLY)
String process(String payload);
}
@Bean
public IntegrationFlow headerEnricherFlow() {
return IntegrationFlows.from(ENRICH)
.enrichHeaders(HeaderEnricherSpec::headerChannelsToString)
.channel(GatewayChannels.REQUEST)
.get();
}
@StreamListener(Processor.INPUT)
@SendTo(Processor.OUTPUT)
public Message<?> process(Message<String> request) {
return MessageBuilder.withPayload(request.getPayload().toUpperCase())
.copyHeaders(request.getHeaders())
.build();
}
public static void main(String[] args) {
ConfigurableApplicationContext applicationContext =
SpringApplication.run(CloudStreamGatewayApplication.class, args);
StreamGateway gateway = applicationContext.getBean(StreamGateway.class);
String result = gateway.process("foo");
System.out.println(result);
}
}
The application.yml:
spring:
cloud:
stream:
bindings:
input:
destination: requests
output:
destination: replies
request:
destination: requests
reply:
destination: replies
I use spring-cloud-starter-stream-rabbit.
The
MessageBuilder.withPayload(request.getPayload().toUpperCase())
.copyHeaders(request.getHeaders())
.build()
does the trick, copying the request headers to the reply message. So, on the reply side the gateway is able to convert the channel identifier in the headers back to the appropriate TemporaryReplyChannel and convey the reply to the caller of the gateway.
The SCSt issue on the matter: https://github.com/spring-cloud/spring-cloud-stream/issues/815
With Artem's help, I've found the solution I was looking for. I have taken the code Artem posted and split it into two services, a Gateway service and a CloudStream service. I also added a #RestController for testing purposes. This essentially mimics what I was wanting to do with durable queues. Thanks Artem for your assistance! I really appreciate your time! I hope this helps others who want to do the same thing.
Gateway Code
package com.example.demo;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.context.annotation.Bean;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;
import org.springframework.integration.dsl.HeaderEnricherSpec;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
@EnableBinding({GatewayApplication.GatewayChannels.class})
@SpringBootApplication
public class GatewayApplication {
interface GatewayChannels {
String TO_UPPERCASE_REPLY = "to-uppercase-reply";
String TO_UPPERCASE_REQUEST = "to-uppercase-request";
@Input(TO_UPPERCASE_REPLY)
SubscribableChannel toUppercaseReply();
@Output(TO_UPPERCASE_REQUEST)
MessageChannel toUppercaseRequest();
}
@MessagingGateway
public interface StreamGateway {
@Gateway(requestChannel = ENRICH, replyChannel = GatewayChannels.TO_UPPERCASE_REPLY)
String process(String payload);
}
private static final String ENRICH = "enrich";
public static void main(String[] args) {
SpringApplication.run(GatewayApplication.class, args);
}
@Bean
public IntegrationFlow headerEnricherFlow() {
return IntegrationFlows.from(ENRICH).enrichHeaders(HeaderEnricherSpec::headerChannelsToString)
.channel(GatewayChannels.TO_UPPERCASE_REQUEST).get();
}
@RestController
public class UppercaseController {
@Autowired
StreamGateway gateway;
@GetMapping(value = "/string/{string}",
produces = {MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
public ResponseEntity<String> getUser(@PathVariable("string") String string) {
return new ResponseEntity<String>(gateway.process(string), HttpStatus.OK);
}
}
}
Gateway Config (application.yml)
spring:
cloud:
stream:
bindings:
to-uppercase-request:
destination: to-uppercase-request
producer:
required-groups: stream-to-uppercase-request
to-uppercase-reply:
destination: to-uppercase-reply
group: gateway-to-uppercase-reply
server:
port: 8080
CloudStream Code
package com.example.demo;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.messaging.support.MessageBuilder;
@EnableBinding({CloudStreamApplication.CloudStreamChannels.class})
@SpringBootApplication
public class CloudStreamApplication {
interface CloudStreamChannels {
String TO_UPPERCASE_REPLY = "to-uppercase-reply";
String TO_UPPERCASE_REQUEST = "to-uppercase-request";
@Output(TO_UPPERCASE_REPLY)
SubscribableChannel toUppercaseReply();
@Input(TO_UPPERCASE_REQUEST)
MessageChannel toUppercaseRequest();
}
public static void main(String[] args) {
SpringApplication.run(CloudStreamApplication.class, args);
}
@StreamListener(CloudStreamChannels.TO_UPPERCASE_REQUEST)
@SendTo(CloudStreamChannels.TO_UPPERCASE_REPLY)
public Message<?> process(Message<String> request) {
return MessageBuilder.withPayload(request.getPayload().toUpperCase())
.copyHeaders(request.getHeaders()).build();
}
}
CloudStream Config (application.yml)
spring:
cloud:
stream:
bindings:
to-uppercase-request:
destination: to-uppercase-request
group: stream-to-uppercase-request
to-uppercase-reply:
destination: to-uppercase-reply
producer:
required-groups: gateway-to-uppercase-reply
server:
port: 8081
Hmm, I am a bit confused as to what you are trying to accomplish, but let's see if we can figure this out.
Mixing SI and SCSt is definitely natural as one builds on the other, so it should all work.
Here is an example code snippet I just dug up from an old sample that exposes a REST endpoint yet delegates (via a Gateway) to the Source's output channel. See if that helps:
@EnableBinding(Source.class)
@SpringBootApplication
@RestController
public class FooApplication {
. . .
@Autowired
private Source channels;
@Autowired
private CompletionService completionService;
@RequestMapping("/complete")
public String completeRequest(@RequestParam int id) {
this.completionService.complete("foo");
return "OK";
}
@MessagingGateway
interface CompletionService {
@Gateway(requestChannel = Source.OUTPUT)
void complete(String message);
}
}

Commons Configuration2 ReloadingFileBasedConfiguration

I am trying to implement Apache Commons Configuration 2 in my codebase:
import java.io.File;
import java.util.concurrent.TimeUnit;
import org.apache.commons.configuration2.PropertiesConfiguration;
import org.apache.commons.configuration2.builder.ConfigurationBuilderEvent;
import org.apache.commons.configuration2.builder.ReloadingFileBasedConfigurationBuilder;
import org.apache.commons.configuration2.builder.fluent.Parameters;
import org.apache.commons.configuration2.convert.DefaultListDelimiterHandler;
import org.apache.commons.configuration2.event.EventListener;
import org.apache.commons.configuration2.ex.ConfigurationException;
import org.apache.commons.configuration2.reloading.PeriodicReloadingTrigger;
import org.apache.commons.configuration2.CompositeConfiguration;
public class Test {
private static final long DELAY_MILLIS = 10 * 60 * 5;
public static void main(String[] args) {
// TODO Auto-generated method stub
CompositeConfiguration compositeConfiguration = new CompositeConfiguration();
PropertiesConfiguration props = null;
try {
props = initPropertiesConfiguration(new File("/tmp/DEV.properties"));
} catch (ConfigurationException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
compositeConfiguration.addConfiguration( props );
compositeConfiguration.addEventListener(ConfigurationBuilderEvent.ANY,
new EventListener<ConfigurationBuilderEvent>()
{
@Override
public void onEvent(ConfigurationBuilderEvent event)
{
System.out.println("Event:" + event);
}
});
System.out.println(compositeConfiguration.getString("property1"));
try {
Thread.sleep(14*1000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
// Have a script which changes the value of property1 in DEV.properties
System.out.println(compositeConfiguration.getString("property1"));
}
protected static PropertiesConfiguration initPropertiesConfiguration(File propsFile) throws ConfigurationException {
if(propsFile.exists()) {
final ReloadingFileBasedConfigurationBuilder<PropertiesConfiguration> builder =
new ReloadingFileBasedConfigurationBuilder<PropertiesConfiguration>(PropertiesConfiguration.class)
.configure(new Parameters().fileBased()
.setFile(propsFile)
.setReloadingRefreshDelay(DELAY_MILLIS)
.setThrowExceptionOnMissing(false)
.setListDelimiterHandler(new DefaultListDelimiterHandler(';')));
final PropertiesConfiguration propsConfiguration = builder.getConfiguration();
PeriodicReloadingTrigger trigger = new PeriodicReloadingTrigger(builder.getReloadingController(),
null, 1, TimeUnit.SECONDS);
trigger.start();
return propsConfiguration;
} else {
return new PropertiesConfiguration();
}
}
}
Here is sample code that I am using to check whether automatic reloading works. However, when the underlying property file is updated, the configuration doesn't reflect it.
As per the documentation :
One important point to keep in mind when using this approach to reloading is that reloads are only functional if the builder is used as central component for accessing configuration data. The configuration instance obtained from the builder will not change automagically! So if an application fetches a configuration object from the builder at startup and then uses it throughout its life time, changes on the external configuration file become never visible. The correct approach is to keep a reference to the builder centrally and obtain the configuration from there every time configuration data is needed.
https://commons.apache.org/proper/commons-configuration/userguide/howto_reloading.html#Reloading_File-based_Configurations
This is different from what the old implementation was.
I was able to successfully execute your sample code by making 2 changes :
make the builder available globally and access the configuration from the builder :
System.out.println(builder.getConfiguration().getString("property1"));
add the listener to the builder :
builder.addEventListener(ConfigurationBuilderEvent.ANY, new EventListener<ConfigurationBuilderEvent>() {
public void onEvent(ConfigurationBuilderEvent event) {
System.out.println("Event:" + event);
}
});
Posting my sample program, where I was able to successfully demonstrate it
import java.io.File;
import java.util.concurrent.TimeUnit;
import org.apache.commons.configuration2.PropertiesConfiguration;
import org.apache.commons.configuration2.builder.ConfigurationBuilderEvent;
import org.apache.commons.configuration2.builder.ReloadingFileBasedConfigurationBuilder;
import org.apache.commons.configuration2.builder.fluent.Parameters;
import org.apache.commons.configuration2.event.EventListener;
import org.apache.commons.configuration2.reloading.PeriodicReloadingTrigger;
public class TestDynamicProps {
public static void main(String[] args) throws Exception {
Parameters params = new Parameters();
ReloadingFileBasedConfigurationBuilder<PropertiesConfiguration> builder =
new ReloadingFileBasedConfigurationBuilder<PropertiesConfiguration>(PropertiesConfiguration.class)
.configure(params.fileBased()
.setFile(new File("src/main/resources/override.properties")));
PeriodicReloadingTrigger trigger = new PeriodicReloadingTrigger(builder.getReloadingController(),
null, 1, TimeUnit.SECONDS);
trigger.start();
builder.addEventListener(ConfigurationBuilderEvent.ANY, new EventListener<ConfigurationBuilderEvent>() {
public void onEvent(ConfigurationBuilderEvent event) {
System.out.println("Event:" + event);
}
});
while (true) {
Thread.sleep(1000);
System.out.println(builder.getConfiguration().getString("property1"));
}
}
}
The problem with your implementation is that the reloading happens on the ReloadingFileBasedConfigurationBuilder object and is not propagated to the PropertiesConfiguration object you obtained from it earlier.
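In other words, using the names from the sample above, the difference is between these two access patterns (both calls can throw ConfigurationException):
// stale: a configuration snapshot fetched once at startup never sees reloads
PropertiesConfiguration snapshot = builder.getConfiguration();
System.out.println(snapshot.getString("property1")); // keeps returning the value loaded at startup

// fresh: go through the builder every time a value is needed
System.out.println(builder.getConfiguration().getString("property1")); // reflects the reloaded file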

Spring Integration Cassandra persistence workflow

I am trying to realize the following workflow with Spring Integration:
1) Poll REST API
2) store the POJO in Cassandra cluster
It's my first try with Spring Integration, so I'm still a bit overwhelmed by the mass of information in the reference documentation. After some research, I could make the following work.
1) Poll REST API
2) Transform mapped POJO JSON result into a string
3) save string into file
Here's the code:
@Configuration
public class ConsulIntegrationConfig {
@InboundChannelAdapter(value = "consulHttp", poller = @Poller(maxMessagesPerPoll = "1", fixedDelay = "1000"))
public String consulAgentPoller() {
return "";
}
@Bean
public MessageChannel consulHttp() {
return MessageChannels.direct("consulHttp").get();
}
@Bean
@ServiceActivator(inputChannel = "consulHttp")
MessageHandler consulAgentHandler() {
final HttpRequestExecutingMessageHandler handler =
new HttpRequestExecutingMessageHandler("http://localhost:8500/v1/agent/self");
handler.setExpectedResponseType(AgentSelfResult.class);
handler.setOutputChannelName("consulAgentSelfChannel");
LOG.info("Created bean'consulAgentHandler'");
return handler;
}
@Bean
public MessageChannel consulAgentSelfChannel() {
return MessageChannels.direct("consulAgentSelfChannel").get();
}
@Bean
public MessageChannel consulAgentSelfFileChannel() {
return MessageChannels.direct("consulAgentSelfFileChannel").get();
}
@Bean
@ServiceActivator(inputChannel = "consulAgentSelfFileChannel")
MessageHandler consulAgentFileHandler() {
final Expression directoryExpression = new SpelExpressionParser().parseExpression("'./'");
final FileWritingMessageHandler handler = new FileWritingMessageHandler(directoryExpression);
handler.setFileNameGenerator(message -> "../../agent_self.txt");
handler.setFileExistsMode(FileExistsMode.APPEND);
handler.setCharset("UTF-8");
handler.setExpectReply(false);
return handler;
}
}
@Component
public final class ConsulAgentTransformer {
@Transformer(inputChannel = "consulAgentSelfChannel", outputChannel = "consulAgentSelfFileChannel")
public String transform(final AgentSelfResult json) throws IOException {
final String result = new StringBuilder(json.toString()).append("\n").toString();
return result;
}
}
This works fine!
But now, instead of writing the object to a file, I want to store it in a Cassandra cluster with spring-data-cassandra. For that, I commented out the file handler in the config, returned the POJO from the transformer, and created the following:
@MessagingGateway(name = "consulCassandraGateway", defaultRequestChannel = "consulAgentSelfFileChannel")
public interface CassandraStorageService {
@Gateway(requestChannel="consulAgentSelfFileChannel")
void store(AgentSelfResult agentSelfResult);
}
@Component
public final class CassandraStorageServiceImpl implements CassandraStorageService {
@Override
public void store(AgentSelfResult agentSelfResult) {
//use spring-data-cassandra repository to store
LOG.info("Received 'AgentSelfResult': {} in Cassandra cluster...");
LOG.info("Trying to store 'AgentSelfResult' in Cassandra cluster...");
}
}
But this seems to be the wrong approach; the service method is never triggered.
So my question is, what would be a correct approach for my use case? Do I have to implement the MessageHandler interface in my service component and use a @ServiceActivator in my config? Or is there something missing in my current "gateway approach"? Or maybe there is another solution that I'm not able to see.
As mentioned before, I'm new to SI, so this may be a stupid question...
Nevertheless, thanks a lot in advance!
It's not clear how you are wiring in your CassandraStorageService bean.
The Spring Integration Cassandra Extension Project has a message-handler implementation.
The Cassandra Sink in spring-cloud-stream-modules uses it with Java configuration so you can use that as an example.
So I finally made it work. All I needed to do was
@Component
public final class CassandraStorageServiceImpl implements CassandraStorageService {
@ServiceActivator(inputChannel="consulAgentSelfFileChannel")
@Override
public void store(AgentSelfResult agentSelfResult) {
//use spring-data-cassandra repository to store
LOG.info("Received 'AgentSelfResult': {}...");
LOG.info("Trying to store 'AgentSelfResult' in Cassandra cluster...");
}
}
The CassandraMessageHandler and spring-cloud-stream seemed to be too big an overhead for my use case, and I don't really understand them yet... And with this solution, I keep control over what happens in my Spring component.
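For completeness, the store() body could then delegate to a plain spring-data-cassandra repository. A minimal sketch, where AgentSelfResultRepository is a hypothetical repository interface and AgentSelfResult is assumed to be mapped as a Cassandra entity (@Table with a primary key):
// hypothetical repository interface
public interface AgentSelfResultRepository extends CrudRepository<AgentSelfResult, String> {
}

// inside CassandraStorageServiceImpl, with the repository @Autowired:
@ServiceActivator(inputChannel = "consulAgentSelfFileChannel")
@Override
public void store(AgentSelfResult agentSelfResult) {
    repository.save(agentSelfResult); // persists the POJO to the Cassandra cluster
    LOG.info("Stored 'AgentSelfResult' in the Cassandra cluster");
}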
