I have developed asynchronous Spring Cloud Stream services, and I am trying to develop an edge service that uses @MessagingGateway to provide synchronous access to services that are async by nature.
I am currently getting the following stack trace:
Caused by: org.springframework.messaging.core.DestinationResolutionException: no output-channel or replyChannel header available
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutput(AbstractMessageProducingHandler.java:355)
at org.springframework.integration.handler.AbstractMessageProducingHandler.produceOutput(AbstractMessageProducingHandler.java:271)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutputs(AbstractMessageProducingHandler.java:188)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:115)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
... 47 common frames omitted
My @MessagingGateway:
@EnableBinding(AccountChannels.class)
@MessagingGateway
public interface AccountService {

    @Gateway(requestChannel = AccountChannels.CREATE_ACCOUNT_REQUEST, replyChannel = AccountChannels.ACCOUNT_CREATED, replyTimeout = 60000, requestTimeout = 60000)
    Account createAccount(@Payload Account account, @Header("Authorization") String authorization);
}
If I consume the message on the reply channel via a @StreamListener, it works just fine:
@HystrixCommand(commandKey = "acounts-edge:accountCreated", fallbackMethod = "accountCreatedFallback", commandProperties = {@HystrixProperty(name = "execution.isolation.strategy", value = "SEMAPHORE")}, ignoreExceptions = {ClientException.class})
@StreamListener(AccountChannels.ACCOUNT_CREATED)
public void accountCreated(Account account, @Header(name = "spanTraceId", required = false) String traceId) {
    try {
        if (log.isInfoEnabled()) {
            log.info(new StringBuilder("Account created: ").append(objectMapper.writeValueAsString(account)).toString());
        }
    } catch (JsonProcessingException e) {
        log.error(e.getMessage(), e);
    }
}
On the producer side, I am configuring requiredGroups to ensure that multiple consumers can process the message, and correspondingly, the consumers have matching group configurations.
Consumer:
spring:
  cloud:
    stream:
      bindings:
        create-account-request:
          binder: rabbit1
          contentType: application/json
          destination: create-account-request
          requiredGroups: accounts-service-create-account-request
        account-created:
          binder: rabbit1
          contentType: application/json
          destination: account-created
          group: accounts-edge-account-created
Producer:
spring:
  cloud:
    stream:
      bindings:
        create-account-request:
          binder: rabbit1
          contentType: application/json
          destination: create-account-request
          group: accounts-service-create-account-request
        account-created:
          binder: rabbit1
          contentType: application/json
          destination: account-created
          requiredGroups: accounts-edge-account-created
The bit of code on the producer side that processes the request and sends the response:
accountChannels.accountCreated().send(MessageBuilder.withPayload(accountService.createAccount(account)).build());
I can debug and see that the request is received and processed, but when the response is sent to the reply channel, that's when the error occurs.
To get the @MessagingGateway working, what configurations and/or code am I missing? I know I'm combining Spring Integration and Spring Cloud Stream, so I'm not sure if using them together is causing the issues.
It's a good question and a really good idea, but it isn't going to work that easily.
First of all, we have to keep in mind that a gateway means request/reply, and therefore correlation. In @MessagingGateway this is available via the replyChannel header, which holds a TemporaryReplyChannel instance. Even if you have an explicit replyChannel = AccountChannels.ACCOUNT_CREATED, the correlation is done only via the mentioned header and its value. The problem is that this TemporaryReplyChannel is not serializable and can't be transferred over the network to the consumer on the other side.
Luckily, Spring Integration provides a solution for us: the HeaderEnricher and its headerChannelsToString option, backed by the HeaderChannelRegistry:
Starting with Spring Integration 3.0, a new sub-element <int:header-channels-to-string/> is available; it has no attributes. This converts existing replyChannel and errorChannel headers (when they are a MessageChannel) to a String and stores the channel(s) in a registry for later resolution when it is time to send a reply, or handle an error. This is useful for cases where the headers might be lost; for example when serializing a message into a message store or when transporting the message over JMS. If the header does not already exist, or it is not a MessageChannel, no changes are made.
https://docs.spring.io/spring-integration/docs/5.0.0.RELEASE/reference/html/messaging-transformation-chapter.html#header-enricher
But in this case you have to introduce an internal channel from the gateway to the HeaderEnricher, and only the latter sends the message to AccountChannels.CREATE_ACCOUNT_REQUEST. That way the replyChannel header is converted to a string representation and is able to travel over the network. On the consumer side, when you send a reply, you must ensure that you transfer that replyChannel header as well, as-is. When the message arrives at AccountChannels.ACCOUNT_CREATED on the producer side, where we have that @MessagingGateway, the correlation mechanism is able to convert the channel identifier back to the proper TemporaryReplyChannel and correlate the reply to the waiting gateway call.
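For illustration only, here is a rough sketch of how that consumer side might look, assuming the @StreamListener can accept the whole Message and that accountService and accountChannels are wired as in the question (those names come from the question; the rest is my guess):
@StreamListener(AccountChannels.CREATE_ACCOUNT_REQUEST)
public void createAccount(Message<Account> request) {
    Account created = accountService.createAccount(request.getPayload());
    accountChannels.accountCreated()
            .send(MessageBuilder.withPayload(created)
                    // carry the incoming headers, including the converted replyChannel id, as-is
                    .copyHeaders(request.getHeaders())
                    .build());
}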
The only problem here is that your producer application must be the single consumer in the group for AccountChannels.ACCOUNT_CREATED; we have to ensure that only one instance in the cloud is operating at a time, simply because only that one instance has the TemporaryReplyChannel in its memory.
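One possible broker-side guard (my own suggestion, not something from the question; it assumes the RabbitMQ binder and its exclusive consumer property) is to make the reply binding an exclusive consumer, so a second instance simply cannot attach to the reply group:
spring:
  cloud:
    stream:
      rabbit:
        bindings:
          account-created:
            consumer:
              # assumption: only one instance may consume from the reply group
              exclusive: true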
More info about gateway: https://docs.spring.io/spring-integration/docs/5.0.0.RELEASE/reference/html/messaging-endpoints-chapter.html#gateway
UPDATE
Some code for help:
@EnableBinding(AccountChannels.class)
@MessagingGateway
public interface AccountService {

    @Gateway(requestChannel = AccountChannels.INTERNAL_CREATE_ACCOUNT_REQUEST, replyChannel = AccountChannels.ACCOUNT_CREATED, replyTimeout = 60000, requestTimeout = 60000)
    Account createAccount(@Payload Account account, @Header("Authorization") String authorization);
}
@Bean
public IntegrationFlow headerEnricherFlow() {
    return IntegrationFlows.from(AccountChannels.INTERNAL_CREATE_ACCOUNT_REQUEST)
            .enrichHeaders(headerEnricher -> headerEnricher.headerChannelsToString())
            .channel(AccountChannels.CREATE_ACCOUNT_REQUEST)
            .get();
}
UPDATE
Some simple application to demonstrate the PoC:
@EnableBinding({ Processor.class, CloudStreamGatewayApplication.GatewayChannels.class })
@SpringBootApplication
public class CloudStreamGatewayApplication {

    interface GatewayChannels {
        String REQUEST = "request";

        @Output(REQUEST)
        MessageChannel request();

        String REPLY = "reply";

        @Input(REPLY)
        SubscribableChannel reply();
    }

    private static final String ENRICH = "enrich";

    @MessagingGateway
    public interface StreamGateway {
        @Gateway(requestChannel = ENRICH, replyChannel = GatewayChannels.REPLY)
        String process(String payload);
    }

    @Bean
    public IntegrationFlow headerEnricherFlow() {
        return IntegrationFlows.from(ENRICH)
                .enrichHeaders(HeaderEnricherSpec::headerChannelsToString)
                .channel(GatewayChannels.REQUEST)
                .get();
    }

    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public Message<?> process(Message<String> request) {
        return MessageBuilder.withPayload(request.getPayload().toUpperCase())
                .copyHeaders(request.getHeaders())
                .build();
    }

    public static void main(String[] args) {
        ConfigurableApplicationContext applicationContext =
                SpringApplication.run(CloudStreamGatewayApplication.class, args);
        StreamGateway gateway = applicationContext.getBean(StreamGateway.class);
        String result = gateway.process("foo");
        System.out.println(result);
    }
}
The application.yml:
spring:
  cloud:
    stream:
      bindings:
        input:
          destination: requests
        output:
          destination: replies
        request:
          destination: requests
        reply:
          destination: replies
I use spring-cloud-starter-stream-rabbit.
The
MessageBuilder.withPayload(request.getPayload().toUpperCase())
        .copyHeaders(request.getHeaders())
        .build()
does the trick, copying the request headers to the reply message. That way the gateway, on the reply side, is able to convert the channel identifier in the headers back to the appropriate TemporaryReplyChannel and convey the reply properly to the caller of the gateway.
The SCSt issue on the matter: https://github.com/spring-cloud/spring-cloud-stream/issues/815
With Artem's help, I've found the solution I was looking for. I have taken the code Artem posted and split it into two services, a Gateway service and a CloudStream service. I also added a @RestController for testing purposes. This essentially mimics what I was wanting to do with durable queues. Thanks Artem for your assistance! I really appreciate your time! I hope this helps others who want to do the same thing.
Gateway Code
package com.example.demo;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.context.annotation.Bean;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;
import org.springframework.integration.dsl.HeaderEnricherSpec;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@EnableBinding({GatewayApplication.GatewayChannels.class})
@SpringBootApplication
public class GatewayApplication {

    interface GatewayChannels {
        String TO_UPPERCASE_REPLY = "to-uppercase-reply";
        String TO_UPPERCASE_REQUEST = "to-uppercase-request";

        @Input(TO_UPPERCASE_REPLY)
        SubscribableChannel toUppercaseReply();

        @Output(TO_UPPERCASE_REQUEST)
        MessageChannel toUppercaseRequest();
    }

    @MessagingGateway
    public interface StreamGateway {
        @Gateway(requestChannel = ENRICH, replyChannel = GatewayChannels.TO_UPPERCASE_REPLY)
        String process(String payload);
    }

    private static final String ENRICH = "enrich";

    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }

    @Bean
    public IntegrationFlow headerEnricherFlow() {
        return IntegrationFlows.from(ENRICH).enrichHeaders(HeaderEnricherSpec::headerChannelsToString)
                .channel(GatewayChannels.TO_UPPERCASE_REQUEST).get();
    }

    @RestController
    public class UppercaseController {

        @Autowired
        StreamGateway gateway;

        @GetMapping(value = "/string/{string}",
                produces = {MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
        public ResponseEntity<String> getUser(@PathVariable("string") String string) {
            return new ResponseEntity<String>(gateway.process(string), HttpStatus.OK);
        }
    }
}
Gateway Config (application.yml)
spring:
  cloud:
    stream:
      bindings:
        to-uppercase-request:
          destination: to-uppercase-request
          producer:
            required-groups: stream-to-uppercase-request
        to-uppercase-reply:
          destination: to-uppercase-reply
          group: gateway-to-uppercase-reply
server:
  port: 8080
CloudStream Code
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.messaging.support.MessageBuilder;

@EnableBinding({CloudStreamApplication.CloudStreamChannels.class})
@SpringBootApplication
public class CloudStreamApplication {

    interface CloudStreamChannels {
        String TO_UPPERCASE_REPLY = "to-uppercase-reply";
        String TO_UPPERCASE_REQUEST = "to-uppercase-request";

        @Output(TO_UPPERCASE_REPLY)
        SubscribableChannel toUppercaseReply();

        @Input(TO_UPPERCASE_REQUEST)
        MessageChannel toUppercaseRequest();
    }

    public static void main(String[] args) {
        SpringApplication.run(CloudStreamApplication.class, args);
    }

    @StreamListener(CloudStreamChannels.TO_UPPERCASE_REQUEST)
    @SendTo(CloudStreamChannels.TO_UPPERCASE_REPLY)
    public Message<?> process(Message<String> request) {
        return MessageBuilder.withPayload(request.getPayload().toUpperCase())
                .copyHeaders(request.getHeaders()).build();
    }
}
CloudStream Config (application.yml)
spring:
  cloud:
    stream:
      bindings:
        to-uppercase-request:
          destination: to-uppercase-request
          group: stream-to-uppercase-request
        to-uppercase-reply:
          destination: to-uppercase-reply
          producer:
            required-groups: gateway-to-uppercase-reply
server:
  port: 8081
Hmm, I am a bit confused as well as to what you are trying to accomplish, but let's see if we can figure this out.
Mixing SI and SCSt is definitely natural, as one builds on the other, so all of this should work.
Here is an example code snippet I just dug up from an old sample that exposes a REST endpoint yet delegates (via a gateway) to the Source's output channel. See if that helps:
@EnableBinding(Source.class)
@SpringBootApplication
@RestController
public class FooApplication {

    . . .

    @Autowired
    private Source channels;

    @Autowired
    private CompletionService completionService;

    @RequestMapping("/complete")
    public String completeRequest(@RequestParam int id) {
        this.completionService.complete("foo");
        return "OK";
    }

    @MessagingGateway
    interface CompletionService {
        @Gateway(requestChannel = Source.OUTPUT)
        void complete(String message);
    }
}
Related
After a message is sent, it gets published to a Kafka topic, but the Message from KafkaSuccessTransformer does not come back to the REST controller. I am trying to return the message as-is if it is sent successfully, but nothing after the Kafka handler seems to be invoked.
@MessagingGateway
public interface MyGateway<String, Message<?>> {
    @Gateway(requestChannel = "enrollChannel")
    Message<?> sendMsg(@Payload String payload);
}
------------------------
@RestController
public class Controller {

    MyGateway<String, Message<?>> myGateway;

    @PostMapping
    public Message<?> send(@RequestBody String request) throws Exception {
        Message<?> resp = myGateway.sendMsg(request);
        log.info("I am back"); // control doesn't come to this point
        return resp;
    }
}
--------------------------
@Component
public class MyIntegrationFlow {

    KafkaSuccessTransformer stransformer;

    @Bean
    public MessageChannel enrollChannel() {
        return new DirectChannel();
    }

    @Bean
    public MessageChannel kafkaSuccessChannel() {
        return new DirectChannel();
    }

    @Bean
    public IntegrationFlow enrollIntegrationFlow() {
        return IntegrationFlows.from("enrollChannel")
                // another transformer which turns the String into a Message<?>
                .handle(Kafka.outboundChannelAdapter(kafkaTemplate) // kafkaTemplate has the necessary config
                        .topic("topic1")
                        .messageKey(messageKeyFunction -> messageKeyFunction.getHeaders()
                                .get("key1"))
                        .sendSuccessChannel("kafkaSuccessChannel"));
    }

    @Bean
    public IntegrationFlow successfulKafkaSends() {
        return f -> IntegrationFlows.from("kafkaSuccessChannel").transform(stransformer);
    }
}
--------------
@Component
public class KafkaSuccessTransformer {

    @Transformer
    public Message<?> transform(Message<?> message) {
        log.info("Message is sent to Kafka");
        return message; // control comes here but does not return to REST controller
    }
}
Channel adapters are for one-way traffic; there is no result.
Add a publish-subscribe channel with two subflows; the second one can be just a bridge to nowhere - .bridge() ends the flow. It will then return the outbound message to the gateway; see the sketch after the link below.
See https://docs.spring.io/spring-integration/docs/current/reference/html/dsl.html#java-dsl-subflows
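A rough sketch of that pattern (my reconstruction against the question's bean and channel names, not the poster's actual code) could look like this:
@Bean
public IntegrationFlow enrollIntegrationFlow(KafkaTemplate<String, String> kafkaTemplate) {
    return IntegrationFlows.from("enrollChannel")
            .publishSubscribeChannel(pubSub -> pubSub
                    // first subscriber: one-way send to Kafka
                    .subscribe(sendToKafka -> sendToKafka
                            .handle(Kafka.outboundChannelAdapter(kafkaTemplate)
                                    .topic("topic1")))
                    // second subscriber: a "bridge to nowhere" - with no output channel,
                    // the message is returned to the gateway via the replyChannel header
                    .subscribe(toGateway -> toGateway.bridge()))
            .get();
}
With the default (non-executor) publish-subscribe channel the two subflows run in order, so the gateway only gets the request message back after the Kafka send has been handed off.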
Per Artem:
Something is off in the configuration or code. The logic is like this: processSendResult(message, producerRecord, sendFuture, getSendSuccessChannel());. Then: getMessageBuilderFactory().fromMessage(message). So, the replyChannel header is present in this "success" message. Therefore that transform(stransformer) should really send its return value to the replyChannel for the gateway at the start of the flow. The only problem could be in the KafkaSuccessTransformer code, if it does not copy the request message headers onto the reply message. Please share its whole code.
I'm new to Spring Integration. I need to implement a messaging gateway with a return value, in order to continue some processing asynchronously after executing some synchronous steps. So I made two activators:
@Slf4j
@MessageEndpoint
public class Activator1 {

    @ServiceActivator(inputChannel = "asyncChannel")
    public void async() {
        log.info("Just async message");
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            log.error("I don't want to sleep now");
        }
    }
}
and
@Slf4j
@MessageEndpoint
public class Activator2 {

    @ServiceActivator(inputChannel = "syncChannel")
    public ResponseEntity sync() {
        try {
            Thread.sleep(500);
            return ResponseEntity.ok("Return Http Message");
        } catch (InterruptedException e) {
            log.error("I don't want to sleep");
        }
        return ResponseEntity.badRequest().build();
    }
}
The pipeline
@Configuration
public class Pipeline {

    @Bean
    MessageChannel asyncChannel() {
        return new DirectChannel();
    }

    @Bean
    public MessageChannel syncChannel() {
        return MessageChannels.direct().get();
    }
}
the gateway
@MessagingGateway
public interface ReturningGateway {

    @Gateway(requestChannel = "asyncChannel", replyChannel = "syncChannel")
    public ResponseEntity getSyncHttpResponse();
}
And Controller
@Slf4j
@RestController
@RequestMapping("/sync")
public class ResponseController {

    @Autowired
    ReturningGateway returningGateway;

    @PostMapping("/http-response")
    public ResponseEntity post() {
        return returningGateway.getSyncHttpResponse();
    }
}
So I'm not sure if that's the correct way to do what I want to do.
Can you give me a hand?
Let me try to explain some things first of all!
@Gateway(requestChannel = "asyncChannel", replyChannel = "syncChannel")
The requestChannel is where a gateway sends a message. But since you don't have any arguments in the gateway method and there is no payloadExpression, the behavior is to "receive" from that channel. See the docs for more info: https://docs.spring.io/spring-integration/docs/current/reference/html/messaging-endpoints.html#gateway-calling-no-argument-methods.
The replyChannel is where the gateway waits for a reply; it is not where it sends. In most cases a gateway relies on the replyChannel header for correlation (the request-reply pattern in messaging). We need an explicit replyChannel only if it is a PublishSubscribeChannel that lets us track replies somehow, or when we deal with a flow that we can't modify to rely on the replyChannel header. See the same gateway chapter in the docs.
Your use case is not clear to me: you say async continuation, but at the same time the return from your gateway contract looks like the result of that sync() method. From here, please make yourself familiar with the gateway contract and then come back to us with a refreshed vision for your solution.
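For reference, here is a minimal sketch of the usual contract (my own illustration, reusing the question's syncChannel and Activator2 names): the gateway method carries a payload, and the reply comes back through the replyChannel header, so no explicit replyChannel is needed:
@MessagingGateway
public interface ReturningGateway {

    @Gateway(requestChannel = "syncChannel")
    ResponseEntity<String> getSyncHttpResponse(String payload);
}

@MessageEndpoint
public class Activator2 {

    @ServiceActivator(inputChannel = "syncChannel")
    public ResponseEntity<String> sync(String payload) {
        // the return value is routed back to the gateway via the replyChannel header
        return ResponseEntity.ok("Return Http Message: " + payload);
    }
}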
I'm trying to connect to an ActiveMQ instance from node.js using STOMP.js which connects via STOMP over websockets. My broker has a security policy enforced by a BrokerFilter:
package com.mycompany.queues.security;

import com.mycompany.domain.businessObjects.User;
import com.mycompany.domain.services.UserService;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.activemq.broker.Broker;
import org.apache.activemq.broker.BrokerFilter;
import org.apache.activemq.broker.ConnectionContext;
import org.apache.activemq.broker.region.Subscription;
import org.apache.activemq.command.ConnectionInfo;
import org.apache.activemq.command.ConsumerInfo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class BrokerAuthentication extends BrokerFilter {

    private static final Logger log = LoggerFactory.getLogger(BrokerAuthentication.class);

    private UserService userService;
    private List<String> noAuthIPs;

    private static final Pattern IP_PATTERN = Pattern.compile(".*://([0-9A-Za-z\\.]*).*");
    private static final Pattern CUSTOMER_ID_PATTERN = Pattern.compile(".*customer_id\\s*=\\s*(\\d+)\\s*.*");

    public BrokerAuthentication(Broker broker, UserService userService, List<String> noAuthIPs) {
        super(broker);
        this.userService = userService;
        this.noAuthIPs = noAuthIPs;
    }

    @Override
    public void addConnection(ConnectionContext context, ConnectionInfo info) throws Exception {
        if (requiresAuth(context)) {
            //...
        }
        super.addConnection(context, info);
    }

    //...

    private boolean requiresAuth(ConnectionContext context) {
        String remoteAddress = context.getConnection().getRemoteAddress();
        Matcher matcher = IP_PATTERN.matcher(remoteAddress);
        if (matcher.matches()) {
            String ip = matcher.group(1);
            if (noAuthIPs.contains(ip)) {
                return false;
            }
        } else {
            log.info("IP not in no auth list " + remoteAddress);
        }
        return true;
    }
}
where I've omitted some irrelevant stuff. noAuthIPs is set in an XML config file and should include localhost.
When I attempt to connect to the broker from the same machine, I'm getting this error logged by requiresAuth:
IP not in no auth list StompSocket_814158251
Digging through the ActiveMQ source code, it seems there are 29 different implementations of the Connection interface, but I'm having trouble finding one that could possibly give me StompSocket_814158251 as the remote address.
I've tried grepping for StompSocket in the node library on GitHub, and drew a blank.
I can't just add that specific string to my "allowed hosts" because of the random numbers at the end, and it's obviously not secure to try and add some catch-all workaround like matching StompSocket against a regex just because I don't understand it.
Where is this weird remote address coming from, and how can I configure my auth around this behaviour?
Thanks in advance for any help.
EDIT:
My ActiveMQ version is 5.11.1
My connection configuration for the node Stomp.js client:
const { AMQ_HOST, AMQ_PORT, AMQ_USERNAME, AMQ_PWD } = process.env;
//...
new Client({
    brokerURL: `ws://${AMQ_HOST}:${AMQ_PORT}/stomp`,
    connectHeaders: {
        login: AMQ_USERNAME,
        passcode: AMQ_PWD
    },
    debug: function (str) {
        logger.info(str);
    },
    reconnectDelay: 2,
    heartbeatIncoming: 4000,
    heartbeatOutgoing: 4000
});
where in my .env file I have
AMQ_HOST = localhost
AMQ_PORT = 61614
We have used Spring Cloud Stream with more than one handler in the same application, and we are trying to create an integration test.
A few of the handlers are as follows:
public interface Source1 {

    String OUTPUT = "output_source1";

    /**
     * @return output channel
     */
    @Output(Source1.OUTPUT)
    MessageChannel output();
}

public interface Processor1 {

    String INPUT = "input_process1";
    String OUTPUT = "output_process1";

    /**
     * @return input channel
     */
    @Input(Processor1.INPUT)
    MessageChannel input();

    /**
     * @return output channel
     */
    @Output(Processor1.OUTPUT)
    MessageChannel output();
}

public interface Sink1 {

    /**
     * Name of the input channel.
     */
    String INPUT = "input_sink1";

    /**
     * @return input channel
     */
    @Input(Sink1.INPUT)
    MessageChannel input();
}
We have the following channel configuration in application.yml:
spring:
  cloud:
    stream:
      bindings:
        output_source1:
          destination: source1
          binder: local_rabbit
        input_process1:
          destination: source1
          binder: local_rabbit
        output_process1:
          destination: processed
          binder: local_rabbit
        input_sink1:
          destination: processed
          binder: local_rabbit
Here data flows from Source1 -> Processor1 -> Sink1.
Problem: we need to check the whole flow, so in a test case, if Source1 produces data, it should arrive in Sink1. How can we test this?
We checked this doc (https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/2.2.0.RELEASE/spring-cloud-stream.html#spring_integration_test_binder) but it says "Test Binder only supports the three bindings provided by the framework (Source, Processor, Sink)".
We have used more than one channel across many features, so in an integration test that channel wiring should work.
Also, is there a way to run an integration test without using an actual message broker?
We're at version 3.0.1 now, and the test binder was upgraded to support multiple bindings - https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/3.0.1.RELEASE/reference/html/spring-cloud-stream.html#_testing
Also, just as an FYI, we're moving away from the annotation-based programming model and toward the functional one. You can get more info and details from this post (see the Quick highlights section for more links). In other words, you can greatly reduce your code by eliminating Processor1, Sink1, Source1, @EnableBinding, @StreamListener, etc.
One way to organize the code is presented below.
Let's say we have a class with a supplier, a processor and a consumer, and some interface to emit data into the stream by means of an emitData method.
@Component
public class Handlers {

    private EmitterProcessor<String> sourceGenerator = EmitterProcessor.create();

    public void emitData(String str) {
        sourceGenerator.onNext(str);
    }

    @Bean
    public Supplier<Flux<String>> generate() {
        return () -> sourceGenerator;
    }

    @Bean
    public Function<String, String> process() {
        return str -> str.toUpperCase();
    }

    @Bean
    public Consumer<String> sink() {
        return val -> System.out.println(val);
    }
}
For that application we have the following fragment of application.yml, with the bindings restricted to the dev and production profiles only:
spring:
  profiles: dev,production
  cloud:
    stream:
      function:
        definition: generate;process;sink
      bindings:
        generate-out-0: source1
        process-in-0: source1
        process-out-0: processed
        sink-in-0: processed
      bindingServiceProperties:
        defaultBinder: local_rabbit
      binders:
        local_rabbit:
          type: rabbit
  rabbitmq:
    host: localhost
    port: 5672
    username: guest
    password: guest
    virtual-host: /
And the test classes. We use AbstractTest as a common base class for all the tests. Let's assume we have a big application with web and non-web parts and with widely used dependency injection that we cannot switch off.
@SpringBootTest(
        classes = App.class,
        webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT
)
@Import(TestChannelBinderConfiguration.class)
@ActiveProfiles("test")
public class AbstractTest {
}
The HandlersTest class is used for this unit testing only.
@Slf4j
@TestPropertySource(
        properties = {"spring.cloud.function.definition = generate|process"}
)
public class HandlersTest extends AbstractTest {

    @Autowired
    private OutputDestination outputDestination;

    @Autowired
    private Handlers handlers;

    // A way to test a workflow with internal function composition
    // declared through spring.cloud.function.definition
    @Test
    public void testGeneratorAndProcessor() {
        final String testStr = "test";
        handlers.emitData(testStr);
        Object eventObj;
        final Message<byte[]> message = outputDestination.receive(1000);
        assertNotNull(message, "processing timeout");
        eventObj = message.getPayload();
        assertEquals(new String((byte[]) eventObj), testStr.toUpperCase());
    }

    // A way to test the processor function only, with direct
    // access to the function
    @Autowired
    private FunctionCatalog catalog;

    @Test
    public void testProcessor() {
        final String testStr = "test";
        final Function<String, String> function = catalog.lookup("process");
        assertNotNull(function, "The function was not found");
        final String result = function.apply(testStr);
        assertEquals(result, testStr.toUpperCase());
    }
}
We can use the testGeneratorAndProcessor test for partial testing of the workflow, with internal transmission of the data from generate to process. In that case we have only one output, with index 0, and no inputs. In the same way we can assemble different combinations of the workflow, like process|sink with one input and no outputs, or the full workflow generate|process|sink with no inputs and no outputs.
When we need to test one specific function only, it is simpler to use catalog.lookup to get that function directly.
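As an illustration (my own sketch on top of the same AbstractTest base, not part of the original answer), testing the process binding on its own through the test binder's InputDestination and OutputDestination could look like this:
@TestPropertySource(
        properties = {"spring.cloud.function.definition = process"}
)
public class ProcessorBindingTest extends AbstractTest {

    @Autowired
    private InputDestination inputDestination;

    @Autowired
    private OutputDestination outputDestination;

    // drives the binding end to end: send to process-in-0, read from process-out-0
    @Test
    public void testProcessBinding() {
        inputDestination.send(new GenericMessage<>("test".getBytes()));
        final Message<byte[]> message = outputDestination.receive(1000);
        assertNotNull(message, "processing timeout");
        assertEquals("TEST", new String(message.getPayload()));
    }
}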
I am new to Spring Integration and new to Stack Overflow. I am looking for some help in understanding Spring Integration as it relates to a request-reply pattern. From reading on the web, I am thinking that I should be using a Service Activator to enable this type of use case.
I am using JMS to facilitate the sending and receiving of XML-based messages. Our underlying implementation is IBM WebSphere MQ.
I am also using Spring Boot (version 1.3.6.RELEASE) and attempting to use a purely annotation-based configuration approach (if that is possible). I have searched the web and seen some examples, but nothing so far that helps me understand how it all fits together. The Spring Integration documentation is excellent, but I am still struggling with how all the pieces fit together. I apologize in advance if there is something out there that I missed; I treat posting here as a last resort.
Here is what I have for my configuration:
package com.daluga.spring.integration.configuration;

import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.mq.jms.MQQueue;
import com.ibm.msg.client.wmq.WMQConstants;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.IntegrationComponentScan;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.Destination;
import javax.jms.JMSException;
//import com.ibm.msg.client.services.Trace;

@Configuration
public class MQConfiguration {

    private static final Logger LOGGER = LoggerFactory.getLogger(MQConfiguration.class);

    @Value("${host-name}")
    private String hostName;

    @Value("${port}")
    private int port;

    @Value("${channel}")
    private String channel;

    @Value("${time-to-live}")
    private int timeToLive;

    @Autowired
    @Qualifier("MQConnectionFactory")
    ConnectionFactory connectionFactory;

    @Bean(name = "jmsTemplate")
    public JmsTemplate provideJmsTemplate() {
        JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory);
        jmsTemplate.setExplicitQosEnabled(true);
        jmsTemplate.setTimeToLive(timeToLive);
        jmsTemplate.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        return jmsTemplate;
    }

    @Bean(name = "MQConnectionFactory")
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory ccf = new CachingConnectionFactory();
        //Trace.setOn();
        try {
            MQConnectionFactory mqcf = new MQConnectionFactory();
            mqcf.setHostName(hostName);
            mqcf.setPort(port);
            mqcf.setChannel(channel);
            mqcf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            ccf.setTargetConnectionFactory(mqcf);
            ccf.setSessionCacheSize(2);
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
        return ccf;
    }

    @Bean(name = "requestQueue")
    public Destination createRequestQueue() {
        Destination queue = null;
        try {
            queue = new MQQueue("REQUEST.QUEUE");
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
        return queue;
    }

    @Bean(name = "replyQueue")
    public Destination createReplyQueue() {
        Destination queue = null;
        try {
            queue = new MQQueue("REPLY.QUEUE");
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
        return queue;
    }

    @Bean(name = "requestChannel")
    public QueueChannel createRequestChannel() {
        QueueChannel channel = new QueueChannel();
        return channel;
    }

    @Bean(name = "replyChannel")
    public QueueChannel createReplyChannel() {
        QueueChannel channel = new QueueChannel();
        return channel;
    }
}
And here is my Service class:
package com.daluga.spring.integration.service;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.stereotype.Service;

@Service
public class MyRequestReplyService {

    private static final Logger LOGGER = LoggerFactory.getLogger(MyRequestReplyService.class);

    @ServiceActivator(inputChannel = "replyChannel")
    public void sendAndReceive(String requestPayload) {
        // How to get replyPayload?
    }
}
So, at this point, I am not quite sure how to glue all this together. I don't understand how to wire my request and reply queues to the service activator to make this all work.
The service I am calling (JMS/WebSphere MQ based) uses the typical message and correlation IDs so that I can properly tie the request to the corresponding response.
Can anyone provide me any guidance on how to get this to work? Please let me know what additional information I can provide to make this clear.
Thanks in advance for your help!
Dan
Gateways provide request/reply semantics.
Instead of using a JmsTemplate directly, you should be using Spring Integration's built-in JMS Support.
@Bean
@ServiceActivator(inputChannel = "requestChannel")
public MessageHandler jmsOutGateway() {
    JmsOutboundGateway outGateway = new JmsOutboundGateway();
    // set properties
    outGateway.setOutputChannel(replyChannel());
    return outGateway;
}
If you want to roll your own, change the service activator method to return a reply type and use one of the template sendAndReceive() or convertSendAndReceive() methods.
The sample app uses XML configuration but should provide some additional guidance.
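If you take the roll-your-own route, a minimal sketch (my own, wrapping the question's jmsTemplate bean in a JmsMessagingTemplate; the bean and queue names come from the question, the rest is illustrative) could be:
@Service
public class MyRequestReplyService {

    private final JmsMessagingTemplate messagingTemplate;
    private final Destination requestQueue;

    public MyRequestReplyService(@Qualifier("jmsTemplate") JmsTemplate jmsTemplate,
                                 @Qualifier("requestQueue") Destination requestQueue) {
        this.messagingTemplate = new JmsMessagingTemplate(jmsTemplate);
        this.requestQueue = requestQueue;
    }

    // the return value becomes the reply the gateway is waiting for (via the replyChannel header);
    // convertSendAndReceive() blocks on a temporary JMS reply queue and correlates the response
    @ServiceActivator(inputChannel = "requestChannel")
    public String sendAndReceive(String requestPayload) {
        return this.messagingTemplate.convertSendAndReceive(requestQueue, requestPayload, String.class);
    }
}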