In the example below, I'm getting [Manoj, Jeeva] as output, but [Hello Manoj, Hello Jeeva] is what I expect. Why is serviceChnl NOT handing its output back to the aggregator?
@Bean
public IntegrationFlow sayHelloIntFlow() {
    return IntegrationFlows.from("serviceChnl")
            .handle(new GenericHandler<String>() {
                public Object handle(String payload, Map<String, Object> headers) {
                    return "Hello " + payload;
                }
            })
            .get();
}
@Bean
public IntegrationFlow splitFlow() {
    return IntegrationFlows.from("splitChnl")
            .split()
            .channel("serviceChnl")
            .aggregate()
            .handle(new GenericHandler<Object>() {
                public Object handle(Object payload, Map<String, Object> headers) {
                    System.out.println(payload);
                    return null;
                }
            })
            .channel("nullChannel")
            .get();
}
@Test
public void test() {
    String[] strArr = new String[] {"Manoj", "Jeeva"};
    Message<String[]> msg = MessageBuilder.withPayload(strArr).build();
    splitChnl.send(msg);
}
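(The splitChnl reference in the test is assumed to be the flow's input channel autowired into the test class, along the lines of the snippet below; this wiring is not shown in the original.)

@Autowired
@Qualifier("splitChnl")
private MessageChannel splitChnl;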
I got it now: after splitting the message, I should either enrich or transform it, not send it to a separate channel.
@Bean
public IntegrationFlow splitFlow() {
    return IntegrationFlows.from("splitChnl")
            .split()
            .transform(new HelloTransformer())
            .aggregate()
            .handle(new ShowOutput<String>())
            .channel("nullChannel")
            .get();
}
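For reference, the HelloTransformer and ShowOutput classes are not shown above; hypothetical minimal versions, assuming a GenericTransformer and a GenericHandler, could look like this:

// Hypothetical sketches of the classes referenced above; the originals are not shown.
class HelloTransformer implements GenericTransformer<String, String> {

    public String transform(String source) {
        return "Hello " + source;
    }
}

class ShowOutput<T> implements GenericHandler<T> {

    public Object handle(T payload, Map<String, Object> headers) {
        System.out.println(payload);
        return null; // one-way: nothing is sent downstream
    }
}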
That's the correct answer and you can accept it yourself. Your issue was that you misunderstood the inter-endpoint channel concept a bit. Channels aren't intended to send messages to a separate flow (although you can do that); they connect the endpoints within one flow. They are there anyway, even if you don't declare .channel() explicitly. To hand off to another flow we have .wireTap() and .gateway(). Please read the Spring Integration Reference Manual about <chain> and the DSL Manual; there is enough info there to avoid confusion during development.
Below is an example using .gateway() to delegate to another flow.
A DefaultAggregatingMessageGroupProcessor is used to aggregate the individual Message payloads into a Collection of payloads.
@Bean
IntegrationFlow splitAndDelegate(IntegrationFlow delegateFlow) {
    return flowDef -> flowDef
            .split()
            .gateway(delegateFlow)
            .aggregate(aggregatorSpec ->
                    aggregatorSpec.outputProcessor(new DefaultAggregatingMessageGroupProcessor()));
}
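The delegateFlow itself can be any request/reply flow; a hypothetical minimal example matching the splitAndDelegate(IntegrationFlow delegateFlow) parameter above:

// Hypothetical delegate flow; the gateway above sends each split item here and waits for the reply.
@Bean
public IntegrationFlow delegateFlow() {
    return f -> f.<String>handle((payload, headers) -> "Hello " + payload);
}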
I have two IntegrationFlows.
Both receive messages from Apache Kafka.
In the first IntegrationFlow, Consumer1 (concurrency=4) reads topic_1 on the input channel.
In the second IntegrationFlow, Consumer2 (concurrency=4) reads topic_2 on the input channel.
But both IntegrationFlows send their messages to an output channel where one common class, MyMessageHandler, is specified,
like this:
@Bean
public IntegrationFlow sendFromQueueFlow1(MyMessageHandler message) {
    return IntegrationFlows
            .from(Kafka
                    .messageDrivenChannelAdapter(consumerFactory1, "topic_1")
                    .configureListenerContainer(configureListenerContainer_priority1))
            .handle(message)
            .get();
}

@Bean
public IntegrationFlow sendFromQueueFlow2(MyMessageHandler message) {
    return IntegrationFlows
            .from(Kafka
                    .messageDrivenChannelAdapter(consumerFactory2, "topic_2")
                    .configureListenerContainer(configureListenerContainer_priority2))
            .handle(message)
            .get();
}
The MyMessageHandler class has a send(message) path; its handler method passes messages on to another service:
class MyMessageHandler extends AbstractMessageHandler {

    protected void handleMessageInternal(Message<?> message) {
        String postResponse = myService.send(message); // remote service call
        msgsStatisticsService.sendMessage(message, postResponse);
        // *******
    }
}
Inside each IntegrationFlow, 4 consumer threads are working (8 threads in total), and they all go into the one MyMessageHandler class, into one method, send().
What problems could there be?
The two IntegrationFlows, do they see each other when they pass a message to one common class? Do I need to provide thread safety in the MyMessageHandler class? Do I need to mark the send() method as synchronized?
But what if we make a third IntegrationFlow, so that only one IntegrationFlow passes messages to the MyMessageHandler class? Would it be thread-safe then? Example:
@Bean
public IntegrationFlow sendFromQueueFlow1() {
    return IntegrationFlows
            .from(Kafka
                    .messageDrivenChannelAdapter(consumerFactory1, "topic_1")
                    .configureListenerContainer(configureListenerContainer_priority1))
            .channel(someChannel())
            .get();
}

@Bean
public IntegrationFlow sendFromQueueFlow2() {
    return IntegrationFlows
            .from(Kafka
                    .messageDrivenChannelAdapter(consumerFactory2, "topic_2")
                    .configureListenerContainer(configureListenerContainer_priority2))
            .channel(someChannel())
            .get();
}

@Bean
public MessageChannel someChannel() {
    return new DirectChannel();
}

@Bean
public IntegrationFlow sendALLFromQueueFlow(MyMessageHandler message) {
    return IntegrationFlows
            .from(someChannel())
            .handle(message)
            .get();
}
You need to make your handler code thread-safe.
Using synchronized on the whole method would effectively disable the concurrency.
It's better to use thread-safe techniques: no mutable fields, or limited synchronized blocks only around the critical code.
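As a rough, hypothetical sketch (the collaborator types are assumed from the question and are not shown there): a handler whose fields are all final, injected collaborators needs no locking at all, while any genuinely shared mutable state gets the smallest possible synchronized block:

class MyMessageHandler extends AbstractMessageHandler {

    // Injected once and never reassigned; safe to share across all consumer threads,
    // provided the services themselves are thread-safe.
    private final MyService myService;                         // assumed collaborator type
    private final MsgsStatisticsService msgsStatisticsService; // assumed collaborator type

    // Example of shared mutable state that does need guarding.
    private final Object statsLock = new Object();
    private long processedCount;

    MyMessageHandler(MyService myService, MsgsStatisticsService msgsStatisticsService) {
        this.myService = myService;
        this.msgsStatisticsService = msgsStatisticsService;
    }

    @Override
    protected void handleMessageInternal(Message<?> message) {
        String postResponse = this.myService.send(message);        // no shared mutable state touched: no lock needed
        this.msgsStatisticsService.sendMessage(message, postResponse);
        synchronized (this.statsLock) {                             // synchronize only the critical section
            this.processedCount++;
        }
    }
}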
I have 2 server-side services and I would like to route messages to them using message headers, where remote clients put a service identification into the type field.
Is the code snippet from the server-side config the correct way to do this? It throws a cast exception, indicating that route() sees only the payload, not the message headers. Also, all the examples in the Spring Integration manual show only payload-based decisions.
@Bean
public IntegrationFlow serverFlow( // common flow for all my services, currently 2
        TcpNetServerConnectionFactory serverConnectionFactory,
        HeartbeatServer heartbeatServer,
        FeedServer feedServer) {
    return IntegrationFlows
            .from(Tcp.inboundGateway(serverConnectionFactory))
            .<Message<?>, String>route(m -> m.getHeaders().get("type", String.class),
                    routeSpec -> routeSpec
                            .subFlowMapping("heartbeat", subflow -> subflow.handle(heartbeatServer::processRequest))
                            .subFlowMapping("feed", subflow -> subflow.handle(feedServer::consumeFeed)))
            .get();
}
Client side config:
@Bean
public IntegrationFlow heartbeatClientFlow(
        TcpNetClientConnectionFactory clientConnectionFactory,
        HeartbeatClient heartbeatClient) {
    return IntegrationFlows
            .from(heartbeatClient::send, e -> e.poller(Pollers.fixedDelay(Duration.ofSeconds(5))))
            .enrichHeaders(c -> c.header("type", "heartbeat"))
            .log()
            .handle(outboundGateway(clientConnectionFactory))
            .handle(heartbeatClient::receive)
            .get();
}
@Bean
public IntegrationFlow feedClientFlow(
        TcpNetClientConnectionFactory clientConnectionFactory) {
    return IntegrationFlows
            .from(FeedClient.MessageGateway.class)
            .enrichHeaders(c -> c.header("type", "feed"))
            .log()
            .handle(outboundGateway(clientConnectionFactory))
            .get();
}
And as usual here is the full demo project code, ClientConfig and ServerConfig.
There is no standard way to send headers over raw TCP. You need to encode them into the payload somehow (and extract them on the server side).
The framework provides a mechanism to do this for you, but it requires extra configuration.
See the documentation.
Specifically...
The MapJsonSerializer uses a Jackson ObjectMapper to convert between a Map and JSON. You can use this serializer in conjunction with a MessageConvertingTcpMessageMapper and a MapMessageConverter to transfer selected headers and the payload in JSON.
I'll try to find some time to create an example of how to use it.
But, of course, you can roll your own encoding/decoding.
EDIT
Here's an example configuration to use JSON to convey message headers over TCP...
@SpringBootApplication
public class TcpWithHeadersApplication {

    public static void main(String[] args) {
        SpringApplication.run(TcpWithHeadersApplication.class, args);
    }

    // Client side

    public interface TcpExchanger {

        public String exchange(String data, @Header("type") String type);

    }

    @Bean
    public IntegrationFlow client(@Value("${tcp.port:1234}") int port) {
        return IntegrationFlows.from(TcpExchanger.class)
                .handle(Tcp.outboundGateway(Tcp.netClient("localhost", port)
                        .deserializer(jsonMapping())
                        .serializer(jsonMapping())
                        .mapper(mapper())))
                .get();
    }

    // Server side

    @Bean
    public IntegrationFlow server(@Value("${tcp.port:1234}") int port) {
        return IntegrationFlows.from(Tcp.inboundGateway(Tcp.netServer(port)
                        .deserializer(jsonMapping())
                        .serializer(jsonMapping())
                        .mapper(mapper())))
                .log(Level.INFO, "exampleLogger", "'Received type header:' + headers['type']")
                .route("headers['type']", r -> r
                        .subFlowMapping("upper",
                                subFlow -> subFlow.transform(String.class, p -> p.toUpperCase()))
                        .subFlowMapping("lower",
                                subFlow -> subFlow.transform(String.class, p -> p.toLowerCase())))
                .get();
    }

    // Common

    @Bean
    public MessageConvertingTcpMessageMapper mapper() {
        MapMessageConverter converter = new MapMessageConverter();
        converter.setHeaderNames("type");
        return new MessageConvertingTcpMessageMapper(converter);
    }

    @Bean
    public MapJsonSerializer jsonMapping() {
        return new MapJsonSerializer();
    }

    // Console

    @Bean
    @DependsOn("client")
    public ApplicationRunner runner(TcpExchanger exchanger,
            ConfigurableApplicationContext context) {
        return args -> {
            System.out.println("Enter some text; if it starts with a lower case character,\n"
                    + "it will be uppercased by the server; otherwise it will be lowercased;\n"
                    + "enter 'quit' to end");
            Scanner scanner = new Scanner(System.in);
            String request = scanner.nextLine();
            while (!"quit".equals(request.toLowerCase())) {
                if (StringUtils.hasText(request)) {
                    String result = exchanger.exchange(request,
                            Character.isLowerCase(request.charAt(0)) ? "upper" : "lower");
                    System.out.println(result);
                }
                request = scanner.nextLine();
            }
            scanner.close();
            context.close();
        };
    }

}
I am using Spring Integration DSL and have a simple Gateway:
@MessagingGateway(name = "eventGateway", defaultRequestChannel = "inputChannel")
public interface EventProcessorGateway {

    @Gateway(requestChannel = "inputChannel")
    public void processEvent(Message<?> message);

}
My Spring Integration flow is defined as:
@Bean MessageChannel inputChannel() { return new DirectChannel(); }
@Bean MessageChannel errorChannel() { return new DirectChannel(); }
@Bean MessageChannel retryGatewayChannel() { return new DirectChannel(); }
@Bean MessageChannel jsonChannel() { return new DirectChannel(); }

@Bean
public IntegrationFlow postEvents() {
    return IntegrationFlows.from(inputChannel())
            .route("headers.contentType",
                    m -> m.channelMapping(MediaType.APPLICATION_JSON_VALUE, "json"))
            .get();
}

@Bean
public IntegrationFlow retryGateway() {
    return IntegrationFlows.from("json")
            .gateway(retryGatewayChannel(), e -> e.advice(retryAdvice()))
            .get();
}

@Bean
public IntegrationFlow transformJsonEvents() {
    return IntegrationFlows
            .from(retryGatewayChannel())
            .transform(new JsonTransformer())
            .handle(new JsonHandler())
            .get();
}
The JsonTransformer is a simple AbstractTransformer that transforms the JSON data and passes it to the JsonHandler.
class JsonHandler extends AbstractMessageHandler {

    protected void handleMessageInternal(Message<?> message) throws Exception {
        // do stuff; return nothing on success, else throw an Exception
    }
}
I call my gateway from code as such:
try {
    Message<List<EventRecord>> message = MessageBuilder.createMessage(eventList, new MessageHeaders(['contentType': contentType]))
    eventProcessorGateway.processEvent(message)
    logSuccess(eventList)
} catch (Exception e) {
    logError(eventList)
}
I want the entire call and processing to be synchronous, and any errors that occur to be caught so I can handle them appropriately. The call to the gateway works: the message gets sent through the Transformer to the Handler and is processed, and if an Exception occurs it bubbles back, is caught, and logError() is called. However, if the call is successful, logSuccess() is never reached. It's as if execution stops/hangs after the Handler processes the message and never returns. I don't actually need a response; I'm more concerned about whether something fails to process. Do I need to send something back to the initial EventProcessorGateway?
Your issue is here:
return IntegrationFlows.from("json")
.gateway(retryGatewayChannel(), e -> e.advice(retryAdvice()))
.get();
where that .gateway() is request/reply, because it is part of the main flow.
It is similar to a <gateway> within a <chain>.
So, even though your main flow is one-way, using .gateway() inside it requires a reply from your sub-flow, but this one:
.handle(new JsonHandler())
.get();
doesn't do that, because it is a one-way MessageHandler.
On the other hand, even if you made the last endpoint request/reply (an AbstractReplyProducingMessageHandler), it wouldn't help, because you don't know what to do with the reply after the mid-flow gateway; your main flow is one-way.
You should re-think your design a bit and try to get rid of that mid-flow gateway. I see that you are trying to add some retry logic with retryAdvice(). How about moving it to the .handle(new JsonHandler()) endpoint instead of that mid-flow .gateway()?
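Something along these lines, as a hypothetical sketch of the suggested rework (bean and class names taken from the question):

@Bean
public IntegrationFlow transformJsonEvents() {
    return IntegrationFlows
            .from("json")
            .transform(new JsonTransformer())
            .handle(new JsonHandler(), e -> e.advice(retryAdvice()))   // retry applied to the final endpoint
            .get();
}

This way the retry wraps the terminal handler and no mid-flow reply is required, so the one-way flow can simply return to the gateway caller when the handler finishes.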
I am trying to implement the following using Spring Integration with DSL and lambda:
Given a message, send it to N consumers (via publish-subscribe). Wait for a limited time and return all the results that have arrived from the consumers (<= N) during that interval.
Here is an example configuration I have so far:
@Configuration
@EnableIntegration
@IntegrationComponentScan
@ComponentScan
public class ExampleConfiguration {

    @Bean(name = PollerMetadata.DEFAULT_POLLER)
    public PollerMetadata poller() {
        return Pollers.fixedRate(1000).maxMessagesPerPoll(1).get();
    }

    @Bean
    public MessageChannel publishSubscribeChannel() {
        return MessageChannels.publishSubscribe(splitterExecutorService()).applySequence(true).get();
    }

    @Bean
    public ThreadPoolTaskExecutor splitterExecutorService() {
        final ThreadPoolTaskExecutor executorService = new ThreadPoolTaskExecutor();
        executorService.setCorePoolSize(3);
        executorService.setMaxPoolSize(10);
        return executorService;
    }

    @Bean
    public DirectChannel errorChannel() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel requestChannel() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel channel1() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel channel2() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel collectorChannel() {
        return new DirectChannel();
    }

    @Bean
    public TransformerChannel1 transformerChannel1() {
        return new TransformerChannel1();
    }

    @Bean
    public TransformerChannel2 transformerChannel2() {
        return new TransformerChannel2();
    }

    @Bean
    public IntegrationFlow errorFlow() {
        return IntegrationFlows.from(errorChannel())
                .handle(m -> System.err.println("[" + Thread.currentThread().getName() + "] " + m.getPayload()))
                .get();
    }

    @Bean
    public IntegrationFlow channel1Flow() {
        return IntegrationFlows.from(publishSubscribeChannel())
                .transform("1: "::concat)
                .transform(transformerChannel1())
                .channel(collectorChannel())
                .get();
    }

    @Bean
    public IntegrationFlow channel2Flow() {
        return IntegrationFlows.from(publishSubscribeChannel())
                .transform("2: "::concat)
                .transform(transformerChannel2())
                .channel(collectorChannel())
                .get();
    }

    @Bean
    public IntegrationFlow splitterFlow() {
        return IntegrationFlows.from(requestChannel())
                .channel(publishSubscribeChannel())
                .get();
    }

    @Bean
    public IntegrationFlow collectorFlow() {
        return IntegrationFlows.from(collectorChannel())
                .resequence(r -> r.releasePartialSequences(true), null)
                .aggregate(a -> a.sendPartialResultOnExpiry(true).groupTimeout(500), null)
                .get();
    }

}
TransformerChannel1 and TransformerChannel2 are sample consumers and have been implemented with just a sleep to emulate delay.
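For example, TransformerChannel1 might look like this (a hypothetical sketch; the original class is not shown):

// Hypothetical sample consumer: sleeps for a while to emulate a slow downstream call.
class TransformerChannel1 implements GenericTransformer<String, String> {

    public String transform(String source) {
        try {
            Thread.sleep(200); // emulate delay
        }
        catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return source + " [consumer 1]";
    }
}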
The message flow is:
splitterFlow -> channel1Flow \
             -> channel2Flow / -> collectorFlow
Everything seems to work as expected, but I see warnings like:
Reply message received but the receiving thread has already received a reply
which is to be expected, given that a partial result was returned.
Questions:
Overall, is this a good approach?
What is the right way to gracefully service or discard those delayed messages?
How to deal with exceptions? Ideally I'd like to send them to errorChannel, but am not sure where to specify this.
Yes, the solution looks good. I guess it fits the Scatter-Gather pattern, whose implementation has been provided since version 4.1.
On the other hand, there is one more option for the aggregator since that version, too: expire-groups-upon-timeout, which is true by default for the aggregator. With this option set to false you would be able to discard all those late messages. Unfortunately, the DSL doesn't support it yet, hence it won't help even if you upgrade your project to Spring Integration 4.1.
Another option for those "Reply message received but the receiving thread has already received a reply" warnings is the spring.integraton.messagingTemplate.throwExceptionOnLateReply = true option, set via a spring.integration.properties file within the META-INF of one of your jars.
Anyway, I think Scatter-Gather is the best solution for your use case.
You can find here how to configure it from JavaConfig.
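For reference, with a later version of the Java DSL (where the scatterGather() operator is available), the fan-out/fan-in above could be collapsed into something like this hypothetical sketch (bean names taken from the question):

@Bean
public IntegrationFlow scatterGatherFlow() {
    return IntegrationFlows.from(requestChannel())
            .scatterGather(
                    scatterer -> scatterer
                            .applySequence(true)
                            .recipientFlow(f -> f.transform("1: "::concat).transform(transformerChannel1()))
                            .recipientFlow(f -> f.transform("2: "::concat).transform(transformerChannel2())),
                    gatherer -> gatherer
                            .sendPartialResultOnExpiry(true)
                            .groupTimeout(500))
            .get();
}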
UPDATE
What about exceptions and error channel?
Since you already deal with the throwExceptionOnLateReply, I guess you send the message to the requestChannel via a @MessagingGateway. The latter has an errorChannel option. On the other side, the PublishSubscribeChannel has an errorHandler option, for which you can use a MessagePublishingErrorHandler with your errorChannel as the default one.
BTW, don't forget that the Framework provides an errorChannel bean and an endpoint on it for the LoggingHandler. So please consider whether you really need to override that stuff. The default errorChannel is a PublishSubscribeChannel, hence you can simply add your own subscribers to it.
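A hypothetical sketch of wiring such an errorHandler onto the existing publishSubscribeChannel bean (bean names taken from the question):

@Bean
public MessageChannel publishSubscribeChannel() {
    PublishSubscribeChannel channel = new PublishSubscribeChannel(splitterExecutorService());
    channel.setApplySequence(true);
    MessagePublishingErrorHandler errorHandler = new MessagePublishingErrorHandler();
    errorHandler.setDefaultErrorChannel(errorChannel()); // exceptions from subscribers go to the errorFlow
    channel.setErrorHandler(errorHandler);
    return channel;
}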
I have a simple Spring Integration 4 Java DSL flow which uses a DirectChannel's LoadBalancingStrategy to round-robin Message requests to a number of possible REST Services (i.e. calls a REST service from one of two possible service endpoint URIs).
How my flow is currently configured:
#Bean(name = "test.load.balancing.ch")
public DirectChannel testLoadBalancingCh() {
LoadBalancingStrategy loadBalancingStrategy = new RoundRobinLoadBalancingStrategy();
DirectChannel directChannel = new DirectChannel(loadBalancingStrategy);
return directChannel;
}
#Bean
public IntegrationFlow testLoadBalancing0Flow() {
return IntegrationFlows.from("test.load.balancing.ch")
.handle(restHandler0())
.channel("test.result.ch")
.get();
}
#Bean
public IntegrationFlow testLoadBalancing1Flow() {
return IntegrationFlows.from("test.load.balancing.ch")
.handle(restHandler1())
.channel("test.result.ch")
.get();
}
#Bean
public HttpRequestExecutingMessageHandler restHandler0() {
return createRestHandler(endpointUri0, 0);
}
#Bean
public HttpRequestExecutingMessageHandler restHandler1() {
return createRestHandler(endpointUri1, 1);
}
private HttpRequestExecutingMessageHandler createRestHandler(String uri, int order) {
HttpRequestExecutingMessageHandler handler = new HttpRequestExecutingMessageHandler(uri);
// handler configuration goes here..
handler.setOrder(order);
return handler;
}
My configuration works, but I am wondering whether there is a simpler/better way of configuring the flow using Spring Integration's Java DSL?
Cheers,
PM
First of all, the RoundRobinLoadBalancingStrategy is the default one for a DirectChannel, so you can get rid of the testLoadBalancingCh() bean definition entirely.
Further, to avoid duplicating .channel("test.result.ch"), you can configure it on the HttpRequestExecutingMessageHandler via setOutputChannel().
On the other hand, your configuration is so simple that I don't see a reason to use the DSL; you can achieve the same with plain annotation configuration:
@Bean(name = "test.load.balancing.ch")
public DirectChannel testLoadBalancingCh() {
    return new DirectChannel();
}

@Bean(name = "test.result.ch")
public DirectChannel testResultCh() {
    return new DirectChannel();
}

@Bean
@ServiceActivator(inputChannel = "test.load.balancing.ch")
public HttpRequestExecutingMessageHandler restHandler0() {
    return createRestHandler(endpointUri0, 0);
}

@Bean
@ServiceActivator(inputChannel = "test.load.balancing.ch")
public HttpRequestExecutingMessageHandler restHandler1() {
    return createRestHandler(endpointUri1, 1);
}

private HttpRequestExecutingMessageHandler createRestHandler(String uri, int order) {
    HttpRequestExecutingMessageHandler handler = new HttpRequestExecutingMessageHandler(uri);
    // handler configuration goes here...
    handler.setOrder(order);
    handler.setOutputChannel(testResultCh());
    return handler;
}
On the other hand, there is the MessageChannels builder factory, which simplifies the loadBalancer configuration for your case:
@Bean(name = "test.load.balancing.ch")
public DirectChannel testLoadBalancingCh() {
    return MessageChannels.direct()
            .loadBalancer(new RoundRobinLoadBalancingStrategy())
            .get();
}
However, I guess you want to avoid duplication within the DSL flow definition to keep it DRY, but that isn't possible right now. That's because an IntegrationFlow is linear: it ties endpoints together while hiding the boilerplate code for creating the standard objects.
As you can see, to achieve round-robin we have to duplicate at least the input channel in order to subscribe several MessageHandlers to the same channel. And we do that in XML, via annotations and, of course, from the DSL.
I'm not sure it would be useful for real applications to provide a hook that configures several handlers with a single .handle() for the same round-robin channel, because the further downstream flow may not be as simple as your .channel("test.result.ch").
Cheers