I have two server-side services and I would like to route messages to them using message headers, where remote clients put the service identification into a type header field.
Is the code snippet below, from the server-side config, the correct way to do this? It throws a cast exception indicating that route() sees only the payload, not the message headers. Also, all the examples in the Spring Integration manual show only payload-based routing decisions.
@Bean
public IntegrationFlow serverFlow( // common flow for all my services, currently 2
TcpNetServerConnectionFactory serverConnectionFactory,
HeartbeatServer heartbeatServer,
FeedServer feedServer) {
return IntegrationFlows
.from(Tcp.inboundGateway(serverConnectionFactory))
.<Message<?>, String>route((m) -> m.getHeaders().get("type", String.class),
(routeSpec) -> routeSpec
.subFlowMapping("hearbeat", subflow -> subflow.handle(heartbeatServer::processRequest))
.subFlowMapping("feed", subflow -> subflow.handle(feedServer::consumeFeed)))
.get();
}
Client side config:
@Bean
public IntegrationFlow heartbeatClientFlow(
TcpNetClientConnectionFactory clientConnectionFactory,
HeartbeatClient heartbeatClient) {
return IntegrationFlows.from(heartbeatClient::send, e -> e.poller(Pollers.fixedDelay(Duration.ofSeconds(5))))
.enrichHeaders(c -> c.header("type", "heartbeat"))
.log()
.handle(outboundGateway(clientConnectionFactory))
.handle(heartbeatClient::receive)
.get();
}
@Bean
public IntegrationFlow feedClientFlow(
TcpNetClientConnectionFactory clientConnectionFactory) {
return IntegrationFlows.from(FeedClient.MessageGateway.class)
.enrichHeaders(c -> c.header("type", "feed"))
.log()
.handle(outboundGateway(clientConnectionFactory))
.get();
}
And as usual here is the full demo project code, ClientConfig and ServerConfig.
There is no standard way to send headers over raw TCP. You need to encode them into the payload somehow (and extract them on the server side).
The framework provides a mechanism to do this for you, but it requires extra configuration.
See the documentation.
Specifically...
The MapJsonSerializer uses a Jackson ObjectMapper to convert between a Map and JSON. You can use this serializer in conjunction with a MessageConvertingTcpMessageMapper and a MapMessageConverter to transfer selected headers and the payload in JSON.
I'll try to find some time to create an example of how to use it.
But, of course, you can roll your own encoding/decoding.
EDIT
Here's an example configuration to use JSON to convey message headers over TCP...
@SpringBootApplication
public class TcpWithHeadersApplication {
public static void main(String[] args) {
SpringApplication.run(TcpWithHeadersApplication.class, args);
}
// Client side
public interface TcpExchanger {
public String exchange(String data, @Header("type") String type);
}
@Bean
public IntegrationFlow client(@Value("${tcp.port:1234}") int port) {
return IntegrationFlows.from(TcpExchanger.class)
.handle(Tcp.outboundGateway(Tcp.netClient("localhost", port)
.deserializer(jsonMapping())
.serializer(jsonMapping())
.mapper(mapper())))
.get();
}
// Server side
@Bean
public IntegrationFlow server(@Value("${tcp.port:1234}") int port) {
return IntegrationFlows.from(Tcp.inboundGateway(Tcp.netServer(port)
.deserializer(jsonMapping())
.serializer(jsonMapping())
.mapper(mapper())))
.log(Level.INFO, "exampleLogger", "'Received type header:' + headers['type']")
.route("headers['type']", r -> r
.subFlowMapping("upper",
subFlow -> subFlow.transform(String.class, p -> p.toUpperCase()))
.subFlowMapping("lower",
subFlow -> subFlow.transform(String.class, p -> p.toLowerCase())))
.get();
}
// Common
@Bean
public MessageConvertingTcpMessageMapper mapper() {
MapMessageConverter converter = new MapMessageConverter();
converter.setHeaderNames("type");
return new MessageConvertingTcpMessageMapper(converter);
}
@Bean
public MapJsonSerializer jsonMapping() {
return new MapJsonSerializer();
}
// Console
@Bean
@DependsOn("client")
public ApplicationRunner runner(TcpExchanger exchanger,
ConfigurableApplicationContext context) {
return args -> {
System.out.println("Enter some text; if it starts with a lower case character,\n"
+ "it will be uppercased by the server; otherwise it will be lowercased;\n"
+ "enter 'quit' to end");
Scanner scanner = new Scanner(System.in);
String request = scanner.nextLine();
while (!"quit".equals(request.toLowerCase())) {
if (StringUtils.hasText(request)) {
String result = exchanger.exchange(request,
Character.isLowerCase(request.charAt(0)) ? "upper" : "lower");
System.out.println(result);
}
request = scanner.nextLine();
}
scanner.close();
context.close();
};
}
}
Related
How to resend the same or a modified message from an outbound HTTP call in case of specific client error responses like 400, 413, etc.
@Bean
private IntegrationFlow myChannel() {
IntegrationFlowBuilder builder =
IntegrationFlows.from(queue)
.handle(//http post method config)
...
.expectedResponseType(String.class))
.channel(MessageChannels.publishSubscribe(channel2));
return builder.get();
}
@Bean
private IntegrationFlow defaultErrorChannel() {
}
EDIT: Added the endpoint configurer to the handle() method
@Bean
private IntegrationFlow myChannel() {
IntegrationFlowBuilder builder =
IntegrationFlows.from(queue)
.handle(//http post method config)
...
.expectedResponseType(String.class),
e -> e.advice(myRetryAdvice()))
.channel(MessageChannels.publishSubscribe(channel2));
return builder.get();
}
@Bean
public Advice myRetryAdvice(){
... // set custom retry policy
}
Custom Retry policy:
class InternalServerExceptionClassifierRetryPolicy extends
ExceptionClassifierRetryPolicy {
public InternalServerExceptionClassifierRetryPolicy() {
final SimpleRetryPolicy simpleRetryPolicy =
new SimpleRetryPolicy();
simpleRetryPolicy.setMaxAttempts(2);
this.setExceptionClassifier(new Classifier<Throwable, RetryPolicy>() {
@Override
public RetryPolicy classify(Throwable classifiable) {
if (classifiable instanceof HttpServerErrorException) {
// For specifically 500 and 504
if (((HttpServerErrorException) classifiable).getStatusCode() == HttpStatus.INTERNAL_SERVER_ERROR
|| ((HttpServerErrorException) classifiable)
.getStatusCode() == HttpStatus.GATEWAY_TIMEOUT) {
return simpleRetryPolicy;
}
return new NeverRetryPolicy();
}
return new NeverRetryPolicy();
}
});
}
}
EDIT 2: Override open() to modify the original message
RequestHandlerRetryAdvice retryAdvice = new
RequestHandlerRetryAdvice(){
@Override
public <T, E extends Throwable> boolean open(RetryContext retryContext, RetryCallback<T, E> callback) {
Message<String> originalMsg =
(Message) retryContext.getAttribute(ErrorMessageUtils.FAILED_MESSAGE_CONTEXT);
Message<String> updatedMsg = //some updated message
retryContext.setAttribute(ErrorMessageUtils.FAILED_MESSAGE_CONTEXT, updatedMsg);
return super.open(retryContext, callback);
}
};
See the RequestHandlerRetryAdvice: https://docs.spring.io/spring-integration/reference/html/messaging-endpoints.html#message-handler-advice-chain. You configure a RetryPolicy that checks those HttpClientErrorExceptions for retry, and the framework will re-send for you.
The Java DSL lets us configure it via the second handle() argument, the endpoint configurer: .handle(..., e -> e.advice(myRetryAdvice)); see https://docs.spring.io/spring-integration/reference/html/dsl.html#java-dsl-endpoints
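For completeness, here is a minimal sketch of how the myRetryAdvice() bean from the question could be filled in, assuming the InternalServerExceptionClassifierRetryPolicy shown above (adjust the policy to match whichever status codes you actually want to retry):
@Bean
public Advice myRetryAdvice() {
    // The RetryTemplate drives the retries; plug in the custom classifier policy
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new InternalServerExceptionClassifierRetryPolicy());

    // RequestHandlerRetryAdvice applies the template around the outbound handler
    RequestHandlerRetryAdvice retryAdvice = new RequestHandlerRetryAdvice();
    retryAdvice.setRetryTemplate(retryTemplate);
    return retryAdvice;
}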
After a message is sent, it gets published to a Kafka topic, but the Message from KafkaSuccessTransformer does not come back to the REST controller. I am trying to return the message as-is if it was sent successfully, but nothing after the Kafka handler seems to be invoked.
@MessagingGateway
public interface MyGateway<String, Message<?>> {
@Gateway(requestChannel = "enrollChannel")
Message<?> sendMsg(@Payload String payload);
}
------------------------
@RestController
public class Controller {
MyGateway<String, Message<?>> myGateway;
@PostMapping
public Message<?> send(@RequestBody String request) throws Exception {
Message<?> resp = myGateway.sendMsg(request);
log.info("I am back"); // control doesn't come to this point
return resp;
}
}
--------------------------
@Component
public class MyIntegrationFlow {
KafkaSuccessTransformer stransformer;
@Bean
public MessageChannel enrollChannel() {
return new DirectChannel();
}
@Bean
public MessageChannel kafkaSuccessChannel() {
return new DirectChannel();
}
@Bean
public IntegrationFlow enrollIntegrationFlow() {
return IntegrationFlows.from("enrollChannel")
//another transformer which turns the string to Message<?>
.handle(Kafka.outboundChannelAdapter(kafkaTemplate) // kafkaTemplate has the necessary config
.topic("topic1")
.messageKey(messageKeyFunction -> messageKeyFunction.getHeaders()
.get("key1"))
.sendSuccessChannel("kafkaSuccessChannel"))
.get();
}
@Bean
public IntegrationFlow successfulKafkaSends() {
return f -> IntegrationFlows.from("kafkaSuccessChannel").transform(stransformer);
}
}
--------------
@Component
public class KafkaSuccessTransformer {
@Transformer
public Message<?> transform(Message<?> message) {
log.info("Message is sent to Kafka");
return message; //control comes here but does not return to REST controller
}
}
Channel adapters are for one-way traffic; there is no result.
Add a publishSubscribe channel with two subflows; the second one can be just a bridge to nowhere - .bridge() ends the flow. It will then return the outbound message to the gateway.
See https://docs.spring.io/spring-integration/docs/current/reference/html/dsl.html#java-dsl-subflows
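For example, a minimal sketch of that approach against the flow from the question (assuming the same kafkaTemplate and topic):
@Bean
public IntegrationFlow enrollIntegrationFlow() {
    return IntegrationFlows.from("enrollChannel")
            .publishSubscribeChannel(pubSub -> pubSub
                    // first subscriber: one-way send to Kafka
                    .subscribe(toKafka -> toKafka
                            .handle(Kafka.outboundChannelAdapter(kafkaTemplate)
                                    .topic("topic1")))
                    // second subscriber: a bridge to nowhere; it ends the flow and
                    // returns the outbound message to the gateway's reply channel
                    .subscribe(reply -> reply.bridge()))
            .get();
}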
Per Artem:
Something is off in the configuration or code. The logic is like this: processSendResult(message, producerRecord, sendFuture, getSendSuccessChannel()); and then getMessageBuilderFactory().fromMessage(message). So, the replyChannel header is present in this "success" message; therefore that transform(stransformer) should really send its return value to the replyChannel for the gateway at the beginning. The only likely problem is in the KafkaSuccessTransformer code, if it does not copy the request message headers onto the reply message. Please share its whole code.
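If the transformer builds a new reply message rather than returning the request as-is, a sketch like this (hypothetical, since the real transformer code was not shared) keeps the request headers, including replyChannel, on the reply:
@Transformer
public Message<?> transform(Message<?> message) {
    log.info("Message is sent to Kafka");
    // copy the request headers onto the reply so the replyChannel header
    // survives and the gateway receives the result
    return MessageBuilder.withPayload(message.getPayload())
            .copyHeaders(message.getHeaders())
            .build();
}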
I'm currently trying to write an integration flow that reads a CSV file and processes it in chunks (calling an API for enrichment), then writes it back out as a new CSV. I currently have an example working perfectly, except that it polls a directory. What I would like to do is pass the file path and file name to the integration flow in the headers and then just perform the operation on that one file.
Here is my code for the polling example that works great except for the polling.
@Bean
@SuppressWarnings("unchecked")
public IntegrationFlow getUIDsFromTTDandOutputToFile() {
Gson gson = new GsonBuilder().disableHtmlEscaping().create();
return IntegrationFlows
.from(Files.inboundAdapter(new File(inputFilePath))
.filter(getFileFilters())
.preventDuplicates(true)
.autoCreateDirectory(true),
c -> c
.poller(Pollers.fixedRate(1000)
.maxMessagesPerPoll(1)
)
)
.log(Level.INFO, m -> "TTD UID 2.0 Integration Start" )
.split(Files.splitter())
.channel(c -> c.executor(Executors.newFixedThreadPool(7)))
.handle((p, h) -> new CSVUtils().csvColumnSelector((String) p, ttdColNum))
.channel("chunkingChannel")
.get();
}
@Bean
@ServiceActivator(inputChannel = "chunkingChannel")
public AggregatorFactoryBean chunker() {
log.info("Initializing Chunker");
AggregatorFactoryBean aggregator = new AggregatorFactoryBean();
aggregator.setReleaseStrategy(new MessageCountReleaseStrategy(batchSize));
aggregator.setExpireGroupsUponCompletion(true);
aggregator.setGroupTimeoutExpression(new ValueExpression<>(100L));
aggregator.setOutputChannelName("chunkingOutput");
aggregator.setProcessorBean(new DefaultAggregatingMessageGroupProcessor());
aggregator.setSendPartialResultOnExpiry(true);
aggregator.setCorrelationStrategy(new CorrelationStrategyIml());
return aggregator;
}
@Bean
public IntegrationFlow enrichFlow() {
return IntegrationFlows.from("chunkingOutput")
.handle((p, h) -> gson.toJson(new TradeDeskUIDRequestPayloadBean((Collection<String>) p)))
.enrichHeaders(eh -> eh.async(false)
.header("accept", "application/json")
.header("contentType", "application/json")
.header("Authorization", "Bearer [TOKEN]")
)
.log(Level.INFO, m -> "Sending request of size " + batchSize + " to: " + TTD_UID_IDENTITY_MAP)
.handle(Http.outboundGateway(TTD_UID_IDENTITY_MAP)
.requestFactory(
alliantPooledHttpConnection.get_httpComponentsClientHttpRequestFactory())
.httpMethod(HttpMethod.POST)
.expectedResponseType(TradeDeskUIDResponsePayloadBean.class)
.extractPayload(true)
)
.log(Level.INFO, m -> "Writing response to output file" )
.handle((p, h) -> ((TradeDeskUIDResponsePayloadBean) p).printMappedBodyAsCSV2())
.handle(Files.outboundAdapter(new File(outputFilePath))
.autoCreateDirectory(true)
.fileExistsMode(FileExistsMode.APPEND)
//.appendNewLine(true)
.fileNameGenerator(m -> m.getHeaders().getOrDefault("file_name", "outputFile") + "_out.csv")
)
.get();
}
public class CorrelationStrategyIml implements CorrelationStrategy {
@Override
public Object getCorrelationKey(Message<?> message) {
return message.getHeaders().getOrDefault("", 1);
}
}
@Component
public class CSVUtils {
@ServiceActivator
String csvColumnSelector(String inputStr, Integer colNum) {
return StringUtils.commaDelimitedListToStringArray(inputStr)[colNum];
}
}
private FileListFilter<File> getFileFilters(){
ChainFileListFilter<File> cflf = new ChainFileListFilter<>();
cflf.addFilter(new LastModifiedFileListFilter(30));
cflf.addFilter(new AcceptOnceFileListFilter<>());
cflf.addFilter(new SimplePatternFileListFilter(fileExtention));
return cflf;
}
If you know the file, then there is no reason to use any special component from the framework. You just start your flow from a channel and send a message to it with a File object as the payload. That message will be carried on to the splitter in your flow and everything will work as before.
If you really want a high-level API for this, you can expose a @MessagingGateway as the beginning of that flow, and the end user calls your gateway method with the desired file as an argument. The framework will create a message on your behalf and send it to the message channel in the flow for processing (see the sketch after the links below).
See more info in docs about gateways:
https://docs.spring.io/spring-integration/docs/current/reference/html/messaging-endpoints.html#gateway
https://docs.spring.io/spring-integration/docs/current/reference/html/dsl.html#integration-flow-as-gateway
And also a DSL definition starting from some explicit channel:
https://docs.spring.io/spring-integration/docs/current/reference/html/dsl.html#java-dsl-channels
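For illustration, here is a minimal sketch of that gateway approach; the gateway interface and channel name are hypothetical, and the rest of the flow reuses the splitter and downstream channels from the question:
@MessagingGateway
public interface FileProcessingGateway {

    @Gateway(requestChannel = "fileProcessingChannel")
    void process(File file);
}

@Bean
public IntegrationFlow getUIDsFromTTDandOutputToFile() {
    return IntegrationFlows
            .from("fileProcessingChannel") // start from an explicit channel; no poller needed
            .log(Level.INFO, m -> "TTD UID 2.0 Integration Start")
            .split(Files.splitter())
            .channel(c -> c.executor(Executors.newFixedThreadPool(7)))
            .handle((p, h) -> new CSVUtils().csvColumnSelector((String) p, ttdColNum))
            .channel("chunkingChannel")
            .get();
}
Calling fileProcessingGateway.process(new File(filePath)) then runs the flow for just that one file.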
In the example below, I'm getting [Manoj, Jeeva] as the output, but [Hello Manoj, Hello Jeeva] is expected. Why is serviceChnl not giving its output to aggregate()?
@Bean
public IntegrationFlow sayHelloIntFlow() {
return IntegrationFlows.from("serviceChnl")
.handle(new GenericHandler<String>() {
public Object handle(String payload, Map<String, Object> headers) {
return "Hello " + payload;
}
})
.get();
}
@Bean
public IntegrationFlow splitFlow() {
return IntegrationFlows.from("splitChnl")
.split()
.channel("serviceChnl")
.aggregate()
.handle(new GenericHandler() {
public Object handle(Object payload, Map headers) {
System.out.println(payload);
return null;
}
})
.channel("nullChannel")
.get();
}
@Test
public void test() {
String[] strArr = new String[] {"Manoj", "Jeeva"};
Message msg = MessageBuilder.withPayload(strArr)
.build();
splitChnl.send(msg);
}
I got it now: after splitting the message, I should either enrich or transform it directly; I should not send it to the channel.
@Bean
public IntegrationFlow splitFlow() {
return IntegrationFlows.from("splitChnl")
.split()
.transform(new HelloTransformer())
.aggregate()
.handle(new ShowOutput<String>())
.channel("nullChannel")
.get();
}
That's the correct answer, and you can accept it yourself. Your issue was that you misunderstood the inter-endpoint channel concept a bit. Channels aren't intended for sending messages to a separate flow (although we can do that too); they connect the endpoints within one flow. They are there anyway, even if you don't declare .channel() explicitly. For calling another flow we have .wireTap() and .gateway(). Please read the Spring Integration Reference Manual about <chain> and the DSL manual; there is enough info there to avoid confusion during development.
Below is an example using .gateway() to delegate to another flow.
DefaultAggregatingMessageGroupProcessor is used to aggregate the individual message payloads into a collection of payloads.
@Bean
IntegrationFlow splitAndDelegate(IntegrationFlow delegateFlow) {
return flowDef -> flowDef.split()
.gateway(delegateFlow)
.aggregate(aggregatorSpec -> aggregatorSpec.outputProcessor(new DefaultAggregatingMessageGroupProcessor()));
}
I am using a JMS Outbound Gateway to send messages to a request queue and receive messages from a separate response queue. I would like to add functionality so that a call is made to a specific bean's method once a message has been successfully sent to the request queue.
I am using the spring-integration 4.0.4 and spring-integration-java-dsl 1.0.0 APIs for this, and I have so far been able to achieve the above functionality as follows:
@Configuration
@EnableIntegration
public class IntegrationConfig {
...
@Bean
public IntegrationFlow requestFlow() {
return IntegrationFlows
.from("request.ch")
.routeToRecipients(r ->
r.ignoreSendFailures(false)
.recipient("request.ch.1", "true")
.recipient("request.ch.2", "true"))
.get();
}
@Bean
public IntegrationFlow sendReceiveFlow() {
return IntegrationFlows
.from("request.ch.1")
.handle(Jms.outboundGateway(cachingConnectionFactory)
.receiveTimeout(45000)
.requestDestination("REQUEST_QUEUE")
.replyDestination("RESPONSE_QUEUE")
.correlationKey("JMSCorrelationID"), e -> e.requiresReply(true))
.channel("response.ch").get();
}
@Bean
public IntegrationFlow postSendFlow() {
return IntegrationFlows
.from("request.ch.2")
.handle("requestSentService", "fireRequestSuccessfullySentEvent")
.get();
}
...
}
Now, although the above configuration works, I have noticed that the only apparent reason request.ch.1 is called before request.ch.2 seems to be the channel names' alphabetical order and not the order in which they were added to the RecipientListRouter itself. Is this correct? Or am I missing something here?
EDIT: Below is the solution using an aggregator between JMS outbound/inbound adapters (without a messaging gateway).
Integration Config:
@Configuration
@EnableIntegration
public class IntegrationConfig {
...
@Bean
public IntegrationFlow reqFlow() {
return IntegrationFlows
.from("request.ch")
.enrichHeaders(e -> e.headerChannelsToString())
.enrichHeaders(e -> e.headerExpression(IntegrationMessageHeaderAccessor.CORRELATION_ID, "headers['" + MessageHeaders.REPLY_CHANNEL + "']"))
.routeToRecipients(r -> {
r.ignoreSendFailures(false);
r.recipient("jms.req.ch", "true");
r.recipient("jms.agg.ch", "true");
})
.get();
}
@Bean
public IntegrationFlow jmsReqFlow() {
return IntegrationFlows
.from("jms.req.ch")
.handle(Jms.outboundAdapter(cachingConnectionFactory)
.destination("TEST_REQUEST_CH")).get();
}
@Bean
public IntegrationFlow jmsPostReqFlow() {
return IntegrationFlows
.from("jms.req.ch")
.handle("postSendService", "postSendProcess")
.get();
}
@Bean
public IntegrationFlow jmsResFlow() {
return IntegrationFlows
.from(Jms.inboundAdapter(cachingConnectionFactory).destination(
"TEST_RESPONSE_CH"),
c -> c.poller(Pollers.fixedRate(1000).maxMessagesPerPoll(10)))
.channel("jms.agg.ch").get();
}
@Bean
public IntegrationFlow jmsAggFlow() {
return IntegrationFlows
.from("jms.agg.ch")
.aggregate(a -> {
a.outputProcessor(g -> {
List<Message<?>> l = new ArrayList<Message<?>>(g.getMessages());
Message<?> firstMessage = l.get(0);
Message<?> lastMessage = (l.size() > 1) ? l.get(l.size() - 1) : firstMessage;
Message<?> messageOut = MessageBuilder.fromMessage(lastMessage)
.setHeader(MessageHeaders.REPLY_CHANNEL, (String) firstMessage.getHeaders().getReplyChannel())
.build();
return messageOut;
});
a.releaseStrategy(g -> g.size() == 2);
a.groupTimeout(45000);
a.sendPartialResultOnExpiry(false);
a.discardChannel("jms.agg.timeout.ch");
}, null)
.channel("response.ch")
.get();
}
@Bean
public IntegrationFlow jmsAggTimeoutFlow() {
return IntegrationFlows
.from("jms.agg.timeout.ch")
.handle(Message.class, (m, h) -> new ErrorMessage(new MessageTimeoutException(m), h))
.channel("error.ch")
.get();
}
}
Cheers,
PM
Hm... looks like it. It really is a bug in the DslRecipientListRouter logic: https://github.com/spring-projects/spring-integration-java-dsl/issues/9
It will be fixed soon and released within a couple of days.
Thank you for pointing that out!
BTW, your logic isn't quite correct: even when we fix that RecipientListRouter, the second recipient will receive the same request message only after the JmsOutboundGateway has received the reply, not just after the request has been sent to the request queue.
It is a blocking request-reply process, and there is no hook to get at the point between request and reply in the JmsOutboundGateway.
Is that OK for you?