All the examples I've seen of the Spring Integration DSL Scatter-Gatherer explicitly set the .applySequence(true) on the scatterer.
E.g. like this:
@Bean
public IntegrationFlow helloFlow() {
return IntegrationFlows
.from(Http.inboundChannelAdapter("hello").get())
.scatterGather(s -> s
.applySequence(true)
.recipientFlow(f -> f.handle((m, h) -> 33))
.recipientFlow(f -> f.handle((m, h) -> 444))
)
.split()
.log()
.get();
}
If I omit .applySequence(true) I get
java.lang.IllegalStateException: Null correlation not allowed. Maybe the CorrelationStrategy is failing?
Why is the sequence needed in this case?
If it is needed in so many cases, why isn't .applySequence(true) simply the default, with the option of explicitly setting it to false if desired for some reason? And when would you explicitly want to set it to false?
The scatterer part of this component is fully based on the Recipient List Router, which defaults that option to false. So, for consistency and runtime optimization, we keep it false in scatter-gather as well: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#router-implementations-recipientlistrouter. In the real world it is a really rare case that gathered messages come back with those generated sequence-details headers. Typically the gatherer is configured with custom correlation and release strategies; to be honest, applySequence is more of a demo-and-samples feature. Plus, don't forget message immutability: with this option we force the framework to create a new message just to add the sequence-details headers.
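For illustration, here is a minimal sketch of that typical configuration: instead of applySequence(true), the gatherer gets its own correlation and release strategies, keyed on a header stamped before scattering. The "correlationId" header name and the release condition are assumptions for this example, not framework defaults:

```java
// Sketch: gather without applySequence(true) by correlating on a
// custom header. "correlationId" is a hypothetical header name.
@Bean
public IntegrationFlow gatherOnCustomHeaderFlow() {
    return IntegrationFlows
            .from(Http.inboundChannelAdapter("hello").get())
            // stamp a correlation key before scattering
            .enrichHeaders(h -> h.headerFunction("correlationId",
                    m -> m.getHeaders().getId()))
            .scatterGather(
                    s -> s
                            .recipientFlow(f -> f.handle((m, h) -> 33))
                            .recipientFlow(f -> f.handle((m, h) -> 444)),
                    g -> g
                            // replaces the generated sequence-details headers
                            .correlationStrategy(m ->
                                    m.getHeaders().get("correlationId"))
                            // release once both recipients have replied
                            .releaseStrategy(group -> group.size() == 2))
            .split()
            .log()
            .get();
}
```

With explicit strategies on the gatherer, the framework no longer needs sequence headers on the replies, so the "Null correlation" exception goes away.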
Spring Integration here. I was expecting to see a normalize(...) method off the IntegrationFlow DSL and was surprised to find there wasn't one (like .route(...) or .aggregate(...), etc.).
In fact, some digging on Google and the Spring Integration docs, and I can't seem to find any built-in support for the Normalizer EIP. So I've taken a crack at my own:
import java.util.Optional;

import org.springframework.core.convert.TypeDescriptor;
import org.springframework.core.convert.converter.GenericConverter;
import org.springframework.integration.transformer.AbstractTransformer;
import org.springframework.messaging.Message;
import org.springframework.util.Assert;

public class Normalizer extends AbstractTransformer {

    private final Class<?> targetClass;
    private final GenericConverter genericConverter;

    public Normalizer(Class<?> targetClass, GenericConverter genericConverter) {
        // Verify that every convertible pair targets the same canonical type
        Optional<GenericConverter.ConvertiblePair> mismatch = genericConverter.getConvertibleTypes().stream()
                .filter(pair -> !pair.getTargetType().equals(targetClass))
                .findAny();
        Assert.state(mismatch.isEmpty(), "All convertible pairs must target " + targetClass);
        this.targetClass = targetClass;
        this.genericConverter = genericConverter;
    }

    @Override
    protected Object doTransform(Message<?> message) {
        Object inbound = message.getPayload();
        return genericConverter.convert(inbound, TypeDescriptor.forObject(inbound),
                TypeDescriptor.valueOf(targetClass));
    }
}
The idea is that Spring already provides the GenericConverter SPI for converting multiple source types to 1+ target type instance. We just need a specialized flavor of that that has the same target type for all convertible pairings. So here we extend AbstractTransformer and pass it one of these GenericConverters to use. During initialization we just verify that all the possible convertible pairs convert to the same targetClass specified for the Normalizer.
I could then instantiate it like so:
@Bean
public Normalizer fizzNormalizer(GenericConverter fizzConverter) {
    return new Normalizer(Fizz.class, fizzConverter);
}
And then put it in a flow:
IntegrationFlow someFlow = IntegrationFlows.from(someChannel())
.transform(fizzNormalizer())
// other components
.get();
While I believe this will work, before I start using it too heavily I want to make sure I'm not overlooking anything in the Spring Integration framework that will accomplish/satisfy the Normalizer EIP for me. No point in trying to reinvent the wheel and all that jazz. Thanks for any insight.
If you take a closer look into that EI pattern, then you see:
Use a Normalizer to route each message type through a custom Message Translator so that the resulting messages match a common format.
The crucial part of this pattern is that it is a composed one, with a router as the input endpoint and a set of transformers, one for each inbound message type.
Since it is the kind of component that is data-model dependent, and, moreover, the routing and transforming logic may differ from use case to use case, it is really hard to make an out-of-the-box, single configurable component.
Therefore you need to investigate what type of routing you need, to choose a proper one for the input: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#router
Then, for every routed type, you need to implement a respective transformer to produce the canonical data model.
All of this can then be wrapped into a @MessagingGateway API to hide the normalizer behind a so-called pattern implementation.
That's what I would do to follow that EI pattern recommendations.
However, if your use case is as simple as converting from one type to another, then yes, you can rely on the ConversionService. You register your custom Converter: https://docs.spring.io/spring-integration/docs/current/reference/html/endpoint.html#payload-type-conversion. And then just use the .convert(Class) API from IntegrationFlowDefinition.
But again: since there is no easy way to cover all the possible domain use-cases, we cannot provide an out-of-the-box Normalizer implementation.
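As a sketch of the composed pattern described above (the names here are assumptions for illustration: Buzz as a second inbound type, the toCanonical transformer methods, and the channel names are all hypothetical):

```java
// A payload-type router in front, one transformer per inbound type,
// all producing the same canonical model on a common output channel.
@Bean
public IntegrationFlow normalizerFlow() {
    return IntegrationFlows.from("normalizerInput")
            .<Object, Class<?>>route(Object::getClass, r -> r
                    .subFlowMapping(Fizz.class, f -> f
                            .transform(Fizz.class, this::fizzToCanonical)
                            .channel("canonicalOutput"))
                    .subFlowMapping(Buzz.class, f -> f
                            .transform(Buzz.class, this::buzzToCanonical)
                            .channel("canonicalOutput")))
            .get();
}
```

For the simple single-conversion case, the .convert(CanonicalType.class) operator with a registered Converter achieves the same in one step.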
I am trying to learn about how to build IntegrationFlows as units, and join them up.
I set up a very simple processing integration flow:
IntegrationFlow processingFlow = f -> f
.<String>handle((p, h) -> process(p))
.log();
flowContext.registration(processingFlow)
.id("testProcessing")
.autoStartup(false)
.register();
Processing is very simple:
public String process(String process) {
return process + " has been processed";
}
Then I compose a flow from a source, using .gateway() to join the source to the processing:
MessageChannel beginningChannel = MessageChannels.direct("beginning").get();
StandardIntegrationFlow composedFlow = IntegrationFlows
.from(beginningChannel)
.gateway(processingFlow)
.log()
.get();
flowContext.registration(composedFlow)
.id("testComposed")
.autoStartup(false)
.addBean(processingFlow)
.register();
Then I start the flow and send a couple of messages:
composedFlow.start();
beginningChannel.send(MessageBuilder.withPayload(new String("first string")).build());
beginningChannel.send(MessageBuilder.withPayload(new String("second string")).build());
The logging handler confirms the handle method has been called for the first message, but the main thread then sits idle, and the second message is never processed.
Is this not the correct way to compose integration flow from building blocks? Doing so with channels requires registering the channel as a bean, and I'm trying to do all this dynamically.
It must be logAndReply() in the processingFlow; see their JavaDocs for the difference. A log() at the end of a flow makes it one-way. That's why you are blocked: the gateway waits for a reply, but there is none according to your current flow definition. Unfortunately we can't detect that at the framework level: there may be cases where you indeed don't return a reply, according to some routing or filtering logic. The gateway can be configured with a reply timeout; by default it is infinite.
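In other words, a sketch of the corrected sub-flow (same logic as in the question, only the ending changes):

```java
// logAndReply() logs AND returns the message, so the upstream
// gateway receives its reply instead of blocking forever.
IntegrationFlow processingFlow = f -> f
        .<String>handle((p, h) -> p + " has been processed")
        .logAndReply();
```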
Error message: Caused by: java.lang.IllegalStateException: Null correlation not allowed. Maybe the CorrelationStrategy is failing?
My implementation,
@Bean
public IntegrationFlow start() {
return IntegrationFlows
.from("getOrders")
.split()
.publishSubscribeChannel(c -> c.subscribe(s -> s.channel(q -> q.queue(1))
.<Order, Message<?>>transform(p -> MessageBuilder.withPayload(new Item(p.getItems())).setHeader(ORDERID, p.getOrderId()).build())
.split(Item.class, Item::getItems)
.transform() // let's assume an object, say ItemProperty, is created for each item;
// the transformer returns a message: MessageBuilder.withPayload(createItemProperty(getItemName, getItemId)).build();
.aggregate() // so, here the aggregate method needs to aggregate the ItemProperties.
.handle() // the handler gets a List<ItemProperty> as input.
))
.get();
}
Both splitters work fine. I've also tested the transformer after the second splitter; it works fine. But when it comes to the aggregator, it fails. What am I missing here?
You are missing the fact that a transformer is the type of endpoint that deals with the whole message as is. If you create a message yourself, the framework doesn't modify it.
So, with your MessageBuilder.withPayload(createItemProperty(getItemName, getItemId)).build(); you simply lose the important sequence-details headers after the splitter. Therefore the aggregator after it doesn't know what to do with your message, since you configure it for the default correlation strategies but don't provide the respective headers in the message.
Technically, I don't see a reason to create a message manually there: a simple return createItemProperty(getItemName, getItemId); should be enough. The framework will take care of message creation on your behalf, copying the respective request-message headers.
If you really still think you need to create a message yourself in that transform, then you need to consider copyHeaders() on that MessageBuilder from the request message, to carry the required sequence-details headers.
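For illustration, both options could look like this inside the flow (createItemProperty(...) and the Item getters are the question's hypothetical helpers):

```java
// Option 1 (preferred): return only the payload; the framework builds
// the message and copies the request headers, sequence details included.
.<Item, ItemProperty>transform(item ->
        createItemProperty(item.getItemName(), item.getItemId()))

// Option 2: if a Message really must be built manually, copy the
// request headers so the sequence details survive for the aggregator.
// (A GenericHandler is used here to get access to the headers.)
.handle(Item.class, (item, headers) -> MessageBuilder
        .withPayload(createItemProperty(item.getItemName(), item.getItemId()))
        .copyHeaders(headers)
        .build())
```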
I have following code:
return IntegrationFlows
.from(Files.inboundAdapter(new File("data"))
.filter(new SimplePatternFileListFilter("*.txt"))
.filter(new AcceptOnceFileListFilter<>()),
e -> e.poller(Pollers.fixedDelay(1000))
.id("fileInboundChannelAdapter"))
.split(new FileSplitter())
.<Object, Class<?>>route(Object::getClass, m -> m.channelMapping(String.class, "transform.input")).get();
My SimplePatternFileListFilter is not working, but if I remove AcceptOnceFileListFilter, it works fine.
Is it intended that only one FileListFilter can be passed? If so, is there any workaround?
That's correct. Since we don't know how you would combine them and in what order, only one .filter() can be configured. However, there are CompositeFileListFilter and ChainFileListFilter for composing a set of filters, and there the order matters.
All the hard work underneath is delegated to the FileListFilterFactoryBean, and the composition and mutual exclusivity are dictated by that one.
I guess we need to provide clearer JavaDocs on the matter. Feel free to raise a JIRA and we will fix it soon.
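For example, both filters from the question could be composed with a ChainFileListFilter, where the declaration order is the evaluation order:

```java
// Chain the filters: pattern matching first, accept-once second.
ChainFileListFilter<File> filters = new ChainFileListFilter<>();
filters.addFilter(new SimplePatternFileListFilter("*.txt"));
filters.addFilter(new AcceptOnceFileListFilter<>());

IntegrationFlow flow = IntegrationFlows
        .from(Files.inboundAdapter(new File("data")).filter(filters),
                e -> e.poller(Pollers.fixedDelay(1000))
                      .id("fileInboundChannelAdapter"))
        .split(new FileSplitter())
        .get();
```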
I used to route messages by a header (boolean) value in Spring Integration (before 5.0.0.M3) with the help of a flow like this:
.<Boolean, HeaderValueRouter>
route(new HeaderValueRouter(REGISTRATION_MODE__HEADER),
routerSpec -> routerSpec
.subFlowMapping(true /* registering*/,
f -> f.handle(String.class, /*some logic*/))
.subFlowMapping(false /* unregistering */,
f -> f.handle(String.class, /*some other logic*/)),
endpointSpec -> endpointSpec.id("registrationRouter"))
But in 5.0.0.M3 this code became invalid because there is no route method with such a signature anymore. The reason is clearly stated in the Java DSL breaking changes chapter of the SI 4.3 to 5.0 Migration Guide:
The AbstractRouterSpec now extends ConsumerEndpointSpec instead of
MessageHandlerSpec and, therefore, methods in the
IntegrationFlowDefinition like: ...
route(R router, Consumer<RouterSpec<K, R>> routerConfigurer,
Consumer<GenericEndpointSpec<R>> endpointConfigurer)
... have been removed in favor of those methods without the
Consumer<GenericEndpointSpec<?>> since all its options are now
supported by the AbstractRouterSpec directly.
But the alternative is not clear. Neither of the two current route methods accepting an AbstractMessageRouter is capable of handling the new routerConfigurer. As a consequence, neither of them can be configured with subFlowMapping.
A potential alternative, the route methods accepting a Function, is not applicable because those functions operate on the message payload, while I need to base the routing decision on a particular message header. Another similar solution might be to use a MessageProducerSpec, but I don't see how to combine it with HeaderValueRouter.
Is there a way to route messages with HeaderValueRouter and subFlowMapping at the same time?
To be honest, the HeaderValueRouter became obsolete when SpEL expression support was added.
So, I suggest you don't hesitate to use a simple expression for this use case:
.route("headers." + REGISTRATION_MODE__HEADER,
routerSpec -> routerSpec
.subFlowMapping(true /* registering*/,
f -> f.handle(String.class, /*some logic*/))
.subFlowMapping(false /* unregistering */,
f -> f.handle(String.class, /*some other logic*/))
.id("registrationRouter"))
Note that there is no longer a separate endpointSpec argument, either.