Multiple @ServiceActivator methods with the same inputChannel and different signatures - spring-integration

I'm trying to implement an annotation-driven event bus (e.g. Guava Event Bus) using Spring Integration.
I have a PublishSubscribeChannel where I publish my events, and the idea is to use methods annotated with @ServiceActivator as event handlers.
Each method can have a different signature based on the event (payload) it needs to handle.
What I noticed is that when an event is published, all instances of ServiceActivatingHandler created by the ServiceActivatorAnnotationPostProcessor are called, and an exception is thrown for each method whose signature does not match the payload. E.g.
Caused by: org.springframework.expression.spel.SpelEvaluationException: EL1004E:(pos 8): Method call: Method handle(model.api.ServiceAvailableEvent) cannot be found on service.eai.TestServiceActivatorImpl2 type
Is there a way to define a @ServiceActivator method only for specific payload types?

That's correct: all the subscribers to the PublishSubscribeChannel receive the same message. And if there is no way to convert the incoming payload into the expected method argument type, we get that exception.
If you would like to filter out unexpected types, you definitely have to use a @Filter before your @ServiceActivator. In other words, you do the same as now, but make your flow(s) a bit more complex, with front filters as the subscribers to that PublishSubscribeChannel.
You can even rely on the existing PayloadTypeSelector:
@Bean
@Filter(inputChannel = "publishSubscribeChannel", outputChannel = "service1")
public MessageSelector payloadTypeSelector() {
    return new PayloadTypeSelector(...);
}
Or, yes, just a simple POJO method which checks the payload type, marked with the same @Filter.
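For example, a minimal sketch of such a POJO filter (the method name is illustrative; ServiceAvailableEvent is the event type from the question):

@Filter(inputChannel = "publishSubscribeChannel", outputChannel = "service1")
public boolean isServiceAvailableEvent(Object payload) {
    // let only the payload type this subscriber cares about pass downstream
    return payload instanceof ServiceAvailableEvent;
}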
I guess your next question will be: why doesn't @ServiceActivator ignore those types which aren't suitable for the target method?
Just don't mix concerns. The Service Activator is for Message handling in the target business logic. For filtering and skipping we have a different EI pattern - the filter.

Related

IntegrationFlowDefinition.aggregate doesn't work: Maybe the CorrelationStrategy is failing?

Error message: Caused by: java.lang.IllegalStateException: Null correlation not allowed. Maybe the CorrelationStrategy is failing?
My implementation:
@Bean
public IntegrationFlow start() {
    return IntegrationFlows
            .from("getOrders")
            .split()
            .publishSubscribeChannel(c -> c.subscribe(s -> s.channel(q -> q.queue(1))
                    .<Order, Message<?>>transform(p -> MessageBuilder.withPayload(new Item(p.getItems())).setHeader(ORDERID, p.getOrderId()).build())
                    .split(Item.class, Item::getItems)
                    .transform() // let's assume an object, say ItemProperty, is created for each item
                    // the transform returns a message: MessageBuilder.withPayload(createItemProperty(getItemName, getItemId)).build();
                    .aggregate() // so, here the aggregate method needs to aggregate the ItemProperties
                    .handle() // the handler gets a List<ItemProperty> as input
            ))
            .get();
}
Both splitters work fine. I've also tested the transformer after the second splitter; it works fine. But when it comes to aggregate, it is failing. What am I missing here?
You are missing the fact that the transformer is the type of endpoint which deals with the whole message as is. And if you create a message yourself, it doesn't modify it.
So, with your MessageBuilder.withPayload(createItemProperty(getItemName, getItemId)).build(); you just lose the important sequence details headers after the splitter. Therefore the aggregator after that doesn't know what to do with your message, since you configure it for the default correlation strategy but don't provide the respective headers in the message.
Technically I don't see a reason to create a message over there manually: the simple return createItemProperty(getItemName, getItemId); should be enough for you. And the framework will take care of message creation on your behalf, copying the respective request message headers.
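As a minimal sketch of that suggestion in the flow above (createItemProperty(...) is the illustrative helper name from the question's own comments):

// instead of building the Message manually, return just the new payload from the transformer
.transform(item -> createItemProperty(item))
// the framework then creates the reply message itself and copies the request headers,
// including the sequence-details headers that the downstream .aggregate() relies on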
If you really still think that you need to create a message yourself in that transform, then you need to consider calling copyHeaders() on that MessageBuilder with the request message headers, to carry the required sequence details.

Spring Integration - AMQP Inferred Types In Java DSL?

I have been working on a "paved road" for setting up asynchronous messaging between two microservices using AMQP. We want to promote the use of separate domain objects for each service, which means that each service must define its own copy of any objects passed across the queue.
We are using Jackson2JsonMessageConverter on both the producer and the consumer side and we are using the Java DSL to wire the flows to/from the queues.
I am sure there is a way to do this, but it is escaping me: I want the consumer side to ignore the __TypeId__ header that is passed from the producer, as the consumer may have a different representation of that event (and it will likely be in a different Java package).
It appears there was work done such that if using the annotation @RabbitListener, an inferredArgumentType argument is derived and will override the header information. This is exactly what I would like to do, but I would like to use the Java DSL to do it. I have not yet found a clean way in which to do this and maybe I am just missing something obvious. It seems it would be fairly straightforward to derive the type when using the following DSL:
return IntegrationFlows
.from(
Amqp.inboundAdapter(factory, queueRemoteTaskStatus())
.concurrentConsumers(10)
.errorHandler(errorHandler)
.messageConverter(messageConverter)
)
.channel(channelRemoteTaskStatusIn())
.handle(listener, "handleRemoteTaskStatus")
.get();
However, this results in a ClassNotFoundException. The only way I have found to get around this, so far, is to set a custom message converter, which requires explicit definition of the type.
public class ForcedTypeJsonMessageConverter extends Jackson2JsonMessageConverter {

    ForcedTypeJsonMessageConverter(final Class<?> forcedType) {
        setClassMapper(new ClassMapper() {

            @Override
            public void fromClass(Class<?> clazz, MessageProperties properties) {
                // this class is only used for inbound marshalling
            }

            @Override
            public Class<?> toClass(MessageProperties properties) {
                return forcedType;
            }
        });
    }
}
I would really like this to be derived, so the developer does not have to really deal with this.
Is there an easier way to do this?
The simplest way is to configure the Jackson converter's DefaultJackson2JavaTypeMapper with TypeIdMapping (setIdClassMapping()).
On the sending system, map foo:com.one.Foo and on the receiving system map foo:com.two.Foo.
Then, the __TypeId__ header gets foo and the receiving system will map it to its representation of a Foo.
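A hedged sketch of the receiving side's converter, assuming com.two.Foo is that service's local representation (the sending side would map the same "foo" id to com.one.Foo; bean names are illustrative):

@Bean
public Jackson2JsonMessageConverter jsonMessageConverter() {
    Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
    DefaultJackson2JavaTypeMapper typeMapper = new DefaultJackson2JavaTypeMapper();
    Map<String, Class<?>> idClassMapping = new HashMap<>();
    // map the logical type id "foo" to this service's own class, ignoring the producer's package
    idClassMapping.put("foo", com.two.Foo.class);
    typeMapper.setIdClassMapping(idClassMapping);
    converter.setJavaTypeMapper(typeMapper);
    return converter;
}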
EDIT
Another option would be to add an afterReceiveMessagePostProcessor to the inbound channel adapter's listener container - it could change the __TypeId__ header.
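A hedged sketch of that post-processor option, assuming the receiving side's class is com.two.Foo and the queue name is illustrative; the container could then be supplied to the flow via Amqp.inboundAdapter(container) instead of the factory/queue variant:

SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
container.setQueueNames("remoteTaskStatus");
// rewrite the producer's __TypeId__ header before the JSON converter reads it
container.setAfterReceivePostProcessors(message -> {
    message.getMessageProperties().setHeader("__TypeId__", "com.two.Foo");
    return message;
});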

GatewayProxyFactoryBean doesn't consider Future<Void> as a no-reply method

Looking at the message gateway methods return type semantics, the void return type indicates no reply is produced (no reply channel will be created), and the Future return type indicates asynchronous invocation mode (utilizing AsyncTaskExecutor).
Now, if one wishes to combine those two and make the no-reply method asynchronous, one could argue that the mere possibility of declaring a return type of Future<Void> would mean just that: the method is invoked asynchronously (by declaring a Future), and the method doesn't expect any reply (by declaring a type parameter Void).
Looking at the source code of GatewayProxyFactoryBean, it is clear this is not the case:
private Object invokeGatewayMethod(MethodInvocation invocation, boolean runningOnCallerThread) throws Exception {
...
boolean shouldReply = returnType != void.class;
...
Only the simple void return type is checked. So I'm wondering if this is a feature or a bug. If this is a feature, the Future<Void> return type is not behaving as one could be led to expect, and (in my opinion) should be handled differently (causing a validation error or something similar).
It's not clear what the point of returning a Future<Void> would be in this case.
The reason we can't treat Future<Void> as "special" is that the downstream flow might return such an object; the framework can't imply intent.
If you want to run a flow that doesn't return a reply asynchronously, simply make the request channel an ExecutorChannel; if you are using XML configuration, documentation is here.
If you are using Java configuration, define the channel @Bean with type ExecutorChannel.
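A minimal sketch of that Java configuration (the channel name and executor choice are illustrative); point the gateway's requestChannel at this bean:

@Bean
public MessageChannel gatewayRequestChannel() {
    // hand the send off to an executor so the no-reply gateway method returns immediately
    return new ExecutorChannel(new SimpleAsyncTaskExecutor());
}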

Spring Integration - AMQP-backed message channels and message conversion

I am trying to use AMQP-backed message channels in my Spring Integration app, but I think I am fundamentally misunderstanding something, specifically around the Message<?> interface and how instances of GenericMessage<?> are written to and read from a RabbitMQ queue.
Given I have a Spring Integration app containing the following domain model object:
@Immutable
class Foo {
    String name
    long quantity
}
and I declare an AMQP backed message channel called fooChannel as follows:
@Bean
public AmqpChannelFactoryBean deliveryPlacementChannel(CachingConnectionFactory connectionFactory) {
    AmqpChannelFactoryBean factoryBean = new AmqpChannelFactoryBean(true)
    factoryBean.setConnectionFactory(connectionFactory)
    factoryBean.setQueueName("foo")
    factoryBean.beanName = 'fooChannel'
    factoryBean.setPubSub(false)
    factoryBean
}
When I initially tried to send a message to my fooChannel I received a java.io.NotSerializableException. I realised this was caused by the fact that the RabbitTemplate used by my AMQP-backed fooChannel was using an org.springframework.amqp.support.converter.SimpleMessageConverter, which can only work with Strings, Serializable instances, or byte arrays, and my Foo model is none of those things.
Therefore, I thought that I should use an org.springframework.amqp.support.converter.Jackson2JsonMessageConverter to ensure my Foo model is properly converted to/from an AMQP message. However, it appears that the message being added to the RabbitMQ queue which is backing my fooChannel is of type org.springframework.messaging.support.GenericMessage. This means that when my AMQP-backed fooChannel tries to consume messages from the RabbitMQ queue it receives the following exception:
Caused by: com.fasterxml.jackson.databind.JsonMappingException: No suitable constructor found for type [simple type, class org.springframework.messaging.support.GenericMessage]: can not instantiate from JSON object (missing default constructor or creator, or perhaps need to add/enable type information?)
From looking at the GenericMessage class I see that it is designed to be immutable, which clearly explains why the Jackson2JsonMessageConverter can't convert from JSON to the GenericMessage type. However, I am unsure what I should be doing in order to allow my fooChannel to be backed by AMQP and have the conversion of my Spring Integration messages containing my Foo model work correctly.
In terms of the flow of my application I have the following Transformer component which consumes Bar models from the (non-AMQP backed) barChannel and places Foo models on the fooChannel as follows:
@Transformer(inputChannel = 'barChannel', outputChannel = 'fooChannel')
public Foo transform(Bar bar) {
    // transform logic removed for brevity
    new Foo(name: 'Foo1', quantity: 1)
}
I then have a ServiceActivator component which I wish to have consume from my fooChannel as follows:
@ServiceActivator(inputChannel = 'fooChannel')
void consumeFoos(Foo foo) {
    // Do something with foo
}
I am using spring-integration-core:4.2.5.RELEASE and spring-integration-amqp:4.2.5.RELEASE.
Can anyone please advise where I am going wrong with the configuration of my Spring Integration application?
If any further information is needed in order to better clarify my question or problem, please let me know. Thanks
Yes - AMQP-backed channels are currently limited to Java-serializable objects.
We should provide an option to map the Message<?> to a Spring AMQP Message (like the channel adapters do) rather than...
this.amqpTemplate.convertAndSend(this.getExchangeName(), this.getRoutingKey(), message);
...which converts the entire message.
You could use a pair of channel adapters (outbound/inbound) instead of a channel.
Since you are using Java config, you could wrap the adapter pair in a new MessageChannel implementation.
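A hedged sketch of the adapter-pair approach with the Java DSL (queue, routing key and channel names are illustrative, fooChannel becomes a plain non-AMQP channel, and the amqpTemplate is assumed to be configured with the Jackson2JsonMessageConverter):

@Bean
public IntegrationFlow fooOutbound(AmqpTemplate amqpTemplate) {
    // the outbound adapter converts just the payload to AMQP, instead of serializing the whole Message<?>
    return IntegrationFlows.from("fooChannel")
            .handle(Amqp.outboundAdapter(amqpTemplate).routingKey("foo"))
            .get();
}

@Bean
public IntegrationFlow fooInbound(ConnectionFactory connectionFactory) {
    return IntegrationFlows.from(Amqp.inboundAdapter(connectionFactory, "foo")
                    .messageConverter(new Jackson2JsonMessageConverter()))
            .channel("fooConsumerChannel")
            .get();
}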
I opened a JIRA Issue for this.

Why can't I put more than one setter of the same argument type in a class which implements GenericHandler in the DSL?

I have created a class implementing GenericHandler to use in the .handle() method. I have setters on the class, but if I have more than one setter with the same argument type, I am getting "Found ambiguous parameter type".
Why is there such a restriction?
That's just because ServiceActivatingHandler is based on the MessagingMethodInvokerHelper logic in the background to determine the appropriate messaging method. And setters are candidates for that purpose.
So, if you really have several of them with the same param type, we end up with an ambiguity issue.
To fix your case, I suggest marking your Object handle(P payload, Map<String, Object> headers) implementation with @ServiceActivator.
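A minimal sketch of that workaround (the class, payload type and property names are illustrative):

public class FooHandler implements GenericHandler<Foo> {

    private String firstProperty;

    private String secondProperty;

    public void setFirstProperty(String firstProperty) {
        this.firstProperty = firstProperty;
    }

    public void setSecondProperty(String secondProperty) {
        this.secondProperty = secondProperty;
    }

    @ServiceActivator // tells the framework this is the messaging method, despite the two String setters
    @Override
    public Object handle(Foo payload, Map<String, Object> headers) {
        // business logic using firstProperty / secondProperty
        return payload;
    }
}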
On the other hand, I agree that this is not as good as one would expect from the Framework. So, feel free to raise a JIRA issue on the matter and we will fix .handle() to be more strict and rely only on the handle() method from the GenericHandler implementation.
I faced the same problem while using a service adapter in Spring Integration. I could not define multiple properties of type java.lang.String - I would get an IllegalArgumentException claiming "ambiguous parameters".
After finding no solution to the issue, I decided to just create a class to encapsulate those properties, configure it as a bean, and then inject it into the spring-integration config.
