1) I would like to create a bean of HttpRequestExecutingMessageHandler (the outbound channel adapter for HTTP) and specify the channel via an annotation like @OutboundChannelAdapter. Why is this not possible? I suppose there is some design decision that I'm not understanding.
2) What is the suggested way to define an HttpRequestExecutingMessageHandler without using XML configuration files? Do I have to configure the bean and set it up manually?
Thanks in advance.
The @ServiceActivator fully covers that functionality. Unlike @Transformer, it doesn't require a return value, so your POJO method can simply be void and the flow stops there, the same way an <outbound-channel-adapter> does in XML configuration.
But in the case of HttpRequestExecutingMessageHandler we need to worry about an extra option to make it one-way, so it stops there without caring about any HTTP reply.
So, for the HttpRequestExecutingMessageHandler you need to declare a bean like the following (the channel name and target URI here are placeholders):
@Bean
@ServiceActivator(inputChannel = "httpOutChannel")
public HttpRequestExecutingMessageHandler httpRequestExecutingMessageHandler() {
    HttpRequestExecutingMessageHandler handler =
            new HttpRequestExecutingMessageHandler("http://localhost:8080/foo");
    handler.setExpectReply(false);
    return handler;
}
I think we need to improve the docs on the matter anyway, but you can take a look at the Java DSL configuration instead: https://docs.spring.io/spring-integration/docs/current/reference/html/#http-java-config. There is an Http.outboundChannelAdapter() for convenience.
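For example, a minimal Java DSL sketch (the channel name and URI are placeholders, not from the question) could look like this:
@Bean
public IntegrationFlow httpOutFlow() {
    return IntegrationFlows.from("httpOutChannel")
            .handle(Http.outboundChannelAdapter("http://localhost:8080/foo")) // one-way, no reply expected
            .get();
}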
Related
Do I really need to define two advices on the ServiceActivator (a RequestHandlerRetryAdvice plus an ExpressionEvaluatingRequestHandlerAdvice) if I need to use a RetryTemplate (with AlwaysRetryPolicy) and filter out the errors that I don't want to retry?
@Bean
@ServiceActivator(inputChannel = "outboundChannel", adviceChain = {"retry", "filter"})
public MessageHandler handler() {
    JdbcMessageHandler ...
}
This works fine, but why can't I do it in one place only?
Or should I override the canRetry method of AlwaysRetryPolicy and do this from there?
I tried that (returned false), but it caused some kind of circular loop.
I think I told you before, over here: RetryTemplate with ServiceActivator and JdbcMessageHandler
See the different RetryPolicy strategy implementations.
I think a BinaryExceptionClassifierRetryPolicy should meet your requirements:
* A policy, that is based on {@link BinaryExceptionClassifier}. Usually, binary
* classification is enough for retry purposes. If you need more flexible classification,
* use {@link ExceptionClassifierRetryPolicy}.
You can also implement a custom RetryPolicy which would delegate to something like BinaryExceptionClassifierRetryPolicy but then apply some other logic according to the SQLException and its error code.
The RequestHandlerRetryAdvice.recoveryCallback may be used to deal with exceptions, whether they were retried or not.
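A minimal sketch of a single advice bean wired this way (DuplicateKeyException is just a stand-in for whatever SQL error you do not want retried):
@Bean
public RequestHandlerRetryAdvice retry() {
    // listed exceptions classify to false (no retry); everything else classifies to true (retry forever)
    BinaryExceptionClassifier classifier =
            new BinaryExceptionClassifier(Collections.singleton(DuplicateKeyException.class), false);
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new BinaryExceptionClassifierRetryPolicy(classifier));

    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    advice.setRetryTemplate(retryTemplate);
    return advice;
}
With that, the @ServiceActivator above could reference just this one advice in its adviceChain.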
The reference manual explains how to set this in XML, and I saw the JIRA issue for advanced configuration of @Aggregator was completed, but I'm not seeing those advanced properties. So, when using annotations, how do I set the expire-groups option?
Well, according to that JIRA ticket there is indeed a sample in the Reference Manual:
@ServiceActivator(inputChannel = "aggregatorChannel")
@Bean
public MessageHandler aggregator(MessageGroupStore jdbcMessageGroupStore) {
    AggregatingMessageHandler aggregator =
            new AggregatingMessageHandler(new DefaultAggregatingMessageGroupProcessor(),
                    jdbcMessageGroupStore);
    aggregator.setOutputChannel(resultsChannel());
    aggregator.setGroupTimeoutExpression(new ValueExpression<>(500L));
    aggregator.setTaskScheduler(this.taskScheduler);
    return aggregator;
}
And there is an explicit note on the matter:
Annotation configuration (@Aggregator and others) for the Aggregator component covers only simple use cases, where most default options are sufficient. If you need more control over those options using annotation configuration, consider using a @Bean definition for the AggregatingMessageHandler and mark its @Bean method with @ServiceActivator.
It would be even better to use this:
Starting with version 4.2, the AggregatorFactoryBean is available to simplify Java configuration for the AggregatingMessageHandler.
It seems to me that everything is covered in the docs. Is anything missing?
I mean the AggregatorFactoryBean has an option you need:
public void setExpireGroupsUponCompletion(Boolean expireGroupsUponCompletion) {
Is that not enough?
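For completeness, a sketch of such a configuration (assuming the same message store and output channel as the sample above; MyAggregatorPojo is a placeholder for your own aggregation POJO):
@Bean
@ServiceActivator(inputChannel = "aggregatorChannel")
public AggregatorFactoryBean aggregator(MessageGroupStore jdbcMessageGroupStore) {
    AggregatorFactoryBean aggregator = new AggregatorFactoryBean();
    aggregator.setProcessorBean(new MyAggregatorPojo()); // hypothetical POJO with the aggregation method
    aggregator.setMessageStore(jdbcMessageGroupStore);
    aggregator.setOutputChannel(resultsChannel());
    aggregator.setExpireGroupsUponCompletion(true);
    return aggregator;
}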
I have been working on a "paved road" for setting up asynchronous messaging between two micro services using AMQP. We want to promote the use of separate domain objects for each service, which means that each service must define their own copy of any objects passed across the queue.
We are using Jackson2JsonMessageConverter on both the producer and the consumer side and we are using the Java DSL to wire the flows to/from the queues.
I am sure there is a way to do this, but it is escaping me: I want the consumer side to ignore the __TypeId__ header that is passed from the producer, as the consumer may have a different representation of that event (and it will likely be in a different Java package).
It appears there was work done such that, when using the @RabbitListener annotation, an inferredArgumentType is derived and overrides the header information. This is exactly what I would like to do, but I would like to use the Java DSL to do it. I have not yet found a clean way to do this, and maybe I am just missing something obvious. It seems it would be fairly straightforward to derive the type when using the following DSL:
return IntegrationFlows
        .from(Amqp.inboundAdapter(factory, queueRemoteTaskStatus())
                .concurrentConsumers(10)
                .errorHandler(errorHandler)
                .messageConverter(messageConverter))
        .channel(channelRemoteTaskStatusIn())
        .handle(listener, "handleRemoteTaskStatus")
        .get();
However, this results in a ClassNotFoundException. The only way I have found to get around this, so far, is to set a custom message converter, which requires an explicit definition of the type.
public class ForcedTypeJsonMessageConverter extends Jackson2JsonMessageConverter {

    ForcedTypeJsonMessageConverter(final Class<?> forcedType) {
        setClassMapper(new ClassMapper() {

            @Override
            public void fromClass(Class<?> clazz, MessageProperties properties) {
                // this class is only used for inbound marshalling
            }

            @Override
            public Class<?> toClass(MessageProperties properties) {
                return forcedType;
            }
        });
    }
}
I would really like this to be derived, so the developer does not have to really deal with this.
Is there an easier way to do this?
The simplest way is to configure the Jackson converter's DefaultJackson2JavaTypeMapper with type-id mapping (setIdClassMapping()).
On the sending system, map foo:com.one.Foo, and on the receiving system, map foo:com.two.Foo.
Then the __TypeId__ header carries foo, and the receiving system will map it to its own representation of a Foo.
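A sketch of the consumer-side converter bean (com.two.Foo is the receiving system's own class from the example above):
@Bean
public Jackson2JsonMessageConverter messageConverter() {
    DefaultJackson2JavaTypeMapper typeMapper = new DefaultJackson2JavaTypeMapper();
    Map<String, Class<?>> idClassMapping = new HashMap<>();
    idClassMapping.put("foo", com.two.Foo.class); // the producer maps its com.one.Foo to the same "foo" id
    typeMapper.setIdClassMapping(idClassMapping);

    Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
    converter.setJavaTypeMapper(typeMapper);
    return converter;
}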
EDIT
Another option would be to add an afterReceivePostProcessor to the inbound channel adapter's listener container; it could change the __TypeId__ header.
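A sketch of that option, assuming you build the listener container yourself and pass it to Amqp.inboundAdapter(container); the replacement type id is a placeholder for the consumer-side class:
@Bean
public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueues(queueRemoteTaskStatus());
    container.setConcurrentConsumers(10);
    container.setAfterReceivePostProcessors(message -> {
        // rewrite the producer's __TypeId__ header before the Jackson converter reads it
        message.getMessageProperties().setHeader("__TypeId__", "com.two.RemoteTaskStatus");
        return message;
    });
    return container;
}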
Is there any way we can use the file outbound adapter to write the entire SOAP envelope to a file rather than just the payload? (I am able to write the payload into a file.)
I'm assuming you use an <int-ws:inbound-gateway> there, although it doesn't really matter for the target solution.
You should implement an EndpointInterceptor and add it to the UriEndpointMapping. Within the handleRequest() implementation you have access to the whole MessageContext, where you are able to do something like:
((SoapMessage) messageContext.getRequest()).getEnvelope()
and send this object to a channel for an outbound-channel-adapter to stream its Source to the file.
From the gateway's perspective you don't have access to the MessageContext anymore.
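A rough sketch of such an interceptor (the class name and the injected channel are placeholders):
public class SoapEnvelopeInterceptor extends EndpointInterceptorAdapter {

    private final MessageChannel storeToFileChannel;

    public SoapEnvelopeInterceptor(MessageChannel storeToFileChannel) {
        this.storeToFileChannel = storeToFileChannel;
    }

    @Override
    public boolean handleRequest(MessageContext messageContext, Object endpoint) throws Exception {
        SoapEnvelope envelope = ((SoapMessage) messageContext.getRequest()).getEnvelope();
        // hand the whole envelope to the flow; a downstream transformer can extract its Source for the file adapter
        this.storeToFileChannel.send(MessageBuilder.withPayload(envelope).build());
        return true; // let normal endpoint processing continue
    }
}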
UPDATE
According to your comment below (please consider placing similar info directly in the question in the future), I can suggest something like a header trick:
You implement a custom SoapHeaderMapper (extending DefaultSoapHeaderMapper) and, overriding toHeadersFromRequest(), store the passed-in SoapMessage in your own header and use it the same way as we discussed in the EndpointInterceptor case. From the <int-file:outbound-channel-adapter> perspective you just need to consult that header to extract the Source and its InputStream to store in the file.
UPDATE 2
public class MySoapHeaderMapper extends DefaultSoapHeaderMapper {

    @Override
    public Map<String, Object> toHeadersFromRequest(SoapMessage source) {
        Map<String, Object> headers = super.toHeadersFromRequest(source);
        headers.put("soapMessage", source);
        return headers;
    }
}
You should inject it into the <int-ws:inbound-gateway> as its header-mapper. After that, the soapMessage header will be available in any downstream component, e.g.:
<chain input-channel="storeToFile">
    <transformer expression="headers.soapMessage.envelope.source.inputStream"/>
    <int-file:outbound-channel-adapter/>
</chain>
I think I understand how CDI works, and in order to dive deeper into it I would like to try using it in a real-world example. I am stuck with one thing and need your help to understand it. I would really appreciate your help in this regard.
I have my own workflow framework developed using the Java reflection API and XML configuration, where, based on a specific "source" and "eventName", I load the appropriate Module class and invoke its "process" method. Everything is working fine in our project.
I got excited about CDI and wanted to give it a try with the workflow framework, where I am planning to inject the Module class instead of loading it via reflection, etc.
Just to give you an idea, I will try to keep things simple here.
"Message.java" is a kind of transfer object which carries the "source" and "eventName", so that we can load the module appropriately.
public class Message{
private String source;
private String eventName;
}
Module configurations are as below
<modules>
<module>
<source>A</source>
<eventName>validate</eventName>
<moduleClass>ValidatorModule</moduleClass>
</module>
<module>
<source>B</source>
<eventName>generate</eventName>
<moduleClass>GeneratorModule</moduleClass>
</module>
</modules>
ModuleLoader.java
public class ModuleLoader {

    public void loadAndProcess(Message message) {
        String source = message.getSource();
        String eventName = message.getEventName();
        // Load the Module based on the above values.
    }
}
Question
Now, if I want to implement the same thing via CDI and have it inject a Module for me (in the ModuleLoader class), I can write a factory class with a @Produces method which can do that. BUT my question is:
a) How can I pass the Message object to the @Produces method to do the lookup based on eventName and source?
Can you please give me some suggestions?
Thanks in advance.
This one is a little tricky, because CDI doesn't work the same way as your custom solution (if I understand it correctly). CDI must have the full list of dependencies and resolutions for those dependencies at boot time, whereas your solution sounds like it finds everything at runtime, where things may change. That being said, there are a couple of things you could try.
You could try injecting an InjectionPoint as a parameter to a producer method and returning the correct object, or creating the correct type.
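A minimal sketch of that first suggestion (ModuleQualifier, a qualifier with non-binding source/eventName members, and loadModule() are hypothetical names, not from the question):
@Produces
@ModuleQualifier
public Module produceModule(InjectionPoint ip) {
    // read source/eventName from the (hypothetical) qualifier present at the injection point
    ModuleQualifier qualifier = ip.getAnnotated().getAnnotation(ModuleQualifier.class);
    // then pick the implementation, e.g. by consulting the same XML configuration the ModuleLoader uses today
    return loadModule(qualifier.source(), qualifier.eventName()); // loadModule() is hypothetical
}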
There's also the option of creating your own extension and creating the dependencies and wiring them all up there (take a look at the ProcessInjectionTarget, ProcessAnnotatedType, and AfterBeanDiscovery events). These two quickstarts may also help get some ideas going.
I think you may be going down the wrong path with a producer. It would more than likely be much better to use an observer, especially based on what you've described.
I'm making the assumption that the Message transfer object is used abstractly, like a system-wide event: you fire the event and you would like some handler, defined in the XML framework you've created, to determine the correct manager for the event, instantiate it (if need be), and then call that class, passing it the event.
@ApplicationScoped
public class MyMessageObserver {

    public void handleMessageEvent(@Observes Message message) {
        // Load the module based on the above values and process the event
    }
}
Now let's assume you want to utilize your original interface (I'll guess it looks like):
public interface IMessageHandler {
    public void handleMessage(final Message message);
}

@ApplicationScoped
public class EventMessageHandler implements IMessageHandler {

    @Inject
    private Event<Message> messageEvent;

    public void handleMessage(Message message) {
        messageEvent.fire(message);
    }
}
Then in any legacy class where you want to use it:
@Inject
IMessageHandler handler;
This will allow you to do everything you've described.
Maybe you need something like this:
First, you need a qualifier: an annotation like @Module, which takes two parameters, source and eventName; they should be non-binding values. See the docs.
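A sketch of such a qualifier (reading "non-binding values" as CDI's @Nonbinding, so a single producer can serve every source/eventName combination):
@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.FIELD, ElementType.METHOD, ElementType.PARAMETER, ElementType.TYPE})
public @interface Module {

    @Nonbinding String source() default "";

    @Nonbinding String eventName() default "";
}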
Second, you need a producer:
@Produces
@Module
public Module makeAmodule(InjectionPoint ip) {
    // load the module, taking source and eventName from ip
}
Inject it at the proper place like this:
@Inject
@Module(source = "A", eventName = "validate")
Module moduleA;
There is only one issue with that solution: those modules must be dependent scoped, otherwise the container will inject the same module regardless of source and eventName.
If you want to use scopes, then you need to make source and eventName binding qualifier parameters and either:
write a CDI extension and register the producers programmatically,
or write a producer method for each and every possible combination of source and eventName (which I do not think is nice).