Exception Router using Annotation - spring-integration

I am trying to convert my XML configuration to Java annotations, but I am stuck with:

<int:exception-type-router input-channel="failed-email-fetch" default-output-channel="errorChannel">
    <int:mapping exception-type="com.XXXXXX.RateException" channel="customError" />
</int:exception-type-router>

If I use @Router I don't know what to return; this is what I tried, but it did not work:
@ServiceActivator(inputChannel = "failedEmailFetch")
public ErrorMessageExceptionTypeRouter handleError(MessageHandlingException messageHandlingException) {
    ErrorMessageExceptionTypeRouter errorMessageExceptionTypeRouter = new ErrorMessageExceptionTypeRouter();
    errorMessageExceptionTypeRouter.setChannelMapping("com.XXXXXX.exception.MessageException", "customError");
    errorMessageExceptionTypeRouter.setDefaultOutputChannelName("errorChannel");
    return errorMessageExceptionTypeRouter;
}

You also need @Bean when the @ServiceActivator annotation is on a MessageHandler; @ServiceActivator alone is for POJO messaging. See the Annotations on @Beans section of the reference manual.
Consuming endpoints consist of two beans, the handler and a consumer: the @ServiceActivator defines the consumer, and the @Bean is the handler.
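Applied to the router in the question, a minimal sketch could look like the following (the channel name, mapped exception class, and bean method name are taken from, or modeled on, the question):

```java
@Bean
@ServiceActivator(inputChannel = "failedEmailFetch")
public ErrorMessageExceptionTypeRouter exceptionRouter() {
    // The @Bean is the handler; the @ServiceActivator subscribes it as the
    // consumer of the "failedEmailFetch" channel.
    ErrorMessageExceptionTypeRouter router = new ErrorMessageExceptionTypeRouter();
    router.setChannelMapping("com.XXXXXX.RateException", "customError");
    router.setDefaultOutputChannelName("errorChannel");
    return router;
}
```

This mirrors the original XML: one mapping for the exception type plus a default output channel.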

I ended up using the code below; I am not sure whether it is the best way:

@Router(inputChannel = "failedEmailFetch", defaultOutputChannel = "errorChannel")
public String handleError(Message<AggregateMessageDeliveryException> message) {
    log.info("{}", message.getPayload().getCause().getCause());
    if (message.getPayload().getRootCause() instanceof MessageException) {
        return "customError";
    }
    return "errorChannel";
}

Related

Spring Integration - Customize ObjectMapper used by WebFlux OutboundGateway

How do we customize the Jackson ObjectMapper used by the WebFlux OutboundGateway? The normal customization done via Jackson2ObjectMapperBuilder or Jackson2ObjectMapperBuilderCustomizer is not respected.
Without this customization, LocalDate is serialized as if SerializationFeature.WRITE_DATES_AS_TIMESTAMPS were enabled. Sample output: [2022-10-20], and there is no way to customize the format.
I assume you are really talking about the Spring Boot auto-configuration that is applied to the WebFlux instance. Consider using the overloaded WebFlux.outboundGateway(String uri, WebClient webClient) so you can auto-wire a WebClient.Builder, which may already be configured with the mentioned customized ObjectMapper.
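A sketch of that overload in a Java DSL flow (the flow bean name, URI, HTTP method, and response type here are illustrative assumptions, not from the question):

```java
@Bean
public IntegrationFlow customJsonFlow(WebClient.Builder builder) {
    // Boot's auto-configured WebClient.Builder should already carry the
    // customized ObjectMapper codecs; build the client from it and hand it
    // to the gateway instead of letting the gateway create its own.
    WebClient webClient = builder.build();
    return f -> f.handle(
            WebFlux.outboundGateway("http://localhost:8080/api/dates", webClient)
                    .httpMethod(HttpMethod.POST)
                    .expectedResponseType(String.class));
}
```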
A bean of type com.fasterxml.jackson.databind.module.SimpleModule, once registered, will automatically be picked up by the pre-configured ObjectMapper bean. In a SimpleModule it is possible to register custom serializers and deserializers.
To put that into code, a very simple solution would be the following:
@Bean
public SimpleModule odtModule() {
    SimpleModule module = new SimpleModule();
    JsonSerializer<LocalDate> serializer = new JsonSerializer<>() {
        @Override
        public void serialize(LocalDate odt, JsonGenerator jgen, SerializerProvider provider) throws IOException {
            String formatted = odt.format(DateTimeFormatter.ISO_LOCAL_DATE);
            jgen.writeString(formatted);
        }
    };
    JsonDeserializer<LocalDate> deserializer = new JsonDeserializer<>() {
        @Override
        public LocalDate deserialize(JsonParser jsonParser, DeserializationContext deserializationContext) throws IOException {
            return LocalDate.parse(jsonParser.getValueAsString());
        }
    };
    module.addSerializer(LocalDate.class, serializer);
    module.addDeserializer(LocalDate.class, deserializer);
    return module;
}
Note that using lambdas for the implementations has sometimes resulted in weird behaviors for me, so I tend not to do that.
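As a plain-Java sanity check (no Spring or Jackson involved), DateTimeFormatter.ISO_LOCAL_DATE produces exactly the yyyy-MM-dd form the serializer above writes, and LocalDate.parse accepts it back:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class IsoDateCheck {
    public static void main(String[] args) {
        LocalDate date = LocalDate.of(2022, 10, 20);
        // Same call the custom serializer makes
        String formatted = date.format(DateTimeFormatter.ISO_LOCAL_DATE);
        System.out.println(formatted); // prints 2022-10-20
        // Same call the custom deserializer makes
        LocalDate roundTripped = LocalDate.parse(formatted);
        System.out.println(date.equals(roundTripped)); // prints true
    }
}
```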

spring-integration-kafka: Annotation-driven handling of KafkaProducerMessageHandler result?

Is there a way to achieve the behavior of the code below using annotation driven code?
@Bean
@ServiceActivator(inputChannel = "toKafka")
public MessageHandler handler() throws Exception {
    KafkaProducerMessageHandler<String, String> handler =
            new KafkaProducerMessageHandler<>(kafkaTemplate());
    handler.setTopicExpression(new LiteralExpression("someTopic"));
    handler.setMessageKeyExpression(new LiteralExpression("someKey"));
    handler.setSendSuccessChannel(success());
    handler.setSendFailureChannel(failure());
    return handler;
}

@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}

@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, this.brokerAddress);
    // set more properties
    return new DefaultKafkaProducerFactory<>(props);
}
Can I specify the send success/failure channels using Spring Integration annotations?
I'd like as much as possible to keep a consistent pattern of doing things (e.g., specifying the flow of messages) throughout my app, and I like the Spring Integration diagrams (e.g., of how channels are connected) IntelliJ automatically generates when you configure your Spring Integration app with XML or Java annotations.
No, it is not possible; the success/failure channels have to be set explicitly when using Java configuration. This configuration is specific to the Kafka handler, and @ServiceActivator is a generic annotation for all types of message handlers.

Why does an AmqpChannelFactoryBean with Jackson2JsonMessageConverter not store the type?

I'm trying to use Spring Integration with RabbitMQ, using RabbitMQ-backed Spring Integration channels. (Which, for some reason, seems almost undocumented. Is this new?)
To do this, it seems I can use AmqpChannelFactoryBean to create a channel.
To set up message conversion, I use a Jackson2JsonMessageConverter.
When I use a GenericMessage with a POJO payload, it refuses to deserialize it from JSON, basically because it doesn't know the type. I would have expected the type to automagically be put in the headers, but the only type header present is __TypeId__=org.springframework.messaging.support.GenericMessage.
In Spring boot my configuration class looks like this:
@Configuration
public class IntegrationConfiguration {

    @Bean
    public MessageConverter messageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    @Bean
    public AmqpChannelFactoryBean myActivateOutChannel(CachingConnectionFactory connectionFactory,
            MessageConverter messageConverter) {
        AmqpChannelFactoryBean factoryBean = new AmqpChannelFactoryBean(true);
        factoryBean.setConnectionFactory(connectionFactory);
        factoryBean.setQueueName("myActivateOut");
        factoryBean.setPubSub(false);
        factoryBean.setAcknowledgeMode(AcknowledgeMode.AUTO);
        factoryBean.setDefaultDeliveryMode(MessageDeliveryMode.PERSISTENT);
        factoryBean.setMessageConverter(messageConverter);
        return factoryBean;
    }

    @Bean
    @ServiceActivator(inputChannel = "bsnkActivateOutChannel", autoStartup = "true")
    public MessageHandler mqttOutbound() {
        return m -> System.out.println(m);
    }
}
Sending is done like this:
private final MessageChannel myActivateOutChannel;

@Autowired
public MySender(MessageChannel myActivateOutChannel) {
    this.myActivateOutChannel = myActivateOutChannel;
}

@Override
public void run(ApplicationArguments args) throws Exception {
    MyPojo pojo = new MyPojo();
    Message<MyPojo> msg = new GenericMessage<>(pojo);
    myActivateOutChannel.send(msg);
}
If I set my own ClassMapper, things do work as they should, but then I would have to use many MessageConverters. E.g.:
converter.setClassMapper(new ClassMapper() {

    @Override
    public void fromClass(Class<?> clazz, MessageProperties properties) {
    }

    @Override
    public Class<?> toClass(MessageProperties properties) {
        return MyPojo.class;
    }
});
Am I using this wrong? Am I missing some configuration? Any other suggestions?
Thanks!! :)
Note: Looking more at things, I'm guessing the "Spring Integration way" would be to add a Spring Integration JSON transformer on each side, which means also adding two additional direct channels per RabbitMQ queue?
This feels wrong to me, since I then have triple the channels (six for in/out), but maybe that's how the framework is supposed to be used: couple all the simple steps with direct channels? (Do I keep the persistence that the RabbitMQ channels offer in that case? Or do I need some transaction mechanism if I want that? Or is it inherent in how direct channels work?)
I've also noticed that there is both a Spring Integration MessageConverter and a Spring AMQP MessageConverter, the latter being the one I used. Would the other work the way I want it to? A quick glance at the code suggests it doesn't store the object type in the message headers either.
Prior to version 4.3, AMQP-backed channels only supported serializable payloads; the workaround was to use channel adapters instead (which support header mapping).
INT-3975 introduced a new property, extractPayload, which causes the message headers to be mapped to RabbitMQ headers and the message body to be just the (converted) payload instead of a serialized GenericMessage.
Setting extractPayload to true should solve your problem.
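In the configuration from the question, that amounts to one extra setter on the factory bean; a sketch, assuming Spring Integration 4.3 or later (abridged to the relevant settings):

```java
@Bean
public AmqpChannelFactoryBean myActivateOutChannel(CachingConnectionFactory connectionFactory,
        MessageConverter messageConverter) {
    AmqpChannelFactoryBean factoryBean = new AmqpChannelFactoryBean(true);
    factoryBean.setConnectionFactory(connectionFactory);
    factoryBean.setQueueName("myActivateOut");
    factoryBean.setMessageConverter(messageConverter);
    // Map message headers to AMQP headers and send only the converted
    // payload as the body, instead of a Java-serialized GenericMessage
    factoryBean.setExtractPayload(true);
    return factoryBean;
}
```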

Spring Integration Cassandra persistence workflow

I am trying to realize the following workflow with Spring Integration:
1) Poll a REST API
2) Store the resulting POJO in a Cassandra cluster
It's my first try with Spring Integration, so I'm still a bit overwhelmed by the mass of information in the reference manual. After some research I was able to make the following work:
1) Poll the REST API
2) Transform the mapped POJO JSON result into a string
3) Save the string into a file
Here's the code:
@Configuration
public class ConsulIntegrationConfig {

    @InboundChannelAdapter(value = "consulHttp", poller = @Poller(maxMessagesPerPoll = "1", fixedDelay = "1000"))
    public String consulAgentPoller() {
        return "";
    }

    @Bean
    public MessageChannel consulHttp() {
        return MessageChannels.direct("consulHttp").get();
    }

    @Bean
    @ServiceActivator(inputChannel = "consulHttp")
    MessageHandler consulAgentHandler() {
        final HttpRequestExecutingMessageHandler handler =
                new HttpRequestExecutingMessageHandler("http://localhost:8500/v1/agent/self");
        handler.setExpectedResponseType(AgentSelfResult.class);
        handler.setOutputChannelName("consulAgentSelfChannel");
        LOG.info("Created bean 'consulAgentHandler'");
        return handler;
    }

    @Bean
    public MessageChannel consulAgentSelfChannel() {
        return MessageChannels.direct("consulAgentSelfChannel").get();
    }

    @Bean
    public MessageChannel consulAgentSelfFileChannel() {
        return MessageChannels.direct("consulAgentSelfFileChannel").get();
    }

    @Bean
    @ServiceActivator(inputChannel = "consulAgentSelfFileChannel")
    MessageHandler consulAgentFileHandler() {
        final Expression directoryExpression = new SpelExpressionParser().parseExpression("'./'");
        final FileWritingMessageHandler handler = new FileWritingMessageHandler(directoryExpression);
        handler.setFileNameGenerator(message -> "../../agent_self.txt");
        handler.setFileExistsMode(FileExistsMode.APPEND);
        handler.setCharset("UTF-8");
        handler.setExpectReply(false);
        return handler;
    }
}
@Component
public final class ConsulAgentTransformer {

    @Transformer(inputChannel = "consulAgentSelfChannel", outputChannel = "consulAgentSelfFileChannel")
    public String transform(final AgentSelfResult json) throws IOException {
        final String result = new StringBuilder(json.toString()).append("\n").toString();
        return result;
    }
}
This works fine!
But now, instead of writing the object to a file, I want to store it in a Cassandra cluster with spring-data-cassandra. For that, I commented out the file handler in the config, returned the POJO from the transformer, and created the following:
@MessagingGateway(name = "consulCassandraGateway", defaultRequestChannel = "consulAgentSelfFileChannel")
public interface CassandraStorageService {

    @Gateway(requestChannel = "consulAgentSelfFileChannel")
    void store(AgentSelfResult agentSelfResult);
}

@Component
public final class CassandraStorageServiceImpl implements CassandraStorageService {

    @Override
    public void store(AgentSelfResult agentSelfResult) {
        // use spring-data-cassandra repository to store
        LOG.info("Received 'AgentSelfResult': {}", agentSelfResult);
        LOG.info("Trying to store 'AgentSelfResult' in Cassandra cluster...");
    }
}
But this seems to be the wrong approach; the service method is never triggered.
So my question is: what would be a correct approach for my use case? Do I have to implement the MessageHandler interface in my service component and use a @ServiceActivator in my config? Or is there something missing in my current gateway approach? Or maybe there is another solution that I'm not able to see.
As mentioned before, I'm new to SI, so this may be a stupid question.
Nevertheless, thanks a lot in advance!
It's not clear how you are wiring your CassandraStorageService bean.
The Spring Integration Cassandra Extension project has a MessageHandler implementation, and the Cassandra sink in spring-cloud-stream-modules uses it with Java configuration, so you can use that as an example.
So I finally made it work. All I needed to do was:

@Component
public final class CassandraStorageServiceImpl implements CassandraStorageService {

    @ServiceActivator(inputChannel = "consulAgentSelfFileChannel")
    @Override
    public void store(AgentSelfResult agentSelfResult) {
        // use spring-data-cassandra repository to store
        LOG.info("Received 'AgentSelfResult': {}", agentSelfResult);
        LOG.info("Trying to store 'AgentSelfResult' in Cassandra cluster...");
    }
}
The CassandraMessageHandler and spring-cloud-stream seemed to be too much overhead for my use case, and I didn't really understand them yet. With this solution, I keep control over what happens in my Spring component.

Spring Integration 4 - configuring a LoadBalancingStrategy in Java DSL

I have a simple Spring Integration 4 Java DSL flow that uses a DirectChannel's LoadBalancingStrategy to round-robin Message requests across a number of possible REST services (i.e., it calls a REST service at one of two possible endpoint URIs).
How my flow is currently configured:
@Bean(name = "test.load.balancing.ch")
public DirectChannel testLoadBalancingCh() {
    LoadBalancingStrategy loadBalancingStrategy = new RoundRobinLoadBalancingStrategy();
    return new DirectChannel(loadBalancingStrategy);
}

@Bean
public IntegrationFlow testLoadBalancing0Flow() {
    return IntegrationFlows.from("test.load.balancing.ch")
            .handle(restHandler0())
            .channel("test.result.ch")
            .get();
}

@Bean
public IntegrationFlow testLoadBalancing1Flow() {
    return IntegrationFlows.from("test.load.balancing.ch")
            .handle(restHandler1())
            .channel("test.result.ch")
            .get();
}

@Bean
public HttpRequestExecutingMessageHandler restHandler0() {
    return createRestHandler(endpointUri0, 0);
}

@Bean
public HttpRequestExecutingMessageHandler restHandler1() {
    return createRestHandler(endpointUri1, 1);
}

private HttpRequestExecutingMessageHandler createRestHandler(String uri, int order) {
    HttpRequestExecutingMessageHandler handler = new HttpRequestExecutingMessageHandler(uri);
    // handler configuration goes here..
    handler.setOrder(order);
    return handler;
}
My configuration works, but I am wondering whether there is a simpler/better way of configuring the flow using Spring Integration's Java DSL?
Cheers,
PM
First of all, the RoundRobinLoadBalancingStrategy is the default one for a DirectChannel, so you can get rid of the testLoadBalancingCh() bean definition altogether.
Further, to avoid duplicating .channel("test.result.ch"), you can configure it on each HttpRequestExecutingMessageHandler via setOutputChannel().
On the other hand, your configuration is so simple that I don't see a reason to use the DSL; you can achieve the same with plain annotation configuration:
@Bean(name = "test.load.balancing.ch")
public DirectChannel testLoadBalancingCh() {
    return new DirectChannel();
}

@Bean(name = "test.result.ch")
public DirectChannel testResultCh() {
    return new DirectChannel();
}

@Bean
@ServiceActivator(inputChannel = "test.load.balancing.ch")
public HttpRequestExecutingMessageHandler restHandler0() {
    return createRestHandler(endpointUri0, 0);
}

@Bean
@ServiceActivator(inputChannel = "test.load.balancing.ch")
public HttpRequestExecutingMessageHandler restHandler1() {
    return createRestHandler(endpointUri1, 1);
}

private HttpRequestExecutingMessageHandler createRestHandler(String uri, int order) {
    HttpRequestExecutingMessageHandler handler = new HttpRequestExecutingMessageHandler(uri);
    // handler configuration goes here..
    handler.setOrder(order);
    handler.setOutputChannel(testResultCh());
    return handler;
}
On the other hand, there is the MessageChannels builder factory, which simplifies configuring the load balancer for your case:
@Bean(name = "test.load.balancing.ch")
public DirectChannel testLoadBalancingCh() {
    return MessageChannels.direct()
            .loadBalancer(new RoundRobinLoadBalancingStrategy())
            .get();
}
However, I can guess that you want to avoid duplication within the DSL flow definition to stay DRY, but that isn't possible at the moment. That's because an IntegrationFlow is linear: it ties endpoints together, bypassing the boilerplate code for standard object creation.
As you can see, to achieve round-robin we have to duplicate at least the inputChannel, to subscribe several MessageHandlers to the same channel. And we do that in XML, via annotations and, of course, from the DSL.
I'm not sure it would be useful for real applications to provide a hook to configure several handlers with a single .handle() for the same round-robin channel, because the further downstream flow may not be as simple as your .channel("test.result.ch").
Cheers
