Spring Integration - Customize ObjectMapper used by WebFlux OutboundGateway

How do we customize the Jackson ObjectMapper used by the WebFlux outbound gateway? The normal customization done via Jackson2ObjectMapperBuilder or Jackson2ObjectMapperBuilderCustomizer is NOT respected.
Without this customization, LocalDate is serialized as if SerializationFeature.WRITE_DATES_AS_TIMESTAMPS were enabled (sample output: [2022-10-20]), and there is no way to customize the format.

I assume you are really talking about the Spring Boot auto-configuration that is applied to the WebFlux instance. Consider using the overloaded WebFlux.outboundGateway(String uri, WebClient webClient) so that you can auto-wire a WebClient.Builder which may already be configured with the mentioned customized ObjectMapper.
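A minimal sketch of that wiring, assuming a Spring Boot application where the auto-configured WebClient.Builder already carries the customized Jackson codecs; the URI, HTTP method, and flow name here are placeholders:

import org.springframework.context.annotation.Bean;
import org.springframework.http.HttpMethod;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.webflux.dsl.WebFlux;
import org.springframework.web.reactive.function.client.WebClient;

@Bean
public IntegrationFlow webFluxFlow(WebClient.Builder builder) {
    // Boot's auto-configured builder applies the customized ObjectMapper
    // through its codec configuration, so the gateway inherits it
    WebClient webClient = builder.build();
    return flow -> flow
            .handle(WebFlux.outboundGateway("http://localhost:8080/some/path", webClient)
                    .httpMethod(HttpMethod.POST)
                    .expectedResponseType(String.class));
}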

A registered bean of type com.fasterxml.jackson.databind.module.SimpleModule will automatically be picked up by the pre-configured ObjectMapper bean. Within a SimpleModule, it is possible to register custom serializers and deserializers.
To put that into code, a very simple solution would be the following:
import java.io.IOException;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.SerializerProvider;
import com.fasterxml.jackson.databind.module.SimpleModule;

@Bean
public SimpleModule odtModule() {
    SimpleModule module = new SimpleModule();
    JsonSerializer<LocalDate> serializer = new JsonSerializer<>() {
        @Override
        public void serialize(LocalDate odt, JsonGenerator jgen, SerializerProvider provider) throws IOException {
            String formatted = odt.format(DateTimeFormatter.ISO_LOCAL_DATE);
            jgen.writeString(formatted);
        }
    };
    JsonDeserializer<LocalDate> deserializer = new JsonDeserializer<>() {
        @Override
        public LocalDate deserialize(JsonParser jsonParser, DeserializationContext deserializationContext) throws IOException {
            return LocalDate.parse(jsonParser.getValueAsString());
        }
    };
    module.addSerializer(LocalDate.class, serializer);
    module.addDeserializer(LocalDate.class, deserializer);
    return module;
}
Note that using lambdas for the implementations has sometimes resulted in weird behavior for me, so I tend not to do that.

Related

spring-integration-kafka: Annotation-driven handling of KafkaProducerMessageHandler result?

Is there a way to achieve the behavior of the code below using annotation-driven code?
@Bean
@ServiceActivator(inputChannel = "toKafka")
public MessageHandler handler() throws Exception {
    KafkaProducerMessageHandler<String, String> handler =
            new KafkaProducerMessageHandler<>(kafkaTemplate());
    handler.setTopicExpression(new LiteralExpression("someTopic"));
    handler.setMessageKeyExpression(new LiteralExpression("someKey"));
    handler.setSendSuccessChannel(success());
    handler.setSendFailureChannel(failure());
    return handler;
}

@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}

@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, this.brokerAddress);
    // set more properties
    return new DefaultKafkaProducerFactory<>(props);
}
Can I specify the send success/failure channels using Spring Integration annotations?
I'd like to keep a consistent pattern for doing things (e.g., specifying the flow of messages) throughout my app as much as possible, and I like the Spring Integration diagrams (e.g., of how channels are connected) that IntelliJ automatically generates when you configure your Spring Integration app with XML or Java annotations.
No, it is not possible; the success/failure channels have to be set explicitly when using Java configuration.
This configuration is specific to the Kafka handler, and @ServiceActivator is a generic annotation for all types of message handler.

Why does an AmqpChannelFactoryBean with Jackson2JsonMessageConverter not store the type?

I'm trying to use Spring Integration with RabbitMQ, using RabbitMQ-backed Spring Integration channels (which, for some reason, seem to be almost undocumented; is this new?).
To do this, it seems I can use AmqpChannelFactoryBean to create a channel.
To set up message conversion, I use a Jackson2JsonMessageConverter.
When I use a GenericMessage with a POJO payload, it refuses to deserialize it, basically because it doesn't know the type. I would have expected the type to automagically be put in the header, but the only type header present is __TypeId__=org.springframework.messaging.support.GenericMessage.
In Spring Boot, my configuration class looks like this:
@Configuration
public class IntegrationConfiguration {

    @Bean
    public MessageConverter messageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    @Bean
    public AmqpChannelFactoryBean myActivateOutChannel(CachingConnectionFactory connectionFactory,
            MessageConverter messageConverter) {
        AmqpChannelFactoryBean factoryBean = new AmqpChannelFactoryBean(true);
        factoryBean.setConnectionFactory(connectionFactory);
        factoryBean.setQueueName("myActivateOut");
        factoryBean.setPubSub(false);
        factoryBean.setAcknowledgeMode(AcknowledgeMode.AUTO);
        factoryBean.setDefaultDeliveryMode(MessageDeliveryMode.PERSISTENT);
        factoryBean.setMessageConverter(messageConverter);
        return factoryBean;
    }

    @Bean
    @ServiceActivator(inputChannel = "bsnkActivateOutChannel", autoStartup = "true")
    public MessageHandler mqttOutbound() {
        return m -> System.out.println(m);
    }
}
Sending is done like this:
private final MessageChannel myActivateOutChannel;

@Autowired
public MySender(MessageChannel myActivateOutChannel) {
    this.myActivateOutChannel = myActivateOutChannel;
}

@Override
public void run(ApplicationArguments args) throws Exception {
    MyPojo pojo = new MyPojo();
    Message<MyPojo> msg = new GenericMessage<>(pojo);
    myActivateOutChannel.send(msg);
}
If I set my own ClassMapper, things do work as they should. But then I would have to use many MessageConverters, e.g.:
converter.setClassMapper(new ClassMapper() {

    @Override
    public void fromClass(Class<?> clazz, MessageProperties properties) {
    }

    @Override
    public Class<?> toClass(MessageProperties properties) {
        return MyPojo.class;
    }
});
Am I using this wrong? Am I missing some configuration? Any other suggestions?
Thanks!! :)
Note: looking more at things, I'm guessing the 'Spring Integration' way would be to add a Spring Integration JSON transformer on each side, which means also adding two additional direct channels per RabbitMQ queue?
This feels wrong to me, since I would then have triple the channels (six for in/out), but maybe that's how the framework is supposed to be used? Couple all the simple steps with direct channels? (Do I keep the persistence the RabbitMQ channels offer in that case? Or do I need some transaction mechanism if I want that? Or is it inherent in how direct channels work?)
I've also noticed there is both a Spring Integration MessageConverter and a Spring AMQP MessageConverter, the latter being the one I've used. Would the other work the way I want it to? A quick glance at the code suggests it doesn't store the object type in the message header either.
Prior to version 4.3, AMQP-backed channels only supported serializable payloads; the workaround was to use channel adapters instead (which support mapping).
INT-3975 introduced a new property, extractPayload, which causes the message headers to be mapped to RabbitMQ headers and the message body to be just the payload instead of a serialized GenericMessage.
Setting extractPayload to true should solve your problem.
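As a rough sketch, the channel definition from the question then needs only one extra call; this assumes Spring Integration 4.3+, where AmqpChannelFactoryBean exposes the extractPayload property:

@Bean
public AmqpChannelFactoryBean myActivateOutChannel(CachingConnectionFactory connectionFactory,
        MessageConverter messageConverter) {
    AmqpChannelFactoryBean factoryBean = new AmqpChannelFactoryBean(true);
    factoryBean.setConnectionFactory(connectionFactory);
    factoryBean.setQueueName("myActivateOut");
    // Map the Spring Integration headers to native AMQP headers and send only
    // the payload as the message body, so the Jackson converter sees the POJO
    // type instead of a serialized GenericMessage
    factoryBean.setExtractPayload(true);
    factoryBean.setMessageConverter(messageConverter);
    return factoryBean;
}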

Spring Integration Cassandra persistence workflow

I am trying to implement the following workflow with Spring Integration:
1) Poll a REST API
2) Store the resulting POJO in a Cassandra cluster
It's my first try with Spring Integration, so I'm still a bit overwhelmed by the mass of information in the reference documentation. After some research, I could make the following work:
1) Poll the REST API
2) Transform the mapped POJO JSON result into a string
3) Save the string to a file
Here's the code:
@Configuration
public class ConsulIntegrationConfig {

    @InboundChannelAdapter(value = "consulHttp", poller = @Poller(maxMessagesPerPoll = "1", fixedDelay = "1000"))
    public String consulAgentPoller() {
        return "";
    }

    @Bean
    public MessageChannel consulHttp() {
        return MessageChannels.direct("consulHttp").get();
    }

    @Bean
    @ServiceActivator(inputChannel = "consulHttp")
    MessageHandler consulAgentHandler() {
        final HttpRequestExecutingMessageHandler handler =
                new HttpRequestExecutingMessageHandler("http://localhost:8500/v1/agent/self");
        handler.setExpectedResponseType(AgentSelfResult.class);
        handler.setOutputChannelName("consulAgentSelfChannel");
        LOG.info("Created bean 'consulAgentHandler'");
        return handler;
    }

    @Bean
    public MessageChannel consulAgentSelfChannel() {
        return MessageChannels.direct("consulAgentSelfChannel").get();
    }

    @Bean
    public MessageChannel consulAgentSelfFileChannel() {
        return MessageChannels.direct("consulAgentSelfFileChannel").get();
    }

    @Bean
    @ServiceActivator(inputChannel = "consulAgentSelfFileChannel")
    MessageHandler consulAgentFileHandler() {
        final Expression directoryExpression = new SpelExpressionParser().parseExpression("'./'");
        final FileWritingMessageHandler handler = new FileWritingMessageHandler(directoryExpression);
        handler.setFileNameGenerator(message -> "../../agent_self.txt");
        handler.setFileExistsMode(FileExistsMode.APPEND);
        handler.setCharset("UTF-8");
        handler.setExpectReply(false);
        return handler;
    }
}

@Component
public final class ConsulAgentTransformer {

    @Transformer(inputChannel = "consulAgentSelfChannel", outputChannel = "consulAgentSelfFileChannel")
    public String transform(final AgentSelfResult json) throws IOException {
        final String result = new StringBuilder(json.toString()).append("\n").toString();
        return result;
    }
}
This works fine!
But now, instead of writing the object to a file, I want to store it in a Cassandra cluster with spring-data-cassandra. For that, I commented out the file handler in the config, returned the POJO from the transformer, and created the following:
@MessagingGateway(name = "consulCassandraGateway", defaultRequestChannel = "consulAgentSelfFileChannel")
public interface CassandraStorageService {

    @Gateway(requestChannel = "consulAgentSelfFileChannel")
    void store(AgentSelfResult agentSelfResult);
}

@Component
public final class CassandraStorageServiceImpl implements CassandraStorageService {

    @Override
    public void store(AgentSelfResult agentSelfResult) {
        // use spring-data-cassandra repository to store
        LOG.info("Received 'AgentSelfResult': {} in Cassandra cluster...");
        LOG.info("Trying to store 'AgentSelfResult' in Cassandra cluster...");
    }
}
But this seems to be the wrong approach; the service method is never triggered.
So my question is: what would be a correct approach for my use case? Do I have to implement the MessageHandler interface in my service component and use a @ServiceActivator in my config? Or is there something missing in my current gateway approach? Or maybe there is another solution that I'm not able to see...
As mentioned before, I'm new to SI, so this may be a stupid question...
Nevertheless, thanks a lot in advance!
It's not clear how you are wiring in your CassandraStorageService bean.
The Spring Integration Cassandra Extension Project has a message-handler implementation.
The Cassandra Sink in spring-cloud-stream-modules uses it with Java configuration so you can use that as an example.
So I finally made it work. All I needed to do was:
@Component
public final class CassandraStorageServiceImpl implements CassandraStorageService {

    @ServiceActivator(inputChannel = "consulAgentSelfFileChannel")
    @Override
    public void store(AgentSelfResult agentSelfResult) {
        // use spring-data-cassandra repository to store
        LOG.info("Received 'AgentSelfResult': {}...");
        LOG.info("Trying to store 'AgentSelfResult' in Cassandra cluster...");
    }
}
The CassandraMessageHandler and spring-cloud-stream seemed like too big an overhead for my use case, and I didn't really understand them yet... And with this solution, I keep control over what happens in my Spring component.

Dynamic template generation and formatting using FreeMarker

My goal is to format a collection of Java maps to a string (basically a CSV) using FreeMarker, or anything else that would do this smartly. I want to generate the template using configuration data stored in a database and managed from an admin application.
The configuration will tell me at which position a given piece of data (a key in the hash map) needs to go, and also whether any script needs to run on that data before placing it at the given position. Several positions may be blank if the data is not in the map.
I am thinking of using FreeMarker to build this generic tool and would appreciate it if you could share how I should go about this.
I would also like to know if there is any built-in support in Spring Integration for building such a process, as the application is an SI application.
I am no FreeMarker expert, but a quick look at their quick start docs led me here...
public class FreemarkerTransformerPojo {

    private final Configuration configuration;

    private final Template template;

    public FreemarkerTransformerPojo(String ftl) throws Exception {
        this.configuration = new Configuration(Configuration.VERSION_2_3_23);
        this.configuration.setDirectoryForTemplateLoading(new File("/"));
        this.configuration.setDefaultEncoding("UTF-8");
        this.template = this.configuration.getTemplate(ftl);
    }

    public String transform(Map<?, ?> map) throws Exception {
        StringWriter writer = new StringWriter();
        this.template.process(map, writer);
        return writer.toString();
    }
}
and
public class FreemarkerTransformerPojoTests {

    @Test
    public void test() throws Exception {
        String template = System.getProperty("user.home") + "/Development/tmp/test.ftl";
        OutputStream os = new FileOutputStream(new File(template));
        os.write("foo=${foo}, bar=${bar}".getBytes());
        os.close();
        FreemarkerTransformerPojo transformer = new FreemarkerTransformerPojo(template);
        Map<String, String> map = new HashMap<String, String>();
        map.put("foo", "baz");
        map.put("bar", "qux");
        String result = transformer.transform(map);
        assertEquals("foo=baz, bar=qux", result);
    }
}
From a Spring Integration flow, send a message with a Map payload to
<int:transformer ... ref="fmTransformer" method="transform" />
Or you could do it with a groovy script (or other supported scripting language) using Spring Integration's existing scripting support without writing any code (except the script).
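For completeness, a hypothetical annotation-driven wiring of the transformer above might look like the sketch below; the channel names and template path are placeholders:

import java.util.Map;

import org.springframework.integration.annotation.Transformer;
import org.springframework.stereotype.Component;

@Component
public class FreemarkerTransformerAdapter {

    private final FreemarkerTransformerPojo delegate;

    public FreemarkerTransformerAdapter() throws Exception {
        // placeholder template path
        this.delegate = new FreemarkerTransformerPojo("/path/to/test.ftl");
    }

    // Java-annotation equivalent of <int:transformer ref="fmTransformer" method="transform"/>
    @Transformer(inputChannel = "mapIn", outputChannel = "csvOut")
    public String transform(Map<?, ?> map) throws Exception {
        return this.delegate.transform(map);
    }
}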

Enable gzip/deflate compression

I'm using ServiceStack (version 3.9.44.0) as a Windows Service (so I'm not using IIS), and I use it both as an API and for serving web pages.
However, I haven't been able to find how exactly I should enable compression when the client supports it.
I imagined that ServiceStack would transparently compress data if the client's request included the Accept-Encoding:gzip,deflate header, but I'm not seeing any corresponding Content-Encoding:gzip in the returned responses.
So I have a couple of related questions:
In the context of using ServiceStack as a standalone service (without IIS), how do I enable compression for the responses when the browser accepts it?
In the context of a C# client, how do I similarly ensure that communication between the client and server is compressed?
If I'm missing something, any help would be welcome.
Thank you.
If you want to enable compression globally across your API, another option is to do this:
Add this override to your AppHost:
public override IServiceRunner<TRequest> CreateServiceRunner<TRequest>(ActionContext actionContext)
{
    return new MyServiceRunner<TRequest>(this, actionContext);
}
Then implement that class like this:
public class MyServiceRunner<TRequest> : ServiceRunner<TRequest>
{
    public MyServiceRunner(IAppHost appHost, ActionContext actionContext) : base(appHost, actionContext)
    {
    }

    public override void OnBeforeExecute(IRequestContext requestContext, TRequest request)
    {
        base.OnBeforeExecute(requestContext, request);
    }

    public override object OnAfterExecute(IRequestContext requestContext, object response)
    {
        if ((response != null) && !(response is CompressedResult))
            response = requestContext.ToOptimizedResult(response);
        return base.OnAfterExecute(requestContext, response);
    }

    public override object HandleException(IRequestContext requestContext, TRequest request, Exception ex)
    {
        return base.HandleException(requestContext, request, ex);
    }
}
OnAfterExecute will be called and gives you the chance to change the response. Here, I am compressing anything that is not null and not already compressed (in case I'm using ToOptimizedResultUsingCache somewhere). You can be more selective if you need to, but in my case it's all POCO objects serialized as JSON.
References
ServiceStack New Api
For those interested, a partial answer to my own question: you can use the extension method ToOptimizedResult() or, if you are using caching, ToOptimizedResultUsingCache().
For instance, returning a compressed result:
public class ArticleService : Service
{
    public object Get(Articles request)
    {
        return base.RequestContext.ToOptimizedResult(
            new List<Article> {
                new Article { Ref = "SILVER01", Description = "Silver watch" },
                new Article { Ref = "GOLD1547", Description = "Gold Bracelet" }
            });
    }
}
References
CachedServices.cs example
CompressedResult.cs
Google Group question on Compression in ServiceStack
