AMQP-backed publish-subscribe channel and message conversion - spring-integration

We use an <int-amqp:publish-subscribe-channel/> as a kind of event bus in our service-based application. The send method as well as the message handler are based on the Message class from spring-messaging (as of spring-integration 4.0+). Events are changes to entities that need to be picked up by other services.
The problem is: the spring-messaging Message class is treated as an arbitrary object payload by spring-amqp, as it is not recognized as a spring-amqp Message. This causes the following problems:
The default message format is serialized Java objects. spring-amqp serializes not only our original payload object but also the wrapping spring-messaging Message, whose serialized form is not compatible between Spring Framework 4.0 and 4.1.
Configuring a message converter for JSON (Jackson2JsonMessageConverter, to be exact) doesn't solve the problem either, as it also converts the wrapping Message instance, which is spring-messaging's GenericMessage, and that can't be instantiated from JSON as it lacks an appropriate constructor.
We need to mix Spring versions, as we have services implemented with Grails 2.4 (based on Spring 4.0) and with the current Spring Boot (which relies on Spring 4.1).
Is there any way out of this, preferably an idiomatic spring-integration way? Is there maybe another abstraction to use instead of, or in addition to, the PublishSubscribeAmqpChannel? Or any other means of message conversion that we could apply?

Instead of using an AMQP-backed channel, use an outbound-channel-adapter to send and an inbound-channel-adapter to receive.
The channel holds the entire message (serialized), whereas the adapters transport the payload as the message body and (optionally) map headers to/from AMQP headers.
You will need to configure a fanout exchange for pub/sub (the channel will create one called si.fanout.<channelName> by default). You can then bind a queue for each recipient.
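A minimal sketch of that arrangement with the Java DSL (as it looks in the current Spring Integration line; on 4.x the equivalent wiring is the <int-amqp:outbound-channel-adapter/> and <int-amqp:inbound-channel-adapter/> XML elements). The exchange, queue, and channel names here are illustrative:

    import org.springframework.amqp.core.AmqpTemplate;
    import org.springframework.amqp.core.FanoutExchange;
    import org.springframework.amqp.rabbit.connection.ConnectionFactory;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.amqp.dsl.Amqp;
    import org.springframework.integration.dsl.IntegrationFlow;
    import org.springframework.integration.dsl.IntegrationFlows;

    @Configuration
    public class AmqpEventBusConfig {

        @Bean
        public FanoutExchange eventsExchange() {
            return new FanoutExchange("events.fanout");
        }

        // Sending side: only the payload travels as the AMQP message body;
        // headers are mapped to AMQP headers, so no serialized Message wrapper.
        @Bean
        public IntegrationFlow outboundEvents(AmqpTemplate amqpTemplate) {
            return IntegrationFlows.from("eventsOut")
                    .handle(Amqp.outboundAdapter(amqpTemplate)
                            .exchangeName("events.fanout"))
                    .get();
        }

        // Receiving side: each service binds its own queue to the fanout exchange.
        @Bean
        public IntegrationFlow inboundEvents(ConnectionFactory connectionFactory) {
            return IntegrationFlows
                    .from(Amqp.inboundAdapter(connectionFactory, "service-a.events"))
                    .channel("eventsIn")
                    .get();
        }
    }

With this shape, a Jackson2JsonMessageConverter can be plugged into the template and the inbound adapter so only the payload is (de)serialized as JSON, which sidesteps the cross-version Java serialization problem.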

Is it possible to make a Poller (or PollableMessageSource) poll messages as a List?

Following the example found on GitHub (https://github.com/spring-cloud/spring-cloud-gcp/tree/master/spring-cloud-gcp-samples/spring-cloud-gcp-pubsub-polling-binder-sample) regarding polling messages from a Pub/Sub subscription, I was wondering...
Is it possible to make a PollableMessageSource retrieve List<Message<?>> instead of a single message per poll?
I've seen the @Poller annotation used only on Source-typed objects, never on a Processor or Sink. Is it possible to use it in such a context, for example with @StreamListener or with a functional approach?
The PollableMessageSource binding and Source stream applications are fully based on the Poller and MessageSource abstractions from Spring Integration, whose contract is to produce a single message to the configured channel. The point of messaging is really to process a single message without affecting others: a failure for one message doesn't fail the others in the flow.
On the other hand, you probably mean for GCP Pub/Sub messages to be produced as a list in the Spring message payload. That is possible, but only via some custom code in the Pub/Sub consumer and the MessageSource implementation, and I would think twice before expecting batches from the source. You may be able to utilize an aggregator to build small windows if your further logic is about processing a list, but again: it is going to be a single Spring message.
It may be better to start thinking about a reactive function implementation, where you can indeed expect a Flux<Message<?>> as input and the Spring Cloud Stream framework will take care of emitting the data from Pub/Sub into the reactive stream you expect.
See more info in docs: https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_reactive_functions_support
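A minimal sketch of such a reactive function, assuming the Spring Cloud Stream 3.x functional model; the function name and buffer size are illustrative:

    import java.util.List;
    import java.util.function.Function;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.messaging.Message;
    import reactor.core.publisher.Flux;

    @Configuration
    public class WindowingFunctionConfig {

        // The framework feeds the whole input destination into this Flux;
        // buffer(10) emits the payloads downstream in Lists of up to 10.
        @Bean
        public Function<Flux<Message<String>>, Flux<List<String>>> windowed() {
            return flux -> flux
                    .map(Message::getPayload)
                    .buffer(10);
        }
    }

Bound via spring.cloud.function.definition=windowed, the binder subscribes the input destination to this function; no Poller is involved on the reactive path.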

How to handle errors on Spring Cloud Data Flow?

When deploying my microservice on Spring Cloud Data Flow I get the following log:
No bean named 'errorChannel' has been explicitly defined. Therefore, a default PublishSubscribeChannel will be created
How do I direct error flows?
My guess is to create an errorChannel bean (as the message says), but I did not find any docs about it, nor any sample usages.
For example, I have a sink that executes an Insert on a database and want to direct it elsewhere if insert fails.
The default errorChannel bean has a LoggingHandler subscribed to it.
If you define your own errorChannel bean, it won't get the default LoggingHandler.
The error channel is automatically wired in.
Each consumer (or @StreamListener) gets a dedicated error channel, <destination>.<group>.errors, which is bridged to the global errorChannel.
You can add a @ServiceActivator to consume ErrorMessages from either of these channels.
Error channels are not applied on the producer side; you have to catch the exception yourself.
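A minimal sketch of such a handler, assuming an illustrative binding with destination mySink and group myGroup:

    import org.springframework.integration.annotation.ServiceActivator;
    import org.springframework.messaging.support.ErrorMessage;
    import org.springframework.stereotype.Component;

    @Component
    public class InsertErrorHandler {

        // Consumes failures from this binding's dedicated error channel;
        // the ErrorMessage payload is the exception thrown by the sink.
        @ServiceActivator(inputChannel = "mySink.myGroup.errors")
        public void handle(ErrorMessage error) {
            // Redirect the failure, e.g. to a dead-letter destination or a log.
            System.err.println("Insert failed: " + error.getPayload().getMessage());
        }
    }

Subscribing to the global errorChannel instead would catch failures from all bindings in the application.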

Spring Cloud Stream @StreamListener and Spring Integration's Resequencer Pattern

AFAIK the Spring Cloud Stream project is based on Spring Integration. Hence I was wondering: is there a nice way to resequence a subset of inbound messages before the @StreamListener handler is triggered? Or do I need to assemble the whole IntegrationFlow from scratch using XML or Java DSL config from Spring Integration?
My use case is as follows. Most of the time I process inbound messages on a Kafka topic as they come. However, a few events have to be resequenced based on the CORRELATION_ID, SEQUENCE_NUMBER, and SEQUENCE_SIZE headers. In other words, I'd like to keep using @StreamListener as much as possible and simply plug in a resequencing strategy for some events.
Yes, you would need to use Spring Integration for it. In fact, Spring Cloud Stream is effectively a binding framework only: it binds message handlers to the message brokers via binders. The message handlers themselves are provided by the users.
The @StreamListener annotation is pretty much an equivalent of Spring Integration's @ServiceActivator with a few extra features (e.g., conditional routing), but other than that it is just a message handler.
Now, as you alluded to, you can use Spring Integration (SI) to implement a message handler or an internal SI flow, and that is normal and recommended for complex cases.
That said, we do provide out-of-the-box apps that implement certain EIP components, and we do have, for example, an aggregator app which you can use as a starting point for implementing a resequencer. Furthermore, given that we have an aggregator app and not a resequencer, we would be glad to accept a contribution for it if you're interested.
I hope this answers your question.
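A minimal sketch of plugging a resequencer between the binding and the handler, assuming the classic Sink binding and the standard correlation/sequence headers; the intermediate channel name is illustrative:

    import org.springframework.cloud.stream.annotation.EnableBinding;
    import org.springframework.cloud.stream.messaging.Sink;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.annotation.ServiceActivator;
    import org.springframework.integration.dsl.IntegrationFlow;
    import org.springframework.integration.dsl.IntegrationFlows;
    import org.springframework.messaging.Message;

    @Configuration
    @EnableBinding(Sink.class)
    public class ResequencingConfig {

        // Reorders messages by CORRELATION_ID / SEQUENCE_NUMBER / SEQUENCE_SIZE
        // before they reach the handler below.
        @Bean
        public IntegrationFlow resequencingFlow() {
            return IntegrationFlows.from(Sink.INPUT)
                    .resequence()
                    .channel("resequenced")
                    .get();
        }

        @ServiceActivator(inputChannel = "resequenced")
        public void handle(Message<?> message) {
            // process messages in sequence order
        }
    }

If only some events need reordering, a router in front of the resequencer can send the rest straight to the handler channel.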

Message persistence in Spring Integration Aggregator without MessageStore by using AMQP?

I would like to know if I can have persistence in my Spring Integration setup when I use an aggregator that is not backed by a MessageStore, by leveraging the persistence of AMQP (RabbitMQ) queues before and after the aggregator.
I imagine this would use acks: the aggregator won't ack a message until it has collected all the parts and sent out the resulting message.
Additionally, I would like to know if this is ever a good idea :)
I am new to working with queues, and am trying to get a good feel for the patterns to use.
My business logic for this is as follows:
I receive messages on one queue.
Each message must result in two unrelated webservice calls (preferably in parallel).
The results of these two calls must be combined with details from the original message.
The combination must then be sent out as a new message on a queue.
Messages are important, so they must not be lost.
I was/am hoping to use only one 'persistent' system, namely RabbitMQ, and not have to add a database as well.
I've tried to keep the question specific, but any other suggestions on how to approach this are greatly appreciated :)
What you would like to do recalls the Scatter-Gather EI pattern.
So, you get a message from AMQP, send it into the Scatter-Gather endpoint, and wait for the aggregated reply. That's enough to stick with the default acknowledgment mode.
Right, the scatter channel can be a PublishSubscribeChannel with an executor to call the web services in parallel. The gather process will wait for replies according to the release strategy, and it blocks the original AMQP listener thread so the message is not acked prematurely.
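A minimal sketch of that flow in the Java DSL; the queue names and the two recipient channels (assumed to be executor channels whose subflows make the web service calls) are illustrative:

    import org.springframework.amqp.core.AmqpTemplate;
    import org.springframework.amqp.rabbit.connection.ConnectionFactory;
    import org.springframework.context.annotation.Bean;
    import org.springframework.integration.amqp.dsl.Amqp;
    import org.springframework.integration.dsl.IntegrationFlow;
    import org.springframework.integration.dsl.IntegrationFlows;

    @Bean
    public IntegrationFlow scatterGatherFlow(ConnectionFactory connectionFactory,
                                             AmqpTemplate amqpTemplate) {
        return IntegrationFlows
                // the listener thread runs the whole flow, so the AMQP ack
                // happens only after the aggregated result has been sent out
                .from(Amqp.inboundAdapter(connectionFactory, "work.queue"))
                .scatterGather(
                        scatterer -> scatterer
                                .applySequence(true)
                                .recipient("serviceAChannel")   // first web service call
                                .recipient("serviceBChannel"),  // second web service call
                        gatherer -> gatherer
                                .releaseStrategy(group -> group.size() == 2))
                .handle(Amqp.outboundAdapter(amqpTemplate).routingKey("results.queue"))
                .get();
    }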

Difference between Codec and MessageConverter in Spring Integration

Codecs were introduced in Spring Integration 4.2.
However, the description in the docs doesn't really explain how a Codec differs from a MessageConverter, or in which scenarios to use which abstraction.
Basically what I want to know is:
Why was the Codec abstraction introduced when it seems similar to what a MessageConverter does?
Why would you use a Codec over a MessageConverter and vice versa?
When would you choose to use one over the other?
This question was highlighted in the context of Spring Cloud Stream, where there is a default Kryo Codec configured but recently there has been work around MessageConverters.
It's a bit of a grey area.
MessageConverters are used in Spring Integration in two areas:
To convert some external representation of a message to a spring-messaging Message<?>, e.g. to/from an MQTT message.
To implement the datatype feature on message channels.
Codecs, on the other hand, only deal with message payloads when putting them on the wire (the MessageBus in XD or the Binder in Spring Cloud Stream). Kryo is an alternative to Java serialization.
Applications will typically not deal with Codecs directly, but Spring Integration provides a CodecMessageConverter which takes a codec to encode/decode the payload while converting.
It also provides a codec-based transformer so an app can do the encoding/decoding (if it wishes) somewhere else in the flow.
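For a feel of the Codec contract itself, here is a rough sketch using the Kryo-backed PojoCodec from Spring Integration 4.2+; MyEvent is a hypothetical payload type:

    import java.io.IOException;
    import org.springframework.integration.codec.Codec;
    import org.springframework.integration.codec.kryo.PojoCodec;

    public class CodecDemo {

        // Hypothetical event type used only for illustration.
        static class MyEvent {
            String name;
        }

        public static void main(String[] args) throws IOException {
            Codec codec = new PojoCodec(); // Kryo under the covers

            MyEvent event = new MyEvent();
            event.name = "created";

            byte[] wire = codec.encode(event);                 // POJO -> byte[] for the wire
            MyEvent back = codec.decode(wire, MyEvent.class);  // byte[] -> POJO
            System.out.println(back.name);
        }
    }

This is the kind of encode/decode step the Binder applies to payloads, while MessageConverters operate at the Message<?> level.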
So, in the context of Spring Cloud Stream, the Kryo Codec is used to encode/decode the payload within the Binder.
The message converters are used to implement conversion within the application bound to the transport by the Binder, using the channel dataType feature.
Let's look at an example using Spring Cloud DataFlow:
stream create foo --definition "source | processor --outputType=application/json | sink"
Let's say the source emits some POJO that is received by the processor, and the processor internally normally emits a Map object, but the sink wants to receive JSON; a MessageConverter does that conversion for you because of the outputType declaration.
The data between the source and processor, and between the processor and sink, is transported as Kryo.
