Spring Integration - Kafka Producer Error Channel - spring-integration

I am using a Kafka producer to publish messages to another Kafka topic and it is working fine. A sample configuration is below:
<int-kafka:outbound-channel-adapter
kafka-template="template"
channel="inputToKafka"
topic="foo"/>
Does the above adapter support an error-channel, the way the Kafka message-driven inbound channel adapter does?
I need that to audit the error count whenever the target Kafka broker is down and I am unable to publish.

Since any outbound adapter is a passive component that only does its work when it is invoked, it is not surprising that error handling is similar to a try...catch around a service method call in Java.
So, one way is to have an error channel upstream, on a Messaging Gateway or an Inbound Channel Adapter.
Another way is to use an ExpressionEvaluatingRequestHandlerAdvice in the request-handler-advice-chain of the <int-kafka:outbound-channel-adapter>.
Also, bear in mind that you should use the async = false option so that all errors from the Kafka interaction are raised on the calling thread.
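For illustration, a rough sketch of the advice-chain option; the bean and channel names (failureAuditAdvice, kafkaFailures) are placeholders, and the exact child element and property names may differ slightly between spring-integration-kafka versions:
<int-kafka:outbound-channel-adapter
        kafka-template="template"
        channel="inputToKafka"
        topic="foo">
    <int-kafka:request-handler-advice-chain>
        <ref bean="failureAuditAdvice"/>
    </int-kafka:request-handler-advice-chain>
</int-kafka:outbound-channel-adapter>

<bean id="failureAuditAdvice"
      class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
    <!-- evaluate this expression on failure; the result is sent to the failure channel -->
    <property name="onFailureExpression" value="payload"/>
    <!-- channel where a service activator can count/audit the failed sends -->
    <property name="failureChannelName" value="kafkaFailures"/>
    <!-- swallow the exception after it has been reported to the failure channel -->
    <property name="trapException" value="true"/>
</bean>
A service activator subscribed to kafkaFailures could then increment your error counter.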

Related

Acknowledging a Spring message

I have a Spring Integration application and I am using a message-driven channel adapter to consume the messages. This is the definition of the adapter:
<jms:message-driven-channel-adapter id="messageAdapter" destination="inQueue"
connection-factory="connectionFactory"
error-channel="errorChannel"
concurrent-consumers="${consumer.concurrent-consumers}"
acknowledge="transacted"
transaction-manager="transactionManager"
channel="channel"
auto-startup="true"
receive-timeout="50000"/>
So this message goes to my core channel and then through a series of service activators. If there is an error along the way, the message is moved to the errorChannel, where I handle the errors and decide what needs to be done with the message. For one scenario I want the message not to be rolled back to the queue; is that possible? I am using 'transacted' in my adapter definition, so I am not sure how to drive this behaviour. Any help is greatly appreciated!
You don't describe what the transactionManager bean is. If it's a JmsTransactionManager, remove it and the container will just use local transactions.
Then, the transaction will only roll back if the flow on the error-channel throws an exception. If that error flow exits normally ("consuming" the error), the transaction will not roll back.
If it's some other transaction manager (e.g. JDBC) then remove it and start the JDBC transaction later in the flow (i.e. don't synchronize the JMS and JDBC transactions; again using a local JMS transaction).
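As a minimal sketch of the local-transaction arrangement (the errorHandler bean and its handle method are placeholders, not from the original configuration):
<!-- no transaction-manager attribute: the container uses a local JMS transaction -->
<jms:message-driven-channel-adapter id="messageAdapter"
        destination="inQueue"
        connection-factory="connectionFactory"
        error-channel="errorChannel"
        acknowledge="transacted"
        channel="channel"/>

<!-- returning normally from this flow "consumes" the error and the local transaction commits;
     rethrowing the exception from here rolls the message back to the queue -->
<int:service-activator input-channel="errorChannel" ref="errorHandler" method="handle"/>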

JMS Problems Spring Batch With Partitioned Jobs On JBoss 5.2 EAP

We are using Spring Batch and partitioned jobs extensively in our project. Occasionally we see problems with partitioned jobs getting "hung" because of what appears to be lost messages. The remote partitions all complete but the parent step stays in STARTED. Our configuration uses one connection factory for reading messages from the queues (inbound gateway) and a different, clustered connection factory to send out the partition messages (outbound gateway). The reason for this is that JBoss messaging doesn't uniformly distribute messages around the cluster, and the client connection factory provides that functionality.
Red Hat came in and frankly threw mud at Spring and the configuration. The following are excerpts from their report:
The Spring JMSTemplate code employs several anti-patterns, like creating a new connection, session, and producer just to send a message, then closing the connection. Also, when receiving a message it can create a consumer each time, receive the message, then close the consumer. This can result in poor performance under load. The use of anti-patterns not only results in poor performance, but can deplete operating system resources such as threads and file handles, since some of the connection resources are released asynchronously. Moreover, with non-durable topic subscribers you can end up losing messages, since any messages received between the closing of the last and the opening of the next consumer will be lost. The one place where it may be acceptable to use the Spring JMSTemplate is inside the application server using the JCA managed connection factory (normally at "java:/JmsXA"), and that only works when you're sending messages. The JCA managed connection factory caches connections so they will not actually be created each time. However, using the JCA managed connection factory will not resolve the issue with consumers, since they are not cached.
In summary, the Spring JMSTemplate is not safe to use apart from the very specific use case of using it inside the application server with the JCA managed connection factory (java:/JmsXA), and only in that case to send messages (do not use it to consume messages).
Using it from a JMS client application outside the application server is never safe; using it with a standard connection factory (e.g. "ConnectionFactory", "ClusteredConnectionFactory", "jms/RemoteConnectionFactory", etc.) is never safe; and using it to receive messages is never safe. To safely receive messages using Spring, consider the use of MessageListenerContainers [7] with message-driven POJOs [8].
Finally, note that the issues encountered stem from JMS anti-patterns and are thus not specific to JBoss EAP. For example, see a similar discussion with regard to ActiveMQ [9].
Red Hat does not support using the Spring JMSTemplate with JBoss Messaging apart from the one acceptable use case of sending messages via the JCA managed connection factory.
RECOMMENDATIONS
● As to Spring JMS, as a rule, use JCA managed connection factories configured in JBoss EAP. Do not use the Spring-configured connection factories. Use the JNDI template to pull the connection factories into Spring from JBoss. This will get rid of most of the Spring JMS problems.
● Use standard JMS instead of Spring JMS for the batch job. Spring is a non-standard (and probably sub-standard) implementation of JMS. Standard JMS uses a pool of a few senders to send the message and closes the session after the message is sent. On the listener side, standard JMS uses a pool of workers listening to a distributed Queue or Topic. Each web server has a JMS listener deployed as a singleton and uses a standard Java observer to notify any caller that is expecting a callback.
The JMS connection factories are configured in JBoss and loaded via JNDI.
Can you provide your feedback on their assessment?
To avoid the overhead of creating a new connection/session for every send, you need to wrap the provider's connection factory in a CachingConnectionFactory. It reuses a single connection for sends and caches sessions, producers, and consumers.
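For example, a sketch of wrapping a JNDI-obtained factory (the JNDI name, bean names, and session cache size here are illustrative, and the jee namespace is assumed to be declared):
<jee:jndi-lookup id="targetConnectionFactory" jndi-name="jms/RemoteConnectionFactory"/>

<bean id="connectionFactory"
      class="org.springframework.jms.connection.CachingConnectionFactory">
    <!-- reuse one underlying connection; cache sessions and producers instead of creating them per send -->
    <property name="targetConnectionFactory" ref="targetConnectionFactory"/>
    <property name="sessionCacheSize" value="10"/>
</bean>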

How is the JMS session handled in a flow containing an inbound adapter, outbound adapter, and error channel, all configured with the same CachingConnectionFactory

I have the following flow:
1) message-driven-channel-adapter ->
1.1) output-channel connected to -> service-activator -> outbound-channel-adapter (for sending response)
1.2) error-channel connected to -> exception-type-router
1.2.1) message is sent to different queues depending on the exception type using outbound-channel-adapter
The MessageDrivenChannelAdapter uses a DefaultMessageListenerContainer and the outbound adapters use a JmsTemplate.
I have used the same CachingConnectionFactory for the inbound and outbound adapters, and have:
set acknowledge="transacted" on the message-driven-channel-adapter
set the cache level to CACHE_CONSUMER on the DefaultMessageListenerContainer
set cacheProducers=true and cacheConsumers=false on the CachingConnectionFactory
I am confused about how the JMS session/producer/consumer is created and handled in this flow.
1) Are the consumers and producers used by the inbound adapter and the outbound adapters (for the response and error queues) created from the same session, i.e. are the producers and consumers used on a given thread created from the same session?
2) I also want to confirm whether there are any disadvantages/issues in using a CachingConnectionFactory even after setting cacheConsumers to false in the factory and the cache level to CACHE_CONSUMER on the DefaultMessageListenerContainer, because it is confusing to read forum posts saying that a CachingConnectionFactory should not be used.
3) I also have a question about the execution flow: when does the service activator method complete? Does it complete only after the message has been sent to the output queue?
Please advise.
It's ok to use a caching connection factory with the listener container, as long as you don't use variable concurrency, or you disable caching consumers in the factory.
In your scenario, you should use a caching connection factory because you really want the producers to be cached for performance reasons.
When the container's acknowledge mode is transacted, the container's session is bound to the thread and will be used by any downstream JmsTemplate that is configured to use the same connection factory, as long as there is no async hand-off (QueueChannel or ExecutorChannel); the default DirectChannel runs the downstream endpoints on the container's thread.
The service activator method is invoked (and "completes") before sending the message to the outbound adapter.
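For example, the relevant wiring might look roughly like this (bean, queue, and channel names are placeholders, and cachingConnectionFactory is the shared factory described above); because processChannel and replyChannel are default DirectChannels, the outbound send runs on the container's thread and reuses its transacted session:
<jms:message-driven-channel-adapter destination="requestQueue"
        connection-factory="cachingConnectionFactory"
        acknowledge="transacted"
        channel="processChannel"
        error-channel="errorChannel"/>

<int:service-activator input-channel="processChannel" output-channel="replyChannel"
        ref="myService" method="process"/>

<!-- same connection factory, no async hand-off: this send uses the container's session -->
<jms:outbound-channel-adapter channel="replyChannel" destination="responseQueue"
        connection-factory="cachingConnectionFactory"/>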

How to cache the producer in an outbound-channel-adapter when it uses the session from the upstream message-driven-channel-adapter

I have designed the following messageflow
1) message-driven-channel-adapter ->
1.1) service-activator -> outbound-channel-adapter (for sending response)
1.2) in a chain - transformer -> outbound-channel-adapter (for sending error)
The message-driven channel adapter picks up messages from WebSphere MQ and is configured with a DefaultMessageListenerContainer. The outbound channel adapter sends messages to WebSphere MQ and has a JmsTemplate configured for that.
The problem is that performance looks very low. I have used CACHE_CONSUMER and acknowledge="transacted" on the message-driven-channel-adapter. I don't think the message-driven-channel-adapter is the issue. I believe the performance problem is due to the JmsTemplate used by the outbound-channel-adapter, because every time it creates a producer from the session provided by the upstream message-driven-channel-adapter.
Is there a way to cache the producer used by the JmsTemplate? Can anyone please tell me how I could improve the performance?
If you use a CachingConnectionFactory, the producer will be cached by default in the connection factory. Note: if you use variable concurrency in the inbound adapter, be sure to set cacheConsumers to false in the connection factory; we don't want consumers cached there (it's ok in the container).
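For example, a sketch of the factory settings for this case (bean names, including the target mqConnectionFactory, are placeholders):
<bean id="cachingConnectionFactory"
      class="org.springframework.jms.connection.CachingConnectionFactory">
    <property name="targetConnectionFactory" ref="mqConnectionFactory"/>
    <!-- producers used by the outbound adapter's JmsTemplate are cached here (the default) -->
    <property name="cacheProducers" value="true"/>
    <!-- leave consumer caching to the listener container (CACHE_CONSUMER),
         which matters if the container uses variable concurrency -->
    <property name="cacheConsumers" value="false"/>
</bean>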

Spring Integration - JMS outbound adapter post-send database update

We previously had a Spring Integration flow (XML configuration-based) where we would do an update in a database after sending a message to a JMS queue. To achieve this, the SI flow was configured with a publish-subscribe queue channel as the input to a JMS Outbound Channel Adapter (order 0) and a Service Activator (order 1). The idea here being that after a successful JMS send, the service activator would be called, thus updating the data in the database.
We are now in the process of updating our flows to work with spring-integration:4.0.x APIs and wanted to use this opportunity to see if the described flow pattern is still a good/recommended way of doing a database update after a successful JMS send or if there is now a simpler/better way of achieving this? As a side note, our flows are now being implemented using spring-integration-java-dsl:1.0.0.M3 APIs.
Thanks in advance for any input on this,
PM.
publish-subscribe queue channel
There's no such thing as a pub-sub queue channel; by definition, it's a subscribable channel; so I assume that's what you mean.
It is one of the ways to do what you need, and perfectly fine; you can also achieve the same result with a RecipientListRouter. The DSL syntax is quite nice, especially with Java 8; see the SpringOne demo app for an example.
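For reference, a rough XML sketch of that ordered pub-sub pattern (channel, queue, and bean names are placeholders); since the subscribers run in order on the same thread, the database update only runs if the JMS send succeeds:
<int:publish-subscribe-channel id="sendAndUpdateChannel"/>

<jms:outbound-channel-adapter channel="sendAndUpdateChannel" destination="outQueue"
        connection-factory="connectionFactory" order="0"/>

<!-- invoked only if the JMS send above did not throw an exception -->
<int:service-activator input-channel="sendAndUpdateChannel" ref="databaseUpdater"
        method="update" order="1"/>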
