We are using Spring Batch and partitioned jobs extensively in our project. Occasionally we see partitioned jobs getting "hung" because of what appears to be lost messages. The remote partitions all complete, but the parent step stays in STARTED. Our configuration uses one connection factory for reading messages from the queues (inbound gateway) and a different, clustered connection factory to send out the partition messages (outbound gateway). The reason for this is that JBoss Messaging doesn't distribute messages uniformly around the cluster, and the clustered client connection factory provides that functionality.
Red Hat came in and, frankly, threw mud at Spring and our configuration. The following are excerpts from their report:
The Spring JMSTemplate code employs several anti-patterns, like creating a new connection, session, and producer just to send a message, then closing the connection. Also, when receiving a message it can create a consumer each time, receive the message, then close the consumer. This can result in poor performance under load. The use of anti-patterns not only results in poor performance, but can deplete operating system resources such as threads and file handles, since some of the connection resources are released asynchronously. Moreover, with non-durable topic subscribers you can end up losing messages, since any messages received between the closing of the last and the opening of the next consumer will be lost.
The one place where it may be acceptable to use the Spring JMSTemplate is inside the application server using the JCA managed connection factory (normally at "java:/JmsXA"), and that only works when you're sending messages. The JCA managed connection factory caches connections, so they will not actually be created each time. However, using the JCA managed connection factory will not resolve the issue with consumers, since they are not cached.
In summary, the Spring JMSTemplate is not safe to use apart from the very specific use case of using it inside the application server with the JCA managed connection factory (java:/JmsXA), and only in that case to send messages (do not use it to consume messages).
Using it from a JMS client application outside the application server is never safe, and using it with a standard connection factory (e.g. "ConnectionFactory", "ClusteredConnectionFactory", "jms/RemoteConnectionFactory", etc.) is never safe; likewise, using it to receive messages is never safe. To safely receive messages using Spring, consider the use of MessageListenerContainers [7] with Message-Driven POJOs [8].
Finally, note that the issues encountered stem from JMS anti-patterns and are thus not a problem specific to JBoss EAP. For example, see a similar discussion with regard to ActiveMQ [9].
Red Hat does not support using the Spring JMSTemplate with JBoss Messaging apart from the one acceptable use case of sending messages via the JCA managed connection factory.
RECOMMENDATIONS
● As to Spring JMS, as a rule, use JCA managed connection factories configured in JBoss EAP. Do not use the Spring-configured connection factories. Use a JNDI template to pull the connection factories from JBoss into Spring. This will get rid of most of the Spring JMS problems.
● Use standard JMS instead of Spring JMS for the batch job. Spring is a non-standard (and probably sub-standard) implementation of JMS. Standard JMS uses a pool of a few senders to send the message and closes the session after the message is sent. On the listener side, standard JMS uses a pool of workers listening to a distributed Queue or Topic. Each web server has a JMS listener deployed as a singleton and uses a standard Java observer to notify any caller that is expecting a callback.
The JMS connection factories are configured in JBoss and loaded via JNDI.
Can you provide your feedback on their assessment?
To avoid the overhead of creating a new connection, session, and producer per send, you need to wrap the provider's connection factory in a CachingConnectionFactory. It reuses the same connection for sends and caches sessions, producers, and consumers.
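As a minimal sketch of that wrapping, assuming the provider's factory is looked up from JNDI; the JNDI name and cache size below are placeholders, not your actual configuration:

import javax.jms.ConnectionFactory;
import javax.naming.NamingException;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jndi.JndiTemplate;

@Configuration
public class JmsConfig {

    @Bean
    public ConnectionFactory connectionFactory() throws NamingException {
        // look up the provider's (e.g. JBoss) factory instead of defining one in Spring
        ConnectionFactory target =
                (ConnectionFactory) new JndiTemplate().lookup("jms/RemoteConnectionFactory");

        // wrap it so the same connection is reused and sessions/producers are cached
        CachingConnectionFactory caching = new CachingConnectionFactory(target);
        caching.setSessionCacheSize(10);
        return caching;
    }
}

Any JmsTemplate (or outbound gateway) built on this bean then reuses cached resources instead of opening and closing a connection per send.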
Related
What are the use cases of the Java NIO transport connector vs. the PooledConnectionFactory in ActiveMQ? Both serve a pool of connections. I want thousands of clients to connect to the broker and to maintain a separate queue for each client. What is the use case for each of them in that scenario?
The NIO transport connector is a server-side incoming connection API that uses a selector-based event loop to share the load of multiple active connections. The normal transport connector, by contrast, creates a single thread per connection to process I/O, which leads to higher thread counts when large numbers of connections are active.
The PooledConnectionFactory is a client-side device that provides a pool of one or more open connections that application code can use to reduce the number of connection create/destroy events. This leads to faster client-side code in some cases and to lower overhead on the remote broker, since it does not have to process connection create/destroy events from an application whose usage pattern causes that sort of behaviour. Depending on how you've coded your application, or what API layering you have (such as Camel or Spring), a pool may or may not be of benefit.
The two things are not related and should not be equated with one another.
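As a rough client-side illustration of that pooling (the broker URL and pool size are placeholders, and this assumes the activemq-pool module is on the classpath):

import javax.jms.Connection;
import javax.jms.JMSException;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

public class PooledClient {

    public static void main(String[] args) throws JMSException {
        ActiveMQConnectionFactory amqFactory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        PooledConnectionFactory pooled = new PooledConnectionFactory();
        pooled.setConnectionFactory(amqFactory);
        pooled.setMaxConnections(8); // reuse a small set of connections across the application

        // application code borrows a pooled connection instead of creating/destroying one per send
        Connection connection = pooled.createConnection();
        connection.start();
        // ... create sessions/producers as usual; closing them returns resources to the pool
        connection.close();

        pooled.stop(); // shut the pool down when the application exits
    }
}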
At a low level, the NIO transport uses a selector, which is much more performant than the PooledConnectionFactory: it is notified as soon as new data is ready, while the pool waits on each connection. For your use case I would strongly suggest the NIO connector.
I wrote a simple Apache Pulsar client with Spring Boot: a Pulsar producer initialized as a bean, which is used in the REST controller to publish incoming API messages to Pulsar, and a consumer that consumes messages, prints some values to the console, and acknowledges them.
As of now the application is very simple, but the moment this Spring Boot app loads I see a memory spike, at times ending in an OOM. Is there any specific configuration to be used when using the Pulsar client with Spring Boot?
The code is mostly the one found in the Pulsar docs.
I am answering this to document the issue: do not use loops to consume messages; instead, attach a MessageListener to the consumer via
consumer.messageListener(new Myconsumer())
or
consumer.messageListener((consumer, msg) -> { /* do something */ })
The docs didn't mention this, but I found it while going through the consumer API.
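For completeness, a self-contained sketch of a listener-based consumer; the service URL, topic name, and subscription name are placeholders:

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.PulsarClientException;

public class ListenerConsumer {

    public static void main(String[] args) throws PulsarClientException {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        // the listener is invoked by the client's internal threads; no receive() loop is needed
        Consumer<byte[]> consumer = client.newConsumer()
                .topic("my-topic")
                .subscriptionName("my-subscription")
                .messageListener((cons, msg) -> {
                    try {
                        System.out.println("Received: " + new String(msg.getData()));
                        cons.acknowledge(msg);
                    } catch (PulsarClientException e) {
                        cons.negativeAcknowledge(msg); // let the broker redeliver on failure
                    }
                })
                .subscribe();
    }
}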
Spring Cloud Stream is based on at-least-once delivery. This means that in some rare cases a duplicate message can arrive at an endpoint.
Does Spring Cloud Stream keep a buffer of already received messages?
The Idempotent Receiver pattern in the Enterprise Integration Patterns book suggests:
Design a receiver to be an Idempotent Receiver, one that can safely receive the same message multiple times.
Does Spring Cloud Stream control duplicate messages in consumers?
Update:
A paragraph from the Spring Cloud Stream documentation says:
4.5.1. Durability
Consistent with the opinionated application model of Spring Cloud Stream, consumer group subscriptions are durable. That is, a binder implementation ensures that group subscriptions are persistent and that, once at least one subscription for a group has been created, the group receives messages, even if they are sent while all applications in the group are stopped.
Anonymous subscriptions are non-durable by nature. For some binder implementations (such as RabbitMQ), it is possible to have non-durable group subscriptions.
In general, it is preferable to always specify a consumer group when binding an application to a given destination. When scaling up a Spring Cloud Stream application, you must specify a consumer group for each of its input bindings. Doing so prevents the application’s instances from receiving duplicate messages (unless that behavior is desired, which is unusual).
I think your assumptions about the responsibility of the spring-cloud-stream framework are incorrect.
Spring-cloud-stream, in a nutshell, is a framework responsible for connecting and adapting producers/consumers provided by the developer to the message broker(s) exposed by the spring-cloud-stream binders (e.g., Kafka, Rabbit, Kinesis, etc.).
So connecting to a broker, receiving a message from the broker, deserialising it, invoking user code, serialising the message, and sending it back to the broker is in the scope of the framework's responsibility. You can look at it as purely infrastructure.
What you're describing is more of an application concern, since the actual receiver is something the user develops as part of the spring-cloud-stream development experience; hence the responsibility for idempotence resides with that user.
Also, on top of that, most brokers already handle idempotency (in a way) by ensuring that a particular message is delivered only once. That said, if someone sends an identical message to such a broker, it will have no idea that it is a duplicate, so the requirement for idempotency and/or deduplication is still valid. But as you can see, it is not as straightforward given the number of factors that are in play; your understanding of idempotence could be different from mine, hence our approaches could differ as well.
One last thing (partially to prove my last point): "can safely receive the same message multiple times" is all the pattern states, but what does safely really mean to you vs. me vs. some other person?
If you are concerned about a case where the application receives and processes a message from the broker but crashes before it acknowledges the message, that can happen. The Spring Cloud Stream app starters provide support for auto-configuration of a persistent message metadata store which backs Spring Integration's IdempotentReceiverInterceptor. An example of this is in the SFTP source app starter. By default, the SFTP source uses an in-memory metadata store, so it would not survive a restart, but it can be customized to use a persistent store.
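For reference, a rough sketch of wiring such an interceptor yourself; the Redis-backed store, the "messageId" header used as the deduplication key, and the "duplicates" channel name are illustrative assumptions, not part of the original answer:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.integration.handler.advice.IdempotentReceiverInterceptor;
import org.springframework.integration.metadata.ConcurrentMetadataStore;
import org.springframework.integration.redis.metadata.RedisMetadataStore;
import org.springframework.integration.selector.MetadataStoreSelector;

@Configuration
public class IdempotencyConfig {

    @Bean
    public ConcurrentMetadataStore metadataStore(RedisConnectionFactory redisConnectionFactory) {
        // persistent store, so already-seen message ids survive a restart
        return new RedisMetadataStore(redisConnectionFactory, "processed-messages");
    }

    @Bean
    public IdempotentReceiverInterceptor idempotentReceiverInterceptor(ConcurrentMetadataStore store) {
        // the selector records each key it sees and rejects keys it has already seen
        MetadataStoreSelector selector = new MetadataStoreSelector(
                message -> message.getHeaders().get("messageId", String.class), store);
        IdempotentReceiverInterceptor interceptor = new IdempotentReceiverInterceptor(selector);
        interceptor.setDiscardChannelName("duplicates"); // route duplicates aside instead of reprocessing
        return interceptor;
    }
}

The interceptor can then be attached to a consumer endpoint, for example with @IdempotentReceiver("idempotentReceiverInterceptor") on the handler method.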
I would like to process messages programmatically using a message-driven-channel-adapter. Here is the scenario I have to implement:
During startup, my application reads its configuration from a service. The configuration provides information about the queues that will contain the messages. Hence I would like to create a message-driven-channel-adapter for each queue to listen for messages asynchronously.
Any example that initializes the whole Spring Integration context programmatically, instead of using XML, would be useful.
If you are going to do everything programmatically, I'd suggest you bypass the Spring Integration magic and just use DefaultMessageListenerContainer directly.
Afterwards you can send messages to an existing MessageChannel directly from the MessageListener implementation, or use a Messaging Gateway.
Please be careful with programmatic configuration: do not miss important attributes such as the ApplicationContext or the invocation of afterPropertiesSet().
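A minimal programmatic sketch, assuming a javax.jms.ConnectionFactory is available and inputChannel is an existing Spring Integration MessageChannel; calling this once per queue name read from your configuration service, and the transacted session setting, are illustrative choices:

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;

import org.springframework.integration.support.MessageBuilder;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.messaging.MessageChannel;

public class QueueListenerFactory {

    public static DefaultMessageListenerContainer listenerFor(ConnectionFactory connectionFactory,
                                                              String queueName,
                                                              MessageChannel inputChannel) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName(queueName);          // one container per queue from the config service
        container.setSessionTransacted(true);
        container.setMessageListener((MessageListener) jmsMessage ->
                inputChannel.send(MessageBuilder.withPayload(jmsMessage).build()));
        container.afterPropertiesSet();                   // easy to miss without XML/annotation config
        container.start();
        return container;
    }
}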
I have designed the following message flow:
1) message-driven-channel-adapter ->
1.1) service-activator -> outbound-channel-adapter (for sending response)
1.2) in a chain - transformer -> outbound-channel-adapter (for sending error)
The message-driven-channel-adapter picks messages up from WebSphere MQ and is configured with a DefaultMessageListenerContainer. The outbound-channel-adapter sends messages to WebSphere MQ and has a JmsTemplate configured for that.
The problem is that performance looks very low. I have used the CACHE_CONSUMER cache level and acknowledge="transacted" at the message-driven-channel-adapter, so I don't think the message-driven-channel-adapter is the issue. I suspect the performance problem comes from the JmsTemplate used in the outbound-channel-adapter, because every time it creates a producer from the session provided downstream by the message-driven-channel-adapter.
Is there a way to cache the producer used by the JmsTemplate? Can anyone please tell me how I could improve the performance?
If you use a CachingConnectionFactory, the producer will be cached by default in the connection factory. Note: if you use variable concurrency in the inbound adapter, be sure to set cacheConsumers to false in the connection factory; we don't want consumers cached there (it's ok in the container).
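As a sketch of that suggestion, assuming your WebSphere MQ connection factory is available as a bean named mqConnectionFactory (the bean name and cache size are placeholders):

import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.connection.CachingConnectionFactory;

@Configuration
public class MqCachingConfig {

    @Bean
    public CachingConnectionFactory cachingConnectionFactory(ConnectionFactory mqConnectionFactory) {
        CachingConnectionFactory caching = new CachingConnectionFactory(mqConnectionFactory);
        caching.setSessionCacheSize(10);  // sessions (and their producers) are reused for sends
        caching.setCacheProducers(true);  // the default, shown here for clarity
        caching.setCacheConsumers(false); // important when the listener container uses variable concurrency
        return caching;
    }
}

Point the JmsTemplate used by the outbound-channel-adapter at this caching factory so producers are reused rather than created for every send.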