In my singleton class I use MessagingTemplate to send messages to a channel. Currently I create a new MessagingTemplate instance each time I need to send a message. I wonder a) how expensive this operation is, and b) whether the object is thread-safe, so that I can initialise it once and use it in a multithreaded environment.
It's not expensive to create, but it's unnecessary. The template is thread-safe; this is true for all the framework components.
I am working on an application that saves data into a database via a REST API. The basic flow is: REST API -> object -> save to database. I wanted to introduce a queue into the application, with the idea that the producer and the consumer would both be part of this one application.
Is it possible for a Node.js application to act as both producer and consumer of the queue? Given that Node.js is single-threaded, do I have any choice other than creating two applications - one producing to the queue, and a second one actively waiting for messages on the queue and saving them to the database?
There is also a requirement that, on restart, the application process any item on the queue that has not yet been acknowledged. That also makes me think that the 'two applications' architecture is the best idea here.
Thank you for the help.
Yes, Node.js can do that, and it is well suited to any I/O-intensive use case. The real question is: what are you trying to achieve? Message queues are meant to make different applications communicate with each other; if all you need is an in-process event bus, a message broker is total overkill. There are many simpler and more efficient ways to propagate messages between decoupled components of the same Node.js app; one of them is EventEmitter, which lets your components collaborate in a pub/sub fashion.
If you are convinced that an AMQP broker is your solution, you just need to:
Define a "producer" class that publishes data to an exchange myExchange
Define a "consumer" class that declares a queue myQueue
Create a binding at application startup between myExchange and myQueue, based on some routing key
When the "consumer" receives a message, acknowledge it only after the database save has succeeded. Once a message is acked it is destroyed, since it has been consumed. If an error occurs, you can NACK the message to have it redelivered.
There are Node.js libraries that make this code easier, such as Rascal.
Short answer: YES - and use two separate connections for publishing and consuming.
Is it possible for the NodeJS application to act as both producer and consumer of the queue?
I would even say it is a good use case, matching extremely well with Node.js's philosophy and threading model.
Knowing that Node.js is single-threaded, does it give me any other choice instead of creating two applications - one producing to the queue and the second one - waiting actively for messages in a queue and saving to the database?
You can have one application handle both; just be aware that if your client publishes too fast for the server to keep up, RabbitMQ can apply back pressure on the TCP connection. Consuming on a back-pressured TCP connection would greatly hurt consumer performance, which is why publishing and consuming should use separate connections.
My MessageListener implementation is not thread safe.
This causes issues when I try to wire it into a DefaultMessageListenerContainer with multiple consumers, since all the consumers share the same MessageListener object.
Is there a way to overcome this by making the DefaultMessageListenerContainer create multiple MessageListener instances, so that the MessageListener is not shared among consumer threads?
In that way each consumer thread will have its own MessageListener instance.
Please advise.
There's nothing built in to support this. It is generally considered best practice to make services stateless (and thus thread-safe).
If that's not possible, you would need to create a wrapper listener; two simple approaches would be to store instances of your listener in a ThreadLocal or maintain a pool of objects and retrieve/return instances from/to the pool on each message.
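A minimal sketch of the ThreadLocal approach. To keep it self-contained (no JMS provider needed), it uses a hypothetical SimpleListener interface in place of javax.jms.MessageListener; the class names are illustrative, not part of any framework:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for javax.jms.MessageListener, so the sketch runs without JMS.
interface SimpleListener {
    void onMessage(String message);
}

// A stateful, non-thread-safe listener: each instance must be confined to one thread.
class StatefulListener implements SimpleListener {
    private final StringBuilder buffer = new StringBuilder(); // not thread-safe
    public void onMessage(String message) {
        buffer.append(message);
    }
}

// Wrapper listener: gives each consumer thread its own delegate via ThreadLocal.
class ThreadLocalListener implements SimpleListener {
    private final ThreadLocal<SimpleListener> delegate =
            ThreadLocal.withInitial(StatefulListener::new);

    SimpleListener current() {             // exposed only so the demo can inspect the delegate
        return delegate.get();
    }

    public void onMessage(String message) {
        delegate.get().onMessage(message); // each thread hits its own StatefulListener
    }
}

public class ThreadLocalListenerDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadLocalListener shared = new ThreadLocalListener();
        Set<SimpleListener> seen = ConcurrentHashMap.newKeySet();
        Runnable consumer = () -> {
            shared.onMessage("hello");
            seen.add(shared.current());
        };
        Thread t1 = new Thread(consumer);
        Thread t2 = new Thread(consumer);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("distinct listener instances: " + seen.size());
    }
}
```

One caveat with ThreadLocal: the delegate instances live as long as the container's consumer threads do. With the pooling approach, instances are returned to the pool after each message instead.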
What is the difference between an @Async-annotated method and a Reactor configured to use a thread pool of the same size? Is there any advantage to either approach, and what would it be? For my use case, I do not need the async method to return a value.
The most obvious difference is that Reactor doesn't cross-cut @Async-annotated methods and implicitly submit events to a Reactor. If you're using the Reactor @Selector annotation on beans, then you're getting the opposite of what you would with @Async: an event handler, not an event publisher.
With that said, there is some support in Reactor for @Async-style event publishing through the DynamicReactorFactory. It uses an interface instead of an annotation, but the concept is similar.
Regarding "advantages" to using one or the other: it really depends on what other things you're doing in your application and whether or not you're using Reactor in a more general sense. Reactor isn't designed to be a thread pool replacement. The ThreadPoolExecutorDispatcher in Reactor just uses a plain ThreadPoolExecutor underneath. The advantages to using Reactor in that scenario come from the optimized event publishing used in Reactor versus creating new Callables and Runnables all the time, as well as using Reactor's Stream and Promise API to handle asynchronous execution.
Looked at from the API perspective, there is a distinct and measurable advantage to using Reactor over a plain TaskExecutor for background jobs.
I am working on a multi-threaded web application developed using Java EE.
The application has two threads, similar to a producer and a consumer: one thread continuously reads data from a third-party API (a socket connection) and updates a cache, and the other thread (the consumer) continuously tries to read from that cache.
My question is whether there is any way to improve the performance of the consumer thread (which only reads data from the cache), so that it reads only when the data has actually changed.
Sure: use a BlockingQueue (choose an implementation such as ArrayBlockingQueue). Its take() method will block (suspend) the consumer until there is data available in the buffer.
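A minimal, self-contained sketch of that pattern (the element values are just illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Bounded buffer: the producer blocks on put() when it is full,
        // and the consumer blocks on take() when it is empty.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                queue.put("update-1");
                queue.put("update-2");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        // take() suspends this (consumer) thread until the producer has put data,
        // so the consumer does no busy polling of the cache.
        System.out.println(queue.take());
        System.out.println(queue.take());
        producer.join();
    }
}
```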
There are lots of ways to do this.
Producer-consumer queue (aka BlockingQueue in Java, as suggested by Tudor) is one way.
You may also use wait()/notify(), so that the consumer waits on an object and the producer notifies it when there is an update. Starting from Java 5, you can also use a Condition for this purpose.
When an EJB application receives several requests (a work load), it can manage that load simply by pooling the EJBs: when every EJB instance is in use by a thread, subsequent threads have to wait in a queue until some EJB finishes its work (avoiding overload and efficiency degradation of the system).
Spring uses stateless singletons (no pooling at all) that are used by an "out of control" number of threads.
Is there a way to do something to control the way the work load is going to be delivered? (equivalent to the EJB instance pooling).
Thank you!
In the case of the web app, the servlet container has a pool of threads that determines how many incoming HTTP requests it can handle simultaneously. In the case of the message-driven POJO, the JMS configuration defines a similar thread pool handling incoming JMS messages. Each of these threads would then access the Spring beans.
Googling around for RMI threading, it looks like there is no way to configure thread pooling for RMI: each RMI client is allocated a thread. In this case you could use Spring's task executor framework to do the pooling. Using <task:executor id="executor" pool-size="10"/> in your context config will set up an executor with 10 threads. Then annotate the methods of your Spring bean that will be handling the work with @Async.
Using the Spring task executor you could leave the Servlet and JMS pool configuration alone and configure the pool for your specific work in one place.
To achieve a behaviour similar to the EJB pooling, you could define your own custom scope. Have a look at SimpleThreadScope and the example referenced from this class' javadoc.
The difference between Spring and EJB is that Spring allows multiple threads on a single instance of a bean, while in EJB you have only one thread per bean instance (at any point in time).
So you do not need any pooling in Spring for this. On the other hand, you need to take care to implement your beans in a thread-safe way.
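A small pure-JDK illustration of why a stateless bean is safe to share: all state lives in parameters and local variables, so one singleton instance can serve many threads. PriceService is a hypothetical example class, not part of Spring:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Stateless service: no mutable fields, so a single shared instance
// is safe under any number of concurrent callers.
class PriceService {
    long totalCents(long unitCents, int quantity) {
        return unitCents * quantity; // only parameters and locals, nothing shared
    }
}

public class StatelessDemo {
    public static void main(String[] args) throws Exception {
        PriceService shared = new PriceService(); // Spring-style shared singleton
        ExecutorService pool = Executors.newFixedThreadPool(8);
        List<Future<Long>> results = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            results.add(pool.submit(() -> shared.totalCents(250, 4)));
        }
        boolean allCorrect = true;
        for (Future<Long> f : results) {
            if (f.get() != 1000L) {
                allCorrect = false;
            }
        }
        pool.shutdown();
        System.out.println("all results correct: " + allCorrect);
    }
}
```

Had PriceService accumulated the total in an instance field instead, concurrent callers would corrupt each other's results, which is exactly the situation the answer warns about.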
From the comments:
Yes, I need it if I want to limit the number of threads that can use my beans simultaneously.
One (maybe not the best) way to handle this is to implement the application in the normal Spring style (no limits), and then have a "front controller" that accepts the client request. Instead of invoking the service directly, it invokes the service asynchronously (@Async). You could also use some kind of async proxy instead of making the service itself asynchronous.
class Controller {
    private AsyncProxy asyncProxy;
    Object doStuff() throws Exception {
        return asyncProxy.doStuffAsync().get();
    }
}

class AsyncProxy {
    private Service service;
    @Async
    Future<Object> doStuffAsync() {
        return new AsyncResult<>(service.doStuff());
    }
}

class Service {
    Object doStuff() { return new Object(); }
}
Then you only need to enable Spring's async support, where you can also configure the thread pool used for the @Async calls.
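The effect of that pool configuration can be sketched with plain JDK classes: a fixed pool bounds how many "service" invocations run at once (further submissions queue up, like EJB instance pooling), while the caller blocks on a Future just as the controller above blocks on doStuffAsync().get(). Class names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BoundedPoolDemo {
    public static void main(String[] args) throws Exception {
        // Fixed pool of 2: at most two tasks execute at once; additional
        // submissions wait in the pool's queue until a thread frees up.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CountDownLatch bothRunning = new CountDownLatch(2);

        Runnable doStuff = () -> {
            bothRunning.countDown();
            try {
                bothRunning.await(); // completes only once two tasks run concurrently
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        List<Future<?>> futures = new ArrayList<>();
        for (int i = 0; i < 2; i++) {
            futures.add(pool.submit(doStuff));
        }
        for (Future<?> f : futures) {
            f.get(); // the "front controller" blocks here, like doStuffAsync().get()
        }
        pool.shutdown();
        System.out.println("two tasks ran concurrently on the bounded pool");
    }
}
```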