When an EJB application receives many requests (a work load), it can manage that load simply by pooling the EJBs: once every EJB instance in the pool is in use by a thread, subsequent threads have to wait in a queue until some EJB finishes its work (which avoids overloading the system and degrading its efficiency).
Spring, on the other hand, uses stateless singletons (no pooling at all) that are accessed by an "out of control" number of threads.
Is there a way to control how the work load is delivered, equivalent to EJB instance pooling?
Thank you!
In the case of the web app, the servlet container has a pool of threads that determines how many incoming HTTP requests it can handle simultaneously. In the case of message-driven POJOs, the JMS configuration defines a similar thread pool handling incoming JMS messages. Each of these threads would then access the Spring beans.
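For the JMS side, for instance, that pool is usually the listener container's concurrency setting. A minimal sketch with Spring's DefaultMessageListenerContainer (the destination name and concurrency range are just illustrative):

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Configuration
public class JmsConfig {

    // 2 to 5 consumer threads handle incoming JMS messages concurrently.
    @Bean
    public DefaultMessageListenerContainer listenerContainer(ConnectionFactory connectionFactory,
                                                             MessageListener listener) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("work.queue"); // illustrative destination
        container.setMessageListener(listener);
        container.setConcurrency("2-5");
        return container;
    }
}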
Googling around for RMI threading, it looks like there is no way to configure thread pooling for RMI: each RMI client is allocated a thread. In this case you could use Spring's Task Executor framework to do the pooling. Using <task:executor id="executor" pool-size="10"/> in your context config will set up an executor with 10 threads. Then annotate the methods of your Spring bean that will be handling the work with @Async.
Using the Spring task executor you could leave the Servlet and JMS pool configuration alone and configure the pool for your specific work in one place.
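For example, a minimal sketch of such a bean (the service name and the doHeavyLifting method are made up for illustration); you also need <task:annotation-driven executor="executor"/> so the annotation is picked up and wired to that executor:

import java.util.concurrent.Future;

import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.AsyncResult;
import org.springframework.stereotype.Service;

@Service
public class WorkService {

    // Runs on one of the executor's 10 threads; additional work waits in its queue.
    @Async
    public Future<String> handleWork(String payload) {
        String result = doHeavyLifting(payload); // placeholder for the real work
        return new AsyncResult<String>(result);
    }

    private String doHeavyLifting(String payload) {
        return payload.toUpperCase();
    }
}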
To achieve a behaviour similar to the EJB pooling, you could define your own custom scope. Have a look at SimpleThreadScope and the example referenced from this class' javadoc.
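For illustration, one way to register that scope (the scope name "thread" is arbitrary; a bean "pooled" this way would then be declared with @Scope("thread")):

import java.util.HashMap;
import java.util.Map;

import org.springframework.beans.factory.config.CustomScopeConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.support.SimpleThreadScope;

@Configuration
public class ScopeConfig {

    // Registers a custom "thread" scope; beans declared with @Scope("thread")
    // then get one instance per thread instead of one shared singleton.
    @Bean
    public static CustomScopeConfigurer customScopeConfigurer() {
        CustomScopeConfigurer configurer = new CustomScopeConfigurer();
        Map<String, Object> scopes = new HashMap<String, Object>();
        scopes.put("thread", new SimpleThreadScope());
        configurer.setScopes(scopes);
        return configurer;
    }
}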
The difference between Spring and EJB is that Spring allows multiple threads to use a single instance of a bean, while in EJB only one thread uses a bean instance at any point in time.
So you do not need any pooling in Spring for this. On the other hand, you need to take care that you implement your beans in a thread-safe way.
From the comments:
Yes, I need it if I want to limit the number of threads that can use my beans simultaneously.
One (maybe not the best) way to handle this is to implement the application in normal Spring style (no limits), and then have a "front controller" that accepts the client request. Instead of invoking the service directly, it invokes the service asynchronously (@Async). You may use some kind of async proxy instead of making the service itself asynchronous:
class Controller { // ... asyncProxy injected
    Object doStuff() throws Exception { return asyncProxy.doStuffAsync().get(); }
}

class AsyncProxy { // ... service injected
    @Async
    Future<Object> doStuffAsync() { return new AsyncResult<Object>(service.doStuff()); }
}

class Service { // ...
    Object doStuff() { return new Object(); }
}
Then you only need to enable Spring's async support, and there you can configure the pool used for the @Async threads.
So in this case I would use some kind of front controller that starts a new async invocation for each incoming request.
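For example, a minimal sketch of enabling async support and sizing the pool in Java config (the class name and pool sizes are just illustrative):

import java.util.concurrent.Executor;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class AsyncConfig {

    // At most 10 threads execute @Async methods; further calls wait in the queue.
    @Bean
    public Executor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(10);
        executor.setMaxPoolSize(10);
        executor.setQueueCapacity(100);
        executor.initialize();
        return executor;
    }
}

Depending on the Spring version, you may need to expose the executor through an AsyncConfigurer implementation instead of relying on the bean name being picked up.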
Related
My understanding is that in Tomcat, each request will take up one Java (and thus OS) thread.
Imagine I have an app with lots of long-running requests (e.g. a poker game with multiple players) that involves in-game chat, AJAX long-polling, etc.
Is there a way to change the Tomcat configuration/architecture for my webapp so that I'm not using a thread for each request, but instead 'intercept' the request and response so they can be processed as part of a queue?
I think you're right that Tomcat likes to handle each request in its own thread. This can become problematic with many concurrent requests. I have the following suggestions:
1. Configure the maxThreads and acceptCount attributes of the Connector element in server.xml. This way you limit the number of threads that can get spawned to a threshold; once that limit is reached, requests get queued. The acceptCount attribute sets the size of this queue. This is the simplest to implement, but not a good long-term solution.
2. Configure multiple Connector elements in server.xml and make them share a thread pool by adding an Executor element in server.xml (see the sketch after this list). You may also want to point Tomcat to your own implementation of the Executor interface.
3. If you want finer-grained control over how requests are serviced, consider implementing your own connector. The 'protocol' attribute of the Connector element in server.xml should point to your new connector. I have done this to add a custom SSL connector and it works great.
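For instance, a rough server.xml sketch combining the first two suggestions (all names and numbers here are purely illustrative):

<!-- Shared pool: at most 50 worker threads for the connector(s) that reference it -->
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
          maxThreads="50" minSpareThreads="4"/>

<!-- acceptCount is the queue of connections waiting once all threads are busy -->
<Connector executor="tomcatThreadPool"
           port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           acceptCount="100"/>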
Would you reduce this problem to a general requirement to make Tomcat more scalable in terms of the number of requests/connections? The generic solution to that would be configuring a load balancer to handle multiple instances of Tomcat.
I'm currently writing a new EJB application which basically is supposed to receive messages from a web service and launch a downloading process based on this message content. This application will run on Glassfish 3.1.1.
My first idea was to create a singleton bean that would read the messages from the web service and use a stateful session bean to initiate and handle the download itself. I need to use stateful beans because I need a conversational state between my singleton and the stateful bean (download status, etc.).
The "problem" is that if I receive several messages from the web service, I'm supposed to start several downloads in parallel, each download with its own context of course. How am I supposed to achieve this, since if I invoke a stateful session bean from my singleton I'll always get the same bean, correct? The only solution I see is to use threads created and launched from my singleton, but this is not permitted by the EJB specification...
Thanks for your help!
I don't think you want a stateful session bean here. The point of a stateful bean is that it maintains state in the scope of a session, which is a relationship with a particular client. In your case, there isn't one download per client (are there even any clients?), which means that this is not an appropriate scope.
If you just want multiple threads, use a stateless bean with an @Asynchronous method. You would probably have to handle status updates using a callback to the singleton.
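A rough sketch of that approach (the bean names, the DownloadRegistry singleton, and the fetch method are invented for illustration):

import java.util.concurrent.Future;

import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.EJB;
import javax.ejb.Stateless;

// Hypothetical singleton (e.g. an @Singleton bean) that tracks download status.
interface DownloadRegistry {
    void updateStatus(String url, String status);
}

@Stateless
public class DownloadBean {

    @EJB
    private DownloadRegistry registry;

    // Each call runs on a container-managed thread, so several downloads
    // can run in parallel without creating threads yourself.
    @Asynchronous
    public Future<String> download(String url) {
        registry.updateStatus(url, "STARTED");
        String localFile = fetch(url);             // placeholder for the real download logic
        registry.updateStatus(url, "FINISHED");
        return new AsyncResult<String>(localFile);
    }

    private String fetch(String url) {
        return "/tmp/downloaded";                  // placeholder
    }
}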
Why do you need a singleton bean here? Is a stateful session bean alone not good enough? You want simultaneous downloads, you want statefulness, so why use a singleton? Can you explain a little bit more?
I am building a system where each request from the client side spawns multiple threads on the server side. Each thread then uses one or more DAOs (some DAOs can be used by more than one thread at a time). All DAOs are injected (@Autowired) into my thread classes by Spring. Each DAO receives a SessionFactory injected as well.
What would be the proper way of managing Hibernate sessions across these multiple DAOs so I would not run into problems because of the multithreaded environment (e.g. a few DAOs from different threads trying to use the same session at the same time)?
Would it be enough to specify hibernate.current_session_context_class=thread in the Hibernate configuration and then, every time in a DAO, simply use SessionFactory.getCurrentSession() to do the work? Would it properly detect and create sessions per thread as needed?
Yes. It is enough.
When setting hibernate.current_session_context_class to thread, the session returned from SessionFactory.getCurrentSession() comes from a ThreadLocal instance.
Every thread has its own, independent ThreadLocal instance, so different threads will not access the same Hibernate session.
The behaviour of SessionFactory.getCurrentSession() is that if it is called for the first time in the current thread, a new Session is opened and returned; if it is called again in the same thread, the same session will be returned.
As a result, you can get the same session to use across different DAO methods in the same transaction code simply by calling SessionFactory.getCurrentSession(). This saves you from passing the Hibernate session through the DAO methods' input parameters when you have to call many different DAO methods in the same transaction.
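A minimal sketch of what such a DAO could look like (the class name and save method are made up; transaction demarcation is elided):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;

@Repository
public class GenericDao {

    @Autowired
    private SessionFactory sessionFactory;

    // With current_session_context_class=thread, this returns the session bound
    // to the calling thread, opening a new one on first use in that thread.
    public void save(Object entity) {
        Session session = sessionFactory.getCurrentSession();
        session.save(entity);
    }
}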
I have a Java servlet that acts as a facade to other web services deployed on the same Tomcat instance. My wrapper servlet creates N more threads, each of which invokes a web service, collates the response and sends it back to the client. The web services are all deployed on the same Tomcat instance as different applications.
I am seeing thread blocking on this facade wrapper service after a few hours of deployment, which brings down the Tomcat instance. All blocked threads are at endpoints of this facade web service (like http://domain/appContext/facadeService).
Is there a way to control such thread blocking, caused by starvation of the available threads that actually do the processing? What are the best practices to prevent such deadlocks?
The common solution to this problem is to use the Executor framework. You need to express your web service call as a Callable and pass it to the executor either on its own, or as a Collection<Callable> (see the Javadoc for the complete list of options).
You have two choices for controlling the time. The first is to use the timeout parameters of an appropriate ExecutorService method (such as invokeAll(tasks, timeout, unit)) to specify the maximum web service timeout. The other option is to get the result (which is expressed as a Future<T>) and use get(long, TimeUnit) to specify the maximum amount of time you are willing to wait for the result.
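A minimal sketch of the second option (the pool size, timeout, and callWebService method are placeholders):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FacadeInvoker {

    // Bounded pool: at most 10 backend calls in flight at once.
    private final ExecutorService executor = Executors.newFixedThreadPool(10);

    public String invoke(final String serviceUrl) throws Exception {
        Callable<String> call = new Callable<String>() {
            @Override
            public String call() throws Exception {
                return callWebService(serviceUrl); // placeholder for the real client call
            }
        };
        Future<String> future = executor.submit(call);
        try {
            // Wait at most 5 seconds, then give up instead of blocking a Tomcat thread indefinitely.
            return future.get(5, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            future.cancel(true);
            throw e;
        }
    }

    private String callWebService(String serviceUrl) {
        return "response"; // placeholder
    }
}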
Hey, I have a question regarding multithreading. First off, how many instances of DispatcherServlet / DispatcherPortlet are there? Is there always only one, even when there are, let's say, 10 requests per second? What about the services that are singletons by default? If I have a validationService bean that is injected into a handler to provide request validation, as a singleton (by default), can I rely on the fact that it is a singleton and that it won't be re-instantiated in some cases?
This is an interesting question.
As mentioned in this previous question, the container is only permitted to instantiate one servlet instance. In this case, you're guaranteed to have one Spring context, and one singleton.
The question is what happens for previous versions of the Servlet spec, which I'm not sure specify this behaviour explicitly.
In practice, though, containers only ever instantiate one servlet instance - I've never seen one do otherwise. So it's safe to assume that you'll only get one app context.
Depending on the load, the servlet container may create a number of servlet instances; the developer does not have any control over that. But in most cases the container maintains a single instance of each servlet (as servlets are supposed to be thread-safe anyway).
As for Spring singleton beans, these are singletons per web application - the Spring application context is stored in the servlet context (you can get access to it with WebApplicationContextUtils.getWebApplicationContext(ServletContext)).
As for reliability: yes, you can rely on the fact that within the scope of one Spring application context there is only one instance of each singleton bean.
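As an illustration (the servlet class here is made up; the bean name "validationService" is taken from the question):

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.context.WebApplicationContext;
import org.springframework.web.context.support.WebApplicationContextUtils;

public class SomeServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        // Same application context, and therefore the same singleton bean,
        // no matter how many requests or threads hit this servlet.
        WebApplicationContext ctx =
                WebApplicationContextUtils.getWebApplicationContext(getServletContext());
        Object validationService = ctx.getBean("validationService");
        // ... use validationService
    }
}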