Spring singleton beans in high load - multithreading

Hey, I have a question regarding multithreading. First off, how many instances of DispatcherServlet / DispatcherPortlet are there? Is it always just one, even when there are, let's say, 10 requests per second? What about the services that are singletons by default? If I have a validationService bean that is injected into a handler to provide request validation, as a singleton (by default), can I rely on the fact that it is a singleton and that it won't be re-instantiated in some cases?

This is an interesting question.
As mentioned in this previous question, the container is only permitted to instantiate one servlet instance. In this case, you're guaranteed to have one Spring context and one singleton.
The question is what happens for previous versions of the Servlet spec, which I'm not sure specify this behaviour explicitly.
In practice, though, containers only ever instantiate one servlet instance - I've never seen one do otherwise. So it's safe to assume that you'll only get one app context.

Depending on the load, the servlet container may create a number of servlet instances; the developer does not have any control over that. In most cases, though, the container maintains a single instance of each servlet (servlets are supposed to be thread-safe anyway).
As for Spring singleton beans, these are singletons per web application - the Spring application context is stored in the servlet context (you can access it with WebApplicationContextUtils.getWebApplicationContext(ServletContext)).
As for reliability: yes, you can rely on the fact that within the scope of one Spring application context there is only one instance of each singleton bean.
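As a minimal sketch of what that means in practice (LegacyServlet is a made-up class; "validationService" stands in for the bean from the question), any code that is not Spring-managed can look the singleton up through the servlet context and gets the same instance on every request - which is safe as long as the bean keeps no mutable state:

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.context.WebApplicationContext;
import org.springframework.web.context.support.WebApplicationContextUtils;

public class LegacyServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        // Same application context (and therefore the same singleton beans) on every request.
        WebApplicationContext ctx =
                WebApplicationContextUtils.getWebApplicationContext(getServletContext());
        Object validator = ctx.getBean("validationService");
        // The bean is stateless, so many request threads can share this one instance safely.
    }
}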

Related

HazelCast, Glassfish 3.1.2.2 and CDI/Weld Serialization class not found error?

I am testing HazelCast 3.1.3 and its HTTP Session Clustering/WM. My target application is a JSF 2.1/PrimeFaces app and it makes heavy use of CDI.
It has some javax.enterprise.context.SessionScoped beans in it, among many other things.
I have written a simple WAR matching this, and it uses a very simple SessionScoped bean. I have configured HC/WM following the HC directions here: http://www.hazelcast.org/docs/latest/manual/html-single/#HttpSessionClustering
Note: I am not running an embedded HC, but rather configured WM to be a client to an already running HC 'server' instance. So far I have my single GF instance and the HC server running on the same box for this test.
This sorta works, in that WM/HC connects and creates sessions and such. The HC server sees and accepts the WM client connections.
However, once more interesting stuff (interactions with SessionScoped objects in the web app) starts to happen, HC/WM starts tossing ClassNotFound exceptions - in particular CNFs for org.jboss.weld.context.conversation.ConversationIdGenerator.
I think this is because CDI in GF 3.1.2.2 is provided by a WELD OSGi 'thing', and that gets loaded by a lower-level class loader that is 'closer' to the session manager within GF. However, when the WM/HC filter (loaded by the WAR classloader) visits the CDI/WELD proxied or wrapped session object to serialize it, it cannot see the WELD classes (I have verified that ConversationIdGenerator is serializable).
Does anybody have any ideas on how to work around this issue?
I suppose delivering WELD in my WAR may work, or making WELD available in the common class loader may work - but that is sub-optimal.
Hmm... will this be an endemic problem when CDI is provided as a service by an app container but session clustering is provided as an application-level facet? (Or will this sort of issue happen in WildFly/others too?)

web app, jsp, and multithreaded

I'm currently building a new web app, in Java EE with Apache Tomcat as a webserver.
I'm using jsps and servlets in my work.
My question is:
I have very standard business logic, none of it synchronized. Will only a single thread of my web app logic run at any given time?
Since making all functions "synchronized" will cause a huge overhead, is there any alternative to it?
If I have a static method in my project that does a very simple thing, such as:
for (int i = 0; i < 10000; i++) {
    counter++;
}
If the method is not thread-safe, what is the behavior of such a method in a multi-user app? Is it unexpected behavior?!
And again, put simply:
If I mean to build a multi-user web app, should I "synchronize" everything in my project? If not all, then what should I sync?
will only a single thread of my web app logic run at any given time?
No, the servlet container will create only one instance of each servlet and call that servlet's service() method from multiple threads. In Tomcat, by default, you can expect up to 200 threads calling your servlet at the same time.
If the servlet were single-threaded, your application would be slow as hell; see SingleThreadModel - deprecated for a reason.
making all functions "synchronized" will cause a huge overhead, is there any alternative to it?
There is - your code should be thread-safe or, better, stateless. Note that if your servlet does not have any state (mutable fields, which would be unusual), it can be safely accessed by multiple threads. The same applies to virtually all objects.
In your sample code with a for loop: this code is thread-safe if counter is a local variable, and unsafe if it is a field. Each thread has its own copy of the local variables, while all threads accessing the same object share its fields and access them concurrently (which requires synchronization).
what should I sync?
Mutable, global, shared state. E.g. in your sample code, if counter is a field and is modified from multiple threads, incrementing it must be synchronized (consider AtomicInteger).
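To illustrate (the class and field names below are made up): the local-variable loop is inherently safe, the plain shared field is not, and AtomicInteger fixes the shared case without making everything synchronized:

import java.util.concurrent.atomic.AtomicInteger;

public class Counters {

    private int unsafeCounter = 0;                         // shared field: counter++ is not atomic
    private final AtomicInteger safeCounter = new AtomicInteger();

    public int localLoop() {
        int counter = 0;                                   // local variable: each thread gets its own copy
        for (int i = 0; i < 10000; i++) {
            counter++;
        }
        return counter;                                    // always 10000, however many threads call this
    }

    public void sharedIncrement() {
        for (int i = 0; i < 10000; i++) {
            unsafeCounter++;                               // lost updates under concurrent calls
            safeCounter.incrementAndGet();                 // atomic, correct without synchronized
        }
    }
}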

EJBs - Architectural issue

I'm currently writing a new EJB application which basically is supposed to receive messages from a web service and launch a downloading process based on the message content. This application will run on Glassfish 3.1.1.
My first idea was to create a singleton bean that would read the messages from the web service and use a stateful session bean to initiate and handle the download itself. I need to use stateful beans because I need a conversational state between my singleton and the stateful bean (download status, etc.).
The "problem" is that if I receive several messages from the web service I'm supposed to start several downloads in parallel, each download with its own context of course. How am I supposed to achieve this, since if I invoke a stateful session bean from my singleton I'll always get the same bean, correct? The only solution I see is to use threads created and launched from my singleton, but this is not permitted by the EJB specification...
Thanks for your help !
I don't think you want a stateful session bean here. The point of a stateful bean is that it maintains state in the scope of a session, which is a relationship with a particular client. In your case, there isn't one download per client (are there even any clients?), which means that this is not an appropriate scope.
If you just want multiple threads, use a stateless bean with an @Asynchronous method. You would probably have to handle status updates using a callback to the singleton.
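A rough sketch of that idea (the bean and method names are made up, and the download body is a placeholder): each call to the @Asynchronous method runs on a container-managed thread, so the singleton can start several downloads in parallel without spawning threads itself:

import java.util.concurrent.Future;
import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.EJB;
import javax.ejb.Singleton;
import javax.ejb.Stateless;

@Stateless
public class DownloaderBean {
    @Asynchronous
    public Future<String> download(String url) {
        // placeholder for the real download logic
        return new AsyncResult<String>("finished: " + url);
    }
}

// In a separate file:
@Singleton
public class MessageReaderBean {
    @EJB
    private DownloaderBean downloader;

    public void onMessage(String url) {
        Future<String> status = downloader.download(url);  // returns immediately
        // poll 'status', or have DownloaderBean call back into this singleton with progress updates
    }
}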
Why do you need a singleton bean here? Isn't a stateful session bean alone good enough? You want simultaneous downloads and you want statefulness, so why use a singleton? Can you explain a little bit more?

EJB pooling vs Spring: how to manage work load in spring?

When an EJB application receives several requests (a work load), it can manage this work load simply by POOLING the EJBs: once every EJB instance is in use by a thread, the next threads have to wait in a queue until some EJB finishes its work (avoiding overloading and efficiency degradation of the system).
Spring uses stateless singletons (no pooling at all) that are used by an "out of control" number of threads.
Is there a way to control how the work load is delivered (equivalent to EJB instance pooling)?
Thank you!
In the case of a web app, the servlet container has a pool of threads that determines how many incoming HTTP requests it can handle simultaneously. In the case of a message-driven POJO, the JMS configuration defines a similar thread pool handling incoming JMS messages. Each of these threads would then access the Spring beans.
Googling around for RMI threading, it looks like there is no way to configure thread pooling for RMI: each RMI client is allocated a thread. In this case you could use Spring's task executor framework to do the pooling. Using <task:executor id="executor" pool-size="10"/> in your context config will set up an executor with 10 threads. Then annotate the methods of your Spring bean that will be handling the work with @Async.
Using the Spring task executor you can leave the servlet and JMS pool configuration alone and configure the pool for your specific work in one place.
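For example (assuming <task:annotation-driven executor="executor"/> is also declared so the annotation is processed; the bean and method names below are made up), the per-request work would look roughly like this:

import java.util.concurrent.Future;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.AsyncResult;
import org.springframework.stereotype.Service;

@Service
public class WorkService {

    @Async  // runs on the 10-thread executor, so at most 10 jobs execute at once; the rest queue up
    public Future<String> handle(String job) {
        // placeholder for the real work
        return new AsyncResult<String>("done: " + job);
    }
}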
To achieve a behaviour similar to the EJB pooling, you could define your own custom scope. Have a look at SimpleThreadScope and the example referenced from this class' javadoc.
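A possible registration sketch (the scope name "thread" and the config class are made up), using CustomScopeConfigurer to install SimpleThreadScope:

import java.util.Collections;
import org.springframework.beans.factory.config.CustomScopeConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.support.SimpleThreadScope;

@Configuration
public class ThreadScopeConfig {

    @Bean
    public static CustomScopeConfigurer threadScope() {
        CustomScopeConfigurer configurer = new CustomScopeConfigurer();
        // Beans declared with scope = "thread" now get one instance per thread,
        // roughly mimicking EJB's one-instance-per-thread model.
        configurer.setScopes(
                Collections.<String, Object>singletonMap("thread", new SimpleThreadScope()));
        return configurer;
    }
}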
The difference between Spring and EJB is that Spring allows multiple threads on a single instance of a bean, while in EJB you have only one thread per bean instance (at any point in time).
So you do not need any pooling in Spring for this. But on the other hand, you need to take care to implement your beans in a thread-safe way.
From the comments:
Yes I need it if I want to limit the number of threads that can use my beans simultaneously
One (maybe not the best) way to handle this is to implement the application in normal Spring style (no limits), and then have a "front controller" that accepts the client requests. But instead of invoking the service directly, it invokes the service asynchronously (@Async). You may use some kind of async proxy instead of making the service itself asynchronous:
class Controller { ...
    // Blocks until the async call finishes, so callers are throttled by the size of the async pool.
    Object doStuff() throws Exception { return asyncProxy.doStuffAsync().get(); }
}
class AsyncProxy { ...
    @Async
    Future<Object> doStuffAsync() { return new AsyncResult<Object>(service.doStuff()); }
}
class Service { ...
    Object doStuff() { return new Object(); }
}
Then you only need to enable Spring's async support, and there you can configure the pool used for the threads.
In this case I would use some kind of front controller that starts a new async invocation of the service, as sketched above.
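For the pool itself, a minimal Java-config sketch (the class and bean names are made up; the equivalent <task:executor>/<task:annotation-driven> XML works just as well): with a bounded pool, at most 10 service invocations run at a time, which is the throttling effect the front controller relies on:

import java.util.concurrent.Executor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class AsyncConfig {

    @Bean
    public Executor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(10);     // at most 10 concurrent @Async invocations
        executor.setMaxPoolSize(10);
        executor.setQueueCapacity(100);   // further requests wait here
        executor.initialize();
        return executor;
    }
}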

Prevent thread blocking in Tomcat

I have a Java servlet that acts as a facade to other web services deployed on the same Tomcat instance. My wrapper servlet creates N more threads, each of which invokes a web service, collates the response and sends it back to the client. The web services are all deployed on the same Tomcat instance as different applications.
I am seeing thread blocking on this facade wrapper service after a few hours of deployment, which brings down the Tomcat instance. All blocked threads are endpoints of this facade web service (like http://domain/appContext/facadeService).
Is there a way to control such thread blocking, caused by starvation of the threads that actually do the processing? What are the best practices to prevent such deadlocks?
The common solution to this problem is to use the Executor framework. You need to express each web service call as a Callable and pass it to the executor either on its own, or as a Collection<Callable> (see the Javadoc for the complete list of options).
You have two choices for controlling the time. The first is to use the parameters of an appropriate ExecutorService method (such as invokeAll) where you specify the maximum web service timeout. The other option is to get the result (which is expressed as a Future<T>) and use .get(long, TimeUnit) to specify the maximum amount of time you are willing to wait for a result.
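A rough sketch along those lines (the pool size, the 5-second timeout and callService() are placeholders for whatever fits your facade):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class FacadeClient {

    private final ExecutorService pool = Executors.newFixedThreadPool(10);    // bounded worker pool

    public List<String> collate(List<String> serviceUrls) throws InterruptedException {
        List<Future<String>> futures = new ArrayList<Future<String>>();
        for (final String url : serviceUrls) {
            futures.add(pool.submit(new Callable<String>() {
                public String call() {
                    return callService(url);                                   // the actual web service call
                }
            }));
        }
        List<String> responses = new ArrayList<String>();
        for (Future<String> future : futures) {
            try {
                responses.add(future.get(5, TimeUnit.SECONDS));                // cap the wait per call
            } catch (ExecutionException e) {
                // skip calls that failed
            } catch (TimeoutException e) {
                future.cancel(true);                                           // stop waiting for slow calls
            }
        }
        return responses;
    }

    private String callService(String url) {
        return "response from " + url;                                         // placeholder
    }
}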
