Are there any performance limitations when using IBM's asynchbeans?
My app's JVM core dumps are showing numerous occurrences of orphaned threads. I'm currently using native, unmanaged JDK threads. Is it worth changing over to managed threads?
From my perspective, asynchbeans are a workaround for creating threads inside the WebSphere J2EE server. So far so good: WebSphere lets you create pools of "worker" threads, thereby controlling the maximum number of threads, a typical J2EE scalability concern.
I had some problems using asynchbeans inside WebSphere on "unmanaged" threads (hacked callbacks from a JMS listener via the "outlawed" setMessageListener). I was "asking for it" by not using MDBs in the first place, but I have requirements that do not fit the MDB model.
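For what it's worth, here is a minimal sketch of the managed-thread alternative, assuming WebSphere's com.ibm.websphere.asynchbeans API and a WorkManager resource reference bound at wm/default (the JNDI name is an assumption; yours may differ):

    // Sketch only: assumes a WorkManager configured in WebSphere and a
    // resource-ref named wm/default in the deployment descriptor.
    import javax.naming.InitialContext;
    import com.ibm.websphere.asynchbeans.Work;
    import com.ibm.websphere.asynchbeans.WorkManager;

    public class PollingWork implements Work {
        private volatile boolean released;

        public void run() {
            while (!released) {
                // long-running task runs on a container-managed thread
            }
        }

        // Called by the container at shutdown, so the thread is not orphaned.
        public void release() {
            released = true;
        }

        public static void submit() throws Exception {
            WorkManager wm = (WorkManager)
                    new InitialContext().lookup("java:comp/env/wm/default");
            wm.startWork(new PollingWork());
        }
    }

Because the container knows about these threads, it can quiesce them cleanly at shutdown instead of leaving them orphaned in a core dump.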
I have been using Tomcat for a long time, and I am frustrated with the lack of control over threads. Some threads may eat up all the resources of the server, and that can't be controlled in Tomcat.
I'm exploring more advanced Java EE containers like WebSphere, WebLogic, and JBoss. Do they allow controlling or changing the priority of a thread, or a group of threads, even if only manually? Furthermore, would they allow controlling the amount of CPU used by a thread?
Thanks,
Luis
Read the following articles on WebLogic Server:
Thread Management
WebLogic Server Performance and Tuning
This question is rather broad.
There are threads created by the container and there are threads created by applications. Tomcat thread priorities can be changed statically through configuration.
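For example, a hypothetical server.xml fragment (the Executor element and its threadPriority attribute are standard Tomcat configuration; the names and sizes here are made up):

    <!-- Shared pool whose priority applies to all request threads. -->
    <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
              maxThreads="150" minSpareThreads="4" threadPriority="5"/>
    <Connector port="8080" protocol="HTTP/1.1" executor="tomcatThreadPool"/>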
However, you have no control over threads created by applications unless they make use of the javax.enterprise.concurrent facilities added in Java EE 7. Different implementations may or may not provide a way of dynamically reconfiguring threads created this way.
Some Java EE implementations prior to 7 may provide vendor-dependent APIs for applications to get access to concurrent capabilities.
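As a concrete illustration of the Java EE 7 route, here is a minimal sketch using javax.enterprise.concurrent.ManagedExecutorService; the servlet name and URL are hypothetical:

    import java.io.IOException;
    import javax.annotation.Resource;
    import javax.enterprise.concurrent.ManagedExecutorService;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet("/report")  // hypothetical endpoint, for illustration only
    public class ReportServlet extends HttpServlet {

        // Injected by the container; the pool behind it is configured,
        // monitored, and shut down by the server, not by application code.
        @Resource
        private ManagedExecutorService executor;

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            executor.submit(() -> {
                // long-running work on a container-managed thread
            });
            resp.getWriter().println("job submitted");
        }
    }

Since the container owns the pool, the vendor's admin tooling can resize or monitor it; whether priorities can be changed dynamically remains implementation-dependent.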
Hazelcast creates a number of threads. If my application is deployed in a J2EE environment, creating threads is discouraged. Is there a way to make Hazelcast use application-server-managed threads?
There is no way to do that and there are no plans to make that happen.
Why do you think that app servers don't like you creating threads? I have done so on quite a few and never had problems.
I have been reading a lot about Node.js and I have a question.
Node.js is single-threaded, but isn't IIS, for example, also single-threaded?
If my server's CPU has only one core, shouldn't everything be single-threaded as well? I have read that the number of threads is related to the number of CPU cores.
I ask because I have also read that IIS uses one thread per connection. Is that possible?
Thanks for reading
In a general application, the number of threads is not bound by the number of cores. IIS may have an arbitrary number of threads on a single-core CPU.
JavaScript, Flash, and most other in-browser applications are single-threaded by the design of the web browser. This does not extend to the server, though.
In Java web applications it is typical to spawn threads to process web requests. I am referring to application code, not the container's threads that accept incoming client connections.
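To make the first point concrete, here is a minimal Java sketch (the numbers are arbitrary): nothing stops you from configuring far more threads than cores; on one core they are simply time-sliced rather than run in parallel.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ManyThreads {
        public static void main(String[] args) {
            int cores = Runtime.getRuntime().availableProcessors();
            // 200 threads are perfectly legal on a single-core machine;
            // the OS time-slices them, they just don't run in parallel.
            ExecutorService pool = Executors.newFixedThreadPool(200);
            System.out.println("cores=" + cores + ", threads=200");
            pool.shutdown();
        }
    }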
In scripting languages such as Perl or Python, my understanding is that it is more common to use the multiprocessing paradigm (forking processes) than the multithreaded one (spawning threads).
I personally find forking processes instead of threads in web server application code "weird" and heavier.
Am I correct about this? Is forking processes usual during web request processing in these frameworks or not?
Perl threads are really heavy (see How do I reduce memory consumption when using many threads in Perl?), and from what I read, threads in Python are hampered by the global interpreter lock. Threads in Java seem to be more lightweight, but not as lightweight as native OS threads on Linux.
If you want to do heavy networking in Perl you don't use threads but event-based programming, e.g. with AnyEvent or POE, similar to Python, which has the Twisted framework. There are several web servers based on these frameworks. Java also has the NIO framework, and even in C, modern fast web servers like nginx use event-based programming instead of threads or processes.
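To illustrate the event-based model, here is a minimal single-threaded echo server using Java's NIO Selector (a sketch, not production code; port 8080 is arbitrary):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.*;
    import java.util.Iterator;

    public class NioEchoServer {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(8080));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {
                selector.select();  // one thread blocks for all connections
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(1024);
                        if (client.read(buf) == -1) { client.close(); continue; }
                        buf.flip();
                        client.write(buf);  // echo back (may be partial)
                    }
                }
            }
        }
    }

One thread multiplexes all connections, which is the same idea that nginx, AnyEvent, and Twisted are built on.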
I don't know of any common web server that forks to process each request. If they fork at all, they use a pre-forking model: they fork a number of worker processes up front (or spawn worker threads, if they use threads instead of processes), and when a new request comes in it is handled by one of the existing workers. This has much less overhead than a fork-on-request model, which only very simple servers use. Servers with event-based processing might fork too, but usually only to make effective use of multiple CPUs (e.g. one process per CPU).
With pre-forking web servers, the web application usually does not fork at all but just uses the current process. Event-based web servers often handle only static content internally (and fast); for the slower dynamic content they connect via interfaces like FastCGI to other processes, which are often pre-forking. This saves resources, because with normal web pages most requests are for static content.
There might still be a reason to fork within a web application: if you need to do some work in the background (like resizing uploaded images) after the page is already finished and the content has been sent to the user. But even in this case it scales much better to have a dedicated process/thread doing this work and simply to feed it with tasks, as sketched below.
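A minimal Java sketch of that pattern, with a hypothetical resize() task: requests enqueue work cheaply, and one long-lived worker drains the queue.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class BackgroundResizer {
        private final BlockingQueue<String> uploads = new LinkedBlockingQueue<>();

        public BackgroundResizer() {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        String path = uploads.take();  // blocks until work arrives
                        resize(path);                  // hypothetical slow task
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "image-resizer");
            worker.setDaemon(true);
            worker.start();
        }

        // Called from the request path: cheap and returns immediately.
        public void submit(String uploadedImagePath) {
            uploads.add(uploadedImagePath);
        }

        private void resize(String path) { /* ... */ }
    }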
As for the performance of creating a thread vs. a process: forking a process is inexpensive on Unix/Linux (but not on Windows), because it simply clones the existing process structures and marks all memory pages (initially, all of them) as copy-on-write. Only when the new process writes to memory do the changed pages actually get copied (that's the expensive part). The cost of creating threads differs vastly between programming languages and operating systems, and is not necessarily lower than the cost of forking a new process.
I'm trying to execute subprocesses from within my application server (GlassFish 3.1.2).
For this I discovered the Apache Commons Exec library. The problem is that this library creates its own threads, which should not be done on an application server, because the server is not aware of those threads.
What could be a solution to this problem?
Would it be possible to create a messaging component written in Java SE that consumes messages containing information about pending jobs, and register it with the application server?
The application server would then not have to deal with runtime exceptions and threads, but would just consume messages containing the result or an exception.
Do you have any better ideas?
You could use one of:
MDB (as pointed out by duffymo),
Servlet 3.0 asynchronous processing,
Asynchronous EJB invocation (see the sketch below).
Effectively, any of these should give you functionality similar to plain subprocesses.
Using a separate Java SE component that communicates with Java EE just to avoid creating threads on your own sounds like overkill. Read about the solutions mentioned above and see whether any of them fits your needs.
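For instance, a minimal sketch of the asynchronous EJB option; SubprocessBean and runSubprocess are hypothetical names, and the command handling is deliberately naive:

    import java.util.concurrent.Future;
    import javax.ejb.AsyncResult;
    import javax.ejb.Asynchronous;
    import javax.ejb.Stateless;

    @Stateless
    public class SubprocessBean {

        @Asynchronous  // the container runs this on a managed thread
        public Future<Integer> runSubprocess(String command) throws Exception {
            Process p = new ProcessBuilder(command.split(" ")).start();
            return new AsyncResult<>(p.waitFor());  // exit code, once done
        }
    }

The caller gets a Future back immediately, and the thread that waits on the subprocess belongs to the container, not to your code.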
Message-driven beans were designed for asynchronous processing; they could be a solution to your problem. You can create a separate listener thread pool sized to handle the traffic.
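A minimal sketch of such an MDB; the queue name jms/JobQueue and the activation properties shown are assumptions and depend on your server's configuration:

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination",
                                  propertyValue = "jms/JobQueue")
    })
    public class JobMdb implements MessageListener {

        public void onMessage(Message message) {
            try {
                String command = ((TextMessage) message).getText();
                // Each message is handled on a thread from the MDB pool,
                // which the server sizes and monitors for you.
                new ProcessBuilder(command.split(" ")).start().waitFor();
            } catch (Exception e) {
                throw new RuntimeException(e);  // triggers redelivery per config
            }
        }
    }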