When I use Java VisualVM to monitor my JBoss application, it shows:
Live Threads as: 155
Daemon Threads as: 135
When I use the JMX Web Console of JBoss, it shows:
Current Busy Threads as: 40
Current Thread Count as: 60
Why is there such a large discrepancy between what Java VisualVM reports and what the JMX Web Console shows? (How are live threads different from busy threads?)
A live thread is one that exists and is not Terminated. (See Thread.State)
A busy thread is one that is actually working or, more precisely, Runnable.
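To make the distinction concrete, here is a minimal sketch using the standard java.lang.management API (nothing JBoss- or VisualVM-specific; the class and variable names are just for illustration) that counts both figures for the local JVM:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadCounts {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        // "Live" threads: started and not yet terminated (any Thread.State except TERMINATED).
        int live = threads.getThreadCount();
        int daemon = threads.getDaemonThreadCount();

        // "Busy" threads: the subset that is actually RUNNABLE right now.
        int busy = 0;
        for (ThreadInfo info : threads.getThreadInfo(threads.getAllThreadIds())) {
            if (info != null && info.getThreadState() == Thread.State.RUNNABLE) {
                busy++;
            }
        }
        System.out.printf("live=%d, daemon=%d, busy(runnable)=%d%n", live, daemon, busy);
    }
}

The live count is always at least as large as the busy count, so the two consoles are not necessarily measuring the same thing even on the same JVM.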
JBoss's Web Console tends to report fewer threads because it is very non-invasive. In other words, it does not have to spawn additional threads just to render you a web page. It's already running a web server and it already allocated threads to handle web requests before you went into JMX Console.
VisualVM, on the other hand, starts up several threads to support JMX remoting (usually RMI), which comes with a little extra baggage. You might see extra threads like:
RMI TCP Connection(867)
RMI TCP Connection(868)
RMI TCP Connection(869)
JMX server connection timeout
Having said that, the discrepancy you are reporting is way out of line and makes me think that you're not looking at the same JVM.
The JMX Console is obvious :), so I would guess your VisualVM is connected elsewhere. See if you can correlate similar thread names (using the jboss.system:type=ServerInfo MBean's listThreadDump operation, as in the sketch below), or browse the MBeans in VisualVM and inspect the JBoss MBeans. MBeans like the following are good ones to look at because they indicate a binding to a socket, so they could not have the same values if they were not the same JVM process:
jboss.web:name=HttpRequest1,type=RequestProcessor,worker=http-0.0.0.0-18080
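If it helps, here is a minimal remote-client sketch for pulling that thread dump programmatically; the service URL, port, and the absence of credentials are placeholders, so point it at whatever address VisualVM is actually connected to:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ThreadDumpCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder address: use the same host/port that VisualVM is connected to.
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1090/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Same operation the JMX Console exposes; compare the thread names it returns
            // with what VisualVM shows to confirm both tools are on the same JVM.
            ObjectName serverInfo = new ObjectName("jboss.system:type=ServerInfo");
            Object dump = conn.invoke(serverInfo, "listThreadDump", new Object[0], new String[0]);
            System.out.println(dump);
        } finally {
            connector.close();
        }
    }
}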
Of course, the other check is this: if you start VisualVM first, leave it running, then go to the JMX Console and don't see a comparable number of threads, you're definitely looking at a different JVM.
Cheers.
//Nicholas
I have written an application (Qt/C++) that creates a lot of concurrent worker threads to accomplish its task, utilizing QThreadPool (from the Qt framework). It has worked flawlessly running on a dedicated server/hardware.
A copy of this application is now running in a virtual machine (RHEL 7), and performance has suffered significantly in that the queue (from the thread pool) is being exercised quite extensively. This has resulted in things getting backed up a bit. This, despite having more cores available to the application through this VM version than the dedicated, non-virtualized server.
Today, I did some troubleshooting with the top -H -p <pid> command, and found that there were 16 total llvmpipe-# threads running all at once, apparently for software rendering of my application's very simple graphical display. It looks to me like the presence of so many of these rendering threads has left limited resources available for my actual application's threads to be concurrently running. Meaning, my worker threads are yielding/taking a back seat to these.
As this is a small/simple GUI running on a server, I don't care to dedicate so many threads to software rendering of its display. I read some Mesa3D documentation about utilizing the LP_NUM_THREADS environment variable, to limit its use. I set it to LP_NUM_THREADS=4, and as a result I seem to have effectively opened up 12 cores for my application to now use for its worker threads.
Does this sound reasonable, or will I pay some sort of other consequence for doing this?
Recently my website stopped working due to the following problem. After restarting Tomcat the issue was solved, but I want to know why and when Tomcat reaches its maximum number of threads.
The problem was as follows:
Maximum number of threads (150) created for connector with address null and port 443
And suddenly my website stopped working.
A few pointers:
Connectors are defined in the server.xml file in the $(TOMCAT_HOME)/conf directory. You can check the settings in this file and compare them with the default connector setup.
Usually the number of busy threads tracks the number of concurrent incoming requests to the server. You can check whether some script is triggering an unusual volume of such requests.
You can also check whether the request threads for the webapps are completing their processing normally and being released for other requests (see the sketch after these pointers).
If you are using an IDE such as Eclipse to start Tomcat, you will be able to see which threads are being created when running in debug mode.
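If you want to watch the connector's pool live, the sketch below polls the Tomcat ThreadPool MBeans over JMX (the service URL is a placeholder and requires JMX remoting to be enabled; the Catalina domain assumes standalone Tomcat, while JBoss-embedded Tomcat registers these under jboss.web instead). If currentThreadsBusy keeps climbing toward maxThreads without dropping back, request threads are not being released:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ConnectorThreadWatch {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: requires Tomcat to be started with com.sun.management.jmxremote enabled.
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();

            // One ThreadPool MBean exists per connector defined in server.xml (e.g. the port 443 one).
            Set<ObjectName> pools =
                    conn.queryNames(new ObjectName("Catalina:type=ThreadPool,*"), null);
            for (ObjectName pool : pools) {
                Object max = conn.getAttribute(pool, "maxThreads");
                Object count = conn.getAttribute(pool, "currentThreadCount");
                Object busy = conn.getAttribute(pool, "currentThreadsBusy");
                System.out.printf("%s max=%s count=%s busy=%s%n", pool, max, count, busy);
            }
        } finally {
            connector.close();
        }
    }
}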
Hope this helps.
I have a web application that simply acts as a front controller, using Spring Boot to call other remote REST services, where I am combining Spring's DeferredResult with Observables subscribed on Schedulers.computation().
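For context, a stripped-down version of that controller looks roughly like this (the endpoint, names, and the simulated remote call are illustrative only; RxJava 1.x):

import java.util.concurrent.TimeUnit;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

import rx.Observable;
import rx.schedulers.Schedulers;

@RestController
public class FrontController {

    @RequestMapping(value = "/aggregate", method = RequestMethod.GET)
    public DeferredResult<String> aggregate() {
        DeferredResult<String> result = new DeferredResult<>();

        // The remote REST call is simulated with a delayed Observable;
        // in the real application this would call the downstream services.
        Observable.just("remote response")
                .delay(100, TimeUnit.MILLISECONDS)
                .subscribeOn(Schedulers.computation())   // the scheduler under test
                .subscribe(result::setResult, result::setErrorResult);

        // Returning immediately frees the Tomcat http-nio-8080-exec thread;
        // the response is written later, when the Observable emits.
        return result;
    }
}

Swapping Schedulers.computation() for Schedulers.io(), Schedulers.newThread(), or Schedulers.immediate() in the subscribeOn call is how we produced the three test cases below.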
We are also using JMeter to stress-test the web application, and we have noticed that requests start to fail with a 500 status, no response data, and no logs anywhere once the number of concurrent threads scheduled in JMeter goes above 25, which is obviously a very "manageable" number for Tomcat.
Digging into the issue with VisualVM to analyze how the threads were being created and used, we realized that the use of rx.Schedulers was somehow affecting the number of threads created by Tomcat NIO. Let me summarize our tests based on the rx.Scheduler used, with a JMeter test of 100 users (threads):
SCHEDULERS.COMPUTATION()
As we're using Schedulers.computation() and my local machine has 4 available processors, 4 event-loop threads are created by RxJava (named RxComputationThreadPool-XXX) and ONLY 10 Tomcat threads (named http-nio-8080-exec-XXX), as per VisualVM:
http://screencast.com/t/7C9La6K4Kt6
SCHEDULERS.IO() / SCHEDULERS.NEWTHREAD()
This scheduler seems to basically act like Schedulers.newThread(), so a new thread is always created when required. Again, we can see lots of threads created by RxJava (named RxNewThreadScheduler-XXX), but ONLY 10 for Tomcat (named http-nio-8080-exec-XXX), as per VisualVM:
http://screencast.com/t/K7VWhkxci09o
SCHEDULERS.IMMEDIATE() / NO SCHEDULER
If we disable the creation of new threads in RxJava, either by setting the Schedulers.immediate() or removing it from the Observable, then we see the expected behaviour from Tomcat's threads, i.e. 100 http-nio-8080-exec corresponding to the number of users defined for the JMeter test:
http://screencast.com/t/n9TLVZGJ
Therefore, based on our testing, it's clear to us that the combination of RxJava with Schedulers and Tomcat 8 is somehow constraining the number of threads created by Tomcat... And we have no idea why or how this is happening.
Any help would be much appreciated as this is blocking our development so far.
Thanks in advance.
I'm trying to stress test a server with JMeter. I followed the manual and successfully created the tests (the tests run OK and the responses are correct).
However, even as I keep increasing the number of threads, the server never fails, yet I keep reading that there must be limits. So what am I doing wrong?
My CPU runs at roughly 5% when I'm not running JMeter. Running 3000 threads, I see the number of threads increase by 3000 and CPU usage goes to roughly 15%. JMeter also never complains that something went wrong.
My JMeter configuration is:
Number of threads: 3000
Ramp-Up Period: 30
Loop Count: Forever (I let it run for over an hour and still nothing went wrong)
The bottleneck now is my internet connection, which simply can't handle this load and maxes out at 2.1 Mbps. Is this causing the problem? It increases my latency from 10 ms per thread to over 5000 ms per thread, but the threads are still running.
Assuming you have confirmed that you definitely aren't getting back any errors (e.g. using a results table listener, or logging/displaying only errors using a results graph listener) and your internet connection is running at capacity then yes, it does sound like your internet connection is the bottleneck. It doesn't sound like your server is being stressed at all.
If you can easily make use of other machines (e.g. servers in the same location as the server you are testing), you could try using JMeter remote (distributed) testing to sidestep the limitations of your internet connection. See http://jmeter.apache.org/usermanual/remote-test.html for details.
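As a rough sketch of what that looks like in practice (the host names are placeholders, and jmeter-server must already be running on each remote load generator):

# on each remote load generator (same JMeter version, RMI ports reachable)
./jmeter-server

# on the controller machine: run the plan in non-GUI mode against the remote hosts
./jmeter -n -t stress-test.jmx -R loadgen1.example.com,loadgen2.example.com -l results.jtl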
Alternatively, if it's easy (e.g. if you're using VM's in a cloud and can easily spin one up with your software on), you could try using the least-powerful server you can instead and stress testing that to see if you can make it struggle even with your internet connection (just as a sanity check).
If this doesn't help, more details on your server (hardware specifications, web server software and thread pool settings, language) and the site/pages you are testing (mostly static or dynamic? large requests/responses?) would be useful. I've certainly managed to make lower-powered machines (e.g. EC2 m1.small) struggle using JMeter over a 2Mbps connection, but it depends on the site you're testing.
We have noticed the following problem: whenever our Tomcat JVM performs a full GC, the requests to create a connection between the LB and Tomcat fail. This is very problematic, since these requests never get the chance to reach the application server.
This problem occurred even when we pointed one Tomcat directly at the other without any LB in between.
Is there any setting in the JVM / Tomcat / Linux that will make the HTTP connection wait until the GC ends, so that the application JVM can still receive the request?
We are using Java 6, Tomcat 7, and Ubuntu Linux.
Thanks,
Yosi
Have you looked into using the concurrent garbage collector via the '-XX:+UseConcMarkSweepGC' option? It performs most of the garbage collection work in the background, so that there aren't nearly as many (if any) "stop the world" full GCs.
You may need to enable concurrent garbage collection as described in http://www.oracle.com/technetwork/java/gc-tuning-5-138395.html
-XX:+UseConcMarkSweepGC
Also try other GC configurations; an example set of flags is shown below.
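For example, a starting set of flags along those lines might look like this (the values are illustrative and need tuning for your heap; on Tomcat they would typically go into CATALINA_OPTS or JAVA_OPTS):

-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
-XX:+CMSParallelRemarkEnabled
-XX:CMSInitiatingOccupancyFraction=70
-XX:+UseCMSInitiatingOccupancyOnly
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log

The GC logging flags at the end are just there so you can measure pause times before and after the change.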