I use Mule 2.2.1 with the following HTTP inbound receiver configuration:
<http:connector name="abc.connector.http">
    <receiver-threading-profile maxThreadsActive="500"
                                maxThreadsIdle="50" threadTTL="60000"
                                poolExhaustedAction="WAIT" maxBufferSize="100" />
</http:connector>
On the production server, the JVM frequently crashes. The crash dump ("hs_err_pid.log") contains threads like: 0x07990c00 JavaThread "ActiveMQ Session Task" [_thread_blocked, id=69807, stack(0x08770000,0x087b0000)].
Every crash dump shows around 2100 to 2300 threads.
My questions are:
Why does it show _thread_blocked?
Even when there is no load on the server, the thread count does not drop below 2000. Why is that? I use jstack -l PID to check the number of running threads and prstat | grep PID to monitor the NLWP (number of lightweight processes) on Solaris. It gives results like:
17725 application_pprd 3409M 2593M sleep 59 0 0:10:51 0.1% java/2375
How can I remove these unused/inactive threads from the pool to avoid the crash?
How can I increase the NLWP limit for the Java process?
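For comparison, here is a minimal java.util.concurrent sketch (not Mule's actual pool implementation, just the analogous JDK mechanism) of how a pool with the equivalent settings reclaims idle threads after the TTL:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class IdleReclaimDemo {
    public static void main(String[] args) {
        // 50 core threads, up to 500 total, idle threads live 60 s;
        // this mirrors maxThreadsIdle / maxThreadsActive / threadTTL above.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                50, 500, 60000L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(100));
        // Without this call, the 50 core threads never time out, which is
        // one reason an idle pool keeps a floor of live threads.
        pool.allowCoreThreadTimeOut(true);
    }
}

Note, though, that a 500-thread receiver pool alone cannot account for 2000+ threads; the dump line above shows ActiveMQ session threads, so pools owned by other components are likely contributing as well.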
I have some questions about thread names in the Play Framework.
I've been developing a REST API service on Play for about 5 months.
The app simply accesses MySQL and sends JSON-formatted data back to clients.
I already understand the pitfalls of blocking I/O, so I created a dedicated
thread pool for blocking I/O and use it in every Future block that blocks
thread execution.
The definition of the thread pool is as follows.
akka {
  actor-system = "myActorSystem"
  blocking-io-dispatcher {
    type = Dispatcher
    executor = "thread-pool-executor"
    thread-pool-executor {
      fixed-pool-size = 64
    }
    throughput = 10
  }
}
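For reference, the dispatcher is looked up and used roughly like this (a minimal Java sketch; the dispatcher path assumes the block above sits under akka, and queryMySql is a hypothetical stand-in for the blocking JDBC call):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import akka.actor.ActorSystem;
import akka.dispatch.MessageDispatcher;

public class BlockingDbAccess {
    private final MessageDispatcher blockingDispatcher;

    public BlockingDbAccess(ActorSystem system) {
        // Resolve the pool defined above; MessageDispatcher implements
        // java.util.concurrent.Executor, so it can back a CompletableFuture.
        this.blockingDispatcher = system.dispatchers().lookup("akka.blocking-io-dispatcher");
    }

    public CompletionStage<String> fetchRow() {
        return CompletableFuture.supplyAsync(this::queryMySql, blockingDispatcher);
    }

    private String queryMySql() {
        return "row"; // placeholder for real blocking work
    }
}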
I checked the log file and confirmed that all non-blocking logic runs on
threads named 'application-akka.actor.default-dispatcher-#', where # is an
integer, and that all blocking logic runs on threads named
'application-blocking-io-dispatcher-#'.
Then I checked all the thread names and counts using JConsole.
The number of threads named 'application-akka.actor.default-dispatcher-#' is
always under 13, and the count of 'application-blocking-io-dispatcher-#'
threads is always under 30.
However, the total thread count of the JVM running my app increases
constantly, to more than 10,000.
Very many of these threads have names starting with 'default-scheduler-' or
'default-akka.actor.default-dispatcher-'.
My questions are:
a. What's the difference between 'application-akka.actor.default-dispatcher-'
and 'default-akka.actor.default-dispatcher-'?
b. Is there any reason the thread count keeps increasing?
I want to solve this issue.
Here's my environment.
OS : Windows 10 Pro. 64bit
CPU : Intel(R) Core i7 @ 3.5GHz
RAM : 64GB
JVM : 1.8.0_162 64bit
PlayFramework : 2.6
RDBMS : MySQL 5.7.21
Any suggestions will be greatly appreciated. Thanks in advance.
Finally I solved the problem. There was a bug in my code that never shut down
instances of Akka's Materializer. After fixing it, the thread count in the JVM
stays stable.
Thanks.
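For anyone hitting the same symptom, the leak and the fix look roughly like this (a minimal Java sketch; the actual streams are elided):

import akka.actor.ActorSystem;
import akka.stream.ActorMaterializer;

public class MaterializerLifecycle {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("myActorSystem");

        // Leak pattern: creating a fresh materializer per request and never
        // releasing it. Fix: reuse one long-lived materializer, or call
        // shutdown() once its streams are finished.
        ActorMaterializer mat = ActorMaterializer.create(system);
        // ... run streams with mat ...
        mat.shutdown(); // releases the resources the materializer holds

        system.terminate();
    }
}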
I have a similar issue to "Synchronizing timer hangs with simple setup", but with the Precise Throughput Timer, which is supposed to replace the Synchronizing Timer:
Certain cases might be solved via Synchronizing Timer, however Precise Throughput Timer has native way to issue requests in packs. This behavior is disabled by default, and it is controlled with "Batched departures" settings
Number of threads in the batch (threads). Specifies the number of samples in a batch. Note the overall number of samples will still be in line with Target Throughput
Delay between threads in the batch (ms). For instance, if set to 42, and the batch size is 3, then threads will depart at x, x+42ms, x+84ms
I'm setting the number of threads to 10, ramp-up to 1, and loop count to 1.
I'm adding only one HTTP Request (response time under 1 second), preceded by a Test Action with a Precise Throughput Timer as a child, with the following setup:
Threads get stuck after 5 threads succeed:
EDIT 1
Following @Dimitri T's solution:
I changed Duration to 100 and added the line to the logging configuration, and got 5 errors:
2018-03-12 15:43:42,330 INFO o.a.j.t.JMeterThread: Stopping Thread: org.apache.jorphan.util.JMeterStopThreadException: The thread is scheduled to stop in -99886 ms and the throughput timer generates a delay of 20004077. JMeter (as of 4.0) does not support interrupting of sleeping threads, thus terminating the thread manually.
EDIT 2
Following @Dimitri T's solution, setting "Loop Count" to -1 executed all 10 threads, but if I change the number of threads in the batch from 2 to 5, it executes only 3 threads and stops:
INFO o.a.j.t.JMeterThread: Stopping Thread: org.apache.jorphan.util.JMeterStopThreadException: The thread is scheduled to stop in -89233 ms and the throughput timer generates a delay of 19999450. JMeter (as of 4.0) does not support interrupting of sleeping threads, thus terminating the thread manually.
Set "Duration (seconds)" in your Thread Group to something non-zero (i.e. to 100)
Depending on what you're trying to achieve you might also want to set "Loop Count" to -1
You can also add the following line to the log4j2.xml file:
<Logger name="org.apache.jmeter.timers" level="debug" />
This way you will be able to see what's going on with your timer(s) in the jmeter.log file.
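As for the errors in your edits: the timer computed a delay of roughly 20,000 seconds, far beyond the thread's scheduled stop time, so JMeter terminates the thread rather than sleeping. A small Java sketch of the check the message implies (an assumed simplification for illustration, not JMeter's actual source):

public class TimerStopCheck {
    public static void main(String[] args) {
        // Values taken from the log line above.
        long remainingMs = -99886L;     // time left before the scheduled stop
        long timerDelayMs = 20004077L;  // delay computed by the throughput timer

        // If the delay would sleep past the scheduled end, the sample can
        // never run, so the thread is stopped instead of being slept.
        if (timerDelayMs > remainingMs) {
            System.out.println("Stopping thread: delay " + timerDelayMs
                    + " ms exceeds remaining " + remainingMs + " ms");
        }
    }
}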
I have a multi-threaded (three threads) application on Linux 3.4.0 with the RT7 (realtime) patch. The application needs realtime execution with ~20 ms tolerance. The application runs with realtime behavior for a while (1 min to 50 min), then I find that while one of the threads is doing some processing, a context switch happens, and execution comes back to the thread about 80 to 500 ms later. I need to find out which process takes away the time slice. All my threads together consume ~5% CPU time. Is there any tool to see process execution history with timestamps?
Thanks,
Hakim
Consider using SystemTap. It is a dynamic instrumentation engine inspired by DTrace. It dynamically patches the kernel (so it needs debug information for the kernel).
For example, your task may be achieved with this script:
probe scheduler.cpu_on, scheduler.cpu_off {
    if(pid() == target()) {
        printf("%ld %s\n", gettimeofday_us(), pn());
    }
}
Use the -c option to attach this script to a command, or -x to attach it to a running PID:
root@lkdevel:~# stap -c 'dd if=/dev/zero of=/dev/null count=1' ./schedtrace.stp
...
1423701880670656 scheduler.cpu_on
1423701880673498 scheduler.cpu_off
1423701880674208 scheduler.cpu_on
1423701880689407 scheduler.cpu_off
1423701880689829 scheduler.cpu_on
...
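If SystemTap is not available, a cruder in-process check is to run a watchdog thread that timestamps its own scheduling gaps. A minimal Java sketch of the pattern (the thresholds are illustrative; unlike the SystemTap trace, this cannot tell you which process preempted you):

public class SchedWatchdog {
    public static void main(String[] args) throws InterruptedException {
        final long requestedMs = 1;   // nominal sleep per iteration
        final long toleranceMs = 20;  // flag anything worse than this
        while (true) {
            long t0 = System.nanoTime();
            Thread.sleep(requestedMs);
            long actualMs = (System.nanoTime() - t0) / 1000000L;
            if (actualMs - requestedMs > toleranceMs) {
                // The watchdog was runnable but not scheduled for this long.
                System.out.println("scheduling gap of ~" + actualMs + " ms");
            }
        }
    }
}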
I don't understand the behavior of GlassFish v3.1.2.
I run my Java web application with these GlassFish thread-pool parameters:
Class Name: com.sun.grizzly.http.StatsThreadPool
Max Queue Size: 4096
Max Thread Pool Size: 10
Min Thread Pool Size: 10
Idle Thread Timeout: 900
Then I send many requests to my servlet. The logic of my servlet is like this:
// do some action, then simulate a slow response
try { Thread.sleep(5000); } // sleep() is static; qualifying it with currentThread() is misleading
catch (InterruptedException e) { Thread.currentThread().interrupt(); }
The NetBeans profiler shows these results in the Threads window:
http://s8.postimage.org/5hupqk4ad/profiler.png
It seems that all 10 threads were created, but only 5 can run simultaneously.
Of course, I want to use the maximum number of threads simultaneously.
Can somebody explain this behavior and suggest how to fix it?
Tell me if you need more information.
Thanks
Check your client side; you may have restrictions there (for example, a default limit on concurrent connections per host).
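For example, here is a minimal Java client sketch that drives genuinely concurrent requests, so that the server-side pool rather than the client is the limiting factor (the URL is hypothetical):

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LoadClient {
    public static void main(String[] args) {
        // One client thread per request, so nothing on this side
        // serializes the connections.
        ExecutorService clients = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 10; i++) {
            clients.submit(new Runnable() {
                public void run() {
                    try {
                        HttpURLConnection c = (HttpURLConnection)
                                new URL("http://localhost:8080/myapp/myservlet").openConnection();
                        c.getResponseCode(); // blocks ~5 s while the servlet sleeps
                        c.disconnect();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            });
        }
        clients.shutdown();
    }
}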
I am investigating how to keep my Linux desktop experience smooth and interactive while I run CPU-intensive tasks in the background. Here is the sample program (written in Java) that I am using to simulate CPU load:
public class Spinner {
    public static void main(String[] args)
    {
        for (int i = 0; i < 100; i++) {
            (new Thread(new Runnable() {
                public void run() {
                    while (true);
                }
            })).start();
        }
    }
}
When I run this from the command line, I notice that the interactivity of my desktop applications (e.g. a text editor) drops significantly. I have a dual-core machine, so I am not surprised by this.
To combat this, my first thought was to nice the process with renice -p 20 <pid>. I found, however, that this doesn't have much effect. Instead I have to renice all of the child processes with something like ls /proc/<pid>/task | xargs renice 20 -p, which has a much greater effect.
I am very confused by this, as I would not expect threads to have their own process IDs. Even if they did, I would expect renice to act on the entire process, not just the main thread of the process.
Does anyone have a clear understanding of what is happening here? It appears that each thread is actually a separate process (at least each has a valid PID). I knew that historically Linux worked like this, but I believed NPTL fixed that years ago.
I am testing on RHEL 5.4 (linux kernel 2.6.18).
(As an aside, I notice the same effect if I try to use sched_setscheduler(<pid>, SCHED_BATCH, ..) to solve this interactivity problem. That is, I need to make this call for all the "child" processes I see in /proc/<pid>/task; it is not enough to perform it once on the main program PID.)
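To see this from inside the JVM, here is a small Linux-only Java sketch that lists the per-thread TIDs under /proc (the pid trick via RuntimeMXBean is HotSpot-specific); each entry is something renice or sched_setscheduler would have to touch individually:

import java.io.File;
import java.lang.management.ManagementFactory;

public class ListTids {
    public static void main(String[] args) {
        // HotSpot convention: the runtime MXBean name is "<pid>@<host>".
        String pid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0];
        // On Linux, every thread of this JVM appears as its own TID here.
        for (File task : new File("/proc/" + pid + "/task").listFiles()) {
            System.out.println("TID: " + task.getName());
        }
    }
}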
Thread IDs come from the same namespace as PIDs. This means that each thread is individually addressable by its TID; some system calls apply to the entire process (for example, kill) but others apply only to a single thread.
The scheduler system calls are generally in the latter class, because this allows you to give different threads within a process different scheduler attributes, which is often useful.
As I understand it, on Linux threads and processes are pretty much the same thing; threads just happen to be processes which share the same memory rather than doing fork's copy-on-write thing, and fork(2) and pthread_create(3) are presumably both just layered onto a call to clone(2) with different arguments.
The scheduling stuff is very confusing because, e.g., the pthreads(7) man page starts off by telling you POSIX threads share a common nice value, but then you have to get down to
NPTL still has a few non-conformances with POSIX.1: Threads do not
share a common nice value
to see the whole picture (and I'm sure there are plenty of even less helpful man pages).
I've written GUI apps which spawn multiple compute threads from a main UI thread, and I have always found that the key to keeping the app very responsive is to invoke nice(2) in the compute threads (only); increasing it by 4 or so seems to work well.
Or at least that's what I remembered doing. I just looked at the code for the first time in years and see what I actually did was this:
// Note that this code relies on Linux NPTL's non-Posix-compliant
// thread-specific nice value (although without a suitable replacement
// per-thread priority mechanism it's just as well it's that way).
// TODO: Should check some error codes,
// but it's probably pretty harmless if it fails.
const int current_priority = getpriority(PRIO_PROCESS, 0);
setpriority(PRIO_PROCESS, 0, std::min(19, current_priority + n));
Which is interesting. I probably tried nice(2) and found it did actually apply to the whole process (all threads), which wasn't what I wanted (but maybe you do). But this is going back years now; behaviour might have changed since.
One essential tool when you're playing with this sort of stuff: if you hit 'H' (NB: not 'h') in top(1), it changes from the process view to showing all the threads and the individual thread nice values. E.g. if I run evolvotron -t 4 -n 5 (4 compute threads at nice 5), I see the following (I'm on an old single-core non-HT machine, so there's not actually much point in multiple threads here):
Tasks: 249 total, 5 running, 244 sleeping, 0 stopped, 0 zombie
Cpu(s): 17.5%us, 6.3%sy, 76.2%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 1025264k total, 984316k used, 40948k free, 96136k buffers
Swap: 1646620k total, 0k used, 1646620k free, 388596k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4911 local 25 5 81096 23m 15m R 19.7 2.4 0:04.03 evolvotron
4912 local 25 5 81096 23m 15m R 19.7 2.4 0:04.20 evolvotron
4913 local 25 5 81096 23m 15m R 19.7 2.4 0:04.08 evolvotron
4914 local 25 5 81096 23m 15m R 19.7 2.4 0:04.19 evolvotron
4910 local 20 0 81096 23m 15m S 9.8 2.4 0:05.83 evolvotron
...