I'm running an operation that involves numerous remote EJB calls to a service hosted on glassfish.
There is high latency between the client and the server, so I'm trying to speed up the operation by multi-threading the requests.
Unfortunately, I don't seem to be gaining any speed by multi-threading the requests; instead, my threaded requests are being queued.
My issue is that I can't figure out why they are being queued.
I have checked the thread pool configuration on the server and inspected it while it is running: the server has a thread for each client thread making EJB calls. Most of those threads, however, are idle most of the time; the requests simply aren't getting there.
It looks like my calls are being blocked on the client side, judging by what I'm seeing on the server, both from the threads in jVisualVM and from the server's resource usage (it's almost idle).
On the client, I'm creating a complete, new InitialContext object / connection for each thread.
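To be concrete, the client-side pattern looks roughly like this (a sleep stands in for the remote EJB call, and the JNDI lookup details are omitted; the class and method names are mine, not from the real code). If the calls really ran in parallel, the total wall-clock time should stay close to one call's latency:

```java
import java.util.List;
import java.util.concurrent.*;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Sketch: one remote call per worker thread, each with its own setup,
// so no client-side resource is intentionally shared between calls.
public class ParallelCalls {

    // oneCall would be: create a fresh InitialContext, look up the bean,
    // invoke the remote method. Here it is a pluggable Callable.
    static <T> List<T> callInParallel(int nThreads,
                                      Callable<T> oneCall) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        try {
            List<Future<T>> futures = IntStream.range(0, nThreads)
                    .mapToObj(i -> pool.submit(oneCall))
                    .collect(Collectors.toList());
            List<T> results = new java.util.ArrayList<>();
            for (Future<T> f : futures) results.add(f.get());
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the remote call: each task sleeps 200 ms. With 4
        // threads, total time should be ~200 ms, not ~800 ms, if the
        // calls actually run in parallel.
        long start = System.nanoTime();
        List<Long> ids = callInParallel(4, () -> {
            Thread.sleep(200);
            return Thread.currentThread().getId();
        });
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("threads used: " + ids.stream().distinct().count());
        System.out.println("parallel: " + (elapsedMs < 600));
    }
}
```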
On the client, I see most of my threads being blocked within CORBA:
"p: default-threadpool; w: 29" daemon prio=10 tid=0x00007f18b4001000 nid=0x608f in Object.wait() [0x00007f19ae32b000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x00000000f2ef5388> (a java.util.LinkedList)
at java.lang.Object.wait(Object.java:503)
at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.resumeOptimizedReadProcessing(CorbaMessageMediatorImpl.java:791)
- locked <0x00000000f2ef5388> (a java.util.LinkedList)
at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.setWorkThenReadOrResumeOptimizedRead(CorbaMessageMediatorImpl.java:869)
at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.handleInput(CorbaMessageMediatorImpl.java:1326)
at com.sun.corba.ee.impl.protocol.giopmsgheaders.FragmentMessage_1_2.callback(FragmentMessage_1_2.java:122)
at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.handleRequest(CorbaMessageMediatorImpl.java:742)
at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.dispatch(CorbaMessageMediatorImpl.java:539)
at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.doWork(CorbaMessageMediatorImpl.java:2324)
at com.sun.corba.ee.impl.orbutil.threadpool.ThreadPoolImpl$WorkerThread.performWork(ThreadPoolImpl.java:497)
at com.sun.corba.ee.impl.orbutil.threadpool.ThreadPoolImpl$WorkerThread.run(ThreadPoolImpl.java:540)
while my own worker threads are blocked on:
"Pool-27" daemon prio=10 tid=0x00007f19b403c800 nid=0x4ef8 waiting on condition [0x00007f19ac70e000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000000f2f66078> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at com.sun.corba.ee.impl.transport.CorbaResponseWaitingRoomImpl.waitForResponse(CorbaResponseWaitingRoomImpl.java:173)
at com.sun.corba.ee.impl.transport.SocketOrChannelConnectionImpl.waitForResponse(SocketOrChannelConnectionImpl.java:1021)
at com.sun.corba.ee.impl.protocol.CorbaMessageMediatorImpl.waitForResponse(CorbaMessageMediatorImpl.java:279)
at com.sun.corba.ee.impl.protocol.CorbaClientRequestDispatcherImpl.marshalingComplete1(CorbaClientRequestDispatcherImpl.java:407)
at com.sun.corba.ee.impl.protocol.CorbaClientRequestDispatcherImpl.marshalingComplete(CorbaClientRequestDispatcherImpl.java:368)
at com.sun.corba.ee.impl.protocol.CorbaClientDelegateImpl.invoke(CorbaClientDelegateImpl.java:273)
at com.sun.corba.ee.impl.presentation.rmi.StubInvocationHandlerImpl.privateInvoke(StubInvocationHandlerImpl.java:200)
at com.sun.corba.ee.impl.presentation.rmi.StubInvocationHandlerImpl.invoke(StubInvocationHandlerImpl.java:152)
at com.sun.corba.ee.impl.presentation.rmi.codegen.CodegenStubBase.invoke(CodegenStubBase.java:227)
Something else that bugs me: no matter how many threads I run on the client, I only get one, sometimes two, 'RMI TCP Connection' threads. I'm not sure whether these are involved in EJB remoting or not.
Why / Where are my client threads being blocked on these remote calls?
Note, this isn't a bandwidth issue: the amount of data being transferred is small, but the latency is high, so making more requests in parallel should give me an improvement. Instead, my overall time stays about the same, but individual requests go from taking 500 ms to taking 5000 ms, for example, as I increase the client thread count.
Anything above 2 or 3 threads just makes no difference at all to the overall time.
If I take the latency out of the picture by running the same process locally, on the same system as the GlassFish server, performance is as expected (too fast to really even debug), so there is no question of the server being able to handle the workload.
I just can't seem to issue requests in parallel from a client.
Check what type your EJBs are. If you call the same Stateful or Singleton EJB, your calls are serialized: by default only a single call at a time is allowed, no matter whether you call the same method or different ones. With Stateless EJBs this is not the case, but your calls may still be blocked if the pool is depleted; check the size of the EJB pool on your server and increase it.
With Stateful, you cannot change the behavior (see this answer), but you may instantiate a new EJB for every remote client. With Singleton, you should consider adjusting concurrency management to allow more throughput.
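For the Singleton case, a minimal sketch of what adjusting concurrency management can look like (EJB 3.1 annotations; the bean name and its contents are hypothetical, and this only runs inside a container):

```java
import java.util.concurrent.ConcurrentHashMap;
import javax.ejb.ConcurrencyManagement;
import javax.ejb.ConcurrencyManagementType;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;

// By default a @Singleton uses container-managed concurrency with an
// implicit @Lock(WRITE) on every business method, i.e. one caller at a time.
@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
public class CatalogBean {

    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();

    @Lock(LockType.READ)   // concurrent callers allowed
    public String lookup(String key) {
        return cache.get(key);
    }

    @Lock(LockType.WRITE)  // exclusive access, same as the default
    public void update(String key, String value) {
        cache.put(key, value);
    }
}
```

Alternatively, `@ConcurrencyManagement(ConcurrencyManagementType.BEAN)` hands locking entirely to the bean's own code.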
Related
I am doing an upgrade from jBPM 3 to 7. The process instances run on a different thread, and sometimes the UI thread needs to access a process instance via the KieSession. If I try to execute an operation, for example sending a signal, the UI is blocked until the process instance finishes.
I noticed this also happens if I try to abort the process instance, get the status of the ProcessInstance, or get a variable from the process instance.
I looked further into it and found that PersistableRunner.execute() is synchronized.
FYI, I am using the per-process-instance strategy.
Is there a way to get around this issue?
A snippet of the thread dump:
java.lang.Thread.State: BLOCKED
waiting for JBPM-Processor-5692548#955268 to release lock on <0xb176> (a org.drools.persistence.PersistableRunner)
at org.drools.persistence.PersistableRunner.execute(PersistableRunner.java:400)
at org.drools.persistence.PersistableRunner.execute(PersistableRunner.java:68)
I tried using the singleton strategy as well.
I would like to be able to access a running KieSession from a different thread without being blocked.
I'm dealing with a legacy synchronous server that has operations running for up to a minute, and it exposes 3 ports to overcome this problem: a "light-requests" port, a "heavy-but-important" requests port, and a "heavy" port.
They all expose the same service, but since they run on separate ports, they end up with dedicated thread pools.
Now this approach is running into a problem with load balancing, as Envoy can't handle a single service exposing the same proto on 3 different ports.
I'm trying to come up with a single thread pool configuration that would work (probably an extremely overprovisioned one), but I can't find any documentation on what the thread pool settings actually do.
NUM_CQS
Number of completion queues.
MIN_POLLERS
Minimum number of polling threads.
MAX_POLLERS
Maximum number of polling threads.
CQ_TIMEOUT_MSEC
Completion queue timeout in milliseconds.
Is there some reason why you need the requests split into three different thread pools? By default, there is no limit to the number of request handler threads. The sync server will spawn a new thread for each request, so the number of threads will be determined by the number of concurrent requests -- the only real limit is what your server machine can handle. (If you actually want to bound the number of threads, I think you can do so via ResourceQuota::SetMaxThreads(), although that's a global limit, not one per class of requests.)
Note that the request handler threads are independent from the number of polling threads set via MIN_POLLERS and MAX_POLLERS, so those settings probably aren't relevant here.
UPDATE: Actually, I just learned that my description above, while correct in a practical sense, got some of the internal details wrong. There is actually just one thread pool for both polling and request handlers. When a request comes in, an existing polling thread basically becomes a request handler thread, and when the request handler completes, that thread is available to become a polling thread again. The MIN_POLLERS and MAX_POLLERS options allow tuning the number of threads that are used for polling: when a polling thread becomes a request handler thread, if there are not enough polling threads remaining, a new one will be spawned, and when a request handler finishes, if there are too many polling threads, the thread will terminate. But none of this affects the number of threads used for request handlers -- that is still unbounded by default.
As per the various documentation available online, e.g. this, I understand that in NIO mode maxConnections is not dependent on the maxThreads parameter, and each thread can serve any number of connections.
If I take a thread dump, I get to see what all my threads are doing. Each of these threads is handling one request, and the trace stays the same for long-running requests between multiple dumps taken in quick succession, so how can these threads serve multiple requests at the same time? I am using Tomcat v8.0.23 with Java v8.0.45.
"http-nio-8080-exec-35" #151 daemon prio=5 os_prio=0 tid=0x00007f5e70021000 nid=0x7337 runnable [0x00007f5f4ebe8000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:150)
at java.net.SocketInputStream.read(SocketInputStream.java:121)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
- locked <0x000000061d4a4070> (a java.io.BufferedInputStream)
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
You have misunderstood.
Assuming we are only considering the synchronous, blocking Servlet API, Tomcat will accept up to maxConnections connections, but only maxThreads of those can be allocated a processing thread at any one time.
The idea is that the majority of the connections will be essentially idle in HTTP keep-alive between requests. Hence a single Poller thread monitors these idle connections and passes them to a processing thread when there is data to process.
In earlier Tomcat releases, the BIO connector allocated one thread per connection, so that thread was tied up even while the connection was idle, which was inefficient.
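Concretely, the relevant knobs sit side by side on the NIO connector in server.xml (the values here are illustrative, not recommendations):

```xml
<!-- The NIO connector keeps up to maxConnections sockets open, but at
     most maxThreads of them are allocated a processing thread at any
     instant; the rest sit in keep-alive, watched by the Poller. -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="200"
           maxConnections="10000"
           acceptCount="100"
           connectionTimeout="20000" />
```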
I understand that the power of Node.js is that it processes all user requests on a single thread working on a queue of requests, the idea being that there are no context switches for this thread and no blocking system calls.
input thread ---> | request queue | ---> output thread (processes a task unless it causes a system call; otherwise it delegates to the thread pool).
The thread pool will:
- execute tasks involving system calls (usually somewhat long-running ones, e.g. IO tasks),
- put the results back on the queue as another request task,
- which will then be processed by the single thread working the queue.
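That flow can be sketched with a toy dispatcher (written in Java purely to illustrate the queueing pattern described above; it is not how Node.js is actually implemented, and all names here are invented):

```java
import java.util.concurrent.*;

// A single "event loop" thread drains a queue of tasks; anything
// blocking is handed to a worker pool, which posts a completion task
// back onto the same queue for the loop thread to run.
public class MiniLoop {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    void post(Runnable task) { queue.add(task); }

    // Run a blocking job off the loop thread, then post its result back.
    <T> void offload(Callable<T> blocking, java.util.function.Consumer<T> onDone) {
        workers.submit(() -> {
            try {
                T result = blocking.call();          // may block (e.g. a JDBC call)
                post(() -> onDone.accept(result));   // handled back on the loop thread
            } catch (Exception e) {
                post(() -> { throw new RuntimeException(e); });
            }
        });
    }

    void runLoop() throws InterruptedException {
        while (true) {
            Runnable task = queue.poll(1, TimeUnit.SECONDS);
            if (task == null) break;                 // idle: end the demo
            task.run();
        }
        workers.shutdown();
    }

    public static void main(String[] args) throws Exception {
        MiniLoop loop = new MiniLoop();
        loop.post(() -> loop.offload(
                () -> { Thread.sleep(100); return 42; },   // stand-in "IO" task
                n -> System.out.println("result on loop thread: " + n)));
        loop.runLoop();
        System.out.println("done");
    }
}
```

The loop thread never blocks on the "IO" itself; only the worker pool does, which is exactly the trade-off the question is about.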
My question is: inevitably, Node.js code will need to put data into an RDBMS or a JMS system. This is most definitely synchronous (even putting a message on JMS is synchronous, although producer and consumer are not synchronous with each other). So the thread pool processing these IO tasks will not only make system calls, but also be blocked during this period. JDBC in any case does not support asynchronous calls (I guess due to the need to be transactional, and maybe security issues, since transaction and security context are attached to threads).
So how do we actually put data in RDBMS efficiently from a Node.js server?
There was a similar question asked, java-thread-dump-waiting-on-object-monitor-line-not-followed-by-waiting-on, but there was no concrete answer, so I will ask my question in hopes of getting more info...
In the following thread dump I see that the thread is in the "WAITING (on object monitor)" state, but there is no "waiting on" line that would indicate what it is waiting for. How do I interpret this thread stack and find out why (and on what resource) this thread is waiting?
"eventTaskExecutor-50" prio=10 tid=0x0000000004117000 nid=0xd8dd in Object.wait() [0x00007f8f457ad000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at com.tibco.tibjms.TibjmsxLink.sendRequest(TibjmsxLink.java:359)
- locked <0x00007f98cbebe5d8> (a com.tibco.tibjms.TibjmsxResponse)
at com.tibco.tibjms.TibjmsxSessionImp._confirmTransacted(TibjmsxSessionImp.java:2934)
at com.tibco.tibjms.TibjmsxSessionImp._confirm(TibjmsxSessionImp.java:3333)
- locked <0x00007f90101399b8> (a java.lang.Object)
at com.tibco.tibjms.TibjmsxSessionImp._commit(TibjmsxSessionImp.java:2666)
at com.tibco.tibjms.TibjmsxSessionImp.commit(TibjmsxSessionImp.java:4516)
at org.springframework.jms.support.JmsUtils.commitIfNecessary(JmsUtils.java:217)
at org.springframework.jms.listener.AbstractMessageListenerContainer.commitIfNecessary(AbstractMessageListenerContainer.java:577)
at org.springframework.jms.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:482)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:325)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:263)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1102)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:996)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Locked ownable synchronizers:
- <0x00007f901011ca88> (a java.util.concurrent.ThreadPoolExecutor$Worker)
This thread is one of the listener threads configured to accept messages from the Tibco bus.
It's a peculiarity of the HotSpot JVM. When dumping a stack, the JVM recovers the wait object from the method's local variables. This info is available for interpreted methods, but not for compiled native wrappers.
When Object.wait is executed frequently enough, it gets JIT-compiled.
After that there will be no "waiting on" line in a thread dump.
Since wait() must be called while synchronized on the object, most often the wait object is the last locked object in the stack trace. In your case it is
- locked <0x00007f98cbebe5d8> (a com.tibco.tibjms.TibjmsxResponse)
To prevent Object.wait from being JIT-compiled (and thus make the wait info always available), use the following JVM option:
-XX:CompileCommand="exclude,java/lang/Object.wait"
This thread is waiting for a notification from another thread (named TCPLinkReader; if you look through the full thread dump you should be able to find it) which is created by the TIBCO EMS client library.
The stack trace shows that the Spring application is trying to commit a session. To commit the session, the EMS client needs to send some data to the server and then wait for confirmation from the server on whether the session was committed successfully.
The TCPLinkReader thread is a dedicated thread that the EMS client uses to receive downstream (server-to-client) TCP packets.
If you are seeing this thread last for a long time, there are 2 likely scenarios:
- something is wrong on the EMS server side, which may be hung;
- there is a defect in the client library that caused a deadlock, so the server did send the response back but the TCPLinkReader thread did not notify the caller thread.
Lastly, post the full thread dump if the problem persists.