Qt GUI hang while swapping framebuffers - Linux

I am using Qt QML and I see occasional random GUI freezes. It looks like the GUI/render thread is stuck in swapBuffers, which is essentially the call into the graphics driver to execute the remaining rendering commands and post the contents to the window.
Has anybody faced this issue? Any clue about what could be causing it?
I was able to capture a backtrace of the hang:
#3 <signal handler called>
#4 0x75a88a0c in pthread_cond_wait () from /opt/btl/data/libpthread-2.23.so
#5 0x74af14ac in gcoOS_GetDisplayBackbuffer (Display=0x16dc710, Window=<optimized out>, context=context@entry=0x7ef910b8,
surface=<optimized out>, Offset=0x7ef910c0, X=0x7ef910c4,
Y=0x7ef910c8) at gc_hal_user_fbdev.c:1015
#6 0x74af2744 in gcoOS_GetDisplayBackbufferEx (Display=<optimized out>, Window=<optimized out>, localDisplay=<optimized out>,
context=context@entry=0x7ef910b8, surface=<optimized out>,
surface@entry=0x7ef910bc, Offset=0x7ef910c4, Offset@entry=0x7ef910c0,
X=0x7ef910c8, X@entry=0x7ef910c4, Y=Y@entry=0x7ef910c8) at
gc_hal_user_fbdev.c:2401
#7 0x74c3cb70 in veglGetDisplayBackBuffer (Display=Display@entry=0x16dd00c, Surface=Surface@entry=0x26bc6f4,
BackBuffer=0x7ef910b8, BackBuffer@entry=0x7ef91138) at
gc_egl_platform.c:217
#8 0x74c37bc4 in _SwapBuffersRegion (Rects=<optimized out>, NumRects=1, Draw=0x26bc6f4, Dpy=0x16dd00c, Thread=0x16dcb5c) at
gc_egl_swap.c:3338
#9 _eglSwapBuffersRegion (Dpy=0x16dd00c, Dpy@entry=<error reading variable: value has been optimized out>, Draw=0x26bc6f4,
Draw@entry=<error reading variable: value has been optimized out>,
NumRects=1, NumRects@entry=<error reading variable: value has been
optimized out>, Rects=<optimized out>, Rects@entry=<error reading
variable: value has been optimized out>) at gc_egl_swap.c:4246
#10 0x7535e708 in veglSwapBuffer_es3 (Dpy=<optimized out>, Draw=<optimized out>, Callback=<optimized out>) at
src/glcore/gc_es_egl.c:354
#11 0x74c38964 in eglSwapBuffers (Dpy=0x16dd00c, Draw=0x26bc6f4) at gc_egl_swap.c:4400
#12 0x72ffc9d8 in QEGLPlatformContext::swapBuffers (this=this@entry=0x26bf928, surface=surface@entry=0x1bddca8) at
/usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/platformsupport/eglconvenience/qeglplatformcontext.cpp:447
#13 0x72fba380 in QEglFSContext::swapBuffers (this=0x26bf928, surface=0x1bddca8) at
/usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/plugins/platforms/eglfs/api/qeglfscontext.cpp:115
#14 0x765da978 in QOpenGLContext::swapBuffers (this=0x26bc6a0, surface=<optimized out>) at
/usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/gui/kernel/qopenglcontext.cpp:1111
#15 0x76a814d4 in QSGGuiThreadRenderLoop::renderWindow (this=0x1bd9a20, window=0x350020) at
/usr/src/debug/qtdeclarative/5.9.5+gitAUTOINC+dfbe918537-r0/git/src/quick/scenegraph/qsgrenderloop.cpp:445
#16 0x76af071c in QQuickWindow::event (this=0x1bd95a8, e=0x7ef912f4) at
/usr/src/debug/qtdeclarative/5.9.5+gitAUTOINC+dfbe918537-r0/git/src/quick/items/qquickwindow.cpp:1588
#17 0x75d2285c in doNotify (event=<optimized out>, receiver=<optimized out>) at
/usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qcoreapplication.cpp:1099
#18 QCoreApplication::notify (this=<optimized out>, receiver=<optimized out>, event=<optimized out>) at
/usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qcoreapplication.cpp:1085
#19 0x75d229bc in QCoreApplication::notifyInternal2 (receiver=receiver@entry=0x1bd95a8, event=event@entry=0x7ef912f4) at
/usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qcoreapplication.cpp:1024
#20 0x765aa600 in QCoreApplication::sendEvent (event=0x7ef912f4, receiver=<optimized out>) at
/usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qcoreapplication.h:233
#21 QWindowPrivate::deliverUpdateRequest (this=this@entry=0x1bd9618) at
/usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/gui/kernel/qwindow.cpp:2305
#22 0x765aadac in QWindow::event (this=this@entry=0x1bd95a8, ev=ev@entry=0x7ef913f0) at
/usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/gui/kernel/qwindow.cpp:2276
#23 0x76af06e0 in QQuickWindow::event (this=0x1bd95a8, e=0x7ef913f0) at
/usr/src/debug/qtdeclarative/5.9.5+gitAUTOINC+dfbe918537-r0/git/src/quick/items/qquickwindow.cpp:1607
#24 0x75d2285c in doNotify (event=<optimized out>, receiver=<optimized out>) at
/usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qcoreapplication.cpp:1099
#25 QCoreApplication::notify (this=<optimized out>, receiver=<optimized out>, event=<optimized out>) at
/usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qcoreapplication.cpp:1085
#26 0x75d229bc in QCoreApplication::notifyInternal2 (receiver=0x1bd95a8, event=event@entry=0x7ef913f0) at
/usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qcoreapplication.cpp:1024
#27 0x75d7e208 in QCoreApplication::sendEvent (event=0x7ef913f0, receiver=<optimized out>) at
/usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qcoreapplication.h:233
#28 QTimerInfoList::activateTimers (this=0x16df7b4) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qtimerinfo_unix.cpp:643
#29 0x75d7ea50 in timerSourceDispatch (source=<optimized out>) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qeventdispatcher_glib.cpp:182
#30 idleTimerSourceDispatch (source=<optimized out>) at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qeventdispatcher_glib.cpp:229
#31 0x74ddcaec in g_main_dispatch (context=0x16df708) at /usr/src/debug/glib-2.0/1_2.46.2-r0/glib-2.46.2/glib/gmain.c:3154
#32 g_main_context_dispatch (context=context@entry=0x16df708) at /usr/src/debug/glib-2.0/1_2.46.2-r0/glib-2.46.2/glib/gmain.c:3769
#33 0x74ddcd14 in g_main_context_iterate (context=context@entry=0x16df708, block=block@entry=1,
dispatch=dispatch@entry=1, self=<optimized out>) at
/usr/src/debug/glib-2.0/1_2.46.2-r0/glib-2.46.2/glib/gmain.c:3840
#34 0x74ddcdc0 in g_main_context_iteration (context=0x16df708, may_block=may_block@entry=1) at
/usr/src/debug/glib-2.0/1_2.46.2-r0/glib-2.46.2/glib/gmain.c:3901
#35 0x75d7ecb4 in QEventDispatcherGlib::processEvents (this=0x146f2c0, flags=...) at
/usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qeventdispatcher_glib.cpp:423
#36 0x75d208e0 in QEventLoop::exec (this=this@entry=0x7ef91524, flags=flags@entry=...) at
/usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qeventloop.cpp:212
#37 0x75d29d08 in QCoreApplication::exec () at /usr/src/debug/qtbase/5.9.5+gitAUTOINC+f4c2fcc052-r08/git/src/corelib/kernel/qcoreapplication.cpp:1297
#38 0x00043c24 in ?? ()
#39 0x7576dcf8 in __libc_start_main () from /opt/btl/data/libc-2.23.s
#40 0x0034fd2c in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

Related

Play framework futures not being parallelised by default-dispatcher [duplicate]

This question already has answers here: Why are Futures within Futures running sequentially when started on Akka Dispatcher (3 answers). Closed 3 years ago.
I'm trying to test the ExecutionContext behaviour in a Play app, and I've found that I can't achieve any degree of parallelism when using the default dispatcher, whether by calling as.dispatcher, calling as.dispatchers.lookup("akka.actor.default-dispatcher"), or passing the default execution context as a parameter to my Controller class:
class HomeController @Inject()(cc: ControllerComponents)(implicit ec: ExecutionContext)
I'm building on the Play examples available here, adding/altering the following configuration:
routes
GET /futures controllers.HomeController.testFutures(dispatcherId: String)
common.conf
akka {
  my-dispatcher {
    executor = "fork-join-executor"
    fork-join-executor {
      # vm-cores = 4
      parallelism-min = 4
      parallelism-factor = 2.0
      # 2x vm-cores
      parallelism-max = 8
    }
  }
  actor.default-dispatcher {
    executor = "fork-join-executor"
    fork-join-executor {
      # vm-cores = 4
      parallelism-min = 4
      parallelism-factor = 2.0
      # 2x vm-cores
      parallelism-max = 8
    }
  }
}
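For reference, the pool size Akka derives from a fork-join-executor block follows the documented rule: clamp ceil(availableProcessors * parallelism-factor) between parallelism-min and parallelism-max. A quick sketch of that arithmetic (plain Java, class name mine, just to show what the configuration above yields):
public class ForkJoinSizing {
    public static void main(String[] args) {
        // Akka's documented sizing rule for fork-join-executor:
        // clamp(ceil(cores * parallelism-factor), parallelism-min, parallelism-max)
        int cores = Runtime.getRuntime().availableProcessors(); // e.g. 4 "vm-cores"
        int poolSize = Math.min(8, Math.max(4, (int) Math.ceil(cores * 2.0)));
        System.out.println(poolSize); // 8 on a 4-core box with this configuration
    }
}
So with the settings above both dispatchers should have several threads available; the lack of parallelism shown below is not a pool-size problem.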
HomeController
@Singleton
class HomeController @Inject()(cc: ControllerComponents, as: ActorSystem) extends AbstractController(cc) {
  import HomeController._

  def testFutures(dispatcherId: String) = Action.async { implicit request =>
    implicit val dispatcher = as.dispatchers.lookup(dispatcherId)
    Future.sequence((0 to 10).map(i => Future {
      val time = 1000 + Random.nextInt(200)
      log.info(s"Sleeping #$i for $time ms")
      Thread.sleep(time)
      log.info(s"Awakening #$i")
    })).map(_ => Ok("ok"))
  }
}
For some reason, calls to http://localhost:9000/futures?dispatcherId=akka.actor.default-dispatcher (default dispatcher) don't parallelize and produce the following output:
[info] c.HomeController - Sleeping #0 for 1044 ms
[info] c.HomeController - Awakening #0
[info] c.HomeController - Sleeping #1 for 1034 ms
[info] c.HomeController - Awakening #1
[info] c.HomeController - Sleeping #2 for 1031 ms
[info] c.HomeController - Awakening #2
[info] c.HomeController - Sleeping #3 for 1065 ms
[info] c.HomeController - Awakening #3
[info] c.HomeController - Sleeping #4 for 1082 ms
[info] c.HomeController - Awakening #4
[info] c.HomeController - Sleeping #5 for 1057 ms
[info] c.HomeController - Awakening #5
[info] c.HomeController - Sleeping #6 for 1090 ms
[info] c.HomeController - Awakening #6
[info] c.HomeController - Sleeping #7 for 1165 ms
[info] c.HomeController - Awakening #7
[info] c.HomeController - Sleeping #8 for 1173 ms
[info] c.HomeController - Awakening #8
[info] c.HomeController - Sleeping #9 for 1034 ms
[info] c.HomeController - Awakening #9
[info] c.HomeController - Sleeping #10 for 1056 ms
[info] c.HomeController - Awakening #10
But calls to http://localhost:9000/futures?dispatcherId=akka.my-dispatcher (using another dispatcher) parallelize correctly and produce the following output:
[info] c.HomeController - Sleeping #1 for 1191 ms
[info] c.HomeController - Sleeping #0 for 1055 ms
[info] c.HomeController - Sleeping #7 for 1196 ms
[info] c.HomeController - Sleeping #4 for 1121 ms
[info] c.HomeController - Sleeping #6 for 1040 ms
[info] c.HomeController - Sleeping #2 for 1016 ms
[info] c.HomeController - Sleeping #5 for 1107 ms
[info] c.HomeController - Sleeping #3 for 1165 ms
[info] c.HomeController - Awakening #2
[info] c.HomeController - Sleeping #8 for 1002 ms
[info] c.HomeController - Awakening #6
[info] c.HomeController - Sleeping #9 for 1127 ms
[info] c.HomeController - Awakening #0
[info] c.HomeController - Sleeping #10 for 1016 ms
[info] c.HomeController - Awakening #5
[info] c.HomeController - Awakening #4
[info] c.HomeController - Awakening #3
[info] c.HomeController - Awakening #1
[info] c.HomeController - Awakening #7
[info] c.HomeController - Awakening #8
[info] c.HomeController - Awakening #10
[info] c.HomeController - Awakening #9
Any ideas why this could be happening?
I think the behaviour comes from akka.actor.default-dispatcher being a BatchingExecutor: it tries to optimize operations such as map/flatMap by executing them on the same thread, to avoid unnecessary scheduling. When an operation is going to block, you can say so with the hint scala.concurrent.blocking(Thread.sleep(time)); this stores a marker in a ThreadLocal[BlockContext] signalling the intention to block, so the executor skips the batching optimization and runs the operation on another thread.
If you change the line Thread.sleep(time) to scala.concurrent.blocking(Thread.sleep(time)), you will get the desired behaviour:
@Singleton
class HomeController @Inject()(cc: ControllerComponents, as: ActorSystem) extends AbstractController(cc) {
  import HomeController._

  def testFutures(dispatcherId: String) = Action.async { implicit request =>
    implicit val dispatcher = as.dispatchers.lookup(dispatcherId)
    Future.sequence((0 to 10).map(i => Future {
      val time = 1000 + Random.nextInt(200)
      log.info(s"Sleeping #$i for $time ms")
      scala.concurrent.blocking(Thread.sleep(time))
      log.info(s"Awakening #$i")
    })).map(_ => Ok("ok"))
  }
}
[info] play.api.Play - Application started (Dev) (no global state)
Sleeping #0 for 1062 ms
Sleeping #1 for 1128 ms
Sleeping #2 for 1189 ms
Sleeping #3 for 1105 ms
Sleeping #4 for 1169 ms
Sleeping #5 for 1178 ms
Sleeping #6 for 1057 ms
Sleeping #7 for 1003 ms
Sleeping #8 for 1164 ms
Sleeping #9 for 1029 ms
Sleeping #10 for 1005 ms
Awakening #7
Awakening #10
Awakening #9
Awakening #6
Awakening #0
Awakening #3
Awakening #1
Awakening #8
Awakening #4
Awakening #5
Awakening #2

Failed to persist etcd/raft data: input/output error

I'm running Hyperledger Fabric 1.4.1 with etcd/raft, and I keep running into this error on the network's orderers whenever I send a transaction to the orderer.
We have observed it 12 times over a period of 9 days.
What might be causing it?
2019-08-07 17:06:07.241 UTC [orderer.consensus.etcdraft] 2 -> DEBU 7fc91 Proposed block [15] to raft consensus channel=pocccssinciensaled node=2
2019-08-07 17:06:07.242 UTC [orderer.consensus.etcdraft] run -> PANI 7fc92 Failed to persist etcd/raft data: input/output error channel=pocccssinciensaled node=2
panic: Failed to persist etcd/raft data: input/output error
goroutine 68 [running]:
github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc0000ede40, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x515
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc00000e258, 0x104, 0x1033fcb, 0x24, 0xc0002f9bc8, 0x1, 0x1, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc00000e258, 0x1033fcb, 0x24, 0xc0002f9bc8, 0x1, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x79
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(0xc00000e260, 0x1033fcb, 0x24, 0xc0002f9bc8, 0x1, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:74 +0x60
github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*node).run(0xc000586500, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/node.go:118 +0x43c
created by github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*node).start
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/node.go:80 +0x218

Cassandra 4.0.0 driver - PreparedStatement uses 100% of one CPU core?

With Cassandra driver 4.0.0 (com.datastax.oss / java-driver-core / 4.0.0), a PreparedStatement, just by existing, uses 100% of one CPU core, even when the application is idle:
import com.datastax.oss.driver.api.core.CqlSession;
import java.net.InetSocketAddress;

public class Demo_4_0_0 {
    public static void main(String[] args) throws Exception {
        CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("localhost", 9042))
                .withLocalDatacenter("datacenter1")
                .build();
        System.out.println("before preparing select");
        Thread.sleep(15000);
        session.prepare("SELECT value FROM demo.demo WHERE partition = 0;");
        System.out.println("after preparing select");
        Thread.sleep(15000);
        session.close();
    }
}
The same PreparedStatement with Cassandra driver 3.7.1 (com.datastax.cassandra / cassandra-driver-core / 3.7.1) behaves fine - no CPU load when the application is idle:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class Demo_3_7_1 {
    public static void main(String[] args) throws Exception {
        Session session = Cluster.builder().addContactPoints("127.0.0.1").build().connect();
        System.out.println("before preparing select");
        Thread.sleep(15000);
        session.prepare("SELECT value FROM demo.demo WHERE partition = 0;");
        System.out.println("after preparing select");
        Thread.sleep(15000);
        session.close();
    }
}
For the examples to work, first execute the following in Cassandra:
CREATE KEYSPACE demo WITH REPLICATION = {'class':'SimpleStrategy', 'replication_factor':1};
CREATE TABLE demo.demo (partition INT PRIMARY KEY, value INT);
My environment: Cassandra 3.11.2 (as a Docker container), JDK 1.8.0_111 x64, Windows 10, 8 CPUs.
Any ideas?
Additional information: the thread dumps before and after preparing the statement look identical, with one exception: the "s0-timer-0" thread has appeared:
"s0-timer-0" #12 prio=5 os_prio=0 tid=0x0000000021392000 nid=0xcd0 runnable [0x0000000024f1e000]
java.lang.Thread.State: RUNNABLE
at java.lang.Thread.sleep(Native Method)
at io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:579)
at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:478)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Unknown Source)
Locked ownable synchronizers:
- None
Here is the full thread dump after the statement has been prepared:
2019-04-06 10:50:54
Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.202-b08 mixed mode):
"s0-timer-0" #12 prio=5 os_prio=0 tid=0x0000000021392000 nid=0xcd0 runnable [0x0000000024f1e000]
java.lang.Thread.State: RUNNABLE
at java.lang.Thread.sleep(Native Method)
at io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:579)
at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:478)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Unknown Source)
Locked ownable synchronizers:
- None
"JMX server connection timeout 20" #20 daemon prio=5 os_prio=0 tid=0x000000001ff32000 nid=0xb90 in Object.wait() [0x0000000024a1f000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x000000076b779218> (a [I)
at com.sun.jmx.remote.internal.ServerCommunicatorAdmin$Timeout.run(Unknown Source)
- locked <0x000000076b779218> (a [I)
at java.lang.Thread.run(Unknown Source)
Locked ownable synchronizers:
- None
"RMI Scheduler(0)" #19 daemon prio=5 os_prio=0 tid=0x000000001ff31000 nid=0x13ac waiting on condition [0x000000002491f000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000076f8c4ca0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(Unknown Source)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.getTask(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Locked ownable synchronizers:
- None
"RMI TCP Connection(1)-10.0.75.1" #18 daemon prio=5 os_prio=0 tid=0x0000000020be7800 nid=0x27b0 runnable [0x000000002481d000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
at java.io.BufferedInputStream.fill(Unknown Source)
at java.io.BufferedInputStream.read(Unknown Source)
- locked <0x000000076b73dae0> (a java.io.BufferedInputStream)
at java.io.FilterInputStream.read(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$$Lambda$180/1667804988.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Locked ownable synchronizers:
- <0x000000076b553ee8> (a java.util.concurrent.ThreadPoolExecutor$Worker)
"RMI TCP Accept-0" #17 daemon prio=5 os_prio=0 tid=0x0000000020b3c000 nid=0x448c runnable [0x000000002461f000]
java.lang.Thread.State: RUNNABLE
at java.net.DualStackPlainSocketImpl.accept0(Native Method)
at java.net.DualStackPlainSocketImpl.socketAccept(Unknown Source)
at java.net.AbstractPlainSocketImpl.accept(Unknown Source)
at java.net.PlainSocketImpl.accept(Unknown Source)
- locked <0x000000076f400858> (a java.net.SocksSocketImpl)
at java.net.ServerSocket.implAccept(Unknown Source)
at java.net.ServerSocket.accept(Unknown Source)
at sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(Unknown Source)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Locked ownable synchronizers:
- None
"s0-io-1" #16 prio=5 os_prio=0 tid=0x0000000020b34000 nid=0x33f8 runnable [0x000000002310f000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.WindowsSelectorImpl$SubSelector.poll0(Native Method)
at sun.nio.ch.WindowsSelectorImpl$SubSelector.poll(Unknown Source)
at sun.nio.ch.WindowsSelectorImpl$SubSelector.access$400(Unknown Source)
at sun.nio.ch.WindowsSelectorImpl.doSelect(Unknown Source)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(Unknown Source)
- locked <0x000000076f413268> (a io.netty.channel.nio.SelectedSelectionKeySet)
- locked <0x000000076f413280> (a java.util.Collections$UnmodifiableSet)
- locked <0x000000076f4131e8> (a sun.nio.ch.WindowsSelectorImpl)
at sun.nio.ch.SelectorImpl.select(Unknown Source)
at io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:786)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:434)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Unknown Source)
Locked ownable synchronizers:
- None
"s0-io-0" #15 prio=5 os_prio=0 tid=0x00000000203cd800 nid=0x368c runnable [0x0000000021aff000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.WindowsSelectorImpl$SubSelector.poll0(Native Method)
at sun.nio.ch.WindowsSelectorImpl$SubSelector.poll(Unknown Source)
at sun.nio.ch.WindowsSelectorImpl$SubSelector.access$400(Unknown Source)
at sun.nio.ch.WindowsSelectorImpl.doSelect(Unknown Source)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(Unknown Source)
- locked <0x000000076f40b900> (a io.netty.channel.nio.SelectedSelectionKeySet)
- locked <0x000000076f40b918> (a java.util.Collections$UnmodifiableSet)
- locked <0x000000076f40b880> (a sun.nio.ch.WindowsSelectorImpl)
at sun.nio.ch.SelectorImpl.select(Unknown Source)
at io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:786)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:434)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Unknown Source)
Locked ownable synchronizers:
- None
"s0-admin-1" #14 prio=5 os_prio=0 tid=0x00000000202c7000 nid=0x4aac waiting on condition [0x0000000020a2f000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000076f418190> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(Unknown Source)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(Unknown Source)
at java.util.concurrent.LinkedBlockingQueue.poll(Unknown Source)
at io.netty.util.concurrent.SingleThreadEventExecutor.takeTask(SingleThreadEventExecutor.java:251)
at io.netty.channel.DefaultEventLoop.run(DefaultEventLoop.java:52)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Unknown Source)
Locked ownable synchronizers:
- None
"s0-admin-0" #13 prio=5 os_prio=0 tid=0x00000000202b5800 nid=0x4158 waiting on condition [0x000000002072e000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000076f420180> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(Unknown Source)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(Unknown Source)
at java.util.concurrent.LinkedBlockingQueue.take(Unknown Source)
at io.netty.util.concurrent.SingleThreadEventExecutor.takeTask(SingleThreadEventExecutor.java:238)
at io.netty.channel.DefaultEventLoop.run(DefaultEventLoop.java:52)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Unknown Source)
Locked ownable synchronizers:
- None
"Service Thread" #10 daemon prio=9 os_prio=0 tid=0x000000001eb95000 nid=0x2b30 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
Locked ownable synchronizers:
- None
"C1 CompilerThread3" #9 daemon prio=9 os_prio=2 tid=0x000000001eb11800 nid=0x1948 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
Locked ownable synchronizers:
- None
"C2 CompilerThread2" #8 daemon prio=9 os_prio=2 tid=0x000000001eb05800 nid=0x56c waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
Locked ownable synchronizers:
- None
"C2 CompilerThread1" #7 daemon prio=9 os_prio=2 tid=0x000000001eafd800 nid=0x4a04 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
Locked ownable synchronizers:
- None
"C2 CompilerThread0" #6 daemon prio=9 os_prio=2 tid=0x000000001eafa800 nid=0x1e0c waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
Locked ownable synchronizers:
- None
"Attach Listener" #5 daemon prio=5 os_prio=2 tid=0x000000001eaf8800 nid=0x34c4 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
Locked ownable synchronizers:
- None
"Signal Dispatcher" #4 daemon prio=9 os_prio=2 tid=0x000000001cc1e000 nid=0x36f0 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
Locked ownable synchronizers:
- None
"Finalizer" #3 daemon prio=8 os_prio=1 tid=0x000000000319e800 nid=0x4acc in Object.wait() [0x000000001efce000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x000000076f438180> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(Unknown Source)
- locked <0x000000076f438180> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(Unknown Source)
at java.lang.ref.Finalizer$FinalizerThread.run(Unknown Source)
Locked ownable synchronizers:
- None
"Reference Handler" #2 daemon prio=10 os_prio=2 tid=0x0000000003195000 nid=0x4118 in Object.wait() [0x000000001eace000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x000000076f430468> (a java.lang.ref.Reference$Lock)
at java.lang.Object.wait(Unknown Source)
at java.lang.ref.Reference.tryHandlePending(Unknown Source)
- locked <0x000000076f430468> (a java.lang.ref.Reference$Lock)
at java.lang.ref.Reference$ReferenceHandler.run(Unknown Source)
Locked ownable synchronizers:
- None
"main" #1 prio=5 os_prio=0 tid=0x00000000011be000 nid=0x3c90 waiting on condition [0x000000000301f000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at bug.Demo_4_0_0.main(Demo_4_0_0.java:14)
Locked ownable synchronizers:
- None
"VM Thread" os_prio=2 tid=0x000000001cc0a000 nid=0x17c8 runnable
"GC task thread#0 (ParallelGC)" os_prio=0 tid=0x00000000030b7800 nid=0x4b9c runnable
"GC task thread#1 (ParallelGC)" os_prio=0 tid=0x00000000030b9800 nid=0x9c8 runnable
"GC task thread#2 (ParallelGC)" os_prio=0 tid=0x00000000030bb000 nid=0x46e8 runnable
"GC task thread#3 (ParallelGC)" os_prio=0 tid=0x00000000030bd000 nid=0x3234 runnable
"GC task thread#4 (ParallelGC)" os_prio=0 tid=0x00000000030be800 nid=0x3160 runnable
"GC task thread#5 (ParallelGC)" os_prio=0 tid=0x00000000030bf800 nid=0xc20 runnable
"GC task thread#6 (ParallelGC)" os_prio=0 tid=0x00000000030c3800 nid=0x4298 runnable
"GC task thread#7 (ParallelGC)" os_prio=0 tid=0x00000000030c5000 nid=0x429c runnable
"VM Periodic Task Thread" os_prio=2 tid=0x000000001ebaf000 nid=0x4a20 waiting on condition
JNI global references: 664
I looked into this a little bit today, as we've had several reports of this happening now (JAVA-2264).
It appears this is caused by special logic in netty's HashedWheelTimer code that rounds the sleep duration down to a multiple of 10 on Windows:
// Check if we run on windows, as if thats the case we will need
// to round the sleepTime as workaround for a bug that only affect
// the JVM if it runs on windows.
//
// See https://github.com/netty/netty/issues/356
if (PlatformDependent.isWindows()) {
    sleepTimeMs = sleepTimeMs / 10 * 10;
}
Typically this would be OK; however, the Java driver by default configures the HashedWheelTimer's tick duration to 1 ms. Since sleepTimeMs is a long, my guess is that we're sleeping in a tight loop, because in integer arithmetic 1 / 10 * 10 == 0.
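To make the integer-division effect concrete, here is a tiny standalone sketch of that rounding applied to a 1 ms tick (class and variable names are mine, not netty's):
public class TickRounding {
    public static void main(String[] args) {
        long sleepTimeMs = 1; // the driver's default tick duration in ms
        // netty's Windows workaround rounds down to a multiple of 10:
        long rounded = sleepTimeMs / 10 * 10; // integer division: 1 / 10 == 0
        System.out.println(rounded); // prints 0, so the timer thread never actually sleeps
    }
}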
We will find a solution for this in the driver, whether that means adjusting the default tick duration overall (maybe 1 ms is too aggressive) or just on Windows.
Until then, you can work around this in one of two ways:
Pass the system property -Ddatastax-java-driver.advanced.netty.timer.tick-duration="100 milliseconds" to your application
Set datastax-java-driver.advanced.netty.timer.tick-duration = 100 milliseconds in application.conf
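If you would rather override this in code, the 4.x driver also accepts a programmatic config loader; a minimal sketch (DriverConfigLoader.programmaticBuilder was introduced in 4.1, so this assumes you can move past 4.0.0):
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import com.datastax.oss.driver.api.core.config.DriverConfigLoader;
import java.net.InetSocketAddress;
import java.time.Duration;

public class TickWorkaround {
    public static void main(String[] args) {
        // Override advanced.netty.timer.tick-duration without touching application.conf
        DriverConfigLoader loader = DriverConfigLoader.programmaticBuilder()
                .withDuration(DefaultDriverOption.NETTY_TIMER_TICK_DURATION, Duration.ofMillis(100))
                .build();
        try (CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("localhost", 9042))
                .withLocalDatacenter("datacenter1")
                .withConfigLoader(loader)
                .build()) {
            session.prepare("SELECT value FROM demo.demo WHERE partition = 0;");
        }
    }
}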

Consumers are waiting to run with Java ExecutorService

I have a one-producer, multiple-consumers sample. The producer puts events onto a blocking queue; consumers take messages from the queue and insert them into the DB. This worked for many days.
However, it went down several times yesterday when traffic was very heavy, and the producer is blocked.
I checked the stack using jstack and saw that all EnterClassData-Consumer-Service threads are waiting for work, while all ConsumeMessageThread threads are waiting on put.
A sample stack dump follows:
2019-01-29 20:46:39
Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.65-b01 mixed mode):
"Attach Listener" #78 daemon prio=9 os_prio=0 tid=0x00007f97f0001000 nid=0x1301 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"DestroyJavaVM" #74 prio=5 os_prio=0 tid=0x00007f983c009800 nid=0x558f waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"ConsumeMessageThread_2" #57 prio=5 os_prio=0 tid=0x00007f9800025000 nid=0x5640 waiting on condition [0x00007f97980ee000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000067675f590> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:350)
at xx.xx.monitor.config.EnterClassMQConsumer.onMessage(EnterClassMQConsumer.java:80)
at xx.xx.vkmq.client.spring.VKMQConsumerInitializer$$Lambda$34/1277678493.onMessage(Unknown Source)
at xx.xx.vkmq.client.consumer.VKMQConsumer.lambda$subscribe$0(VKMQConsumer.java:52)
at xx.xx.vkmq.client.consumer.VKMQConsumer$$Lambda$35/1742448147.consumeMessage(Unknown Source)
at org.apache.rocketmq.client.impl.consumer.ConsumeMessageConcurrentlyService$ConsumeRequest.run(ConsumeMessageConcurrentlyService.java:417)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
"ConsumeMessageThread_1" #55 prio=5 os_prio=0 tid=0x00007f9800023800 nid=0x563f waiting on condition [0x00007f97981ef000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000067675f590> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:350)
at xx.xx.monitor.config.EnterClassMQConsumer.onMessage(EnterClassMQConsumer.java:80)
at xx.xx.vkmq.client.spring.VKMQConsumerInitializer$$Lambda$34/1277678493.onMessage(Unknown Source)
at xx.xx.vkmq.client.consumer.VKMQConsumer.lambda$subscribe$0(VKMQConsumer.java:52)
at xx.xx.vkmq.client.consumer.VKMQConsumer$$Lambda$35/1742448147.consumeMessage(Unknown Source)
at org.apache.rocketmq.client.impl.consumer.ConsumeMessageConcurrentlyService$ConsumeRequest.run(ConsumeMessageConcurrentlyService.java:417)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
"PullMessageServiceScheduledThread" #56 prio=5 os_prio=0 tid=0x00007f97e0a1a000 nid=0x563e waiting on condition [0x00007f97982f0000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000006775910a0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
"NettyClientSelector_1" #41 prio=5 os_prio=0 tid=0x00007f983db61800 nid=0x5630 runnable [0x00007f97a63f2000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00000006773a05b0> (a io.netty.channel.nio.SelectedSelectionKeySet)
- locked <0x0000000677590c20> (a java.util.Collections$UnmodifiableSet)
- locked <0x000000067739dd90> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:692)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:352)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:877)
at java.lang.Thread.run(Thread.java:745)
"RebalanceService" #38 prio=5 os_prio=0 tid=0x00007f983e617000 nid=0x562f waiting on condition [0x00007f97a64f3000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000006773a0050> (a org.apache.rocketmq.common.CountDownLatch2$Sync)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at org.apache.rocketmq.common.CountDownLatch2.await(CountDownLatch2.java:114)
at org.apache.rocketmq.common.ServiceThread.waitForRunning(ServiceThread.java:116)
at org.apache.rocketmq.client.impl.consumer.RebalanceService.run(RebalanceService.java:40)
at java.lang.Thread.run(Thread.java:745)
"PullMessageService" #37 prio=5 os_prio=0 tid=0x00007f983e59e800 nid=0x562e waiting on condition [0x00007f97a65f4000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x0000000677591100> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.rocketmq.client.impl.consumer.PullMessageService.run(PullMessageService.java:88)
at java.lang.Thread.run(Thread.java:745)
"CleanExpireMsgScheduledThread_1" #40 prio=5 os_prio=0 tid=0x00007f983e580000 nid=0x562d waiting on condition [0x00007f97a66f5000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x0000000677592558> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
"MQClientFactoryScheduledThread" #39 prio=5 os_prio=0 tid=0x00007f983e533000 nid=0x562c waiting on condition [0x00007f97a67f6000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x0000000677591208> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
"ClientHouseKeepingService" #36 daemon prio=5 os_prio=0 tid=0x00007f983dfc0000 nid=0x562b in Object.wait() [0x00007f97a6af7000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.util.TimerThread.mainLoop(Timer.java:552)
- locked <0x0000000677592090> (a java.util.TaskQueue)
at java.util.TimerThread.run(Timer.java:505)
"pool-6-thread-1" #34 prio=5 os_prio=0 tid=0x00007f983cb58000 nid=0x5621 waiting on condition [0x00007f97a6df8000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000006775e6a78> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
"pool-3-thread-2" #33 prio=5 os_prio=0 tid=0x00007f97a0172800 nid=0x5617 waiting on condition [0x00007f97a70f9000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x0000000675d44aa0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
"Abandoned connection cleanup thread" #32 daemon prio=5 os_prio=0 tid=0x00007f979c135800 nid=0x5613 in Object.wait() [0x00007f97a73fa000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
- locked <0x00000006772ef050> (a java.lang.ref.ReferenceQueue$Lock)
at com.mysql.jdbc.AbandonedConnectionCleanupThread.run(AbandonedConnectionCleanupThread.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
"Tomcat JDBC Pool Cleaner[1365202186:1548743608273]" #31 daemon prio=5 os_prio=0 tid=0x00007f979c102000 nid=0x5612 in Object.wait() [0x00007f97a74fb000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.util.TimerThread.mainLoop(Timer.java:552)
- locked <0x00000006772ef098> (a java.util.TaskQueue)
at java.util.TimerThread.run(Timer.java:505)
"pool-3-thread-1" #30 prio=5 os_prio=0 tid=0x00007f97a045f800 nid=0x5611 waiting on condition [0x00007f97a75fc000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x0000000675d44aa0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
"EnterClassData-Consumer-Service-thread-5" #26 prio=5 os_prio=0 tid=0x00007f983c985800 nid=0x55e9 waiting on condition [0x00007f97f42cf000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000067675f2a8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:492)
at java.util.concurrent.LinkedBlockingDeque.take(LinkedBlockingDeque.java:680)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
"EnterClassData-Consumer-Service-thread-4" #25 prio=5 os_prio=0 tid=0x00007f983c985000 nid=0x55e8 waiting on condition [0x00007f97f43d0000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000067675f2a8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:492)
at java.util.concurrent.LinkedBlockingDeque.take(LinkedBlockingDeque.java:680)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
"EnterClassData-Consumer-Service-thread-3" #24 prio=5 os_prio=0 tid=0x00007f983cc18000 nid=0x55e7 waiting on condition [0x00007f97f44d1000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000067675f2a8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:492)
at java.util.concurrent.LinkedBlockingDeque.take(LinkedBlockingDeque.java:680)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
"EnterClassData-Consumer-Service-thread-2" #23 prio=5 os_prio=0 tid=0x00007f983cc17800 nid=0x55e6 waiting on condition [0x00007f97f45d2000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000067675f2a8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:492)
at java.util.concurrent.LinkedBlockingDeque.take(LinkedBlockingDeque.java:680)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
"EnterClassData-Consumer-Service-thread-1" #22 prio=5 os_prio=0 tid=0x00007f983d031800 nid=0x55e5 waiting on condition [0x00007f97f46d3000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000067675f2a8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:492)
at java.util.concurrent.LinkedBlockingDeque.take(LinkedBlockingDeque.java:680)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
"container-0" #21 prio=5 os_prio=0 tid=0x00007f983d70c800 nid=0x55db waiting on condition [0x00007f97f4bd4000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at org.apache.catalina.core.StandardServer.await(StandardServer.java:427)
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer$1.run(TomcatEmbeddedServletContainer.java:177)
"ContainerBackgroundProcessor[StandardEngine[Tomcat]]" #20 daemon prio=5 os_prio=0 tid=0x00007f983cdeb800 nid=0x55da waiting on condition [0x00007f97f4cd5000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1355)
at java.lang.Thread.run(Thread.java:745)
"AsyncAppender-Worker-ASYNC_SLOW_SQL" #15 daemon prio=5 os_prio=0 tid=0x00007f983d876000 nid=0x55ae waiting on condition [0x00007f9810bd5000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x0000000674ce2a78> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
at ch.qos.logback.core.AsyncAppenderBase$Worker.run(AsyncAppenderBase.java:264)
"RMI TCP Accept-0" #13 daemon prio=5 os_prio=0 tid=0x00007f983c301000 nid=0x55a2 runnable [0x00007f9810fd7000]
java.lang.Thread.State: RUNNABLE
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
at java.net.ServerSocket.implAccept(ServerSocket.java:545)
at java.net.ServerSocket.accept(ServerSocket.java:513)
at sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:52)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:400)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:372)
at java.lang.Thread.run(Thread.java:745)
"RMI TCP Accept-9999" #12 daemon prio=5 os_prio=0 tid=0x00007f983c2f4000 nid=0x55a1 runnable [0x00007f98110d8000]
java.lang.Thread.State: RUNNABLE
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
at java.net.ServerSocket.implAccept(ServerSocket.java:545)
at java.net.ServerSocket.accept(ServerSocket.java:513)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:400)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:372)
at java.lang.Thread.run(Thread.java:745)
"RMI TCP Accept-0" #11 daemon prio=5 os_prio=0 tid=0x00007f983c2e0000 nid=0x55a0 runnable [0x00007f98111d9000]
java.lang.Thread.State: RUNNABLE
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
at java.net.ServerSocket.implAccept(ServerSocket.java:545)
at java.net.ServerSocket.accept(ServerSocket.java:513)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:400)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:372)
at java.lang.Thread.run(Thread.java:745)
"Service Thread" #9 daemon prio=9 os_prio=0 tid=0x00007f983c1da800 nid=0x559f runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"C1 CompilerThread2" #8 daemon prio=9 os_prio=0 tid=0x00007f983c1cd000 nid=0x559e waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"C2 CompilerThread1" #7 daemon prio=9 os_prio=0 tid=0x00007f983c1cb000 nid=0x559d waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"C2 CompilerThread0" #6 daemon prio=9 os_prio=0 tid=0x00007f983c1c8800 nid=0x559c waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Signal Dispatcher" #5 daemon prio=9 os_prio=0 tid=0x00007f983c1c6800 nid=0x559b runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Surrogate Locker Thread (Concurrent GC)" #4 daemon prio=9 os_prio=0 tid=0x00007f983c1c5000 nid=0x559a waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Finalizer" #3 daemon prio=8 os_prio=0 tid=0x00007f983c18d800 nid=0x5599 in Object.wait() [0x00007f981c158000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
- locked <0x0000000674ce2b08> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:164)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209)
"Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x00007f983c18b000 nid=0x5598 in Object.wait() [0x00007f981c259000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:157)
- locked <0x0000000674f0eee8> (a java.lang.ref.Reference$Lock)
"VM Thread" os_prio=0 tid=0x00007f983c186000 nid=0x5597 runnable
"Gang worker#0 (Parallel GC Threads)" os_prio=0 tid=0x00007f983c01a800 nid=0x5591 runnable
"Gang worker#1 (Parallel GC Threads)" os_prio=0 tid=0x00007f983c01c000 nid=0x5592 runnable
"Gang worker#2 (Parallel GC Threads)" os_prio=0 tid=0x00007f983c01e000 nid=0x5593 runnable
"Gang worker#3 (Parallel GC Threads)" os_prio=0 tid=0x00007f983c01f800 nid=0x5594 runnable
"Concurrent Mark-Sweep GC Thread" os_prio=0 tid=0x00007f983c062800 nid=0x5596 runnable
"VM Periodic Task Thread" os_prio=0 tid=0x00007f983c303800 nid=0x55a3 waiting on condition
JNI global references: 462
I also post the main part of my code here:
public class EnterClassMQConsumer implements MessageListenerAdapter {
    public static final int NUM_WORKING_THREADS =
    public static final int MAX_QUEUE_SIZE =
    public static ExecutorService executorService = ExecutorsFactory.newNameThreadPool(NUM_WORKING_THREADS, MAX_QUEUE_SIZE, "EnterClassData-Consumer-Service");

    private final static int QUEUE_SIZE = 100;
    private static BlockingQueue<EnterClassRoomStatus> queue = new LinkedBlockingQueue<EnterClassRoomStatus>(QUEUE_SIZE);

    @PostConstruct
    public void init() {
        for (int i = 0; i < NUM_WORKING_THREADS; i++) {
            executorService.submit(new DBUpdateOperator());
        }
    }

    @Override
    public boolean onMessage(ReceivedMessage message) {
        EnterClassRoomStatus enterClassRoomStatus = EnterClassRoomStatusFactory.getInstance().getRoomStatus();
        enterClassRoomStatus.setMessage(message);
        try {
            queue.put(enterClassRoomStatus);
        } catch (InterruptedException e) {
        }
        return true;
    }

    class DBUpdateOperator implements Runnable {
        Set<EnterClassRoomStatus> pendSet = Sets.newLinkedHashSet();

        @Override
        public void run() {
            while (true) {
                EnterClassRoomStatus event = null;
                try {
                    event = queue.poll(1, TimeUnit.SECONDS);
                } catch (InterruptedException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                processAndInsertDB(event);
            }
        }
    }
and the ExecutorService is defined as
public static ThreadPoolExecutor newNameThreadPool(int maxSize, int queueSize, RejectedExecutionHandler rejectedExecutionHandler, String prefix) {
    BlockingQueue<Runnable> workQueue = new LinkedBlockingDeque<Runnable>(queueSize);
    ThreadFactory threadFactory = new NamedThreadFactory(prefix);
    int maxPoolSize = maxSize > DEFAULT_MAX_POOL_SIZE ? maxSize : DEFAULT_MAX_POOL_SIZE;
    // rejectedExecutionHandler blocks the submitting thread until the queue has available space.
    return new ThreadPoolExecutor(DEFAULT_CORE_POOL_SIZE, maxPoolSize,
            DEFAULT_KEEP_ALIVE_TIME, TimeUnit.MILLISECONDS, workQueue, threadFactory, rejectedExecutionHandler);
}
What might be the reason for the EnterClassData-Consumer-Service threads waiting? Is it related to the ExecutorService, and how can I improve it?
Thanks in advance.
The call to BlockingQueue#put is waiting because the queue is full, i.e. it has reached its capacity (which seems to be QUEUE_SIZE in your case). This is the expected behaviour as per the Javadoc:
Inserts the specified element into this queue, waiting if necessary for space to become available.
As a result, you will need to implement onMessage differently depending on what you want to happen when your system undergoes a traffic burst:
Shed requests in excess of the queue capacity (see the sketch below)?
Increase the queue size? [not recommended in most cases]
Also, note that you have two explicit queues in your system: one in EnterClassMQConsumer and one inside the thread pool executor, so tasks can queue up to QUEUE_SIZE + MAX_QUEUE_SIZE in total. Is this really what you want?
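If you choose to shed load, here is a minimal sketch that replaces the blocking put with a timed offer (assuming it is acceptable to drop excess messages, or that returning false causes your MQ library to redeliver them):
@Override
public boolean onMessage(ReceivedMessage message) {
    EnterClassRoomStatus status = EnterClassRoomStatusFactory.getInstance().getRoomStatus();
    status.setMessage(message);
    try {
        // offer() waits at most 100 ms for space instead of blocking indefinitely;
        // use offer(status) without a timeout to drop immediately.
        if (!queue.offer(status, 100, TimeUnit.MILLISECONDS)) {
            return false; // queue still full: shed the message (or let the broker redeliver it)
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return false;
    }
    return true;
}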

Spark Streaming Kafka hangs at JavaStreamingContext.start, no Spark job created

OS: Red Hat Enterprise Linux Server release 6.5
JRE: Oracle 1.8.0_144-b01
spark-streaming_2.11:2.1.0
spark-streaming-kafka-0-10_2.11:2.1.0
The Spark Streaming Kafka jar is submitted via spark-submit to a standalone Spark cluster, and it ran well for a few days. Recently, however, we found that no new jobs were being generated for the stream. We tried restarting the job and restarting the cluster, but the stream just gets stuck at JavaStreamingContext.start, WAITING (on object monitor). The thread dump is below, and there are no error logs from Spark or Kafka. I wonder what the Spark stream is waiting for...
"shuffle-server-3-4" #35 daemon prio=5 os_prio=0 tid=0x00007f76a0041800 nid=0x3d34 runnable [0x00007f76911e5000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00000000f8ea3be8> (a io.netty.channel.nio.SelectedSelectionKeySet)
- locked <0x00000000f8ee3600> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000000f8ea3ae0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:760)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:401)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- None
"shuffle-server-3-3" #34 daemon prio=5 os_prio=0 tid=0x00007f76a0040800 nid=0x3d33 runnable [0x00007f76912e6000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00000000fc2747c0> (a io.netty.channel.nio.SelectedSelectionKeySet)
- locked <0x00000000fc2874c0> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000000fc2746c8> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:760)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:401)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- None
"shuffle-server-3-2" #33 daemon prio=5 os_prio=0 tid=0x00007f76a003e800 nid=0x3d32 runnable [0x00007f76913e7000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00000000fb227370> (a io.netty.channel.nio.SelectedSelectionKeySet)
- locked <0x00000000fb2296a0> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000000fb227278> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:760)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:401)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- None
"ForkJoinPool-1-worker-5" #80 daemon prio=5 os_prio=0 tid=0x00007f76a0034800 nid=0x3d31 runnable [0x00007f76916e7000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00000000f8e8ed98> (a sun.nio.ch.Util$3)
- locked <0x00000000f8e8ed88> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000000f8e7d008> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at org.apache.kafka.common.network.Selector.select(Selector.java:454)
at org.apache.kafka.common.network.Selector.poll(Selector.java:277)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
at org.apache.kafka.clients.consumer.internals.Fetcher.listOffset(Fetcher.java:324)
at org.apache.kafka.clients.consumer.internals.Fetcher.resetOffset(Fetcher.java:298)
at org.apache.kafka.clients.consumer.internals.Fetcher.updateFetchPositions(Fetcher.java:174)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateFetchPositions(KafkaConsumer.java:1409)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:983)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:938)
at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.paranoidPoll(DirectKafkaInputDStream.scala:168)
at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.start(DirectKafkaInputDStream.scala:244)
at org.apache.spark.streaming.DStreamGraph$$anonfun$start$5.apply(DStreamGraph.scala:49)
at org.apache.spark.streaming.DStreamGraph$$anonfun$start$5.apply(DStreamGraph.scala:49)
at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach_quick(ParArray.scala:143)
at scala.collection.parallel.mutable.ParArray$ParArrayIterator.foreach(ParArray.scala:136)
at scala.collection.parallel.ParIterableLike$Foreach.leaf(ParIterableLike.scala:972)
at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply$mcV$sp(Tasks.scala:49)
at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
at scala.collection.parallel.Task$class.tryLeaf(Tasks.scala:51)
at scala.collection.parallel.ParIterableLike$Foreach.tryLeaf(ParIterableLike.scala:969)
at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.compute(Tasks.scala:152)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:443)
at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Locked ownable synchronizers:
- None
"JobGenerator" #79 daemon prio=5 os_prio=0 tid=0x00007f76a0007800 nid=0x3d30 waiting on condition [0x00007f76917e9000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000000fe48b8d8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:492)
at java.util.concurrent.LinkedBlockingDeque.take(LinkedBlockingDeque.java:680)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:46)
Locked ownable synchronizers:
- None
"JobScheduler" #78 daemon prio=5 os_prio=0 tid=0x00007f76a0004800 nid=0x3d2f waiting on condition [0x00007f76918ea000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000000fe48cb98> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:492)
at java.util.concurrent.LinkedBlockingDeque.take(LinkedBlockingDeque.java:680)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:46)
Locked ownable synchronizers:
- None
"streaming-start" #77 daemon prio=5 os_prio=0 tid=0x00007f77323a1000 nid=0x3d2e in Object.wait() [0x00007f76919ea000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at scala.concurrent.forkjoin.ForkJoinTask.externalAwaitDone(ForkJoinTask.java:295)
- locked <0x00000000fa037d50> (a scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask)
at scala.concurrent.forkjoin.ForkJoinTask.doJoin(ForkJoinTask.java:341)
at scala.concurrent.forkjoin.ForkJoinTask.join(ForkJoinTask.java:673)
at scala.collection.parallel.ForkJoinTasks$WrappedTask$class.sync(Tasks.scala:378)
at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.sync(Tasks.scala:443)
at scala.collection.parallel.ForkJoinTasks$class.executeAndWaitResult(Tasks.scala:426)
at scala.collection.parallel.ForkJoinTaskSupport.executeAndWaitResult(TaskSupport.scala:56)
at scala.collection.parallel.ExecutionContextTasks$class.executeAndWaitResult(Tasks.scala:558)
at scala.collection.parallel.ExecutionContextTaskSupport.executeAndWaitResult(TaskSupport.scala:80)
at scala.collection.parallel.ParIterableLike$class.foreach(ParIterableLike.scala:463)
at scala.collection.parallel.mutable.ParArray.foreach(ParArray.scala:56)
at org.apache.spark.streaming.DStreamGraph.start(DStreamGraph.scala:49)
- locked <0x00000000fa0380d0> (a org.apache.spark.streaming.DStreamGraph)
at org.apache.spark.streaming.scheduler.JobGenerator.startFirstTime(JobGenerator.scala:194)
at org.apache.spark.streaming.scheduler.JobGenerator.start(JobGenerator.scala:100)
- locked <0x00000000fe48b4d0> (a org.apache.spark.streaming.scheduler.JobGenerator)
at org.apache.spark.streaming.scheduler.JobScheduler.start(JobScheduler.scala:102)
- locked <0x00000000fe48b170> (a org.apache.spark.streaming.scheduler.JobScheduler)
at org.apache.spark.streaming.StreamingContext$$anonfun$liftedTree1$1$1.apply$mcV$sp(StreamingContext.scala:583)
at org.apache.spark.streaming.StreamingContext$$anonfun$liftedTree1$1$1.apply(StreamingContext.scala:578)
at org.apache.spark.streaming.StreamingContext$$anonfun$liftedTree1$1$1.apply(StreamingContext.scala:578)
at org.apache.spark.util.ThreadUtils$$anon$2.run(ThreadUtils.scala:126)
Locked ownable synchronizers:
- None
"SparkListenerBus" #21 daemon prio=5 os_prio=0 tid=0x00007f7732291800 nid=0x3d2d waiting on condition [0x00007f7691cec000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x0000000081c9be70> (a java.util.concurrent.Semaphore$NonfairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiveListenerBus.scala:80)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:78)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1245)
at org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:77)
Locked ownable synchronizers:
- None
"Spark Context Cleaner" #74 daemon prio=5 os_prio=0 tid=0x00007f773228a000 nid=0x3d2b in Object.wait() [0x00007f7691eee000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
- locked <0x00000000fe4675d0> (a java.lang.ref.ReferenceQueue$Lock)
at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:175)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1245)
at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:172)
at org.apache.spark.ContextCleaner$$anon$1.run(ContextCleaner.scala:67)
Locked ownable synchronizers:
- None
"shuffle-server-6-1" #70 daemon prio=5 os_prio=0 tid=0x00007f77321b5800 nid=0x3d2a runnable [0x00007f7691fef000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00000000fa182e28> (a io.netty.channel.nio.SelectedSelectionKeySet)
- locked <0x00000000fa1b3938> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000000fa182d90> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:760)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:401)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- None
"threadDeathWatcher-4-1" #65 daemon prio=1 os_prio=0 tid=0x00007f7704019800 nid=0x3d29 waiting on condition [0x00007f76932f1000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at io.netty.util.ThreadDeathWatcher$Watcher.run(ThreadDeathWatcher.java:150)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- None
"shuffle-client-1-3" #30 daemon prio=5 os_prio=0 tid=0x00007f76fc006800 nid=0x3d28 runnable [0x00007f76933f2000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00000000fd326748> (a io.netty.channel.nio.SelectedSelectionKeySet)
- locked <0x00000000fd328838> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000000fd3266a0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:760)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:401)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- None
"shuffle-client-1-2" #29 daemon prio=5 os_prio=0 tid=0x00007f770801a800 nid=0x3d27 runnable [0x00007f76934f3000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00000000fe49d458> (a io.netty.channel.nio.SelectedSelectionKeySet)
- locked <0x00000000fe4b02d0> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000000fe49d360> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:760)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:401)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- None
"shuffle-client-1-1" #28 daemon prio=5 os_prio=0 tid=0x00007f7700005000 nid=0x3d26 runnable [0x00007f76935f4000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00000000fe4b24a0> (a io.netty.channel.nio.SelectedSelectionKeySet)
- locked <0x00000000fe4b4570> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000000fe4b23a8> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:760)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:401)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- None
"appclient-registration-retry-thread" #61 daemon prio=5 os_prio=0 tid=0x00007f76a800f000 nid=0x3d22 waiting on condition [0x00007f76939f8000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000000fe4e0e28> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- None
"driver-revive-thread" #57 daemon prio=5 os_prio=0 tid=0x00007f76b0004000 nid=0x3d1e waiting on condition [0x00007f7693dfc000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000000fe4e0450> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- None
"dag-scheduler-event-loop" #56 daemon prio=5 os_prio=0 tid=0x00007f77321a5800 nid=0x3d1d waiting on condition [0x00007f7693efd000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000000fe38ff08> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:492)
at java.util.concurrent.LinkedBlockingDeque.take(LinkedBlockingDeque.java:680)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:46)
Locked ownable synchronizers:
- None
"shuffle-server-3-1" #32 daemon prio=5 os_prio=0 tid=0x00007f7731fea800 nid=0x3d07 runnable [0x00007f76f4863000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00000000fc26eeb8> (a io.netty.channel.nio.SelectedSelectionKeySet)
- locked <0x00000000fc271470> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000000fc26edb0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:760)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:401)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- None
"kafka-producer-network-thread | producer-1" #18 daemon prio=5 os_prio=0 tid=0x00007f773187a800 nid=0x3ce5 runnable [0x00007f76f5d6b000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x0000000081bb6f80> (a sun.nio.ch.Util$3)
- locked <0x0000000081bb6f70> (a java.util.Collections$UnmodifiableSet)
- locked <0x0000000081bb6e48> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at org.apache.kafka.common.network.Selector.select(Selector.java:454)
at org.apache.kafka.common.network.Selector.poll(Selector.java:277)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:229)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:134)
at java.lang.Thread.run(Thread.java:748)
"main" #1 prio=5 os_prio=0 tid=0x00007f773000d800 nid=0x3ca7 in Object.wait() [0x00007f7736da7000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1252)
- locked <0x00000000fe463ad0> (a org.apache.spark.util.ThreadUtils$$anon$2)
at java.lang.Thread.join(Thread.java:1326)
at org.apache.spark.util.ThreadUtils$.runInNewThread(ThreadUtils.scala:135)
at org.apache.spark.streaming.StreamingContext.liftedTree1$1(StreamingContext.scala:578)
at org.apache.spark.streaming.StreamingContext.start(StreamingContext.scala:572)
- locked <0x0000000081c9ea00> (a java.lang.Object)
- locked <0x0000000081f19c80> (a org.apache.spark.streaming.StreamingContext)
at org.apache.spark.streaming.api.java.JavaStreamingContext.start(JavaStreamingContext.scala:556)
at com.ccb.iomp.appmon.analysis.statistic.processor.StatisticProcessor.start(StatisticProcessor.java:780)
at com.ccb.iomp.appmon.analysis.statistic.TranLogStatisticApp.main(TranLogStatisticApp.java:49)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Locked ownable synchronizers:
- None
As per the following stack trace:
"ForkJoinPool-1-worker-5" #80 daemon prio=5 os_prio=0 tid=0x00007f76a0034800 nid=0x3d31 runnable [0x00007f76916e7000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x00000000f8e8ed98> (a sun.nio.ch.Util$3)
- locked <0x00000000f8e8ed88> (a java.util.Collections$UnmodifiableSet)
- locked <0x00000000f8e7d008> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at org.apache.kafka.common.network.Selector.select(Selector.java:454)
at org.apache.kafka.common.network.Selector.poll(Selector.java:277)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
at org.apache.kafka.clients.consumer.internals.Fetcher.listOffset(Fetcher.java:324)
at org.apache.kafka.clients.consumer.internals.Fetcher.resetOffset(Fetcher.java:298)
at org.apache.kafka.clients.consumer.internals.Fetcher.updateFetchPositions(Fetcher.java:174)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateFetchPositions(KafkaConsumer.java:1409)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:983)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:938)
at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.paranoidPoll(DirectKafkaInputDStream.scala:168)
at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.start(DirectKafkaInputDStream.scala:244)
at org.apache.spark.streaming.DStreamGraph$$anonfun$start$5.apply(DStreamGraph.scala:49)
at org.apache.spark.streaming.DStreamGraph$$anonfun$start$5.apply(DStreamGraph.scala:49)
It is stuck fetching offsets from Kafka. You should check your Kafka cluster.
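To narrow it down, a minimal standalone consumer can verify whether offsets can be fetched outside of Spark (the broker list, group id, and topic name below are placeholders); if this also hangs inside poll(), the problem is on the Kafka side:
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KafkaConnectivityCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder: your brokers
        props.put("group.id", "connectivity-check");    // placeholder group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset", "earliest");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("your-topic")); // placeholder topic
            // poll(long) is the 0.10-era API; it blocks here if offset metadata
            // cannot be fetched, mirroring the hang in the thread dump above.
            ConsumerRecords<String, String> records = consumer.poll(5000);
            System.out.println("Fetched " + records.count() + " records");
        }
    }
}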
I ran into this recently too. In my case, the problem was that only one Kafka broker was running in the dev environment, but offsets.topic.replication.factor was set to 2. Try setting it to 1. This isn't a good idea in production, but it can work around the issue when you only need one broker for testing.
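For reference, the relevant line in the broker's server.properties would be the following (single-broker dev setups only; note that the internal __consumer_offsets topic is created once, so if it already exists with a higher replication factor you may need to recreate it):
# server.properties on the broker (dev/test with a single broker only).
# With one live broker, the internal __consumer_offsets topic cannot
# satisfy a replication factor of 2, which can leave consumers waiting
# on offset metadata.
offsets.topic.replication.factor=1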

Resources