Apache Cassandra 3.11.2 - errors on startup

I have suddenly started encountering errors (Java exceptions) on startup of my Apache Cassandra 3.11.2 database on Windows 10. I did not see these errors before on this version of the database. Here is an excerpt from my debug.log file:
DEBUG [SSTableBatchOpen:1] 2018-05-21 13:42:39,806 SSTableReader.java:504 - Opening C:\Users\Michał\Downloads\apache-cassandra-3.11.2\data\data\system\IndexInfo-9f5c6374d48532299a0a5094af9ad1e3\mc-53-big (0,139KiB)
ERROR [SSTableBatchOpen:1] 2018-05-21 13:42:40,092 DebuggableThreadPoolExecutor.java:239 - Error in ThreadPoolExecutor
java.lang.NoClassDefFoundError: com/github/benmanes/caffeine/cache/LocalCacheFactory$WISWR
at com.github.benmanes.caffeine.cache.BoundedLocalCache$BoundedLocalManualCache.<init>(BoundedLocalCache.java:2727) ~[caffeine-2.2.6.jar:na]
at com.github.benmanes.caffeine.cache.BoundedLocalCache$BoundedLocalLoadingCache.<init>(BoundedLocalCache.java:2944) ~[caffeine-2.2.6.jar:na]
at com.github.benmanes.caffeine.cache.Caffeine.build(Caffeine.java:830) ~[caffeine-2.2.6.jar:na]
at org.apache.cassandra.cache.ChunkCache.<init>(ChunkCache.java:145) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.cache.ChunkCache.<clinit>(ChunkCache.java:47) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:763) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:737) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:517) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:385) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader$3.run(SSTableReader.java:564) ~[apache-cassandra-3.11.2.jar:3.11.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_60]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) [apache-cassandra-3.11.2.jar:3.11.2]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_60]
Caused by: java.lang.ClassNotFoundException: com.github.benmanes.caffeine.cache.LocalCacheFactory$WISWR
at java.net.URLClassLoader$1.run(URLClassLoader.java:370) ~[na:1.8.0_60]
at java.net.URLClassLoader$1.run(URLClassLoader.java:362) ~[na:1.8.0_60]
at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_60]
at java.net.URLClassLoader.findClass(URLClassLoader.java:361) ~[na:1.8.0_60]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0_60]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) ~[na:1.8.0_60]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0_60]
... 16 common frames omitted
Caused by: java.util.zip.ZipException: invalid distance too far back
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164) ~[na:1.8.0_60]
at sun.misc.Resource.getBytes(Resource.java:124) ~[na:1.8.0_60]
at java.net.URLClassLoader.defineClass(URLClassLoader.java:462) ~[na:1.8.0_60]
at java.net.URLClassLoader.access$100(URLClassLoader.java:73) ~[na:1.8.0_60]
at java.net.URLClassLoader$1.run(URLClassLoader.java:368) ~[na:1.8.0_60]
... 22 common frames omitted
DEBUG [main] 2018-05-21 13:42:40,119 DiskBoundaryManager.java:53 - Refreshing disk boundary cache for system.IndexInfo
DEBUG [main] 2018-05-21 13:42:40,136 DiskBoundaryManager.java:92 - Got local ranges [] (ringVersion = 0)
DEBUG [main] 2018-05-21 13:42:40,138 DiskBoundaryManager.java:56 - Updating boundaries from null to DiskBoundaries{directories=[DataDirectory{location=C:\Users\Michał\Downloads\apache-cassandra-3.11.2\data\data}], positions=null, ringVersion=0, directoriesVersion=0} for system.IndexInfo
INFO [main] 2018-05-21 13:42:40,154 ColumnFamilyStore.java:411 - Initializing system.batches
INFO [main] 2018-05-21 13:42:40,166 ColumnFamilyStore.java:411 - Initializing system.paxos
DEBUG [main] 2018-05-21 13:42:40,168 DiskBoundaryManager.java:53 - Refreshing disk boundary cache for system.paxos
DEBUG [main] 2018-05-21 13:42:40,168 DiskBoundaryManager.java:92 - Got local ranges [] (ringVersion = 0)
DEBUG [main] 2018-05-21 13:42:40,168 DiskBoundaryManager.java:56 - Updating boundaries from null to DiskBoundaries{directories=[DataDirectory{location=C:\Users\Michał\Downloads\apache-cassandra-3.11.2\data\data}], positions=null, ringVersion=0, directoriesVersion=0} for system.paxos
INFO [main] 2018-05-21 13:42:40,184 ColumnFamilyStore.java:411 - Initializing system.local
DEBUG [SSTableBatchOpen:1] 2018-05-21 13:42:40,189 SSTableReader.java:504 - Opening C:\Users\Michał\Downloads\apache-cassandra-3.11.2\data\data\system\local-7ad54392bcdd35a684174e047860b377\mc-187-big (5,049KiB)
ERROR [SSTableBatchOpen:1] 2018-05-21 13:42:40,189 DebuggableThreadPoolExecutor.java:239 - Error in ThreadPoolExecutor
java.lang.NoClassDefFoundError: Could not initialize class org.apache.cassandra.cache.ChunkCache
at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:763) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:737) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:517) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:385) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader$3.run(SSTableReader.java:564) ~[apache-cassandra-3.11.2.jar:3.11.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_60]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) [apache-cassandra-3.11.2.jar:3.11.2]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_60]
DEBUG [main] 2018-05-21 13:42:40,190 DiskBoundaryManager.java:53 - Refreshing disk boundary cache for system.local
DEBUG [main] 2018-05-21 13:42:40,190 DiskBoundaryManager.java:92 - Got local ranges [] (ringVersion = 0)
DEBUG [main] 2018-05-21 13:42:40,190 DiskBoundaryManager.java:56 - Updating boundaries from null to DiskBoundaries{directories=[DataDirectory{location=C:\Users\Michał\Downloads\apache-cassandra-3.11.2\data\data}], positions=null, ringVersion=0, directoriesVersion=0} for system.local
INFO [main] 2018-05-21 13:42:40,201 ColumnFamilyStore.java:411 - Initializing system.peers
DEBUG [main] 2018-05-21 13:42:40,202 DiskBoundaryManager.java:53 - Refreshing disk boundary cache for system.peers
DEBUG [main] 2018-05-21 13:42:40,202 DiskBoundaryManager.java:92 - Got local ranges [] (ringVersion = 0)
DEBUG [main] 2018-05-21 13:42:40,202 DiskBoundaryManager.java:56 - Updating boundaries from null to DiskBoundaries{directories=[DataDirectory{location=C:\Users\Michał\Downloads\apache-cassandra-3.11.2\data\data}], positions=null, ringVersion=0, directoriesVersion=0} for system.peers
INFO [main] 2018-05-21 13:42:40,211 ColumnFamilyStore.java:411 - Initializing system.peer_events
DEBUG [main] 2018-05-21 13:42:40,213 DiskBoundaryManager.java:53 - Refreshing disk boundary cache for system.peer_events
DEBUG [main] 2018-05-21 13:42:40,213 DiskBoundaryManager.java:92 - Got local ranges [] (ringVersion = 0)
DEBUG [main] 2018-05-21 13:42:40,213 DiskBoundaryManager.java:56 - Updating boundaries from null to DiskBoundaries{directories=[DataDirectory{location=C:\Users\Michał\Downloads\apache-cassandra-3.11.2\data\data}], positions=null, ringVersion=0, directoriesVersion=0} for system.peer_events
INFO [main] 2018-05-21 13:42:40,223 ColumnFamilyStore.java:411 - Initializing system.range_xfers
DEBUG [main] 2018-05-21 13:42:40,224 DiskBoundaryManager.java:53 - Refreshing disk boundary cache for system.range_xfers
DEBUG [main] 2018-05-21 13:42:40,224 DiskBoundaryManager.java:92 - Got local ranges [] (ringVersion = 0)
DEBUG [main] 2018-05-21 13:42:40,224 DiskBoundaryManager.java:56 - Updating boundaries from null to DiskBoundaries{directories=[DataDirectory{location=C:\Users\Michał\Downloads\apache-cassandra-3.11.2\data\data}], positions=null, ringVersion=0, directoriesVersion=0} for system.range_xfers
INFO [main] 2018-05-21 13:42:40,237 ColumnFamilyStore.java:411 - Initializing system.compaction_history
DEBUG [SSTableBatchOpen:1] 2018-05-21 13:42:40,244 SSTableReader.java:504 - Opening C:\Users\Michał\Downloads\apache-cassandra-3.11.2\data\data\system\compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca\mc-32-big (3,121KiB)
DEBUG [SSTableBatchOpen:2] 2018-05-21 13:42:40,244 SSTableReader.java:504 - Opening C:\Users\Michał\Downloads\apache-cassandra-3.11.2\data\data\system\compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca\mc-33-big (0,354KiB)
ERROR [SSTableBatchOpen:2] 2018-05-21 13:42:40,244 DebuggableThreadPoolExecutor.java:239 - Error in ThreadPoolExecutor
java.lang.NoClassDefFoundError: Could not initialize class org.apache.cassandra.cache.ChunkCache
at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:763) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:737) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:517) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:385) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader$3.run(SSTableReader.java:564) ~[apache-cassandra-3.11.2.jar:3.11.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_60]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) [apache-cassandra-3.11.2.jar:3.11.2]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_60]
ERROR [SSTableBatchOpen:1] 2018-05-21 13:42:40,245 DebuggableThreadPoolExecutor.java:239 - Error in ThreadPoolExecutor
java.lang.NoClassDefFoundError: Could not initialize class org.apache.cassandra.cache.ChunkCache
at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:763) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:737) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:517) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:385) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader$3.run(SSTableReader.java:564) ~[apache-cassandra-3.11.2.jar:3.11.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_60]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) [apache-cassandra-3.11.2.jar:3.11.2]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_60]
DEBUG [main] 2018-05-21 13:42:40,246 DiskBoundaryManager.java:53 - Refreshing disk boundary cache for system.compaction_history
DEBUG [main] 2018-05-21 13:42:40,246 DiskBoundaryManager.java:92 - Got local ranges [] (ringVersion = 0)
DEBUG [main] 2018-05-21 13:42:40,246 DiskBoundaryManager.java:56 - Updating boundaries from null to DiskBoundaries{directories=[DataDirectory{location=C:\Users\Michał\Downloads\apache-cassandra-3.11.2\data\data}], positions=null, ringVersion=0, directoriesVersion=0} for system.compaction_history
INFO [main] 2018-05-21 13:42:40,262 ColumnFamilyStore.java:411 - Initializing system.sstable_activity
DEBUG [SSTableBatchOpen:2] 2018-05-21 13:42:40,268 SSTableReader.java:504 - Opening C:\Users\Michał\Downloads\apache-cassandra-3.11.2\data\data\system\sstable_activity-5a1ff267ace03f128563cfae6103c65e\mc-40-big (1,243KiB)
DEBUG [SSTableBatchOpen:1] 2018-05-21 13:42:40,268 SSTableReader.java:504 - Opening C:\Users\Michał\Downloads\apache-cassandra-3.11.2\data\data\system\sstable_activity-5a1ff267ace03f128563cfae6103c65e\mc-39-big (0,778KiB)
ERROR [SSTableBatchOpen:2] 2018-05-21 13:42:40,269 DebuggableThreadPoolExecutor.java:239 - Error in ThreadPoolExecutor
java.lang.NoClassDefFoundError: Could not initialize class org.apache.cassandra.cache.ChunkCache
at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:763) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:737) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:517) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:385) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableReader$3.run(SSTableReader.java:564) ~[apache-cassandra-3.11.2.jar:3.11.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_60]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) [apache-cassandra-3.11.2.jar:3.11.2]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_60]
(...)
ERROR [main] 2018-05-21 13:42:51,336 CassandraDaemon.java:708 - Exception encountered during startup
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.cassandra.cache.ChunkCache
at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:385) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.db.SystemKeyspace.forceBlockingFlush(SystemKeyspace.java:819) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.db.SystemKeyspace.removeTruncationRecord(SystemKeyspace.java:670) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.db.ColumnFamilyStore.invalidate(ColumnFamilyStore.java:553) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.db.ColumnFamilyStore.invalidate(ColumnFamilyStore.java:529) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.schema.LegacySchemaMigrator.lambda$unloadLegacySchemaTables$1(LegacySchemaMigrator.java:137) ~[apache-cassandra-3.11.2.jar:3.11.2]
at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_60]
at org.apache.cassandra.schema.LegacySchemaMigrator.unloadLegacySchemaTables(LegacySchemaMigrator.java:137) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.schema.LegacySchemaMigrator.migrate(LegacySchemaMigrator.java:83) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:256) [apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:602) [apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:691) [apache-cassandra-3.11.2.jar:3.11.2]
Caused by: java.util.concurrent.ExecutionException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.cassandra.cache.ChunkCache
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[na:1.8.0_60]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[na:1.8.0_60]
at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:381) ~[apache-cassandra-3.11.2.jar:3.11.2]
... 11 common frames omitted
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.cassandra.cache.ChunkCache
at org.apache.cassandra.io.sstable.format.big.BigTableWriter.<init>(BigTableWriter.java:64) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.big.BigFormat$WriterFactory.open(BigFormat.java:92) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.format.SSTableWriter.create(SSTableWriter.java:102) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.create(SimpleSSTableMultiWriter.java:119) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.db.compaction.AbstractCompactionStrategy.createSSTableMultiWriter(AbstractCompactionStrategy.java:587) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.db.compaction.CompactionStrategyManager.createSSTableMultiWriter(CompactionStrategyManager.java:1027) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.db.ColumnFamilyStore.createSSTableMultiWriter(ColumnFamilyStore.java:518) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.db.Memtable$FlushRunnable.createFlushWriter(Memtable.java:504) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.db.Memtable$FlushRunnable.<init>(Memtable.java:443) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.db.Memtable$FlushRunnable.<init>(Memtable.java:420) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.db.Memtable.createFlushRunnables(Memtable.java:307) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.db.Memtable.flushRunnables(Memtable.java:298) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.db.ColumnFamilyStore$Flush.flushMemtable(ColumnFamilyStore.java:1140) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1105) ~[apache-cassandra-3.11.2.jar:3.11.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_60]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) ~[apache-cassandra-3.11.2.jar:3.11.2]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_60]
Is this issue caused by an old JDK version, or rather by Cassandra itself? How can I address it?

After a Windows reboot, these errors no longer occur.
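For what it's worth, the innermost cause above is java.util.zip.ZipException: invalid distance too far back, thrown while the class loader inflates com.github.benmanes.caffeine.cache.LocalCacheFactory$WISWR out of caffeine-2.2.6.jar. That points at a corrupted jar on disk, or a transiently corrupted page in the OS file cache, rather than at the JDK or Cassandra themselves; the file-cache explanation would also fit the reboot clearing it. Below is a minimal, hypothetical sketch (the jar path is an assumption) that fully inflates every entry of a jar, so the same ZipException would surface outside Cassandra:

import java.io.InputStream;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class JarIntegrityCheck {
    public static void main(String[] args) throws Exception {
        // Assumed location of the bundled jar; pass a different path as args[0].
        String path = args.length > 0 ? args[0] : "lib/caffeine-2.2.6.jar";
        byte[] buf = new byte[8192];
        try (ZipFile zip = new ZipFile(path)) {
            Enumeration<? extends ZipEntry> entries = zip.entries();
            while (entries.hasMoreElements()) {
                ZipEntry entry = entries.nextElement();
                try (InputStream in = zip.getInputStream(entry)) {
                    // Reading to EOF forces full decompression of the entry,
                    // which is what trips "invalid distance too far back".
                    while (in.read(buf) != -1) { }
                } catch (Exception e) {
                    System.out.println("Corrupt entry " + entry.getName() + ": " + e);
                }
            }
        }
        System.out.println("Scan finished.");
    }
}

If the jar scans clean after the reboot, transient cache or disk corruption is the likelier explanation; if it still fails, re-extracting the Cassandra distribution replaces the damaged file.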

Related

ERROR PythonRDD.collectAndServe: Python worker exited unexpectedly (crashed)

I am trying to run a PySpark job, but it fails on the RDD collectAndServe method. I do not have any memory issues, and all the jars in my jars folder are up to date. The Python worker is crashing with the error below.
An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3) (10.32.157.249 executor 0): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:595)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:577)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:718)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:695)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:508)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1030)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2254)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:703)
... 29 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2454)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2403)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2402)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2402)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1160)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1160)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1160)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2642)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2584)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2573)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:938)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2214)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2235)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2254)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2279)
at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1030)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
at org.apache.spark.rdd.RDD.collect(RDD.scala:1029)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:180)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:595)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:577)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:718)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:695)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:508)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1030)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2254)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:703)
On the Spark master UI page, I am getting this error in the stderr logs:
22/10/31 21:43:25 INFO CoarseGrainedExecutorBackend: Got assigned task 3
22/10/31 21:43:25 INFO Executor: Running task 0.3 in stage 0.0 (TID 3)
22/10/31 21:43:25 INFO BlockManager: Found block rdd_3_0 locally
22/10/31 21:43:30 ERROR Executor: Exception in task 0.3 in stage 0.0 (TID 3)
org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:595)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:577)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:38)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:718)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:695)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:508)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
at scala.collection.TraversableOnce.to(TraversableOnce.scala:366)
at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358)
at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345)
at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1030)
at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2254)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:131)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:703)
... 29 more
22/10/31 21:43:30 INFO CoarseGrainedExecutorBackend: Driver commanded a shutdown
Environment:
Windows 10
Python version: 3.7.0
Java version: 1.8.0_333
Spark version: 3.2.1
I have tried upgrading to Python 3.8 and also updating the Java version, but neither has worked.

java.io.IOException: Filesystem closed

My Spark job:
val df = spark.sql(s"""""")
df.write.mode("append").json("hdfs://xxx-nn-ha/user/b_me/df")
Error:
2019-01-31 19:56:36 INFO CoarseGrainedExecutorBackend:54 - Driver commanded a shutdown
2019-01-31 19:56:36 INFO MemoryStore:54 - MemoryStore cleared
2019-01-31 19:56:36 INFO BlockManager:54 - BlockManager stopped
2019-01-31 19:56:36 INFO ShutdownHookManager:54 - Shutdown hook called
2019-01-31 19:56:36 ERROR Executor:91 - Exception in task 4120.0 in stage 5.0 (TID 5402)
java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:808)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:868)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:735)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at org.apache.hadoop.io.SequenceFile$Reader.readBlock(SequenceFile.java:2178)
at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2585)
at org.apache.hadoop.mapred.SequenceFileRecordReader.next(SequenceFileRecordReader.java:82)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:277)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:214)
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage12.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:369)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2019-01-31 19:56:36 INFO Executor:54 - Not reporting error to driver during JVM shutdown.
End of LogType:stdout
I got the java.io.IOException: Filesystem closed error. Why? I have no leads. Any ideas are welcome. Thanks.
UPDATE
There is a WARN:
2019-02-01 12:01:39 INFO YarnAllocator:54 - Driver requested a total number of 2007 executor(s).
2019-02-01 12:01:39 INFO ExecutorAllocationManager:54 - Requesting 968 new executors because tasks are backlogged (new desired total will be 2007)
2019-02-01 12:01:39 INFO ExecutorAllocationManager:54 - New executor 25 has registered (new total is 26)
2019-02-01 12:01:39 WARN ApplicationMaster:87 - Reporter thread fails 1 time(s) in a row.
org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Too many containers asked, 1365198
at org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.normalizeAndValidateRequests(RMServerUtils.java:128)
at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:511)
at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
at org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2206)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2202)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1714)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2202)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:101)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:79)
at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy22.allocate(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl.allocate(AMRMClientImpl.java:277)
at org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:268)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:556)
If you look closely at the DFSClient.checkOpen code, you will see the following:
void checkOpen() throws IOException {
  if (!clientRunning) {
    IOException result = new IOException("Filesystem closed");
    throw result;
  }
}
Let's find all accessors of the clientRunning field.
Only the close method really changes it. Let's take a look at it:
@Override
public synchronized void close() throws IOException {
  try {
    if (clientRunning) {
      closeAllFilesBeingWritten(false);
      clientRunning = false;
      getLeaseRenewer().closeClient(this);
      // close connections to the namenode
      closeConnectionToNamenode();
    }
  } finally {
    if (provider != null) {
      provider.close();
    }
  }
}
So the main problem in your job is that it tries to write although the FS is already closed.
Make sure you don't close your FileSystem before the job has finished. You can also increase the logging level to find where it gets closed.
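A common way to end up here is closing the handle returned by FileSystem.get(): it is a JVM-wide cached instance, shared with everything else in the same executor, including Spark's own record readers. A hypothetical sketch of the pitfall (the namenode URI and path are taken from the question):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FsClosePitfall {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://xxx-nn-ha"); // namenode URI from the question

        // Both calls return the SAME cached instance for this URI and user.
        FileSystem fsInUserCode = FileSystem.get(conf);
        FileSystem fsHeldElsewhere = FileSystem.get(conf);

        fsInUserCode.close(); // closes the shared instance for the whole JVM

        // Anyone still holding the instance (e.g. a Spark record reader in the
        // same executor) now fails in DFSClient.checkOpen:
        fsHeldElsewhere.exists(new Path("/user/b_me/df")); // IOException: Filesystem closed
    }
}

If you really need a FileSystem you can close safely, setting fs.hdfs.impl.disable.cache=true (for Spark, spark.hadoop.fs.hdfs.impl.disable.cache=true) hands each caller its own instance instead of the shared cached one.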

NegativeArraySizeException while training PredictionIO universal recommender

I am trying to deploy a PredictionIO system.
I am getting a NegativeArraySizeException during the training phase.
Help is appreciated.
The events I have pushed have entityType "user" and targetEntityType "item", as verified with
http://localhost:7070/events.json?accessKey=<MyAcccessKey>
[{
"eventId": "AAX2w8B2UFaxUYDlzyigBgAAAVgABV1uhz7ErglAtBA",
"event": "purchase",
"entityType": "user",
"entityId": "b571c84da7104d339a436b40d07ba59c",
"targetEntityType": "item",
"targetEntityId": "00572208a2e742f397f7e082aa40ae2e",
"properties": {},
"eventTime": "2016-10-26T08:05:01.422Z",
"creationTime": "2016-10-26T08:05:01.423Z"
}]
[INFO] [Engine] Extracting datasource params...
[INFO] [WorkflowUtils$] No 'name' is found. Default empty String will be used.
[INFO] [Engine] Datasource params: (,DataSourceParams(JuggernautRecommendor,List(purchase, view)))
[INFO] [Engine] Extracting preparator params...
[INFO] [Engine] Preparator params: (,Empty)
[INFO] [Engine] Extracting serving params...
[INFO] [Engine] Serving params: (,Empty)
[INFO] [Remoting] Starting remoting
[INFO] [Remoting] Remoting started; listening on addresses :[akka.tcp://sparkDriver#172.17.0.2:34162]
[WARN] [MetricsSystem] Using default name DAGScheduler for source because spark.app.id is not set.
[INFO] [Engine$] EngineWorkflow.train
[INFO] [Engine$] DataSource: com.juggernaut.DataSource#5c1b89ac
[INFO] [Engine$] Preparator: com.juggernaut.Preparator#2b79c8ff
[INFO] [Engine$] AlgorithmList: List(com.juggernaut.URAlgorithm#5d14e99e)
[INFO] [Engine$] Data sanity check is on.
[INFO] [Engine$] com.juggernaut.TrainingData does not support data sanity check. Skipping check.
[INFO] [Engine$] com.juggernaut.PreparedData does not support data sanity check. Skipping check.
[INFO] [URAlgorithm] Actions read now creating correlators
[ERROR] [Executor] Exception in task 0.0 in stage 29.0 (TID 20)
[WARN] [TaskSetManager] Lost task 0.0 in stage 29.0 (TID 20, localhost): java.lang.NegativeArraySizeException
at org.apache.mahout.math.DenseVector.<init>(DenseVector.java:57)
at org.apache.mahout.sparkbindings.SparkEngine$$anonfun$5.apply(SparkEngine.scala:78)
at org.apache.mahout.sparkbindings.SparkEngine$$anonfun$5.apply(SparkEngine.scala:77)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:706)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:706)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[ERROR] [TaskSetManager] Task 0 in stage 29.0 failed 1 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 29.0 failed 1 times, most recent failure: Lost task 0.0 in stage 29.0 (TID 20, localhost): java.lang.NegativeArraySizeException
at org.apache.mahout.math.DenseVector.<init>(DenseVector.java:57)
at org.apache.mahout.sparkbindings.SparkEngine$$anonfun$5.apply(SparkEngine.scala:78)
at org.apache.mahout.sparkbindings.SparkEngine$$anonfun$5.apply(SparkEngine.scala:77)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:706)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:706)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1822)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1942)
at org.apache.spark.rdd.RDD$$anonfun$reduce$1.apply(RDD.scala:1003)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
at org.apache.spark.rdd.RDD.reduce(RDD.scala:985)
at org.apache.mahout.sparkbindings.SparkEngine$.numNonZeroElementsPerColumn(SparkEngine.scala:86)
at org.apache.mahout.math.drm.CheckpointedOps.numNonZeroElementsPerColumn(CheckpointedOps.scala:37)
at org.apache.mahout.math.cf.SimilarityAnalysis$.sampleDownAndBinarize(SimilarityAnalysis.scala:286)
at org.apache.mahout.math.cf.SimilarityAnalysis$$anonfun$cooccurrences$1.apply(SimilarityAnalysis.scala:89)
at org.apache.mahout.math.cf.SimilarityAnalysis$$anonfun$cooccurrences$1.apply(SimilarityAnalysis.scala:84)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at org.apache.mahout.math.cf.SimilarityAnalysis$.cooccurrences(SimilarityAnalysis.scala:84)
at org.apache.mahout.math.cf.SimilarityAnalysis$.cooccurrencesIDSs(SimilarityAnalysis.scala:141)
at com.juggernaut.URAlgorithm.calcAll(URAlgorithm.scala:143)
at com.juggernaut.URAlgorithm.train(URAlgorithm.scala:117)
at com.juggernaut.URAlgorithm.train(URAlgorithm.scala:102)
at io.prediction.controller.P2LAlgorithm.trainBase(P2LAlgorithm.scala:46)
at io.prediction.controller.Engine$$anonfun$18.apply(Engine.scala:689)
at io.prediction.controller.Engine$$anonfun$18.apply(Engine.scala:689)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at io.prediction.controller.Engine$.train(Engine.scala:689)
at io.prediction.controller.Engine.train(Engine.scala:174)
at io.prediction.workflow.CoreWorkflow$.runTrain(CoreWorkflow.scala:65)
at io.prediction.workflow.CreateWorkflow$.main(CreateWorkflow.scala:247)
at io.prediction.workflow.CreateWorkflow.main(CreateWorkflow.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NegativeArraySizeException
at org.apache.mahout.math.DenseVector.<init>(DenseVector.java:57)
at org.apache.mahout.sparkbindings.SparkEngine$$anonfun$5.apply(SparkEngine.scala:78)
at org.apache.mahout.sparkbindings.SparkEngine$$anonfun$5.apply(SparkEngine.scala:77)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:706)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:706)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Every event type defined in engine.json must have at least one event in the data set.
https://groups.google.com/forum/#!topic/predictionio-user/FDGOY4DisCg
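The training log above shows eventNames List(purchase, view), and the failure is Mahout building a DenseVector with a negative size, which fits an event type that has no data behind it. A hedged sketch for checking each configured event type, assuming the event server's GET /events.json accepts the event and limit query parameters (the access key placeholder and event names are taken from the question and the log):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class EventTypeCheck {
    public static void main(String[] args) throws Exception {
        String accessKey = "<MyAcccessKey>"; // same placeholder as in the question
        String[] eventNames = {"purchase", "view"}; // from DataSourceParams in the log
        for (String event : eventNames) {
            URL url = new URL("http://localhost:7070/events.json?accessKey="
                    + accessKey + "&event=" + event + "&limit=1");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            int code = conn.getResponseCode();
            String body = "";
            if (code == 200) {
                try (InputStream in = conn.getInputStream();
                     Scanner s = new Scanner(in).useDelimiter("\\A")) {
                    body = s.hasNext() ? s.next() : "";
                }
            }
            // An empty result means no data exists for this event type.
            boolean hasData = code == 200 && !"[]".equals(body.trim());
            System.out.println(event + " -> "
                    + (hasData ? "ok" : "NO EVENTS (HTTP " + code + ")"));
        }
    }
}

Any event name that reports NO EVENTS should either get data or be removed from engine.json before training.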

Exception while executing Spark job in cluster mode

I have a simple Spark job, as below:
SparkConf conf = new SparkConf();
conf.setMaster("spark://v-in-spark-01.synygy.net:7077");
SparkContext sc = new SparkContext(conf);
JavaSparkContext context = new JavaSparkContext(sc);
List<Integer> data = Arrays.asList(1, 2, 3, 4, 5);
JavaRDD<Integer> distData = context.parallelize(data);
distData.foreach(MainClass::printValue);
sc.stop();
and a launcher app:
Process spark = new SparkLauncher()
    .setAppName("simplesparkjob")
    .setAppResource("file:///home/spark/programs/simplesparkjob-all.jar")
    .setMainClass("com.spmsoftware.main.MainClass")
    .setMaster("SPARK_MASTER:6066")
    .setSparkHome("SPARK_HOME")
    .setVerbose(true)
    .setDeployMode("cluster")
    .launch();
After executing the launcher, the job gets submitted properly and I can see it on the master UI, but I cannot see the output in stdout.
Instead, I am getting the following exception in stderr:
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:70)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:174)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:270)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult
at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:88)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$$anonfun$run$1.apply$mcV$sp(CoarseGrainedExecutorBackend.scala:188)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:71)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:70)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
... 4 more
Caused by: java.io.IOException: Failed to connect to /127.0.0.1:42460
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:228)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:179)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:197)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:191)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:187)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: /127.0.0.1:42460
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
... 1 more
What is this error? What am I missing?
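The innermost cause, Connection refused: /127.0.0.1:42460, means one Spark process was told to reach another at a loopback address, which cannot work across machines in cluster mode. A common culprit is a node whose hostname resolves to 127.0.0.1 in /etc/hosts, making Spark advertise a loopback address; a minimal diagnostic sketch to run on the master and on each worker:

import java.net.InetAddress;

public class HostnameCheck {
    public static void main(String[] args) throws Exception {
        InetAddress local = InetAddress.getLocalHost();
        // If this prints 127.0.0.1, other machines will be handed a loopback
        // address and fail with "Connection refused: /127.0.0.1:<port>".
        System.out.println(local.getHostName() + " -> " + local.getHostAddress());
    }
}

If it prints a loopback address, fixing /etc/hosts, or exporting SPARK_LOCAL_IP to the node's reachable address in conf/spark-env.sh on every node, is the usual remedy.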

Consuming from Kafka failed: Iterator is in failed state

I am getting an exception while consuming messages from Kafka.
org.springframework.messaging.MessagingException: Consuming from Kafka failed; nested exception is java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Iterator is in failed state
I have one consumer in the application context with one inbound channel adapter.
Consumer configuration in the application context:
<int-kafka:consumer-context id="consumerContext" consumer-timeout="4000" zookeeper-connect="zookeeperConnect">
  <int-kafka:consumer-configurations>
    <int-kafka:consumer-configuration group-id="GR1"
        value-decoder="valueDecoder" key-decoder="valueDecoder" max-messages="1000">
      <int-kafka:topic-filter pattern="SOME_TOPIC" streams="13"/>
    </int-kafka:consumer-configuration>
  </int-kafka:consumer-configurations>
</int-kafka:consumer-context>
and one inbound channel adapter:
<int-kafka:inbound-channel-adapter id="kafkaInboundChannelAdapter"
    kafka-consumer-context-ref="consumerContext" auto-startup="true" channel="kafka" group-id="GR1">
  <int:poller fixed-delay="100" time-unit="MILLISECONDS" />
</int-kafka:inbound-channel-adapter>
I am deploying my application on a GlassFish 3 server.
When the consumer starts working, I see the following exception in the application logs.
2015-01-20 12:51:48.218 UTC || ERROR || [task-scheduler-3 ] || [ErrorHandler] - Error processing message Consuming from Kafka failed; nested exception is java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Iterator is in failed state org.springframework.messaging.MessagingException: Consuming from Kafka failed; nested exception is java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Iterator is in failed state
at org.springframework.integration.kafka.support.ConsumerConfiguration.executeTasks(ConsumerConfiguration.java:110) ~[spring-integration-kafka-1.0.0.M1.jar:?]
at org.springframework.integration.kafka.support.ConsumerConfiguration.receive(ConsumerConfiguration.java:86) ~[spring-integration-kafka-1.0.0.M1.jar:?]
at org.springframework.integration.kafka.support.KafkaConsumerContext.receive(KafkaConsumerContext.java:56) ~[spring-integration-kafka-1.0.0.M1.jar:?]
at org.springframework.integration.kafka.inbound.KafkaHighLevelConsumerMessageSource.receive(KafkaHighLevelConsumerMessageSource.java:41) ~[spring-integration-kafka-1.0.0.M1.jar:?]
at org.springframework.integration.endpoint.SourcePollingChannelAdapter.receiveMessage(SourcePollingChannelAdapter.java:124) ~[spring-integration-core-4.0.4.RELEASE.jar:?]
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:192) ~[spring-integration-core-4.0.4.RELEASE.jar:?]
at org.springframework.integration.endpoint.AbstractPollingEndpoint.access$000(AbstractPollingEndpoint.java:55) ~[spring-integration-core-4.0.4.RELEASE.jar:?]
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:149) ~[spring-integration-core-4.0.4.RELEASE.jar:?]
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:146) ~[spring-integration-core-4.0.4.RELEASE.jar:?]
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller$1.run(AbstractPollingEndpoint.java:298) ~[spring-integration-core-4.0.4.RELEASE.jar:?]
at org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:52) [spring-integration-core-4.0.4.RELEASE.jar:?]
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50) [spring-core-4.1.1.RELEASE.jar:4.1.1.RELEASE]
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:49) [spring-integration-core-4.0.4.RELEASE.jar:?]
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller.run(AbstractPollingEndpoint.java:292) [spring-integration-core-4.0.4.RELEASE.jar:?]
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54) [spring-context-4.1.1.RELEASE.jar:4.1.1.RELEASE]
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81) [spring-context-4.1.1.RELEASE.jar:4.1.1.RELEASE]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [?:1.7.0_71]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_71]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178) [?:1.7.0_71]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292) [?:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_71] Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Iterator is in failed state
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:1.7.0_71]
at java.util.concurrent.FutureTask.get(FutureTask.java:188) ~[?:1.7.0_71]
at org.springframework.integration.kafka.support.ConsumerConfiguration.executeTasks(ConsumerConfiguration.java:97) ~[spring-integration-kafka-1.0.0.M1.jar:?]
... 22 more Caused by: java.lang.IllegalStateException: Iterator is in failed state
at kafka.utils.IteratorTemplate.hasNext(Unknown Source) ~[kafka_2.10-0.8.0.jar:0.8.0]
at kafka.utils.IteratorTemplate.next(Unknown Source) ~[kafka_2.10-0.8.0.jar:0.8.0]
at kafka.consumer.ConsumerIterator.next(Unknown Source) ~[kafka_2.10-0.8.0.jar:0.8.0]
at org.springframework.integration.kafka.support.ConsumerConfiguration$1.call(ConsumerConfiguration.java:67) ~[spring-integration-kafka-1.0.0.M1.jar:?]
at org.springframework.integration.kafka.support.ConsumerConfiguration$1.call(ConsumerConfiguration.java:61) ~[spring-integration-kafka-1.0.0.M1.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[?:1.7.0_71]
... 3 more 2015-01-20 12:51:48.218 UTC || ERROR || [task-scheduler-3 ] || [LoggingHandler] - org.springframework.messaging.MessagingException: Consuming from Kafka failed; nested exception is java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Iterator is in failed state
at org.springframework.integration.kafka.support.ConsumerConfiguration.executeTasks(ConsumerConfiguration.java:110)
at org.springframework.integration.kafka.support.ConsumerConfiguration.receive(ConsumerConfiguration.java:86)
at org.springframework.integration.kafka.support.KafkaConsumerContext.receive(KafkaConsumerContext.java:56)
at org.springframework.integration.kafka.inbound.KafkaHighLevelConsumerMessageSource.receive(KafkaHighLevelConsumerMessageSource.java:41)
at org.springframework.integration.endpoint.SourcePollingChannelAdapter.receiveMessage(SourcePollingChannelAdapter.java:124)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:192)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.access$000(AbstractPollingEndpoint.java:55)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:149)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:146)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller$1.run(AbstractPollingEndpoint.java:298)
at org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:52)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:49)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller.run(AbstractPollingEndpoint.java:292)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745) Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Iterator is in failed state
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at org.springframework.integration.kafka.support.ConsumerConfiguration.executeTasks(ConsumerConfiguration.java:97)
... 22 more Caused by: java.lang.IllegalStateException: Iterator is in failed state
at kafka.utils.IteratorTemplate.hasNext(Unknown Source)
at kafka.utils.IteratorTemplate.next(Unknown Source)
at kafka.consumer.ConsumerIterator.next(Unknown Source)
at org.springframework.integration.kafka.support.ConsumerConfiguration$1.call(ConsumerConfiguration.java:67)
at org.springframework.integration.kafka.support.ConsumerConfiguration$1.call(ConsumerConfiguration.java:61)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
... 3 more
I am using the following versions of Kafka and ZooKeeper:
kafka_2.9.2-0.8.1.1
zookeeper-3.4.6
http://www.springframework.org/schema/integration/kafka/spring-integration-kafka-1.0.xsd
glassfish3
It was working fine, but suddenly it started failing consistently.
Another consumer in another Spring XD application, reading from the same Kafka server and ZooKeeper, is working fine.
The group of that other consumer, running in the other application, is completely different.
Looks like this is an issue from here: https://jira.spring.io/browse/INTEXT-112
Would you mind trying your application with the M2 release?
