Cassandra: Exception on startup

I am getting the following error on starting Cassandra:
ERROR [main] 2018-04-09 18:19:25,014 CassandraDaemon.java:675 -
Exception encountered during startup
java.lang.AbstractMethodError: org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
at javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:150) ~[na:1.8.0_162]
at javax.management.remote.rmi.RMIJRMPServerImpl.export(RMIJRMPServerImpl.java:135) ~[na:1.8.0_162]
at javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:405) ~[na:1.8.0_162]
at org.apache.cassandra.utils.JMXServerUtils.createJMXServer(JMXServerUtils.java:104) ~[main/:na]
at org.apache.cassandra.service.CassandraDaemon.maybeInitJmx(CassandraDaemon.java:139) [main/:na]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:183) [main/:na]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569) [main/:na]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:658) [main/:na]
Can someone please tell me where I could be going wrong? I haven't made any changes since the last successful run, so I'm confused.
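No fix was recorded in the thread, but one hedged first step (my assumption, not from the question): this particular AbstractMethodError in JMXServerUtils$Exporter is commonly reported after a JDK minor-version upgrade (the trace shows 1.8.0_162; see CASSANDRA-14173), so it is worth confirming which JDK Cassandra is actually running under, even if you "haven't changed anything" — package managers can update Java silently.

```shell
# Capture the JDK version banner Cassandra would see on this PATH.
# If it reports 8u161/8u162 and the Cassandra build predates the
# CASSANDRA-14173 fix, that mismatch is a likely culprit.
if command -v java >/dev/null 2>&1; then
  JAVA_BANNER=$(java -version 2>&1 | head -n 1)
else
  JAVA_BANNER="java not found on PATH"
fi
echo "$JAVA_BANNER"
```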

Related

ERROR TransportClientFactory: Exception while bootstrapping client

I have a PySpark job that works fine on a small cluster, but on a large one it starts logging many of the following errors within the first few minutes of startup. Any idea how I can solve it? This is with PySpark 2.2.0 on Mesos.
17/09/29 18:54:26 INFO Executor: Running task 5717.0 in stage 0.0 (TID 5717)
17/09/29 18:54:26 INFO CoarseGrainedExecutorBackend: Got assigned task 5813
17/09/29 18:54:26 INFO Executor: Running task 5813.0 in stage 0.0 (TID 5813)
17/09/29 18:54:26 INFO CoarseGrainedExecutorBackend: Got assigned task 5909
17/09/29 18:54:26 INFO Executor: Running task 5909.0 in stage 0.0 (TID 5909)
17/09/29 18:54:56 ERROR TransportClientFactory: Exception while bootstrapping client after 30001 ms
java.lang.RuntimeException: java.util.concurrent.TimeoutException: Timeout waiting for task.
at org.spark_project.guava.base.Throwables.propagate(Throwables.java:160)
at org.apache.spark.network.client.TransportClient.sendRpcSync(TransportClient.java:275)
at org.apache.spark.network.sasl.SaslClientBootstrap.doBootstrap(SaslClientBootstrap.java:70)
at org.apache.spark.network.crypto.AuthClientBootstrap.doSaslAuth(AuthClientBootstrap.java:117)
at org.apache.spark.network.crypto.AuthClientBootstrap.doBootstrap(AuthClientBootstrap.java:76)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:244)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:182)
at org.apache.spark.rpc.netty.NettyRpcEnv.downloadClient(NettyRpcEnv.scala:366)
at org.apache.spark.rpc.netty.NettyRpcEnv.openChannel(NettyRpcEnv.scala:332)
at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:654)
at org.apache.spark.util.Utils$.fetchFile(Utils.scala:467)
at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$3.apply(Executor.scala:684)
at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$3.apply(Executor.scala:681)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:681)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:308)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.TimeoutException: Timeout waiting for task.
at org.spark_project.guava.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
at org.spark_project.guava.util.concurrent.AbstractFuture.get(AbstractFuture.java:96)
at org.apache.spark.network.client.TransportClient.sendRpcSync(TransportClient.java:271)
... 23 more
17/09/29 18:54:56 ERROR TransportResponseHandler: Still have 1 requests outstanding when connection from /10.0.1.25:37314 is closed
17/09/29 18:54:56 INFO Executor: Fetching spark://10.0.1.25:37314/files/djinn.spark.zip with timestamp 1506711239350
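The thread records no accepted fix. One hedged starting point (my assumption, not from the thread): the SASL bootstrap giving up after roughly 30 seconds suggests the driver is overwhelmed serving file fetches to thousands of executors at once, so raising Spark's network timeout at submit time may give stragglers room to connect. `spark.network.timeout` and `spark.executor.heartbeatInterval` are documented Spark settings; the values and the job name below are illustrative guesses to tune, not recommendations from the thread.

```shell
# Untested sketch: widen RPC/fetch timeouts for a large cluster.
# The heartbeat interval must stay well below spark.network.timeout.
spark-submit \
  --conf spark.network.timeout=600s \
  --conf spark.executor.heartbeatInterval=60s \
  your_job.py
```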

Repeated AccessControlException: Missing permission ("java.lang.RuntimePermission" "exitVM.1") errors on DataStax 5.0.1 Spark nodes

I am seeing the following error repeatedly in the logs of several of my DataStax nodes. I haven't been able to find any references explaining what causes it or how to prevent it. Why is it trying to exit the VM?
ERROR [dispatcher-event-loop-3] 2017-04-13 10:39:47,540 Logging.scala:74 - All masters are unresponsive! Giving up.
ERROR [dispatcher-event-loop-3] 2017-04-13 10:39:47,540 Logging.scala:95 - Uncaught exception in thread Thread[dispatcher-event-loop-3,5,main]
java.security.AccessControlException: Missing permission ("java.lang.RuntimePermission" "exitVM.1")
at com.datastax.bdp.util.DseJavaSecurityManager.verify(DseJavaSecurityManager.java:36) ~[dse-core-5.0.1.jar:5.0.1]
at com.datastax.bdp.util.DseJavaSecurityManager.checkPermission(DseJavaSecurityManager.java:44) ~[dse-core-5.0.1.jar:5.0.1]
at java.lang.SecurityManager.checkExit(SecurityManager.java:761) ~[na:1.8.0_101]
at java.lang.Runtime.exit(Runtime.java:107) ~[na:1.8.0_101]
at java.lang.System.exit(System.java:971) ~[na:1.8.0_101]
at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$reregisterWithMaster$1.apply$mcV$sp(Worker.scala:300) ~[spark-core_2.10-1.6.1.2.jar:5.0.1]
at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1163) ~[spark-core_2.10-1.6.1.2.jar:1.6.1.2]
at org.apache.spark.deploy.worker.Worker.org$apache$spark$deploy$worker$Worker$$reregisterWithMaster(Worker.scala:230) [spark-core_2.10-1.6.1.2.jar:5.0.1]
at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:539) [spark-core_2.10-1.6.1.2.jar:5.0.1]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:166) [scala-library-2.10.6.jar:na]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:166) [scala-library-2.10.6.jar:na]
at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116) [spark-core_2.10-1.6.1.2.jar:1.6.1.2]
at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) [spark-core_2.10-1.6.1.2.jar:1.6.1.2]
at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) [spark-core_2.10-1.6.1.2.jar:1.6.1.2]
at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) [spark-core_2.10-1.6.1.2.jar:1.6.1.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
ERROR [dispatcher-event-loop-3] 2017-04-13 10:39:47,541 Logging.scala:95 - Ignoring error

How to understand the YARN appattempt log and diagnostics?

I am running a Spark application on a YARN cluster (on AWS EMR). The application seems to have been killed, and I want to find the cause. I am trying to understand the YARN information given in the following screen.
The diagnostic line in the screen seems to show that YARN killed the app because of the memory limit:
Diagnostics: Container [pid=1540,containerID=container_1488651686158_0012_02_000001] is running beyond physical memory limits. Current usage: 1.6 GB of 1.4 GB physical memory used; 3.6 GB of 6.9 GB virtual memory used. Killing container.
However, the appattempt log shows a completely different exception, something related to I/O and the network. My question is: should I trust the diagnostic in the screen or the appattempt log? Did the I/O exception cause the kill, or did the out-of-memory kill cause the I/O exception in the appattempt log? Is there another log or diagnostic I should look at? Thanks.
17/03/04 21:59:02 ERROR Utils: Uncaught exception in thread task-result-getter-0
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:202)
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:190)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:190)
at org.apache.spark.network.BlockTransferService.fetchBlockSync(BlockTransferService.scala:104)
at org.apache.spark.storage.BlockManager.getRemoteBytes(BlockManager.scala:579)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:82)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:62)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Exception in thread "task-result-getter-0" java.lang.Error: java.lang.InterruptedException
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1148)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:202)
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:190)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:190)
at org.apache.spark.network.BlockTransferService.fetchBlockSync(BlockTransferService.scala:104)
at org.apache.spark.storage.BlockManager.getRemoteBytes(BlockManager.scala:579)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:82)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:62)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
... 2 more
17/03/04 21:59:02 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/03/04 21:59:02 ERROR TransportResponseHandler: Still have 1 requests outstanding when connection from ip-172-31-9-207.ec2.internal/172.31.9.207:38437 is closed
17/03/04 21:59:02 INFO RetryingBlockFetcher: Retrying fetch (1/3) for 1 outstanding blocks after 5000 ms
17/03/04 21:59:02 ERROR DiskBlockManager: Exception while deleting local spark dir: /mnt/yarn/usercache/hadoop/appcache/application_1488651686158_0012/blockmgr-941a13d8-1b31-4347-bdec-180125b6f4ca
java.io.IOException: Failed to delete: /mnt/yarn/usercache/hadoop/appcache/application_1488651686158_0012/blockmgr-941a13d8-1b31-4347-bdec-180125b6f4ca
at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:1010)
at org.apache.spark.storage.DiskBlockManager$$anonfun$org$apache$spark$storage$DiskBlockManager$$doStop$1.apply(DiskBlockManager.scala:169)
at org.apache.spark.storage.DiskBlockManager$$anonfun$org$apache$spark$storage$DiskBlockManager$$doStop$1.apply(DiskBlockManager.scala:165)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at org.apache.spark.storage.DiskBlockManager.org$apache$spark$storage$DiskBlockManager$$doStop(DiskBlockManager.scala:165)
at org.apache.spark.storage.DiskBlockManager.stop(DiskBlockManager.scala:160)
at org.apache.spark.storage.BlockManager.stop(BlockManager.scala:1361)
at org.apache.spark.SparkEnv.stop(SparkEnv.scala:89)
at org.apache.spark.SparkContext$$anonfun$stop$11.apply$mcV$sp(SparkContext.scala:1842)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1283)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1841)
at org.apache.spark.SparkContext$$anonfun$2.apply$mcV$sp(SparkContext.scala:581)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
17/03/04 21:59:02 INFO MemoryStore: MemoryStore cleared
17/03/04 21:59:02 INFO BlockManager: BlockManager stopped
17/03/04 21:59:02 INFO BlockManagerMaster: BlockManagerMaster stopped
17/03/04 21:59:02 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/03/04 21:59:02 ERROR Utils: Uncaught exception in thread Thread-3
java.lang.NoClassDefFoundError: Could not initialize class java.nio.file.FileSystems$DefaultFileSystemHolder
at java.nio.file.FileSystems.getDefault(FileSystems.java:176)
at java.nio.file.Paths.get(Paths.java:138)
at org.apache.spark.util.Utils$.isSymlink(Utils.scala:1021)
at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:991)
at org.apache.spark.SparkEnv.stop(SparkEnv.scala:102)
at org.apache.spark.SparkContext$$anonfun$stop$11.apply$mcV$sp(SparkContext.scala:1842)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1283)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1841)
at org.apache.spark.SparkContext$$anonfun$2.apply$mcV$sp(SparkContext.scala:581)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
17/03/04 21:59:02 WARN ShutdownHookManager: ShutdownHook '$anon$2' failed, java.lang.NoClassDefFoundError: Could not initialize class java.nio.file.FileSystems$DefaultFileSystemHolder
java.lang.NoClassDefFoundError: Could not initialize class java.nio.file.FileSystems$DefaultFileSystemHolder
at java.nio.file.FileSystems.getDefault(FileSystems.java:176)
at java.nio.file.Paths.get(Paths.java:138)
at org.apache.spark.util.Utils$.isSymlink(Utils.scala:1021)
at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:991)
at org.apache.spark.SparkEnv.stop(SparkEnv.scala:102)
at org.apache.spark.SparkContext$$anonfun$stop$11.apply$mcV$sp(SparkContext.scala:1842)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1283)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1841)
at org.apache.spark.SparkContext$$anonfun$2.apply$mcV$sp(SparkContext.scala:581)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
The information in your screenshot is the most relevant. Your ApplicationMaster container ran out of memory. You need to increase yarn.app.mapreduce.am.resource.mb, which is set in mapred-site.xml. I recommend a value of 2000, since that will usually accommodate running Spark and MapReduce applications at scale.
The container was killed (it exceeded its physical memory limit), so any attempt to reach it afterwards fails.
YARN is fine for an overall view of the process, but you should prefer the Spark history server for a closer analysis of your job (check for unbalanced memory usage there).
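The property change the first answer describes would look like this in mapred-site.xml (the 2000 MB value is the answerer's recommendation; whether this property or a Spark-specific AM setting applies depends on how the job is submitted):

```xml
<!-- mapred-site.xml: memory allotted to the ApplicationMaster container -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>2000</value>
</property>
```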

Connection refused while using sstableloader to stream data to Cassandra

I am trying to stream some sstables to a Cassandra cluster using the sstableloader utility. I am getting a streaming error. Here is the stack trace.
Established connection to initial hosts
Opening sstables and calculating sections to stream
18:05:04.058 [main] DEBUG o.a.c.i.s.m.MetadataSerializer - Load metadata for /path/new/xyz/search/xyz-search-ka-1
18:05:04.073 [main] INFO o.a.c.io.sstable.SSTableReader - Opening /path/new/xyz/new/xyz_search/search/xyz_search-search-ka-1 (330768 bytes)
Streaming relevant part of /path/new/xyz/xyz_search/search/xyz_search-search-ka-1-Data.db to [/10.XXX.XXX.XXX, /10.XXX.XXX.XXX, /10.XXX.XXX.XXX, /10.XXX.XXX.XXX, /10.XXX.XXX.XXX]
18:05:04.411 [main] INFO o.a.c.streaming.StreamResultFuture - [Stream #ed3a0cd0-fd25-11e5-8509-63e9961cf787] Executing streaming plan for Bulk Load
Streaming relevant part of /path/xyz-search-ka-1-Data.db to [/10.XXX.XXX.XXX, /10.XXX.XXX.XXX, /10.XXX.XXX.XXX, /10.XXX.XXX.XXX, /10.XXX.XXX.XXX]
17:22:44.175 [main] INFO o.a.c.streaming.StreamResultFuture - [Stream #0327a9e0-fd20-11e5-b350-63e9961cf787] Executing streaming plan for Bulk Load
17:22:44.177 [StreamConnectionEstablisher:1] INFO o.a.c.streaming.StreamSession - [Stream #0327a9e0-fd20-11e5-b350-63e9961cf787] Starting streaming to /10.XX.XX.XX
17:22:44.177 [StreamConnectionEstablisher:1] DEBUG o.a.c.streaming.ConnectionHandler - [Stream #0327a9e0-fd20-11e5-b350-63e9961cf787] Sending stream init for incoming stream
17:22:44.183 [StreamConnectionEstablisher:2] INFO o.a.c.streaming.StreamSession - [Stream #0327a9e0-fd20-11e5-b350-63e9961cf787] Starting streaming to /10.XX.XX.XX
17:22:44.183 [StreamConnectionEstablisher:2] DEBUG o.a.c.streaming.ConnectionHandler - [Stream #0327a9e0-fd20-11e5-b350-63e9961cf787] Sending stream init for incoming stream
17:23:47.191 [StreamConnectionEstablisher:2] ERROR o.a.c.streaming.StreamSession - [Stream #0327a9e0-fd20-11e5-b350-63e9961cf787] Streaming error occurred
java.net.ConnectException: Connection timed out
at sun.nio.ch.Net.connect0(Native Method) ~[na:1.8.0_45]
at sun.nio.ch.Net.connect(Net.java:458) ~[na:1.8.0_45]
at sun.nio.ch.Net.connect(Net.java:450) ~[na:1.8.0_45]
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648) ~[na:1.8.0_45]
at java.nio.channels.SocketChannel.open(SocketChannel.java:189) ~[na:1.8.0_45]
at org.apache.cassandra.tools.BulkLoadConnectionFactory.createConnection(BulkLoadConnectionFactory.java:62) ~[cassandra-all-2.1.6.jar:2.1.6]
at org.apache.cassandra.streaming.StreamSession.createConnection(StreamSession.java:236) ~[cassandra-all-2.1.6.jar:2.1.6]
at org.apache.cassandra.streaming.ConnectionHandler.initiate(ConnectionHandler.java:79) ~[cassandra-all-2.1.6.jar:2.1.6]
at org.apache.cassandra.streaming.StreamSession.start(StreamSession.java:223) ~[cassandra-all-2.1.6.jar:2.1.6]
at org.apache.cassandra.streaming.StreamCoordinator$StreamSessionConnector.run(StreamCoordinator.java:208) [cassandra-all-2.1.6.jar:2.1.6]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
17:23:47.202 [StreamConnectionEstablisher:2] DEBUG o.a.c.streaming.ConnectionHandler - [Stream #0327a9e0-fd20-11e5-b350-63e9961cf787] Closing stream connection handler on /10.XXX.XXX.XXX
17:23:47.205 [StreamConnectionEstablisher:1] ERROR o.a.c.streaming.StreamSession - [Stream #0327a9e0-fd20-11e5-b350-63e9961cf787] Streaming error occurred
java.net.ConnectException: Connection timed out
at sun.nio.ch.Net.connect0(Native Method) ~[na:1.8.0_45]
at sun.nio.ch.Net.connect(Net.java:458) ~[na:1.8.0_45]
at sun.nio.ch.Net.connect(Net.java:450) ~[na:1.8.0_45]
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648) ~[na:1.8.0_45]
at java.nio.channels.SocketChannel.open(SocketChannel.java:189) ~[na:1.8.0_45]
at org.apache.cassandra.tools.BulkLoadConnectionFactory.createConnection(BulkLoadConnectionFactory.java:62) ~[cassandra-all-2.1.6.jar:2.1.6]
at org.apache.cassandra.streaming.StreamSession.createConnection(StreamSession.java:236) ~[cassandra-all-2.1.6.jar:2.1.6]
at org.apache.cassandra.streaming.ConnectionHandler.initiate(ConnectionHandler.java:79) ~[cassandra-all-2.1.6.jar:2.1.6]
at org.apache.cassandra.streaming.StreamSession.start(StreamSession.java:223) ~[cassandra-all-2.1.6.jar:2.1.6]
at org.apache.cassandra.streaming.StreamCoordinator$StreamSessionConnector.run(StreamCoordinator.java:208) [cassandra-all-2.1.6.jar:2.1.6]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Also, the machine where I am running sstableloader is outside the Cassandra cluster.
Thanks
After debugging a little more, I found that sstableloader also uses port 7000 while streaming sstables to the Cassandra cluster. My local machine did not have access to port 7000 on the cluster machines, which is why I was getting the connection-timed-out exception.
Anyone who encounters this: make sure the machine from which you run sstableloader has access to ports 7000, 9042, and 9160 on all of the Cassandra nodes you are trying to stream to.
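The port advice above can be turned into a quick reachability probe before starting a load. This is a sketch: the node address 10.0.0.1 is a placeholder (not from the question) to be replaced with your cluster's addresses, and it uses bash's /dev/tcp so no extra tools are needed.

```shell
# Probe the ports sstableloader needs on each target node.
# An unroutable or firewalled port shows up as "blocked" after a 2 s timeout.
NODES="10.0.0.1"          # placeholder: list your Cassandra nodes here
PORTS="7000 9042 9160"    # storage, native transport, Thrift
RESULT=""
for node in $NODES; do
  for port in $PORTS; do
    if timeout 2 bash -c "exec 3<>/dev/tcp/$node/$port" 2>/dev/null; then
      RESULT="$RESULT $node:$port=open"
    else
      RESULT="$RESULT $node:$port=blocked"
    fi
  done
done
echo "$RESULT"
```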
DEBUG o.a.c.streaming.ConnectionHandler - [Stream #0327a9e0-fd20-11e5-b350-63e9961cf787] Closing stream connection handler on /10.XXX.XXX.XXX
Hint: I suspect the machine 10.xxx.xxx.xxx is under heavy load. It is worth checking the /var/log/cassandra/system.log file on that machine to narrow down the root cause.

Cassandra exceptions during repair

I am currently having issues with Cassandra when running nodetool repair.
I ran a nodetool repair on each of our Cassandra nodes and came across the exception mentioned below.
After doing some reading I ran "nodetool scrub" and "sstablescrub" with no success; I am still seeing the same errors when running nodetool repair.
We are using Cassandra version 2.0.1.
Has anyone else seen this problem?
ERROR [AntiEntropySessions:3] 2013-11-26 15:25:03,315 RepairSession.java (line 278) [repair #f9f86700-56f1-11e3-9885-5938b4e97c9c] session completed with the following error
org.apache.cassandra.exceptions.RepairException: [repair #f9f86700-56f1-11e3-9885-5938b4e97c9c on <keyspace>/<table-1>, (-3566327001497837731,-3559225618918749690]] Validation failed in /<ipaddress-1>
at org.apache.cassandra.repair.RepairSession.validationComplete(RepairSession.java:152)
at org.apache.cassandra.service.ActiveRepairService.handleMessage(ActiveRepairService.java:188)
at org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:59)
at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
ERROR [AntiEntropySessions:3] 2013-11-26 15:25:03,315 CassandraDaemon.java (line 185) Exception in thread Thread[AntiEntropySessions:3,5,RMI Runtime]
java.lang.RuntimeException: org.apache.cassandra.exceptions.RepairException: [repair #f9f86700-56f1-11e3-9885-5938b4e97c9c on <keyspace>/<table-1>, (-3566327001497837731,-3559225618918749690]] Validation failed in /<ipaddress-1>
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.cassandra.exceptions.RepairException: [repair #f9f86700-56f1-11e3-9885-5938b4e97c9c on <keyspace>/<table-1>, (-3566327001497837731,-3559225618918749690]] Validation failed in /<ipaddress-1>
at org.apache.cassandra.repair.RepairSession.validationComplete(RepairSession.java:152)
at org.apache.cassandra.service.ActiveRepairService.handleMessage(ActiveRepairService.java:188)
at org.apache.cassandra.repair.RepairMessageVerbHandler.doVerb(RepairMessageVerbHandler.java:59)
at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56)
... 3 more
INFO [AntiEntropySessions:4] 2013-11-26 15:25:03,319 RepairSession.java (line 236) [repair #fa075b20-56f1-11e3-9885-5938b4e97c9c] new session: will sync /<ipaddress1>, /<ipaddress2>, /<ipaddress3> on range (2637939872511762724,2642446772090452246] for <keyspace>.[<table-1>, <table-2>, <table-3>, <table-4>, <table-5>]
INFO [AntiEntropySessions:4] 2013-11-26 15:25:03,319 RepairJob.java (line 116) [repair #fa075b20-56f1-11e3-9885-5938b4e97c9c] requesting merkle trees for <table-2> (to [/<ipaddress1>, <ipaddress2>, <ipaddress3>])
ERROR [ValidationExecutor:5] 2013-11-26 15:25:03,353 Validator.java (line 242) Failed creating a merkle tree for [repair #f9f86700-56f1-11e3-9885-5938b4e97c9c on <keyspace>/<table-1>, (-3566327001497837731,-3559225618918749690]], /<ipaddress-2> (see log for details)
ERROR [ValidationExecutor:5] 2013-11-26 15:25:03,353 CassandraDaemon.java (line 185) Exception in thread Thread[ValidationExecutor:5,1,main]
java.lang.AssertionError
at org.apache.cassandra.db.compaction.PrecompactedRow.update(PrecompactedRow.java:171)
at org.apache.cassandra.repair.Validator.rowHash(Validator.java:198)
at org.apache.cassandra.repair.Validator.add(Validator.java:151)
at org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:798)
at org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:60)
at org.apache.cassandra.db.compaction.CompactionManager$8.call(CompactionManager.java:395)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Depending on your data model, you may be seeing the effects of CASSANDRA-6152. See this comment for details:
https://issues.apache.org/jira/browse/CASSANDRA-6152?focusedCommentId=13793977&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13793977
Regardless, you should upgrade to 2.0.3 immediately, as there are several other fairly severe bugs lurking in your current version.
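A hedged sketch of what that upgrade looks like as a rolling step on one node (service names and install steps vary by platform and packaging; these commands are illustrative, not from the answer):

```shell
nodetool drain                 # flush memtables and stop accepting writes
sudo service cassandra stop
# ... install the 2.0.3 binaries/packages here ...
sudo service cassandra start
nodetool version               # confirm the new version before the next node
```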
