Cassandra compaction fails: IOError IOException Map failed

We have been running Cassandra 1.0.2 in production for many months, with minimal trouble. Lately, we have started to see consistent failures in all nodes, with this error:
ERROR [CompactionExecutor:199] 2013-03-02 00:21:26,179 AbstractCassandraDaemon.java (line 133) Fatal exception in thread Thread[CompactionExecutor:199,1,RMI Runtime]
java.io.IOError: java.io.IOException: Map failed
at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:225)
at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.complete(MmappedSegmentedFile.java:202)
at org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:308)
at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:174)
at org.apache.cassandra.db.compaction.CompactionManager$4.call(CompactionManager.java:275)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:758)
at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:217)
... 9 more
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:755)
... 10 more
INFO [Thread-2] 2013-03-02 00:21:26,222 MessagingService.java (line 488) Shutting down MessageService...
The node dies after this. We remove the oldest data files and start the node again. The node runs for a while, then dies again. This happens for all eight nodes in our ring.
We are using the default compaction strategy (size tiered) and we think it is the best choice for our problem domain.
Q: Why is this happening and what should we do to fix it?
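One commonly cited cause for this pattern (not confirmed by the logs above, so treat it as an assumption) is that the JVM has hit the kernel's limit on memory-mapped regions: Cassandra mmaps its SSTable segments, and OutOfMemoryError: Map failed is what FileChannel.map throws when the underlying mmap call fails. A hedged sketch of how to check and raise that limit on each node:
# Check the kernel's per-process limit on memory-mapped regions
sysctl vm.max_map_count                 # 65530 by default on most Linux kernels
# Count how many mappings the Cassandra JVM currently holds
# (<cassandra-pid> is a placeholder for the node's actual PID)
wc -l /proc/<cassandra-pid>/maps
# Raise the limit; 1048575 is an example value, not a tuned recommendation
sudo sysctl -w vm.max_map_count=1048575
echo 'vm.max_map_count = 1048575' | sudo tee -a /etc/sysctl.conf
# Alternative workaround: stop mmapping data files entirely by setting
#   disk_access_mode: standard
# in cassandra.yaml and restarting the node.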

Related

Spark job fails: storage.DiskBlockObjectWriter: Uncaught exception while reverting partial writes to file

I have a Spark (1.4.1) application, running on Yarn, that fails with the following executor log entry:
16/07/21 23:09:08 ERROR executor.CoarseGrainedExecutorBackend: Driver 9.4.136.20:55995 disassociated! Shutting down.
16/07/21 23:09:08 ERROR storage.DiskBlockObjectWriter: Uncaught exception while reverting partial writes to file /dfs1/hadoop/yarn/local/usercache/mitchus/appcache/application_1465987751317_1172/blockmgr-f367f43b-f4c8-4faf-a829-530da30fb040/1c/temp_shuffle_581adb36-1561-4db8-a556-c4ac0e6400ed
java.io.FileNotFoundException: /dfs1/hadoop/yarn/local/usercache/mitchus/appcache/application_1465987751317_1172/blockmgr-f367f43b-f4c8-4faf-a829-530da30fb040/1c/temp_shuffle_581adb36-1561-4db8-a556-c4ac0e6400ed (No such file or directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at org.apache.spark.storage.DiskBlockObjectWriter.revertPartialWritesAndClose(BlockObjectWriter.scala:189)
at org.apache.spark.util.collection.ExternalSorter.spillToMergeableFile(ExternalSorter.scala:328)
at org.apache.spark.util.collection.ExternalSorter.spill(ExternalSorter.scala:257)
at org.apache.spark.util.collection.ExternalSorter.spill(ExternalSorter.scala:95)
at org.apache.spark.util.collection.Spillable$class.maybeSpill(Spillable.scala:83)
at org.apache.spark.util.collection.ExternalSorter.maybeSpill(ExternalSorter.scala:95)
at org.apache.spark.util.collection.ExternalSorter.maybeSpillCollection(ExternalSorter.scala:240)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:220)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:62)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Any clues as to what might have gone wrong?
The cause is that the temp shuffle file has been deleted. There are several possible reasons; the one I ran into was that another executor had been killed by YARN. When an executor is killed, a shutdown signal is sent to the other executors, and ShutdownHookManager then deletes all the temp files that were registered with it. That is why you see the error, so check whether the logs show ShutdownHookManager being invoked.
You can also try increasing spark.yarn.executor.memoryOverhead.
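For reference, a hedged example of raising that overhead at submit time (2048 MB is an arbitrary example, and the class and jar names are placeholders):
# Give each executor's YARN container more headroom so YARN is less likely to
# kill it, which in turn avoids the shutdown-hook cleanup described above.
spark-submit \
  --master yarn-cluster \
  --conf spark.yarn.executor.memoryOverhead=2048 \
  --class com.example.MyJob \
  my-job.jar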

runtime exception during nutch generate

I'm trying to run nutch for the first time and while executing
/bin/nutch generate -topN 5
I get the following exception:
GeneratorJob: starting at 2016-02-13 21:01:42
GeneratorJob: Selecting best-scoring urls due for fetch.
GeneratorJob: starting
GeneratorJob: filtering: true
GeneratorJob: normalizing: true
GeneratorJob: topN: 5
GeneratorJob: java.lang.RuntimeException: job failed: name=apache-nutch-2.3.1.jar, jobid=job_local1061440919_0001
at org.apache.nutch.util.NutchJob.waitForCompletion(NutchJob.java:120)
at org.apache.nutch.crawl.GeneratorJob.run(GeneratorJob.java:227)
at org.apache.nutch.crawl.GeneratorJob.generate(GeneratorJob.java:256)
at org.apache.nutch.crawl.GeneratorJob.run(GeneratorJob.java:322)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.crawl.GeneratorJob.main(GeneratorJob.java:330)
Here is the stacktrace from hadoop.log:
2016-02-13 21:01:44,541 ERROR mapreduce.GoraRecordReader - Error reading Gora records: null
2016-02-13 21:01:44,557 WARN mapred.LocalJobRunner - job_local1061440919_0001
java.lang.Exception: java.lang.RuntimeException: java.util.NoSuchElementException
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.RuntimeException: java.util.NoSuchElementException
at org.apache.gora.mapreduce.GoraRecordReader.nextKeyValue(GoraRecordReader.java:122)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:533)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.NoSuchElementException
at java.util.concurrent.ConcurrentSkipListMap.firstKey(ConcurrentSkipListMap.java:2036)
at org.apache.gora.memory.store.MemStore.execute(MemStore.java:128)
at org.apache.gora.query.impl.QueryBase.execute(QueryBase.java:73)
at org.apache.gora.mapreduce.GoraRecordReader.executeQuery(GoraRecordReader.java:67)
at org.apache.gora.mapreduce.GoraRecordReader.nextKeyValue(GoraRecordReader.java:109)
... 12 more
I've been following the tutorial here: https://github.com/renepickhardt/metalcon/wiki/simpleNutchSolrSetup for setting up nutch.
I've seen a few posts on Stack Overflow and in the Nutch archives with similar exceptions; they suggested I might be running out of disk space in my /tmp directory, but /tmp only holds about 8 MB of data.
Other than this, I'm clueless about what is causing this exception.
What could be the cause of this exception?
I'm using Nutch 2.3.1 with HBase 1.1.3 as the datastore, running on Ubuntu 15.10.
Thanks
Looking at the Hadoop log, I think you are using MemStore rather than HBaseStore. Did you configure gora.properties?
Copied from my comment :)
1) You must configure gora.properties (see the sketch below).
2) Also, whatever you have behind Gora (MongoDB, HBase, Cassandra, etc.) may not be responding, which is why Nutch sits in waitForCompletion, so make sure it is up and running.
Make sure you kill old defunct processes (including old Java/Nutch processes) with kill -9, and reboot if you can't find them (hopefully it won't come to that).
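A minimal sketch of that configuration, assuming the HBase backend mentioned in the question (file locations follow the standard Nutch 2.x runtime layout; adjust paths to your install):
# In conf/gora.properties, point Gora at HBase instead of the default MemStore:
#     gora.datastore.default=org.apache.gora.hbase.store.HBaseStore
# In conf/nutch-site.xml, set storage.data.store.class to the same class.
# Then confirm HBase is actually up before re-running the job:
/path/to/hbase/bin/start-hbase.sh
echo "status" | /path/to/hbase/bin/hbase shell
./bin/nutch generate -topN 5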

Connection error: Request did not complete within rpc_timeout when doing cqlsh

I have installed Cassandra v2.0.16 on CentOS release 6.6 (Final).
But when I try to run cqlsh against it, it gives the following error:
Connection error: Request did not complete within rpc_timeout
On my local machine, which runs Ubuntu 12.04, it works fine, but on my production machine (a user-managed VPS running CentOS) it is not working properly.
I looked in /var/log/cassandra/system.log and found:
ERROR [main] 2015-07-02 04:42:12,973 CassandraDaemon.java (line 571) Exception encountered during startup
org.jboss.netty.channel.ChannelException: Failed to bind to: localhost/127.0.0.1:9042
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at org.apache.cassandra.transport.Server.run(Server.java:159)
at org.apache.cassandra.transport.Server.start(Server.java:110)
at org.apache.cassandra.service.CassandraDaemon.start(CassandraDaemon.java:489)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:643)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
I am just stuck here; any help would be appreciated. I don't see how to solve this problem.
Note: I have reinstalled Cassandra two or three times, but I am stuck with the same problem.
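The root cause in that log is not the rpc_timeout itself but the bind failure underneath it: something is already listening on the native-transport port 9042, most likely a stale Cassandra process left over from an earlier start. A hedged way to confirm and clear it:
# Find whatever is already bound to the native transport port
sudo netstat -tlnp | grep 9042        # or: sudo lsof -i :9042
# If it is a leftover Cassandra JVM, stop it before starting the service again
# (<pid> is a placeholder for the PID netstat reports)
sudo kill <pid>
sudo service cassandra start
tail -f /var/log/cassandra/system.log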

sstableloader fails to upload data if node is down

I have a test setup in which sstableloader fails to upload data if one of the Cassandra nodes is down.
Is there a way to instruct sstableloader not to connect (or open a stream) to the dead node (I don't want to decommission/remove the node from cluster)?
Cassandra cluster info: DataStax Community version 2.1.2, a 3-node cluster of which 2 are seed nodes.
During the bulk-upload test, one of the seed nodes was down. The keyspace has a replication factor of 2.
Exception encountered:
progress: total: 100% 0 MB/s(avg: 0 MB/s)ERROR 09:07:48 [Stream #8972f510-efe1-11e4-abad-9d409520f182] Streaming error occurred
java.net.ConnectException: Connection refused
at sun.nio.ch.Net.connect0(Native Method) ~[na:1.7.0_65]
at sun.nio.ch.Net.connect(Net.java:465) ~[na:1.7.0_65]
at sun.nio.ch.Net.connect(Net.java:457) ~[na:1.7.0_65]
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670) ~[na:1.7.0_65]
at java.nio.channels.SocketChannel.open(SocketChannel.java:184) ~[na:1.7.0_65]
at org.apache.cassandra.tools.BulkLoadConnectionFactory.createConnection(BulkLoadConnectionFactory.java:62) ~[apache-cassandra-2.1.2.jar:2.1.2]
at org.apache.cassandra.streaming.StreamSession.createConnection(StreamSession.java:229) ~[apache-cassandra-2.1.2.jar:2.1.2]
at org.apache.cassandra.streaming.ConnectionHandler.initiate(ConnectionHandler.java:79) ~[apache-cassandra-2.1.2.jar:2.1.2]
at org.apache.cassandra.streaming.StreamSession.start(StreamSession.java:216) ~[apache-cassandra-2.1.2.jar:2.1.2]
at org.apache.cassandra.streaming.StreamCoordinator$StreamSessionConnector.run(StreamCoordinator.java:208) [apache-cassandra-2.1.2.jar:2.1.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_65]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_65]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
progress: [/192.168.1.17]0:1/1 100% total: 100% 0 MB/s(avg: 1 MB/s)WARN 09:07:48 [Stream #8972f510-efe1-11e4-abad-9d409520f182] Stream failed
Streaming to the following hosts failed:
[/192.168.1.15]
java.util.concurrent.ExecutionException: org.apache.cassandra.streaming.StreamException: Stream failed
at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:121)
Caused by: org.apache.cassandra.streaming.StreamException: Stream failed
at org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:208)
at org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:184)
at org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:382)
at org.apache.cassandra.streaming.StreamSession.complete(StreamSession.java:574)
at org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:438)
at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:251)
at java.lang.Thread.run(Thread.java:745)
Thanks in advance,
Anirban.
I just figured out that I can pass an ignore list to sstableloader. With the dead node in the ignore list, sstableloader ran successfully in my test setup.
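For completeness, a hedged example of what that looks like on the command line (192.168.1.15 is the failed host from the log above; the SSTable directory is a placeholder, and the exact flag name can be checked with sstableloader --help):
# Stream to the live node(s) and skip the dead one
sstableloader -d 192.168.1.17 -i 192.168.1.15 /path/to/keyspace/columnfamily/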

Pig Cassandra cluster ClassNotFoundException: org.apache.cassandra.hadoop.ColumnFamilySplit

I am trying to run Cassandra 0.8.5, Hadoop 0.2.0 and Pig 0.8.1. I run a very simple Pig script:
rows = LOAD 'cassandra://pygmalion/$CF' USING CassandraStorage() AS (key, columns: bag {T: tuple(name, value)});
counted = foreach (group rows all) generate COUNT($1);
dump counted;
It works well when I run it in local mode, but when I run it in MapReduce mode I always get the error message below. I have run out of ideas; any help or hint will be greatly appreciated.
pig_813399.log:
Backend error message
java.io.IOException: java.lang.ClassNotFoundException: org.apache.cassandra.hadoop.ColumnFamilySplit
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit.readFields(PigSplit.java:225)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:349)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:611)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.Child.main(Child.java:264) Caused by: java.lang.ClassNotFoundException: org.apache.c
Pig Stack Trace
ERROR 2997: Unable to recreate exception from backed error: java.io.IOException: java.lang.ClassNotFoundException: org.apache.cassandra.hadoop.ColumnFamilySplit
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias counted. Backend error : Unable to recreate exception from backed error: java.io.IOException: java.lang.ClassNotFoundException: org.apache.cassandra.hadoop.ColumnFamilySplit
at org.apache.pig.PigServer.openIterator(PigServer.java:753)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:615)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:303)
at org.apache.pig.tools.grunt.GruntParser.loadScript(GruntParser.java:477)
at org.apache.pig.tools.grunt.GruntParser.processScript(GruntParser.java:422)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.Script(PigScriptParser.java:692)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:425)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:168)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:144)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:76)
at org.apache.pig.Main.run(Main.java:455)
at org.apache.pig.Main.main(Main.java:107)
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 2997: Unable to recreate exception from backed error: java.io.IOException: java.lang.ClassNotFoundException: org.apache.cassandra.hadoop.ColumnFamilySplit
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher.getErrorMessages(Launcher.java:221)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher.getStats(Launcher.java:151)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:337)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.execute(HExecutionEngine.java:382)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1209)
at org.apache.pig.PigServer.storeEx(PigServer.java:885)
at org.apache.pig.PigServer.store(PigServer.java:827)
at org.apache.pig.PigServer.openIterator(PigServer.java:739)
It sounds like you need to update the HADOOP_CLASSPATH environment variable to include the cassandra jars. You will need to restart the hadoop processes after you do.
export HADOOP_CLASSPATH=/path/to/cassandra/lib/*:$HADOOP_CLASSPATH
See http://wiki.apache.org/cassandra/HadoopSupport for other possible problems with running hadoop/pig against cassandra.
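A hedged sanity check after making that change (paths are placeholders): the ClassNotFoundException is thrown on the task side, so the jars must be visible on every node that runs map tasks, not just the machine launching Pig.
# Confirm the jars exist where HADOOP_CLASSPATH points, on each Hadoop node
ls /path/to/cassandra/lib/*.jar
echo $HADOOP_CLASSPATH
# The variable must also reach the TaskTrackers (e.g. via conf/hadoop-env.sh),
# and the Hadoop daemons must be restarted afterwards for it to take effect.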

Resources