We have a 3-node Cassandra cluster on AWS. The nodes run Cassandra 1.2.2 and have 8 GB of memory. We didn't change any of the default heap or GC settings, so each node allocates 1.8 GB of heap space. The rows are wide; each row stores around 260,000 columns. We are reading the data using Astyanax. If our application tries to read 80,000 columns each from 10 or more rows at the same time, some of the nodes run out of heap space and terminate with an OOM error. Here is the error message:
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.duplicate(HeapByteBuffer.java:107)
at org.apache.cassandra.db.marshal.AbstractCompositeType.getBytes(AbstractCompositeType.java:50)
at org.apache.cassandra.db.marshal.AbstractCompositeType.getWithShortLength(AbstractCompositeType.java:60)
at org.apache.cassandra.db.marshal.AbstractCompositeType.split(AbstractCompositeType.java:126)
at org.apache.cassandra.db.filter.ColumnCounter$GroupByPrefix.count(ColumnCounter.java:96)
at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:164)
at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:136)
at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:84)
at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:294)
at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1363)
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1220)
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1132)
at org.apache.cassandra.db.Table.getRow(Table.java:355)
at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:70)
at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1052)
at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1578)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
ERROR 02:14:05,351 Exception in thread Thread[Thrift:6,5,main] java.lang.OutOfMemoryError: Java heap space
at java.lang.Long.toString(Long.java:269)
at java.lang.Long.toString(Long.java:764)
at org.apache.cassandra.dht.Murmur3Partitioner$1.toString(Murmur3Partitioner.java:171)
at org.apache.cassandra.service.StorageService.describeRing(StorageService.java:1068)
at org.apache.cassandra.thrift.CassandraServer.describe_ring(CassandraServer.java:1192)
at org.apache.cassandra.thrift.Cassandra$Processor$describe_ring.getResult(Cassandra.java:3766)
at org.apache.cassandra.thrift.Cassandra$Processor$describe_ring.getResult(Cassandra.java:3754)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:199)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
ERROR 02:14:05,350 Exception in thread Thread[ACCEPT-/10.0.0.170,5,main] java.lang.RuntimeException: java.nio.channels.ClosedChannelException
at org.apache.cassandra.net.MessagingService$SocketThread.run(MessagingService.java:893)
Caused by: java.nio.channels.ClosedChannelException
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:211)
at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:99)
at org.apache.cassandra.net.MessagingService$SocketThread.run(MessagingService.java:882)
The data in each column is less than 50 bytes. After adding all the column overheads (column name + metadata), it should not be more than 100 bytes per column. So reading 80,000 columns from each of 10 rows means we are reading 80,000 * 10 * 100 = 80 MB of data. That is large, but not large enough to fill up the 1.8 GB heap, so I wonder why the heap is getting full. If the data request is too big to serve in a reasonable amount of time, I would expect Cassandra to return a timeout error instead of terminating.
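For concreteness, the read is done along the lines of the sketch below (Scala syntax against the Astyanax API; the column family name, key type, and serializers are placeholders, not our real schema):
import com.netflix.astyanax.Keyspace
import com.netflix.astyanax.model.ColumnFamily
import com.netflix.astyanax.serializers.StringSerializer
import com.netflix.astyanax.util.RangeBuilder

// Placeholder column family: string row keys, string column names.
val wideCf = new ColumnFamily[String, String](
  "wide_cf", StringSerializer.get(), StringSerializer.get())

// One 80,000-column slice per row; ten of these run concurrently.
def readRow(keyspace: Keyspace, rowKey: String) =
  keyspace.prepareQuery(wideCf)
    .getKey(rowKey)
    .withColumnRange(new RangeBuilder().setLimit(80000).build())
    .execute()
    .getResult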
One easy solution is to increase the heap size, but that would only mask the problem; reading 80 MB of data should not fill up a 1.8 GB heap.
Is there some other Cassandra setting that I can tweak to prevent the OOM exception?
No, there is no write operation in progress when I read the data. I am sure that increasing the heap space may help, but I am trying to understand why reading 80 MB of data fills up a 1.8 GB heap.
Cassandra uses both on-heap and off-heap caching.
Loading 80 MB of user data may result in 200-400 MB of Java heap usage (which JVM are you running? 64-bit?).
Secondly, that memory comes on top of the memory already used for caches. It seems that Cassandra does not free those caches to serve your particular query, which can make sense for overall throughput.
Have you since solved your problem by increasing MaxHeap?
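For reference, MaxHeap for this Cassandra version is set in conf/cassandra-env.sh; a minimal sketch of the change (the values are illustrative only, not a recommendation for your 8 GB nodes):
# conf/cassandra-env.sh -- set both values together
MAX_HEAP_SIZE="4G"
HEAP_NEWSIZE="400M"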
Related
Using Cassandra 3.11 with 18 nodes and 32 GB of memory in production, the weekly compaction job throws the following error in system.log and debug.log, then the Cassandra process dies and I have to restart Cassandra.
ERROR [ReadStage-4] JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to: java.lang.OutOfMemoryError: Direct buffer memory
ERROR [ReadStage-5] JVMStabilityInspector.java:142 - JVM state determined to be unstable. Exiting forcefully due to: java.lang.OutOfMemoryError: Direct buffer memory
DEBUG [ReadRepairStage:29517] ReadCallback.java:242 - Digest mismatch:
org.apache.cassandra.service.DigestMismatchException: Mismatch for key DecoratedKey(-3787568997731881233, 0000000004ca5c48) (2b912cd2000e6bb5b481fa849e438ae4 vs 962e899380ce22ac970c6be0014707de)
Java heap size is 8 GB
/opt/apache-cassandra-3.11.0/conf/jvm.options
-Xms8G
-Xmx8G
Is there any workaround other than increasing the heap size to prevent out-of-memory errors during compaction?
On their own, the errors you posted don't indicate to me that they're related to compaction. Instead, the thread IDs (ReadStage-*) indicate that they're the result of read requests.
If anything, the DigestMismatchException is a bit more telling because it indicates that your replicas are out of sync. If you're seeing dropped mutations in the logs, that's a clear indication that the nodes are overloaded. That symptom is more in line with the OOM you're seeing, which I believe is a result of your cluster not being able to keep up with the app traffic.
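To confirm that, checking the thread-pool statistics and bringing replicas back in sync is a reasonable first step, e.g.:
$ nodetool tpstats      # the dropped-message counters at the bottom show dropped MUTATION / READ messages
$ nodetool repair -pr   # repair the primary range; run on each node in turn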
With 32GB RAM systems, we recommend bumping up the heap to 16GB for production systems. I realise you prefer not to do this but it's an appropriate action and it's something I would do if I were managing the cluster.
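Concretely, in the jvm.options you posted that change would be (a sketch; pair the larger heap with an appropriate GC configuration):
-Xms16G
-Xmx16G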
I'd also make sure that data/ and commitlog/ are on separate disks/volumes so they're not competing for the same IO (unless you have direct-attached SSDs). If you're still seeing lots of dropped mutations, consider increasing the capacity of your cluster by adding more nodes. Cheers!
I'm getting an OutOfMemoryError when running compaction on some big SSTables in production; the table size is around 800 GB. Compaction on small SSTables works properly, though.
$ nodetool compact keyspace1 users
error: Direct buffer memory
-- StackTrace --
java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:693)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
at org.apache.cassandra.io.compress.BufferType$2.allocate(BufferType.java:35)
Java heap memory (Xms and Xmx) has been set to 8 GB. I'm wondering if I should increase the Java heap memory to 12 or 16 GB?
It's not the heap size; it's the so-called "direct memory". You need to check how much you have (it can be specified with something like -XX:MaxDirectMemorySize=512m; otherwise it defaults to the same value as the maximum heap size). You can increase it indirectly by increasing the heap size, or you can control it explicitly via the -XX flag. Here is a good article about non-heap memory in Java.
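For example, the direct memory limit can be set in the same jvm.options as the heap (the 4G here is only illustrative; size it to what the host can spare):
# conf/jvm.options
-Xmx8G
-XX:MaxDirectMemorySize=4G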
I tried to broadcast a not-so-large map (~70 MB when saved to HDFS as a text file), and I got out-of-memory errors. I tried increasing the driver memory to 11G and the executor memory to 11G, and still got the same error. The memory fraction is set to 0.3, and there isn't much data (less than 1 GB) cached either.
When the map is only around 2 MB, there's no problem. I wonder if there is a size limit when broadcasting objects. How can I solve this problem with the bigger map? Thank you!
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.IdentityHashMap.resize(IdentityHashMap.java:469)
at java.util.IdentityHashMap.put(IdentityHashMap.java:445)
at org.apache.spark.util.SizeEstimator$SearchState.enqueue(SizeEstimator.scala:159)
at org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:229)
at org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:194)
at org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:186)
at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:54)
at org.apache.spark.util.collection.SizeTracker$class.takeSample(SizeTracker.scala:78)
at org.apache.spark.util.collection.SizeTracker$class.afterUpdate(SizeTracker.scala:70)
at org.apache.spark.util.collection.SizeTrackingVector.$plus$eq(SizeTrackingVector.scala:31)
at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:278)
at org.apache.spark.storage.MemoryStore.putIterator(MemoryStore.scala:165)
at org.apache.spark.storage.MemoryStore.putIterator(MemoryStore.scala:143)
at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:801)
at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:648)
at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:1006)
at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:99)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:85)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1327)
Edit: adding more information per the comments:
I use spark-submit to submit the compiled jar file in client mode. Spark 1.5.0
spark.yarn.executor.memoryOverhead 600
set("spark.kryoserializer.buffer.max", "256m")
set("spark.speculation", "true")
set("spark.storage.memoryFraction", "0.3")
set("spark.driver.memory", "15G")
set("spark.executor.memory", "11G")
I tried set("spar.sql.tungsten.enabled", "false") and it doesn't help.
The master machine has 60 GB of memory, and around 30 GB is used for Spark/YARN. I'm not sure how much heap my job gets, but there aren't many other processes running at the same time. Especially since the map is only around 70 MB.
Some code related to the broadcasting:
val mappingAllLocal: Map[String, Int] = mappingAll.rdd.map(r => (r.getAs[String](0), r.getAs[Int](1))).collectAsMap().toMap
// I can save the above mappingAllLocal to HDFS, and it's around 70 MB
val mappingAllBrd = sc.broadcast(mappingAllLocal) // <-- this is where the out of memory happens
Using set("spark.driver.memory", "15G") has no effect on client mode. You have to use the command line parameter --conf="spark.driver.memory=15G" when submitting the application to increase the driver's heap size.
You can try increasing the JVM heap size:
-Xmx2g : maximum heap size of 2 GB
-Xms2g : initial heap size of 2 GB (the default is 256 MB)
I have an RDD[((Long, Long), Float)] of about 150 GB (as shown in the web UI Storage tab).
When I group this RDD by key, the driver program throws the following error:
15/07/16 04:37:08 ERROR actor.ActorSystemImpl: Uncaught fatal error from thread [sparkDriver-akka.remote.default-remote-dispatcher-39] shutting down ActorSystem [sparkDriver]
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2271)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1876)
at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1785)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1188)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
at akka.serialization.JavaSerializer$$anonfun$toBinary$1.apply$mcV$sp(Serializer.scala:129)
at akka.serialization.JavaSerializer$$anonfun$toBinary$1.apply(Serializer.scala:129)
at akka.serialization.JavaSerializer$$anonfun$toBinary$1.apply(Serializer.scala:129)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at akka.serialization.JavaSerializer.toBinary(Serializer.scala:129)
at akka.remote.MessageSerializer$.serialize(MessageSerializer.scala:36)
at akka.remote.EndpointWriter$$anonfun$serializeMessage$1.apply(Endpoint.scala:845)
at akka.remote.EndpointWriter$$anonfun$serializeMessage$1.apply(Endpoint.scala:845)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at akka.remote.EndpointWriter.serializeMessage(Endpoint.scala:844)
at akka.remote.EndpointWriter.writeSend(Endpoint.scala:747)
The executors didn't even start the stage.
This RDD has 120,000 partitions. Could this be the cause of the error?
The size of at least one of the partitions is more than the memory you have allocated to the executor (you can set that with the --executor-memory flag on the command line when running the Spark job).
After grouping by (Long, Long), at least one of your groups is too big to fit in memory. Spark expects each record after grouping, ((Long, Long), Iterator[Float]), to fit in memory, and this is not the case for your data. See https://spark.apache.org/docs/1.2.0/tuning.html and look for "Memory Usage of Reduce Tasks".
I suggest working around this by increasing your data parallelism. Add a mapping step before the group-by to break down your data, for example (see also the fuller sketch after this answer):
ds.map(x => ((x._1._1, x._1._2, x._1._1 % 2), x._2))
Then group by the new key (you might do something more sophisticated than x._1._1 % 2).
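A self-contained sketch of that idea, assuming the RDD from the question. Note that the salt below is random rather than derived from the key (a salt computed only from the key keeps all records of one key in the same group), so a single hot key really is split; values for the same (Long, Long) pair end up in several buckets and have to be combined downstream:
import org.apache.spark.rdd.RDD
import scala.util.Random

// rdd: RDD[((Long, Long), Float)] as in the question; numBuckets is an assumption to tune.
def saltedGroup(rdd: RDD[((Long, Long), Float)], numBuckets: Int = 8) = {
  val salted = rdd.map { case ((a, b), v) =>
    ((a, b, Random.nextInt(numBuckets)), v)  // spread one big key over numBuckets smaller groups
  }
  salted.groupByKey()  // each group is now roughly 1/numBuckets of the original size
}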
We are experimenting a bit with Cassandra by running some long-duration test cases (stress tests), and we are experiencing memory issues on one node of the cluster at any given time (it could be any machine in the cluster!).
We are running DataStax Community with Cassandra 1.1.6 on a machine with Windows Server 2008 and 8 GB of RAM. We have also configured the heap size to be 2 GB instead of the default 1 GB.
A snippet from the logs:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid2440.hprof ...
Heap dump file created [1117876234 bytes in 11.713 secs]
ERROR 22:16:56,756 Exception in thread Thread[CompactionExecutor:399,1,main]
java.lang.OutOfMemoryError: Java heap space
at org.apache.cassandra.io.util.FastByteArrayOutputStream.expand(FastByteArrayOutputStream.java:104)
at org.apache.cassandra.io.util.FastByteArrayOutputStream.write(FastByteArrayOutputStream.java:220)
at java.io.DataOutputStream.write(Unknown Source)
Any pointers/help to investigate and fix this?
You're doing the right thing by long-running your load tests, but in a production use case you wouldn't be writing data like this.
Your rows are probably growing too big to fit in RAM when it comes time to compact them. A compaction requires the entire row to fit in RAM.
There's also a hard limit of 2 billion columns per row, but in reality you shouldn't ever let rows grow that wide. Bucket them by adding a day, a server name, or some other value common across your dataset to your row keys.
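A minimal sketch of day-bucketing a row key (the id and date format here are placeholders for whatever value is common across your dataset):
import java.text.SimpleDateFormat
import java.util.Date

// One row per (id, day) instead of one ever-growing row per id.
def bucketedRowKey(id: String, timestamp: Date): String = {
  val day = new SimpleDateFormat("yyyy-MM-dd").format(timestamp)
  s"$id:$day"  // e.g. "server-42:2012-11-03"
}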
For a "write-often read-almost-never" workload you can have very wide rows but you shouldn't come close to the 2 billion column mark. Keep it in millions with bucketing.
For a write/read mixed workload where you're reading entire rows frequently, even hundreds of columns may be too much.
If you treat Cassandra right you'll easily handle thousands of reads and writes per second per node. I'm seeing about 2.5k reads and writes concurrently per node on my main cluster.