JVM Generational Spacing - garbage-collection

Using the following options in OpenJDK 1.6, I end up with 600mb eden, 2mb survivor, and 3gb tenured (from JConsole). I'm just messing with settings trying to figure out why the JVM won't allocate memory the way I want it to.
This application needs to lean heavier on the Eden than the Tenured.
JAVA_OPTS="-Duser.timezone=US/Eastern -Xms6000m -Xmx8000m -Xmn3000m -XX:SurvivorRatio=1 -XX:MaxPermSize=1500m -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled -XX:ReservedCodeCacheSize=200m -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=35 -XX:+PrintGCDetails -Xloggc:/var/log/tomcat6/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -XX:+PrintGCDateStamps -verbose:gc -XX:+PrintTenuringDistribution -XX:+PrintPromotionFailure -XX:PrintFLSStatistics=1 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8090 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
My expectation, based on my reading, is that the heap would start at 6000mb with an 8000mb max, 3000mb would go to the new gen, and the survivor ratio would split eden : survivor at 1500mb : 1500mb. That is clearly not what I'm seeing in JConsole (above).
Can someone provide some insight into why these flags produce the eden / survivor / tenured sizes JConsole shows rather than the ones I expect? I have been poring over the JVM documentation for a while, and this does not look like the documented behavior.
Thanks so much in advance for the input!
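For reference, a quick worked check of what -XX:SurvivorRatio=1 should produce: HotSpot sizes the young generation as eden plus two survivor spaces, with SurvivorRatio being the ratio of eden to a single survivor space, so a 3000mb new gen would split into 1000mb eden and 1000mb per survivor rather than 1500mb : 1500mb. A minimal shell sketch of that arithmetic (variable names are illustrative):
new_gen_mb=3000
survivor_ratio=1                                      # -XX:SurvivorRatio = eden / one survivor space
survivor_mb=$(( new_gen_mb / (survivor_ratio + 2) ))  # young gen = eden + 2 survivor spaces
eden_mb=$(( survivor_mb * survivor_ratio ))
echo "eden=${eden_mb}mb, survivors=2 x ${survivor_mb}mb"   # prints eden=1000mb, survivors=2 x 1000mb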

Related

Compaction causes OutOfMemoryError

I'm getting an OutOfMemoryError when running compaction on some big sstables in production; the table size is around 800 GB. Compaction on small sstables works fine, though.
$ nodetool compact keyspace1 users
error: Direct buffer memory
-- StackTrace --
java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:693)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
at org.apache.cassandra.io.compress.BufferType$2.allocate(BufferType.java:35)
Java heap memory (-Xms and -Xmx) has been set to 8 GB. I'm wondering if I should increase it to 12 or 16 GB?
It's not the heap size; it's the so-called "direct memory". You need to check how much you have: it may be set explicitly with something like -XX:MaxDirectMemorySize=512m, and otherwise it defaults to the max heap size. So you can increase it indirectly by increasing the heap size, or control it explicitly via that -XX flag. Here is a good article about non-heap memory in Java.
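For example, here is a minimal sketch of setting it explicitly; the 4G value is purely illustrative, and depending on your Cassandra version the flag goes in conf/jvm.options or gets appended to JVM_OPTS in conf/cassandra-env.sh:
JVM_OPTS="$JVM_OPTS -XX:MaxDirectMemorySize=4G"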

Spark GC time improvement just by turning on GC log print options

My Spark job showed about 75% or more GC time out of total run time for a task. It uses almost all of the CPU (about 85%, per the Spark configuration) and memory. While following this reference for Spark tuning, I turned on the GC log print options by adding:
spark.executor.extraJavaOptions = " -XX:+PrintFlagsFinal -XX:+PrintReferenceGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintAdaptiveSizePolicy -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark"
As you can see, all options are about logging and diagnosis.
However, these options improved the runtime from 5 hours to 2 hours!
How can we explain this improvement?
[Update]
With -XX:+PrintFlagsFinal option only, 1 hr 23 min.
With -XX:+UseG1GC option only, 7+ hrs.

java.lang.OutOfMemoryError: Java heap space error in spark-submit

I am running a Spark application using spark-submit with JVM parameters defined. With this set of parameters I get a Java heap space error:
EXTRA_JVM_FLAGS="-server -XX:+UseG1GC
-XX:ReservedCodeCacheSize=384m
-XX:MaxDirectMemorySize=2G
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005
--master "local[4]"
--driver-memory 2G
--driver-java-options "${EXTRA_JVM_FLAGS}"
I tried to increase the driver memory, but it caused a JVM crash. I also tried to increase the max direct memory size, which did not help at all. What options should I change to fix the heap space error?
You should try the most basic option, -Xmx, which sets the maximum heap size.
Code cache and direct memory size are native memory areas and don't affect the size of the heap.
By default, the JVM takes 1/4 of the RAM available on the box as the max heap size. You can safely increase that if the machine is dedicated to this one JVM process.
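As a concrete sketch of how that maps onto the question's setup: with spark-submit, the driver's -Xmx is controlled by --driver-memory rather than by a raw -Xmx flag in --driver-java-options, so raising the heap would look something like this (4G and app.jar are illustrative placeholders):
spark-submit \
  --master "local[4]" \
  --driver-memory 4G \
  --driver-java-options "${EXTRA_JVM_FLAGS}" \
  app.jar   # placeholder for the real application jar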

How can I tune memory settings for Apache Spark 1.5.0?

How can I adjust the on-heap and off-heap memory for an application running on Spark 1.5.0? Using -XX:+PrintGCDetails -XX:+PrintGCTimeStamps, I noticed in the GC reports retrieved from the file $SPARK_HOME/work/application_id/stdout that the JVM keeps running GC about every minute. Although 50g of executor memory is allocated via the --executor-memory 50g option, and I have tried various --conf spark.storage.memoryFraction values, the PSYoungGen region always occupies 30% of (PSYoungGen + ParOldGen), and PSPermGen always stays around 54,272KB at 99% usage.
What I have tried:
spark.executor.extraJavaOptions='-XX:Xms50g -XX:Xmx50g -XX:PermSize=8g' doesn't work, though plenty of blogs claim this setting works.
Setting JAVA_OPTS in both spark-env.sh and spark-defaults.conf doesn't work either.
With no explicit on-heap or off-heap memory setting taking effect in Spark 1.5.0, what's the solution to my problem?
JVM keeps running GC about every minute
Since you haven't posted any actual data, only your own analysis of it, I cannot say for certain; but generally speaking, one GC event per minute is perfectly normal, quite good even. So no tuning is required.
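That said, if explicit sizing is ever needed: Spark refuses heap flags like -Xms/-Xmx inside spark.executor.extraJavaOptions (the executor heap comes from spark.executor.memory / --executor-memory), which is also why the setting quoted above has no effect, and -XX:Xms50g is a malformed spelling of -Xms50g in any case. A minimal sketch with illustrative values (your-app.jar is a placeholder):
spark-submit \
  --executor-memory 50g \
  --conf spark.executor.extraJavaOptions="-XX:MaxPermSize=8g" \
  your-app.jar   # placeholder for the real application jar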

Cassandra : java.lang.OutOfMemoryError: Java heap space

I am using Cassandra 2.0.8 and am getting this exception:
INFO 16:44:50,132 Initializing system.batchlog
INFO 16:44:50,138 Initializing system.sstable_activity
INFO 16:44:50,142 Opening /var/lib/cassandra/data/system/sstable_activity/system-sstable_activity-jb-10 (826 bytes)
INFO 16:44:50,142 Opening /var/lib/cassandra/data/system/sstable_activity/system-sstable_activity-jb-9 (827 bytes)
INFO 16:44:50,142 Opening /var/lib/cassandra/data/system/sstable_activity/system-sstable_activity-jb-11 (825 bytes)
INFO 16:44:50,150 reading saved cache /var/lib/cassandra/saved_caches/system-sstable_activity-KeyCache-b.db
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid3460.hprof ...
Heap dump file created [13378724 bytes in 0.071 secs]
ERROR 16:44:50,333 Exception encountered during startup
java.lang.OutOfMemoryError: Java heap space
at java.util.ArrayList.<init>(ArrayList.java:144)
at org.apache.cassandra.db.RowIndexEntry$Serializer.deserialize(RowIndexEntry.java:120)
at org.apache.cassandra.service.CacheService$KeyCacheSerializer.deserialize(CacheService.java:365)
at org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:119)
at org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:262)
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:421)
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:392)
at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:309)
at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:266)
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:110)
at org.apache.cassandra.db.Keyspace.open(Keyspace.java:88)
at org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:536)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:261)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
Exception encountered during startup: Java heap space
Can anyone tell me the reason and the solution?
Go to the cassandra/conf/cassandra-env.sh file
Check the current heap size settings
You can assign at most 1/2 of RAM to the heap
#MAX_HEAP_SIZE="4G"
#HEAP_NEWSIZE="800M"
If you are changing the current heap size, remove the comments:
MAX_HEAP_SIZE="4G"
HEAP_NEWSIZE="800M"
It's possible your key cache is taking up too much space (since that's where it died), but it seems unlikely. You can try deleting your key cache before starting:
/var/lib/cassandra/saved_caches
and set
key_cache_size_in_mb: 0
in your cassandra.yaml as a test (I would not recommend this permanently) to disable it.
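Concretely, that test might look like this (a sketch; stop the node first, and the glob assumes the cache file naming visible in the startup log above):
rm /var/lib/cassandra/saved_caches/*KeyCache*.db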
You can actually determine what's filling up your heap by opening the java_pid3460.hprof file it created in YourKit or another heap analyzer to see what's taking up the space. There may be something funny going on; it's very strange to be dying at 13mb or so (the size of the heap dump).
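If you don't have YourKit handy, the jhat tool bundled with JDKs of that era can open the dump too (a sketch; jhat serves an object browser on port 7000 by default):
jhat java_pid3460.hprof   # then browse http://localhost:7000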
Delete all commit log files in /usr/local/var/lib/cassandra/commitlog/ and restart Cassandra.
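As a sketch of that step (stop Cassandra first, and adjust the path to your install):
rm /usr/local/var/lib/cassandra/commitlog/*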
