First, forgive me for what might be a very naive question.
I am on a mission to identify the right NoSQL database for my project.
I was inserting and updating records in a table (column family) in a highly concurrent fashion.
Then I encountered this:
INFO 11:55:20,924 Writing Memtable-scan_request#314832703(496750/1048576 serialized/live bytes, 8204 ops)
INFO 11:55:21,084 Completed flushing /var/lib/cassandra/data/mykey/scan_request/mykey-scan_request-ic-14-Data.db (115527 bytes) for commitlog position ReplayPosition(segmentId=1372313109304, position=24665321)
INFO 11:55:21,085 Writing Memtable-scan_request#721424982(1300975/2097152 serialized/live bytes, 21494 ops)
INFO 11:55:21,191 Completed flushing /var/lib/cassandra/data/mykey/scan_request/mykey-scan_request-ic-15-Data.db (304269 bytes) for commitlog position ReplayPosition(segmentId=1372313109304, position=26554523)
WARN 11:55:21,268 Heap is 0.829968311377531 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
WARN 11:55:21,268 Flushing CFS(Keyspace='mykey', ColumnFamily='scan_request') to relieve memory pressure
INFO 11:55:25,451 Enqueuing flush of Memtable-scan_request#714386902(324895/843149 serialized/live bytes, 5362 ops)
INFO 11:55:25,452 Writing Memtable-scan_request#714386902(324895/843149 serialized/live bytes, 5362 ops)
INFO 11:55:25,490 Completed flushing /var/lib/cassandra/data/mykey/scan_request/mykey-scan_request-ic-16-Data.db (76213 bytes) for commitlog position ReplayPosition(segmentId=1372313109304, position=27025950)
WARN 11:55:30,109 Heap is 0.9017950505664833 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid8849.hprof ...
Heap dump file created [1359702396 bytes in 105.277 secs]
WARN 12:25:26,656 Flushing CFS(Keyspace='mykey', ColumnFamily='scan_request') to relieve memory pressure
INFO 12:25:26,657 Enqueuing flush of Memtable-scan_request#728952244(419985/1048576 serialized/live bytes, 6934 ops)
It's worth noting that I was able to insert and update around 6 million records before I got this. I am using Cassandra on a single node. In spite of the hint in the logs, I am not able to decide which config to change. I did check the bin/cassandra shell script, and I see it does a lot of manipulation before arriving at the -Xms and -Xmx values.
Kindly advise.
First, you can run
ps -ef|grep cassandra
to see what -Xmx is set to in your Cassandra. The default values of -Xms and -Xmx are based on the amount of your system's memory.
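For reference, the stock conf/cassandra-env.sh of that era derives the default heap as roughly max(min(1/2 of RAM, 1024MB), min(1/4 of RAM, 8192MB)). Below is a simplified sketch of that documented formula, not the verbatim script:
system_memory_in_mb=$(free -m | awk '/^Mem:/ {print $2}')
half_ram=$((system_memory_in_mb / 2))
quarter_ram=$((system_memory_in_mb / 4))
[ "$half_ram" -gt 1024 ] && half_ram=1024        # cap 1/2 of RAM at 1024MB
[ "$quarter_ram" -gt 8192 ] && quarter_ram=8192  # cap 1/4 of RAM at 8192MB
# take the larger of the two capped values
if [ "$half_ram" -gt "$quarter_ram" ]; then
    max_heap_size_in_mb=$half_ram
else
    max_heap_size_in_mb=$quarter_ram
fi
MAX_HEAP_SIZE="${max_heap_size_in_mb}M"
So on a small box the default heap can end up quite small, which would explain an early OOM under a heavy write load.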
Check this for details:
http://www.datastax.com/documentation/cassandra/1.2/index.html?pagename=docs&version=1.2&file=index#cassandra/operations/ops_tune_jvm_c.html
You can try to increase MAX_HEAP_SIZE (in conf/cassandra-env.sh) to see if the problem would go away.
For example, you can replace
MAX_HEAP_SIZE="${max_heap_size_in_mb}M"
with
MAX_HEAP_SIZE="2048M"
I assume that tuning the garbage collector for Cassandra might solve the OOM error. With the default settings, Cassandra uses the Concurrent Mark-Sweep (CMS) JVM garbage collector. Most often the CMS collector only starts once the heap is almost fully populated, but the CMS process itself takes some time to finish, and the problem is that the JVM runs out of space before CMS is done. We can set the percentage of used old-generation space that triggers CMS with the following options in the bin/cassandra.in.sh file, under the JAVA_OPTS variable:
-XX:CMSInitiatingOccupancyFraction={percentage} - This controls the old-generation occupancy percentage at which CMS is triggered; we can set it a bit lower so that collection starts early enough to finish before the heap fills up.
-XX:+UseCMSInitiatingOccupancyOnly - This parameter ensures that the fixed percentage above, rather than the JVM's runtime heuristics, decides when CMS starts.
Also, with the following options we can enable incremental CMS:
-XX:+UseConcMarkSweepGC \
-XX:+CMSIncrementalMode \
-XX:+CMSIncrementalPacing \
-XX:CMSIncrementalDutyCycleMin=0 \
-XX:CMSIncrementalDutyCycle=10
We can also increase the number of parallel CMS threads, based on the number of CPU cores:
-XX:ParallelCMSThreads={numberOfThreads}
Further, we can tune young-generation garbage collection to make the overall process optimal. Here we have to control how objects are promoted:
Increase the size of the young generation
Delay the promotion of young-generation objects to the old generation
To achieve this we can set the following parameters:
-XX:NewSize={size} - Determines the initial size of the young generation
-XX:MaxNewSize={size} - This is the maximum size of the young generation
-Xmn{size} - Fixes the young generation size (sets both the initial and maximum size)
-XX:NewRatio={n} - Sets the ratio of the old generation to the young generation
Before objects are promoted from the young generation to the old generation, they pass through the survivor spaces. So we can control the migration of objects to the old generation using the following parameters:
-XX:SurvivorRatio={n} - Ratio of eden to each survivor space
-XX:MaxTenuringThreshold={age} - Number of young-generation collections an object must survive before it is promoted to the old generation
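Putting these together, the JAVA_OPTS fragment in bin/cassandra.in.sh might look like the sketch below. The concrete numbers (70, 4, 800M, 8, 4) are illustrative assumptions for a Java 6/7-era CMS setup, not tested recommendations:
JAVA_OPTS="$JAVA_OPTS \
  -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=70 \
  -XX:+UseCMSInitiatingOccupancyOnly \
  -XX:ParallelCMSThreads=4 \
  -Xmn800M \
  -XX:SurvivorRatio=8 \
  -XX:MaxTenuringThreshold=4"
With -Xmn set there is no need for NewSize/MaxNewSize or NewRatio; pick one way of sizing the young generation, not several.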
Related
From the Spark configuration docs, we understand the following about the spark.memory.fraction configuration parameter:
Fraction of (heap space - 300MB) used for execution and storage. The lower this is, the more frequently spills and cached data eviction occur. The purpose of this config is to set aside memory for internal metadata, user data structures, and imprecise size estimation in the case of sparse, unusually large records. Leaving this at the default value is recommended.
The default value for this configuration parameter is 0.6 at the time of writing this question. This means that for an executor with, for example, 32GB of heap space and the default configurations we have:
300MB of reserved space (a hardcoded value on this line)
(32GB - 300MB) * 0.6 = 19481MB of shared memory for execution + storage
(32GB - 300MB) * 0.4 = 12987MB of user memory
This "user memory" is (according to the docs) used for the following:
The rest of the space (40%) is reserved for user data structures, internal metadata in Spark, and safeguarding against OOM errors in the case of sparse and unusually large records.
On an executor with 32GB of heap space, we're allocating 12.7GB of memory for this, which feels rather large!
Do these user data structures/internal metadata/safeguarding against OOM errors really need that much space? Are there some striking examples of user memory usage which illustrate the need of this big of a user memory region?
I did some research, and IMO it's 0.6 not to ensure enough memory for user memory but to ensure that execution + storage can fit into the old-gen region of the JVM.
Here I found something interesting: Spark tuning
The tenured generation size is controlled by the JVM's NewRatio parameter, which defaults to 2, meaning that the tenured generation is 2 times the size of the new generation (the rest of the heap). So, by default, the tenured generation occupies 2/3 or about 0.66 of the heap. A value of 0.6 for spark.memory.fraction keeps storage and execution memory within the old generation with room to spare. If spark.memory.fraction is increased to, say, 0.8, then NewRatio may have to increase to 6 or more.
So by default in OpenJDK this ratio is set to 2, which gives you about 0.66 of the heap for the old gen; they chose 0.6 to leave a small margin.
I found that in version 1.6 this was changed to 0.75, and it was causing some issues; here is the Jira ticket.
In the description you will find sample code which adds records to the cache just to use up the whole memory reserved for execution + storage. With storage + execution set to a higher amount than the old gen, GC overhead was really high, and the same code executed on the older version (with this setting equal to 0.6) was 6 times faster (40-50 sec vs 6 min).
There was a discussion, and the community decided to roll it back to 0.6 in Spark 2.0; here is the PR with the changes.
I think that if you want to increase performance a little bit, you can try changing it up to 0.66, but if you want more memory for execution + storage you need to adjust your JVM and change the old/new ratio as well; otherwise you may face performance issues.
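As an illustration of adjusting both knobs together (0.66 and NewRatio=3 are assumptions for the example, not tested values; NewRatio=3 makes the old generation 3/4 of the heap, leaving headroom above the 0.66 fraction):
spark-submit \
  --conf spark.memory.fraction=0.66 \
  --conf "spark.executor.extraJavaOptions=-XX:NewRatio=3" \
  --conf "spark.driver.extraJavaOptions=-XX:NewRatio=3" \
  your-app.jar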
I'm getting the following error in the logs of an application that is trying to connect to a Cassandra cluster with 6 nodes:
com.datastax.driver.core.exceptions.OperationTimedOutException: [DB1:9042] Timed out waiting for server response
I have set the Java heap to 8GB (-Xms8G -Xmx8G); I'm wondering if 8GB is too much?
Below is the timeout configuration in cassandra.yaml:
read_request_timeout_in_ms: 10000
range_request_timeout_in_ms: 20000
write_request_timeout_in_ms: 10000
request_timeout_in_ms: 20000
In the application there aren't heavy delete or update statements, so my question is: what else may cause the long GC pauses? The majority of GC pauses I can see in the log are of type "G1 Evacuation Pause" - what does that mean exactly?
The heap size heavily depends on the amount of data that you need to process. Usually a minimum of 16GB is recommended for production workloads. Also, G1 isn't very effective on small heaps (less than 12GB); for such heap sizes it's better to stay on the default ParNew/CMS combination, though you may need to tune the GC to get better performance. Look into this blog post that explains tuning of the GC.
Regarding your question on the "G1 Evacuation Pause" - look into these blog posts: 1 and 2. Here is a quote from the 2nd post:
Evacuation Pause (G1 collection) in which live objects are copied from one set of regions (young OR young+old) to another set. It is a stop-the-world activity and all the application threads are stopped at a safepoint during this time.
For you this means that you're filling regions very fast and the regions are big, so it requires a significant amount of time to copy the data.
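To see exactly where the pause time goes, you can enable the standard Java 8 GC-logging flags on the JVM (the log path here is an illustrative assumption):
-Xloggc:/var/log/cassandra/gc.log \
-XX:+PrintGCDetails \
-XX:+PrintGCDateStamps \
-XX:+PrintGCApplicationStoppedTime
The "Total time for which application threads were stopped" lines then show how long each evacuation pause actually blocked your requests.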
I have data of 100 nodes and 165 relations to be inserted into one keyspace. My Grakn image has 4 CPU cores and 3 GB of memory. While trying to insert the data I am getting an error:
[grpc-request-handler-4] ERROR grakn.core.server.Grakn - Uncaught exception at thread [grpc-request-handler-4] java.lang.OutOfMemoryError: Java heap space
It was noticed that the image used only 346% CPU and 1.46 GB of RAM. Another finding for the issue in the log was:
Caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1, use getErrors() for more: Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=3cb85440): io.netty.channel.ChannelException: Unable to create Channel from class class io.netty.channel.socket.nio.NioSocketChannel)
Could you please help me with this?
It sounds like Cassandra ran out of memory - currently, Grakn spawns two processes: one for Cassandra and one for the Grakn server. You can increase the memory limits with the following flags (unix):
SERVER_JAVAOPTS=-Xms1G STORAGE_JAVAOPTS=-Xms2G ./grakn server start
This would give the server 1GB, and the storage engine (Cassandra) 2GB of memory, for instance. 3GB may be a bit on the low end once your data grows, so keep these flags in mind :)
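If you want to cap the heaps rather than only set their initial sizes, you could presumably pass -Xmx through the same variables (an assumption on my side - treating SERVER_JAVAOPTS/STORAGE_JAVAOPTS as plain JVM-flag pass-throughs):
SERVER_JAVAOPTS="-Xms1G -Xmx1G" STORAGE_JAVAOPTS="-Xms2G -Xmx2G" ./grakn server start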
I see this warning continuously in debug.log in Cassandra:
WARN [SharedPool-Worker-2] 2018-05-16 08:33:48,585 BatchStatement.java:287 - Batch of prepared statements for [test, test1] is of size 6419, exceeding specified threshold of 5120 by 1299.
In this message:
6419 - input payload size (batch)
5120 - threshold size
1299 - bytes above the threshold
So, as per this ticket in akka-persistence-cassandra, https://github.com/krasserm/akka-persistence-cassandra/issues/33, I see that it is due to the increased input payload size, so I increased commitlog_segment_size_in_mb in cassandra.yaml to 60MB, and we are not facing this warning anymore.
Is this warning harmful? Will increasing commitlog_segment_size_in_mb affect performance in any way?
This is not directly related to the commit log size, and I wonder why changing it made the warning disappear...
The batch size threshold is controlled by the batch_size_warn_threshold_in_kb parameter, which defaults to 5KB (5120 bytes).
You can increase this parameter to a higher value, but you really need to have a good reason for using batches - it would be nice to understand the context of their usage...
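For reference, the two related thresholds in cassandra.yaml (5 and 50 should be the defaults, but check the yaml shipped with your version):
batch_size_warn_threshold_in_kb: 5     # log the warning you saw above this size
batch_size_fail_threshold_in_kb: 50    # reject the batch entirely above this size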
commitlog_segment_size_in_mb determines the block size of your commit log segments, which is also the unit for commit log archiving and point-in-time backup; archiving is only active if you have configured archive_command or restore_command in your commitlog_archiving.properties file.
The default size is 32MB.
As per the Expert Apache Cassandra Administration book:
you must ensure that the value of commitlog_segment_size_in_mb is twice the value of max_mutation_size_in_kb.
For reference, see:
Mutation of 17076203 bytes is too large for the maxiumum size of 16777216
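The arithmetic behind that error lines up with the defaults (as I understand them; verify in your version's cassandra.yaml): max_mutation_size_in_kb defaults to half the commit log segment size, so
commitlog_segment_size_in_mb: 32
# default max mutation size = 32MB / 2 = 16384KB = 16777216 bytes,
# exactly the limit quoted in the error above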
I'm using Apache Cassandra 3.0.6 on a 4-node cluster, RF=3, consistency level ONE, 16GB heap.
I'm getting this INFO message in system.log:
INFO [SharedPool-Worker-1] 2017-03-14 20:47:14,929 NoSpamLogger.java:91 - Maximum memory usage reached (536870912 bytes), cannot allocate chunk of 1048576 bytes
I don't know exactly which memory it means. I tried increasing file_cache_size_in_mb in cassandra.yaml from 512 to 1024, but it immediately filled the additional 512MB too, and the application's recording stops with the same INFO message:
INFO [SharedPool-Worker-5] 2017-03-16 06:01:27,034 NoSpamLogger.java:91 - Maximum memory usage reached (1073741824 bytes), cannot allocate chunk of 1048576 bytes
Please suggest if anyone has faced the same issue. Thanks!!
Bhargav
As far as I can tell with Cassandra 3.11, no matter how large you set file_cache_size_in_mb, you will still get this message. The cache fills up and this (not very informative) message gets written. In my case it happens whether I set it to 2GB or 20GB. This may be a bug in the cache eviction strategy, but I can't tell.
The log message indicates that the node's off-heap cache is full because the node is busy servicing reads.
The 536870912 bytes in the log message is equivalent to 512 MB which is the default file_cache_size_in_mb.
It is fine to see occasional occurrences of this message in the logs, which is why it is logged at INFO level. But if it gets logged repeatedly, it is an indicator that the node is getting overloaded, and you should consider increasing the capacity of your cluster by adding more nodes.
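For reference, the corresponding cassandra.yaml setting, with its default of 512MB matching the 536870912 bytes in the first log message:
file_cache_size_in_mb: 512   # off-heap cache used for reading SSTable chunks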
For more info, see my post on DBA Stack Exchange -- What does "Maximum memory usage reached" mean in the Cassandra logs?. Cheers!