I'm using Apache Cassandra 3.0.6 in a 4-node cluster with RF=3, consistency level ONE, and a 16 GB heap.
I'm getting this INFO message in system.log:
INFO [SharedPool-Worker-1] 2017-03-14 20:47:14,929 NoSpamLogger.java:91 - Maximum memory usage reached (536870912 bytes), cannot allocate chunk of 1048576 bytes
I don't know exactly which memory it refers to. I tried increasing file_cache_size_in_mb from 512 to 1024 in cassandra.yaml, but it immediately filled the additional 512 MB and the application stopped recording, showing the same INFO message:
INFO [SharedPool-Worker-5] 2017-03-16 06:01:27,034 NoSpamLogger.java:91 - Maximum memory usage reached (1073741824 bytes), cannot allocate chunk of 1048576 bytes
Please suggest a fix if anyone has faced the same issue. Thanks!
Bhargav
As far as I can tell with Cassandra 3.11, no matter how large you set file_cache_size_in_mb, you will still get this message. The cache fills up and Cassandra logs this not-very-useful message. In my case it happens whether I set it to 2 GB or 20 GB. This may be a bug in the cache eviction strategy, but I can't tell.
The log message indicates that the node's off-heap cache is full because the node is busy servicing reads.
The 536870912 bytes in the log message is equivalent to 512 MB which is the default file_cache_size_in_mb.
It is fine to see occasional occurrences of this message in the logs, which is why it is logged at INFO level. But if it gets logged repeatedly, it is an indicator that the node is getting overloaded, and you should consider increasing the capacity of your cluster by adding more nodes.
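If you do decide to raise the limit (keeping in mind that under sustained read load the pool will simply fill up again), the setting lives in cassandra.yaml; a minimal sketch with an illustrative value:
# cassandra.yaml - off-heap buffer pool used for reading SSTables (default is 512 MB)
file_cache_size_in_mb: 1024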
For more info, see my post on DBA Stack Exchange -- What does "Maximum memory usage reached" mean in the Cassandra logs?. Cheers!
Related
I'm getting the following error in the logs of an application that is trying to connect to a Cassandra cluster with 6 nodes:
com.datastax.driver.core.exceptions.OperationTimedOutException: [DB1:9042] Timed out waiting for server response
I have set the Java heap to 8 GB (-Xms8G -Xmx8G); I'm wondering if 8 GB is too much?
Below is the timeout configuration in cassandra.yaml:
read_request_timeout_in_ms: 10000
range_request_timeout_in_ms: 20000
write_request_timeout_in_ms: 10000
request_timeout_in_ms: 20000
The application doesn't run heavy delete or update statements, so my question is: what else may cause the long GC pauses? The majority of the GC pauses I see in the log are of type G1 Evacuation Pause. What does that mean exactly?
The heap size heavily depends on the amount of data that you need to process. For production workloads a minimum of 16 GB is usually recommended. Also, G1 isn't very effective on small heaps (less than ~12 GB) - for such heap sizes it's better to use the default ParNew/CMS collector, but you may need to tune the GC to get better performance. Look into this blog post that explains tuning of the GC.
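For example, a fixed heap and new-generation size can be pinned in conf/cassandra-env.sh; a sketch with illustrative values, assuming the CMS/ParNew collector:
# conf/cassandra-env.sh - set both values, or leave both unset to keep the auto-calculated defaults
MAX_HEAP_SIZE="16G"
HEAP_NEWSIZE="1600M"    # illustrative; the script's comments suggest roughly 100 MB per physical CPU core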
Regarding your question on the "G1 Evacuation Pause" - look into these blog posts: 1 and 2. Here is a quote from the 2nd post:
Evacuation Pause (G1 collection) in which live objects are copied from one set of regions (young OR young+old) to another set. It is a stop-the-world activity and all
the application threads are stopped at a safepoint during this time.
For you this means that you're filling regions very fast and the regions are big, so it takes a significant amount of time to copy the data.
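To see exactly how long the evacuation pauses are and how quickly the heap is filling, it helps to enable GC logging; a sketch for a Java 8 JVM (the flag set differs on newer JVMs), added to the node's JVM options:
-Xloggc:/var/log/cassandra/gc.log
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:+PrintGCApplicationStoppedTime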
I have data consisting of 100 nodes and 165 relations to be inserted into one keyspace. My Grakn image has 4 CPU cores and 3 GB of memory. While I try to insert the data I get this error:
[grpc-request-handler-4] ERROR grakn.core.server.Grakn - Uncaught exception at thread [grpc-request-handler-4] java.lang.OutOfMemoryError: Java heap space
I noticed that the image used 346% CPU but only 1.46 GB of RAM. Another finding for the issue in the log was:
Caused by: com.datastax.oss.driver.api.core.AllNodesFailedException: Could not reach any contact point, make sure you've provided valid addresses (showing first 1, use getErrors() for more: Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=3cb85440): io.netty.channel.ChannelException: Unable to create Channel from class class io.netty.channel.socket.nio.NioSocketChannel)
Could you please help me with this?
It sounds like Cassandra ran out of memory. Currently, Grakn spawns two processes: one for Cassandra and one for the Grakn server. You can increase your memory limit with the following flags (unix):
SERVER_JAVAOPTS=-Xms1G STORAGE_JAVAOPTS=-Xms2G ./grakn server start
This would give the server 1 GB and the storage engine (Cassandra) 2 GB of memory, for instance. 3 GB may be a bit on the low end once your data grows, so keep these flags in mind :)
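If you also want to cap the maximum heap rather than only the initial size, the same environment variables should accept any standard JVM flags (an assumption based on how the options are passed through to the JVM), e.g.:
SERVER_JAVAOPTS="-Xms1G -Xmx1G" STORAGE_JAVAOPTS="-Xms2G -Xmx2G" ./grakn server start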
Nodes will frequently be marked down with the following message:
ERROR [MessagingService-Incoming-/192.168.1.34] 2019-09-12 10:52:54,054 MessagingService.java:823 - java.util.concurrent.RejectedExecutionException while receiving WRITES.WRITE from /192.168.1.34, caused by: Too many pending remote requests!
I have tried narrowing the problem down and I think it's memtable-related, because when I run nodetool tpstats it shows TPC/all/WRITE_MEMTABLE_FULL with approximately 4000 active tasks.
Looking through the OpsCenter graphs I can see that TP: Write Memtable Full Active is at approximately 4000, TP: Gossip Tasks Pending keeps increasing, and OS: CPU User sits at 80%.
I have also noticed the commit log directory exceeding the 16 GB limit we set - I have seen a node with 7911 files (248 GB).
The nodes were previously configured with:
cassandra-env.sh
MAX_HEAP_SIZE="64G"
HEAP_NEWSIZE="16G"
cassandra.yaml
commitlog_segment_size_in_mb: 32
memtable_heap_space_in_mb: 4096
memtable_offheap_space_in_mb: 4096
memtable_cleanup_threshold: 0.5
commitlog_total_space_in_mb: 16384
memtable_flush_writers: 2
I am currently trying these new settings on one of the nodes to see whether they fix the server's performance:
memtable_heap_space_in_mb: 10240
memtable_offheap_space_in_mb: 2048
memtable_cleanup_threshold: 0.11
memtable_flush_writers: 8
Does anyone have ideas on what else I can look at?
Also, how do I view the MemtablePool.BlockedOnAllocation metric that is mentioned in the memtable_flush_writers section of http://cassandra.apache.org/doc/latest/configuration/cassandra_config_file.html?
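As a side note on that last question: that metric is, I believe, exposed over JMX as org.apache.cassandra.metrics:type=MemtablePool,name=BlockedOnAllocation (worth verifying on your version), so any JMX client can read it; a sketch using the jmxterm CLI (jar name abbreviated):
java -jar jmxterm.jar --url localhost:7199
# then, at the jmxterm prompt:
get -b org.apache.cassandra.metrics:type=MemtablePool,name=BlockedOnAllocation Count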
I've put up a test cluster - four nodes. Severely underpowered(!) - OK CPU, only 2 GB of RAM, shared non-SSD storage. Hey, it's a test :)
I just kept it running for three days. No data going in or out - everything's just idle. It's connected to OpsCenter.
This morning, we found one of the nodes went down around 2 am last night. The OS didn't go down (it was still responding to pings). The Cassandra log around that time is:
INFO [MemtableFlushWriter:114] 2014-07-29 02:07:34,952 Memtable.java:360 - Completed flushing /var/lib/cassandra/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-107-Data.db (686 bytes) for commitlog position ReplayPosition(segmentId=1406304454537, position=29042136)
INFO [ScheduledTasks:1] 2014-07-29 02:08:24,227 GCInspector.java:116 - GC for ParNew: 276 ms for 1 collections, 648591696 used; max is 1040187392
Next entry is:
INFO [main] 2014-07-29 09:18:41,661 CassandraDaemon.java:102 - Hostname: xxxxx
i.e. when we restarted the node through OpsCenter.
Does that mean it crashed on GC, or that GC finished and something else crashed? Is there some other log I should be looking at?
Note: in the OpsCenter event log, we see this:
7/29/2014, 2:15am Warning Node reported as being down: xxxxxxx
I appreciate the nodes are underpowered, but for being completely idle, it shouldn't crash, should it?
Using 2.1.0-rc4 btw.
My guess is that your node was shut down by the OOM killer. Because Linux overcommits RAM, when the system is under heavy memory pressure it may kill applications to recover memory for the OS. With 2 GB of total RAM this can happen very easily.
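One quick way to confirm this is to check the kernel log for OOM-killer activity around the time the node went down, for example:
dmesg -T | egrep -i "killed process|out of memory"
# or, on systemd-based systems:
journalctl -k --since "2014-07-29 01:00" --until "2014-07-29 03:00" | grep -i oom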
Firstly forgive me for what might be a very naive question.
I am on a mission to identify the right nosql database for my project.
I was inserting and updating records in the table (column family) in a highly concurrent fashion.
Then I encountered this:
INFO 11:55:20,924 Writing Memtable-scan_request#314832703(496750/1048576 serialized/live bytes, 8204 ops)
INFO 11:55:21,084 Completed flushing /var/lib/cassandra/data/mykey/scan_request/mykey-scan_request-ic-14-Data.db (115527 bytes) for commitlog position ReplayPosition(segmentId=1372313109304, position=24665321)
INFO 11:55:21,085 Writing Memtable-scan_request#721424982(1300975/2097152 serialized/live bytes, 21494 ops)
INFO 11:55:21,191 Completed flushing /var/lib/cassandra/data/mykey/scan_request/mykey-scan_request-ic-15-Data.db (304269 bytes) for commitlog position ReplayPosition(segmentId=1372313109304, position=26554523)
WARN 11:55:21,268 Heap is 0.829968311377531 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
WARN 11:55:21,268 Flushing CFS(Keyspace='mykey', ColumnFamily='scan_request') to relieve memory pressure
INFO 11:55:25,451 Enqueuing flush of Memtable-scan_request#714386902(324895/843149 serialized/live bytes, 5362 ops)
INFO 11:55:25,452 Writing Memtable-scan_request#714386902(324895/843149 serialized/live bytes, 5362 ops)
INFO 11:55:25,490 Completed flushing /var/lib/cassandra/data/mykey/scan_request/mykey-scan_request-ic-16-Data.db (76213 bytes) for commitlog position ReplayPosition(segmentId=1372313109304, position=27025950)
WARN 11:55:30,109 Heap is 0.9017950505664833 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid8849.hprof ...
Heap dump file created [1359702396 bytes in 105.277 secs]
WARN 12:25:26,656 Flushing CFS(Keyspace='mykey', ColumnFamily='scan_request') to relieve memory pressure
INFO 12:25:26,657 Enqueuing flush of Memtable-scan_request#728952244(419985/1048576 serialized/live bytes, 6934 ops)
Note that I was able to insert and update around 6 million records before I got this. I am using Cassandra on a single node. In spite of the hint in the logs, I am not able to decide which config to change. I did look into the bin/cassandra shell script and I see it does a lot of manipulation before arriving at the -Xms and -Xmx values.
Kindly advise.
First, you can run
ps -ef|grep cassandra
to see what -Xmx is set to for your Cassandra process. The default values of -Xms and -Xmx are based on the amount of memory on your system.
Check this for details:
http://www.datastax.com/documentation/cassandra/1.2/index.html?pagename=docs&version=1.2&file=index#cassandra/operations/ops_tune_jvm_c.html
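For reference, the auto-calculation in conf/cassandra-env.sh of that era roughly follows this rule (paraphrased from the script's own comments, so double-check your version):
# MAX_HEAP_SIZE = max(min(1/2 * system_memory, 1024 MB), min(1/4 * system_memory, 8192 MB))
# HEAP_NEWSIZE  = min(100 MB * number_of_cores, 1/4 * MAX_HEAP_SIZE)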
You can try to increase MAX_HEAP_SIZE (in conf/cassandra-env.sh) to see if the problem would go away.
For example, you can replace
MAX_HEAP_SIZE="${max_heap_size_in_mb}M"
with
MAX_HEAP_SIZE="2048M"
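Note that many versions of cassandra-env.sh require MAX_HEAP_SIZE and HEAP_NEWSIZE to be set (or left unset) as a pair, so a fuller sketch of the change might look like this, with an illustrative new-generation size:
MAX_HEAP_SIZE="2048M"
HEAP_NEWSIZE="400M"    # the script's comments suggest roughly 100 MB per physical CPU core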
I assume that tuning the garbage collector for Cassandra might solve the OOM error. With default settings, Cassandra uses the Concurrent Mark-Sweep (CMS) garbage collector. Quite often the CMS collector only starts once the heap is almost fully populated, but the CMS cycle itself takes some time to finish, and the problem is that the JVM runs out of space before the cycle completes. We can set the percentage of used old-generation space that triggers CMS with the following options (e.g. under the JAVA_OPTS variable in bin/cassandra.in.sh):
-XX:CMSInitiatingOccupancyFraction={percentage} - This controls the old-generation occupancy at which CMS is triggered; setting it a bit lower makes CMS start early enough to finish before the heap fills up.
-XX:+UseCMSInitiatingOccupancyOnly - This ensures the configured percentage is always used, instead of the JVM's own heuristics.
We can also achieve incremental CMS with the following options:
-XX:+UseConcMarkSweepGC \
-XX:+CMSIncrementalMode \
-XX:+CMSIncrementalPacing \
-XX:CMSIncrementalDutyCycleMin=0 \
-XX:CMSIncrementalDutyCycle=10
We can increase the number of parallel CMS threads based on the number of CPU cores:
-XX:ParallelCMSThreads={numberOfThreads}
Further, we can tune young-generation garbage collection to make the process more efficient. Here we have two levers:
Increasing the size of the young generation
Delaying the promotion of young-generation objects to the old generation
To achieve this we can set the following parameters:
-XX:NewSize={size} - Sets the initial size of the young generation
-XX:MaxNewSize={size} - Sets the maximum size of the young generation
-Xmn{size} - Fixes the young generation to the given size (sets both initial and maximum)
-XX:NewRatio={n} - Sets the ratio of the old generation to the young generation
Before objects are promoted from the young generation to the old generation, they pass through the survivor spaces. We can control this promotion using the following parameters:
-XX:SurvivorRatio={n} - The ratio of the eden space to each survivor space
-XX:MaxTenuringThreshold={age} - The number of young-generation collections an object must survive before being promoted to the old generation
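Putting several of these together, a sketch of what the additions might look like in conf/cassandra-env.sh (the values are illustrative and should be tuned for your hardware; older releases may instead use the JAVA_OPTS variable in bin/cassandra.in.sh):
JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
JVM_OPTS="$JVM_OPTS -XX:ParallelCMSThreads=2"
JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=8"
JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=1"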