Cassandra CPU Load (too much)

Using top:
8260 root 20 0 5163m 4.7g 133m S 144.6 30.5 2496:46 java
Most of the time %CPU is above 170.
I am trying to identify the issue; I think GC or flushing is to blame.
S0 S1 E O P YGC YGCT FGC FGCT GCT LGCC GCC
0.00 16.73 74.74 29.33 59.91 27819 407.186 206 10.729 417.914 Allocation Failure No GC
0.00 16.73 99.57 29.33 59.91 27820 407.186 206 10.729 417.914 Allocation Failure Allocation Failure
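For reference, the columns above match jstat's -gccause output; a minimal way to sample it against the running node (the pgrep pattern is an assumption about how the process appears in the process list):

# sample GC counters and the cause of the last/current GC once per second
CASSANDRA_PID=$(pgrep -f CassandraDaemon)
jstat -gccause "$CASSANDRA_PID" 1000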
Also, the Cassandra logs show ReplayPosition entries with the same segment ID, and the memtable is flushing very often.
INFO [SlabPoolCleaner] 2015-01-20 13:55:48,515 ColumnFamilyStore.java:840 - Enqueuing flush of bid_list: 112838010 (11%) on-heap, 0 (0%) off-heap
INFO [MemtableFlushWriter:1587] 2015-01-20 13:55:48,516 Memtable.java:325 - Writing Memtable-bid_list#2003093066(23761503 serialized bytes, 211002 ops, 11%/0% of on/off-heap limit)
INFO [MemtableFlushWriter:1587] 2015-01-20 13:55:49,251 Memtable.java:364 - Completed flushing /root/Cassandra/apache-cassandra-2.1.2/bin/./../data/data/bigdspace/bid_list-27b59f109fa211e498559b0947587867/bigdspace-bid_list-ka-3965-Data.db (4144688 bytes) for commitlog position ReplayPosition(segmentId=1421647511710, position=25289038)
INFO [SlabPoolCleaner] 2015-01-20 13:56:23,429 ColumnFamilyStore.java:840 - Enqueuing flush of bid_list: 104056985 (10%) on-heap, 0 (0%) off-heap
INFO [MemtableFlushWriter:1589] 2015-01-20 13:56:23,429 Memtable.java:325 - Writing Memtable-bid_list#1124683519(21909522 serialized bytes, 194778 ops, 10%/0% of on/off-heap limit)
INFO [MemtableFlushWriter:1589] 2015-01-20 13:56:24,130 Memtable.java:364 - Completed flushing /root/Cassandra/apache-cassandra-2.1.2/bin/./../data/data/bigdspace/bid_list-27b59f109fa211e498559b0947587867/bigdspace-bid_list-ka-3967-Data.db (3830733 bytes) for commitlog position ReplayPosition(segmentId=1421647511710, position=25350445)
INFO [SlabPoolCleaner] 2015-01-20 13:56:55,493 ColumnFamilyStore.java:840 - Enqueuing flush of bid_list: 95807739 (9%) on-heap, 0 (0%) off-heap
INFO [MemtableFlushWriter:1590] 2015-01-20 13:56:55,494 Memtable.java:325 - Writing Memtable-bid_list#473510037(20170635 serialized bytes, 179514 ops, 9%/0% of on/off-heap limit)
INFO [MemtableFlushWriter:1590] 2015-01-20 13:56:56,151 Memtable.java:364 - Completed flushing /root/Cassandra/apache-cassandra-2.1.2/bin/./../data/data/bigdspace/bid_list-27b59f109fa211e498559b0947587867/bigdspace-bid_list-ka-3968-Data.db (3531752 bytes) for commitlog position ReplayPosition(segmentId=1421647511710, position=25373052)
Any help or suggestion would be great. I have also set durable_writes to false for the keyspace. Thanks.
Update: I just found out that after restarting all the nodes, YGC on one of the servers keeps kicking in even though nothing is happening; I have stopped the dumping of data, etc.

What type of compaction do you use? Size tiered or Leveled?
If you are using leveled compaction, can you switch over to size tiered, as you seem to have too many compactions? Increasing the SSTable size for leveled compaction may also help (an example of raising it follows the excerpt below).
sstable_size_in_mb (Default: 160MB)
The target size for SSTables that use the leveled compaction strategy. Although SSTable sizes should be less than or equal to sstable_size_in_mb, it is possible to have a larger SSTable during compaction. This occurs when data for a given partition key is exceptionally large; the data is not split into two SSTables.
(http://www.datastax.com/documentation/cassandra/1.2/cassandra/reference/referenceTableAttributes.html#reference_ds_zyq_zmz_1k__sstable_size_in_mb)
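If the table really does use leveled compaction, raising the target SSTable size is a per-table ALTER; a hedged sketch via cqlsh, where the 256 MB value is illustrative and not taken from the question:

# illustrative only: raise the LCS target SSTable size for the table seen in the logs
cqlsh -e "ALTER TABLE bigdspace.bid_list
          WITH compaction = {'class': 'LeveledCompactionStrategy',
                             'sstable_size_in_mb': 256};"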
If you are using size tiered compaction, increase the number of SSTables required before a minor compaction runs. This is set when the table is created, but you can change it with an ALTER statement. Example below:
ALTER TABLE users
WITH compaction = {'class': 'SizeTieredCompactionStrategy', 'min_threshold': 6};
This triggers a minor compaction only once 6 similarly sized SSTables have accumulated.

Related

Troubleshooting and fixing Cassandra OOM issue

Although there are multiple threads regarding the OOM issue, I would like to clarify certain things. We are running a 36-node Cassandra 3.11.6 cluster in Kubernetes, with 32 GiB allocated to each container.
The container is getting OOM-killed (note: not a Java heap OutOfMemoryError, but the Linux cgroup OOM killer), since it reaches the 32 GiB memory limit of its cgroup.
Stats and configs
limits: {ephemeral-storage: 2Gi, memory: 32Gi}
requests: {cpu: 7, ephemeral-storage: 2Gi, memory: 32Gi}
Cgroup memory limit
34359738368 bytes -> 32 GiB
JVM sizes auto-calculated by Cassandra: -Xms19660M -Xmx19660M -Xmn4096M (a sketch for reading these values off the container follows the links below)
Grafana screenshot (not included here): it shows the node at ~98% of the 32 GiB memory limit.
Cassandra Yaml --> https://pastebin.com/ZZLTc1cM
JVM Options --> https://pastebin.com/tjzZRZvU
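For completeness, both the cgroup limit and the heap flags above can be read off the running container; a sketch, assuming a cgroup v1 mount and that the process is found by the pgrep pattern:

# cgroup v1 memory limit for the container (34359738368 on this node)
cat /sys/fs/cgroup/memory/memory.limit_in_bytes
# heap flags actually passed to the Cassandra JVM
ps -o args= -p "$(pgrep -f CassandraDaemon)" | tr ' ' '\n' | grep -E '^-Xm[sxn]'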
Nodetool info output on a node which is already consuming 98% of the memory
nodetool info
ID : 59c53bdb-4f61-42f5-a42c-936ea232e12d
Gossip active : true
Thrift active : true
Native Transport active: true
Load : 179.71 GiB
Generation No : 1643635507
Uptime (seconds) : 9134829
Heap Memory (MB) : 5984.30 / 19250.44
Off Heap Memory (MB) : 1653.33
Data Center : datacenter1
Rack : rack1
Exceptions : 5
Key Cache : entries 138180, size 99.99 MiB, capacity 100 MiB, 9666222 hits, 10281941 requests, 0.940 recent hit rate, 14400 save period in seconds
Row Cache : entries 10561, size 101.76 MiB, capacity 1000 MiB, 12752 hits, 88528 requests, 0.144 recent hit rate, 900 save period in seconds
Counter Cache : entries 714, size 80.95 KiB, capacity 50 MiB, 21662 hits, 21688 requests, 0.999 recent hit rate, 7200 save period in seconds
Chunk Cache : entries 15498, size 968.62 MiB, capacity 1.97 GiB, 283904392 misses, 34456091078 requests, 0.992 recent hit rate, 467.960 microseconds miss latency
Percent Repaired : 8.28107989669628E-8%
Token : (invoke with -T/--tokens to see all 256 tokens)
What has been done
We have made sure there is no memory leak in the Cassandra process (we run custom trigger code). GC log analysis shows we occupy roughly 14 GiB of the total JVM space.
Questions
We know Cassandra occupies off-heap space (bloom filters, memtables, etc.).
The Grafana screenshot shows the node occupying 98% of 32 GiB. The JVM heap is ~19.2 GiB (-Xmx19660M), and the off-heap space reported by nodetool info is 1653.33 MiB (~1.6 GiB), so heap + off-heap comes to roughly 21 GiB. Where is the remaining ~11 GiB? How do we account precisely for what is occupying it? (nodetool tablestats and nodetool cfstats output cannot be shared for compliance reasons.)
Our production cluster requires a lot of approvals, so attaching jconsole remotely is difficult. Are there other ways to account for this memory usage?
Once we have accounted for the memory usage, what are the next steps to fix this and avoid the OOM kill?
There's a good chance that the SSTables are getting mapped into memory (cached with mmap()). If this is the case, it wouldn't be immediate; memory usage grows over time as SSTables are read and subsequently cached. I've written about this issue at https://community.datastax.com/questions/6947/.
There's an issue with a not-so-well-known configuration property called "disk access mode". When it's not set in cassandra.yaml, it defaults to mmap, which means that all SSTables get mmapped into memory. If so, you'll see an entry in system.log on startup that looks like:
INFO [main] 2019-05-02 12:33:21,572 DatabaseDescriptor.java:350 - \
DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
The solution is to configure disk access mode to only cache SSTable index files (not the *-Data.db component) by setting:
disk_access_mode: mmap_index_only
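To get a rough number for how much resident memory the mmapped data files currently account for, something like the following can be run against the live process (a sketch; the pgrep pattern is an assumption):

# sum the resident (RSS) kilobytes of all mmapped *-Data.db mappings
CASSANDRA_PID=$(pgrep -f CassandraDaemon)
pmap -x "$CASSANDRA_PID" | grep 'Data.db' \
  | awk '{rss += $3} END {printf "%.1f MiB resident in mmapped Data files\n", rss/1024}'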
For more information, see the link I posted above. Cheers!

Understanding an Apache Cassandra Memtable Flush [duplicate]

A memtable is created for every table (column family). There can be multiple memtables for a table, but only one of them is active; the rest are waiting to be flushed. A few properties affect a memtable's size and flushing frequency. These include:
memtable_flush_writers – This is the number of threads allocated for flushing memtables to disk. This defaults to two.
memtable_heap_space_in_mb – This is the total allocated space for all memtables on an Apache Cassandra node. By default, this is one-fourth your heap size. Specifying this property results in an absolute heap size in MB as opposed to a percentage of the total JVM heap.
memtable_cleanup_threshold – A percentage of your total available memtable space that will trigger a memtable cleanup. memtable_cleanup_threshold defaults to 1 / (memtable_flush_writers + 1). By default this is essentially 33% of your memtable_heap_space_in_mb.
A scheduled cleanup results in flushing of the table/column family that occupies the largest portion of memtable space. This keeps happening until the memtable memory in use drops below the cleanup threshold.
Let's assume we have an Apache Cassandra instance that has been allocated 4 GB of heap. Of this, only 3,925.5 MB is available to the Java runtime; see the following StackOverflow question (Why do -Xmx and Runtime.maxMemory not agree) for the reasons behind this. By default, one-fourth of that, i.e. 981 MB, is allocated to memtables. Our memtable_cleanup_threshold is the default value, i.e. 33 percent of the total memtable heap and off-heap memory, which in our example comes to 327 MB. Thus, when the total space used by all memtables exceeds 327 MB, a memtable cleanup is triggered. The cleanup process looks for the largest memtable and flushes it to disk.
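The arithmetic from the example can be written out as a quick sketch (numbers taken from the example above; memtable_flush_writers is assumed to be at its default of 2):

# default memtable sizing math for a 3,925.5 MB usable heap
usable_heap_mb=3925.5
memtable_space_mb=$(echo "$usable_heap_mb / 4" | bc -l)        # heap/4 ~= 981 MB
cleanup_threshold=$(echo "1 / (2 + 1)" | bc -l)                # 1/(flush_writers+1) ~= 0.33
flush_trigger_mb=$(echo "$memtable_space_mb * $cleanup_threshold" | bc -l)
printf 'cleanup triggered at ~%.0f MB of memtable space\n' "$flush_trigger_mb"   # ~327 MB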
If I am allocating 981 MB for memtables and Cassandra initiates a flush at 327 MB, does that mean that at any point in time Cassandra will have at most 327 MB of active memtables? Then what about the remaining (981 - 327) MB = 654 MB of memtable space? What is it used for? I can see that memtables queued to be flushed occupy some portion of this 654 MB, but what about the rest of the space? Is it not being wasted?
memtable_heap_space_in_mb decides how much heap may be used for memtables; Cassandra does not have to use all of it. If only 327 MB is being used for memtables, the rest of the heap can be used for other work such as queries or repair operations.

Cassandra killed by O/S without error in logs

One node of my Cassandra cluster crashes very often, roughly every day. It mostly crashes in the morning, most probably because of memory consumption by other jobs on the machine, but I don't see any error message in the logs. These are the log messages from when it crashes:
INFO [MemtableFlushWriter:313] 2016-04-07 07:19:19,629 Memtable.java:385 - Completed flushing /data/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/system-compaction_history-ka-6666-Data.db (6945 bytes) for commitlog position ReplayPosition(segmentId=1459914506391, position=14934142)
INFO [MemtableFlushWriter:313] 2016-04-07 07:19:19,631 Memtable.java:346 - Writing Memtable-sstable_activity#823770964(21691 serialized bytes, 7605 ops, 0%/0% of on/off-heap limit)
INFO [MemtableFlushWriter:313] 2016-04-07 07:19:19,650 Memtable.java:385 - Completed flushing /data/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-7558-Data.db (13689 bytes) for commitlog position ReplayPosition(segmentId=1459914506391, position=14934142)
INFO [CompactionExecutor:1176] 2016-04-07 07:19:19,651 CompactionTask.java:140 - Compacting [SSTableReader(path='/data/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-7557-Data.db'), SSTableReader(path='/data/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-7556-Data.db'), SSTableReader(path='/data/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-7555-Data.db'), SSTableReader(path='/data/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-7558-Data.db')]
INFO [CompactionExecutor:1176] 2016-04-07 07:19:19,696 CompactionTask.java:270 - Compacted 4 sstables to [/data/cassandra/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/system-sstable_activity-ka-7559,]. 46,456 bytes to 9,197 (~19% of original) in 45ms = 0.194910MB/s. 1,498 total partitions merged to 216. Partition merge counts were {1:675, 2:33, 3:11, 4:181, }
INFO [MemtableFlushWriter:312] 2016-04-07 07:19:19,717 Memtable.java:385 - Completed flushing /data/cassandra/data/system/size_estimates-618f817b005f3678b8a453f3930b8e86/system-size_estimates-ka-8014-Data.db (849738 bytes) for commitlog position ReplayPosition(segmentId=1459914506391, position=14934142)
I know that the OS kills it, but how do I find the root cause of the problem, and are there any configuration changes that can be made in Cassandra?
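One way to confirm the OS-level kill and see how much memory the process held at that moment is to check the kernel log; a sketch, assuming dmesg and/or journalctl are available on the host:

# look for OOM-killer entries naming the Cassandra java process
dmesg -T | grep -iE 'out of memory|oom-killer|killed process'
# or, on systemd hosts:
journalctl -k --since today | grep -i 'killed process'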

Cassandra: Long Par New GC Pauses when Bootstrapping new nodes to cluster

I've seen an issue that happens fairly often when bootstrapping new nodes into a DataStax Enterprise Cassandra cluster (version 2.0.10.71).
When the new node to be bootstrapped starts, the bootstrap process begins streaming data from other nodes in the cluster. After a short period of time (usually a minute or less), other nodes in the cluster show high ParNew GC pause times and then drop off from the cluster, failing the stream session.
INFO [main] 2015-04-27 16:59:58,644 StreamResultFuture.java (line 91) [Stream #d42dfef0-ecfe-11e4-8099-5be75b0950b8] Beginning stream session with /10.1.214.186
INFO [GossipTasks:1] 2015-04-27 17:01:06,342 Gossiper.java (line 890) InetAddress /10.1.214.186 is now DOWN
INFO [HANDSHAKE-/10.1.214.186] 2015-04-27 17:01:21,400 OutboundTcpConnection.java (line 386) Handshaking version with /10.1.214.186
INFO [RequestResponseStage:11] 2015-04-27 17:01:23,439 Gossiper.java (line 876) InetAddress /10.1.214.186 is now UP
Then on the other node:
10.1.214.186 ERROR [STREAM-IN-/10.1.212.233] 2015-04-27 17:02:07,007 StreamSession.java (line 454) [Stream #d42dfef0-ecfe-11e4-8099-5be75b0950b8] Streaming error occurred
We also see things like this in the logs:
10.1.219.232 INFO [ScheduledTasks:1] 2015-04-27 18:20:19,987 GCInspector.java (line 116) GC for ParNew: 118272 ms for 2 collections, 980357368 used; max is 12801015808
10.1.221.146 INFO [ScheduledTasks:1] 2015-04-27 18:20:29,468 GCInspector.java (line 116) GC for ParNew: 154911 ms for 1 collections, 1287263224 used; max is 12801015808
It seems that it happens on different nodes each time we try to bootstrap a new node.
I've found this related ticket: https://issues.apache.org/jira/browse/CASSANDRA-6653
My only guess is that when the new node comes up, a lot of compactions fire off, and that might be causing the GC pause times. I had considered setting concurrent_compactors to 1/2 of my total CPU count.
Anyone have an idea?
Edit: more details on the GC settings. We are using i2.2xlarge nodes on EC2:
MAX_HEAP_SIZE="12G"
HEAP_NEWSIZE="800M"
Also
JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC"
JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled"
JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=8"
JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=1"
JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
JVM_OPTS="$JVM_OPTS -XX:+UseTLAB"
With help from the DSE crew, the following settings helped us (a sketch of where these settings live follows below).
With an i2.2xlarge node (8 CPUs, 60 GB of RAM, local SSD only):
Increasing heap new size to 512M * number of CPUs (in our case 4G)
Setting memtable_flush_writers = 8
Setting concurrent_compactors = total CPU / 2 (in our case 4)
After making these changes we no longer see ParNew GC times exceeding 1 second during bootstrap (previously we were seeing 50-100 SECOND GC pauses). FWIW, we don't see long ParNew GC times during normal operation, only during bootstrap.
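A sketch of where those three settings live, assuming the stock cassandra-env.sh / cassandra.yaml layout (file paths vary by install):

# cassandra-env.sh -- larger young generation (512M per core on 8 cores)
MAX_HEAP_SIZE="12G"
HEAP_NEWSIZE="4G"
# cassandra.yaml -- shown here as comments; edit the YAML file itself
#   memtable_flush_writers: 8
#   concurrent_compactors: 4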

Cassandra repair failing because of GC

We have a 9-node cluster and run repairs every night, as recommended (one node each night).
We recently started having problems during the repairs: some nodes would die with OutOfMemory errors because the GC could not collect fast enough. In the beginning it was a promotion issue (as shown by the detailed GC logs).
So we assumed that CMS was not being triggered early enough, which prevented ParNew from promoting surviving objects. We then lowered XX:CMSInitiatingOccupancyFraction from 75 to 50 to force the old-generation GC to trigger sooner.
It seemed to work, but yesterday two nodes died because the GC couldn't cope with the allocation rate, producing logs like these:
INFO [ScheduledTasks:1] 2013-09-27 23:36:38,111 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 21756 ms for 1 collections, 8003258240 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:36:38,878 GCInspector.java (line 142) Heap is 0.9746211436302873 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:36:57,018 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 17265 ms for 1 collections, 6587223560 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:36:57,243 GCInspector.java (line 142) Heap is 0.802179208376459 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:37:18,180 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 18437 ms for 1 collections, 6961687392 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:37:18,785 GCInspector.java (line 142) Heap is 0.8477806818323523 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:37:40,416 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 19032 ms for 1 collections, 7338693168 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:37:40,456 GCInspector.java (line 142) Heap is 0.893691708259552 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:38:02,994 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 18853 ms for 1 collections, 7570047632 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:38:03,008 GCInspector.java (line 142) Heap is 0.9218656026318086 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:38:26,110 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 19564 ms for 1 collections, 7714594464 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:38:26,132 GCInspector.java (line 142) Heap is 0.9394682332713986 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:38:49,733 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 20388 ms for 1 collections, 7843428464 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:38:49,748 GCInspector.java (line 142) Heap is 0.9551573859456055 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:39:14,564 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 20956 ms for 1 collections, 7934286376 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:39:14,578 GCInspector.java (line 142) Heap is 0.9662218848591505 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:39:40,186 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 22440 ms for 1 collections, 8008275464 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:39:40,915 GCInspector.java (line 142) Heap is 0.9752321313612954 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:40:01,836 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 19911 ms for 1 collections, 8022614576 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:40:06,032 GCInspector.java (line 142) Heap is 0.976978320390438 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [ScheduledTasks:1] 2013-09-27 23:40:27,407 GCInspector.java (line 119) GC for ConcurrentMarkSweep: 22590 ms for 1 collections, 8058828880 used; max is 8211660800
WARN [ScheduledTasks:1] 2013-09-27 23:40:31,091 GCInspector.java (line 142) Heap is 0.9813884275395302 full. You may need to reduce memtable and/or cache sizes. Cassandra will now flush up to the two largest memtables to free up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if you don't want Cassandra to do this automatically
INFO [GossipTasks:1] 2013-09-27 23:40:53,798 Gossiper.java (line 799) InetAddress /<datacenter02>.<node2> is now DOWN
INFO [GossipTasks:1] 2013-09-27 23:40:53,846 Gossiper.java (line 799) InetAddress /<datacenter01>.<node3> is now DOWN
INFO [GossipStage:1] 2013-09-27 23:40:53,857 Gossiper.java (line 785) InetAddress /<datacenter01>.<node3> is now UP
INFO [GossipStage:1] 2013-09-27 23:40:53,909 Gossiper.java (line 785) InetAddress /<datacenter02>.<node2> is now UP
This time the heap grows and the GC runs for 10-20 seconds without reducing the heap size, causing nodes to mark each other as down because they are busy GCing. In the end, the nodes died of OOM.
We then upgraded to the latest version of Cassandra (1.2.8 -> 1.2.10), even though no bug fixed in those versions suggested any improvement for our problem. We reran a repair last night; even though no nodes crashed, they failed to repair some ranges because of GCs like this:
INFO [ScheduledTasks:1] 2013-09-29 04:45:05,467 GCInspector.java (line 119) GC for ParNew: 22875 ms for 2 collections, 4128819328 used; max is 8211660800
INFO [ScheduledTasks:1] 2013-09-29 04:53:24,597 GCInspector.java (line 119) GC for ParNew: 133643 ms for 2 collections, 3102634584 used; max is 8211660800
This time it's ParNew taking ridiculous amounts of time.
I first thought it was a load issue, but it continued to happen over the weekend, when only the repair is running.
Any help diagnosing or fixing our issue would be appreciated.
The StatusLogger info doesn't show anything unusual except for GC taking a while. (Are you running on VMs? That tends to reduce GC performance: http://www.slideshare.net/eonnen/high-performance-network-programming-on-the-jvm-oscon-2012/62.)
My guess: repair adds enough load to the system that it falls behind processing requests and spends too much memory buffering them. You can verify this by looking for "dropped" messages in the log. By default it will buffer 10s worth of requests; to reduce this, lower the appropriate rpc timeouts in cassandra.yaml.
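A quick way to check for the dropped messages mentioned above (a sketch; the log path is an assumption about your install):

# dropped messages appear both in the log and in the thread-pool stats
grep -i 'dropped' /var/log/cassandra/system.log | tail -n 20
nodetool tpstats    # the dropped-message counts are listed at the end of the output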
Try using the G1 GC instead of the CMS. G1 does not pause like that:
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsTuneJVM.html
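If you try G1, the usual change in cassandra-env.sh is to drop the CMS/ParNew flags and enable G1 with a pause target; a sketch, where the 500 ms target is an illustrative value, not a recommendation for this workload:

# enable G1 instead of CMS/ParNew (remove -XX:+UseParNewGC and -XX:+UseConcMarkSweepGC)
JVM_OPTS="$JVM_OPTS -XX:+UseG1GC"
JVM_OPTS="$JVM_OPTS -XX:MaxGCPauseMillis=500"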
