Determining how full a Cassandra cluster is

I just imported a lot of data in a 9 node Cassandra cluster and before I create a new ColumnFamily with even more data, I'd like to be able to determine how full my cluster currently is (in terms of memory usage). I'm not too sure what I need to look at. I don't want to import another 20-30GB of data and realize I should have added 5-6 more nodes.
In short, I have no idea if I have too few/many nodes right now for what's in the cluster.
Any help would be greatly appreciated :)
$ nodetool -h 192.168.1.87 ring
Address DC Rack Status State Load Owns Token
151236607520417094872610936636341427313
192.168.1.87 datacenter1 rack1 Up Normal 7.19 GB 11.11% 0
192.168.1.86 datacenter1 rack1 Up Normal 7.18 GB 11.11% 18904575940052136859076367079542678414
192.168.1.88 datacenter1 rack1 Up Normal 7.23 GB 11.11% 37809151880104273718152734159085356828
192.168.1.84 datacenter1 rack1 Up Normal 4.2 GB 11.11% 56713727820156410577229101238628035242
192.168.1.85 datacenter1 rack1 Up Normal 4.25 GB 11.11% 75618303760208547436305468318170713656
192.168.1.82 datacenter1 rack1 Up Normal 4.1 GB 11.11% 94522879700260684295381835397713392071
192.168.1.89 datacenter1 rack1 Up Normal 4.83 GB 11.11% 113427455640312821154458202477256070485
192.168.1.51 datacenter1 rack1 Up Normal 2.24 GB 11.11% 132332031580364958013534569556798748899
192.168.1.25 datacenter1 rack1 Up Normal 3.06 GB 11.11% 151236607520417094872610936636341427313
-
# nodetool -h 192.168.1.87 cfstats
Keyspace: stats
Read Count: 232
Read Latency: 39.191931034482764 ms.
Write Count: 160678758
Write Latency: 0.0492021849459404 ms.
Pending Tasks: 0
Column Family: DailyStats
SSTable count: 5267
Space used (live): 7710048931
Space used (total): 7710048931
Number of Keys (estimate): 10701952
Memtable Columns Count: 4401
Memtable Data Size: 23384563
Memtable Switch Count: 14368
Read Count: 232
Read Latency: 29.047 ms.
Write Count: 160678813
Write Latency: 0.053 ms.
Pending Tasks: 0
Bloom Filter False Positives: 0
Bloom Filter False Ratio: 0.00000
Bloom Filter Space Used: 115533264
Key cache capacity: 200000
Key cache size: 1894
Key cache hit rate: 0.627906976744186
Row cache: disabled
Compacted row minimum size: 216
Compacted row maximum size: 42510
Compacted row mean size: 3453
-
[default@stats] describe;
Keyspace: stats:
Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
Durable Writes: true
Options: [replication_factor:3]
Column Families:
ColumnFamily: DailyStats (Super)
Key Validation Class: org.apache.cassandra.db.marshal.BytesType
Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type/org.apache.cassandra.db.marshal.UTF8Type
Row cache size / save period in seconds / keys to save : 0.0/0/all
Row Cache Provider: org.apache.cassandra.cache.ConcurrentLinkedHashCacheProvider
Key cache size / save period in seconds: 200000.0/14400
GC grace seconds: 864000
Compaction min/max thresholds: 4/32
Read repair chance: 1.0
Replicate on write: true
Built indexes: []
Column Metadata:
(removed)
Compaction Strategy: org.apache.cassandra.db.compaction.LeveledCompactionStrategy
Compression Options:
sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor

Obviously, there are two types of memory -- disk and RAM. I'm going to assume you're talking about disk space.
First, find out how much space you're currently using per node. Check the on-disk usage of the Cassandra data directory (by default /var/lib/cassandra/data) with: du -ch /var/lib/cassandra/data. Then compare that to the size of the disk it lives on, which you can find with df -h; look only at the df entry for the disk holding your Cassandra data (check the "Mounted on" column).
Using those two numbers, you can calculate how full the Cassandra data partition is as a percentage. Generally you don't want to get too close to 100%, because Cassandra's normal compaction processes temporarily use extra disk space. If you don't leave enough headroom, a node can get caught with a full disk, which can be painful to resolve (as a side note, I occasionally keep a "ballast" file of a few GB that I can delete in case I need to free some space quickly). I've generally found that staying below about 70% disk usage is on the safe side for the 0.8 series.
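As a concrete sketch, assuming the default data directory and that it sits on its own partition (adjust the paths to your layout):
$ du -ch /var/lib/cassandra/data | tail -1    # total on-disk size of the Cassandra data
$ df -h /var/lib/cassandra/data               # size, used space and Use% of the partition it lives on
If du reports, say, 7.2G and df shows a 100 GB partition, you're at roughly 7% and have plenty of headroom before the ~70% mark mentioned above.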
If you're using a newer version of Cassandra, I'd recommend giving the Leveled Compaction strategy a shot to reduce temporary disk usage. Instead of potentially needing up to twice the data size during compaction, the new strategy temporarily uses at most about 10x a small, fixed SSTable size (5 MB by default).
You can read more about how compaction temporarily increases disk usage in this excellent blog post from DataStax, which also explains the compaction strategies: http://www.datastax.com/dev/blog/leveled-compaction-in-apache-cassandra
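If you do decide to switch an existing column family over (your describe output above shows DailyStats is already on LeveledCompactionStrategy), it's roughly a one-liner in the cassandra-cli of that era; treat this as a sketch and double-check the syntax against your version:
update column family DailyStats with compaction_strategy = 'LeveledCompactionStrategy';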
So, to do a little capacity planning, you can figure out how much more space you'll need. With a replication factor of 3 (what you're using above), adding 20-30 GB of raw data adds 60-90 GB after replication. Split between your 9 nodes, that's roughly 7-10 GB more per node. Does that kind of extra disk usage per node push you too close to full disks? If so, you may want to consider adding more nodes to the cluster.
One other note is that your nodes' loads aren't very even -- ranging from about 2 GB up to 7 GB. If you're using the ByteOrderedPartitioner instead of the RandomPartitioner, that can cause uneven load and "hotspots" in your ring; consider switching to the random partitioner if possible. The other possibility is that you have extra data hanging around that needs to be taken care of (hinted handoffs and snapshots come to mind). Consider cleaning that up by running nodetool repair and nodetool cleanup on each node, one at a time (be sure to read up on what those do first!).
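A minimal sketch of that cleanup pass, one node at a time, waiting for each to finish before moving on (the host shown is just the first node from your ring output):
$ nodetool -h 192.168.1.87 repair
$ nodetool -h 192.168.1.87 cleanup
Repeat for the other eight nodes, and keep an eye on disk space while the compactions these trigger are running.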
Hope that helps.

Related

Troubleshooting and fixing Cassandra OOM issue

Although there are multiple threads about this OOM issue, I'd like to clarify certain things. We are running a 36-node Cassandra 3.11.6 cluster in Kubernetes, with 32 GB allocated to each container.
The container is getting OOM-killed (note: not a Java heap OutOfMemoryError, but the Linux cgroup OOM killer) since it reaches the 32 GB memory limit of its cgroup.
Stats and configs
map[limits:map[ephemeral-storage:2Gi memory:32Gi] requests:map[cpu:7 ephemeral-storage:2Gi memory:32Gi]]
Cgroup Memory limit
34359738368 -> 32 Gigs
The JVM sizes auto-calculated by Cassandra: -Xms19660M -Xmx19660M -Xmn4096M
Grafana Screenshot
Cassandra Yaml --> https://pastebin.com/ZZLTc1cM
JVM Options --> https://pastebin.com/tjzZRZvU
Nodetool info output on a node which is already consuming 98% of the memory
nodetool info
ID : 59c53bdb-4f61-42f5-a42c-936ea232e12d
Gossip active : true
Thrift active : true
Native Transport active: true
Load : 179.71 GiB
Generation No : 1643635507
Uptime (seconds) : 9134829
Heap Memory (MB) : 5984.30 / 19250.44
Off Heap Memory (MB) : 1653.33
Data Center : datacenter1
Rack : rack1
Exceptions : 5
Key Cache : entries 138180, size 99.99 MiB, capacity 100 MiB, 9666222 hits, 10281941 requests, 0.940 recent hit rate, 14400 save period in seconds
Row Cache : entries 10561, size 101.76 MiB, capacity 1000 MiB, 12752 hits, 88528 requests, 0.144 recent hit rate, 900 save period in seconds
Counter Cache : entries 714, size 80.95 KiB, capacity 50 MiB, 21662 hits, 21688 requests, 0.999 recent hit rate, 7200 save period in seconds
Chunk Cache : entries 15498, size 968.62 MiB, capacity 1.97 GiB, 283904392 misses, 34456091078 requests, 0.992 recent hit rate, 467.960 microseconds miss latency
Percent Repaired : 8.28107989669628E-8%
Token : (invoke with -T/--tokens to see all 256 tokens)
What had been done
We have made sure there is no memory leak in the Cassandra process itself, since we run custom trigger code. GC log analysis shows the JVM occupies roughly 14 GB in total.
Questions
We know Cassandra occupies off-heap space (bloom filters, memtables, etc.).
The Grafana screenshot shows the node occupying 98% of 32 GB. The JVM heap is 19.5 GB and the off-heap space in the nodetool info output is 1653.33 MB (~1.6 GB), so heap plus off-heap comes to roughly 21 GB. Where is the remaining ~10 GB? How do we account exactly for what is occupying it? (nodetool tablestats and nodetool cfstats output cannot be shared for compliance reasons.)
Our production cluster requires tons of approvals, so deploying with remote JConsole is tough. Are there other ways to account for this memory usage?
Once we account for the memory usage, what are the next steps to fix this and avoid the OOM kill?
There's a good chance that the SSTables are getting mapped to memory (cached with mmap()). If this is the case, it wouldn't be immediate and memory usage would grow over time depending on when SSTables are read which are then cached. I've written about this issue in https://community.datastax.com/questions/6947/.
There's an issue with a not-so-well-known configuration property called "disk access mode". When it's not set in cassandra.yaml, it defaults to mmap, which means that all SSTables get mmapped into memory. If that's the case, you'll see an entry in the system.log on startup that looks like:
INFO [main] 2019-05-02 12:33:21,572 DatabaseDescriptor.java:350 - \
DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
The solution is to configure disk access mode to only cache SSTable index files (not the *-Data.db component) by setting:
disk_access_mode: mmap_index_only
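One quick way to confirm that mmapped Data.db files are what's occupying the unaccounted-for memory (a sketch, assuming shell access to the node and that the pgrep and pmap utilities are available):
$ pmap -x $(pgrep -f CassandraDaemon) | grep 'Data.db' | head
A long list of Data.db mappings whose resident sizes add up to several GB points at the mmap/page-cache behaviour described here rather than a leak in your trigger code.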
For more information, see the link I posted above. Cheers!

Repair status not 100% after repair

I have noticed that some tables show less than 100% "Percent repaired" in the nodetool tablestats output. I have manually executed repairs on all nodes (3-node cluster, RF=3) but the value doesn't seem to change.
Example output:
Table: users
SSTable count: 3
Space used (live): 66636
Space used (total): 66636
Space used by snapshots (total): 0
Off heap memory used (total): 688
SSTable Compression Ratio: 0.5731829674519404
Number of partitions (estimate): 162
Memtable cell count: 11
Memtable data size: 483
Memtable off heap memory used: 0
Memtable switch count: 27
Local read count: 120833
Local read latency: NaN ms
Local write count: 12094
Local write latency: NaN ms
Pending flushes: 0
Percent repaired: 91.54
Bloom filter false positives: 0
Bloom filter false ratio: 0.00000
Bloom filter space used: 568
Bloom filter off heap memory used: 544
Index summary off heap memory used: 112
Compression metadata off heap memory used: 32
Compacted partition minimum bytes: 30
Compacted partition maximum bytes: 1916
Compacted partition mean bytes: 420
Average live cells per slice (last five minutes): NaN
Maximum live cells per slice (last five minutes): 0
Average tombstones per slice (last five minutes): NaN
Maximum tombstones per slice (last five minutes): 0
Dropped Mutations: 0
Repair was done with nodetool repair -pr
What is going on?
Percent repaired can be a misleading metric: it refers to the percentage of SSTables repaired, but there are some conditions for it to be computed at all:
- the tables should not be in system keyspaces
- the tables should have a replication factor greater than 1
- the repair should be incremental or full (non-subrange)
When you use nodetool repair -pr, that will invoke a full repair that won't be able to update this value.
For more information regarding incremental repairs, I would recommend this article from The Last Pickle. Since they took over maintenance of the Reaper tool, they have become an authority on repairs.
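For reference, the kind of repair that does update this metric is an incremental one, and on 2.2+ incremental is the default mode when you run nodetool repair without the -full flag (the keyspace name below is a placeholder):
$ nodetool repair my_keyspace
As always, read up on the caveats of incremental repair (anticompaction in particular) before making it a routine.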
Executing nodetool repair -pr will repair the primary range owned by the node that command is executed on.
What does this mean? The node this command is executed on has data that it "owns", i.e. its primary range, but the node also contains data/replicas "owned" by other nodes. You are not repairing the replicas "owned" by other nodes.
Now, if you execute that command on every single node in the cluster (not just in one data center), it will cover all the token ranges.
EDIT / NOTE:
My answer did not properly address the question. Although what I wrote is accurate, the answer to the question is stated in the answer above mine; basically, the percentage repaired is a value that is for incremental repair usage and is not affected by a full repair. (Incremental repair marks the repaired ranges as it works so it does not spend time re-repairing later.)

Why does nodetool status *keyspace* still show hundreds of MBs of data after TRUNCATE?

I have used the TRUNCATE command from the CQLSH at node .20 for my table.
20 minutes have passed since I issued the command, and the output of nodetool status *myKeyspace* still shows a lot of data on 4 out of 6 nodes.
I am using Cassandra 3.0.8
192.168.178.20:/usr/share/cassandra$ nodetool status *myKeyspace*
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 192.168.178.24 324,57 MB 256 32,7% 4d852aea-65c7-42e1-b2bd-f38a320ec827 rack1
UN 192.168.178.28 650,86 KB 256 35,7% 82b67dc5-9f4f-47e9-81d7-a93f28a3e9da rack1
UN 192.168.178.30 155,68 MB 256 31,9% 28cf5138-7b61-42ca-8b0c-e4be1b5418ba rack1
UN 192.168.178.32 321,62 MB 256 33,3% 64e106ed-770f-4654-936d-db5b80aa37dc rack1
UN 192.168.178.36 640,91 KB 256 33,0% 76152b07-caa6-4214-8239-e8a51bbc4b62 rack1
UN 192.168.178.20 103,07 MB 256 33,3% 539a6333-c4ef-487a-b1e4-aac40949af4c rack1
The following command was run on the .24 node. It looks like there are still snapshots/backups being saved somewhere. But the amount of data, 658 MB for node .24, does not match the 324 MB reported by nodetool status. What's going on there?
192.168.178.24:/usr/share/cassandra$ nodetool cfstats *myKeyspace*
Keyspace: *myKeyspace*
Read Count: 0
Read Latency: NaN ms.
Write Count: 0
Write Latency: NaN ms.
Pending Flushes: 0
Table: data
SSTable count: 0
Space used (live): 0
Space used (total): 0
Space used by snapshots (total): 658570012
Off heap memory used (total): 0
SSTable Compression Ratio: 0.0
Number of keys (estimate): 0
Memtable cell count: 0
Memtable data size: 0
Memtable off heap memory used: 0
Memtable switch count: 0
Local read count: 0
Local read latency: NaN ms
Local write count: 0
Local write latency: NaN ms
Pending flushes: 0
Bloom filter false positives: 0
Bloom filter false ratio: 0,00000
Bloom filter space used: 0
Bloom filter off heap memory used: 0
Index summary off heap memory used: 0
Compression metadata off heap memory used: 0
Compacted partition minimum bytes: 0
Compacted partition maximum bytes: 0
Compacted partition mean bytes: 0
Average live cells per slice (last five minutes): 3.790273556231003
Maximum live cells per slice (last five minutes): 103
Average tombstones per slice (last five minutes): 1.0
Maximum tombstones per slice (last five minutes): 1
Note that there are no tables other than the one I truncated in the keyspace. There might be some index data from cassandra-lucene-index, though, if it does not get cleared by TRUNCATE.
The keyspace option of nodetool status is really only used to know the replication factor and datacenters to include when computing ownership. The Load column is actually for all SSTables, not just the one keyspace, just as the IP address, host ID, and number of tokens are not affected by the keyspace option. status is more of a global check.
Space used by snapshots is expected to still hold old data: when you do a TRUNCATE, Cassandra snapshots the data first (you can disable that by setting auto_snapshot to false in cassandra.yaml). To clear all the snapshots you can use nodetool clearsnapshot <keyspace>.
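A minimal sketch of checking and then clearing them (the keyspace name is the placeholder from your output; nodetool listsnapshots is available from 2.1 onwards):
$ nodetool listsnapshots              # lists each snapshot and the space it occupies
$ nodetool clearsnapshot myKeyspace   # drops all snapshots for that keyspace
After that, the "Space used by snapshots (total)" figure in cfstats should drop back to 0.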

Why is the load different on a 3 node cluster with RF 3?

I have a 3 node Cassandra cluster with a replication factor of 3.
This means that all data should be replicated to all 3 nodes.
The following is the output of nodetool status:
-- Address Load Tokens Owns (effective) Host ID Rack
UN 192.168.0.1 27.66 GB 256 100.0% 2e89198f-bc7d-4efd-bf62-9759fd1d4acc RAC1
UN 192.168.0.2 28.77 GB 256 100.0% db5fd62d-3381-42fa-84b5-7cb12f3f946b RAC1
UN 192.168.0.3 27.08 GB 256 100.0% 1ffb4798-44d4-458b-a4a8-a8898e0152a2 RAC1
This is a graph of disk usage over time on all 3 of the nodes:
My question is why do these sizes vary so much? Is it that compaction hasn't run at the same time?
I would say several factors could play a role here.
As you note, compaction will not run at the same time, so the number and contents of the SSTables will be somewhat different on each node.
The memtables will also not have been flushed to SSTables at the same time either, so right from the start, each node will have somewhat different SSTables.
If you're using compression for the SSTables, given that their contents are somewhat different, the amount of space saved by compressing the data will vary somewhat.
And even though you are using a replication factor of three, I would imagine the storage overhead for non-primary-range data is slightly different from that for primary-range data, and it's likely that more primary-range data is mapped to one node or another.
So basically, unless each node saw the exact same sequence of writes at exactly the same time, they won't end up with exactly the same size of data on disk.
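If you want to see how much of the gap is just unflushed memtables and in-flight compactions, a simple check to run on each node (both subcommands are standard nodetool) is:
$ nodetool flush              # write the current memtables out to SSTables
$ nodetool compactionstats    # see whether compactions are still pending or running
Once the flushes land and pending compactions drain, the three Load figures should converge noticeably, though they will rarely be byte-identical.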

Actual Storage Load of a Cassandra node

Is there any way to get the actual data size stored on a node?
As far as I know, nodetool only reports the compressed data size.
Thx!
Have you tried nodetool cfstats <keyspace>? That breaks usage down per column family (for each column family in the specified keyspace) and gives you more detail on space usage.
aploetz#ubuntu:/var/lib/cassandra/data$ nodetool cfstats products
Keyspace: products
Read Count: 3515
Read Latency: 0.4077462304409673 ms.
Write Count: 5434
Write Latency: 0.04547313213102686 ms.
Pending Tasks: 0
Table: itemmaster
SSTable count: 3
Space used (live), bytes: 1156013
Space used (total), bytes: 1266953
SSTable Compression Ratio: 0.2963641232834859
...
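The SSTable Compression Ratio shown there is compressed size divided by uncompressed size, so you can back out a rough uncompressed figure from the live space; a quick sketch with the numbers above:
$ echo '1156013 / 0.2963641232834859' | bc -l    # ≈ 3.9 MB of uncompressed data for itemmaster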
