How does the Leveled Compaction Strategy ensure 90% of reads are from one sstable - cassandra

I am trying to understand how the Leveled Compaction Strategy in Cassandra works and how it guarantees that 90% of all reads will be satisfied from a single sstable.
From DataStax Doc:
new sstables are added to the first level, L0, and immediately compacted with the sstables in L1. When L1 fills up, extra sstables are promoted to L2. Subsequent sstables generated in L1 will be compacted with the sstables in L2 with which they overlap.

LeveledCompactionStrategy (LCS) in Cassandra implements the internals of LevelDB. You can check the exact implementation details in the LevelDB implementation doc.
In order to give you a simple explanation take into account the following points:
Every SSTable is created when a fixed (relatively small) size limit is reached. By default L0 gets 5MB files, and each subsequent level is 10x the size (in L1 you'll have 50MB of data, in L2 500MB, and so on).
SSTables are created with the guarantee that they don't overlap within a level (for L1 and above).
When a level fills up, a compaction is triggered and sstables from level-L are promoted to level-L+1. So, in L1 you'll have 50MB in ~10 files, in L2 500MB in ~100 files, etc.
Those are the relevant details that justify the 90% of reads coming from the same file (SSTable). Let's do the math together and everything will become clearer.
Imagine you have keys A,B,C,D,E in L0, and each key takes 1MB of data.
Next we insert key F. Because level 0 is full, a compaction will create a file with [A,B,C,D,E] in level 1, and F will remain in level 0.
That's ~83% of data in 1 file in L1.
Next we insert G,H,I,J and K. So L0 fills up again, and L1 gets a new sstable with [F,G,H,I,J].
By now we have K in L0, [A,B,C,D,E] and [F,G,H,I,J] in L1
And that's ~90% of data in L1.
If we continue inserting keys we will see roughly the same behavior, and that's why you get 90% of reads served from roughly the same file/SSTable.
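To make the arithmetic concrete, here is a minimal toy sketch in Python (my own illustration, assuming the 5-key L0 limit and 1MB per key used in the example above, not Cassandra's actual code) that replays the inserts and reports how much of the data ends up in L1:

# Toy replay of the example above: L0 holds at most 5 keys (1MB each);
# when a new key arrives and L0 is full, L0 is flushed into a single L1 sstable.
L0_LIMIT = 5

def insert_all(keys):
    l0, l1_sstables = [], []
    for key in keys:
        if len(l0) == L0_LIMIT:              # L0 full: compact it into L1
            l1_sstables.append(sorted(l0))
            l0 = []
        l0.append(key)
    return l0, l1_sstables

l0, l1 = insert_all(list("ABCDEFGHIJK"))
in_l1 = sum(len(s) for s in l1)
total = len(l0) + in_l1
print("L0:", l0)                             # ['K']
print("L1:", l1)                             # [['A'..'E'], ['F'..'J']]
print("share of data in L1: %.0f%%" % (100.0 * in_l1 / total))   # prints 91%, i.e. ~90% of the data sits in the L1 run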
More in-depth and detailed info (what happens with updates and tombstones) is given in this paragraph from the link I mentioned (the sizes used for compaction selection are different because they are LevelDB defaults, not Cassandra's):
When the size of level L exceeds its limit, we compact it in a background thread. The compaction picks a file from level L and all overlapping files from the next level L+1. Note that if a level-L file overlaps only part of a level-(L+1) file, the entire file at level-(L+1) is used as an input to the compaction and will be discarded after the compaction. Aside: because level-0 is special (files in it may overlap each other), we treat compactions from level-0 to level-1 specially: a level-0 compaction may pick more than one level-0 file in case some of these files overlap each other.
A compaction merges the contents of the picked files to produce a sequence of level-(L+1) files. We switch to producing a new level-(L+1) file after the current output file has reached the target file size (2MB). We also switch to a new output file when the key range of the current output file has grown enough to overlap more than ten level-(L+2) files. This last rule ensures that a later compaction of a level-(L+1) file will not pick up too much data from level-(L+2).
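The file-picking rule in that quote is straightforward to express in code. Here is a rough sketch of the idea in Python (the key ranges and file names are made up for illustration; this is not the LevelDB or Cassandra source):

# Pick one file from level L plus every level-(L+1) file whose key range
# overlaps it; each file is (name, (first_key, last_key)).
def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def pick_compaction_inputs(level_l_file, level_l1_files):
    _, picked_range = level_l_file
    inputs = [level_l_file]
    for f in level_l1_files:
        if overlaps(picked_range, f[1]):
            inputs.append(f)                 # the whole overlapping file becomes an input
    return inputs

l1_file = ("l1-003", ("f", "m"))
l2_files = [("l2-010", ("a", "e")), ("l2-011", ("d", "h")),
            ("l2-012", ("i", "p")), ("l2-013", ("q", "z"))]
for name, key_range in pick_compaction_inputs(l1_file, l2_files):
    print(name, key_range)                   # l1-003, l2-011 and l2-012 are picked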

Related

What is the effect of number of levels in levelled compaction?

I know how levelled compaction works in DBs like Cassandra, RocksDB, etc. Some have a maximum of 4 levels and some have 7. How does this number affect the compaction process? Why can't I have just 2 levels, the 1st one holding flushed memtable data (overlap possible between files) and the 2nd one containing non-overlapping SSTs?
If there is any doc or duplicate question, please redirect.
Edit-1: Duplicate data increases when the number of levels goes up.
LCS comes to solve STCS's space-amplification problem. It also reduces read amplification (the average number of disk reads needed per read request).
Leveled compaction divides the small sstables (“fragments”) into levels:
Level 0 (L0) holds the new sstables, recently flushed from memtables. As their number grows (and reads slow down), our goal is to move sstables out of this level to the next levels.
Each of the other levels, L1, L2, L3, etc., is a single run of an exponentially increasing size: L1 is a run of 10 sstables, L2 is a run of 100 sstables, L3 is a run of 1000 sstables, and so on. (Factor 10 is the default setting in both Scylla and Apache Cassandra).
While solving, or at least significantly improving, the space amplification problem, LCS makes another problem, write amplification, worse.
"Write amplification” is the amount of bytes we had to write to the disk for each one byte of newly flushed sstable data. Write amplification is always higher than 1.0 because we write each piece of data to the commit-log, and then write it again to an sstable, and then each time compaction involves this piece of data and copies it to a new sstable, that’s another write.
Read more about it here:
https://www.scylladb.com/2018/01/31/compaction-series-leveled-compaction/
https://docs.scylladb.com/kb/compaction/
https://docs.scylladb.com/architecture/compaction/compaction-strategies/
Leveled compaction works in Scylla very similarly to how it works in Cassandra and RocksDB (with some small differences). If you want a short overview of how leveled compaction works in Scylla and why, I suggest that you read my blog post https://www.scylladb.com/2018/01/31/compaction-series-leveled-compaction/.
Your specific question on why two levels (L0 of recently flushed sstables, Ln of disjoint-range sstables) are not enough is a very good one:
The main problem is that a single flushed memtable (sstable in L0), containing a random collection of writes, will often intersect all of the sstables in Ln. This means rewriting the entire database every time there's a new memtable flushed, and the result is a super-huge amount of write amplification, which is completely unacceptable.
One way to reduce this write amplification significantly (but perhaps not enough) is to introduce a cascade of intermediate levels, L0, L1, ..., Ln. The end result is that we have L(n-1) which is 1/10th (say) the size of Ln, and we merge L(n-1) - not a single sstable - into Ln. This is the approach that leveled compaction strategy (LCS) uses in all systems you mentioned.
A completely different approach could be not to merge a single sstable into Ln, but rather try to collect a large amount of data first, and only then merge it into Ln. We can't just collect 1,000 tables in L0 because this would make reads very slow. Rather, to collect this large amount of data, one could use size-tiered compaction (STCS) inside L0. In other words, this approach is a "mix" of STCS and LCS with two "levels": L0 uses STCS on new sstables, Ln contains a run of sstables (sstables with disjoint ranges). When L0 reaches 1/10th (say) the size of Ln, L0 is compacted into Ln. Such a mixed approach could have lower write amplification than LCS, but because most of the data is in a run in Ln, it would have the same low space and read amplification as LCS. None of the mentioned databases (Scylla, Cassandra, or RocksDB) supports such a "mixed" compaction, as far as I know.
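To see why the intermediate levels help so much, here is a rough model in Python comparing the two-level scheme with the cascade (the sizes and fanout are assumptions for illustration, not measurements from any real cluster):

# All sizes in MB.
memtable = 100          # one flushed L0 sstable
total    = 1_000_000    # data already sitting in the last level Ln (~1TB)
fanout   = 10

# Two levels only: a freshly flushed memtable holds a random mix of keys, so
# it usually intersects every sstable in Ln and the whole level is rewritten
# on every flush.
wa_two_level = total // memtable

# Cascade L0..Ln: a byte is rewritten roughly `fanout` times per level it
# passes through, and the number of levels only grows logarithmically.
levels, size = 0, memtable
while size < total:
    size *= fanout
    levels += 1
wa_cascade = fanout * levels

print("two-level write amplification ~", wa_two_level)   # ~10000
print("cascade write amplification   ~", wa_cascade)     # ~40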

LeveledCompactionStrategy: what is the impact of tuning the sstable_size_in_mb?

To enhance read performance, I try to have fewer underlying SSTables with LCS, so I set sstable_size_in_mb to 1280MB as suggested by some articles, which pointed out that the 160MB default value was picked by the Cassandra core team a long time ago, on a server that is old by now with only 2GB of RAM. However, my concern is about the implications of having a higher value of sstable_size_in_mb.
What I understand is that LCS regularly compacts all the SSTables in L0 together with all the SSTables in L1, replacing the entire content of L1. So each time L1 is replaced, the CPU/RAM requirements and write amplification may be higher with a higher value of sstable_size_in_mb. Indeed, if sstable_size_in_mb = 1280MB, then 10 tables of 1280MB in L1 have to be merged each time with all the L0 tables. And maybe there are also implications at higher levels, even if the number of SSTables to replace seems lower (one L1 SSTable is merged with 10 L2 SSTables, then those 10 L2 SSTables are replaced).
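To put numbers on that concern, here is a small Python sketch (assuming the usual fanout of 10 and roughly 2TB of LCS data for the table on the node; the figures are made up for illustration and actual level sizing depends on the Cassandra version):

# Per-level sstable counts for two values of sstable_size_in_mb, assuming
# L1 holds 10 sstables' worth of data, L2 100, and so on, with the highest
# level only partially filled.
DATA_MB = 2 * 1024 * 1024          # ~2TB in the table
FANOUT = 10

def levels(sstable_size_mb):
    out, level, level_size, remaining = [], 1, FANOUT * sstable_size_mb, DATA_MB
    while remaining > 0:
        in_level = min(remaining, level_size)
        out.append((level, in_level // sstable_size_mb))
        remaining -= in_level
        level += 1
        level_size *= FANOUT
    return out

for size in (160, 1280):
    print("sstable_size_in_mb =", size)
    for level, count in levels(size):
        print("  L%d: ~%d sstables" % (level, count))
# With 160MB you end up with ~13000 sstables spread over 5 levels; with
# 1280MB it's ~1600 sstables over 4 levels, but each L0+L1 compaction now
# rewrites ~10 x 1280MB instead of ~10 x 160MB.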
Questions:
Having a higher value of sstable_size_in_mb may increase read performance by lowering the number of SSTables involved in a CQL table. However, what are the other implications of such a higher value (like 1280MB) for sstable_size_in_mb?
If using a higher value, are there any corresponding configurations to tune (Garbage Collector, chunk cache, ...) to allow better performance for compactions of those larger SSTables, and less GC activity?
A more subjective question: what is the typical value of sstable_size_in_mb you use in your deployments?
To answer your first question, I'd like to quote some original text from Jonathan Ellis in CASSANDRA-5727 when the community initially looked into the sstable_size_in_mb (and subsequently decided on the 160 number).
"larger files mean that each level contains more data, so reads will have to touch less sstables, but we're also compacting less unchanged data when we merge forward." (Note: I suspect there was a typo and he meant "we're also compacting more unchanged data when we merge forward", which aligns with what you stated in your second paragraph, and what he meant by larger files impacting "compaction efficiency".)
As for any other implication: it might push the envelope on the LCS node density upper bound, as it would allow much higher density for the same number of SSTables per node.
To answer your second question, compaction does create a lot of churn in the heap, as it creates many short-lived objects from SSTables. Given the much bigger SSTables involved in compaction when you use the 1280MB size, you should pay attention to your gc.log and watch out for "Humongous Allocation" messages (if you use G1GC). If they turn out to happen a lot, you can increase the region size to avoid costly collections of humongous objects by using the -XX:G1HeapRegionSize option.
For your third question, as far as I know, many have used the 160MB default value for a long time, as we don't have a comprehensive analysis published on the impact/benefit from benchmarking larger SSTable size with modern hardware yet (I attempted to run some quick tests, but got busy with other things and didn't finish that effort, sorry). However, I do think if people are interested in achieving higher node density with LCS, this SSTable size is a parameter that's worth exploring.

Avoid sstable sizes growing into the sky w/STCS

Got a DSC 2.1.15 14-node cluster using STCS, which seems to be hovering around a stable number of sstables even as we insert more and more data, so we are currently starting to see sstable data files in excess of 1TB.
Reading this, we fear that having too-large file sizes might postpone compacting tombstones away and finally releasing space, as we'll have to wait for at least 4 similarly sized sstables to get created.
Every node currently has two data directories; we were hoping Cassandra would spread data across those dirs relatively equally, but as sstables are growing due to compaction, we fear ending up with larger and larger sstables, maybe primarily in one data dir.
How could we possibly control this better, maybe with LCS or...?
How do we determine a sweet spot for the number of sstables vs their sizes?
What affects the number of sstables, their sizes, and which data dir they get placed in?
Currently a few nodes are beginning to look skewed:
/dev/mapper/vg--blob1-lv--blob1 6.4T 3.3T 3.1T 52% /blob/1
/dev/mapper/vg--blob2-lv--blob2 6.6T 545G 6.1T 9% /blob/2
Could we stop a node, consolidate all of a keyspace's sstables (they seem uniquely named with an id/seq. # even though spread across two data dirs) into one data dir, expand the underlying volume, and restart the node again, thus avoiding running out of 'space' when only one data dir's FS gets filled?

What delays a tombstone purge when using LCS in Cassandra

In a C* 1.2.x cluster we have 7 keyspaces and each keyspace contains a column family that uses wide rows. The cf uses LCS. I am periodically doing deletes in the rows. Initially each row may contain at most 1 entry per day. Entries older than 3 months are deleted and at most 1 entry per week is kept. I have been running this for a few months but disk space isn't really reclaimed. I need to investigate why. For me it looks like the tombstones are not purged. Each keyspace has around 1300 sstable files (*-Data.db) and each file is around 130 MB in size (sstable_size_in_mb is 128). GC grace seconds is 864000 in each CF. tombstone_threshold is not specified, so it should default to 0.2. What should I look at to find out why disk space isn't reclaimed?
I've answered a similar question before on the cassandra mailing list here
To elaborate a bit further, it's crucial you understand the Levelled Compaction Strategy and leveldb in general (given normal write behavior)
To summarize the above:
The data store is organized as "levels". Each level is 10 times larger than the level under it. Files in level 0 have overlapping ranges. Files in higher levels do not have overlapping ranges within each level.
New writes are stored as new sstables entering level 0. Every once in a while all sstables in level 0 are "compacted" upwards into level 1 sstables, and these are then compacted upwards into level 2 sstables, etc.
Reads for a given key will perform ~N reads, N being the number of levels in your tree (which is a function of the total data set size). Level 0 sstables are all scanned (since there is no constraint that each has a non-overlapping range with its siblings). Level 1 and higher sstables however have non-overlapping ranges, so the DB knows which 1 exact sstable in level 1 covers the range of the key you're asking for, same for level 2, etc.
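A sketch of that read path in Python (hypothetical in-memory structures; real Cassandra also consults bloom filters and the key cache before touching an sstable on disk):

import bisect

def read(key, l0_sstables, levels):
    # l0_sstables: list of dicts. levels: list of levels, each a list of
    # (first_key, last_key, dict) tuples sorted by first_key.
    touched = 0
    # L0: ranges may overlap, so every sstable has to be considered.
    for sstable in l0_sstables:
        touched += 1
        if key in sstable:
            return sstable[key], touched
    # L1 and above: ranges are disjoint, so binary-search for the single
    # sstable whose range covers the key.
    for level in levels:
        firsts = [first for first, _, _ in level]
        i = bisect.bisect_right(firsts, key) - 1
        if i >= 0 and key <= level[i][1]:
            touched += 1
            if key in level[i][2]:
                return level[i][2][key], touched
    return None, touched

l0 = [{"zoe": 1}, {"bob": 30}]
l1 = [("aaa", "mmm", {"bob": 29}), ("nnn", "zzz", {"zoe": 0})]
print(read("bob", l0, [l1]))      # (30, 2): found after touching 2 L0 sstables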
The layout of your LCS tree in cassandra is stored in a json file that you can easily check - you can find it in the same directory as the sstables for the keyspace+ColumnFamily. Here's an example of one of my nodes (coupled with the jq tool + awk to summarize):
$ cat users.json | jq ".generations[].members|length" | awk '{print "Level", NR-1, ":", $0, "sstables"}'
Level 0 : 1 sstables
Level 1 : 10 sstables
Level 2 : 109 sstables
Level 3 : 1065 sstables
Level 4 : 2717 sstables
Level 5 : 0 sstables
Level 6 : 0 sstables
Level 7 : 0 sstables
As you've noted, the sstables are usually of equal size, so you can see that each level is roughly 10x the size of the previous one. I would expect the node above to satisfy the majority of read operations in ~5 sstable reads. Once I add enough data for Level 4 to reach 10000 sstables and Level 5 starts getting populated, my read latency will increase slightly as each read will incur 1 more sstable read to satisfy. (On a tangent, cassandra provides bucketed histograms for you to check all these stats.)
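If you don't have jq handy, the same summary can be produced with a few lines of Python (assuming the manifest has the same generations/members layout as above):

import json

with open("users.json") as f:
    manifest = json.load(f)

for level, generation in enumerate(manifest["generations"]):
    print("Level", level, ":", len(generation["members"]), "sstables")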
With the above out of the way, let's walk through some operations:
We issue a write ["bob"]["age"] = 30. This will enter level0. Usually soon after it'll be compacted to level1. Slowly then it'll spend time in each level but as more writes enter the system, it'll migrate upwards to the highest level N
We issue a read for ["bob"]["age"]. The DB can then check each level from lowest to highest - as soon as it finds the data it can return it. If it reaches the highest level and it hasn't found it, the data doesn't exist on this node. If at any level it finds a tombstone, it can return "not found" as the data has been deleted
We issue a delete ["bob"]["age"]. This will enter level 0 as a normal write with a special value, a "column tombstone". Usually soon after it'll be compacted to level 1. Slowly then it'll spend time in each level but as more writes enter the system, it'll migrate upwards to the highest level N. During each compaction, if the sstables being compacted together have a tombstone (such as in L1) and an actual value (such as "30" in an L2), the tombstone "swallows up" the value and effects the logical deletion at that level. The tombstone however cannot be discarded yet, and must persist till it has had the chance to compact against every level until the highest one is reached - this is the only way to ensure that if L2 has age=30, L3 has an older age=29, and L4 has an even older age=28, all of them will have the chance to be destroyed by the tombstone. Only when the tombstone reaches the highest level can it actually be discarded entirely.
We issue a read for ["bob"]["age"]. The DB can then check each level from lowest to highest - as soon as it finds the data it can return it. If it reaches the highest level and it hasn't found it, the data doesn't exist on this node. If at any level it finds a tombstone, it can return "not found" as the data has been deleted
We issue a delete ["bob"]. This will enter level0 as a normal write with a special value "row tombstone". It will follow the same logic as the above column-level tombstone, except if it collides with any existing data of any column under row "bob" it discards it.
We issue a read for ["bob"]["age"]. The DB can then check each level from lowest to highest - as soon as it finds the data it can return it. If it reaches the highest level and it hasn't found it, the data doesn't exist on this node. If at any level it finds a tombstone, it can return "not found" as the data has been deleted
I hope this answers your questions regarding why deletes in cassandra, especially with LCS, actually consume space instead of freeing up space (at least initially). The rows+columns the tombstones are attached to themselves have a size (which might actually be larger than the size of the value you're trying to delete if you have simple values).
The key point here is that they must traverse all the levels up to the highest level L before cassandra will actually discard them, and the primary driver of that bubbling up is the total write volume.
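Here is a toy model of that bubbling up in Python (the level contents are made up, and real LCS additionally has to wait out gc_grace_seconds before a tombstone may be dropped):

# A column tombstone for ("bob", "age") works its way up through the levels,
# shadowing older values as it goes, but can only be discarded at the top.
levels = {
    1: {},                        # the tombstone is compacted into L1 first
    2: {("bob", "age"): 30},
    3: {("bob", "age"): 29},
    4: {("bob", "age"): 28},      # highest level
}
HIGHEST = max(levels)
tombstone_key = ("bob", "age")

for level in sorted(levels):
    if tombstone_key in levels[level]:
        value = levels[level].pop(tombstone_key)
        print("level", level, ": tombstone swallows value", value)
    if level < HIGHEST:
        print("level", level, ": tombstone kept, must still compact into level", level + 1)
    else:
        print("level", level, ": highest level reached, tombstone can be discarded")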
I was hoping for magic sauce here.
We are going to do a JMX-triggered LCS -> STCS -> LCS in a rolling fashion through the cluster. The switching of compaction strategy forces LCS structured sstables to restructure and apply the tombstones (in our version of cassandra we can't force an LCS compact).
There are nodetool commands to force compactions between tables, but that might screw up LCS. There are also nodetool commands to reassign the level of sstables, but again, that might foobar LCS if you muck with its structure.
What really should probably happen is that row tombstones should be placed in a separate sstable type that can be independently processed against "data" sstables to get the purge to occur. The tombstone sstable <-> data sstable processing doesn't remove the tombstone sstable, just removes tombstones from the tombstone sstable that are no longer needed after the data sstable was processed/pared/pruned. Perhaps these can be classified as "PURGE" tombstones for large scale data removals as opposed to more ad-hoc "DELETE" tombstones that would be intermingled with data. But who knows when that would be added to Cassandra.
Thanks for the great explanation of LCS, @minaguib. I think the statement from Datastax is misleading, at least to me:
at most 10% of space will be wasted by obsolete rows.
It depends on how we define "obsolete rows". If "obsolete rows" are defined as ALL the rows which are supposed to be compacted away, then in your example these "obsolete rows" will be age=30, age=29 and age=28. We can end up wasting (N-1)/N of the space, as these "age" values can sit in different levels.

physical disk space management of cassandra

Recently I have been looking into Cassandra from our new project's perspective and learned a lot from this community and its wiki too. But I have not found anything about how updates are managed in Cassandra in terms of physical disk space management, though it seems to be very similar to record-delete management using compaction.
Suppose there are 100 records with 5 column values each. When all changes are flushed to disk, all records will be written adjacently; when a delete operation is done, it is marked in the memtable first, and the record is physically deleted after some time, as set in the configuration or when the memtable is full, and the compaction process then reclaims the space.
Now the question is this: on one side, being schemaless, there is no fixed number of columns at the beginning, but on the other side, when the compaction process takes place, does it put records adjacently on disk like a traditional RDBMS to speed up the read process? For an RDBMS this is easy because it allocates a fixed amount of space as per the declared column datatypes.
But how exactly does Cassandra place records on disk during the compaction process (both for updates and deletes) to speed up reads?
One more question related to compaction: when there are no delete queries, but there is an update query which updates an existing record with some variable-length data, or inserts an altogether new column, how does compaction make space available on disk between already-existing data rows?
Rows and columns are stored in sorted order in an SSTable. This allows a compaction of multiple SSTables to output a new, (sorted) SSTable, with only sequential disk IO. This new SSTable will be written to a new file in free space on the disks. This process doesn't depend on the number of rows or columns, just on them being stored in a sorted order. So yes, in all SSTables (even those resulting from compactions) rows and columns will be arranged in a sorted order on disk.
What's more, as you hint at in your question, updates are no different from inserts - they do not overwrite the value on disk, but instead get buffered in a Memtable, then get flushed into a new SSTable. When the new SSTable eventually gets compacted with the SSTable containing the original value, the newer value will annihilate the old one - i.e. the old value will not be outputted from the compaction. Timestamps are used to decide which value is newest.
Deletes are handled in the same fashion, effectively inserting an "anti-value", or tombstone. The limitation of this process is that it can require significant space overhead. Deletes are effectively 'lazy', so the space doesn't get freed until some time later. Also, while the output of the compaction can be the same size as the input, the old SSTables cannot be deleted until the new one is completed, so this can reduce disk utilisation to 50%.
In the system described above, new values for an existing key can be a different size from the existing value, without padding to some pre-determined length, as the new value does not get written over the old value on update, but to a new SSTable.
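A minimal sketch of that merge in Python (each sstable simplified to a mapping of key -> (timestamp, value); real sstables are partitioned, indexed, and carry per-cell timestamps and tombstones):

old_sstable = {"alice": (10, "engineer"), "bob": (10, "30"), "carol": (10, "teacher")}
new_sstable = {"bob": (20, "31"), "carol": (20, None)}   # an update and a delete (tombstone)

def compact(*sstables):
    merged = {}
    for key in sorted(set().union(*sstables)):            # one pass in sorted key order
        candidates = [s[key] for s in sstables if key in s]
        merged[key] = max(candidates, key=lambda cell: cell[0])   # newest timestamp wins
    return merged

for key, cell in compact(old_sstable, new_sstable).items():
    print(key, cell)
# alice (10, 'engineer'), bob (20, '31'), carol (20, None) - the old values are
# not written to the output; the tombstone is kept until it can be purged.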
