Manual Compaction in Cassandra

Is there any way to trigger the compaction process manually in Cassandra?
When does it happen automatically? What is the time period?
Also, how does the memtable threshold limit factor in?

Compaction in Cassandra happens automatically, but its frequency depends on the selected compaction strategy (the default is size-tiered compaction, which needs at least 4 SSTable files of similar size to trigger a compaction). Manual compaction is also supported via nodetool compact, but it's generally not recommended (if you do run it, at least use nodetool compact -s).
I strongly recommend watching the DS201 & DS210 courses on DataStax Academy and reading the DSE Architecture Guide (it's applicable to Cassandra as well).

If you want to start a compaction manually you can use the nodetool compact command; here is a link to the documentation:
http://cassandra.apache.org/doc/latest/tools/nodetool/compact.html
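For example, a minimal sketch (the keyspace and table names below are just placeholders for your own):
$ # Compact all SSTables of a single table on this node
$ nodetool compact my_keyspace my_table
$ # Monitor running compactions
$ nodetool compactionstats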
When compaction happens depends on your compaction strategy. You can check this link for details about compactions:
https://docs.datastax.com/en/archived/cassandra/3.0/cassandra/dml/dmlHowDataMaintain.html
I hope this helps!

Related

Does Cassandra track the number of deletions in SSTables to trigger a compaction?

I wonder whether Cassandra triggers a compaction (STCS or LCS) based on the number of deletions in SSTables? In LCS, as far as I know, Cassandra compacts SSTables to the next level only if a level is full. But the size of a deletion record is usually small. If only the SSTable size is considered to decide whether a level is full or not, it may take a long time for a tombstone to be reclaimed.
I know RocksDB triggers compaction using the number of deletions in SSTables. This helps to reduce tombstones.
Yes, Cassandra's compaction can be triggered by the number of deletions (a.k.a. tombstones).
Have a look at the common options for all the compaction strategies, and specifically this param:
tombstone_threshold
How much of the sstable should be tombstones for us to consider doing a single sstable compaction of that sstable.
See doc here: https://cassandra.apache.org/doc/latest/cassandra/operating/compaction/index.html
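As a sketch of how you could raise that threshold via cqlsh (the keyspace/table names and the 0.3 value are placeholders; note that ALTER TABLE ... WITH compaction replaces the whole compaction map, so the class must be stated again):
$ # Default tombstone_threshold is 0.2 (20% of the SSTable)
$ cqlsh -e "ALTER TABLE my_keyspace.my_table WITH compaction = {'class': 'SizeTieredCompactionStrategy', 'tombstone_threshold': '0.3'};"
Keep in mind that a single-SSTable tombstone compaction can only reclaim tombstones that are older than gc_grace_seconds and whose shadowed data does not live in other SSTables.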

Why is forcing major compaction on a table not ideal?

Consider a scenario where a table has partitions with thousands of deleted rows. When reading from the table, Cassandra has to scan over thousands of deleted rows before it gets to the live rows.
A common workaround is to manually run a compaction on a node to forcibly get rid of tombstones.
What are the downsides of forcing major compaction on a table (with nodetool compact) and what is the best practice recommendation?
Background
When forcing a major compaction on a table configured with the SizeTieredCompactionStrategy (STCS), all the SSTables on the node get compacted together into a single large SSTable. Due to its size, the resulting SSTable will likely never get compacted out since similar-sized SSTables are not available as compaction candidates. This creates additional issues for the nodes since tombstones do not get evicted and keep accumulating, affecting the cluster's performance.
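One way to observe this (the paths below assume a default install and are only placeholders) is to compare the SSTable count and data file sizes before and after the major compaction:
$ # On recent versions; older versions use nodetool cfstats
$ nodetool tablestats my_keyspace.my_table | grep "SSTable count"
$ # Default data directory; adjust to your data_file_directories setting
$ ls -lh /var/lib/cassandra/data/my_keyspace/my_table-*/*-Data.db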
Caveats
We understand that cluster administrators use major compaction as a way of evicting tombstones which have accumulated as a result of high-delete workloads which in most cases is due to an incorrect data model.
The recommendation in this post does NOT constitute a solution to the underlying issue users face. It should not be considered a long-term fix to the data model problem.
Recommendation
In Apache Cassandra 2.2, CASSANDRA-7272 introduced a huge improvement for tables using STCS: it splits the output of nodetool compact into multiple files which are 50%, then 25%, then 12.5% of the original table size, until the smallest chunk is 50MB.
When using major compaction as a last resort for evicting tombstones, use the --split-output (or shorthand -s) to take advantage of this new feature:
$ nodetool compact --split-output -- <keyspace> <table>
NOTE - This feature is only available in Cassandra 2.2 and newer versions.
Also see How to split large SSTables on another server. Cheers!

Is it recommended to do periodic Cassandra repair?

We recently had a disk fail in one of our Cassandra nodes (it's a 5-node Cassandra 2.2 cluster with a replication factor of 3). It took about a week or more to perform a full repair on that node. Each node contains 3/5 of the data, and doing nodetool repair repaired 3/5 of the token ranges across all nodes. Now that it's been repaired, it will most likely repair faster since it did an incremental repair. I am wondering if it's a good idea to perform periodic repairs on all nodes using nodetool repair -pr (we are on 2.2 and I think incremental repair is the default in 2.2).
I think it's a good idea because, if performed periodically, it will take less time to repair, as it only needs to repair unrepaired SSTables. We also might've had instances where nodes were down for more than the hinted handoff window and we probably didn't do anything about it.
Yes, it's good practice to run scheduled incremental repair. Run repair frequently enough that every node is repaired before reaching the time specified in the gc_grace_seconds setting.
Also, it is a good idea to run incremental repair frequently, combined with a full repair less frequently, e.g. once per week/month. Incremental repair only repairs the SSTables which were not marked as repaired before, but full repair covers more comprehensive cases like SSTable rot. Check the reference from DataStax: https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsRepairNodesWhen.html
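As an illustration only (the schedule, keyspace name, and log path below are assumptions; stagger the start time per node and keep the interval well within gc_grace_seconds), a cron entry for the periodic incremental repair might look like:
# m h dom mon dow  command
0 2 * * 0  nodetool repair -pr my_keyspace >> /var/log/cassandra/repair.log 2>&1
On 2.2+, where incremental repair is the default, you can add the -full option for the occasional full repair.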

Do we need to run manual compaction with the Leveled compaction strategy and SizeTiered compaction strategy?

We have a couple of tables with Leveled compaction strategy and SizeTiered compaction strategy. How often do we need to run compaction? Thanks in advance
TL;DR
Compaction runs on its own (as long as you did not disable autocompaction in the yaml).
Compaction - what is it?
Per the Cassandra write path, we flush memtables to disk periodically into SSTables (sorted string tables), which are immutable. When you update an existing cell, it eventually gets written to an SSTable, possibly a different one than the original record. When we read, sometimes C* has to scan across various SSTables (with some optimizations, see bloom filters) to find the latest version of a cell. In Cassandra, last write wins.
Compaction takes sstables and compacts them together removing duplicate data, to optimize reads. This is an automatic operation, though you can tune compactions to run more or less often.
Some useful details on Compaction
Size-tiered compaction is the default compaction strategy in Cassandra; it looks for SSTables that are the same size and compacts them together when it finds enough (4 by default). Size-tiered is less IO intensive than leveled and will generally work better when you have smaller boxes and rotational drives.
Leveled compaction is optimized for reads; when you have read-heavy workloads or tight read SLAs with lots of updates, leveled may make sense. Leveled compaction is more IO and CPU intensive because you are spending more cycles optimizing for reads, but the reads themselves should be faster and hit fewer SSTables. Keep an eye on I/O wait and on pending compactions in nodetool compactionstats when you first enable it or if your workload grows.
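For reference, a sketch of setting the strategy per table via cqlsh (names and values are placeholders; sstable_size_in_mb and min_threshold are optional and shown only to illustrate the knobs):
$ # Leveled compaction for a read-heavy table
$ cqlsh -e "ALTER TABLE my_keyspace.my_table WITH compaction = {'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': '160'};"
$ # Size-tiered compaction (the default); compacts once min_threshold similar-sized SSTables exist
$ cqlsh -e "ALTER TABLE my_keyspace.my_table WITH compaction = {'class': 'SizeTieredCompactionStrategy', 'min_threshold': '4'};"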
Compaction Tunables / Levers
Multithreaded compaction - turn it off; the overhead is bigger than the benefit, to the point where it's been removed in C* 2.1.
Concurrent compactors - now defaults to 2; it used to default to the number of cores, which is a bad default. If you're on the 2.0 branch and not running the latest DSE, check this default and consider decreasing it to 2. This is the number of simultaneous compaction tasks you can run (on different column families).
Compaction throttling - a way of limiting the amount of I/O that compactions take up (compaction_throughput_mb_per_sec in cassandra.yaml). You can tune this on the fly with nodetool getcompactionthroughput and nodetool setcompactionthroughput. You want to tune it to a point where you are not accumulating pending tasks. 0 --> unlimited. Unlimited is, unintuitively, not usually the fastest setting, as the system may get bogged down.
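A quick sketch of the relevant nodetool commands (the keyspace/table names and the 32 MB/s value are placeholders):
$ nodetool getcompactionthroughput                       # current throttle in MB/s
$ nodetool setcompactionthroughput 32                    # limit compaction I/O to 32 MB/s
$ nodetool setcompactionthroughput 0                     # 0 = unthrottled
$ nodetool getcompactionthreshold my_keyspace my_table   # min/max SSTables before STCS compacts
$ nodetool compactionstats                               # pending and running compactions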

Leveled Compaction Strategy with low disk space

We have Cassandra 1.1.1 servers with Leveled Compaction Strategy.
The system performs read and delete operations. Every half a year we delete approximately half of the data while new data comes in. Sometimes disk usage goes up to 75% while we know that the real data takes about 40-50%; the other space is occupied by tombstones. To avoid disk overflow we force compaction of our tables by dropping all SSTables to Level 0. For that we remove the .json manifest file and restart the Cassandra node. (The gc_grace option does not help since compaction starts only after a level is filled.)
Starting from Cassandra 2.0 the manifest file was moved to sstable file itself: https://issues.apache.org/jira/browse/CASSANDRA-4872
We are considering migrating to Cassandra 2.x, but we are afraid we won't be able to force leveled compaction this way any more.
My question is: how could we give our table a disk space limit, e.g. 150 GB, so that compaction is triggered automatically when the limit is exceeded? The question is mostly about Cassandra 2.x, though alternative solutions for Cassandra 1.1.1 are also welcome.
It seems like I've found the answers myself.
Starting from the 2.x versions there is a tool, sstablelevelreset, which does a similar level reset as deleting the manifest file. The tool is located in the tools directory of the Cassandra distribution, e.g. apache-cassandra-2.1.2/tools/bin/sstablelevelreset.
Starting from Cassandra 1.2 (https://issues.apache.org/jira/browse/CASSANDRA-4234) there is tombstone removal support for the Leveled Compaction Strategy via the tombstone_threshold option. It lets you set the maximum ratio of tombstones allowed in an SSTable before a single-SSTable compaction is considered.
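A hedged sketch of using the tool (the service commands are assumptions for a package install, and the keyspace/table names are placeholders; the node must be stopped before running sstablelevelreset):
$ nodetool drain                # flush memtables and stop accepting writes
$ sudo service cassandra stop
$ tools/bin/sstablelevelreset --really-reset my_keyspace my_table
$ sudo service cassandra start  # LCS will recompact the table back up from level 0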
