I have been searching some docs online to get a good understanding of how to tackle large partitions in Cassandra.
I followed a document on the below link:
https://www.safaribooksonline.com/library/view/cassandra-high-performance/9781849515122/ch13s10.html.
Regarding "LARGE ROWS WITH COMPACTION LIMITS", below is metioned:
"The default value for in_memory_compaction_limit_in_mb is 64. This value is set in conf/cassandra.yaml. For use cases that have fixed columns, the limit should never be exceeded. Setting this value can work as a sanity check to ensure that processes are not inadvertently writing to many columns to the same key.
Keys with many columns can also be problematic when using the row cache because it requires the entire row to be stored in memory."
In the /conf/cassandra.yaml, I did find a configuration named "in_memory_compaction_limit_in_mb".
The definition in cassandra.yaml is as follows:
In Cassandra 2.0:
in_memory_compaction_limit_in_mb
(Default: 64) Size limit for rows being compacted in memory. Larger rows spill to disk and use a slower two-pass compaction process. When this occurs, a message is logged specifying the row key. The recommended value is 5 to 10 percent of the available Java heap size.
In Cassandra 3.0, in_memory_compaction_limit_in_mb no longer appears in cassandra.yaml; instead there is:
compaction_large_partition_warning_threshold_mb
(Default: 100) Cassandra logs a warning when compacting partitions larger than the set value.
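To put the 2.0 recommendation in numbers (illustrative only, assuming an 8 GB heap): 5 to 10 percent of 8 GB is roughly 400 to 800 MB, so a value such as 512 would be in that range.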
I have searched a lot for what exactly the in_memory_compaction_limit_in_mb setting does.
The documentation mentions that some compaction is done in memory and some compaction is done on disk.
As per my understanding, when the compaction process runs:
SSTables are read from disk ----> (comparing, removing tombstones, removing stale data) all happens in memory ----> a new SSTable is written to disk ----> the old SSTables are removed
These operations account for high disk space requirements and disk I/O (bandwidth).
Please correct me if my understanding of compaction is wrong. Is there anything in compaction that happens in memory?
In my environment, in_memory_compaction_limit_in_mb is set to 800.
I need to understand the purpose and implications.
Thanks in advance
in_memory_compaction_limit_in_mb is no longer necessary since the size doesn't need to be known before writing. There is no longer a two-pass compaction, so it can be ignored. You don't have to do the entire partition at once, just a row at a time.
Now the primary cost is in deserializing the large index at the beginning of the partition, which occurs in memory. You can increase column_index_size_in_kb to reduce the size of that index (at the cost of more IO during reads, but that is likely insignificant compared to the deserialization). Also, if you use a newer version (3.11+), the index is lazily loaded after it exceeds a certain size, which improves things quite a bit.
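For illustration, that change is a single line in cassandra.yaml (128 here is only an example value; the stock default is 64, and the right number depends on your read patterns):
column_index_size_in_kb: 128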
Related
I was going through Cassandra's SizeTieredCompactionStrategy and found out that it can sometimes temporarily require double the disk space of the dataset's largest table during the compaction process. But I didn't find any information about when this can happen. Does anyone know about this?
This requirement arises from the fact that the compaction process must have enough space to take all SSTables that are being compacted, read the data from them, and write the new SSTable to the same disk. In the worst case, if a table consists entirely of SSTables that need to be compacted, their total size is 50% of the available disk space, and no data can be thrown away, the compaction process will write a single SSTable equal in size to the input data. And if the input data occupies more than 50% of the disk space, compaction won't have enough room to write the new version.
In a real situation, you need enough space to compact the biggest SSTables of your biggest table, multiplied by the N compaction threads running at the same time. If you have many tables of similar size, this restriction is not so strong...
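A small illustrative calculation (made-up numbers): on a 1 TB data volume, if one STCS compaction picks SSTables totalling 400 GB and nothing is expired or deleted, the new SSTable will also be roughly 400 GB, so about 800 GB is needed while the compaction runs. If those input SSTables totalled 600 GB instead, the compaction could not complete for lack of space.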
Can anyone explain the use case and significance of memtable_flush_writers, and in what situations we should tune it from the default value? I have already read the DataStax docs, but the actual uses and benefits are not clear.
By default, memtable_cleanup_threshold is computed as: 1 / ( memtable_flush_writers + 1)
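For example (illustrative numbers only): with memtable_flush_writers set to 4, that default works out to 1 / (4 + 1) = 0.2, meaning a flush is triggered once the memtables use about 20% of the configured memtable space.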
There is some guidance in the YAML about how to set this value, as Mehul pointed out. Contrary to that, I would never set it to the number of cores, regardless of whether or not you're using SSDs.
The problems come when memtable_flush_writers is set too high: your node can become overwhelmed with small flushes that trigger compaction. This has the unfortunate side effect of causing your commitlog to fill up, eventually reaching a point where it cannot keep up with the flush frequency.
If that happens, you can force a flush manually using nodetool flush. But if you see your commitlog filling your disk, lowering memtable_flush_writers is a good thing to try.
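If you do lower it, the change is a single line in cassandra.yaml (the value here is purely illustrative; pick something appropriate for your hardware and, per the note below, change it incrementally):
memtable_flush_writers: 2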
Note: As with all "tuning"-like changes in Cassandra, I'd make incremental changes over time, as opposed to one drastic change, just to be on the safe side.
memtable_cleanup_threshold : When the total amount of memory used by all non-flushing memtables exceeds this ratio, Cassandra flushes the largest memtable to disk.
memtable_flush_writers : This defines the number of memtable flush writer threads. These threads write SSTables to disk in parallel. Changing this parameter is only suggested when solid-state drives (SSDs) are used.
Note: If your data directories are backed by SSDs, increase this setting to the number of cores.
I hope this solves your query.
The idea behind TimeWindowCompactionStrategy is that each SSTable holds records from only a particular time window, instead of records from different time windows getting mixed with each other.
Doesn't leveled compaction result in something similar? SSTables are compacted with other SSTables from the same level, which are all from the same time window (i.e. SSTables at higher levels are always older). This looks very similar to DateTieredCompactionStrategy, except that the SSTable size is determined by a maximum size in MB instead of a time window.
LeveledCS groups SSTables by size in a multilevel structure, while TimeWindowCS creates SSTables per time interval (thus it is a single-level structure) and has limits on the number of buckets, so tables using TWCS require a TTL on all rows.
You are correct about the difference between DTCS and LCS.
P.S. I recommend watching the slides from the presentation by the author of TWCS to get the reasoning behind it.
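For reference, a minimal sketch of a TWCS table definition in CQL (the table name, columns, and window settings are made up for illustration; adjust them to your workload):
CREATE TABLE sensor_data (
    sensor_id text,
    ts timestamp,
    value double,
    PRIMARY KEY (sensor_id, ts)
) WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': 1 }
  AND default_time_to_live = 604800;  -- 7 days in seconds; TWCS works best when every row expires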
For about a month I have been seeing the following values of used space for the 3 nodes (I have replication factor = 3) in my Cassandra cluster in the nodetool cfstats output:
Pending Tasks: 0
Column Family: BinaryData
SSTable count: 8145
Space used (live): 787858513883
Space used (total): 1060488819870
For other nodes I see good values, something like:
Space used (live): 780599901299
Space used (total): 780599901299
You can see a 25% difference (~254 GB) between Live and Total space. It seems I have a lot of garbage on these 3 nodes which cannot be compacted for some reason.
The column family I'm talking about has a LeveledCompaction strategy configured with an SSTable size of 100 MB:
create column family BinaryData with key_validation_class=UTF8Type
and compaction_strategy=LeveledCompactionStrategy
and compaction_strategy_options={sstable_size_in_mb: 100};
Note that the Total value has stayed like this for a month on all three of these nodes. I relied on Cassandra to normalize the data automatically.
What I tried in order to decrease the space used (without result):
nodetool cleanup
nodetool repair -pr
nodetool compact [KEYSPACE] BinaryData (nothing happens: major compaction is ignored for LeveledCompaction strategy)
Are there any other things I should try to clean up the garbage and free space?
OK, I have a solution. It looks like a Cassandra issue.
First, I went deep into the Cassandra 1.1.9 sources and noticed that Cassandra performs some re-analysis of SSTables during node startup. It removes the SSTables marked as compacted, recalculates used space, and does some other stuff.
So, what I did was restart the 3 problem nodes. The Total and Live values became equal immediately after the restart completed, then the compaction process started, and used space is now decreasing.
Leveled compaction creates sstables of a fixed, relatively small size,
in your case it is 100Mb that are grouped into “levels”. Within each
level, sstables are guaranteed to be non-overlapping. Each level is
ten times as large as the previous.
So basically, from this statement in the Cassandra docs, we can conclude that in your case the next level, ten times as large, may not have formed yet, which would result in no compaction.
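As a rough illustration with your 100 MB SSTable size (exact level sizing depends on the Cassandra version): L1 holds on the order of 10 × 100 MB ≈ 1 GB, L2 about 10 GB, L3 about 100 GB, so a level only starts compacting into the next one once enough data has accumulated to overflow it.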
Coming to the second point: since you have set the replication factor to 3, the data has 3 copies, which is why you see this on 3 nodes.
And finally, the 25% difference between Live and Total space is, as you know, due to delete operations.
For LeveledCompactionStrategy you want to set the SSTable size to a maximum of around 15 MB. 100 MB is going to cause you a lot of needless disk I/O, and it will take a long time for data to propagate to higher levels, making deleted data stick around for a long time.
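A hedged sketch of what that change might look like using the same cassandra-cli syntax as the schema above (please verify against your version; existing SSTables only pick up the new size as they are rewritten by later compactions):
update column family BinaryData with compaction_strategy_options={sstable_size_in_mb: 15};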
With a lot of deletes, you are most likely hitting some of the issues with minor compactions not doing a great job cleaning up deleted data in Cassandra 1.1. There are a bunch of fixes for tombstone cleanup during minor compaction in Cassandra 1.2. Especially when combined with LCS. I would take a look at testing Cassandra 1.2 in your Dev/QA environment. 1.2 does still have some kinks being ironed out, so you will want to make sure to keep up to date with installing new versions, or even running off of the 1.2 branch in git, but for your data size and usage pattern, I think it will give you some definite improvements.
Recently I began to study Cassandra. Please help me understand what effect these settings have (I need your interpretation; I have read the cassandra.yaml file):
memtable_flush_writers
memtable_flush_queue_size
thrift_framed_transport_size_in_mb
in_memory_compaction_limit_in_mb
sliced_buffer_size_in_kb
thrift_max_message_length_in_mb
binary_memtable_throughput_in_mb
column_index_size_in_kb
I know it's very late to answer, but I am answering as it might help someone else.
Most of the parameters you have mentioned above are related to the Cassandra write path.
memtable_flush_writers :
Sets the number of memtable flush writer threads. These threads are blocked by disk I/O, and each one holds a memtable in memory while blocked. If your data directories are backed by SSDs, increase this setting to the number of cores.
memtable_flush_queue_size :
The number of full memtables allowed to be pending flush (memtables waiting for a flush writer thread). At a minimum, set this to the maximum number of secondary indexes created on a single table.
in_memory_compaction_limit_in_mb : Size limit for rows being compacted in memory. Larger rows spill to disk and use a slower two-pass compaction process. When this occurs, a message is logged specifying the row key. The recommended value is 5 to 10 percent of the available Java heap size.
thrift_framed_transport_size_in_mb : Frame size (maximum field length) for Thrift. The frame is the row or part of the row that the application is inserting.
thrift_max_message_length_in_mb: The maximum length of a Thrift message in megabytes, including all fields and internal Thrift overhead (1 byte of overhead for each frame). Message length is usually used in conjunction with batches. A frame length greater than or equal to 24 accommodates a batch with four inserts, each of which is 24 bytes. The required message length is greater than or equal to 24+24+24+24+4 (number of frames).
You can find more details in the DataStax Cassandra documentation.
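To tie these together, here is a sketch of how these settings appear in an older (1.x-era) cassandra.yaml; the values shown are the commonly cited defaults of that era, so double-check them against your own version's file:
memtable_flush_writers: 1
memtable_flush_queue_size: 4
in_memory_compaction_limit_in_mb: 64
thrift_framed_transport_size_in_mb: 15
thrift_max_message_length_in_mb: 16
column_index_size_in_kb: 64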