What is SSTable overlap in TWCS in Cassandra?

I am trying to understand SSTable overlaps in Cassandra, which make data unsuitable for TWCS. I found references like https://thelastpickle.com/blog/2016/12/08/TWCS-part1.html but I still don't understand what overlap means and how it is caused by read repairs. Can anyone please provide a simple example that would help me understand? Thanks

For TWCS, data is compacted into "time windows". If you've configured a time window of 1 hour, TWCS will compact (combine) all partitions written within a one-hour window into a single SSTable. Over a 24-hour period you will end up with 24 SSTables, one for each hour of the day.
Let's say you inspect the SSTable generated at 9am. The minimum and maximum [write] timestamps in that SSTable would be between 8am and 9am.
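If you want to check this on your own cluster, the sstablemetadata tool prints each SSTable's minimum and maximum write timestamps. A quick sketch (the data file path is hypothetical):

    # Inspect an SSTable's min/max write timestamps (data file path is hypothetical)
    sstablemetadata /var/lib/cassandra/data/my_ks/my_table-*/*-big-Data.db | grep -i timestamp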
Now consider a scenario where a replica has missed a few mutations (writes) around 10am. All the writes between 10am and 11am will get compacted into one SSTable. If a repair runs at 3pm, the missed mutations from earlier in the day will get included in the 3pm to 4pm time window, even though they really belong in the SSTable from the 10-11am time window.
In TWCS, SSTables from different time windows are never compacted together. This means that the data from the two time windows stays fragmented across two SSTables. Even when everything in the 10-11am SSTable has expired, it cannot be dropped (deleted) from disk because data in the 3-4pm SSTable overlaps with it. The 10-11am SSTable will not get dropped until all the data in the 3-4pm SSTable has also expired.
There's a simplified explanation of how TWCS works in "How data is maintained in Cassandra". It includes a nice diagram which will hopefully make it easier for you to visualise how data can overlap across SSTables. Cheers!
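For reference, here is a minimal sketch of a table configured with a 1-hour TWCS window as in the example above (keyspace, table, and column names are hypothetical; the 24-hour TTL is shown so that whole SSTables can eventually expire, as described above):

    CREATE TABLE my_ks.sensor_data (
        sensor_id    text,
        reading_time timestamp,
        value        double,
        PRIMARY KEY (sensor_id, reading_time)
    ) WITH default_time_to_live = 86400          -- expire rows after 24 hours
      AND compaction = {
        'class': 'TimeWindowCompactionStrategy',
        'compaction_window_unit': 'HOURS',       -- bucket SSTables by hour
        'compaction_window_size': 1
      };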

Related

TTL tombstones in Cassandra using LCS: are tombstones created in the same level as the TTLed data?

I'm using LCS and a relatively large TTL of 2 years for all inserted rows and I'm concerned about the moment at which C* would drop the corresponding tombstones (neither explicit deletes nor updates are being performed).
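For concreteness, a minimal sketch of a table like the one described, with LCS and a 2-year default TTL (keyspace, table, and column names are hypothetical):

    CREATE TABLE my_ks.events (
        id      uuid,
        ts      timestamp,
        payload text,
        PRIMARY KEY (id, ts)
    ) WITH default_time_to_live = 63072000   -- 2 years = 2 * 365 * 86400 seconds
      AND compaction = {'class': 'LeveledCompactionStrategy'};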
From Missing Manual for Leveled Compaction Strategy, Tombstone Compactions in Cassandra and Deletes Without Tombstones or TTLs I understand that:
All levels except L0 contain non-overlapping SSTables, but a partition key may be present in one SSTable in each level (aka distributed in all levels).
For a compaction to be able to drop a tombstone, it must be sure that it is compacting all the SSTables that contain the data, to prevent zombie data (this is done by checking bloom filters). It also considers gc_grace_seconds.
So, for my particular use case (2-year TTL and a write-heavy load) I can conclude that TTLed data will be in the highest levels, so I'm wondering when those SSTables with TTLed data will be compacted with the SSTables that contain the corresponding tombstones.
The main question is: where are tombstones (from TTLs) created? Are they created at Level 0, so that it will take a long time until they end up in the highest levels (and hence disk space will take a long time to be freed)?
In a comment on About deletes and tombstones, Alain says that:
Yet using TTLs helps, it reduces the chances of having data being fragmented between SSTables that will not be compacted together any time soon. Using any compaction strategy, if the delete comes relatively late in the row history, as often happens, the 'upsert'/'insert' of the tombstone will go to a new SSTable. It might take time for this tombstone to get to the right compaction "bucket" (with the rest of the row) and for Cassandra to be able to finally free space.
My understanding is that with TTLs the tombstone is created in-place, thus it is often and for many reasons easier and safer to get rid of a TTL than of a delete.
Another clue to explore would be to use the TTL as a default value if that's a good fit. TTLs set at the table level with 'default_time_to_live' should not generate any tombstone at all in C*3.0+. Not tested on my end, but I read about this.
I'm not sure what is meant by "in-place", since SSTables are immutable.
(I also have some doubts about what it says of using default_time_to_live that I've asked in How default_time_to_live would delete rows without tombstones in Cassandra?).
My guess is that it refers to tombstones being created in the same level (but in different SSTables) as the TTLed data, during a compaction triggered by one of the following:
"Going from highest level, any level having score higher than 1.001 can be picked by a compaction thread" The Missing Manual for Leveled Compaction Strategy
"If we go 25 rounds without compacting in the highest level, we start bringing in sstables from that level into lower level compactions" The Missing Manual for Leveled Compaction Strategy
"When there are no other compactions to do, we trigger a single-sstable compaction if there is more than X% droppable tombstones in the sstable." CASSANDRA-7019
Since tombstones are created during compaction, I think Cassandra may be using SSTable metadata to estimate droppable tombstones.
So, compactions (2) and (3) should be creating/dropping tombstones in the highest levels, hence using LCS with a large TTL should not be an issue per se.
By creating/dropping I mean that the same kinds of compactions will be creating tombstones for expired data and/or dropping tombstones if the gc grace period has already passed.
A link to source code that clarifies this situation will be great, thanks.
Alain Rodriguez's answer from the mailing list:
Another clue to explore would be to use the TTL as a default value if that's a good fit. TTLs set at the table level with 'default_time_to_live' should not generate any tombstone at all in C*3.0+. Not tested on my end, but I read about this.
As explained on a parallel thread, this is wrong, mea culpa. I believe the rest of my comment still stands (hopefully :)).
I'm not sure what is meant by "in-place", since SSTables are immutable. My guess is that it refers to tombstones being created in the same level...
Yes, I believe during the next compaction following the expiration date, the entry is 'transformed' into a tombstone, and lives in the SSTable that is the result of the compaction, in the level/bucket this SSTable is put into. That's why I said 'in-place', which is indeed a bit weird for immutable data.
As a side idea for your problem, on 'modern' versions of Cassandra (I don't remember the version, that's what 'modern' means ;-)), you can run 'nodetool garbagecollect' regularly (not necessarily frequently) during the off-peak period. That might use the cluster resources when you don't need them to reclaim some disk space. Also making sure that a 2-year-old record is not being updated regularly by design would definitely help. In the extreme case of writing data once (never updated) and with a TTL, for example, I see no reason for 2-year-old data not to be evicted correctly. As long as the disk can grow, it should be fine.
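A minimal sketch of that suggestion, run on each node during off-peak hours (keyspace and table names are hypothetical):

    # Run single-SSTable compactions that drop droppable tombstones (recent Cassandra versions)
    nodetool garbagecollect my_ks my_table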
I would not be too scared about it, as there is 'always' a way to remove tombstones. Yet it's good to think about the design beforehand indeed; generally, it's good if you can rotate the partitions over time, not to reuse old partitions for example.

Tombstone in Cassandra

I have a Cassandra table with a TTL of 60 seconds, and I have a few questions about it:
1) I am getting the following warning:
Read 76 live rows and 1324 tombstone cells for query SELECT * FROM xx.yy WHERE token(y) >= token(fc872571-1253-45a1-ada3-d6f5a96668e8) LIMIT 100 (see tombstone_warn_threshold)
What does this mean?
2) As per my reading, a tombstone is a flag set in the case of TTL (the data will be deleted after gc_grace_seconds):
i) So does that mean the data won't be deleted for 10 days?
ii) What are the consequences of it waiting for 10 days?
iii) Why is it such a long time, 10 days?
https://docs.datastax.com/en/cql/3.1/cql/cql_reference/tabProp.html
gc_grace_seconds 864000 [10 days] The number of seconds after data is marked with a tombstone (deletion marker) before it is eligible for garbage-collection. Cassandra will not execute hints or batched mutations on a tombstoned record within its gc_grace_period. The default value allows a great deal of time for Cassandra to maximize consistency prior to deletion. For details about decreasing this value, see garbage collection below.
3) I read that performing compaction and repair using nodetool will delete the tombstones. How frequently do we need to run these in the background, and what are the consequences?
This means that your query returned 76 "live" or non-deleted/non-obsoleted rows of data, and that it had to sift through 1324 tombstones (deletion markers) to accomplish that.
In the world of distributed databases, deletes are hard. After all, if you delete a piece of data from one node, and you expect that deletion to happen on all of your nodes, how would you know if it worked? Quite literally, how do you replicate nothing? Tombstones (delete markers) are the answer to that question.
i. The data is gone (obsoleted, rather). The tombstone(s) will remain for gc_grace_seconds.
ii. The "consequence" is that you'll have to put up with those tombstone warning messages for that duration, or find a way to run your query without having to scan over the tombstones.
iii. The idea behind the 10 days is that if the tombstones are collected too early, your deleted data will "ghost" its way back up to some nodes. 10 days gives you enough time to run a weekly repair, which ensures your tombstones are properly replicated before removal.
Compaction removes tombstones. Repair replicates them. You should run repair once per week. While you can run compaction on-demand, don't. Cassandra has its own thresholds (based on number and size of SSTable files) to figure out when to run compaction, and it's best not to get in its way. If you do, you'll be manually running compaction from there on out, as you'll probably never reach the compaction conditions organically.
The consequences, are that both repair and compaction take compute resources, and can reduce a node's ability to serve requests. But they need to happen. You want them to happen. If compaction doesn't run, your SSTable files will grow in number and size; eventually causing rows to exist over multiple files, and queries for them will get slow. If repair doesn't run, your data is at risk of not being in-sync.
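A minimal sketch of scheduling that weekly repair with cron, run on each node (the keyspace name and schedule are hypothetical):

    # crontab entry: repair this node's primary ranges every Sunday at 02:00
    0 2 * * 0 nodetool repair -pr my_ks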

Cassandra: how to reduce the number of tombstones in a table? (tombstone_compaction_interval, gc_grace_seconds and LevelledCompactionStrategy)

I have a table where I insert data with a TTL of 1 minute, and I have a warning in DSE OpsCenter about the high number of tombstones in that table. Which does make sense, since on average 80 records per minute are inserted into this table.
So, for example, over a full day 80 * 60 * 24 = 115,200 records are inserted and TTL'ed.
My question is what should I do in order to decrease the number of tombstones in this table?
I've been looking into tombstone_compaction_interval and gc_grace_seconds, and this is where it gets a bit confusing, as I'm having trouble understanding the exact impact of these properties on the tombstones (even after reading the documentation provided by DataStax - http://docs.datastax.com/en/cql/3.1/cql/cql_reference/compactSubprop.html and http://docs.datastax.com/en/cql/3.1/cql/cql_reference/tabProp.html).
I've also been looking into LeveledCompactionStrategy (https://www.datastax.com/dev/blog/leveled-compaction-in-apache-cassandra), since it also seems to impact tombstone compaction, although I don't fully understand why.
So I'm hoping someone will be able to help me better understand how this all works, or even just let me know if I'm going in the right direction.
Please read this http://thelastpickle.com/blog/2016/07/27/about-deletes-and-tombstones.html. Very good read.
Overall: the gc_grace_seconds parameter is the minimal time that tombstones will be kept on disk after data has been deleted. We need to make sure that all the replicas received the delete and have all tombstones stored, to avoid zombie data issues. By default it's 10 days.
tombstone_compaction_interval: this property was introduced as part of CASSANDRA-4781 (https://issues.apache.org/jira/browse/CASSANDRA-4781), which addressed the case where the tombstone ratio was high enough to trigger a single-SSTable compaction, but tombstones were not evicted due to overlapping SSTables.
I am not sure about your current data model, but here are my suggestions.
You probably have to change your data model. Please read https://academy.datastax.com/resources/getting-started-time-series-data-modeling and Time series modelling (with start & end date) in cassandra.
Change your write pattern.
Change your read pattern. Try to read only active data. (As per your current data model, when you are reading, it goes through tombstone cells in order to reach the active cells.)
Try to use TimeWindowCompactionStrategy and tune it as per your workload. (http://thelastpickle.com/blog/2017/01/10/twcs-part2.html)
If you are using TTL while inserting (i.e. with the INSERT or UPDATE statement), see if you can set it at the table level instead.
If you are using STCS and want to change the compaction sub-properties, you could probably set unchecked_tombstone_compaction = true and min_threshold = 3 (a little bit aggressive), as sketched below.
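A minimal sketch of the last two suggestions combined (keyspace and table names are hypothetical; the TTL value matches the 1-minute expiry described in the question):

    ALTER TABLE my_ks.my_table
      WITH default_time_to_live = 60                    -- table-level TTL instead of per-INSERT
      AND compaction = {
        'class': 'SizeTieredCompactionStrategy',
        'unchecked_tombstone_compaction': 'true',       -- allow single-SSTable tombstone compactions
        'min_threshold': '3'                            -- compact once 3 similarly-sized SSTables exist
      };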

Delete-Upsert-Read Access Pattern in Cassandra

I use Cassandra to store trading information. Based on the queries needed, I designed my CF as below:
CREATE TABLE trades (
    trading_book text,
    trading_date timestamp,
    -- other trading info ...
    PRIMARY KEY (trading_book, trading_date)
);
I want to delete all the data on a given date in the following way:
collect all the trading books (which are stored somewhere else);
evenly distribute all the trading books across 20 threads;
in each thread, loop through the books, and
DELETE FROM trades WHERE trading_book = 'A_BOOK' AND trading_date = '2015-01-01';
There are about 1 million trades and the deletion takes 2 minutes to complete. I then insert the trading data for 2015-01-01 again (about 1 million trades) immediately after the deletion is done.
When the insertion is done and I re-read the data, I get the following error even with the query timeout set to 600 seconds:
ReadTimeout: code=1200 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'} info={'received_responses': None, 'required_responses': None, 'consistency': 'Not Set'}
It looks like there is some data inconsistency in the CF now, i.e. the coordinator can identify the partition, but there is no data on the partition?
Is there anything wrong with my access pattern? How to solve this problem?
Any hints will be highly appreciated! Thank you.
You are creating tombstones for every column on that date (by doing the deletes), then writing new records over the top. So now each read must first read the original column, then the tombstone, then the new record. If you do a trace you will see that tombstone reads are killing you. This kind of pattern is problematic with Cassandra, so you should try to find a different (immutable) way to do this. An alternative would be to simply overwrite the data, in which case there are no tombstones to reconcile, but you'll still have to deal with two versions.
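A minimal sketch of the overwrite approach: since writes in Cassandra are upserts, re-inserting a row with the same primary key shadows the old version without any DELETE, so no tombstones are created (the 'price' column is hypothetical, standing in for the elided trading-info columns):

    -- No prior DELETE: the newer write simply wins at read/compaction time
    INSERT INTO trades (trading_book, trading_date, price)  -- 'price' is a hypothetical column
    VALUES ('A_BOOK', '2015-01-01', 101.25);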
In addition to rs_atl's response (which hits the nail on the head with tombstones), here's a bit of info to help you understand / address the problem:
What are tombstones anyway?
Because SSTables are immutable, rather than deleting records in Cassandra we insert a new cell that essentially holds a null value. That's a tombstone. Tombstones become available for deletion or garbage collection after gc_grace_seconds (configurable per table).
Tombstones and repairs:
The reason we wait is to ensure that C* has time to propagate a tombstone to all replicas. If a tombstone does not get replicated to all replicas (in some edge cases with low-CL writes and flapping nodes, for example) and then gets removed / GC'ed, the original data that was deleted will come back to life. This is why we run repairs at least every gc_grace_seconds, ensuring tombstone consistency and preventing zombie data.
How many tombstones am I hitting?
If you turn on tracing in cqlsh (TRACING ON), or turn on probabilistic tracing in the yaml or via nodetool, you'll be able to see how many tombstones you are hitting for a particular request. As this number gets bigger, your read performance will decrease until you see the timeouts you mentioned.
nodetool cfstats also gives you more macro details (average tombstones per slice) of how many tombstones are in your table.
The sstablemetadata utility shows you the total number of tombstones in an SSTable.
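A quick sketch of those three diagnostics (keyspace, table, and file paths are hypothetical):

    -- in cqlsh: trace a query to see the tombstone cells it scans
    TRACING ON;
    SELECT * FROM my_ks.trades WHERE trading_book = 'A_BOOK';

    # from the shell: average tombstones per slice for the table
    nodetool cfstats my_ks.trades

    # per-SSTable tombstone counts (data file path is hypothetical)
    sstablemetadata /var/lib/cassandra/data/my_ks/trades-*/*-big-Data.db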
What can I do to get rid of tombstones?
1) If you're deleting everything in the table, TRUNCATE is a way of deleting data for free in C*, since you can expire entire SSTables.
2) Tombstones are removed by compaction. You can delete tombstones more aggressively by decreasing gc_grace_seconds and/or increasing the tombstone ratio for compaction, but make sure you're running your repairs or you may see zombie data. Both options are sketched below.
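A minimal sketch of both options (keyspace, table, and values are hypothetical; option 2 is only safe if repairs complete within the new gc_grace_seconds):

    -- Option 1: drop everything at once; no tombstones involved
    TRUNCATE my_ks.trades;

    -- Option 2: more aggressive tombstone eviction (hypothetical values)
    ALTER TABLE my_ks.trades
      WITH gc_grace_seconds = 86400    -- 1 day instead of the 10-day default
      AND compaction = {
        'class': 'SizeTieredCompactionStrategy',
        'tombstone_threshold': '0.2'   -- single-SSTable compaction at 20% droppable tombstones
      };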

Cassandra Tombstoning warning and failure thresholds breached

We are running a Titan Graph DB server backed by Cassandra as a persistent store and are running into an issue with reaching the limit on Cassandra tombstone thresholds that is causing our queries to fail / timeout periodically as data accumulates. It seems like the compaction is unable to keep up with the number of tombstones being added.
Our use case supports:
High read / write throughputs.
High sensitivity to reads.
Frequent updates to node values in Titan, causing rows to be updated in Cassandra.
Given the above use cases, we are already optimizing Cassandra to do the following aggressively (sketched in CQL after this list):
Aggressive compaction by using the leveled compaction strategy.
Setting tombstone_compaction_interval to 60 seconds.
Setting tombstone_threshold to 0.01.
Setting gc_grace_seconds to 1800.
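For reference, a sketch of those settings as table options (the keyspace name is hypothetical; the table name is taken from the log line below, and the values are the ones listed above):

    ALTER TABLE my_ks.graphindex
      WITH gc_grace_seconds = 1800
      AND compaction = {
        'class': 'LeveledCompactionStrategy',
        'tombstone_compaction_interval': '60',
        'tombstone_threshold': '0.01'
      };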
Despite these optimizations, we are still seeing warnings in the Cassandra logs similar to:
[WARN] (ReadStage:7510) org.apache.cassandra.db.filter.SliceQueryFilter: Read 0 live and 10350 tombstoned cells in .graphindex (see tombstone_warn_threshold). 8001 columns was requested, slices=[00-ff], delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
Occasionally, as time progresses, we also see the failure threshold breached, which causes errors.
Our cassandra.yaml file has the tombstone_warn_threshold set to 10000, and the tombstone_failure_threshold much higher than recommended at 250000, with no real noticeable benefit.
Any help that can point us to the correct configurations would be greatly appreciated if there is room for further optimizations. Thanks in advance for your time and help.
It sounds like the root of your problem is your data model. You've done everything you can to mitigate getting TombstoneOverwhelmingException. Since your data model requires such frequent updates, causing tombstone creation, an eventually consistent store like Cassandra may not be a good fit for your use case. When we've experienced these types of issues, we had to change our data model to fit better with Cassandra's strengths.
About deletes http://www.slideshare.net/planetcassandra/8-axel-liljencrantz-23204252 (slides 34-39)
Tombstones are not compacted away until the gc_grace_seconds configured on the table has elapsed for a given tombstone. So even if you increase your compaction interval, your tombstones will not be removed until gc_grace_seconds has elapsed, the default being 10 days. You could try tuning gc_grace_seconds down to a lower value and doing repairs more frequently (usually you want to schedule repairs to happen every gc_grace_seconds_in_days - 1 days).
So everyone here is right. If you repair and compact frequently, you can reduce your gc_grace_seconds number.
It may also be worth considering that inserting nulls is equivalent to a delete: it will increase your number of tombstones. Instead, you'll want to insert UNSET_VALUE if you're using prepared statements. Probably too late for you, but it may help anyone else who comes here.
The variables you've tuned are helping you expire tombstones, but it's worth noting that while tombstones cannot be purged before gc_grace_seconds, Cassandra makes no guarantee that tombstones WILL be purged at gc_grace_seconds. Indeed, tombstones are not compacted until the SSTable containing the tombstone is compacted, and even then, a tombstone will not be eliminated if another SSTable contains a cell that it shadows.
This results in tombstones potentially persisting for a very long time, especially if you're using SSTables that are infrequently compacted (say, very large STCS SSTables). To address this, tools exist such as the JMX endpoint forceUserDefinedCompaction - and if you're not adept at using JMX endpoints, tools exist to do this for you automatically, such as http://www.encql.com/purge-cassandra-tombstones/
