When does Cassandra remove data after it has been deleted? - cassandra

Recently I have been trying to familiarize myself with Cassandra but don't quite understand when data is removed from disk after it has been deleted. The use case I'm particularly interested in is expiring time series data with DTCS. As an example, consider the following table:
CREATE TABLE metrics (
    metric_id text,
    time timestamp,
    value double,
    PRIMARY KEY (metric_id, time)
) WITH CLUSTERING ORDER BY (time DESC) AND
    default_time_to_live = 86400 AND
    gc_grace_seconds = 3600 AND
    compaction = {
        'class': 'DateTieredCompactionStrategy',
        'timestamp_resolution': 'MICROSECONDS',
        'base_time_seconds': '3600',
        'max_sstable_age_days': '365',
        'min_threshold': '4'
    };
I understand that Cassandra will create a tombstone for all rows inserted into this table after 24 hours (86400 seconds). These tombstones will first be written to an in-memory Memtable and then flushed to disk as an SSTable when the Memtable reaches a certain size. My question is: when will the data that is now expired be removed from disk? Is it the next time the SSTable which contains the data gets compacted? So, with DTCS and min_threshold set to four, we would wait until at least three other SSTables are in the same time window as the expired data, and then those SSTables will be compacted into a single SSTable. Is it during this compaction that the data will be removed? It seems to me that this would require Cassandra to maintain some metadata on which rows have been deleted, since the newer tombstones would likely not be in the older SSTables that are being compacted.
Alternatively, do the SSTables which contain the tombstones have to be compacted with the SSTables which contain the expired data for the data to be removed? It seems to me that this could result in Cassandra holding the expired data long after it has expired since it's waiting for the new tombstones to be compacted with the older expired data.
Finally, I was also unsure when the tombstones themselves are removed. I know Cassandra does not delete them until after gc_grace_seconds but it can't delete the tombstones until it's sure the expired data has been deleted right? Otherwise it would see the expired data as being valid. Consequently, it seems to me that the question of when tombstones are deleted is intimately tied to the questions above. Thanks!
If it helps I've been experimenting with version 2.0.15 myself.

There are two ways data is definitively removed in Cassandra.
1: When gc_grace_seconds expires. In your table, gc_grace_seconds is set to 3600, which means that when you execute a delete statement on a row, you have to wait 3600 seconds before the data can be definitively removed from the whole cluster.
2: When a compaction runs. During a compaction, Cassandra looks for all the data marked with a tombstone and simply skips it when writing the new SSTable, so that the new SSTable doesn't contain already-deleted data.
However, it might happen that a node stays down longer than gc_grace_seconds or is down during a compaction; you'll find more information in the Cassandra documentation.
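As a concrete sketch of the above, an explicit delete against the metrics table from the question would look like this (the metric_id and time values are hypothetical):
-- Writes a row tombstone to the memtable; the covered data is only
-- dropped from disk when a later compaction merges this tombstone
-- with the SSTable(s) that hold the row, and the tombstone itself
-- can only be purged after gc_grace_seconds (3600 s here) has passed.
DELETE FROM metrics
WHERE metric_id = 'cpu.load' AND time = '2015-06-01 12:00:00';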

After some further research and help from others I've realized that I had some misconceptions in my original questions. Specifically: "Data deleted by TTL isn’t the same as issuing a delete – each expiring cell internally has a ttl/timestamp at which it will be converted into a tombstone. There is no tombstone added to the memtable, or flushed to disk – it just treats the expired cells as tombstones once they’re past that timestamp."
Furthermore, Cassandra will check whether it can drop SSTables containing only expired data when a memtable is flushed to disk and a minor compaction runs, though no more than once every ten minutes (see this issue). Hope that helps if you had the same questions as me!
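If you want to see the per-cell expiration state described above for yourself, cqlsh can show the remaining TTL and the write timestamp of a cell (the metric_id value here is just an example):
-- Shows how many seconds remain before each value cell expires and
-- the microsecond timestamp it was written with.
SELECT metric_id, time, value, TTL(value), WRITETIME(value)
FROM metrics
WHERE metric_id = 'cpu.load'
LIMIT 5;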

Related

How are Cassandra Tombstones deleted in old SSTables?

If I have compaction enabled, like SizeTieredCompaction, my SSTables get compacted until a certain size level is reached. When I "delete" an old entry which is in an SSTable partition that is quite old and won't be compacted again in the near future, when does the deletion actually take place?
Imagine you delete 100 entries and all of them are part of a really old SSTable that has already been compacted several times, has no hot data and is already quite big. It will take ages until it's compacted again and the tombstones are removed, right?
When the tombstone is merged with the data in a compaction, the data will be deleted from disk. When that happens depends on the rate at which new data is being added and on your compaction strategy. The tombstones are not purged until after gc_grace_seconds, to prevent data resurrection (make sure repairs complete within this period of time).
If you overwrite or delete data a lot and are not OK with a lot of obsolete data on disk, you should probably use LeveledCompactionStrategy instead (I would recommend always defaulting to LCS if using SSDs). With STCS it can take a long time for the largest SSTables to get compacted; STCS is more for constantly appended data (like logs or events). If the entries expire over time and you rely heavily on TTLs, you will probably want to use TimeWindowCompactionStrategy instead.
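A sketch of that suggestion (the keyspace and table names are made up, and 160 MB is just the commonly used per-SSTable target, not something from the answer):
-- Switch an existing table from STCS to LCS so tombstones and
-- overwritten cells in older SSTables get merged out much sooner.
ALTER TABLE my_keyspace.events
WITH compaction = { 'class': 'LeveledCompactionStrategy',
                    'sstable_size_in_mb': '160' };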

Cassandra - What is difference between TTL at table and inserting data with TTL

I have a Cassandra 2.1 cluster where we insert data through Java with a TTL, as the requirement is to persist the data for 30 days.
But this causes problems, as the files with old data and tombstones are kept on disk. This results in disk space being occupied by data which is not required. Repairs take a lot of time to clear this data (up to 3 days on a single node).
Is there a better way to delete the data?
I have come across this on datastax
Cassandra allows you to set a default_time_to_live property for an entire table. Columns and rows marked with regular TTLs are processed as described above; but when a record exceeds the table-level TTL, Cassandra deletes it immediately, without tombstoning or compaction. https://docs.datastax.com/en/cassandra/3.0/cassandra/dml/dmlAboutDeletes.html?hl=tombstone
Will the data be deleted more efficiently if I set the TTL at the table level instead of setting it each time while inserting?
Also, documentation is for Cassandra 3, so will I have to upgrade to newer version to get any benefits?
Setting default_time_to_live applies the default TTL to all rows and columns in your table - and if no individual TTL is set (and Cassandra has correct NTP time on all nodes), Cassandra can safely drop that data.
But keep some things in mind: your application is still able to set a specific TTL for a single row in your table - then normal processing will apply. On top of that, even if the data is TTLed it won't get deleted immediately - SSTables are still immutable, but tombstones will be dropped during compaction.
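A small sketch of the difference (the sensor_data schema and values are hypothetical, chosen to match the 30-day requirement in the question):
-- Table-level default TTL of 30 days (2592000 seconds).
CREATE TABLE sensor_data (
    sensor_id text,
    ts        timestamp,
    reading   double,
    PRIMARY KEY (sensor_id, ts)
) WITH default_time_to_live = 2592000;

-- Uses the table default; eligible for the table-level TTL handling.
INSERT INTO sensor_data (sensor_id, ts, reading)
VALUES ('s1', '2017-01-01 00:00:00', 42.0);

-- Overrides the default with its own TTL (7 days); normal per-cell
-- expiration and tombstone processing applies to this row.
INSERT INTO sensor_data (sensor_id, ts, reading)
VALUES ('s1', '2017-01-01 00:01:00', 43.0)
USING TTL 604800;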
What could really help you a lot - just guessing - would be an appropriate compaction strategy:
http://docs.datastax.com/en/archived/cassandra/3.x/cassandra/dml/dmlHowDataMaintain.html#dmlHowDataMaintain__twcs-compaction
TimeWindowCompactionStrategy (TWCS)
Recommended for time series and expiring TTL workloads.
The TimeWindowCompactionStrategy (TWCS) is similar to DTCS with simpler settings. TWCS groups SSTables using a series of time windows. During compaction, TWCS applies STCS to uncompacted SSTables in the most recent time window. At the end of a time window, TWCS compacts all SSTables that fall into that time window into a single SSTable based on the SSTable maximum timestamp. Once the major compaction for a time window is completed, no further compaction of the data will ever occur. The process starts over with the SSTables written in the next time window.
This helps a lot - when you choose your time windows correctly. All data in the last compacted SSTable will have roughly equal TTL values (hint: don't do out-of-order inserts or manual TTLs!). Cassandra keeps the youngest TTL value in the SSTable metadata, and when that time has passed Cassandra simply deletes the entire SSTable, as all of its data is now obsolete. No need for compaction.
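A sketch of what such a configuration could look like for the 30-day case above (the keyspace/table names and the one-day window are assumptions, not recommendations from the answer):
-- Daily windows for data that expires after 30 days: each window's
-- final SSTable eventually contains only expired cells and can be
-- dropped as a whole.
ALTER TABLE my_keyspace.sensor_data
WITH compaction = { 'class': 'TimeWindowCompactionStrategy',
                    'compaction_window_unit': 'DAYS',
                    'compaction_window_size': '1' }
AND default_time_to_live = 2592000;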
How do you run your repair? Incremental? Full? Reaper? How big in terms of nodes and data is your cluster?
The quick answer is yes. The way it is implemented is by deleting the SSTable(s) directly from disk. Deleting an SSTable without the need to compact will clear up disk space faster. But you need to be sure that all the data in a specific SSTable is "older" than the globally configured TTL for the table.
This is the feature referred to in the paragraph you quoted. It was implemented for Cassandra 2.0, so it should be part of 2.1.

ttl in cassandra creating tombstones

I am only doing inserts into Cassandra. While inserting, only non-null values are inserted, to avoid tombstones. But a few records are inserted with a TTL. Doing a select count(*) from the table then gives the following warning -
Read 76 live rows and 1324 tombstone cells for query SELECT * FROM
xx.yy WHERE token(y) >=
token(fc872571-1253-45a1-ada3-d6f5a96668e8) LIMIT 100 (see
tombstone_warn_threshold)
Do TTL inserts lead to tombstones in Cassandra 3.7? How can the warning be mitigated?
There are no updates done, only inserts; some records without TTL, others with TTL.
From datastax documentation: https://docs.datastax.com/en/cql/3.1/cql/cql_using/use_expire_c.html
After the number of seconds since the column's creation exceeds the TTL value, TTL data is considered expired and is included in results. Expired data is marked with a tombstone on the next read on the read path, but it remains for a maximum of gc_grace_seconds. After this amount of time, the tombstoned data is automatically removed during the normal compaction and repair processes.
These entries will be treated as tombstones until compaction or repair.
To add one more point on TTL and compaction: even after gc_grace_seconds has passed, with the default settings a tombstone compaction only kicks off depending on tombstone_compaction_interval and tombstone_threshold.
Previously, we were having read timeout issues due to a high number of tombstones on tables with a high number of records. Eventually, we needed to reduce tombstone_threshold as well as enable unchecked_tombstone_compaction to make the compaction process trigger more frequently.
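A sketch of those settings (the keyspace/table names and the exact values are illustrative, not the ones used by the poster):
-- Lower tombstone_threshold and allow single-SSTable tombstone
-- compactions to run even when overlap checks would normally skip
-- them; tombstone_compaction_interval is the minimum age (seconds)
-- an SSTable must reach before such a compaction is considered.
ALTER TABLE my_keyspace.my_table
WITH compaction = { 'class': 'SizeTieredCompactionStrategy',
                    'tombstone_threshold': '0.1',
                    'tombstone_compaction_interval': '86400',
                    'unchecked_tombstone_compaction': 'true' };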
You can refer to the below docs for more details
http://docs.datastax.com/en/cql/3.3/cql/cql_reference/cqlCreateTable.html?hl=unchecked_tombstone_compaction#tabProp__cqlTableGc_grace_seconds

When does Cassandra remove data from an SSTable

In Cassandra 2.x when I delete one or multiple columns, they receive a tombstone in the Memtable but the data is not removed. At some point, the Memtable is flushed to an SSTable including the deleted data and the tombstone. When compaction is running, it will retain the tombstone with the specified grace period. What happens to the data? I have deleted a bunch of columns last week - less than gc_grace_seconds ago. I am not sure compaction has run yet. I haven't seen any change on disk size used yet, so I was wondering at which point is the data physically removed from disk?
In Cassandra 2.x when I delete one or multiple columns, they receive a tombstone in the Memtable but the data is not removed. At some point, the Memtable is flushed to an SSTable including the deleted data and the tombstone. When compaction is running, it will retain the tombstone with the specified grace period.
True.
What happens to the data?
The data will remain on disk at least for gc_grace_seconds. The next minor compaction after gc_grace_seconds may remove it, but the real timing depends mostly on your dataset and workload type.
I have deleted a bunch of columns last week - less than gc_grace_seconds ago. I am not sure compaction has run yet. I haven't seen any change on disk size used yet, so I was wondering at which point is the data physically removed from disk?
If you want to free some disk space, you can:
wait for gc_grace_seconds and the normal minor compactions, or
run nodetool compact, which will trigger a major compaction on the current node, freeing disk space right away.

Retrieving "tombstoned" records in Cassandra

My question is very simple. Is it in any way possible to retrieve columns that have been marked with a tombstone before the GCGraceSeconds period expires (default 10 days)? If yes, what would be the exact CQL query for that?
If I understand the deletion process correctly, the tombstones are marked in the Memtables, and the SSTables, being immutable, still hold the deleted data while waiting for compaction. So before compaction occurs, is there any way to read the tombstoned data from either the Memtable or the SSTable?
Using CQL 3.0 on CQLSH command prompt & Cassandra 2.0.
You are right, when a tombstone is inserted it usually doesn't immediately delete the underlying data (unless all your data is in a memtable). However, you can't control when it does. If you don't have much data and compaction happens quickly, the underlying data may be deleted very quickly, much sooner than 10 days.
There is no query to read deleted data, but you can inspect all your SSTables with sstable2json to see if they contain the deleted data.
Just to add on to the previous comment: keep a low value of gc_grace_seconds for the column families that have frequent deletions. It will take some time for GC, but the tombstones are expected to get cleared.
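For example, a sketch of lowering gc_grace_seconds on a delete-heavy table (the names and the 24-hour value are assumptions; make sure repairs still complete within whatever window you pick):
-- Tombstones on this table become purgeable after 24 hours instead
-- of the default 10 days; repairs must finish within this window to
-- avoid resurrecting deleted data.
ALTER TABLE my_keyspace.my_table WITH gc_grace_seconds = 86400;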
