How to remove tombstones with a timestamp in the future - cassandra

How can tombstones that were inserted with a timestamp set in the very distant future be removed?
Context:
There was a bug in some code that inserted entries with a timestamp in the distant future. The objects were manually deleted by creating a tombstone even further in the future.
Now inserts that use the same partition key are shadowed by the future tombstone entry, so reads never see them. The tombstones are not removed during compaction because of their timestamp. How can these future tombstones be removed so that entries with the same partition keys can be inserted correctly?

I reproduced the same issue by setting a client machine's clock to +30 minutes and running a delete on a partition from that client. This created a tombstone with a future timestamp. All my updates from the database server with the correct time were then ignored, because Cassandra only keeps the cell with the most recent timestamp.
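To confirm that cells carry a future timestamp, you can inspect write times with CQL's WRITETIME function (a minimal sketch; the keyspace, table and column names are placeholders):

SELECT key, WRITETIME(value) FROM my_keyspace.my_table WHERE key = 'k1';

WRITETIME returns microseconds since the Unix epoch, so a value far beyond the current time confirms the problem. The tombstones themselves are not visible through CQL; inspecting them requires sstable tooling such as sstabledump (Cassandra 3.x).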
I did these steps to clear tombstones on a table with this issue
1) Set table property gc_grace_seconds to 0
2) nodetool flush keyspace table
3) nodetool compact keyspace table
I have just one node. If you have multiple nodes, run the flush and compact steps on every node; the table property only needs to be set once.
NOTE: Doing this will clear all tombstones on the table, not just the future-dated ones, and a major compaction is resource intensive.
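As a concrete sketch of those steps (keyspace and table names are placeholders; the last statement restores the default gc_grace_seconds of 864000 once the compaction has finished):

ALTER TABLE my_keyspace.my_table WITH gc_grace_seconds = 0;
nodetool flush my_keyspace my_table
nodetool compact my_keyspace my_table
ALTER TABLE my_keyspace.my_table WITH gc_grace_seconds = 864000;

Leaving gc_grace_seconds at 0 on a multi-node cluster is risky: a replica that missed a delete can resurrect the data.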

Related

Which one is better to use TTL or Delete in Cassandra?

I want to remove records from Cassandra cluster after a particular time.
So what should I use: TTL or manual delete?
The answer is "it depends". Deleting data in Cassandra is never free.
If you use DELETE you always have to issue those queries yourself; with TTL the expiry is set the moment you write the data. On the other hand, DELETE gives you more control over when data is removed.
On the operations side, you should try to get your tombstones into the same sstable, so that once gc_grace_seconds has expired the whole sstable can be dropped. Data is only actually deleted when sstables are compacted; even if gc_grace_seconds has passed, if no compaction has touched the sstable holding the tombstone, the tombstone stays on disk. This also makes the choice of compaction strategy for your table relevant.
If you generate a lot of tombstones, you should also enable unchecked_tombstone_compaction at the table level. You can read more about that here: https://docs.datastax.com/en/cql/3.1/cql/cql_reference/compactSubprop.html
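To make the contrast concrete, a sketch (the events table and its columns are hypothetical):

-- a manual delete is a separate statement issued later, creating a tombstone at that point
DELETE FROM my_keyspace.events WHERE id = 42;
-- with TTL the expiry is decided at write time, no later statement needed
INSERT INTO my_keyspace.events (id, payload) VALUES (42, 'hello') USING TTL 86400;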
It depends on your data model. The fortunate answer is that, due to their predictable nature, you can build your data model to accommodate TTLs.
Let's say I build the following table to track user requests to a REST service, for example. Suppose that I really only care about the last week's worth of data, so I'll set a TTL of 604800 seconds (7 days). So the query I need to support is basically this (querying transactions for user 'Bob' for the prior 7 days):
SELECT * FROM rest_transactions_by_user
WHERE username='Bob' AND transaction_time > '2018-05-28 13:41';
To support that query, I'll build this table:
CREATE TABLE rest_transactions_by_user (
    username TEXT,
    transaction_time TIMESTAMP,
    service_name TEXT,
    HTTP_result BIGINT,
    PRIMARY KEY (username, transaction_time))
WITH CLUSTERING ORDER BY (transaction_time DESC)
AND gc_grace_seconds = 864000
AND default_time_to_live = 604800;
A few things to note:
I am leaving gc_grace_seconds at the default of 864000 (ten days). This will ensure that the TTL tombstones will have adequate time to be propagated throughout the cluster.
Rows will TTL at 7 days (as mentioned above). After that, they become tombstones for an additional 10 days.
I am clustering by transaction_time in DESCending order. This puts the rows I care about (the ones that haven't TTL'd) at the "top" of my partition (sequentially).
By querying for a transaction_time of the prior 7 days, I am ignoring anything older than that. As my TTL tombstones will exist for 10 days afterward, they will be at the "bottom" of my partition.
In this way, limiting my query to the last 7 days ensures that Cassandra will never have to deal with the tombstones, as my query will never find them. So in this case, I have built a data model where a TTL is "better" than a random delete.
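As a usage sketch (the row values are hypothetical), writes against this table need no explicit TTL, because default_time_to_live applies automatically:

INSERT INTO rest_transactions_by_user (username, transaction_time, service_name, HTTP_result)
VALUES ('Bob', '2018-06-04 13:41', 'get_account', 200);

An explicit USING TTL clause on an insert would override the table default for that row.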
Letting records expire based on TTL is better. With TTL-based deletion, you can set gc_grace_seconds to a much lower value (a day or two) and you do not have to worry about tombstones lingering for a long duration.
With manual deletes, you need to make sure the tombstone counts do not grow beyond the warning and error thresholds, as that impacts queries.
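For reference, those thresholds live in cassandra.yaml; the values below are the usual defaults, but verify them against your version:

# cassandra.yaml
tombstone_warn_threshold: 1000       # log a warning when a single read scans this many tombstones
tombstone_failure_threshold: 100000  # abort the read beyond this many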

Cassandra - What is the difference between TTL at table level and inserting data with TTL

I have a Cassandra 2.1 cluster where we insert data through Java with a TTL, as the requirement is to persist the data for 30 days.
But this causes a problem: files with old data and tombstones are kept on disk, so disk space is occupied by data which is no longer required. Repairs take a long time to clear this data (up to 3 days on a single node).
Is there a better way to delete the data?
I have come across this on datastax
Cassandra allows you to set a default_time_to_live property for an entire table. Columns and rows marked with regular TTLs are processed as described above; but when a record exceeds the table-level TTL, Cassandra deletes it immediately, without tombstoning or compaction. https://docs.datastax.com/en/cassandra/3.0/cassandra/dml/dmlAboutDeletes.html?hl=tombstone
Will the data be deleted more efficiently if I set the TTL at table level instead of setting it on each insert?
Also, the documentation is for Cassandra 3, so will I have to upgrade to a newer version to get any benefits?
Setting default_time_to_live applies the default TTL to all rows and columns in your table, and if no individual TTL is set (and Cassandra has correct NTP time on all nodes), Cassandra can easily drop that data safely.
But keep some things in mind: your application is still able to set a specific TTL for a single row in your table, in which case normal tombstone processing applies. On top of that, even when data is TTLed it won't get deleted immediately: sstables are still immutable, and the tombstones are only dropped during compaction.
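For example, a per-row TTL that overrides the table default, plus a way to inspect the remaining TTL of a column (table and column names are placeholders):

INSERT INTO my_keyspace.events (id, payload) VALUES (7, 'x') USING TTL 60;
SELECT TTL(payload) FROM my_keyspace.events WHERE id = 7;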
What could really help you a lot - just guessing - would be an appropriate compaction strategy:
http://docs.datastax.com/en/archived/cassandra/3.x/cassandra/dml/dmlHowDataMaintain.html#dmlHowDataMaintain__twcs-compaction
TimeWindowCompactionStrategy (TWCS)
Recommended for time series and expiring TTL workloads.
The TimeWindowCompactionStrategy (TWCS) is similar to DTCS with simpler settings. TWCS groups SSTables using a series of time windows. During compaction, TWCS applies STCS to uncompacted SSTables in the most recent time window. At the end of a time window, TWCS compacts all SSTables that fall into that time window into a single SSTable based on the SSTable maximum timestamp. Once the major compaction for a time window is completed, no further compaction of the data will ever occur. The process starts over with the SSTables written in the next time window.
This helps a lot when you choose your time windows correctly. All data in the last compacted sstable will have roughly equal TTL values (hint: don't do out-of-order inserts or manual per-row TTLs!). Cassandra keeps the youngest TTL value in the sstable metadata, and when that time has passed Cassandra simply deletes the entire sstable, as all the data in it is now obsolete. No compaction needed.
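A sketch of switching a table to TWCS (keyspace and table names are placeholders; the 1-day window is only an illustration and should be sized to your TTL, commonly so that a few dozen windows cover the whole retention period):

ALTER TABLE my_keyspace.my_table WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': '1'
};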
How do you run your repair? Incremental? Full? Reaper? How big is your cluster in terms of nodes and data?
The quick answer is yes. The way it is implemented is by deleting the SSTable(s) directly from disk. Deleting an SSTable without needing to compact it clears up disk space faster. But Cassandra needs to be sure that all the data in a specific sstable is "older" than the globally configured TTL for the table.
This is the feature referred to in the paragraph you quoted. It was implemented for Cassandra 2.0 so it should be part of 2.1

Check table size in cassandra historically

I have a Cassandra table (Cassandra version is 2.0) with terabytes of data; here is what the schema looks like:
"my_table" (
key ascii,
timestamp bigint,
value blob,
PRIMARY KEY ((key), timestamp)
)
I'd like to delete some data, but first I want to estimate how much disk space it will reclaim.
Unfortunately, stats from JMX metrics are only available for the last two weeks, so that's not very useful.
Is there any way to check how much space is used by certain set of data (for example where timestamp < 1000)?
I was wondering also if there is a way to check query result set size, so that I can do something like select * from my_table where timestamp < 1000 and see how many bytes the result occupies.
There is no mechanism to see the size on disk for a given slice of data. The data can be far removed from the coordinator of the request, and there are layers that affect size, such as compression and multiple sstables, which would make this difficult.
Also be aware that issuing a delete will not immediately reduce disk space. C* does not delete data in place: sstables are immutable and cannot be changed. Instead it writes a tombstone entry that will disappear after gc_grace_seconds. When sstables are merged, the tombstone and the data it shadows combine into just the tombstone. Once gc_grace_seconds has passed, the tombstone is no longer copied during compaction.
The gc_grace period exists to prevent losing deletes in a distributed system: until a repair runs (it should be scheduled roughly weekly), there is no absolute guarantee that the delete has been seen by all replicas. If a replica has not seen the delete and you remove the tombstone, the data can come back.
No, not really.
Using sstablemetadata you can find tombstone drop times, and the minimum and maximum timestamps, in the mc-####-big-Data.db files.
Additionally, if you're low on HDD space, consider nodetool cleanup, nodetool clearsnapshot, and then finally nodetool repair.
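A sketch of those commands (paths, keyspace name and sstable generation are illustrative):

sstablemetadata /var/lib/cassandra/data/my_keyspace/my_table-*/mc-*-big-Data.db
nodetool cleanup my_keyspace
nodetool clearsnapshot my_keyspace
nodetool repair my_keyspace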

ttl in cassandra creating tombstones

I am only doing inserts into Cassandra. While inserting, only non-null values are written, to avoid tombstones, but a few records are inserted with a TTL. Doing select count(*) from the table then gives the following warning:
Read 76 live rows and 1324 tombstone cells for query SELECT * FROM xx.yy WHERE token(y) >= token(fc872571-1253-45a1-ada3-d6f5a96668e8) LIMIT 100 (see tombstone_warn_threshold)
Do TTL inserts lead to tombstones in Cassandra 3.7? How can the warning be mitigated?
There are no updates done, only inserts; some records without TTL, others with TTL.
From datastax documentation: https://docs.datastax.com/en/cql/3.1/cql/cql_using/use_expire_c.html
After the number of seconds since the column's creation exceeds the TTL value, TTL data is considered expired. Expired data is marked with a tombstone on the next read on the read path, but it remains for a maximum of gc_grace_seconds. After this amount of time, the tombstoned data is automatically removed during the normal compaction and repair processes.
These entries will be treated as tombstones until compaction or repair.
To add one more point on TTL and compaction: even after gc_grace_seconds has passed, tombstone compaction only kicks in depending on tombstone_compaction_interval and tombstone_threshold.
Previously we had read-timeout issues due to the high number of tombstones on tables with many records. Eventually we had to reduce tombstone_threshold as well as enable unchecked_tombstone_compaction to make the compaction process trigger more frequently.
You can refer to the below docs for more details
http://docs.datastax.com/en/cql/3.3/cql/cql_reference/cqlCreateTable.html?hl=unchecked_tombstone_compaction#tabProp__cqlTableGc_grace_seconds
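A sketch of tuning those subproperties (the strategy class and the values are illustrative, not recommendations):

ALTER TABLE xx.yy WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'tombstone_threshold': '0.1',
    'tombstone_compaction_interval': '86400',
    'unchecked_tombstone_compaction': 'true'
};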

When does Cassandra remove data after it has been deleted?

Recently I have been trying to familiarize myself with Cassandra but don't quite understand when data is removed from disk after it has been deleted. The use case I'm particularly interested in is expiring time series data with DTCS. As an example, consider the following table:
CREATE TABLE metrics (
    metric_id text,
    time timestamp,
    value double,
    PRIMARY KEY (metric_id, time)
) WITH CLUSTERING ORDER BY (time DESC) AND
    default_time_to_live = 86400 AND
    gc_grace_seconds = 3600 AND
    compaction = {
        'class': 'DateTieredCompactionStrategy',
        'timestamp_resolution': 'MICROSECONDS',
        'base_time_seconds': '3600',
        'max_sstable_age_days': '365',
        'min_threshold': '4'
    };
I understand that Cassandra will create a tombstone for all rows inserted into this table after 24 hours (86400 seconds). These tombstones will first be written to an in-memory Memtable and then flushed to disk as an SSTable when the Memtable reaches a certain size. My question is: when will the data that is now expired be removed from disk? Is it the next time the SSTable containing the data gets compacted? So, with DTCS and min_threshold set to four, we would wait until at least three other SSTables are in the same time window as the expired data, and then those SSTables will be compacted into a single SSTable. Is it during this compaction that the data will be removed? It seems to me that this would require Cassandra to maintain some metadata on which rows have been deleted, since the newer tombstones would likely not be in the older SSTables being compacted.
Alternatively, do the SSTables which contain the tombstones have to be compacted with the SSTables which contain the expired data for the data to be removed? It seems to me that this could result in Cassandra holding the expired data long after it has expired since it's waiting for the new tombstones to be compacted with the older expired data.
Finally, I was also unsure when the tombstones themselves are removed. I know Cassandra does not delete them until after gc_grace_seconds but it can't delete the tombstones until it's sure the expired data has been deleted right? Otherwise it would see the expired data as being valid. Consequently, it seems to me that the question of when tombstones are deleted is intimately tied to the questions above. Thanks!
If it helps I've been experimenting with version 2.0.15 myself.
There are two ways data is definitively removed in Cassandra.
1 : When gc_grace_seconds expires. In your table, gc_grace_seconds is set to 3600, which means that after you execute a delete statement on a row, you have to wait 3600 seconds before the data can be definitively removed from the whole cluster.
2 : When a compaction comes along. During a compaction, Cassandra looks at all the data marked with a tombstone and simply ignores it when writing the new SSTable, ensuring that the new SSTable doesn't contain already-deleted data.
However, a node might stay down for longer than gc_grace_seconds or miss a compaction; you'll find more information in the Cassandra documentation.
After some further research and help from others I've realized that I had some misconceptions in my original questions. Specifically: "Data deleted by TTL isn’t the same as issuing a delete – each expiring cell internally has a ttl/timestamp at which it will be converted into a tombstone. There is no tombstone added to the memtable, or flushed to disk – it just treats the expired cells as tombstones once they’re past that timestamp."
Furthermore, Cassandra will check whether it can drop SSTables containing only expired data whenever a memtable is flushed to disk and a minor compaction runs, though no more than once every ten minutes (see this issue). Hope that helps if you had the same questions as me!
