Too many tombstones in Cassandra

I have a table named 'holder' which has a single partition that receives 60K entries every hour.
I have another table named 'holderhistory' which uses the date as its partition key, so every day's records from the 'holder' table are copied to 'holderhistory'.
There is a job running in the application
i) which collects all the older entries from the holder table and copies them to the holderhistory table
ii) and then deletes the older entries from the holder table
NOW the issue is: too many tombstones are created in the holder table.
By default, tombstones are cleared after gc_grace_seconds, which is 10 days (864000 seconds).
But I don't want to keep tombstones for more than 3 hours, so:
1) Is it good to set gc_grace_seconds to 3 hours?
2) Or is it better to set default_time_to_live to 3 hours?
Which is the best solution for getting rid of the tombstones?
Also, what are the consequences of reducing gc_grace_seconds from 10 days to 3 hours? Where will it have an impact?
Any help is appreciated.

If you reduce gc_grace_seconds too low and the recovery time of any node is longer than gc_grace_seconds, then once one of these nodes comes back online, it will mistakenly think that all of the nodes that received the delete actually missed a write, and it will start "repairing" all of the other nodes. I would recommend using default_time_to_live and giving it a try.
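As a sketch, setting a 3-hour TTL (3 × 3600 = 10800 seconds) as the table default could look like this (the column names below are placeholders, not from your actual schema):

```cql
-- 3 hours = 10800 seconds
ALTER TABLE holder WITH default_time_to_live = 10800;

-- A TTL can also be set per insert, overriding the table default:
INSERT INTO holder (partitionkey, entrytime, payload)
VALUES ('p1', now(), 'data')
USING TTL 10800;
```

Note that TTL expiration itself still produces tombstones once the rows expire; it only removes the need for the explicit delete step.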

To answer your particular case: as the table 'holder' contains only one partition, you can delete the whole partition with a single "delete by partition key" statement, effectively creating a single partition-level tombstone.
If you delete the partition once a day, you'll end up with 1 tombstone per day... that's quite acceptable.
1) With gc_grace_seconds set to 3 hours, and if RF > 1, you are not guaranteed to recover consistently from a node failure lasting longer than 3 hours.
2) With default_time_to_live set to 3 hours, each record will be deleted (by creating a tombstone) 3 hours after insertion.
So you could keep the default gc_grace_seconds of 10 days, and take care to delete your daily records with something like DELETE FROM table WHERE PartitionKey = X
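A minimal sketch of that daily cleanup, assuming the table were keyed by a day bucket (the partition key name and value here are hypothetical):

```cql
-- One statement, one partition-level tombstone for the whole day's 60K entries
DELETE FROM holder WHERE day = '2017-04-01';
```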
EDIT: Answering your comment about hinted handoff...
Let's say RF = 3, gc_grace_seconds = 3h, and a node goes down. The 2 other replicas continue to receive mutations (insert, update, delete), but they can't replicate them to the offline node. In that case, hints are stored on disk temporarily, to be sent later if the dead node comes back.
But a hint expires after gc_grace_seconds, after which it will never be sent.
Now if you delete a row, it will generate a tombstone in the SSTables of the 2 live replicas and a hint on the coordinator node. After 3 hours, the tombstones are removed from the online nodes by compaction, and the hint expires.
Later, when your dead node comes back, it still has the row, and it can't know that this row has been deleted, because no hint and no tombstone exist anymore on the replicas... thus it's a zombie row.

You might also find this support blog article useful:
https://academy.datastax.com/support-blog/cleaning-tombstones-datastax-dse-and-apache-cassandra

Related

Tombstone scanning in system.log

I have a Cassandra cluster with a low-delete use case. I found this in my system.log: "Read 10 live and 5645464 tombstone cells in keyspace.table". What does it mean? Please help me understand.
Thanks.
In Cassandra, all the information recorded is immutable. This means that when you have a delete operation (explicit, with a delete statement, or via a Time To Live [TTL] clause), the database adds another record with a special flag, called a tombstone. All these records stay in the database until the gc_grace_seconds period has passed; the default is 10 days.
In your case, the engine found that most of the records retrieved were deleted, but they are still waiting for gc_grace_seconds to pass so that compaction can reclaim the space. One possible option to fix the issue is to decrease gc_grace_seconds for that table.
For more information, please refer to this article from The Last Pickle.
One more important thing to keep in mind when working with Cassandra is that tombstone cells do not directly correlate to deletes.
When you insert a null value into an attribute, Cassandra internally marks that attribute/cell as a tombstone. So, even if you don't have a lot of deletes happening, you could end up with an enormous number of tombstones. The easy and simple solution is to not insert null values for attributes.
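As an illustration (the table and column names are made up): the first insert below writes a cell tombstone for email, while the second writes none:

```cql
-- Explicit null: Cassandra records a tombstone for the 'email' cell
INSERT INTO users (id, name, email) VALUES (uuid(), 'alice', null);

-- Omitting the column entirely: no tombstone is written
INSERT INTO users (id, name) VALUES (uuid(), 'alice');
```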
As far as the statement "Read 10 live and 5645464 tombstone cells in keyspace.table" goes, my guess is that some query is doing a scan that reads 10 live cells and 5645464 tombstones (cells with null values) along the way. You would need to understand what type of queries are being executed to gain more insight into that.

Will Cassandra return tombstone rows as valid rows?

My idea is to add rows to Cassandra with TTL = 15 minutes so I'll be able to load real-time data in the window (now - 15 minutes, now) without storing timestamps, etc. My concern is that rows with an expired TTL will be marked as tombstones (not actually deleted). I.e., will they count when I run select count(*) from realtime_table?
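The setup described above could be sketched like this (the table and column names are placeholders):

```cql
CREATE TABLE realtime_table (
    sensor_id  text,
    event_time timeuuid,
    value      double,
    PRIMARY KEY (sensor_id, event_time)
) WITH default_time_to_live = 900;  -- 15 minutes = 900 seconds
```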
No, tombstoned rows won't be returned in results; they are skipped when reading the data.
But if you are actively expiring data, you may need to tune gc_grace_seconds, otherwise you can accumulate too many unremoved tombstones, and in some cases you will start to get warnings or errors during reads if a read operation needs to skip over them (controlled by the tombstone_warn_threshold and tombstone_failure_threshold options in cassandra.yaml).
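For reference, those two knobs look like this in cassandra.yaml (the values shown are the usual defaults; check the file shipped with your version):

```yaml
# Warn in the log when a single read scans more tombstones than this
tombstone_warn_threshold: 1000
# Abort the read (TombstoneOverwhelmingException) beyond this many
tombstone_failure_threshold: 100000
```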
Here is a very good blog post that describes how data is deleted and cleaned up.
But select count(*) from table is a real antipattern in Cassandra; you need to consider correctly modelling your data with partitions, etc.

Cassandra Query Timeout with small set of data

I am having a problem with Cassandra 2.1.17. I have a table with about 40k "rows" in it. One partition I am having a problem with has maybe about 5k entries in it.
Table is:
create table billing (
    accountid uuid,
    date timeuuid,
    credit double,
    debit double,
    type text,
    primary key (accountid, date)
) with clustering order by (date desc);
So there is a lot of inserting and deleting from this table.
My problem is that somehow it seems to have become corrupted, I think, because I am no longer able to select data past a certain point in a partition.
From cqlsh I can run something like this:
SELECT accountid,date,credit,debit,type FROM billing WHERE accountid=XXXXX-xxxx-xxxx-xxxxx... AND date < 3d466d80-189c-11e7-8a57-f33cbced2fc5 limit 2;
First I did a select with a limit of 10000; it works up to around 5000 rows, paging through them, then towards the end it gives a timeout error.
I then use the second-to-last timeuuid and select with limit 2: it fails; limit 1 works.
If I use the last timeuuid as a < bound with a limit of 1, it also fails.
So I am just looking for what I can do here; I am not sure what is wrong and not sure how I can fix/diagnose what happened.
I have tried a repair and forced a compaction, but it still seems to have the issue.
Thank you for any help.
Try starting with a manual compaction on your table.
You can increase the read_request_timeout_in_ms parameter in the Cassandra config.
Consider moving to the leveled compaction strategy if you have a lot of deletes and updates.
I think you have too many tombstones in this partition.
What is a tombstone?
To remember that a record has been deleted, Cassandra creates a special value called a "tombstone". A tombstone has a TTL, like any other value, but it is not compacted away as easily as other values: Cassandra keeps it longer to avoid inconsistencies such as deleted data reappearing.
How to watch tombstones?
nodetool cfstats gives you an idea of how many tombstones you have on average per slice.
How to fix the issue?
The duration for which a tombstone is preserved is gc_grace_seconds. You have to reduce it and then run a major compaction to fix the issue.
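A hedged sketch of that fix, using the billing table from the question (pick a value that your repair rounds can comfortably complete within):

```cql
-- Example value only: 1 day instead of the 10-day default
ALTER TABLE billing WITH gc_grace_seconds = 86400;
```

followed by a major compaction, e.g. nodetool compact <keyspace> billing.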
It looks to me like you are hitting a lot of tombstones when you do selects. The thing is, while they are there, Cassandra still has to read over them. There might be multiple factors at play: TTLs on insert statements, a lot of deletes, inserting of nulls, etc.
My bet would be that you need to adjust gc_grace_seconds on the table and run repairs more often. But be careful and don't set it too low (one round of repair has to finish within this time).
It's all nicely explained here:
https://opencredo.com/cassandra-tombstones-common-issues/

Cassandra TTL and gc_grace_seconds

Using Cassandra 3.0:
If all of my columns have a (default or otherwise) TTL and I never delete a column, but overwrites happen maybe 2-3 times a day, can I set gc_grace_seconds = 0?
Note: The TTL of a column, even after being overwritten, always points to the same point in time. E.g., March 10, 2017.
Will I run into issues when a node goes down?
I know that if I delete a column and a node goes down and does not come back up within gc_grace_seconds, I will have a zombie column. "Logic" says this shouldn't be a problem unless the overwritten column has a different TTL.

Deleting a row of supercolumns and immediately replacing it with new data

Say I have a row of super columns in Cassandra. I delete the entire row (it is now marked with a tombstone). I then immediately (before any compaction / nodetool repair) add different data with the exact same row key. My question is: does Cassandra properly handle this and delete the old data, or is there a risk of SSTable data being orphaned that should have been deleted?
It all depends on the timestamps. The later timestamp wins... so if the delete's timestamp is before the modification's timestamp, the modification wins and puts the data in there.
Dean
PlayOrm for Cassandra Developer
