SSTables are never deleted from disk when a table is dropped.
I had a table whose tombstone count exceeded 100,000, which caused my read queries to fail with a tombstone threshold error. I dropped the table, but that didn't delete the SSTable files. I then re-created the table, ran my SELECT queries, and saw the tombstone error again. I don't understand why the old tombstone error came back.
Also, when do SSTables ever get deleted from disk?
Truncating a table will not necessarily free the disk space: by default a snapshot of the SSTables is taken first, and those snapshot hard links keep the files on disk. You need to run nodetool clearsnapshot to remove them.
Tombstones disappear through compaction, but only once gc_grace_seconds has passed. The default is 10 days. Why so long? It's designed to be a bit longer than a week, providing enough time to run a repair across the cluster before deletes are discarded. This maximizes the opportunity for consistency across the nodes.
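If you run repairs more frequently, gc_grace_seconds can be lowered per table so tombstones are purged sooner. A minimal sketch (keyspace and table names are placeholders; this is only safe if repairs reliably complete within the new window):

```sql
-- 3 days instead of the 10-day default; repairs must run more often than this
ALTER TABLE my_keyspace.my_table WITH gc_grace_seconds = 259200;
```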
In order to have your tables actually deleted from disk, you need to make sure that no hard links are still pointing at them. By default, a DROP command will create a snapshot of the CF first. To change that, set the auto_snapshot property to false in cassandra.yaml:
# Whether or not a snapshot is taken of the data before keyspace truncation
# or dropping of column families. The STRONGLY advised default of true
# should be used to provide data safety. If you set this flag to false, you will
# lose data on truncation or drop.
auto_snapshot: false
If you want to err on the safe side (and have a general procedure for recreating your keyspace's tables), you could go for:
DROP TABLE IF EXISTS mytable;
CREATE TABLE mytable (....);
TRUNCATE mytable;
I've never had a single problem with this so far.
The truncate operation is safer than dropping and re-creating the table. Truncate may throw a timeout exception; if it does, run it again until it completes.
Related
When is it NOT necessary to truncate the table when restoring a snapshot (incremental) for Cassandra?
All the various documentation sources, including the 2nd edition of Cassandra: The Definitive Guide, say something like this: "If necessary, truncate the table." If you restore without truncating (removing the tombstones), Cassandra continues to shadow the restored data. This behavior also occurs with other kinds of overwrites and causes the same problem.
If I have an insert-only C* keyspace (no upserts and no deletes), do I ever need to truncate before restoring?
The documentation seems to imply that I can delete all of the SSTable files from a column family (rm -f /data/.), copy the snapshot to /data/, and run nodetool refresh.
Is this true?
You are right - you can restore a snapshot exactly this way. Copy over the SSTables, restart the node, and you are done. With incremental backups, be sure you have all the SSTables containing your data.
If you have updates and deletes, what can happen after restoring a node (or while restoring multiple nodes) is that stale data becomes visible again, or you run into problems with tombstones for data that was deleted after the snapshot was taken.
The advantage of truncating the table first is that all data is gone at once, so you avoid such problems.
I have a Cassandra 2.1 cluster where we insert data through Java with a TTL, since the requirement is to persist the data for 30 days.
But this causes a problem: files containing old, tombstoned data are kept on disk. This results in disk space being occupied by data that is no longer required. Repairs take a long time to clear this data (up to 3 days on a single node).
Is there a better way to delete the data?
I came across this in the DataStax documentation:
Cassandra allows you to set a default_time_to_live property for an entire table. Columns and rows marked with regular TTLs are processed as described above; but when a record exceeds the table-level TTL, Cassandra deletes it immediately, without tombstoning or compaction. https://docs.datastax.com/en/cassandra/3.0/cassandra/dml/dmlAboutDeletes.html?hl=tombstone
Will the data be deleted more efficiently if I set the TTL at the table level instead of setting it on each insert?
Also, the documentation is for Cassandra 3, so will I have to upgrade to a newer version to get these benefits?
Setting default_time_to_live applies the default TTL to all rows and columns in your table - and if no individual TTL is set (and Cassandra has correct NTP time on all nodes), Cassandra can easily drop that data safely.
But keep some things in mind: your application is still able to set a specific TTL for a single row in your table - then normal tombstone processing applies. On top of that, even when data is TTLed it won't be deleted immediately - SSTables are still immutable - but the tombstones will be dropped during compaction.
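For reference, the table-level TTL discussed above is just a table property. A sketch, assuming the 30-day retention the question describes (keyspace and table names are placeholders):

```sql
-- 30 days in seconds; rows written without an explicit TTL inherit this
ALTER TABLE my_keyspace.my_table WITH default_time_to_live = 2592000;
```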
What could really help you a lot - just guessing - would be an appropriate compaction strategy:
http://docs.datastax.com/en/archived/cassandra/3.x/cassandra/dml/dmlHowDataMaintain.html#dmlHowDataMaintain__twcs-compaction
TimeWindowCompactionStrategy (TWCS)
Recommended for time series and expiring TTL workloads.
The TimeWindowCompactionStrategy (TWCS) is similar to DTCS with
simpler settings. TWCS groups SSTables using a series of time windows.
During compaction, TWCS applies STCS to uncompacted SSTables in the
most recent time window. At the end of a time window, TWCS compacts
all SSTables that fall into that time window into a single SSTable
based on the SSTable maximum timestamp. Once the major compaction for
a time window is completed, no further compaction of the data will
ever occur. The process starts over with the SSTables written in the
next time window.
This helps a lot - when you choose your time windows correctly. All data in the last compacted SSTable will have roughly equal TTL values (hint: don't do out-of-order inserts or set manual TTLs!). Cassandra keeps the newest expiry time in the SSTable metadata, and once that time has passed, Cassandra simply deletes the entire SSTable, as all its data is now obsolete. No compaction needed.
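Switching an existing table to TWCS is also just a table property change. A sketch, assuming daily windows for 30-day TTL data (names and the window size are assumptions to tune for your own workload):

```sql
ALTER TABLE my_keyspace.my_table
WITH compaction = {
  'class': 'TimeWindowCompactionStrategy',
  'compaction_window_unit': 'DAYS',
  'compaction_window_size': 1
};
```

With a 30-day TTL, daily windows give roughly 30 SSTable windows on disk at any time, each of which can be dropped wholesale once fully expired.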
How do you run your repair? Incremental? Full? Reaper? How big in terms of nodes and data is your cluster?
The quick answer is yes. The way it is implemented is by deleting the SSTable(s) directly from disk. Deleting an SSTable without needing to compact it frees disk space faster. But you need to be sure that all the data in a specific SSTable is "older" than the globally configured TTL for the table.
This is the feature referred to in the paragraph you quoted. It was implemented for Cassandra 2.0, so it should be part of 2.1.
I have a Cassandra table (Cassandra version 2.0) with terabytes of data; here is what the schema looks like:
CREATE TABLE "my_table" (
    key ascii,
    timestamp bigint,
    value blob,
    PRIMARY KEY ((key), timestamp)
);
I'd like to delete some data, but before want to estimate how much disk space it will reclaim.
Unfortunately, stats from JMX metrics are only available for the last two weeks, so that's not very useful.
Is there any way to check how much space is used by certain set of data (for example where timestamp < 1000)?
I was also wondering if there is a way to check a query's result set size, so that I could run something like select * from my_table where timestamp < 1000 and see how many bytes the result occupies.
There is no mechanism to see the on-disk size of a given slice of data; it can be pretty far removed from the coordinator of the request, and there are layers in between - like compression and multiple SSTables - that make it difficult to estimate.
Also be aware that issuing a delete will not immediately reduce disk space. C* does not delete data in place; SSTables are immutable and cannot be changed. Instead, it writes a tombstone entry. When SSTables are merged, the tombstone and the data it shadows combine into just the tombstone. Once gc_grace_seconds has passed, the tombstone is no longer copied during compaction and disappears.
The gc_grace period exists to prevent losing deletes in a distributed system: until a repair runs (which should be scheduled roughly weekly), there is no absolute guarantee that the delete has been seen by all replicas. If a replica has not seen the delete and you remove the tombstone, the data can come back.
No, not really.
Using sstablemetadata you can find tombstone drop times and the minimum and maximum timestamps in the mc-####-big-data.db files.
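For example (the path is a placeholder, and the exact generation number and format prefix depend on your Cassandra version):

```shell
# print metadata, including tombstone drop times and min/max timestamps, for an SSTable
sstablemetadata /var/lib/cassandra/data/my_keyspace/my_table-*/mc-*-big-Data.db
```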
Additionally if you're low on HDD space consider nodetool cleanup, nodetool clearsnapshot and then finally nodetool repair.
I had a column family occupying 40 GB of space. I truncated the column family. So, after GC_GRACE_SECONDS, Cassandra created snapshots of the truncated data, which consume the same amount of space. Is there any way to get rid of the space used by snapshots without disabling snapshot creation altogether? I mean, isn't there any timeout parameter after which it will delete a snapshot that is consuming unnecessary space?
The snapshot you see being created after truncating the column family is actually a safety mechanism in C* to avoid mass data loss in case of an accidental table drop or truncation (by the way, it has nothing to do with gc_grace_seconds). There is a setting, auto_snapshot, in cassandra.yaml which is true by default. From the DataStax documentation:
auto_snapshot
(Default: true) Enable or disable whether a snapshot is taken of the data before keyspace truncation or dropping of tables. To prevent data loss, using the default setting is strongly advised. If you set it to false, you will lose data on truncation or drop.
If you want to delete snapshots, you can use the nodetool clearsnapshot command as explained here.
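A sketch of the commands involved (the keyspace name is a placeholder, and exact flags vary by version - newer releases require --all or -t <name>):

```shell
# see which snapshots exist and how much space they hold
nodetool listsnapshots
# remove all snapshots for one keyspace
nodetool clearsnapshot my_keyspace
```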
My question is very simple. Is it in any way possible to retrieve columns that have been marked with a tombstone before the gc_grace_seconds period expires (default 10 days)? If yes, what would the exact CQL query be?
If I understand the deletion process correctly, tombstones are written to the memtable, while the SSTable, being immutable, still holds the deleted data until compaction. So, before compaction occurs, is there any way to read the tombstoned data from either the memtable or the SSTable?
Using CQL 3.0 on CQLSH command prompt & Cassandra 2.0.
You are right: when a tombstone is inserted, it usually doesn't immediately delete the underlying data (unless all of your data is still in a memtable). However, you can't control when the deletion happens. If you don't have much data and compaction happens quickly, the underlying data may be removed very quickly - much sooner than 10 days.
There is no query to read deleted data, but you can inspect your SSTables with sstable2json to see whether they still contain the deleted rows.
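For example (the path and filename are placeholders; "jb" is the SSTable format used by Cassandra 2.0):

```shell
# dump an SSTable to JSON; tombstone markers and deletion metadata
# remain visible in the output until compaction purges them
sstable2json /var/lib/cassandra/data/my_keyspace/my_table/my_keyspace-my_table-jb-1-Data.db
```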
Just to add to the previous comment: use a low value of gc_grace_seconds for the column families that have frequent deletions. It will still take some time, but the tombstones can then be expected to get cleared through compaction.