Cassandra clean up data and file handles for deleted tables

After I truncated and dropped a table in Cassandra, I still see the sstables on disk, plus a lot of open file handles pointing to them.
What is the proper way to get rid of them?
Is there a possibility without restarting the Cassandra nodes?
We're using Cassandra 3.7.

In Cassandra, data does not get removed immediately; instead it is marked with a tombstone. You can run nodetool repair to get rid of deleted data.
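As a rough sketch of that cleanup (the data path, keyspace and table names below are assumptions, not values from the question), you could first check what is still holding the old files open and then trigger the cleanup without restarting the node:

    # Check which process still holds handles on the old sstables
    # (default data directory assumed; adjust to your installation).
    lsof +D /var/lib/cassandra/data/my_keyspace | grep my_table

    # Run the repair suggested above.
    nodetool repair my_keyspace my_table

    # Truncate/drop keeps a snapshot when auto_snapshot is enabled;
    # clearing the keyspace's snapshots releases the files on disk (Cassandra 3.x syntax).
    nodetool clearsnapshot -- my_keyspace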

Related

When is it NOT necessary to truncate the table when restoring a snapshot (incremental) for Cassandra?

All the different documentation "providers", including the 2nd edition of Cassandra: The Definitive Guide, say something like this: "If necessary, truncate the table." If you restore without truncating (removing the tombstones), Cassandra continues to shadow the restored data. This behavior also occurs for other types of overwrites and causes the same problem.
If I have an insert-only C* keyspace (no upserts and no deletes), do I ever need to truncate before restoring?
The documentation seems to imply that I can delete all of the sstable files from a column family (rm -f /data/.), copy the snapshot to /data/, and run nodetool refresh.
Is this true?
You are right - you can restore a snapshot exactly this way. Copy over the sstables, restart the node and you are done. With incremental backups, be sure you got all the sstables containing your data.
What could happen if you have updates and deletes is that, after restoring a node or while restoring multiple nodes, stale data becomes visible again, or you run into problems with tombstones when data was deleted after the snapshot was taken.
The magic with truncating tables is that all data is gone at once and you avoid such problems.
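As a minimal sketch of that restore path (the directory layout, keyspace and table names are assumptions based on a default installation, not values from the question):

    # The table directory carries a UUID suffix that is specific to your cluster.
    SNAPSHOT=/backup/snapshots/my_table
    TABLE_DIR=$(ls -d /var/lib/cassandra/data/my_keyspace/my_table-*)

    # Copy the snapshot's sstables into the live table directory...
    cp "$SNAPSHOT"/* "$TABLE_DIR"/

    # ...then either restart the node or load them in place without a restart.
    nodetool refresh my_keyspace my_table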

Could not read commit log descriptor in file

I started to use Cassandra 3.7 and I always have problems with the commit log. When the PC shuts down unexpectedly, for example because of a power outage, the Cassandra service doesn't restart. I try to start it from the command line, but the error "could not read commit log descriptor in file" always appears.
I have to delete all the commit logs to start the Cassandra service. The problem is that I lose a lot of data. I tried to increase the replication factor to 3, but it is the same.
What can I do to decrease the amount of lost data?
PS: I only have one PC to run the Cassandra database; it is not possible to add more machines.
I think your option here is to work around the issue, since it's unlikely there is a guaranteed solution to prevent commit log files getting corrupted on a sudden power outage. Since you only have a single node, it is more difficult to recover the data. Increasing the replication factor to 3 on a single-node cluster is not going to help.
One thing you can try is to flush the memtables more frequently. On flush of a memtable, the corresponding entries in the commit log are discarded, which reduces the amount of data lost. This will, however, not resolve the root issue.
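A hedged sketch of that workaround (the five-minute interval, the keyspace name and the paths are arbitrary assumptions): schedule regular flushes so that less data lives only in the commit log, for example via cron:

    # crontab entry: flush memtables every 5 minutes so a lost commit log
    # takes at most a few minutes of writes with it.
    */5 * * * * /usr/bin/nodetool flush my_keyspace >> /var/log/cassandra/flush.log 2>&1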

Cassandra Load status does not update (nodetool status)

Using nodetool status I can read the Load of each node. Adding or removing data from a table should have a direct impact on that value. However, the value remains the same, no matter how many times the nodetool status command is executed.
Cassandra documentation states that the Load value takes 90 seconds to update. Even allowing several minutes between running the command, the result is always wrong. The only way I was able to make this value update, was to restart the node.
I don't believe it is relevant, but I should add that I am using docker containers to create the cluster.
In the documentation that you linked, under Load it also says
"Because all SSTable data files are included, any data that is not cleaned up, such as TTL-expired cells or tombstoned data, is counted."
It's important to note that when Cassandra deletes data, the data is marked with a tombstone and doesn't actually get removed until compaction. Thus, the load doesn't decrease immediately. You can force a major compaction with nodetool compact.
You can also try flushing the memtable if data is being added. Apache notes that
"Cassandra writes are first written to the CommitLog, and then to a per-ColumnFamily structure called a Memtable. When a Memtable is full, it is written to disk as an SSTable."
So you either need to add more data until the memtable is full, or you can run nodetool flush to force it.
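Putting the two together, an example sequence (the keyspace and table names are placeholders) would be:

    # Push memtable contents to disk as sstables.
    nodetool flush my_keyspace

    # Force a major compaction so tombstoned and TTL-expired data is dropped.
    nodetool compact my_keyspace my_table

    # Re-check the reported Load; allow roughly 90 seconds for it to refresh.
    nodetool status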

Cassandra data directory does not get updated with deletion

Currently, I am benchmarking a Cassandra database using the YCSB framework. During this time I have performed (batch) insertions and deletions of data quite regularly.
I am using the TRUNCATE command to delete keyspace rows. However, I am noticing that my Cassandra data directory swells up as the experiments go on.
I have checked and can confirm that there is no data left in the keyspace, yet the data directory is still large. Is there a way to trigger a process so that Cassandra automatically releases the stored space, or does this happen over time?
When you use TRUNCATE, Cassandra will create snapshots of your data.
To disable this you will have to set auto_snapshot: false in the cassandra.yaml file.
If you are using DELETE, then Cassandra uses tombstones, i.e. your data will not get deleted immediately. Data will get deleted once compaction is run.
To remove previous snapshots you can use the nodetool clearsnapshot command.
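For example (the keyspace name and snapshot tag below are placeholders; use the values reported by listsnapshots):

    # See which snapshots exist and how much space they take.
    nodetool listsnapshots

    # Remove a snapshot by its tag for a given keyspace.
    nodetool clearsnapshot -t <snapshot_tag> -- my_keyspace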

Cassandra not removing deleted rows despite running nodetool compact

Very often I have ghost rows that stay on the server and won't disappear after deleting a row in Cassandra.
I have tried all possible administration options with nodetool (compact, flush, etc.) and also connected to the cluster with JConsole and forced a GC through it, but the rows remain on the cluster.
For testing purposes I updated some rows with a TTL of 0 before doing the DELETE, and these rows disappeared completely.
Do I need to live with that or can I somehow trigger a final removal of these deleted rows?
My test cluster uses Cassandra 1.0.7 and has only a single node.
This phenomenon that you are observing is the result of how distributed deletes work in Cassandra. See the Cassandra FAQ and the DistributedDeletes wiki page.
Basically the row will be completely deleted after GCGraceSeconds has passed and a compaction has run.
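A sketch of that sequence on a single node (the keyspace and column family names are placeholders):

    # Make sure the tombstones have been written out as sstables.
    nodetool -h localhost flush my_keyspace my_columnfamily

    # After GCGraceSeconds has elapsed, a major compaction drops the ghost rows.
    nodetool -h localhost compact my_keyspace my_columnfamily

Since the test cluster has only a single node, lowering GCGraceSeconds on the column family is generally considered safe, as there are no other replicas that could miss the delete.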
