Cassandra backup: plain copy of disk files vs snapshots

We are planning to deploy a Cassandra cluster with 100 virtual nodes, storing at most 1TB of (compressed) data on each node. We're going to use (host-)local SSD disks.
The infrastructure team is used to plainly backing up whole partitions.
We've come across Cassandra snapshots.
What is the difference between plainly copying the whole disk and taking Cassandra snapshots?
- Is there a size difference?
- Backing up whole partitions also unnecessarily saves data that is in the middle of being compacted; is that the motive behind snapshots?

There are a few benefits to using snapshots:
- The snapshot command flushes the memtables to SSTables and then creates the snapshot, so you get a consistent on-disk view of the data.
- Snapshots can be restored with nodetool (copy the snapshot files back into the table directory and run nodetool refresh) or with sstableloader.
- Incremental backup functionality can also be leveraged on top of snapshots.
- Snapshots create hard links to your data files, so taking them is much faster and initially consumes no extra space.
Note: Cassandra can only restore data from a snapshot when the table schema exists, so it is recommended that you also back up the schema.
In both cases, make sure the operation (snapshot or plain copy) runs at the same time on all nodes.
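As a rough illustration, a snapshot-based backup of one node might look like the following (the keyspace name my_ks and the snapshot tag are placeholders for this example; the same commands would be run on every node at roughly the same time):

# Back up the schema first - a snapshot alone cannot be restored without it
cqlsh -e "DESCRIBE KEYSPACE my_ks" > my_ks_schema.cql

# Flush memtables and create hard-linked snapshot files under
# <data_dir>/my_ks/<table>-<id>/snapshots/backup_2024_01_01/
nodetool snapshot -t backup_2024_01_01 my_ks

# Verify the snapshot exists and see how much space it actually pins
nodetool listsnapshots

# Copy the snapshot directories to external storage, then remove the snapshot
nodetool clearsnapshot -t backup_2024_01_01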

Related

How to purge massive data from cassandra when node size is limited

I have a Cassandra cluster (Cassandra v3.11.11) with 3 data centers and replication factor 3. Each node has an 800GB NVMe disk, but one of the data tables is taking up 600GB. This results in the following from nodetool status:
DataCenter 1:
node 1: 620GB (used) / 800GB (total)
DataCenter 2:
node 1: 610GB (used) / 800GB (total)
DataCenter 3:
node 1: 680GB (used) / 800GB (total)
I cannot install another disk to the server and the server does not have internet connection at all for security reasons. The only thing I can do is to put some small files like scripts into it.
ALL Cassandra tables are set up with SizeTieredCompactionStrategy, so I end up with very big files: roughly one single 200GB SSTable (full table size 600GB). Since the cluster is running out of space, I need to delete data, which will introduce tombstones. But I have only 120GB of free space left for compaction or garbage collection. That means I can only retain at most 120GB of data, and to be safe for the Cassandra process, maybe even less.
I have already executed nodetool cleanup, so I am not sure if there is more room to free.
Is there any way to free 200GB of data from Cassandra, or can I only retain less than 120GB of data?
(If we remove 200GB of data, we have 400GB left; compaction/GC would need 400GB + 680GB, which is bigger than the disk space.)
Thank you!
I personally would start with checking whether the whole space is occupied by actual data and not by snapshots: use nodetool listsnapshots to list them, and nodetool clearsnapshot to remove them. If you took a snapshot at some point, then after compaction the snapshot files keep occupying space, because the live copies of the hard-linked files were removed while the snapshot still references them.
The next step would be to try to clean up tombstones and deleted data from the small tables using nodetool garbagecollect, or nodetool compact with the -s option to split the table into files of different sizes. For the big table I would try nodetool compact with the --user-defined option on individual SSTable files (assuming there will be enough space for them). As soon as you free more than 200GB, you can use sstablesplit (the node should be down!) to split the big SSTable into small files (~1-2GB), so that when the node starts again the data can be compacted in smaller chunks.
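To make that concrete, a rough command-level sketch of the sequence (keyspace my_ks, tables small_table/big_table, and the SSTable file names are placeholders; the real names come from your own data directory):

# 1. Check whether snapshots are silently holding disk space
nodetool listsnapshots
nodetool clearsnapshot          # removes all snapshots on 3.11; use -t <tag> for a specific one

# 2. Reclaim space from deleted data in the smaller tables
nodetool garbagecollect my_ks small_table
nodetool compact -s my_ks small_table

# 3. Compact individual large SSTables one at a time
#    (needs free space roughly the size of each file)
nodetool compact --user-defined /var/lib/cassandra/data/my_ks/big_table-<id>/md-1234-big-Data.db

# 4. With the node stopped, split the remaining huge SSTable into ~1GB chunks (size is in MB)
sstablesplit --size 1000 /var/lib/cassandra/data/my_ks/big_table-<id>/md-1234-big-Data.db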

How do I replicate a local Cassandra node to a remote node in another Cassandra cluster?

I need to replicate a local node with SimpleStrategy to a remote node in another Cassandra database. Does anyone have any idea where I should begin?
The main complexity here, if you're writing data into both clusters, is how to avoid overwriting data that has changed in the cloud more recently than in your local setup. There are several ways to do that:
If the structure of the tables is the same (including the names of the keyspaces, if user-defined types are used), then you can just copy SSTables from your local machine to the cloud and use sstableloader to replay them. In this case Cassandra will obey the actual writetime and won't overwrite data that changed later. Also, if you're doing deletes from tables, you need to copy the SSTables before the tombstones expire. You don't have to copy all SSTables every time, just the files that have changed since the last upload, but you always need to copy SSTables from all nodes from which you're doing the upload.
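A rough sketch of that approach, assuming the keyspace/table names (my_ks, my_table), the staging path, and the remote host name are placeholders and the target cluster is reachable on its native port:

# Take a consistent snapshot of the table on the local node
nodetool snapshot -t to_cloud my_ks

# Copy the snapshot into a directory laid out as <keyspace>/<table>, as sstableloader expects
mkdir -p /tmp/load/my_ks/my_table
cp /var/lib/cassandra/data/my_ks/my_table-*/snapshots/to_cloud/* /tmp/load/my_ks/my_table/

# Stream the SSTables into the remote cluster; rows with newer writetimes there are not overwritten
sstableloader -d remote-node1.example.com /tmp/load/my_ks/my_table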
If the structure isn't the same, then you can look at using either DSBulk or the Spark Cassandra Connector. In both cases you'll need to export the data together with its writetime, and then load it with that timestamp as well. Please note that if different columns have different writetimes, you will need to load that data separately, because Cassandra allows only one timestamp to be specified when updating/inserting data.
In the case of DSBulk you can follow example 19.4 for exporting data from this blog post, and example 11.3 for loading (from another blog post), so this may require some shell scripting. Plus you'll need disk space to keep the exported data (but you can use compression).
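A rough idea of that DSBulk round-trip, for a hypothetical table my_ks.my_table with primary key id and one regular column val (column names, host, and paths are assumptions, not taken from the question):

# Export the value together with its writetime
dsbulk unload -query "SELECT id, val, writetime(val) AS w FROM my_ks.my_table" -url /tmp/export

# Load it into the target cluster, preserving the original timestamp
dsbulk load -h remote-node1.example.com \
  -query "INSERT INTO my_ks.my_table (id, val) VALUES (:id, :val) USING TIMESTAMP :w" \
  -url /tmp/export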
In the case of the Spark Cassandra Connector you can transfer the data without intermediate storage if both clusters are accessible from Spark, but you'll need to write some Spark code that reads the data using the RDD or DataFrame APIs.

Increased disk space usage after nodetool cleanup (Apache Cassandra)

We have an Apache Cassandra (version 3.11.4) cluster in production with 5 nodes in each of two DCs. We recently added the last two nodes, and after the repairs had finished we started the cleanup 2 days ago. The nodes are quite big: /data has 2.8TB of mounted disk space, and Cassandra used around 48% of it before the cleanup.
Cleanup finished on the first node after ~14 hours (I don't think it broke: there are no errors in the log, and nodetool compactionstats says 0 pending tasks), but during the cleanup the disk usage increased up to 81% and has never gone back down since.
Will Cassandra clean this up, and if yes, when? Or do we have to do something manually? We can't find any tmp files that could be removed manually, so we have no idea now. Has anyone met this use case and found a solution?
Thanks in advance!
Check for old snapshots. Most probably you had many snapshots (from backups, truncated tables, or removed tables) that were hard links to the data files and therefore consumed no extra space. After nodetool cleanup the data files were rewritten and new files were created, while the hard links still point to the original files, which now consume disk space on their own. Use nodetool listsnapshots to get a list of existing snapshots, and nodetool clearsnapshot to remove the snapshots you no longer need.

Cassandra Drop Keyspace Snapshot Cleaning

I was reading in the Cassandra documentation that:
Cassandra takes a snapshot of the keyspace before dropping it. In Cassandra 2.0.4 and earlier, the user was responsible for removing the snapshot manually.
https://docs.datastax.com/en/cql/3.1/cql/cql_reference/drop_keyspace_r.html
This would imply that in versions after Cassandra 2.0.4, this is done automatically. If so, what configuration parameter (if any) sets the time before snapshot is automatically removed when doing a DROP KEYSPACE?
For example, in the case of DROP TABLE, gc_grace_seconds is
the number of seconds after data is marked with a tombstone (deletion marker) before it is eligible for garbage-collection.
I believe this reference is not accurate; Cassandra does not automatically clean up snapshots for you. The configuration documentation says as much:
Cassandra won’t clean up the snapshots for you
http://cassandra.apache.org/doc/latest/configuration/cassandra_config_file.html#snapshot-before-compaction
You can remove snapshots using the nodetool clearsnapshot command, or manually delete the directories & files yourself (this is safe as snapshots are just file hard-links).
Note also that gc_grace_seconds is not related to snapshots, it is used during compactions only.
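As an illustration of the manual route (the keyspace name and snapshot name are made up, and the path assumes the default data directory layout), snapshots taken on DROP remain under each table's snapshots directory even after the keyspace is gone:

# See which snapshots exist and how much space they hold
nodetool listsnapshots

# Remove them via nodetool ...
nodetool clearsnapshot -- my_ks

# ... or delete the snapshot directories directly; this is safe because they are only hard links
rm -rf /var/lib/cassandra/data/my_ks/*/snapshots/<snapshot_name>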

How can I switch from multiple disks to a single disk in cassandra?

Because I ran out of space when shuffling, I was forced to add multiple disks on my Cassandra nodes.
When I finish compacting, cleaning up, and repairing, I'd like to remove them and return to one disk per node.
What is the procedure to make the switch?
Can I just kill cassandra, move the data from one disk to the other, remove the configuration for the second disk, and re-start cassandra?
I assume the files will not have the same names and thus will not be overwritten; is this the case?
1. Run nodetool disablegossip and nodetool disablethrift, so that this node is seen as DOWN by the other nodes.
2. Flush/drain the memtables and run a compaction to merge SSTables, if any. [Optionally, take a snapshot as a precaution.] This stops all the other nodes/clients from writing to this node, and since the memtables are flushed to disk, nothing is lost when the process stops.
3. Stop Cassandra (although this node is down, the cluster is still available for writes/reads, so zero downtime).
4. Move the data/log contents from the other disk to the disk you want to keep.
5. Edit cassandra.yaml to change the paths below:
   - commitlog_directory
   - saved_caches_directory
   - data_file_directories
   - log_directory
6. Restart Cassandra.
Do this for all nodes; a command-level sketch of the sequence is shown below.
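A minimal sketch of that sequence on one node, assuming the second disk is mounted at /data2, the disk you keep is at /data1, and Cassandra runs as a systemd service (the paths, service name, and use of rsync are assumptions for illustration):

# 1. Make the node invisible to clients and other nodes
nodetool disablegossip
nodetool disablethrift
nodetool drain            # flushes memtables and stops accepting writes

# 2. Stop Cassandra
sudo systemctl stop cassandra

# 3. Move the contents of the second disk onto the disk you keep
sudo rsync -a /data2/cassandra/ /data1/cassandra/

# 4. Edit cassandra.yaml so that data_file_directories, commitlog_directory,
#    and saved_caches_directory point only at /data1

# 5. Start Cassandra again and verify the node rejoins the cluster
sudo systemctl start cassandra
nodetool status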
