I stopped a Cassandra node. It created a snapshot directory. Under the snapshot directory there are many subfolders, and under those subfolders there are many sstable files.
I wonder how Cassandra puts/copies sstable files into those subfolders; in other words, what is the meaning of the subfolders' names?
I also wonder whether the sstables under the snapshot are links or copies of the data. With "ls -l" I cannot see any link. However, with "du" the sizes do not make sense either, if they are true copies.
The sstables in the snapshot dir are hardlinks. You can see the number of links to an sstable by running stat on the sstable file.
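For example, a quick check with stat (the paths, keyspace, and table names here are illustrative, not from the post above):
$ # "Links: 2" or higher means the file is shared between the live table and a snapshot
$ stat /var/lib/cassandra/data/my_keyspace/my_table-*/snapshots/*/*-Data.db
$ # with GNU stat, print just the link count and the file name
$ stat -c '%h %n' /var/lib/cassandra/data/my_keyspace/my_table-*/*-Data.db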
Related
What is true size in the output of the command nodetool listsnapshots?
There is no explanation in the Cassandra documentation.
It's the total size of the sstables that only that snapshot has a hardlink to.
Snapshots just create hardlinks to the actual sstable components. Once an sstable has been compacted and deleted away, the hardlink in the snapshot may be the only link still referencing the inode and preventing it from being freed. That is what true size measures.
For example, if you disable compaction and take a snapshot, immediately afterwards listsnapshots will show a true size of zero. If you then turn the node off, delete one sstable in the data directory, and restart, listsnapshots will show the true size as the size of the deleted sstable.
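A minimal way to reproduce the first half of that experiment (keyspace and table names are hypothetical):
$ nodetool disableautocompaction my_keyspace my_table
$ nodetool snapshot -t before_test my_keyspace
$ nodetool listsnapshots    # "True size" for before_test should show 0 bytes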
I looked for a while and couldn't find anything in the Cassandra docs. However, the ScyllaDB docs (Scylla being itself derived from Cassandra) say true size is "Total size of all SSTables which are not backed up to disk".
Further reading offers the following example:
There is a single 1TB file in the snapshot directory. If that file also exists in the main column family directory, the size on the disk is 1TB and the true size is 0 because it is already backed up to disk.
It seems "true size" is the amount of data that has not yet been backed up - if your backups are fresh, it will be 0.
$ cd /tmp
$ # work on a copy of the table directory, outside the live data dir
$ cp -r /var/lib/cassandra/data/keyspace/table-6e9e81a0808811e9ace14f79cedcfbc4 .
$ nodetool compact --user-defined table-6e9e81a0808811e9ace14f79cedcfbc4/*-Data.db
I expected the two SSTables (where the second one contains only tombstones) to be merged into one, which would be equivalent to the first one minus data masked by tombstones from the second one.
However, the last command returns exit status 0 and nothing changes in the table-6e9e81a0808811e9ace14f79cedcfbc4 directory (both sstables are still there). Any ideas how to unconditionally merge potentially multiple SSTables into one offline (as above, not on SSTable files currently used by the running cluster)?
Just nodetool compact <keyspace> <table>. There is no real offline compaction, only telling Cassandra which sstables to compact. User-defined compaction is just a way to give it a custom list of sstables, and a major compaction (the example above) will include all sstables in a table.
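A sketch of both variants against a running node (keyspace, table, and file paths are hypothetical):
$ # major compaction: merges all sstables of the table
$ nodetool compact my_keyspace my_table
$ # user-defined compaction: pass paths of -Data.db files the node is currently serving
$ nodetool compact --user-defined /var/lib/cassandra/data/my_keyspace/my_table-*/*-Data.db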
Whether it will work really depends on which version you're using, but there is https://github.com/tolbertam/sstable-tools#compact available. If desperate, you can import cassandra-all for your version and do something like: https://github.com/tolbertam/sstable-tools/blob/master/src/main/java/com/csforge/sstable/Compact.java
I have enabled incremental backups in the cassandra.yaml file. As I understand it, when incremental backups are enabled, Cassandra will back up the data (into the backups directory) only when a memtable is flushed. But what if the memtable has not yet been flushed? Then I won't be able to get the incremental backup, right? I know that certain conditions must be met for a memtable to be flushed, such as a time interval or memtable space. My question is: how do I arrange things so that even if I enter one record after the last snapshot, I can still back up the entire data along with that latest entry?
Consider this example:
Take the snapshot.
Clear incremental backup (backups directory)
Enter a record to a table.
Check for the incremental backup in backups directory. It is still empty.
Now, how do I back up the record which was written after the last snapshot? In general, how do we back up the entire up-to-date data without taking a snapshot?
You can flush the files manually with nodetool flush just before taking the backup. That way you'll always have the latest memtable flushed.
nodetool docs
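A minimal sketch of that order of operations (the keyspace name and backup path are assumptions):
$ nodetool flush my_keyspace    # force current memtables to sstables on disk
$ # freshly flushed sstables now show up under each table's backups/ directory
$ rsync -a --relative /var/lib/cassandra/data/./my_keyspace/*/backups/ /mnt/backups/incremental/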
If you want to back up a cluster without taking a snapshot, you can do it by simply saving everything under the /data folder from every node (this includes mainly the .db files, stats files, etc.).
In order not to overwrite files, you should store them together with the token information as well.
When you want to restore from this backup, you should spin up a cluster with the same number of nodes and simply copy the data one-to-one from each backed-up node to a restored node. Note that you'll have to modify cassandra.yaml to include the relevant tokens (as well as the peers/seeds/etc.) for each restored node.
After all the data is copied, you can start C* process on all the nodes.
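A sketch of capturing one node's tokens alongside its data (the IP address and paths are illustrative):
$ # record the tokens this node owns, so initial_token can be set in cassandra.yaml on restore
$ nodetool ring | grep '^10.0.0.1' | awk '{print $NF}' | paste -sd, - > /mnt/backup/node1-tokens.txt
$ tar czf /mnt/backup/node1-data.tar.gz /var/lib/cassandra/data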
I am looking for confirmation that my Cassandra backup and restore procedures are sound and I am not missing anything. Can you please confirm, or tell me if something is incorrect/missing?
Backups:
I run daily full backups of the keyspaces I care about, via "nodetool snapshot keyspace_name -t current_timestamp". After the snapshot has been taken, I copy the data to a mounted disk dedicated to backups, then do a "nodetool clearsnapshot keyspace_name -t current_timestamp".
I also run hourly incremental backups, executing a "nodetool flush keyspace_name" and then moving files from the backups directory of each keyspace into the backup mountpoint (the daily flow is sketched in commands below).
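A rough sketch of that daily flow (paths and the keyspace name are assumptions, not from the original post):
$ TS=$(date +%Y%m%d%H%M%S)
$ nodetool snapshot -t "$TS" my_keyspace
$ # snapshot files appear under each table dir: .../my_keyspace/<table>/snapshots/$TS
$ rsync -a --relative /var/lib/cassandra/data/./my_keyspace/*/snapshots/"$TS" /mnt/backups/
$ nodetool clearsnapshot -t "$TS" my_keyspace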
Restore:
So far, the only valid way I have found (and tested/confirmed) to do a restore is the following, on ALL Cassandra nodes in the cluster (see the command sketch after the list):
Stop Cassandra
Clear the commitlog *.log files
Clear the *.db files from the table I want to restore
Copy the snapshot/full backup files into that directory
Copy any incremental files I need to (I have not tested with multiple incrementals, but I am assuming I will have to overlay the files, in sequence from oldest to newest)
Start Cassandra
On one of the nodes, run a "nodetool repair keyspace_name"
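A command-level sketch of those steps on one node (the service name, paths, and keyspace/table names are assumptions):
$ sudo systemctl stop cassandra    # or however Cassandra is managed on your hosts
$ rm /var/lib/cassandra/commitlog/*.log
$ rm /var/lib/cassandra/data/my_keyspace/my_table-*/*.db
$ cp /mnt/backups/latest/my_keyspace/my_table/* /var/lib/cassandra/data/my_keyspace/my_table-*/
$ sudo systemctl start cassandra
$ nodetool repair my_keyspace    # on one node only, once all nodes are back up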
So my questions are:
Does the above backup and restore strategy seem valid? Are any steps inaccurate or anything missing?
Is there a way to do this without stopping Cassandra on EVERY node? For example, is there a way to restore the data on ONE node, then somehow make it "authoritative"? I tried this and, as expected, since the restored data is older, the data on the other nodes (which is newer) overwrites it when they sync up during repair.
Thank you!
There are two ways to restore Cassandra backups without restarting C*:
Copy the files into place, then run "nodetool refresh". This has the caveat that the rows will still be older than the tombstones, so if you're trying to restore deleted data, it won't do what you want. It also only applies to the local server (you'll want to run a repair afterwards).
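A minimal sketch of that first option (paths and keyspace/table names are hypothetical):
$ cp /mnt/backups/latest/my_keyspace/my_table/* /var/lib/cassandra/data/my_keyspace/my_table-*/
$ nodetool refresh my_keyspace my_table    # picks up the newly copied sstables without a restart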
Use "sstableloader". This will load data to all nodes. You'll need to make sure you have the sstables from a complete replica, which may mean loading the sstables from multiple nodes. Added bonus, this works even if the cluster size has changed. I'm not sure if ordering matters here (that is, I don't know if row timestamps are preserved through the load or if they're redefined during load)
In my C* 1.2.4 setup, I have a 200 GB SSD drive for the data and a 500 GB rotational drive for the commit logs.
During a scrub operation I had the unpleasant surprise of my SSD drive filling up with snapshots. That made the Cassandra box unresponsive, although it kept its status as up in nodetool status.
I am wondering if there is a way to specify the target directory for snapshots when doing a scrub.
Otherwise, do you have ideas for workarounds?
I can do one column family at a time and then copy the snapshots folder, but I am open to smarter solutions.
Thanks,
H
Snapshots in Cassandra are created as hard links to your existing data files. This means that at the time the snapshot is taken, it takes up almost no extra space. However, it causes the old files to remain, so if you delete or update data, the old version is still there.
This means snapshots must be taken on the drive that stores the data. If you don't need the snapshot any more, just delete it with 'nodetool clearsnapshot' (see the nodetool help output for how to decide which snapshots to delete). If you want to keep the snapshot, you can move it elsewhere. It will only start using significant disk space after a while, so you could keep it until you are satisfied that the scrub didn't delete important data, and then delete the snapshot.
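For example (the snapshot tag is illustrative; scrub typically tags its automatic snapshots along the lines of pre-scrub-<timestamp>):
$ nodetool listsnapshots    # find the tag of the snapshot the scrub created
$ nodetool clearsnapshot -t pre-scrub-1364912345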