I screwed up massively, and accidentally deleted some SSTable (.db) files from a Cassandra data directory. Literal "rm -rf" style deletion. I've been trying to recover them using foremost, but I need to know the header and footer file signatures for SSTables in order to configure foremost to do the job.
Does anyone know the header/footer file signatures?
Or, can someone give me some ideas on how I can go about recovering these?
Are they in your snapshots directory? Cassandra creates hardlinks to sstables in the snapshots directory precisely for this kind of scenario.
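This works because a snapshot is a hard link: deleting the original name doesn't delete the data as long as another link exists. A minimal sketch of that behavior, using throwaway paths (not real Cassandra files):

```shell
# Simulate a live sstable and a snapshot hard link (paths are illustrative).
rm -rf /tmp/snapshot-link-demo
mkdir -p /tmp/snapshot-link-demo/data /tmp/snapshot-link-demo/snapshots
echo "sstable contents" > /tmp/snapshot-link-demo/data/na-1-big-Data.db

# A snapshot is a hard link to the same inode -- no data is copied.
ln /tmp/snapshot-link-demo/data/na-1-big-Data.db \
   /tmp/snapshot-link-demo/snapshots/na-1-big-Data.db

# "rm -rf" the live copy...
rm /tmp/snapshot-link-demo/data/na-1-big-Data.db

# ...and the snapshot link still holds the full contents.
cat /tmp/snapshot-link-demo/snapshots/na-1-big-Data.db   # prints: sstable contents
```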
Theoretical question:
Let's say I have a Cassandra cluster with some data in it.
Backups are created on a daily basis.
Now a subset of the data is lost, either through an application error or manual deletion.
What is the best way to restore data from existing backup?
I can think of starting a separate node with the backup disk attached, then exporting the data manually through SELECTs and reimporting it into the prod database.
That would work, but it sounds complicated. Is there a more straightforward solution for such problems?
If it's a single partition, your best bet is probably to use sstabledump or something like sstable-tools to read the data from the backup and manually reinsert it. If you're OK with restoring everything deleted since the time of the snapshot: reduce gc_grace_seconds and run a forced compaction to purge any tombstones (or else they will continue to shadow the restored data), then use sstableloader, or, if the token ranges are the same, copy the backed-up sstables back into the data directory.
As the data in case of Cassandra is physically removed during compaction, is it possible to access the recently deleted data in any way? I'm looking for something similar to Oracle Flashback feature (AS OF TIMESTAMP).
Also, I can see the pieces of deleted data in the relevant commit log file, however it's obviously unreadable. Is it possible to convert this file to a more readable format?
You will want to execute a restore from your commitlog.
The safest is to copy the commitlog to a new cluster (with same schema), and restore following the instructions (comments) from commitlog_archiving.properties file. In your case, you will want to set restore_point_in_time to a time between your insert and your delete.
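For reference, the relevant settings in commitlog_archiving.properties look roughly like this (the paths and timestamp are illustrative; the timestamp format is yyyy:MM:dd HH:mm:ss):

```
restore_command=cp -f %from %to
restore_directories=/path/to/archived/commitlogs
# Replay only mutations up to a point between your insert and your delete:
restore_point_in_time=2024:01:15 11:59:00
precision=MICROSECONDS
```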
Is there a way to backup Cassandra directly to tape (streaming device)?
Or to perform real snapshots?
The snapshot Cassandra is referring to is not what I want to call a snapshot.
It is more a consistent copy of the database files to a directory.
Regards Tomas
First, let's clarify the Cassandra write path, so we know what we need to back up. Writes come in and are first journaled in the commitlog, then written to the memtable, then eventually flushed to sstables. When sstables flush, the relevant commitlog segments are deleted.
If you want a consistent backup of Cassandra, you need at the very least the sstables, but ideally the sstables + commitlog, so you can replay any data between the commitlog and the most recent flush.
If you're using tape backup, you can treat the files on disk (both commitlog and sstables) as typical data files: you can tar them, rsync them, or copy them as needed, or point amanda (or whatever tape system you're using) at the data file directory + commitlog directory, and it should just work. There's not a lot of magic there; just grab them and back them up. One of the more common backup processes involves tablesnap, which watches for new sstables and uploads them to S3.
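The "just grab the files" approach can be sketched as below. The directory layout is a stand-in; on a real node you would use your actual data_file_directories and commitlog_directory, and ideally run `nodetool flush` or `nodetool snapshot` first so the sstables are consistent:

```shell
# Illustrative layout standing in for a real node's directories.
rm -rf /tmp/cass-backup-demo
mkdir -p /tmp/cass-backup-demo/data/ks1/table1 /tmp/cass-backup-demo/commitlog
echo "fake sstable"  > /tmp/cass-backup-demo/data/ks1/table1/na-1-big-Data.db
echo "fake segment"  > /tmp/cass-backup-demo/commitlog/CommitLog-7-1.log

# Bundle both directories into a single archive for the tape system.
tar -czf /tmp/cass-backup-demo.tar.gz -C /tmp/cass-backup-demo data commitlog

# Verify what went into the archive.
tar -tzf /tmp/cass-backup-demo.tar.gz
```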
You can backup Cassandra directly to Tape using SPFS
SPFS is a file system for Spectrum Protect.
Just mount the SPFS file system where you want the backups to land.
E.g.:
mount -t spfs /backup
And backup Cassandra to this path.
All operations that go via this mount point (/backup) are automatically translated into Spectrum Protect client API calls.
On the Spectrum Protect backup server, one can use any type of supported media.
For instance: CD, Tape, VTL, SAS, SATA, SSD, Cloud etc..
In this way, you can easily backup your Cassandra directly to a backup server.
I am looking for confirmation that my Cassandra backup and restore procedures are sound and I am not missing anything. Can you please confirm, or tell me if something is incorrect/missing?
Backups:
I run daily full backups of the keyspaces I care about, via "nodetool snapshot $keyspace_name -t $current_timestamp". After the snapshot has been taken, I copy the data to a mounted disk, dedicated to backups, then do a "nodetool clearsnapshot $keyspace_name -t $current_timestamp".
I also run hourly incremental backups - executing a "nodetool flush keyspace_name" and then moving files from the backup directory of each keyspace, into the backup mountpoint
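The daily snapshot-and-copy step could be sketched like this. The nodetool calls and all paths are illustrative; to keep the copy loop runnable here, the snapshot layout that nodetool would create is faked with mkdir/echo:

```shell
TS=$(date +%s)
DATA_DIR=/tmp/fake-cassandra/data    # stand-in for the real data directory
BACKUP_DIR=/tmp/backup-mount/$TS     # stand-in for the dedicated backup disk
rm -rf /tmp/fake-cassandra /tmp/backup-mount

# On a real node you would first run:
#   nodetool snapshot my_keyspace -t "$TS"
# Here we fake the resulting snapshot layout so the copy step is runnable.
mkdir -p "$DATA_DIR/my_keyspace/my_table/snapshots/$TS"
echo "snapshot data" > "$DATA_DIR/my_keyspace/my_table/snapshots/$TS/na-1-big-Data.db"

# Copy every table's snapshot directory for this tag to the backup disk.
find "$DATA_DIR" -type d -path "*/snapshots/$TS" | while read -r snap; do
  dest="$BACKUP_DIR/${snap#$DATA_DIR/}"
  mkdir -p "$dest"
  cp -a "$snap/." "$dest/"
done

# Then, on a real node: nodetool clearsnapshot my_keyspace -t "$TS"
```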
Restore:
So far, the only valid way I have found to do a restore (and tested/confirmed) is to do this, on ALL Cassandra nodes in the cluster:
Stop Cassandra
Clear the commitlog *.log files
Clear the *.db files from the table I want to restore
Copy the snapshot/full backup files into that directory
Copy any incremental files I need to (I have not tested with multiple incrementals, but I am assuming I will have to overlay the files, in sequence from oldest to newest)
Start Cassandra
On one of the nodes, run a "nodetool repair keyspace_name"
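The per-node file manipulation in those steps (everything except the service stop/start and the repair, which can't be shown here) can be sketched as follows; all paths are illustrative stand-ins, faked so the steps are runnable:

```shell
COMMITLOG=/tmp/restore-demo/commitlog
TABLE_DIR=/tmp/restore-demo/data/my_keyspace/my_table
BACKUP=/tmp/restore-demo/backup      # full snapshot copied off earlier

# Fake the pre-restore state so the steps below are runnable.
rm -rf /tmp/restore-demo
mkdir -p "$COMMITLOG" "$TABLE_DIR" "$BACKUP"
echo "stale" > "$COMMITLOG/CommitLog-7-1.log"
echo "stale" > "$TABLE_DIR/na-2-big-Data.db"
echo "good"  > "$BACKUP/na-1-big-Data.db"

# (1) Stop Cassandra -- not shown.
# (2) Clear the commitlog *.log files.
rm -f "$COMMITLOG"/*.log
# (3) Clear the *.db files for the table being restored.
rm -f "$TABLE_DIR"/*.db
# (4) Copy the snapshot/full backup files back in.
cp -a "$BACKUP/." "$TABLE_DIR/"
# (5) Overlay incrementals oldest-to-newest the same way, then
# (6) start Cassandra and (7) run: nodetool repair my_keyspace
```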
So my questions are:
Does the above backup and restore strategy seem valid? Are any steps inaccurate or anything missing?
Is there a way to do this without stopping Cassandra on EVERY node? For example, is there a way to restore the data on ONE node, then somehow make it "authoritative"? I tried this, and, as expected, since the restored data is older, the newer data on the other nodes overwrites it when they sync up during repair.
Thank you!
There are two ways to restore Cassandra backups without restarting C*:
Copy the files into place, then run "nodetool refresh". The caveat is that the restored rows will still be older than any tombstones, so if you're trying to restore deleted data, it won't do what you want. It also only applies to the local server (you'll want to run a repair afterwards).
Use "sstableloader". This will load data to all nodes. You'll need to make sure you have the sstables from a complete replica, which may mean loading the sstables from multiple nodes. Added bonus, this works even if the cluster size has changed. I'm not sure if ordering matters here (that is, I don't know if row timestamps are preserved through the load or if they're redefined during load)
In my C* 1.2.4 setup, I have an ssd drive of 200Gb for the data and a rotational drive for commit logs of 500Gb.
I had the unpleasant surprise, during a scrub operation, of filling up my SSD drive with the snapshots. That made the Cassandra box unresponsive, although nodetool status kept reporting it as up.
I am wondering if there is a way to specify the target directory for snapshots when doing a scrub.
Otherwise if you have ideas for workarounds?
I can do a column family at a time and then copy the snapshots folder, but I am open for smarter solutions.
Thanks,
H
Snapshots in Cassandra are created as hard links to your existing data files. This means at the time the snapshot is taken, it takes up almost no extra space. However, it causes the old files to remain so if you delete or update data, the old version is still there.
This means snapshots must be taken on the drive that stores the data. If you don't need the snapshot any more, just delete it with 'nodetool clearsnapshot' (see the nodetool help output for how to specify which snapshots to delete). If you want to keep the snapshot, you can move it elsewhere. It will only start consuming significant disk space after a while, so you could keep it until you are satisfied the scrub didn't delete important data, then delete the snapshot.
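You can see the hard-link behavior directly (throwaway paths below, not real Cassandra files): right after "snapshotting", both names point at the same inode, so the link count is 2 and no extra blocks are used:

```shell
rm -rf /tmp/snap-space-demo
mkdir -p /tmp/snap-space-demo/data/snapshots/s1
printf 'x%.0s' $(seq 1 4096) > /tmp/snap-space-demo/data/na-1-big-Data.db

# "Take a snapshot": hard-link the sstable into the snapshots directory.
ln /tmp/snap-space-demo/data/na-1-big-Data.db \
   /tmp/snap-space-demo/data/snapshots/s1/na-1-big-Data.db

# Both names share one inode, so the link count is 2...
stat -c %h /tmp/snap-space-demo/data/na-1-big-Data.db   # prints: 2

# ...and du counts the shared blocks only once across the whole tree.
du -s /tmp/snap-space-demo/data
```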