Cassandra global snapshot - Linux

I am running a cluster with 3 nodes (EC2 instances) and a replication factor of 2. I execute a script from the first node which runs nodetool snapshot on all the nodes using the pssh (parallel-ssh) utility. But the snapshot data for each node gets stored on that node itself. Is there a way to get the snapshot data of all nodes onto the node from which I ran the script, so that my script can easily copy the data to S3 from a single place?
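For reference, a rough sketch of the kind of script being described here; the host file name, snapshot tag, and data-directory path are hypothetical:

    # hosts.txt lists the addresses of all 3 nodes
    # Take a snapshot with the same tag on every node in parallel
    pssh -h hosts.txt -i "nodetool snapshot -t nightly_backup"

    # Each node now holds its own snapshot locally, under something like
    # /var/lib/cassandra/data/<keyspace>/<table>-<uuid>/snapshots/nightly_backup/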
Also, suppose I have a 5-node cluster and I have snapshots for each node. Now I want to restore this data to a 10-node cluster and to a 2-node cluster with different replication factors. Is the process below correct for the restore?
1. Copy the snapshot data from all 5 nodes and merge all the files into a single folder.
2. Run the sstableloader command, passing all the target IP addresses (10 or 2 of them) and the single folder location (see the sketch below). Will this properly split the data from 5 nodes across 10 or 2 nodes after the restore?
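For illustration, a rough sketch of the sstableloader step being proposed; paths and addresses are hypothetical, and sstableloader expects the merged files to sit in a <keyspace>/<table> directory layout (watch for SSTable file-name collisions when merging files from several nodes):

    # Directory layout: /restore/my_keyspace/my_table/ holds the merged SSTables
    # -d takes a few contact points of the *target* cluster (10-node or 2-node)
    sstableloader -d 10.0.0.1,10.0.0.2,10.0.0.3 /restore/my_keyspace/my_table

Because sstableloader streams each partition to the replicas that own it in the target ring, the data is re-split according to the target cluster's topology and replication factor.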

I strongly suggest using the Medusa tool (see its documentation) for backup & restore of your Cassandra cluster(s). It can back up data to cloud storage, and you can restore that data to other clusters, even ones with a different topology.
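For illustration, a minimal sketch of what a Medusa-based workflow might look like; the exact subcommands, flags, and config location should be checked against the Medusa documentation for your version:

    # Run on each node: Medusa backs up the local node to the storage
    # bucket configured in its config file (e.g. /etc/medusa/medusa.ini)
    medusa backup --backup-name=prod_snapshot_01

    # See what is available in the bucket
    medusa list-backups

    # Restore a backup onto a (possibly differently sized) cluster
    medusa restore-cluster --backup-name=prod_snapshot_01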

Cassandra Cluster Migration Issues

Now:
Single node
cassandra 3.11.3 + kairosdb 1.2
Two data storage paths:
/data/cassandra/data/kairosdb - 4 TB of old data
/data1/cassandra/data/kaiosdb - 1.1 TB, currently being written
Target:
Three nodes
cassandra 3.11.3 + kairosdb 1.2
One data storage path:
/data/cassandra/data/kairosdb
In this case, how do I migrate the data in the two data directories of the single node to a three-node cluster in which each node has only one data directory?
I understand how to do this (and have practiced it) when migrating a single node to a three-node cluster, but only with a single data directory. For going from 2 data directories to 1, I have searched the Internet for a long time but found no reference material.
Data directories are something that the individual Cassandra node cares about; the cluster doesn't.
Usually you'd want all nodes to share the same configuration, but for replication it really doesn't matter where the SSTables are on disk on each node.
So migrating here would be the same as you've practiced.
That said, the process I'd choose would be to add the new nodes as a second DC with the right replication, run a repair to get all the data in sync, and then decommission the original node.
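A hedged sketch of that second-DC approach; the DC names, keyspace name, and replication counts are placeholders. The new nodes join with a different DC name in cassandra-rackdc.properties (e.g. dc=DC2), while the existing node keeps its current DC (here called DC1):

    # 1. Replicate the keyspace to the new DC
    cqlsh -e "ALTER KEYSPACE kairosdb WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 1, 'DC2': 3};"

    # 2. On each new node, stream the existing data over from the old DC
    nodetool rebuild -- DC1

    # 3. Repair the new nodes, then stop replicating to the old DC
    nodetool repair
    cqlsh -e "ALTER KEYSPACE kairosdb WITH replication = {'class': 'NetworkTopologyStrategy', 'DC2': 3};"

    # 4. Retire the original node (run on the old node)
    nodetool decommission

The on-disk data directories never enter into this: the new nodes simply write whatever they receive into their single configured data directory.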

Cassandra cluster bulk loader hangs during export

I am migrating a simple 4-node Cassandra cluster from one cloud provider to another. The number of nodes in both clouds is the same; however, the newer cluster is at version 3.11.0 and the older one is at 3.0.11. I am using sstableloader to stream data from one cluster to the other (the schema has been created on the new cluster separately). As per the release notes this should not be a problem.
However, for certain column families sstableloader reaches 100% progress but then hangs there for hours (the hang time far exceeds the streaming time). The total data to stream on each node is below 500 GB. Any help on why this is happening and how to avoid it is appreciated.
1. Create a new node in the new cloud and add it to the existing cluster.
2. Flush tables from the memtables to SSTables on disk.
3. Remove one node from the old cloud. Likewise, repeat for each node (see the sketch below).
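A minimal sketch of those steps on the command line; seed addresses and timing are assumptions, and each command runs on the node indicated in the comment:

    # New node: point cassandra.yaml at the existing cluster (same cluster_name,
    # a seed node from the old cloud) and start Cassandra so it bootstraps and
    # streams its share of the data.

    # Any node, before snapshots or topology changes: persist memtables to disk
    nodetool flush

    # One old node at a time: hand its token ranges off to the rest of the cluster
    nodetool decommission

    # Check ring/streaming state before moving on to the next node
    nodetool status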

Restore Cassandra snapshot (from 3-node-cluster) on developer or test cluster (1-node cluster)

We have set up a backup/restore procedure for our Cassandra production environment via snapshots. The snapshot files, schema and token ring information are copied to S3.
The production cluster is a 3-node cluster with a replication factor of 3.
For development and test, I would like to restore the snapshots from production into separate clusters. To save money and to keep maintenance easy, it would be nice to restore only the snapshot from one production node. Since we are using a replication factor of 3 in a 3-node cluster, each snapshot should contain all rows. Consistency is also not important for our use case.
Is it possible (and how) to restore only a single snapshot?
All of your data should exist on all 3 nodes, so copying the SSTables from any one node to your test cluster should be sufficient. Making sure there's a recent repair beforehand may be a good idea if you're worried about consistency.
First create the same schema on the test cluster. Then you can simply take a snapshot with nodetool snapshot -t cloneme. Once complete, copy all the SSTables from the folder that is created (cloneme) into the equivalent table folder on your test cluster. Then run nodetool refresh.
It gets much more complicated if you have a different topology (more nodes, different RF), but since you're going with "every node has all the data" it's pretty trivial.
Worth mentioning that OpsCenter has a feature to automate the copying of a backup to other clusters.
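Put together, the procedure described above looks roughly like this; the keyspace, table, and directory names are placeholders:

    # On one production node
    nodetool snapshot -t cloneme my_keyspace

    # Copy the snapshot contents into the matching table directory on the
    # single-node test cluster (the table directory UUIDs will differ):
    #   from .../data/my_keyspace/my_table-<uuid>/snapshots/cloneme/*
    #   to   .../data/my_keyspace/my_table-<uuid>/        (on the test node)

    # On the test node, pick up the copied SSTables without a restart
    nodetool refresh -- my_keyspace my_table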

Continuous Cassandra restore testing

I would like to set up a continuous Cassandra restore-testing process.
I have a 6-node production cluster and I have incremental backup enabled.
I would like to restore backups on a regular basis to a different server.
My question is: do I have to use 6 nodes or can I somehow restore backups from 6 nodes to a single server?
You can simply copy all SSTables from the 6 nodes to a single node in order to restore them. You'll end up with redundant data depending on your replication factor, but this shouldn't be a problem if you have the required disk space.
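As a rough sketch, assuming the schema already exists on the test node and using placeholder hostnames and paths:

    # Stage each node's SSTables separately so identically named files from
    # different nodes don't overwrite each other
    for n in node1 node2 node3 node4 node5 node6; do
        rsync -av "$n":/var/lib/cassandra/data/my_keyspace/my_table-*/ \
              /restore/"$n"/my_keyspace/my_table/
    done

    # Then either stream them in with sstableloader (pointed at each staged
    # <keyspace>/<table> directory), or copy the files into the live table
    # directory (renaming any clashing SSTable generations) and run:
    #   nodetool refresh -- my_keyspace my_table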

Priam backup automatic restore

I have a Cassandra cluster managed by Priam, with 3 nodes. I use ephemeral disks to store my Cassandra data, so when I start 1 node, the Cassandra data dir is empty.
I have Priam properly configured and I can see that backups are saved in Amazon S3. Suppose a node goes down and then I start another node. Will Priam know how to automatically restore the backup from S3 when the node comes up again? The Cassandra data dir will start out empty, so I am assuming Priam would give the new node the same token as the old one and restore the data... Right?
Yes. I have been running standalone Cassandra on EC2, small Cassandra clusters on Mesos on EC2, and larger DataStax Enterprise clusters (with Cassandra) on EC2.
I have been using the Priam 3.x branch.
On restore, it calculates the initial_token, updates the cassandra.yaml file, restores the snapshot and incremental backup files, and restarts Cassandra.
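As a small sanity check after such a restore (file locations are assumptions; Priam itself performs the steps above):

    # The replacement node should have picked up the old node's token
    grep '^initial_token' /etc/cassandra/conf/cassandra.yaml

    # Confirm the node rejoined the ring with the expected token and load
    nodetool ring | grep "$(hostname -i)"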
According to Priam/Netflix conventions, if you have a 3 node cluster with Cassandra, your nodes should be named some_thing-other-things. Each node should be a part of an Auto-scaling group called some_thing. Each node should also use a Security Group named some_thing.
Create a 3 node dev cluster and test your backups and restores with data that you can easily recreate, that you don't care about too much. Get used to managing the Auto-scaling groups and Priam. Then, try it on test clusters with data that you care about.
