Migrate data from one Cassandra cluster to another

Hi, I want to migrate data from my Cassandra cluster to another Cassandra cluster. I have seen many posts suggesting various methods, but they are not very clear or have limitations. The methods I have seen are as follows:
Using COPY TO and COPY FROM commands: This is easy to use but seems to have a limitation on the number of rows it can copy.
Using SSTABLELOADER: Most articles suggest using sstableloader to move data from one cluster to another, but I did not find clear details on the steps to create SSTables (is it possible to use some nodetool command, or does it require a Java application? Are they created per node or per cluster? Once created, how do I move them from one cluster to another?) or on creating snapshots, which is tedious in that they are created per node and have to be transferred to the other cluster. I have also seen answers suggesting parallel SSH to create a snapshot for the whole cluster, but did not find any example of this either.
Any help would be appreciated.

It's really a question that requires more information to provide a definitive answer. For example, do you need to keep metadata such as WriteTime and TTLs on the data, or not? Does the destination cluster have the same topology (number of nodes, token allocation, etc.)?
Basically, you have the following options:
Use sstableloader - a tool shipped with Cassandra itself that is used for restoring from backups, etc. To perform data migration you need to create a snapshot of the table to load (using nodetool snapshot) and run sstableloader on that snapshot (a minimal command sketch follows this list). The main advantage is that it keeps the metadata (TTL/WriteTime). The main disadvantage is that you need to take the snapshot and run the load on every node of the source cluster, and you need exactly the same schema and partitioner in the destination cluster;
You can use a backup/restore tool, such as Medusa, that basically automates taking the snapshot and loading the data;
You can use Apache Spark to copy data from one table to another using the Spark Cassandra Connector, for example as described in this blog post - just read the table from one cluster and write to a table in the other cluster. This works fine for simple copy operations and gives you the option to transform the data if necessary, but it becomes more complex if you need to preserve metadata. Plus it needs Spark;
Use DataStax Bulk Loader (DSBulk) to export data to files on disk and load them into the other cluster. In contrast to cqlsh's COPY command, it's heavily optimized for loading/unloading large amounts of data. It works with Cassandra 2.1+ and most DSE versions (except ancient ones).
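For the snapshot + sstableloader route, a minimal per-node command sketch could look like the following (keyspace, table, snapshot, and host names are placeholders):
# Run on every node of the source cluster (parallel-ssh/pssh can fan this out):
nodetool snapshot -t migrate_snap my_keyspace
# sstableloader infers keyspace/table from the last two path components,
# so stage the snapshot files under a matching directory first:
mkdir -p /tmp/migrate/my_keyspace/my_table
cp /var/lib/cassandra/data/my_keyspace/my_table-*/snapshots/migrate_snap/* /tmp/migrate/my_keyspace/my_table/
# Stream to the destination cluster (the schema must already exist there):
sstableloader -d target-node-1,target-node-2 /tmp/migrate/my_keyspace/my_table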

If you are able to set up the target cluster with exactly the same topology as the source cluster, the fastest way may be to simply copy the data files from the source to the target cluster, since this avoids the overhead of processing the data to redistribute it to different nodes. In order for this to work, your target cluster must have the same number of nodes, the same rack configuration, and even the same tokens assigned to each node.
To get the tokens for a source node, you can run nodetool info -T | grep Token | awk '{print $3}' | tr '\n' , | sed 's/,$/\n/'. You can then copy the comma-separated list of tokens from the output and paste it into the initial_token setting in your target node's cassandra.yaml. Once you start the node, check its tokens using nodetool info -T to verify that it has the correct tokens. Repeat these steps for each node in the target cluster.
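For example, the token-related part of a target node's cassandra.yaml might look like this (the token values shown are illustrative; paste the full comma-separated list captured from the matching source node):
# cassandra.yaml on the matching target node
num_tokens: 256        # set to the number of tokens listed in initial_token (same as the source node)
initial_token: -9181802261127029607,-3074457345618258603,3074457345618258602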
Once you have all of your target nodes set up with exactly the same tokens, DC, and racks as the source cluster, take a snapshot of the desired tables on the source cluster and copy the snapshots to the corresponding node's data directories on the target cluster. DataStax OpsCenter can automate the process of backing up and restoring data and will use direct copying for clusters with the same topology. It appears that medusa can do this too though I have not used this tool before.

Related

How do I replicate a local Cassandra node to a remote node in another Cassandra cluster?

I need to replicate a local node that uses SimpleStrategy to a remote node in another Cassandra database. Does anyone have any idea where I should begin?
The main complexity here, if you're writing data into both clusters, is how to avoid overwriting data that has changed in the cloud more recently than in your local setup. There are several possibilities to do that:
If the structure of the tables is the same (including the names of the keyspaces, if user-defined types are used), then you can just copy the SSTables from your local machine to the cloud and use sstableloader to replay them - in this case Cassandra will obey the actual writetime and won't overwrite data that changed later. Also, if you're doing deletes from tables, you need to copy the SSTables before the tombstones expire. You don't have to copy all SSTables every time, just the files that have changed since the last upload. But you always need to copy SSTables from all nodes from which you're uploading.
If the structure isn't the same, then you can look at using either DSBulk or the Spark Cassandra Connector. In both cases you'll need to export the data with its writetime as well, and then load it with that timestamp. Please note that in both cases, if different columns have different writetimes, you will need to load that data separately, because Cassandra allows only one timestamp to be specified when updating/inserting data.
In the case of DSBulk you can follow example 19.4 for exporting data from this blog post, and example 11.3 for loading (from another blog post). So this may require some shell scripting. Plus you'll need disk space to keep the exported data (though you can use compression).
In the case of the Spark Cassandra Connector you can export data without intermediate storage if both clusters are accessible from Spark, but you'll need to write some Spark code that reads the data using the RDD or DataFrame APIs (a minimal DataFrame sketch follows).
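As a rough illustration of the Spark route, here is a plain cluster-to-cluster copy that does not carry writetime/TTL; host, keyspace, and table names are placeholders, and it assumes the Spark Cassandra Connector is on the classpath:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("cassandra-copy").getOrCreate()

// read the table from the local/source cluster
val src = spark.read
  .format("org.apache.spark.sql.cassandra")
  .option("spark.cassandra.connection.host", "local-node")
  .options(Map("keyspace" -> "my_ks", "table" -> "my_table"))
  .load()

// write it to the cloud/destination cluster
src.write
  .format("org.apache.spark.sql.cassandra")
  .option("spark.cassandra.connection.host", "cloud-node")
  .options(Map("keyspace" -> "my_ks", "table" -> "my_table"))
  .mode("append")
  .save()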

What is the best way to change the partitioner in Cassandra?

Currently we are using RandomPartitioner and we want to switch to Murmur3Partitioner. I know we can achieve this by using sstable2json and then json2sstable to convert our SSTables manually, and then use sstableloader; alternatively, we could create a new cluster with Murmur3 and write an application to pull all the data from the old cluster and write it to the new cluster.
Is there any other, easier way to achieve this?
There is no easy way; it's a pretty massive change, so you might want to check whether it's absolutely necessary (do some benchmarks - the difference is likely undetectable). It's more the kind of change to make if you're switching to a new cluster anyway.
To do it live: create a new cluster that uses Murmur3 and write to both clusters (a sketch of the dual writes follows). In the background, read and copy data to the new cluster while the writes are being duplicated. Once the background job is complete, flip reads from the old cluster to the new cluster, and then you can decommission the old cluster.
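A bare-bones sketch of the dual-write part, using the Java driver from Scala; contact points, datacenter, keyspace, and table names are all placeholders, and it assumes both clusters speak a protocol version the driver supports:
import com.datastax.oss.driver.api.core.CqlSession
import java.net.InetSocketAddress

// one session per cluster
val oldCluster = CqlSession.builder()
  .addContactPoint(new InetSocketAddress("old-node-1", 9042))
  .withLocalDatacenter("dc1")
  .build()
val newCluster = CqlSession.builder()
  .addContactPoint(new InetSocketAddress("new-node-1", 9042))
  .withLocalDatacenter("dc1")
  .build()

val oldInsert = oldCluster.prepare("INSERT INTO my_ks.my_table (id, value) VALUES (?, ?)")
val newInsert = newCluster.prepare("INSERT INTO my_ks.my_table (id, value) VALUES (?, ?)")

// every application write goes to both clusters while the migration runs
def writeBoth(id: java.util.UUID, value: String): Unit = {
  oldCluster.execute(oldInsert.bind(id, value))
  newCluster.execute(newInsert.bind(id, value))
}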
Offline: sstable2json -> json2sstable is a pretty inefficient mechanism. It will be a lot faster if you use an SSTable reader together with an SSTable writer (i.e. edit SSTableExport in the Cassandra code to write a new SSTable instead of dumping JSON output). If you have a smaller dataset, the cqlsh COPY command may be viable.
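For the smaller-dataset case, the cqlsh commands are roughly as follows (keyspace, table, and file names are examples):
-- in cqlsh connected to the old (RandomPartitioner) cluster
COPY my_ks.my_table TO '/tmp/my_table.csv' WITH HEADER = true;
-- in cqlsh connected to the new (Murmur3Partitioner) cluster
COPY my_ks.my_table FROM '/tmp/my_table.csv' WITH HEADER = true;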

Cassandra Loading Options

I have deployed a 9-node DataStax cluster in Google Cloud. I am new to Cassandra and not sure how people generally push data into Cassandra.
My requirement is to read the data from flat files and RDBMS tables and load it into Cassandra, which is deployed in Google Cloud.
These are the options I see.
1. Use Spark and Kafka
2. SSTables
3. COPY command
4. Java Batch
5. Dataflow (Google product)
Are there any other options, and which one is best?
Thanks,
For flat files, the two most effective options are:
Use Spark - it will load data in parallel, but requires some coding.
Use DSBulk for batch loading of data from the command line (see the sketch below). It supports loading from CSV and JSON and is very effective. The DataStax Academy blog just started a series of blog posts on DSBulk, and the first post will give you enough information to get started. Also, if you have big files, consider splitting them into smaller ones, as that allows DSBulk to perform a parallel load using all available threads.
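A minimal DSBulk invocation for a CSV file might look like this (host, keyspace, table, and path are placeholders; -url can also point at a directory of split files):
dsbulk load -h 10.0.0.1 -k my_keyspace -t my_table -url /data/exports/my_table.csv -header true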
For loading data from an RDBMS, it depends on what you want to do - load the data once, or keep updating it as it changes in the DB. For the first option you can use Spark with the JDBC source (although it has some limitations too) and then save the data into DSE (a rough sketch follows). For the second, you may need to use something like Debezium, which supports streaming change data from some databases into Kafka. Then from Kafka you can use the DataStax Kafka Connector to submit data into DSE.
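For the one-off RDBMS load, a rough Spark sketch could look like this (JDBC URL, credentials, keyspace, and table names are placeholders; it assumes the Spark Cassandra Connector and the JDBC driver are on the classpath and spark.cassandra.connection.host is set in the Spark config):
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("rdbms-to-dse").getOrCreate()

// read the source table over JDBC
val rdbms = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://db-host:5432/mydb")
  .option("dbtable", "public.orders")
  .option("user", "etl")
  .option("password", sys.env("DB_PASSWORD"))
  .load()

// write it into the corresponding Cassandra/DSE table
rdbms.write
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "my_ks", "table" -> "orders"))
  .mode("append")
  .save()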
cqlsh's COPY command isn't as effective or flexible as DSBulk, so I wouldn't recommend using it.
And never use CQL BATCH for data loading unless you understand how it works - it's very different from the RDBMS world, and if used incorrectly it will actually make loading less effective than executing separate statements asynchronously (DSBulk uses batches under the hood, but that's a different story). A sketch of the asynchronous approach follows.
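To illustrate what "separate statements asynchronously" means in practice, here is a rough Scala sketch using the Java driver; the table, columns, and generated rows are made up, and real code would also throttle the number of in-flight requests:
import com.datastax.oss.driver.api.core.CqlSession

val session = CqlSession.builder().build()   // defaults to 127.0.0.1:9042
val insert  = session.prepare("INSERT INTO my_ks.my_table (id, value) VALUES (?, ?)")

// sample rows standing in for parsed flat-file records
val rows = (1 to 1000).map(i => (java.util.UUID.randomUUID(), s"value-$i"))

// fire individual asynchronous inserts instead of batching unrelated partitions
val futures = rows.map { case (id, value) => session.executeAsync(insert.bind(id, value)) }
futures.foreach(_.toCompletableFuture.get())  // wait for all writes to finish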

Cassandra: Migrate keyspace data from a multi-node cluster to a single-node cluster

I have a keyspace in a multi-node cluster in a QA environment. I want to copy that keyspace to my local single-node cluster. Is there any direct way to do this? I can't afford to write code such as an SSTableLoader implementation at this point in time. Please suggest the quickest way.
Make sure you have plenty of free disk space on your new node and that you've properly set the replication factor and consistency levels in your tests/build for your new, single-node "cluster".
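For example, once the keyspace exists on the new node, dropping its replication factor to 1 might look like this (keyspace name is illustrative):
ALTER KEYSPACE my_keyspace WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};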
First, restore the exact schema from the old cluster to your new node. After that the data can be loaded in two ways:
1.) Execute the "sstableloader" utility on every node in your old cluster and point it at your new node. sstableloader is token aware, but in your case it will end up shipping all data to your new, single node cluster.
sstableloader -d NewNode /Path/To/OldCluster/SStables
2.) Snapshot the keyspace and copy the raw sstable files from the snapshot folders of each table in your old cluster to your new node. Once they're all there, copy the files to their corresponding table directory and run "nodetool refresh."
# Rinse and repeat for all tables
nodetool snapshot -t MySnapshot
cd /Data/keyspace/table-UUID/snapshots/MySnapshot/
rsync -avP ./*.db User@NewNode:/NewData/Keyspace/table-UUID
...
# when finished, exec the following for all tables in your new node
nodetool refresh keyspace table
Option #1 is probably best because it will stream the data and compact naturally on the new node. It's also less manual work. Option #2 is good, quick, and dirty if you don't have a direct line from one cluster to the other. You probably won't notice much difference since it's probably a relatively small keyspace for QA.

Changing Cassandra partitioner type

Question about changing partitioner types: I want to use sstableloader to copy data from an old cluster to a newer cluster, but the old cluster is using RandomPartitioner, whereas the new one uses Murmur3Partitioner. You might ask why not use the COPY command to export the data to CSV and import it again? Well, we have huge data sets, and the COPY command would not work (all the other nodes' data would aggregate onto one machine).
Is it possible to switch the new cluster's partitioner to RandomPartitioner, replicate the data using sstableloader, and then switch back? (I tried switching, but Cassandra won't restart because of it...)
No, you can't change the partitioner. That would require Cassandra to redistribute all the data and is not supported.
You could use sstable2json (with the old yaml) and then json2sstable (with the new yaml) to convert your SSTables manually. Then you can use sstableloader.
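Roughly, the conversion for a single table could look like this (old-style Cassandra 2.x tooling; file names and paths are illustrative):
# run with the OLD cluster's cassandra.yaml (RandomPartitioner)
sstable2json /old/data/my_ks/my_table/my_ks-my_table-jb-1-Data.db > my_table.json
# run with the NEW cluster's cassandra.yaml (Murmur3Partitioner)
json2sstable -K my_ks -c my_table my_table.json /staging/my_ks/my_table/my_ks-my_table-jb-1-Data.db
# then stream into the new cluster
sstableloader -d new-node-1 /staging/my_ks/my_table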
