Migrate data from one Riak cluster to another

I have a situation where we need to migrate data from one Riak cluster to another and then remove the old cluster. The ring size will be the same, and even the region will be the same. We need to do this to upgrade the instances to AL2. Is there a clean approach to doing this in production, without losing realtime data?

The answer to this may be tied to your version of Riak KV. If you have the open source version of Riak KV 2.2.3 or earlier, this will require an in-situ upgrade to Riak KV 2.2.6 before progressing. See https://www.tiot.jp/riak-docs/riak/kv/2.2.6/setup/upgrading/version/ with packages at https://files.tiot.jp/riak/kv/2.2/2.2.6/
For the Enterprise Edition of Riak KV 2.2.3 or earlier, or the open source edition of Riak KV 2.2.6 or higher, you can use multi-datacenter replication (MDC).
Use both of these at the same time for proper replication and to prevent data loss:
fullsync replication will copy across all stored data on its first run and then any missing data on subsequent runs.
realtime replication will replicate all transactions in almost realtime.
If you then set this up as bidirectional replication (get each cluster to replicate to the other for both fullsync and realtime), you will be able to seamlessly switch your production environment from one cluster to the other without any issues. Once you are happy that everything is working as expected, you can kill the old cluster.
Please see the documentation for replication at https://www.tiot.jp/riak-docs/riak/kv/2.2.6/using/cluster-operations/v3-multi-datacenter/
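As a rough illustration, the v3 replication setup boils down to a handful of riak-repl commands; the cluster names, hostname, and port below are assumptions for the sketch:

    # Name each cluster (run once per cluster, on any node)
    riak-repl clustername old_cluster        # on the old cluster
    riak-repl clustername new_cluster        # on the new cluster
    # Connect the old cluster to the new one (9080 is the default cluster manager port)
    riak-repl connect new-cluster.example.com:9080
    # Enable and start realtime replication towards the new cluster
    riak-repl realtime enable new_cluster
    riak-repl realtime start new_cluster
    # Enable fullsync and kick off the first full copy
    riak-repl fullsync enable new_cluster
    riak-repl fullsync start new_cluster
    # Repeat connect/realtime/fullsync in the other direction for bidirectional replication
    riak-repl status                         # monitor connections and sync progress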

Related

Local Persistent Volume for Cassandra Hosted in Kubernetes

We are trying to deploy Cassandra within Kubernetes. Thinking about the storage and how to make it as fast as possible at each datacenter, without the expense of implementing network-attached storage at each datacenter, it would seem reasonable to make use of a Local Persistent Volume at each datacenter and leverage Cassandra to handle the cross-datacenter replication.
Am I thinking about this problem correctly? Is there a better way to implement Cassandra in each of our datacenters so that our application runs its fastest by connecting to a more local datacenter?
@Simon Fontana Oscarsson is right.
I just want to add a bit more detail about that feature for people who find this question, because it is a common case.
Local Persistent Volumes have been available only since Kubernetes 1.7 (alpha) and 1.10 (beta).
They require pre-provisioned storage on each node (for example, an LVM volume), which must be set up before use.
You can find examples of the configuration here.
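To make that concrete, here is a minimal sketch of a StorageClass and one local PersistentVolume; the names, capacity, mount path, and node hostname are all illustrative:

    # Local PVs use no dynamic provisioner; binding waits for a consuming pod
    cat <<'EOF' | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-storage
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: cassandra-data-node1          # hypothetical name
    spec:
      capacity:
        storage: 500Gi
      accessModes: ["ReadWriteOnce"]
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /mnt/disks/cassandra        # pre-provisioned disk on the node
      nodeAffinity:                       # pins the PV to the node that owns the disk
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["k8s-node-1"]      # hypothetical node name
    EOF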

Cassandra upgrade from 2.0.x to 2.1.x or 3.0.x

I've searched for previous versions of this question, but none seem to fit my case. I have an existing Cassandra cluster running 2.0.x. I've been allocated new VMs, so I do NOT want to upgrade my existing Cassandra nodes - rather I want to migrate to a) new VMs and b) a more current version of Cassandra.
1. I know for in-place upgrades, I would upgrade to the latest 2.0.x, then to the latest 2.1.x. AFAIK, there's no SSTable inconsistency here. If I go this route by adding new nodes, I assume I would follow the DataStax instructions for adding new nodes and decommissioning old nodes?
2. Given the above, is it possible to move from 2.0.x to 3.0.x? I know the SSTable format is different; however, if I'm adding new nodes (rather than re-using SSTables on disk), does this matter?
It seems to me that #2 has to work; otherwise, it would imply that any upgrade requiring an SSTable upgrade needs all nodes to be taken offline simultaneously, since at some point there would inevitably be mixed 2.x.x and 3.0.x versions running in the same cluster.
Am I completely wrong? Does anyone have any experience doing this?
Yes, it is possible to migrate data to a different environment (the new VMs with the updated Cassandra) using sstableloader, but you will need C* 3.0.5 or above, as that version added support for uploading SSTables from previous versions.
Once the process is complete, it is recommended to execute nodetool upgradesstables to ensure that there are no incompatibilities in the data, followed by a nodetool cleanup.
Regarding your comment "...it implies that any upgrade requiring SSTable upgrades would require all nodes to be taken offline simultaneously...": that is not true. Doing the upgrade one node at a time will create a mixed cluster with nodes on the two versions, as you mentioned, which is not optimal but will allow you to avoid any downtime in production. (Note that the impact of this operation will depend on the consistency level used by your application.)
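As a rough sketch of that flow (the keyspace/table names, paths, and hostname are assumptions):

    # On a node of the OLD cluster: snapshot the table to get consistent SSTables
    nodetool snapshot -t migrate my_keyspace
    # sstableloader expects a <keyspace>/<table> directory layout, so stage the
    # snapshot accordingly (paths are illustrative):
    mkdir -p /tmp/load/my_keyspace/my_table
    cp /var/lib/cassandra/data/my_keyspace/my_table-*/snapshots/migrate/* \
       /tmp/load/my_keyspace/my_table/
    # Stream into the new cluster using the 3.0.5+ sstableloader
    sstableloader -d new-node1.example.com /tmp/load/my_keyspace/my_table
    # Then, on each node of the NEW cluster:
    nodetool upgradesstables
    nodetool cleanup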
Don't worry about the migration. You can simply migrate your Cassandra 2.0.x cluster to Cassandra 3.0.x, but it's better if you migrate your cluster from Cassandra 2.0.x to the latest Cassandra 2.x.x first, and then to Cassandra 3.0.x. You need to follow these steps:
Back up your data
Uninstall the present version
Install the version you want to upgrade to
Restore your data
As you are doing a migration, you always need to be careful with your data. For the data backup and restore you can follow two approaches:
Create snapshots of your SSTables; then, after installing the new version of Cassandra, place the files in the data location and run sstableloader.
Back up your schema to a .cql file and export all the tables to .csv; then, after installing the new version of Cassandra, source your schema from the .cql file and import every table from its .csv file.
If you are fully confident in how you will complete the migration, you can write a bash script to handle the backup and restore steps.
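Such a script might look like the sketch below, which covers the second approach for a single table (the keyspace/table names and file paths are assumptions):

    #!/bin/bash
    # Sketch: schema + CSV backup and restore for one table (illustrative names)
    KS=my_keyspace
    TBL=my_table
    # --- Backup, on the old cluster ---
    cqlsh -e "DESC KEYSPACE $KS" > "$KS.cql"                       # schema to .cql
    cqlsh -e "COPY $KS.$TBL TO '$TBL.csv' WITH HEADER = true"      # data to .csv
    # --- Restore, after installing the new Cassandra ---
    cqlsh -f "$KS.cql"                                             # recreate schema
    cqlsh -e "COPY $KS.$TBL FROM '$TBL.csv' WITH HEADER = true"    # reload data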

Is it possible to recover a Cassandra node without a snapshot?

Offsite backups for Cassandra seem like a challenging thing. You basically have to make yet another copy of ALL your data, including the copies that exist due to the replication factor. Snapshots make backups easy when you don't mind storing them on the same disk that your node already uses. I'm curious: in the event of a catastrophic failure of this disk, is it possible to recover the node using the nodes that the data was replicated to?
Yes, you can restore data on a crashed node using the procedure in the documentation: Replacing a dead node or dead seed node. That page is for Cassandra 3.x; please pick your Cassandra version from the drop-down menu at the top of the page.
But please note that you still need to do backups if your data is valuable. If you are using AWS, you can use this project to back up Cassandra to S3 storage.
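The heart of that replacement procedure is a single JVM flag on the new node; a minimal sketch, assuming Cassandra 3.x packaging and an example IP for the dead node:

    # On the fresh replacement node, BEFORE its first start
    # (10.0.0.12 stands in for the dead node's IP address):
    echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=10.0.0.12"' \
        | sudo tee -a /etc/cassandra/cassandra-env.sh
    sudo service cassandra start
    nodetool status   # the node streams the dead node's data from the surviving replicas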
If you are looking for offsite or off-host backups, you can also look at OpsCenter from DataStax or Talena software (my company). Both give you the ability to back up your database locally or to S3. As you might expect, you also have the ability to restore data in case of hardware failures, user errors, or logical corruptions, which the replicas will not protect you against.
Yes, it is possible. Just run "nodetool repair" in a terminal on the node with the missing data. It can take a lot of time. I would also recommend running a repair on each node every month to keep your data fully replicated, because Cassandra does not repair data automatically (for example, after a node goes down).
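For the routine monthly repair, something like the following (run on each node in turn) avoids repairing the same ranges repeatedly:

    nodetool repair -pr   # repair only this node's primary token ranges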

Regarding upgrade from 2.0.3 to 2.0.7

I am currently planning an upgrade to Cassandra 2.0.7. My base version is 2.0.3. I have not done an upgrade so far and hence want to be absolutely sure about what I am doing. Can someone explain whether anything needs to be done apart from this:
Do a nodetool drain to stop all writes to the particular node.
Stop the Cassandra node (I have an 8-node, 2-datacenter network topology; I am bringing down one node in DC1).
Change the cassandra.yaml accordingly in the new binary tarball.
Make the required changes for the new node (I am using GossipingPropertyFileSnitch, so making changes for that).
Start the new Cassandra binary (2.0.7).
Questions striking me the most:
1. Do I have to copy the data from 2.0.3 to 2.0.7?
2. Even if it's a rolling upgrade, I think the steps above will do (except moving from one version to another). Is my assumption right?
3. I am going to do this operation on a running application. I plan to keep the application running while doing this, as I have enough replicas at LOCAL_QUORUM to satisfy reads and writes. Does this idea have any disadvantages? I love Cassandra for this kind of operation, but I would like to know if there are any potential problems.
4. I will have the existing 2.0.3 on the machine while doing this. If there is a problem with 2.0.7, I can just start 2.0.3 again, right? I just want to know whether there would be any data conflicts with the other nodes in the cluster, or whether having a snapshot to recover the data is the best option.
5. Apart from this, is there anything else I should bear in mind?
1. Do I have to copy the data from 2.0.3 to 2.0.7? 2. Even if it's a rolling upgrade, I think the steps above will do (except moving from one version to another). Is my assumption right?
If you just upgrade the binaries, you can leave all of the data in place and the new version will use it automatically.
3. I am going to do this operation on a running application. I plan to keep the application running while doing this, as I have enough replicas at LOCAL_QUORUM to satisfy reads and writes. Does this idea have any disadvantages? I love Cassandra for this kind of operation, but I would like to know if there are any potential problems.
Normal read and write operations are fine. While you are temporarily running a mixed-version cluster, it's best to avoid doing anything that involves streaming (repairs) or topology changes (bootstrapping or decommissioning nodes). They might work, but they're not officially supported and you're more likely to have problems.
4. I will have the existing 2.0.3 on the machine while doing this. If there is a problem with 2.0.7, I can just start 2.0.3 again, right? I just want to know whether there would be any data conflicts with the other nodes in the cluster, or whether having a snapshot to recover the data is the best option.
You want to have a snapshot to recover from. Newer versions of Cassandra may use new SSTable or commitlog formats which the older version will not be able to read.
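Putting the answers together, one per-node pass of the rolling upgrade might look like this sketch (service names and paths are assumptions):

    nodetool snapshot -t pre-2.0.7   # restore point in case you need to roll back
    nodetool drain                   # flush memtables; node stops accepting writes
    sudo service cassandra stop
    # Install the 2.0.7 binaries and merge your cassandra.yaml/snitch changes, then:
    sudo service cassandra start
    nodetool version                 # confirm the node now reports 2.0.7
    nodetool status                  # wait for UN (Up/Normal) before moving to the next node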

Migrating Cassandra from 1.1.2 to 1.2.6

My current Cassandra version is 1.1.2, implemented as a single-node cluster. I would like to upgrade it to 1.2.6 with multiple nodes in the ring. Is it proper to migrate directly to 1.2.6, or should I follow a version-by-version migration?
I found the upgrade steps at this link:
http://fossies.org/linux/misc/apache-cassandra-1.2.6-bin.tar.gz:a/apache-cassandra-1.2.6/NEWS.txt.
There are 9 other releases between these two versions.
I migrated a two-node cluster from 1.1.6 to 1.2.6 without problems and without going version by version. Anyway, you should take a closer look at:
http://www.datastax.com/documentation/cassandra/1.2/index.html?pagename=docs&version=1.2&file=index#upgrade/upgradeC_c.html#concept_ds_smb_nyr_ck
Because there are a lot of new features in version 1.2, like the new partitioner, you may need to change some configuration for your cluster.
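As an illustration of the kind of settings to review (the values and paths below are examples, not recommendations):

    # Check partitioner and token settings before upgrading:
    grep -E 'partitioner|num_tokens|initial_token' /etc/cassandra/cassandra.yaml
    # partitioner: 1.2 defaults NEW clusters to Murmur3Partitioner, but you must
    #   keep whatever partitioner your existing cluster already uses.
    # num_tokens: enables virtual nodes (new in 1.2); when upgrading an existing
    #   ring, keep your initial_token setting rather than switching to vnodes mid-upgrade.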
You may hop directly to C* 1.2.6.
We migrated our 4-node cluster from C* 1.0.9 to C* 1.2.8 recently without any issues. This was a rolling upgrade, i.e. upgrade one node at a time, and after each node's upgrade allow the cluster to stabilize (how long depends upon the traffic during the upgrade).
These are the steps that we followed:
Perform the steps below on each node (a command-level sketch follows the list):
Run nodetool disablegossip and nodetool disablethrift, so that this node is seen as DOWN by the other nodes.
Flush/drain the memtables and run a compaction to merge SSTables.
Take a snapshot and enable incremental backups.
This stops all the other nodes/clients from writing to this node, and since the memtables are flushed to disk, startup times are fast as it need not walk through the commit logs.
Stop Cassandra (though this node is down, the cluster is available for writes/reads, so zero downtime).
Upgrade the SSTables to the new storage format using sstableupgrade.
Install/untar Cassandra 1.2.8 in the new location.
Move the upgraded SSTables to the appropriate location.
Merge the cassandra.yaml from the previous version and the current version by a manual diff (you need to work through the differences).
Start Cassandra.
Watch the startup messages to ensure the node comes up without difficulty and is shown in the ring with mixed 1.0.x/1.2.x versions.
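Roughly, that sequence maps onto these commands per node (install paths and keyspace/table names are illustrative):

    nodetool disablegossip && nodetool disablethrift   # node appears DOWN to peers
    nodetool flush                                     # flush memtables to SSTables
    nodetool snapshot -t pre-upgrade                   # restore point (plus incremental backups)
    sudo service cassandra stop                        # replicas keep serving reads/writes
    # Untar apache-cassandra-1.2.8, diff/merge cassandra.yaml, then rewrite the
    # old SSTables into the new format with the bundled tool:
    /opt/apache-cassandra-1.2.8/tools/bin/sstableupgrade my_keyspace my_table
    sudo /opt/apache-cassandra-1.2.8/bin/cassandra     # start the upgraded node
    nodetool ring                                      # confirm it rejoins the mixed-version ring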
