What is the process to downgrade/roll back to a lower DataStax Cassandra version? - cassandra

I want to downgrade my DataStax Cassandra Enterprise version from 5.0.15 to 4.8.16, so I need to document the rollback process in case of an emergency.
Please help me list the process.

Before performing an upgrade it's always recommended to back up the existing data; if something goes wrong, you can then restore from snapshots. The precise steps will depend on how you performed the backup - via OpsCenter, or with nodetool snapshot.
See the DSE upgrade guide for additional information about upgrading.
P.S. The DataStax support knowledge base has a very good article on "manual" backup/restore - I recommend following it if you won't use OpsCenter for backups.
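A minimal sketch of the nodetool-based snapshot/restore, assuming placeholder keyspace, table, and path names (not taken from the question):

    # on every node, take a tagged snapshot before the upgrade
    nodetool snapshot -t pre_upgrade my_keyspace

    # snapshots are written under each table's data directory, e.g.
    #   /var/lib/cassandra/data/my_keyspace/<table>-<id>/snapshots/pre_upgrade/

    # to roll back: stop the node, replace the live sstables in the table's
    # data directory with the files from the snapshot, start the node, and
    # make the restored files visible:
    nodetool refresh my_keyspace my_table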

Related

Migrate Data from one Riak cluster to another

I have a situation where we need to migrate data from one Riak cluster to another and then remove the old cluster. The ring size will be the same, and even the region will be the same. We need to do this to upgrade the instances to AL2. Is there a clean approach to do so on Prod, without realtime data loss?
The answer to this may be tied to your version of Riak KV. If you have the open source version of Riak KV 2.2.3 or earlier, this will require an in-situ upgrade to Riak KV 2.2.6 before progressing. See https://www.tiot.jp/riak-docs/riak/kv/2.2.6/setup/upgrading/version/ with packages at https://files.tiot.jp/riak/kv/2.2/2.2.6/
For the Enterprise Edition of Riak KV 2.2.3 and earlier, or the open source edition of Riak KV 2.2.6 or higher, you can use multi-data centre replication (MDC).
Use both of these at the same time for proper replication and to prevent data loss:
fullsync replication will copy across all stored data on its first run and then any missing data on subsequent runs.
realtime replication will replicate all transactions in almost realtime.
If you then set this up as bidirectional replication (get each cluster to replicate to the other for both fullsync and realtime), then you will be able to seamlessly switch your production environment from one cluster to the other without any issues. Once you are happy everything is working as expected, you can kill the old cluster.
Please see the documentation for replication at https://www.tiot.jp/riak-docs/riak/kv/2.2.6/using/cluster-operations/v3-multi-datacenter/
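A rough sketch of the riak-repl commands behind this, with placeholder cluster names and addresses (check the linked documentation for the exact syntax on your version):

    # on the old cluster: name it and connect it to the new cluster
    riak-repl clustername old_cluster
    riak-repl connect 10.0.0.2:9080

    # enable and start realtime and fullsync replication towards the new cluster
    riak-repl realtime enable new_cluster
    riak-repl realtime start new_cluster
    riak-repl fullsync enable new_cluster
    riak-repl fullsync start new_cluster

    # repeat in the opposite direction from the new cluster for bidirectional replication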

Sstable upgrade required after scylla version upgrade?

I have checked the Scylla upgrade documentation and found the steps:
https://docs.scylladb.com/upgrade/upgrade-opensource/upgrade-guide-from-2.1-to-2.2/upgrade-guide-from-2.1-to-2.2-rpm/.
In Cassandra, after the binary upgrade we need to perform an sstable upgrade with the nodetool upgradesstables command, but the Scylla guide lists no such step. Is it required, or is the nodetool upgradesstables command not supported in Scylla? Please help.
There's no requirement to run any tool. The existing sstables will be naturally upgraded via compaction.
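If you prefer not to wait for normal compaction to rewrite the old-format sstables, one option (keyspace and table names here are placeholders) is to trigger a major compaction, which rewrites the sstables in the current format as a side effect:

    nodetool compact my_keyspace my_table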

How to Install and Use Cassandra Reaper for Apache Cassandra 2.2.X version

We are using Cassandra 2.2.x in production and currently we trigger the repairs manually on each node. We are planning to automate repairs using Cassandra Reaper, but I don't see much documentation on it. Can anyone please list the steps to configure Reaper in a multi-DC environment and use it?
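For context, the manual per-node repair being automated here typically looks something like the following (the -pr option and keyspace name are illustrative):

    # run on each node in turn; -pr repairs only that node's primary ranges
    nodetool repair -pr my_keyspace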

Cassandra upgrade from 2.0.x to 2.1.x or 3.0.x

I've searched for previous versions of this question, but none seem to fit my case. I have an existing Cassandra cluster running 2.0.x. I've been allocated new VMs, so I do NOT want to upgrade my existing Cassandra nodes - rather I want to migrate to a) new VMs and b) a more current version of Cassandra.
I know for in-place upgrades, I would upgrade to the latest 2.0.x, then to the latest 2.1.x. AFAIK, there's no SSTable inconsistency here. If I go this route via addition of new nodes, I assume I would follow the datastax instructions for adding new nodes/decommissioning old nodes?
Given the above, is it possible to move from 2.0.x to 3.0.x? I know the SSTable format is different; however, if I'm adding new nodes (rather than re-using SSTables on disk), does this matter?
It seems to me that the second option (moving to 3.0.x via new nodes) has to work - otherwise, it implies that any upgrade requiring SSTable upgrades would require all nodes to be taken offline simultaneously; otherwise, there would be mixed 2.x.x and 3.0.x versions running in the same cluster at some point.
Am I completely wrong? Does anyone have any experience doing this?
Yes, it is possible to migrate data to a different environment (the new VMs with the updated Cassandra) using sstableloader, but you will need C* 3.0.5 or above, as that version added support for loading sstables from previous versions.
Once the process is completed, it is recommended to execute nodetool upgradesstables to ensure that there are no incompatibilities in the data, followed by a nodetool cleanup.
Regarding your comment "... it implies that any upgrade requiring SSTable upgrades would require all nodes to be taken offline simultaneously ...": that is not true. Doing the upgrade one node at a time will create a mixed cluster with nodes on the two versions, as you mentioned, which is not optimal, but it allows you to avoid any downtime in production. (Note that the impact of this operation will depend on the consistency level used in your application.)
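A rough sketch of that flow, with placeholder hosts, paths, and keyspace/table names:

    # stream a table's sstables (e.g. from a snapshot of the old cluster) into the new cluster
    sstableloader -d new_node1,new_node2 /path/to/backup/my_keyspace/my_table

    # then, on each new node
    nodetool upgradesstables
    nodetool cleanup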
Don't worry about the migration. You can simply migrate your Cassandra 2.0.X cluster to Cassandra 3.0.X, but it's better if you first migrate your cluster from Cassandra 2.0.X to the latest Cassandra 2.X.X and then to Cassandra 3.0.X. You need to follow some steps:
Back up the data
Uninstall the present version
Install the version you want to upgrade to
Restore the data
As you are doing a migration, you always need to be careful with your data. For the data backup and restore you can follow two ways:
Create snapshots of your sstables, and after installing the new version of Cassandra, place the files in the data location and run sstableloader.
Back up your schemas to a .cql file and copy all the tables to .csv files; then, after installing the new version of Cassandra, source the schema from the .cql file and copy each table back in from its .csv file (see the cqlsh sketch below).
If you are fully confident in how you will complete the migration, you can write a bash script to automate the backup and restore steps.
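A minimal sketch of the schema/CSV route using cqlsh, with placeholder hosts, keyspace, and table names:

    # export from the old cluster
    cqlsh old_host -e "DESCRIBE KEYSPACE my_keyspace" > schema.cql
    cqlsh old_host -e "COPY my_keyspace.my_table TO 'my_table.csv' WITH HEADER = true"

    # import into the new cluster
    cqlsh new_host -f schema.cql
    cqlsh new_host -e "COPY my_keyspace.my_table FROM 'my_table.csv' WITH HEADER = true"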

DataStax: Will back up work if OpsCenter goes down

If I configure backups with OpsCenter will the agents continue to function, will the back up service still run, if OpsCenter goes down?
Or do I need to build redundancy/set up cron jobs to complete snapshots and incremental backups?
Backups will stop if you lose opscenterd. You may want to set up an HA OpsCenter if you need guarantees that your backups will happen during OpsCenter downtime:
https://docs.datastax.com/en/opscenter/5.2/opsc/configure/configFailover.html
Note that OpsCenter only provides node-level snapshots and does not give you a cluster-wide, consistent snapshot. This means you may lose data if a Cassandra node goes down during a backup window. Any change in the cluster topology during the backup window may also result in some data loss, so you should be careful to schedule them appropriately.
If you need your backups to be resilient across Cassandra node failures and topology changes, you may want to check out "DatosIO".
There are a number of commercial and open source solutions appearing in the market. Check out Priam and Talena if you are interested in Cassandra backup. They provide the capabilities you are referring to.
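If you do fall back to cron-driven snapshots, as the question suggests, a minimal per-node crontab sketch (schedule, tag, and keyspace are illustrative) could be:

    # nightly tagged snapshot at 02:00; % must be escaped inside crontab
    0 2 * * * /usr/bin/nodetool snapshot -t nightly_$(date +\%Y\%m\%d) my_keyspace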
