Cassandra migration

I have Cassandra 0.8.0 running with data on server 1, and a clean install of Cassandra 1.0.3 on server 2.
Is it possible to just copy some files from server 1 to server 2? Or do I have to write my own import/export code?
Both servers can be taken down, restarted, etc.

Why would you not upgrade server1? Upgrade details here (either way read this first):
http://svn.apache.org/viewvc/cassandra/branches/cassandra-1.0/NEWS.txt?view=markup
But if you do want to change machines, follow the procedures for 'nodetool snapshot' as detailed here:
http://wiki.apache.org/cassandra/Operations#Backing_up_data
Re-create the schema on the new node, then add the snapshots to the data directory (as described above), restart Cassandra, and then run nodetool scrub.
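A rough sketch of that flow, assuming the default data directory /var/lib/cassandra/data and a keyspace called MyKeyspace (names, the snapshot tag and paths are illustrative, and the exact directory layout differs between 0.8 and 1.0):
# on server 1: flush memtables to disk and snapshot the data files
nodetool flush
nodetool snapshot
# copy the snapshot contents for the keyspace to server 2's data directory
rsync -av /var/lib/cassandra/data/MyKeyspace/snapshots/<snapshot-tag>/ \
    server2:/var/lib/cassandra/data/MyKeyspace/
# on server 2: recreate the schema, start Cassandra, then rebuild the sstables
nodetool scrub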

Thanks zznate, it had to do with hardware.
Here are some links I found useful:
http://jonathanhui.com/cassandra-data-maintenance-backup-and-system-recovery
http://wiki.apache.org/cassandra/StorageConfiguration
http://www.memonic.com/user/pneff/folder/database/id/1bZvk
If it looks like nothing happened after migrating, make sure you create the column families on the new node using cassandra-cli.
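For example, with cassandra-cli against the new node (keyspace and column family names are placeholders and should match your old schema; the exact strategy_options syntax varies slightly between CLI versions):
cassandra-cli -h new-node-ip -p 9160
create keyspace MyKeyspace
    with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
    and strategy_options = {replication_factor:1};
use MyKeyspace;
create column family Users with comparator = UTF8Type;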

Related

Upgrade Cassandra 2.1.19 cluster to 3.11.1

I want to upgrade cassandra 2.1.19 cluster to 3.11.1 without downtime.
Will 3.11.1 nodes work together with 2.1.19 nodes at the same time?
The key point will be how you connect to your cluster. You will need to verify on a test system that everything still works from your application side when doing the switch.
I recommend a two-step process in this case: migrate from 2.1.19 to 3.0.x, one node at a time.
For every node, do the following (I said you need to test this before going to production, right?); see the command sketch after this list:
nodetool drain - wait for it to finish
stop Cassandra
back up your configs; the old ones won't work out of the box
remove the Cassandra package / tarball
read about the Java and other Cassandra 3.x requirements and ensure you meet them
add the repo and install the 3.0.x package or tarball
some packages start the node immediately - you may have to stop it again
create the new config files (diff or something similar will be your friend; read the docs about the new options) - you only need to do this once, as you should be able to reuse them on all the other nodes
start Cassandra (did I say to test this on a test system?) and wait until the node has joined the ring again (nodetool status)
upgrade your sstables with nodetool upgradesstables - almost always needed; don't skip this even if "something" works right now
this upgrade tends to be really slow - it's just a single thread rewriting all your data, so I/O will be a factor here
all up and running -> go ahead to the next node and repeat
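Not an authoritative recipe, but roughly what that per-node loop looks like on a Debian/Ubuntu package install (package and service names, repo setup and backup paths are illustrative; adapt to your environment):
nodetool drain                        # flush and stop accepting writes
sudo service cassandra stop
sudo cp -a /etc/cassandra /etc/cassandra.bak-2.1.19   # keep the old configs
sudo apt-get remove cassandra         # removes the package, leaves the data in place
# add the 3.0.x repo, verify Java and the other 3.x requirements, then:
sudo apt-get update && sudo apt-get install cassandra
sudo service cassandra stop           # some packages auto-start the node right away
diff /etc/cassandra.bak-2.1.19/cassandra.yaml /etc/cassandra/cassandra.yaml
# merge your old settings into the new cassandra.yaml, then:
sudo service cassandra start
nodetool status                       # wait for the node to come back as UN (Up/Normal)
nodetool upgradesstables              # slow, single-threaded rewrite of all sstables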
After that, upgrade 3.0.x to 3.11.x in the same fashion: add the new repo, configure for 3.11.x as for 3.0.x above, and so on. This time you can skip upgrading the sstables, as the format stays the same (but it won't harm if you do so).
Did I mention to do this on a test system first? One thing that will happen and may break things: older native protocol versions will be gone, as well as rpc/Thrift.
Hope I didn't miss something ;)

Cassandra upgrade from 2.0.x to 2.1.x or 3.0.x

I've searched for previous versions of this question, but none seem to fit my case. I have an existing Cassandra cluster running 2.0.x. I've been allocated new VMs, so I do NOT want to upgrade my existing Cassandra nodes - rather I want to migrate to a) new VMs and b) a more current version of Cassandra.
I know for in-place upgrades, I would upgrade to the latest 2.0.x, then to the latest 2.1.x. AFAIK, there's no SSTable inconsistency here. If I go this route via addition of new nodes, I assume I would follow the datastax instructions for adding new nodes/decommissioning old nodes?
Given the above, is it possible to move from 2.0.x to 3.0.x? I know the SSTable format is different; however, if I'm adding new nodes (rather than re-using SSTables on disk), does this matter?
It seems to me that #2 has to work - otherwise, it implies that any upgrade requiring SSTable upgrades would require all nodes to be taken offline simultaneously; otherwise, there would be mixed 2.x.x and 3.0.x versions running in the same cluster at some point.
Am I completely wrong? Does anyone have any experience doing this?
Yes, it is possible to migrate data to a different environment (the new VMs with the updated Cassandra) using sstableloader, but you will need C* 3.0.5 or above, as that version added support for uploading sstables from previous versions.
Once the process is completed, it is recommended to run nodetool upgradesstables to ensure that there are no incompatibilities in the data, followed by a nodetool cleanup.
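As a sketch of that (host, keyspace and table names are placeholders; the path has to point at a directory of sstables laid out as keyspace/table):
sstableloader -d new-node1,new-node2 /path/to/backup/my_keyspace/my_table
# then, on each new node once loading has finished:
nodetool upgradesstables
nodetool cleanup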
Regarding your comment "...it implies that any upgrade requiring SSTable upgrades would require all nodes to be taken offline simultaneously...": that is not true. Doing the upgrade one node at a time will create a mixed cluster with nodes on the two versions, as you mentioned, which is not optimal but allows you to avoid any downtime in production. (Note that the impact of this operation will depend on the consistency level used in your application.)
Don't worry about the migration. You can simply migrate your Cassandra 2.0.x cluster to Cassandra 3.0.x, but it's better to migrate your cluster from Cassandra 2.0.x to the latest Cassandra 2.x.x first and then to Cassandra 3.0.x. You need to follow these steps:
Backup data
Uninstall present version
Install the version you want to upgrade to
Restore data
As you are doing a migration, you always need to be careful with your data. For the backup and restore you can follow one of two approaches:
Create snapshots of your sstables; then, after installing the new version of Cassandra, place the files in the data location and run sstableloader.
Back up your schema to a .cql file and export all the tables to .csv files; then, after installing the new version of Cassandra, source your schema from the .cql file and import all the tables from the .csv files.
Once you are sure how you will complete the migration, you can write a bash script to automate the backup and restore steps.
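A minimal sketch of the second approach in cqlsh (keyspace and table names are made up; COPY works well for small to medium tables but is slow for very large ones):
-- in cqlsh on the old cluster:
DESCRIBE KEYSPACE my_keyspace;            -- save the output as my_keyspace.cql
COPY my_keyspace.users TO 'users.csv';
-- in cqlsh on the new cluster:
SOURCE 'my_keyspace.cql';
COPY my_keyspace.users FROM 'users.csv';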

Cassandra: where to modify opscenter agent for a newly added node to existing cluster

I have a single-node Cassandra cluster on EC2 (launched from a DataStax AMI) and I manually added a new node, also backed by the same DataStax AMI, after deleting the data directory and modifying cassandra.yaml. I can see two nodes in the Nodes section of OpsCenter, but the OpsCenter agent is not installed on the new node (1 of 2 agents are connected). It looks like the new node has its own OpsCenter installation and that somehow conflicts with the OpsCenter installation on the first node? I guess I have to fix some configuration file of the OpsCenter agent on the new node so that it points to the OpsCenter installation on the first node, but I can't find what to modify.
Thanks!
It is the stomp_interface setting in /var/lib/datastax-agent/conf/address.yaml.
I had to manually put stomp_interface into the configuration file. Also, I noticed that the process was looking for /etc/datastax-agent/address.yaml and never looked for /var/lib/datastax-agent/conf/address.yaml
Also, local_interface was not necessary to get things to work for me. YMMV.
I'm not sure where this gets set, or if this changed between agent versions at some point in time. FWIW, I installed both opscenter and the agents via packages.
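For reference, a minimal address.yaml along those lines (the IP is a placeholder for the machine running opscenterd):
# /var/lib/datastax-agent/conf/address.yaml (or /etc/datastax-agent/address.yaml)
stomp_interface: "10.0.0.1"      # IP of the OpsCenter (opscenterd) server
# local_interface was not needed here, per the note above
# then restart the agent so it picks up the change:
sudo service datastax-agent restart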

Regarding upgrade from 2.0.3 to 2.0.7

I am currently planning an upgrade to Cassandra 2.0.7. My base version is 2.0.3. I have not done an upgrade so far and hence want to be absolutely sure about what I am doing. Can someone explain what needs to be done apart from this?
Do a nodetool drain to stop all writes to the particular node.
Stop the Cassandra node (I have an 8-node, 2-data-center network topology; I am bringing down one node in DC1).
Change the cassandra.yaml accordingly in the new binary tarball.
Make the required changes for the new node (I am using GossipingPropertyFileSnitch, so making the changes for that).
Start the new Cassandra binary (2.0.7).
The questions striking me the most:
1. Do I have to copy the data from 2.0.3 to 2.0.7?
2. Even if it's a rolling upgrade, I think the steps above will do (except for moving from one version to another). Is my assumption right?
3. I am going to do this operation on a running application. I am planning to keep the application running while doing this, as I have enough replicas to satisfy local-quorum reads and writes. Does this idea have any disadvantages? I love Cassandra for this kind of operation, but would like to know if there are any potential problems.
4. I will have the existing 2.0.3 on my running machine while doing this. If there is a problem with 2.0.7, I can start the 2.0.3 version again, right? I just want to know whether there will be any data conflicts with other nodes in the cluster, or whether having a snapshot to recover the data is the better option.
5. Apart from this, is there anything else I have to bear in mind?
Do I have to copy the data from 2.0.3 to 2.0.7? Even if it's a rolling upgrade, I think the steps above will do (except for moving from one version to another). Is my assumption right?
If you just upgrade the binaries, you can leave all of the data in place and it will use it automatically.
I am going to do this operation on a running application. I am planning to keep the application running while doing this, as I have enough replicas to satisfy local-quorum reads and writes. Does this idea have any disadvantages? I love Cassandra for this kind of operation, but would like to know if there are any potential problems.
Normal read and write operations are fine. While you are temporarily running a mixed-version cluster, it's best to avoid doing anything that involves streaming (repairs) or topology changes (bootstrapping or decommissioning nodes). They might work, but they're not officially supported and you're more likely to have problems.
I will have the existing 2.0.3 on my running machine while doing this. If there is a problem with 2.0.7, I can start the 2.0.3 version again, right? I just want to know whether there will be any data conflicts with other nodes in the cluster, or whether having a snapshot to recover the data is the better option.
You want to have a snapshot to recover from. Newer versions of Cassandra may use new SSTable or commitlog formats which the older version will not be able to read.
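For example, taking a tagged snapshot on every node before the upgrade gives you something to roll back to (the tag name is arbitrary):
nodetool snapshot -t pre-2.0.7-upgrade
# snapshots land under each table's data directory, e.g.
# /var/lib/cassandra/data/<keyspace>/<table>/snapshots/pre-2.0.7-upgrade/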

Migrating Cassandra from 1.1.2 to 1.2.6

My current Cassandra version is 1.1.2, implemented as a single-node cluster. I would like to upgrade it to 1.2.6 with multiple nodes in the ring. Is it proper to migrate directly to 1.2.6, or should I follow a version-by-version migration?
I found the upgrade steps at this link:
http://fossies.org/linux/misc/apache-cassandra-1.2.6-bin.tar.gz:a/apache-cassandra-1.2.6/NEWS.txt.
There are 9 other releases available between these two versions.
I migrated a two-node cluster from 1.1.6 to 1.2.6 without problems and without going version by version. Anyway, you should take a closer look at:
http://www.datastax.com/documentation/cassandra/1.2/index.html?pagename=docs&version=1.2&file=index#upgrade/upgradeC_c.html#concept_ds_smb_nyr_ck
Because there are a lot of new features in version 1.2, like the partitioners, you may need to change some configuration for your cluster.
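One concrete example: 1.2 changed the default partitioner for new clusters, but an upgraded cluster must keep the partitioner it was created with, so check this line in cassandra.yaml (the value shown is the pre-1.2 default):
# cassandra.yaml on the upgraded nodes
partitioner: org.apache.cassandra.dht.RandomPartitioner
# new 1.2+ clusters default to org.apache.cassandra.dht.Murmur3Partitioner;
# do not switch an existing cluster's partitioner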
You may hop directly to C1.2.6.
We migrated our 4-node cluster from C1.0.9 to C1.2.8 recently without any issues. This was a rolling upgrade, i.e. upgrade one node at a time and, after each node's upgrade, allow the cluster to stabilize (how long depends upon the traffic during the upgrade).
These are the steps that we followed:
Perform the steps below on each node (see the command sketch after this list):
Run nodetool disablegossip and disablethrift, so that this node is seen as DOWN by the other nodes.
Flush/drain the memtables and run compaction to merge SSTables.
Take a snapshot and enable incremental backups.
This stops all the other nodes/clients from writing to this node, and since the memtables are flushed to disk, startup times are fast as it need not walk through the commit logs.
Stop Cassandra (though this node is down, the cluster is available for reads/writes, so zero downtime).
Upgrade the sstables to the new storage format using sstableupgrade.
Install/untar Cassandra 1.2.8 in the new location.
Move the upgraded sstables to the appropriate location.
Merge cassandra.yaml from the previous and current versions by a manual diff (you need to work through the differences).
Start Cassandra.
Watch the startup messages to ensure the node comes up without difficulty and is shown in the ring with the mixed 1.0.x/1.2.x nodes.
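For instance, the quiescing steps at the top and the final check map roughly to these commands on each node (run before stopping and after restarting Cassandra, respectively):
# before stopping the node
nodetool disablegossip
nodetool disablethrift
nodetool flush
nodetool drain
nodetool snapshot
# after starting 1.2.8 on the node
nodetool ring      # the upgraded node should show up normally alongside the 1.0.x nodes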
