New Cassandra nodes in cluster with different version - cassandra

I have a Cassandra cluster with 3 nodes running on v3.11.4
I want to add 3 more nodes to this cluster. Now that Cassandra v4 is available, I have installed it on the new nodes.
When I restart Cassandra, the new nodes are unable to join the cluster.
Error: There are nodes in the cluster with a different schema version than us
I even tried adding the skip_schema option in the jvm-server.options file, but the nodes still could not join.
Please help me understand how I can add the new nodes to the existing cluster. I want to keep v4 on the new nodes so I don't have to update them when upgrading the older nodes to v4.

It isn't possible to add nodes running a new major version to the cluster. You will only be able to add nodes running Cassandra 3.11.
Nodes on different major versions won't be able to stream data to each other because the on-disk and streaming formats differ. This is the reason you can't run repairs during an upgrade, and also why you can't add or decommission nodes in the middle of an upgrade. Cheers!

So the plan forward here would be to shut down the Cassandra 4.0 nodes, then upgrade the 3.11 nodes to 4.0. After that, adding the new 4.0 nodes should work as expected.
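
Before planning the upgrade, a quick way to confirm what each node is running and where the schema disagreement sits is with two standard nodetool subcommands (run them on any node):

    nodetool version          # prints the Cassandra release version of this node
    nodetool describecluster  # lists the schema versions seen across the cluster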

Related

Migrate Datastax Enterprise Cassandra to Apache Cassandra

We are currently using DSE 4.8 and 5.12. We want to migrate to Apache Cassandra. Since we don't use Spark or Search, we thought we'd save some bucks moving to Apache. Can this be achieved without downtime? I see sstableloader works the other way around. Can anyone share the steps to follow to migrate from DSE to Apache Cassandra? Something like this, but from DSE to Apache:
https://support.datastax.com/hc/en-us/articles/204226209-Clarification-for-the-use-of-SSTABLELOADER
Figure out what version of Apache Cassandra is being run by DSE. Based on the DSE documentation, DSE 4.8.14 is using Apache Cassandra 2.1 and DSE 5.1 is using Apache Cassandra 3.11.
The simplest way to do this is to build another DC (a logical DC, in Cassandra terms) and add it to the existing cluster.
As usual, run a "nodetool rebuild {from-old-DC}" on the new DC nodes and let Cassandra take care of streaming data to the new Apache Cassandra nodes naturally.
Once data streaming is complete, based on the LoadBalancingPolicy used by the applications, switch their local_dc to DC2 (the new DC). Once the new DC starts taking traffic, shut down the nodes in the old DC (say, DC1) one by one.
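
A minimal shell sketch of the rebuild step, run on each node of the new Apache Cassandra DC; the old DC is assumed to be named DC1 as in the answer above:

    nodetool rebuild -- DC1              # stream existing data from the old DC
    nodetool netstats | grep Receiving   # watch the streaming progress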
Alter the dse_system and dse_security keyspaces so they no longer use the Everywhere replication strategy (see the CQL sketch after these steps).
On non-seed nodes, clean up the Cassandra data directory.
Turn on the replace option in cassandra-env.sh.
Start the instance.
Monitor the streaming process using the command 'nodetool netstats | grep Receiving'.
Change the seed node definitions and do a rolling restart before finally migrating the previous seed nodes.
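
A hedged CQL sketch of that first keyspace step, assuming the intent is to move dse_system and dse_security off DSE's EverywhereStrategy (which Apache Cassandra does not ship) and onto NetworkTopologyStrategy; the DC name and replication factor below are illustrative:

    ALTER KEYSPACE dse_system
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};
    ALTER KEYSPACE dse_security
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};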

DSE 5 and DSE 4.8.9 in Same Cluster

Is it at all possible to have two different DSE versions in the same cluster? In my case, I have a cluster of two DSE 5 nodes and another one of two DSE 4.8.9 nodes. Can I connect them such that data is replicated from DSE 4.8.9 to DSE 5 in real time?
No. If you were to try this, you'd be in an "Upgrade State." And clusters in an upgrade state are bound by these restrictions:
Do not enable new features.
Do not run nodetool repair.
Do not issue these types of CQL queries during a rolling restart: DDL and TRUNCATE.
During upgrades, the nodes on different versions might show a schema disagreement.
Failure to upgrade SSTables when required results in a significant performance impact and increased disk usage. Upgrading is not complete until the SSTables are upgraded.
The problem would be further exacerbated by the fact that 4.8.9 is based on Cassandra 2.1 and 5.0 is based on Cassandra 3.0. There were significant changes between the two, so you would undoubtedly run into problems.
The best way to go about this would be to upgrade your 4.8.9 nodes to 5.0 first, and then add your new 5.0 cluster nodes afterward.
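
If you do later upgrade node by node, one hedged reminder tied to the SSTable restriction quoted above: after each node comes back up on the new version, rewrite its SSTables with the standard nodetool subcommand.

    nodetool upgradesstables   # rewrite this node's SSTables into the new on-disk format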

How to clean Cassandra cluster

I had set up a 50-node Apache Cassandra cluster.
I took one node and wanted to install DSE on it and make it a single-node DSE cluster.
I have removed /var/lib/cassandra and /var/log/cassandra.
I have truncated the system.peers table on the single node.
When I start DSE Cassandra on this node, I still see the remaining nodes doing handshakes and being added to this cluster.
What is the best way to completely remove any traces of the existing Cassandra cluster from this node?
You need to change the cluster_name directive in cassandra.yaml to a name different from the rest of the cluster.
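
A minimal sketch of that change, assuming a package-style DSE install (the yaml path and the new cluster name are illustrative); wiping the data directories again afterwards makes sure no state from the old cluster survives:

    sudo sed -i "s/^cluster_name:.*/cluster_name: 'StandaloneDSE'/" /etc/dse/cassandra/cassandra.yaml
    sudo rm -rf /var/lib/cassandra/data /var/lib/cassandra/commitlog /var/lib/cassandra/saved_caches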

Showing old cluster IPs when I run nodetool status for new cluster

I already have a Cassandra cluster of 4 nodes (version 1.2.5). I made an image of one of the Cassandra instances in the existing cluster and created a new cluster with 2 nodes. When I started the new instances and checked nodetool status, it showed the old cluster IPs as well. I checked the cassandra.yaml file; the seeds are set to the new nodes only. What else needs to be changed?
I think the cluster metainfo is stored in the system keyspace; check the peers table.
@Alex Popescu, thanks, that worked for me.
1) SELECT * FROM system.peers; gave me the entries for the hosts.
2) Removed the entry for the unwanted node (see the sketch below).
3) Ran nodetool flush system.
4) Restarted Cassandra on all nodes of the cluster.
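
For step 2, a hedged sketch of how the stale entry is usually removed (the IP address is illustrative); deleting the ghost row from system.peers was the common fix on 1.2-era clusters:

    DELETE FROM system.peers WHERE peer = '10.0.0.12';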

Migrating Cassandra from 1.1.2 to 1.2.6

My current Cassandra version is 1.1.2, deployed as a single-node cluster. I would like to upgrade it to 1.2.6 with multiple nodes in the ring. Is it proper to migrate directly to 1.2.6, or should I follow a version-by-version migration?
I found the upgrade steps at this link:
http://fossies.org/linux/misc/apache-cassandra-1.2.6-bin.tar.gz:a/apache-cassandra-1.2.6/NEWS.txt.
There are 9 other releases available between these two versions.
I migrated a two-node cluster from 1.1.6 to 1.2.6 without problems and without going version by version. Anyway, you should take a closer look at:
http://www.datastax.com/documentation/cassandra/1.2/index.html?pagename=docs&version=1.2&file=index#upgrade/upgradeC_c.html#concept_ds_smb_nyr_ck
Because there are a lot of new features in version 1.2, like the new partitioners, you may need to change some configuration for your cluster.
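
A hedged shell sketch of what to double-check before the upgrade (the yaml path is illustrative): 1.2 defaults new clusters to Murmur3Partitioner, but an upgraded cluster must keep the partitioner it was created with, and the new num_tokens (vnodes) setting deserves a conscious decision:

    grep -E 'partitioner|num_tokens|initial_token' /etc/cassandra/cassandra.yaml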
You may hop directly to C1.2.6.
We migrated our 4-node cluster from C1.0.9 to C1.2.8 recently without any issues. This was a rolling upgrade, i.e. upgrading one node at a time and, after each node, allowing the cluster to stabilize (how long depends upon the traffic during the upgrade).
These are the steps that we followed:
Perform the following on each node (a shell sketch of the full sequence follows these steps):
Run nodetool disablegossip and disablethrift, so that this node is seen as DOWN by the other nodes.
flush/drain the memtables, run compaction to merge SSTables
take a snapshot and enable incremental backups
This stops all the other nodes/clients from writing to this node, and since the memtables are flushed to disk, startup times are fast as it need not walk through the commit logs.
stop Cassandra (though this node is down, the cluster is available for writes/reads, so zero downtime)
upgrade sstables to new storage format using sstableupgrade
install/untar Cassandra 1.2.8 on the new locations
move upgraded sstables to appropriate location
merge cassandra.yaml from the previous and current versions by a manual diff (need to detail out the differences)
start Cassandra
watch the startup messages to ensure the node comes up without difficulty and is shown in the ring alongside the mixed 1.0.x/1.2.x nodes
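
For reference, a rough shell sketch of this per-node sequence, assuming the SSTables are rewritten with nodetool upgradesstables once the node is back up (an alternative to the offline sstableupgrade step described above); the service name and snapshot tag are illustrative:

    nodetool disablegossip              # other nodes now see this node as DOWN
    nodetool disablethrift              # stop accepting client (Thrift) traffic
    nodetool drain                      # flush memtables so startup skips commitlog replay
    nodetool snapshot -t pre-upgrade    # safety snapshot of the current SSTables
    sudo service cassandra stop
    # install/untar the new Cassandra release and merge cassandra.yaml by hand here
    sudo service cassandra start
    nodetool upgradesstables            # rewrite SSTables into the new storage format
    nodetool ring                       # confirm the node rejoins the mixed-version ring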
