Decommissioning One of Two Datacenters - Cassandra

I have one Cassandra datacenter; let's name it DC1. Then I added a new datacenter to expand the node count; let's name it DC2. I use replication factors DC1:3 and DC2:3. I write all my data with LocalDC=DC2 and ConsistencyLevel.LOCAL_QUORUM, so I am sure that all write requests go to DC2. I want to remove DC1, I don't want to run nodetool repair, and I don't want to wait.
Can I simply change every keyspace's replication factor to DC2:3 and run nodetool decommission on the DC1 nodes?

Yes.
Since you say you are sure there is no data lag between the two data centers, you can skip the repair step.
Change the replication strategy of every keyspace using ALTER KEYSPACE (sketched below the link)
Decommission the DC1 nodes one by one
See this: https://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_decomission_dc_t.html
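A minimal sketch of those two steps, assuming a keyspace named my_keyspace (a placeholder; repeat the ALTER for every keyspace that currently replicates to DC1, including system keyspaces such as system_auth if they do):

    -- in cqlsh
    ALTER KEYSPACE my_keyspace
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC2': 3};

    # then, in a shell on each DC1 node in turn
    nodetool decommission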

Related

Apache Cassandra decommission second DC and join nodes into first DC as brand new nodes?

My Cassandra cluster consists of 2 DCs; each DC has 5 nodes and the replication factor per DC is 3. Both DCs are hosted on the same Docker orchestrator. This is a legacy setup, probably left over from the last major system migration years ago. At the moment I don't see any advantage to having 2 DCs with the same replication factor of 3; this way the same data is written 6 times. The cluster is at least 80% write-heavy; reads are more or less limited.
Cassandra is struggling under load at peak times, so I would like to have 1 DC with 10 nodes (instead of 2 DCs x 5 nodes) to be able to balance across 10 nodes instead of just 5. This would also bring down the data size per node. With the same amount of RAM and CPU dedicated to Cassandra, I would gain performance and free up storage space ;-)
So the idea is to decommission DC2 and bring all 5 of its nodes into DC1 as brand new nodes.
Steps are known:
alter keyspaces to be limited to DC1 only.
no clients to be writing/reading to/from DC2 - DCAwarePolicy with LOCAL_*
I wonder about the next step - it says I need to decommission the DC2 nodes one by one. Is this mandatory, or could I somehow just take those nodes down? The goal is not to decommission some, but all nodes in the DC. If I decommission, say, node5, its data would be transferred to the remaining 4 nodes, and so on. At some point I would be left with 3 nodes and a replication factor of 3, so I wouldn't be able to decommission any further. What's more, I guess there would be no free space left on those node volumes, and I am not willing to extend them any further.
So my questions are:
Is there a way to alter the keyspaces to DC1 only, then just bring all DC2 nodes down, erase their volumes and add them one by one to DC1, expanding DC1? Basically, to decommission all DC2 nodes at once?
Is there a way to move those 5 DC2 nodes to DC1 even more quickly (in the end they contain the same data as the 5 nodes in DC1)? Like just joining them to DC1 with all the data they contain?
What is the advantage of having 2 DCs in a single cluster, instead of having a single DC with double the nodes? Or does it strongly depend on the usage and the way services write and read data from Cassandra?
Appreciate your replies, thanks.
Cheers,
OvivO
Is there a way to alter the keyspaces to DC1 only, then just bring all DC2 nodes down, erase their volumes and add them one by one to DC1, expanding DC1? Basically, to decommission all DC2 nodes at once?
Yes, you can adjust the keyspace definitions to replicate only within DC1. Since you're basically removing a DC, you could shut the DC2 nodes down and run nodetool removenode for each of them. In theory, that removes the nodes from gossip and (since they're down) does not attempt to move data around. Then yes, add each node back to DC1, one at a time. Once you're done, run a repair, followed by nodetool cleanup on each node. A sketch of the commands follows below.
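Roughly, assuming a keyspace named my_keyspace and placeholder host IDs (both illustrative, not from the original post):

    -- cqlsh: restrict replication to DC1 (repeat for each keyspace)
    ALTER KEYSPACE my_keyspace
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};

    # shell: after shutting down the DC2 nodes, remove each one by host ID
    nodetool status                        # note the Host ID of each down DC2 node
    nodetool removenode <host-id-of-a-down-dc2-node>

    # once the wiped nodes have been re-bootstrapped into DC1
    nodetool repair -pr                    # on every node
    nodetool cleanup                       # on every node in DC1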
Is there a way to move those 5 DC2 nodes to DC1 even more quickly (in the end they contain the same data as the 5 nodes in DC1)? Like just joining them to DC1 with all the data they contain?
No. Token range assignment is DC-dependent. If they move to a new DC, their range assignments will change, and the nodes will very likely be responsible for different ranges of data.
What is the advantage of having 2 DCs in a single cluster, instead of having a single DC with double the nodes?
Geographic awareness. If you have a mobile app and users on both the West Coast and East Coast, you don't want your East Coast users making a call for data all the way to the West Coast. You want that data call to happen as locally as possible. So, you'd build up a DC on each coast, and let Cassandra keep them in-sync.

Cassandra Replication Factor

Let's say I have two data centers (DC1, DC2) in a single Cassandra cluster.
DC1 - 4 nodes.
DC2 - 4 nodes.
Initially I set the replication factor for all keyspaces to {DC1:2, DC2:2} (NetworkTopologyStrategy).
But after some time, let's say I alter the keyspaces and change the replication factor to {DC2:2} for all of them, removing DC1, i.e. no replication factor for DC1.
So now what will happen? Will DC1 get any data written into it in the future?
Will all the token ranges be assigned to only DC2?
If you exclude DC1, it won't get data written for that keyspace, nor will data be read from DC1. Before switching off DC1, make sure that you perform nodetool repair on the servers in DC2, so that you have all data synchronized. After changing the RF, you can also run nodetool cleanup on the DC1 nodes to reclaim the space used by that keyspace.
When you change the RF for a specific keyspace, the drivers and Cassandra itself recalculate the token range assignments, taking into account the information about data centers.
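For example (the keyspace name my_keyspace is a placeholder), the change and the surrounding steps might look like:

    # repair the DC2 nodes first so they hold all the data
    nodetool repair -pr                  # on every DC2 node

    -- cqlsh: drop DC1 from the replication settings
    ALTER KEYSPACE my_keyspace
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC2': 2};

    # reclaim space on the DC1 nodes, which no longer own this keyspace
    nodetool cleanup my_keyspace         # on each DC1 node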

Can Cassandra support multi-DC cluster with different number of nodes?

I want to be able to get a backup/replica of the operational data onto a single node so we can run some ad-hoc queries.
Having just one machine handle this replica will work for now.
Is this possible? If not, what are the arguments against it?
Yes, you can have a different number of nodes in each data center. Set the replication factor as per your requirements.
E.g. if you have DC1 with 4 nodes and are going to add DC2 with 1 node, then the replication factor for your keyspace should be DC1=x, DC2=1 (where x <= 4).
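For instance, with x = 3, a keyspace definition along those lines (the keyspace name reporting_ks is just an example) could be:

    CREATE KEYSPACE reporting_ks
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 1};

    -- ad-hoc queries can then be pointed at the single DC2 node using a LOCAL_* consistency level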
To add one more data center you need to check the topology, snitch, and seed configurations.
E.g. if you are using SimpleSnitch then you can't have multiple data centers, so you need to change your snitch and topology first; the DataStax documentation explains more about changing the snitch and topology.

Do I need to decrease my replication factor if replication > no. of nodes (planning to decommission a node)?

We currently have a DC with 3 nodes and a replication factor of 3. I am planning to decommission a node; do I need to decrease my replication factor to 2, or will just decommissioning the node redistribute the data among the two remaining nodes with a replication factor of 3?
Decommissioning a node will not necessarily break your Cassandra cluster, but it will make a few things stop working.
A few things that will happen if you decommission the node but don't adjust the replication factor:
First, nothing about your replication factor will be changed just because you decommission a node. To do otherwise would cause chaos.
Queries (both read and write) that attempt to use ConsistencyLevel.ALL will fail, because they will not be able to get 3 machines to participate.
Queries with ConsistencyLevel.QUORUM will be less available, because BOTH remaining machines will need to respond to queries to meet quorum.
Because you have 3 machines and a RF of 3, that means that every machine has a complete copy of the data. Decommission the node, update your replication factor, and then run nodetool repair on the remaining two nodes. After you do that, you should be good to go.
My 2 cents: I would suggest first changing your replication factor to 2, running a repair on all nodes, and then issuing nodetool decommission on the node you want to decommission. There will be data moving around, but done this way nothing should stop working; a sketch of the commands is shown below.
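A sketch of that order of operations, assuming the keyspace is named my_keyspace and the data center is named DC1 with NetworkTopologyStrategy (adjust to your actual keyspace names, DC name, and strategy):

    -- cqlsh: drop the replication factor to 2 first
    ALTER KEYSPACE my_keyspace
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 2};

    # shell: make sure the remaining replicas are in sync
    nodetool repair            # on every node

    # shell: then retire the node (run this on the node being removed)
    nodetool decommission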

Cassandra: For a single node cluster, will keyspace replication factor >1 increase disk space usage?

I have a keyspace with the replication factor set to 3, but I have only a single node. Will the disk space used then be 3 times the data size? As the replicas cannot be assigned to any other nodes, will Cassandra skip creating the replicas until new nodes join the cluster?
No, the disk space used would not be three times the size. The single node would own the entire token range and all writes would be written to that single node once.
What happens with the writes for the other two replicas would depend on if those nodes were previously present in the cluster and are currently down, or if they have never been added to the cluster. If they had never been added, then C* would just skip trying to write to them.
If they had been added but are currently down, and if you have hinted handoffs enabled and are still within the hinted handoff window, then C* will store hints for the down nodes on the single up node.
It depends on the replication strategy and the consistency level you use. Assuming your queries are working, you might have used SimpleStrategy. If you write to such a configuration at a consistency level that requires more than one replica (e.g. QUORUM or ALL), the write will fail, because it needs acknowledgements from two additional replica nodes before responding to the client, and in the case of SimpleStrategy those would be the next two clockwise nodes in the ring, which do not exist. A write at consistency level ONE will still succeed.
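A quick way to see this from cqlsh on the single node (the keyspace and table names here are made up for illustration):

    CREATE KEYSPACE demo
      WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
    CREATE TABLE demo.t (id int PRIMARY KEY, val text);

    CONSISTENCY ONE
    INSERT INTO demo.t (id, val) VALUES (1, 'ok');       -- succeeds: one replica is enough

    CONSISTENCY QUORUM
    INSERT INTO demo.t (id, val) VALUES (2, 'nope');     -- fails: QUORUM with RF 3 needs 2 live replicas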
