How to efficiently manage Cassandra initial tokens?

I'm a new Cassandra user. I know that there is an initial token configuration and how to generate the values.
The question is: if I have an existing cluster with x nodes and I want to add one or more additional nodes, should I reconfigure all the nodes with new tokens (according to newly generated values)?
Or is there a more efficient way to manage this?

If you're looking for what the best practices are for handling such tasks, take a look at this section of the Cassandra 1.0 docs dedicated to token strategy.
Shortened version of your options, from the documentation:
Add capacity by doubling the cluster size -- [..] nodes can keep their existing token assignments, and new nodes are assigned tokens that bisect (or trisect) the existing token ranges. (A small sketch of the bisection math follows this list.)
Recalculate new tokens for all nodes and move nodes -- [..] you will have to recalculate tokens for the entire cluster. Existing nodes will have to have their new tokens assigned using nodetool move.
Add one node at a time and leave initial_token empty -- [..] splits the token range of the heaviest loaded node and places the new node into the ring at that position. [..] not result in a perfectly balanced ring, but it will alleviate hot spots.
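A minimal sketch of the bisection math for the first option, assuming RandomPartitioner's 0 .. 2^127 token space as in the 1.0-era docs (the three-node example ring is made up for illustration):

# Sketch: doubling a cluster by bisecting the existing token ranges.
# Assumes RandomPartitioner (token space 0 .. 2**127); values are illustrative.
RING_SIZE = 2**127

def bisecting_tokens(existing_tokens):
    """Return one new token per existing range, placed halfway between neighbours."""
    tokens = sorted(existing_tokens)
    new_tokens = []
    for i, t in enumerate(tokens):
        nxt = tokens[(i + 1) % len(tokens)]
        width = (nxt - t) % RING_SIZE   # width of the range starting at t, wrapping around
        new_tokens.append((t + width // 2) % RING_SIZE)
    return new_tokens

# Example: a balanced 3-node ring grows to 6 nodes; existing nodes keep their tokens,
# and the three values printed here go to the three new nodes.
existing = [RING_SIZE * i // 3 for i in range(3)]
print(bisecting_tokens(existing))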
If you were seeking a management solution, Priam (from Netflix) might be worth looking at. It's open source and Apache-licensed, but it requires some amount of configuration and is probably only worth investing time in for larger clusters.

Related

Cassandra adding new Datacenter with even token distribution

We have a 1 DC cluster running Cassandra 3.11. The DC has 8 nodes total with 16 tokens per node and 3 seed nodes. We use Murmur3Partitioner.
In order to ensure better data distribution for the upcoming cluster in another DC, we want to use the token allocation approach where you manually specify initial_token for the seed nodes and use allocate_tokens_for_keyspace for the non-seed nodes.
The problem is that our current datacenter cluster is not well balanced, since we built it without a token allocation approach, so the tokens are not well distributed. I can't figure out how to calculate initial_token for the new seed nodes in the new datacenter. I probably cannot treat the token range of the new cluster as independent and calculate the initial tokens as I would for a fresh cluster. At this point I am very unsure how to proceed; any help will be appreciated, thanks.
Currently I am trying to put together a migration plan, and I have reached the point where I do not know what to do and the documentation is not helpful.
There are scripts available to calculate the initial_token value, for example, you could use the one here to quickly calculate these values:
https://www.geroba.com/cassandra/cassandra-token-calculator/
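If you'd rather script the calculation than use the web tool, here is a minimal sketch of the same idea in Python, assuming Murmur3Partitioner's token range of -2^63 .. 2^63 - 1 and 3 seed nodes with 16 tokens each; the per-DC offset and the seed names are purely illustrative (the offset just keeps the new DC's tokens from colliding exactly with tokens already in use):

# Sketch: evenly spaced Murmur3 tokens for the seed nodes of a new DC.
# Assumes Murmur3Partitioner (token range -2**63 .. 2**63 - 1).
# dc_offset is an arbitrary illustrative value to avoid exact collisions
# with tokens already used in the existing DC.
TOKEN_SPACE = 2**64      # total width of the Murmur3 token space
MIN_TOKEN = -2**63

def seed_tokens(num_seeds, tokens_per_node, dc_offset=0):
    total = num_seeds * tokens_per_node
    all_tokens = [MIN_TOKEN + (TOKEN_SPACE * i) // total + dc_offset
                  for i in range(total)]
    # Interleave so each seed node gets tokens spread evenly around the ring.
    return {f"seed{n + 1}": sorted(all_tokens[n::num_seeds])
            for n in range(num_seeds)}

for node, tokens in seed_tokens(num_seeds=3, tokens_per_node=16, dc_offset=101).items():
    # Each line is a comma-separated list suitable for that node's initial_token.
    print(node, ",".join(str(t) for t in tokens))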
You do have the ability to set allocate_tokens_for_keyspace and point it to a keyspace with the replication factor you plan to use for user-created keyspaces in the cluster. If you're adding a new DC, you probably already have such a keyspace, and this should help you get better distribution. Remember to set this before bootstrapping nodes into the new DC.
Another option would be to avoid using vnodes entirely and go with a single-token architecture by setting num_tokens to 1. This gives you the ability to bootstrap nodes into the new DC, load/stream the data, and then monitor the distribution and make changes as needed using 'nodetool move':
https://cassandra.apache.org/doc/3.11/cassandra/tools/nodetool/move.html
This method would require you to monitor the distribution and adjust the token assignments as needed, and you'd want to follow up the move command with 'nodetool repair' and 'nodetool cleanup' on all nodes, but it gives you the ability to rectify uneven distribution quickly without bootstrapping new nodes. You would still want to use the same method for calculating the initial_token values with a single-token architecture and set them before bootstrap.
I suspect either method could work well for you, but wanted to give you a second option.

Cassandra Virtual Nodes

Although this has been asked many times and answered many times, I did not find a good answer anywhere,
neither in forums nor in the Cassandra docs.
How do virtual nodes work?
Suppose a node has 256 virtual nodes,
and the docs say they are distributed randomly.
(Set aside how that "randomly" is done... I have another, more urgent question:)
Is it right that every Cassandra ("physical") node is actually responsible for several distinct locations in the ring (256 locations)? Does that mean the "physical" node is sort of "spread" over the whole circle?
How does re-balancing work in that case, if I add a new node?
The ring will get an additional 256 nodes.
How will those additional nodes divide the data with the old nodes?
Will they basically appear as additional "bicycle spokes" randomly spread through the whole ring?
There is a lot of info on the internet, but nobody gives a clear explanation...
Vnodes break up the available range of tokens into smaller ranges, defined by the num_tokens setting in the cassandra.yaml file. The vnode ranges are randomly distributed across the cluster and are generally non-contiguous. If we use a large number for num_tokens to break up the token ranges, the random distribution means it is less likely that we will have hot spots. Statistical computation showed that clusters of any size consistently achieved a good token range balance when 256 vnodes were used. Hence, the num_tokens default value of 256 was recommended by the community to prevent hot spots in a cluster.
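To illustrate how a "physical" node is spread over the whole circle, here is a small Python simulation (node names, the seed value, and the simplified 0 .. 2^64 token space are made up for illustration) that assigns random vnode tokens to three nodes and then adds a fourth:

# Sketch: random vnode assignment, roughly what Cassandra does with num_tokens > 1.
import random
from collections import Counter

random.seed(42)
RING = 2**64   # simplified token space, shifted to 0 .. 2**64 for convenience

def add_node(ring, name, num_tokens=256):
    for _ in range(num_tokens):
        ring[random.randrange(RING)] = name   # 256 random, non-contiguous positions

def ownership(ring):
    # Simplified convention: each token owns the span from itself up to the next token.
    tokens = sorted(ring)
    owned = Counter()
    for i, t in enumerate(tokens):
        nxt = tokens[(i + 1) % len(tokens)]
        owned[ring[t]] += (nxt - t) % RING
    return {node: round(share / RING, 3) for node, share in owned.items()}

ring = {}
for name in ("node1", "node2", "node3"):
    add_node(ring, name)
print(ownership(ring))   # each physical node owns roughly 1/3, in 256 scattered slices

add_node(ring, "node4")  # the new node's 256 tokens split ranges all around the ring
print(ownership(ring))   # ownership drifts toward roughly 1/4 each, with no manual moves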
Ans 1: It is a set of token ranges based on num_tokens. If you have set 256 (the default), each node gets 256 token ranges.
Ans 2: Yes, when you add or remove nodes, the tokens are redistributed across the cluster based on the vnode configuration.
You may refer here for more details: https://docs.datastax.com/en/ddac/doc/datastax_enterprise/dbArch/archDataDistributeVnodesUsing.html
LetsNoSQL's answer is correct. See also https://stackoverflow.com/a/37982696/5209009. I'll only add a few more comments:
Yes, the "physical" node is spread across the token range.
As explained in the link, any new node will take 256 new token ranges, dividing some of the existing ones. There is no other rebalancing; it relies on randomness to achieve some balance, which is why it uses a relatively large number of tokens per node (256).
It's worth mentioning that there is another option. You can run vnodes with a smaller number of tokens per node (4-8) together with a token allocation algorithm. New tokens are not allocated randomly; instead, a greedy algorithm is used so that the new tokens create a distribution that optimises the load on a given keyspace: it simply divides in half the token ranges containing most of the data. Since it's not random, it can work with a smaller number of tokens (4-8). It's not really relevant for small clusters, but for 100+ nodes it can be.
See https://www.datastax.com/blog/2016/01/new-token-allocation-algorithm-cassandra-30 and https://thelastpickle.com/blog/2019/02/21/set-up-a-cluster-with-even-token-distribution.html.
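As a toy illustration of that greedy idea (this is not the actual Cassandra 3.0 allocation algorithm, just the "split the heaviest range" intuition, with made-up load figures):

# Toy sketch: place each new token by splitting the currently heaviest token range in half.
# Ranges are (start, end) pairs with a made-up relative load; halving the load on a split
# assumes the load is spread evenly inside a range, which is a simplification.
def add_tokens(ranges, new_tokens):
    for _ in range(new_tokens):
        (start, end), load = max(ranges.items(), key=lambda kv: kv[1])
        mid = (start + end) // 2
        del ranges[(start, end)]
        ranges[(start, mid)] = load / 2
        ranges[(mid, end)] = load / 2
    return ranges

ranges = {(0, 100): 40, (100, 180): 35, (180, 256): 25}
print(add_tokens(ranges, new_tokens=4))   # the heaviest ranges get divided first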

Cassandra: Can't one use snapshots to rapidly scale out a cluster?

This details how to replicate data to a new cluster:
https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_snapshot_restore_new_cluster.html
Can't a similar scheme be used to rapidly scale out a cluster with existing data? Say, take a snapshot of all the nodes, copy the snapshots to new nodes, set the tokens in the yaml, set the peers to point to the old instances, and then join them up?
Won't they be treated like nodes that were once part of the cluster and have rejoined?
That won't work, because snapshots are specific to the node on which they are taken. Once you add (or remove) a node, the token ranges on all nodes are recalculated, and you immediately invalidate any existing snapshots. Restoring a snapshot to another node would appear to work, but it would only serve the data which happened to match that node's token ranges.
Plus, it would try to serve any data matching its token ranges, whether or not the snapshot you restored from actually contained that data. Not a good scenario.

How to ensure that consistent hashing works?

I'm going to implement consistent hashing over a bunch of nodes. Each node has a limited capacity (let's say 1 GB). I start with one node, and when it's getting full I add another node and use consistent hashing to redistribute the data, and I move forward by adding new nodes. However, there is still a chance that a node gets full. I know some NoSQL databases such as Cassandra use consistent hashing to do something similar to what I'm doing. How can I keep nodes from overflowing when using consistent hashing?
Cassandra does not use consistent hashing in the way you described.
Each table has a partition key (you can think of it as the primary key, or the first part of it, in RDBMS terminology), and this key is hashed using the murmur3 algorithm. The whole hash space forms a continuous ring from the lowest possible hash to the highest. This ring is then divided into chunks (vnodes, 256 by default), and these chunks are fairly distributed among the nodes. Each node hosts not only its own part of the ring, but also maintains replicated copies of other vnodes according to the replication factor.
This way of doing things helps to solve a lot of problems:
It balances the data load among all cluster nodes, so no single node can be overloaded (data size, reads, and writes are evenly distributed; no hot spots).
If you add a new node to the cluster, it will handle its own part of the ring and pull the required vnodes automatically from the other nodes. No manual resharding is needed.
If a node fails, replication means you won't lose any data because it is already stored on other nodes. In this case you can decommission the failed node so that the other nodes redistribute its part of the ring among themselves. There is no need for complex switchover scenarios for failed DB nodes.
Of course, you can always implement similar behaviour on top of any RDBMS in your application layer, but it is always much harder and more error-prone than using an existing, battle-tested solution.
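If you do want to build something like this yourself, a minimal consistent-hash ring with virtual nodes looks roughly like the sketch below (Python, using md5 from hashlib rather than murmur3 purely for convenience; the node names and vnode count are made up):

# Sketch: a consistent-hash ring with virtual nodes (vnodes).
import bisect
import hashlib

class Ring:
    def __init__(self, vnodes=64):
        self.vnodes = vnodes
        self.tokens = []    # sorted vnode tokens
        self.owner = {}     # token -> physical node

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.vnodes):
            token = self._hash(f"{node}#{i}")
            bisect.insort(self.tokens, token)
            self.owner[token] = node

    def node_for(self, key):
        token = self._hash(key)
        idx = bisect.bisect_right(self.tokens, token) % len(self.tokens)  # wrap around
        return self.owner[self.tokens[idx]]

ring = Ring()
for n in ("node-a", "node-b"):
    ring.add_node(n)
print(ring.node_for("user:42"))   # the key maps to one of the nodes
ring.add_node("node-c")           # only about 1/3 of the keys move to the new node
print(ring.node_for("user:42"))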
I guess you know how keys get moved from one node to another when a node is added or deleted. Coming to your question of how uniform distribution happens:
You can add your own logic here to make it happen. Keep monitoring all the nodes in the ring; if any node is getting hot (handling more keys), insert another node before it so that the load is distributed between the old and the new node. Similarly, if any of the nodes are under-utilised, you can remove them so their load shifts to the next node.
Hope this helps!

What would be the exact procedure to add new nodes to a Cassandra cluster so that the cluster remains balanced?

I've read the relevant documentation I could find, but I still have doubts.
What I read
From http://wiki.apache.org/cassandra/Operations#Moving_nodes
If you add nodes to your cluster your ring will be unbalanced and only way to get perfect balance is to compute new tokens for every node and assign them to each node manually by using nodetool move command.
and from http://www.datastax.com/docs/1.1/operations/cluster_management#adding-capacity-to-an-existing-cluster
If you need to increase capacity by a non-uniform number of nodes, you must recalculate tokens for the entire cluster, and then use nodetool move to assign the new tokens to the existing nodes. After all nodes are restarted with their new token assignments, run a nodetool cleanup to remove unused keys on all nodes
But I'm not clear on the order of these things.
Could you explain how to do it in the following scenario?
I'm using Cassandra 1.1.9, so no virtual nodes are in use.
I have a cluster ring with 5 nodes, and each owns 20%
Their tokens are
0
34028236692093846346337460743176821145
68056473384187692692674921486353642291
102084710076281539039012382229530463436
136112946768375385385349842972707284582
I want to add 2 additional nodes.
What steps do I have to follow? I know I should install and configure Cassandra, use the original 5 nodes as seeds, and calculate the new tokens, but in what order should I move the data using nodetool move? Is it one at a time?
What happens with the data when I move the first one? Is it available at all times?
Should I start the two new nodes before moving the original 5 to their new tokens?
A step by step guide would be ideal.
Please note that I need to do it pre version 1.2
The new tokens should be
0
24305883351495604533098186245126300818
48611766702991209066196372490252601636
72917650054486813599294558735378902454
97223533405982418132392744980505203272
121529416757478022665490931225631504090
145835300108973627198589117470757804908
calculated using 2^127/7 * {0-6}.
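For reference, the values above can be reproduced with a couple of lines of Python (RandomPartitioner's token space runs from 0 to 2^127):

# Evenly spaced RandomPartitioner tokens for 7 nodes (Cassandra < 1.2, no vnodes).
num_nodes = 7
step = 2**127 // num_nodes
for i in range(num_nodes):
    print(step * i)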
What steps do I have to follow?
in what order should I move the data using nodetool move?
You should
Bootstrap in one node at 48611766702991209066196372490252601636
Bootstrap the other node at 121529416757478022665490931225631504090
Move 34028236692093846346337460743176821145 to 24305883351495604533098186245126300818
Move 68056473384187692692674921486353642291 to 72917650054486813599294558735378902454
Move 102084710076281539039012382229530463436 to 97223533405982418132392744980505203272
Move 136112946768375385385349842972707284582 to 145835300108973627198589117470757804908
(I tried to minimise the amount of data transferred - it might not be optimal, but it is close enough not to make much difference given the imbalance of data you probably already have.)
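To sanity-check that pairing, here is a quick sketch in plain Python that matches each existing token to its closest new token (a node whose target equals its current token, like the one at 0, simply stays put; the two new tokens left over are where the new nodes bootstrap):

# Sketch: pair each existing token with the closest new token to minimise data movement.
RING = 2**127

old_tokens = [
    0,
    34028236692093846346337460743176821145,
    68056473384187692692674921486353642291,
    102084710076281539039012382229530463436,
    136112946768375385385349842972707284582,
]
step = 2**127 // 7
new_tokens = [step * i for i in range(7)]

def ring_distance(a, b):
    d = abs(a - b) % RING
    return min(d, RING - d)

remaining = list(new_tokens)
for old in old_tokens:
    target = min(remaining, key=lambda t: ring_distance(old, t))
    remaining.remove(target)
    print(f"move {old} -> {target}")
print("bootstrap the new nodes at:", remaining)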
Is it one at a time?
You should bootstrap one node at a time and move one token at a time. This avoids placing excess load on the cluster while streaming data.
What happens with the data when I move the first one? Is it available at all times?
Data is fully available during the move. The node participates in reads and writes for the old and new range so you can read and write during the move.
Should I start the two new nodes before moving the original 5 to their new tokens?
Always better to have more nodes in the cluster - if you moved first, you'd have some nodes with twice as much data as the others.
From Cassandra 1.2, keeping a cluster balanced when adding nodes is very easy, because of the new vnodes (multiple tokens per node) feature. Cassandra now automatically balances the cluster for you. If you upgrade from an earlier version, you will have to activate the vnode feature yourself.
