Cassandra Token Distribution

I am going through Cassandra tutorials and came across this picture that represents a multinode Cassandra cluster -
Shouldn't the total number of tokens (256 in the above) be distributed across all three nodes, at around 85 tokens each?

No, the num_tokens parameter specifies how many token ranges each node will handle. From the cassandra.yaml description:
This defines the number of tokens randomly assigned to this node on the ring. The more tokens, relative to other nodes, the larger the proportion of data that this node will store. You probably want all nodes to have the same number of tokens assuming they have equal hardware capability.
Otherwise, what would happen if you had a cluster with more than 256 nodes? ;-)
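To make the arithmetic concrete, here is a quick back-of-the-envelope sketch (plain Python, nothing Cassandra-specific; the numbers simply mirror the example above):

```python
# num_tokens is a per-node setting, so the ring does not hold a fixed pool of 256 tokens.
nodes = 3
num_tokens = 256                      # the value set in each node's cassandra.yaml

total_ranges = nodes * num_tokens
print(f"{nodes} nodes x {num_tokens} vnodes each = {total_ranges} token ranges on the ring")
print(f"each node still owns roughly 1/{nodes} of the data, just split into {num_tokens} slices")
```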

Related

Cassandra Virtual Nodes

Although this has been asked and answered many times, I haven't found a good answer anywhere.
Neither in forums nor in the Cassandra docs.
How do virtual nodes work?
Suppose a node has 256 virtual nodes.
The docs say they are distributed randomly.
(Setting aside how that "randomly" is done... I have another, more urgent question.)
Is it right that every ("physical") Cassandra node is actually responsible for several distinct locations on the ring (256 locations)? Does that mean the "physical" node is sort of "spread" over the whole circle?
How does re-balancing work in that case, if I add a new node?
The ring will get an additional 256 virtual nodes.
How will those additional vnodes divide the data with the old ones?
Will they basically appear as additional "bicycle spokes" randomly spread around the whole ring?
There is a lot of info on the internet, but nobody gives a clear explanation...
Vnodes break up the available range of tokens into smaller ranges, defined by the num_tokens setting in the cassandra.yaml file. The vnode ranges are randomly distributed across the cluster and are generally non-contiguous. If we use a large number for num_tokens to break up the token ranges, the random distribution means it is less likely that we will have hot spots. Using statistical computation, the point where clusters of any size always had a good token range balance was when 256 vnodes were used. Hence, the num_tokens default value of 256 was recommended by the community to prevent hot spots in a cluster.
Ans 1: It is a range of tokens based on num_tokens. If you have set 256 (the default), you will get 256 token ranges.
Ans 2: Yes, when you add or remove nodes, the tokens are redistributed across the cluster based on the vnode configuration.
You may refer here for more details: https://docs.datastax.com/en/ddac/doc/datastax_enterprise/dbArch/archDataDistributeVnodesUsing.html
LetsNoSQL's answer is correct. See also https://stackoverflow.com/a/37982696/5209009. I'll only add a few more comments:
Yes, the "physical" node is spread across the token ring.
As explained in the link, any new node will take 256 new token ranges, dividing some of the existing ones. There is no other rebalancing; it relies on randomness to achieve some balance, which is why it uses a relatively large number of tokens (256) per node.
It's worth mentioning that there is another option. You can run vnodes with a smaller number of tokens per node (4-8) together with a token allocation algorithm. New tokens are not allocated randomly; a greedy algorithm is used so that the new tokens create a distribution that optimises the load on a given keyspace: it simply divides in half the token ranges containing most of the data. Since it's not random, it can work with a smaller number of tokens (4-8). It's not really relevant for small clusters, but for 100+ nodes it can be.
See https://www.datastax.com/blog/2016/01/new-token-allocation-algorithm-cassandra-30 and https://thelastpickle.com/blog/2019/02/21/set-up-a-cluster-with-even-token-distribution.html.
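Purely to make that idea concrete, here is a toy sketch in Python. This is not Cassandra's actual algorithm (the real allocator from CASSANDRA-7032 optimises replicated ownership for a target keyspace); it only illustrates the "split the widest range" intuition, with made-up token values.

```python
RING = 2 ** 64   # size of the token space, for illustration only

def allocate_tokens(existing_tokens, count):
    """Return `count` new tokens, each one halving the currently widest range."""
    tokens = sorted(existing_tokens)
    new_tokens = []
    for _ in range(count):
        widest_start, widest_width = tokens[0], 0
        for i, t in enumerate(tokens):
            nxt = tokens[(i + 1) % len(tokens)]
            width = (nxt - t) % RING or RING   # wrap-around width of the range after t
            if width > widest_width:
                widest_start, widest_width = t, width
        new = (widest_start + widest_width // 2) % RING
        new_tokens.append(new)
        tokens = sorted(tokens + [new])
    return new_tokens

# Three tokens already owned by existing nodes; allocate 4 tokens for a new node.
print(allocate_tokens([0, 2 ** 62, 2 ** 63], 4))
```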

The replication factor in Cassandra when creating a keyspace

When creating a new keyspace in Cassandra, we need to give a number for the replication factor.
Ex:
Does the number that we give as the replication factor determine the number of nodes that are initially created to store the replicated data?
Can anybody give a clear explanation of what the replication factor does?
It will not create the number of nodes specified. It is just the number of copies of the data. For instance, if your cluster has 5 nodes and the replication factor is 3, each write will be replicated (written) to 3 different nodes, depending on the token range it falls into. As for SimpleStrategy, it is an implementation that does not take racks or DCs into consideration when replicating.
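To make that concrete, here is a minimal sketch using the DataStax Python driver (the contact point and the keyspace name demo_ks are placeholders): the statement only records replication settings in the schema, it does not create or reserve any nodes.

```python
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])          # placeholder contact point
session = cluster.connect()

# RF 3 means "keep 3 copies of every partition"; SimpleStrategy places them on the
# next nodes around the ring without considering racks or data centres.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo_ks
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
```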
The explanation @Praneeth Gudumasu gave for replication_factor is correct. The number of nodes in a Cassandra cluster is not something you "give"; you can actually connect as many nodes as you wish: https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddNodeToCluster.html
and each time you connect a new node it is assigned a token range as per Cassandra's architecture. If you don't know how many nodes you need for your application, I suggest running a performance test with a data size approaching what you would insert in your real application, then executing some queries (concurrently) and seeing how many nodes give you a reasonable response time.

Cassandra vnodes replicas

Setting up the context:
Cassandra currently implements vnodes: 256 by default, tweakable in the cassandra.yaml file.
Vnodes, as I understand them, are token ranges/hash ranges, e.g. (x...y], where y is the token number of the vnode. Each physical node in Cassandra is assigned 256 random tokens, and each of those tokens is the boundary value of a hash/token range. The tokens assigned are within the range -2^63 to 2^63-1 (the range of hash values the Murmur3 partitioner may generate). So far so good.
Question:
1. Is a token range (vnode) a fixed range? Once set, is this token range copied to other Cassandra nodes to satisfy the replication factor, as if a token range (vnode) were a fundamental chunk of data (tokens) that travels around together? Only in the case of bootstrapping a new node into the cluster might this token range (vnode) break apart to be assigned to another node.
Riding on the last proposition (say the last proposition is true):
Then a vnode must only contain tokens that belong to a given keyspace.
Because each keyspace (a container of column families/tables) has a defined replication strategy and replication factor, and it is highly likely that the replication factors of keyspaces in a Cassandra cluster will vary.
Consider an example: the "system_schema" keyspace has an RF of 1, whereas I created a keyspace "test_ks" with RF 3. Say a row of the system_schema keyspace has token number 2 and a row of my test_ks has token number 5.
Then these two tokens can't be placed in the same token range (vnode). If a vnode is a contiguous chunk of token ranges, say tokens 2 and 5 belong to the vnode with token number 10; then vnode 10 has to be placed on 3 different physical nodes to satisfy RF=3 for test_ks, but we are unnecessarily placing token 2 on 3 different nodes when its RF is supposed to be 1.
Is the proposition correct that a vnode is dedicated to only a given keyspace?
Which boils down to: out of 256 tokens on a physical node, 20 (say) vnodes currently belong to the "system" keyspace and 80 (say) belong to test_ks.
Again riding on the above proposition, this means that each node should have information about the keyspace-wise vnodes currently available in the cluster.
That way, when a new write comes in for a keyspace, the coordinator node would locate all vnodes in the cluster for that keyspace and assign the new row a token number that falls within the token ranges of that keyspace. That being the case, can I find out how many vnodes currently belong to a keyspace in the entire cluster, or on a given node?
Please do correct me if I'm wrong.
I have been following the below blogs and videos to get an understanding of this concept:
https://www.scribd.com/document/253239514/Virtual-Nodes-Strategies-for-Apache-Cassandra
https://www.youtube.com/watch?v=GddZ3pXiDys&t=11s
Thanks in advance
There is no fixed token range; the tokens are just generated randomly. This is one of the reasons that vnodes were implemented - the idea being that with more tokens, it is more likely that the resulting token ranges will be evenly distributed across nodes.
Token generation was recently improved in 3.0, allowing Cassandra to place new tokens a little more intelligently (see CASSANDRA-7032). You can also manually configure tokens (see initial_token), although it can become tricky to keep things balanced when it comes time to expand the cluster unless you plan on doubling the number of nodes.
The total number of tokens in a cluster is the number of nodes in the cluster multiplied by the number of vnodes per node.
With regard to placement of replicas, the first copy of a partition is placed on the node that owns that partition's token. The additional n copies are placed sequentially on the next n nodes in the ring that are in the same data centre. There is no relationship between tokens and keyspaces.
When a new write comes into a coordinator node, the coordinator node determines which node owns the partition by hashing the partition key. Note that for better performance this can actually be done by the driver instead if you use TokenAwarePolicy. The coordinator sends the write to the node that owns the partition, and if the data needs to be replicated the coordinator node also writes the replicas to the next two nodes sequentially in the token-space.
For example, suppose that we have 3 nodes which each have one token: node1: 10, node2: 20 & node3: 30. If we write a record whose partition key hashes to 22, to a keyspace with RF3, then the first copy goes to node2, the second goes to node3 and the third goes to node1. Note that each replica is equally valid - there is nothing special about the "first" replica other than that it happens to be stored on the "first" replica node.
Vnodes do not change this process; they just split up each node's token ranges by allowing each node to have more than one token. For example, if our cluster now has 2 vnodes for each node, it might instead look like this: node1: 10, 25; node2: 20, 3; node3: 30, 21. Now our write that hashed to 22 goes to node3 (because it owns the range from 21 to 24), and the copies go to node1 and node2.
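A minimal sketch of that placement walk in plain Python (the function name and toy token values are made up, and it follows the ownership convention used in the example above; real Cassandra replica selection additionally applies rack/DC rules from the replication strategy):

```python
def replica_nodes(token_to_node, key_token, rf):
    """Pick rf distinct nodes: the owner of key_token, then the next nodes clockwise."""
    ring = sorted(token_to_node)                              # tokens in ring order
    # Owner: largest token <= key_token (wrapping to the last token if none is smaller).
    below = [i for i, t in enumerate(ring) if t <= key_token]
    start = below[-1] if below else len(ring) - 1
    replicas = []
    for step in range(len(ring)):
        node = token_to_node[ring[(start + step) % len(ring)]]
        if node not in replicas:
            replicas.append(node)
        if len(replicas) == rf:
            break
    return replicas

# One token per node, as in the first example: hash 22 lands on node2, then node3, node1.
print(replica_nodes({10: "node1", 20: "node2", 30: "node3"}, 22, rf=3))

# Two vnodes per node, as in the second example: hash 22 now lands on node3 first.
vnode_ring = {10: "node1", 25: "node1", 3: "node2", 20: "node2", 21: "node3", 30: "node3"}
print(replica_nodes(vnode_ring, 22, rf=3))
```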

Number of tokens attribute in Cassandra YAML

Is there any significance to the value used in the 'number of tokens' attribute in Cassandra's YAML file if all the nodes have the same value in their respective YAML files? Is it the relative value that makes the difference? E.g., is there any difference, whatsoever, between the below 2 scenarios (assume a cluster of n nodes):
Case 1: Number of tokens is set as 256 in each of the n nodes.
Case 2: Number of tokens is set as x where x is different than 256 in each of the n nodes.
This value can affect how well data is distributed between nodes - the bigger the value, the more uniform the data distribution will be. But this comes at the cost of additional overhead, because Cassandra will need to maintain all these virtual nodes. Depending on the number of virtual nodes & replication factor, the distribution may vary - for example, for RF=3 & vnodes=8, distribution may vary by ~10%. The recommendation can also differ between versions of Cassandra - for 3.x, the recommendation is from 8 to 32. You can find more information in this document.
P.S. If you're using DSE, you may also tweak the allocate_tokens_for_local_replication_factor for better allocation of data.
allocate_tokens_for_local_replication_factor is specific to DSE
Apache Cassandra has the parameter allocate_tokens_for_keyspace
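To get a feel for the balance claim above, here is a rough Monte Carlo sketch in plain Python (not Cassandra code; the node count, trial count and token space are arbitrary). It assigns random tokens and reports the worst max/min ownership ratio seen, which shrinks as the number of tokens per node grows:

```python
import random

def worst_imbalance(nodes, tokens_per_node, ring=2 ** 64, trials=100):
    worst = 1.0
    for _ in range(trials):
        tokens = sorted((random.randrange(ring), n)
                        for n in range(nodes) for _ in range(tokens_per_node))
        owned = [0] * nodes
        for i, (t, n) in enumerate(tokens):
            nxt = tokens[(i + 1) % len(tokens)][0]
            owned[n] += (nxt - t) % ring          # width of the range starting at t
        worst = max(worst, max(owned) / min(owned))
    return worst

for vnodes in (8, 32, 256):
    print(f"{vnodes:>3} tokens per node -> worst max/min ownership ratio ~ "
          f"{worst_imbalance(6, vnodes):.2f}")
```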

Cassandra token for three replicas

I'm trying to build two 3-node Cassandra clusters in separate data centers. I want to have NetworkTopologyStrategy replication between them, with a replication factor of 3 in each. Thus, I want each node in each data center to have the same records.
Question: what should my token assignment look like for each node (since I'm not actually partitioning, just replicating)?
Thank you!
If you're using Cassandra 1.2, use virtual nodes with automatic assignment.
If you're using 1.1 or earlier, use evenly distributed tokens for one DC:
0
56713727820156410577229101238628035242
113427455640312821154458202477256070484
(0, 1 and 2 times 2**127/3)
For the other DC, you can choose anything as long as it is also evenly distributed. Offsetting by 1 works:
1
56713727820156410577229101238628035243
113427455640312821154458202477256070485
Although for now the tokens don't matter since all nodes hold the same data, if you want to scale in the future it will help to have them already balanced.
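For reference, the values above can be reproduced with a couple of lines of Python (the old RandomPartitioner token space runs from 0 to 2^127 - 1):

```python
# Three evenly spaced RandomPartitioner tokens for DC1, then the same tokens offset by 1 for DC2.
dc1 = [i * (2 ** 127 // 3) for i in range(3)]
dc2 = [t + 1 for t in dc1]
print(dc1)   # [0, 56713727820156410577229101238628035242, 113427455640312821154458202477256070484]
print(dc2)   # [1, 56713727820156410577229101238628035243, 113427455640312821154458202477256070485]
```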
