Does Cassandra do resharding when a node stores too much data? - cassandra

All I learned is that Cassandra will do resharding when a new node joins the cluster.
Will Cassandra do resharding when a node gets too much data?
If Cassandra has such a resharding strategy, how can I disable it?

Does Cassandra do resharding when a node stores too much data?
No, it does not. When a node stores too much data, the failure policies take over, and (usually) the node shuts down.
Adding or removing a node are the only times that partition token ranges are recalculated, forcing data to be reassigned to other nodes.
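As a toy illustration of why only membership changes move data, here is a simplified single-token consistent-hashing sketch (a hypothetical model, not Cassandra's actual code): inserting a token splits exactly one existing range and leaves every other assignment untouched.

```python
# Toy single-token ring: each node owns the range from the previous
# token on the ring (exclusive) up to its own token (inclusive).
def ownership(tokens):
    """Map each token to the (start, end] range its node owns."""
    ordered = sorted(tokens)
    ranges = {}
    for i, tok in enumerate(ordered):
        ranges[tok] = (ordered[i - 1], tok)  # i == 0 wraps around
    return ranges

before = ownership([100, 200, 300])
after = ownership([100, 150, 200, 300])   # node with token 150 joins

# Only the range that previously belonged to token 200 is split;
# every other assignment is unchanged.
print(before[200])  # (100, 200)
print(after[150])   # (100, 150)
print(after[200])   # (150, 200)
print(after[300] == before[300])  # True
```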

Another answer by @Aaron already explains that resharding only happens when adding or removing nodes. I want to add that you can use virtual nodes (vnodes) so the partitions are smaller. However, you cannot change the number of vnodes on an already running physical node/machine.
To use vnodes, change the configuration in cassandra.yaml:
Remove or comment out initial_token.
Uncomment num_tokens and set it to the desired number of vnodes; 128 or 256 can be a good number.
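As a minimal sketch, the two relevant cassandra.yaml settings would look something like this (the value is illustrative):

```yaml
# cassandra.yaml
# initial_token:        # removed/commented out when using vnodes
num_tokens: 256         # number of vnodes this node will own
```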

Related

Cassandra adding new Datacenter with even token distribution

We have a 1 DC cluster running Cassandra 3.11. The DC has 8 nodes total with 16 tokens per node and 3 seed nodes. We use Murmur3Partitioner.
In order to ensure better data distribution in the upcoming cluster in another DC, we want to use the token allocation approach where you manually specify initial_token for the seed nodes and use allocate_tokens_for_keyspace for the non-seed nodes.
The problem is that our current datacenter's cluster is not well balanced, since we built it without a token allocation approach, so the tokens are currently not well distributed. I can't figure out how to calculate initial_token for the new seed nodes in the new datacenter. I probably cannot treat the token range of the new cluster as independent and calculate the initial tokens as I would for a fresh cluster. At this point I am very unsure how to proceed. Any help will be appreciated, thanks.
Currently, I am trying to draft a migration plan and have come to the part where I do not know what to do, and the documentation is not helpful.
There are scripts available to calculate the initial_token values; for example, you could use the one here to quickly calculate them:
https://www.geroba.com/cassandra/cassandra-token-calculator/
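Those calculators essentially spread tokens evenly across the Murmur3Partitioner range of -2^63 to 2^63 - 1. A rough sketch of the same calculation (a hypothetical helper, not any particular script; the offset parameter is one common way to shift a second DC so its tokens don't collide with the first):

```python
# Hypothetical helper mirroring what online token calculators do for
# Murmur3Partitioner: spread tokens evenly over [-2**63, 2**63 - 1].
def initial_tokens(node_count, tokens_per_node=1, offset=0):
    total = node_count * tokens_per_node
    step = 2**64 // total
    tokens = [-(2**63) + i * step + offset for i in range(total)]
    # Deal tokens round-robin so each node's ranges are spread
    # around the ring rather than contiguous.
    return [tokens[i::node_count] for i in range(node_count)]

# 3 seed nodes, one token each:
for node, toks in enumerate(initial_tokens(3)):
    print(f"node {node}: initial_token: {toks[0]}")
```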
You do have the ability to set allocate_tokens_for_keyspace and point it at a keyspace with the replication factor you plan to use for user-created keyspaces in the cluster. If you're adding a new DC, then you probably already have such a keyspace, and this should help you get better distribution. Remember to set this before bootstrapping nodes into the new DC.
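A sketch of what that might look like in cassandra.yaml on a non-seed node in the new DC (the keyspace name is a placeholder for one with your target replication factor):

```yaml
# cassandra.yaml (non-seed node joining the new DC)
num_tokens: 16
allocate_tokens_for_keyspace: my_app_keyspace  # placeholder keyspace
```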
Another option would be to avoid using vnodes entirely and go with a single-token architecture by setting num_tokens to 1. This gives you the ability to bootstrap nodes into the new DC, load/stream data, and then monitor the distribution and make changes as needed using 'nodetool move':
https://cassandra.apache.org/doc/3.11/cassandra/tools/nodetool/move.html
This method would require you to monitor the distribution and make changes to the token assignments as needed, and you'd want to follow up the move command with 'nodetool repair' and 'nodetool cleanup' on all nodes, but it gives you the ability to rectify uneven distribution quickly without bootstrapping new nodes. You would still want to use the same method for calculating the initial_token values with the single-token architecture, and set them before bootstrap.
I suspect either method could work well for you, but wanted to give you a second option.

Can I upgrade a Cassandra cluster swapping in new nodes running the updated version?

I am relatively new to Cassandra... both as a User and as an Operator. Not what I was hired for, but it's now on my plate. If there's an obvious answer or detail I'm missing, I'll be more than happy to provide it... just let me know!
I am unable to find any recent or concrete documentation that explicitly spells out how tolerant Cassandra nodes will be when a node with a higher Cassandra version is introduced to an existing cluster.
Hypothetically, let's say I have 4 nodes in a cluster running 3.0.16 and I want to upgrade the cluster to 3.0.24 (the latest version as of posting; 2021-04-19). For reasons that are not important here, running an 'in-place' upgrade on each existing node is not possible. That is: I cannot simply stop Cassandra on the existing nodes and then do a nodetool drain; service cassandra stop; apt upgrade cassandra; service cassandra start.
I've looked at the change log between 3.0.17 and 3.0.24 (inclusive) and don't see anything that looks like a major breaking change w/r/t the transport protocol.
So my question is: Can I introduce new nodes (running 3.0.24) to the c* cluster (comprised of 3.0.16 nodes) and then run nodetool decommission on each of the 3.0.16 nodes to perform a "one for one" replacement to upgrade the cluster?
Do I risk any data integrity issues with this procedure? Is there a specific reason why the procedure outlined above wouldn't work? What about if the number of tokens each node is responsible for is increased with the new nodes? E.g.: the 3.0.16 nodes equally split the keyspace over 128 tokens, but the new 3.0.24 nodes will split everything across 256 tokens.
EDIT: After some back/forth on the #cassandra channel on the Apache Slack, it appears as though there's no issue with the procedure. There were some other comorbid issues caused by other bits of automation that did threaten the data integrity of the cluster, however. In short, each new node was adding ITSELF to the list of seed nodes as well. This can be seen in the logs: This node will not auto bootstrap because it is configured to be a seed node.
Each new node failed to bootstrap, but did not fail to take new writes.
EDIT2: I am not on a k8s environment; this is 'basic' EC2. Likewise, the volume of data / node size is quite small, ranging from tens of megabytes to a few hundred gigs in production. In all cases, the cluster is fewer than 10 nodes. The case I outlined above was for a test/dev cluster, which normally has 2 nodes in each of two distinct racks/AZs, for a total of 4 nodes in the cluster.
Running bootstrap & decommission will take quite a long time, especially if you have a lot of data - you will stream all data twice, and this will increase the load on the cluster. The simpler solution would be to replace the old nodes by copying their data onto new nodes that have the same configuration as the old nodes, but with a different IP and with 3.0.24 (don't start that node!). Step-by-step instructions are in this answer; when it's done correctly you will have minimal downtime and won't need to wait for bootstrap/decommission.
Another possibility, if you can't stop the running nodes, is to add all the new nodes as a new datacenter, adjust the replication factor to include it, use nodetool rebuild to force copying of the data to the new DC, switch the application to the new datacenter, and then decommission the whole old datacenter without streaming the data. In this scenario you will stream the data only once. This approach also plays better if the new nodes have a different num_tokens value - it's not recommended to have different num_tokens on nodes of the same DC.
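A rough operational sketch of that sequence with cqlsh and nodetool (keyspace name, DC names, and replication factors are placeholders; this assumes the new nodes have already joined as DC 'dc2' and requires a live cluster):

```shell
# 1. Include the new DC in replication for each relevant keyspace:
cqlsh -e "ALTER KEYSPACE my_keyspace WITH replication = \
  {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3};"

# 2. On every node in dc2, stream the existing data from the old DC:
nodetool rebuild -- dc1

# 3. Once the application points at dc2, drop dc1 from replication,
#    then decommission its nodes:
cqlsh -e "ALTER KEYSPACE my_keyspace WITH replication = \
  {'class': 'NetworkTopologyStrategy', 'dc2': 3};"
```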
P.S. It's usually not recommended to make changes to cluster topology when you have nodes of different versions, but maybe it could be OK for 3.0.16 -> 3.0.24.
To echo Alex's answer, 3.0.16 and 3.0.24 still use the same SSTable file format, so the complexity of the upgrade decreases dramatically. They'll still be able to stream data between the different versions, so your idea should work. If you're in a K8s-like environment, it might just be easier to redeploy with the new version and attach the old volumes to the replacement instances.
"What about if the number of tokens each node was responsible for was increased with the new nodes? E.G.: 0.16 nodes equally split the keyspace over 128 tokens but the new nodes 0.24 will split everything across 256 tokens."
A couple of points jump out at me about this one.
First of all, it is widely recognized by the community that the default num_tokens value of 256 is waaaaaay too high. Even 128 is too high. I would recommend something along the lines of 12 to 24 (we use 16).
I would definitely not increase it.
Secondly, changing num_tokens requires a data reload. The reason is that the token ranges change, and thus each node's responsibility for specific data changes. I have changed this before by standing up a new, logical datacenter and then switching over to it. But I would recommend not changing it if at all possible.
"In short, each new node was adding ITSSELF to list list of seed nodes as well."
So, while that's not recommended (every node a seed node), it's not a show-stopper. You can certainly run a nodetool repair/rebuild afterward to stream data to them. But yes, if you can get to the bottom of why each node is adding itself to the seed list, that would be ideal.

Cassandra lower vtokens

We have the need to reduce vtokens on a Cassandra cluster (2 nodes) to compensate for both machines having different storage capabilities. The replication factor is currently 1 so there is no replication of data happening.
Can't we simply reduce vtokens to 32 instead of the current 256 and restart the server? What'll happen if we try this? Will it stream the extra tokens, or will we lose data?
We read about decommissioning the node to copy all data to the bigger one, reconfiguring it to have fewer vtokens, deleting the Cassandra data locally, and making it rejoin the cluster - just wondering what happens if we try to reduce vtokens before decommissioning it?
Thanks!
You can't do that kind of balancing with vnodes. By virtue of statistics, you should have a pretty even distribution of data across your nodes even with 32 vnodes, and fewer vnodes will give you better search performance.
Also keep an eye on CASSANDRA-7032, this should let us go to even lower num_tokens without sacrificing data distribution.

Cassandra: How to find node with matching token for restoring to newer cluster?

I want to restore data from an existing cluster to newer cluster. I want to do so using the method, that of, copying the snapshot SSTables from old cluster to keyspaces of newer cluster, as explained in http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/operations/ops_backup_snapshot_restore_t.html.
The same document says, " ... the snapshot must be copied to the correct node with matching tokens". What does it really mean by "node with matching tokens"?
My current cluster has 5 nodes, with num_tokens: 256 on each node. I am going to create another cluster with the same number of nodes, the same num_tokens, and the same schema. Do I need to follow the ring order while copying SSTables to the newer cluster? How do I find the matching target node for a given source node?
I tried command "nodetool ring" to check if I can use token values to match. But this command gives all the tokens for each host. How can I get the single token value (which determines the position of the node in the ring)? If I can get it, then I can find the matching nodes as well.
With vnodes it's really hard to copy the sstables over correctly, because it's not just one assigned token that you have to reassign, but 256. To do what you're asking, you need to follow some additional steps described at http://datascale.io/cloning-cassandra-clusters-fast-way/. Basically, reassign the 256 tokens of each node to a new node in the other cluster so the ring is the same. The article you listed describes loading into the same cluster, which is a lot simpler because you don't have to worry about different topologies. Worth noting that even in that scenario, if a new node was added or a node was removed since the snapshot, it will not work.
Safest bet will be to use sstableloader; it will walk through the sstables and distribute the data to the appropriate nodes. It also opens up the possibility of making changes without worrying whether everything is correct, and it ensures everything lands on the correct nodes, so no worries about human error. Each node in the original cluster can just run sstableloader on each sstable against the new cluster, and you will parallelize the work pretty well.
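A hedged sketch of what that per-node invocation might look like (hostnames and the data path are placeholders; the trailing argument must point at a keyspace/table data directory, and the tool needs network access to the target cluster):

```shell
# Run on each node of the ORIGINAL cluster, once per table.
# -d takes one or more contact points in the NEW cluster.
sstableloader -d new-node1,new-node2 \
    /var/lib/cassandra/data/my_keyspace/my_table
```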
I would strongly recommend you use this opportunity to decrease the number of vnodes to 32. The 256 default is excessive and absolutely horrible for rebuilds, Solr indexes, and Spark, and most of all it ruins repairs. Especially if you use incremental repairs (the default), the additional ranges will cause many more anticompactions and more load. If you use sstableloader on each sstable, it will just work. Increasing your streaming throughput in cassandra.yaml will potentially speed this up a bit as well.
If by chance you're using OpsCenter, this backup-and-restore to a new cluster is automated as well.

Why and when to use Vnodes in Cassandra in real life production scenarios?

I understand that you don't have to rebalance vnodes, but when do we really use them in production scenarios? Do they function the same way as a physical single-token node? If so, then why use single-token nodes at all? Do vnodes help if I have a large amount of data and a large cluster (say 300 nodes)?
The main benefit of using vnodes is more evenly distributed data being streamed when bootstrapping a new node. Why? Well, when adding a new node, it will request the data in its token range. Optimally, the data it requests would be spread out evenly across all nodes, reducing the workload for each of the nodes sending data to the bootstrapping node (and also speeding up the bootstrap process).
Once you have a high number of physical nodes, like your example of 300, it would seem this benefit would be reduced (assuming no hot-spotting or data-partitioning issues). I'm not aware of any actual guidelines referencing the number of nodes at which to use or not use vnodes, other than what is in the documentation. Yes, vnodes are seen in production.
More information can be found here:
http://docs.datastax.com/en/datastax_enterprise/4.8/datastax_enterprise/config/configVnodes.html
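The streaming benefit described above can be illustrated with a small toy simulation (a simplified hypothetical model, not Cassandra's streaming logic): with one token, a bootstrapping node pulls from a single donor; with many vnodes, its ranges were previously owned by many different nodes.

```python
import random

# Toy model of bootstrap streaming: the previous owner of each range a
# new node takes over is the successor token's node on the ring.
def donor_count(nodes, tokens_per_node, seed=42):
    """How many distinct existing nodes stream data to one new node?"""
    rng = random.Random(seed)
    space = 2**64
    owners = {}                      # existing token -> node id
    for n in range(nodes):
        for _ in range(tokens_per_node):
            owners[rng.randrange(space)] = n
    new_tokens = [rng.randrange(space) for _ in range(tokens_per_node)]
    existing = sorted(owners)
    donors = set()
    for t in new_tokens:
        succ = next((e for e in existing if e > t), existing[0])  # wrap
        donors.add(owners[succ])
    return len(donors)

print(donor_count(8, 1))    # single-token node: exactly 1 donor
print(donor_count(8, 256))  # vnodes: typically most of the 8 nodes
```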
In addition to Chris' excellent answer, I'll make an addition. When you have a large cluster with vnodes, it is helpful to let Cassandra manage the token ranges. Without vnodes, you would end up having to size and re-specify the token range for each (existing and) new node yourself. With vnodes, Cassandra handles that for you.
Compare the difference in the steps listed in the documentation:
Adding a node without vnodes: http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsAddRplSingleTokenNodes.html
vs.
Adding with vnodes: http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_add_node_to_cluster_t.html
