Cassandra: Replacing a seed node without downtime

I have a 3-node Cassandra cluster with a replication factor of 3, with the nodes running in different AZs in AWS.
In my current setup I have all 3 nodes configured as seed nodes (1 node per AZ). So when a seed node goes down, how can I bring it back up without downtime?
I cannot think of a proper way to do it, because the first step is to remove the seed node from the seed list and do a rolling restart of all the servers. While doing that, there is a period when only one node is online, and since my application queries at QUORUM, it fails.
Is there a way to achieve this without downtime with only 3 replicas?
Thanks in advance.

Seed nodes are used for the initial discovery of the cluster's topology; after that, all nodes are discovered via gossip and continue to exchange information until the next restart. If your seed node simply went down, then just start it, and it will connect to the other seed nodes and get the cluster information from them.
Removing the seed node from the seed list and doing a rolling restart is required only if you completely remove the node and replace it with another (as described in the documentation).
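As a minimal sketch (assuming a packaged install where Cassandra runs as a service), bringing the downed seed node back is just:

    # On the seed node that went down -- simply start Cassandra again:
    sudo service cassandra start

    # From any node, wait until all three nodes show "UN" (Up/Normal)
    # before assuming QUORUM reads/writes are fully healthy again:
    nodetool status

No seed-list changes or rolling restarts are needed for a plain outage.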

Related

How to add a seed node in a Cassandra 3 cluster?

I was trying to move one of my nodes to a machine with higher specs (more memory and CPU). So I ran nodetool decommission, waited for the node to leave the ring, terminated the machine, and added a new one. After that I configured cassandra.yaml with:
cluster_name
listen_address
broadcast_rpc_address
- seeds: with the machines' IPs
After starting the Cassandra service, the seed node joined the ring right away and with a low load, which is really strange to me, since the other nodes took a long time to join the ring.
After 1h the seed node still has the same load.
What should I do to add the seed node?
Thanks in advance.
Yes, many newer users of Cassandra have the idea that a seed node is some mystical master-node equivalent. It's really not anything special. Essentially, a node needs to know about the cluster topology at start time, and the seeds property provides a list of nodes it can ask.
In theory, a new node can designate any existing node as its seed node. And that node could designate another node as its seed node, and so on. All a node does with its seed is use it to figure out the cluster topology.
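For illustration, the seeds list is just an entry under seed_provider in cassandra.yaml (the IPs here are hypothetical placeholders for any existing, reachable nodes):

    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "10.0.1.10,10.0.2.10"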
After starting the Cassandra service, the seed node joined the ring right away and with a low load, which is really strange to me, since the other nodes took a long time to join the ring.
After 1h the seed node still has the same load.
What should I do to add the seed node?
Seed nodes do not stream data. The extra steps required to get data onto a seed node are one of the main reasons it's not a good idea to designate all nodes as seed nodes.
You could just run nodetool repair or nodetool rebuild on the new seed node, and that would stream data to it. The problem with this approach is that the node will still be accepting requests (and probably failing them) while it is streaming data.
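A sketch of that option (the keyspace and datacenter names are hypothetical):

    # On the new seed node once it is up -- repair pulls in missing replicas:
    nodetool repair my_keyspace

    # Alternatively, rebuild streams data wholesale from a named source DC:
    nodetool rebuild -- existing_dc

Either way, the node serves (and may fail) reads while the streaming runs.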
The other approach would be to add the new node while specifying other existing node(s) in that node's seeds list.
Once it is up and has streamed data, you then have a couple of options:
1) Leave everything as-is, and any future nodes can use your new node in their seed lists.
2) If your other existing nodes have node(s) in their seed lists that don't exist anymore, you can update those to point to your new node as a seed.
The nice part about option #2 is that you can change it in cassandra.yaml without having to restart the nodes. This is because the only time you'll ever need that change is when a restart happens anyway; the seed node designation doesn't come into play during normal operations. A sketch of that edit is shown below.
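As a minimal sketch of option #2 (the IPs and the cassandra.yaml path are hypothetical; adjust for your install):

    # On each node whose seeds list still names the retired node,
    # swap in the new node's IP -- no restart needed right away,
    # since seeds are only read at startup:
    sudo sed -i 's/10.0.3.99/10.0.3.50/' /etc/cassandra/cassandra.yaml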
Hope that helps!

Cassandra 2.2.8: How to add a node to a Cassandra datacenter/cluster with minimal impact

I added a new node to a Cassandra cluster by making it a seed node and then started rebuilding it with the nodetool rebuild command. Although the node joined the cluster quickly, the rebuild process, which streams from all nodes in the selected datacenter, caused the whole DC's nodes to slow down. The impact on the application is severe; I'll have to stop the rebuild process in order to keep normal operation going.
Here I'm seeking advice on any ways/tricks to minimize the impact of the nodetool rebuild operation on the rest of the DC's nodes and on the application.
I'll much appreciate your suggestions - thanks for reading my message and for your help in advance.
You shouldn't make a new node a seed node while adding it. Seed nodes are used to bootstrap other nodes and join them to the cluster; making the new node a seed node will prevent it from properly bootstrapping into the cluster. Follow the steps provided in the Cassandra docs in the link below.
https://docs.datastax.com/en/archived/cassandra/2.0/cassandra/operations/ops_add_node_to_cluster_t.html
This is the best way to add a new node to the cluster.
Note: Make sure the new node is not listed in the -seeds list. Do not make all nodes seed nodes. Please read Internode communications (gossip).
As I understand it, you added the node as a seed node just so it would not bootstrap and would join the cluster instantly. While this approach is right in that the node joins the cluster quickly, the downside is that it will not bootstrap and hence will not copy the data from other nodes that it is responsible for. When you run a rebuild on that node, data is blindly copied (without any validation) from the other nodes, which can choke the existing nodes' throughput and your network pipeline. That approach is safe and recommended when adding a new DC, but not when adding nodes to an existing DC.
When you are adding a node, the simplest way is to add it using the procedure described here: https://docs.datastax.com/en/archived/cassandra/2.0/cassandra/operations/ops_add_node_to_cluster_t.html
When the node bootstraps, it will copy the data it needs from other nodes but will not start taking client connections until it has fully bootstrapped and validated the data. So add one node at a time and let it bootstrap, so that all the necessary data is copied and validated. After you are done adding the number of nodes you desire, run a cleanup on all the previously joined nodes to remove the keys they are no longer responsible for. A sketch of that sequence is below.
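A minimal sketch of that procedure (assuming a packaged install where Cassandra runs as a service):

    # 1) On the new node: make sure it is NOT in its own seeds list and that
    #    auto_bootstrap is true (the default), then start Cassandra:
    sudo service cassandra start

    # 2) From any existing node, wait for the new node to move from
    #    "UJ" (Up/Joining) to "UN" (Up/Normal):
    nodetool status

    # 3) Only after all new nodes have joined, reclaim space on each of the
    #    previously existing nodes, one at a time:
    nodetool cleanup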

Adding new node to Cassandra cluster

I have a 4 node cluster and will be adding an additional node in two days. We aren't using vnodes.
Just wondering the best way to rebalance the cluster after I'm done. Do I just bring the new node up and then start the nodetool move?
Or do I shut each node down, change the initial_token value for each one (using one of those generators to calculate the values for me) and then bring each node up?
I just want to know the simplest way to do this from the command line. The new node already has Cassandra installed, as it was initially a non-production server; I will delete the data off the node and change the config files accordingly for the new cluster it will now be a part of. I'm just unsure about the other steps.
From the page Adding or replacing single-token nodes, the simplest mechanism is to start the new node with its initial_token left empty in cassandra.yaml. This will make the cluster 'split the token range of the heaviest loaded node and position the new node there'. This won't give you a balanced cluster.
If you want a balanced cluster, then you have to go through the nodetool move, node restart, nodetool cleanup procedure you mentioned.
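For illustration, a sketch of the manual rebalance for the resulting 5-node, single-token cluster, assuming Murmur3Partitioner (token_i = i * 2**64 / 5 - 2**63; recompute for your partitioner and node count):

    # Balanced tokens for 5 nodes:
    #   node0: -9223372036854775808
    #   node1: -5534023222112865485
    #   node2: -1844674407370955162
    #   node3:  1844674407370955161
    #   node4:  5534023222112865484

    # On each existing node in turn, move it to its new token, then
    # purge the ranges it no longer owns:
    nodetool move -- -5534023222112865485
    nodetool cleanup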

Cannot change the number of tokens from 1 to 256

I am using Cassandra 2.0 and the cluster has been set up with 3 nodes. nodetool status and ring show all three nodes. I have specified tokens for all the nodes.
I followed the steps below to change the configuration on one node:
1) sudo service cassandra stop
2) updated cassandra.yaml (to update thrift_framed_transport_size_in_mb)
3) sudo service cassandra start
The node did not start successfully, and system.log shows the exception below:
org.apache.cassandra.exceptions.ConfigurationException: Cannot change the number of tokens from 1 to 256
What is the best mechanism to restart the node without losing the existing data on the node or in the cluster?
Switching from non-vnodes to vnodes has been a slightly tricky proposition for C*, and the mechanism previously provided for performing this switch (shuffle) is slightly notorious for instability.
The easiest way forward is to start fresh nodes (in a new datacenter) with vnodes enabled and to transfer data to those nodes via repair.
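A sketch of that new-datacenter path (the DC name is hypothetical; nodetool rebuild is the usual way to stream a whole new DC, and repair as mentioned above can then validate the data):

    # cassandra.yaml on each fresh node in the new datacenter:
    #   num_tokens: 256        <- vnodes enabled
    #   (leave initial_token unset)

    # Once the new nodes are up, run on each of them to stream from the old DC:
    nodetool rebuild -- old_dc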
I was also getting this error while trying to change the number of tokens from 1 to 256. To solve it I tried the following:
Scenario:
I have 4 node DSE (4.6.1) cassandra cluster. Let say their FQDNs are: d0.cass.org, d1.cass.org, d2.cass.org, d3.cass.org. Here, the nodes d0.cass.org and d1.cass.org are the seed providers. My aim is to enable nodes by changing the num_token attribute in the cassandra.yaml file.
Procedure to be followed for each node (one at a time):
1) Run nodetool decommission on the node: nodetool decommission
2) Kill the Cassandra process on the decommissioned node. Find the process id for DSE Cassandra using ps ax | grep dse and kill <pid>
3) Once the decommissioning of the node is successful, go to one of the remaining nodes and check the status of the cluster using nodetool status. The decommissioned node should not appear in the list.
4) Go to one of the active seed providers and type nodetool rebuild
5) On the decommissioned node, open the cassandra.yaml file and uncomment num_tokens: 256. Save and close the file. If this node was originally a seed provider, make sure its IP address is removed from the seeds: list in cassandra.yaml. If this is not done, the stale information it holds about the cluster topology will conflict with the new topology provided by the new seed node. On a successful start, it can be added to the seed list again.
6) Restart the remaining cluster, either using the corresponding option in OpsCenter or by manually stopping Cassandra on each node and starting it again.
7) Finally, start Cassandra on the decommissioned node using the dse cassandra command.
This should work.
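Condensed into commands, the per-node pass looks roughly like this (a sketch; <pid> is whatever ps reports for the DSE process):

    # One node at a time:
    nodetool decommission        # remove the node from the ring
    ps ax | grep dse             # find the DSE Cassandra pid
    kill <pid>

    # Edit cassandra.yaml: uncomment "num_tokens: 256" and, if this node
    # was a seed, remove its own IP from the "- seeds:" list.

    dse cassandra                # start Cassandra again
    nodetool status              # verify the node rejoined with 256 tokens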

Adding a new node to existing cluster

Is it possible to add a new node to an existing cluster in cassandra 1.2 without running nodetool cleanup on each individual node once data has been added?
It probably isn't, but I need to ask because I'm trying to create an application where each user's machine is a server, allowing for endless scaling.
Any advice would be appreciated.
Yes, it is possible. But you should be aware of the side-effects of not doing so.
nodetool cleanup purges keys that are no longer allocated to that node. According to the Apache docs, these keys count against the allocated data for that node, which can cause the auto-bootstrap process for the next node to not balance the ring properly. So depending on how you are bringing new user machines into the ring, this may or may not be a problem.
Also keep in mind that nodetool cleanup only needs to be run on nodes that lost token ranges to the new node - i.e. adjacent nodes, not all nodes in the cluster. A sketch is below.
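For illustration (the keyspace name is hypothetical):

    # Run on each neighbour that handed ranges to the new node, one at a time
    # (cleanup is I/O-heavy but safe to defer):
    nodetool cleanup

    # Or limit the pass to a single keyspace:
    nodetool cleanup my_keyspace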
