I have 3 Cassandra nodes. When I execute a query, 2 nodes give the same response but 1 node gives a different response.
Suppose I executed the following query:
select * from employee;
Node1 and Node2 return 2 rows, but Node3 returns 0 rows (an empty response).
How can I solve this issue?
1. You are not using NetworkTopologyStrategy (network-topology-aware replication).
2. Your replication factor is 2.
SimpleStrategy: use it only for a single datacenter and one rack. SimpleStrategy places the first replica on a node determined by the partitioner. Additional replicas are placed on the next nodes clockwise in the ring without considering topology (rack or datacenter location).
See this link:
https://docs.datastax.com/en/cassandra/3.0/cassandra/architecture/archDataDistributeReplication.html
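To confirm what a keyspace is actually using, you can check its replication settings from cqlsh (a quick sketch; 'your_keyspace' is a placeholder for the keyspace that holds the employee table):
DESCRIBE KEYSPACE your_keyspace;
-- or, on Cassandra 3.0+, query the schema table directly:
SELECT keyspace_name, replication FROM system_schema.keyspaces WHERE keyspace_name = 'your_keyspace';
The replication map in the output shows the strategy class and the replication factor per datacenter.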
I did the following steps, and the problem was solved; the data is now in sync on all 3 nodes:
run the command nodetool rebuild on each instance, and
update 'replication_factor': '2' to 'replication_factor': '3' in the keyspace definition.
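For reference, the replication change can be made from cqlsh roughly like this (a sketch; 'your_keyspace' is a placeholder for the keyspace that holds the employee table):
ALTER KEYSPACE your_keyspace
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
After the change, existing rows are not moved automatically; running nodetool repair (or nodetool rebuild, as above) on each node streams the data to the new replicas.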
Related
I have a Cassandra cluster of 3 nodes and I created a keyspace 'abcd' using SimpleStrategy and a replication factor of 1. Since I have chosen RF as 1, I assume that any writes to my node-1 should not be replicated across the other 2 nodes.
But when I inserted a record into the keyspace/table, I saw this new row getting inserted into all nodes in my cluster.
My question is: since I have chosen RF as 1 for this keyspace, I would have expected only one node (i.e. node-1) in this cluster to own this data, not the rest of the nodes.
Please correct me if my understanding is wrong.
Since your RF is 1, your data is written to only one node. But you can access that data by running the SELECT query from other nodes as well, because any node in a Cassandra cluster can access all the data present in the cluster.
If the node from which you are running the query does not have the data, it will fetch the data from other nodes and display the result.
You can check which exact node has the data by running nodetool getendpoints.
You will need to specify your keyspace, table name, and partition key value.
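For example, assuming the keyspace is named 'your_keyspace', the table is 'employee' as in the first question, and the partition key value is 1:
nodetool getendpoints your_keyspace employee 1
This prints the IP addresses of the replica nodes that own that partition.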
I have a 3 node cluster with 1 seed and nodes in different zones. All running in GCE with GoogleCloudSnitch.
I wanted to change the hardware on each node, so I started by adding a new seed in a different region, which joined the cluster perfectly. Then I ran "nodetool decommission" on each node and, once it was done and the node was down, removed it; "nodetool status" stated it was no longer in the cluster. I did this for all nodes, and lastly I did it on the "extra" seed in the different region, just to remove it and get back to a 3 node cluster.
We lost data! What can possibly be the problem? I saw a command, "nodetool rebuild", which I ran and actually got some data back. "nodetool cleanup" didn't help either. Should I have run "nodetool flush" prior to "decommission"?
At the time of running "decommission", most keyspaces had:
{'class' : 'NetworkTopologyStrategy', 'europe-west1' : 2}
Should I first have altered the keyspaces to include the new region/datacenter, which would be "'europe-west3' : 1" since only one node exists in that datacenter? I also noted that some keyspaces in the cluster had, by mistake:
{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 }
Could this have caused the loss of data? It seems that it was in the SimpleStrategy keyspaces that the data was lost.
(Disclaimer: I'm a ScyllaDB employee)
Did you first add new nodes to replace the ones you were decommissioning and configure the keyspace replication strategy accordingly? (You only mentioned the new seed node in your description; you did not mention whether you did this for the other nodes.)
Your data loss can very well be a result of the following:
Not altering the keyspaces to include the new region/zone with the proper replication strategy and replication factor (see the sketch after this list).
Keyspaces that were configured with the SimpleStrategy (not network-aware) replication policy and a replication factor of 1. This means that the data was stored on only 1 node, and once that node was taken down and decommissioned, you basically lost the data.
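A rough sketch of the first point (the keyspace name is a placeholder; the datacenter names are the ones from the question): before decommissioning anything, the keyspaces could have been altered to place replicas in the new datacenter, and the new node told to stream the existing data from the old one.
ALTER KEYSPACE your_keyspace
  WITH replication = {'class': 'NetworkTopologyStrategy', 'europe-west1': 2, 'europe-west3': 1};
-- then, on the node in the new datacenter: nodetool rebuild europe-west1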
Did you by any chance take snapshots and store them outside your cluster? If you did, you could try to restore them.
I would highly recommend reviewing these procedures for a better understanding of the proper way to perform what you intended:
http://docs.scylladb.com/procedures/add_dc_to_exist_dc/
http://docs.scylladb.com/procedures/replace_running_node/
I have configured a Cassandra cluster with 3 nodes:
Node1 (192.168.0.2), Node2 (192.168.0.3), Node3 (192.168.0.4)
I created a keyspace 'test' with a replication factor of 2:
CREATE KEYSPACE test WITH replication = {'class':'SimpleStrategy', 'replication_factor' : 2};
When I stop either Node2 or Node3 (one at a time, and both at once), I am able to do CRUD operations on the keyspace.table.
When I stop Node1 and try to update/create a row from Node4 or Node3, I get the following error although Node3 and Node4 are up and running:
com.datastax.driver.core.exceptions.NoHostAvailableException: All
host(s) tried for query failed (tried: /192.168.0.4:9042
(com.datastax.driver.core.exceptions.DriverException: Timeout while
trying to acquire available connection (you may want to increase the
driver number of per-host connections)))
I am not sure how Cassandra elects a leader if a leader node dies.
So, you are using replication_factor 2, which means only 2 nodes will have a replica of your keyspace (not all 3 nodes).
My first advice is to change the RF to 3.
You also have to pay attention to the consistency level you are using. If you have only 2 copies of your data (RF: 2) and you are using consistency level QUORUM, it will try to write the data to half of the replicas + 1, which with RF 2 means both nodes. So if 1 of those nodes is down, you will not be able to write/read that data.
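To see this from cqlsh (a sketch; the table name and columns are hypothetical, since the question only refers to keyspace.table):
CONSISTENCY QUORUM;
-- with RF 2, QUORUM = floor(2/2) + 1 = 2, so both replicas must be up
INSERT INTO test.mytable (id, value) VALUES (1, 'x');  -- fails if one replica of this partition is down
CONSISTENCY ONE;
INSERT INTO test.mytable (id, value) VALUES (1, 'x');  -- succeeds as long as one replica is up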
To verify where the data is replicated, you can look at how the ring is laid out in your cluster. As you are using SimpleStrategy, it copies the data in a clockwise direction, and in your case it is copied to the nodes at 192.168.0.2 and 192.168.0.3.
Take a look at the concepts of replication factor: http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/architecture/architectureDataDistributeReplication_c.html
And Consistency Level: http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/dml/dml_config_consistency_c.html
Great answer about RF vs CL: https://stackoverflow.com/a/24590299/6826860
You can use this calculator to find out if your setup has decent consistency. In your case the result is: "You can survive the loss of no nodes without impacting the application".
I think I wasn't clear in my response. The replication factor is about how many copies of your data will exist. The consistency level is how many copies your client will wait for before getting a response from the server.
Example: all your nodes are up. The client makes a CQL request with CL QUORUM; the server will copy the data to 2 nodes (floor(3/2) + 1, with RF 3) and reply to the client, and in the background it will copy the data to the third node as well.
In your example, if you shut down 2 nodes of a 3 node cluster, you will never achieve a QUORUM to make requests (with CL QUORUM), so you have to use consistency level ONE; once the nodes are up again, Cassandra will copy the data onto them. One thing that can happen is that before Cassandra copies the data onto the other 2 nodes, the client makes a request to node1 or node2 and the data is not there yet.
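A hedged note on that last point: rather than waiting for Cassandra's background mechanisms to bring the recovered nodes up to date, you can force the replicas back in sync once the nodes have rejoined (keyspace name taken from the question):
nodetool repair test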
I set up a keyspace like the following:
CREATE KEYSPACE name_of_keyspace WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'dc1' : 3, 'dc2' : 3};
If I want to follow the rule of this keyspace, do I need to have 3 or 4 nodes in dc1?
The reason I'm confused is that there seem to be two different types of nodes: one is the coordinator node and the other is a general node that can be chosen when a node fails.
Should I count the coordinator node as one of the general nodes and create only 3 nodes in dc1, or create 4 nodes to make this work?
In Cassandra, all nodes can act as a coordinator. For a request that requires a coordinator, the node the client connected to will act as the coordinator.
An RF of 3 with 4 nodes is fine for a DC, but it is not needed unless there is a capacity target you are trying to reach with the extra node. In one of my clusters we have 18 nodes for capacity with an RF of 3. That's generally how you scale Cassandra.
The coordinator node is chosen at query time. All nodes have the same capabilities.
When you run a cluster with RF 3 and run a query, then for a given partition you:
need only one node up if you read/write with consistency level ONE;
need two nodes up if you read/write with CL QUORUM or TWO;
need three nodes up if you read/write with CL ALL or THREE.
Note that reads/writes are issued to all nodes that hold (or should write) the data, but the driver only waits for the configured level.
Check this page for more information about consistency levels.
So, you can run a 3 node cluster with RF 3, and depending on what CL you read/write with, you can survive 0, 1, or 2 nodes being down.
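For completeness, the single-DC setup discussed above would be declared roughly like this (the DC name 'dc1' is taken from the question and has to match what your snitch reports):
CREATE KEYSPACE name_of_keyspace
  WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'dc1': 3};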
I have a Cassandra cluster with 2 nodes. I am using NetworkTopologyStrategy.
I was trying to increase the replication factor of a keyspace in Cassandra to 2. I did the following steps:
UPDATE KEYSPACE demo WITH strategy_options = {DC1:2,DC2:2}; on both the nodes
Then I ran the nodetool repair on both the nodes
Then I ran my Hector code to count the number of rows and columns in the database.
I get the following error: Unavailable Exception
Also when I run the command
./nodetool -h ip_address ring
I found that both nodes' ownership is 0%. Please tell me how I should fix that.
You mention "both nodes", which implies that you have two total nodes rather than two data centers as would be suggested by your strategy options. Specifying {DC1:2,DC2:2} would require a minimum of four nodes (two in each DC to satisfy the replication factor), although this would not be advised since essentially all your nodes would be points of failure.
A minimal Cassandra cluster should have at least three nodes, in which case an RF of two would allow one node to go down without bringing down the system. It sounds like you have a single cluster (rather than two data centers), so what you really need is one more node (3 total), RF=2, using SimpleStrategy instead of NetworkTopologyStrategy.
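In current CQL syntax (the question uses the older cassandra-cli form), that recommendation would look roughly like this for the 'demo' keyspace, followed by a repair on each node:
ALTER KEYSPACE demo
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};
-- then, on each node: nodetool repair demo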