Cassandra read consistency is ONE, but node connects to another node

A 3-node cluster with an RF of 3 means every node has all the data. Consistency is ONE.
So when node-1 is queried for some data, it should ideally be able to complete my query by itself, as it has all the data.
But when I checked how my query runs using 'tracing on', it shows that node-1 also contacts node-2, which is not needed as per my understanding.
Am I missing something here?
Thanks in advance.
Edit:
Added the output of 'tracing on'.
It can be seen in the trace output that node 10.101.201.3 contacted 10.101.201.4.

3 node cluster and RF of 3 means every node has all the data.
Just because every node has 100% of the data does not mean that every node owns 100% of the token ranges. The token ranges in a 3-node cluster will be split up evenly, roughly 33% each.
In short, node-1 may have all of the data, but it is only primarily responsible for about 33% of it. When the partition key is hashed, the query is likely being directed toward node-2 because node-2 is primarily responsible for that partition key, despite the fact that the other nodes hold secondary and tertiary replicas.
This was in cqlsh. Does this change if I'm running the query from application code?
Yes, because the specified load balancing policy (configured in app code) can also affect this behavior.
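If you want to verify this from application code, the cluster metadata can tell you which nodes are replicas for a given partition key. A minimal sketch, assuming the DataStax Java driver 3.x (to match the code later in this thread); the keyspace name "ks" and the key value are hypothetical:

import java.nio.ByteBuffer;
import java.util.Set;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.Metadata;

public class ReplicaLookup {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("10.101.201.3")
                .build();
        cluster.init();

        Metadata metadata = cluster.getMetadata();

        // Serialize the partition key as the driver would; a hypothetical
        // text partition key is used here.
        ByteBuffer partitionKey = ByteBuffer.wrap("some-id".getBytes());

        // With RF=3 on a 3-node cluster this returns all three hosts, but the
        // first replica in token order is the one primarily responsible for
        // the key, and the one a token-aware driver will prefer.
        Set<Host> replicas = metadata.getReplicas("ks", partitionKey);
        for (Host host : replicas) {
            System.out.println(host.getAddress());
        }

        cluster.close();
    }
}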

Related

Cassandra LOCAL_QUORUM is waiting for remote datacenter responses

We have a cluster with 2 datacenters (one in the EU and one in the US), with 4 nodes each, deployed in AWS.
The nodes in each datacenter are spread across 3 racks (availability zones).
In the cluster we have a keyspace test with replication: NetworkTopologyStrategy, eu-west: 3, us-east: 3.
In the keyspace we have a table called mytable with a single text column, id.
Now, we were doing some tests on the performance of the database.
In CQLSH, with a consistency level of LOCAL_QUORUM, we were doing some inserts with TRACING ON, and we noticed that the requests were not behaving as we expected.
From the tracing data we found out that the coordinator node was hitting, as expected, 2 other local nodes, and was also sending a request to one of the remote datacenter nodes. The problem here was that the coordinator was waiting not only for the local nodes (which finished in no time) but for the remote node too.
Since our 2 datacenters are geographically far away from each other, our requests were taking a very long time to complete.
Notes:
- This does not happen with DSE, but our understanding was that we don't need to pay crazy money for LOCAL_QUORUM to work as expected
There is a high probability that you're hitting CASSANDRA-9753, where a non-zero dclocal_read_repair_chance triggers a query against the remote DC. Check the trace for a hint that read repair was triggered for your query. If it was, you can set dclocal_read_repair_chance to 0; this parameter is deprecated anyway.
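If the trace does confirm read repair, the option can be zeroed out at the table level. A minimal sketch, assuming the test.mytable table from the question, a pre-4.0 cluster (the read repair chance options were removed in Cassandra 4.0), and the DataStax Java driver 3.x:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class DisableDcLocalReadRepair {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            // Stop probabilistic DC-local read repair from firing on this table.
            session.execute(
                    "ALTER TABLE test.mytable WITH dclocal_read_repair_chance = 0.0;");
        }
    }
}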
For functional and performance tests it would be better to use the driver instead of CQLSH, as most of the time that will be the way you interact with the database.
For this case, you may use a DC-aware policy like:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

// Pin the driver to the local datacenter so no remote nodes are contacted.
Cluster cluster = Cluster.builder()
        .addContactPoint("127.0.0.1")
        .withLoadBalancingPolicy(
                DCAwareRoundRobinPolicy.builder()
                        .withLocalDc("myLocalDC")  // your local DC name
                        .build())
        .build();
This is modified from the example here, with all the clauses that allow interaction with remote datacenters removed, since your goal is to keep calls within the local DC.

How is the coordinator node in cassandra determined by a client driver? [duplicate]

This question already has answers here:
how Cassandra chooses the coordinator node and the replication nodes? (2 answers)
Closed 3 years ago.
I don't understand the load balancing algorithm in Cassandra.
It seems that the TokenAwarePolicy can be used to route my request to the coordinator node holding the data. In particular, the documentation (https://docs.datastax.com/en/developer/java-driver/3.6/manual/load_balancing/) states that it works when the driver is able to automatically compute a routing key. If it can, I am routed to the coordinator node holding the data; if not, I am routed to some other node. I can still specify the routing key myself if I really want to reach the data without any extra hop.
What does not make sense to me:
If the driver cannot compute the routing key automatically, then why can the coordinator? Does it have more information than the client driver? Or does the coordinator node ask every other node in the cluster on my behalf? That would not scale, right?
I thought that the gossip protocol is used to share the topology of the ring among all nodes (AND the client driver). The client driver then has the complete ring structure and should be equal to any 'hop' node.
Load balancing makes sense to me when the client driver determines the N replicas holding the data and then prioritizes them (host distance, etc.), but it doesn't make sense to me when I reach a random node that is unlikely to have my data.
Token-aware load balancing happens only for statements that carry routing information. For example, for prepared queries the driver receives information from the cluster about the fields in the query, including the partition key(s), so it is able to calculate the token for the data and select the node. You can also specify the routing key yourself, and the driver will send the request to the corresponding node.
It's all explained in the documentation:
- For simple statements, routing information can never be computed automatically
- For built statements, the keyspace is available if it was provided while building the query; the routing key is available only if the statement was built using the table metadata, and all components of the partition key appear in the query
- For bound statements, the keyspace is always available; the routing key is only available if all components of the partition key are bound as variables
- For batch statements, the routing information of each child statement is inspected; the first non-null keyspace is used as the keyspace of the batch, and the first non-null routing key as its routing key
When a statement doesn't have routing information, the request is sent to a node selected by the nested load balancing policy. That node, acting as coordinator, parses the statement, extracts the necessary information, calculates the token, and forwards the request to the correct node.
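To illustrate the bound-statement case, here is a minimal sketch, again assuming the DataStax Java driver 3.x; the keyspace "ks" and table "users" are hypothetical:

import java.util.UUID;

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

public class TokenAwareExample {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withLoadBalancingPolicy(
                        new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().build()))
                .build();
        Session session = cluster.connect("ks");

        // Preparing the statement lets the driver learn the partition key
        // columns from the cluster metadata.
        PreparedStatement ps = session.prepare("SELECT * FROM users WHERE id = ?");

        // All partition key components are bound, so the routing key is
        // computable and the request goes straight to a replica.
        BoundStatement bound = ps.bind(UUID.randomUUID());
        session.execute(bound);

        cluster.close();
    }
}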

Cassandra DB - Node is down and a request is made to fetch data in that Node

If we configured our replication factor in such a way that there are no replica nodes (data is stored in one place/node only), and the node containing the requested data is down, how will the request be handled by Cassandra?
Will it return no data, or will other nodes gossip and somehow pick up the data from the failed node's storage and send the required response? If data is picked up, will the data transfer between nodes happen as soon as the node goes down (gossip protocol) or only after a request is made?
I have researched for a long time how gossip works and how Cassandra achieves high availability, but I was wondering about the availability of data in the case of "no replicas", since I do not want to spend additional storage on occasional failures, while at the same time I need availability and no data loss (even if delayed).
I assume that when you say there are "no replica nodes" you mean that you have set the replication factor to 1. In this case, if the request is a read, it will fail; if the request is a write, it will be stored as a hint, up to the maximum hint window, and replayed afterwards. If the node is down for longer than the hint window, that write will be lost. See: Hinted Handoff: repair during write path
In general, having only a single replica of data in your C* cluster goes against some of the basic design of how C* is meant to be used and is an anti-pattern. Data duplication is a normal and expected part of using C* and is what enables its high availability. RF=1 introduces a single point of failure: the server holding the data can go down for any of a variety of reasons (including routine maintenance), which will cause requests to fail.
If you are truly looking for a solution that provides high availability and no data loss, then you need to increase your replication factor (the standard I usually see is RF=3) and set up your cluster's hardware in such a manner as to reduce or remove potential single points of failure.
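For reference, raising the replication factor is a schema change followed by a repair, since existing data is not copied automatically. A minimal sketch, with a hypothetical keyspace mykeyspace and datacenter name datacenter1:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class IncreaseReplicationFactor {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            session.execute(
                    "ALTER KEYSPACE mykeyspace WITH replication = "
                    + "{'class': 'NetworkTopologyStrategy', 'datacenter1': 3};");
            // Then run "nodetool repair" on each node to stream the new replicas.
        }
    }
}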

How does Cassandra react when a node goes down while a write is being performed on that node?

We have a 2-node Cassandra cluster. The replication factor is 1 and the consistency level is 1. We are not using replication because the data we are inserting is very large for each record.
How does Cassandra react when the node a write is being performed on goes down? We are using the Hector API from a Java client.
My understanding is that Cassandra will perform the write on the other node, which is running.
No. With CL.ONE the write will not be performed if the inserted data belongs to the token range of the downed node. The consistency level defines how many replica nodes have to respond in order to accept the request.
If you want to be able to write even when the replica node is down, you need to use CL.ANY. ANY makes sure that the coordinator stores a hint for the request. Hints are stored in the system.hints table. After the replica comes back up, all hints will be processed and sent to it.
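For illustration, a write at CL.ANY could look like the following sketch. The question uses Hector, but the error message below is from the DataStax Java driver, which is assumed here; the table ks.t is hypothetical:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class WriteAtAny {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            SimpleStatement insert = new SimpleStatement(
                    "INSERT INTO ks.t (id, val) VALUES (1, 'x')");
            // ANY: the coordinator may accept the write as a hint even if the
            // owning replica is down.
            insert.setConsistencyLevel(ConsistencyLevel.ANY);
            session.execute(insert);
        }
    }
}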
Edit
You will receive the following error:
com.datastax.driver.core.exceptions.UnavailableException: Not enough replica available for query at consistency ONE (1 required but only 0 alive)

Which Couchbase node will serve the request?

I have a NodeJS service that talks to a Couchbase cluster to fetch data. The Couchbase cluster has 4 nodes (running on ip1, ip2, ip3, ip4), and the service runs on the same 4 servers. On all the NodeJS services my connection string looks like this:
couchbase://ip1,ip2,ip3,ip4
But whenever I try to fetch some document from bucket X, the console shows that the node on ip4 is doing the operation. No matter which NodeJS application makes the request, the same ip4 serves all the requests.
I want each NodeJS server to use its own Couchbase node so that RAM and CPU consumption is equal across all the servers, so I changed the order of the IPs in the connection string, but every time the request is served by the same ip4.
I created another bucket, put my data in it, and tried to fetch it, but again the request went to the same ip4. Can someone explain why this is happening, and whether it can cause high load on one of the nodes?
What do you mean by "I want each NodeJS server to use its own Couchbase node"?
In Couchbase, part of the active dataset lives on each node in the cluster, and the sharding is automatic. The 1024 active vBuckets (shards) for each bucket are spread out across all the nodes of the cluster, so with your 4 nodes there will be 256 active vBuckets on each node. Given the consistent hashing algorithm used by the Couchbase SDK, the SDK can tell from the key which vBucket that object goes into and, combined with the cluster map it got from the cluster, which node that vBucket lives on. So if things are configured correctly, an app will be getting data from every node in the cluster, since the data is evenly spread out.
On the filesystem, as part of the Couchbase install, there is a CLI tool called vbuckettool that takes an object ID and a cluster map as arguments. All it does is apply the consistent hashing algorithm to the cluster map, so you can predict where an object will go even if it does not exist yet.
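For illustration, here is a sketch of that mapping in Java (matching the other code in this thread). The CRC32-derived hash is an assumption based on public descriptions of the algorithm; vbuckettool remains the authoritative check:

import java.util.zip.CRC32;

public class VBucketMapper {
    // Assumed mapping: CRC32 the key, keep the high-order bits, then mask to
    // the vBucket count (1024 in a default Couchbase bucket).
    static int vbucketFor(String key, int numVBuckets) {
        CRC32 crc = new CRC32();
        crc.update(key.getBytes());
        long digest = (crc.getValue() >> 16) & 0x7fff;
        return (int) (digest & (numVBuckets - 1));
    }

    public static void main(String[] args) {
        // The vBucket id plus the cluster map tells the SDK which node owns the key.
        System.out.println(vbucketFor("user::123", 1024));
    }
}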
On a different note, best practice in production is not to run your application on the same nodes as Couchbase. It really is supposed to be separate, to get the most out of Couchbase's shared-nothing architecture, among other reasons.
