Difference between consistency level and replication factor in Cassandra?

I am new to Cassandra and I wanted to understand the granular difference between consistency level and replication factor.
Scenario: if I have a replication factor of 2 and a consistency level of 3, how would the write operation be performed? When the consistency level is set to 3, it means the write is acknowledged to the client after it has been written to 3 nodes. If data is written to 3 nodes, doesn't that give me a replication factor of 3 and not 2? Are we sacrificing the replication factor in this case?
Can someone please explain where my understanding is wrong?
Thanks!

Replication factor: how many nodes should hold the data for this keyspace.
Consistency level: how many nodes need to respond to the coordinator node in order for the request to be successful.
So you can't have a consistency level higher than the replication factor, simply because you can't expect more nodes to answer a request than the number of nodes holding the data.
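The rule above can be sketched as a small check. This is a hypothetical helper for illustration only; `is_achievable` and the level-to-replica mapping are not part of any driver API:

```python
# Hypothetical helper illustrating the rule: a request can never require
# more acknowledgements than there are replicas of the data.
CL_REPLICAS = {"ONE": 1, "TWO": 2, "THREE": 3}

def is_achievable(consistency_level: str, replication_factor: int) -> bool:
    """True if the consistency level can be met for a keyspace with this RF."""
    if consistency_level == "QUORUM":
        required = replication_factor // 2 + 1
    else:
        required = CL_REPLICAS[consistency_level]
    return required <= replication_factor

print(is_achievable("THREE", 2))   # False: only 2 replicas of the data exist
print(is_achievable("QUORUM", 3))  # True: a quorum needs 2 of the 3 replicas
```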
Here are some references:
Understand cassandra replication factor versus consistency level
http://docs.datastax.com/en/cassandra/2.1/cassandra/architecture/architectureDataDistributeReplication_c.html
http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/dml/dml_config_consistency_c.html

You will get an error: Cannot achieve consistency level THREE.
You can do some further reading here

Consistency levels are of two types, write consistency and read consistency. A consistency level can be ONE, TWO, THREE or QUORUM (among others). If it's QUORUM, a majority of the replicas must be available for the operation; for ONE, TWO and THREE, the name itself gives you the definition.
Replication factor is the number of copies that you are planning to maintain in the cluster. If the strategy is SimpleStrategy, you have just one replication factor. If you are using NetworkTopologyStrategy with a multi-DC cluster, then you have to set a replication factor for each datacenter.
In your scenario, with an RF of 2 and a CL of THREE, the write will not work: the coordinator would need acknowledgements from three replicas, but only two replicas of the data exist, so the request is rejected (Cannot achieve consistency level THREE).
For your second question
When consistency level is set to 3, it means the results will be
acknowledged to the client after writing to the 3 nodes. If data is written to 3 nodes, then it gives me a replication factor of 3 and not 2..?
As far as I understand Cassandra, the consistency level does not change how many copies are stored. The write is always sent to RF replicas in total; the CL only determines how many of those replicas must acknowledge before the client gets a response.
So there is no question of sacrificing the RF.

Related

default consistency level and quorum setting in Cassandra and what is the best practice for tuning them

I just started learning Cassandra and am wondering whether there is a default consistency level and quorum setting. It seems to me quite a few parameters (like replication factor and quorum size) are tunable to balance consistency with performance. Is there a best practice for these settings? What are the defaults?
Thank you very much.
The default READ and WRITE consistency is ONE in Cassandra.
Consistency can be specified for each query. The CONSISTENCY command can be used from cqlsh to check the current consistency value or to set a new one.
Replication factor is the number of copies of the data required.
Deciding on a consistency level depends on factors like whether the workload is write heavy or read heavy, and how many simultaneous node failures must be tolerated.
Ideally LOCAL_QUORUM READ & WRITE will give you strong consistency.
quorum = (sum_of_replication_factors / 2) + 1
For example, using a replication factor of 3, a quorum is 2 nodes ((3 / 2) + 1 = 2), so the cluster can tolerate one replica being down. Similar to QUORUM, the LOCAL_QUORUM level is calculated based on the replication factor of the same datacenter as the coordinator node. Even if the cluster has more than one datacenter, the quorum is calculated with only the local replica nodes.
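The quorum formula above can be checked directly. This is a sketch; the function name is ours, and the division is integer division, as in the worked example:

```python
# quorum = (sum_of_replication_factors // 2) + 1
def quorum(sum_of_replication_factors: int) -> int:
    return sum_of_replication_factors // 2 + 1

print(quorum(3))  # 2: with RF=3, one replica may be down
print(quorum(6))  # 4: e.g. two datacenters with RF=3 each
```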
Consistency in cassandra
The following are excellent links and should help you to understand consistency levels and their configuration in Cassandra. The second link contains many pictorial diagrams.
https://docs.datastax.com/en/cassandra-oss/3.0/cassandra/dml/dmlConfigConsistency.html#dmlConfigConsistency__about-the-quorum-level
https://docs.datastax.com/en/cassandra-oss/3.0/cassandra/dml/dmlClientRequestsReadExp.html

Not enough replica available for query at consistency SERIAL (2 required but only 1 alive)

Experts,
I have the following configuration 3 nodes cluster (Cassandra 2.1):
- Replication factor of 2
- Consistency level ONE
- Driver consistency level SERIAL
- SimpleStrategy
- GossipingPropertyFileSnitch
With this configuration, if I bring down one node I get the following error:
Not enough replica available for query at consistency SERIAL (2 required but only 1 alive)
Data is distributed evenly across all the nodes, and nodetool status on the two running Cassandra nodes correctly shows that one node is down.
With CONSISTENCY ONE and 2 nodes up, why does it require both replica nodes to be up?
I also read the following about SERIAL driver consistency with respect to WRITE failures:
If one of three nodes is down, the Paxos commit fails under the following conditions:
CQL query-configured consistency level of ALL
Driver-configured serial consistency level of SERIAL
Replication factor of 3
This works if I set the replication factor to 3. But I don't think there should be a need to do so.
Am I missing something here?
You have hit one of the hidden gems of the Paxos protocol in Cassandra. Under the hood, Paxos works in a way that it uses a QUORUM-like consistency level for its calls.
Note that it complains about SERIAL consistency level in your error message instead of the consistency level ONE that you have set. LWT ignores what normal consistency level is set in most cases. It follows either SERIAL or LOCAL_SERIAL consistency level, which maps almost directly to a QUORUM or a LOCAL_QUORUM of nodes.
The quorum of two nodes is: two. Therefore you are getting this error message when one node is down.
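A toy illustration of why the error appears with RF=2 (plain Python, not driver code; the function name is ours): the Paxos rounds behind LWTs need a quorum of the replicas, regardless of the regular consistency level.

```python
def serial_replicas_required(replication_factor: int) -> int:
    # LWT needs a quorum of the replicas for its Paxos rounds
    return replication_factor // 2 + 1

replication_factor = 2
alive_replicas = 1  # one of the two replica nodes is down

required = serial_replicas_required(replication_factor)
if alive_replicas < required:
    print(f"Not enough replica available for query at consistency SERIAL "
          f"({required} required but only {alive_replicas} alive)")
```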

Best practice to set Consistency level and Replication factor for Cassandra

If Replication Factor and Consistency Level are set to QUORUM then we can achieve Availability and Consistency but Performance degrade will increase as the number of nodes increases.
Is this statement correct? If yes then what is the best practice to get better result, considering Availability and Consistency as high priority and not to decrease performance as number of nodes increases.
Not necessarily. If you increase the number of nodes in your cluster, but do not alter your replication factor, the number of replicas required for single partition queries does not increase so you should therefore not expect performance to degrade.
With a 10 node cluster, replication factor 3 and CL QUORUM, only 2 replicas are required to meet quorum; the same is true for a 20 node cluster.
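This is easy to see numerically (a sketch; the cluster size does not appear in the calculation at all):

```python
def replicas_for_quorum(replication_factor: int) -> int:
    return replication_factor // 2 + 1

# With RF=3 and CL QUORUM, 2 replicas are contacted per partition,
# whether the cluster has 10 nodes or 20.
for cluster_size in (10, 20):
    print(cluster_size, "nodes ->", replicas_for_quorum(3), "replicas needed")
```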
Things change if your query requires some kind of fan out that requires touching all replica sets. Since you have more replica sets, your client or the coordinating C* node needs to make more requests to retrieve all of your data which will impact performance.

Cassandra QUORUM write consistency level and multiple DC

I'm a bit confused about how QUORUM write selects nodes to write into in case of multiple DC.
Suppose, for example, that I have a 3 DC cluster with 3 nodes in each DC, and the replication factor is 2, so that the number of replicas needed to achieve QUORUM is 3. Note: this is just an example to help me formulate my question, not the actual configuration.
My question is the following: in case of write, how these 3 replicas will be distributed across all the DCs in my cluster? Is it possible that all 3 replicas will end up in the same DC?
The replication is defined at the key space level. So for example
create keyspace test with replication = { 'class' : 'NetworkTopologyStrategy', 'DC1' : 2, 'DC2' : 2, 'DC3' : 2 };
As you can clearly see, each DC will hold two copies of the data for that keyspace and not more. You could have another keyspace in the same cluster defined to replicate only in one DC and not the other two. So it's flexible.
Now for consistency: with 3 DCs and RF=2 in each DC, you have 6 copies of the data. By definition of quorum, a majority of those 6 replicas (which is (sum of the replication factors)/2 + 1) needs to acknowledge the write before claiming that the write was successful. So 4 nodes need to respond for a quorum write here, and these 4 could be a combination of nodes from any DC. Remember that the number of replicas matters for calculating quorum, not the total number of nodes in a DC.
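The calculation for the example keyspace above, sketched in plain Python:

```python
# RF=2 in each of the three datacenters, as in the keyspace definition
replication = {"DC1": 2, "DC2": 2, "DC3": 2}

total_replicas = sum(replication.values())  # 6 copies of each row
quorum = total_replicas // 2 + 1            # majority of all replicas
print(total_replicas, quorum)               # 6 4
```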
On a side note, in Cassandra RF=2 is as good as RF=1 in terms of fault tolerance. To simplify, let's imagine a 3 node single DC situation. With RF=2 there are two copies of the data, and in order to achieve quorum ((2 / 2) + 1 = 2), both nodes need to acknowledge the write. So both nodes always have to be available; even if one node fails, the writes will start to fail. Another node can take hints here, but your quorum reads are still bound to fail. So the tolerance for node failure is zero in this situation.
You could use LOCAL_QUORUM instead of QUORUM to speed up the writes. It's a sacrifice of consistency for speed. Welcome to "eventual consistency".
Consistency level determines the number of replicas on which the write must succeed before returning an acknowledgment to the client application.
Even at low consistency levels, the write is still sent to all replicas for the written key, even replicas in other data centers. The consistency level just determines how many replicas are required to respond that they received the write.
Source : http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/dml/dml_config_consistency_c.html
So if you set the consistency level to QUORUM and, as I assume, each DC has an RF of 2, then there are 3 * 2 = 6 replicas in total and the quorum is (6 / 2) + 1 = 4. Your write is still sent to all 6 replica nodes, but the coordinator waits for 4 of them to succeed before sending the acknowledgment to the client.

Cassandra Read/Write CONSISTENCY Level in NetworkTopologyStrategy

I have setup cassandra in 2 data centers with 4 nodes each with replication factor of 2.
Consistency level is ONE (set by default)
I was facing consistency issue when trying to read data at consistency level of ONE.
As I read in the DataStax documentation, the sum of the read and write consistency levels should be greater than the replication factor (for strong consistency).
I decided to change the write consistency level to TWO and the read consistency level to ONE, which resolves the inconsistency problem in a single data center.
But in the case of multiple data centers, the problem would be resolved by a consistency level of LOCAL_QUORUM.
How would I achieve a write of (LOCAL_QUORUM + TWO), so that I write a quorum in the local data center and also to 2 nodes?
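The rule cited from the documentation can be sketched as a check: reads are strongly consistent when (nodes written + nodes read) > replication factor, because the read and write sets must then overlap in at least one replica. The function is hypothetical, named by us for illustration:

```python
def is_strongly_consistent(write_nodes: int, read_nodes: int, rf: int) -> bool:
    # Overlap guarantee: if W + R > RF, every read touches at least one
    # replica that acknowledged the latest write.
    return write_nodes + read_nodes > rf

print(is_strongly_consistent(2, 1, 2))  # True: WRITE TWO + READ ONE, RF=2
print(is_strongly_consistent(1, 1, 2))  # False: WRITE ONE + READ ONE, RF=2
```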
Just write using LOCAL_QUORUM in the datacenter you want. If you have a replication factor of 2 in each of your datacenters, then the data you are writing in the "local" datacenter will eventually be replicated to the "other" datacenter (but you have no guarantee of when).
LOCAL_QUORUM means: "after the write operation returns, data has been effectively written on a quorum of nodes in the local datacenter".
TWO means: "after the write operation returns, data has been written on at least 2 nodes in any of the datacenters".
If you want to read the data you have just written with LOCAL_QUORUM in the same datacenter, you should use LOCAL_ONE consistency. If you read with ONE, then there is a chance that the closest replica is in the "remote" datacenter and therefore not yet replicated by Cassandra.
This also depends on the load balancing strategy configured at the driver level. You can read more about this here: https://datastax.github.io/java-driver/manual/load_balancing/
