Does increasing the replication factor improve read latency? - cassandra

If we increase the replication factor, does it improve read latency in Cassandra?

It depends on your read consistency level. If you're reading at *_ONE, it won't make a difference. If you're reading at *_QUORUM, it will likely increase latency (more replicas to read and reconcile).
In short, increasing the RF will not improve (lower) read latency; in the best case it leaves latency unchanged.
On the other hand, if your cluster is serving tens of thousands of reads per second, adding nodes (while keeping the same RF) should improve read latency. This, of course, assumes that your current nodes are overwhelmed during high-throughput periods.
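To illustrate the consistency-level point, here is a minimal sketch with the Python driver (cassandra-driver); the contact point and the ks.users table are placeholders. It shows that the read consistency level, not the RF, determines how many replicas a read has to wait on:

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    # Placeholder contact point; connect without a default keyspace.
    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect()

    # A read at ONE waits for a single replica, so a higher RF changes nothing.
    read_one = SimpleStatement(
        "SELECT * FROM ks.users WHERE id = %s",
        consistency_level=ConsistencyLevel.ONE,
    )

    # A read at QUORUM waits for floor(RF/2) + 1 replicas; raise the RF and the
    # coordinator has more responses to collect and reconcile, so latency can
    # only stay the same or grow.
    read_quorum = SimpleStatement(
        "SELECT * FROM ks.users WHERE id = %s",
        consistency_level=ConsistencyLevel.QUORUM,
    )

    row = session.execute(read_one, (42,)).one()
    row = session.execute(read_quorum, (42,)).one()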

Related

Cassandra read latency increases while writing

I have a Cassandra cluster whose read latency increases during writes. The writes mostly happen via Spark jobs during the night, in huge bursts, using LOCAL_QUORUM; reads use LOCAL_ONE. Is there a way to reduce read latency while the writes are happening?
Cassandra Cluster Configs
10-node Cassandra cluster (5 in DC1, 5 in DC2)
CPU: 8 cores
Memory: 32 GB
Grafana Metrics
I can give some advice:
Use the LCS compaction strategy.
Prefer a round-robin load-balancing policy for reads (see the driver sketch after this list).
Choose the partition key wisely so that requests are not concentrated on a single partition.
Partition size also plays a role. Cassandra recommends keeping partitions small; however, I have tested partitions of 10,000 rows (each row around 800 bytes) and they performed better than partitions of 3,000 rows (or even 1 row). Very tiny partitions tend to increase CPU usage when the stored data is large in terms of row count, but very large partitions should be avoided as well.
The replication factor should be chosen strategically, and the write consistency level should be decided with the replication of all keyspaces in mind.
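A rough sketch of the compaction and load-balancing advice above, using the Python driver; the contact points, DC name "DC1", keyspace, and the events table are placeholders, and the exact policy depends on your topology:

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
    from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy

    # Token-aware, DC-aware round-robin: requests go to a replica in the local DC
    # and rotate across replicas instead of piling onto one coordinator.
    lbp = TokenAwarePolicy(DCAwareRoundRobinPolicy(local_dc="DC1"))

    read_profile = ExecutionProfile(load_balancing_policy=lbp,
                                    consistency_level=ConsistencyLevel.LOCAL_ONE)
    write_profile = ExecutionProfile(load_balancing_policy=lbp,
                                     consistency_level=ConsistencyLevel.LOCAL_QUORUM)

    cluster = Cluster(
        ["10.0.0.1", "10.0.0.2"],  # placeholder contact points
        execution_profiles={EXEC_PROFILE_DEFAULT: read_profile,
                            "writes": write_profile},
    )
    session = cluster.connect("my_keyspace")

    # LCS is a table-level setting, applied once via CQL.
    session.execute("ALTER TABLE events WITH compaction = "
                    "{'class': 'LeveledCompactionStrategy'}")

    # Bursty writes (e.g. the nightly Spark jobs) use the LOCAL_QUORUM profile.
    session.execute("INSERT INTO events (id, payload) VALUES (%s, %s)",
                    (1, "x"), execution_profile="writes")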

Recommended number of partitions in Cassandra

Although Cassandra allows -2^63 to +2^63-1 partitions, is there a recommended maximum number of partitions beyond which performance might suffer?
After about 1 billion partitions per node, full (non-incremental) repairs begin to have pretty serious issues with overstreaming, particularly with smaller partitions, since the validation compactions run slower.
Ideally I would recommend sizing by partition size, not count. At somewhere around 100 MB per partition you get more efficient compactions without too much of the expensive overhead of the partition index on reads. I wouldn't be too strict about it, though, as it is very hand-wavy and depends on a lot of factors. Focus on modeling for your queries first, then fine-tune if that model ends up with partitions that are too large or too many that are too small (hundreds of millions or more sub-1 KB partitions, or any multi-GB-ish partition, per node, not total).
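To make "model for your queries, then watch partition size" concrete, here is a hypothetical time-bucketed schema sketch (keyspace, table, and column names invented) that keeps each partition bounded instead of letting it grow without limit:

    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("my_keyspace")

    # One partition per (sensor_id, day): the day bucket caps rows per partition,
    # keeping partitions well under the ~100 MB guideline while still serving
    # "events for a sensor on a given day" from a single partition.
    session.execute("""
        CREATE TABLE IF NOT EXISTS sensor_events (
            sensor_id text,
            day       date,
            ts        timestamp,
            value     double,
            PRIMARY KEY ((sensor_id, day), ts)
        ) WITH CLUSTERING ORDER BY (ts DESC)
    """)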

Best practice to set Consistency level and Replication factor for Cassandra

If the replication factor and consistency level are set to QUORUM, then we can achieve availability and consistency, but performance degradation will increase as the number of nodes increases.
Is this statement correct? If yes, what is the best practice to get a better result, treating availability and consistency as high priorities while not letting performance decrease as the number of nodes increases?
Not necessarily. If you increase the number of nodes in your cluster, but do not alter your replication factor, the number of replicas required for single partition queries does not increase so you should therefore not expect performance to degrade.
With a 10-node cluster, replication factor 3, and CL QUORUM, only 2 replicas are required to meet quorum; the same is true for a 20-node cluster.
Things change if your query requires some kind of fan-out that touches all replica sets. Since you have more replica sets, your client or the coordinating C* node needs to make more requests to retrieve all of your data, which will impact performance.
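A quick sanity check of the quorum arithmetic behind this (plain Python, node counts made up):

    def quorum(replication_factor: int) -> int:
        # QUORUM for a single-partition query = floor(RF / 2) + 1
        return replication_factor // 2 + 1

    for nodes in (10, 20, 50):
        print(f"{nodes}-node cluster, RF=3: QUORUM waits on {quorum(3)} replicas")
    # Prints "waits on 2 replicas" for every cluster size: adding nodes alone
    # does not add replicas to a single-partition read.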

Hadoop/Spark : How replication factor and performance are related?

Setting aside all other performance factors, disk space, and NameNode objects, how can the replication factor improve the performance of MR, Tez, and Spark?
If we have, for example, 5 datanodes, is it better for the execution engine to set the replication to 5? What are the best and worst values?
How can this help aggregations, joins, and map-only jobs?
One of the major tenets of Hadoop is moving the computation to the data.
If you set the replication factor approximately equal to the number of datanodes, you're guaranteed that every machine will be able to process that data.
However, as you mention, NameNode overhead is very important, and more files or replicas cause slow requests. More replicas can also saturate your network in an unhealthy cluster. I've never seen anything higher than 5, and that was only for the company's most critical data; everything else was left at 2 replicas.
The execution engine doesn't matter too much, other than Tez/Spark outperforming MR in most cases. What matters more is the size of your files and the format they are stored in; that will be a major driver of execution performance.
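If you do want a hot, widely-read dataset replicated more broadly without raising the cluster-wide dfs.replication default, the per-path route is hdfs dfs -setrep. A small, illustrative wrapper; the path is a placeholder:

    import subprocess

    # -w waits until the requested replication factor is actually reached;
    # 5 matches the "one replica per datanode" scenario from the question.
    subprocess.run(
        ["hdfs", "dfs", "-setrep", "-w", "5", "/warehouse/dim_tables"],
        check=True,
    )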

nodetool repair across replicas of data center

Just want to understand the performance of 'nodetool repair' in a multi data center setup with Cassandra 2.
We are planning to have keyspaces with 2-4 replicas in each data center. We may have several tens of data centers. Writes are done with LOCAL_QUORUM/EACH_QUORUM consistency depending on the situation and reads are usually done with LOCAL_QUORUM consistency. Questions:
Does nodetool repair complexity grow linearly with number of replicas across all data centers?
Or does nodetool repair complexity grow linearly with a combination of the number of replicas in the current data center and the number of data centers? Vaguely, this model could sync data with each of the individual nodes in the current data center, but do an EACH_QUORUM-like operation against replicas in other data centers.
To scale the cluster, is it better to add more nodes in an existing data center or add a new data center assuming constant number of replicas as a whole? I ask this question in the context of nodetool repair performance.
To understand how nodetool repair affects the cluster or how the cluster size affects repair, we need to understand what happens during repair. There are two phases to repair, the first of which is building a Merkle tree of the data. The second is having the replicas actually compare the differences between their trees and then streaming them to each other as needed.
This first phase can be intensive on disk I/O, since it will touch almost all data on disk on the node on which you run the repair. One simple way to avoid repair touching the full disk is to use the -pr flag; with -pr, repair only has to touch disksize/RF worth of data instead of the whole disk. Running repair on a node also sends a message to all nodes that store replicas of any of these ranges to build Merkle trees as well. This can be a problem, since all the replicas will be doing it at the same time, possibly making them all slow to respond for that portion of your data.
The factor which determines how the repair operation affects other data centers is the replica placement strategy. Since you are going to need consistency across data centers (the EACH_QUORUM cases), it is imperative that you use a cross-DC replication strategy like NetworkTopologyStrategy in your case. For repair this means that you cannot limit yourself to the local DC while running the repair, since you have some EACH_QUORUM consistency cases. To avoid a repair affecting all replicas in all data centers, you should a) wrap your replication strategy with the dynamic snitch and configure the badness threshold properly, and b) use the -snapshot option while running the repair.
What this will do is take a snapshot of your data (snapshots are just hardlinks to existing sstables, exploiting the fact that sstables are immutable, thus making snapshots extremely cheap) and sequentially repair from the snapshot. This means that for any given replica set, only one replica at a time will be performing the validation compaction, allowing the dynamic snitch to maintain performance for your application via the other replicas.
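Put together, the repair invocation described above might look like the following sketch (Cassandra 2.x nodetool, driven one node at a time from a scheduler; host names and keyspace are placeholders):

    import subprocess

    HOSTS = ["cass-dc1-01", "cass-dc1-02", "cass-dc1-03"]
    KEYSPACE = "my_keyspace"

    for host in HOSTS:  # strictly one node at a time
        # -pr limits the node to its primary ranges (roughly disksize/RF of data
        # instead of the whole disk); -snapshot repairs sequentially from a cheap
        # hardlink snapshot so only one replica per set runs validation at once.
        subprocess.run(
            ["nodetool", "-h", host, "repair", "-pr", "-snapshot", KEYSPACE],
            check=True,
        )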
Now we can answer the questions you have.
Does nodetool repair complexity grow linearly with number of replicas across all data centers?
You can limit this by wrapping your replication strategy with the dynamic snitch and passing the -snapshot option during repair.
Or does nodetool repair complexity grow linearly with a combination of the number of replicas in the current data center and the number of data centers? Vaguely, this model could sync data with each of the individual nodes in the current data center, but do an EACH_QUORUM-like operation against replicas in other data centers.
The complexity will grow in terms of running time with the number of replicas if you use the approach above. This is because the above approach will do a sequential repair on one replica at a time.
To scale the cluster, is it better to add more nodes in an existing data center or add a new data center assuming constant number of replicas as a whole? I ask this question in the context of nodetool repair performance.
From a nodetool repair perspective, IMO this does not make any difference if you take the above approach, since it depends on the overall number of replicas.
Also, the goal of repair using nodetool is to ensure that deletes do not come back. The hard requirement for routine repair frequency is the value of gc_grace_seconds. In systems that seldom delete or overwrite data, you can raise the value of gc_grace with minimal impact on disk space. This allows wider intervals for scheduling repair operations with the nodetool utility. One of the recommended ways to avoid frequent repairs is to make records immutable by design. This may be important to you, since you need to run tens of data centers and ops will otherwise already be painful.
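Widening gc_grace_seconds is a per-table CQL change. A minimal sketch, with a hypothetical keyspace/table and a 30-day value in place of the 10-day default:

    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect()

    # 30 days (2592000 s) instead of the default 864000 s (10 days): repairs can
    # then be scheduled less frequently without deleted data reappearing.
    session.execute("ALTER TABLE my_keyspace.events WITH gc_grace_seconds = 2592000")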
