Cassandra read latency increases while writing

I have a Cassandra cluster whose read latency increases during writes. The writes mostly happen via Spark jobs during the night, and they arrive in large bursts. Writes use LOCAL_QUORUM and reads use LOCAL_ONE. Is there a way to reduce read latency while these writes are happening?
Cassandra Cluster Configs
10-node Cassandra cluster (5 in DC1, 5 in DC2)
CPU: 8 Core
Memory: 32GB
[Grafana metrics screenshot]

I can give some advice:
Use the LeveledCompactionStrategy (LCS) for read-heavy tables; it keeps the number of SSTables a read has to touch low.
Prefer a round-robin load balancing policy for reads; in a multi-DC cluster, a DC-aware round-robin policy keeps LOCAL_ONE reads inside the local datacenter (see the sketch after this list).
Choose the partition key wisely so that requests are not concentrated on a single hot partition.
Partition size also plays an important role. Cassandra recommends keeping partitions small; however, in my tests, partitions of 10,000 rows (each row about 800 bytes) performed better than partitions of 3,000 rows (or even a single row). Very tiny partitions tend to increase CPU usage when the stored data is large in terms of row count. Very large partitions should be avoided as well.
The replication factor should be chosen strategically, and the write consistency level should be decided with the replication of all keyspaces in mind.
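A minimal sketch of the first two items above, assuming the DataStax Python driver (cassandra-driver); the keyspace/table my_ks.readings, the DC name and the contact point are hypothetical placeholders:

```python
# Sketch: token-aware, DC-aware reads at LOCAL_ONE, plus switching a table
# to LeveledCompactionStrategy. Names (my_ks.readings, DC1, 10.0.0.1) are
# hypothetical; adapt them to your cluster.
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy

profile = ExecutionProfile(
    # Token awareness sends each read to a replica that owns the data;
    # the DC-aware wrapper keeps LOCAL_ONE reads inside the local DC.
    load_balancing_policy=TokenAwarePolicy(DCAwareRoundRobinPolicy(local_dc="DC1")),
    consistency_level=ConsistencyLevel.LOCAL_ONE,
)
cluster = Cluster(["10.0.0.1"], execution_profiles={EXEC_PROFILE_DEFAULT: profile})
session = cluster.connect()

# Leveled compaction keeps reads touching few SSTables, at the cost of more
# compaction I/O while the nightly bulk writes land.
session.execute("""
    ALTER TABLE my_ks.readings
    WITH compaction = {'class': 'LeveledCompactionStrategy'}
""")
```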

Related

Increase Cassandra performance for read/write operations

I am using a 4-core, 8 GB system running a single node with replication factor = 1. It takes about 3 minutes to write approximately 350k rows to a table. I want to increase the speed of read and write operations and would like to explore as many options as possible.
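One common way to speed up a bulk load like this is a prepared statement combined with concurrent asynchronous writes instead of serial inserts. A rough sketch with the DataStax Python driver; the keyspace, table and columns (demo.events with id and payload) are made up for illustration:

```python
# Bulk-load sketch: prepared statement + concurrent async writes.
# The schema demo.events(id int PRIMARY KEY, payload text) is hypothetical.
from cassandra.cluster import Cluster
from cassandra.concurrent import execute_concurrent_with_args

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("demo")

insert = session.prepare("INSERT INTO events (id, payload) VALUES (?, ?)")
rows = [(i, f"payload-{i}") for i in range(350_000)]

# concurrency controls how many in-flight requests the driver keeps open;
# tune it against what a single 4-core node can absorb.
results = execute_concurrent_with_args(session, insert, rows, concurrency=100)
failures = [r for ok, r in results if not ok]
print(f"done, {len(failures)} failed writes")
```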

Recommended number of partitions in Cassandra

Although Cassandra allows -2^63 to +2^63-1 partitions, is there a recommended maximum number of partitions beyond which performance might suffer?
After about 1 billion partitions per node, full (non-incremental) repairs begin to have pretty serious issues with overstreaming, particularly with smaller partitions, as the validation compactions run more slowly.
Ideally I would recommend thinking in terms of partition size, not count. At around 100 MB per partition you get more efficient compactions without too much of the expensive partition-index overhead on reads. I wouldn't be too strict about it, though, as it is very hand-wavy and depends on many factors. Focus on modeling for your queries first, then fine-tune if the resulting model ends up with partitions that are too large or too many that are too small (hundreds of millions or more of sub-1 KB partitions, or any multi-GB-ish partitions, per node, not total).
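As a back-of-the-envelope illustration of sizing by partition size rather than count (the numbers are hypothetical; plug in your own schema's averages):

```python
# Rough partition-size estimate against the ~100 MB guideline mentioned above.
# Ignores per-cell overhead, clustering columns, tombstones and compression,
# so treat the result as an order-of-magnitude figure only.
TARGET_PARTITION_BYTES = 100 * 1024 * 1024  # ~100 MB

def estimated_partition_bytes(rows_per_partition: int, avg_row_bytes: int) -> int:
    return rows_per_partition * avg_row_bytes

rows_per_partition = 10_000   # hypothetical: rows sharing one partition key
avg_row_bytes = 800           # hypothetical average serialized row size

size = estimated_partition_bytes(rows_per_partition, avg_row_bytes)
print(f"~{size / 2**20:.1f} MB per partition;",
      "ok" if size <= TARGET_PARTITION_BYTES else "consider a finer partition key")
```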

How to design a big-scale VoltDB cluster with dozens of nodes and hundreds of partitions?

If I have 32 physical servers, each with a 32-core CPU and 128 GB of memory, and I want to build a VoltDB cluster from all 32 servers with K-Safety=2 and 32 partitions on each server, we will get a VoltDB cluster with 256 available partitions to store data.
It looks like there are too many partitions to split the tables across, especially when some tables don't have many records. But there would be too many copies of a table if we chose to replicate it.
If we build a much smaller cluster with a couple of servers from the beginning, the worry is that the cluster will have to scale out soon as the business grows. Actually, I don't know how VoltDB will reorganize data when a cluster expands to more nodes horizontally.
Do you have any comments? Appreciated.
It may be better to set sitesperhost to less than 32, so that some percentage of the cores are free to run threads for subsystems like export or database replication, or to handle non-VoltDB processes. Typically somewhere from 8 to 24 is the optimal number.
VoltDB creates the logical partitions based on sitesperhost, the number of hosts, and the kfactor. If you need to scale out later, you can add additional nodes to the cluster, which will increase the number of partitions, and VoltDB will gradually and automatically rebalance data from the existing partitions to the new ones. You must add multiple servers together if you have kfactor > 0; for kfactor=2, you would add servers in sets of 3 so that they provide their own redundancy for the new partitions.
Your data is distributed across the logical partitions based on a hash of the partition key value of a record, or of the corresponding input parameter used to route the execution of a procedure to a partition. This way, the client application code does not need to be aware of the number of partitions. It doesn't matter much which partition each record goes to, but you can assume that any records sharing the same partition key value will be located in the same partition.
Partition keys should be columns with high cardinality, such as ID columns; choosing them well will evenly distribute the data and procedure execution work across the partitions.
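As a toy illustration of that routing property (this is not VoltDB's actual hash function, only a sketch of why rows sharing a partition key value always land together):

```python
# Toy hash routing: NOT VoltDB's real partitioning algorithm, only a
# demonstration that equal partition-key values map to the same partition.
import hashlib

NUM_PARTITIONS = 256  # hypothetical count of logical partitions

def partition_for(partition_key: str) -> int:
    digest = hashlib.md5(partition_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_PARTITIONS

# Every record (and every single-partition procedure call) keyed on
# customer id "42" is routed to the same logical partition.
assert partition_for("42") == partition_for("42")
print(partition_for("42"), partition_for("43"), partition_for("44"))
```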
Typically a VoltDB cluster is sized based on the RAM requirements, rather than the need for performance, since the performance is very high on even a very small cluster.
You can contact VoltDB at info@voltdb.com or ask more questions at http://chat.voltdb.com if you'd like help with an evaluation or want to discuss cluster sizing and planning with an expert.
Disclaimer: I work for VoltDB.

Cassandra Read Timeouts on Specific Servers

We have a five-node Cassandra cluster with replication factor 3. We are experiencing a lot of read timeouts in our application. When we checked tpstats on each Cassandra node, we saw that three of the nodes have a lot of dropped read requests and high CPU utilisation, whereas on the other two nodes read request drops are zero and CPU utilisation is moderate. Note that the total number of read requests is almost the same on all servers.
After taking thread dumps we found that the reason for the high CPU utilisation is that Parallel GC runs far more often on the three nodes than on the other two, which pushes their CPU utilisation up. What we are not able to understand is why GC should run more on three nodes and less on the other two, when the distribution of our partition key and our queries is almost uniform.
Cassandra version is 2.2.3.

Cassandra vnodes performance overhead and changing the number of vnodes

We have a test cluster of 4 nodes and we've turned on vnodes. Reads seem somewhat slower than with the old method (initial_token). Is there some performance overhead to using vnodes? Do we have to increase/decrease the default num_tokens (256) if we only have 4 physical nodes?
Another scenario we would like to test is to change the num_tokens of the cluster on the fly. Is it possible, or do we have to recreate the whole cluster? If possible, how can we accomplish that?
We're using Cassandra 2.0.4.
It really depends on your application, but if you are running Spark queries on top of Cassandra, a high number of vnodes can significantly slow down your queries, by 2x to 5x or more. This is because Spark cannot subdivide queries across vnodes: each vnode becomes one Spark partition, and a high number of partitions slows down low-latency queries.
The recommended number of vnodes is more like 16. In theory this lets you grow a two-node cluster to 32 nodes max, which is more than enough of an expansion ratio for most folks.
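A quick arithmetic sketch of that trade-off (illustrative only; exact Spark split counts also depend on the connector's split-size settings):

```python
# Token ranges scale with nodes * num_tokens; fewer vnodes means fewer
# (larger) Spark partitions and more work per task.
nodes = 4

for num_tokens in (256, 16):
    token_ranges = nodes * num_tokens
    print(f"num_tokens={num_tokens}: ~{token_ranges} token ranges to scan")

# Expansion headroom with num_tokens=16: a 2-node cluster owns 32 tokens,
# so in theory it could be split across up to 32 nodes.
print("max nodes for a 2-node cluster at num_tokens=16:", 2 * 16)
```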
