Imagine we have a Cassandra cluster with 8 nodes, and we have stored 1 million objects, of which 100 are popular.
There is a high probability that these 100 objects end up distributed across all 8 nodes, which means the whole cluster becomes heavily loaded because of just a few popular objects, doesn't it?
In such a case, increasing the replication of those objects should have a significant effect on the system.
Or do you think that in such a situation a higher degree of replication would instead degrade system performance?
How does the replication factor affect the access distribution in such a system?
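One way to get intuition for the last question is a small simulation: hash 100 hot keys onto an 8-node ring and see how the read load on the busiest node changes as the replication factor grows. This is only a sketch under assumptions not stated in the question (SimpleStrategy-style placement and reads balanced uniformly across replicas):

```cpp
// Sketch: how replication factor (RF) spreads reads for "hot" keys across a ring.
// Assumptions (not from the question): 8 nodes, 100 hot keys, and the coordinator
// balances each read across all RF replicas of the key.
#include <cstdio>
#include <functional>
#include <random>
#include <string>
#include <vector>

int main() {
    const int nodes = 8;
    const int hot_keys = 100;
    const long reads_per_key = 10000;

    std::hash<std::string> hasher;
    std::mt19937 rng(42);

    for (int rf = 1; rf <= 4; ++rf) {
        std::vector<long> reads_per_node(nodes, 0);
        for (int k = 0; k < hot_keys; ++k) {
            // Primary replica is chosen by hashing the key; the remaining RF-1
            // replicas are the next nodes on the ring (SimpleStrategy-style).
            int primary = static_cast<int>(hasher("hotkey-" + std::to_string(k)) % nodes);
            std::uniform_int_distribution<int> pick(0, rf - 1);
            for (long r = 0; r < reads_per_key; ++r) {
                int replica = (primary + pick(rng)) % nodes;
                reads_per_node[replica] += 1;
            }
        }
        long max_load = 0, min_load = reads_per_key * hot_keys;
        for (long l : reads_per_node) {
            if (l > max_load) max_load = l;
            if (l < min_load) min_load = l;
        }
        std::printf("RF=%d  busiest node: %ld reads, least busy: %ld reads\n",
                    rf, max_load, min_load);
    }
    return 0;
}
```

With RF=1 every hot key is pinned to a single node, so a handful of keys can concentrate load there; with RF=3 or 4 each hot key's reads are shared by several replicas, which evens out the per-node load at the cost of extra storage and write amplification.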
I have a cluster of 4 Cassandra nodes. I recently added a new node, but data processing is taking too long. Is there a way to make this process faster? A screenshot of the nodetool output is attached.
Less data per node. Your screenshot shows 80TB per node, which is insanely high.
The recommendation is 1TB per node, 2TB at most. The logic behind this is bootstrap times get too high (as you have noticed). A good Cassandra ring should be able to rapidly recover from node failure. What happens if other nodes fail while the first one is rebuilding?
Keep in mind that the typical model for Cassandra is lots of smaller nodes, in contrast to SQL where you would have a few really powerful servers. (Scale out vs scale up)
So, I would fix the problem by growing your cluster to have 10X - 20X the number of nodes.
https://groups.google.com/forum/m/#!topic/nosql-databases/FpcSJcN9Opw
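To put rough numbers on that advice, here is a sketch of how the per-node data size shrinks as the ring grows (the 80 TB/node and 5-node figures come from the question above; the growth factors are illustrative, and it assumes data is rebalanced evenly):

```cpp
// Sketch: per-node data size after scaling out, assuming data is evenly
// redistributed across the larger ring (RF and total data size unchanged).
#include <cstdio>

int main() {
    const double tb_per_node_now = 80.0;  // from the nodetool screenshot
    const int nodes_now = 5;              // 4 original nodes + 1 new one
    const double total_tb = tb_per_node_now * nodes_now;

    const int growth_factors[] = {1, 10, 20, 40};
    for (int g : growth_factors) {
        int nodes = nodes_now * g;
        std::printf("%3d nodes -> about %6.1f TB per node\n", nodes, total_tb / nodes);
    }
    return 0;
}
```

Even at 10x-20x the node count you are still at roughly 4-8 TB per node, so the closer you want to get to the 1-2 TB guideline, the more nodes (or the less data) you need.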
Without discussing all the other performance factors, such as disk space and the number of NameNode objects, how can the replication factor improve the performance of MR, Tez, and Spark?
If we have, for example, 5 datanodes, is it better for the execution engine to set the replication factor to 5? What are the best and worst values?
How can this help aggregations, joins, and map-only jobs?
One of the major tenets of Hadoop is moving the computation to the data.
If you set the replication factor approximately equal to the number of datanodes, you're guaranteed that every machine will be able to process that data.
However, as you mention, NameNode overhead is very important, and more files or replicas cause slower requests. More replicas can also saturate your network in an unhealthy cluster. I've never seen anything higher than 5, and that was only for the most critical data of the company. Everything else was left at 2 replicas.
The execution engine doesn't matter too much, other than Tez/Spark outperforming MR in most cases. What matters more is the size of your files and the format they are stored in - that will be a major driver of execution performance.
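To make the locality argument concrete: if the r replicas of a block land on r distinct datanodes out of n, the chance that any particular node holds a local copy is roughly r/n (ignoring rack-aware placement). A small sketch with the 5-datanode example from the question:

```cpp
// Sketch: probability that an arbitrary datanode holds a local replica of a block,
// assuming r replicas are placed on r distinct nodes out of n.
#include <cstdio>

int main() {
    const int n = 5;  // datanodes, as in the question
    for (int r = 1; r <= n; ++r) {
        double p_local = static_cast<double>(r) / n;
        std::printf("replication=%d  chance a given node has a local copy: %.0f%%\n",
                    r, p_local * 100.0);
    }
    return 0;
}
```

At replication = n every node can read every block locally, which is the "guaranteed" case above; the price is n copies of the data, n times the write traffic, and the NameNode overhead already mentioned.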
Is there a relation between Cassandra's configuration parameters (given below with current values), Datastax's C++ driver configuration parameters (given below with current values), and the node's hardware specifications (number of processors, RAM, number of disks, etc.)?
Cassandra's Configuration Parameters (in YAML)
concurrent_reads set as 16
concurrent_writes set as 256
native_transport_max_threads set as 256
native_transport_max_frame_size_in_mb set as 512
Datastax's C++ Driver Configuration Parameters
cass_cluster_set_num_threads_io set as 10
cass_cluster_set_core_connections_per_host set as 1
cass_cluster_set_max_connections_per_host set as 20
cass_cluster_set_max_requests_per_flush set as 10000
Node's specs
No. of processors: 32
RAM: >150 GB
No. of hard disks: 1
Cassandra's Version: 3.11.2
Datastax C++ driver version: 2.7
RHEL version: 6.5
I have a cluster of 2 nodes and I've been getting dismal throughput (12,000 ops/second). 1 operation = read + write (I can't use the row cache). Is there any parameter that should have been set higher/lower (considering the nodes' specs)?
Please also note that my read+write application is multi-threaded (10 threads). Also, I'm doing asynchronous reads and asynchronous writes (using futures).
The replication factor is 2, both nodes are in the same DC, and the consistency level for both reads and writes is also 2.
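For context, the driver-side parameters above are applied when building the cluster object with the DataStax C++ driver; a minimal sketch (the contact point is a placeholder, error handling is trimmed, and the values simply mirror the question):

```cpp
// Sketch: applying the driver settings listed above with the DataStax C++ driver.
// The contact point is a placeholder; the numeric values mirror the question.
#include <cassandra.h>
#include <cstdio>

int main() {
    CassCluster* cluster = cass_cluster_new();
    CassSession* session = cass_session_new();

    cass_cluster_set_contact_points(cluster, "127.0.0.1");
    cass_cluster_set_num_threads_io(cluster, 10);
    cass_cluster_set_core_connections_per_host(cluster, 1);
    cass_cluster_set_max_connections_per_host(cluster, 20);
    cass_cluster_set_max_requests_per_flush(cluster, 10000);

    CassFuture* connect_future = cass_session_connect(session, cluster);
    if (cass_future_error_code(connect_future) != CASS_OK) {
        std::fprintf(stderr, "Unable to connect\n");
    }

    cass_future_free(connect_future);
    cass_session_free(session);
    cass_cluster_free(cluster);
    return 0;
}
```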
Some of the configuration properties in Cassandra are computed from available CPU cores and drives.
concurrent_reads = 16 * (number of drives)
concurrent_writes = 8 * (CPU cores)
It looks like you've done that, although I would question whether your 32 CPUs are all physical cores or hyper-threaded ones.
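Plugging the hardware from the question into those rules of thumb shows why the posted values of 16 and 256 line up, and what changes if the 32 CPUs turn out to be hyper-threads (the 16-core figure is an assumption):

```cpp
// Sketch: the rules of thumb above applied to the hardware in the question
// (1 data drive, 32 reported CPUs, possibly 16 physical cores if hyper-threaded).
#include <cstdio>

int main() {
    const int drives = 1;
    const int reported_cpus = 32;
    const int physical_cores = 16;  // assumption if the 32 CPUs are hyper-threads

    std::printf("concurrent_reads  = 16 * %d = %d\n", drives, 16 * drives);
    std::printf("concurrent_writes = 8 * %d  = %d (reported CPUs)\n",
                reported_cpus, 8 * reported_cpus);
    std::printf("concurrent_writes = 8 * %d  = %d (if only physical cores count)\n",
                physical_cores, 8 * physical_cores);
    return 0;
}
```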
I have a cluster of 2 nodes and I've been getting dismal throughput (12,000 ops/second).
Just my opinion, but I think 12k ops/sec is pretty good. Actually REALLY good for a two node cluster. Cassandra scales horizontally, and linearly at that. So the solution here is an easy one...add more nodes.
What is your target operations per second? Right now, you're proving that you can get 6k ops/second per node. That means if you add another node, the cluster should support 18k/sec. If you go to six nodes, you should be able to support 36k/sec. Basically, figure out your target and do the math.
One thing you might consider is trying ScyllaDB. Scylla is a drop-in replacement for Cassandra which trumpets the ability to hit very high throughput requirements. The drawback is that I think Scylla is only Cassandra 2.1 or 2.2 compatible ATM. But it might be worth a try based on what you're trying to do.
I'm doing some prototyping/benchmarking on Titan, a NoSQL graph database. Titan uses Cassandra as its back-end.
I've got one Titan-Cassandra VM running and two Cassandra VMs.
Each of them owns roughly 33% of the data (replication factor 1):
All the machines have 4GB of RAM and 4 i7 cores (shared).
I'm interested in all adjacent nodes, so I call Rexster (a REST API) with: http://192.168.33.10:8182/graphs/graph/vertices/35082496/both
These are the results (in seconds):
Note that for the two-node test, the setup was exactly the same as described above, except with one Cassandra node less. The two nodes (Titan-Cassandra and Cassandra) each owned 50% of the data.
Titan is fastest with 1 node, and performance tends to degrade as more nodes are added. This is the opposite of what distribution should accomplish, so obviously I'm doing something wrong, right?
This is my Cassandra config:
Cassandra YAML: http://pastebin.com/ZrsRdtuD
Node 2 and node 3 have the exact same YAML file. The only difference is the listen_address (this is equal to the node's IP)
How can I improve this performance?
If you need any additional information, don't hesitate to reply.
Thanks
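For what it's worth, per-call timings like the ones being compared here can be reproduced with a small client against the Rexster URL quoted above; a sketch using libcurl (discarding the response body and timing only the single request is my simplification):

```cpp
// Sketch: timing a single Rexster "both" query with libcurl.
// Build with: g++ time_rexster.cpp -lcurl
#include <cstdio>
#include <cstdlib>
#include <curl/curl.h>

// Discard the response body; we only care about the elapsed time here.
static size_t discard(char* /*ptr*/, size_t size, size_t nmemb, void* /*userdata*/) {
    return size * nmemb;
}

int main() {
    const char* url =
        "http://192.168.33.10:8182/graphs/graph/vertices/35082496/both";

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return EXIT_FAILURE;

    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard);

    CURLcode res = curl_easy_perform(curl);
    if (res == CURLE_OK) {
        double seconds = 0.0;
        curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME, &seconds);
        std::printf("request took %.3f s\n", seconds);
    } else {
        std::fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));
    }

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```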
We have a test cluster of 4 nodes, and we've turned on vnodes. It seems that reading is somewhat slower than with the old method (initial_token). Is there some performance overhead to using vnodes? Do we have to increase/decrease the default num_tokens (256) if we only have 4 physical nodes?
Another scenario we would like to test is changing the num_tokens of the cluster on the fly. Is that possible, or do we have to recreate the whole cluster? If it is possible, how can we accomplish it?
We're using Cassandra 2.0.4.
It really depends on your application, but if you are running Spark queries on top of Cassandra, then a high number of vnodes can significantly slow down your queries, by 2x to 5x or more. This is because Spark cannot subdivide queries across vnodes: each vnode results in one Spark partition, and a high number of small partitions slows down low-latency queries.
The recommended number of vnodes is more like 16. In theory that still lets you split a two-node cluster out to a maximum of 32 nodes, which is more than enough of an expansion ratio for most folks.
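The arithmetic behind that, taking the one-Spark-partition-per-vnode point from this answer and the 4-node cluster from the question (a sketch):

```cpp
// Sketch: approximate Spark partition count when each vnode (token range)
// becomes one Spark partition.
#include <cstdio>

int main() {
    const int nodes = 4;
    const int num_tokens_options[] = {256, 64, 16, 1};

    for (int num_tokens : num_tokens_options) {
        int token_ranges = nodes * num_tokens;  // ~one Spark partition each
        std::printf("num_tokens=%3d -> about %4d token ranges / Spark partitions\n",
                    num_tokens, token_ranges);
    }
    return 0;
}
```

With the default num_tokens of 256, a 4-node cluster already yields on the order of a thousand tiny partitions, versus 64 with 16 vnodes, which is where the 2x-5x difference comes from.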