What would be the minimal and the maximum setup for a 90 TB Cassandra cluster? Kindly include the specs for the processor, switch, hard disks, and RAM. The number of nodes is 5. DataStax's Cassandra is going to be used, so I guess the in-memory features require a larger amount of RAM.
I found a document for determining the configuration of DataStax Cassandra nodes, available here:
http://www.datastax.com/documentation/cassandra/2.0/pdf/cassandra20.pdf
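Before picking hardware, it helps to sanity-check whether 5 nodes can hold 90 TB at all. A back-of-the-envelope sketch, under illustrative assumptions that are not taken from the DataStax guide (replication factor 3, and roughly half the disk kept free as compaction headroom):

```python
# Rough sizing for a 90 TB dataset on 5 nodes.
# Assumptions (illustrative): RF=3, ~50% disk headroom for compaction.

usable_data_tb = 90            # total unreplicated data
replication_factor = 3
compaction_headroom = 0.5      # fraction of disk kept free
nodes = 5

total_on_disk_tb = usable_data_tb * replication_factor
disk_per_node_tb = total_on_disk_tb / nodes / (1 - compaction_headroom)

print(f"On-disk data (replicated): {total_on_disk_tb} TB")
print(f"Raw disk needed per node:  {disk_per_node_tb} TB")
# 108 TB of raw disk per node -- far above the few TB per node that
# Cassandra deployments typically target, which suggests 5 nodes is
# far too few for this dataset.
```

Under those assumptions each node would need over 100 TB of disk, so the node count, not the per-node spec, is the first thing to revisit.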
Related
I am currently working on setting up a Cassandra cluster that will be used by different applications each with their own keyspace (in a multi-tenancy fashion).
So I was wondering if I could limit the resource usage of my cluster for each keyspace individually.
For example, if keyspace1 is using 65% of the cluster resources, every new request on that keyspace would be put in a queue so it doesn't impact requests on other keyspaces.
I know I can get statistics on each keyspace using nodetool cfstats, but I don't know how to take countermeasures.
"Cluster resources" is also a term that needs defining: it could mean total CPU usage, JVM heap usage, or the proportion of writes/reads hitting each keyspace at a given instant.
Also, if you have strategies to avoid getting into this kind of situation in the first place, I'd be glad to hear about them!
No, Cassandra doesn't have such functionality. That's why it's recommended to set up separate clusters to isolate tenants from noisy neighbors...
Theoretically you could do this with Docker/Kubernetes/..., but it could take a lot of resources to build something that works reliably.
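Since the server can't enforce per-keyspace limits, one workaround is to throttle on the client side, in front of the driver. A minimal sketch of the queueing idea from the question, using a token bucket per keyspace (the class, rates, and keyspace names are all hypothetical, not a Cassandra or driver API):

```python
import threading
import time

class KeyspaceThrottle:
    """Client-side rate limiter with one token bucket per keyspace.

    A sketch of the queueing idea: a request against a keyspace that has
    used up its budget blocks until tokens refill, so one hot keyspace
    cannot starve the others. Rates are requests per second and purely
    illustrative.
    """

    def __init__(self, rates_per_sec):
        self._lock = threading.Lock()
        self._rates = dict(rates_per_sec)
        self._tokens = dict(rates_per_sec)     # buckets start full
        self._last = time.monotonic()

    def acquire(self, keyspace):
        while True:
            with self._lock:
                now = time.monotonic()
                elapsed = now - self._last
                self._last = now
                # Refill every bucket in proportion to elapsed time,
                # capped at its configured rate (bucket size = rate).
                for ks, rate in self._rates.items():
                    self._tokens[ks] = min(rate, self._tokens[ks] + rate * elapsed)
                if self._tokens[keyspace] >= 1:
                    self._tokens[keyspace] -= 1
                    return
            time.sleep(0.01)  # wait for a refill instead of spinning

throttle = KeyspaceThrottle({"keyspace1": 100, "keyspace2": 500})
throttle.acquire("keyspace1")  # call before each request to that keyspace
```

This only shapes traffic from clients you control; it does nothing about load generated by other applications talking to the same cluster, which is why separate clusters remain the robust answer.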
We are running Spark/Hadoop on a different set of nodes than Cassandra. We have 10 Cassandra nodes and multiple Spark cores, but Cassandra is not running on Hadoop. Performance when fetching data from Cassandra through Spark (in YARN client mode) is not very good, and bulk data reads from HDFS are faster (6 minutes from Cassandra vs. 2 minutes from HDFS). Changing Spark-Cassandra parameters is not helping much either.
Would deploying Hadoop on top of Cassandra solve this issue and significantly improve read performance?
Without looking at your code: bulk reads in an analytics/Spark capacity are always going to be faster when going directly to files rather than reading from a database. The database offers other advantages such as schema enforcement, availability, and distribution control, but I think the performance difference you're seeing is normal.
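That said, since the question mentions Spark-Cassandra parameters, these are the connector settings usually tried first for scan-heavy reads. The names below follow Spark Cassandra Connector 2.x conventions and the values are illustrative; verify both against your connector version's reference before relying on them:

```python
# Hypothetical read-tuning knobs for the Spark Cassandra Connector
# (names per connector 2.x; values are illustrative starting points).
read_tuning = {
    # Smaller splits -> more Spark partitions -> more read parallelism.
    "spark.cassandra.input.split.size_in_mb": "64",
    # Rows fetched per round trip while scanning a partition.
    "spark.cassandra.input.fetch.size_in_rows": "5000",
    # Weaker consistency is usually acceptable for analytics scans.
    "spark.cassandra.input.consistency.level": "LOCAL_ONE",
}
# Applied via SparkSession.builder.config(key, value) for each pair.
```

Even well tuned, a token-range scan through Cassandra will rarely match a sequential HDFS read, so the 6-vs-2-minute gap may narrow but is unlikely to disappear.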
I would like to know about hardware limitations in cluster planning (in TBs) specific to my use case. I have read a few threads and documents on the subject, but some of the content seems to be over 5 years old, so I thought I'd give it another shot:
Use case: building a time-series Cassandra cluster with occasional bulk loads, each in the gigabytes, from data sources. However, the end users will mostly be reading data from the cluster; updates or deletes on rows will be quite rare.
I have an initial hardware configuration with me to setup Cassandra cluster:
2 × 12 cores
128 GB RAM
3.27 TB SAS HDD
This is the initial plan that I came up with:
Reflecting on the setup now, after reading the post:
should I divide my data across more nodes with less RAM, fewer vCPUs, and smaller HDDs?
If yes, what would be a good fit for my case?
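To reason about that split, a rough ceiling on the unique (unreplicated) data a cluster of these nodes can hold, under illustrative assumptions not stated in the question (RF=3, ~50% disk headroom kept free for compaction):

```python
# Rough ceiling on unique data for nodes with 3.27 TB of disk each.
# Assumptions (illustrative): RF=3, ~50% compaction headroom.

disk_per_node_tb = 3.27
replication_factor = 3
headroom = 0.5

def cluster_capacity_tb(nodes):
    usable_disk = disk_per_node_tb * nodes * (1 - headroom)
    return usable_disk / replication_factor

for n in (3, 5, 10):
    print(f"{n} nodes -> ~{cluster_capacity_tb(n):.2f} TB of unique data")
```

Whether 3.27 TB per node is "too dense" depends mostly on compaction and repair times for your write pattern; for a mostly-read time-series workload it is on the comfortable side of the usual guidance, so splitting into smaller nodes buys resilience and parallelism more than raw capacity.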
I currently have a cluster of 4 Spark nodes and 1 Solr node, and I use Cassandra as my database. I want to grow the cluster to 20 nodes in the medium term and 100 in the long term, but DataStax doesn't seem to support Mesos or YARN. How would I best manage CPU, memory, and storage across all these nodes? Is Mesos even necessary with 20 or 100 nodes? So far I couldn't find any example of this using DataStax. I usually don't have discrete jobs that need to be completed; instead I'm processing a continuous stream of data. That's why I'm even considering dropping DataStax, because in my opinion I couldn't manage this many nodes efficiently without YARN or Mesos, but maybe there is a better solution I haven't thought of. Also, I am using Python, so apparently YARN is my only option.
If you have any suggestions or best practice examples let me know.
Thanks!
If you want to run DSE with a supported Hadoop/YARN environment, you need to use BYOH (Bring Your Own Hadoop); read about it here. With BYOH you can either run the internal Hadoop platform in DSE, or you can run a Cloudera or HDP platform with YARN and anything else that is available.
I am trying to load around 2 million records into Cassandra through Spark. Spark has 4 executors and Cassandra has 4 nodes in the cluster, but it takes around 20 minutes to save all the data. Can anyone please help me make this a bit faster?
OK, I can see several issues with your configuration:
Running Cassandra in VM for performance benchmark
Spark NOT co-located (so no data locality ...)
In general, running Cassandra inside a virtual machine is not recommended for performance benchmarks; it is an anti-pattern. So your slow insertion rate is expected: you can't ask for better performance while running in VMs ...
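To put numbers on how slow this is, and to note the knobs worth trying once the VM and locality issues are fixed, here is a quick check. The connector setting names follow Spark Cassandra Connector 2.x conventions and the values are illustrative; verify them against your connector version:

```python
# Effective write rate implied by the reported numbers.
records = 2_000_000
minutes = 20
rows_per_sec = records / (minutes * 60)
print(f"{rows_per_sec:.0f} rows/sec")  # ~1667 rows/sec -- far below
# what even a small bare-metal cluster typically sustains.

# Hypothetical connector write settings to raise parallelism
# (names per connector 2.x; values are illustrative):
write_tuning = {
    # Async batches in flight per Spark task.
    "spark.cassandra.output.concurrent.writes": "10",
    # Let the connector size batches by partition key.
    "spark.cassandra.output.batch.size.rows": "auto",
    # Per-core throughput cap; raise only if the cluster keeps up.
    "spark.cassandra.output.throughput_mb_per_sec": "50",
}
```

But no amount of connector tuning will compensate for virtualized disks and the missing data locality called out above; fix those first and re-measure.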