In Cassandra 3.x, is there a way to set a limit on the cluster usage for each keyspace?

I am currently working on setting up a Cassandra cluster that will be used by different applications each with their own keyspace (in a multi-tenancy fashion).
So I was wondering if I could limit the usage of my cluster for each keyspace individually.
For example, if keyspace1 is using 65% of the cluster resources, every new request on that keyspace would be put in a queue so it doesn't impact requests on the other keyspaces.
I know I can get statistics on each keyspace using nodetool cfstats (or its newer alias, nodetool tablestats), but I don't know how to take countermeasures.
"Cluster resources" is also a term that needs defining, since it could mean total CPU usage, JVM heap usage, or the proportion of writes/reads on each keyspace in the cluster at a given instant.
Also, if you have strategies for avoiding this kind of situation in the first place, I'd be glad to hear about them!

No, Cassandra doesn't have such functionality. That's why it's recommended to set up separate clusters to isolate tenants from noisy neighbors...
Theoretically you could do this on Docker/Kubernetes/..., but it could take a lot of resources to build something that works reliably.
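If you did go the container route, per-tenant resource caps might look something like the following (a minimal sketch, assuming one cluster per tenant; the container name, image tag, and limits are illustrative, not a tested recommendation):
# Run a dedicated Cassandra cluster per tenant, with hard CPU and memory caps
docker run -d --name cassandra-tenant1 \
  --cpus=4 --memory=16g \
  cassandra:3.11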

Related

Cassandra: how to properly implement "global" back-pressure with multiple applications?

As you know, with Cassandra, when nodes are overloaded it may seriously hurt your production, depending on the required consistency: nodes might become unresponsive, the entire daemon might crash, hints might fill up your data mount point, and so on.
So the keyword here is back-pressure.
To apply appropriate back-pressure with Spark on Cassandra, the following properties are especially relevant:
--conf "spark.cassandra.output.throughputMBPerSec=2"
--total-executor-cores 24
(There are also similar back-pressure options with the DataStax driver, or cqlsh. You basically limit the throughput per core to apply some back-pressure.)
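For context, a full submit invocation combining those two knobs might look like this (a sketch; the class and jar names are placeholders):
# Throttle this job's writes to Cassandra and cap its core usage
spark-submit \
  --total-executor-cores 24 \
  --conf "spark.cassandra.output.throughputMBPerSec=2" \
  --class com.example.IngestJob \
  ingest-job.jar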
Let's say I found the global write throughput of my Cassandra cluster, and I set appropriate settings for my application1, which works fine.
BUT the challenge remains that there are many developers on a Cassandra cluster. So at a given time I may have Spark application1, application2, application3, ... running concurrently.
Question: what are my options to ensure that the write throughput at a given time (no matter how many applications run concurrently) is globally NOT going to put too much pressure on Cassandra, thus hurting my production workload?
Thank you
What I recommend folks do to separate analytical workloads is to spin up another (logical) data center. Sure, it could be in the same physical data center. But what you want is separate compute and storage, to keep the analytics load from interfering with the production traffic.
First, make sure that you're running with the GossipingPropertyFileSnitch (set in cassandra.yaml) and that your keyspaces are using the NetworkTopologyStrategy. You'll also want to make sure that your keyspace definition names a data center, and that your production applications/services are configured to use that data center (ex: dc1 below) as their default DC:
ALTER KEYSPACE product_data WITH
REPLICATION={'class':'NetworkTopologyStrategy',
'dc1':'3'};
Once the new infra is up, install Cassandra and join the nodes to the cluster as a new DC by specifying the new name in the cassandra-rackdc.properties file. Something like:
dc=dc1_analytics
Next, adjust your keyspace(s) to replicate data to the new DC.
ALTER KEYSPACE product_data WITH
REPLICATION={'class':'NetworkTopologyStrategy',
'dc1':'3','dc1_analytics':'3'};
Run a repair/rebuild on the new DC, and then configure the Spark jobs to only use dc1_analytics.
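For example, on each node in the new DC you would stream the existing data over, and then pin the Spark jobs to the analytics DC (a sketch; the connector property name varies by connector version):
# On each node in dc1_analytics, stream existing data from dc1
nodetool rebuild -- dc1
# In the Spark job, pin the connector to the analytics DC
# (spark.cassandra.connection.local_dc in older connector versions)
--conf "spark.cassandra.connection.localDC=dc1_analytics"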

when to add nodes to cassandra cluster

What are the symptoms/signs that indicate that the existing cluster nodes are over capacity and that more nodes need to be added to the cluster? In other words, which performance symptoms should prompt adding more nodes?
It depends a lot on the configuration and your use cases. You should take a look at the different metrics from your existing cluster. A few metrics to keep an eye on include:
CPU usage
Query Latency
Memory (depending on how you are using the heap)
Disk usage
Based on these metrics, you should decide whether or not to scale out the cluster; the sketch below shows a few commands that expose them.
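A minimal set of checks, assuming shell access to a node (standard nodetool and OS tools; the data path shown is the default package-install location):
# Coordinator-level read/write latency percentiles
nodetool proxyhistograms
# Thread-pool backlog and dropped operations
nodetool tpstats
# Heap usage, load, and uptime for this node
nodetool info
# Disk usage on the data mount point
df -h /var/lib/cassandra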
These are the common scenarios that call for adding a new node:
Performance of the cluster is degraded: you are not getting the required throughput even after all the tunings.
You require more disk space. Generally you can increase disk space by adding a new disk, but beyond a limit (generally around 2 TB per node) it is advised to add a new node instead.
You have metrics at hand showing that performance is degrading. For example, you can use nodetool tablehistograms to identify read and write latency for a particular table, as shown below. If read/write latency is within your required bounds, you are good; if you see the system getting slower as traffic grows, that is a sign you should add a node to the cluster.
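For instance (the keyspace and table names are placeholders):
# Read/write latency percentiles, partition sizes, and cell counts for one table
nodetool tablehistograms my_keyspace my_table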

Frequent Compaction of OpsCenter.rollup_state on all the nodes consuming CPU cycles

I am using DataStax Cassandra 4.8.16, with a cluster of 8 DCs and 5 nodes in each DC, on VMs. For the last couple of weeks we have observed the performance issues below:
1) Increased drop count on the VMs.
2) LOCAL_QUORUM not achieved for some write operations.
3) Frequent compaction of OpsCenter.rollup_state and system.hints visible in OpsCenter.
I'd appreciate any help finding the root cause of this.
The presence of dropped mutations means that the cluster is heavily overloaded. It could be an increase in the main load, so that it plus the load from OpsCenter overloads the system - you need to look into statistics on the number of requests, latencies, etc., per node and per table, to see where the increase happened. Please also check the I/O statistics on the machines (for example, with iostat): queue sizes, read/write latencies, etc.
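A couple of quick checks along those lines (standard tools; the sampling interval is arbitrary):
# Dropped mutations and thread-pool backlog on this node
nodetool tpstats
# Extended I/O statistics every 5 seconds: queue sizes, await times, utilization
iostat -x 5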
Also, it's recommended to use a dedicated cluster to store OpsCenter metrics - it can be smaller, and it doesn't require an additional license for DSE. As the OpsCenter documentation puts it:
Important: In production environments, DataStax strongly recommends storing data in a separate DataStax Enterprise cluster.
Regarding VMs - it's usually not a recommended setup, but that heavily depends on the underlying hardware: number of CPUs, RAM, disk subsystem.

Measuring throughput in cassandra cluster

This may sound like an elementary question - but I am confused by all the literature.
I have a 3-node Cassandra cluster on 3.11.x, with 1 seed node.
We are testing brute-force write throughput in this setup with a single-threaded client sitting outside the cluster.
With nodetool and cqlsh at my disposal, how do I go about realistically assessing the following:
How much volume was processed by each node.
How much of the total time was consumed
a) by the actual flush + compaction at each node
b) by the cluster resolving the node/partition (hashing)
c) by network latency in chaperoning the data to the node
The best way would be monitoring Cassandra's JMX port with a tool like OpenNMS or Zabbix and comparing metrics such as mutations, compaction, etc. (there are hundreds of metrics there):
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsMonitoring.html
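If you only have nodetool at hand, a rough per-node view is available too (a sketch; the keyspace and table names are placeholders):
# Local write count and write latency for each table on this node
nodetool tablestats my_keyspace.my_table
# Stage-level activity and drops (mutation, flush, etc.)
nodetool tpstats
# Compactions currently in progress on this node
nodetool compactionstats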

What is the proper setup for spark with cassandra

After using and playing around with the Spark connector, I want to utilize it in the most efficient way for our batch processes.
Is the proper approach to set up a Spark worker on the same host as each Cassandra node? Does the Spark connector ensure data locality?
I am a bit concerned that a memory-intensive Spark worker will cause the entire machine to stop, and then I will lose a Cassandra node. So I'm a bit confused whether I should place the workers on the Cassandra nodes or separately (which means no data locality). What is the common way, and why?
This depends on your particular use case. Some things to be aware of:
1) CPU sharing: while memory will not be shared between Spark and Cassandra (the heaps will be separate), there is nothing stopping Spark executors from stealing time on C* CPU cores. This can lead to load and slowdowns in C* if the Spark process is very CPU-intensive; if it isn't, this isn't much of a problem.
2) Your network speed: if your network is very fast, there is much less value in locality than if you are on a slower network.
So you have to ask yourself: do you want a simpler setup (everything in one place), or a more complicated but more isolated one?
For instance, DataStax (the company I work for) ships Spark running colocated with Cassandra by default, but we also offer the option of having it run separately. Most of our users colocate, possibly because of this default; those who don't usually choose the separate setup for easier scaling.
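If you do colocate, one common mitigation is to cap how many cores Spark may take, leaving CPU headroom for Cassandra (a sketch; the numbers are assumptions for 16-core hosts, and the jar name is a placeholder):
# Cap Spark's CPU usage so Cassandra keeps headroom on shared hosts:
# 4 cores per executor, 24 cores total across the cluster (standalone mode)
spark-submit \
  --executor-cores 4 \
  --total-executor-cores 24 \
  my-batch-job.jar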
