Cassandra: operating a cluster with very different machines [duplicate]

This question already has an answer here: Cassandra with uneven hardware, how to configure? (1 answer). Closed 7 years ago.
I have a development machine that we are transitioning into production. The machine itself is not too bad:
HOST: HP - ProLiant BL460c G7 - CZJ20601RL
PROC: 2 x Intel(R) Xeon(R) CPU X5660 @ 2.80GHz; HT is on (total: 24 threads)
RAM : 6 x 2048 MB (total: 11895 MB)
DISK: 2 x 300 GB SAS
but the disks are rather small.
The two other production machines will have larger disks. How can I make sure that I don't fill up the disks of the first machine? And if they do fill up, what's going to happen?
I thought about reducing the number of "tokens" (vnodes): 256 on the two production machines and only 64 on this one.

I thought about reducing the number of "tokens" (vnodes)
Tuning the number of vnode tokens (num_tokens) is a good way to size the load on a cluster with different hardware.
However, it involves a fair amount of guessing. Ideally, if your high-end servers have 2x the CPU, 2x the memory and 2x the disk bandwidth, you can scale the token count by 2x as well.
In your case, it's more complicated because the hardware-scaling factor is not so obvious.
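As a rough illustration of that proportional sizing (a sketch of my own, with assumed disk figures for the production machines, not numbers from your setup), you could derive num_tokens from per-node disk capacity:

```python
# Sketch: derive num_tokens from per-node disk capacity.
# Disk figures for the production machines are assumptions for illustration.

def proportional_tokens(disk_gb_per_node, base_tokens=256, base_disk_gb=1000):
    """Scale each node's vnode count relative to a reference capacity."""
    return {node: max(1, round(base_tokens * gb / base_disk_gb))
            for node, gb in disk_gb_per_node.items()}

cluster = {
    "blade-dev": 300,   # the HP blade: 2 x 300 GB SAS, assuming RAID-1
    "prod-1": 1000,     # assumed size of the larger production disks
    "prod-2": 1000,
}
print(proportional_tokens(cluster))
# {'blade-dev': 77, 'prod-1': 256, 'prod-2': 256}
```

Keep in mind that num_tokens is read from cassandra.yaml only when a node first joins the ring; changing it afterwards means decommissioning and re-bootstrapping the node.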
How can I make sure that I don't fill up the disk of the first machine?
System monitoring. OpsCenter can also give you metrics about disk usage if you install its agents on each server.
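If you want something lighter than OpsCenter, a minimal threshold check on the data directory could look like the sketch below; the path and the 80% threshold are assumptions, so adjust them and wire the check into cron or your monitoring system as needed.

```python
# Minimal disk-usage check for a Cassandra data directory.
# The path and the 80% threshold are assumptions; adjust to your setup.
import shutil

DATA_DIR = "/var/lib/cassandra/data"   # default data_file_directories location
THRESHOLD = 0.80

usage = shutil.disk_usage(DATA_DIR)
used_ratio = usage.used / usage.total
if used_ratio > THRESHOLD:
    print(f"WARNING: {DATA_DIR} is {used_ratio:.0%} full")
```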

Related

Express (NodeJS) more cores vs. more nodes? (With Analysis and Examples)

When it comes to running Express (NodeJS) in something like Kubernetes, would it be more cost effective to run with more cores and fewer nodes, or more nodes with fewer cores each? (Assuming the cost of CPUs per node is linear, e.g. 1 node with 4 cores costs the same as 2 nodes with 2 cores each.)
In terms of redundancy, more nodes seems the obvious answer.
However, in terms of cost effectiveness, fewer nodes seem better, because with more nodes you are paying more for overhead and less for running your app. Here is an example:
1 node with 4 cores costs $40/month and is running:
10% Kubernetes overhead on one core
90% your app on one core and near 100% on the others
Therefore you are paying $40 for 90% + 3 x 100% = 390% of a core running your app.
2 nodes with 2 cores each cost a total of $40/month and are running:
10% Kubernetes overhead on one core (per node)
90% your app on one core and near 100% on the other (per node)
Now you are paying $40 for 2 x (90% + 100%) = 2 x 190% = 380% of a core running your app.
I am assuming that balancing the two at around 4-8 cores per node is ideal, so you aren't paying too much for each node, you scale nodes less often, and a high percentage of each node's compute runs your app. Is my logic right?
Edit: Math typo
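To make the arithmetic in the question explicit, here is a small sketch of the same model (the fixed ~0.1 core of overhead per node is an assumption taken from the example above, and pricing is assumed to be linear per core):

```python
# Sketch of the question's model: a fixed per-node overhead of ~0.1 core
# (an assumption taken from the example above) and constant $/core pricing.

def app_cores(total_cores, cores_per_node, overhead_per_node=0.1):
    """Cores left for the app after per-node overhead."""
    nodes = total_cores // cores_per_node
    return total_cores - nodes * overhead_per_node

for cores_per_node in (1, 2, 4, 8):
    usable = app_cores(8, cores_per_node)
    print(f"{cores_per_node} cores/node: {usable:.1f} of 8 cores run the app")
# Fewer, larger nodes lose less capacity to the fixed per-node overhead.
```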
Fewer, larger nodes can be more cost effective because a node does not come empty; it has to run some core components such as:
kubelet
kube-proxy
container-runtime (docker, gVisor, or other)
other DaemonSets.
Sometimes, 3 large VMs are better than 4 medium VMs in terms of making the best use of capacity.
However, the main decider is the type of your workload (your apps):
If your apps consume more memory than CPU (like Java apps), a node with [2 CPUs, 8 GB] is a better fit than one with [4 CPUs, 8 GB].
If your apps consume more CPU than memory (like ML workloads), choose the opposite: compute-optimized instances.
The golden rule 🏆 is to calculate the whole cluster capacity rather than looking at the individual capacity of each node, as sketched below.
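A minimal sketch of that whole-capacity calculation follows; the per-node system reserve of 1.5 GB for kubelet, kube-proxy, the container runtime and DaemonSets is an assumed figure, not a measurement:

```python
# Compare whole-cluster usable memory for different node shapes.
# The 1.5 GB per-node system reserve is an assumed figure for illustration.

SYSTEM_RESERVE_GB = 1.5

def usable_memory(nodes, mem_per_node_gb):
    return nodes * (mem_per_node_gb - SYSTEM_RESERVE_GB)

options = {
    "3 large VMs (16 GB each)": (3, 16),
    "4 medium VMs (12 GB each)": (4, 12),
}
for name, (n, mem) in options.items():
    print(f"{name}: {usable_memory(n, mem):.1f} GB usable of {n * mem} GB total")
# Same 48 GB purchased, but the 3-node option keeps more of it for workloads.
```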
In the end, you need to consider not only cost effectiveness but also:
Resilience
HA
Redundancy

High disk I/O on Cassandra nodes

Setup:
We have a 3-node Cassandra cluster with around 850 GB of data on each node. The Cassandra data directory is on LVM (currently consisting of 3 drives: 800 GB + 100 GB + 100 GB), and there is a separate non-LVM volume for cassandra_logs.
Versions:
Cassandra v2.0.14.425
DSE v4.6.6-1
Issue:
After adding the 3rd (100 GB) volume to the LVM on each node, disk I/O went very high on all nodes and they go down quite often. The servers also become inaccessible and need to be rebooted; they don't stay stable and we have to reboot every 10-15 minutes.
Other Info:
We have DSE recommended server settings (vm.max_map_count, file descriptor) configured on all nodes
RAM on each node : 24G
CPU on each node : 6 cores / 2600MHz
Disk on each node : 1000G (Data dir) / 8G (Logs)
As I suspected, you are having throughput problems on your disk. Here's what I looked at to give you background. The nodetool tpstats output from your three nodes had these lines:
Pool Name       Active   Pending   Completed   Blocked   All time blocked
FlushWriter          0         0          22         0                  8
FlushWriter          0         0          80         0                  6
FlushWriter          0         0          38         0                  9
The column I'm concerned about is the All Time Blocked. As a ratio to completed, you have a lot of blocking. The flushwriter is responsible for flushing memtables to the disk to keep the JVM from running out of memory or creating massive GC problems. The memtable is an in-memory representation of your tables. As your nodes take more writes, they start to fill and need to be flushed. That operation is a long sequential write to disk. Bookmark that. I'll come back to it.
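To illustrate the ratio being discussed, here is a small parsing sketch over the tpstats lines above (my own helper, not a DataStax tool):

```python
# Compute the "all time blocked" to "completed" ratio for FlushWriter
# from `nodetool tpstats` output captured as plain text.

tpstats_lines = """
FlushWriter 0 0 22 0 8
FlushWriter 0 0 80 0 6
FlushWriter 0 0 38 0 9
""".strip().splitlines()

for line in tpstats_lines:
    pool, active, pending, completed, blocked, all_time_blocked = line.split()
    ratio = int(all_time_blocked) / int(completed)
    print(f"{pool}: {all_time_blocked} blocked / {completed} completed = {ratio:.0%}")
# Here the ratios run from ~8% up to ~36%: a lot of blocking relative to
# completed flushes, which points at flushes regularly waiting on the disk.
```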
When flushwriters are blocked, the heap starts to fill. If they stay blocked, you will see the requests starting to queue up and eventually the node will OOM.
Compaction might be running as well. Compaction is a long sequential read of SSTables into memory and then a long sequential flush of the merge sorted results. More sequential IO.
So all these operations on disk are sequential. Not random IOPs. If your disk is not able to handle simultaneous sequential read and write, IOWait shoots up, requests get blocked and then Cassandra has a really bad day.
You mentioned you are using Ceph. I haven't seen a successful deployment of Cassandra on Ceph yet. It will hold up for a while and then tip over on sequential load. Your easiest solution in the short term is to add more nodes to spread out the load. The medium term is to find some ways to optimize your stack for sequential disk loads, but that will eventually fail. Long term is get your data on real disks and off shared storage.
I have told consulting clients this for years when using Cassandra: "If your storage has an ethernet plug, you are doing it wrong." It's a good rule of thumb.

Azure Websites - Scale Up vs. Scale Out

Has anyone seen any analysis or info on when it is ideal to scale out vs. scale up? When does one make more sense than the other?
Currently, 2 small instances will cost the same as one medium under both the standard and basic modes.
Is having 2 small instances, and thus 4 GB of RAM, the same as having 1 medium instance with 4 GB of RAM (but without an SLA)? And the same for cores? All the other features are the same.
Does either CPU pressure or memory pressure, two easy metrics, dictate which way to scale?
And, in this case, scaling out does not present an issue as far as apps/sites working on different machines.
When you can, always try to scale out rather than up. The chances of one VM going down due to a reboot/upgrade/etc. and causing catastrophic downtime are much bigger than zero, while the overhead of running two VMs and load-balancing between them is minimal, and the chances of having both VMs down at once are much, much smaller.
In addition, if you ever need 3 servers, scaling up to medium servers will not give you the right granularity.
Having 2 small instances of 1.75 GB each IS NOT the same as having 1 medium instance with 3.5 GB of RAM. It is better to have a medium instance because 3.5 GB is available to applications instead of just 1.75 GB. Also, each OS takes away some RAM, approximately 800-900 MB, so running two instances pays that OS overhead twice.
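Putting numbers on that RAM argument (the ~850 MB OS overhead per instance is the approximate figure quoted above, not a measured value):

```python
# Sketch of the RAM argument above. The ~850 MB OS overhead per instance
# is the approximate figure quoted in the answer, not a measured value.

OS_OVERHEAD_GB = 0.85

def app_ram(instances, ram_per_instance_gb):
    return instances * (ram_per_instance_gb - OS_OVERHEAD_GB)

print(f"2 small  (1.75 GB each): {app_ram(2, 1.75):.2f} GB for applications")
print(f"1 medium (3.50 GB):      {app_ram(1, 3.50):.2f} GB for applications")
# 2 small -> 1.80 GB, 1 medium -> 2.65 GB: the single medium instance leaves
# noticeably more RAM for the app, at the cost of redundancy.
```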

Cassandra latency patterns under constant load

I've got pretty unusual latency patterns in my production setup:
the whole cluster (3 machines: 48 GB RAM, 7,500 rpm disks, 6 cores) shows latency spikes every 10 minutes, on all machines at the same time.
See this screenshot.
I checked the logfiles and it seems as there are no compactions taking place at that time.
I've got 2k reads and 5k reads/sec. No optimizations have been made so far.
Caching is set to "ALL", hit rate for row cache is at ~0.7.
Any ideas? Is tuning memtable size an option?
Best,
Tobias

Solr Indexing Time

Solr 1.4 is doing great with respect to indexing on a dedicated physical server (Windows Server 2008). Indexing around 1 million full-text documents (around 4 GB in size) takes around 20 minutes with a heap size of 512 MB - 1 GB and 4 GB of RAM.
However, while using Solr on a VM with 4 GB of RAM, it took 50 minutes to index the first time. Note that there are no network delays and no RAM issues. When I increased the RAM to 8 GB and increased the heap size, the indexing time increased to 2 hrs. That was really strange. Note that except for SQL Server there is no other process running. However, I have not checked file I/O. Can that be a bottleneck? Does Solr have any issues running in a "virtualization" environment?
I read a paper today by Brian & Harry, "ON THE RESPONSE TIME OF A SOLR SEARCH ENGINE IN A VIRTUALIZED ENVIRONMENT", and they claim that performance deteriorates when RAM is increased while Solr is running on a VM, but that is with respect to query times, not indexing times.
I am a bit confused as to why it took longer on the VM when I repeated the same test a second time with increased heap size and RAM.
I/O on a VM will always be slower than on dedicated hardware. This is because the disk is virtualized and I/O operations must pass through an extra abstraction layer. Indexing requires intensive I/O operations, so it's not surprising that it runs more slowly on a VM. I don't know why adding RAM causes a slowdown though.
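To check whether file I/O really is the bottleneck, a crude sequential-write measurement on the volume holding the index can help; the path and test size below are placeholders:

```python
# Crude sequential-write throughput check for the volume holding the index.
# TARGET_PATH and the 512 MB test size are placeholders; pick a file on the
# same volume as the Solr index and delete it afterwards.
import os, time

TARGET_PATH = "io_test.bin"
CHUNK = b"\0" * (4 * 1024 * 1024)   # 4 MB writes
TOTAL_MB = 512

start = time.time()
with open(TARGET_PATH, "wb") as f:
    for _ in range(TOTAL_MB // 4):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())            # force the data to disk, not the page cache
elapsed = time.time() - start

print(f"{TOTAL_MB} MB written in {elapsed:.1f} s "
      f"({TOTAL_MB / elapsed:.0f} MB/s)")
os.remove(TARGET_PATH)
# Run this on the dedicated server and on the VM; a large gap supports the
# virtualized-I/O explanation above.
```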
