Disk stats on Amazon EBS with high iowait - linux

I'm currently trying to diagnose issues with a Postgres database that appears IO bound. The CPU is spending most of its time in iowait, but vmstat -d persistently shows 0 current outstanding ops for all mounted volumes. The volumes in question are EBS mounts. Does anyone know whether the outstanding-ops stats for EBS mounts are simply broken, or have any other idea what's going on here?

There is an excellent chance that you are suffering from noisy neighbors that saturate shared physical infrastructure.
To diagnose this, I would:
Create an EBS snapshot of your instance and your attached EBS mounts (you do have an EBS-backed instance?)
Fire up a new copy of your server in a different availability zone
Retest
Note that firing up a new instance in the same availability zone could cause you to end up on the same hardware that is suffering from noisy neighbors.
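Independent of where the instance ends up, it also helps to cross-check vmstat -d against a per-device view before and after the move. A minimal sketch using iostat from the sysstat package (the device name is an assumption; list yours with lsblk):

    # extended per-device stats, refreshed every 5 seconds
    iostat -dxm 5
    # or watch just the EBS volume backing the Postgres data directory, e.g. xvdf
    iostat -dxm xvdf 5

If await climbs and %util sits near 100 while the queue stays short, the latency is coming from the volume itself rather than from a deep backlog of requests, which matches the "high iowait, zero outstanding ops" picture.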

Related

How to get Kubernetes to utilize a mounted disk?

I have an Ubuntu machine that runs a Kubernetes cluster.
I constantly get "disk pressure" issues in various pods in that cluster.
To combat this issue, I've attached a volume/disk to that machine, formatted it, and mounted it in /media/whatever.
Unfortunately, it seems that the Kubernetes cluster is not utilizing the new disk space from the mounted volume.
My question is: how to get the Kubernetes cluster to utilize the new volume?
I don't mean to attach any volumes to individual pods, but to allow Kubernetes to use any available disk space freely.
I am aware that this question is a bit general and arises from a big gap in my overall understanding of Kubernetes,
but I hope that you will still be kind enough to help me.
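No full answer here, but one common approach as a hedged sketch: "disk pressure" is evaluated against the filesystem that holds the kubelet's root directory and the container runtime's data, so relocating those onto the new disk (rather than attaching volumes to individual pods) lets the cluster consume the extra space. The paths and the use of Docker below are assumptions; adjust for containerd or a different mount point:

    # stop the components that own the directories
    sudo systemctl stop kubelet docker
    # move their data onto the new disk and bind-mount it back into place
    sudo mv /var/lib/kubelet /media/whatever/kubelet
    sudo mv /var/lib/docker  /media/whatever/docker
    sudo mkdir -p /var/lib/kubelet /var/lib/docker
    echo '/media/whatever/kubelet /var/lib/kubelet none bind 0 0' | sudo tee -a /etc/fstab
    echo '/media/whatever/docker  /var/lib/docker  none bind 0 0' | sudo tee -a /etc/fstab
    sudo mount -a
    sudo systemctl start docker kubelet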

Cassandra - HDD vs. SSD usage makes no difference in throughput

The Context
I'm currently running tests with Apache Cassandra on a single-node cluster. I've ensured the cluster is up and running using nodetool status, I've done a multitude of reads and writes that suggest as much, and I'm confident my cluster is set up properly. I am now attempting to speed up my throughput by mounting an SSD on the directory where Cassandra writes its data.
My Solution
Cassandra generally writes its data to /var/lib/cassandra/data; however, I've since switched mine via cassandra.yaml to write to another location, where I've mounted my SSD. I've ensured that Cassandra is writing to this location by checking the size of the data directory's contents through watch du -h and other methods. The directory I've mounted the SSD on includes table data, commitlog, hints, a nested data directory, and saved_caches.
The Problem
I've been using YCSB benchmarks (see https://github.com/brianfrankcooper/YCSB) to test the average throughput and ops/sec of Cassandra. I've noticed no difference in the average throughput when mounting the HDD vs. the SSD at the location where Cassandra writes its data. I've analyzed disk access through dstat -cd --disk-util --disk-tps and found the HDD caps out its disk utilization in multiple instances, whereas the SSD only spikes to around 80% on a few occasions.
The Question
How can I speed up the throughput of Cassandra using an SSD over an HDD? I assume this is the correct place to mount my SSD, but does Cassandra not make use of its extra speed? Any help would be greatly appreciated!
An SSD should always beat an HDD on latency; that's just physics. I think your test simply didn't put enough load on the system. Another possibility is that you mounted only the data directory on the SSD but not the commit log: on HDDs the commit log should always be put on a separate disk to avoid clashing with the data load, while on SSDs it can share a disk with the data. Point all directories at the SSD to see a difference.
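To double-check which block device actually backs each directory referenced in cassandra.yaml, something like this can help (the paths below are placeholders for whatever your cassandra.yaml points at):

    # print the backing device and mount point for each Cassandra directory
    for d in /mnt/ssd/cassandra/data /mnt/ssd/cassandra/commitlog \
             /mnt/ssd/cassandra/hints /mnt/ssd/cassandra/saved_caches; do
        findmnt -no SOURCE,TARGET -T "$d"
    done

If any of them resolves to the HDD's device, that directory is still riding on the slow disk.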
I recommend comparing the two using the following tools:
perfscripts - it uses the fio tool to emulate Cassandra-like workloads; if you run it on both the HDD and the SSD, you will see the difference in latency (a quick standalone fio sketch follows this list). You may not even need to run it - just look at the historic folder, which contains results for different disk types;
DSBench - recently released by the DataStax team, which specializes in benchmarking Cassandra and DSE. There are built-in workloads described in the wiki that you can use for testing. Just make sure you run the load long enough to see the effects of compaction, etc.
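If you want a single quick check rather than the full perfscripts run, a bare fio comparison along these lines already exposes the latency gap; run it once with --directory pointed at the HDD mount and once at the SSD mount (the path, sizes, and read/write mix here are assumptions, not a Cassandra-calibrated profile):

    fio --name=cassandra-like --directory=/mnt/ssd/fio-test \
        --ioengine=libaio --direct=1 --rw=randrw --rwmixread=70 \
        --bs=4k --iodepth=32 --numjobs=4 --size=2G \
        --runtime=120 --time_based --group_reporting

Compare the completion-latency (clat) percentiles between the two runs rather than just the bandwidth numbers.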

Increasing Number of VMs decreases Cassandra Throughput. What can be reason?

I am using YCSB benchmarking tool to benchmark Cassandra cluster.
I am varying the number of Virtual machines in the cluster.
I am using 1 physical host and I benchmark with 1, 2, 3, and 4 virtual machines (as shown in the attached figure).
The generated workload is the same every time: Workload C, 1,000,000 operations, 10,000 records.
Each VM has 2 GB RAM and a 20 GB drive.
Cassandra: 1 seed node, endpoint_snitch: GossipingPropertyFileSnitch.
Keyspace YCSB: replication factor 3.
The problem is that when I increase the number of virtual machines in the cluster, the throughput decreases. What can be the reason?
In principle, increasing compute resources (i.e. virtual machines) should give the cluster better performance, but the opposite is happening, as shown in the attached figure. What could be the probable reason for this? I am writing my thesis on this topic but am unable to figure out the cause; any help would be greatly appreciated.
[Figure: throughput observed by varying the number of VMs in the Cassandra cluster]
You are very likely hitting a disk I/O bottleneck. Especially with non-SSD drives this is completely expected. Unless you have a dedicated disk and CPU per VM, the competition for resources will cause contention like this. Also, 2 GB per VM is not enough to do any kind of performance benchmark with Cassandra, since the minimum recommended JVM heap size is 8 GB.
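For reference, when you do move to hosts with enough memory, the heap is usually pinned in cassandra-env.sh (or jvm.options on newer releases). A minimal sketch, assuming a package install with the configuration under /etc/cassandra:

    # set an explicit 8 GB heap; the file path and values are assumptions
    sudo sed -i 's/^#MAX_HEAP_SIZE=.*/MAX_HEAP_SIZE="8G"/'  /etc/cassandra/cassandra-env.sh
    sudo sed -i 's/^#HEAP_NEWSIZE=.*/HEAP_NEWSIZE="800M"/'  /etc/cassandra/cassandra-env.sh
    sudo systemctl restart cassandra
    # confirm what the node actually got
    nodetool info | grep -i heap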
Cassandra is great at horizontal scaling (nearly linear), but that doesn't mean that simply adding VMs to one physical host will increase throughput - a single VM on the physical host will have less contention for resources (disk, cpu, memory, network) than 4, so it's likely one VM would perform better than 4.
By definition, if you WERE increasing resources, you SHOULD see it perform better - but you're not, you're simply adding contention to existing resources. If you want to scale cassandra, you need to test it with additional physical resources - more physical machines, not more VMs on the same machine.
Finally, as Chris Lohfink mentions, your VMs are too small to do meaningful tests - 8GB JVM heap is recommended, with another 8GB of vm page cache to support reads - running Cassandra with less than 16G of RAM is typically non-ideal in production.
You're trying to test a jet engine (a distributed database designed for hundreds or thousands of physical nodes) with gas-station-level equipment - your benchmark hardware isn't viable for a real production environment, so your benchmark results aren't meaningful.
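One way to see the contention directly is to watch the shared physical disk from the host while YCSB runs against 1, 2, 3, and 4 VMs in turn. A rough sketch (iostat comes from sysstat; the device name is an assumption):

    # on the physical host, refreshed every 5 seconds
    iostat -dxm sda 5

If %util stays pinned near 100 and await grows as you add VMs while the aggregate ops/sec falls, the VMs are simply queuing behind one another on the same spindle.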

Cassandra compaction tasks stuck

I'm running DataStax Enterprise in a cluster consisting of 3 nodes. They are all running on the same hardware: 2-core Intel Xeon 2.2 GHz, 7 GB RAM, 4 TB RAID-0
This should be enough for running a cluster with a light load, storing less than 1 GB of data.
Most of the time everything is just fine, but it appears that the running tasks related to the Repair Service in OpsCenter sometimes get stuck; this causes instability in that node and an increase in load.
However, if the node is restarted, the stuck tasks don't show up and the load is at normal levels again.
Because we don't have much data in our cluster, we're using the min_repair_time parameter defined in opscenterd.conf to delay the repair service so that it doesn't run too often.
It really seems a bit weird that tasks marked as "Complete" and showing 100% progress don't go away - and yes, we've waited hours for them to disappear, but they won't; the only way we've found to solve this is to restart the nodes.
Edit:
Here's the output from nodetool compactionstats
Edit 2:
I'm running under Datastax Enterprise v. 4.6.0 with Cassandra v. 2.0.11.83
Edit 3:
This is the output from dstat on a node that is behaving normally
This is the output from dstat on a node with stuck compaction
Edit 4:
Output from iostat on a node with stuck compaction, see the high "iowait"
Azure storage
Azure divides disk resources among storage accounts under an individual user account. There can be many storage accounts in an individual user account.
For the purposes of running DSE [or Cassandra], it is important to note that a single storage account should not be shared between more than two nodes if DSE [or Cassandra] is configured like the examples in the scripts in this document. This document configures each node to have 16 disks. Each disk has a limit of 500 IOPS. This yields 8,000 IOPS when configured in RAID-0. So, two nodes will hit 16,000 IOPS and three would exceed the limit.
See details here
So, this issue has been under investigation for a long time now and we've found a solution. However, we aren't sure what the underlying problem causing the issues was, but we got a clue, even though nothing can be confirmed.
Basically, what we did was set up a RAID-0, also known as striping, consisting of four disks, each 1 TB in size. We should have seen somewhere around 4x a single disk's IOPS when using the stripe, but we didn't, so something was clearly wrong with the RAID setup.
We used multiple utilities to confirm that the CPU was waiting for the I/O to respond most of the time whenever we considered a node "stuck". Clearly something with the I/O, most probably our RAID setup, was causing this. We tried a few different mdadm settings etc., but didn't manage to solve the problems with the RAID setup.
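For anyone debugging a similar stripe, it's worth at least inspecting the array geometry and then benchmarking the md device itself before blaming the database. A rough sketch, assuming the array is /dev/md0 (read-only, so it won't touch the data):

    # confirm the RAID level, chunk size and that all four members are active
    sudo mdadm --detail /dev/md0
    cat /proc/mdstat
    # random-read benchmark against the raw stripe, bypassing the page cache
    sudo fio --name=stripe-check --filename=/dev/md0 --readonly \
        --ioengine=libaio --direct=1 --rw=randread --bs=4k \
        --iodepth=32 --runtime=60 --time_based --group_reporting

If the result is close to a single disk's IOPS rather than roughly four times that, the problem is below Cassandra: chunk size, alignment, or the per-disk limits of the underlying Azure storage.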
We started investigating Azure Premium Storage (which is still in preview). This enables attaching disks to VMs whose underlying physical storage actually consists of SSDs. So we said, well, SSDs => more IOPS, so let us give this a try. We did not set up any RAID using the SSDs; we are only using a single SSD disk per VM.
We've been running the Cluster for almost 3 days now and we've stress tested it a lot but haven't been able to reproduce the issues.
I guess we never got down to the real cause, but the conclusion is that one of the following must have been the underlying cause of our problems:
Disks that were too slow (the write load exceeded the disks' IOPS)
The RAID was set up incorrectly, which caused the disks to behave abnormally
These two problems go hand in hand, and most likely we simply set the disks up in the wrong way. However, SSDs = more power to the people, so we will definitely continue using SSDs.
If anyone experiences the same problems that we had on Azure with RAID-0 on large disks, don't hesitate to add to this thread.
Part of the problem you have is that you do not have a lot of memory on those systems, and it is likely that even with only 1 GB of data per node, your nodes are experiencing GC pressure. Check the system.log for errors and warnings, as this will provide clues as to what is happening on your cluster.
The rollups_60 table in the OpsCenter schema contains the lowest (minute level) granularity time series data for all your Cassandra, OS, and DSE metrics. These metrics are collected regardless of whether you have built charts for them in your dashboard so that you can pick up historical views when needed. It may be that this table is outgrowing your small hardware.
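If you want to check whether that is what is happening, the on-disk footprint of the OpsCenter keyspace can be inspected directly; a small sketch, assuming the default data directory (paths are assumptions):

    # per-table stats, narrowed to the OpsCenter keyspace
    nodetool cfstats | grep -A 40 'Keyspace: OpsCenter'
    # raw size on disk of the same keyspace
    du -sh /var/lib/cassandra/data/OpsCenter

A rollup table that is large relative to your actual application data is a strong hint that metric storage is what is stressing these small nodes.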
You can try tuning OpsCenter to avoid this kind of issue. Here are some options for configuration in your opscenterd.conf file:
Adding keyspaces (for example the opsc keyspace) to your ignored_keyspaces setting
You can also decrease the TTL on this table by tuning the 1min_ttl setting (a hedged example of both changes follows)
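As a sketch only: the exact file (opscenterd.conf vs. the per-cluster [cluster_name].conf) and the section header vary between OpsCenter versions, so treat everything below, including the [cassandra_metrics] section name and the path, as assumptions to verify against the docs linked under Sources:

    # append the metric-tuning options and restart opscenterd
    cat <<'EOF' | sudo tee -a /etc/opscenter/clusters/my_cluster.conf
    [cassandra_metrics]
    ignored_keyspaces = system, system_traces, OpsCenter
    1min_ttl = 86400
    EOF
    sudo service opscenterd restart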
Sources:
Opscenter Config DataStax docs
Metrics Config DataStax Docs

AWS Amazon offers 160GB of space for a small instance. On booting SUSE Linux, the total root partition space is 10GB

AWS Amazon offers 160GB of space for a small instance. On booting SUSE Linux, the total root partition space I got is 10GB. In df -h I only see /dev/sda1 with 10GB of space. Where is the rest of the 150GB? How can I claim this space? I don't want to use EBS as it costs extra, and 160GB of space suffices for my needs. Please help.
The extra 150GB is provided as ephemeral (instance store) storage, i.e. unlike the data on your root storage, data there does not persist once the instance is stopped or terminated. During launch you can select where your ephemeral disks should be made available as devices in your machine (this is the -b option when using the command line, or the "Instance Storage" tab when launching via the AWS console). You can then simply mount it in your running instance.
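A minimal sketch of actually claiming the space at runtime, assuming the ephemeral volume shows up as /dev/xvdb (check with lsblk; on some instance types it appears as /dev/sdb and may already carry a filesystem):

    lsblk                          # find the ~150GB ephemeral device
    sudo mkfs.ext4 /dev/xvdb       # format it (skip if it already has a filesystem you want to keep)
    sudo mkdir -p /mnt/ephemeral
    sudo mount /dev/xvdb /mnt/ephemeral
    df -h /mnt/ephemeral           # the extra space is now usable until the instance is stopped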
