VMware ESXi slow local storage only on one disk partition - linux

I'm experiencing very odd behaviour of the free VMware ESXi 6 hypervisor while using local hard drives as storage for VMs.
Everything works fine except for one partition.
Here's the setup.
A 2 TB WD Red drive is divided into two pieces: one partition of 1 TB and another of 500 GB. Both partitions of this drive are assigned to one VM (running Ubuntu 14.04 LTS) and are formatted and configured in fstab as usual. Everything is fine on that front.
Now the issue with performance.
When I read from or write to the big (1 TB) partition, mounted at /mnt/bigpart, I get the expected read and write speeds (~150 MB/s).
But if I try to do the same with the smaller (500 GB) partition, both read and write speeds are about 50% lower! I cannot push the read speed above 80 MB/s, and writes are even lower.
I just don't get it. esxtop (disk view, 'd') shows exactly the same results. The smaller partition just cannot seem to go any faster.
This is very odd, as both partitions are preallocated (thick provisioned, in favour of spinning-drive speed), and both are physically located on the same hard drive.
I know that in theory, with spinning hard drives, the end of the platter can be somewhat slower than the beginning, but this is just too much of a performance hit.
Additionally, the hard drive has ~360 GB of free space after those preallocations.
Perhaps I should try to re-assign the smaller partition again but this time with thin provisioning.
Take a look at the measurements:
BIGGER (1TB) PARTITION / DISK
11649792+0 records in
11649792+0 records out
5964693504 bytes (6.0 GB) copied, 39.873 s, 150 MB/s
SMALLER (500GB) PARTITION / DISK
11649792+0 records in
11649792+0 records out
5964693504 bytes (6.0 GB) copied, 67.1635 s, 88.8 MB/s
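For reference, a minimal sketch of the kind of dd test that likely produced the numbers above (the record count implies a 512-byte block size; /mnt/smallpart is a placeholder for wherever the 500 GB partition is mounted, and the exact flags used in the question are not shown):
dd if=/dev/zero of=/mnt/bigpart/testfile bs=512 count=11649792 conv=fsync   # write test, ~6 GB, flushed at the end
sync && echo 3 > /proc/sys/vm/drop_caches                                   # drop caches so the read test actually hits the disk
dd if=/mnt/bigpart/testfile of=/dev/null bs=512                             # read test
Repeat the same three steps against /mnt/smallpart to compare the two partitions.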

I know that in theory with spinning hard drives it can be that the end of the drive platter is somewhat slower than the beginning
This is true in practice as well. Look at the sustained transfer rate of this 2 TB hard drive, which is from a different vendor. The chart displays the sequential read throughput depending on the offset. In the first terabyte, the sequential read throughput is between 170 and 130 MiB/s, which is pretty close to what you experience (150 MB/s). The throughput drops sharply in the second half of the hard drive. Even if it does not explain 100% of the performance hit you experience, it is probably the dominant factor.

This can (but doesn't have to) be a problem with block alignment.
In real-world cases there is no big difference between a thin- and a thick-provisioned VMDK.
So you have two local datastores (VMFS5?) on the same hard disk?
Do both datastores have a block size of 1 MB? (Host -> Configuration -> Storage)
If yes, do both partitions in your guest have a block size of 1 MB too?
Is it possible that one partition was created with MBR and the other with GPT? (GPT would be the better choice.)
You could also run a SMART check of the HDD; maybe there are some bad sectors. Some commands for both checks are sketched below.
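A rough sketch, assuming shell access to the ESXi host and to the guest (the disk identifiers naa.XXXXXXXX and /dev/sdb are placeholders):
partedUtil getptbl /vmfs/devices/disks/naa.XXXXXXXX     # ESXi host: label type (msdos/gpt) and start sectors of the datastore partitions
esxcli storage core device smart get -d naa.XXXXXXXX    # ESXi host: SMART attributes, including reallocated/pending sector counts
parted /dev/sdb align-check opt 1                       # inside the guest: reports whether partition 1 of that virtual disk is optimally aligned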

Related

Too much disk space used by Apache Kudu for WALs

I have a Hive table that is 2.7 MB in size (stored in Parquet format). When I use impala-shell to convert this Hive table to Kudu, I notice that the /tserver/ folder size increases by around 300 MB. Exploring further, I see it is the /tserver/wals/ folder that holds the majority of this increase. I am facing serious issues due to this: if a 2.7 MB file generates a 300 MB WAL, then I cannot really work with bigger data. Is there a solution to this?
My Kudu version is 1.1.0 and Impala is 2.7.0.
I have never used Kudu, but I was able to Google a few keywords and read some documentation.
From the Kudu configuration reference section "Unsupported flags"...
--log_preallocate_segments: Whether the WAL should preallocate the entire segment before writing to it. Default: true
--log_segment_size_mb: The default segment size for log roll-overs, in MB. Default: 64
--log_min_segments_to_retain: The minimum number of past log segments to keep at all times, regardless of what is required for durability. Must be at least 1. Default: 2
--log_max_segments_to_retain: The maximum number of past log segments to keep at all times for the purposes of catching up other peers. Default: 10
Looks like you have a minimum disk requirement of (2+1) x 64 MB per tablet, for the WAL alone, and it can grow up to 10 x 64 MB if some tablets are straggling and cannot catch up.
Plus some temporary disk space for compaction and so on.
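A quick back-of-the-envelope check of that estimate with the 1.1.0 defaults quoted above (the "+1" counts the currently open, preallocated segment on top of the retained ones, following the reasoning here, so treat these as rough numbers):
SEGMENT_MB=64; MIN_RETAIN=2; MAX_RETAIN=10
echo "minimum WAL space per tablet: $(( (MIN_RETAIN + 1) * SEGMENT_MB )) MB"   # 192 MB
echo "maximum WAL space per tablet: $(( MAX_RETAIN * SEGMENT_MB )) MB"         # 640 MB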
[Edit] these default values have changed in Kudu 1.4 (released in June 2017); quoting the Release Notes...
The default size for Write Ahead Log (WAL) segments has been reduced from 64MB to 8MB. Additionally, in the case that all replicas of a tablet are fully up to date and data has been flushed from memory, servers will now retain only a single WAL segment rather than two. These changes are expected to reduce the average consumption of disk space on the configured WAL disk by 16x.

Dedicated commitlog storage vs Read/Write ratio?

We are using SSD disks to provide storage for our cluster, on servers with 30 GB of memory.
There is an argument about the commitlog directory: whether to dedicate an individual disk to it or to keep it on the same disk as the data.
Since we are already using SSDs, performance should be fine with both the commitlog and the data on the same disk, as there is no mechanical moving head involved in writing.
However, there is another factor: the read/write ratio. How would such a ratio affect write or read performance when we have both the commitlog and data on the same disk?
With SSDs, when does it become important to dedicate a high-performance disk to the commitlog directory?
A dedicated commitlog device usually makes a lot of sense when you have HDDs, but is less obvious if you're using SSDs.
Even though you asked only whether it makes sense with SSD setups, I will try to give some general hints about the subject, based primarily on my understanding and my own experience. I admit the focus is probably too much on HDDs, but HDDs allow a deep insight into how Cassandra works and why backing a commitlog/data directory with an SSD can be a life saver.
Background: IOPS and OPS are not the same thing.
I will start from a (very) far point: device performance. Here's a starting-point read about storage device performance in general. Even if the article's neutrality is under discussion, it can provide some insight into the general metrics and performance you can expect from some systems. Of course, your mileage may vary, depending on what device (type/brand/model etc...) and how much stress (meaning what type of workload) you put on the device, but I think it is a good starting point for our discussion here.
The reason I prefer to start from IOPS is that it is the very starting point for understanding storage performance. The C* literature speaks about OPS, Operations Per Second, because people usually don't think in terms of IOPS, especially when looking at stats. This hides a lot of details, the operation size for starters.
A Cassandra operation usually consists of multiple IOPS. The Cassandra documentation usually refers to spinning disks (even if SSDs are referenced too) and clearly states what happens when performing reads/writes. People tend to ignore the fact that when their software stack (spanning from the application down to Cassandra and its data files on the storage) hits the disks, performance decreases by a huge amount simply because they have failed to recognize a random workload, even though "Cassandra is a high-performance etc... etc... etc...".
As an example, looking at the picture in the read path documentation, you can clearly see what data structures are in memory/on disk, and how the SSTable data is accessed. Further, the row cache paragraph says:
... If row cache is enabled, desired partition data is read from the row cache, potentially saving two seeks to disk for the data...
And here's where the catch starts: these two seeks are potentially saved from Cassandra's point of view. This simply means that Cassandra won't make two requests to the storage system: it will avoid requesting the partition index and the data, because everything is already in RAM. But that doesn't really translate to "the storage system will save two IO operations". Indeed, how (generic) data is retrieved from the storage device is a very different thing, and of course depends on how the files are laid out on the disk itself: are you using EXT4, XFS, or what? Assuming no cache is available (eg for very big data sets you can't really cache everything...), looking for a file is IOPS consuming. This tends to amplify the benefit of the potentially saved seeks when your data is in RAM, and to amplify the penalty you perceive when it is not.
You can't escape physics: HDDs pay some taxes, SSDs don't.
As you already know, the main "problem" (performance-wise) of HDDs is the average seek time, that is, the time the HDD needs to wait on average in order to have a target sector under the heads. Once the sector is under the heads, if the system has to read a bunch of sequential bits, everything is smooth and the throughput is proportional to the rotational speed of the HDD (to be precise, to the tangential speed of the platters under the head, which also depends on the track etc...).
In other terms, HDDs have an average fixed performance tax (the average seek time), and everything after that is almost "free". If an application requests a bunch of sectors that are not "contiguous" (from the disk's point of view; eg a fragmented file is split across multiple sectors, but an application can't really know this), the disk will have to pay the average seek time multiple times, and this fixed tax limits its maximum throughput.
The strongest argument about storage is: every device has its own maximum magic average IOPS number. This number expresses the number of random IOPS the device can perform. You can't force an HDD to have more IOPS on average; it's a physical problem. The OS is usually smart enough to "enqueue" sector requests in an attempt to reduce the seek times, eg by ordering by ascending requested sector number (trying to exploit some sequential operations), but nothing will save the performance of a random IO workload. You have X allotted available IOPS and must face your problems with that. No matter what.
You need to take advantage of the allotted IOPS of your device, and you must be wise about how you use them.
Suppose you have an HDD that maxes out at 100 IOPS on average. If your application performs a bunch of small (say 4KB) file reads, you have an application that performs 100 * 4KB reads every second: the throughput will be around 400KB/s (unless some caching is involved, in which case the cache saved you precious IOPS). Astonishing. This is simply because you keep paying the seek time multiple times. If you change your access pattern to something that reads 16MB (contiguous) files, you get a higher throughput because you don't pay the seek time as often; you are exploiting a sequential pattern. What changes under the hood is the Request Size of each operation.
Now an interesting question is: how are "IOPS" and "Request Size" related? Can one 16MB request be considered one IO operation? And what about a 128MB request? This is indeed a good question. At the lower level, the Request Size spans from 512 bytes (the minimum sector size) to 128KB (32 * 4K sectors in one request). If the operation is small, its transfer time, the time the disk needs to fetch the data, is also small. Larger request sizes obviously have larger transfer times. However, if you are able to perform 100 4KB IOPS, you will probably be able to perform around 80 IOPS at 8KB. The relation can't be linear, because the transfer time depends only on the rotational speed of the disk (and is negligible compared to the seek time), and since you are actually reading from two adjacent sectors, you'll hit the seek time penalty only once per request. This translates to a throughput of around 400KB/s for 4K requests and 1.6MB/s for 8K requests. And so on... The larger the request size, the longer it takes to transfer data, the fewer IOPS you have, and the higher the throughput you get. (These are random numbers, pun intended, no measurements done! Just to let you understand. I do however think they are in the ballpark.)
SSDs don't suffer mechanical penalties, and that's why they are capable of performing much better than HDDs. They have many more IOPS, and their limits come from the onboard electronics, bus connection, etc... Having a higher-IOPS device is a big plus; the extra IOPS can be consumed by applications that are not IOPS friendly, and the user won't notice that the applications suck. However, with SSDs too, the Request Size influences the number of IOPS you can perform (roughly inversely): when you look at a device rated at 100k IOPS, that figure usually refers to 4K requests, and you'll be able to perform only about 6.2k requests per second at a 64K request size.
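A minimal sketch of how you could observe this IOPS-vs-request-size trade-off yourself, assuming fio is installed (the test file path and size are placeholders; run it only against scratch storage):
for bs in 4k 8k 64k 1m; do
  fio --name=randread-$bs --filename=/path/to/scratchfile --size=1G \
      --rw=randread --bs=$bs --direct=1 --ioengine=libaio --iodepth=1 \
      --runtime=30 --time_based
done
Compare the IOPS and bandwidth figures fio reports for each block size: as the request size grows, IOPS drops while throughput rises.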
Why does Cassandra have such good read performance even with HDDs, then?
Speaking from a single node's point of view (cluster-wise, Cassandra scales linearly with the number of nodes in the cluster), the answer lies in the question itself. It is only true if you model your data in this particular way:
You must fetch all your data with one query only.
Your data must be ordered.
If you can't fetch your data with one query, denormalize in order to retrieve it with one query only.
You fetch a relatively large amount of data on every read.
These are well-known Cassandra modeling rules, but the key point is that these rules do really have a reason to be applied IOPS-wise. Indeed, these rules allow Cassandra to:
Be a super fast database because it will just require the partition index and the SSTable offset index of the data: two IOPS in the best case, many more IOPS in the worst case.
Be a super fast database because it will exploit the sequential capabilities of the HDDs and will not stress the IO subsystem by issuing other random IO seeks.
Be a super fast database because it will just fetch more data at once, as in point 1.
Be a super fast database because it will exploit the sequential capabilities of the HDDs for longer.
In other terms, following these basic data modeling rules allows Cassandra to be IOPS friendly when reading data back.
What happens if you screw up your data model? Cassandra won't be IOPS friendly, and as a consequence performance will be predictably horrible. Unless you use an SSD, which has more IOPS, in which case you won't notice the slowness too much.
What happens if you read/write a small amount of data (eg due to misconfigured flush sizes, a small commitlog, etc...)? Cassandra won't be IOPS friendly, and as a consequence performance will be predictably horrible. Unless you use an SSD, which has more IOPS, in which case you won't notice the slowness too much.
How can a read/write ratio pattern influence performance in a Cassandra node?
Cassandra is a complex system, with different components that interact with each other. I will try to explain, from my point of view, the main points when you put everything on one device only.
Writes/Deletes/Updates in Cassandra are fast because they are simply append-only writes to the CommitLog device. Reads, on the contrary, can be very IOPS consuming. When both CommitLog and Data are on the same physical disk (either HDD or SSD), the read/write paths interact, and they both consume IOPS.
Two important questions are:
How many IOPS does a read (using the read path) consume?
How many IOPS does a write consume?
These are important questions, because you have to remember that your device can perform at most X IOPS, and your system will have to split those X IOPS among these operations.
It is quite difficult to answer the "read" question because, when you request some data, Cassandra needs to locate all the SSTables needed to satisfy the request. Assuming a very big data set, where caching is not effective, this implies that the Cassandra read path can be very IOPS hungry. Indeed, if your data is spread across 3 different SSTables, Cassandra will have to locate all of them, and for each SSTable it will follow the read path: it will read the partition index and then the data in the SSTable. That is at least two IOPS, because if your filesystem is not "collaborative" enough, locating a file and/or seeking to a file offset could require some more IOPS. In the end, in this example Cassandra is consuming at least six IOPS per read.
Answering the "write" question is also tricky, because compactions and flushes can be triggered. They will consume a lot of IOPS. Flushes are easy to understand: they write data from memtables to disk with a sequential pattern. Instead, compactions read data back from different SSTables on disk, and while reading the tables they flush the result out to a new disk file. This is a mixed read/write pattern, and on HDDs this is very disruptive, because will force the disk to perform multiple seeks.
Mixing percentages: TL;DR
If you have a R/W ratio of 95% reads and 5% writes, having a separate CommitLog device can be a waste of resources, because writes will hardly impact your read performance, and you write so rarely that write performance may be considered non-critical.
If you have a R/W ratio of 5% reads and 95% writes, having a separate CommitLog device can again be a waste of resources, because reads will hardly impact your write performance, and your read performance will hardly suffer from a bunch of sequential appends to the commitlog.
And finally, if you have a R/W ratio of 50% reads and 50% writes, having a separate CommitLog device is NOT a waste of resources, because every write performed on a shared device would otherwise cost at least two IOPS on the data drive (one for writing the commitlog, and one for seeking back to the data).
Please note that I didn't mention compactions, because independently of your workload, when compaction kicks in, your workload will be disrupted by mixed read/write background operations on different files (consuming disk IOPS all the way), and you will suffer on both reads and writes.
All this should be clear enough for HDDs, because you run out of IOPS very fast, and when you do you notice it immediately. On SSDs, however, you don't run out of IOPS that fast, but you could if your data consists of a lot of small rows.
The reality is that running out of IOPS on an SSD is very hard, because you'll run out of CPU resources well before (and by a wide margin). But once you do, you will see your performance slowly decrease. The effect, however, won't be as dramatic as in the HDD case. As an example, if you have a 100 IOPS HDD and you run out of IOPS by trying to issue 500 random IOs, you clearly get a penalty. Calling this penalty P, if you have an SSD with 100k IOPS, to get the same penalty P you would have to issue 500k IOPS, which can be very difficult to do without exhausting CPU or RAM first.
In general, when you run out of some type of resource in your system, you need to increase its quantity. The most important thing (to me) is not to run out of IOPS in the "Data" part of your Cassandra cluster. With SSD IOPS, it's rare that you'll hit the limit; you'll burn your CPU well before, I think. But you will hit it if you don't tune your system, or if your workload puts too much stress on the disk subsystem (eg Leveled Compaction). I'd suggest using an ordinary HDD instead of a high-performance SSD for the commitlog and saving money. But if you have a lot of very small commitlog flushes, an SSD is a complete life saver, because your writers won't suffer the latency of HDDs.
Finally, in my opinion, you should go to pre-production with some sort of real data and check your IOPS requirements. If you have enough headroom to keep the commitlog on the SSD, don't worry: go and save money. If your system comes under too much pressure due to compaction, then having a separate device is suggested. Analyze your commitlog pattern, and if it's not IOPS demanding, put it on a separate disk. Moreover, if you have a virtual environment you can provision a relatively small commitlog device regardless of other factors; it won't raise the cost of your solution by much. A sketch of the relevant configuration follows below.
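If you do split devices, the relevant cassandra.yaml settings look roughly like this (the mount points are placeholders and depend on your install; everything else stays on the data device):
commitlog_directory: /var/lib/cassandra/commitlog      # on the dedicated (small, cheap) commitlog device
data_file_directories:
    - /var/lib/cassandra/data                          # on the main data device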
The actual numbers will depend heavily on the type of workload you have, the configuration you have, etc. You can have a look at the Netflix tech blog posts for ballpark numbers, e.g. #1, #2.
Dedicating a disk to the commitlog directory is a sort of scale-up strategy, whereas Cassandra works well with a scale-out approach: you just add more nodes to the cluster to spread the load. The second of the linked articles has a nice graph showing near-linear scalability.

Cassandra Cache Memory Management

I have a 4-node Cassandra 2.1.13 cluster with the configuration below.
32 GB Ram
Max HEAP SIZE - 8 GB
250 GB Hard Disk Each (Not SSD).
I am trying to do a load test on writes and reads. I have created a multi-threaded program to create 50 million records. Each row has 30 columns.
I was able to insert the 50 million records in 84 minutes, at a rate of 9.5K inserts per second.
Next I tried to read those 50 million records randomly using 32 clients, and I was able to read at 28K per second.
The problem is that after some time the memory gets full, and most of it is cache, almost 20 GB. After a while the system hangs because it is out of memory.
If I clear the cache memory, my read throughput goes down to 100 per second.
How should I manage my cache memory without affecting read performance?
Let me know if you need any more information.
What you noticed is the Linux disk cache, which is supposed to serve data from RAM instead of going to disk in order to speed up data read access. Please make sure to understand how it works, e.g. see here.
As you're already using top, I'd recommend adding "cache misses" to the overview as well (hit F and select nMaj). This will show you whenever a disk read cannot be served by the cache. You should see an increase in misses once the page cache starts to become saturated.
How should I manage my cache memory without affecting read performance?
The cache is fully managed by Linux and does not need any action on your part.
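A small sketch of how to watch this from the shell (the column names below are from current procps versions):
free -h       # the "buff/cache" column is the disk cache described above; "available" is what applications can still claim
vmstat 5      # the "cache" column plus "si"/"so" show cache growth and whether the box is actually swapping
Note that "cleaning" the cache (echo 3 > /proc/sys/vm/drop_caches) simply discards it, which is exactly why the read throughput collapses to ~100/s afterwards.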

PostgreSQL VACUUM/CLUSTER/UPDATE disk at 100% but only 5MB/sec

I am seeing very strange PostgreSQL 9.4 behavior. When it runs an UPDATE on a large table, or performs a VACUUM or CLUSTER of a large table, it seems to hang for a very long time. In fact I just end up killing the process the following day. What's odd about it is that the CPU is idle and at the same time disk activity is at 100%, BUT it only reports 4-5 MB/sec of reads and writes (see screenshot of nmap & atop).
My server has 24 CPUs, 32 GB RAM and RAID1 (2 x 15K SAS disks). Normally, when the disk is at 100% utilization it gives me 120-160 MB/s of combined reads/writes and can stay almost indefinitely at >100 MB/sec of sustained IO.
The system becomes very sluggish altogether, even the terminal command line. My guess is that it has something to do with shared memory and virtual memory. When this happens, PostgreSQL consumes the maximum configured shared memory.
I have disabled swapping (vm.swappiness=0). I didn't play with vm.dirty_ratio, vm.dirty_background_ratio and such. System huge pages are disabled (vm.nr_hugepages=0).
The following are my postgresql.conf settings:
shared_buffers = 8200MB
temp_buffers = 12MB
work_mem = 32MB
maintenance_work_mem = 128MB
#-----------------------------------------------------
synchronous_commit = off
wal_sync_method = fdatasync
checkpoint_segments = 32
checkpoint_completion_target = 0.9
#-----------------------------------------------------
random_page_cost = 3.2 # RAIDed disk
effective_cache_size = 20000MB # 32GB RAM
geqo_effort = 10
#-----------------------------------------------------
autovacuum_max_workers = 4
autovacuum_naptime = 45s
autovacuum_vacuum_scale_factor = 0.16
autovacuum_analyze_scale_factor = 0.08
How can the disk be at 100% when it is only doing 5 MB/sec? Even the most exhausting random read/write routine should still be an order of magnitude faster. It must have something to do with the way PostgreSQL deals with mapped/shared memory. Also, this wasn't occurring with PostgreSQL 9.1.
I am trying to educate myself on disk/memory behavior, but at this point I need help from the PROs.
After a lengthy investigation I found a correlation between disk saturation at low read/write speeds and the number of IOPS: the higher the number of IOPS, the lower the saturated IO bandwidth. One of the screenshots in my question shows "Transfers/sec"; when that number goes high, the transfer rate falls. You can watch the same thing with iostat, as sketched below.
Unfortunately there isn't much that can be done on the database configuration side. PostgreSQL relies heavily on shared memory, mapping files to memory pages. When the time comes to sync some or all memory pages back to disk, a database with large tables may have tens or hundreds of thousands of dirty pages to sync. That causes a lot of random disk access and a zillion small atomic IOs.
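A minimal sketch of watching that correlation with iostat (part of the sysstat package; sda is a placeholder device name):
iostat -x sda 5
# r/s + w/s give the IOPS, rkB/s + wkB/s the bandwidth, %util the saturation.
# During a stalled VACUUM/CLUSTER you should see %util near 100 with high r/s + w/s but low rkB/s + wkB/s.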
Since neither installing an SSD nor enabling write-back caching is an option in my case, I had to approach the problem from a different angle and address each case individually.
The UPDATE statement I had was affecting more than half of the table's records every time it ran. Instead of doing the update, I now recreate the table each time (see the sketch below); this almost doubled performance.
CLUSTER-ing a table rewrites it and rebuilds all of the table's indexes. For large tables with many indexes this is an important consideration to keep in mind when clustering.
I also replaced VACUUM with ANALYZE, which didn't seem to affect table performance much but runs measurably quicker than VACUUM.
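A rough sketch of the "recreate instead of UPDATE" approach (database and table names are placeholders; indexes, constraints, permissions and the actual row transformation have to be added for a real run):
psql -d mydb <<'SQL'
BEGIN;
CREATE TABLE mytable_new AS
    SELECT * FROM mytable;              -- apply here the transformation the UPDATE used to do
DROP TABLE mytable;
ALTER TABLE mytable_new RENAME TO mytable;
COMMIT;
SQL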

FreeNAS/ZFS with both RAID-Z and Mirror?

I'm considering switching to FreeNAS at the same time I'm acquiring some new disks for my home server. The end configuration will have a 1.5TB drive (currently the largest disk in the set) and two 3TB drives.
The "obvious" way to structure this (to me) would be to create partitions on the 3TB drives equal in size to the full 1.5TB drive, then RAID-Z those partitions together for 3TB of redundant storage. The remainder of the 3TB drives could be mirrored together for another 1.5TB of redundant storage. This seems like it gives me no wasted space, and a full 4.5TB of redundant storage to work with.
The problem is that I can't find anything that would let me treat these two segments as a single pool. I don't really care if any given data is written to parity vs. mirrored space, so long as it's all resilient to a single disk failure.
Am I stuck with two virtual spaces and allocating data between them, or is there a ZFS option I'm not finding that would let me pool the whole thing?
Technically you should be able to build a pool with two vdevs -- one with RAID-Z with 3 partitions and another a mirror with 2 partitions.
Something like this should work (assuming da0 is the 1.5 TB disk and da1/da2 are the 3 TB disks, each split into a 1.5 TB first partition and the remainder as a second partition):
zpool create tank raidz da0p1 da1p1 da2p1 mirror da1p2 da2p2
That said, you don't want to do that, for performance reasons. Reads and writes will be distributed across all vdevs and, as a result, across all your partitions for every chunk of data ZFS needs to write out. In the end, your 3 TB hard drives will have to do two seeks to access data on different partitions every time ZFS writes out a transaction group. Once the data is written, similar seeks will be needed to read back data that's not in the ARC yet. At 10-20 ms per seek, performance will be rather terrible.
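If you build such a pool anyway, a quick way to see the cross-partition IO pattern described above (the pool name matches the sketch command earlier):
zpool iostat -v tank 5    # per-vdev read/write operations and bandwidth, refreshed every 5 seconds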
