I wanted to measure my disk throughput using the following command:
dd if=/dev/zero of=/mydir/junkfile bs=4k count=125000
If the junkfile already exists, my disk throughput is about 6 times lower than when it does not exist. I have repeated this many times and the results hold. Does anybody know why?
Thanks,
Amir.
In order to minimize disk caching, you need to copy an amount
significantly larger than the amount of memory in your system. 2X the
amount of RAM in your server is a useful amount.
from http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm
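For example, on a box with 8 GB of RAM (an assumed figure), a rough sketch of such a test might be:
dd if=/dev/zero of=/mydir/junkfile bs=1M count=16000 conv=fdatasync   # ~16 GB, roughly 2x RAM
sync && echo 3 > /proc/sys/vm/drop_caches   # optionally (as root) drop caches before a follow-up read test
conv=fdatasync makes dd flush the data to disk before reporting its timing, so the page cache does not inflate the result.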
We're running a standard B8ms VM with a 257 GB Premium SSD. According to the docs, the throughput should be "Up to 170 MB/second, Provisioned 100 MB/second".
https://azure.microsoft.com/en-us/pricing/details/managed-disks/
However, when I test it, the throughput looks to be about 35 MB/second:
▶ dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 30.8976 s, 34.8 MB/s
Is there something else I need to account for in order to maximize the throughput?
You have different limits: the IOPS limit on the disk and the throughput limit on the disk. If you use bigger blocks when testing you will hit the throughput limit, and if you use smaller blocks you will hit the IOPS limit.
Then you have the VM limits as well as the disk/storage limits, so there are many things to take into consideration when doing these types of tests.
You also have the caching settings on the disks to take into consideration.
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/disks-benchmarks
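As a rough sketch (the paths and counts here are arbitrary), you can probe the two limits separately by varying the block size:
# Large blocks: tends to hit the throughput (MB/s) cap
dd if=/dev/zero of=/tmp/tp_test.img bs=1M count=4096 oflag=direct
# Small blocks: tends to hit the IOPS cap
dd if=/dev/zero of=/tmp/iops_test.img bs=4k count=100000 oflag=direct
oflag=direct bypasses the page cache, so the disk and VM limits, rather than memory, dominate the measurement.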
The YCSB Endpoint benchmark would have you believe that Cassandra is the golden child of NoSQL databases. However, recreating the results on our own boxes (8 cores with hyperthreading, 60 GB memory, two 500 GB SSDs), we are seeing dismal read throughput for workload b (read mostly, i.e. 95% read, 5% update).
The cassandra.yaml settings are exactly the same as the Endpoint settings, barring the different IP addresses and our disk configuration (one SSD for data, one for the commit log). While their throughput is ~38,000 operations per second, ours is ~16,000 more or less regardless of the number of threads/client nodes, i.e. one worker node with 256 threads will report ~16,000 ops/sec, while 4 nodes will each report ~4,000 ops/sec.
I've set the readahead value to 8KB for the SSD data drive. I'll put the custom workload file below.
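(For reference, that readahead was set with something like the following, where /dev/sdb stands in for the data SSD:)
blockdev --setra 16 /dev/sdb   # readahead is counted in 512-byte sectors, so 16 = 8 KB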
When analyzing disk I/O and CPU usage with iostat, the read throughput is consistently ~200,000 KB/s, which suggests that the YCSB cluster throughput should be higher (records are 100 bytes). ~25-30% of CPU time is spent in %iowait, and 10-25% in user time.
top and nload stats do not show an obvious bottleneck (<50% memory usage, and 10-50 Mbit/s on a 10 Gb/s link).
# The name of the workload class to use
workload=com.yahoo.ycsb.workloads.CoreWorkload
# There is no default setting for recordcount but it is
# required to be set.
# The number of records in the table to be inserted in
# the load phase or the number of records already in the
# table before the run phase.
recordcount=2000000000
# There is no default setting for operationcount but it is
# required to be set.
# The number of operations to use during the run phase.
operationcount=9000000
# The offset of the first insertion
insertstart=0
insertcount=500000000
core_workload_insertion_retry_limit = 10
core_workload_insertion_retry_interval = 1
# The number of fields in a record
fieldcount=10
# The size of each field (in bytes)
fieldlength=10
# Should read all fields
readallfields=true
# Should write all fields on update
writeallfields=false
fieldlengthdistribution=constant
readproportion=0.95
updateproportion=0.05
insertproportion=0
readmodifywriteproportion=0
scanproportion=0
maxscanlength=1000
scanlengthdistribution=uniform
insertorder=hashed
requestdistribution=zipfian
hotspotdatafraction=0.2
hotspotopnfraction=0.8
table=usertable
measurementtype=histogram
histogram.buckets=1000
timeseries.granularity=1000
The key was increasing native_transport_max_threads in the cassandra.yaml file.
Along with the increased settings in the comment (increasing connections in the YCSB client as well as concurrent reads/writes in Cassandra), Cassandra jumped to ~80,000 ops/sec.
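A hedged sketch of the kind of cassandra.yaml changes involved (the exact values are assumptions that need tuning for the hardware, not drop-in recommendations):
native_transport_max_threads: 256
concurrent_reads: 128
concurrent_writes: 128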
I used CouchDB to create a private NPM mirror, but I found that beam.smp kept my CPU usage at 100%. Is there any way to make it lower, like 50%?
Thank you very much.
You cannot directly limit CPU/memory usage for CouchDB, but you can tweak the Replicator options to reduce it. The options you're interested in:
http_connections
Defines the maximum number of HTTP connections per replication. Keeping this lower reduces transfer bandwidth.
[replicator]
http_connections = 20
worker_batch_size
With lower batch sizes, checkpoints are done more frequently. Lower batch sizes also reduce the total amount of RAM used.
[replicator]
worker_batch_size = 500
worker_processes
The number of replication workers. Keeping this lower reduces the amount of replication data handled at once, which in turn reduces CPU usage because there is less data to process.
[replicator]
worker_processes = 4
Play with these options to find the right combination for your limits.
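For instance, a combined local.ini sketch using the options above (the values are purely illustrative; lower them further if CPU usage is still too high):
[replicator]
http_connections = 10
worker_batch_size = 250
worker_processes = 2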
I've got pretty unusual latency patterns in my production setup:
the whole cluster (3 machines: 48 GB RAM, 7500 RPM disk, 6 cores each) shows latency spikes every 10 minutes, on all machines at the same time.
See this screenshot.
I checked the logfiles and it seems that no compactions are taking place at that time.
I've got 2k reads and 5k reads/sec. No optimizations have been made so far.
Caching is set to "ALL", and the hit rate for the row cache is ~0.7.
Any ideas? Is tuning memtable size an option?
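(For what it's worth, a quick way to double-check compaction activity around the spikes, assuming nodetool is available on each node:)
nodetool compactionstats   # pending/active compactions
nodetool tpstats           # thread pool backlogs and dropped messages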
Best,
Tobias
Solr 1.4 is doing great with respect to indexing on a dedicated physical server (Windows Server 2008). Indexing around 1 million full-text documents (around 4 GB in size) takes around 20 minutes with a heap size of 512M-1G and 4 GB of RAM.
However, while using Solr on a VM with 4 GB of RAM, it took 50 minutes to index the first time. Note that there are no network delays and no RAM issues. When I increased the RAM to 8 GB and increased the heap size, the indexing time increased to 2 hours. That was really strange. Note that except for SQL Server, no other process is running. However, I have not checked file I/O. Can that be a bottleneck? Does Solr have any issues running in a "virtualized" environment?
I read a paper today by Brian & Harry, "ON THE RESPONSE TIME OF A SOLR SEARCH ENGINE IN A VIRTUALIZED ENVIRONMENT", and they claim that performance deteriorates when RAM is increased while Solr is running on a VM, but that is with respect to query times and not indexing times.
I am a bit confused as to why it took longer on the VM when I repeated the same test a second time with increased heap size and RAM.
I/O on a VM is generally slower than on dedicated hardware, because the disk is virtualized and I/O operations must pass through an extra abstraction layer. Indexing requires intensive I/O, so it's not surprising that it runs more slowly on a VM. I don't know why adding RAM causes a slowdown, though.
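Since the VM in question appears to be Windows, one rough way to check whether file I/O is the bottleneck is to watch the PhysicalDisk counters while an indexing run is in progress, e.g. with Performance Monitor or typeperf (the counters below are the standard built-in ones):
typeperf "\PhysicalDisk(_Total)\Avg. Disk Queue Length" "\PhysicalDisk(_Total)\Disk Bytes/sec" -si 5
A consistently high average disk queue length while the CPU sits mostly idle would point to the virtualized disk, rather than heap size or RAM, as the limiting factor.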