Linux novice here, so bear with me.
I am writing a Bash script for school (on a CentOS 8 VM) and am attempting to save the output of Siege (a load tester) to a variable so I can compare values.
Here is the issue I am running into: the HTTP lines between "The server is now under siege..." and "Lifting the server siege..." are what end up stored in the variable, not the nice little summary after "Lifting the server siege..."
[root@prodserver siege-4.1.1]# siege -c 1 -t 1s 192.168.1.3
** SIEGE 4.1.1
** Preparing 1 concurrent users for battle.
The server is now under siege...
HTTP/1.1 200 0.00 secs: 6707 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 2008 bytes ==> GET /assets/images/taste_bug.gif
HTTP/1.1 200 0.00 secs: 2579 bytes ==> GET /assets/images/backpack_bug.gif
HTTP/1.1 200 0.00 secs: 2279 bytes ==> GET /assets/images/desert_bug.gif
HTTP/1.1 200 0.00 secs: 1653 bytes ==> GET /assets/images/calm_bug.gif
HTTP/1.1 200 0.00 secs: 1251 bytes ==> GET /assets/javascripts/menus.js
...Shortened for readability...
HTTP/1.1 200 0.00 secs: 1251 bytes ==> GET /assets/javascripts/menus.js
HTTP/1.1 200 0.00 secs: 2579 bytes ==> GET /assets/images/backpack_bug.gif
HTTP/1.1 200 0.00 secs: 2279 bytes ==> GET /assets/images/desert_bug.gif
HTTP/1.1 200 0.00 secs: 1653 bytes ==> GET /assets/images/calm_bug.gif
HTTP/1.1 200 0.00 secs: 1251 bytes ==> GET /assets/javascripts/menus.js
HTTP/1.1 200 0.01 secs: 2008 bytes ==> GET /assets/images/taste_bug.gif
HTTP/1.1 200 0.00 secs: 2579 bytes ==> GET /assets/images/backpack_bug.gif
HTTP/1.1 200 0.00 secs: 2279 bytes ==> GET /assets/images/desert_bug.gif
HTTP/1.1 200 0.00 secs: 1653 bytes ==> GET /assets/images/calm_bug.gif
Lifting the server siege...
Transactions: 149 hits
Availability: 100.00 %
Elapsed time: 0.22 secs
Data transferred: 3.95 MB
Response time: 0.00 secs
Transaction rate: 677.27 trans/sec
Throughput: 17.97 MB/sec
Concurrency: 1.00
Successful transactions: 149
Failed transactions: 0
Longest transaction: 0.01
Shortest transaction: 0.00
Currently this is how I am attempting to store the variable in bash:
SIEGE="$(siege -c $1 -t $2 [ip])"
As mentioned before, when I echo $SIEGE, it turns out the variable stored all the HTTP lines and NOT the summary after "Lifting the server siege..."
My question is: how can I store that summary in a variable?
NOTE: I'm not familiar with siege so I have no idea if all of the output is going to stdout or if some of the output could be going to stderr.
Assuming all siege output is going to stdout, here are a couple of ideas depending on which lines need to be ignored:
# grab lines from `^Lifting` to end of output:
SIEGE="$(siege -c "$1" -t "$2" [ip] | sed -n '/^Lifting/,$ p')"
# ignore all lines starting with `^HTTP`
SIEGE="$(siege -c "$1" -t "$2" [ip] | grep -v '^HTTP')"
If it turns out some of the output is being sent to stderr, change the siege call to redirect stderr to stdout:
# from
siege -c "$1" -t "$2" [ip]
# to
siege -c "$1" -t "$2" [ip] 2>&1
Though I'd probably opt for saving all output to a file and then parsing the file as needed; YMMV.
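As a sketch of that file-based approach: the real run would redirect siege into a log file, and the parsing happens afterwards. Here a trimmed copy of the output shown above stands in for a live siege run, so the siege invocation itself is commented out:

```shell
# In the real script this would be:
#   siege -c "$1" -t "$2" [ip] >siege.log 2>&1
# A captured snippet stands in for the live run here:
cat >siege.log <<'EOF'
HTTP/1.1 200 0.00 secs: 6707 bytes ==> GET /
Lifting the server siege...
Transactions: 149 hits
Availability: 100.00 %
Elapsed time: 0.22 secs
EOF

# Keep only the summary block (from "Lifting" to end of file):
SUMMARY=$(sed -n '/^Lifting/,$p' siege.log)

# Or pull out one numeric field for comparisons, e.g. the transaction count:
TRANSACTIONS=$(awk -F: '/^Transactions/ {gsub(/[^0-9]/, "", $2); print $2}' siege.log)
echo "$TRANSACTIONS"
```

Having the log on disk also makes it easy to debug the parsing separately from the load test itself.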
Related
I am writing a shell script that runs the commands mpstat and iostat to get CPU and disk usage, extracts information from them, and puts it into a .plot file to later graph with bargraph.pl. What I am having trouble with is using awk to get the time from mpstat, like this:
mpstat | awk 'FNR == 4 {print $1;}' >> CPU_usage.plot
It prints a newline at the end. I tried using printf, which works for my other lines of code to get the specific information needed without adding a newline, but I don't know how to format it here. Is there any way to do this with awk, or any other method I can use to accomplish this? Thanks in advance.
When I use the command mpstat, this is what Bash returns:
Linux 3.4.0+ (DESKTOP-JM295S0) 04/30/2017 _x86_64_ (4 CPU)
03:56:43 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
03:56:43 PM all 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
This is what I'm trying to accomplish: take the time, usr, sys, and idle values and put them into a file called CPU_usage.plot. This is what I wanted in the file:
03:56:43 0.00 0.00 100.00
What I got instead is:
03:56:43
0.00 0.00 100.00
This is my code:
mpstat | awk 'FNR == 4 {print $1;}' >> CPU_usage.plot
mpstat | awk 'FNR == 4 {printf " %f", $4;}' >> CPU_usage.plot
mpstat | awk 'FNR == 4 {printf " %f", $6;}' >> CPU_usage.plot
mpstat | awk 'FNR == 4 {printf " %f\n", $13;}' >> CPU_usage.plot
Use the following awk approach:
mpstat | awk 'NR==4{print $1,$4,$6,$13}' OFS="\t" >> CPU_usage.plot
Now the CPU_usage.plot file should contain:
03:56:43 0.00 0.00 100.00
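The question's four separate mpstat invocations can also be collapsed into one call, which sidesteps the trailing-newline problem entirely because a single printf emits the whole row. A sketch, simulating mpstat's output with the sample from the question (field positions can vary between sysstat versions, so $1/$4/$6/$13 are assumptions taken from that sample):

```shell
# Simulated `mpstat` output, same layout as the sample above:
mpstat_out='Linux 3.4.0+ (DESKTOP-JM295S0)  04/30/2017  _x86_64_  (4 CPU)

03:56:43 PM  CPU  %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
03:56:43 PM  all  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  100.00'

# One awk pass prints time, %usr, %sys and %idle on a single line:
row=$(printf '%s\n' "$mpstat_out" | awk 'NR==4 {printf "%s %s %s %s", $1, $4, $6, $13}')
echo "$row"    # in the real script: echo "$row" >> CPU_usage.plot
```

Besides fixing the formatting, this runs mpstat once instead of four times, so all four values come from the same sample.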
I need to get stats from my CentOS 6.7 server with cPanel and send them to my external monitoring server. What I would like is the average CPU load per user or per process name over the last 3 minutes. After much research and testing, I have not found any workable solution apart from running top in batch mode from Bash:
top -d 180 -b -n 2 > /top.log
The second iteration looks like this:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
38017 mysql 20 0 760m 265m 6324 S 1.4 14.2 244:27.08 mysqld
39501 nobody 20 0 1047m 93m 7068 S 0.1 5.0 0:06.80 httpd
54877 johnd 20 0 32728 3612 2364 S 0.0 0.2 0:00.09 imap
51530 johnd 20 0 353m 5372 1928 S 0.0 0.3 0:04.17 php-fpm
39500 nobody 20 0 1046m 79m 3656 S 0.0 4.3 0:02.57 httpd
7 root 20 0 0 0 0 S 0.0 0.0 27:47.61 events/0
39497 nobody 20 0 1046m 84m 7784 S 0.0 4.5 0:02.77 httpd
etc...
then grep (only on the second iteration's output) for a COMMAND or USER, sum the %CPU column, and divide by 100 to get a CPU-load-like value:
echo "$PRTGTOP" | grep johnd | awk '{ sum += $9; } END { print sum/100; }'
Should I also try to account for process run times, etc.? Maybe there is a simpler way to achieve the same result, perhaps with third-party software to generate the stats?
Thanks.
top gets its info from /proc/*/stat. Each numerical directory under /proc is a process number for a currently running process.
It may be easier for you to collect data directly from those directories. The data format is well defined and can be found in man proc under the subsection called "/proc/[pid]/stat".
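As a minimal illustration of that approach, this sketch reads utime and stime (fields 14 and 15 of /proc/[pid]/stat, counted in clock ticks) for the current shell via /proc/self. Field 2 (comm) can contain spaces and parentheses, so everything up to the final ")" is stripped before splitting:

```shell
# Read the stat line for the current shell itself:
stat_line=$(cat /proc/self/stat)

# comm (field 2) may contain spaces; drop everything through the final ") "
# so the remaining fields split cleanly, starting at field 3 (state).
rest=${stat_line##*) }
set -- $rest
state=$1        # field 3:  process state (R, S, D, ...)
utime=${12}     # field 14: time spent in user mode, in clock ticks
stime=${13}     # field 15: time spent in kernel mode, in clock ticks

hz=$(getconf CLK_TCK)   # ticks per second, usually 100
echo "state=$state utime=$utime stime=$stime hz=$hz"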
You can try the pidstat tool (part of the sysstat package):
pidstat -C httpd -U johnd -h -u 180 1 | awk '{ sum += $7; } END { print sum/100;}'
This will return the percentage CPU usage of all processes matching the httpd command string and the johnd user over a 180-second interval.
ok, pidstat is better, thanks!, but if USER pid is run for only few seconds no cpu use is reported. i found best result with:
#run pidstat with 10 iterations for 18 times
pidstat -U -u 10 18 > /pidstat.log
then
#sum all cpu usage and divide by 18
cat /pidstat.log | grep -v Average | grep johnd | awk '{ sum += $8; } END { print sum/100/18;}' OFMT="%3.3f"
cat /pidstat.log | grep -v Average | grep httpd | awk '{ sum += $8; } END { print sum/100/18;}' OFMT="%3.3f"
with this i get best cpu usage stat per USER even if process is run only for few seconds but with high cpu usage
I am running a very simple RESTful API on AWS using Node.js. The API takes a request in the form of '/rest/users/jdoe' and returns the following (it's all done in memory, no database involved):
{
username: 'jdoe',
firstName: 'John',
lastName: 'Doe'
}
The performance of this API on Node.js + AWS is horrible compared to the local network - only 9 requests/sec vs. 2,214 requests/sec on a local network. AWS is running a m1.medium instance whereas the local Node server is a desktop machine with an Intel i7-950 processor. Trying to figure out why such a huge difference in performance.
Benchmarks using Apache Bench are as follows:
Local Network
10,000 requests with concurrency of 100/group
> ab -n 10000 -c 100 http://192.168.1.100:8080/rest/users/jdoe
Document Path: /rest/users/jdoe
Document Length: 70 bytes
Concurrency Level: 100
Time taken for tests: 4.516 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 2350000 bytes
HTML transferred: 700000 bytes
Requests per second: 2214.22 [#/sec] (mean)
Time per request: 45.163 [ms] (mean)
Time per request: 0.452 [ms] (mean, across all concurrent requests)
Transfer rate: 508.15 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.4 0 2
Processing: 28 45 7.2 44 74
Waiting: 22 43 7.5 42 74
Total: 28 45 7.2 44 74
Percentage of the requests served within a certain time (ms)
50% 44
66% 46
75% 49
80% 51
90% 54
95% 59
98% 65
99% 67
100% 74 (longest request)
AWS
1,000 requests with concurrency of 100/group
(10,000 requests would have taken too long)
C:\apps\apache-2.2.21\bin>ab -n 1000 -c 100 http://54.200.x.xxx:8080/rest/users/jdoe
Document Path: /rest/users/jdoe
Document Length: 70 bytes
Concurrency Level: 100
Time taken for tests: 105.693 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 235000 bytes
HTML transferred: 70000 bytes
Requests per second: 9.46 [#/sec] (mean)
Time per request: 10569.305 [ms] (mean)
Time per request: 105.693 [ms] (mean, across all concurrent requests)
Transfer rate: 2.17 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 98 105 3.8 106 122
Processing: 103 9934 1844.8 10434 10633
Waiting: 103 5252 3026.5 5253 10606
Total: 204 10040 1844.9 10540 10736
Percentage of the requests served within a certain time (ms)
50% 10540
66% 10564
75% 10588
80% 10596
90% 10659
95% 10691
98% 10710
99% 10726
100% 10736 (longest request)
Questions:
Connect time for AWS is 105 ms (avg) compared to 0 ms on local network. I assume that this is because it takes a lot more time to open a socket to AWS then to a server on a local network. Is there anything to be done here for better performance under load assuming requests are coming in from multiple machines across the globe.
More serious is the server processing time - 45 ms for local server compared to 9.9 seconds for AWS! I can't figure out what's going on in here. The server is only pressing 9.46 requests/sec. which is peanuts!
Any insight into these issues much appreciated. I am nervous about putting a serious application on Node+AWS if it can't perform super fast on such a simple application.
For reference here's my server code:
var express = require('express');
var app = express();
app.get('/rest/users/:id', function(req, res) {
var user = {
username: req.params.id,
firstName: 'John',
lastName: 'Doe'
};
res.json(user);
});
app.listen(8080);
console.log('Listening on port 8080');
Edit
Single request sent in isolation (-n 1 -c 1)
Requests per second: 4.67 [#/sec] (mean)
Time per request: 214.013 [ms] (mean)
Time per request: 214.013 [ms] (mean, across all concurrent requests)
Transfer rate: 1.07 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 104 104 0.0 104 104
Processing: 110 110 0.0 110 110
Waiting: 110 110 0.0 110 110
Total: 214 214 0.0 214 214
10 request all sent concurrently (-n 10 -c 10)
Requests per second: 8.81 [#/sec] (mean)
Time per request: 1135.066 [ms] (mean)
Time per request: 113.507 [ms] (mean, across all concurrent requests)
Transfer rate: 2.02 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 98 103 3.4 102 110
Processing: 102 477 296.0 520 928
Waiting: 102 477 295.9 520 928
Total: 205 580 295.6 621 1033
Results using wrk
As suggested by Andrey Sidorov. The results are MUCH better - 2821 requests per second:
Running 30s test # http://54.200.x.xxx:8080/rest/users/jdoe
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 137.04ms 48.12ms 2.66s 98.89%
Req/Sec 238.11 27.97 303.00 88.91%
84659 requests in 30.01s, 19.38MB read
Socket errors: connect 0, read 0, write 0, timeout 53
Requests/sec: 2821.41
Transfer/sec: 661.27KB
So it certainly looks like the culprit is ApacheBench! Unbelievable!
It's probably ab issue (see also this question). There is nothing wrong in your server code. I suggest to try to benchmark using wrk load testing tool. Your example on my t1.micro:
wrk git:master ❯ ./wrk -t12 -c400 -d30s http://some-amazon-hostname.com/rest/users/10 ✭
Running 30s test # http://some-amazon-hostname.com/rest/users/10
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 333.42ms 322.01ms 3.20s 91.33%
Req/Sec 135.02 59.20 283.00 65.32%
48965 requests in 30.00s, 11.95MB read
Requests/sec: 1631.98
Transfer/sec: 407.99KB
Update 1: #BagosGiAr tests with a quite similar configuration shows the cluster always should perform better. That is, there is some problem with my configuration, and I'm asking you to help me find out what could be.
Update 2: I'd like to go in deep of this problem. I've tested on a LiveCD* (Xubuntu 13.04), same node version. First thing is that, with Linux, performances are way better than Windows: -n 100000 -c 1000 gives me 6409.85 reqs/sec without cluster, 7215.74 reqs/sec with clustering. Windows build has definitely a lot of problems. Still I want to investigate why this is happening only to me, given that some people with a similar configuration perform better (and clustering performs well too).
*It should be noted that LiveCD uses a RAM filesystem, while in Windows I was using a fast SSD.
How this is possible? Shouldn't result be better with cluster module? Specs: Windows 7 x64, Dual Core P8700 2.53Ghz, 4GB RAM, Node.js 0.10.5, ab 2.3. Test command line is ab -n 10000 -c 1000 http://127.0.0.1:8080/.
var http = require('http');
http.createServer(function (req, res) {
res.end('Hello World');
}).listen(8080);
Benchmark result ~ 2840.75 reqs/second:
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8080
Document Path: /
Document Length: 12 bytes
Concurrency Level: 1000
Time taken for tests: 3.520 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 870000 bytes
HTML transferred: 120000 bytes
Requests per second: 2840.75 [#/sec] (mean)
Time per request: 352.020 [ms] (mean)
Time per request: 0.352 [ms] (mean, across all concurrent requests)
Transfer rate: 241.35 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 7.1 0 505
Processing: 61 296 215.9 245 1262
Waiting: 31 217 216.7 174 1224
Total: 61 297 216.1 245 1262
Percentage of the requests served within a certain time (ms)
50% 245
66% 253
75% 257
80% 265
90% 281
95% 772
98% 1245
99% 1252
100% 1262 (longest request)
With cluster module:
var cluster = require('cluster'),
http = require('http'),
numCPUs = require('os').cpus().length;
if (cluster.isMaster) {
// Fork workers
for (var i = 0; i < numCPUs; i++) {
cluster.fork();
}
cluster.on('exit', function (worker, code, signal) {
console.log('worker ' + worder.process.pid + ' died');
});
} else {
http.createServer(function (req, res) {
res.end('Hello World');
}).listen(8080);
}
... and with the same benchmark, result is worst: 849.64 reqs/sec:
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8080
Document Path: /
Document Length: 12 bytes
Concurrency Level: 1000
Time taken for tests: 11.770 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 870000 bytes
HTML transferred: 120000 bytes
Requests per second: 849.64 [#/sec] (mean)
Time per request: 1176.967 [ms] (mean)
Time per request: 1.177 [ms] (mean, across all concurrent requests)
Transfer rate: 72.19 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 21.3 0 509
Processing: 42 1085 362.4 1243 2274
Waiting: 27 685 409.8 673 1734
Total: 42 1086 362.7 1243 2275
Percentage of the requests served within a certain time (ms)
50% 1243
66% 1275
75% 1286
80% 1290
90% 1334
95% 1759
98% 1772
99% 1787
100% 2275 (longest request)
You are not giving port number 8080 in your url address.
By default 80 is used when no port given.(8080 is default port used for Apache Tomcat). Maybe another server is listening on port 80 on your machine.
Update
Machine Specs : Intel(R) Xeon(R) CPU X5650 # 2.67GHz, 64GB RAM,CentOS Linux release 6.0 (Final), node -v 0.8.8, ab -V 2.3
I think the problem in your case is that either Windows is not efficiently using the resources or CPU or RAM is being saturated when you run the benchmark.
Without cluster (used the same script)
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 10.232.5.169 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: 10.232.5.169
Server Port: 8000
Document Path: /
Document Length: 11 bytes
Concurrency Level: 1000
Time taken for tests: 3.196 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 860000 bytes
HTML transferred: 110000 bytes
Requests per second: 3129.14 [#/sec] (mean)
Time per request: 319.577 [ms] (mean)
Time per request: 0.320 [ms] (mean, across all concurrent requests)
Transfer rate: 262.80 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 3 43.0 0 2999
Processing: 1 81 39.9 81 201
Waiting: 1 81 39.9 81 201
Total: 12 84 57.8 82 3000
Percentage of the requests served within a certain time (ms)
50% 82
66% 103
75% 114
80% 120
90% 140
95% 143
98% 170
99% 183
100% 3000 (longest request)
With cluster (used your cluster script)
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 10.232.5.169 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: 10.232.5.169
Server Port: 8000
Document Path: /
Document Length: 11 bytes
Concurrency Level: 1000
Time taken for tests: 1.056 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 924672 bytes
HTML transferred: 118272 bytes
Requests per second: 9467.95 [#/sec] (mean)
Time per request: 105.620 [ms] (mean)
Time per request: 0.106 [ms] (mean, across all concurrent requests)
Transfer rate: 854.96 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 22 47 13.6 46 78
Processing: 23 52 13.8 52 102
Waiting: 5 22 17.6 17 83
Total: 77 99 5.8 100 142
Percentage of the requests served within a certain time (ms)
50% 100
66% 101
75% 102
80% 102
90% 104
95% 105
98% 110
99% 117
100% 142 (longest request)
I assume this is a result for not using the concurrent option of ApacheBench. as a default ab makes one request at the time, so the each request (in the cluster test) is served by one node and the rest of them stay idle. If you use the -c option you will benchmark the cluster mode of nodejs.
eg
ab -n 10000 -c 4 -t 25 http://127.0.0.1:8083/
My result are:
Without cluster ab -n 10000 -t 25 http://127.0.0.1:8083/:
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8083
Document Path: /
Document Length: 11 bytes
Concurrency Level: 1
Time taken for tests: 16.503 seconds
Complete requests: 50000
Failed requests: 0
Write errors: 0
Total transferred: 4300000 bytes
HTML transferred: 550000 bytes
Requests per second: 3029.66 [#/sec] (mean)
Time per request: 0.330 [ms] (mean)
Time per request: 0.330 [ms] (mean, across all concurrent requests)
Transfer rate: 254.44 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.3 0 1
Processing: 0 0 0.4 0 13
Waiting: 0 0 0.4 0 11
Total: 0 0 0.5 0 13
Percentage of the requests served within a certain time (ms)
50% 0
66% 0
75% 1
80% 1
90% 1
95% 1
98% 1
99% 1
100% 13 (longest request)
With cluster ab -n 10000 -c 4 -t 25 http://127.0.0.1:8083/:
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8083
Document Path: /
Document Length: 11 bytes
Concurrency Level: 4
Time taken for tests: 8.935 seconds
Complete requests: 50000
Failed requests: 0
Write errors: 0
Total transferred: 4300000 bytes
HTML transferred: 550000 bytes
Requests per second: 5595.99 [#/sec] (mean)
Time per request: 0.715 [ms] (mean)
Time per request: 0.179 [ms] (mean, across all concurrent requests)
Transfer rate: 469.98 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.3 0 1
Processing: 0 1 0.6 1 17
Waiting: 0 0 0.6 0 17
Total: 0 1 0.6 1 18
Percentage of the requests served within a certain time (ms)
50% 1
66% 1
75% 1
80% 1
90% 1
95% 1
98% 1
99% 1
100% 18 (longest request)
Cheers!
EDIT
I forgot my specifications; Windows 8x64, intel core i5-2430M # 2.4GHz, 6GB RAM
I want use RAM instead of SSD. I'm looking for experienced people to give me some advice about this. I want to mount a partition and put into it my Rails app.
Any ideas?
UPD: I tested the SSD and RAM. I have a OSX with 4x4Gb Kingston # 1333 RAM, Intel Core i3 # 2,8 Ghz, OCZ Vertex3 # 120Gb, HDD Seagate ST3000DM001 # 3Tb. My OS installed on SSD and ruby with gems placed in home folder on SSD. I create new Rails app with 10.000 product items in sqlite and create controller with code:
#products = Product.all
Rails.cache.clear
Tested it with AB.
SSD
Document Length: 719 bytes
Concurrency Level: 4
Time taken for tests: 39.274 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 130600 bytes
HTML transferred: 71900 bytes
Requests per second: 2546.21
Transfer rate: 3325.35 kb/s received
Connnection Times (ms)
min avg max
Connect: 0 0 0
Processing: 398 1546 1627
Total: 398 1546 1627
RAM
Document Length: 719 bytes
Concurrency Level: 4
Time taken for tests: 39.272 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 130600 bytes
HTML transferred: 71900 bytes
Requests per second: 2546.33
Transfer rate: 3325.51 kb/s received
Connnection Times (ms)
min avg max
Connect: 0 0 0
Processing: 366 1546 1645
Total: 366 1546 1645
HDD
Document Length: 719 bytes
Concurrency Level: 4
Time taken for tests: 40.510 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 130600 bytes
HTML transferred: 71900 bytes
Requests per second: 2468.54
Transfer rate: 3223.92 kb/s received
Connnection Times (ms)
min avg max
Connect: 0 0 0
Processing: 1193 1596 2400
Total: 1193 1596 2400
So, I think that thing in ruby with gems placed on SSD and get this scripts slowly, I will test on a real server and puts all ruby scripts into RAM with more complicated code or real application.
ps: sorry for my english :)
You are looking for a ramdisk.