I am using ApacheBench to get some basic timing information, and it is meeting my needs very well. I noticed that "-g file" creates a tab-delimited file with each call as a row. I am unable to determine the definition of the columns, however. Here is my best guess:
starttime: self-explanatory, the time that this call started
seconds: based on data, I think this may be "starttime" in a different format
ctime: ?, has a 0 value for each row for me, so no idea
dtime, ttime, wait: one of dtime, ttime, or wait appears to be the "time that this call took in ms"
The AB documentation doesn't seem to cover the output format. Anyone know what these columns mean, or where I can find some documentation?
Here is what I have deduced:
ctime: Connection Time
dtime: Processing Time
ttime: Total Time
wait: Waiting Time
How I deduced this:
$ ab -v 2 -n 1 -c 1 -g output.txt http://stackoverflow.com/questions/5929104/apache-bench-gnuplot-output-wha
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking stackoverflow.com (be patient)...INFO: POST header ==
---
GET /questions/5929104/apache-bench-gnuplot-output-what-are-the-column-definitions HTTP/1.0
Host: stackoverflow.com
User-Agent: ApacheBench/2.3
Accept: */*
---
LOG: header received:
HTTP/1.1 200 OK
Cache-Control: public, max-age=60
Content-Type: text/html; charset=utf-8
Expires: Thu, 26 May 2011 19:24:52 GMT
Last-Modified: Thu, 26 May 2011 19:23:52 GMT
Vary: *
Date: Thu, 26 May 2011 19:23:51 GMT
Connection: close
Content-Length: 29118
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>performance testing - apache bench gnuplot output - what are the column definitions? - Stack Overflow
<link rel="shortcut icon" href="http://cdn.sstatic.net/stackoverflow/img/favicon.ico">
<link rel="apple-touch-icon" href="http://cdn.sstatic.net/stackoverflow/img/apple-touch-icon.png">
<link rel="search" type="application/opensearchdescription+xml" title="Stack Overflow" href="/opensearch.xml
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.5.2/jquery.min.js"></scrip
<script type="text/javascript" src="http://cdn.sstatic.net/js/stub.js?v=005820c36f6e"></script>
<link rel="stylesheet" type="text/css" href="http://cdn.sstatic.net/stackoverflow/all.css?v=d75e6659067e">
<link rel="canonical" href="http://stackoverflow.com/questions/5929104/apache-bench-gnuplot-output-what-
<link rel="alternate" type="application/atom+xml" title="Feed for question 'apache bench gnuplot output
..done
Server Software:
Server Hostname: stackoverflow.com
Server Port: 80
Document Path: /questions/5929104/apache-bench-gnuplot-output-what-are-the-column-definitions
Document Length: 29118 bytes
Concurrency Level: 1
Time taken for tests: 0.330 seconds
Complete requests: 1
Failed requests: 0
Write errors: 0
Total transferred: 29386 bytes
HTML transferred: 29118 bytes
Requests per second: 3.03 [#/sec] (mean)
Time per request: 329.777 [ms] (mean)
Time per request: 329.777 [ms] (mean, across all concurrent requests)
Transfer rate: 87.02 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 79 79 0.0 79 79
Processing: 251 251 0.0 251 251
Waiting: 91 91 0.0 91 91
Total: 330 330 0.0 330 330
Compare to:
$ cat output.txt
starttime seconds ctime dtime ttime wait
Thu May 26 12:24:02 2011 1306437842 79 251 330 91
There's a good explanation here (including wait, which is missing from the accepted answer):
I came to the same results as saltycrane, but want to make some additions (and want to sum things up):
starttime: self-explanatory, the time that this call started (as stated in question)
seconds: starttime as a Unix timestamp (date -d @1306437842 prints the starttime value)
ctime: Connection Time
dtime: Processing Time
ttime: Total Time (ttime = ctime + dtime)
wait: Waiting Time
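As a quick usage sketch (not from the original answers): once the columns are known, the -g file can be summarized directly with awk. This assumes the tab-delimited layout above, with the first row as a header; the sample row is the one from output.txt earlier.

```shell
# Recreate the sample -g row from above (ab's -g output is
# tab-delimited: starttime, seconds, ctime, dtime, ttime, wait):
printf 'starttime\tseconds\tctime\tdtime\tttime\twait\n' > output.txt
printf 'Thu May 26 12:24:02 2011\t1306437842\t79\t251\t330\t91\n' >> output.txt

# Skip the header, average ttime, and sanity-check ttime = ctime + dtime:
awk -F'\t' 'NR > 1 {
    n++; sum += $5                # $5 = ttime (total time, ms)
    if ($3 + $4 != $5) bad++     # $3 = ctime, $4 = dtime
} END {
    printf "requests=%d mean_ttime=%.1fms inconsistent=%d\n", n, sum / n, bad
}' output.txt
```

With the sample row this prints requests=1 mean_ttime=330.0ms inconsistent=0, confirming the ttime = ctime + dtime relationship deduced above.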
Related
I am trying to write a bash script that will list and count the HTTP 500 - 511 errors inside the file "ccc2022-02-19.txt".
Inside every file there are several 5xx errors, ranging from HTTP 500, 501, 502, 503 up to 511.
Within the directory where these files are, there are 4 different types of files listed daily, but I am only interested in the files that start with "ccc", for example "ccc2022-02-19.txt", "ccc2022-02-20.txt", etc.
Below is an example of the content of "ccc2022-02-19.txt":
10.32.10.181 ignore 19 Feb 2022 00:26:04 GMT 10.32.10.44 GET / HTTP/1.1 500 73 N 0 h
10.32.26.124 ignore 19 Feb 2022 00:26:06 GMT 10.32.10.44 GET / HTTP/1.1 501 73 N 0 h
10.32.42.249 ignore 19 Feb 2022 00:26:27 GMT 10.32.10.44 GET / HTTP/1.1 500 73 N 1 h
10.32.10.181 ignore 19 Feb 2022 00:26:34 GMT 10.32.10.44 GET / HTTP/1.1 302 73 N 0 h
10.32.26.124 ignore 19 Feb 2022 00:26:36 GMT 10.32.10.44 GET / HTTP/1.1 503 73 N 1 h
10.32.26.124 ignore 19 Feb 2022 00:26:36 GMT 10.32.10.44 GET / HTTP/1.1 502 73 N 1 h
10.32.26.124 ignore 19 Feb 2022 00:26:36 GMT 10.32.10.44 GET / HTTP/1.1 502 73 N 1 h
10.32.26.124 ignore 19 Feb 2022 00:26:36 GMT 10.32.10.44 GET / HTTP/1.1 504 73 N 1 h
10.32.26.124 ignore 19 Feb 2022 00:26:36 GMT 10.32.10.44 GET / HTTP/1.1 511 73 N 1 h
10.32.26.124 ignore 19 Feb 2022 00:26:36 GMT 10.32.10.44 GET / HTTP/1.1 508 73
I have tried using this command
awk '{for(i=1;i<=NF;i++){if($i>=500 && $i<=511){print $i}}}' ccc2022-02-19.txt
which listed the numbers 500 - 511, but I'm afraid it is not matching only the HTTP response codes; it also picks up other numbers found inside the file, like 50023 and 503893.
To be specific, I just want to see only the HTTP errors. Please note that the file content above is just an example......
Here is a simple awk script:
awk '$12 ~ /5[[:digit:]]{2}/ && $12 < 512 {print $12}' input.txt
Explanation
$12 ~ /5[[:digit:]]{2}/ Field #12 matches 5[0-9][0-9]
$12 < 512 Field #12 is less than 512
$12 ~ /5[[:digit:]]{2}/ && $12 < 512 (Field #12 matches 5[0-9][0-9]) AND (Field #12 is less than 512)
{print $12} Print field #12 only if the 2 conditions above are met
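Since the question asks to list and count the errors, the same field test can also feed a per-code tally. A sketch, assuming the status is always field #12 as in the sample data:

```shell
# A few sample lines in the question's log format:
cat > ccc2022-02-19.txt <<'EOF'
10.32.10.181 ignore 19 Feb 2022 00:26:04 GMT 10.32.10.44 GET / HTTP/1.1 500 73 N 0 h
10.32.26.124 ignore 19 Feb 2022 00:26:06 GMT 10.32.10.44 GET / HTTP/1.1 501 73 N 0 h
10.32.42.249 ignore 19 Feb 2022 00:26:27 GMT 10.32.10.44 GET / HTTP/1.1 500 73 N 1 h
10.32.10.181 ignore 19 Feb 2022 00:26:34 GMT 10.32.10.44 GET / HTTP/1.1 302 73 N 0 h
10.32.26.124 ignore 19 Feb 2022 00:26:36 GMT 10.32.10.44 GET / HTTP/1.1 503 73 N 1 h
EOF

# Count each 5xx status (field 12) instead of just listing them:
awk '$12 >= 500 && $12 <= 511 { count[$12]++ }
     END { for (c in count) print c, count[c] }' ccc2022-02-19.txt | sort
```

For the sample above this prints one line per code: 500 2, 501 1, 503 1. The 302 line is correctly ignored, and because the comparison is on field #12 only, numbers like 50023 elsewhere in a line cannot match.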
I think this script might help
#!/bin/bash
ccc=500
while [ $ccc -le 511 ]
do
echo $ccc
ccc=$(( $ccc +1 ))
sleep 0.5
done
You can try out this:
#!/bin/bash
CURRENTDATE=`date +"%Y-%m-%d"`
echo Today date is=${CURRENTDATE}
echo Looking for today file www${CURRENTDATE}.txt
echo "#####"
echo Start listing 500 response codes for file:ccc${CURRENTDATE}.txt
#awk '{print $3 " " $4 " " $5 " " $6 " " $11}' ccc${CURRENTDATE}.txt | grep 500
echo "I am not listing to reduce amount of lines per Ms-teams limit"
echo Completed listing 500 response codes for file:ccc${CURRENTDATE}.txt
echo "#####"
Assuming all lines look like the sample (ie, the http error code is always in the 12th white-space delimited field):
$ awk '$12>= 500 && $12<=511 {print $12}' ccc2022-02-19.txt
500
501
500
503
502
502
504
511
508
If this doesn't work for all possible input lines then the question should be updated with a more representative set of sample data.
This should achieve what you want. Please, everyone, always try to read the description before concluding that someone asked a stupid question. It is actually clear!
awk '{print $3 " " $4 " " $5 " " $6 " " $11 " " $12}' ccc2022-02-21.txt | grep 500 | wc -l
This was done against the file output provided above; I tested it and it worked. This was a brilliant question in my opinion.
Linux novice here so bear with me here.
I am writing a Bash Script for school (On a CentOS 8 VM) and I am attempting to save the output of Siege (Load Tester) to a variable so I can compare values.
Here is the issue I am running into: the HTTP lines between "The server is now under siege..." and "Lifting the server siege..." are what is being stored in the variable, not the nice little summary after "Lifting the server siege..."
[root@prodserver siege-4.1.1]# siege -c 1 -t 1s 192.168.1.3
** SIEGE 4.1.1
** Preparing 1 concurrent users for battle.
The server is now under siege...
HTTP/1.1 200 0.00 secs: 6707 bytes ==> GET /
HTTP/1.1 200 0.01 secs: 2008 bytes ==> GET /assets/images/taste_bug.gif
HTTP/1.1 200 0.00 secs: 2579 bytes ==> GET /assets/images/backpack_bug.gif
HTTP/1.1 200 0.00 secs: 2279 bytes ==> GET /assets/images/desert_bug.gif
HTTP/1.1 200 0.00 secs: 1653 bytes ==> GET /assets/images/calm_bug.gif
HTTP/1.1 200 0.00 secs: 1251 bytes ==> GET /assets/javascripts/menus.js
...Shortened for readability...
HTTP/1.1 200 0.00 secs: 1251 bytes ==> GET /assets/javascripts/menus.js
HTTP/1.1 200 0.00 secs: 2579 bytes ==> GET /assets/images/backpack_bug.gif
HTTP/1.1 200 0.00 secs: 2279 bytes ==> GET /assets/images/desert_bug.gif
HTTP/1.1 200 0.00 secs: 1653 bytes ==> GET /assets/images/calm_bug.gif
HTTP/1.1 200 0.00 secs: 1251 bytes ==> GET /assets/javascripts/menus.js
HTTP/1.1 200 0.01 secs: 2008 bytes ==> GET /assets/images/taste_bug.gif
HTTP/1.1 200 0.00 secs: 2579 bytes ==> GET /assets/images/backpack_bug.gif
HTTP/1.1 200 0.00 secs: 2279 bytes ==> GET /assets/images/desert_bug.gif
HTTP/1.1 200 0.00 secs: 1653 bytes ==> GET /assets/images/calm_bug.gif
Lifting the server siege...
Transactions: 149 hits
Availability: 100.00 %
Elapsed time: 0.22 secs
Data transferred: 3.95 MB
Response time: 0.00 secs
Transaction rate: 677.27 trans/sec
Throughput: 17.97 MB/sec
Concurrency: 1.00
Successful transactions: 149
Failed transactions: 0
Longest transaction: 0.01
Shortest transaction: 0.00
Currently this is how I am attempting to store the variable in bash:
SIEGE="$(siege -c $1 -t $2 [ip])"
As mentioned before, when I echo $SIEGE, it turns out the variable stored all the HTTP lines and NOT the Summary after "Lifting the siege..."
My question is how can I store that Summary in a variable.
NOTE: I'm not familiar with siege so I have no idea if all of the output is going to stdout or if some of the output could be going to stderr.
Assuming all siege output is going to stdout ... a couple ideas depending on which lines need to be ignored:
# grab lines from `^Lifting` to end of output:
SIEGE="$(siege -c $1 -t $2 [ip] | sed -n '/^Lifting/,$ p')"
# ignore all lines starting with `^HTTP`
SIEGE="$(siege -c $1 -t $2 [ip] | grep -v '^HTTP')"
If it turns out some of the output is being sent to stderr, change the siege call to redirect stderr to stdout:
# from
siege -c $1 -t $2 [ip]
# to
siege -c $1 -t $2 [ip] 2>&1
Though I'd probably opt for saving all output to a file and then parsing the file as needed, ymmv ...
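Building on that, once the summary is in $SIEGE, individual metrics can be pulled out with awk. A minimal sketch, assuming the summary layout shown earlier; fake_siege here is a hypothetical stand-in for the real siege call, purely so the parsing can be demonstrated on its own:

```shell
# Hypothetical stand-in for `siege ... 2>&1`, emitting a trimmed
# version of the output shown in the question:
fake_siege() {
    printf 'HTTP/1.1 200 0.00 secs: 6707 bytes ==> GET /\n'
    printf 'Lifting the server siege...\n'
    printf 'Availability:              100.00 %%\n'
    printf 'Transaction rate:          677.27 trans/sec\n'
}

# Keep only the summary, as in the sed approach above:
SIEGE="$(fake_siege | sed -n '/^Lifting/,$ p')"

# Extract one metric (strip spaces and the % sign from the value):
availability=$(printf '%s\n' "$SIEGE" | \
    awk -F: '/^Availability/ { gsub(/[ %]/, "", $2); print $2 }')
echo "$availability"
```

With the stand-in output this prints 100.00; swapping fake_siege for the real siege invocation (plus 2>&1 if needed) leaves the parsing unchanged, which makes it easy to compare values across runs.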
I am running a very simple RESTful API on AWS using Node.js. The API takes a request in the form of '/rest/users/jdoe' and returns the following (it's all done in memory, no database involved):
{
username: 'jdoe',
firstName: 'John',
lastName: 'Doe'
}
The performance of this API on Node.js + AWS is horrible compared to the local network - only 9 requests/sec vs. 2,214 requests/sec on a local network. AWS is running a m1.medium instance whereas the local Node server is a desktop machine with an Intel i7-950 processor. Trying to figure out why such a huge difference in performance.
Benchmarks using Apache Bench are as follows:
Local Network
10,000 requests with concurrency of 100/group
> ab -n 10000 -c 100 http://192.168.1.100:8080/rest/users/jdoe
Document Path: /rest/users/jdoe
Document Length: 70 bytes
Concurrency Level: 100
Time taken for tests: 4.516 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 2350000 bytes
HTML transferred: 700000 bytes
Requests per second: 2214.22 [#/sec] (mean)
Time per request: 45.163 [ms] (mean)
Time per request: 0.452 [ms] (mean, across all concurrent requests)
Transfer rate: 508.15 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.4 0 2
Processing: 28 45 7.2 44 74
Waiting: 22 43 7.5 42 74
Total: 28 45 7.2 44 74
Percentage of the requests served within a certain time (ms)
50% 44
66% 46
75% 49
80% 51
90% 54
95% 59
98% 65
99% 67
100% 74 (longest request)
AWS
1,000 requests with concurrency of 100/group
(10,000 requests would have taken too long)
C:\apps\apache-2.2.21\bin>ab -n 1000 -c 100 http://54.200.x.xxx:8080/rest/users/jdoe
Document Path: /rest/users/jdoe
Document Length: 70 bytes
Concurrency Level: 100
Time taken for tests: 105.693 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 235000 bytes
HTML transferred: 70000 bytes
Requests per second: 9.46 [#/sec] (mean)
Time per request: 10569.305 [ms] (mean)
Time per request: 105.693 [ms] (mean, across all concurrent requests)
Transfer rate: 2.17 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 98 105 3.8 106 122
Processing: 103 9934 1844.8 10434 10633
Waiting: 103 5252 3026.5 5253 10606
Total: 204 10040 1844.9 10540 10736
Percentage of the requests served within a certain time (ms)
50% 10540
66% 10564
75% 10588
80% 10596
90% 10659
95% 10691
98% 10710
99% 10726
100% 10736 (longest request)
Questions:
Connect time for AWS is 105 ms (avg) compared to 0 ms on the local network. I assume this is because it takes a lot more time to open a socket to AWS than to a server on a local network. Is there anything to be done here for better performance under load, assuming requests are coming in from multiple machines across the globe?
More serious is the server processing time - 45 ms for the local server compared to 9.9 seconds for AWS! I can't figure out what's going on here. The server is only pushing 9.46 requests/sec, which is peanuts!
Any insight into these issues much appreciated. I am nervous about putting a serious application on Node+AWS if it can't perform super fast on such a simple application.
For reference here's my server code:
var express = require('express');
var app = express();
app.get('/rest/users/:id', function(req, res) {
var user = {
username: req.params.id,
firstName: 'John',
lastName: 'Doe'
};
res.json(user);
});
app.listen(8080);
console.log('Listening on port 8080');
Edit
Single request sent in isolation (-n 1 -c 1)
Requests per second: 4.67 [#/sec] (mean)
Time per request: 214.013 [ms] (mean)
Time per request: 214.013 [ms] (mean, across all concurrent requests)
Transfer rate: 1.07 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 104 104 0.0 104 104
Processing: 110 110 0.0 110 110
Waiting: 110 110 0.0 110 110
Total: 214 214 0.0 214 214
10 requests all sent concurrently (-n 10 -c 10)
Requests per second: 8.81 [#/sec] (mean)
Time per request: 1135.066 [ms] (mean)
Time per request: 113.507 [ms] (mean, across all concurrent requests)
Transfer rate: 2.02 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 98 103 3.4 102 110
Processing: 102 477 296.0 520 928
Waiting: 102 477 295.9 520 928
Total: 205 580 295.6 621 1033
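As a side note (not from the question): curl's write-out timing variables give a per-phase breakdown for a single request that roughly matches ab's Connect / Waiting / Total columns, which can help separate network latency from server processing time before reaching for a full load tester:

```shell
# Per-phase timings for one request; the URL is the question's
# placeholder host, so substitute your own endpoint:
curl -s -o /dev/null \
     -w 'connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
     http://54.200.x.xxx:8080/rest/users/jdoe
```

Here time_connect corresponds to ab's Connect column and time_starttransfer (time to first byte) to its Waiting column.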
Results using wrk
As suggested by Andrey Sidorov, the results are MUCH better: 2821 requests per second:
Running 30s test @ http://54.200.x.xxx:8080/rest/users/jdoe
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 137.04ms 48.12ms 2.66s 98.89%
Req/Sec 238.11 27.97 303.00 88.91%
84659 requests in 30.01s, 19.38MB read
Socket errors: connect 0, read 0, write 0, timeout 53
Requests/sec: 2821.41
Transfer/sec: 661.27KB
So it certainly looks like the culprit is ApacheBench! Unbelievable!
It's probably an ab issue (see also this question). There is nothing wrong with your server code. I suggest benchmarking with the wrk load testing tool. Your example on my t1.micro:
$ ./wrk -t12 -c400 -d30s http://some-amazon-hostname.com/rest/users/10
Running 30s test @ http://some-amazon-hostname.com/rest/users/10
12 threads and 400 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 333.42ms 322.01ms 3.20s 91.33%
Req/Sec 135.02 59.20 283.00 65.32%
48965 requests in 30.00s, 11.95MB read
Requests/sec: 1631.98
Transfer/sec: 407.99KB
Update 1: @BagosGiAr's tests with a quite similar configuration show that the cluster should always perform better. That is, there is some problem with my configuration, and I'm asking for help to find out what it could be.
Update 2: I'd like to dig deeper into this problem. I've tested on a LiveCD* (Xubuntu 13.04), same Node version. The first thing is that, on Linux, performance is way better than on Windows: -n 100000 -c 1000 gives me 6409.85 reqs/sec without cluster and 7215.74 reqs/sec with clustering. The Windows build definitely has a lot of problems. Still, I want to investigate why this is happening only to me, given that some people with a similar configuration perform better (and clustering performs well too).
*It should be noted that the LiveCD uses a RAM filesystem, while on Windows I was using a fast SSD.
How is this possible? Shouldn't the result be better with the cluster module? Specs: Windows 7 x64, Dual Core P8700 2.53GHz, 4GB RAM, Node.js 0.10.5, ab 2.3. The test command line is ab -n 10000 -c 1000 http://127.0.0.1:8080/.
var http = require('http');
http.createServer(function (req, res) {
res.end('Hello World');
}).listen(8080);
Benchmark result ~ 2840.75 reqs/second:
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8080
Document Path: /
Document Length: 12 bytes
Concurrency Level: 1000
Time taken for tests: 3.520 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 870000 bytes
HTML transferred: 120000 bytes
Requests per second: 2840.75 [#/sec] (mean)
Time per request: 352.020 [ms] (mean)
Time per request: 0.352 [ms] (mean, across all concurrent requests)
Transfer rate: 241.35 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 7.1 0 505
Processing: 61 296 215.9 245 1262
Waiting: 31 217 216.7 174 1224
Total: 61 297 216.1 245 1262
Percentage of the requests served within a certain time (ms)
50% 245
66% 253
75% 257
80% 265
90% 281
95% 772
98% 1245
99% 1252
100% 1262 (longest request)
With cluster module:
var cluster = require('cluster'),
http = require('http'),
numCPUs = require('os').cpus().length;
if (cluster.isMaster) {
// Fork workers
for (var i = 0; i < numCPUs; i++) {
cluster.fork();
}
cluster.on('exit', function (worker, code, signal) {
console.log('worker ' + worker.process.pid + ' died');
});
} else {
http.createServer(function (req, res) {
res.end('Hello World');
}).listen(8080);
}
... and with the same benchmark, the result is worse: 849.64 reqs/sec:
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8080
Document Path: /
Document Length: 12 bytes
Concurrency Level: 1000
Time taken for tests: 11.770 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 870000 bytes
HTML transferred: 120000 bytes
Requests per second: 849.64 [#/sec] (mean)
Time per request: 1176.967 [ms] (mean)
Time per request: 1.177 [ms] (mean, across all concurrent requests)
Transfer rate: 72.19 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 21.3 0 509
Processing: 42 1085 362.4 1243 2274
Waiting: 27 685 409.8 673 1734
Total: 42 1086 362.7 1243 2275
Percentage of the requests served within a certain time (ms)
50% 1243
66% 1275
75% 1286
80% 1290
90% 1334
95% 1759
98% 1772
99% 1787
100% 2275 (longest request)
You are not giving port number 8080 in your URL address.
By default, port 80 is used when no port is given (8080 is the default port used by Apache Tomcat). Maybe another server is listening on port 80 on your machine.
Update
Machine Specs: Intel(R) Xeon(R) CPU X5650 @ 2.67GHz, 64GB RAM, CentOS Linux release 6.0 (Final), node -v 0.8.8, ab -V 2.3
I think the problem in your case is that either Windows is not efficiently using the resources or CPU or RAM is being saturated when you run the benchmark.
Without cluster (used the same script)
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 10.232.5.169 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: 10.232.5.169
Server Port: 8000
Document Path: /
Document Length: 11 bytes
Concurrency Level: 1000
Time taken for tests: 3.196 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 860000 bytes
HTML transferred: 110000 bytes
Requests per second: 3129.14 [#/sec] (mean)
Time per request: 319.577 [ms] (mean)
Time per request: 0.320 [ms] (mean, across all concurrent requests)
Transfer rate: 262.80 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 3 43.0 0 2999
Processing: 1 81 39.9 81 201
Waiting: 1 81 39.9 81 201
Total: 12 84 57.8 82 3000
Percentage of the requests served within a certain time (ms)
50% 82
66% 103
75% 114
80% 120
90% 140
95% 143
98% 170
99% 183
100% 3000 (longest request)
With cluster (used your cluster script)
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 10.232.5.169 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: 10.232.5.169
Server Port: 8000
Document Path: /
Document Length: 11 bytes
Concurrency Level: 1000
Time taken for tests: 1.056 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 924672 bytes
HTML transferred: 118272 bytes
Requests per second: 9467.95 [#/sec] (mean)
Time per request: 105.620 [ms] (mean)
Time per request: 0.106 [ms] (mean, across all concurrent requests)
Transfer rate: 854.96 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 22 47 13.6 46 78
Processing: 23 52 13.8 52 102
Waiting: 5 22 17.6 17 83
Total: 77 99 5.8 100 142
Percentage of the requests served within a certain time (ms)
50% 100
66% 101
75% 102
80% 102
90% 104
95% 105
98% 110
99% 117
100% 142 (longest request)
I assume this is a result of not using the concurrency option of ApacheBench. By default, ab makes one request at a time, so each request (in the cluster test) is served by one worker while the rest stay idle. If you use the -c option, you will actually benchmark the cluster mode of Node.js.
e.g.
ab -n 10000 -c 4 -t 25 http://127.0.0.1:8083/
My results are:
Without cluster ab -n 10000 -t 25 http://127.0.0.1:8083/:
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8083
Document Path: /
Document Length: 11 bytes
Concurrency Level: 1
Time taken for tests: 16.503 seconds
Complete requests: 50000
Failed requests: 0
Write errors: 0
Total transferred: 4300000 bytes
HTML transferred: 550000 bytes
Requests per second: 3029.66 [#/sec] (mean)
Time per request: 0.330 [ms] (mean)
Time per request: 0.330 [ms] (mean, across all concurrent requests)
Transfer rate: 254.44 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.3 0 1
Processing: 0 0 0.4 0 13
Waiting: 0 0 0.4 0 11
Total: 0 0 0.5 0 13
Percentage of the requests served within a certain time (ms)
50% 0
66% 0
75% 1
80% 1
90% 1
95% 1
98% 1
99% 1
100% 13 (longest request)
With cluster ab -n 10000 -c 4 -t 25 http://127.0.0.1:8083/:
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8083
Document Path: /
Document Length: 11 bytes
Concurrency Level: 4
Time taken for tests: 8.935 seconds
Complete requests: 50000
Failed requests: 0
Write errors: 0
Total transferred: 4300000 bytes
HTML transferred: 550000 bytes
Requests per second: 5595.99 [#/sec] (mean)
Time per request: 0.715 [ms] (mean)
Time per request: 0.179 [ms] (mean, across all concurrent requests)
Transfer rate: 469.98 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.3 0 1
Processing: 0 1 0.6 1 17
Waiting: 0 0 0.6 0 17
Total: 0 1 0.6 1 18
Percentage of the requests served within a certain time (ms)
50% 1
66% 1
75% 1
80% 1
90% 1
95% 1
98% 1
99% 1
100% 18 (longest request)
Cheers!
EDIT
I forgot my specifications: Windows 8 x64, Intel Core i5-2430M @ 2.4GHz, 6GB RAM
I want to use RAM instead of an SSD. I'm looking for experienced people to give me some advice about this. I want to mount a partition and put my Rails app into it.
Any ideas?
UPD: I tested the SSD and RAM. I have OS X with 4x4GB Kingston @ 1333 RAM, an Intel Core i3 @ 2.8GHz, an OCZ Vertex3 @ 120GB, and an HDD Seagate ST3000DM001 @ 3TB. My OS is installed on the SSD, and Ruby with its gems lives in the home folder on the SSD. I created a new Rails app with 10,000 product items in SQLite and a controller with this code:
@products = Product.all
Rails.cache.clear
Tested it with AB.
SSD
Document Length: 719 bytes
Concurrency Level: 4
Time taken for tests: 39.274 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 130600 bytes
HTML transferred: 71900 bytes
Requests per second: 2546.21
Transfer rate: 3325.35 kb/s received
Connection Times (ms)
min avg max
Connect: 0 0 0
Processing: 398 1546 1627
Total: 398 1546 1627
RAM
Document Length: 719 bytes
Concurrency Level: 4
Time taken for tests: 39.272 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 130600 bytes
HTML transferred: 71900 bytes
Requests per second: 2546.33
Transfer rate: 3325.51 kb/s received
Connection Times (ms)
min avg max
Connect: 0 0 0
Processing: 366 1546 1645
Total: 366 1546 1645
HDD
Document Length: 719 bytes
Concurrency Level: 4
Time taken for tests: 40.510 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 130600 bytes
HTML transferred: 71900 bytes
Requests per second: 2468.54
Transfer rate: 3223.92 kb/s received
Connection Times (ms)
min avg max
Connect: 0 0 0
Processing: 1193 1596 2400
Total: 1193 1596 2400
So I think the problem is that Ruby and its gems sit on the SSD and the scripts are loaded from there; I will test on a real server, putting all the Ruby scripts into RAM, with more complicated code or a real application.
ps: sorry for my english :)
You are looking for a ramdisk.
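A minimal sketch of the ramdisk idea on Linux (paths are hypothetical placeholders; on OS X, which the update above was run on, hdiutil can create an equivalent RAM disk): /dev/shm is a RAM-backed tmpfs mounted by default on most distributions, so copying the app there needs no root or extra mounts. Anything placed there is lost on reboot, so treat it as a cache, not storage.

```shell
# Hypothetical on-disk app location and its RAM-backed copy:
APP="$HOME/myapp"
RAMDIR="/dev/shm/myapp"

# cp -a preserves permissions and symlinks; everything under
# $RAMDIR now lives in RAM:
cp -a "$APP" "$RAMDIR"

# Run the app from the RAM copy, e.g.:
#   (cd "$RAMDIR" && bundle exec rails server)
```

Given the benchmark numbers above, though, the SSD/RAM/HDD results were nearly identical, which suggests the files were already in the OS page cache and a ramdisk would buy little here.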