I have set up prerender.io on AWS. There are two t2.large EC2 instances behind a load balancer.
A single request takes about 2-3 s to finish.
When I use JMeter for performance testing, the results are poor:
total duration: 99
requests: 151
requests per second: 1
response duration (ms)
min: 43
average: 5217
max: 20146
standard deviation: 3905
quantiles (ms)
10% 92
20% 2907
30% 3250
40% 3565
50% 4142
60% 4933
70% 5447
80% 6874
90% 10815
99% 17062
99.9% 17538
100.0% 20146 (max. value)
response status codes
200: 151 (46.04%)
504: 177 (53.96%)
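For reference, the 504 share follows directly from the status-code counts above (the "requests: 151" figure only covers the successes), e.g.:

```javascript
// Error rate from the status-code counts: 200 -> 151, 504 -> 177.
const ok = 151;
const gatewayTimeout = 177;
const errorRate = gatewayTimeout / (ok + gatewayTimeout);

console.log((errorRate * 100).toFixed(2) + '%'); // "53.96%"
```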
About 54% of the requests got a 504.
I'm also seeing many errors like:
(node:9631) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: Cannot communicate with PhantomJS process: Unknown reason
error: Cannot communicate with PhantomJS process: Unknown reason
Experiencing infinite javascript loop. Killing phantomjs...
What should I look into to tune Prerender or PhantomJS?
Number of Node.js workers?
Number of iterations before killing PhantomJS?
AWS instance size?
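As a sketch of the knobs I'd look at first: the option names below are assumptions drawn from PhantomJS-era prerender READMEs, not verified against any particular release, so check the README of the exact version you deployed:

```javascript
// Hypothetical prerender server config -- option names are assumptions,
// verify them against your prerender version's README.
const options = {
  workers: 2,         // assumption: one PhantomJS worker per vCPU (a t2.large has 2)
  iterations: 50,     // assumption: recycle each PhantomJS process after N renders
  softTimeout: 10000, // assumption: give up on a render after 10 s
};

console.log(options);
```

Lowering the render count per process is the usual lever when "infinite javascript loop. Killing phantomjs..." keeps recurring, since each recycle discards leaked state.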
Related
I'm trying to prepare a CentOS server to run a Nuxt.js (Node.js) application via an Nginx reverse proxy.
First, I fire up a simple test server that returns an HTTP 200 response with the text "ok". It easily handles ~10,000 requests/second with ~10 ms of mean latency.
Then I switch to the hello-world Nuxt example app (npx create-nuxt-app) and run the weighttp HTTP benchmarking tool with the following command:
weighttp -n 10000 -t 4 -c 100 localhost:3000
The results are as follows:
starting benchmark...
spawning thread #1: 25 concurrent requests, 2500 total requests
spawning thread #2: 25 concurrent requests, 2500 total requests
spawning thread #3: 25 concurrent requests, 2500 total requests
spawning thread #4: 25 concurrent requests, 2500 total requests
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done
finished in 9 sec, 416 millisec and 115 microsec, 1062 req/s, 6424 kbyte/s
requests: 10000 total, 10000 started, 10000 done, 10000 succeeded, 0 failed,
0 errored
status codes: 10000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 61950000 bytes total, 2000000 bytes http, 59950000 bytes data
As you can see, it won't climb over 1062 req/s. Sometimes I can reach ~1700 req/s if I ramp up the concurrency parameter, but no more than that.
I'm expecting a simple hello-world example app to run at least ~10,000 req/s on this machine without high latency.
I've tried checking file limits, open-connection limits, Nginx workers, etc., but couldn't find the root cause, so I'm really looking forward to any ideas on where to at least start searching for it.
I can provide any logs or any other additional info if needed.
When performance testing my Node.js socket.io app, it seems unable to handle the desired number of concurrent websocket connections.
I am testing the application in a Docker environment with the following specs:
CPUs: 2
RAM: 4 GB
The application is stripped down to a bare minimum that only accepts websocket connections with socket.io + express.js.
I perform the tests with the help of artillery.io; the test scenario is:
config:
  target: "http://127.0.0.1:5000"
  phases:
    - duration: 100
      arrivalRate: 20
scenarios:
  - engine: "socketio"
    flow:
      - emit:
          channel: "echo"
          data: "hello"
      - think: 50
Report:
Summary report # 16:54:31(+0200) 2018-07-30
Scenarios launched: 2000
Scenarios completed: 101
Requests completed: 560
RPS sent: 6.4
Request latency:
min: 0.1
max: 3
median: 0.2
p95: 0.5
p99: 1.4
Scenario counts:
0: 2000 (100%)
Codes:
0: 560
Errors:
Error: xhr poll error: 1070
timeout: 829
So I get a lot of xhr poll errors.
While I monitor the CPU + memory stats, the highest value for the CPU is only 43.25%. Memory only gets as high as 4%.
Even when I alter my test to an arrival rate of 20 over a timespan of 100 seconds, I still get XHR poll errors.
So are these test numbers beyond the capability of Node.js + socket.io with these specs, or is something else not working as expected? Perhaps the Docker environment or the Artillery software?
Any help or suggestions would be appreciated!
Side note: I've already looked into Node.js clustering for scaling, but I'd like to get the most out of one process first.
Update 1
After some more testing with a websocket stress-test script found here: https://gist.github.com/redism/11283852
It seems I hit some sort of limit when I use an arrival rate higher than 50 or try to establish more than roughly 1900 connections.
Up to 1900 connections, almost every connection gets established, but beyond that number the XHR poll errors grow exponentially.
Still no high CPU or Memory values for the docker containers.
The XHR poll error in detail:
Error: xhr poll error
at XHR.Transport.onError (D:\xxx\xxx\api\node_modules\engine.io-client\lib\transport.js:64:13)
at Request.<anonymous> (D:\xxx\xxx\api\node_modules\engine.io-client\lib\transports\polling-xhr.js:128:10)
at Request.Emitter.emit (D:\xxx\xxx\api\node_modules\component-emitter\index.js:133:20)
at Request.onError (D:\xxx\xxx\api\node_modules\engine.io-client\lib\transports\polling-xhr.js:309:8)
at Timeout._onTimeout (D:\xxx\xxx\api\node_modules\engine.io-client\lib\transports\polling-xhr.js:256:18)
at ontimeout (timers.js:475:11)
at tryOnTimeout (timers.js:310:5)
at Timer.listOnTimeout (timers.js:270:5) type: 'TransportError', description: 503
Update 2
Changing the transport to "websocket" in the artillery test gives some better performance.
Testcase:
config:
  target: "http://127.0.0.1:5000"
  socketio:
    transports: ["websocket"]
  phases:
    - duration: 20
      arrivalRate: 200
scenarios:
  - engine: "socketio"
    flow:
      - emit:
          channel: "echo"
          data: "hello"
      - think: 50
Results: the arrival rate is no longer the issue, but I hit some kind of limit at 2020 connections. After that it gives a "Websocket error".
So is this a limit on Windows 10, and can it be changed? Is this limit also the reason why the tests with long-polling perform so badly?
I'm just playing with the Snap framework and wanted to see how it performs against other frameworks (under completely artificial circumstances).
What I have found is that my Snap application tops out at about 1500 requests/second (the app is simply snap init; snap build; ./dist/app/app, i.e. no code changes to the default app created by snap):
$ ab -n 20000 -c 500 http://127.0.0.1:8000/
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 2000 requests
Completed 4000 requests
Completed 6000 requests
Completed 8000 requests
Completed 10000 requests
Completed 12000 requests
Completed 14000 requests
Completed 16000 requests
Completed 18000 requests
Completed 20000 requests
Finished 20000 requests
Server Software: Snap/0.9.5.1
Server Hostname: 127.0.0.1
Server Port: 8000
Document Path: /
Document Length: 721 bytes
Concurrency Level: 500
Time taken for tests: 12.845 seconds
Complete requests: 20000
Failed requests: 0
Total transferred: 17140000 bytes
HTML transferred: 14420000 bytes
Requests per second: 1557.00 [#/sec] (mean)
Time per request: 321.131 [ms] (mean)
Time per request: 0.642 [ms] (mean, across all concurrent requests)
Transfer rate: 1303.07 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 44 287.6 0 3010
Processing: 6 274 153.6 317 1802
Waiting: 5 274 153.6 317 1802
Total: 20 318 346.2 317 3511
Percentage of the requests served within a certain time (ms)
50% 317
66% 325
75% 334
80% 341
90% 352
95% 372
98% 1252
99% 2770
100% 3511 (longest request)
I then fired up a Grails application, and it seems like Tomcat (once the JVM warms up) can take a bit more load:
$ ab -n 20000 -c 500 http://127.0.0.1:8080/test-0.1/book
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 2000 requests
Completed 4000 requests
Completed 6000 requests
Completed 8000 requests
Completed 10000 requests
Completed 12000 requests
Completed 14000 requests
Completed 16000 requests
Completed 18000 requests
Completed 20000 requests
Finished 20000 requests
Server Software: Apache-Coyote/1.1
Server Hostname: 127.0.0.1
Server Port: 8080
Document Path: /test-0.1/book
Document Length: 722 bytes
Concurrency Level: 500
Time taken for tests: 4.366 seconds
Complete requests: 20000
Failed requests: 0
Total transferred: 18700000 bytes
HTML transferred: 14440000 bytes
Requests per second: 4581.15 [#/sec] (mean)
Time per request: 109.143 [ms] (mean)
Time per request: 0.218 [ms] (mean, across all concurrent requests)
Transfer rate: 4182.99 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 67 347.4 0 3010
Processing: 1 30 31.4 21 374
Waiting: 0 26 24.4 20 346
Total: 1 97 352.5 21 3325
Percentage of the requests served within a certain time (ms)
50% 21
66% 28
75% 35
80% 42
90% 84
95% 230
98% 1043
99% 1258
100% 3325 (longest request)
I'm guessing that part of this could be the fact that Tomcat seems to reserve a lot of RAM and can keep/cache some methods. During this experiment Tomcat was using in excess of 700 MB of RAM while Snap barely approached 70 MB.
Questions I have:
Am I comparing apples and oranges here?
What steps would one take to optimise Snap for throughput/speed?
Further experiments:
Then, as suggested by mightybyte, I started experimenting with +RTS -A4M -N4 options. The app was able to serve just over 2000 requests per second (about 25% increase).
I also removed the nested templating and served a document (same size as before) from the top-level tpl file. This increased the performance to just over 7000 requests a second. The memory usage went up to about 700 MB.
I'm by no means an expert on the subject so I can only really answer your first question, and yes you are comparing apples and oranges (and also bananas without realizing it).
First off, it looks like you are attempting to benchmark different things, so naturally, your results will be inconsistent. One of these is the sample Snap application and the other is just "a Grails application". What exactly are each of these things doing? Are you serving pages? Handling requests? The difference in applications will explain the differences in performance.
Secondly, the difference in RAM usage also shows the difference in what these applications are doing. Haskell web frameworks are very good at handling heavy load without much RAM, whereas other frameworks, like Tomcat as you saw, will be limited in their performance when RAM is limited. Try limiting both applications to 100 MB and see what happens to your performance difference.
If you want to compare the different frameworks, you really need to run a standard application to do that. Snap did this with a Pong benchmark. The results of an old test (from 2011 and Snap 0.3) can be seen here. This paragraph is extremely relevant to your situation:
If you’re comparing this with our previous results you will notice that we left out Grails. We discovered that our previous results for Grails may have been too low because the JVM had not been given time to warm up. The problem is that after the JVM warms up for some reason httperf isn’t able to get any samples from which to generate a replies/sec measurement, so it outputs 0.0 replies/sec. There are also 1000 connreset errors, so we decided the Grails numbers were not reliable enough to use.
As a comparison, the Yesod blog has a Pong benchmark from around the same time that shows similar results. You can find that here. They also link to their benchmark code if you would like to try to run a more similar benchmark, it is available on Github.
The answer by jkeuhlen makes good observations relevant to your first question. As to your second question, there are definitely things you can play with to tune performance. If you look at Snap's old raw result data, you can see that we were running the application with +RTS -A4M -N4. The -N4 option tells the GHC runtime to use 4 threads. (Note that you have to build the application with -threaded to do this.) The -A4M option sets the size of the garbage collector's allocation area. Our experiments showed that these two seemed to have the biggest impact on performance. But that was done a long time ago and GHC has changed a lot since then, so you probably want to play around with them and find what works best for you. This page has in-depth information about other command line options available to control GHC's runtime if you wish to do more experimentation.
A little work was done last year on updating the benchmarks. If you're interested in that, look around the different branches in the snap-benchmarks repository. It would be great to get more help on a new set of benchmarks.
I am using nodeload.js to drive the testing of a hello-world express.js app.
There is not much in the sample express.js app beyond sending back a hello world.
I started my app.js.
When I run the following command:
nodeload.js -c 100 -n 1000 http://xxx:8082
I got this warning all over the place: warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Then it completed with some statistics; however, many stack traces were dumped along with the (node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Completed 1000 requests
Server: myhost :8082
HTTP Method: GET
Document Path: /
Concurrency Level: 100
Number of requests: 1000
Body bytes transferred: 83989
Elapsed time (s): 0.46
Requests per second: 2178.65
Mean time per request (ms): 42.89
Time per request standard deviation: 21.84
Percentages of requests served within a certain time (ms)
Min: 21
Avg: 42.9
50%: 30
95%: 90
99%: 102
Max: 103
Is this a nodeload issue, or an issue with the sample hello-world app?
People have mentioned process.setMaxListeners(0),
but that does not resolve the issue for me when I set it in app.js.
Please advise: what is this warning complaining about?
Is this a node.js issue? How do I resolve it?
Thanks in advance.
I need to test a web form that takes a file upload.
The filesize in each upload will be about 10 MB.
I want to test if the server can handle over 100 simultaneous uploads and still remain responsive for the rest of the site.
Repeated form submissions from our office will be limited by our local DSL line.
The server is offsite with higher bandwidth.
Answers based on experience would be great, but any suggestions are welcome.
Use the ab (ApacheBench) command-line tool that is bundled with Apache
(I have just discovered this great little tool). Unlike cURL or wget,
ApacheBench was designed for performing stress tests on web servers (any type of web server!).
It generates plenty of statistics too. The following command will send an
HTTP POST request with the file test.jpg as the body to http://localhost/
100 times, with up to 4 concurrent requests. (Note that -p posts the raw file contents as the request body, usually paired with -T to set the Content-Type; it does not build a multipart form upload.)
ab -n 100 -c 4 -p test.jpg http://localhost/
It produces output like this:
Server Software:
Server Hostname: localhost
Server Port: 80
Document Path: /
Document Length: 0 bytes
Concurrency Level: 4
Time taken for tests: 0.78125 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Non-2xx responses: 100
Total transferred: 2600 bytes
HTML transferred: 0 bytes
Requests per second: 1280.00 [#/sec] (mean)
Time per request: 3.125 [ms] (mean)
Time per request: 0.781 [ms] (mean, across all concurrent requests)
Transfer rate: 25.60 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 2.6 0 15
Processing: 0 2 5.5 0 15
Waiting: 0 1 4.8 0 15
Total: 0 2 6.0 0 15
Percentage of the requests served within a certain time (ms)
50% 0
66% 0
75% 0
80% 0
90% 15
95% 15
98% 15
99% 15
100% 15 (longest request)
Automate Selenium RC using your favorite language. Start 100 threads of Selenium, each typing the path of the file into the input and clicking submit.
You could generate 100 sequentially named files to make looping over them easy, or just use the same file over and over again.
I would perhaps guide you towards using cURL and submitting just random stuff (e.g., read 10 MB out of /dev/urandom and encode it into base32) through a POST request, manually fabricating the body to be a file upload (it's not rocket science).
Fork that script 100 times, perhaps across a few servers. Just make sure the sysadmins don't think you are doing a DDoS or something :)
Unfortunately, this answer remains a bit vague, but hopefully it helps by nudging you in the right direction.
Continued as per Liam's comment:
If the server receiving the uploads is not on the same LAN as the clients connecting to it, it would be better to use nodes as remote as possible for stress testing, if only to simulate behavior as authentically as possible. But if you don't have access to computers outside the local LAN, the local LAN is always better than nothing.
Stress testing from the same hardware would not be a good idea, as it doubles the load on the server: generating the random data, packing it, sending it through the TCP/IP stack (although probably not over Ethernet), and only then can the server do its magic. If the sending part is outsourced, the receiving end gets double the performance (taken with an arbitrarily sized grain of salt).