I wish to do a performance test on my site, simulating thousands of users to find the server's capacity limit. The tool I'm using is JMeter and I have prepared a .jmx file for the test scenario. But when I try to simulate 1000 users simultaneously, I start to get:
<httpSample t="0" lt="0" ts="1338538936990" s="false" lb="VerifyPassword" rc="Non HTTP response code: java.net.SocketException" rm="Non HTTP response message: Too many open files" tn="LoadConfig 1-901" dt="text" by="1375"/>
I think the error is on the client side, caused by too many open socket connections. If so, how can I simulate this case from my local machine? Can I increase the number of open sockets on Linux?
Also, one thing I have discovered is that testing from a single client can give a false alarm, where the client is the bottleneck while the server is actually fine. How can I do performance testing that simulates a real-life scenario, with 10K+ users each having their own CPU/RAM?
I have run JMeter from .NET, but I think it will be the same in your case.
Raising the number of open sockets on a single machine will only get you so far. You should do distributed load testing.
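That said, if you first want to rule out the client-side file-descriptor limit, you can raise the per-process open-file limit on the load-generator machine. A minimal sketch, assuming a Linux load generator; the 65535 value and the "loaduser" user name are assumptions, tune them for your setup:

    # raise the limit for the current shell before launching JMeter
    ulimit -n 65535

    # or make it permanent for the user running JMeter via /etc/security/limits.conf:
    #   loaduser  soft  nofile  65535
    #   loaduser  hard  nofile  65535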
Luckily for you, JMeter has distributed testing built in :)
The search term you should look for is "distributed JMeter testing" or "remote JMeter testing". If you can only use your local machine, you might use virtual machines to create several distributed JMeter instances...
Check:
http://jmeter.apache.org/usermanual/remote-test.html
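For reference, a distributed run is typically launched along these lines; a minimal sketch, assuming two load-generator machines named load1 and load2 (the hostnames and file names are assumptions):

    # on each load-generator machine, start the JMeter server process
    jmeter-server

    # on the controller machine, drive the same test plan from all remote engines
    # in non-GUI mode and collect the results locally
    jmeter -n -t test.jmx -R load1,load2 -l results.jtl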
I am doing a stress test to determine the maximum number of TPS (transactions per second) and hits per second a server can handle, by making HTTP requests through JMeter.
When I run the script in JMeter from different client machines (same server, same script), I find that the number of TPS (or hits per second) the server can handle is different.
Assume the server can handle a maximum of around 500 TPS when the script runs on client 1, and 400 TPS when it runs on client 2.
I am confused about the following issues:
Why is there such a difference between the two clients?
What affects TPS and hits per second?
Although the stress test showed the server could only handle a maximum of around 500 TPS, is there any way to improve the server's performance and increase the maximum TPS it can handle?
Thanks in advance to anyone who can solve this problem for me!
If you run the same JMeter test from different machines and get different results, it might be the case that JMeter itself cannot send requests fast enough.
JMeter is a normal Java application and its default configuration is fine for test development and/or debugging; however, you need to do some tuning when it comes to load test execution.
First of all, make sure to follow JMeter Best Practices.
Then you need to ensure that JMeter properly utilizes operating system resources. You might want to increase the JVM heap size and tune the garbage collector configuration (a heap-sizing sketch is shown at the end of this answer) in order to:
allow JMeter to use no less than 30% and no more than 80% of the total available heap space
ensure GC doesn't happen too often, as it "pauses" JVM execution
JMeter must not overload the underlying operating system; it should have enough headroom in terms of CPU, RAM, etc., so it is worth checking the OS health, which can be done using the JMeter PerfMon Plugin.
And last but not least, if you run into the limits of one machine, you can consider running JMeter in Distributed Mode, so both client 1 and client 2 run the same test, providing a cumulative 800 TPS or even more (given your server can handle such a load).
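As a concrete starting point for the heap tuning mentioned above: newer JMeter versions let you override the heap via the HEAP environment variable (in older versions you edit the HEAP line in bin/jmeter directly). A minimal sketch, assuming a load generator with plenty of free RAM; the sizes and file names are assumptions, adjust them for your machine:

    # give the JMeter JVM a larger, fixed-size heap before starting the load test
    export HEAP="-Xms2g -Xmx2g -XX:MaxMetaspaceSize=256m"
    jmeter -n -t plan.jmx -l results.jtl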
@Dimitri T, thank you very much for your help!
I have performed some performance tests on WSO2 APIM on both the WebServices (WSDL) and Gateway interfaces. Everything went well with the Gateway one; however, I am facing an odd behavior when using the WebServices one.
Basically, I created a test that adds a user, changes the password and deletes the user, and ran the test plan using 64 threads. At the very beginning the throughput increases a lot, up until all 64 threads are active (the throughput peak was 1600 req/sec). However, after that the throughput starts to decrease for no apparent reason.
All 64 threads are still active and running, and CPU usage on the machine hosting wso2am drops. It seems that APIM has given up handling the requests even though it has threads and processors available for that.
The picture below shows the vmstat result for the processor (user, system and idle) and for context switches and interrupts. You can see that CPU usage and context switches follow the throughput.
And the next picture illustrates the JMeter test results at the end (after the throughput decrease).
Basically, what I need is a clue about what may be the reason for such behavior. I have already tried increasing the thread pools on both wso2am and Tomcat, but it had no effect. It is as if the requests were not arriving at all, even though JMeter has plenty of capacity and had already sustained a higher throughput earlier in the test.
I would bet that a simple configuration change on Tomcat or WSO2 is the answer. Any help is appreciated.
Thanks and Regards
It may be due to JMeter not being able to send requests fast enough; try the following steps:
Upgrade JMeter to the latest version (3.1 as of now); you can get the most recent JMeter distribution from the JMeter download page.
Run your test in command-line non-GUI mode (see the sketch after this list). The JMeter GUI can be used for test development and/or debugging only; it is not designed for running load tests.
Remove (or disable) all the listeners during test execution. Later on you can open the JMeter GUI, add the listener of your choice, load the .jtl results file and perform the analysis, or create an HTML Reporting Dashboard out of the results file.
See the 9 Easy Solutions for a JMeter Load Test "Out of Memory" Failure article for the above points explained in detail and a few more tips on configuring JMeter for maximum performance and throughput.
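A minimal sketch of such a non-GUI run, assuming your plan is called plan.jmx and you want the dashboard generated from the raw results afterwards (the file and folder names are assumptions):

    # run the test without the GUI and write raw results to a .jtl file
    jmeter -n -t plan.jmx -l results.jtl

    # afterwards, generate the HTML Reporting Dashboard from that file
    jmeter -g results.jtl -o report/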
I have a question related to a latency benchmark. I run Apache ZooKeeper on a cluster of 5 machines (one leader and the rest followers). There is another machine (the client) used to send requests sequentially to the cluster.
I managed to run a benchmark program which lasts for a pre-selected time and aims to send requests simultaneously and continuously to each ZooKeeper server. When the pre-selected time elapses, I can see the latency result.
However, the above benchmark uses only one client machine to run the benchmark code. Now, I want to increase the number of client machines so that more machines send requests simultaneously. Note that I want to use the same code as above to test the latency. The question is: how do I run the benchmark code from different machines simultaneously?
I guess it should be a Linux script which runs on the different machines at the same time.
My experiments run on a remote Linux cluster which is accessed over SSH.
I look forward to hearing from you.
Cheers,
Ansible is certainly the kind of tool you're searching for.
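As an illustration, an Ansible ad-hoc command can start the same benchmark on every client machine over SSH at (roughly) the same time. A minimal sketch, where the inventory file name, group name and benchmark path are all assumptions:

    # hosts.ini
    [clients]
    client1.example.com
    client2.example.com
    client3.example.com

    # launch the benchmark on all client machines in parallel (20 forks)
    ansible clients -i hosts.ini -f 20 -m shell -a "/opt/bench/run_benchmark.sh > /tmp/latency.log"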
I'm trying to stress test a server with JMeter. I followed the manual and successfully created the tests (the tests run OK and the responses are correct).
However, even if I keep increasing the number of threads it never fails, yet I keep reading that there must be limits somewhere. So what am I doing wrong?
My CPU runs at around 5% when I'm not running JMeter. Running 3000 threads, I see the number of threads increase by 3000 and CPU usage go to around 15%. Also, JMeter never complains that something went wrong.
My JMeter configuration is:
Number of threads: 3000
Ramp-Up Period: 30
Loop Count: Forever (I let it run for over an hour and still nothing went wrong)
The bottleneck now is my internet connection, which simply can't handle this load and maxes out at 2.1 Mbps. Is this causing the problem? It increases my latency from 10 ms per thread to over 5000 ms per thread, but the threads are still running.
Assuming you have confirmed that you definitely aren't getting back any errors (e.g. using a results table listener, or logging/displaying only errors using a results graph listener) and your internet connection is running at capacity, then yes, it does sound like your internet connection is the bottleneck. It doesn't sound like your server is being stressed at all.
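As a rough sanity check on that, you can estimate the ceiling the connection imposes; the ~10 KB average response size below is an assumption, substitute your real page size:

    # 2.1 Mbps is roughly 0.26 MB/s of payload; with ~10 KB per response that is
    # about 26 responses per second, no matter how many threads you add
    echo "2.1 * 1000 / 8 / 10" | bc -l    # ~26 requests per second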
If you can easily make use of other machines (e.g. servers in the same location as the server you are testing), you could try using JMeter remote (distributed) testing to sidestep the limitations of your internet connection. See http://jmeter.apache.org/usermanual/remote-test.html for details.
Alternatively, if it's easy (e.g. if you're using VM's in a cloud and can easily spin one up with your software on), you could try using the least-powerful server you can instead and stress testing that to see if you can make it struggle even with your internet connection (just as a sanity check).
If this doesn't help, more details on your server (hardware specifications, web server software and thread pool settings, language) and the site/pages you are testing (mostly static or dynamic? large requests/responses?) would be useful. I've certainly managed to make lower-powered machines (e.g. EC2 m1.small) struggle using JMeter over a 2Mbps connection, but it depends on the site you're testing.
I'm benchmarking Node.js with ApacheBench under Mac OS X and I'm comparing it with Apache 2.
I basically have three questions:
My basic test of a Hello World web page resulted in the following:
http://i.imgur.com/lgPMc.png
Node.js serves "Hello World" as plain text through a web server, and Apache 2 serves a plain-text file which also contains only "Hello World".
I did the same test with 8000 requests and it shows the same increase in response time during the last 1000 requests. What is the reason for the increase at the end?
Is there an equivalent of Linux's dstat on Mac OS X to record memory and CPU usage during the test?
Is there a base set of tests that should be performed to evaluate the performance, throughput, etc. of a web server?
What do you mean by Apache? Apache Tomcat? Apache 2? Apache Hadoop (okay, not that one, obviously)? You should specify what code you are benchmarking and what the platform is. Depending on what you have written, you may be comparing very different things. In a basic Hello World test, NodeJS should perform better because executing a PHP script requires disk access, while most NodeJS Hello World tests are in-memory: you just write response.send("Hello World");, which is very different from a PHP benchmark.
The reason for the increase at the end is the limits of your computer. Your computer cannot maintain many hundreds of open connections. NodeJS responds to connections one by one, while Apache fires up new threads; those threads all drain at the end, and the many resulting context switches cause the slowness you see.
There comes a point when your tests query the server faster than it can respond; this causes unresolved requests to accumulate at an increasing rate, and their overhead disproportionately impacts response times. In other words, you performed a denial-of-service attack on your own server and discovered its capacity.
There is the Unix 'top' command, but it's a bit fiddly; there is also ps.
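If you want something closer to dstat's periodic logging on Mac OS X, you can sample the built-in tools at a fixed interval while the benchmark runs. A minimal sketch; the interval, URL and request counts are assumptions:

    # log the overall CPU summary once per second (top in logging mode, no per-process rows)
    top -l 0 -s 1 -n 0 > cpu.log &

    # log memory statistics once per second
    vm_stat 1 > memory.log &

    # run the ApacheBench test, then stop the background samplers
    ab -n 8000 -c 100 http://127.0.0.1:8000/
    kill %1 %2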
A quick Google search shows that standard web server benchmarks do exist, such as SpecWeb, WebStone and SURGE.