I have a simple HTML page with two input fields, no CSS, no design, nothing. The page size is 134 KB. In my performance test case I am only trying to load the page with 25 concurrent users, all hitting at once. I have run the test with both JMeter and JUnit (multithreaded). The server's CPU usage reaches 100% when all threads are up. Is this normal behavior, or is it an issue? Why does it happen? I have replicated the same scenario with another page on the same server and the CPU usage is the same. With 10 concurrent users, CPU usage is 30 to 75%. I am new to performance testing.
It's normal and depends on the server's RAM and cores. If it is a simple static site, enable static file caching. Specify the stack you are using so that you can get concrete steps to do so.
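If you want to reproduce the 25-users-at-once burst outside JMeter/JUnit, a minimal stdlib Python sketch could look like the following (the URL is a placeholder, not the asker's real page):

```python
# Minimal concurrency probe: fire N requests at once and collect status codes.
# Sketch only; the URL in the usage comment is a placeholder.
import concurrent.futures
import urllib.request

def hit(url):
    """Issue one GET and return the HTTP status code."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.status

def run_load(url, users):
    """Fire `users` concurrent requests and collect the status codes."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        return list(pool.map(hit, [url] * users))

# Example (placeholder URL):
# codes = run_load("http://your-server/page.html", 25)
# print(sum(c == 200 for c in codes), "of", len(codes), "returned 200")
```

Watching the server's CPU while this runs gives the same signal as the JMeter test, with no tool configuration in the way.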
Related
So my requirement is to run 90 concurrent users executing multiple scenarios (15 scenarios) simultaneously for 30 minutes on a virtual machine. For some of the threads I use the Concurrency Thread Group and for others the normal Thread Group.
Now my issues are:
1) After I execute all 15 scenarios, the max response time for each scenario is very high (>40 sec). Is there any suggestion to reduce this high max response time?
2) One of the scenarios is submitting a web form. There is no issue if I submit only one, but during the 90-concurrent-user execution, some of the web form submissions get a 500 error code. Is the error because I use looping to achieve the 30-minute duration?
In order to reduce the response time you need to find the reason for it. The causes could include:
Lack of resources like CPU, RAM, etc. - make sure to monitor resource consumption, e.g. using the JMeter PerfMon Plugin
Incorrect configuration of the middleware (application server, database, etc.); all of these components need to be properly tuned for high loads. For example, if you set the maximum number of connections on the application server to 10 and you have 90 threads, the remaining 80 threads will queue up waiting for the next available executor. The same applies to the database connection pool.
Inefficient code: use a profiler tool to inspect what's going on under the hood and why the slowest functions are that slow; it might be that your application's algorithms are not efficient enough.
If your test succeeds with a single thread and fails under load, it definitely indicates a bottleneck. Try increasing the load gradually and see how many users the application can support without performance degradation and/or errors. HTTP status codes in the 5xx range indicate server-side errors, so it is also worth inspecting your application logs for more insight.
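The "increase the load gradually" idea reduces to a simple analysis step once you have measurements at each load level: find the highest user count where response time and error rate are still acceptable. A sketch (all figures below are hypothetical):

```python
# Find the highest load level that still meets the target SLA.
# The measurements are hypothetical, for illustration only.
def max_supported_users(measurements, max_avg_ms=2000, max_error_rate=0.0):
    """measurements: list of (users, avg_response_ms, error_rate), sorted by users."""
    supported = 0
    for users, avg_ms, err in measurements:
        if avg_ms <= max_avg_ms and err <= max_error_rate:
            supported = users
        else:
            break  # first degraded level: the knee is behind us
    return supported

samples = [(10, 300, 0.0), (30, 450, 0.0), (60, 1200, 0.0), (90, 41000, 0.04)]
print(max_supported_users(samples))  # 60 with the hypothetical data above
```

With data like this you can report "the application supports about 60 users" instead of just "90 users is too many".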
In my application, when I executed 2000 virtual users (Number of Threads) against 1 HTTP request, the response time was 30 sec. When I changed the number of threads to 500 and, instead of 1 HTTP request, put 4 copies of the same HTTP request, the response time was 3 sec. What is the difference? Is reducing the number of threads while increasing replicas of the request the right approach? Please help.
Note: In each request I have changed the user ID as well.
In terms of HTTP Request samplers, your test must behave exactly like a real browser behaves, so artificially adding more HTTP Requests may (and will) break the logic of your workload (if it is in place).
In your case the high response time seems to be caused by incorrect JMeter configuration, i.e. if JMeter is not properly configured for high load it simply will not be able to fire requests fast enough, resulting in increased response time while your server just sits idle.
2000 threads sounds like quite a big number so make sure to:
Follow JMeter Best Practices
Follow recommendations from 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure especially these:
Increase JVM Heap size allocated for JMeter
Run your test in non-GUI mode
Remove all the Listeners from the Test Plan
Monitor baseline OS health metrics on the machine where JMeter is running (CPU, RAM, disk, network usage). You can use the JMeter PerfMon Plugin for this. If you notice a shortage of any of the aforementioned resources, i.e. usage starts exceeding, say, 90% of total available capacity, JMeter is not running at full speed and you will need to consider Distributed Testing.
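The arithmetic behind "JMeter can't fire requests fast enough": by Little's law, measured response time is roughly in-flight threads divided by achieved throughput, so a saturated load generator with thousands of threads queued inside it reports huge latencies even when the server is fine. Plugging in the numbers from the question:

```python
# Little's law: concurrency = throughput * response_time,
# so response_time = concurrency / throughput.
def response_time_s(threads, throughput_rps):
    return threads / throughput_rps

# With 2000 threads and 30 s responses, the whole system only achieved:
print(2000 / 30)   # ~66.7 requests/sec
# With 500 threads and 3 s responses:
print(500 / 3)     # ~166.7 requests/sec
# Same server, yet higher throughput at the lower thread count: the
# 2000-thread run was likely limited by the load generator, not the app.
```

If the server were the bottleneck, cutting threads from 2000 to 500 would not have more than doubled throughput.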
To extend Dmitri T's answer: if your server responds 10 times slower under load as you execute 2000 virtual users, it means there's a bottleneck that you need to identify.
Read JMeter's Best Practices
consider running multiple non-GUI JMeter instances on multiple machines using distributed mode
Also check Delay Thread creation until needed checkbox in Thread Group
JMeter has an option to delay thread creation until the thread starts sampling, i.e. after any thread group delay and the ramp-up time for the thread itself. This allows for a very large total number of threads, provided that not too many are active concurrently.
And set Thread Group Ramp-up to 2000
Start with Ramp-up = number of threads and adjust up or down as needed.
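With Ramp-up equal to the number of threads, JMeter starts roughly one thread per second. A small sketch of how many threads are active at a given moment under that schedule:

```python
# Threads active at time t (seconds) for a given ramp-up, ignoring shutdown.
def active_threads(t, total_threads, rampup_s):
    """JMeter starts threads evenly spread over the ramp-up period."""
    if rampup_s <= 0:
        return total_threads
    return min(total_threads, int(total_threads * t / rampup_s))

# Ramp-up = number of threads -> about one new thread per second:
print(active_threads(60, 2000, 2000))    # 60
print(active_threads(2000, 2000, 2000))  # 2000
```

This is why ramp-up = thread count is a gentle starting point: load grows linearly, so you can watch where response times start to degrade.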
My web app faces huge CPU spikes, not because of increasing traffic, but because of heavy jobs such as reports going out. Some of these cause the CPU to go from a healthy 30% load to 100% for the next 2-10 minutes... Here I'll describe it as if I had only one server, but I've seen up to 4 servers going crazy because the alignment of the stars made around 50 of my clients want a report at the same time... I'm hosted on Azure and I use auto-scale to handle these spikes: if the load goes north of 70% for more than 2 minutes, a new instance goes up.
The thing is, because server 1 is 100% backed up when server 2 comes online, the load balancer will (I hope) direct every new request to server 2 until server 1 can handle more again. Because of this (expected) behavior, I was wondering if I should raise the minimum number of threads in the pool so it can handle incoming requests faster.
My usual request rate is around 15/s, so I thought I should start the pool with at least 50...
What do you guys think?
Edit 1 2017-07-13
So far this is working fine... I'll try a higher setting and see what happens.
This strategy proved very helpful and mitigated a lot of issues. Not all my problems are gone, but the errors/timeouts decreased immensely.
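The "15 req/s, start the pool with at least 50" guess can be sanity-checked with a rough sizing rule: the number of concurrently busy threads is about arrival rate times average time per request. This is a back-of-the-envelope sketch, not an ASP.NET-specific rule:

```python
# Rough minimum-thread sizing: busy_threads ~= arrival_rate * avg_service_time,
# padded with a safety factor for spikes. Illustrative numbers only.
import math

def min_threads(arrival_rate_rps, avg_service_time_s, headroom=2.0):
    """Estimate a floor for the thread pool size."""
    return math.ceil(arrival_rate_rps * avg_service_time_s * headroom)

print(min_threads(15, 0.5))  # 15: 15 req/s at 500 ms each, doubled for spikes
print(min_threads(15, 1.5))  # 45: close to the 50 proposed in the question
```

So 50 minimum threads covers average request times up to roughly 1.5 seconds with 2x headroom; if requests are slower than that during a spike, a higher floor is justified.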
I cannot figure out the cause of the bottleneck on this site: response times get very bad once about 400 users are reached. The site is on Google Compute Engine, using an instance group with network load balancing. We created the project with Sails.js.
I have been doing load testing with Google Container Engine using Kubernetes, running the locust.py script.
The main results for one of the tests are:
RPS : 30
Spawn rate: 5 p/s
TOTAL USERS: 1000
AVG(res time): 27500!! (27.5 seconds)
The response time initially is great, below one second, but when it starts reaching about 400 users the response time starts to jump massively.
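When the average jumps like this, it is worth looking at percentiles too: an average of 27.5 s can hide a bimodal mix of fast responses and near-timeouts, which points at queuing rather than uniform slowness. A stdlib sketch over made-up latency samples:

```python
# Percentiles tell you whether everyone is slow or only the tail is.
# The sample latencies below are made up, for illustration only.
import statistics

def percentile(samples, p):
    """p in (0, 100]; nearest-rank percentile over the sorted samples."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [120, 180, 200, 250, 300, 400, 900, 15000, 28000, 30000]
print(statistics.mean(latencies_ms))   # 7535.0: the average looks terrible
print(percentile(latencies_ms, 50))    # 300: the median is actually fine
print(percentile(latencies_ms, 95))    # 30000: the tail is where the pain is
```

If your locust run shows a pattern like this, the system is saturating somewhere and requests are piling up in a queue, which matches response times that are fine until ~400 users and then explode.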
I have tested obvious factors that can influence that response time, results below:
Compute engine Instances
(2 x n1-standard-2, 200 GB disk, 7.5 GB RAM per instance):
Only about 20% CPU utilization used
Outgoing network bytes: 340k bytes/sec
Incoming network bytes: 190k bytes/sec
Disk operations: 1 op/sec
Memory: below 10%
MySQL:
Max_used_connections : 41 (below total possible)
Connection errors: 0
All other results for MySQL also seem fine, no reason to cause bottleneck.
I tried the same test for a newly created Sails.js project, and it did better, but still had terrible results: a 5-second response time for about 2000 users.
What else should I test? What could be the bottleneck?
Are you doing any file reading/writing? This is a major obstacle in Node.js and will always cause some issues. Caching read files or removing the need for such code should be done as much as possible. In my own experience, serving files like images, CSS, and JS through my Node server started causing trouble when the number of concurrent requests increased. The solution was to serve all of this through a CDN.
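The app in question is Node.js, but the caching idea is language-agnostic; here it is illustrated in Python so repeated requests skip the disk entirely:

```python
# Cache file contents in memory so repeated reads skip the disk.
# Illustration of the idea only; the real fix for static assets is a CDN.
import functools

@functools.lru_cache(maxsize=256)
def read_cached(path):
    """First call reads from disk; later calls return the cached bytes."""
    with open(path, "rb") as f:
        return f.read()

# Caveat: lru_cache never sees later edits to the file. For assets that
# can change, key the cache on (path, mtime) or use a CDN with proper
# cache headers instead.
```

The same pattern in Node would be an in-process map of path to buffer, but as the answer says, offloading static assets to a CDN removes the problem from the app server altogether.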
Another problem could be the MySQL driver. We had some problems with connections not being closed correctly (not using Sails.js, but I think it used the same driver at the time I encountered this), which caused problems on the MySQL server, resulting in long delays when fetching data from the database. You should time/track the MySQL queries and make sure they aren't delayed.
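Timing queries can be as simple as wrapping execute() and reporting anything over a threshold. Sketched below with Python's stdlib sqlite3 so it is self-contained; the original stack is MySQL under Sails.js, so treat this as the pattern, not code to deploy:

```python
# Wrap query execution and report anything slower than a threshold.
import sqlite3
import time

def timed_query(conn, sql, params=(), slow_ms=100.0):
    """Run a query and return (rows, elapsed_ms); flag slow ones."""
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > slow_ms:
        print(f"SLOW ({elapsed_ms:.1f} ms): {sql}")
    return rows, elapsed_ms

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
rows, ms = timed_query(conn, "SELECT name FROM users WHERE id = ?", (1,))
print(rows)  # [('alice',)]
```

If the per-query times stay low under load while end-to-end response times climb, the delay is in connection acquisition or queuing, which is exactly the leaked-connection symptom described above. MySQL's own slow query log gives the same signal without code changes.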
Lastly, it could be some issue specific to Sails.js and Google Compute Engine. You should make sure there aren't any open issues on either of these about the same problem you are experiencing.
Good Afternoon Everyone,
I am load testing my .NET Web API, which is hosted on a Windows 2008 Server virtual machine, using Visual Studio 2012 Load Test. However, once my load test reaches 780 concurrent users, the CPU % starts to decrease, as shown in the attached image. The load test reaches a maximum of 1000 concurrent users, but the CPU % is still decreasing at the highest user load. I cannot explain why. Is some kind of IIS limit being reached? Why does this occur? Has the maximum user load been reached for this function?
Just looking for an explanation to this result and some guidance.
Thank you
IIS does have separate output-cache settings, which are enabled by default. This starts to make sense once you consider how IIS handles dynamic content with static responses and cache worthiness:
The IIS output caching feature targets semi-dynamic content. It lets you cache static responses for dynamic requests and increase scalability.
Configuring Cache Worthiness:
Even if you enable output caching, IIS does not immediately cache a request. It must be requested a few times before IIS considers a request to be "cache worthy." Cache worthiness can be configured via the ServerRuntime section.
Two properties determine cache worthiness:
frequentHitTimePeriod
frequentHitThreshold
A request is only cached if more than frequentHitThreshold requests for a cacheable URL arrive within the frequentHitTimePeriod.
This was a good explanation: http://www.iis.net/learn/manage/managing-performance-settings/configure-iis-7-output-caching
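For reference, both properties live in the serverRuntime section and can be set in web.config. The values below are illustrative, not recommendations:

```xml
<!-- web.config: tune when IIS considers a URL "cache worthy". -->
<!-- Illustrative values only. -->
<configuration>
  <system.webServer>
    <!-- Cache a URL once it is requested more than frequentHitThreshold
         times within frequentHitTimePeriod. -->
    <serverRuntime frequentHitThreshold="2"
                   frequentHitTimePeriod="00:00:10" />
  </system.webServer>
</configuration>
```

Under a load test that hammers the same URL, these thresholds are crossed almost immediately, so IIS begins serving cached responses and CPU usage can drop even as the user count keeps rising, which would explain the curve in the question.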