I am trying to come up with a formula to calculate the time required for the following configuration:
It takes 40 seconds to process each request. The server allows 3 requests per IP address per second.
If we use 50 proxies to process 200 requests, how much time will be required to complete all the requests? I need a formula in which the number of requests and the number of proxies are variables.
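The answer depends on which constraint actually binds, so here is a sketch of two readings (a rough illustration, not a definitive formula; n is the number of requests and p the number of proxies):

// Case 1: the server processes requests concurrently, so the only throttle is
// the 3-requests-per-second-per-IP limit; the last request finishes 40 s after
// it is submitted.
def timeIfConcurrent(n: Int, p: Int): Double =
  math.ceil(n.toDouble / (3.0 * p)) + 40.0

// Case 2: each proxy handles one request at a time (40 s each), so the 3 req/s
// limit never binds and the proxies themselves are the bottleneck.
def timeIfSequentialPerProxy(n: Int, p: Int): Double =
  math.ceil(n.toDouble / p) * 40.0

println(timeIfConcurrent(200, 50))         // about 42 seconds
println(timeIfSequentialPerProxy(200, 50)) // 160 seconds

With 200 requests and 50 proxies these give roughly 42 seconds and 160 seconds respectively, so it matters which of the two constraints limits you in practice.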
I am trying to measure the performance of a single REST endpoint (GET) with Gatling, with a very simple setup like this one:
val httpProtocol: HttpProtocolBuilder = http
  .baseUrl("https://domain:8085")
  .acceptHeader("*/*")

val scn =
  scenario("MyScenario")
    .exec(
      http("MyRequest")
        .get(myPath)
    )

setUp(scn.inject(rampUsersPerSec(1) to (1) during (10 seconds))).protocols(httpProtocol)
Which means 1 request per second.
The problem is that the min response time is more than 500ms, average 600ms, but if I do the same test manually with Postman, same endpoint and parameters, the response time is between 150ms and 250ms.
Why might this difference appear? How can I track down the issue?
I verified that the execution time on the server side is the same for both.
Thank you!
"Which means 1 request per second": no, it doesn't. It means 1 thread executing requests for 10 seconds as fast as it can. Given that you state the response time is around 500 ms, my expectation is that around 20 requests were executed, which gives approximately 2 requests per second. I don't think this is the real issue, but it's not the "same test".
It's also not the "same test" when it comes to HTTP headers: Postman sends a few more by default, like Postman-Token and especially Accept-Encoding, which is missing in your Gatling test but present by default in Postman requests, and that header can have quite a big impact.
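If you want to rule the headers out, one option (a sketch only, assuming the same Gatling DSL as in your simulation; the header values are illustrative) is to declare them on the protocol so both tools send comparable requests:

val httpProtocol: HttpProtocolBuilder = http
  .baseUrl("https://domain:8085")
  .acceptHeader("*/*")
  .acceptEncodingHeader("gzip, deflate, br") // sent by default by Postman
  .userAgentHeader("PostmanRuntime/7.36.0")  // only if you want to mimic Postman exactly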
In order to be absolutely sure you send the same request that Postman sends, you can simply record it using the Gatling Recorder.
Is it the same as "Time Taken (time-taken)" from IIS Logs as described here?
Time-taken is the time from when IIS receives the first byte of the request until it sends out the last byte of the response. This includes the network time getting to the client (in nearly all cases; the exception, I think, is a page smaller than about 2 KB, which is cached), so a slow network connection will have a longer network time and thus a longer time-taken.
The start time, when the first byte is received by IIS, is the "time" field, and the finish time, when IIS sends out the last byte, is the "time" field plus the "time-taken" field.
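For example (purely hypothetical values): if the "time" field is 10:00:05 and "time-taken" is 1500 ms, the last byte of the response left IIS at roughly 10:00:06.5.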
The Average response time is a composite metric built from 2 underlying metrics:
1) Number of requests
2) Sum of response times for a sample
We publish metrics every 10s, so 6 samples in a minute. This means every 10 s we publish number of requests in that 10s and the Sum of response times in that 10s.
It does not have information about the response time of each individual request. The smallest granularity is 1 minute, so the 6 samples are aggregated to compute the sampling types (Average, Sum, Max, Min).
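As a sketch of how that aggregation could work (the six (requestCount, sumOfResponseTimesMs) samples below are made-up values, not real data):

// One minute's worth of 10-second samples: (number of requests, sum of response times in ms)
val samples = Seq((12, 1800.0), (9, 1350.0), (15, 2400.0), (11, 1650.0), (10, 1500.0), (13, 1950.0))

val totalRequests = samples.map(_._1).sum       // Sum aggregation of the request-count metric
val totalTimeMs   = samples.map(_._2).sum       // Sum aggregation of the response-time metric
val averageMs     = totalTimeMs / totalRequests // 1-minute Average = total time / total requests

println(f"requests=$totalRequests, average=$averageMs%.1f ms")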
I need to find any requests where 5 of them in a row take 30 seconds or more.
Is there a way to do this using a Kusto query?
I guess it could be rephrased as 'if there is a request that takes 30 seconds or more to get a response, check if the next 4 requests also take 30 seconds'.
Check this thread -
Check for the requests where the response time is higher.
Hope it helps.
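To make the check itself concrete: in Kusto you would typically serialize a time-ordered table and compare neighbouring rows (for example with prev()), but the underlying logic looks like this sketch (written in Scala rather than Kusto, with made-up durations):

// Request durations in seconds, ordered by timestamp (sample data).
val durations = Seq(12.0, 31.5, 33.0, 45.2, 30.1, 38.7, 8.4, 31.0)

// Every window of 5 consecutive requests in which all 5 took 30 s or more.
val slowRuns = durations
  .sliding(5)
  .zipWithIndex
  .filter { case (window, _) => window.forall(_ >= 30.0) }
  .toList

slowRuns.foreach { case (window, startIndex) =>
  println(s"5 consecutive slow requests starting at index $startIndex: $window")
}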
I am using JMeter (I started using it a few days ago) as a tool to simulate a load of 30 threads, using a CSV data file that contains login credentials for 3 system users.
The objective I set out to achieve was to measure 30 users (threads) logging in and navigating to a page via the menu over a time span of 30 seconds.
I have set my thread group as:
Number of threads: 30
Ramp-up Period: 30
Loop Count: 10
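(For reference, and assuming every thread completes all of its loops: 30 threads with a 30-second ramp-up means roughly one new thread starts every second, and with a loop count of 10 each sampler ends up with 30 × 10 = 300 samples.)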
I ran the test successfully. Now I'd like to understand what the results mean, what counts as a good or bad measurement, and what can be suggested to improve the results. Below is a table of the results collated in the Summary Report of JMeter.
I have done some research, only to find blogs/sites telling me the same information as what is defined on the jmeter.apache.org site. One blog (Nicolas Vahlas) that I came across gave me some very useful information, but it still hasn't helped me understand what to do next with my results.
Can anyone help me understand these results and what I could do next following the execution of this test plan? Or point me towards an informative blog/site that will help me understand what to do next.
Many thanks.
In my opinion, the deviation is high.
You know your application better than all of us.
You should focus on whether the average response time you got, and the maximum response time (both its value and how often it occurs), are acceptable to you and your users. The same applies to throughput.
Your results show that the average response time is below 0.5 seconds and the maximum response time is below 1 second, which is generally acceptable, but that threshold should be defined by you (is it acceptable to your users?). If the answer is yes, try with more load to check how the application scales.
In your requirement it is mentioned that you need 30 concurrent users performing different actions. The response time of your requests is low and you have a ramp-up of 30 seconds. Can you please check the total number of active threads during the test? I believe the period during which there are actually 30 concurrent users in the system is quite short, so the average response time you are seeing may be misleading. I would suggest you run the test for longer, so that there really are 30 concurrent users in the system; that would give a correct reading for your requirement.
You can use the Aggregate Report instead of the Summary Report. In performance testing, the following are typically used for analysis:
Throughput - requests/second
Response Time - 90th percentile
Target application resource utilization (CPU, processor queue length and memory)
The usual SLA for websites is 3 seconds, but this requirement changes from application to application.
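For the response-time part, here is a sketch of how a 90th percentile can be computed (nearest-rank method; the sample values in ms are made up):

// Sorted sample of response times in milliseconds (illustrative values).
val responseTimesMs = Seq(120, 250, 310, 180, 95, 400, 220, 270, 150, 330).sorted

// Nearest-rank: 90% of the samples are at or below this value.
val index = math.ceil(0.90 * responseTimesMs.size).toInt - 1
val p90   = responseTimesMs(index)

println(s"90th percentile response time: $p90 ms") // 330 ms for this sample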
Your test results look good, assuming the users are actually logging into the system/portal.
Samples: the number of requests sent for a particular module.
Average: Average Response Time, for 300 samples.
Min: Min Response Time, among 300 samples (fastest among 300 samples).
Max: Max Response Time, among 300 samples (slowest among 300 samples).
Standard Deviation: A measure of the variation (for 300 samples).
Error: failure percentage.
Throughput: number of requests processed per second.
Hope this will help.
I want to set up a load test with LoadRunner. The system requirements are as below:
1- A maximum of 30K users can be online; I want to test whether the system can reach 15 TPS.
2- I want to test whether the system can reach 2000 TPS while some of the online users visit 5 different pages. With how many vusers should I run this test?
For both browsing and login operations the response time is 0.1 or 0.2 seconds. Think time is ignored for the login operation but is 5 minutes for browsing operations (this value can be changed for the sake of simplicity). For the login operation I set the vuser count to 30 and used 1000 iterations to reach 15 TPS.
I know that we can calculate vusers with the formula below:
number of required VUsers = required transactions per second * user scenario length (sec)
but I'm not sure how to apply this to the second scenario.
Required TPS = 15
Users = 5
Pacing = 5/15 seconds
Use this and it will work.
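A sketch of the arithmetic behind that suggestion (assuming pacing is measured from iteration start to iteration start): pacing = number of vusers / required TPS = 5 / 15 ≈ 0.33 seconds, so each of the 5 vusers starts a new iteration roughly every 0.33 seconds, giving about 5 / 0.33 ≈ 15 iterations (transactions) per second, provided the response time stays below the pacing interval (which it does here at 0.1 to 0.2 seconds).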