Throughput calculation in performance testing

If there are 10k buses and the peak period is 9:30 AM to 10:30 AM, and there will be a 2% increase in Vusers every year, then what is the throughput after 10 years?
Please help me understand how to solve this type of question without using a tool.
Thanks in advance.

The formula would be:
10000 * (1.02)^10 = 12190
With regards to implementation, 10000 buses (or whatever the unit is) per hour is roughly 166 per minute or 2.7 per second, which doesn't seem to be a very high load to me. Depending on your load testing tool there are different options for simulating it: for Apache JMeter it would be the Constant Throughput Timer, for LoadRunner there are Pages per Minute / Hits per Second goals, etc.
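For reference, here is the same arithmetic as a minimal Python sketch (no tool required); it also converts the projected figure into per-minute and per-second targets:

baseline = 10000          # buses (or requests) per peak hour today
growth = 0.02             # 2% increase every year
years = 10

projected = baseline * (1 + growth) ** years
print(round(projected))               # ~12190 per peak hour after 10 years
print(round(projected / 60, 1))       # ~203.2 per minute
print(round(projected / 3600, 2))     # ~3.39 per second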

Related

How To Get the Aggregated Usage of CPU in GRAFANA when hyperthreading is enabled

We are running Grafana/Prometheus to monitor our CPU metrics and to find the aggregated CPU usage of all CPUs. The problem is that we have hyperthreading enabled, and when we stress the CPU the percentage exceeds 100%. My question is how to limit that CPU usage so it shows usage only up to 100%, even if the CPU is highly utilized.
P.S. I have tried setting the max and min limits in Grafana, but the graph spikes still go above that limit.
Kindly give me the right query for this problem.
The queries I have tried are given below:
sum(irate(node_cpu_seconds_total{instance="localhost",job="node", mode!="idle"}[5m]))*100
100 - avg(irate(node_cpu_seconds_total{instance="localhost",job="node", mode!="idle"}[5m]))*100
We have also tried other similar queries.
If all you want is to "cap" a variable or expression result at a maximum value (that is, 100), you could simply use the Prometheus function clamp_max.
Thus, you could do:
clamp_max(<expr>, 100)
This is probably the most helpful query.
(1 - avg(irate(node_cpu_seconds_total{instance="$instance",job="$job",mode!="idle"}[5m])))*100
Replace $instance and $job with your instance IP and your node exporter job name.
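Putting the two answers together, a capped version of that query could look like the line below (an untested combination of the suggestions above; keep your own $instance and $job variables):
clamp_max((1 - avg(irate(node_cpu_seconds_total{instance="$instance",job="$job",mode!="idle"}[5m])))*100, 100)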

Am I applying Little's Law correctly to model a workload for a website?

Using these metrics (shown below), I was able to utilize a workload modeling formula (Little's Law) to come up with what I believe are the correct settings to sufficiently load test the application in question.
From Google Analytics:
Users: 2,159
Pageviews: 4,856
Avg. Session Duration: 0:02:44
Pages / Session: 2.21
Sessions: 2,199
The formula is N = Throughput * (Response Time + Think Time)
We calculated Throughput as 1.35 (4,856 pageviews / 3,600 seconds in an hour).
We calculated (Response Time + Think Time) as 74.21 (164 seconds avg. session duration / 2.21 pages per session)
Using the formula, we calculate N as 100 (1.35 Throughput * 74.21 (Response Time + Think Time)).
Therefore, according to my calculations, we can simulate the load the server experienced on the peak day during the peak hour with 100 users going through the business processes at a pace of 75 seconds between iterations (think time ignored).
So, in order to determine how the system responds under a heavier than normal load, we can double (200 users) or triple (300 users) the value of N and record the average response time for each transaction.
Is this all correct?
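As a quick sanity check, the arithmetic above can be reproduced in a few lines of plain Python (a sketch only; the figures are the Google Analytics numbers quoted in the question):

# Little's Law: N = Throughput * (Response Time + Think Time)
pageviews = 4856
peak_window_s = 3600            # one peak hour
avg_session_s = 164             # 0:02:44
pages_per_session = 2.21

throughput = pageviews / peak_window_s              # ~1.35 pages/second
time_per_page = avg_session_s / pages_per_session   # ~74.21 s (response + think time)
n_users = throughput * time_per_page                # ~100 concurrent users
print(round(throughput, 2), round(time_per_page, 2), round(n_users))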
When you do a direct observation of the logs for the site, blocked by session duration, what are the maximum number of IP addresses counted in each block?
Little's Law tends to undercount sessions and their overhead in favor of transactional throughput. That's OK if you have instantaneous recovery of your session resources, but most sites hold onto them for a period longer than 110% of the longest inter-request window for a user (the period from one request to the next).
The formula below has always worked well for me if you are looking to calculate pacing:
"Pacing = No. of Users * Duration of Test (in seconds) / Transactions you want to achieve in said Test Duration"
You should be able to get closer to the transactions you want to achieve using this formula. If it's an API, it is almost always accurate.
For Example, You want to achieve 1000 transactions using 5 users in one hour of Test Duration
Pacing = 5 * 3600/1000
= 18 seconds
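The same pacing arithmetic as a minimal Python sketch (purely illustrative):

def pacing_seconds(users, test_duration_s, target_transactions):
    # Pacing = No. of Users * Duration of Test (s) / Target Transactions
    return users * test_duration_s / target_transactions

print(pacing_seconds(5, 3600, 1000))   # 18.0 seconds between iterations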

Azure - what is Time Aggregation Max or Avg?

I am using SQL Azure (SQL Server) for my app. My app was working perfectly until recently; the MAX DTU usage has been 100%, but the AVG DTU usage is around 50%.
Which value should I monitor to scale the service, MAX or AVG?
I found on the net after lots of searching:
CPU max/min and average within that 1 minute. As 1 minute (60 seconds) is the finest granularity, if you choose, for example, Max, and the CPU has touched 100% even for 1 second, it will be shown as 100% for that entire minute. Perhaps the best is to use the Average; in this case the average CPU utilization across those 60 seconds will be shown under that 1-minute metric.
which sorta helped me with what it all meant, but thanks to bradbury9 too for your input.
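A tiny illustration of why the two aggregations can disagree so much at 1-minute granularity (hypothetical DTU samples, one per second):

samples = [50] * 59 + [100]                    # steady 50% with a single 1-second spike to 100%
print(max(samples))                            # MAX reports 100 for the whole minute
print(round(sum(samples) / len(samples), 1))   # AVG stays at ~50.8, close to the sustained load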

Performance testing - JMeter results

I am using JMeter (started using it a few days ago) as a tool to simulate a load of 30 threads using a CSV data file that contains login credentials for 3 system users.
The objective I set out to achieve was to measure 30 users (threads) logging in and navigating to a page via the menu over a time span of 30 seconds.
I have set my thread group as:
Number of threads: 30
Ramp-up Period: 30
Loop Count: 10
I ran the test successfully. Now I'd like to understand what the results mean, what counts as a good or bad measurement, and what could be suggested to improve the results. Below is a table of the results collated in the Summary Report of JMeter.
I have conducted research only to find blogs/sites telling me the same information as what is defined on the jmeter.apache.org site. One blog (Nicolas Vahlas) that I came across gave me some very useful information, but it still hasn't helped me understand what to do next with my results.
Can anyone help me understand these results and what I could do next following the execution of this test plan? Or point me in the right direction of an informative blog/site that will help me understand what to do next.
Many thanks.
In my opinion, the deviation is high.
You know your application better than any of us.
You should focus on whether the average response time and the maximum response time (both its value and how often it occurs) are acceptable to you and your users. The same applies to throughput.
Your results show that the average response time is below 0.5 seconds and the maximum response time is below 1 second, which is generally acceptable, but acceptability should be defined by you (is it acceptable to your users?). If the answer is yes, try with more load to check scaling.
In your requirement it is mentioned that you need to have 30 concurrent users performing different actions. The response time of your requests is low and you have a ramp-up of 30 seconds. Can you please check the total active threads during the test? I believe the time for which there are actually 30 concurrent users in the system is pretty short, so the average response time that you are seeing may be misleading. I would suggest you run the test for longer so that there are 30 concurrent users in the system; that would give a correct reading as per your requirements.
You can use the Aggregate Report instead of the Summary Report. In performance testing, the following can be used for analysis:
Throughput - requests/second
Response Time - 90th percentile
Target application resource utilization (CPU, processor queue length, and memory)
Normally the SLA for websites is 3 seconds, but this requirement changes from application to application.
Your test results are good, assuming the users are actually logging into the system/portal.
Samples: This means the no. of requests sent on a particular module.
Average: Average Response Time, for 300 samples.
Min: Min Response Time, among 300 samples (fastest among 300 samples).
Max: Max Response Time, among 300 samples (slowest among 300 samples).
Standard Deviation: A measure of the variation (for 300 samples).
Error: Failure percentage.
Throughput: No. of requests processed per second.
Hope this will help.
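If it helps to see how those columns are derived, here is a small illustrative sketch with made-up response times (not your actual data):

import statistics

elapsed_ms = [210, 180, 450, 320, 260, 900, 150, 300]   # individual sample response times
errors = 0
test_duration_s = 30

print("Samples:", len(elapsed_ms))
print("Average:", statistics.mean(elapsed_ms))
print("Min:", min(elapsed_ms), "Max:", max(elapsed_ms))
print("Std. Dev.:", round(statistics.pstdev(elapsed_ms), 1))
print("Error %:", 100 * errors / len(elapsed_ms))
print("Throughput (req/s):", round(len(elapsed_ms) / test_duration_s, 2))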

Logstash metrics plugin: What does events.rate_5m mean?

This should be a fairly easy question for Logstash veterans.
When I use the metrics plugin, what does events.rate_5m mean?
Does it mean: Number of events per second in a 5 minute window?
Does it mean: Number of events every 5 minutes?
Also, what's the difference between using this over timer.rate_5m?
The documentation isn't very clear and I have problems understanding it.
Thanks in advance!
Logstash uses the Metriks library to generate the metrics.
According to that site:
A meter that measures the mean throughput and the one-, five-, and fifteen-minute exponentially-weighted moving average throughputs.
and
A timer measures the average time as well as throughput metrics via a meter.
A meter counts events and a timer is used to look at durations (you have to pass a name and a value into a timer).
To answer your specific question, rate_5m is the per-second event rate, exponentially weighted over roughly the last 5 minutes.
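For intuition, here is a rough Python sketch of how an exponentially weighted moving-average rate like rate_5m behaves (the 5-second tick and the alpha formula are assumptions for illustration, not the library's exact implementation):

import math

TICK_S = 5.0
WINDOW_S = 5 * 60.0
ALPHA = 1 - math.exp(-TICK_S / WINDOW_S)
rate_5m = 0.0                             # smoothed events per second

def tick(events_since_last_tick):
    # Fold the latest tick's count into the moving average.
    global rate_5m
    instant_rate = events_since_last_tick / TICK_S
    rate_5m += ALPHA * (instant_rate - rate_5m)
    return rate_5m

for _ in range(360):                      # 30 minutes of steady traffic: 50 events per tick
    tick(50)
print(round(rate_5m, 2))                  # ~9.98, approaching the true 10 events/second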
