Difference between Think time and Pacing Time in Performance testing - performance-testing

Pacing is used to achieve X iterations in Y minutes, but I'm able to achieve that number of iterations in a given number of minutes, hours, or seconds by specifying only think time, without using pacing time.
I want to know the actual difference between think time and pacing time. Is it necessary to specify pacing time between iterations? What does pacing time actually do?

Think time is a delay added after an iteration is complete and before the next one is started. The iteration request rate depends on the sum of the response time and the think time. Because the response time can vary with the load level, the iteration request rate will vary as well.
For a constant request rate, you need to use pacing. Unlike think time, pacing adds a dynamically determined delay that keeps the iteration request rate constant even as the response time changes.
For example, to achieve 3 iterations in 2 minutes, the pacing time should be 2 x 60 / 3 = 40 seconds. Here's an example of how to use pacing in our tool: http://support.stresstimulus.com/display/doc46/Delay+after+the+Test+Case
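To make the arithmetic and the "dynamically determined delay" concrete, here is a minimal TypeScript sketch; the names (targetIterations, windowSeconds, delayAfterIteration) are illustrative and not taken from any particular tool.

    // Derive a pacing interval so that N iterations start within a fixed window.
    const targetIterations = 3;
    const windowSeconds = 2 * 60;                           // 2 minutes
    const pacingSeconds = windowSeconds / targetIterations; // 120 / 3 = 40 s between iteration starts

    // The delay actually inserted after an iteration is the pacing interval minus
    // however long that iteration took, never less than zero.
    function delayAfterIteration(iterationSeconds: number): number {
      return Math.max(0, pacingSeconds - iterationSeconds);
    }

    console.log(pacingSeconds);           // 40
    console.log(delayAfterIteration(12)); // 28 -> slow iteration, shorter wait
    console.log(delayAfterIteration(5));  // 35 -> fast iteration, longer wait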

Think Time
Think time introduces an element of realism into the test execution. With think time removed, as is often the case in stress testing, execution speed and throughput can increase tenfold, rapidly bringing an application infrastructure that can comfortably deal with a thousand real users to its knees. Always include think time in a load test; it influences the rate of transaction execution.
Pacing
Pacing is another way of affecting the execution of a performance test: it affects transaction throughput.
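To see why think time influences the rate while pacing fixes the throughput, here is a small back-of-the-envelope sketch in TypeScript; the numbers are purely illustrative.

    // Iteration rate with a fixed think time drifts as response time changes;
    // with a pacing interval it stays constant while iterations fit the interval.
    const thinkSeconds = 8;
    const pacingSeconds = 10;

    function rateWithThinkTime(responseSeconds: number): number {
      return 1 / (responseSeconds + thinkSeconds);
    }

    function rateWithPacing(responseSeconds: number): number {
      return 1 / Math.max(pacingSeconds, responseSeconds);
    }

    console.log(rateWithThinkTime(2).toFixed(3)); // 0.100 iterations/s
    console.log(rateWithThinkTime(4).toFixed(3)); // 0.083 iterations/s -> slows under load
    console.log(rateWithPacing(2).toFixed(3));    // 0.100 iterations/s
    console.log(rateWithPacing(4).toFixed(3));    // 0.100 iterations/s -> unchanged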

Related

AnyLogic: Total Delay Time in a discrete event simulation

Is there any function to measure the total delay time needed in an iteration of a DES? I want to do a Monte Carlo experiment with my DES and iterate the DES 1000 times. For every iteration I want to measure the total delay time needed in that iteration and plot it in a histogram. I have already implemented a Monte Carlo experiment. My idea was to have a variable totalDelayTime and instantiate this variable with the total delay time needed in every iteration. In my Monte Carlo experiment I want to plot this variable via a histogram. Is there any solution or simple AnyLogic function to measure/get the total delay time? I tried to call my variable totalDelayTime in the sink and wrote totalDelayTime = time(). But when I trace this variable via traceln(totalDelayTime) to the console, I get the exact same delay time for every iteration. However, when I just write traceln(time()) in the sink, I get different delay times for every iteration.
You can get the total simulation run time by calling time() in the "On destroy" code box of Main. It returns the total time in the model time units.
If you need it in a special unit, call time(MINUTE) or similar.

Measuring a feature's share of a web service's execution time

I have a piece of code that includes a specific feature that I can turn on and off. I want to know the execution time of the feature.
I need to measure this externally, i.e. by simply measuring execution time with a load test tool. Assume that I cannot track the feature's execution time internally.
Now, I execute two runs (on/off) and simply assume that the difference between the resulting execution time is my feature's execution time.
I know that it is not entirely correct to do this as I'm looking at two separate runs that may be influenced by networking, programmatic overhead, or the gravitational pull of the moon. Still, I hope I can assume that the result will still be viable if I have a sufficiently large number of requests.
Now for the real question. I do the above using the average response time, which is not perfect, but more or less OK.
My question is, what if I now use a percentile (say, 95th) instead?
Would my imperfect subtract-A-from-B approach become significantly more imperfect when using percentiles?
I would stick to the percentiles, as the "average" approach can mask the problem. For example, if you have very low response times during the initial phase of the test when the load is low, and very high response times during the main phase when the load is immense, the arithmetic mean will give you okayish values, while the percentiles will tell you that the response time for 95% of requests was X or lower.
More information: Understanding Your Reports: Part 3 - Key Statistics Performance Testers Need to Understand
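As a rough illustration of the subtract-A-from-B approach with both statistics, here is a small TypeScript sketch using made-up samples and a nearest-rank 95th percentile; it only shows how a mean can hide a tail that the percentile exposes.

    // Made-up response-time samples (ms) for the two runs: feature off vs. feature on.
    const featureOff = [90, 95, 100, 105, 110, 120, 130, 140, 300, 900];
    const featureOn  = [110, 115, 120, 130, 140, 150, 160, 180, 450, 1400];

    const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;

    // Nearest-rank percentile: sort, then take the value at ceil(p/100 * n) - 1.
    const percentile = (xs: number[], p: number) => {
      const sorted = [...xs].sort((a, b) => a - b);
      return sorted[Math.ceil((p / 100) * sorted.length) - 1];
    };

    console.log(mean(featureOn) - mean(featureOff));                     // difference of means: 86.5 ms
    console.log(percentile(featureOn, 95) - percentile(featureOff, 95)); // difference of 95th percentiles: 500 ms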

What is a reasonable amount of time to wait when making concurrent requests?

I'm working on a crawler, and I've noticed that setting the wait time to 1 minute per request has made the application more reliable; I now get fewer connection resets. Can you recommend a reasonable amount of time to wait? I think 1 minute is quite the belt-and-braces approach, and I would like to reduce it ideally.

Node.js: When does process.hrtime start

What is the starting time for process.hrtime (node.js version of microTime)? I couldn't find any event that starts it. Is the timestamp of the start saved anywhere?
I need my server to be able to measure the latency to the client in both directions, and for that (and some other things) I need a reliable microtime measure.
The starting time is arbitrary; its actual value alone is meaningless. Call hrtime when you want to start the stopwatch, and call it again when your operation is done. Subtract the former from the latter and you have the elapsed time.
process.hrtime()
Returns the current high-resolution real time in a [seconds, nanoseconds] tuple Array. It is relative to an arbitrary time in the past. It is not related to the time of day and therefore not subject to clock drift. The primary use is for measuring performance between intervals.
You may pass in the result of a previous call to process.hrtime() to get a diff reading, useful for benchmarks and measuring intervals:
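The quoted passage leads into a code sample; a small sketch of that diff pattern follows (the timings in the comments are only illustrative):

    // Take a reading, do some work, then pass the first reading back to get the elapsed time.
    const NS_PER_SEC = 1e9;
    const start = process.hrtime();        // e.g. [ 1800216, 25 ] -- arbitrary origin

    setTimeout(() => {
      const diff = process.hrtime(start);  // elapsed time since `start`, e.g. [ 1, 552 ]
      console.log(`Benchmark took ${diff[0] * NS_PER_SEC + diff[1]} nanoseconds`);
    }, 1000);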

Confused about average response time and calls per second correlation

I have an average response time, let's say it's 10 seconds, and I also have a maximum number of parallel connections my service can handle, let's say it's 10. Now, how do I calculate the calls per second (CPS) value my service has handled from these data?
My guess is it's either
1 / 10 (avg time) = 0.1 CPS, or
1 / 10 (avg time) * 10 (parallel flows) = 1 CPS.
If you are just measuring average throughput then yes, 10 calls in 10 seconds is 1 per second.
Your users/consumers may also be (more) concerned with latency (average response time) which is 10 seconds for all of them.
As noted in the comment, average is only part of the story. How does your service handle peak loads - does throughput drop off precipitously after a certain point, or is degradation more graceful as load goes up? Is 10 seconds the best possible response time, or is this better under low load conditions? Worse under high load?
There are some old but useful guidelines targeting .Net, but of general interest, here.
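As a quick sanity check, the steady-state relationship in play here is Little's Law: throughput ≈ concurrency / average response time. A tiny sketch with the numbers from the question:

    // Little's Law at steady state: throughput = concurrency / average response time.
    const avgResponseSeconds = 10; // average response time from the question
    const maxParallelCalls = 10;   // maximum concurrent connections from the question

    console.log(maxParallelCalls / avgResponseSeconds); // 1 call per second across the whole service
    console.log(1 / avgResponseSeconds);                // 0.1 calls per second for a single connection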
