Cannot understand call tree view of JProfiler - performance-testing

I am new to JProfiler and I am not able to understand what "invocations" means.
1) If one method takes 1 invocation, how come each of its sub-methods can take more than 1 invocation?
2) Is the time shown per invocation, or is it the total time taken for the total number of invocations?
3) In the screenshot of my result, what is the total %? For example, one method takes 21.6%, so all the sub-methods should add up to 21.6%, but that is not the case here.
It would be really helpful if someone could explain the call tree view to me.
Thank you in advance.
EDIT:
1. In screenshot 2 I have highlighted the time 869s in 91 inv. I wanted to know how to find the time for 1 invocation, because when I divide 869/91 I get 9.54, but when I check my logs that service takes less than 1s.
Can you please explain it to me?

1) If one method takes 1 invocation, how come each of its sub-methods can take more than 1 invocation?
For example: method A can be called once and call method B 10 times.
2) Is the time shown per invocation, or is it the total time taken for the total number of invocations?
It is the total time for all invocations.
3) In the screenshot of my result, what is the total %? For example, one method takes 21.6%, so all the sub-methods should add up to 21.6%, but that is not the case here.
The remainder is "self time": time spent in the method itself rather than in the methods it calls (see the sketch below).
See https://www.ej-technologies.com/resources/jprofiler/help/doc/#jprofiler.cpu for a detailed explanation.
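A minimal sketch (with hypothetical method names) tying points 1 and 3 together:

```java
public class CallTreeDemo {
    // In the call tree this would show up roughly as:
    //   a()  1 inv.
    //     b()  10 inv.
    // The loop work done directly inside a(), not spent in b(),
    // is a()'s "self time", which is why the children's percentages
    // do not have to add up to the parent's percentage.
    static void a() {
        long sum = 0;
        for (int i = 0; i < 10; i++) {
            sum += i;  // work done directly in a() counts as self time
            b();       // each call adds one invocation to b()'s count
        }
    }

    static void b() {
        // some work
    }

    public static void main(String[] args) {
        a();  // a() is invoked once, b() ten times
    }
}
```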

Related

How to increase or decrease flakiness retries for Hypothesis?

Hypothesis tries a test example 3 times if the test example initially fails, e.g.:
Flaky: Hypothesis ... produces unreliable results: Falsified on the first call but did not on a subsequent one
Is there a way to increase or decrease the number of tries?
No, this is not a configurable setting.
Why would you want to change it? There might be something else that would solve your problem.

Reading an Azure Application Insights report

I have an API call that sometimes takes about 10 seconds to run.
I've checked the duration of each external call and it seems fine. I mean, there are external resource calls, so I would understand it having to wait for ~200 ms.
What I don't understand is the gap between a resource call and the next step, where nothing happens in between for 6 seconds.
What could be the reason?
Furthermore, it usually takes less than 1 second, so I don't think my code could cause a wait of 6 seconds :|
What I don't understand is the gap between a resource call and the next step, where nothing happens in between for 6 seconds. What could be the reason?
When we call multiple dependencies for the first time, there is normally a big gap between the different dependencies, because it takes a period of time to load each new dependency. After the first call, if we run the same page again, there is only a slight delay between the dependencies, and later calls give similar results.
If your problem does not happen on the first call to the dependencies, you can find which dependency's duration is longest by clicking 'View as timeline', and then optimize the code around that dependency in your project. Sometimes the delay also occurs in our internal processing. The official docs have a related explanation:
Request timeline
In a different case, there is no dependency call that is particularly long. But by switching to the timeline view, we can see where the delay occurred in our internal processing:
Calling dependencies for the first time: (screenshot)
Calls after the first time: (screenshot)

JMeter - how to get a higher randomization effect?

I need to simulate "real traffic" on a Web farm; in other words, I need to generate high peaks, but also periods with fewer or even no HTTP requests (hits) at all. The reason is to test some automated mechanisms for adding and reducing CPU and memory for the Web servers themselves (that is another story). That is why I need "totally random" scenarios where I have load, but also periods with little or no traffic (so I can add or reduce compute power).
This is the situation I get now: as you can see, I always have some average load hovering around some number of hits, even if I change 10 to 100 threads. The results will always settle around some average value. There are no periods with less or more traffic separated by ~10 minutes or so, only by a few seconds.
Current situation: (screenshot)
I would like to get higher variation in hits/requests, with some time breaks in between.
The situation I want: i.stack.imgur.com/I4LhU.png
I tried several timers but had no success, and I do not want to use the "Ultimate Thread Group" and similar components, because I want the test to be totally random and not predefined with time breaks and pause periods (thread delays). I would like a test that is totally randomized by itself, which could for example generate from 1 to 100 users per XY time.
This is my current JMeter setup: i.stack.imgur.com/I4LhU.png
I do not know if I am missing some parameter in the current setup, or whether there is a totally different way to do this.
Thanks a lot!
If this is something you really want (I strongly believe that a test needs to be repeatable, not random), I would suggest using the Constant Throughput Timer for this. Despite the word "Constant" in its name, you can use a function or a variable there, for instance __Random(), and you will get different, controllable "spikes" on each iteration.
Moreover, you can put a __P() function there and amend its value via the Beanshell server while the test is running.
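A minimal sketch of what the timer's "Target throughput" field could hold, assuming it is evaluated with JMeter functions (the property name "throughput" is a placeholder):

```
${__Random(1,6000,)}
${__P(throughput,60)}
```

The first expression yields a new random target (in samples per minute) each time it is evaluated; the second reads the "throughput" JMeter property with a default of 60, which can then be changed at runtime, e.g. through the Beanshell server.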

Coded UI test fails after a specific period of time on its own

It goes like this: in my test method, I have 3 Playback.Wait() calls, each set to 2 minutes. Between those waits, I am doing some stuff here and there; that stuff works, was tested without those 3 waits, and is all OK. As soon as the 3 waits are there, the test method just exits on its own, somewhere during the 2nd wait. There is no call stack, no useful info regarding the cause of the test termination, nothing. I am pretty much clueless at the moment about what is happening. The elapsed time is always 5 minutes, no more, no less.
Do you have any idea what can be wrong?
This is a setting in your .testsettings file, under the Test Timeouts section. Specifically, "Mark individual test as failed if its execution time exceeds:", which then gives you fields to enter a time in hours, minutes, and seconds. I believe that 5 minutes is the default. You can increase this to fit your purpose, or just remove it entirely (not recommended).
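For reference, a rough sketch of the corresponding XML in a .testsettings file (the exact element names and schema namespace are my assumptions, and the value is in milliseconds, so 300000 would be the 5-minute default):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<TestSettings name="TestSettings1"
              xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <Execution>
    <!-- Raise testTimeout (in ms) above the default 5 minutes, e.g. to 30 minutes -->
    <Timeouts testTimeout="1800000" />
  </Execution>
</TestSettings>
```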

Modified activity selection

In activity selection we sort on the finish time of the activities and then apply the constraint that no two activities can overlap. I want to know whether we can do it by sorting on start time and then seeing that the activities do not overlap.
I was going through http://www.geeksforgeeks.org/dynamic-programming-set-20-maximum-length-chain-of-pairs/
This link has a dynamic programming solution for finding the maximum length chain of pairs of numbers, which according to me is another formulation of the activity selection problem. But I have searched the net and have also read Cormen, and everywhere they ask to sort on finish times.
I guess it shouldn't matter on which times (start or finish) we sort, but I just want to confirm.
In a greedy algorithm we always try to maximize our result. Thus, in activity selection we try to accommodate as many processes as we can in a given time interval without any of them overlapping.
If you sort on start time, then your solution might not be an optimal solution. Let's take an example:
Process    Start Time    Finish Time
A          1             9
B          3             5
C          6             8
Sorted on start time:
If you execute process A, because it starts the earliest, no other process can be executed, because they would overlap. Therefore, in the given time interval you can execute only one process.
Sorted on finish time:
If you execute process B, because it ends the earliest, you can execute process C after that. Therefore, in the given time interval you can execute two processes.
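A minimal sketch of the finish-time greedy in Java (class and method names are illustrative), run on the example above:

```java
import java.util.Arrays;
import java.util.Comparator;

public class ActivitySelection {
    // Greedy activity selection: sort by finish time, then take every
    // activity whose start is not before the last selected finish.
    static int maxActivities(int[][] activities) {
        // activities[i] = {start, finish}
        Arrays.sort(activities, Comparator.comparingInt(a -> a[1]));
        int count = 0;
        int lastFinish = Integer.MIN_VALUE;
        for (int[] act : activities) {
            if (act[0] >= lastFinish) {  // no overlap with the last pick
                count++;
                lastFinish = act[1];
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // A(1,9), B(3,5), C(6,8) from the table above
        int[][] acts = { {1, 9}, {3, 5}, {6, 8} };
        System.out.println(maxActivities(acts));  // prints 2 (B, then C)
    }
}
```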
