I am trying to load multiple users from different Thread Groups who are supposed to hit 3 different URLs at once, with the following scenario:
Test Plan
Synchronizing Timer (Number of Simultaneous Users to Group by: 100)
Thread Group 1 (40 users)
Thread Group 2 (20 users)
Thread Group 3 (40 users)
Is the above scenario correct, i.e. can I proceed with one Synchronizing Timer as a common timer whose grouping count is the total of all Thread Group users, 40+20+40=100?
Help Appreciated!
If you want your 100 users to hit the server at almost exactly the same time, then yes.
Note that you would end up with only a very short delay between each set of 100 hits, so you need to be sure this is realistic.
If you only want to simulate 100 users exercising 3 types of URLs, there is no need for a Synchronizing Timer here; use another timer instead, for example a Gaussian Random Timer.
Related
My question is somewhat related to this and this, but neither really answers it.
My test case is pretty simple: I need to keep a constant number of active concurrent users (e.g. 10) hitting the system over a period of time (e.g. 60 seconds).
My code looks like this:
import scala.concurrent.duration._

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import io.gatling.http.protocol.HttpProtocolBuilder

class TestSimulation extends Simulation {

  val TestProtocolBuilder: HttpProtocolBuilder =
    http.baseUrl("https://computer-database.gatling.io")

  object Test {
    val test =
      exec(
        http("Test_Only")
          .get("/computers")
          .check(status.in(200, 404))
      )
  }

  val TestOnly = scenario("Test Only").exec(Test.test)

  setUp(
    TestOnly.inject(
      constantConcurrentUsers(10) during (60.seconds)
    ).protocols(TestProtocolBuilder)
  )
}
This documentation says
constantConcurrentUsers(nbUsers) during(duration) : Inject so that number of concurrent users in the system is constant
I am expecting 10 concurrently active users constantly hitting the API for 60 seconds: no more than 10 users and no fewer than 10 users at any time.
What I see in the HTML report is that the active-user count at any given time is much higher than 10 (almost double).
Learning from the documentation, it says:
This chart displays the active users during the simulation : total and per scenario.
“Active users” is neither “concurrent users” or “users arrival rate”. It’s a kind of mixed metric that serves for both open and closed workload models and that represents “users who were active on the system under load at a given second”.
It’s computed as:
(number of alive users at previous second) + (number of users that were started during this second) - (number of users that were terminated during previous second)
Questions:
Why does Gatling keep terminating users and starting new ones during the test period? What's the point?
How can I get a constant 10 concurrent active users (no more, no fewer) hitting my API for the duration of the test, when constantConcurrentUsers(10) during (60 seconds) gives me a much higher active-user count that keeps fluctuating throughout the test period? I need to stick to my test case and not overload the API.
In the image above, at that given second the number of requests = 7 and the active users = 20. Does it mean that at that moment there are 7 active users sending out requests and 20 - 7 = 13 active users sitting idle, waiting for responses to come back from the API?
Thanks.
Why does Gatling keep terminating users and starting new ones during the test period? What's the point?
Virtual users' lifespan is driven by your scenario; injection profiles only control when users are injected/started.
If you don't want your users to terminate after just one request, add a loop to your scenario, as in the sketch below.
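A minimal sketch, reusing the Test.test chain and TestProtocolBuilder from the question: wrap the request in a during loop so each injected user keeps issuing requests for the whole test window instead of finishing after one.
val TestOnly = scenario("Test Only")
  .during(60.seconds) {   // each user loops here for 60 s instead of exiting
    exec(Test.test)
  }

setUp(
  TestOnly.inject(
    constantConcurrentUsers(10) during (60.seconds)
  ).protocols(TestProtocolBuilder)
)
With this shape, the 10 users injected at the start stay alive and keep looping for the whole minute, so Gatling has no finished users to replace and the active-users chart stays close to 10.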
How can I get a constant 10 concurrent active users
Nonsensical: as you quoted yourself, concurrent != active. I guarantee you have a constant number of concurrent users, meaning exactly 10 users alive at the same time. The thing is that, as your scenario contains one single request, each user terminates right after it and is replaced with a new one.
Does it mean that at that moment there are 7 active users sending out requests and 20 - 7 = 13 active users sitting idle, waiting for responses to come back from the API?
It means the virtual users' lifespans overlapped two seconds, so they were counted as alive in two different one-second buckets.
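A worked example with illustrative numbers: a user that lives from t = 1.4 s to t = 2.2 s is counted in both the second-1 bucket and the second-2 bucket. With 10 concurrent slots each being vacated and refilled about once per second, a given second sees roughly 10 carried-over lifespans plus 10 new ones, i.e. around 20 "active" users, matching the almost-double figure in the report.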
Is there a way to do a graceful ramp-down of my user load while running load tests using the Locust framework?
For example, I want to run a single user in a loop for a period of 5 minutes, but the last iteration ends abruptly, with, say, 5 requests completed on some tasks and only 4 on others.
Yes! There is a new flag (developed by yours truly :) called --stop-timeout that will make Locust wait for the users to complete (up to the specified time span).
There is (as of now) no way to do actual ramp down (gradually reducing the load by shutting down users over time), but hopefully this is enough for you. Such a feature might be added at a later time.
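A hedged usage example (the file name and durations are assumptions, and the flags are from recent Locust versions): locust -f locustfile.py --headless -u 1 -t 5m --stop-timeout 30 runs one user for five minutes, then waits up to 30 seconds at shutdown for in-flight tasks to finish instead of cutting the last iteration short.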
You can create your own Shape, which lets you specify how many users should be present at any point of the test.
This is explained there: https://github.com/locustio/locust/pull/1505
I have an application that needs a Windows service with multiple threads to read 100 task records at a time from a SQL Server table or MongoDB collection. Once a thread finishes reading its 100 tasks, it must set their status from 0 ('not processing') to 50 ('processing'); the thread then runs some business logic and, once done, updates the status from 50 to 100 ('done') or -1 ('error encountered'). My question is: how do I prevent several different threads from reading the same 100 records from the table if I choose not to use a lock?
It's very simple. Just pick a reasonable number of records to process in a pass. Say you have 53,000 records and you choose 250 as the number to do per pass. That means you have 212 work units that need to be done and you can number them from 0 to 211 inclusive. Work unit N includes records 250*N to 250*N+249 inclusive.
Now the problem reduces to giving each thread an undone work unit to do. So long as all work units are done and no two threads attempt to do the same work unit, no database locks are needed.
How you assign the work units to threads is language-specific. But the simplest way is to use some kind of thread pool and dispatch each work unit to it, as in the sketch below.
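A minimal Scala sketch of the idea, using the numbers from the answer; processWorkUnit is a placeholder for the real read/update business logic:
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration.Duration

val totalRecords = 53000                   // numbers from the example above
val perPass      = 250                     // records per work unit
val workUnits    = totalRecords / perPass  // 212 units, numbered 0 to 211

// Placeholder: read records [first, last], mark them 'processing',
// run the business logic, then mark them 'done' or 'error'.
def processWorkUnit(n: Int): Unit = {
  val first = perPass * n
  val last  = perPass * n + perPass - 1
  println(s"unit $n handles records $first..$last")
}

implicit val pool: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(8))

// Each work unit number is dispatched exactly once, so no two threads
// ever touch the same records and no database lock is needed.
val allDone = Future.sequence((0 until workUnits).map(n => Future(processWorkUnit(n))))
Await.result(allDone, Duration.Inf)  // in a real service, also shut the pool down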
I need two users on a web page; both have different logins and actions, so I am using two threads (login, action, logout).
Now I want user2 to start after user1 has completed 100 requests, while user1 carries on with requests 101, 102, and so on.
How do I do this?
I don't think I can use fixed delays, since the 100 requests may take 100 seconds on one run and 90 seconds on another.
Is there any way to start another thread while the present thread is still running?
If you're using two different users with two sets of actions that should run at two different times, it sounds to me like you want two different Thread Groups. Make sure to enable Run Thread Groups consecutively on the Test Plan.
I have a long-running SharePoint timer job and I would like to display its progress in Central Administration (so I'm using the SPJobDefinition.UpdateProgress(int percentage) method).
Let's say I have 50,000 elements on a list that I want to update in a foreach loop. If I place something like job.UpdateProgress((int)(100.0 * itemNo / itemCount)) in the loop, will it send a web request to the SharePoint server each time the method is called (50,000 times), or only when the percentage actually changes (no more than 100 times)?
I don't want any noticeable performance degradation because of this and I suppose that more requests might slow down the job.
Also, what tool is good for inspecting the requests and responses (Fiddler, or would something else be better for SharePoint)?
(In SP2010) Every time you call job.UpdateProgress, the SPJobDefinition class sends an SPRequest. SPJobDefinition does not internally track its percent complete, so it has no way of knowing whether your new int is a change or not unless it contacts the server, so it simply contacts the server every time. So yes, calling this 50,000 times may slow down your code significantly; throttle the calls yourself, as in the sketch below.
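A minimal sketch of the throttling pattern (written in Scala for illustration; reportProgress stands in for the real job.UpdateProgress call): track the last percentage you reported and only contact the server when the integer value changes.
// Stand-in for the server call; in the real timer job this would be
// job.UpdateProgress(percent), which always issues an SPRequest.
def reportProgress(percent: Int): Unit =
  println(s"progress: $percent%")

val itemCount = 50000
var lastPercent = -1

for (itemNo <- 0 until itemCount) {
  // ... update the list item here ...
  val percent = ((itemNo + 1) * 100) / itemCount
  if (percent != lastPercent) {  // only ~100 server calls instead of 50,000
    reportProgress(percent)
    lastPercent = percent
  }
}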
The easiest way to figure out things like this (since the online MSDN documentation can be very sparse at times) is to point a .NET decompiler at Microsoft.SharePoint.dll. Personally, I tend to use ILSpy.