Why does the Grinder console only show 200 "Successful Tests"?

Using The Grinder 3.11 on Windows, I am trying to simulate 300 simultaneous users performing a logon, some actions, and then a logoff. I launch 300 threads by means of 2 agents on 2 different PCs, each agent launching 150 threads. So each agent has a grinder.properties with these values:
grinder.processes=1
grinder.threads=150
grinder.runs=1
As the test launches, I notice in the console's Results tab, in the "Successful Tests" column, that Grinder ramps up quickly to 200, and these 200 "users" complete the test scenario. Only then are the remaining 100 launched, and they too complete the scenario. But this is not what I wanted or expected: I want all 300 users to perform the test simultaneously, not 200 followed by 100.
Why does Grinder run 200 followed by 100 users instead of 300 at once?
How do I run 300 users at once?

The number of simulated users is:
number of worker threads x number of worker processes x number of test machines
In your case each agent creates only 150 threads, and I doubt all of them get spawned at once — there is a limit to how many a single machine can start simultaneously. Try generating the same load from more than one machine and you should see the desired concurrency; virtual machines work as well.
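The formula above is easy to sanity-check with quick arithmetic (the numbers below are taken from the question's grinder.properties; the function name is mine):

```python
def simulated_users(threads, processes, machines):
    """Total simulated users = worker threads x worker processes x test machines."""
    return threads * processes * machines

# 150 threads x 1 process per agent x 2 agent machines
print(simulated_users(150, 1, 2))  # -> 300
```

So the configuration should yield 300 users in total; the question is about when those threads start, not how many exist.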

Related

Number of users drops during load test in Locust

I have a simple Locust load test that I run with 1000 users and a spawn rate of 100. In the web UI I see the number of users grow to 1000, stay there for a few seconds (up to 20), and then drop to the 500s, sometimes the 400s. The interesting thing is that when I edit the value in the web UI to bring it back up to 1000, it drops again, but only to the 900-950 range. Has anyone had this problem, and possibly a solution?

Dataflow exceeds number_of_worker_harness_threads

I deployed a Dataflow job with the parameter --number_of_worker_harness_threads=5 (streaming mode).
Next I sent 20 Pub/Sub messages, each triggering the loading of a big CSV file from GCS and the start of processing.
In the logs I see that the job took 10 messages and processed them in parallel on 6-8 threads (I checked several times; sometimes it was 6, sometimes 8).
Either way, it was always more than 5.
Any idea how this works? It does not seem to be the expected behavior.
Judging from the flag name, you are using Beam Python SDK.
For Python streaming, the total number of threads running DoFns on one worker VM in the current implementation may be up to the value of --number_of_worker_harness_threads times the number of SDK processes running on the worker, which by default is the number of vCPU cores. There is a way to limit the number of processes to 1 regardless of the number of vCPUs: set --experiments=no_use_multiple_sdk_containers.
For example, if you are using --machine_type=n1-standard-2 and --number_of_worker_harness_threads=5, you may have up to 10 DoFn instances in different threads running concurrently on the same machine.
If --number_of_worker_harness_threads is not specified, up to 12 threads per process are used. See also: https://cloud.google.com/dataflow/docs/resources/faq#how_many_instances_of_dofn_should_i_expect_dataflow_to_spin_up_
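The thread math described above can be sketched in a few lines of Python (the function name is my own; the rule — harness threads times SDK processes, with processes defaulting to the vCPU count — is the one stated in the answer):

```python
def max_dofn_threads(vcpus, harness_threads, single_sdk_container=False):
    """Upper bound on concurrent DoFn threads per worker VM (Beam Python streaming)."""
    # With --experiments=no_use_multiple_sdk_containers only 1 SDK process runs;
    # otherwise there is one SDK process per vCPU core.
    processes = 1 if single_sdk_container else vcpus
    return processes * harness_threads

# n1-standard-2 (2 vCPUs) with --number_of_worker_harness_threads=5
print(max_dofn_threads(2, 5))        # -> 10
print(max_dofn_threads(2, 5, True))  # -> 5
```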

How could I simulate the following scenario in JMeter?

I want to do the following:
- 20 users log into the application.
- The users remain connected to the session, but every 20 minutes each one makes a specific request to the page.
- I would like to measure the time of the last request for each user, after being connected for 2 hours and making the specific request every 20 minutes.
Which thread group or actions would be recommended?
Add a Thread Group and configure it as follows:
where:
20 - the number of connected users
6 - the number of iterations (the user makes a request every 20 minutes == 3 requests per hour == 6 requests in 2 hours)
Add Once Only Controller
Add the sampler(s) performing the login under the Once Only Controller; this way the users will perform the login only once
Add the HTTP Request sampler which you're going to run every 20 minutes below the Once Only Controller
Add JSR223 PostProcessor as a child of the HTTP Request and put the following code into "Script" area:
if (vars.get('__jm__Thread Group__idx') != '5') {
    prev.setIgnore()
}
this instructs JMeter not to record the metrics of the first 5 iterations; only the last one will be recorded.
Add Flow Control Action sampler and configure it to sleep for 1200000 milliseconds (20 minutes)
Test plan overview:
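The effect of the JSR223 PostProcessor above can be mimicked in plain Python to see which iterations survive (a toy model, not JMeter code; `__jm__Thread Group__idx` is zero-based, so '5' is the sixth and final iteration):

```python
def recorded_iterations(total_iterations):
    """Model of the PostProcessor: any sample whose zero-based iteration
    index is not the last one is ignored (prev.setIgnore() in Groovy)."""
    kept = []
    for idx in range(total_iterations):
        # mirrors the Groovy check: idx != '5' -> ignore the sample
        if str(idx) == str(total_iterations - 1):
            kept.append(idx)
    return kept

print(recorded_iterations(6))  # -> [5]
```

Only the sixth request (index 5) is recorded, which is exactly the "last request after 2 hours" the question asks to measure.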

Execute (or queue) a set of tasks simultaneously

I have a situation like this:
between 5 and 20 test environments, separated into groups of 5 VMs (1 set = 5 VMs, usually)
hundreds of test cases which should be executed simultaneously on 1 VM set
Celery with 5 workers (each worker for 1 VM from the set: alpha, beta, charlie, delta, echo)
Test sets can run in different orders and take different amounts of time to execute.
Each worker should execute only one test case at a time, without overlapping or concurrency.
Each worker runs tasks only from its own queue/consumer.
In a previous version I had a solution with multiprocessing, and it worked fine. But with Celery I can't add all 100 test cases for all 5 VMs from one set; it starts adding tasks only for VM alpha and waits until they have all finished before starting tasks for the next VM, beta, and so on.
Now when I've tried to use multiprocessing to create separate threads for each worker I got: AssertionError: daemonic processes are not allowed to have children
The problem is: how do I add 100 tests for 5 workers at the same time, so that each worker (alpha, beta, ...) runs its own set of 100 test cases simultaneously?
This problem can be solved using task keys based on each consumer, like:
app.control.add_consumer(
queue='alpha',
exchange = 'local',
exchange_type = 'direct',
routing_key = 'alpha.*',
destination = ['worker_for_alpha#HOSTNAME'])
So now you can send any task to this consumer for separate worker using key and queue name:
@app.task(queue='alpha', routing_key='alpha.task_for_something')
def any_task(arg_1, arg_2):
    # do something with arg_1 and arg_2
    ...
Now you can scale this to any number of workers, or consumers for a single worker. Just make a collection of them and iterate over it for multiple workers/consumers.
Another issue can be solved with the --concurrency option of each worker.
You can set concurrency to 5 to have 5 simultaneous threads on one worker, or break the task flow into separate threads for each worker with a unique key and consumer (queue).
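The routing idea — every VM gets its own queue, and each worker only drains its own queue — can be sketched without Celery using plain in-process queues (the names alpha...echo are from the question; everything else is illustrative):

```python
from queue import Queue

VMS = ["alpha", "beta", "charlie", "delta", "echo"]

def enqueue_all(test_cases, vms=VMS):
    # One queue per VM. All tasks are enqueued up front for every VM,
    # so the five workers can start draining their own queues in parallel
    # instead of waiting for alpha to finish before beta begins.
    queues = {vm: Queue() for vm in vms}
    for vm in vms:
        for tc in test_cases:
            queues[vm].put(tc)
    return queues

queues = enqueue_all([f"test_{i}" for i in range(100)])
print({vm: q.qsize() for vm, q in queues.items()})
# -> {'alpha': 100, 'beta': 100, 'charlie': 100, 'delta': 100, 'echo': 100}
```

In Celery the same shape is achieved by declaring one queue per VM and routing each task with its VM's routing key, as shown above.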

Transaction per second - vuser relation

I want to set up a load test with LoadRunner. The system requirements are as follows:
1- a maximum of 30K users can be online; I want to test whether the system can reach 15 TPS.
2- I want to test whether the system can reach 2000 TPS while some of the online users visit 5 different pages. With how many vusers should I run this test?
For both browsing and login operations the response time is 0.1 or 0.2 seconds, but think time is ignored for the login operation and is 5 minutes for browsing operations. (These values can be changed for the sake of simplicity.) For the login operation I set the vuser count to 30 and used 1000 iterations to reach 15 TPS.
I know that we can calculate vusers with the formula below:
number of required VUsers = required transactions per second * user scenario length (sec)
but I'm not sure how to apply this to the second scenario.
Required TPS = 15
Users = 5
Pacing = 5/15
Use this and it will work.
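Plugging numbers into the formula from the question: VUsers = TPS times scenario length, and the pacing above is users divided by TPS. The 300-second scenario length below is an assumption derived from the 5-minute browsing think time, so treat the result as a sketch rather than a sizing recommendation:

```python
def required_vusers(target_tps, scenario_length_s):
    # number of required VUsers = required TPS * user scenario length (sec)
    return target_tps * scenario_length_s

def pacing_s(vusers, target_tps):
    # pacing interval that makes `vusers` together generate `target_tps`
    return vusers / target_tps

print(round(pacing_s(5, 15), 3))   # -> 0.333  (the 5/15 pacing from the answer)
print(required_vusers(2000, 300))  # -> 600000 (2000 TPS, ~5 min per scenario)
```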
