I have 2 thread groups in my test case. Group A should complete before Group B starts to execute (at least I thought this was the way it worked).
Unfortunately they fire at the same time and tests from both thread groups execute concurrently. What can I do to prevent this, i.e. not allow Group B to start before Group A is done executing?
Go to the test plan element and check the box "Run Thread Groups consecutively". This will give you what you want.
http://jmeter.apache.org/usermanual/component_reference.html#Test_Plan
Related
I have a load-testing scenario where the test plan has multiple thread groups, each thread group sends a different type of HTTP request, and the groups are designed to execute in sequence.
Below is the scenario I'm testing:
Test-Plan
+---Thread-Group(Register-Request)
+---Thread-Group(Container-Request)
+---Thread-Group(Subscription-Request)
+---Thread-Group(Data-Request)
+---Thread-Group(Deregister-Request)
Load testing has to follow the defined sequence. Each user thread reads thread-specific values from a CSV file, and during testing the JMeter output shows that:
User threads don't move from Thread-Group(Register-Request) to Thread-Group(Container-Request) until all user threads have completed execution, which looks odd to me.
Any idea what could be the reason for this behaviour?
User threads don't "move" from one Thread Group to another Thread Group, each Thread Group has its own pool of virtual users and they're not connected by any means.
So if you want each user to execute some actions (Register-Request, Container-Request, etc.) sequentially - you need to put the relevant Samplers under the same Thread Group.
If your workload model is more complex, e.g. you need to run different scenarios with different throughputs while maintaining the user session across Thread Groups, you can take a look at the Using JMeter Variables With Multiple Thread Groups article or the Inter-Thread Communication Plugin.
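Under that approach, the restructured test plan (a sketch, reusing the request names from the question) would look like:

Test-Plan
+---Thread-Group
    +---HTTP Request (Register-Request)
    +---HTTP Request (Container-Request)
    +---HTTP Request (Subscription-Request)
    +---HTTP Request (Data-Request)
    +---HTTP Request (Deregister-Request)

Each virtual user then executes the five requests in order within a single iteration, which gives the per-user sequencing the question asks for.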
Because "reasons", we know that when we use azureml-sdk's HyperDriveStep we expect a number of HyperDrive runs to fail -- normally around 20%. How can we handle this without failing the entire HyperDriveStep (and then all downstream steps)? Below is an example of the pipeline.
I thought there would be a HyperDriveRunConfig param to allow for this, but it doesn't seem to exist. Perhaps this is controlled on the Pipeline itself with the continue_on_step_failure param?
The workaround we're considering is to catch the failed run within our train.py script and manually log the primary_metric as zero.
Thanks for your question.
I'm assuming that HyperDriveStep is one of the steps in your Pipeline and that you want the remaining Pipeline steps to continue, when HyperDriveStep fails, is that correct?
Enabling continue_on_step_failure should allow the rest of the pipeline steps to continue when any single step fails.
Additionally, the HyperDrive run consists of multiple child runs, controlled by the HyperDriveConfig. If the first 3 child runs explored by HyperDrive fail (e.g. with user script errors), the system automatically cancels the entire HyperDrive run, in order to avoid further wasting resources.
Are you looking to continue other Pipeline steps when the HyperDriveStep fails, or are you looking to continue other child runs within the HyperDrive run when the first 3 child runs fail?
Thanks!
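The workaround described in the question (logging the primary metric as zero when training fails) can be sketched in plain Python. This is a minimal sketch: the helper names and the metric name "accuracy" are assumptions, and in a real train.py the log_metric callback would be azureml.core.Run.get_context().log:

```python
# Sketch of the workaround from the question: if training raises, log the
# primary metric as 0.0 so HyperDrive records a worst-case score instead
# of a failed child run. The metric name "accuracy" is an assumption.
def run_training_safely(train_fn, log_metric):
    """Run train_fn(); on any exception, fall back to a zero metric."""
    try:
        metric = train_fn()
    except Exception:
        metric = 0.0  # treat the failed run as a worst-case score
    log_metric("accuracy", metric)
    return metric

if __name__ == "__main__":
    logged = {}

    # A training function that fails, as ~20% of runs are expected to:
    def flaky_train():
        raise RuntimeError("simulated user-script error")

    run_training_safely(flaky_train, lambda k, v: logged.update({k: v}))
    print(logged)  # {'accuracy': 0.0}
```

At the pipeline level, continue_on_step_failure can be passed when the pipeline is submitted (e.g. experiment.submit(pipeline, continue_on_step_failure=True)) so that downstream steps still run if the HyperDriveStep fails. Note, however, the answer's caveat that HyperDrive cancels the whole run if the first few child runs fail, which the zero-metric fallback above also avoids.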
I'm running scripts that require a different thread for each user account I pull from a database. So the script starts by running a JDBC processor to get all the accounts and store them (using the "Variable Names" field) in "accounts". Then I run a BeanShell PreProcessor to convert the variable "accounts_#" to a property:
props.put("p_accounts_#",vars.get("accounts_#"));
Then, I have a thread group start. Under "Number of Threads (users)", I have
${__P(p_accounts_#)}
The FIRST time I run this script (after launching JMeter), I only get a SINGLE thread. Every subsequent time I run it, it runs for all accounts.
It seems that for some reason the property is not saved until the end of the first execution. This is a very big problem, as when JMeter is launched without the GUI it only runs a single thread every time.
Am I setting the property incorrectly? I also tried it with a BeanShell Assertion, with the same result.
Just as a test, I created a new test with the bare minimum I needed to reproduce this. Here's the script (images): http://imgur.com/a/WB5J2
It's a Beanshell PreProcessor with "props.put("accounts","12");"
Then a Thread group using "${__P(accounts)}" as the Number of Threads
Then inside that thread group is a Debug Sampler outputting the JMeter properties.
At the end is a View Results Tree.
When I run it the first time, there's only one output: "Thread 1 Running".
When I run it again, there are 12 outputs: "Thread 1 Running", "Thread 2 Running", etc.
I can see that for both Debug Samplers (for the first run and the second run), the "accounts" property is set to 12. But the Thread Group needed to execute TWICE before it would work.
Any ideas?
This can be solved by adding another Thread Group of type "setUp Thread Group" to contain the setup portion. If you put all of your staging steps into this type of Thread Group, it will run prior to any other Thread Groups. You can keep your PreProcessor there, or move the logic to a BeanShell Sampler if you'd like, and set the property from there.
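Concretely (a sketch reusing the names from the question), the setUp Thread Group would hold a BeanShell Sampler with just:

props.put("p_accounts_#", vars.get("accounts_#"));

while the regular Thread Group keeps ${__P(p_accounts_#)} in its "Number of Threads (users)" field. Because a setUp Thread Group finishes before any regular Thread Group starts, the property should already be set when the thread count is resolved, including on the first run and in non-GUI mode.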
So I have a few scripts to run in the Controller, and I am wondering how it actually allocates Vuser IDs.
Does it go through each Vuser list and allocate a Vuser ID, something like:
Script 1: 1, 5, 9
Script 2: 2, 6, 10
Script 3: 3, 7, 11
Script 4: 4, 8, 12
Or will it allocate the user count based on the script, so 3 users in Script 1 would give 1, 2, 3 for instance? If need be I can give more detail, I just can't think how to explain it!
Or will it do it in a different way?
Thanks,
Vuser IDs are allocated in the scope of a Vuser group.
Every time a Vuser is added, a new ID greater than the previously assigned ID is allocated. Removing Vusers will not "release" the IDs of the removed Vusers.
The Controller provides a feature to "renumber" Vusers. It actually reallocates IDs starting from 1, but still within the scope of each Vuser group.
This answer was kindly provided by an HP expert.
I am not sure I understand your question. Uniqueness is enforced, if that is your concern. My working hypothesis is that the sequence follows the order in which you add users to the scenario. Since most people add a group at a time, it would appear sequential by group. You can view the virtual-user numbers in the run-screen details for the group to check your thoughts on numbering as users are added versus ramped up/executed.
So to explain my situation:
I have a JMeter test plan that runs some thread groups constantly in a loop. In addition, I need multiple sampler requests to go through together each minute (to simulate spiked usage). I can't set a Constant Timer to delay each of these, because some may finish quicker than others and they won't stay in sync.
Is there a way to make multiple thread groups send a request every minute while the test is running?
OR
Is there a way to put all these samplers in one Thread Group and make them all run concurrently?
As far as I'm able to understand your use case, you need 2 Thread Groups.
First Thread Group, which contains SOAP Sampler A.
Second Thread Group, which contains SOAP Sampler B.
Then you need to set different variables for both thread groups to make them behave according to your use case and implement spikes you need.
Important: make sure that "Run Thread Groups consecutively" under your Test Plan is UNCHECKED, otherwise SOAP Sampler B will run after SOAP Sampler A instead of at the same time.
Let's say your scenario is:
5 users hitting 5 URLs (samplers) simultaneously.
So what you need to do is: in your Test Plan, add 5 Thread Groups. In each Thread Group, configure the Number of Threads to 5 and the Ramp-Up Period to 0.
Now, add one HTTP Request sampler in each Thread Group. Configure each sampler according to the URL you want to test.
Add Listener(s) to your Test Plan. Save the Test Plan and Run your test.
Make sure you haven't selected "Run Thread Groups consecutively" in the Test Plan.
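As a sketch, the resulting plan (the URLs are placeholders) would look like:

Test-Plan
+---Thread-Group-1 (Threads: 5, Ramp-Up: 0)
|   +---HTTP Request (URL 1)
+---Thread-Group-2 (Threads: 5, Ramp-Up: 0)
|   +---HTTP Request (URL 2)
+---Thread-Group-3 (Threads: 5, Ramp-Up: 0)
|   +---HTTP Request (URL 3)
+---Thread-Group-4 (Threads: 5, Ramp-Up: 0)
|   +---HTTP Request (URL 4)
+---Thread-Group-5 (Threads: 5, Ramp-Up: 0)
|   +---HTTP Request (URL 5)
+---Listener(s), e.g. View Results Tree

With "Run Thread Groups consecutively" unchecked, all five groups start together, so all 25 threads fire simultaneously.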