I'm running scripts that require a different thread for each user account I pull from a database. So the script starts by running a JDBC processor to get all the accounts and store them (using the "Variable Names" field) in "accounts". Then I run a BeanShell PreProcessor to convert the variable "accounts_#" to a property:
props.put("p_accounts_#", vars.get("accounts_#"));
Then, I have a thread group start. Under "Number of Threads (users)", I have
${__P(p_accounts_#)}
The FIRST time I run this script (after launching JMeter), I only get a SINGLE thread. Every subsequent time I run it, it runs for all accounts.
It seems like the property is not being saved until the end of the first execution. This is a very big problem, because when JMeter is launched without the GUI, it only runs a single thread every time.
Am I setting the property incorrectly? I also tried it with a BeanShell Assertion, with the same result.
Just as a test, I created a new test with the bare minimum I needed to reproduce this. Here's the script (images): http://imgur.com/a/WB5J2
It's a BeanShell PreProcessor with "props.put("accounts","12");"
Then a Thread group using "${__P(accounts)}" as the Number of Threads
Then inside that thread group is a Debug Sampler outputting the JMeter properties.
At the end is a View Results Tree.
When I run it the first time, there's only one output: "Thread 1 Running".
When I run it again, there are 12 outputs: "Thread 1 Running", "Thread 2 Running", etc.
I can see that for both Debug Samplers (for the first run and the second run), the "accounts" property is set to 12. But the Thread Group needed to execute TWICE before it would work.
Any ideas?
This can be solved by adding another Thread Group of the 'setUp Thread Group' type to contain the setup portion. If you put all of your staging steps into this type of Thread Group, it will run prior to any other Thread Groups. You can then have your preprocessor there, or move the logic to a BeanShell Sampler if you'd like, and set the property from there.
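For illustration, a minimal sketch of the sampler inside the setUp Thread Group, assuming the JDBC sampler stores its row count in accounts_# as in the question:

    // BeanShell Sampler inside the setUp Thread Group:
    // copy the JDBC row count into a JMeter property before any other
    // Thread Group starts, so ${__P(p_accounts_#)} resolves correctly
    // on the very first run
    props.put("p_accounts_#", vars.get("accounts_#"));

Because the setUp Thread Group completes before the main Thread Group is started, the property already exists by the time the thread count is evaluated.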
I need to set a timeout in a JCL step that calls a Unix script through BPXBATCH. I did it with
//STEPX EXEC PGM=BPXBATCH, PARM='sh /x.sh',TIME=(,10)
However, after some time I realized that this does not include the time spent in the queue. The documentation says: "This run time refers to actual execution time only, and does not include the time that the job spends in the INPUT or INPUT HOLD queues" https://supportline.microfocus.com/documentation/books/rd60/cbwjto.htm
That is Micro Focus JCL documentation, but I verified the behavior is the same on IBM Z.
So even if I set the timeout to 10 seconds, the step can take several minutes if the queue is busy with other work. I need a timeout that kills the step no matter the reason it took so long. I haven't been able to find what I need. Please help.
z/OS batch really isn't the best choice for time-critical work. As you figured out, the JCL "TIME" parameter is about CPU time consumption, not an elapsed time control. If this is a business-critical need, then by all means talk to your z/OS administrators - they can certainly configure your system such that your job is very likely to run without delay, but this isn't usually default behavior.
You don't provide a lot of detail as to what else your job might be doing and how it gets submitted. If you have the ability to control how your job is submitted, one option might be to spawn your shell script directly rather than submitting a batch process to run your script.
For example, what you've described is submitting JCL that spawns BPXBATCH, and then BPXBATCH spawns your shell script. Instead, you might write a small C program that simply calls "spawn()" to run the shell script as a distinct UNIX process - that's not difficult, depending on how you're submitting the JCL you shared. You cut out the need for the batch job - you just run your script directly.
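A minimal sketch of that idea, using the portable posix_spawn() rather than the z/OS-specific spawn(); the /x.sh path is taken from the question:

    #include <spawn.h>
    #include <stdio.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void) {
        /* Spawn /bin/sh as a separate UNIX process to run the script,
           then wait for it and propagate its exit code. */
        pid_t pid;
        char *argv[] = { "sh", "/x.sh", NULL };
        if (posix_spawn(&pid, "/bin/sh", NULL, NULL, argv, environ) != 0) {
            perror("posix_spawn");
            return 1;
        }
        int status;
        if (waitpid(pid, &status, 0) < 0) {
            perror("waitpid");
            return 1;
        }
        return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
    }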
If you're running in a TSO environment, the OSHELL command lets you interactively run your script. You can even automate the whole process with a simple REXX script, and none of this requires a pass through a batch initiator.
If your site runs SSH or similar, you might consider launching your script through an SSH command - this even works across a network. SSH lets you launch a shell session and pass a command for execution...again, there's no JCL or input queue here.
If your administrators would allow it, another alternative would be to run your JCL via a "START" command. Unlike batch JCL, when a START command is encountered, the work you're starting runs immediately - there's no input queue for started tasks. Start commands can be issued from JCL too, and since they're issued as the JCL is scanned and not when the job starts, these are fairly immediate too.
Inside your shell script, it's pretty easy to set up an elapsed time limit.
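A rough sketch of one common pattern (the workload path and the 10-second limit are placeholders):

    #!/bin/sh
    # Run the real work in the background and kill it if it exceeds
    # the elapsed (wall-clock) limit, regardless of CPU time used.
    LIMIT=10
    /your/real/work.sh &          # placeholder for the actual workload
    WORKPID=$!
    ( sleep "$LIMIT" && kill -9 "$WORKPID" 2>/dev/null ) &
    WATCHDOG=$!
    wait "$WORKPID"
    RC=$?
    kill "$WATCHDOG" 2>/dev/null  # work finished in time; stop the timer
    exit $RC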
I see a couple of problems in your code...
//STEPX EXEC PGM=BPXBATCH, PARM='sh /x.sh',TIME=(,10)
First, you have a space between BPXBATCH, and PARM=, which means your shell script will not execute and may result in a JCL error.
Second, you are using the TIME parameter of the EXEC statement, which limits CPU time, yet you reference a desire to cancel the job step if it waits more than some amount of time in the input queue, which is a clock time limitation.
There is no way to cancel the job from the job itself via JCL parameters based on clock time, either including or excluding time spent in the input queue.
If you really need to do this, I suggest you look into capabilities of your shop's job scheduler package. You might want to reexamine why you need to cancel a job if it doesn't run to completion within 10 clock seconds after you submit it.
I'm trying to test how many VUs my PC can handle. So far I've run 5000 VUs in GUI mode and it's working fine, so I'm wondering whether this is really the correct way to do it. Here's my laptop specification:
16GB DDR4, 512GB SSD, i5 7th gen
For now I've already tested with 6 HTTP requests; the Thread Group information is below:
Can anyone verify whether I'm testing the limit the correct way? And why, while the test is running, does the number of users shown at the top right stay at 47? At the end of the test, the summary report listener really shows that it ended with 5000 VUs (please refer to the JMeter logs in the first screenshot).
You have only 1 loop and a 120-second ramp-up in the Thread Group, so JMeter starts roughly 41 users each second. Once started, each user executes the Samplers from top to bottom, and when the last sampler is done the thread is shut down.
So you're running into the situation where some threads have already finished their job and ended while others haven't started yet. If you want to reach 5000 concurrent users, you need to tick "Infinite" next to Loop Count and specify a Thread lifetime (Duration) greater than your ramp-up time.
More information: JMeter Test Results: Why the Actual Users Number is Lower than Expected
According to JMeter Best Practices you should:
Run your test in command-line non-GUI mode
Disable or delete all the Listeners as they don't add any value, just consume the resources
Once your non-GUI test execution is finished, you can either open the .jtl results file using a Listener of your choice or generate an HTML Reporting Dashboard out of it and analyze the results
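For example, a typical non-GUI invocation (file and folder names are placeholders):

    jmeter -n -t plan.jmx -l result.jtl -e -o dashboard

where -n enables non-GUI mode, -t points to the test plan, -l writes the .jtl results file, and -e -o generates the HTML Reporting Dashboard into the given folder at the end of the run.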
Because "reasons", we know that when we use azureml-sdk's HyperDriveStep we expect a number of HyperDrive runs to fail -- normally around 20%. How can we handle this without failing the entire HyperDriveStep (and then all downstream steps)? Below is an example of the pipeline.
I thought there would be a HyperDriveRunConfig param to allow for this, but it doesn't seem to exist. Perhaps this is controlled on the Pipeline itself with the continue_on_step_failure param?
The workaround we're considering is to catch the failed run within our train.py script and manually log the primary_metric as zero.
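A rough sketch of that workaround inside train.py (train_and_evaluate is a hypothetical stand-in for our actual training code, and "primary_metric" for the configured metric name):

    from azureml.core import Run

    run = Run.get_context()
    try:
        metric = train_and_evaluate()  # hypothetical training routine
        run.log("primary_metric", metric)
    except Exception:
        # Report a worst-case score instead of letting the child run
        # fail, so the HyperDrive parent run keeps going.
        run.log("primary_metric", 0.0)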
Thanks for your question.
I'm assuming that HyperDriveStep is one of the steps in your Pipeline and that you want the remaining Pipeline steps to continue, when HyperDriveStep fails, is that correct?
Enabling continue_on_step_failure should allow the rest of the pipeline steps to continue when any single step fails.
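For example, something along these lines when submitting (the workspace object, pipeline object, and experiment name are placeholders from your context):

    from azureml.core import Experiment

    # Submit the pipeline so that the remaining steps keep running
    # even if a single step, such as the HyperDriveStep, fails.
    experiment = Experiment(workspace, "hyperdrive-pipeline")
    pipeline_run = experiment.submit(pipeline, continue_on_step_failure=True)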
Additionally, the HyperDrive run consists of multiple child runs, controlled by the HyperDriveConfig. If the first 3 child runs explored by HyperDrive fail (e.g. with user script errors), the system automatically cancels the entire HyperDrive run, in order to avoid further wasting resources.
Are you looking to continue other Pipeline steps when the HyperDriveStep fails? Or are you looking to continue other child runs within the HyperDrive run when the first 3 child runs fail?
Thanks!
I'm facing an annoying issue in Blue Prism; a little help would be appreciated.
When I run the task I created in Object Studio directly in Object Studio, it runs successfully, but when I try to run the same task from Process Studio using an action, it throws an error. The application is launched, but I get this error. (The application is web-based.)
Internal: Failed to perform step 1 in Read Stage 'Reader1' on page 'Main' - No elements match the supplied query terms
These are the Application Modeller settings:
Application Modeller
And this is how I call it in the Process:
Object Called in process
Action Properties
The Wait settings are the following:
When I try to highlight the link, it does highlight it.
I think that after your Reader1 there should be a decision stage checking whether the element was found, and then you can proceed with the log-in. But I'd first check whether the element you're spying is identified correctly. Maybe try passing the value of the reader from the object to the process.
A process in Blue Prism can execute at different speeds depending on how you run it.
If you're running the application using the "Step" function (hotkey F5), then Blue Prism waits a long time between executing actions. "Step Over" (hotkey F10) is much faster, but the fastest possible speed of execution is from Control Room.
The delay from "Step" or "Step Over" can be enough to make the process work during development. Once the process is moved to Control Room, the delay is gone, and sometimes the process may run too fast. It can happen that Blue Prism tries to interact with an element that does not yet exist.
To make the process work in Control Room, you need additional wait stages that ensure the process does not run ahead of the application being automated. Whenever you interact with an element, you need to be sure that it exists first.
I suspect that you're waiting for one element but then trying to read a different one. It's important to wait for the exact element you want to interact with, as elements can appear in an order that can make your process crash.
So to explain my situation:
I have a JMeter test plan that runs some test groups constantly in a loop. In addition to this, I need multiple sampler requests to go through together each minute (to simulate spiked usage). I can't set a constant timer to delay each of these, because some may finish quicker than others and they won't be in sync.
Is there a way to make multiple test groups send a request every minute the test is running?
OR
Is there a way to put all these samplers in 1 thread group and make them all run concurrently?
As far as I'm able to understand your use case, you need 2 Thread Groups.
First Thread Group, which holds SOAP Sampler A
Second Thread Group, which holds SOAP Sampler B
Then you need to set different variables for both thread groups to make them behave according to your use case and implement spikes you need.
Important: make sure that "Run Thread Groups consecutively" under your Test Plan is UNCHECKED, otherwise SOAP Sampler B will run after SOAP Sampler A rather than at the same time.
Let's consider your scenario:
5 users hitting 5 URLs (samplers) simultaneously.
So what you need to do is add 5 Thread Groups to your Test Plan. In each Thread Group, configure the Number of Threads to 5 and the Ramp-Up Period to 0.
Now, add one HTTP Request sampler in each Thread Group. Configure each sampler according to the URL you want to test.
Add Listener(s) to your Test Plan. Save the Test Plan and Run your test.
Make sure you haven't selected "Run Thread Groups consecutively" in the Test Plan.