Skip K6 test iterations without failing the test - performance-testing

I'm authoring a K6 test which is configured to run many iterations.
During init or setup, I determine that I want to skip this test (e.g. based on something about the environment).
I'm familiar with fail() and test.abort(); both of these will fail the test.
Is there a way to complete it immediately (without running through the configured iterations) and successfully (e.g. with 0 out of 0 checks passing)?
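For illustration, a minimal k6 sketch of the situation described above (a hedged sketch, not a confirmed answer): the two calls mentioned in the question both end the run unsuccessfully, while an early return from the default function is one possible workaround that still cycles through the configured iterations, only very quickly. SKIP_TEST is a hypothetical environment variable name.

```javascript
import exec from 'k6/execution';
import { fail } from 'k6';

export const options = { vus: 10, iterations: 1000 };

export function setup() {
  // Hypothetical environment check; SKIP_TEST is an assumed variable name.
  const skip = __ENV.SKIP_TEST === 'true';

  // Either of these marks the whole run as failed, which is what the
  // question wants to avoid:
  //   if (skip) { fail('skipping'); }            // throws, failing setup
  //   if (skip) { exec.test.abort('skipping'); } // aborts with a non-zero exit code

  return { skip };
}

export default function (data) {
  // Possible workaround: make every iteration a cheap no-op. The run still
  // walks through the configured iterations, but it finishes quickly and
  // reports success with no failed checks.
  if (data.skip) {
    return;
  }
  // ...real test logic and checks go here...
}
```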

Related

How to make a Jest test fail the first time even when using jest.retryTimes?

In my Jest test suite I use
jest.retryTimes(4)
because of the particular and unstable architecture of the software, and this works as expected.
Some tests must pass on the first attempt, so for those particular tests I need to set
jest.retryTimes(1)
at the beginning of the test and restore
jest.retryTimes(4)
at the end.
There are two problems:
1. This configuration is global and tests are executed in parallel, so when this test starts it sets the retry count to 1 for every test running at that moment. I would like only this particular test to fail on its first failed attempt.
2. jest-circus ignores the update of jest.retryTimes at the beginning and at the end of the test; it keeps allowing 4 attempts before raising the failure.
I have read the documentation, but I don't think I can obtain this result.
Any suggestions?
Thanks
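For reference, a minimal sketch of the pattern described above (doCriticalThing() is a hypothetical placeholder). As the question notes, jest-circus treats jest.retryTimes as a global setting, so calling it inside the test does not confine the change to that one test.

```javascript
jest.retryTimes(4); // suite-wide default for the unstable architecture

describe('critical behaviour', () => {
  test('must pass on the first attempt', () => {
    jest.retryTimes(1); // intended to apply only to this test...
    expect(doCriticalThing()).toBe(true); // doCriticalThing() is a placeholder
    jest.retryTimes(4); // ...and to restore the default afterwards, but
                        // jest-circus ignores these mid-test updates
  });
});
```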

VSTest: Order the execution of test assemblies

Our codebase has more than 100 projects, each with tests. Some test assemblies take much longer to execute than others.
Azure DevOps Server runs our whole test suite in parallel, which makes it really fast.
The problem is that the long-running tests are started in the middle of the test run, which makes the whole run take longer than necessary.
Is there a way to influence the order in which the test assemblies are started? I want to start the long-running test assemblies first and the fast test assemblies after that.
Since you are running the tests in parallel, you could try the "Based on past running time of tests" option in the Visual Studio Test task.
According to the documentation on parallel testing:
This setting considers past running times to create slices of tests so that each slice has approximately the same running time. Short-running tests will be batched together, while long-running tests will be allocated to separate slices.
This option groups tests by running time, so each group completes in roughly the same amount of time.
Hope this helps.
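If you define the pipeline in YAML, that option maps to the distributionBatchType input of the Visual Studio Test task. A hedged sketch (field names should be checked against the VSTest@2 task reference for your Azure DevOps Server version):

```yaml
- task: VSTest@2
  inputs:
    testSelector: testAssemblies
    testAssemblyVer2: |
      **\*Tests.dll
      !**\obj\**
    # "Based on past running time of tests" in the classic editor
    distributionBatchType: basedOnExecutionTime
```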
We have achieved this by arranging the project folders so that they sort with the longest-running test assemblies first. You can see the order in which VSTest finds the assemblies in the Azure DevOps output, and from there you can rename folders to affect the order.
It would be nice if there were another way to achieve this.

Threshold for the allowed number of failed HyperDrive runs

Because "reasons", we know that when we use azureml-sdk's HyperDriveStep we expect a number of HyperDrive runs to fail -- normally around 20%. How can we handle this without failing the entire HyperDriveStep (and then all downstream steps)? Below is an example of the pipeline.
I thought there would be a HyperDriveRunConfig param to allow for this, but it doesn't seem to exist. Perhaps this is controlled on the Pipeline itself with the continue_on_step_failure param?
The workaround we're considering is to catch the failed run within our train.py script and manually log the primary_metric as zero.
Thanks for your question.
I'm assuming that HyperDriveStep is one of the steps in your pipeline and that you want the remaining pipeline steps to continue when the HyperDriveStep fails. Is that correct?
Enabling continue_on_step_failure should allow the rest of the pipeline steps to continue when any single step fails.
Additionally, the HyperDrive run consists of multiple child runs, controlled by the HyperDriveConfig. If the first 3 child runs explored by HyperDrive fail (e.g. with user script errors), the system automatically cancels the entire HyperDrive run to avoid wasting further resources.
Are you looking to continue other pipeline steps when the HyperDriveStep fails, or to continue other child runs within the HyperDrive run when the first 3 child runs fail?
Thanks!
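For reference, a minimal sketch of submitting a pipeline with continue_on_step_failure enabled (hd_step and downstream_step are placeholders assumed to be defined earlier, and the experiment name is made up):

```python
from azureml.core import Experiment, Workspace
from azureml.pipeline.core import Pipeline

ws = Workspace.from_config()

# hd_step (the HyperDriveStep) and downstream_step are assumed to exist.
pipeline = Pipeline(workspace=ws, steps=[hd_step, downstream_step])

# continue_on_step_failure lets the remaining pipeline steps run even if a
# step such as the HyperDriveStep fails; it does not change how child runs
# inside the HyperDrive run are handled.
run = Experiment(ws, "hyperdrive-pipeline").submit(
    pipeline, continue_on_step_failure=True
)
```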

Converting existing cypress tests to cucumber style bdd using cypress-cucumber-preprocessor. Second scenario is not picked up

We have an existing application whose tests are written in Cypress. We now want to integrate a Cucumber-style feature which will internally run using Cypress, so we used cypress-cucumber-preprocessor and followed the steps given here on the GitHub page. The problem I'm facing now is that, while running the tests, it shows both scenarios but runs only one: it puts a green tick mark next to the first, never starts the second, and the clock keeps ticking. Clicking the second scenario in the Cypress launcher shows "no commands were issued in this test".
What I have tried:
I duplicated the same scenario twice in the same feature file. It still runs only the first one and does not move on to the next.
I moved the two different scenarios into two different feature files. It runs both of them successfully.
I ran the example repo (cypress-cucumber-example) locally with n scenarios. That works seamlessly.
Some observations:
While the first test was running I opened the Chrome console and saw some errors from failing network calls. But those calls were made (with the same errors) even before I integrated Cucumber, when I was using only Cypress, and all tests were passing then. Is this because of some magic Cucumber brings along with it? I read somewhere that Cucumber's default wait for a test is 60 seconds; I waited up to 170 seconds and then stopped the suite. In the end, one scenario is green and the other never starts.
It took me quite a long time, but I finally figured out what the issue was: I had a line break right after Feature: in my feature file. The IDE didn't flag it as a problem, so everything looked fine. While comparing successful runs against this one, I noticed the feature name was not appearing in the UI, so I removed the \n. It works like a charm now. Remarkable what a single stray newline can do.
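For anyone hitting the same thing, a hypothetical feature file illustrating the fix: the feature name must stay on the same line as Feature: (the broken version had a newline immediately after the colon, pushing the name to the next line). The scenarios and steps below are made up.

```gherkin
Feature: Login

  Scenario: shows the login form
    Given I open the login page
    Then I see the login form

  Scenario: rejects an empty password
    Given I open the login page
    When I submit an empty password
    Then I see a validation error
```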

Execute groovy code in last step of current test case in modular framework without teardown script

I have a SoapUI framework which is modular. This means I can execute test cases based upon business operations, which are organized into different suites. Because of this, I need data from other test cases to use in my current test case (which is in a different suite). To accomplish this, I use a Run TestCase step in my current test case which runs the test case in suite 1 and brings the needed data into my current test case (suite 2) via project properties. After I run the current test case, I need the project properties to be cleared, and I have the groovy code to do that.
Here's the issue: since this is modular, I need to clear the project properties ONLY after the CURRENT test case has run. A teardown script at the test case level isn't working, because it always clears the project properties EVEN IF its test case is not the current test case being run. For example, say my current suite is suite 2, and all the test cases in suite 2 have a teardown script that removes the project properties. When I run a test case in suite 3 that needs data from a test case in suite 2, the properties will not be present, because of the teardown scripts in suite 2 (at the test case level). Again, I only need the properties cleared when the last step of the current test case has run, without affecting any other test cases during modular execution. I hope that makes sense.
As a side note, this framework allows me to test business operations by suite for ad hoc testing, and it also allows me to run a full regression from beginning to end (testing all suites in a row). The solution must not break the full regression run either.
Any ideas on how to do this?
In order to do this I had to create setup and teardown scripts at every level: project, suite, and test case.
In the setup script, I created a variable called Is_Running. I then wrote an if statement which says: if Is_Running is null, fill that variable with the name of the project, suite, or test case that is currently being executed. For example, if I'm executing at the project level, this code first checks whether there is anything in the Is_Running container, and if not it writes the project name into that variable.
Then the teardown script at each level says: if the Is_Running variable equals the name of whatever level I'm running, erase the project properties. This ensures that the project properties are only erased once the current level has finished executing, and not in the middle of a test (when other suites are being used).
For example: if I start my testing at the suite level and choose to run Suite3, the setup script writes "Suite3" into the Is_Running variable. When Suite3 engages Suite2 to run the needed test cases, Suite2's setup script sees that Is_Running is not null, so it does NOT write its own name into the container. As a result, Suite2's teardown script does not erase the project properties, since the name does not match. Once Suite3 has completed all its test steps, its teardown script sees that Is_Running holds "Suite3", so it deletes the project properties.
This approach allows me to run the project at any level and have the project properties deleted only after the current level has finished running. I needed to know Groovy well enough to do all the work mentioned above, but the approach is what I was looking for in this question. If you know of a less complicated way, please leave me a note!
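A minimal Groovy sketch of the Is_Running guard described above, shown at the test-suite level (the project and test-case levels follow the same pattern). SharedData1 and SharedData2 are hypothetical stand-ins for the properties that the Run TestCase steps populate.

```groovy
// --- TestSuite Setup Script ---
def project = testSuite.project
if (!project.getPropertyValue('Is_Running')) {
    // Nobody owns this run yet, so the current suite claims it.
    project.setPropertyValue('Is_Running', testSuite.name)
}

// --- TestSuite TearDown Script ---
def project = testSuite.project
if (project.getPropertyValue('Is_Running') == testSuite.name) {
    // Only the owning suite clears the shared data once it has finished.
    ['SharedData1', 'SharedData2', 'Is_Running'].each { name ->
        project.removeProperty(name)
    }
}
```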

Resources