How to run specifications based on the order of the tags inputted - getgauge

Example:
- Consider I have two specs (Spec 1 and Spec 2).
- In both specs I have a few scenarios, and each scenario has a tag representing the stage it has to run in. Say Spec 1 has scenarios tagged "STAGE1" and "STAGE2", and the same is the case in Spec 2.
Now, I want to run all scenarios across all specifications (spec 1 and spec 2) in a particular order.
The order I want is
a. Run all the "STAGE1" scenarios first and then
b. Run all the "STAGE2" scenarios.
Further Constraints:
I do have a requirement to keep these scenarios in their separate specifications because
- I may choose to run a specification on its own without bothering about the stage-level sorting
- I also want "STAGE1" to set some data in the store, which can be consumed by the steps in the next stage, say "STAGE2".
So, in effect, my requirement is to have a command something like
gauge run specs -tags="STAGE1 | STAGE2"
but expect Gauge to execute all the "STAGE1" scenarios first and then execute all the "STAGE2" scenarios next.

Gauge does not consider tags when deciding the order of specs. Furthermore, in your example you have listed a tag expression, which is hard to derive an order from; e.g. if you used !STAGE1, all it tells Gauge is to exclude that tag, so there is no order to infer.
Instead, if you pass in a list of spec files or directories, Gauge will try to preserve that order of execution.
By default, Gauge does not guarantee any order; you'll have to use the --sort flag with gauge run. Ref: https://manpage.gauge.org/gauge_run.html
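If the two stages really must run in a fixed order, one workaround (not a built-in ordering feature, just two sequential invocations; tag names taken from the question) is to filter on one tag per run:
gauge run --tags "STAGE1" specs
gauge run --tags "STAGE2" specs
Bear in mind that Gauge's in-memory data stores do not survive across separate gauge processes, so anything "STAGE1" stores for "STAGE2" would then need to be persisted outside the store, e.g. in a file.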

Related

TestCafe: How to share the runner in the global scope?

I'm using TestCafe (TC) and writing a test which implements multiple tests in a single TC test. This is for an investment reporting app.
Clients are offered a view of their portfolios, with assets grouped into various categories.
The app offers a "current month" view, with the ability to switch to previous months' data -- called AsOfDates. Within each monthly view, the data is organized into various periods, e.g. CYTD, FYTD, 1Year, 3Years, etc., each of which offers a view of the portfolio over the respective time period.
There are numerous graphs throughout the app, with different display specs for the graph type (line, bar, ...): for example how many x-axis points there are for each period and how they are labelled.
I have a working TC regression test that: loops through multiple clients; loops through the AsOfDates; loops through the available Periods; and examines the various graphs to ensure that the x-axis data is presented according to spec.
In the event of one or more failures I simply collect information documenting the failure and continue to the end of the test.
When the test completes, I create a success or failure report which we can use in our CI/CD pipeline. When done, I want to quietly close the TC task so that it doesn't also generate a test report.
To do that I've been told I need to share the TC runner in the global scope and use the global.runner.stop() method.
I'm currently using the TC/CLI approach:
testcafe chrome ... src/pages/regression/graphDataPoints.js
How can I grab the runner to do this or do I have to write my own script using testcafe.createRunner()?
There are two ways:
Create your own script using testcafe.createRunner() and pass the CLI options to the runner (see the sketch after this list).
Fork the reporter that you use, modify it the way you want, and use it in your tests. In the reporter you can add a condition that controls when it shows messages.
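A minimal sketch of the first option, assuming the spec path and browser from the CLI command above (the reporting at the end is a hypothetical placeholder for your own CI/CD report):

const createTestCafe = require('testcafe');

(async () => {
    const testcafe = await createTestCafe('localhost');
    const runner = testcafe.createRunner();
    global.runner = runner; // share the runner in the global scope so tests can call global.runner.stop()

    const failedCount = await runner
        .src('src/pages/regression/graphDataPoints.js')
        .browsers('chrome')
        .run();

    // Emit your own success/failure report for the CI/CD pipeline here
    console.log(`Failed tests: ${failedCount}`);

    await testcafe.close(); // shut the TestCafe server down when done
})();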

Handling multithreading in XML files for running test cases in parallel

I'm new to multithreading; here is my problem statement.
I have an XML file (TestCase.xml) where each tag represents a test case, something like below:
TestCase.xml
In turn, each main tag has a child tag that links to another XML file (TestStep.xml) which dictates the steps of the test case; it's TS in the above example.
TestStep.xml
Execution always starts from TestCase.xml based on the id provided. With this overview: I have 100 test cases in my suite and I want to execute them in parallel, i.e. run at least 5-6 test cases at the same time. I'm not able to use external plug-ins like TestNG, JUnit, BDD frameworks or Maven Surefire. After a lot of R&D we have ended up with multithreading, and I would need assistance on how to implement it.
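A minimal sketch of the plain java.util.concurrent approach this describes; ParallelSuiteRunner, executeTestCase and the test-case ids are hypothetical stand-ins for the existing TestCase.xml/TestStep.xml engine:

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelSuiteRunner {

    public static void main(String[] args) throws InterruptedException {
        // Hypothetical: in practice these ids would be read from TestCase.xml
        List<String> testCaseIds = List.of("TC_001", "TC_002", "TC_003");

        // Run at most 6 test cases at the same time
        ExecutorService pool = Executors.newFixedThreadPool(6);
        for (String id : testCaseIds) {
            pool.submit(() -> executeTestCase(id)); // each test case runs on a worker thread
        }
        pool.shutdown();                          // stop accepting new work
        pool.awaitTermination(2, TimeUnit.HOURS); // wait for all submitted test cases to finish
    }

    // Hypothetical stand-in for the engine that reads TestStep.xml and executes the steps
    private static void executeTestCase(String id) {
        System.out.println(Thread.currentThread().getName() + " executing " + id);
    }
}

Each test case then has to be safe to run concurrently, i.e. it must not share mutable state (parsers, drivers, report writers) with the other worker threads.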

Gauge test run: skip subsequent scenarios if one scenario fails in a spec file

With gauge run specs, it runs all scenarios even if one fails - that works in most cases; however, I need a spec's execution to stop if any scenario in it fails.
For example, a spec has the following scenarios
A
B
C
If A fails, it should not execute B and C, and should mark the spec as failed.
Gauge encourages scenarios to be independent of each other. If scenario A fails, it should not break the execution of scenarios B and C. Read the Gauge FAQ: why-we-cannot-skip-all-tests-dynamically-during-a-gauge-run-if-there-is-a-test-failure

How to create situational or job specific test program flows?

I am wondering how folks create situational or test-program-specific program flows based on silicon feedback data. I see that there are job-based flows talked about in these videos:
http://origen-sdk.org/origen/videos/5-create-program-flow/
http://origen-sdk.org/origen/videos/6-create-program-tests/
How do folks use silicon test results to alter their flows without putting in brittle, condition-based test exclusions (e.g. next if test == 'mytest')? I guess I would say there are at least this many jobs or scenarios:
debug (aka first silicon)
samples (can be multiple)
characterization (can be multiple)
ttr (can be multiple)
quality assurance (all tests or perhaps a specific quality flow like HTOL or HTOL time-zero)
Is there a way to pass in silicon based test names to prevent having to alter flows all of the time?
thx
This is what the if/unless_enable controls are for: http://origen-sdk.org/origen/guides/program/flowapi/#Execution_Based_on_the_Runtime_Environment
This creates what are called user flags (I think) on V93K, which are designed to be set by the "user" before the flow is executed and not really change state during execution, as opposed to flow flags, which can be changed at runtime by tests during flow execution.
if/unless_job is a similar user flag that is intended to indicate the insertion in the test flow (e.g. wafer test 1, wafer test 2, etc) and is inspired by the column/attribute of the same name on Teradyne platforms. On V93K it generates a regular user flag called #JOB.
The three different types of controls you have then are:
if/unless_job - Used to model the test insertion name; normally this naming would be something you would want all of your test modules to agree on - you can't really have module-specific values for this. e.g. WT1, WT2, FTR, FTH, etc.
if/unless_enable - Option switches to be set at the start of the flow to enable/disable different parts of the flow. These can either be very specific to a particular test module, or common to the whole flow, or a mixture of both. e.g. SAMPLES, TTR, SRAM_CZ etc.
if/unless_flag - To respond to flags which can be changed at runtime, normally depending on the result of a particular test(s).
Finally, the enables are usually set by either the test floor controller software, or they can be set within the flow itself, depending on the platform and local conventions.
If you want to enable/disable these flags within the flow itself then Origen provides the following API:
enable :samples
if_enable :samples do
test :test1 # Will be hit due to the samples flag being set
end
disable :samples
if_enable :samples do
test :test1 # Now it won't be
end
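The job-based control uses the same block form; a small sketch along those lines (insertion and test names are hypothetical):
if_job :wt1 do
test :test2 # Only included when the #JOB flag is set to WT1
end
unless_job :wt1 do
test :test3 # Included at every insertion except WT1
end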

Configuring Jenkins to programmatically determine slave at build time from build parameter?

This is perhaps a slightly unusual Jenkins query, but we've got a single product that spans many projects. All of them are Linux based, but they span multiple architectures (MIPS, SPARC, ARMv6, ARMv7).
For a specific component, let's call it 'video-encoder', we'll therefore have 4 projects: mips-video-encoder, sparc-video-encoder, etc.
Each project is built on 4 separate slaves, each with a label that correlates to its architecture, i.e. the MIPS slave has the labels 'mips' and 'linux'.
My objectives are to:
Consolidate all of our separate jobs. This should make it easier for us to modify job properties, as well as to add more jobs, without the duplicated effort of creating so many architecture-specific jobs.
To allow us to build only one architecture at a time if we so wish. If the MIPS job fails, we'd like to build just for MIPS and not for others.
I have looked at the 'Multi-configuration' type job -- at the moment we are just using single-configuration jobs, which are simple. I am not sure if the Multi-configuration type allows us to build only individual architectures at once. I had a play with the configuration matrix, but wasn't sure if this could be changed / adapted to build for just a single platform. It looks like I may be able to use a Groovy statement to do this? Something like:
(label == "mips").implies(slave == "mips")
Maybe that could be simplified to something like slave == label where label is the former name of the job when it was in its single-configuration state and is now a build parameter?
I am thinking that we don't need a Multi-config job for this, if we can programmatically choose the slave.
I would greatly appreciate some advice on how we can consolidate the number of jobs we have and programmatically change the target slave based on the architecture of the project, which is a build parameter.
Many thanks in advance,
You can make a wrapper job with a system Groovy script; you need the Groovy plugin for this. Let's call the wrapper job video-encoder-wrapper. Here is how to configure it:
Define the parameter ARCH
Assign the label to the video-encoder job based on the ARCH parameter with an Execute system Groovy script build step:
import hudson.model.*
// Look up the downstream job and resolve the requested architecture from the build parameters
def encoder = Hudson.instance.getItem('video-encoder')
def arch = build.buildVariableResolver.resolve("ARCH")
// Restrict the downstream job to the node label matching that architecture
def label = Hudson.instance.getLabel(arch)
encoder.setAssignedLabel(label)
Invoke the downstream project video-encoder (non-blocking); don't forget to pass the ARCH parameter.
Check the Set Build Name option in the video-encoder job's configuration and set it to something like ${ENV,var="ARCH"} - #${BUILD_NUMBER}. It will let you track the build history easily.
Disable concurrent builds of the video-encoder-wrapper job. This prevents two different labels from being assigned to the video-encoder job at the same time.
Hope it helps
