I created a data-driven Coded UI test with multiple iterations. When I run the test and the first iteration is done, I want to run some clean-up code before every subsequent iteration, to undo all the changes made in the previous iteration.
Any idea how to do that?
Coded UI tests allow a method with the [TestCleanup] attribute. Such methods are run after each test. If you create a Coded UI test file you should find an example of a [TestCleanup] method in the comments of the CodedUITestN.cs file.
Methods with [ClassCleanup] and [AssemblyCleanup] attributes are also supported.
This SO question has more information: Test Method that runs once at the Start of the Test?
I'm trying to run a scenario with Cucumber that uses a before hook to load a dataset. My problem is that the scenario has a set of examples and the before hook is called before every example, meaning I get stopped at the start of the second example because of DatabaseUnitExceptions.
Is there some way to only call the before hook once for the whole scenario and not for each example?
Cheers
As stated by Gaël: each example under a scenario outline is a scenario, so Cucumber will run the @Before hook before each scenario / example (scenario and example are synonyms to Cucumber).
If you want to run a hook once before all scenarios, have a look at @BeforeAll. Please refer to the Cucumber documentation on hooks.
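For illustration, here is a minimal Cucumber-JVM sketch of the two hook types, assuming the io.cucumber.java annotations and a reasonably recent Cucumber version (the static @BeforeAll hook only exists in newer releases); the class and method names are made up for the example.

```java
import io.cucumber.java.Before;
import io.cucumber.java.BeforeAll;

public class Hooks {

    // Runs once before the whole run, not once per scenario/example.
    // @BeforeAll methods must be static.
    @BeforeAll
    public static void loadDataSet() {
        // load the dataset a single time here
    }

    // Runs before every scenario, i.e. before every example row
    // of a scenario outline.
    @Before
    public void perScenarioSetup() {
        // lightweight per-example setup, if still needed
    }
}
```

With this split the dataset is only loaded once, so the per-example hook no longer re-loads it.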
I want to have an option on the cucumber report to mute/hide scenarios with a given tag from the results and numbers.
We have a Bamboo build that runs our Karate repository of features and scenarios. At the end it produces nice Cucumber HTML reports. On "overview-features.html" I would like an option added to the top right, next to "Features", "Tags", "Steps" and "Failures", that says "Excluded Fails" or something like that. When clicked, it would provide the same information that overview-features.html does, except that any scenario tagged with a special tag, for example @bug=abc-12345, is removed from the report and excluded from the numbers.
Why I need this: we have some existing scenarios that fail. They fail due to defects in our own software that might not get fixed for six months to a year. We've tagged them with a specific tag, "@bug=abc-12345". I want them muted/excluded from the Cucumber report that's produced at the end of the Bamboo Karate build, so I can quickly look at the number of passed features/scenarios and see whether it's 100% or not. If it is, great, that build is good. If not, I need to look into it further, as we appear to have some regression. With these scenarios that are expected to fail (and will continue to fail until they're resolved) included, it is very tedious and time-consuming to go through all the individual feature-file reports, look at the failing scenarios, and work out why. I don't want them removed completely, because when they start to pass I need to know, so I can go back and remove the tag from the scenario.
Any ideas on how to accomplish this?
Karate 1.0 has overhauled the reporting system with the following key changes:
after the Runner completes you can massage the results and even re-try some tests
you can inject a custom HTML report renderer
This will require you to get into the details (some of this is not documented yet) and write some Java code. If that is not an option, you have to consider that what you are asking for is not supported by Karate.
If you are willing to go down that path, here are the links you need to get started.
a) Example of how to "post process" result-data before rendering a report: RetryTest.java, and also see https://stackoverflow.com/a/67971681/143475
b) The code responsible for "pluggable" reports, where in theory you can implement a new SuiteReports. In the Runner, there is a suiteReports() method you can call to provide your implementation; a rough sketch of the overall flow is shown below.
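As a minimal, hedged sketch of that flow, assuming the Karate 1.0 Runner API from the README (Runner.path(), outputCucumberJson(), parallel() and getFailCount()); the feature package path is made up, and the actual filtering of @bug-tagged scenarios is only indicated by a comment because it depends on the Results / SuiteReports details linked above:

```java
import com.intuit.karate.Results;
import com.intuit.karate.Runner;

class BugFilteredRunner {

    public static void main(String[] args) {
        // run the suite; outputCucumberJson(true) emits the Cucumber JSON
        // that the "overview-features.html" style report is built from
        Results results = Runner.path("classpath:features") // hypothetical package
                .outputCucumberJson(true)
                .parallel(5);

        // post-processing point: here you would walk the result data
        // (see RetryTest.java) or plug in a custom SuiteReports implementation
        // via suiteReports() to drop scenarios tagged @bug=... from the report
        // and from the pass/fail numbers

        System.out.println("failed scenarios: " + results.getFailCount());
    }
}
```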
Also note that there is an experimental "doc" keyword, by which you can inject custom HTML into a test-report: https://twitter.com/getkarate/status/1338892932691070976
Also see: https://twitter.com/KarateDSL/status/1427638609578967047
I have some tests I'd like to exclude from the spock-report. Is it possible to exclude specific classes or tests from the generated report?
I don't know of such a feature out of the box but you could write your own report template.
Just copy the default templates and add your filter code directly to the template.
Another way I could imagine is to run your tests twice and exclude the tests you want to hide with an @IgnoreIf annotation (http://mrhaki.blogspot.com/2014/06/spocklight-ignore-specifications-based.html?m=1). The annotation could make the decision based on an environment variable.
However, tests are important, and it is even more important to know what has NOT been tested. So you should report that certain tests were excluded in order to have a valid test report.
I have a coded ui test that is written in C#.
When the test case is opened in MTM, no test steps are observed.
I have associated the automated test, and this seems to be OK.
I tried adding some methods to my test method and commenting them with a summary, but this didn't help.
I need these steps, and I would like them tied back to the code. How is this done?
You must create an action recording for your test case/steps. A test method is created per step. You can then edit those test methods manually.
If you've hand-coded a Coded UI test then you can only associate the Coded UI test method with the overall test case. Manual test steps should indicate what the Coded UI test is doing, but there is no connection between the two.
See Generating a Coded UI Test from an Existing Action Recording
Test steps are added to test cases in MTM. Commonly they provide instructions for manual testing. Such a manual test can be recorded when executed through MTM, and that recording is referred to as an "action recording". The recorded test can be executed again via MTM, which avoids the tester having to perform the text entry and mouse-clicking needed. However, the action recording will not perform any validation of the expected results; that must be done manually.
The next facility is that a Coded UI test can be created from an action recording. The new Coded UI test does not include any validation of the expected results, but the facilities of Coded UI can be used to add assertions that make the test fully automated. Having created a Coded UI test, it can be linked back to the test case and will then be seen in the "associated automation" part of the test case. The linkage is created via the "Team Explorer" window in Visual Studio.
The order of events stated in the question suggests that the Coded UI test was created without using an "action recording", which is a perfectly valid approach. The Coded UI test was then linked to an MTM test case. MTM has no mechanism to decode the Coded UI test and create the test steps.
It would be possible to create a test case in MTM and specify its test steps but have an associated Coded UI test that does something different.
Can DalekJS call or use a previous test (like a login test) and continue once that test has completed? I would like to write my test files as singular tests so that individual people are able to edit only a small portion of them.
For example, I would like to test whether a menu item actually links to a page, but first call the test that checks whether a user can log in to the site, since the menu-item test requires the user to be logged in.
As DalekJS files are just "normal Node.js" files, you can basically do whatever you want ;)
I have some resources on how to keep your tests DRY & modular; check out this repository I made for a workshop: https://github.com/asciidisco/jsdays-workshop/tree/8-dry. To be more specific, these files in particular:
https://github.com/asciidisco/jsdays-workshop/blob/8-dry/test/dalek/form2.js
https://github.com/asciidisco/jsdays-workshop/blob/8-dry/test/dalek/functions/configuration.js
https://github.com/asciidisco/jsdays-workshop/blob/8-dry/test/dalek/functions/formHelper.js
https://github.com/asciidisco/jsdays-workshop/blob/8-dry/test/dalek/functions/selectors.js