We have recently introduced the PDK into our development chain and are now trying to make everybody happy with the test output it generates.
We need JUnit test report output for our Jenkins jobs; that part we have solved.
And we still need the output on the console, because some of the developers find it very annoying to have to open the JUnit report file before they can see failed tests.
pdk test unit --format=junit:report.xml
is how we configured the output for JUnit.
Unfortunately, as soon as you configure the JUnit report, no output is printed to the console/stdout anymore, even if you add another format such as --format=text without a target file.
Is there a way to achieve both without running the PDK twice?
It doesn't appear to be in the docs, but this should work.
pdk test unit --format=junit:report.xml --format=text:stdout
See https://github.com/puppetlabs/pdk/blob/7b2950bc5fb2e88ead7321c82414459540949eb1/lib/pdk/cli/util/option_normalizer.rb#L10-L24
I've filed a ticket to ensure that gets promoted to the docs at https://puppet.com/docs/pdk/1.x/pdk_reference.html#pdk-test-unit-command
From the PDK documentation:
--format=<format>[:<target>]

Specifies the format of the output. Optionally, you can specify a target file for the given output format, such as --format=junit:report.xml. Multiple --format options can be specified as long as they all have distinct output targets.
So I believe you can try the following:
pdk test unit --tests=testcase_name --format=junit:report.xml --format=text:log.txt
Hope it helps.
I have recently upgraded from version 0.9.6 to 1.0.0 and noticed that the generated karate-summary.html file no longer displays all the tested feature files when using the JUnit 5 runner, unlike in 0.9.6.
What it displays instead is only the last tested feature file.
The screenshots below are from the provided SampleTest.java sample code (other tests excluded for simplicity).
package karate;

import com.intuit.karate.junit5.Karate;

class SampleTest {

    @Karate.Test
    Karate testSample() {
        return Karate.run("sample").relativeTo(getClass());
    }

    @Karate.Test
    Karate testTags() {
        return Karate.run("tags").relativeTo(getClass());
    }
}
[Screenshot: karate-summary.html from version 0.9.6]
[Screenshot: karate-summary.html from version 1.0.0]
However, when running the test below in 1.0.0, all the features are displayed in the summary correctly.
@Karate.Test
Karate testAll() {
    return Karate.run().relativeTo(getClass());
}
Would anyone be kind enough to confirm whether they are getting a similar result? It would be very much appreciated.
What it displays instead is only the last tested feature file.
This is because each time you run a JUnit method, the reports directory is backed up by default. Look for other directories called target/karate-reports-<timestamp> and you may find your reports there. So what may be happening is that you have multiple JUnit tests all running, which is why you see this behavior. You may be able to override it by calling the method .backupReportDir(false) on the builder. But it may still not work, because the JUnit runner has changed a little: it is designed to run one method at a time when you are in local / dev mode.
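For instance, a sketch of that builder call (untested; assumes the JUnit 5 builder exposes the backupReportDir() method named above):

@Karate.Test
Karate testSample() {
    // assumption: backupReportDir(false) keeps writing to target/karate-reports
    // instead of backing the directory up with a timestamp on every run
    return Karate.run("sample").relativeTo(getClass()).backupReportDir(false);
}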
So the JUnit runner is just a convenience. You should use the Runner class / builder for CI execution, and when you want to run multiple tests and see them in one report: https://stackoverflow.com/a/65578167/143475
Here is an example: ExamplesTest.java
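In case that link moves, here is a minimal sketch of the Runner / builder pattern (class name, classpath package, and thread count are illustrative):

package karate;

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class ExamplesTest {

    @Test
    void testParallel() {
        // run every feature under the karate classpath package in one execution,
        // so all results land in a single consolidated report
        Results results = Runner.path("classpath:karate").parallel(5);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}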
But in case there is a bug in the JUnit runner (which is quite possible), please follow the process and help the project developers replicate the issue, so it can be fixed and released as soon as possible.
I am writing snapshot tests using Jest for a Node.js and React app and have installed the snapshot-tools extension in VS Code.
Some of my tests are displaying this warning in the editor:
[snapshot-tools] The snapshot is redunant
(Presumably it is supposed to say redundant)
What does this warning mean, and how can I fix it?
I was having the same problem, so I took a look at the snapshot-tools code. It marks a snapshot section as redundant if it doesn't see a corresponding test in the test file that has a matching name and that calls expect().toMatchSnapshot() or something similar.
The problem is (as it says on the "Limitations" section of the plugin's marketplace page), it does a static analysis of the test file to find those tests that use snapshots. And the static analysis cannot detect tests that have dynamically generated names, or that don't directly call "expect().toMatchSnapshot()" in the test's body.
For example, I was getting false positive "redundant" warnings, because I had some tests that were doing "expect().toMatchSnapshot()" in their "afterEach()" function, rather than directly in the test body.
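A simplified Jest sketch of that false-positive pattern (the test name and assertion are illustrative):

let result;

afterEach(() => {
  // the extension's static analysis cannot link this snapshot call
  // back to the named test below, so it flags the snapshot as redundant
  expect(result).toMatchSnapshot();
});

test('formats the epoch as an ISO string', () => {
  result = new Date(0).toISOString();
});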
This could indicate that the snapshot is no longer linked to a valid test. Have you changed your describe/it strings without updating the snapshots? Try running the tests with -- -u appended (e.g. npm test -- -u). If that doesn't work, have a look at your snapshots file and compare the titles to your test descriptions.
Can I catch the failures found by py.test? I would like to build a log where I record the number of failures and also the OS tested.
You can log a machine readable result file for this purpose using:
py.test --resultlog=path
Documentation is available here. This file is what many other tools use to inspect the output of a py.test run and, for example, compare results between different runs.
Apart from --resultlog, which @srowland showed, you can also use --junitxml to write a JUnit XML file, or write your own plugin to log in a custom format.
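For example (the file name is illustrative):

py.test --junitxml=results.xml

The <testsuite> element in the generated XML carries tests, failures, and errors attributes that you can extract for your log.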
I'm attempting to run tagged features in the order that they are submitted.
example:
I have tests that I'd like to run in a specific order (@test1, @test2, @test3). After looking at the Cucumber documentation, it looks like I'm only able to run them with and/or tag options, like
cucumber features/*.feature -t @test1; cucumber features/*.feature -t @test2; cucumber features/*.feature -t @test3;
but this prevents me from having a single report which contains all of the results.
Is there any way I can run these tests in their respective order and have all of the results contained in the same report?
If you put the tests that have to run in a specific order together in one feature file, Cucumber will run them in the order they are given. As this happens within your normal test run, it should all show up in the same report.
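As a sketch, a single feature file like this (names and steps are illustrative) runs its scenarios top to bottom and reports them together:

Feature: Checkout flow, in order

  @test1
  Scenario: Create the order
    Given an empty cart

  @test2
  Scenario: Pay for the order
    Given an order awaiting payment

  @test3
  Scenario: Confirm the order
    Given a paid order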
But it might be worth looking into why your tests depend on each other and whether there is a way to remove this dependency, as such coupling is generally bad practice.
I'd like to run just a single cucumber feature file on autotest. I'd like the test to be run, report failures, then run again as soon as I save a change to my code base. Anyone know a way to do this?
--Jack
I found a solution myself:
Watchr - https://github.com/mynyml/watchr
It watches the files you specify, using pattern matching, and runs the specified tests whenever you save one of them.
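As a sketch, a watchr script for a single feature could look like this (file names and patterns are illustrative):

# cucumber.watchr -- start with: watchr cucumber.watchr
# re-run the saved feature file whenever it changes
watch('features/(.*)\.feature') { |m| system("cucumber #{m[0]}") }
# re-run a specific feature whenever supporting Ruby code changes
watch('(lib|features/step_definitions)/.*\.rb') { system('cucumber features/my.feature') }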