py.test: logging the number of failures

Can I catch the failures found by py.test? I would like to build a log where I record the number of failures and also the OS tested.

You can log a machine-readable result file for this purpose using:
py.test --resultlog=path
This option is documented in the pytest documentation. The resulting file is what many other tools use to inspect the output of a py.test run and, for example, compare results between different runs.

Apart from --resultlog, which @srowland showed, you can also use --junitxml to write a JUnit XML file, or write your own plugin to log in a custom format.
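If neither built-in option fits, the custom-plugin route can be as small as a single hook in conftest.py. A minimal sketch, assuming you want to append one line per run with the OS and failure count to a failures.log file (the file name and log format are illustrative, not part of the pytest API):

# conftest.py - minimal sketch of a custom logging plugin
import platform

def pytest_terminal_summary(terminalreporter, exitstatus):
    # terminalreporter.stats maps outcome names ('failed', 'passed', ...) to lists of reports
    failed = len(terminalreporter.stats.get('failed', []))
    with open('failures.log', 'a') as log:
        log.write(f'{platform.platform()}: {failed} failures\n')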

Related

Why is stdout empty when running "python --version" from Groovy?

I am currently working on a pre-flight check script for the Apache PLC4X project, in which I check for the existence of required third-party tools and their versions.
If I run "python --version" on the command line, I get a nice response.
However if I run it in Groovy:
print "Detecting Python version: "
def output = ("python --version").execute().text
I just get an empty string.
None of the other tools show this behavior; all the others have their console output in "output".
How can I do the check I want to do? What am I doing wrong?
Don't assume everything you see on the terminal comes via standard output.
Informational messages are frequently sent to standard error instead, to avoid having them get caught in any processing pipelines (which was why the two channels were created way back in the early UNIX days).
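That is exactly what happens here: Python 2 prints its --version banner to standard error, and Groovy's .text reads only standard output. A minimal sketch that captures both streams (variable names are illustrative):

def proc = "python --version".execute()
def out = new StringBuilder()
def err = new StringBuilder()
// waitForProcessOutput consumes stdout and stderr into the two builders
proc.waitForProcessOutput(out, err)
println "stdout: $out"
println "stderr: $err"   // Python 2 prints the version here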

Jest snapshot is redundant

I am writing snapshot tests using Jest for a node.js and React app and have installed snapshot-tools extension in VS code.
Some of my tests are displaying this warning in the editor:
[snapshot-tools] The snapshot is redunant
(Presumably it is supposed to say redundant)
What does this warning mean? I am wondering how I can fix it.
I was having the same problem, so I took a look at the snapshot-tools code. It marks a snapshot section as redundant if it doesn't see a corresponding test in the test file that has a matching name and that calls "expect().toMatchSnapshot()" or something similar.
The problem is (as the "Limitations" section of the plugin's marketplace page says) that it does a static analysis of the test file to find the tests that use snapshots, and that static analysis cannot detect tests with dynamically generated names, or tests that don't directly call "expect().toMatchSnapshot()" in the test body.
For example, I was getting false-positive "redundant" warnings because I had some tests that called "expect().toMatchSnapshot()" in their "afterEach()" function rather than directly in the test body.
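For illustration, a pattern like this (a hypothetical sketch, assuming a React-style render helper and Header component) is invisible to the static analysis, because the snapshot call sits outside the test body:

let rendered;

afterEach(() => {
  // the assertion lives outside any test body, so snapshot-tools
  // cannot link the snapshot to the test below
  expect(rendered).toMatchSnapshot();
});

test('renders the header', () => {
  rendered = render(<Header />);
});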
This could indicate that the snapshot is no longer linked to a valid test: have you changed your describe/it strings without updating the snapshots? Try running the tests with -- -u appended (e.g. npm test -- -u). If that doesn't work, have a look at your snapshot file and compare its titles to your test descriptions.

Puppet Development Kit test unit with multiple output targets

We recently introduced the PDK into our development chain and are now trying to make everybody happy with the test outputs it generates.
We need the output as a JUnit test report for our Jenkins jobs. That part we have solved.
And we still need the output on the console, because some of the developers find it very annoying to have to open the JUnit report file before they can see the failed tests.
pdk test unit --format=junit:report.xml
is how we configured the output for JUnit.
Unfortunately, as soon as you configure the JUnit report, no output is printed to the console/stdout anymore, even if you add another format like --format=text without a target file.
Is there a way to achieve both without running the PDK twice?
It doesn't appear to be in the docs, but this should work.
pdk test unit --format=junit:report.xml --format=text:stdout
See https://github.com/puppetlabs/pdk/blob/7b2950bc5fb2e88ead7321c82414459540949eb1/lib/pdk/cli/util/option_normalizer.rb#L10-L24
I've filed a ticket to ensure that gets promoted to the docs at https://puppet.com/docs/pdk/1.x/pdk_reference.html#pdk-test-unit-command
From the PDK documentation:
--format=<format>[:<target>]
Specifies the format of the output. Optionally, you can specify a target file for the given output format, such as --format=junit:report.xml. Multiple --format options can be specified, as long as they all have distinct output targets.
So I believe you can try the following:
pdk test unit --tests=testcase_name --format=junit:report.xml --format=text:log.txt
Hope it helps.

Using cucumber to run different tagged features sequentially

I'm attempting to run tagged features in the order that they are submitted.
example:
I have tests that I'd like to run in a specific order (@test1, @test2, @test3). After looking at the Cucumber documentation, it looks like I'm only able to run them in an and/or fashion, like:
cucumber features/*.feature -t @test1; cucumber features/*.feature -t @test2; cucumber features/*.feature -t @test3;
but this prevents me from having a single report which contains all of the results.
Is there anyway which I can run these tests in their respective order and have all of the results contained in the same report?
If you put the tests that have to run in a specific order together in a single feature file, Cucumber will run them in the order they are given. As this will be part of your normal test run, all the results will show up in the same report.
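A minimal sketch of such a feature file (tags, names, and steps are illustrative); scenarios within one file run top to bottom:

Feature: Ordered flow

  @test1
  Scenario: First part of the flow
    Given the first part has run

  @test2
  Scenario: Second part of the flow
    Given the second part has run

  @test3
  Scenario: Third part of the flow
    Given the third part has run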
But it might be worth looking into why your tests depend on each other and whether there is a way to remove this dependency, as it is generally bad practice to have one.

Is there a way to run a single cucumber feature file on autotest?

I'd like to run just a single cucumber feature file on autotest. I'd like the test to be run, report failures, then run again as soon as I save a change to my code base. Anyone know a way to do this?
--Jack
I found a solution myself:
Watchr - https://github.com/mynyml/watchr
It watches the files you specify (using pattern matching) and runs the specified tests whenever you save one of them.
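For reference, a minimal sketch of such a script (the paths are illustrative), saved as e.g. cucumber.watchr and started with watchr cucumber.watchr:

# cucumber.watchr - re-run the feature whenever it or the code changes
watch('features/my\.feature') { system('cucumber features/my.feature') }
watch('(app|lib)/.*\.rb')     { system('cucumber features/my.feature') }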
