Documentation for failures in nightwatch - node.js

I am running Nightwatch tests and want to separate out failures based on whether they are:
actual bugs
flaky tests
failures
success
I want to document this in the JUnit reporting system that Nightwatch has, as I use the JUnit report to create a report in Jenkins. Does anyone know of a system that will allow me to do that? Or how I can go about changing the framework to do that?

You could annotate those tests with different tag names:

module.exports = {
  '@tags': ['bugs'],
  'test covering a known bug': function (browser) { /* ... */ }
};

That way you can treat them separately in Jenkins, running different jobs for each group, for instance.
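Each Jenkins job can then filter on a tag when it invokes the runner; the tag names below simply mirror the categories above:

nightwatch --tag bugs
nightwatch --tag flaky
nightwatch --skiptags bugs,flaky

Each invocation produces its own JUnit output, so Jenkins can report on the groups separately.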

Related

Running tests in a directory in parallel in Vitest

I am using Vitest as the testing framework for my project.
I have a directory called canRunInParallel which contains multiple test files, like A.spec.ts, B.spec.ts, ..., Z.spec.ts. Since none of the tests in this directory can race with each other, I want to configure Vitest to run them all concurrently to improve my testing time.
Can anyone help me figure out how to achieve this (most probably by modifying the configuration of the Vitest runner)?
This functionality is not yet supported by Vitest.
You can only run the tests in a test suite (test file) concurrently using Vitest.
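Within a single file, a suite can opt in to concurrency with describe.concurrent. A minimal sketch (the suite and test names are made up):

import { describe, expect, it } from 'vitest'

// Every test inside this suite runs concurrently with its siblings,
// but still within this one file.
describe.concurrent('canRunInParallel behaviours', () => {
  it('test A', async () => {
    expect(1 + 1).toBe(2)
  })
  it('test B', async () => {
    expect([1, 2, 3]).toHaveLength(3)
  })
})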

Grouping tests in Rust/Cargo

I really love cargo and how easy it is to write unit tests.
However, it seems like its testing functionality is fairly basic. What I'd like to be able to do is have named groups of tests somehow. What I am trying to accomplish is to have a default set of tests that execute when you run the basic cargo test. However, some of my tests take much longer to run, so I'd like to be able to move these to another group of extended tests that I can run with some command like cargo test --extended, and also have the ability to run all the tests at once easily. I also have a third group of tests that I have currently implemented as ignored tests so I can run them separately.
Even though all my tests are effectively unit tests, I tried to accomplish this by creating a tests directory as you would do with integration tests. However, it seems that the basic cargo test command wants to run all these tests, i.e. the normal tests that are part of my crate as well as the extended tests in the tests crate.
Does anyone know how to accomplish this or whether there is some crate that provides this functionality?
You could use a combination of feature flags and the #[ignore] attribute, as mentioned here: https://www.reddit.com/r/rust/comments/3i1nki/how_to_skip_expensive_tests_with_cargo_test/
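A minimal sketch of that combination (the "extended" feature name is an assumption and would need to be declared under [features] in Cargo.toml):

#[cfg(test)]
mod tests {
    // Runs under a plain `cargo test`.
    #[test]
    fn quick_check() {
        assert_eq!(2 + 2, 4);
    }

    // Skipped by default; run with `cargo test -- --ignored`,
    // or run everything at once with `cargo test -- --include-ignored`.
    #[test]
    #[ignore]
    fn slow_check() {
        assert_eq!((0..1_000_000u64).sum::<u64>(), 499_999_500_000);
    }

    // Compiled only when the (hypothetical) feature is enabled:
    // `cargo test --features extended`.
    #[cfg(feature = "extended")]
    #[test]
    fn extended_check() {
        assert_eq!(1 + 1, 2);
    }
}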

Code Coverage Report for AWS Lambda Integration test using Python

I have written integration tests for lambdas that hit the dev site (in AWS). The tests are working fine. Tests are written in a separate project that uses a request object to hit the endpoint to validate the result.
Currently, I am running all the tests from my local machine. The Lambdas are deployed using a separate Jenkins job.
However, I need to generate a code coverage report for these tests. I am not sure how I can do that, as I am directly hitting the dev URL from my local machine. I am using Python 3.8.
All the lambdas have lambda layers which provide a database connection and some other common business logic.
Thanks in advance.
Code coverage is probably not the right metric for integration tests. As far as I can tell you use integration tests to test your requirements/use cases/user stories.
Imagine you have an application with a shopping cart feature. A user has 10 items in that shopping cart and now deletes one of those items. Your integration test would make sure that after this operation only (the correct) 9 items are left in the shopping cart.
For this kind of testing it is not relevant which code, or how much of it, was run. It is more like a black-box test: you want to know that for a given "action" the correct "state" is created.
Code coverage is usually something you use with unit tests. For integration tests I think you want to know how many of your requirements/use cases/user stories are covered.
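As a concrete illustration of that black-box style, here is a sketch in the spirit of the tests described above (the base URL, endpoints, and payloads are all made up):

import requests

BASE_URL = "https://dev.example.com/api"  # hypothetical dev endpoint

def test_deleting_an_item_leaves_the_correct_nine():
    # Arrange: seed a cart with 10 items (hypothetical setup endpoint).
    resp = requests.post(f"{BASE_URL}/carts/test-cart/seed", json={"items": 10})
    resp.raise_for_status()

    # Act: delete one of the items.
    requests.delete(f"{BASE_URL}/carts/test-cart/items/1").raise_for_status()

    # Assert: only the correct 9 items remain. No knowledge of the
    # Lambda's internals is needed, only the externally visible state.
    cart = requests.get(f"{BASE_URL}/carts/test-cart").json()
    assert len(cart["items"]) == 9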

How to run the same tests with different configuration in jest?

I have a test suite, and because it contains some expensive tests, I disable some of them for our CI. However, once a day I'd like to run the whole test suite.
The issue is that both runs work against the same set of test files, which causes snapshot failures: when running the whole test suite, some snapshots are missing. If I generate them, then the CI fails because it complains about snapshots being removed (i.e. the ones from the whole test suite that are not being checked on the CI).
What would be the proper way to handle this with jest?
Thanks!
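One way to express such a split is to point each run at its own config via Jest's --config flag. A minimal sketch (the file name and the "expensive" directory are assumptions):

// jest.config.ci.js: the CI run never loads the expensive specs, so
// Jest never inspects their snapshots and cannot complain about them
// being removed or missing.
module.exports = {
  testPathIgnorePatterns: ['/node_modules/', '<rootDir>/tests/expensive/'],
};

The CI would then run jest --config jest.config.ci.js, while the daily job runs plain jest with the default config covering everything.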

How to get cucumber to run the same steps against Selenium and a headless browser

I've been doing some work testing web applications with Cucumber and I currently have a number of steps set up to run with Culerity. This works well, but there are times when it would be nice to run the exact same stories in Selenium.
I see two possible approaches that may work:
Writing each step so that it performs the step appropriately depending on the value of some global variable.
Having separate step definition files and somehow selectively including the correct one.
What is the preferred method for accomplishing this?
Third option: See if Culerity implements the Webrat API. Its README file says: "Culerity lets you (...) reuse existing Webrat-Style step definitions". Couldn't find much more than that though. Ideally, you would be able to switch backends with a config option or command-line argument without having to touch the step definitions.
Of course this would only work if you're not testing Javascript, which Culerity supports, but Webrat doesn't.
Hi, have you looked at Capybara? It will allow you to use a variety of web drivers, and will let you test JavaScript-related features as well.
I think this is the one you are looking for. http://robots.thoughtbot.com/post/1658763359/thoughtbot-and-the-holy-grail
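With Capybara the backend becomes a configuration detail, so the same step definitions can run against either driver. A minimal sketch (the DRIVER environment variable name is an assumption):

# features/support/env.rb
require 'capybara/cucumber'

# Pick the driver per run: `DRIVER=selenium cucumber` for a real
# browser, plain `cucumber` for the fast headless default.
Capybara.default_driver =
  ENV['DRIVER'] == 'selenium' ? :selenium : :rack_test

# Step definitions stay driver-agnostic:
When(/^I sign in$/) do
  visit '/login'
  fill_in 'Email', with: 'user@example.com'
  click_button 'Sign in'
end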
You can schedule the tests to run in Jenkins; Jenkins is open source and can run on a local machine. There is a Cucumber plugin for Jenkins, so you can add reporting for your project on top of the continuous test runs.
