Are there any extensions to HUnit or QuickCheck that allow a continuous integration system like Bamboo to do detailed reporting of test results?
So far, my best idea is to simply trigger the tests as part of a build script, and rely on the tests to fail with a non-zero exit code. This is effective for getting attention when a test fails, but confuses build failures with test failures and requires wading through console output to determine the problem's source.
If this is the best option with current tools, my thought is to write a reporting module for HUnit that would produce output in the JUnit XML format, then point the CI tool at it as though it were reporting on a Java project. This seems somewhat hackish, though, so I'd appreciate your thoughts both on existing options and directions for new development.
The test-framework package provides tools for integrating tests using different testing paradigms, including HUnit and QuickCheck, and its console test runner can be passed a flag that makes it produce JUnit-compatible XML. We use it with Jenkins for continuous integration.
Invocation example:
$ ./test --jxml=test-results.xml
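For reference, a minimal test-framework driver combining HUnit and QuickCheck (via the test-framework-hunit and test-framework-quickcheck2 providers) might look like the sketch below; the module, group and test names are illustrative, and building it as a test executable called test gives the binary invoked above.

module Main where

import Test.Framework (defaultMain, testGroup)
import Test.Framework.Providers.HUnit (testCase)
import Test.Framework.Providers.QuickCheck2 (testProperty)
import Test.HUnit (assertEqual)

-- A QuickCheck property: reversing a list twice gives back the original.
prop_reverseInvolutive :: [Int] -> Bool
prop_reverseInvolutive xs = reverse (reverse xs) == xs

-- A plain HUnit assertion.
case_addition :: IO ()
case_addition = assertEqual "2 + 2" 4 (2 + 2 :: Int)

main :: IO ()
main = defaultMain
  [ testGroup "QuickCheck properties"
      [ testProperty "reverse . reverse == id" prop_reverseInvolutive ]
  , testGroup "HUnit cases"
      [ testCase "addition" case_addition ]
  ]

The --jxml flag then writes a JUnit-style test-results.xml alongside the normal console output, which Jenkins or Bamboo can pick up as if it were reporting on a Java project.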
I've just released a package which generates test suites from modules containing QuickCheck properties: http://hackage.haskell.org/package/tasty-integrate
This goes one step beyond test-framework/tasty at the moment, in that it discovers and aggregates the properties directly from the filesystem instead of relying on per-file bookkeeping. I hope this helps your CI process.
Related
I really love cargo and how easy it is to write unit tests.
However, it seems like its testing functionality is fairly basic. What I'd like to be able to do is have named groups of tests somehow. What I am trying to accomplish is to have a default set of tests that execute when you run the basic cargo test. However, some of my tests take much longer to run, so I'd like to move these to another group of extended tests that I can run with some command like cargo test --extended, and also to be able to run all the tests at once easily. I also have a third group of tests that I have currently implemented as ignored tests so I can run them separately.
Even though all my tests are effectively unit tests, I tried to accomplish this by creating a tests directory as you would for integration tests. However, it seems that the basic cargo test command wants to run all of these tests, i.e. the normal tests that are part of my crate as well as the extended tests in the tests crate.
Does anyone know how to accomplish this or whether there is some crate that provides this functionality?
You could use a combination of feature flags and the #[ignore] attribute, as mentioned here: https://www.reddit.com/r/rust/comments/3i1nki/how_to_skip_expensive_tests_with_cargo_test/
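A minimal sketch of that approach, assuming a Cargo feature named extended (the name is illustrative; declare it in Cargo.toml under [features] as extended = []):

#[cfg(test)]
mod tests {
    // Always runs with a plain `cargo test`.
    #[test]
    fn fast_default_test() {
        assert_eq!(2 + 2, 4);
    }

    // Marked #[ignore] unless the "extended" feature is enabled,
    // so it is skipped by default but included when the feature is on.
    #[test]
    #[cfg_attr(not(feature = "extended"), ignore)]
    fn slow_extended_test() {
        // Stand-in for a long-running check.
        assert!((0..10_000_000u64).sum::<u64>() > 0);
    }
}

With this in place, cargo test runs only the default group, cargo test --features extended runs everything, and cargo test -- --ignored runs just the ignored group.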
I have created a few Coded UI tests and linked them to the test cases; they now appear as automated, and you can see the DLL they link to in the test case details.
Now that I want to run the tests, MTM refuses to even start the test unless a build is defined.
However: I want to run the tests against a statically installed application in the lab environment. This is an application that I install manually, and I receive it already compiled, so there is no need to build it.
So how can I take the build server out of the loop? I don't need the application built or deployed, I'm already doing that.
All I want is for the tests to run in the specified lab environment against an application that is already installed.
It's asking you to define the build of the test solution, assuming that it's different from your application under test. The test assembly will be deployed to the test environment after you specify it in MTM. This article may help you with the specifics.
It is asking you to create a build for your Coded UI test solution. It requires that the tests be built so that it has something to execute when you run them. Assuming that your tests were recorded against your statically deployed application, they will test that same application.
In the current project we are using TeamCity as a CI platform and we have a bunch of projects and builds up and running.
The next step in our process is to track some statistics about our tests, so we are looking for a tool that could help us gather these numbers and make them visible for each build.
In the first place we want to keep track of the following numbers:
Number of unit tests
Number of specflow tests tagged as #ui
Number of specflow tests tagged as #controller
And also time spent running each of the test categories above.
Some details about the current scenario:
.net projects
NUnit for the unit tests
SpecFlow for functional tests categorized as #controller and #ui
rake for the build scripts
TeamCity as the CI server.
I'm looking for tools and/or practices suggestions to help us to track those numbers.
The issue here is your requirement for tags. SpecFlow/NUnit/TeamCity/dotCover integration is already developed enough to do everything that you require, except for the tagging.
I'm wondering how much of a mix you expect to have between UI and Controller tests. Assuming that you are correctly separating your domains (see Dan North - Whose domain is it anyway), you should never get scenarios with both of these tags in the same feature. So I assume it's just a case of separating the UI features from the functionality (controller) features.
I've recently started separating my features in just this way, by adding namespace folders in my test assembly, mirroring how you would separate Models, ViewModels and Views (etc.), and TeamCity is definitely clever enough to report coverage at each stage of the drill-down through assemblies and namespaces.
Is it possible to automatically run a Jasmine test suite as part of a CruiseControl.NET build?
And if so, how?
My server code is C# and my CI server was already running lots of unit tests, so I added a unit test that uses WatiN to launch a browser, run the Jasmine tests and check the results. It took a morning to get all the pieces playing happily together.
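A rough sketch of that kind of test, assuming NUnit and WatiN; the runner URL and the summary text it checks for are placeholders that depend on your Jasmine version and reporter markup:

using NUnit.Framework;
using WatiN.Core;

[TestFixture]
public class JasmineSuiteTests
{
    [Test]
    [RequiresSTA] // WatiN's IE automation needs a single-threaded apartment
    public void JasmineSpecsShouldAllPass()
    {
        // SpecRunner.html and its URL stand in for your own Jasmine runner page.
        using (var browser = new IE("http://localhost:8080/SpecRunner.html"))
        {
            // Waits for the page load; a slow suite may also need an explicit wait or poll here.
            browser.WaitForComplete();

            // Jasmine's HTML reporter summarises the run with a line such as
            // "12 specs, 0 failures", so checking for "0 failures" is a crude
            // but serviceable pass/fail signal.
            Assert.IsTrue(browser.ContainsText("0 failures"),
                "One or more Jasmine specs failed; open the runner page for details.");
        }
    }
}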
An alternative might be to investigate NJasmine -- I saw it was available on NuGet but didn't pursue it myself, partly due to lack of documentation.
Also, if you're using ReSharper, you might like to look at their integration with QUnit: http://blogs.jetbrains.com/dotnet/2011/03/resharper-6-introduces-support-for-javascript-unit-testing/ (there's every chance they'll integrate it with Jasmine too). Although this is aimed at running JavaScript unit tests within Visual Studio, you might find it offers you a "hook" to run them from your CI server too.
I've been doing some work testing web applications with Cucumber and I currently have a number of steps set up to run with Culerity. This works well, but there are times when it would be nice to run the exact same stories in Selenium.
I see two possible approaches that may work:
Writing each step so that it performs the appropriate action depending on the value of some global variable.
Having separate step definition files and somehow selectively including the correct one.
What is the preferred method for accomplishing this?
Third option: See if Culerity implements the Webrat API. Its README file says: "Culerity lets you (...) reuse existing Webrat-Style step definitions". Couldn't find much more than that though. Ideally, you would be able to switch backends with a config option or command-line argument without having to touch the step definitions.
Of course this would only work if you're not testing JavaScript, which Culerity supports but Webrat doesn't.
Hi, have you looked at Capybara? It will let you use a variety of web drivers, and it will allow you to test JavaScript-related features as well.
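For example, a minimal sketch of switching backends with an environment variable (the DRIVER name is illustrative), assuming Capybara's Cucumber integration and placed in features/support/env.rb:

require 'capybara/cucumber'

# Pick the backend per run: a real browser via Selenium when DRIVER=selenium,
# otherwise the fast headless rack_test driver.
Capybara.default_driver =
  case ENV['DRIVER']
  when 'selenium' then :selenium
  else :rack_test
  end

A plain cucumber run then uses the headless driver, while DRIVER=selenium cucumber drives a real browser, without touching the step definitions.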
I think this is the one you are looking for. http://robots.thoughtbot.com/post/1658763359/thoughtbot-and-the-holy-grail
You can schedule the tests to run in Jenkins, which is open source and can run on a local machine. There is also a Cucumber plugin for Jenkins, so you can get reporting for your project on top of the continuous test runs.