Generate JUnit Results in Jenkins from Groovy Script - groovy

How can you get a Jenkins Groovy script to produce a JUnit XML results file? I'm doing this purely to generate JUnit results with a specific number of passed, failed, and skipped test cases. I need this so that I have a set of test data to test against for another application. That other app goes out to various Jenkins jobs and analyzes the JUnit results from each job's JSON output. I want to point my functional tests at this Jenkins job for testing. (I can't use my real continuous integration jobs because those wouldn't be deterministic.)
I've got a basic Groovy test case like the one below. It runs but doesn't produce JUnit output. I didn't expect it to, but I'm also not sure how to get it to generate any.
class BunchOfTests extends GroovyTestCase {
    void testOne() {}
    void testTwo() { fail() }
}
I also played around with writing code that prints the JUnit results XML, but it's getting lengthy and quite ugly. I've seen the threads on here about what the JUnit results XSD looks like, but I'm thinking there's got to be an easier route to generating some results without needing a pre-made results file. 10 results or so ought to be enough for what I need.

Generally, unit test cases in Groovy are written as below:
import groovy.util.GroovyTestCase

class SampleTest extends GroovyTestCase {
    void testSomething() {
        def val = true
        assertEquals(true, val)
    }
}
In case you want only JUnit reporting in Groovy, see:
How would I produce JUnit test report for groovy tests, suitable for consumption by Jenkins/Hudson?
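Alternatively, since Jenkins's JUnit publisher just parses an XML file in the JUnit result format, a plain Groovy script can write a deterministic results file directly. Here's a minimal sketch using groovy.xml.MarkupBuilder (the file name and test names are made up; adjust the pass/fail/skip counts to whatever your functional tests expect):
import groovy.xml.MarkupBuilder

// Write a deterministic JUnit-style results file: 7 passed, 2 failed, 1 skipped.
def writer = new FileWriter('junit-results.xml')
new MarkupBuilder(writer).testsuite(name: 'BunchOfTests', tests: 10, failures: 2, skipped: 1) {
    (1..7).each { i ->
        testcase(classname: 'BunchOfTests', name: "passingTest$i", time: '0.01')
    }
    (1..2).each { i ->
        testcase(classname: 'BunchOfTests', name: "failingTest$i", time: '0.01') {
            failure(message: 'deliberate failure', type: 'junit.framework.AssertionFailedError')
        }
    }
    testcase(classname: 'BunchOfTests', name: 'skippedTest1', time: '0.0') {
        skipped()
    }
}
writer.close()
Point the job's "Publish JUnit test result report" post-build action at junit-results.xml and Jenkins will expose the counts in its JSON output just like a real test run.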

Related

How to change the Cucumber Scenario result in the @After hook

I want to change the scenario status for known issues in the @After hook.
Something like:
@After
public void afterScenario(Scenario scenario) {
    if (scenario.isFailed() && scenario.getSourceTagNames().contains("@knownIssue")) {
        //scenario.add(Result.SKIPPED)
    }
}
The idea is that tests which fail because of a known bug should show up as skipped in the test report.
Thanks,
Nayden
You can annotate the scenario with @KnownIssue and then run Cucumber with --tags "not @KnownIssue" or its @CucumberOptions equivalent.
It is your test execution engine (e.g. TestNG or JUnit) that is responsible for this aspect.
So the question is: How to change Test Execution Status Programmatically? Here is one approach explained. You can hook into the test execution engine either in the way explained in the article or from your method. What is important: you have to work with the test execution engine, not Cucumber.
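For example, with TestNG a listener can downgrade a failure to a skip. A minimal sketch (the knownIssue group name is hypothetical; TestListenerAdapter and ITestResult.setStatus are standard TestNG API):
import org.testng.ITestResult
import org.testng.TestListenerAdapter

// Sketch: report failures of tests in the (hypothetical) "knownIssue"
// TestNG group as skipped instead of failed.
class KnownIssueListener extends TestListenerAdapter {
    @Override
    void onTestFailure(ITestResult result) {
        if (result.method.groups?.contains('knownIssue')) {
            result.status = ITestResult.SKIP
        }
    }
}
Register it with @Listeners(KnownIssueListener) on the test class or in testng.xml.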
Gherkin with QAF supports what you are looking for. With QAF
you can achieve it with a TestNG after-method-invocation listener.
In addition, you can also add metadata at the step level and handle it in the onFailure method of a step listener, converting the exception into a skip exception depending on the step metadata. That automatically skips tests in which that step is called and fails.

How to get Groovy compile-time (phase) source code

The Groovy console gives me access to the Groovy AST browser. In it, I can select the compiler "end of phase" option and then see the source and bytecode for the code. My goal is to automate this process based on compiler phases and to get the "source" section as shown in the console output. More particularly, I don't want to use the Groovy console every time. Is there a way to do this?
For example, the code below is shown in the source section of the AST browser, starting with public class Dog extends...
class Dog {
    static void main(String[] args) {
        println "hello"
    }
}
class Cat { }
"My goal is to automate this process..." What process? Unless you rewrite the "Groovy AST Browser" yourself, I think you have to use the Groovy Console to get to it. However, if what you want to do is automate access to the AST Tree so you can, for example, intercept method calls or inject methods, then there is a way to do that. Check out http://groovy-lang.org/metaprogramming.html section 2 - compile time meta-programming

Getting JUnit output from node-qunit?

I have the following test runner for my Node.js application's unit tests, using the node wrapper for QUnit:
var testrunner = require("qunit");
testrunner.run({
    code: "./lib/application.js",
    tests: ["./lib/test1.js", "./lib/test2.js"]
}, function () {
});
This outputs a pretty human-readable report of my unit tests. All seems to be working as planned.
How do I modify the above file to make it output JUnit format? I've been trying to integrate qunit-reporter-junit but everything I try results in errors. Where do I put the code to override QUnit.jUnitReport such that the QUnit object is in scope?

What are fixtures in programming?

I have heard of this term many times (in the context of programming) but couldn't find any explanation of what it meant. Any good articles or explanations?
I think you're referring to test fixtures:
The purpose of a test fixture is to ensure that there is a well known
and fixed environment in which tests are run so that results are
repeatable. Some people call this the test context.
Examples of fixtures:
Loading a database with a specific, known set of data
Erasing a hard disk and installing a known clean operating system installation
Copying a specific known set of files
Preparation of input data and set-up/creation of fake or mock objects
(source: wikipedia, see link above)
Here are also some practical examples from the documentation of the 'Google Test' framework.
The term fixture varies based on context, programming language, or framework.
1. A known state against which a test is running
One of the most time-consuming parts of writing tests is writing the
code to set the world up in a known state and then return it to its
original state when the test is complete. This known state is called
the fixture of the test.
PHPUnit documentation
A test fixture (also known as a test context) is the set of
preconditions or state needed to run a test. The developer should set
up a known good state before the tests, and return to the original
state after the tests.
Wikipedia (xUnit)
2. A file containing sample data
Fixtures is a fancy word for sample data. Fixtures allow you to
populate your testing database with predefined data before your tests
run. Fixtures are database independent and written in YAML. There is
one file per model.
RubyOnRails.org
3. A process that sets up a required state. 
A software test fixture sets up the system for the testing process by
providing it with all the necessary code to initialize it, thereby
satisfying whatever preconditions there may be. An example could be
loading up a database with known parameters from a customer site
before running your test.
Wikipedia
I think the PHPUnit documentation explains this very well:
One of the most time-consuming parts of writing tests is writing the
code to set the world up in a known state and then return it to its
original state when the test is complete. This known state is called
the fixture of the test.
The Yii documentation also describes test fixtures nicely:
Automated tests need to be executed many times. To ensure the testing
process is repeatable, we would like to run the tests in some known
state called fixture. For example, to test the post creation feature
in a blog application, each time when we run the tests, the tables
storing relevant data about posts (e.g. the Post table, the Comment
table) should be restored to some fixed state.
Here's a simple example of a test fixture:
<?php
use PHPUnit\Framework\TestCase;

class StackTest extends TestCase
{
    protected $stack;

    protected function setUp()
    {
        $this->stack = [];
    }

    protected function tearDown()
    {
        $this->stack = [];
    }

    public function testEmpty()
    {
        $this->assertTrue(empty($this->stack));
    }

    public function testPush()
    {
        array_push($this->stack, 'foo');
        $this->assertEquals('foo', $this->stack[count($this->stack) - 1]);
        $this->assertFalse(empty($this->stack));
    }

    public function testPop()
    {
        array_push($this->stack, 'foo');
        $this->assertEquals('foo', array_pop($this->stack));
        $this->assertTrue(empty($this->stack));
    }
}
?>
This PHPUnit test has setUp and tearDown methods: before each test runs you set up your data, and once it has finished you restore the initial state.
JUnit has a well-explained doc on exactly this topic. Here is the link!
The relevant portion of the article is:
Tests need to run against the background of a known set of objects. This set of objects is called a test fixture. When you are writing tests you will often find that you spend more time writing the code to set up the fixture than you do in actually testing values.
To some extent, you can make writing the fixture code easier by paying careful attention to the constructors you write. However, a much bigger savings comes from sharing fixture code. Often, you will be able to use the same fixture for several different tests. Each case will send slightly different messages or parameters to the fixture and will check for different results.
When you have a common fixture, here is what you do:
Add a field for each part of the fixture
Annotate a method with @org.junit.Before and initialize the variables in that method
Annotate a method with @org.junit.After to release any permanent resources you allocated in setUp
For example, to write several test cases that want to work with different combinations of 12 Swiss Francs, 14 Swiss Francs, and 28 US Dollars, first create a fixture:
public class MoneyTest {
    private Money f12CHF;
    private Money f14CHF;
    private Money f28USD;

    @Before public void setUp() {
        f12CHF = new Money(12, "CHF");
        f14CHF = new Money(14, "CHF");
        f28USD = new Money(28, "USD");
    }
}
In Xamarin.UITest it is explained as follows:
Typically, each Xamarin.UITest is written as a method that is referred
to as a test. The class which contains the test is known as a test
fixture. The test fixture contains either a single test or a logical
grouping of tests and is responsible for any setup to make the test
run and any cleanup that needs to be performed when the test finishes.
Each test should follow the Arrange-Act-Assert pattern:
Arrange – The test will set up conditions and initialize things so that the test can be actioned.
Act – The test will interact with the application: entering text, pushing buttons, and so on.
Assert – The test examines the results of the actions performed in the Act step to determine correctness. For example, the
application may verify that a particular error message is
displayed.
Link to the original article for the excerpt above
And in Xamarin.UITest code it looks like the following:
using System;
using System.IO;
using System.Linq;
using NUnit.Framework;
using Xamarin.UITest;
using Xamarin.UITest.Queries;

namespace xamarin_stembureau_poc_tests
{
    [TestFixture(Platform.Android)]
    [TestFixture(Platform.iOS)]
    public class TestLaunchScreen
    {
        IApp app;
        Platform platform;

        public TestLaunchScreen(Platform platform)
        {
            this.platform = platform;
        }

        [SetUp]
        public void BeforeEachTest()
        {
            app = AppInitializer.StartApp(platform);
        }

        [Test]
        public void AppLaunches()
        {
            app.Screenshot("First screen.");
        }

        [Test]
        public void LaunchScreenAnimationWorks()
        {
            app.Screenshot("Launch screen animation works.");
        }
    }
}
Hope this is helpful to someone looking for a better understanding of fixtures in programming.
I'm writing this answer as a quick note for myself on what a "fixture" is.
Test Fixtures: Using the Same Data Configuration for Multiple Tests
If you find yourself writing two or more tests that operate on similar data, you can use a test fixture. This allows you to reuse the same configuration of objects for several different tests.
You can read more in the googletest docs.
Fixtures can also be used during integration tests or during development (say, UI development where the data comes from a development database).
For example, fake users for the database or for testing:
myproject/fixtures/my_fake_user.json
[
  {
    "model": "myapp.person",
    "pk": 1,
    "fields": {
      "first_name": "John",
      "last_name": "Lennon"
    }
  },
  {
    "model": "myapp.person",
    "pk": 2,
    "fields": {
      "first_name": "Paul",
      "last_name": "McCartney"
    }
  }
]
You can read more in the Django docs; a fixture like this is loaded with manage.py loaddata.

Is there a CppUnit equivalent to NUnit's Category attribute for test cases?

I'd like an equivalent to NUnit's Category attribute for test cases.
I have inherited a large number of C++ test cases, some of which are unit tests and some of which are longer-running integration tests, and I need to set up my continuous integration build process to ignore the integration test cases.
I would prefer to simply tag all the integration test cases and instruct cppunit to exclude them during CI builds.
Am I overlooking a feature of cppunit or is there an alternative way to achieve this?
There are no native test category attributes. CppUnit is a bit simpler than that. CppUnit doesn't even come with a command-line test runner for your app. You have to write your own simple main() function that executes a TestRunner.
Here's the canonical example.
#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>

int main(int argc, char **argv)
{
    CppUnit::TextUi::TestRunner runner;
    CppUnit::TestFactoryRegistry &registry = CppUnit::TestFactoryRegistry::getRegistry();
    runner.addTest(registry.makeTest());
    bool wasSuccessful = runner.run("", false);
    return wasSuccessful ? 0 : 1;  // exit code 0 signals success
}
A TestSuite is a collection of TestCases. A TestRunner executes a collection of TestSuites. Notice that in this example it gets the TestSuites from the TestFactoryRegistry, which you populate by using a macro call to CPPUNIT_TEST_SUITE_REGISTRATION(MyTestSuite). But the TestCases are still your test classes.
You can certainly implement these attributes yourself, just as you would extend any class with a facade. Derive your new class from TestSuite. Add attributes to your tests that you could select on, then populate your TestRunner executing "just unit tests" or "just integration tests" or whatever you want.
For that matter, a TestRunner can select tests to execute based on name. If you named all your integration tests with a prefix like ITFoo, ITBar, etc., you could select all tests that begin with "IT".
There are dozens of ways to solve your problem, but you'll have to do it yourself. If you can write code worthy of unit testing, it shouldn't be a big deal for you. :-)
