I started a new project in IntelliJ and out of the box it doesn't quite work. Currently the problem is that when I try to run a unit test, it prompts me to pick a device/VM for the code to run on. But these are JUnit classes and shouldn't need a device. The test really doesn't need anything:
public class ExampleUnitTest {
    @Test
    public void addition_isCorrect() throws Exception {
        assertEquals(4, 2 + 2);
    }
}
Any thoughts? The project can be viewed here, but it's pretty much a stock project.
Android Studio does not provide an execution environment of its own, even for unit tests. It distinguishes between instrumented tests, which need an Android runtime environment (a device or an emulator), and unit tests, which run on the host computer in a Java virtual machine (JVM). But even for the latter you must set up the environment.
See more at https://developer.android.com/training/testing/unit-testing/index.html
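For reference, here is a minimal sketch of what such a local unit test looks like once the environment is set up, assuming JUnit 4 is on the test classpath and the class sits in the module's default local test source set (src/test/java):

import org.junit.Test;

import static org.junit.Assert.assertEquals;

// Lives under src/test/java, so it runs on the local JVM
// and never prompts for a device or emulator.
public class ExampleUnitTest {

    @Test
    public void addition_isCorrect() throws Exception {
        assertEquals(4, 2 + 2);
    }
}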
Related
I'm trying to write some new unit tests for a jHipster application generated with [jHipster version 3.3.0][1]. I've imported my project into [STS (w/ Gradle)][2], and it runs fine if I select "Run As Spring Boot App" or "Debug As Spring Boot App". Running ./gradlew test seems to run all the tests; however, I'd like to run just individual tests, with JUnit integration tests [as stated][3]:
Those tests can be run directly in your IDE, by right-clicking on each
test class, or by running mvn clean test (or ./gradlew test if you run
Gradle).
When I right-click on my test and use 'Run As JUnit Test', the entire app appears to run (though it mentions that no profile is selected and uses the default).
Here is my simple test class:
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = MyAPP.class)
@IntegrationTest
@Transactional
public class UtilTest {

    @Test
    public void testGenerateRandomName() {
        Assert.assertNotEquals(null, RandomUtil.generatedRandomWordString());
    }
}
Advice?
You've declared your test as an integration test using MyAPP.class as the Spring context, so your test starts the full application; this is the expected behavior.
If you want to run a simple unit test, remove all your annotations.
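A minimal sketch of the stripped-down version (the Spring annotations from the question's test class are removed, so the method runs on the plain JUnit runner and no application context is started):

import org.junit.Assert;
import org.junit.Test;

// RandomUtil is the application's own utility class from the original test.
public class UtilTest {

    // No @RunWith / @SpringApplicationConfiguration / @IntegrationTest:
    // this is an ordinary unit test on the default JUnit runner.
    @Test
    public void testGenerateRandomName() {
        Assert.assertNotEquals(null, RandomUtil.generatedRandomWordString());
    }
}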
I have a solution with a WPF application, its class libraries, a WiX installer, and numerous MSTest test projects.
When I build the solution the test runner (we are using Visual Studio 2012.3) correctly discovers all the tests and allows us to run them.
Now I have created a Coded UI test project with just one CUIT test in it. I have added the project to the solution; because it is a complete app test of the thing the solution builds, this seems logically correct to me.
However, by default the new test project's CUIT test is discovered by the test runner and so gets run when I run all tests. I do not want this; I only want the other (unit) tests to be found and run this way.
Is there a way (test class attributes perhaps) to suppress the DISCOVERY of the test by the test runner?
(Note: this is similar to the question "Testing an WPF app with CodedUI tests, should the coded ui test project share a solution or not?", but it asks this specific question:
If a solution contains some tests, how can I prevent those tests from being discovered by the test runner?)
Any solution should still allow the "undiscovered" test to be selected in a lab build for automation of MTM tests.
I simply unload the project until I want to run the tests.
The set of unloaded projects is saved in the .suo file, which can be excluded from source control if your lab build is done automatically; alternatively, you can reload the project manually if your lab build is run manually.
First some context; we are developing a large desktop WPF application in .NET 4.5 targeting 64 bit Windows 7 and 8. We are using Visual Studio 2012.2 (soon to be .3 then probably 2013!) and TFS 2012 (again .2 soon to be .3 then 2013).
Currently this product is all in a single large solution (just over 50 projects) yielding a WPF exe, a load of dlls and a nice MSI to install it.
We use TFS (gated and scheduled) to build the solution, its installer (WiX) and run its tests (SpecFlow for BDD and MSTest unit tests) and this is working very well.
I have a separate scheduled TFS build that deploys the MSI to a physical test rig in an untrusted AD domain via a PowerShell script (see TFS2012 LabDefault.11 template deploy scripts fail with “Team Foundation Server could not complete the deployment task” for details of the challenges involved with that!)
OK, so that's where I am; now I want to take things to the next step: Coded UI tests to drive a full app integration test. I want to "smoke test" my builds.
So, being a simple soul, I added a new project to my product's solution: a Coded UI test project.
This happily runs the locally installed product (rather than the just-built one, as I ultimately want the CUIT to run on a deployed test rig as a smoke test, and that rig has just installed the MSI I just built) and performs some UI tests with assertions.
Now my problem is that, with the CUIT project as part of the product's solution, a local test run finds and runs my CUIT tests, which is undesired. I only want to run the CUIT tests in a lab build's test phase.
So is putting the CUIT project into the product solution a bad idea? Or should it be a separate solution? Splitting them seems wrong somehow, as they are related; the CUIT project is the full-stack integration test for the solution's deployable application.
Can I include the CUIT project in the product's solution and stop the test runner from seeing its tests? Or is it better just to have two solutions?
What are the pros and cons folks?
Update
In the end we created a new solution containing a Coded UI test project and ensured it was built by the same TFS build that built the UI solution. This allows us to load and run the Coded UI tests locally without issues, and the unit tests in the main UI project are left unmolested. It still seems a little disjointed, but on a multi-person team the per-user test settings were too awkward, so splitting the Coded UI tests into a different solution was simpler.
What I did was make one solution with a CUIT project inside it, and then create multiple Coded UI tests within that project. This is good because, using an ordered test, you can run them together, and they also share a UIMap, which helps too.
I also have/had this problem, because we are at the beginning of using CUIT. For now the CUIT project remains in the product solution. We do this because the tests should stay on the developers' minds; if the tests lived in their own solution, I'm afraid they would be forgotten. But there is indeed sometimes a nagging feeling that the CUIT tests pollute the product's solution, so I guess they will get their own solution after some time passes and the tests become established.
Edit: If you use different versions of Visual Studio, consider that, for example, VS Professional can’t build a solution with Coded UI tests. This means that in “multi VS-version environments” you have to separate Coded UI tests from “real” code.
I have the following setup:
Visual Studio 2012 Update 2
NUnit Test Adapter Beta 4
NUnit 2.6.2
I mark a test with either of the following attributes: Category from NUnit or TestCategory from MSTest:
[Test] // NUnit test method
[Category("WebServer")]
public void FooTest() {
    //test
}

[TestMethod] // MSTest test method
[TestCategory("WebServer")]
public void FooTest2() {
    //test
}
In the TFS Build Template, I set the property
Basic -> 1. Test Source -> Test Case Filter to the value: TestCategory!=WebServer
When the build executes, NO TESTS execute. If I remove the filter, all tests run again.
The output from the build log is
No test is available in C:\Builds\2\Proj\Build\bin\Debug\Tests.Integration.dll C:\Builds\2\Proj\Build\bin\Debug\Tests.Unit.dll C:\Builds\2\Proj\Build\bin\Debug\Tests.Web.dll C:\Builds\2\Proj\Build\bin\Debug\TestStack.BDDfy.dll. Make sure that installed test discoverers & executors, platform & framework version settings are appropriate and try again.
Any clues as to how I can get a test to be excluded based on category name?
I can easily verify the attribute works if I use the NUnit console runner.
When using NUnit Test Adapter v1.1+, test categories can be used in TFS Build. You just need to install the package in your test project and configure the test case filter on your build definition.
From here it would appear that you cannot use the TestCategory filters on NUnit tests, only on MSTest tests.
As a note, it would also appear that you are changing the property on the TFS Build Definition, not the Build Template. This is what I would expect, as the build template would be the wrong place to change this.
Answering an old question as this was one of the first results Google returned.
From here it would seem that this was a bug that has been fixed in version 1.1, which unfortunately (as of 22 Jan 2014) has not yet been released to NuGet.
How can I easily integrate Jenkins with QUnit? I'm going to use real browsers (like Firefox and Chrome) to run the tests. My server runs RedHat 6.1 Linux. I think I have all the needed plugins/libraries, but I still don't know how to make it work. This is my first time working with Jenkins (on the server side).
//Edit:
It would be wonderful if someone could also share an idea of how to build a coverage report.
Thanks in advance :).
Saying Jenkins and QUnit is only part of the puzzle. You still need a web browser and a way to get a JUnit-style XML file from the QUnit results onto disk. While there are Selenium and WebDriver for controlling numerous browsers, the easiest way to get started is to use PhantomJS (http://phantomjs.org/), a headless WebKit-based browser meant for exactly this kind of task.
If you browse the "Test Frameworks" section of this page (http://code.google.com/p/phantomjs/wiki/WhoUsesPhantomJS) you will see several scripts for running QUnit (some with JSCoverage support). The phantomjs-jscoverage-qunit script looks like it will hit all the major points you want, as does United. Both look like they will require some fiddling to get them going, though.
Alas, I haven't discovered any method for running QUnit tests and getting JUnit output for either Selenium, WebDriver, or PhantomJS that will just work without modification.
EDIT: Now, several months later, it has become clear to me that WebDriver is the future of Selenium (it probably should have been clear to me back then, but it wasn't). Also, PhantomJS now works with WebDriver via GhostDriver, so supporting only WebDriver and choosing PhantomJS as a target is probably the best advice going forward.
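As an illustration of that route, here is a minimal, hypothetical Java sketch that drives a QUnit page headlessly through WebDriver and PhantomJS (GhostDriver). It assumes selenium-java and the PhantomJSDriver bindings are on the classpath, the phantomjs binary is installed on the Jenkins node, and a test page URL of your own; turning the results into the JUnit XML that Jenkins consumes is left to a QUnit reporter or a separate step.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.phantomjs.PhantomJSDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class QUnitSmokeRunner {

    public static void main(String[] args) {
        // The URL of the QUnit test page is an assumption for illustration.
        String testPage = "http://localhost:8080/tests/qunit-tests.html";

        WebDriver driver = new PhantomJSDriver();
        try {
            driver.get(testPage);

            // QUnit writes a summary ("N assertions of M passed, K failed.")
            // into the #qunit-testresult element once the run finishes;
            // wait up to 30 seconds for it to appear.
            WebElement result = new WebDriverWait(driver, 30).until(
                    ExpectedConditions.presenceOfElementLocated(By.id("qunit-testresult")));

            System.out.println(result.getText());

            // A real job would also translate the results into JUnit XML
            // (for example via a QUnit reporter) so Jenkins can chart them.
        } finally {
            driver.quit();
        }
    }
}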
It's been over a year since this question was posted, but there is a Jenkins plugin for TestSwarm. My layman's understanding is that you can use TestSwarm to run your QUnit tests continuously across all of the major browsers. It is open sourced on GitHub.
Disclosure: I'm a contributor to the Arquillian project.
You can use the open-source Arquillian QUnit Extension to execute your QUnit tests on Jenkins. In general, the Arquillian QUnit Extension can easily be used in continuous integration environments. In this GitHub repo you can find a real example of how the Arquillian QUnit Extension can be used to execute QUnit tests on Travis CI headless machines.
Arquillian is a JBoss Community project.
The Arquillian QUnit Extension is an Arquillian extension that automates QUnit JavaScript testing. It integrates transparently with the JUnit testing framework.
You can find more information in this README file. In addition, there is a showcase that can be executed through Maven and shows how to set up your test case.
Using this extension, you have the option to deploy an archive during the QUnit test executions and/or execute one or more QUnit test suites in a single execution. Furthermore, you can define the QUnit test suite execution order using the @InSequence annotation.
For example, assume that you want to execute two QUnit test suites (qunit-tests-ajax.html and qunit-tests-dom.html) and that the QUnit tests included in these suites perform Ajax requests to a web service. Naturally, you need this web service to be up while the tests are executed. Arquillian can automatically perform the deployment of the web service to a container. In such a case, your Arquillian test case will look like this:
@RunWith(QUnitRunner.class)
@QUnitResources("src/test/resources/assets")
public class QUnitRunnerTestCase {

    private static final String DEPLOYMENT = "src/test/resources/archives/ticket-monster.war";

    /**
     * Creates the Archive which will be finally deployed on the AS.
     *
     * @return Archive<?>
     */
    @Deployment()
    public static Archive<?> createDeployment() {
        return ShrinkWrap.createFromZipFile(WebArchive.class, new File(DEPLOYMENT));
    }

    /**
     * Execute the qunit-tests-ajax.html QUnit Test Suite.
     */
    @QUnitTest("tests/ticketmonster/qunit-tests-ajax.html")
    @InSequence(1)
    public void qunitAjaxTests() {
        // empty body - only the annotations are used
    }

    /**
     * Execute the qunit-random-tests.html QUnit Test Suite.
     */
    @QUnitTest("tests/ticketmonster/qunit-random-tests.html")
    @InSequence(2)
    public void qunitRandomTests() {
        // empty body - only the annotations are used
    }
}
If using real browsers:
Run the QUnit tests in multiple browsers simultaneously by using bunyip (https://github.com/ryanseddon/bunyip). It is built on top of Yeti, which can produce JUnit-XML-compatible reports that Jenkins can read.
If using PhantomJS (a headless browser that behaves almost like a real WebKit-based one):
I just shared here (https://stackoverflow.com/a/17553889/998008) a walk-through on adding a QUnit test runner task to an Apache Ant build script. Jenkins runs the script after pulling the project's working copy from a VCS. You need to specify the location of the output file in the Jenkins project; the output is JUnit XML compatible.
BlanketJS is a fantastic code coverage tool that works well with QUnit. I've been using it for about a year now.
For Jenkins integration, I use Grunt, which exits with 0 if the task passes and a non-zero code if it fails, so it integrates with Jenkins perfectly.
There was no existing Grunt plugin that handled Blanket and QUnit together, so I wound up writing my own. The plugin supports "enforcement" of a minimum coverage threshold; if the threshold isn't met, the Grunt task fails.
I wrote a blog post with all the details here: http://www.geekdave.com/2013/07/20/code-coverage-enforcement-for-qunit-using-grunt-and-blanket/