Integrating YUI tests with CruiseControl

I am using YUI to test my JavaScript app, and want to integrate the test results into my CruiseControl build system. How can I use CruiseControl to run the tests? I initially thought about using the JUnit plugin to drive the tests, but that is a no go.
Does anyone else have this working?
(Please note: Changing either YUI or CruiseControl isn't an option for me.)

We have YUI Tests integrated with Hudson for our CI builds. The process should be pretty much identical for CruiseControl, since we kick off the testing through a Java task in Ant.
We have a Selenium driver (a Java implementation; we're working on making it public) which talks to a Selenium RC instance, pointing it at the HTML files in the build workspace to run the tests.
You could take a stab at writing your own Selenium driver:
http://seleniumhq.org/docs/05_selenium_rc.html#learning-the-api
The driver code talks to a Selenium RC instance and asks it to kick off a browser pointing at the YUI Test based HTML test files from the build.
The HTML files run the YUI Test Runner on load. The driver injects code on page load to pick up the test results from the YUI Test Runner when it's done running, and store them as files for Hudson to parse.
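For illustration, the injected code might look something like this sketch, assuming YUI 3's Y.Test API (window.__testResults is just a placeholder property the driver would read back, e.g. via Selenium's getEval):
YUI().use('test', function (Y) {
    // When the runner finishes, serialize the results as JUnit-style XML
    // and stash them on the window for the driver to read.
    Y.Test.Runner.subscribe(Y.Test.Runner.COMPLETE_EVENT, function () {
        window.__testResults = Y.Test.Runner.getResults(Y.Test.Format.JUnitXML);
    });
});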
Regards,
Satyen
YUI Team

My solution, in the end, is a bit of a hack.
I modified our test runner HTML page to post the test results (the entire XML object that it creates) to a PHP page, then to close itself.
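A minimal sketch of that posting-and-closing step might look like the following (the save-results.php endpoint name is a placeholder, and resultsXml stands for the serialized XML the runner produced; the real page differs in detail):
// Sketch: POST the results XML to the build server, then close the window.
function postResultsAndClose(resultsXml) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/save-results.php', false);   // synchronous, so we can close afterwards
    xhr.setRequestHeader('Content-Type', 'text/xml');
    xhr.send(resultsXml);
    window.close();   // only closes without a prompt in IE, as noted below
}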
I added a PHP page to the build server (with a PHP processor attached to an Apache HTTPD instance) to accept the posted XML document and save it to disk.
The unit tests are now run by a 'test driver', which fires up a browser with the test runner HTML page, and waits for the browser process to end.
This gets the output of the tests onto the local disk of the build server. I then merge the output file into the CC log by adding the following to the project configuration:
<log>
<merge file="path_to_file" />
</log>
There are a few drawbacks, but I am (currently) willing to live with them:
Had to introduce a test runner app to the infrastructure
Had to add an Apache server and PHP processor to the build machine
Because only IE will allow a browser page to close itself without a user prompt, the build server must be a Windows machine.

Related

Include custom Dojo modules in Intern coverage

I'll apologize now because I am very new to Intern and know just enough to know that I don't know anywhere near enough. I am using the latest version of Intern. I see lots of details about how to exclude files from the coverage reports that Intern generates, but nothing on what it includes in coverage by default, or how to get other things included. Intern already instruments and provides coverage reports on the test files that I run, but that doesn't do me any good. I have several custom Dojo modules that need to be instrumented for coverage, but I can't seem to find how to make that happen. I am only running functional tests at this time.
The website under test is being served by local IIS, but the test files are in a completely different folder. By default, it appears that Intern is instrumenting the test files and showing me nice reports about how much of my tests were covered in the run. Seeing this, my thought was that I needed to move the entire Intern install and configuration to the local IIS folder, which I did. Intern is still only providing coverage reports for the test files and not the Dojo modules.
Folder structure in IIS
wwwroot
|
+-- js
|   |
|   +-- Chai
|   +-- ckeditor
|   +-- myScripts
|   +-- dojo
|   +-- node_modules
+-- Gruntfile.js
+-- internConfig.js
+-- package.json
I need the files in the myScripts folder instrumented for code coverage. Here is what I am excluding:
excludeInstrumentation: /^(?:Chai|node_modules|ckeditor|dojo)\//
It appears that nothing in those folders is being instrumented, so at least I have that right. I don't have anything defined under loaderOptions at this time, and I'm not entirely sure that that is where the stuff in the myScripts folder should be listed when it comes to functional testing. So, the question is: how do I get the stuff in that folder instrumented for code coverage?
In order to be instrumented, code needs to be requested from the HTTP server that Intern creates when you run intern-runner. If you are loading code directly from IIS, it will never be instrumented and no code coverage analysis can be performed. If you need to use IIS instead of the built-in server, you will also need to configure IIS to reverse proxy requests for these files to Intern, as described in the testing non-CORS APIs documentation.
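To make that concrete, a minimal config along these lines might look like the sketch below. This follows Intern 3-style intern-runner option names; the package locations and the functional suite path are guesses based on the folder layout above, not taken from your project:
// internConfig.js -- sketch only; option names follow Intern 3's intern-runner config.
define({
    // The proxy that serves and instruments your code.
    proxyPort: 9000,
    proxyUrl: 'http://localhost:9000/',

    // Map the AMD packages so 'myScripts/...' modules resolve through Intern's proxy.
    loaderOptions: {
        packages: [
            { name: 'dojo', location: 'js/dojo' },
            { name: 'myScripts', location: 'js/myScripts' }
        ]
    },

    functionalSuites: ['tests/functional/all'],

    // myScripts is deliberately left out of this regex so it *does* get instrumented.
    excludeInstrumentation: /^(?:Chai|node_modules|ckeditor|dojo)\//
});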

Selenium WebDriver UI tests overridden by localhost

I have created unit tests for my web project, but I have come across an error whereby the tests are being ignored and Visual Studio 2012 is running my localhost site instead. I cannot use localhost to run my tests, as there are a lot of Java resources and overlays which aren't displayed. Essentially, the whole point of UI testing is that you test the correct interface.
The code I used in a blank project runs perfectly and completes the test with no issues, but since I need to include this in my web project, I need a way to stop Visual Studio from running the localhost site and get it to execute the console application test so that my Selenium WebDriver can run the test properly.
Using: Visual Studio 2012
Selenium (WebDriver)
Chrome driver (latest version)
C# / .NET
example code:
// Requires: using OpenQA.Selenium; using OpenQA.Selenium.Chrome; using System.Text;
IWebDriver _driver;
ChromeOptions options = new ChromeOptions();
options.AddArgument("--start-maximized");          // open the browser maximized
_driver = new ChromeDriver(options);
_driver.Url = "http://theURLimTesting.com/";
var verificationErrors = new StringBuilder();
_driver.Navigate().GoToUrl("http://theURLimTesting.com/");
_driver.FindElement(By.Id("Username")).Clear();    // clear the username field before typing
If anyone could help me and provide a solution as to how to run these tests without excluding them from the project and without having to create a proxy, I would be very grateful, as I am very much a novice.
UPDATE: As @mutt's answer helped steer me in the right direction, I marked it as the accepted answer. I have managed to work around the error by tweaking some settings, and now I can run all tests inside the web application properly; they all execute and close themselves in the background when done.
Separate your unit tests from the web project and it should work. Since you have them together, your web app probably has a default start page, so when you "play" it, it will load that page, and VS is scoped to that browser running on IIS Express instead of the regular browser.
Personally I would have thought it would still work since Selenium is hitting the driver package that is referencing the browser, but I'm not sure what all VS is doing when it runs the webapp. If you want it with the console then move all your unit tests to the console project and they should still work on the WebApp because they will be hitting the server version and not the locally run project.
Update:
It looks like it is process bound, so Selenium and Visual Studio are sharing the same process. VS2008 debugging with firefox as default browser - how to make the debugger stop/close on exit?
Update2:
It looks like you should be able to determine if the process is being utilized. Then the question would be whether you can just kill it in your unit test script so that it will be forced to create a new one... Programmatically determine if code is running under IIS Express

How to debug tests with karma.js + require.js

I have a setup basically described here - http://karma-runner.github.io/0.8/plus/RequireJS.html
The problem is that I can't see the source files of my tests in Chrome dev tools, so I can't debug them. Adding debugger; works, but it is very awkward, almost unusable, since I can't browse any file other than the one where the debugger; statement currently fired.
It seems like Karma loads the files, parses them, wraps each test, and then unloads the files before the run.
ng-boilerplate has a grunt build that will put all your plain js files into a build directory for testing and debugging.
Take a look at the Gruntfile and karma/karma-unit.tpl.js for how this is done.
Running grunt watch will leave your browser in a state where you can debug all your tests. Just click the debug button, set your break point(s) and reload the page.
Suddenly, you are debugging any or all your js files.
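If you are not using ng-boilerplate, the key ingredient is simply a few Karma options that keep a real browser attached instead of doing a single headless run. A minimal sketch (shown in the current karma.conf.js format, not the 0.8-era format the question links to):
// karma.conf.js -- only the options relevant to debugging are shown.
module.exports = function (config) {
    config.set({
        browsers: ['Chrome'],   // a real browser you can open dev tools in
        singleRun: false,       // keep the browser open after the first run
        autoWatch: true         // re-run on file changes; hit the DEBUG button for unwrapped sources
    });
};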
If you need to debug your tests deeply, this is generally an indicator of badly organized code or badly made unit tests. If you follow a TDD workflow, taking small steps will help you prevent any major issue with your code. I warmly recommend you watch this video: http://blog.testdouble.com/posts/2013-10-03-javascript-testing-tactics.html?utm_source=javascriptweekly&utm_medium=email (it doesn't use Karma, but you should watch it for the workflow/the principles presented).
Then, if you really want to debug your test code, nothing beats the browser. As such, you should set up your tests in a manner that lets them be run both in Karma and in the browser. We implemented this for QUnit, Jasmine and Mocha on the Backbone-Boilerplate. Feel free to base yourself on these settings to set up your own environment.
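For reference, the usual Karma + RequireJS bootstrap (the test-main.js file from the karma-requirejs recipe) looks roughly like this; the spec-file regex and baseUrl are assumptions about your layout:
// test-main.js -- loaded by Karma before anything else.
var allTestFiles = [];
var TEST_REGEXP = /spec\.js$/i;

// window.__karma__.files lists every file Karma serves; pick out the specs.
Object.keys(window.__karma__.files).forEach(function (file) {
    if (TEST_REGEXP.test(file)) {
        allTestFiles.push(file);
    }
});

require.config({
    baseUrl: '/base',                    // Karma serves the project root at /base
    deps: allTestFiles,
    callback: window.__karma__.start     // start the run once all specs are loaded
});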

Jenkins + qUnit

How do I easily integrate Jenkins with QUnit? I am going to use real browsers (like Firefox and Chrome) to run the tests. My server runs RedHat 6.1 Linux. I think I have all the needed plugins/libraries, but I still don't know how to make it work. This is my first time working with Jenkins (on the server side).
//Edit:
It would be wonderful if someone could also share ideas on how to build a coverage report.
Thanks in advance :).
Saying Jenkins and QUnit is only part of the puzzle. You still need a web browser and a way to get a JUnit-style XML file from the QUnit results onto disk. While there are Selenium and WebDriver for controlling numerous browsers, the easiest way to get started is to use PhantomJS (http://phantomjs.org/). PhantomJS is a headless WebKit-based browser meant just for tasks like this.
If you browse the "Test Frameworks" sections of this page ( http://code.google.com/p/phantomjs/wiki/WhoUsesPhantomJS ) you will see several scripts for running QUnit (some with JSCoverage support). The phantomjs-jscoverage-qunit script looks like it will hit all the major points you want to hit, as does United. Both look like they will require some fiddling to get them going though.
Alas, I haven't discovered any method for running QUnit tests and getting JUnit output for either Selenium, WebDriver, or PhantomJS that will just work without modification.
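To give a flavour of what these runner scripts do, here is a bare-bones PhantomJS sketch, loosely based on the classic run-qunit.js example; the test page URL is a placeholder, and it only reports a pass/fail count rather than full JUnit XML:
// run-qunit.js -- minimal sketch; exits non-zero if any test fails.
var page = require('webpage').create();

page.open('http://localhost:8000/test/index.html', function (status) {
    if (status !== 'success') {
        console.log('Could not load the test page');
        phantom.exit(1);
        return;
    }
    // Poll the QUnit HTML reporter until the "#qunit-testresult" summary appears.
    var interval = setInterval(function () {
        var failures = page.evaluate(function () {
            var el = document.getElementById('qunit-testresult');
            if (!el || el.innerText.indexOf('completed') === -1) {
                return null;                                    // not finished yet
            }
            return parseInt(el.getElementsByClassName('failed')[0].innerHTML, 10);
        });
        if (failures !== null) {
            clearInterval(interval);
            console.log(failures + ' test(s) failed');
            phantom.exit(failures > 0 ? 1 : 0);                 // non-zero exit fails the build
        }
    }, 250);
});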
EDIT: Now, several months later, it has become clear to me that WebDriver is the future of Selenium (it probably should have been clear to me back then, but it wasn't). Also, PhantomJS now works with WebDriver via GhostDriver, so supporting only WebDriver and choosing PhantomJS as a target is probably the best advice going forward.
It's been over a year since this question was posted, but there is a Jenkins plugin for TestSwarm. My layman's understanding is that you can use TestSwarm to run your QUnit tests continuously across all of the major browsers. It is open sourced on GitHub.
Disclosure: I'm a contributor to the Arquillian project.
You can use the open source Arquillian QUnit Extension to execute your QUnit tests on Jenkins. In general, Arquillian QUnit Extension can be easily used in continuous integration environments. On this GitHub repo you can find a real example of how Arquillian QUnit Extension can be used to execute QUnit tests on Travis CI headless machines.
Arquillian is a JBoss Community project.
Arquillian QUnit Extension is an Arquillian extension which automates QUnit JavaScript testing. Arquillian QUnit Extension integrates transparently with the JUnit testing framework.
You can find more information in this README file. In addition, there is a showcase which can be executed through Maven and shows how to set up your test case.
Using this extension, you have the option to deploy an archive during the QUnit test executions and/or execute one or more QUnit Test Suites in a single execution. Furthermore, you can define the QUnit Test Suite execution order using the @InSequence annotation.
For example, assume that you want to execute two QUnit Test Suites (qunit-tests-ajax.html and qunit-tests-dom.html) and that the QUnit tests included in these test suites perform Ajax requests to a Web Service. Obviously, you need this Web Service to be up and hosted while the tests are executed. Arquillian can automatically perform the deployment of the Web Service to a container. In such a case your Arquillian test case will look like:
@RunWith(QUnitRunner.class)
@QUnitResources("src/test/resources/assets")
public class QUnitRunnerTestCase {

    private static final String DEPLOYMENT = "src/test/resources/archives/ticket-monster.war";

    /**
     * Creates the Archive which will be finally deployed on the AS.
     *
     * @return Archive<?>
     */
    @Deployment()
    public static Archive<?> createDeployment() {
        return ShrinkWrap.createFromZipFile(WebArchive.class, new File(DEPLOYMENT));
    }

    /**
     * Execute the qunit-tests-ajax.html QUnit Test Suite.
     */
    @QUnitTest("tests/ticketmonster/qunit-tests-ajax.html")
    @InSequence(1)
    public void qunitAjaxTests() {
        // empty body - only the annotations are used
    }

    /**
     * Execute the qunit-random-tests.html QUnit Test Suite.
     */
    @QUnitTest("tests/ticketmonster/qunit-random-tests.html")
    @InSequence(2)
    public void qunitRandomTests() {
        // empty body - only the annotations are used
    }
}
If using real browsers:
Run the QUnit tests in multiple browsers simultaneously by using bunyip (https://github.com/ryanseddon/bunyip). It is built on top of Yeti, which can produce JUnit-XML-compatible reports and is thus readable by Jenkins.
If using PhantomJS (a headless browser which acts almost like a real WebKit-based one):
I just shared a walk-through here https://stackoverflow.com/a/17553889/998008 on adding a QUnit test runner task to an Apache Ant build script. Jenkins runs the script after pulling the project working copy from a VCS. You need to specify the location of the output file in the Jenkins project. The output is JUnit XML compatible.
BlanketJS is a fantastic code coverage tool that works well with QUnit. I've been using it for about a year now.
For Jenkins integration, I use Grunt, which exits with 0 if the task passes and a non-zero code if it fails, so it integrates with Jenkins perfectly.
There was no existing Grunt plugin that handled Blanket and QUnit together, so I wound up writing my own Grunt plugin. The plugin supports "enforcement" of a minimum coverage threshold, or else the Grunt task fails.
I wrote a blog post with all the details here: http://www.geekdave.com/2013/07/20/code-coverage-enforcement-for-qunit-using-grunt-and-blanket/
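As a point of reference, the Jenkins-facing part is just a Grunt task that fails the build when the tests fail. A minimal sketch using the stock grunt-contrib-qunit task (the Blanket coverage-threshold enforcement described above lives in the author's own plugin and is not shown here; the test-page glob is a placeholder):
// Gruntfile.js -- sketch: the qunit task exits non-zero on failure, which fails the Jenkins build step.
module.exports = function (grunt) {
    grunt.initConfig({
        qunit: {
            all: ['test/**/*.html']   // QUnit test pages, run headlessly via PhantomJS
        }
    });

    grunt.loadNpmTasks('grunt-contrib-qunit');
    grunt.registerTask('test', ['qunit']);
};
A Jenkins job can then simply run "grunt test" as a build step.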

How do I deploy an OpenLaszlo solo application?

I have been looking at OpenLaszlo. I could not find how to deploy a SOLO application.
What do I have to copy, or what programs do I have to run?
I know the deployment type can be DHTML or Flash...
Thanks in advance.
For SOLO mode, you take the OpenLaszlo .lzx source "program" (expressed in XML format) and "compile" it into an Adobe Flash .swf file using the lzc utility.
For example, a hello.lzx source would be compiled as follows
lzc hello.lzx
into a Flash application called
hello.sw8.swf
Then you simply embed the .swf into an HTML page as you would any other Flash content. The client browser must have the Adobe Flash Player version 8 or version 9 to play the .swf Flash application.
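One common way to do that embedding is with the SWFObject JavaScript helper; a minimal sketch (the element id, dimensions and file name here are placeholders, not taken from your project):
// Assumes swfobject.js is loaded and the page contains <div id="appContainer"></div>.
// Replaces the div with the compiled OpenLaszlo .swf, requiring Flash Player 8 or later.
swfobject.embedSWF('hello.sw8.swf', 'appContainer', '800', '600', '8.0.0');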
To see the Flash application work right off the disk of your development machine, just point your browser directly at the local .swf file and it should show up in the browser.
The OpenLaszlo documentation contains a section with some information on how to deploy an application:
http://www.openlaszlo.org/lps3.4/docs/deploy/deployers-guide.html#deployers-guide.steps
There's another section in the docs describing the SOLO and proxied deployment mode.
http://www.openlaszlo.org/lps4.9/docs/developers/proxied.html
The documentation can be a bit confusing, since it has not been updated over the past years.
The simplest way to deploy an application is to use the developer console, which is displayed below the OpenLaszlo application in the browser. You'll see a "SOLO" button in the console, which will start the process of generating an embedding HTML page for your OpenLaszlo application and bundling up all static resources into a ZIP file, which can be processed by automated build scripts to generate a new version of your software.
All the compilation and deployment steps can be run from the command line, using the "lzc" command for compilation and the "lzdeploy" command to generate the deployment ZIP file. Both tools can be integrated into Ant. The commands can be found in the folder
$LPS_HOME/WEB-INF/lps/server/bin
Check this blog post in the OpenLaszlo project blog for more information on the lzdeploy tool (which does not seem to be documented in the official documentation):
http://weblog.openlaszlo.org/archives/2008/04/lzdeploy-new-command-line-utility-for-deploying-solo-applications/
There has been a similar question regarding automated builds of an OpenLaszlo SOLO application using Apache Ant. The answer contains a full build script to compile either an SWF or DHTML/HTML5 application, including all required resources. The discussion can be found here:
How to build an OpenLaszlo DHTML application using Apache Ant
