Running with intern-runner: nothing output to terminal, no code coverage data

I am launching my Intern-based tests through the intern-runner script, like this:
<full_path>\intern\.bin\intern-runner config=unittest/intern
My unittest\intern.js configuration file contains the following:
define({
    reporters: [ "junit", "console", "lcovhtml", "runner" ],
    excludeInstrumentation: /(?:dojo|intern|istanbul|reporters|unittest)(?:\\|\/)/,
    suites: [ "unittest/all_intern.js" ],
    forDebug: console.log("Customized intern config for test runner loaded successfully!"),
    loader: {
        packages: [
            { name: 'resources', location: 'abc/resources' },
            { name: 'stats', location: 'abc/resources/stats' },
            { name: 'nls', location: 'abc/nls' },
            { name: 'widgets', location: 'abc/widgets' },
            { name: 'views', location: 'abc/views' }
        ]
    },
    useLoader: {
        'host-browser': 'node_modules/dojo/dojo.js'
    },
    tunnel: 'NullTunnel',
    useSauceConnect: false,
    webdriver: {
        host: 'localhost',
        port: 4444
    },
    proxyUrl: "http://localhost:8010/",
    environments: [
        {
            browserName: 'chrome'
        }
    ]
});
Output to the terminal/command window looks hopeful:
Customized intern config for test runner loaded successfully!
Listening on 0.0.0.0:9000
Starting tunnel...
Initialised chrome 40.0.2214.111 on XP
And the Chrome browser is indeed launched, and I see my unit tests running and passing in the browser. However, control never goes back to the terminal/command window: I don't see anything like "634/634 tests pass", and I have to Ctrl+C to kill the intern-runner process. And of course, no code coverage files are generated. Is this perhaps due to my file structure? The Intern files are in a completely separate directory from these unit tests; I am not invoking intern-runner from a common parent directory for both the Intern libraries and the unit test files (and the product files they are testing).
I can create a diagram to illustrate the file/directory structure, if that is important. Note that I did change the Intern structure a bit, like:
<Dir_123>\intern\intern-2.2.2\bin\intern-runner.js
<Dir_123>\intern\intern-2.2.2\lib\<all_the_usual>
<Dir_123>\intern\intern-2.2.2\node_modules\<all_the_usual>
<Dir_123>\intern\.bin\intern-runner.cmd
i.e., what I had changed was to insert an extra "intern-2.2.2" directory after "intern", and the ".bin" directory containing intern-runner.cmd is a peer of "intern-2.2.2". Hope this is not confusing. :(
And note that the "proxyUrl" config property is the URL at which the unit test files and product files are available from the web server. Am I doing this right by configuring proxyUrl for this purpose? If I omit it, nothing runs, because the default used is localhost:9000. I see in the "Configuring Intern" article on GitHub that proxyUrl is "the URL to the instrumentation proxy," but I don't really understand what that means.

It looks like you're making pretty good progress. Your directory structure is a bit non-standard (any particular reason for that?), but that shouldn't be a show-stopper. The problem you're seeing is probably due to a proxy misconfiguration. Intern is loading the test client and your unit tests, but the code in the browser is unable to communicate test results back to Intern.
As you mentioned, the proxyUrl parameter is the URL at which Intern's instrumenting proxy can be found. The "instrumenting proxy" is basically just an HTTP server that Intern runs to serve test files and to receive information from browsers under test. (It also instruments JS files as it serves them to gather code coverage data, hence the "instrumenting" part of the name.) By default, it's at localhost:9000. That means a browser under test running on localhost can GET or POST to localhost:9000 to talk to Intern.
You can also run Intern behind another server, like nginx, and have that server proxy requests to Intern. In that case, you need to 1) set Intern's proxyUrl to the address of the proxying server, and 2) set up proxying rules in the server to pass requests back to Intern at localhost:9000.
Intern also has a proxyPort parameter to control the port the instrumenting proxy serves on. The proxy listens at localhost:<proxyPort>, where proxyPort defaults to 9000. If tests are talking to Intern's proxy directly (with no intermediate nginx or Apache or anything), proxyPort will be the same as the port in proxyUrl. If an intermediate server is being used, the two can have different values.
When intern-runner runs unit tests, it tells the test browser to GET <proxyUrl>/client.html?config=.... Since you have some external server running and you've set proxyUrl to that server's address, that server will serve client.html and the other relevant Intern files, allowing the unit tests to run. However, when the unit tests are finished and the browser attempts to communicate this back to Intern at proxyUrl, it's going to fail unless you've configured the external server to proxy requests back to localhost:<proxyPort>.
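For example, keeping your external server at localhost:8010 as the entry point, the relevant bits of the config would look something like this. This is only a sketch of the two settings, not a drop-in fix; your web server at 8010 still needs a rule that forwards Intern traffic on to localhost:9000.
// Sketch only: the browser talks to the external server at proxyUrl, while
// Intern's own instrumenting proxy listens on proxyPort (default 9000).
// Results and coverage only reach intern-runner if the external server
// forwards those requests to localhost:9000.
define({
    proxyUrl: "http://localhost:8010/",
    proxyPort: 9000,
    // ...the rest of your existing configuration (suites, loader, environments, etc.)
});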

Related

How can I run some code in Node prior to running a browser test with Intern?

With Intern, how can I run some setup code in Node prior to running browser tests, but not when running Node tests? I know that I could do that outside of Intern completely, but is there anything that's a part of Intern that could handle that?
For a more concrete example: I'm running tests for an HTTP library that communicates with a Python server. When running in Node, I can run spawn("python", ["app.py"]) to start the server. However, in the browser, I would need to run that command before the browser begins running the tests.
Phrased another way: is there a built-in way with Intern to run some code in the Node process prior to launching the browser tests?
By default, Intern will run the plugins configured for node regardless of which environment you're running in.
So, you could create a plugin that hooks into the runStart and runEnd events like this:
intern.on("runStart", () => {
    console.log("Starting...");
    // Setup code here
});

intern.on("runEnd", () => {
    console.log("Ending...");
    // Teardown code here
});
These handlers will run inside the Node process, and thus have access to all the available Node APIs.
Additionally, you can detect which environments are being tested by looking at intern.config.environments:
{
    environments: [
        {
            browserName: 'chrome',
            browserVersion: undefined,
            version: undefined
        }
    ]
}
By looking at the environments, you can determine whether or not you need to run your setup code.
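For example, a plugin along these lines would start the Python server from the example above only when a browser environment is configured. This is just a sketch: the browserName check is an assumption about how your environments are named, so adjust it to match your config.
// Sketch only: start the Python backend before the tests and stop it afterwards.
// Assumes the runStart/runEnd events shown above; the browserName !== 'node'
// check is an assumption about how browser environments are named.
const { spawn } = require("child_process");

let server;

intern.on("runStart", () => {
    const hasBrowserTests = intern.config.environments.some(
        (env) => env.browserName && env.browserName !== "node"
    );
    if (hasBrowserTests) {
        // This runs in the Node process, so all Node APIs are available
        server = spawn("python", ["app.py"]);
    }
});

intern.on("runEnd", () => {
    if (server) {
        server.kill();
    }
});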

intern serve command is not working after reporters added in intern.js

I have installed Intern globally, and the 'intern serve' command works fine from the command prompt. However, after adding the reporters property to intern.js, running 'intern serve' does nothing; the command gets stuck as below, saying "Running runner tests…".
D:\start>intern serve
Running runner tests…
Here is my intern.js file
// Learn more about configuring this file at <https://theintern.github.io/intern/#configuration>.
// These default settings work OK for most people. The options that *must* be changed below are the packages, suites,
// excludeInstrumentation, and (if you want functional tests) functionalSuites
define({
    // Default desired capabilities for all environments. Individual capabilities can be overridden by any of the
    // specified browser environments in the `environments` array below as well. See
    // <https://theintern.github.io/intern/#option-capabilities> for links to the different capabilities options for
    // different services.
    //
    // Note that the `build` capability will be filled in with the current commit ID or build tag from the CI
    // environment automatically
    capabilities: {
        'browserstack.selenium_version': '2.45.0'
    },
    // Browsers to run integration testing against. Options that will be permutated are browserName, version, platform,
    // and platformVersion; any other capabilities options specified for an environment will be copied as-is. Note that
    // browser and platform names, and version number formats, may differ between cloud testing systems.
    environments: [
        { browserName: "chrome", platform: "WINDOWS" }
    ],
    // Maximum number of simultaneous integration tests that should be executed on the remote WebDriver service
    maxConcurrency: 2,
    // Name of the tunnel class to use for WebDriver tests.
    // See <https://theintern.github.io/intern/#option-tunnel> for built-in options
    /*tunnel: 'BrowserStackTunnel',*/
    // Configuration options for the module loader; any AMD configuration options supported by the AMD loader in use
    // can be used here.
    // If you want to use a different loader than the default loader, see
    // <https://theintern.github.io/intern/#option-useLoader> for more information.
    loaderOptions: {
        // Packages that should be registered with the loader in each testing environment
        packages: [
            { name: "dojo", location: "node_modules/dojo" },
            { name: "dojox", location: "node_modules/dojox" },
            { name: "dijit", location: "node_modules/dijit" },
            { name: "showcase", location: "dist/src/showcase" },
            { name: "common", location: "dist/src/common" },
            { name: "technical-topics", location: "dist/src/technical-topics" }
        ]
    },
    // Unit test suite(s) to run in each browser
    suites: [ 'tests/**/*.js' ],
    tunnel: 'NullTunnel',
    // Functional test suite(s) to execute against each browser once unit tests are completed
    functionalSuites: [ /* 'myPackage/tests/functional' */ ],
    reporters: [
        { id: 'Runner', filename: 'tests/reporters/Runner/report.html' }
    ],
    // A regular expression matching URLs to files that should not be included in code coverage analysis. Set to `true`
    // to completely disable code coverage.
    excludeInstrumentation: /^(?:tests|node_modules)\//
});
There are several issues here:
The message "Running runner tests…" is only emitted by the Combined reporter, not the Runner reporter.
The Runner reporter is intended to output to the console; giving it an HTML filename will just output text to an HTML file (it won't generate an HTML report).
The reporter config doesn't actually do anything when you're running intern serve, because Intern in serve mode is just serving files (not running tests).
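If the goal is console output plus an HTML coverage report when Intern actually runs the tests (via intern-runner rather than intern serve), the reporters config would look more like this. This is only a sketch: the LcovHtml directory option is my assumption, so check the Intern 3 reporter docs for the exact option name.
reporters: [
    // Runner writes progress and results to the console
    'Runner',
    // LcovHtml generates an HTML coverage report; 'directory' is an assumed option name
    { id: 'LcovHtml', directory: 'html-report' }
],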
Is your intern.js file in tests\? Intern assumes the project directory looks like:
D:\start
    tests\
        intern.js
    node_modules\
        intern\
When running intern serve in D:\start, intern will look for its config file in tests\intern.js.

how to start angular2 inside nodeJS application

This seemed like an easy question, but it has my brain blocked; I hope to get your help.
I am using webpack to start an Angular 2 app, and it is fine: I just run npm start, which in reality runs this command to boot the Angular 2 project:
webpack-dev-server --inline --progress --port 8080
Now everything is fine until I want to start actual development. Our actual development uses Node.js, and I want Node.js to boot the whole Angular 2 project. I know I can use npm build to build the Angular project and then load that static page from Node.js/Express. That is OK for deployment or a production environment. But what should I do for development?
As mentioned above, I am using webpack-dev-server to boot the ng2 project, which reads a lot of webpack configuration (such as the TypeScript loader and Sass loader) and uses port 8080 by default. But my Node.js project is started with "node app" and uses port 3000. Obviously, this causes a cross-domain issue.
So is it possible to have Node.js boot my local ng2 development environment so as to avoid the cross-domain issue? And if I use Node.js, where does the webpack setup go?
Hope to hear your suggestions.
You can try two solutions. In both you need to run two servers: webpack's and yours. Both approaches work with any backend.
Enable CORS on your server's responses (a sketch is shown after the proxy example below). This might be tricky, and it does not really replicate the production behaviour (folder structure, URL paths, etc.).
Proxy all requests for non-webpack output to your server. This takes only a couple of lines in a webpack config, and it is really good because you still see your non-Angular pages, content, static files, etc. as you would see them in production. What is also good, you can specify your production (stage, develop, whatever) server as the target and simply have no backend running on your machine at all.
Here is how to do it for the second option:
devServer: {
    port: 8080,
    // `headers` is shorthand for a headers object defined elsewhere in the config;
    // it can be omitted if you don't need extra headers on proxied requests
    proxy: { '**': { target: 'http://localhost:3000', secure: false, headers } }
},
This will start the Angular dev server on port 8080 and proxy any request for a file not generated by webpack to the target.
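For the first option, here is a minimal sketch of enabling CORS on the Express side; the allowed origin and headers are assumptions, so tighten them for your setup.
// Sketch only: allow the webpack dev server origin (assumed to be http://localhost:8080)
// to call the Express app running on port 3000.
const express = require('express');
const app = express();

app.use((req, res, next) => {
    res.setHeader('Access-Control-Allow-Origin', 'http://localhost:8080');
    res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS');
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization');
    next();
});

// ...your existing routes go here...

app.listen(3000);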

Hooking up protractor E2E tests with node-replay

I've been messing around with node-replay (https://github.com/assaf/node-replay) to see if there is a way I can hook it up with my protractor tests to get my tests to run with recorded data (so they run quicker and not so damn slow).
I installed node-replay as instructed on the GitHub page. Then in my test file I include some node-replay code as follows:
describe('E2E: Checking Initial Content', function(){
    'use strict';

    var ptor;
    var Replay = require('replay');
    Replay.localhost('127.0.0.1:9000/');

    // keep track of the protractor instance
    beforeEach(function(){
        browser.get('http://127.0.0.1:9000/');
        ptor = protractor.getInstance();
    });
and my config file looks like this:
exports.config = {
    seleniumAddress: 'http://0.0.0.0:4444/wd/hub',
    // Capabilities to be passed to the webdriver instance.
    capabilities: {
        'browserName': 'chrome'
    },
    // Spec patterns are relative to the current working directory when
    // protractor is called.
    specs: ['test/e2e/**/*.spec.js'],
    // Options to be passed to Jasmine-node.
    jasmineNodeOpts: {
        showColors: true,
        defaultTimeoutInterval: 300000
    }
};
Then I try to run my tests with grunt by saying:
REPLAY=record grunt protractor
But I get tons of failures. grunt protractor was running all of my tests fine with no failures before I added node-replay, so maybe my logic in how I've connected these two is flawed. Any suggestions as to what I'm missing?
1) E2E: Sample test 1
Message:
UnknownError:
Stacktrace:
UnknownError:
at <anonymous>
The problem is that HTTP requests to 127.0.0.1:9000 are made by the browser, not by your Node.js Protractor code, so node-replay won't work in this infrastructure scenario.
There is ongoing discussion about running Protractor tests without a backend here, and some folks rely on mocking the backend client-side with Protractor's addMockModule, in a similar way to what they already do for Karma unit tests.
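For reference, that approach looks roughly like this. It is only a sketch, assuming an Angular 1 app with angular-mocks (ngMockE2E) available in the page; the module name and URL are made up.
// Sketch only: replace the real backend with a mock $httpBackend for the app under test.
// Assumes angular-mocks (ngMockE2E) is served to the page; 'httpMocks' and '/api/items' are made up.
browser.addMockModule('httpMocks', function() {
    angular.module('httpMocks', ['ngMockE2E'])
        .run(function($httpBackend) {
            // Canned response instead of a real HTTP round trip
            $httpBackend.whenGET('/api/items').respond(200, [{ id: 1, name: 'stub' }]);
            // Let everything else (templates, static assets) through
            $httpBackend.whenGET(/.*/).passThrough();
        });
});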
Personally I don't agree with mocking for e2e, since the whole point of end-to-end testing is to exercise the whole real app.
HTTP replay may not be such a bad idea to make things go faster.
Ideally, what I hoped to find was a tool that works like this:
Run a proxy capture http server the first time for later replay:
capture 127.0.0.1:9000 --into-port 3333
Run your e2e tests against a baseUrl = '127.0.0.1:3333';. All requests/responses will be cached/saved.
Serve the cached content from now on:
replay --at-port 3333
Run your e2e tests again, still with baseUrl on port 3333. This time they should run faster since it's serving cached content.
Couldn't find it, let me know if you have better luck!

What's the purpose of gruntjs server task?

I'm learning how to properly use gruntjs. I found the server task but I don't get the point of it.
Can I use the server task to map concatenated/minified files so I can test my application (which uses Backbone.js) without moving or placing source files in a web server root? Without Apache, for example.
If not, what is the intended use of the server task?
The server task is used to start a static server with the base path set as the web root.
Example: Serve ./web-root as http://localhost:8080/:
grunt.initConfig({
    server: {
        port: 8080,
        base: './web-root'
    }
});
It will function similarly to an Apache server, serving up static files based on their path, but it uses the http module via connect to set it up (source).
If you need it to serve more than just static files, then you'll want to consider defining a custom server task:
grunt.registerTask('server', 'Start a custom web server.', function() {
    grunt.log.writeln('Starting web server on port 1234.');
    require('./server.js').listen(1234);
});
And custom server instance:
// server.js
var http = require('http');

module.exports = http.createServer(function (req, res) {
    // ...
});
Can I use the server task mapping concatenated/minified files to test my application [...]
Concatenation and minification have their own dedicated tasks -- concat and min -- but they could be used along with a server task to accomplish all three.
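A rough sketch of wiring all three together, assuming the old built-in grunt 0.3-style tasks; the file paths are placeholders.
// Sketch only: grunt 0.3-era built-in tasks; src/dest paths are placeholders.
grunt.initConfig({
    concat: {
        dist: {
            src: ['src/**/*.js'],
            dest: 'dist/app.js'
        }
    },
    min: {
        dist: {
            src: ['dist/app.js'],
            dest: 'dist/app.min.js'
        }
    },
    server: {
        port: 8080,
        base: './dist'
    }
});

// Concatenate, minify, then serve the result
grunt.registerTask('default', 'concat min server');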
Edit
If you want the server (and grunt) to keep running for a while, you could define the task as asynchronous, completing on the server's 'close' event:
grunt.registerTask('server', 'Start a custom web server.', function() {
    var done = this.async();
    grunt.log.writeln('Starting web server on port 1234.');
    require('./server.js').listen(1234).on('close', done);
});
The server task is now the connect task and it's included in the grunt-contrib-connect package.
The connect task starts a connect web server.
Install this plugin with this command:
npm install grunt-contrib-connect --save-dev
Note: --save-dev includes the package in your devDependencies, see https://npmjs.org/doc/install.html
Once the plugin has been installed, it may be enabled inside your Gruntfile with this line of JavaScript:
grunt.loadNpmTasks('grunt-contrib-connect');
Run this task with the grunt connect command.
Note that this server only runs as long as grunt is running. Once grunt's tasks have completed, the web server stops. This behavior can be changed with the keepalive option, and can be enabled ad-hoc by running the task like grunt connect:targetname:keepalive. targetname is equal to "server" in the code sample below.
In this example, grunt connect (or more verbosely, grunt connect:server) will start a static web server at http://localhost:9001/, with its base path set to the www-root directory relative to the Gruntfile, and any tasks run afterwards will be able to access it.
// Project configuration.
grunt.initConfig({
    connect: {
        server: {
            options: {
                port: 9001,
                base: 'www-root'
            }
        }
    }
});
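And if you want the server to stay up after grunt's other tasks have finished, the keepalive option mentioned above can be set in the same options block:
grunt.initConfig({
    connect: {
        server: {
            options: {
                port: 9001,
                base: 'www-root',
                // Keep the server running after the other grunt tasks complete
                keepalive: true
            }
        }
    }
});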
The point of the server task is to have quick and dirty access to static files for testing. grunt server IS NOT a production server environment. It really should only be used during the grunt lifecycle to get static testing assets to the testing environment. Use a full-fledged server, possibly controlled by the NPM lifecycle scripts, for production environments.
