Coverage for files not loaded - Intern

How can I get essentially 0% coverage reported for files that are not loaded during tests? This would help me identify files I have forgotten to write tests for.

As of Intern 1.6 there is no way to do this out of the box, but you could write a custom reporter that extends one of the coverage reporters and, at the conclusion of testing (in the reporter's stop method), loads a list of all other files from the directory you are concerned about and adds empty coverage objects for them to the collector. Something like this:
define([
    'intern/lib/reporters/lcovhtml',
    'intern/dojo/topic'
], function (lcovhtml, topic) {
    var reporter = Object.create(lcovhtml);

    reporter.stop = function () {
        // getFiles and createCoverageForFile are placeholders you would
        // implement yourself (see the sketch below).
        var files = getFiles();
        for (var i = 0, file; (file = files[i]); ++i) {
            topic.publish('/coverage', createCoverageForFile(file));
        }
        lcovhtml.stop.apply(this, arguments);
    };

    // The module must return the reporter for Intern to pick it up.
    return reporter;
});
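For illustration, here is one way the createCoverageForFile placeholder might be implemented. This is only a sketch, assuming a Node.js environment and the istanbul package (which Intern uses internally for coverage); the file is instrumented purely to obtain a baseline coverage object with every execution count at zero.
// Hypothetical sketch of createCoverageForFile; assumes Node.js and istanbul.
var fs = require('fs');
var Instrumenter = require('istanbul').Instrumenter;

var instrumenter = new Instrumenter();

function createCoverageForFile(filename) {
    // Instrumenting the file yields its coverage skeleton; since the
    // instrumented code is never executed, all counts remain at zero.
    instrumenter.instrumentSync(fs.readFileSync(filename, 'utf8'), filename);

    var coverage = {};
    coverage[filename] = instrumenter.lastFileCoverage();
    return coverage;
}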

Related

Can Mocha tests load requirements once?

I think Mocha first loads every spec file and runs at least the "describe" bodies of each, even when no "it" was selected with ".only".
// Lines before the first "it" will run for every spec file,
// even if I don't mark any test with ".only"
var db = require("../../node/server/db"),
    should = require('should')
    ...;

describe("main describe...", function () {
    var user = {},
        apiRootUrl = "http://127.0.0.1:3000";

    user.nameSurname = "Cem Topkaya";
    kullanici = schema.AJV.validate(schema_name, user);

    describe("child describe", function () {
        it(....)
        it.only(....)
        it(....)
    });
});
I want to run only one spec file, not the others. Is there any way to prevent this?
If you give Mocha the full path of your test file, it will load just that file and no other:
$ mocha path/to/test.js
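For the record, the opposite direction also works: you can point Mocha at a directory, and adding the --recursive flag makes it descend into subfolders as well:
$ mocha path/to/tests --recursive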

Run a gulp task on multiple sets of files

I have a gulp task that I would like to run on multiple sets of files. My problem is quite similar to the one described here, except that I define my sets of files in a separate config.
What I've come up with so far looks like the following:
config.json
{
    "files": {
        "mainScript": [
            "mainFileA.js",
            "mainFileB.js"
        ],
        "extraAdminScript": [
            "extraFileA.js",
            "extraFileB.js"
        ]
    }
}
gulpfile.js
var config = require('./config.json');
...

gulp.task('scripts', function () {
    var features = [],
        dest = (argv.production ? config.basePath.compile : config.basePath.build) + '/scripts/';

    for (var feature in config.files) {
        if (config.files.hasOwnProperty(feature)) {
            features.push(gulp.src(config.files[feature])
                .pipe(plumper({
                    errorHandler: onError
                }))
                .pipe(jshint(config.jshintOptions))
                .pipe(jshint.reporter('jshint-stylish'))
                .pipe(sourcemaps.init())
                .pipe(concat(feature + '.js'))
                .pipe(gulpif(argv.production, uglify()))
                .pipe(sourcemaps.write('.'))
                .pipe(gulp.dest(dest))
            );
        }
    }

    return mergeStream(features);
});
My problem is that this doesn't seem to work: the streams are not combined, or at least nothing really happens. A while ago others ran into a similar problem (see here), but even though it should have been fixed, it is not working for me.
By the way, I have also tried merging the streams in these ways:
return es.merge(features)
return es.merge.apply(null, features)
And if I just run the task on a single set of files it works fine.
Motivation
The reason I want to do this is that at some point, concatenating and minifying ALL scripts into one final file stops making sense once the number of files grows too large. Also, sometimes there is no need to load everything at once; for example, the scripts for an admin interface don't need to be loaded by every visitor.
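For reference, here is a minimal sketch of the merging pattern in question, assuming the merge-stream package; building the merged stream with its add() method instead of passing an array is one variation worth trying (the file names are made up):
// Minimal sketch, assuming merge-stream; file names are hypothetical.
var gulp = require('gulp');
var mergeStream = require('merge-stream');

gulp.task('scripts', function () {
    var merged = mergeStream();

    // Adding each pipeline explicitly rules out problems with how
    // an array of streams might be handled.
    [['a1.js', 'a2.js'], ['b1.js', 'b2.js']].forEach(function (files) {
        merged.add(gulp.src(files).pipe(gulp.dest('build/scripts')));
    });

    // Returning the merged stream lets gulp wait for every pipeline.
    return merged;
});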

Can Blanket.js work with Jasmine tests if the tests themselves are loaded with RequireJS?

We've been using Jasmine and RequireJS successfully together for unit testing, and are now looking to add code coverage, and I've been investigating Blanket.js for that purpose. I know that it nominally supports Jasmine and RequireJS, and I'm able to successfully use the "jasmine-requirejs" runner on GitHub, but this runner is using a slightly different approach than our model -- namely, it loads the test specs using a script tag in runner.html, whereas our approach has been to load the specs through RequireJS, like the following (which is the callback for a requirejs call in our runner):
var jasmineEnv = jasmine.getEnv();
jasmineEnv.updateInterval = 1000;

var htmlReporter = new jasmine.TrivialReporter();
var jUnitReporter = new jasmine.JUnitXmlReporter('../JasmineTests/');

jasmineEnv.addReporter(htmlReporter);
jasmineEnv.addReporter(jUnitReporter);
jasmineEnv.specFilter = function (spec) {
    return htmlReporter.specFilter(spec);
};

var specs = [];
specs.push('spec/models/MyModel');
specs.push('spec/views/MyModelView');

$(function () {
    require(specs, function () {
        jasmineEnv.execute();
    });
});
This approach works fine for plain unit testing, as long as I don't add blanket or jasmine-blanket as dependencies of the function above. If I do add them (with require.config paths and shim), I can verify that they're successfully fetched, but all that appears to happen is that I get jasmine-blanket's override of jasmine.getEnv().execute, which simply prints "waiting for blanket..." to the console. Nothing triggers the tests themselves to run anymore.
I do know that in our approach there's no way to provide the usual data-cover attributes, since RequireJS is doing the script loading rather than script tags, but I would have expected in this case that Blanket would at least calculate coverage for everything, not nothing. Is there a non-attribute-based way to specify the coverage pattern, and is there something else I need to do to trigger the actual test execution once jasmine-blanket is in the mix? Can Blanket be made to work with RequireJS loading the test specs?
I got this working by requiring blanket-jasmine and then setting the options:
require.config({
    paths: {
        'jasmine': '...',
        'jasmine-html': '...',
        'blanket-jasmine': '...'
    },
    shim: {
        'jasmine': {
            exports: 'jasmine'
        },
        'jasmine-html': {
            exports: 'jasmine',
            deps: ['jasmine']
        },
        'blanket-jasmine': {
            exports: 'blanket',
            deps: ['jasmine']
        }
    }
});
require([
    'blanket-jasmine',
    'jasmine-html'
], function (blanket, jasmine) {
    blanket.options('filter', '...'); // equivalent of data-cover-only
    blanket.options('branchTracking', true); // one of the data-cover-flags

    require(['myspec'], function () {
        var jasmineEnv = jasmine.getEnv();
        jasmineEnv.updateInterval = 250;

        var htmlReporter = new jasmine.HtmlReporter();
        jasmineEnv.addReporter(htmlReporter);
        jasmineEnv.specFilter = function (spec) {
            return htmlReporter.specFilter(spec);
        };

        jasmineEnv.addReporter(new jasmine.BlanketReporter());
        jasmineEnv.currentRunner().execute();
    });
});
The key lines are the addition of the BlanketReporter and the currentRunner().execute() call. The Blanket jasmine adapter overrides jasmine.execute with a no-op that just logs a line, because it needs to halt execution until it has finished instrumenting the code.
Typically the BlanketReporter registration and the currentRunner execute call would be performed by the Blanket jasmine adapter itself, but if you load blanket-jasmine through RequireJS, the event that starts the Blanket test runner never fires: the adapter subscribes to the window.load event, which has already fired by the time blanket-jasmine is loaded. We therefore have to add the reporter and call currentRunner().execute() ourselves, just as the adapter would normally do.
This should probably be raised as a bug, but for now this workaround works well.

Best practice for minifying TypeScript modules

I'm using requirejs and AMD modules for my TypeScript project, with something like 20 different source files at the moment and likely to grow substantially. All of this works, but it's very slow to load all 20 files, so it would be better to have them minified. But because of how requirejs wants to load everything, it seems this will require keeping the modules in separate files - I don't think I can just take the generated module1.js and module2.js files, minify them into one file, and have requirejs load that without changing some code. (I could be wrong about this.)
The other way that I see to do this is to use the r.js file that requirejs provides to merge all the different files together in a way that still keeps requirejs happy. But r.js requires node.js, and I'd rather not introduce that as a dependency in my build process if there's any other way to do it.
So before I dive into this and try half a dozen different solutions - how are other folks approaching this with big projects?
What you could do is implement a thin RequireJS shim to use in a minified build. Depending on how much of the RequireJS API you use, you could get by with very little. For simplicity you could also use named modules.
Say you use RequireJS to load your modules while developing. When you want to make a minified build, you simply include a small loader at the top of the minified file.
If you have files app.js, foo.js and bar.js as follows:
//from app.js
define("app", ["foo", "bar"], function (foo, bar) {
    return {
        run: function () { alert(foo + bar); }
    };
});

//from foo.js
define("foo", [], function () {
    return "Hello ";
});

//from bar.js
define("bar", [], function () {
    return "World!";
});
And let's say you minify all those files together. At the top of the file you include the following shim:
//from your-require-shim.js
(function (exports) {
    var modules = {};

    var define = function (name, dependencies, func) {
        modules[name] = {
            name: name,
            dependencies: dependencies,
            func: func,
            result: undefined
        };
    };

    var require = function (name) {
        var module = modules[name];
        //if we have a cached result -> return it
        if (module.result) { return module.result; }

        var deps = [];
        //resolve all dependencies
        for (var i = 0, len = module.dependencies.length; i < len; i++) {
            var depName = module.dependencies[i];
            var dep = modules[depName];
            if (!dep.result) {
                //resolve the dependency
                require(depName);
            }
            deps.push(dep.result);
        }

        module.result = module.func.apply(this, deps);
        return module.result;
    };

    exports.require = require;
    exports.define = define;
}(window));
Then execute the module defined in app.js:
require("app").run();
Like in this fiddle.
It's a crude proof of concept, of course, but I'm sure you get the idea.
If you are using ASP.NET MVC 4, you can create a bundle that minifies a set of files or a folder when you deploy to production. You'll find more info on bundles here.

How to get nodeunit to detect and run tests included in subfolders?

I have the following folder structure for my nodeunit tests in a particular project:
/tests
/tests/basic-test.js
/tests/models/
/tests/models/model1-tests.js
/tests/models/model2-tests.js
My question is - how do I get nodeunit to automatically execute ALL of the tests in the tests folder, including the sub-directories contained within?
If I execute nodeunit tests it only executes basic-test.js and skips everything in the sub-folders by default.
Use make-based magic (or shell-based magic).
test:
	nodeunit $(shell find ./tests -name \*.js)
Here you're passing the result of running find ./tests -name \*.js to nodeunit, which should run all JavaScript tests recursively.
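The same shell-based approach also works directly on the command line, without make:
$ nodeunit $(find ./tests -name '*.js')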
Nodeunit allows you to pass in a list of directories from which to run tests. I used a package called diveSync, which synchronously and recursively loops over files and directories. I store all the directories in an array and pass it to nodeunit:
var diveSync = require("diveSync"),
    fs = require("fs"),
    nodeUnit = require('nodeunit'),
    directoriesToTest = ['test'];

diveSync(directoriesToTest[0], {directories: true}, function (err, file) {
    if (fs.lstatSync(file).isDirectory()) {
        directoriesToTest.push(file);
    }
});

nodeUnit.reporters.default.run(directoriesToTest);
While this is not an automatic solution like those described above, I have created a collector file like this:
allTests.js:
exports.registryTests = require("./registryTests.js");
exports.message = require("./messageTests.js");
When I run nodeunit allTests.js, it does run all the tests, and indicates the hierarchical arrangement as well:
✔ registryTests - [Test 1]
✔ registryTests - [Test 2]
✔ messageTests - [Test 1]
etc...
While creating a new unit test file requires adding it to the collector, that is an easy, one-time task, and I can still run each file individually. For a very large project, this also allows collectors that run more than one, but not all, tests, as sketched below.
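For example, a partial collector that runs only the model suites might look like this (the file names here are hypothetical):
// modelTests.js - runs only the model-related test suites
exports.model1 = require("./models/model1-tests.js");
exports.model2 = require("./models/model2-tests.js");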
I was looking for solutions for the same question. None of the presented answers fully suited my situation where:
I didn't want to have any additional dependencies.
I already had nodeunit installed globally.
I didn't want to manually maintain a list of test files.
So the final solution for me was to combine Ian's and mbmcavoy's ideas:
// nodeunit tests.js
const path = require('path');
const fs = require('fs');

// Add folders you don't want to process here.
const ignores = [path.basename(__filename), 'node_modules', '.git'];
const testPaths = [];

// Reads a dir, finding all the tests inside it.
const readDir = (dir) => {
    fs.readdirSync(dir).forEach((item) => {
        const thisPath = `${dir}/${item}`;
        if (
            ignores.indexOf(item) === -1 &&
            fs.lstatSync(thisPath).isDirectory()
        ) {
            if (item === 'tests') {
                // Tests dir found.
                fs.readdirSync(thisPath).forEach((test) => {
                    testPaths.push(`${thisPath}/${test}`);
                });
            } else {
                // Sub dir found.
                readDir(thisPath);
            }
        }
    });
};

readDir('.');

// Feed the tests to nodeunit.
testPaths.forEach((testPath) => {
    exports[testPath] = require(testPath);
});
Now I can run all my tests, new and old, with a single nodeunit tests.js command.
As you can see from the code, the test files must live inside tests folders, and those folders should not contain any other files.
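For illustration, a directory layout this script would pick up (the names are hypothetical):
/app
/app/tests
/app/tests/app-tests.js
/lib
/lib/tests
/lib/tests/util-tests.js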
