I'm testing the performance of a custom transformer with Jest. Currently, the transformer does nothing but return the code that it gets from Jest. The transformer has implemented the getCacheKey function.
Here's the entire code for the transformer:
const crypto = require('crypto');

function process(src, path, config, transformOptions) {
    return src;
}
exports.process = process;

function getCacheKey(fileData, filePath, configStr, options) {
    return crypto.createHash('md5')
        .update(fileData + filePath + configStr, 'utf8')
        .digest('hex');
}
exports.getCacheKey = getCacheKey;
The Jest config in package.json is as follows:
"jest": {
    "transform": {
        "^.+\\.tsx?$": "<rootDir>/ts-transformer.js"
    },
    "testMatch": [
        "<rootDir>/test-jest/**/*.ts"
    ],
    "moduleFileExtensions": [
        "ts",
        "tsx",
        "js",
        "json"
    ]
}
When testing this setup with Jest, runs take the same amount of time with and without --no-cache (around 9 seconds).
When testing this setup with Mocha, the first run takes around 7 seconds and subsequent runs take around 4 seconds.
In both cases (with Jest and Mocha), the subsequent runs were measured without changing any source or test file.
My questions:
Shouldn't subsequent Jest runs be faster due to caching?
Is there something in the transformer that is preventing an improvement in the testing duration?
Is there a minimum overhead that Jest incurs which is clouding this issue?
It might be faster to update the pieces (fileData, filePath, configStr) separately, so the file contents do not have to be copied during the concatenation.
const crypto = require('crypto');

function getCacheKey(fileData, filePath, configStr, options) {
    const hash = crypto.createHash('md5');
    hash.update(fileData);
    hash.update(filePath);
    hash.update(configStr);
    return hash.digest('hex');
}
Note: If encoding is not provided, and the data is a string, an encoding of 'utf8' is enforced.
I have this Node.js app which is going to be quite huge.
First I created a file named
user.account.test.js
In it I began putting all the possible tests (positive and negative) for the usual flow: signup, signin, activation, restore password, etc.
I ended up with a file that is over 600 lines. Now I'm going to create a lot more tests, and having everything in the same file seems silly to me.
I could not really find resources that explain how to split the tests into several test files.
I'm having a nightmare with the new test file I created for the other tests; I mostly get timeout issues.
And a lot of things look strange. For example:
In the user.account.test.js I had this line:
beforeAll(async () => {
    await mongoose.connect(process.env.MONGODB_TEST_URI);
});
In the second test file, named user.step2.test.js, I was unsure whether I also had to include the same function.
In the end I did, and incredibly, that file knew nothing about "process.env.MONGODB_TEST_URI".
What is the best practice when you want to split tests into multiple files?
OK, the solution seems to be adding the --runInBand flag; then the tests run sequentially.
I write one test file per route.
For example, each test file has something like this:
import request from 'supertest'
import { app } from '../../app'

it('create new user', async () => {
    return request(app)
        .post('/api/users')
        .send({ account: '123', password: '123' })
        .expect(201)
})
and I create a test setup file:
beforeAll(async () => {
    // init your database connection here
})

beforeEach(() => {
    // delete all data in your database
})

afterAll(async () => {
    // close your db connection
})
and in package.json:
"jest": {
    "preset": "ts-jest",
    "testEnvironment": "node",
    "setupFilesAfterEnv": [
        "./src/test/setup.ts"
    ]
},
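For completeness, here is a sketch of what that shared ./src/test/setup.ts might contain. The mongoose calls and the dotenv.config() line are my assumptions, not the answerer's actual file; loading env vars once in the setup file is one common fix when a second test file cannot see MONGODB_TEST_URI:

```javascript
// Hypothetical ./src/test/setup.ts — a sketch under the assumptions above,
// not a confirmed implementation. Runs inside Jest, so beforeAll/beforeEach/
// afterAll are provided by the test environment.
import dotenv from 'dotenv';
import mongoose from 'mongoose';

dotenv.config(); // load MONGODB_TEST_URI once, for every suite (assumption)

beforeAll(async () => {
    await mongoose.connect(process.env.MONGODB_TEST_URI);
});

beforeEach(async () => {
    // wipe every collection so suites don't depend on each other's data
    const collections = await mongoose.connection.db.collections();
    for (const collection of collections) {
        await collection.deleteMany({});
    }
});

afterAll(async () => {
    await mongoose.connection.close();
});
```

Because every test file listed in setupFilesAfterEnv shares this file, no individual test file needs its own beforeAll/afterAll for the database connection.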
I started an Office Web Add-in with TypeScript & React project by following this tutorial: https://github.com/OfficeDev/office-js-docs-pr/blob/master/docs/includes/file-get-started-excel-react.md . All taskpane functions and pages work properly, but functions on the function-file page are not executed.
By deleting code, I found that Object.defineProperty(exports, "__esModule", { value: true }); is one of the lines in the compiled function-file.js causing the problem. Whenever it is present, no function in the file is executed. Fiddler shows the script is correctly loaded in Excel without any warning, and the status bar shows "[add-in name] is working on your [function name]".
This line of code is generated by the TypeScript compiler, in this case for loading the Node module '#microsoft/office-js-helpers'. I tried to modify the tsconfig.json file to avoid generating that line, but then the import of '#microsoft/office-js-helpers' fails. In addition, Webpack 4 adds webpackBootstrap code that blocks functions in this file. At this point, I can only avoid any imports in function-file.ts and run 'tsc' after building the project with Webpack.
My question is: what is the correct way to set up this project so function-file.js does not contain any code that blocks its functions from being executed?
If there is no clear answer, then at least: why does this line of code cause a problem when other pages work fine?
The following is my tsconfig.json, which can avoid that line but cannot load any module:
{
    "compilerOptions": {
        "target": "es5",
        "module": "es2015",
        "moduleResolution": "node",
        "lib": ["es2015", "dom"],
        "typeRoots": ["node_modules/#types"]
    }
}
I manually edited the compiled function-file.js into two versions:
Object.defineProperty(exports, "__esModule", { value: true });
(function () {
    Office.initialize = function () { };
})();
function writeText(event) {
    Office.context.document.setSelectedDataAsync('test');
    event.completed();
}
VS
(function () {
    Office.initialize = function () { };
})();
function writeText(event) {
    Office.context.document.setSelectedDataAsync('test');
    event.completed();
}
The first one has this problem whereas the second one doesn't.
With some hints from a colleague who used to work on JavaScript, shared during a lunch talk, I made some progress calling functions in function-file.ts. I hope the path that got this working helps other people suffering the same pain as I did, and still do, on this project.
First of all, once I got function-file.js working properly, I noticed two different behaviours when a function does not work:
the status bar shows "[add-in name] is working on your [function name]" and stays there; I believe the function is called but the line event.completed() is never reached;
the status bar flashes the same message and becomes blank, which indicates the function was not even found.
Please correct me if there is a better way to diagnose this file.
The original Yeoman generated Webpack configuration of function-file.html is something like this:
new HtmlWebpackPlugin({
    title: 'demo',
    filename: 'function-file/function-file.html',
    template: '../function-file/function-file.html',
    chunks: ['function-file']
}),
In order to use any module, the 'vendor' entry (not necessary for my custom modules, but needed by 'office-js-helpers'?) and the 'polyfills' entry need to be included in chunks as well.
My Webpack 4 configuration is:
new HtmlWebpackPlugin({
    title: "demo",
    filename: "function-file/function-file.html",
    template: "../function-file/function-file.html",
    chunks: ["babel-polyfill", "function-file/function-file"]
}),
The last step is making sure that functions declared in function-file.ts can be found: asking Webpack to export global functions from function-file.ts. I am still not sure whether this is a hack or a reasonable TypeScript development practice.
Sample function-file.ts:
import * as OfficeHelpers from '#microsoft/office-js-helpers';

(() => {
    Office.initialize = () => {};
})();

declare global {
    namespace NodeJS {
        interface Global {
            writeText: (event: Office.AddinCommands.Event) => void;
        }
    }
}

global.writeText = (event: Office.AddinCommands.Event) => {
    Office.context.document.setSelectedDataAsync('test');
    event.completed();
};
Notice: even though office-js-helpers is imported, some of its functions are still not working. I tested my custom modules, and they work properly.
I really wish there were some function-file examples for Node.js-hosted React & TypeScript Office Web Add-in projects, as the detailed configuration is really different from an ordinary Node.js + JavaScript project.
I have been experimenting with Intern as a testing platform for our code base, which has a number of oddities to it. We basically load outside of the Dojo loader and core Dojo files. This means we are outside the release process and lose all the goodness that brings in terms of testing.
I have taken on the task of developing a tool chain that will manage the code (linting, build) and finally testing. I have got to grips with most aspects of unit testing and functional tests, but testing 3rd-party APIs has really had me scratching my head.
From reading the docs, I should be able to use Nock to mock the API. I have tried many different examples to get a basic hello world working, with varying degrees of success.
What I am noticing is that Nock plays nicely when you are using Node natively, but it all falls to pieces once Dojo is brought into the equation. I have tried axios and get, with tdd and bdd, all of which fail miserably.
I have had a breakthrough moment with the below code, which will allow me to test a mock API with Nock successfully.
I have seen other examples taking a TDD approach to this, but when I use the define pattern, there is no done() to signify that the async process is complete.
While the below does work, I feel I have had to jump through many hoops to get to this point, e.g. the lack of the core Node module util.promisify (I am currently running Node v9.10.x) and the lack of support for import et al., all of which makes adapting examples very tough.
I am new to Intern, and I wonder if there is a preferred or standard approach I am missing which would make this simpler. I honestly prefer the TDD/BDD pattern visually, but if the below is my only option for my setup, I will accept that.
define([
    'require',
    'dojo/node!nock',
    'dojo/node!http',
    'dojo/node!es6-promisify'
], function (require, nock, http, util) {
    const { registerSuite } = intern.getInterface('object');
    const { assert } = intern.getPlugin('chai');
    const { get } = http;
    const { promisify } = util;
    const _get = promisify(get); // unused below; left over from my promisify experiments

    registerSuite('async demo', {
        'async test'() {
            const dfd = this.async();
            nock('http://some-made-up-service.com')
                .get('/request')
                .reply(200, { a: 'Hello world!' });

            http.request({
                host: 'some-made-up-service.com',
                path: '/request',
                port: 80
            }, function (res) {
                res.on('data', dfd.callback((data) => {
                    const toJSON = JSON.parse(data.toString());
                    assert.strictEqual(toJSON.a, 'Hello world!');
                }));
            }).end();
        }
    });
});
My config is below. I am sure some entries in the file are unnecessary, but I am just figuring out what works at the moment.
{
    "node": {
        "plugins": "node_modules/babel-register/lib/node.js"
    },
    "loader": {
        "script": "dojo",
        "options": {
            "packages": [
                { "name": "app", "location": "asset/js/app" },
                { "name": "tests", "location": "asset/js/tests" },
                { "name": "dojo", "location": "node_modules/dojo" },
                { "name": "dojox", "location": "node_modules/dojox" },
                { "name": "dijit", "location": "node_modules/dijit" }
            ],
            "map": {
                "plugin-babel": "node_modules/systemjs-plugin-babel/plugin-babel.js",
                "systemjs-babel-build": "node_modules/systemjs-plugin-babel/systemjs-babel-browser.js"
            },
            "transpiler": "plugin-babel"
        }
    },
    "filterErrorStack": false,
    "suites": [
        "./asset/js/common/sudo/tests/all.js"
    ],
    "environments": ["node", "chrome"],
    "coverage": "asset/js/common/sudo/dev/**/*.js"
}
If you're using Dojo's request methods, you can use Dojo's request registry to set up mocking, which can be a bit easier to deal with than nock when working with Dojo. Overall, though, the process is going to be similar to what's in your example: mock a request, make a request, and asynchronously resolve the test when the request completes and assertions have been made.
Regarding util.promisify, that's present in Node v8+, so you should be able to use it in 9.10.
Regarding tdd vs bdd, assuming you're referring to Intern test interfaces (although it sounds like you may be referring to something else?), they all support the same set of features. If you can do something with the "object" interface (registerSuite), you can also do it with the "tdd" (suite and test) and "bdd" (describe and it) interfaces.
Regarding the lack of support for import and other language features, that's dependent on how the tests are written rather than a function of Intern. If tests need to run in the Dojo loader, they'll need to be AMD modules, which means no import. However, tests can be written in modern ES6 and run through the TypeScript compiler or babel and emitted as AMD modules. That adds a build step, but at least tests can be written in a more modern syntax.
Note that no node-specific functionality (nock, promisify, etc.) will work in a browser.
I have a gulp task that I would like to run on multiple sets of files. My problem is pretty much similar to what is described here except that I define my sets of files in an extra config.
What I've come up with so far looks like the following:
config.json
{
    "files": {
        "mainScript": [
            "mainFileA.js",
            "mainFileB.js"
        ],
        "extraAdminScript": [
            "extraFileA.js",
            "extraFileB.js"
        ]
    }
}
gulpfile.js
var config = require('./config.json');
...
gulp.task('scripts', function() {
    var features = [],
        dest = (argv.production ? config.basePath.compile : config.basePath.build) + '/scripts/';

    for (var feature in config.files) {
        if (config.files.hasOwnProperty(feature)) {
            features.push(gulp.src(config.files[feature])
                .pipe(plumper({
                    errorHandler: onError
                }))
                .pipe(jshint(config.jshintOptions))
                .pipe(jshint.reporter('jshint-stylish'))
                .pipe(sourcemaps.init())
                .pipe(concat(feature + '.js'))
                .pipe(gulpif(argv.production, uglify()))
                .pipe(sourcemaps.write('.'))
                .pipe(gulp.dest(dest))
            );
        }
    }

    return mergeStream(features);
});
My problem is that this doesn't seem to work. The streams are not combined, or at least nothing really happens. A while ago others ran into a similar problem (see here), but even though it should have been fixed, it's not working for me.
By the way, I've also tested merging the streams in these ways:
return es.merge(features)
return es.merge.apply(null, features)
And if I just run the task on a single set of files it works fine.
Motivation
The reason why I want to do this is that at some point, concatenating and minifying ALL scripts into one final file doesn't make sense when the sheer number of files is too large. Also, sometimes there is no need to load everything at once. For example, the scripts related to an admin interface don't need to be loaded by every visitor.
We've been using Jasmine and RequireJS successfully together for unit testing, and are now looking to add code coverage, and I've been investigating Blanket.js for that purpose. I know that it nominally supports Jasmine and RequireJS, and I'm able to successfully use the "jasmine-requirejs" runner on GitHub, but this runner is using a slightly different approach than our model -- namely, it loads the test specs using a script tag in runner.html, whereas our approach has been to load the specs through RequireJS, like the following (which is the callback for a requirejs call in our runner):
var jasmineEnv = jasmine.getEnv();
jasmineEnv.updateInterval = 1000;

var htmlReporter = new jasmine.TrivialReporter();
var jUnitReporter = new jasmine.JUnitXmlReporter('../JasmineTests/');
jasmineEnv.addReporter(htmlReporter);
jasmineEnv.addReporter(jUnitReporter);

jasmineEnv.specFilter = function (spec) {
    return htmlReporter.specFilter(spec);
};

var specs = [];
specs.push('spec/models/MyModel');
specs.push('spec/views/MyModelView');

$(function () {
    require(specs, function () {
        jasmineEnv.execute();
    });
});
This approach works fine for plain unit testing, if I don't have blanket or jasmine-blanket as dependencies of the function above. If I add them (with require.config paths and shim), I can verify that they're successfully fetched, but all that appears to happen is that I get jasmine-blanket's overload of jasmine.getEnv().execute, which simply prints "waiting for blanket..." to the console. Nothing triggers the tests themselves to run anymore.
I do know that in our approach there's no way to provide the usual data-cover attributes, since RequireJS is doing the script loading rather than script tags, but I would have expected in this case that Blanket would at least calculate coverage for everything, not nothing. Is there a non-attribute-based way to specify the coverage pattern, and is there something else I need to do to trigger the actual test execution once jasmine-blanket is in the mix? Can Blanket be made to work with RequireJS loading the test specs?
I have gotten this working by requiring blanket-jasmine and then setting the options:
require.config({
    paths: {
        'jasmine': '...',
        'jasmine-html': '...',
        'blanket-jasmine': '...'
    },
    shim: {
        'jasmine': {
            exports: 'jasmine'
        },
        'jasmine-html': {
            exports: 'jasmine',
            deps: ['jasmine']
        },
        'blanket-jasmine': {
            exports: 'blanket',
            deps: ['jasmine']
        }
    }
});
require([
    'blanket-jasmine',
    'jasmine-html'
], function (blanket, jasmine) {
    blanket.options('filter', '...'); // data-cover-only
    blanket.options('branchTracking', true); // one of the data-cover-flags

    require(['myspec'], function () {
        var jasmineEnv = jasmine.getEnv();
        jasmineEnv.updateInterval = 250;

        var htmlReporter = new jasmine.HtmlReporter();
        jasmineEnv.addReporter(htmlReporter);
        jasmineEnv.specFilter = function (spec) {
            return htmlReporter.specFilter(spec);
        };
        jasmineEnv.addReporter(new jasmine.BlanketReporter());
        jasmineEnv.currentRunner().execute();
    });
});
The key lines are the addition of the BlanketReporter and the currentRunner().execute() call. The Blanket Jasmine adapter overrides jasmine.execute with a no-op that just logs a line, because it needs to halt execution until it has finished instrumenting the code.
Typically the BlanketReporter registration and the currentRunner execute would be done by the Blanket Jasmine adapter itself, but if you load blanket-jasmine through RequireJS, the event that starts the Blanket test runner never fires: it subscribes to the window.load event, which has already fired by the time blanket-jasmine is loaded. Therefore we need to add the reporter and execute the "currentRunner" ourselves, as the adapter would usually do.
This should probably be raised as a bug, but for now this workaround works well.