Profiling TS-based Jest tests

I have a TypeScript-based React project in which I am running Jest tests (also in TS). I can run the tests fine, but I am trying to profile the performance of some that take quite a long time to run. I have tried using Chrome DevTools to attach to the tests, which it does; however, it fails because the code is TS and not plain JS. Is there any way I can profile my tests individually to see where the performance issue is occurring? I'm using VS Code.

Instead of a React project, it was just a regular TypeScript library for me, but I bet this also works for your use case. I am leaving this here in case it's usable, or for future me.
The ONLY solution I found that worked was manually setting up the profiler v8-profiler-next:
import fs from 'fs';
import v8Profiler from 'v8-profiler-next';

// Type 1 generates the newer profile format that recent
// Chrome DevTools versions expect.
v8Profiler.setGenerateType(1);

const title = 'good-name';

describe('Should be able to generate with inputs', () => {
  v8Profiler.startProfiling(title, true);

  afterAll(() => {
    const profile = v8Profiler.stopProfiling(title);
    profile.export(function (error, result: any) {
      // If the file doesn't have the extension .cpuprofile,
      // Chrome's profiler tool won't like it.
      // To examine the profile:
      //   1. Navigate to chrome://inspect
      //   2. Click "Open dedicated DevTools for Node"
      //   3. Select the Profiler tab
      //   4. Load your file
      fs.writeFileSync(`${title}.cpuprofile`, result);
      profile.delete();
    });
  });

  test('....', async () => {
    // Add test
  });
});
This then gives you a .cpuprofile file you can load in Chrome's profiler, and it works fine with TypeScript.
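If you end up profiling several suites, the same calls can be wrapped in a small helper so the boilerplate lives in one place. This is a minimal sketch using only the v8-profiler-next API shown above; the helper name profileSuite is my own:

import fs from 'fs';
import v8Profiler from 'v8-profiler-next';

v8Profiler.setGenerateType(1);

// Call as the first statement inside a describe() block to profile
// everything that runs in that suite.
export function profileSuite(title: string): void {
  v8Profiler.startProfiling(title, true);
  afterAll(() => {
    const profile = v8Profiler.stopProfiling(title);
    profile.export((error, result) => {
      if (!error && result) {
        fs.writeFileSync(`${title}.cpuprofile`, result);
      }
      profile.delete();
    });
  });
}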

Related

Check in Nightwatch.js if an E2E test correctly saved to the database?

For a Meteor project with TypeScript and React I use Nightwatch testing, which works great:
https://github.com/arichter83/meteor-react-typescript-nightwatch
a.) Checking database results via Client
Now I want to check in the database whether the end-to-end test successfully added the data, and that turned out to be surprisingly difficult. I can go via the client and look in the Mongo.Collection (on GitHub):
browser
  .execute(function () {
    return (Meteor as any).connection._stores['links']._getCollection()
      .insert({ title: 'new link' })
  }, [], (result) => {
    const newid = result.value
    browser
      .assert.containsText('#' + newid, 'new link')
      .execute(function (newid) {
        return (Meteor as any).connection._stores['links']._getCollection()
          .remove({ _id: newid })
      }, [newid], () => {
        browser
          .assert.elementNotPresent('#' + newid)
      })
  })
With this approach it is quite difficult to use my existing models while interacting with Nightwatch.
b.) Checking database results in test
But I would rather use Nightwatch's unit-test capability in between; however, from the docs it seems that E2E and unit tests can't be mixed.
Furthermore, when importing my models in the test on the server:
import { Links } from '../../imports/api/links'
console.log(Links.findOne())
TypeScript throws an error that it can't resolve the Atmosphere package meteor/mongo, so @types/meteor seems not to be loaded (probably Meteor-specific):
Cannot find module 'meteor/mongo'
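One common way (an assumption about this setup, not a verified fix) to make specifiers like meteor/mongo resolve for tsc outside Meteor's own build is to load the ambient typings explicitly in tsconfig.json, provided @types/meteor is installed:

{
  "compilerOptions": {
    "types": ["meteor", "node"]
  }
}

@types/meteor ships declare module declarations for the meteor/* packages, so listing it under types makes them visible to the compiler.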
Questions
Is it generally advisable to check database results for E2E tests?
What is the most elegant way to do this with Nightwatch (+ Meteor)? (I also created a feature request there)
How can I use Meteor libraries in Nightwatch tests?
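Since the thread leaves part b) open, here is one pattern worth sketching (my own assumption, not an answer from the thread): skip the client round-trip and assert against the database directly from the test process with the plain mongodb driver. With a default `meteor run`, the development Mongo listens on localhost:3001 with database name meteor:

import { MongoClient } from 'mongodb';

// Port and database name are assumptions that hold for `meteor run`.
const client = await MongoClient.connect('mongodb://localhost:3001/meteor');
const link = await client.db().collection('links').findOne({ title: 'new link' });
// ... assert on `link`, then clean up:
await client.db().collection('links').deleteOne({ _id: link._id });
await client.close();

Because this bypasses Meteor's module system entirely, the 'meteor/mongo' resolution problem never comes up in the test process.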

Reusing the same Puppeteer instance in all Jest tests

Problem
I'm replacing CasperJS with Jest + Puppeteer. Putting everything in one file works great:
beforeAll(async () => {
  // get `page` and `browser` instances from puppeteer
});

describe('Test A', () => {
  // testing
});

describe('Test B', () => {
  // testing
});

afterAll(async () => {
  // close the browser
});
Now, I don't really want to keep everything in one file. It's harder to maintain and harder to run just part of the tests (say, just 'Test A').
What I've tried
I've looked at the Jest docs and read about setupScript. It would be perfect, but it runs before every test file. I don't want this because the Puppeteer setup takes quite a lot of time. I want to reuse the same browser instance and pay the setup cost only once, no matter how many test files I run.
So, I thought about:
// setup puppeteer
await require('testA')(page, browser, config);
await require('testB')(page, browser, config);
// cleanup
This solves modularization and reuses the same browser instance, but doesn't allow me to run tests separately.
Finally, I stumbled upon the possibility of creating a custom testEnvironment. This sounds great, but it isn't well documented, so I'm not even sure whether an env instance is created per test file or per Jest run. The stable API is also missing a setup method where I could set up Puppeteer (I'd have to do that in the constructor, which can't be async).
Why I'm asking
Since I'm new to Jest I might be missing something obvious. Before I dig deeper into this, I thought I'd ask here.
UPDATE (Feb 2018): Jest now has an official Puppeteer guide, featuring reusing one browser instance across all tests :)
It was already answered on Twitter, but let's post it here for clarity.
Since Jest v22 you can create a custom test environment which is async and has setup()/teardown() hooks:
import NodeEnvironment from 'jest-environment-node';

class CustomEnvironment extends NodeEnvironment {
  async setup() {
    await super.setup();
    await setupPuppeteer();
  }

  async teardown() {
    await teardownPuppeteer();
    await super.teardown();
  }
}

// Jest loads the environment by path, so the class must be exported.
export default CustomEnvironment;
And use it in your Jest configuration:
{
  "testEnvironment": "path/to/CustomEnvironment.js"
}
It's worth noting that Jest parallelizes tests in sandboxes (separate vm contexts) and needs to spawn a new test environment for every worker (so usually as many as your machine has CPU cores).
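The setupPuppeteer()/teardownPuppeteer() calls above are left undefined. A minimal sketch of one way to implement the sharing, along the lines the official guide later documented (launch the browser once in a Jest globalSetup, hand each sandboxed environment the WebSocket endpoint, and connect instead of launching); the file path and global names here are my assumptions:

// globalSetup.js - runs exactly once, before any test worker starts
const puppeteer = require('puppeteer');
const fs = require('fs');

module.exports = async () => {
  const browser = await puppeteer.launch();
  global.__BROWSER__ = browser; // kept so globalTeardown can close it
  // Persist the endpoint so each sandboxed environment can reach the browser.
  fs.writeFileSync('/tmp/jest-puppeteer-ws', browser.wsEndpoint());
};

// Then, inside CustomEnvironment's setup():
//   const ws = fs.readFileSync('/tmp/jest-puppeteer-ws', 'utf8');
//   this.global.__BROWSER__ = await puppeteer.connect({ browserWSEndpoint: ws });

puppeteer.connect() attaches to the already-running browser rather than spawning a new one, so each worker only pays a cheap connect instead of a full launch.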

Why does an Electron/React app freeze without sending errors to the log?

I have an Electron/React app with a load screen. The vast majority of the time, when I make a mistake, the app will send errors to Node or the console and I can debug. But with certain mistakes the app will freeze on the load screen with no logging at all. For example, if I add
const t = 5;
const t = 5;
to src/renderer/app/actiontypes.js, I do not get the usual "Uncaught SyntaxError" message and I have to read very carefully through the code to figure out what's going wrong.
Here is how the app loads:
main.js
app.on('ready', async () => {
  await installExtensions();
  createLoadingScreen();
  ipcMain.on('robot-load-finished', () => {
    mainWindow.show();
    ...
index.js
function run() {
  ipcRenderer.send('robot-load-finished');
  ...
loadRobotModels().then(run);
Does anyone know why this is occurring? Thank you.
I fixed this issue by setting up a Chromium remote debug configuration in WebStorm. If you have the same issue and you use WebStorm, hopefully this tutorial will help you too.
Two other options are to use VS Code or node-inspector. However, node-inspector is not compatible with the latest versions of Node, and the whole module seems to be abandoned because of Node's new --inspect flag. The Electron team is planning to add support for the --inspect flag; here is the ticket to watch.
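Separately from attaching a debugger, a small guard in the main process can make this failure mode visible instead of silent. This is a sketch of my own (not part of the answer above) built on the load flow from the question: if the renderer never sends 'robot-load-finished', show the window and open DevTools rather than hanging on the load screen:

app.on('ready', async () => {
  await installExtensions();
  createLoadingScreen();

  // If the renderer crashes before it can send 'robot-load-finished'
  // (e.g. a duplicate-declaration SyntaxError), fail open after 15s.
  const fallback = setTimeout(() => {
    mainWindow.show();
    mainWindow.webContents.openDevTools();
  }, 15000);

  ipcMain.on('robot-load-finished', () => {
    clearTimeout(fallback);
    mainWindow.show();
  });
});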

Properly configuring mocha.json in Visual Studio for Sails.js app testing

I am trying to start up a new Node.js project with proper testing and tools.
I chose the framework Sails.js, and I use Travis as my CI tool: https://travis-ci.org/lomithrani/InteractiveResume.
I use npm to launch my tests with this line in my package.json:
"test": "mocha test/bootstrap.test.js test/unit/**/*.test.js"
bootstrap.test.js:
var Sails = require('sails'),
    sails;

before(function (done) {
  // Increase the Mocha timeout so that Sails has enough time to lift.
  this.timeout(5000);
  Sails.lift({
    // configuration for testing purposes
  }, function (err, server) {
    sails = server;
    if (err) return done(err);
    // here you can load fixtures, etc.
    done(err, sails);
  });
});

after(function (done) {
  // here you can clear fixtures, etc.
  Sails.lower(done);
});
The test I use as a sample:
var request = require('supertest');

describe('ResumeController', function () {
  console.log('test');
  describe('#hi()', function () {
    it('should say hi', function (done) {
      request(sails.hooks.http.app)
        .get('/resume/hi')
        .expect(200, done);
    });
  });
});
So if I run npm test locally or on Travis, everything works fine. But running it within Visual Studio doesn't work: as far as I understand, it only runs the *.test.js files and does not run bootstrap.test.js first, so I get an undefined error on sails at sails.hooks.http.app. The official doc from github.com/Microsoft provides very little detail on how to configure this, only that I can create a mocha.json such as this one:
{
  "ui": "tdd",
  "timeout": 300000,
  "reporter": "xunit"
}
but I fail to see which of the options from https://mochajs.org/#usage I could use in order to execute the bootstrap first.
If you have any workaround or idea to suggest, you are very welcome.
Here is the full stack trace I get within Visual Studio:
Test Name: ResumeController #hi() should say hi
Test Outcome: Failed
Result StandardOutput:
1..1
not ok 1 ResumeController hi() should say hi
ReferenceError: sails is not defined
at Context.<anonymous> (C:\interactiveResume\test\unit\controllers\ResumeController.test.js:8:21)
at callFnAsync (C:\interactiveResume\node_modules\mocha\lib\runnable.js:306:8)
at Test.Runnable.run (C:\interactiveResume\node_modules\mocha\lib\runnable.js:261:7)
at Runner.runTest (C:\interactiveResume\node_modules\mocha\lib\runner.js:421:10)
at C:\interactiveResume\node_modules\mocha\lib\runner.js:528:12
at next (C:\interactiveResume\node_modules\mocha\lib\runner.js:341:14)
at C:\interactiveResume\node_modules\mocha\lib\runner.js:351:7
at next (C:\interactiveResume\node_modules\mocha\lib\runner.js:283:14)
at Immediate._onImmediate (C:\interactiveResume\node_modules\mocha\lib\runner.js:319:5)
# tests 1
# pass 0
# fail 1
You should use the -r or --require flag. Running directly from the command line will work.
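Applied to the package.json script from the question, that would look something like this (a sketch; whether the before/after hooks in a required file register as root hooks can depend on the Mocha version):

"test": "mocha --require test/bootstrap.test.js test/unit/**/*.test.js"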
However, that said, I'm pretty sure that the Visual Studio mocha.json options file doesn't support the require flag properly. Also, they haven't documented which flags are supported and which ones aren't. I've done some experimenting with modifying mocha.js in the Program Files Node.js Tools for Visual Studio folder, and it looks like it only accepts certain flags for which functions are already predefined in the mocha npm package.
Either way, even if you did modify this file and make it work, the next time you upgrade your Visual Studio Node.js tools, your changes would probably be blown away. There just isn't good support or documentation for this yet. I wish I didn't have a team of developers demanding all the tests work certain ways in Visual Studio, otherwise I'd just use Visual Studio Code.
I'm guessing they'll take pull requests on the mocha.js file, though, if you can get it working. I just don't have the time to look into it too much.

Mocha browser tests with Node.js command-line runner?

I have a suite of client-side Mocha tests that currently run with the browser test runner. But I also have a suite of server-side Mocha tests that run with the Node.js command-line test runner.
I know I can run the client-side tests from the command-line in a headless browser like PhantomJS (e.g. like this), but they'd still run separately from the server-side tests.
Is there any way to run the client-side tests as part of the command-line run?
E.g. always run both sets of tests, and have one combined output like "all 100 tests passed" or "2 tests failed" — across both client-side and server-side suites.
I imagine if this were possible, there'd need to be some sort of "proxy" layer to dynamically describe to the command-line runner each browser test, and notify it of each result (maybe even any console.log output too) as the tests ran in the browser.
Does there exist anything that achieves this? I've had a hard time finding anything. Thanks!
I use Zombie for this. There's surely a way to do it with Phantom too. Just write your client-side tests under the same directory as your server-side tests and they'll get picked up by Mocha and executed along with the rest.
I'm not sure whether you need some sample test code but here's some just in case:
var app = require('../server').app; // Spin up your server for testing
var Browser = require('zombie');
var should = require('should');

describe('Some test suite', function () {
  it('should do what you expect', function (done) {
    var browser = new Browser();
    browser.visit('http://localhost:3000', function (err) {
      // Let's say this is a log in page
      should.not.exist(err);
      browser
        .fill('#username', 'TestUser')
        .fill('#password', 'TestPassword')
        .pressButton('#login', function (err) {
          should.not.exist(err);
          // etc...
          return done();
        });
    });
  });
});
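Note that the first line assumes ../server exports the app and that requiring it starts the server listening on port 3000, which the Zombie browser then visits. A minimal sketch of such an export (names and port are assumptions, not from the answer):

// server.js
var express = require('express');
var app = express();
// ... routes and middleware ...
app.listen(3000);
module.exports.app = app;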
