I have tests written in TypeScript with Jest/Puppeteer, and the tests are split into multiple files. I want to close the browser after all tests, so I wrote:
afterAll(async () => {
  await page.close();
  await browser.close();
});
in a file that is common to the rest (all test files import it). But when I try to do that, I get "Error: Protocol error (Runtime.callFunctionOn): Target closed." because the browser closes too early, before all tests are done. What would be the correct version of afterAll for this case, and how can I check that the tests in the other files are really done before closing the browser?
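One way to guarantee the browser outlives every test file is a sketch along these lines, assuming Jest's globalSetup/globalTeardown config options (which run exactly once per Jest invocation, not once per file); the file names here are placeholders:

// jest.config.js (sketch): two scripts Jest runs exactly once per invocation
//   module.exports = { globalSetup: './setup.ts', globalTeardown: './teardown.ts' };

// setup.ts: launch a single browser before any test file runs
import puppeteer from 'puppeteer';

export default async function globalSetup(): Promise<void> {
  // globalSetup and globalTeardown run in the same Node process,
  // so the instance can be stashed on `global`.
  (global as any).__BROWSER__ = await puppeteer.launch();
}

// teardown.ts: Jest calls this only after every test file has finished
export default async function globalTeardown(): Promise<void> {
  await (global as any).__BROWSER__.close();
}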
I have a TypeScript-based React project in which I am running Jest tests (also in TS). I can run the tests fine, but I am trying to profile the performance of some that take quite a long time to run. I have tried using Chrome DevTools to attach to the tests, which it does; however, it fails due to the code being TS and not plain JS. Is there any way I can profile my tests individually to see where the performance issue is occurring? I'm using VS Code.
In my case it was a regular TypeScript library rather than a React project, but I bet this also works for your use case. I'm leaving this here in case it's useful, or for future me.
The ONLY solution I found that worked was manually setting up the profiler with v8-profiler-next.
import fs from 'fs';
import v8Profiler from 'v8-profiler-next';

v8Profiler.setGenerateType(1);
const title = 'good-name';

describe('Should be able to generate with inputs', () => {
  v8Profiler.startProfiling(title, true);

  afterAll(() => {
    const profile = v8Profiler.stopProfiling(title);
    profile.export(function (error, result: any) {
      // If it doesn't have the extension .cpuprofile then
      // Chrome's profiler tool won't like it.
      // To examine the profile:
      //   1. Navigate to chrome://inspect
      //   2. Click "Open dedicated DevTools for Node"
      //   3. Select the profiler tab
      //   4. Load your file
      fs.writeFileSync(`${title}.cpuprofile`, result);
      profile.delete();
    });
  });

  test('....', async () => {
    // Add test
  });
});
This then gives you the CPU profile, which works fine with TypeScript.
I’m setting up mongodb-memory-server in my backend for test purposes and am experiencing some issues when running tests that I need to debug. My issue is that when I run my test (which will create a mongodb doc somewhere in the service being tested), the test times out.
As I understand it, this is because mongoose is disconnected while the test runs: when the test is executed and a new Mongo doc is being created, I console.log mongoose.connection.readyState and it says 0, meaning that mongoose is disconnected. This is strange to me because I added console logs to my connectMongoose() function (shown below) and there it says that mongoose is connected.
So my main question is why does it say mongoose is connected at the end of connectMongoose(), but it says it’s disconnected during the execution of the unit test/service function? How can I ensure that MongoDB-memory-server is fully connected prior to test execution?
[Screenshots: the mongoose test connection setup, where and how mongodb-memory-server is used, and my jest.config.js]
And finally, the actual test file which has the failing test (what I'm asking about):
beforeAll(connectMongoose)
beforeEach(clearDatabase)
afterAll(disconnectMongoose)
Your three functions here are async functions, but you don't await them - is it possible that connectMongoose returns while its promise is still pending, and the rest of the code continues even though the async function hasn't completed yet?
Perhaps this would serve your purpose better:
beforeAll(async () => {
  await connectMongoose();
});
Before:
beforeAll(connectMongoose)
beforeEach(clearDatabase)
afterAll(disconnectMongoose)
After:
beforeAll(async () => { await connectMongoose(); });
beforeEach(async () => { await clearDatabase(); });
afterAll(async () => { await disconnectMongoose(); });
The reason is that you should wait until the mongoose connection has completed before the tests run. Also remove the setTimeout in your connectMongoose function; it isn't needed there. If you want a Jest timeout, you can set it on the beforeEach hook instead.
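As an illustration of that last point (a sketch, not the poster's code; clearDatabase is the hypothetical helper from the question): Jest hooks accept a timeout in milliseconds as an optional second argument, and jest.setTimeout raises the default for the whole file:

declare function clearDatabase(): Promise<void>; // the question's helper, assumed async

// Raise the default timeout for every hook and test in this file; the first
// mongodb-memory-server launch can be slow because it downloads a MongoDB binary.
jest.setTimeout(30000);

// A per-hook timeout can also be passed as the second argument (in ms).
beforeEach(async () => {
  await clearDatabase();
}, 20000);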
I have made a few integration tests using Mocha which run fine when run independently, but not when I try to run them all together using: mocha test --recursive.
The behaviour I noticed here is that all the after hooks (and probably the before hooks too) are getting combined.
I drop my db in the after hook of each test, so I checked in between tests, and I can still find data from the previous tests.
It somehow gets cleared up after the last test.
I have already tried importing them all into one file, but even that won't serve the purpose.
Here are my hooks.
const http = require('http');
const mongoose = require('mongoose');
const dookie = require('dookie');
// `app`, SERVER_PORT and SEEDDATA come from elsewhere in the test setup
let server;

before(async () => {
  app.set('port', SERVER_PORT);
  server = http.createServer(app);
  server.listen(SERVER_PORT, () => console.log(`API running on localhost:${SERVER_PORT}`));
  // Initial feeding of the database
  await dookie.push('mongodb://localhost:27017/tests', SEEDDATA);
});

after(async () => {
  await mongoose.connection.db.dropDatabase();
  server.close();
  process.exit(0);
});
Thanks!
Use Jest, as it provides the functionality you're looking for built in.
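To make that claim concrete (a sketch under assumptions, not code from the thread): Jest isolates each test file in its own environment, and its setupFilesAfterEach config option installs hooks into every file, so DB cleanup between tests doesn't need to be wired up per file:

// jest.config.js (sketch)
//   module.exports = { testEnvironment: 'node', setupFilesAfterEach: ['./test/db-hooks.ts'] };

// test/db-hooks.ts: applied to every test file automatically
import mongoose from 'mongoose';

afterEach(async () => {
  // Clear data between tests instead of once at the end of the run.
  await mongoose.connection.db.dropDatabase();
});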
It's hard to tell what is wrong with your tests without having a closer look at the code, so I'm going to drop a few ideas here that come to mind, with no guarantee that anything will help.
Possibility 1
Use beforeEach and afterEach rather than before and after. This will ensure that your DB cleanup code is executed after each test, rather than after the last test in a describe block.
Possibility 2
You are running your tests in multiple threads with mocha-parallel-tests or some other tool. Make sure that the tests where the DB is being accessed are not being parallelized.
Possibility 3
Your db.dropDatabase call returns before the database is actually dropped, while the request is still pending. You'll have to check your connection or database settings.
If nothing helps, try inserting log statements at the start of each unit test and in the before/after hooks; this will help you understand when the code actually runs and spot what is happening in the wrong order.
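A minimal sketch combining possibility 1 with the logging advice (Mocha with mongoose assumed; none of this is the asker's code):

import mongoose from 'mongoose';

beforeEach(() => {
  console.log(`[hook] beforeEach at ${new Date().toISOString()}`);
});

afterEach(async () => {
  console.log('[hook] afterEach: dropping test database');
  // dropDatabase resolves only once the drop has completed, so awaiting it
  // keeps one test from seeing the previous test's data.
  await mongoose.connection.db.dropDatabase();
});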
Problem
I'm replacing CasperJS with Jest + Puppeteer. Putting everything in one file works great:
beforeAll(async () => {
  // get `page` and `browser` instances from puppeteer
});

describe('Test A', () => {
  // testing
});

describe('Test B', () => {
  // testing
});

afterAll(async () => {
  // close the browser
});
Now, I don't really want to keep everything in one file. It's harder to maintain and harder to run just part of the tests (say, just 'Test A').
What I've tried
I've looked at the Jest docs and read about setupScript. It would be perfect, but it runs before every test file. I don't want this because the Puppeteer setup takes quite a lot of time. I want to reuse the same browser instance and pay the setup cost only once, no matter how many test files I run.
So, I thought about:
// setup puppeteer
await require('testA')(page, browser, config);
await require('testB')(page, browser, config);
// cleanup
This solves modularization and reuses the same browser instance, but doesn't allow me to run tests separately.
Finally, I stumbled upon the possibility of creating a custom testEnvironment. This sounds great, but it isn't well documented, so I'm not even sure whether an env instance is created per test file or per Jest run. The stable API is also missing a setup method where I could set up Puppeteer (I'd have to do it in the constructor, which can't be async).
Why I'm asking
Since I'm new to Jest I might be missing something obvious, so before I dig deeper into this I thought I'd ask here.
UPDATE (Feb 2018): Jest now has an official Puppeteer guide, featuring reuse of one browser instance across all tests :)
It was already answered on Twitter, but let's post it here for clarity.
Since Jest v22 you can create a custom test environment which is async and has setup()/teardown() hooks:
import NodeEnvironment from 'jest-environment-node';

class CustomEnvironment extends NodeEnvironment {
  async setup() {
    await super.setup();
    await setupPuppeteer(); // your own async helper that launches the browser
  }

  async teardown() {
    await teardownPuppeteer(); // your own async helper that closes it
    await super.teardown();
  }
}
And use it in your Jest configuration:
{
  "testEnvironment": "path/to/CustomEnvironment.js"
}
It's worth noting that Jest parallelizes tests in sandboxes (separate vm contexts) and needs to spawn a new test environment for every worker (usually one per CPU core of your machine).
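Because each worker spawns its own environment, launching the browser inside setup() would still pay the cost once per worker. A sketch of the pattern the official guide settled on, assuming a globalSetup that launched the browser once and published its WebSocket endpoint where workers can read it (the guide writes it to a temp file; an environment variable is used here for brevity, and the variable name is illustrative):

import NodeEnvironment from 'jest-environment-node';
import puppeteer from 'puppeteer';

class PuppeteerEnvironment extends NodeEnvironment {
  async setup() {
    await super.setup();
    // Attach to the browser that globalSetup launched for the whole run,
    // instead of launching a fresh one per worker.
    (this.global as any).browser = await puppeteer.connect({
      browserWSEndpoint: process.env.PUPPETEER_WS_ENDPOINT,
    });
  }

  async teardown() {
    // Detach only; globalTeardown owns browser.close().
    (this.global as any).browser.disconnect();
    await super.teardown();
  }
}

export default PuppeteerEnvironment;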
I have a suite of client-side Mocha tests that currently run with the browser test runner. But I also have a suite of server-side Mocha tests that run with the Node.js command-line test runner.
I know I can run the client-side tests from the command-line in a headless browser like PhantomJS (e.g. like this), but they'd still run separately from the server-side tests.
Is there any way to run the client-side tests as part of the command-line run?
E.g. always run both sets of tests, and have one combined output like "all 100 tests passed" or "2 tests failed" — across both client-side and server-side suites.
I imagine that if this were possible, there'd need to be some sort of "proxy" layer to dynamically describe each browser test to the command-line runner and notify it of each result (maybe even relay console.log output) as the tests run in the browser.
Does there exist anything that achieves this? I've had a hard time finding anything. Thanks!
I use Zombie for this. There's surely a way to do it with Phantom too. Just write your client-side tests under the same directory as your server-side tests and they'll get picked up by Mocha and executed along with the rest.
I'm not sure whether you need some sample test code but here's some just in case:
var app = require('../server').app; // Spin up your server for testing
var Browser = require('zombie');
var should = require('should');

describe('Some test suite', function () {
  it('should do what you expect', function (done) {
    var browser = new Browser();
    browser.visit('http://localhost:3000', function (err) {
      // Let's say this is a log in page
      should.not.exist(err);
      browser
        .fill('#username', 'TestUser')
        .fill('#password', 'TestPassword')
        .pressButton('#login', function (err) {
          should.not.exist(err);
          // etc...
          return done();
        });
    });
  });
});