While testing with Jest I am getting a warning: "A worker process has failed to exit gracefully and has been force exited. This is likely caused by tests leaking due to improper teardown. Try running with --detectOpenHandles to find leaks."
I realize this happens because one of my functions uses Bull (https://github.com/OptimalBits/bull), which in turn uses Redis, so adding a task to the queue triggers the warning. I use the default Bull configuration (no configuration at all). I do have a mock for the queue's add function that Jest uses, but it didn't help:
import { JobOptions } from "bull";

const notificationQueue = {
  // stub add() so tests never touch the real queue (Bull's add returns a Promise)
  add: jest.fn().mockImplementation((data: any, opts?: JobOptions) => Promise.resolve()),
};
I'd like to know if there is a way to avoid this warning. If it helps, I use an in-memory Mongo for testing, but Redis is a real instance. As a side note, when I run each test suite separately I don't see this warning; it only appears when I run all the tests together.
As suggested in the warning, add the --detectOpenHandles option to the jest script in your package.json file:
"scripts": {
"test": "jest --watchAll --detectOpenHandles"
}
Don't forget to stop and restart the server!
That can help you track down a leak whatever the cause, but in your case the open handle is the Redis connection. You need to close it at the end of the tests:
import { redis } from "redis_file_path";

afterAll(async () => {
  // quit() closes the Redis connection so Jest can exit cleanly
  await redis.quit();
});
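If the connection is opened by the Bull queue itself rather than a standalone Redis client, closing the queue also works. A minimal sketch, assuming the real queue is exported from a queues module (the import path is a placeholder):
// Bull's Queue#close() shuts down the queue and its underlying Redis clients.
// The import path below is hypothetical; point it at wherever the queue is created.
import { notificationQueue } from "../queues";

afterAll(async () => {
  await notificationQueue.close();
});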
Related
I already tested with "test": "jest --detectOpenHandles" and without the --detectOpenHandles flag.
All tests pass, but it keeps pointing at the Mongo connect function.
Is this a bug with Jest and Mongoose? I'm learning right now, and although all tests pass it's kind of annoying to have error messages. Thanks in advance.
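For what it's worth, the handle Jest points at in cases like this is usually the Mongoose connection left open after the run. A minimal sketch of closing it, assuming the code under test calls mongoose.connect() somewhere:
// Close the Mongoose connection once the test file finishes so no handle stays open.
import mongoose from "mongoose";

afterAll(async () => {
  await mongoose.connection.close();
});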
The Setup
I have a NodeJS project that uses:
Jest for testing.
Sequelize as an ORM.
Sequelize is instantiated when the models module is loaded, and that module is imported by a few of the files being tested with Jest.
The Problem
Jest tests pass but then hang with the message:
Jest did not exit one second after the test run has completed.
This usually means that there are asynchronous operations that weren't
stopped in your tests. Consider running Jest with
--detectOpenHandles to troubleshoot this issue.
Note: Adding --detectOpenHandles to the test call does not affect the output.
I do not actually invoke the sequelize object from any test paths; however, some of the tested files import the models module, so Sequelize gets instantiated anyway.
(I will also note that this only occurs on my TravisCI environment, but I suspect this is a red herring.)
The Context
Because Jest runs test files in parallel worker processes, the models module is loaded multiple times over the course of the test run. I confirmed this with debug output saying SEQUELIZE LOADED, which appears multiple times when I run the tests.
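For illustration, the models module does its setup as a side effect of being imported, roughly like this (a sketch, not the actual file; the config import is a placeholder):
// models module (sketch): Sequelize is created at import time, so every Jest
// worker that imports the models module ends up holding a Sequelize instance.
import { Sequelize } from "sequelize";
import { sequelizeConfig } from "./config"; // placeholder path

console.log("SEQUELIZE LOADED"); // the debug line mentioned above
export const sequelize = new Sequelize(sequelizeConfig);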
The Attempts
I did attempt to invoke sequelize.close() inside a globalTeardown, but since globalTeardown runs in its own module context, this appears to simply open (and then close) a brand-new sequelize connection.
Since none of the tests actually rely on a database connection, I tried running sequelize.close() within the models module immediately before the export. This fixed the issue (though it is obviously not a real solution).
I have also attempted to configure the test connection pool to end idle connections aggressively:
const sequelizeConfig = {
  ...
  // end idle connections as aggressively as the pool allows
  pool: {
    idle: 0,
    evict: 0,
  },
}
This did nothing.
The Requirements
I don't want to use a brute-force solution such as running Jest with --forceExit. That feels like ignoring the root issue and might expose me to other kinds of mistakes down the line.
My tests are spread across dozens of files, so invoking something in an afterAll in every one of them would introduce a lot of redundancy and a code smell.
The Question
How can I ensure that sequelize connections are closed after tests finish, so they don't cause Jest to hang?
Jest provides a way to run universal setup code before every test suite. That makes it possible to leverage Jest's afterAll to close the Sequelize connection pool without having to include it manually in every single test suite.
Example Jest config in package.json
"jest": {
...
"setupFilesAfterEnv": ["./src/test/suiteSetup.js"]
}
Example suiteSetup.js
import models from '../server/models'
afterAll(() => models.sequelize.close())
// Note: in my case sequelize is exposed as an attribute of my models module.
Since setupFilesAfterEnv files are loaded before every test suite, this ensures that every Jest worker that opened a connection ultimately has that connection closed.
This doesn't violate DRY and it doesn't rely on a clunky --forceExit.
It does mean that close() will be called even in suites that never opened a connection (which is itself a bit brute force), but that might be the best option.
I have a REST API and I'm following TDD on this project. My tests consist of two parts: routes and services. I chose Jest. I have a MongoDB database that I use for testing. When each test file completes, I reset my database in the afterAll() hook, where I call mongoose.connection.dropDatabase().
There is no error when I run only one test file, but when I run multiple test files I get an error. The error message:
MongoError: Cannot create collection auth-db.users - database is in
the process of being dropped.
Here is my sample code:
users.route.test.ts:
https://gist.github.com/mksglu/8c4c4a3ddcb0e56782725d6457d97a0e
users.service.test.ts:
https://gist.github.com/mksglu/837202c1048687ad33b4d1dee01bd29c
When all my tests run together, it sometimes errors out with the message above. The reason is that the database drop is still in progress when the next test file tries to use it. I can't solve this problem, and I'd appreciate any help.
Thanks.
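The teardown described above is roughly this pattern (a sketch for context; the real code is in the gists linked above):
// afterAll teardown (sketch): drop the test database once the file's tests finish
import mongoose from "mongoose";

afterAll(async () => {
  await mongoose.connection.dropDatabase();
});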
https://jestjs.io/docs/en/cli.html#runinband
What you are looking for is the --runInBand option, which makes Jest run tests serially instead of creating a worker pool of child processes.
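A minimal way to wire that in, assuming the test script lives in package.json:
"scripts": {
  "test": "jest --runInBand"
}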
I'm trying to take advantage of this new feature of Heroku to test a parse-server/nodejs application that we have on Heroku, using mocha.
I was expecting Heroku to launch an ephemeral instance of my app along with the tests so that they could be run against it, but it doesn't seem like that's happening. Only the tests get launched.
Now, I found at least one snippet about configuring the Dyno formation to use dynos other than performance-m for the test, so I'm trying to declare my other dynos there as well:
"environments": {
"test": {
"scripts": {
"test-setup": "echo done",
"test": "npm run test"
},
"addons": [
{
"plan": "rediscloud:30",
"as": "REDISCLOUD_URL"
}
],
"formation": {
"test": {
"quantity": 1,
"size": "standard-1x"
},
"worker": {
"quantity": 1,
"size": "standard-1x"
},
"web": {
"quantity": 1,
"size": "standard-1x"
}
}
}
}
in my app.json, but it seems to be getting totally ignored.
I know my mocha script could import the relevant parts of the web server and test against them, and that's what I've seen in the non-Heroku examples. But our app consists of a worker too, and I'd like to profile the interaction of both and check job durations against our performance expectations, rather than test individual components; hence "integration tests". Is this a legitimate use of Heroku tests, or am I doing something wrong or holding the wrong expectations? I'm more concerned about that than about getting it to work, because I'm fairly certain I could get it working in a number of ways (mocha spawning the server processes, the npm concurrently package, etc.), but if I can avoid hacks, all the better.
Locally, I was able to get both imported in the script, but performance degrades since it's now two processes plus the tests running inside a single Node.js process, with Node's memory cap and a single event loop instead of three. While writing this I'm thinking I could probably use throng and spawn different functions depending on the process ID. I'll try that if I don't get any better solutions.
Edit: I already managed to make it run by spawning the server and worker as separate processes in a mocha before step, calculating the proper RAM amounts to allocate to each using the env vars. I'm still interested in knowing if there's a better solution.
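For reference, the spawning approach from the edit looks roughly like this (a sketch; file names and the boot delay are placeholders, not from the original setup):
// Spawn the web server and the worker as separate child processes before the
// integration tests run, and kill them afterwards. Paths are hypothetical.
import { spawn, ChildProcess } from "child_process";

let web: ChildProcess;
let worker: ChildProcess;

before(async () => {
  web = spawn("node", ["server.js"], { stdio: "inherit" });
  worker = spawn("node", ["worker.js"], { stdio: "inherit" });
  // crude boot delay; a real setup would poll a health endpoint instead
  await new Promise((resolve) => setTimeout(resolve, 5000));
});

after(() => {
  web.kill();
  worker.kill();
});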
When I run unit tests via ospec for Mithril, I can see in the console whether tests fail locally.
What I'm looking for is a way to prevent a subsequent Node.js build script from executing if one or more of the tests fail.
I don't want code to be pushed up to another environment/lane if the unit tests aren't passing.
I don't see how to accomplish this in the docs.
In Node, I'm running ospec && someBuildProcess.
The answer might be a Node.js thing, but I'm at a loss for what to look for now.
ospec calls process.exit(1) if any tests fail, so the command string you posted should work. I just verified it locally with the following setup:
https://gist.github.com/tivac/d90c07592e70395639c63dd5100b50a6
ospec runs, fails, and the echo command never gets called.
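For completeness, the chaining can be as simple as an npm script like this sketch (echo stands in for the real build step, as in the verification above):
"scripts": {
  "test-and-build": "ospec && echo \"build step runs only if tests pass\""
}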
Can you post some more details about your setup?