Unit tests never ending with jest and strapi - node.js

I'm using Jest to develop my unit tests for Strapi.
In the Strapi documentation (https://docs.strapi.io/developer-docs/latest/guides/unit-testing.html#testing-auth-endpoint-controller) developers are invited to create a single .test.js file that Jest will discover, and to pull in all the other test files with require.
The result is something like this:
it('strapi is defined', () => {
  expect(strapi).toBeDefined();
});

require('./mytest1.js');
require('./mytest2.js');
require('./mytest3.js');
The problem is that I now have a lot of tests, and when I run this huge test file the execution stops at 81 tests; WebStorm displays them as if they were still pending.
I tried many Jest options such as:
--forceExit
--detectOpenHandles
--watchAll=false
--no-watchman
But the problem is still there.
Help!

Hmm, it seems that the problem came from one of my tests, which blocked the whole process.
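For anyone hitting the same symptom: a per-file timeout would have surfaced the blocking test instead of letting the run hang. A minimal sketch, assuming the root test file described above (the 10-second value is arbitrary):

// At the top of the root .test.js file, before the require() calls:
// any test that runs longer than 10 s now fails with a timeout error
// instead of leaving the whole run pending in the IDE.
jest.setTimeout(10000);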

Related

How can I run a Sails server in Jest unit testing?

The documentation for Sails 0.12.11 explains how to set up testing using the Mocha framework. I would like to use Jest to do that.
I tried using the same code for bootstrap.test.js, replacing before with beforeAll and after with afterAll, and replacing this.timeout(5000) with jasmine.DEFAULT_TIMEOUT_INTERVAL = 5000. Because Jest expects at least one test in a file, I added a dummy test there to make sure it did not complain.
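A rough sketch of what that translation could look like, assuming the same sails.lift()/sails.lower() pattern as the Mocha example in the Sails documentation (the lift options are assumptions):

// bootstrap.test.js - Jest version of the Sails Mocha bootstrap
const sails = require('sails');

// Jasmine-style timeout as described above; Jest also accepts a
// per-hook timeout as the second argument to beforeAll/afterAll.
jasmine.DEFAULT_TIMEOUT_INTERVAL = 5000;

beforeAll((done) => {
  // lift the Sails server once for the whole suite (log level is an assumed override)
  sails.lift({ log: { level: 'error' } }, (err) => {
    done(err);
  });
});

afterAll((done) => {
  // lower the server so Jest can exit cleanly
  sails.lower(done);
});

// Jest expects at least one test per file
it('lifts the sails server', () => {
  expect(sails).toBeDefined();
});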

How to prevent Mocha from preserving require cache between test files?

I am running my integration test cases in separate files for each API.
Before the suite begins I start the server along with all services, like databases; when it ends, I close all connections. I use before and after hooks for that purpose. It is important to know that my application depends on an enterprise framework where most of the "core work" is written, and I install it as a dependency of my application.
I run the tests with Mocha.
When the first file runs, I see no problems. When the second file runs I get a lot of errors related to database connections. I tried to fix it in many different ways, but most of them failed because of the limitations the Framework imposes on me.
Debugging, I found out that Mocha actually loads all the files first, which means that all code written before the hooks and the describe calls is executed. So when the second file is loaded, require.cache is already full of modules. Only after that does the suite execute the tests sequentially.
That has a huge impact on this Framework because many objects are actually singletons, so if an after hook closes a connection with a database, it closes the connection inside the singleton. The way the Framework was built makes it very hard to work around this problem, for example by reconnecting to all services in the before hook.
I wrote a very ugly piece of code that helps me until I can refactor the Framework. This goes in each test file where I want to invalidate the cache.
function clearRequireCache() {
  Object.keys(require.cache).forEach(function (key) {
    delete require.cache[key];
  });
}

before(() => {
  clearRequireCache();
});
It is working, but it seems to be very bad practice, and I don't want this in the code.
As a second idea I was thinking about running Mocha multiple times, once for each "module" (in the sense of my Framework) or file.
"scripts": {
"test-integration" : "./node_modules/mocha/bin/mocha ./api/modules/module1/test/integration/*.integration.js && ./node_modules/mocha/bin/mocha ./api/modules/module2/test/integration/file1.integration.js && ./node_modules/mocha/bin/mocha ./api/modules/module2/test/integration/file2.integration.js"
}
I was wondering if Mocha provides a solution to this problem, so I can get rid of that code and delay the refactoring a bit.

Gruntfile to run app and mock test from single grunt command

I have a Node.js Express REST API app that works. Good.
I have a Mocha/Chai/Supertest mock that tests the API app above. Good.
But I have to start the app and then independently run the mock test.
How can I run a single grunt command that starts the API app, lets it get up and running, and then runs the mock test?
Or do I need to run the API app in some kind of test mode (via env var) and have test-only logic somehow invoke the mock test?
I can try some things and get something to work, but what is the good way? (Avoiding overused phrase 'best practice'.)
You can do that with grunt-express-server and grunt-mocha-test; you will just have to set up your tasks as below:
grunt.registerTask('test', ['express:test', 'mochaTest']);
This will run your Express server with the config you have set for the test environment, then run Mocha when you run grunt test.
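A minimal Gruntfile sketch of that setup, assuming the server entry point is server.js and the tests live under test/ (both paths are assumptions):

// Gruntfile.js - start the Express app, then run the Mocha/Supertest suite
module.exports = function (grunt) {
  grunt.initConfig({
    express: {
      test: {
        options: {
          script: 'server.js',   // assumed entry point of the API app
          node_env: 'test'       // separate test environment
        }
      }
    },
    mochaTest: {
      test: {
        options: { reporter: 'spec' },
        src: ['test/**/*.js']    // assumed location of the Supertest specs
      }
    }
  });

  grunt.loadNpmTasks('grunt-express-server');
  grunt.loadNpmTasks('grunt-mocha-test');

  grunt.registerTask('test', ['express:test', 'mochaTest']);
};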
Since you are using Supertest, I suppose you are doing functional testing, which means that you would be using the same database for development and testing (if you are not mocking anything). That can waste time and make your tests fail because of bad data. Using two different environments makes sure of the state of your data when you run the tests.
You can still use the grunt watch plugins to relaunch your tests on file change if you don't want to do it manually.
Hope this helps.

Correct configuration with Gulp, Mocha, Browserify to execute client side test with server side tests

I'm working on a Node application using gulp for our build process and the gulp-mocha plugin as our test runner.
gulp.task('test', function () {
  return gulp.src(TESTJS)
    .pipe(mocha({ reporter: 'spec' }))
    .on("error", function (err) {
      // handle the mocha errors so that they don't cloud the test results,
      // or end the watch
      console.log(err.toString());
      this.emit('end');
    });
});
Currently TESTJS covers only my server-side tests. I want to use this same process to execute my client tests as well. I looked into gulp-blanket-mocha and gave it a shot, but I keep running into the same issue: when trying to test my Backbone code, it fails because the other client components it needs (namely jQuery) are not found by the test runner. I get that I need some sort of headless WebKit like PhantomJS, but I am having real trouble figuring out how to incorporate that into this gulp process with Browserify.
Has anyone tried getting a setup like this going, or any ideas what I am missing here in terms of having my gulp "test" task execute my client-side Mocha tests as well as my server-side ones?
A potential setup is:
Test runner - this is the glue between gulp and karma, and it provides an option to set the karma options.files from the gulp.src() stream. Frankly, if you have no steps before your karma tests, use karma directly within a gulp task, without a gulp plugin.
Use the associated karma plugins to run on PhantomJS/Chrome/Firefox.
Use the associated karma plugins for coverage and alt-JS compilation.
More plugins and karma options can be configured for reporting of tests and coverage.
Using browserify changes the whole setup above.
Since it needs to resolve requires, it must run on all the "entry point" files. Typically your tests should require the sources, and must be the entry points.
Use karma-bro - it solves the problems in karma-browserify (at the moment that doesn't even work - it can't handle the browserify 5.0 API) and karma-browserifast.
Coverage becomes tricky since sources, vendor sources and tests are all bundled, so I created a custom coverage transform that marks which code should be instrumented while browserify is bundling.
browserify should be a "preprocessor" in karma.
A set of "transform: []" entries should be configured in the browserify options.
The transforms can be configured by taking an existing transform module and wrapping it with a custom module, like what I did above for browserify-istanbul.
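A rough karma.conf.js sketch of that shape, assuming Mocha specs under test/client/ and PhantomJS as the browser (the paths and the coverage transform are assumptions):

// karma.conf.js - browserify bundles the spec entry points before karma runs them
module.exports = function (config) {
  config.set({
    frameworks: ['browserify', 'mocha'],
    files: ['test/client/**/*.test.js'],          // assumed spec entry points
    preprocessors: {
      'test/client/**/*.test.js': ['browserify']  // bundle specs plus the sources they require
    },
    browserify: {
      debug: true,                                // source maps for readable stack traces
      transform: ['browserify-istanbul']          // e.g. the coverage transform mentioned above
    },
    browsers: ['PhantomJS'],
    reporters: ['progress'],
    singleRun: true
  });
};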

How to test an AngularJS/SocketStream/Node.js app using Karma

I am working on an AngularJS application that is delivered by a SocketStream/node.js server.
I have an AngularJS service that calls api functions on the SocketStream server and progress has been good so far.
But now the time has come to start writing the first tests, and the first testing framework that came to mind is Karma/Jasmine, since this is the recommended AngularJS setup.
So far so good, but since my AngularJS modules are imported using 'require' (SocketStream's version, not require.js) and server API calls are part of the test, I need to configure Karma to load SocketStream (at least its client side).
I took a good look at 'https://github.com/yiwang/angular-phonecat-livescript-socketstream', but when I run this example I get runtime errors, possibly because I have later versions of various dependencies installed.
I managed to get 'require' resolved by packing my SocketStream app, adding 'ss.client.packAssets()' to app.js and running 'SS_PACK=1 node app.js', but when I start karma it logs an error message saying:
'Chrome 23.0 (Linux) ERROR
Uncaught TypeError: undefined is not a function
at /the...path/client/static/assets/app/1368026081351.js:25'
'1368026081351.js' is the SocketStream packed assets file. If I don't load it, the error message is something like 'require is undefined', so my best guess is that the error is happening somewhere inside the SocketStream require code. Also, because I run karma in DEBUG mode, I can see all the files being served.
I have been trying different approaches to find out what is happening, but to no avail. So my questions are:
Is anybody else successfully testing AngularJS/SocketStream using Karma?
Does anybody have any suggestions as to how I can fix, or at least debug this problem?
Are there any alternatives/better solutions?
Time to answer, sort of, my own question:
Sort of, because I came to the conclusion that Karma and node.js/SocketStream have a lot of overlap, so I decided to see if I could omit Karma altogether and deliver the Jasmine testing platform through SocketStream. It turns out that that is possible, and here's how I did it:
I defined a new SocketStream route and client in my 'app.js' file:
ss.client.define('test', {
  view: 'SpecRunner.html',
  css: ['libs/test'],
  code: ['libs', 'tests', 'app'],
  tmpl: 'none'
});

ss.http.route('/test', function (req, res) {
  res.serveClient('test');
});
I downloaded jasmine-standalone-1.3.1.zip and copied 'SpecRunner.html' to the 'client/views' folder. I then edited it to make it load AngularJS and all SocketStream client files, like all other views:
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.0.6/angular.min.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.0.6/angular-resource.min.js"></script>
<SocketStream/>
I removed the 'script' tags that import the sample source files ('Player.js' and 'Song.js') and specs, but left the last 'script' block in place unmodified.
I then created a new folder inside 'client/css/libs' called 'test' and copied 'jasmine.css' in there unmodified.
Then I copied 'jasmine.js' and 'jasmine-html.js' renamed to '01-jasmine.js' and '02-jasmine-html.js' but otherwise unmodified, into '/client/code/libs'.
Now Jasmine is in place and will be invoked by using the '/test' route. The slightly unsatisfactory bit is that I haven't found an elegant place to store my spec files. They only work so far if I place them inside the 'libs' folder. Anywhere else and they are served by SocketStream as modules and are not run.
But I can live with that for now. I can run Jasmine tests without having to configure a special Karma setup.
