How to prevent Mocha from preserving require cache between test files? - node.js

I am running my integration test cases in separate files for each API.
Before it begins I start the server along with all services, like databases. When it ends, I close all connections. I use Before and After hooks for that purpose. It is important to know that my application depends on an enterprise framework where most "core work" is written and I install it as a dependency of my application.
I run the tests with Mocha.
When the first file runs, I see no problems. When the second file runs I get a lot of errors related to database connections. I tried to fix this in many different ways, most of which failed because of the limitations the Framework imposes on me.
While debugging I found out that Mocha actually loads all the files first, which means that all code written outside the hooks and the describe calls is executed up front. So when the second file is loaded, require.cache is already full of modules. Only after that does the suite execute the tests sequentially.
That has a huge impact on this Framework because many objects are actually Singletons, so if an after hook closes a connection to a database, it closes the connection inside the Singleton. The way the Framework was built makes it very hard to work around this problem, for example by reconnecting to all services in the before hook.
I wrote some very ugly code that helps me until I can refactor the Framework. This goes in each test file where I want to invalidate the cache.
function clearRequireCache() {
  Object.keys(require.cache).forEach(function (key) {
    delete require.cache[key];
  });
}

before(() => {
  clearRequireCache();
});
It works, but it seems like very bad practice, and I don't want this in the code.
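A slightly less invasive variant of the same idea is to clear only the framework's entries instead of the whole cache. This is just a sketch: the '/my-framework/' path fragment is a hypothetical placeholder for wherever the enterprise framework's modules actually live.

```javascript
// Sketch: clear only cache entries whose resolved path contains a given
// fragment ('/my-framework/' below is a hypothetical placeholder).
function clearRequireCacheFor(pathFragment) {
  Object.keys(require.cache)
    .filter(function (key) { return key.indexOf(pathFragment) !== -1; })
    .forEach(function (key) { delete require.cache[key]; });
}

// In each affected test file (mocha hook):
// before(() => clearRequireCacheFor('/my-framework/'));
```

This at least leaves Node's core wiring and unrelated dependencies cached, though it is still a workaround rather than a fix.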
As a second idea I was thinking about running Mocha multiple times, once for each "module" (in the sense of my Framework) or file.
"scripts": {
  "test-integration": "./node_modules/mocha/bin/mocha ./api/modules/module1/test/integration/*.integration.js && ./node_modules/mocha/bin/mocha ./api/modules/module2/test/integration/file1.integration.js && ./node_modules/mocha/bin/mocha ./api/modules/module2/test/integration/file2.integration.js"
}
I was wondering if Mocha provides a solution for this problem so I can get rid of that code and postpone the refactoring a bit.

Related

vite rebuilds dev server on every http request, causing graphql schema duplicates instantly crashing server -- build and production works, repo inside

I'm trying to build a graphql server using nestjs, with vite + swc as the compiler/builder for performance reasons: webpack would take 50-60+ seconds on each rebuild of a big project, and SWC/vite seems to cut that down by a factor of 5 at least.
Here's a repository that reproduces the issue with a basic 'health check' endpoint and graphql query.
The main tools concerning this:
"@nestjs/graphql": "10.0.9",
"@nestjs/apollo": "10.0.9",
"typescript": "4.7.4",
"vite-plugin-node": "1.0.0",
"vite": "2.9.13",
"@swc/core": "1.2.207",
"vite-tsconfig-paths": "3.5.0"
Now, I have played around with these fixed versions trying out various combinations of older versions. But I've narrowed down the flaw to be a problem with vite specifically.
There's this github issue opened over a month ago that's probably directly related, with this being merely a symptom of that issue.
If you build the app and serve it, everything works fine, because the production version calls the bootstrap() function which is not handled by the vite development server.
This is also a nestjs-specific problem due to nestjs doing the code-first approach.
I'm trying to patch this issue somehow by attempting three things:
stop the development server from rebuilding on every request
configure the development server to cleanup after itself on every request
configure nestjs's graphql in a way that only builds the schema once, something as simple as:
let built = false;
if (!built) {
  buildSchema();
  built = true;
}
I'm counting on that built variable not changing between requests, but if it does, I might find a way to tie it to the start command via a file outside of vite's scope.
Thank you.

intern-runner just hangs ('/client/end' publish/subscribe doesn't work?)

When launched through the intern-runner command, my tests are still hanging--intern-runner never exits to give me a report and I can tell that the proxy server is still running on port 9000. The browser I specified through my config just remains open (and no, I did not set leaveRemoteOpen to true). I added some debug to lib/reporters/webdriver.js, because I saw that's what logged the "Tests complete" message. I could see that the topic.publish('/client/end') code was invoked, but nothing ever responded to this event. Doesn't lib/ClientSuite subscribe to this topic? From that module:
topic.subscribe('/client/end', function (sessionId) {
  console.log("subscribed to '/client/end' for session", sessionId);
  if (sessionId === remote.session.sessionId) {
    clearHandles();
    // get about:blank to always collect code coverage data from the page in case it is
    // navigated away later by some other process; this happens during self-testing when
    // the new Leadfoot library takes over
    remote.setHeartbeatInterval(0).get('about:blank').then(lang.hitch(dfd, 'resolve'));
  }
});
But nothing ever happens, and I don't see my console.log() output. Sorry if I am bringing up things that are red herrings, but I just wanted to do some initial investigation first.
All I want is for my test to end and my JUnit and LCOV reports generated! :( What could be going wrong?
And note: no error messages are logged to the command terminal from which I invoked intern-runner config=unittest/myInternConfig. No errors (obvious ones at least) appear in terminal where Selenium server is running.
Update 03/15/15: I added this info in my last comment, but maybe comments get lost in the shuffle on Stack Overflow. In our legacy DOH tests, we used Sinon to fake a server so as to not make real I/O requests to the backend server in unit tests. I didn't see a problem with keeping this in the Intern tests, but apparently there is. When I disabled the test modules that just do
var server = sinon.fakeServer.create();
(well, that, in addition to calling server.respondWith() and server.respond())
intern-runner completed, I got my reports, etc. Then I searched for "intern with sinon" and stumbled upon https://github.com/theintern/intern/issues/47, where jason0x43 linked to his Sinon-with-Intern code at https://github.com/theintern/intern/blob/sinon/sinon.js. I found that very helpful: it turned out that in my situation, Sinon's FakeXMLHttpRequest was ALSO faking requests to Intern's proxy server, and that was what was hanging the process.
So, after pretty much using jason0x43's sinon.js code to filter out the "real request," I re-enabled the problematic test modules, re-ran, and everything worked beautifully.
Again, no errors or warnings of any sort were reported in the terminal or browser console. It would be great if there could be some sort of heads-up about this pitfall, even if just in a Readme file.
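The fix boils down to a predicate that recognizes "real" requests (here, anything aimed at Intern's proxy on port 9000; the pattern is an assumption, adjust it to your setup) and lets them bypass Sinon's fake XHR:

```javascript
// Sketch: decide whether a URL targets Intern's proxy (port 9000 in my
// config) and should therefore bypass Sinon's FakeXMLHttpRequest.
function isInternProxyRequest(url) {
  return url.indexOf(':9000/') !== -1;
}

// Wiring it up with Sinon's XHR filter API (filtered requests hit the network):
// sinon.FakeXMLHttpRequest.useFilters = true;
// sinon.FakeXMLHttpRequest.addFilter(function (method, url) {
//   return isInternProxyRequest(url);
// });
```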

Correct configuration with Gulp, Mocha, Browserify to execute client side test with server side tests

I'm working on a node application utilizing gulp for our build processes and the gulp-mocha plugin for our test-runner.
gulp.task('test', function () {
  return gulp.src(TESTJS)
    .pipe(mocha({ reporter: 'spec' }))
    .on('error', function (err) {
      // handle the mocha errors so that they don't cloud the test results,
      // or end the watch
      console.log(err.toString());
      this.emit('end');
    });
});
Currently TESTJS covers only my server-side tests. I want to use this same process to execute my client tests as well. I looked into gulp-blanket-mocha and gave it a shot, but I keep running into the same issue: when trying to test my backbone code, it fails because the other client components necessary (namely jquery) are not found by the test runner. I get that I need to use some sort of headless webkit like PhantomJS, but I am having real trouble figuring out how to incorporate that into this gulp process with browserify.
Anyone tried getting a setup like this going or have any ideas what I am missing here in terms of having my gulp "test" task execute my client side mocha tests as well as my server side?
A potential setup is:
Test runner - this is the glue between gulp and karma; it provides the option to set karma's options.files from the gulp.src() stream. Frankly, if you have no steps before your karma tests, use karma directly within the gulp task, without a gulp plugin.
Use the associated karma plugins to run on PhantomJS/Chrome/Firefox.
Use the associated karma plugins for coverage and alt-JS compilation.
More plugins and karma options configure the reporting of tests and coverage.
Using browserify changes the whole setup above:
Since browserify needs to resolve requires, it must run on all the "entry point" files. Typically your tests should require the sources, and must be the entry points.
Use karma-bro - it solves the problems in karma-browserify (at the moment that doesn't even work - it can't work with the browserify 5.0 API) and karma-browserifast.
Coverage becomes tricky since sources, vendor sources and tests are all bundled. So I created a custom coverage transform that marks which code should be instrumented while browserify is bundling.
browserify should be a "preprocessor" in karma.
A "transform: []" array should be configured in the browserify options.
The transforms can be configured by taking an existing transform module and wrapping it with a custom module, as I did above for browserify-istanbul.

Sail.js requires server restart after running command to refresh database

From this question, "Sails js using models outside web server", I learned how to run a command from the terminal to update records. However, when I do this the changes don't show up until I restart the server. I'm using the sails-disk adapter and v0.9.
According to the source code, the application using sails-disk adapter loads the data from file only once, when the corresponding Waterline collection is being created. After that all the updates and destroys happen in the memory, then the data is being dumped to the file, but not being re-read.
That said, what's happening in your case is that once your server is running, it doesn't matter that you change the DB file (.tmp/disk.db) from your CLI instance, because the lifted Sails server won't know about the changes until it's restarted.
Long story short, the solution is simple: use another adapter. I would suggest checking out sails-mongo or sails-redis (though the latter is still in development), for both Mongo and Redis have data auto-expiry functionality (http://docs.mongodb.org/manual/tutorial/expire-data/, http://redis.io/commands/expire). Besides, sails-disk is not production-suitable anyway, so sooner or later you would need something else.
One way to accomplish deleting "expired records" over time is by rolling your own "cron-like job" in /config/bootstrap.js. In pseudo code it would look something like this:
module.exports.bootstrap = function (cb) {
  setInterval(function () {
    // <insert Model delete code here>
  }, 300000);
  cb();
};
The downside to this approach is that if the job throws, it will stop the server. You might also take a look at kue.
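To blunt that downside, the interval callback can be wrapped so a thrown error is logged instead of killing the lifted server. A sketch (the model call in the usage comment is hypothetical):

```javascript
// Sketch: wrap a periodic job so exceptions are logged, not fatal.
function safeJob(job) {
  return function () {
    try {
      job();
    } catch (err) {
      console.error('cleanup job failed:', err);
    }
  };
}

// In /config/bootstrap.js (300000 ms = 5 minutes):
// module.exports.bootstrap = function (cb) {
//   setInterval(safeJob(function () { /* e.g. Record.destroy(...) */ }), 300000);
//   cb();
// };
```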

How to test an AngularJS/SocketStream/Node.js app using Karma

I am working on an AngularJS application that is delivered by a SocketStream/node.js server.
I have an AngularJS service that calls api functions on the SocketStream server and progress has been good so far.
But now the time has come to start writing the first tests and the first testing framework that came to mind is Karma/Jasmine, since this is the recommend AngularJS set up.
So far so good, but since my AngularJS modules are imported using 'require' (SocketStream's version, not require.js) and server api calls are part of the test, I need to configure Karma to load SocketStream (at least its client side).
I took a good look at 'https://github.com/yiwang/angular-phonecat-livescript-socketstream', but when I run this example I get runtime errors, possibly because I have later versions of various dependencies installed.
I managed to get 'require' resolved by packing my SocketStream app: adding 'ss.client.packAssets()' to app.js and running 'SS_PACK=1 node app.js'. But when I start karma it logs an error message saying:
'Chrome 23.0 (Linux) ERROR
Uncaught TypeError: undefined is not a function
at /the...path/client/static/assets/app/1368026081351.js:25'
'1368026081351.js' is the SocketStream packed-assets file. If I don't load it, the error message is something like 'require is undefined', so my best guess is that the error is happening somewhere inside the SocketStream require code. I can tell because I run karma in DEBUG mode and can see all the files being served.
I have been trying different approaches to find out what is happening, but to no avail. So my questions are:
Is anybody else successfully testing AngularJS/SocketStream using Karma?
Does anybody have any suggestions as to how I can fix, or at least debug this problem?
Are there any alternatives/better solutions?
Time to answer, sort of, my own question:
Sort of, because I came to the conclusion that Karma and node.js/SocketStream have a lot of overlap, so I decided to see if I could omit Karma altogether and deliver the Jasmine testing platform through SocketStream. It turns out that this is possible, and here's how I did it:
I defined a new SocketStream route and client in my 'app.js' file:
ss.client.define('test', {
  view: 'SpecRunner.html',
  css: ['libs/test'],
  code: ['libs', 'tests', 'app'],
  tmpl: 'none'
});

ss.http.route('/test', function (req, res) {
  res.serveClient('test');
});
I downloaded jasmine-standalone-1.3.1.zip and copied 'SpecRunner.html' to the 'client/views' folder. I then edited it to make it load AngularJS and all SocketStream client files, like all other views:
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.0.6/angular.min.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.0.6/angular-resource.min.js"></script>
<SocketStream/>
I removed the 'script' tags that import the sample source files ('Player.js' and 'Song.js') and specs, but left the last 'script' block in place unmodified.
I then created a new folder inside 'client/css/libs' called 'test' and copied 'jasmine.css' in there unmodified.
Then I copied 'jasmine.js' and 'jasmine-html.js' renamed to '01-jasmine.js' and '02-jasmine-html.js' but otherwise unmodified, into '/client/code/libs'.
Now Jasmine is in place and will be invoked by using the '/test' route. The slightly unsatisfactory bit is that I haven't found an elegant place to store my spec files. They only work so far if I place them inside the 'libs' folder. Anywhere else and they are served by SocketStream as modules and are not run.
But I can live with that for now. I can run Jasmine tests without having to configure a special Karma setup.
