Grunt-Karma: Use the Node.js fs module in a Jasmine test file - node.js

I'm writing unit-tests with the Jasmine-framework.
I use Grunt and Karma for running the Jasmine testfiles.
I simply want to load the content of a file on my local file system (e.g. example.xml).
I thought I could do this:
var fs = require('fs');
var fileContent = fs.readFileSync("test/resources/example.xml").toString();
console.log(fileContent);
This works well in my Gruntfile.js and even in my karma.conf.js file, but not in my
Jasmine file. My test file looks like this:
describe('Some tests', function() {
    it('load xml file', function() {
        var fs = require("fs");
        var fileContent = fs.readFileSync("test/resources/example.xml").toString();
        console.log(fileContent);
    });
});
The first error I get is:
'ReferenceError: require is not defined'
I don't know why I cannot use RequireJS here, because I can use it
in Gruntfile.js and even in karma.conf.js.
Okay, but when I manually add require.js to the files property in my karma.conf.js file,
I get the following message:
Module name "fs" has not been loaded yet for context: _. Use require([])
With the array syntax of RequireJS, nothing happens.
I guess it is not possible to access Node.js functionality in Jasmine when running the
test files with Karma. But since Karma itself runs on Node.js, why is it not possible to access the fs module of Node.js?
Any comment/advice is welcome.
Thanks.

Your test does not work because Karma is a test runner for client-side JavaScript (JavaScript that runs in the browser), but you are trying to test Node.js code with it (code that runs on the server). Karma simply can't run server-side tests. You need a different test runner; for example, take a look at jasmine-node.
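For illustration, a sketch of how the same test could run under jasmine-node, where require and fs are available because the spec executes in Node rather than in a browser (the file paths here are assumptions):
// spec/load-xml.spec.js (run with: jasmine-node spec/)
var fs = require('fs');

describe('Some tests', function() {
    it('load xml file', function() {
        var fileContent = fs.readFileSync('test/resources/example.xml').toString();
        expect(fileContent.length).toBeGreaterThan(0);
    });
});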

Since this comes up first in the Google search: I received a similar error but wasn't using any Node.js-style code in my project. It turned out that one of my Bower components had a full copy of Jasmine in it, including its Node.js-style code, and I had
{ pattern: 'src/**/*.js', included: false },
in my karma.conf.js.
Unfortunately Karma doesn't provide the best debugging for this sort of thing; it dumps you out without telling you which file caused the issue. I had to narrow that pattern down to individual directories to find the offender.
Anyway, be wary of Bower installs; they bring a lot of code down into your project directory that you might not really care to have.
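A sketch of how to keep such files out of the run with Karma's exclude option in karma.conf.js (the bower_components path is an assumption about the project layout):
files: [
    { pattern: 'src/**/*.js', included: false }
],
exclude: [
    // keep Bower-installed copies of Jasmine (and their Node.js-style code) out of the run
    'src/bower_components/**/*'
],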

I think you're missing the point of unit testing here, because it seems to me that you're copying application logic into your test suite. This defeats the purpose of a unit test: it is supposed to run your existing functions through a test suite, not test that fs can load an XML file. In your scenario, if your XML handling code were changed (and a bug introduced) in the source file, it would still pass the unit test.
Think of unit testing as a way to run your function through lots of sample data to make sure it doesn't break. Set up your file reader to accept input, and then in the Jasmine test simply:
describe('My XML reader', function() {
    beforeEach(function() {
        this.xmlreader = new XMLReader();
    });

    it('can load some xml', function() {
        var xmldump = this.xmlreader.loadXML('inputFile.xml');
        expect(xmldump).toBeTruthy();
    });
});
Test the methods that are exposed on the object you are testing. Don't make more work for yourself. :-)
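For completeness, a minimal sketch of what such an XMLReader might look like on the Node side; the class and method names come from the test above, but this implementation is an assumption:
// xmlreader.js: hypothetical implementation backing the spec above
var fs = require('fs');

function XMLReader() {}

// reads the file synchronously and returns its contents as a string
XMLReader.prototype.loadXML = function(path) {
    return fs.readFileSync(path, 'utf8');
};

module.exports = XMLReader;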

Related

Mocking file system for unit testing file watcher library

I am currently unit testing a wrapper I've written around Chokidar, a file system watcher that is itself a wrapper around the native fs.watch functionality. I write my tests with Mocha/Chai.
I know there is the wonderful mock-fs library; the caveat, however, is that it says at the bottom:
The following fs functions are not currently mocked (if your tests use these, they will work against the real file system): fs.FSWatcher, fs.unwatchFile, fs.watch, and fs.watchFile. Pull requests welcome.
So it will not help me unit test my watcher.
Currently I have it set up with real reads/writes to the file system, without mocking, but that is tedious and timing-dependent (which makes it hardware-dependent).
Would anybody be able to advise me on a better approach?
I had this problem earlier today. There is another library for mocking Node's fs called fs-mock. This library allows you to mock fs.watch(). You can do the following:
const Fs = require("fs-mock");

const fsmock = new Fs({
    './mock-directory': {
        'file9.txt': 'fileContent9',
        'file8.txt': 'fileContent8',
        'file7.txt': 'fileContent7',
        'file6.txt': 'fileContent6',
        'file5.txt': 'fileContent5',
        'file4.txt': 'fileContent4',
        'file3.txt': 'fileContent3',
        'file2.txt': 'fileContent2',
    }
});

// watch the mocked directory (the original snippet referenced an undefined
// `directory` variable; the mocked path is used directly here)
fsmock.watch('./mock-directory', { recursive: false }, (eventType, filename) => {
    // YOUR CODE GOES HERE
});

node.js issues with Meteor's file system

I have tried to figure out what I am missing in this puzzle between Node.js and Meteor.js. I know Meteor is built on Node.js, but Meteor does not work properly with Node.js here. Either I need to do 20 more steps to get the same result, and I don't know what they are, or there is a serious bug between the two. Standalone Node.js runs the commands below just fine; running the same commands in Meteor causes errors or undefined results. I wish I had a way to solve this, or they need to patch it so it works the way it should.
Example #1
var fs = require('fs');
fs.readFile('file.txt', 'utf8', function (err, data) {
    if (err) {
        return console.log(err);
    }
    console.log(data);
});
Example #2
var jetpack = require('fs-jetpack');
var data = jetpack.read('file.txt');
console.log(data);
Example #3
var fs = require('fs');
var readMe = fs.readFileSync('file.txt', 'utf8');
console.log(readMe);
You shouldn't try to load files like this because you don't know what the folder structure looks like. Meteor creates builds from your project directory, both in development and production mode. This means that even though you have a file.txt in your project folder, it doesn't end up in the same place in the build (or it isn't even included in the build at all).
For example, your code tries to read the file from the development build folder .meteor/local/build/programs/server. However, this folder doesn't contain file.txt.
Solution: Store file.txt in the private folder of your project and use Assets.getText to read it. If you still want to use the functions from fs to load the file, you can retrieve the absolute path with Assets.absoluteFilePath.
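A minimal sketch of that approach, assuming file.txt is placed in your project's private folder (Assets is available on the server only):
// server-side code; reads private/file.txt from the build's assets, not the source tree
var data = Assets.getText('file.txt');
console.log(data);

// if you really need an fs-style path, Assets.absoluteFilePath returns one
var fs = require('fs');
var absolutePath = Assets.absoluteFilePath('file.txt');
console.log(fs.readFileSync(absolutePath, 'utf8'));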

How to compile ReactJS for use on server with command line arguments?

I've decided to try out ReactJS. Along with that, I've decided to use Gulp for compiling .jsx to .js, also for the first time.
I can compile it no problem for client-side use with browserify. Here's my gulp task:
browserify("./scripts/main.jsx")
.transform(
babelify.configure({
presets: ["react"]
}))
.bundle()
.pipe(source('bundle.js'))
.pipe(gulp.dest('./scripts/'));
But since I use PHP to generate the data, I need to get that data to Node. If I use browserify, it prevents me from using process.argv in Node. I could save the data to a file and read that file in Node, so I wouldn't need to pass the whole state to Node, but I still need to pass the identifying arguments so that Node knows which file to load.
What should I use instead of browserify?
If you need to compile a React module to ES5 for use on the server, use Babel itself (a sketch follows below).
The built-in fs module may help with reading and writing files: https://nodejs.org/api/fs.html
Have you considered posting to and reading from a database?
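A minimal sketch of the Babel route using its Node API (babel-core 6 with the react preset, matching the question's gulp config; the file paths are assumptions):
// compile-server.js: compile a single .jsx file to plain JS for use with node
var fs = require('fs');
var babel = require('babel-core');

var result = babel.transformFileSync('./scripts/server.jsx', {
    presets: ['react']
});
fs.writeFileSync('./scripts/server.js', result.code);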
Here's how I solved it:
I learned that you can create standalone bundles with browserify, so I compiled all the server code I need (components + rendering) as a standalone bundle. Then I created a small Node script which is responsible only for reading arguments, loading data and sending it to the rendering code.
I'm not sure if this is the proper way it should be done, but it works.
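For reference, a gulp task for the standalone server bundle might look much like the client one in the question; the entry file name and the exported name "Server" here are assumptions matching the setup script below:
browserify("./scripts/server.jsx", { standalone: "Server" })
    .transform(babelify.configure({
        presets: ["react"]
    }))
    .bundle()
    .pipe(source('server.js'))
    .pipe(gulp.dest('./scripts/'));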
Here's code for the "setup" script:
var fs = require('fs');
var Server = require('./server.js');

if (process.argv[2]) {
    // note: the /g flag is needed to strip every non-alphanumeric character
    var region = process.argv[2].toLowerCase().replace(/[^a-z0-9]/g, '');
    if (region != '') {
        var data = JSON.parse(fs.readFileSync(__dirname + '/../tmp/' + region + '.json', 'utf8'));
        console.log(Server.render(data.deal, data.region));
    }
}
This way I only need to deploy two files, and I can still easily compile JSX to JS.

require.main.require works but not inside Mocha test

I have written a global function for requiring certain files of my app/framework:
global.coRequireModel = function(name) {
    // CRASH happens here
    return require.main.require('./api/_co' + name + '/_co' + name + '.model');
};
This module is in /components/coGlobalFunctions.
It is required in my main app app.js like this:
require('./components/coGlobalFunctions');
Then, in other modules that use "something" from the framework, I use:
var baseScheme = coRequireModel('Base');
This works, but not in the Mocha tests, which give me an 'Error: Cannot find module' right before the require.main.require call.
It seems that the tests are run from another source folder, but I thought require.main.require would remove the need to reference modules relatively.
EDIT:
An example test file living in api/user:
var should = require('should');
var app = require('../../app');
var User = require('./user.model');
...
require.main points to the module that was run directly from node. So, if you run node app.js, then require.main will point to app.js. If, on the other hand, you run it using mocha, then require.main will point to mocha. This is likely why your tests are failing.
See the node docs for more details.
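You can see this for yourself with a tiny check (a sketch, nothing project-specific):
// main-check.js: prints different output under `node main-check.js` vs `mocha main-check.js`
if (require.main === module) {
    console.log('run directly by node; require.main is this file');
} else {
    console.log('loaded by another entry point; require.main is', require.main.filename);
}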
Because require.main was not index.html in my node-webkit app when running Mocha tests, it threw errors left and right about not being able to resolve modules. A hacky fix in my test-helper.js (required first in all tests) fixed it:
var path = require('path')

require.main.require = function (name) {
    // navigate to the main directory
    var newPath = path.join(__dirname, '../', name)
    return require(newPath)
}
This feels wrong, though it worked. Is there a better way to fix this? It's like combining some of the above solutions with #7 to get mocha testing working, but modifying main's require just to make everything work when testing feels really wrong.
For other avoid-the-".."-mess solutions, see here:
https://gist.github.com/branneman/8048520
This is pretty old, but here is my solution.
I needed a test harness module to be published to a private registry and required by the mocha test suite. I wanted the calling test code to pass the code under test to the harness rather than requiring it directly:
var harness = require('test-harness');
var codeUnderTest = harness('../myCode');
Inside harness (which was found in the project node_modules directory), I used the following code to make require find the correct file:
if (!path.isAbsolute(target)) {
    target = path.join(path.dirname(module.parent.paths[0]), target);
}
var codeUnderTest = require(target);
...
return codeUnderTest;
This relies on the require path resolution that always starts with looking for a node_modules subdirectory relative to the calling file. Couple that with module.parent and you can get access to that search path. Then just remove the trailing node_modules part and concatenate the relative filename.
For other scenarios not using relative paths, this can be accomplished with the paths option to require.resolve (require itself does not accept an options argument):
var codeUnderTest = require(require.resolve(target, { paths: module.parent.paths }));
...
return codeUnderTest;
And the two could be combined as well. I used the first form because I was actually using proxyquire which does not offer the paths option.

Write qUnit output to file via Grunt

I need to be able to write QUnit test results to a file so my build server can parse them.
I'm using QUnit (grunt-contrib-qunit) through Grunt along with the jUnit reporter found here.
I can get the report written to the log just as it states, but I'm having trouble getting it into a file. I've tried QUnit callbacks in my Gruntfile, but none of them seem to receive the XML. I also tried to simply redirect stdout, but it (of course) printed all of the non-XML command-line output along with the XML.
In short, I've got the XML echoing properly in the console.log statement; I just need to get it into a file somehow, whether through Grunt, phantomjs, or any other means.
Well, if you're running QUnit tests from Grunt, then you have the full power of Node at your disposal. I've never used that JUnit plugin, but if it just gives you a callback in your QUnit HTML file, then you would need an in-browser solution (even if that browser is phantomjs).
Phantom uses QtWebKit, which has implemented the File API, so you could implement a solution using that from JUnit's callback; but, of course, it would fail if you run the tests in certain other browsers (namely IE9 or under). Here's how that might look (no guarantees on this being exact; I have not run it):
QUnit.jUnitReport = function(report) {
    function onInitFs(fs) {
        fs.root.getFile('qunit_report.xml', {create: true}, function(fileEntry) {
            fileEntry.createWriter(function(fileWriter) {
                fileWriter.onwriteend = function(e) { /* if you need it */ };
                fileWriter.onerror = function(e) { /* if you need it */ };
                var blob = new Blob([report.xml], {type: 'application/xml'});
                fileWriter.write(blob);
            }, someErrorHandlerFunction);
        }, someErrorHandlerFunction);
    }
    window.requestFileSystem(window.TEMPORARY, 1024*1024, onInitFs, someErrorHandlerFunction);
};
And again, if you need to do something to write the file in IE9 or under (or some mobile browsers) you'll need another solution, like kicking off an ajax request to upload the data to a server that stores the file. You could even run that "server" from within Grunt and have Node write the file.
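A minimal sketch of that last idea: a tiny Node server, which could be started from the Gruntfile, that accepts the POSTed XML and writes it to disk. The port, route, and file name are assumptions:
// report-server.js: hypothetical receiver for the ajax-upload fallback
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
    if (req.method === 'POST' && req.url === '/report') {
        var body = '';
        req.on('data', function (chunk) { body += chunk; });
        req.on('end', function () {
            fs.writeFileSync('qunit_report.xml', body);
            res.end('saved');
        });
    } else {
        res.writeHead(404);
        res.end();
    }
}).listen(9001);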
