Using Jest to test a command line tool

I have an app which exposes a script as a command. How do I test this script using Jest? More specifically, how do I execute this script with Jest and then apply the corresponding expectations? The script does not export any functions; it just contains a bunch of lines of code which are executed sequentially.

You could wrap your code in a main function, export it, run the function only when the module is executed from the command line, and then write tests for it. A simplified example could be:
// script.js
const toUpper = text => text.toUpperCase();
module.exports.toUpper = toUpper;

// It calls the function only if executed through the command line
if (require.main === module) {
  toUpper(process.argv[2]);
}
And then import the toUpper function in the test file:
// script.test.js
const { toUpper } = require('./script');

test('transforms params to uppercase', () => {
  expect(toUpper('hi')).toBe('HI');
});
See Node: Accessing the main module
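If you also want to exercise the command-line entry point itself from Jest, one option is to spawn the script as a child process inside a test and assert on its output. This is only a minimal sketch: it assumes the require.main branch actually prints the result (e.g. console.log(toUpper(process.argv[2]))), which the snippet above does not do yet.
// script.cli.test.js
const { execFileSync } = require('child_process');

test('prints the uppercased argument', () => {
  // Run the script the way a user would and capture stdout as a string
  const output = execFileSync('node', ['./script.js', 'hi'], { encoding: 'utf8' });
  // Assumes script.js logs the uppercased value to stdout
  expect(output.trim()).toBe('HI');
});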

Related

Why is my function running twice in command line but not in vscode

I am using a function from one file in another file and calling it there. This causes the function to run twice when run from the command line, but not when I run it in VSCode.
Here is an example:
// fileOne
async function task() {
  console.log('Hello')
}
module.exports = { task }

// fileTwo
const fileOne = require('./fileOne');
fileOne.task();
Output when run in VSCode:
Hello
Output when run from the command line:
Hello
Hello
I'm not sure why this is happening... No, I am not calling it in fileOne by accident, because then it would also run twice in VSCode.
Thanks.
If your fileOne and fileTwo look exactly as in your problem statement, i.e.:
fileOne.js:
async function task() {
  console.log('Hello')
}
module.exports = { task }
fileTwo.js:
const fileOne = require('./fileOne');
fileOne.task();
the output is a single 'Hello' when run in any of the following ways:
in Command Prompt
node fileTwo.js
in Windows PowerShell
node .\fileTwo.js
in Linux Bash Terminal
$ nodejs fileTwo.js
The same applies if you run the script with both files combined into one file (as you mention in the comments).
There were some cases where Node.js would print the output twice, but those were different scenarios.
You can try running just fileTwo.js separately, but as already mentioned, it also worked fine as a combined file (e.g. your my_program_here.js, in case it is just a combination of fileOne.js and fileTwo.js).
const fileOne = require('./fileOne');
The relative './' in this require call resolves the same way in all of the command lines listed above.
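If you do see 'Hello' twice from a terminal, one way to track down where the second call comes from is to print a stack trace inside task(); the console.trace call below is a hypothetical debugging aid, not part of the original code:
// fileOne.js
async function task() {
  console.log('Hello')
  // console.trace prints the current call stack, so each invocation
  // shows which file and line triggered it
  console.trace('task() called from:')
}
module.exports = { task }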

Jest cannot test commander help function

With Jest I'm not able to test commander module functions that result in a process exit.
For example, if I pass the --help option or an invalid parameter like -x (see below), process.exit and process.stdout.write are not called as they should be, judging from the commander sources.
import { Command } from "commander";

let mockExit: jest.SpyInstance;
let mockStdout: jest.SpyInstance;

beforeAll(() => {
  mockExit = jest.spyOn(process, "exit").mockImplementation();
  mockStdout = jest.spyOn(process.stdout, "write").mockImplementation();
});

afterAll(() => {
  mockExit.mockRestore();
  mockStdout.mockRestore();
});

test("Ask for help", () => {
  // Setup
  const save = JSON.parse(JSON.stringify(process.argv));
  process.argv = ["--help"]; // Same when setting it to "-x"
  const program = new Command();
  program
    .option("-v, --verbose [level]", "verbose level")
    .parse(process.argv);
  expect(mockExit).toBeCalled();
  // expect(mockStdout).toBeCalled();
  // Cleanup
  process.argv = save;
});
What is strange is that, judging from the behavior of other tests, process.argv is not restored after this one.
Tests are in TypeScript and run through ts-jest.
Any ideas?
Thanks!
I suggest you use .exitOverride(), which is the approach Commander uses in its own tests. This means early "termination" is via a throw rather than exit.
https://github.com/tj/commander.js#override-exit-handling
The first problem though (from the comments) is the arguments. Commander expects the parse arguments to follow Node's conventions: argv[0] is the application, argv[1] is the script being run, and the user parameters come after that.
So instead of:
argsToParse = ["--help"];
something like:
argsToParse = ["node", "dummy.js", "--help"];
(No need to modify process.argv as such.)
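Putting that together, a test along these lines might work; this is a sketch rather than the exact code from the answer, and dummy.js is just a placeholder script name:
test("Ask for help", () => {
  const program = new Command();
  program.exitOverride(); // throw a CommanderError instead of calling process.exit
  program.option("-v, --verbose [level]", "verbose level");

  expect(() => {
    // arguments follow node conventions: [app, script, ...user arguments]
    program.parse(["node", "dummy.js", "--help"]);
  }).toThrow();
});
The stdout spy from the question can stay in place if you want to keep the help text out of the test output.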

Mocha how to use utils function stackTraceFilter()

I am trying to use the mocha utils stackTraceFilter() function, but I cannot find an example that explains how to use it in one's tests. I found the official tests here: link
But how can I implement it in my tests, which look something like this:
import { expect } from 'chai'
import 'mocha'
import { main, main2 } from './'

describe.only('index.ts', async () => {
  it('should start a job', async () => {
    // const R_RUN_MAIN = await main()
    await main2()
    // TEST
    expect(1).to.equal(1) // fails
  })
})
In those tests I can see the line
expect(filter(stack.join('\n')), 'to be', stack.slice(0, 3).join('\n'));
But how do I get the stack for my test?
expect(1).to.equal(1) // fails
Or, in general, how do I get the stack and initialize the filter function for the whole file when, for example, code from an imported file is already failing and producing a long stack trace?
UPDATE (2018.08.15)
So I got mocha running programmatically:
export {}
import * as MOCHA from 'mocha'

async function run() {
  const mocha = new MOCHA({
    reporter: 'progress',
    reporterOptions: {
      verbose: true,
    },
  })

  mocha.addFile(`./src/utils/mocha/index.spec.ts`)

  const R = mocha.run((failures) => {
    process.on('exit', () => {
      process.exit(failures)
    })
  })
}

run()
I don't know where to add and run the filter function:
const filter = MOCHA.utils.stackTraceFilter
The stackTraceFilter() function in mocha isn't meant to filter your code, but rather the mocha internals that in theory shouldn't be relevant to your tests. You can view the source code, but to sum it up it just filters out 'mocha' and 'node' lines from the stack, depending on the environment you're in.
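For what it's worth, stackTraceFilter() is a factory: calling it returns a filter function that takes a stack string and returns the filtered string. A minimal usage sketch (outside of any reporter) could look like this:
const Mocha = require('mocha')
const filter = Mocha.utils.stackTraceFilter() // returns a (stackString) => filteredString function

try {
  throw new Error('boom')
} catch (err) {
  // strips mocha- and node-internal frames from the stack string
  console.log(filter(err.stack))
}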
I think what you're looking for could be accomplished through the package StackTraceJS, which allows you to grab a stack from anywhere, and do what you want with it. We created a custom reporter for mocha which uses it, and it works quite well.
So, using the example from their site:
StackTrace.get()
  .then(function (stack) {
    // you now have a stack, and can filter as you wish
  })
  .catch(function (err) {});
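As an illustration of the kind of filtering you could then do yourself (the node_modules check below is just an assumption about what you want to drop, not something from the original answer):
StackTrace.get()
  .then(function (frames) {
    // keep only frames that come from our own source files
    const ownFrames = frames.filter(function (frame) {
      return frame.fileName && frame.fileName.indexOf('node_modules') === -1
    })
    console.log(ownFrames.map(function (frame) { return frame.toString() }).join('\n'))
  })
  .catch(function (err) {
    console.error(err)
  })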

running node app with exports.object from command line and lambda

I am still in the process of learning Node and have come across this issue. In the following situation, using a silly example (the full code cannot be placed here), when I run node index.js somethinghere in the terminal, the code does not execute. I realise that event and context have no bearing in this example, but they do in the code I am currently writing.
Is this because I am doing exports.imageRs?
How would I get it to run on the command line by passing in arguments?
Note that the original code is to be run both on AWS Lambda and from the command line.
file index.js
exports.imageRs = function (event, context) {
  console.log(process.argv);
}
In the example you have shown, Node will define the exports.imageRs function but won't execute it.
The fix is something along these lines:
exports.imageRs = function (event, context) {
  console.log(process.argv);
};

if (!module.parent) {
  exports.imageRs();
}
The !module.parent check prevents the code inside from executing when your module is required from other modules, which is probably what you want.
$ node index.js somethinghere
[ '/path/to/node',
  '/path/to/index.js',
  'somethinghere' ]
$ node
> require('./index')
{ imageRs: [Function] }
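To actually pass command-line arguments into the handler while keeping it Lambda-compatible, the command-line branch can build a makeshift event from process.argv. This is only a sketch; the event shape (an args array) is an assumption, since your real handler and Lambda events will look different:
exports.imageRs = function (event, context) {
  console.log('received args:', event.args);
};

if (!module.parent) {
  // when run as "node index.js somethinghere", forward the extra
  // arguments as a makeshift event object and an empty context
  exports.imageRs({ args: process.argv.slice(2) }, {});
}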

Load and execute external file in node.js

Is it easy/possible to run a Node.js file from another Node.js file?
For instance, I have two files, test1.js and test2.js. I want to execute test1.js from test2.js.
I think the better way to accomplish what you're trying to do would be to follow my other answer. But to execute commands on the command line as your question suggests, you want to use child_process.exec. For example:
var exec = require('child_process').exec,
    child;

child = exec('node test2.js {{args}}',
  function (error, stdout, stderr) {
    console.log('stdout: ' + stdout);
    console.log('stderr: ' + stderr);
    if (error !== null) {
      console.log('exec error: ' + error);
    }
  });
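On newer Node versions the same idea can be written with promises via util.promisify; this is a sketch that assumes the script being run is test2.js, as in the snippet above:
const { promisify } = require('util');
const exec = promisify(require('child_process').exec);

async function runScript() {
  // spawns "node test2.js" as a separate process and waits for it to finish
  const { stdout, stderr } = await exec('node test2.js');
  console.log('stdout: ' + stdout);
  console.log('stderr: ' + stderr);
}

runScript().catch(function (error) {
  console.log('exec error: ' + error);
});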
You simply run require('./test2.js'), and then call a function on the exported object. From the documentation on modules:
Node has a simple module loading system. In Node, files and modules are in one-to-one correspondence. As an example, foo.js loads the module circle.js in the same directory.
The contents of foo.js:
var circle = require('./circle.js');
console.log('The area of a circle of radius 4 is ' + circle.area(4));
The contents of circle.js:
var PI = Math.PI;

exports.area = function (r) {
  return PI * r * r;
};

exports.circumference = function (r) {
  return 2 * PI * r;
};
The module circle.js has exported the functions area() and circumference(). To export an object, add to the special exports object.
Note that exports is a reference to module.exports making it suitable for augmentation only. If you are exporting a single item such as a constructor you will want to use module.exports directly instead.
function MyConstructor (opts) {
  //...
}

// BROKEN: Does not modify exports
exports = MyConstructor;

// exports the constructor properly
module.exports = MyConstructor;
Variables local to the module will be private. In this example the variable PI is private to circle.js.
The module system is implemented in the require("module") module.
There are different scenarios here. Using modules and loading them "the right way" is what you should do when writing your own code.
What about "random" .js files, e.g. downloaded via web scraping? (Whether it is a good idea to execute them is beyond the scope of this answer...)
Well, you can just require them if you're only interested in the side effects:
test2.js:
console.log('hello')
test1.js:
console.log('about to execute')
require('./test2.js')
console.log('done')
Note the ./ in require(). But, if you want to run it twice, this won't work:
test3.js:
console.log('about to execute twice?')
require('./test2.js')
require('./test2.js')
console.log('surprise')
This shows that require works like a Python import: it only executes the file if it hasn't been loaded yet. But it is possible to circumvent this and force a reload: How to remove module after "require" in node.js?
test4.js:
console.log('about to execute twice!')
require('./test2.js')
delete require.cache[require.resolve('./test2.js')]
require('./test2.js')
console.log('NO surprise this time around')
The difference from a Python import is that you can't import anything unless it's exported. So you would have to change the required file and do something with module.exports.
If you're working with the node shell, there is an alternative:
test5.js:
console.log('the const below is private?')
const x = 5
And then:
$ node
> .load test5.js
console.log('the const below is private?')
const x = 5
the const below is private?
undefined
> x
5
Note that there are no quotes around the filename in .load, and also no ./. This is somewhat verbose (it echoes the loaded script), but it is at least some way of playing with the values the script creates.
Final warning: always be careful about what you're about to execute!
