I am still in the process of learning Node and have come across this issue. In the following situation, using a silly example (the full code cannot be placed here), when I run node index.js somethinghere in the terminal, the code does not execute. I realise that event and context have no bearing in this example, but they do in the code I am currently writing.
Is this because I am doing exports.imageRs?
How would I get it to run on the command line by passing in arguments?
Note that the original code is to be run both on AWS Lambda and from the command line.
file index.js
exports.imageRs = function (event, context) {
console.log(process.argv);
}
In the example you have shown, Node will define the exports.imageRs function, but it won't execute it.
The fix is something along these lines:
exports.imageRs = function (event, context) {
  console.log(process.argv);
};

if (!module.parent) {
  exports.imageRs();
}
The !module.parent check prevents the code inside from executing when your module is required from other modules, which is probably what you want.
$ node index.js somethinghere
[ '/path/to/node',
'/path/to/index.js',
'somethinghere' ]
$ node
> require('./index')
{ imageRs: [Function] }
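To also pass command-line arguments into the handler, one option (a sketch, assuming you are happy to fake the event object when running locally; the args key is made up for illustration) is to forward process.argv when the module is the entry point:

exports.imageRs = function (event, context) {
  console.log(event);
};

if (!module.parent) {
  // Build a fake Lambda-style event from the command-line arguments;
  // on AWS Lambda the real event/context are supplied by the runtime.
  exports.imageRs({ args: process.argv.slice(2) }, {});
}

With that in place, node index.js somethinghere prints { args: [ 'somethinghere' ] }, while Lambda still invokes imageRs with its own event and context.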
Related
I am using a function from one file in another file and calling it there. This causes the function to run twice at the same time when run from the command line, but not when I run it in VSCode.
Here is an example:
// fileOne
async function task() {
console.log('Hello')
}
module.exports = { task }
// fileTwo
const fileOne = require('./fileOne');
fileOne.task();
Output when run in VSCode:
Hello
Output when run from the command line:
Hello
Hello
I'm not sure why this is happening. No, I am not calling it in fileOne by accident, because then it would also run twice in VSCode.
Thanks.
If your fileOne and fileTwo look exactly as in your problem statement, i.e.:
fileOne.js:
async function task() {
console.log('Hello')
}
module.exports = { task }
fileTwo.js:
const fileOne = require('./fileOne');
fileOne.task();
the output is a single 'Hello' when run in any of the following ways:
in Command Prompt
node fileTwo.js
in Windows PowerShell
node .\fileTwo.js
in Linux Bash Terminal
$ nodejs fileTwo.js
The same applies if you run the script with both files combined into one file (as you mention in the comments).
There were some cases where Node.js would print output twice, but those were different scenarios.
You can try running just fileTwo.js on its own, but as already mentioned, it also works fine when combined into a single file (e.g. your my_program_here.js, in case that is just a combination of fileOne.js and fileTwo.js).
const fileOne = require('./fileOne');
The './' here is resolved relative to the requiring file, so it behaves the same regardless of which command line you use.
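If the double 'Hello' ever comes back, a common cause is a file that both exports a function and calls it at the top level, so it runs once when required and once more when executed directly. A minimal sketch of guarding against that (same idea as the !module.parent check in the first answer above):

// fileOne.js
async function task() {
  console.log('Hello');
}

module.exports = { task };

// Runs only for "node fileOne.js", not when fileTwo.js requires this file
if (require.main === module) {
  task();
}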
I know how to retrieve command-line args in JS in the following manner:
Config.getTestArgs = () => {
  try {
    return global.commandLineArgs.args["test-args"];
  }
  catch (e) {
    logger.error(`Error reading test-args from command line: ${e}`);
    return null;
  }
};
When I use the same approach in TypeScript, I get an error: Cannot find module 'global'.
If I pass my input like
--build --test-args TestArgument1
getTestArgs should return TestArgument1 as output.
Consider that we have our own build system which uses Node.js and TypeScript. Which Node.js dependencies do I need to consider?
In TypeScript, and Node.js in general, there are a few ways to retrieve command-line arguments. You can use the built-in process.argv property, which returns an array containing the command-line arguments passed when the Node.js process was launched. Since the first two entries are almost always the path to node and the path to script.js, it is generally used as process.argv.slice(2).
Example:
node script.js --build --test-args TestArgument1
script.js
console.log(process.argv.slice(2)) // [ '--build', '--test-args', 'TestArgument1' ]
The other, arguably better, way is to use a tried and tested library to parse your command line arguments. Popular options include:
Minimist: For minimal argument parsing.
Commander.js: Most adopted module for argument parsing.
Meow: Lighter alternative to Commander.js.
Yargs: More sophisticated argument parsing (heavy).
Vorpal.js: Mature / interactive command-line applications with argument parsing.
For your case, Minimist would probably be the best solution.
node script.js --build --test-args TestArgument1 would look like this:
const argv: minimist.ParsedArgs = require('minimist')(process.argv.slice(2));
console.dir(argv);
/*
{
  _: [],
  build: true,
  'test-args': 'TestArgument1'
}
*/
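Tying that back to the getTestArgs helper from the question, a minimal sketch with Minimist could look like this (Config and logger are stand-ins for the objects in the question, and the code is plain Node.js; a TypeScript version is the same apart from type annotations):

const minimist = require('minimist');

// Stand-ins for the question's own objects
const Config = {};
const logger = console;

Config.getTestArgs = () => {
  try {
    const argv = minimist(process.argv.slice(2));
    return argv['test-args'] || null; // 'TestArgument1' for the input above
  } catch (e) {
    logger.error(`Error reading test-args from command line: ${e}`);
    return null;
  }
};

// node script.js --build --test-args TestArgument1
console.log(Config.getTestArgs()); // TestArgument1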
Here is a simple example of adding a command in Node.js using commander:
'use strict';
const {Command} = require('commander');
const run = () => {
  const program = new Command();
  console.log('CMD');
  program.command('cmd [opts...]')
    .action((opts) => {
      console.log('OPTS');
    });
  program.parse(process.argv);
};
run();
In this case everything works fine, but when I add a description and options, commander throws an error:
program.command('cmd [opts...]', 'DESCRIPTION', {isDefault: true})
node test-commander.js cmd opts
test-commander-cmd(1) does not exist, try --help
My env:
node v8.9.3
npm 5.3.0
commander 2.12.2
That is the declared behavior of commander. From the npm page under Git-style sub-commands...
When .command() is invoked with a description argument, no .action(callback) should be called to handle sub-commands, otherwise there will be an error. This tells commander that you're going to use separate executables for sub-commands, much like git(1) and other popular tools.
The commander will try to search the executables in the directory of the entry script (like ./examples/pm) with the name program-command, like pm-install, pm-search.
So, when you add a description the way you have, commander assumes you have another executable file called test-commander-cmd for the sub-command.
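If all you wanted was a description (rather than a separate executable per sub-command), a sketch that should keep the in-process action handler is to set the description on the sub-command that .command() returns, instead of passing it as the second argument:

'use strict';
const {Command} = require('commander');

const program = new Command();

program.command('cmd [opts...]')
  .description('DESCRIPTION')
  .action((opts) => {
    console.log('OPTS', opts);
  });

program.parse(process.argv);

This keeps the action in the same file, so node test-commander.js cmd opts no longer makes commander look for a test-commander-cmd executable. (Whether isDefault can be combined with an in-process handler depends on the commander version, so treat that part as something to check against the docs.)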
If commander's behavior is not what you were expecting, I might recommend looking into a package I published, called wily-cli... only if you're not committed to commander, of course ;)
Assuming your code rests in file.js, your example with wily-cli would look like this...
const cli = require('wily-cli');
const run = () => {
cli
.command('cmd [opts...]', 'DESCRIPTION', (options, parameters) => { console.log(parameters.opts); })
.defaultCommand('cmd');
};
run();
// "node file.js option1 option2" will output "[ 'option1', 'option2' ]"
I have an app which exposes a script as a command. How do I test this script using Jest? More specifically, how do I execute this script with Jest and then apply the corresponding expectations? The script does not export any functions; it just contains a bunch of lines of code which are executed sequentially.
You could wrap your code in a main function, export it, run that function only when the module is executed from the command line, and then write tests for it. A simplified example could be:
// script.js
const toUpper = text => text.toUpperCase();

module.exports.toUpper = toUpper;

// Call the function (and print the result) only when executed from the command line
if (require.main === module) {
  console.log(toUpper(process.argv[2]));
}
Then import the toUpper function in the test file:
// script.test.js
const { toUpper } = require('./script');
test('transforms params to uppercase', () => {
expect(toUpper('hi')).toBe('HI');
});
See Node: Accessing the main module
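If you also want to exercise the command-line entry point itself (not just the exported function), one option is to spawn the script from the test and assert on its output. A sketch, assuming the main guard above prints its result with console.log:

// script.cli.test.js
const { execFile } = require('child_process');
const path = require('path');

test('prints the uppercased argument when run from the command line', done => {
  const script = path.join(__dirname, 'script.js');
  execFile(process.execPath, [script, 'hi'], (error, stdout) => {
    expect(error).toBeNull();
    expect(stdout.trim()).toBe('HI');
    done();
  });
});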
I have the following Node.js code:
var fs = require('fs');
var spawn = require('child_process').spawn;

var Unzipper = {
  unzip: function (src, dest, callback) {
    var self = this;

    if (!fs.existsSync(dest)) {
      fs.mkdir(dest);
    }

    var unzip = spawn('unzip', [ src, '-d', dest ]);

    unzip.stdout.on('data', function (data) {
      self.stdout(data);
    });

    unzip.stderr.on('data', function (data) {
      self.stderr(data);
      callback({message: "There was an error executing an unzip process"});
    });

    unzip.on('close', function() {
      callback();
    });
  }
};
I have a NodeUnit test that executes successfully. Using PhpStorm to debug the test, the var unzip is assigned correctly.
However, if I run the same code as part of a web service, the spawn call doesn't return properly and the server crashes when it tries to attach an on handler to the nonexistent stdout property of the unzip var.
I've tried running the program outside of PhpStorm; however, it crashes on the command line as well, for the same reason. I suspect it's a permissions issue that the tests don't have to deal with. A web server spawning processes could cause chaos in a production environment, so some extra permissions might be needed, but I haven't been able to find (or I've missed) documentation to support my hypothesis.
I'm running v0.10.3 on OS X Snow Leopard (via MacPorts).
Why can't I spawn the child process correctly?
UPDATES
For #jonathan-wiepert
I'm using prototypical inheritance, so when I create an "instance" of Unzipper I set stdout and stderr, i.e.:
var unzipper = Unzipper.spawn({
stdout: function(data) { util.puts(data); },
stderr: function(data) { util.puts(data); }
});
This is similar to the concept of "constructor injection". As for your other points, thanks for the tips.
The error I'm getting is:
project/src/Unzipper.js:15
unzip.stdout.on('data', function (data) {
^
TypeError: Cannot call method 'on' of undefined
As per my debugging screenshots, the object that is returned from the spawn call is different under different circumstances. My test passes (it checks that a ZIP can be unzipped correctly) so the problem occurs when running this code as a web service.
The problem was that the spawn method created on the Object prototype (see this article on prototypical inheritance) was causing the child_process.spawn function to be replaced, so the wrong function was being called.
I saved child_process.spawn into a property on the Unzipper "class" before it gets clobbered and use that property instead.
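A sketch of what that fix can look like (names are illustrative, not the exact original code): keep a private reference to child_process.spawn at require time, and use it inside unzip so a later spawn property on the prototype chain cannot shadow it.

var childProcess = require('child_process');

var Unzipper = {
  // Captured once, before anything can clobber the name "spawn"
  _spawn: childProcess.spawn,

  unzip: function (src, dest, callback) {
    var unzip = this._spawn('unzip', [ src, '-d', dest ]);
    // ...attach the stdout/stderr/close handlers exactly as before...
  }
};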