Node.JS: Make module runnable through require or from command line

I have a script setupDB.js that runs asynchronously and is intended to be called from command line. Recently, I added test cases to my project, some of which require a database to be set up (and thus the execution of aforementioned script).
Now, I would like to know when the script has finished doing its thing. At the moment I'm simply waiting for a few seconds after requiring setupDB.js before I start my tests, which is obviously a bad idea.
The problem with simply exporting a function that takes a callback is that the script must still be runnable without any overhead (no command-line arguments, no additional function calls, and so on), since it is part of a bigger build process.
Do you have any suggestions for a better approach?

I was also looking for this recently and came across a somewhat related question, "Node.JS: Detect if called through require or directly by command line", whose answer helped me build something like the following, where the export only happens when the file is used as a module, and the CLI library is only required when it is run as a script.
function doSomething (opts) {
}

/*
 * Based on
 * https://stackoverflow.com/a/46962952/7665043
 */
function isScript () {
  return require.main && require.main.filename === /\((.*):\d+:\d+\)$/.exec((new Error()).stack.split('\n')[ 2 ])[ 1 ]
}

if (isScript()) { // note: the function must be called, not just referenced
  const cli = require('some CLI library')
  const opts = cli.parseCLISomehow()
  doSomething(opts)
} else {
  module.exports = {
    doSomething
  }
}
There may be some reason that this is not a good idea, but I am not an expert.

I have now handled it this way: I export a function that does the setup. At the beginning of the script I check whether it has been called from the command line, and if so, I simply call the function. At the same time, I can also call it directly from another module and pass a callback.
if (require.main === module) {
  // Called from command line
  runSetup(function (err, res) {
    // do callback handling
  });
}

function runSetup(callback) {
  // do the setup
}

exports.runSetup = runSetup;
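For completeness, here is a rough sketch of how a test file might consume that export so the tests only start once the setup has finished (the file name and the mocha-style before hook are assumptions, not part of the original setup script):

// test/setup.js (hypothetical consumer of the exported function)
var runSetup = require('../setupDB').runSetup;

before(function (done) {
  // mocha-style hook, assumed here; adapt to your test runner
  runSetup(function (err) {
    done(err); // tests only start once the database setup has finished
  });
});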

The make-runnable npm module can also help with this.
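Roughly, you append require('make-runnable') after your exports and the package exposes the exported functions on the command line. A sketch follows; the CLI argument syntax shown is an assumption, so check the package's README:

// setupDB.js (sketch)
function runSetup () {
  // placeholder for the real async setup work; returning a promise lets
  // make-runnable report completion
  return Promise.resolve('database ready');
}

module.exports = { runSetup: runSetup };

// must come after module.exports, typically as the last line of the file
require('make-runnable');

// then, from the command line (argument syntax is an assumption):
// node setupDB.js runSetup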

Related

duktape js - have multiple contexts with own global and reference to one common 'singleton'

We are in the process of embedding JS in our application, and we will use a few dozen scripts, each assigned to an event. Inside these scripts we provide a minimal callback API,
function onevent(value)
{ // user javascript code here
}
which is called whenever that event happens. The scripts have to have their own globals, since this function always has the same name and we call it from C++ code with
duk_get_global_string(js_context_duk, "onevent");
duk_push_number(js_context_duk, val);
if (duk_pcall(js_context_duk, 1) != 0)
{
  printf("Duk error: %s\n", duk_safe_to_string(js_context_duk, -1));
}
duk_pop(js_context_duk); /* ignore result */
At the same time, we want to allow minimal communication between the scripts, e.g.
Script 1
var a = 1;
function onevent(val)
{
  log(a);
}
Script 2
function onevent(val)
{
  a++;
}
Is there a way to achieve this? Maybe by introducing a separate 'ueber'-global object that is defined once and referenceable everywhere? It should be possible to add properties to this 'ueber'-global object from any script, like
Script 1
function onevent(val)
{
  log(ueber.a);
}
Script 2
function onevent(val)
{
  ueber.a=1;
}
Instead of plain JS files you could use modules. Duktape comes with a code example that implements a Node.js-like module system (including its code isolation). With that in place, you can put the variables that should be shared into a module and require it from every script.
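On the JS side, a rough sketch of what that could look like, assuming Duktape's Node.js-style module extra (or an equivalent host resolver) is wired up so that require() can load these files; the file names are placeholders:

// shared.js: the state both event scripts can see
module.exports = { a: 0 };

// script1.js
var shared = require('shared');
function onevent(val) {
  log(shared.a); // log() is the host-provided API from the question
}

// script2.js
var shared = require('shared');
function onevent(val) {
  shared.a++;
}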
We have an approach that seems to work now. After creating the new context with
duk_push_thread_new_globalenv(master_ctx);
new_ctx = duk_require_context(master_ctx, -1);
duk_copy_element_reference(master_ctx, new_ctx, "ueber");
we issue this call sequence for all properties/objects/functions created in the main context:
void duk_copy_element_reference(duk_context* src, duk_context* dst, const char* element)
{
  duk_get_global_string(src, element);
  duk_require_stack(dst, 1);
  duk_xcopy_top(dst, src, 1);
  duk_put_global_string(dst, element);
}
It seems to work (because everything is in the same heap and everything is single-threaded). Maybe someone with deeper insight into Duktape can comment on this? Is this a feasible solution with no side effects?
Edit: marking this as the answer. It works as expected, with no memory leaks or other issues.

Out of memory in Node.js can't be solved

I'm trying to run an executable in a Node.js application. When I call the executable with its parameters from the command line, everything is OK.
However, when I call it from the application like this:
const exec = require('child_process').execFile;

function f() {
  exec('code.exe', [some_arguments], function(err, data) {
    console.log(err);
  });
}
I get an OutOfMemoryException, which I couldn't solve even by launching Node this way:
node --max-old-space-size=8192 .\app.js
I know for sure the process cannot take 8 GB of memory; it uses at most a few hundred KB.
Thanks.

Node.js Spawning multiple threads within a class method

How can I run a single method multiple times in parallel when it is called as a method of a class?
At first I tried to use the cluster module, but I realized it just re-runs the whole process from the start, which makes sense.
How can I achieve something like what's outlined below?
I want a class's method to spawn n processes, and when the parallel tasks are completed, I can resolve a promise which the method returns.
The problem with the code below is that calling cluster.fork() will fork the index.js process.
index.js
const Person = require('./Person.js');
var Mary = new Person('Mary');
Mary.run(5).then(() => {...});
console.log('I should only run once, but I am called 5 times too many');
Person.js
const cluster = require('cluster');
class Person {
  run(distance) {
    var completed = 0;
    return new Promise((resolve, reject) => {
      for (var i = 0; i < distance; i++) {
        // run a separate process for each
        cluster.fork().send(i).on('message', message => {
          if (message === 'completed') { ++completed; }
          if (completed === distance) { resolve(); }
        });
      }
    });
  }
}
I think the short answer is that this is impossible, and it is not really a JS issue. To go multi-process (or multi-threaded) for your particular problem, you essentially need a copy of the object in every worker, since the method (maybe) needs access to instance fields. So you would have to either initialize the object in every worker or share memory; the latter is not provided by cluster and is non-trivial in other languages too, depending on the use case.
If the calculation is independent of the Person, I suggest you extract it and use the usual pattern (in index.js):
if (cluster.isWorker) {
  // use i for the calculation
} else {
  // create the Person, then fork the children in a for loop
}
You then collect the results in the master and update the Person as needed (a rough sketch of this shape follows below). You will be re-running index.js in each worker, but this is standard and you only execute what you need.
The problem is if the results depend on the Person. If those inputs are constant for all i, you can still send them to your forks along with i. Otherwise what you have is essentially the only way to fork. In general, forking with cluster is meant for the app itself, not for individual methods; that is the standard forking behavior.
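Here is that sketch, all in index.js; the message format and the placeholder calculation are assumptions:

const cluster = require('cluster');

if (cluster.isWorker) {
  // worker: receive i, do the calculation, report back and exit
  process.on('message', (i) => {
    const result = i * 2; // placeholder for the real calculation
    process.send({ i: i, result: result });
    process.exit(0);
  });
} else {
  // master: create the Person here, then fork one worker per i
  const distance = 5;
  const results = [];
  for (let i = 0; i < distance; i++) {
    const worker = cluster.fork();
    worker.on('message', (msg) => {
      results.push(msg);
      if (results.length === distance) {
        // all workers reported back; update the Person instance as needed
      }
    });
    worker.send(i);
  }
}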
Another solution
Following your comment, I suggest you check out child_process.execFile or child_process.exec with a separate file.
This way you can spawn a totally independent process on the fly. Instead of calling cluster.fork you call execFile, and you can use either the exit code or stdout as the return value (plus stderr, etc.). The Promise is now replaced with:
const child_process = require('child_process');

var results = [];
for (var i = 0; i < distance; i++) {
  // run a separate process for each i
  results.push(child_process.execFile('node', ['mymethod.js', i]));
}
// ... then catch the exit event on each child in results, or collect the output via callbacks.
Inside mymethod.js, have the code that takes i and returns what you want, either through the exit code or through stdout; both are available on the returned child process. This is a bit un-Node.js-y since you are waiting on asynchronous calls, but your requirements are non-standard. Since I'm not sure how you use this, perhaps collecting everything into an array and passing it to a callback is a better idea.
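For example, a sketch along those lines, using stdout to return the result and Promise.all to collect everything; the file names and the number-over-stdout protocol are assumptions:

// mymethod.js: runs one unit of work and reports the result on stdout
const i = Number(process.argv[2]);
const result = i * 2; // placeholder for the real calculation
process.stdout.write(String(result));

// in the parent (e.g. index.js):
const { execFile } = require('child_process');

function runAll(distance) {
  const tasks = [];
  for (let i = 0; i < distance; i++) {
    tasks.push(new Promise((resolve, reject) => {
      execFile('node', ['mymethod.js', String(i)], (err, stdout) => {
        if (err) { reject(err); return; }
        resolve(Number(stdout));
      });
    }));
  }
  return Promise.all(tasks);
}

runAll(5).then((results) => console.log(results));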

Intern Promise Timeouts

I am writing some functional tests with Intern and came across the following section of text...
"The test will also fail if the promise is not fulfilled within the timeout of the test (the default is 30 seconds; set this.timeout to change the value)."
at...
https://github.com/theintern/intern/wiki/Writing-Tests-with-Intern#asynchronous-testing
How do I set the promise timeout for functional tests? I have tried calling timeout() directly on the promise but it isn't a valid method.
I have already set the various WD timeout (page load timeout, implicit wait etc...) but I am having issues with promises timing out.
Setting the timeout in my tests via the suggested APIs just didn't work.
It's far from ideal, but I ended up modifying Test.js directly and hard-coding the timeout I wanted.
I did notice when looking through the source that there was a comment on the timeout code saying something like // TODO timeouts not working correctly yet
It seems to be working okay on the latest version:
define([
  'intern!object',
  'intern/chai!assert',
  'require',
  'intern/dojo/node!leadfoot/helpers/pollUntil'
], function (registerSuite, assert, require, pollUntil) {
  registerSuite(function () {
    return {
      name: 'index',

      setup: function () {
      },

      'Test timeout': function () {
        this.timeout = 90000;
        return this.remote.sleep(45000);
      }
    };
  });
});
You can also add defaultTimeout: 90000 to your configuration file (tests/intern.js in the default tutorial codebase) to globally set the timeout. This works well for me.
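In an Intern 3 style config this is just another top-level option. A sketch, to be merged into your existing tests/intern.js rather than replacing it:

// tests/intern.js
define({
  // ...your existing options (suites, environments, tunnel, ...)
  defaultTimeout: 90000
});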
A timeout for a test is either set by passing the timeout as the first argument to this.async, or by setting this.timeout (it is a property, not a method).
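For example, a test entry in the suite style shown above could use either variant roughly like this; doSomethingAsync is a placeholder for your own asynchronous call:

'async test with a longer timeout': function () {
  // either set the property directly...
  this.timeout = 90000;

  // ...or pass the timeout as the first argument to this.async()
  var dfd = this.async(90000);

  doSomethingAsync(function (err) {
    if (err) {
      dfd.reject(err);
      return;
    }
    dfd.resolve();
  });
}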
For anyone who found their way here while using InternJS 4 with async/await for functional testing: timeout and executeAsync just wouldn't work for me, but the pattern below did. Basically, I execute some setup logic and then use the sleep method with an interval longer than the setTimeout. Keep in mind that the JavaScript run inside execute is scoped to that call, so anything you want to reference later should be cached on the window object. Hopefully this saves someone else some time and frustration...
test(`timer test`, async ({ remote }) => {
  await remote.execute(`
    // run setup logic; this is scoped to the execute call, so anything you
    // want to reference later should be stored on window
    window.val = "foo";
    window.setTimeout(function () {
      window.val = "bar";
    }, 50);
    return true;
  `);

  await remote.sleep(51); // remote will chill for 51 ms

  let data = await remote.execute(`
    // now the timeout has run and the value can be read back
    return window.val;
  `);

  assert.strictEqual(
    data,
    'bar',
    `remote.sleep(51) should wait until the setTimeout has run, converting window.val from "foo" to "bar"`
  );
});

When using gulp, is there any way to suppress the 'Started' and 'Finished' log entries for certain tasks?

When using gulp, is there any way to suppress the 'Started' and 'Finished' log entries for certain tasks? I want to use the dependency tree, but I have a few tasks in the tree that I don't want logging for, because they are intermediary steps that have their own logging facilities.
You can use the --silent flag with the gulp CLI to disable all gulp logging.
https://github.com/gulpjs/gulp/blob/master/docs/CLI.md
[UPDATE]
As of July 2014, a --silent option has been added to gulp (https://github.com/gulpjs/gulp/commit/3a62b2b3dbefdf91c06e523245ea3c8f8342fa2c#diff-6515adedce347f8386e21d15eb775605).
This is demonstrated in slamborne's answer (the one recommending the --silent flag), and you should favor it over the solution below if it matches your use case.
[/UPDATE]
Here is a way of doing it (inside your gulpfile):
var cl = console.log;
console.log = function () {
  var args = Array.prototype.slice.call(arguments);
  if (args.length) {
    if (/^\[.*gulp.*\]$/.test(args[0])) {
      return;
    }
  }
  return cl.apply(console, args);
};
... and that will ignore EVERY message sent using gutil.log.
The trick here, obviously, is to inspect messages sent to console.log for a first argument that looks like "[gulp]" (see the gulp.util.log source code) and, if it matches, ignore the message entirely.
Now, this really is dirty; you really shouldn't do it without parental supervision, and you have been warned :-)
A bit late, but I think it would be better to use noop from gulp-util, no?
var gutil = require('gulp-util');
// ...
gutil.log = gutil.noop;
// or
gutil.log = function() { return this; };
As addressed here
