Out of memory in Node.js can't be solved - node.js

I'm trying to run an executable in a Node.js application. When I call the executable with its parameters from the command line, everything is OK.
However, when I call it from the application like this:
const exec = require('child_process').execFile;

function f() {
  exec('code.exe', [some_arguments], function (err, data) {
    console.log(err);
  });
}
I get an OutOfMemoryException, which I couldn't solve even by launching the app this way:
node --max-old-space-size=8192 .\app.js
I know for sure the process cannot take 8 GB of memory; it takes at most a hundred KB.
Thanks.
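One note on the question: --max-old-space-size only raises Node's own heap limit and does not affect the child executable, so an OutOfMemoryException coming from code.exe itself won't be fixed by it. For reference, a minimal sketch of passing execFile options explicitly (the maxBuffer and cwd values are assumptions, not from the question):

const { execFile } = require('child_process');

function f() {
  // assumption: code.exe sits next to app.js and produces modest output
  execFile('code.exe', [/* some_arguments */], {
    cwd: __dirname,               // working directory for the child
    maxBuffer: 10 * 1024 * 1024   // cap the buffered stdout/stderr at 10 MB
  }, function (err, stdout, stderr) {
    if (err) console.error(err);
  });
}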

Related

Node.JS: Make module runnable through require or from command line

I have a script, setupDB.js, that runs asynchronously and is intended to be called from the command line. Recently, I added test cases to my project, some of which require a database to be set up (and thus the execution of the aforementioned script).
Now, I would like to know when the script has finished doing its thing. At the moment I'm simply waiting for a few seconds after requiring setupDB.js before I start my tests, which is obviously a bad idea.
The problem with simply exporting a function with a callback parameter is that the script must be runnable without any overhead (no command-line arguments, no additional function calls, etc.), since it is part of a bigger build process.
Do you have any suggestions for a better approach?
I was also looking for this recently, and came across a somewhat related question, "Node.JS: Detect if called through require or directly by command line", which has an answer that helped me build something like the following just a few minutes ago, where the export is only run if it's used as a module, and the CLI library is only required if it is run as a script.
function doSomething (opts) {
  // ...
}

/*
 * Based on
 * https://stackoverflow.com/a/46962952/7665043
 */
function isScript () {
  return require.main && require.main.filename === /\((.*):\d+:\d+\)$/.exec((new Error()).stack.split('\n')[2])[1]
}

if (isScript()) {
  const cli = require('some CLI library')
  const opts = cli.parseCLISomehow()
  doSomething(opts)
} else {
  module.exports = {
    doSomething
  }
}
There may be some reason that this is not a good idea, but I am not an expert.
I have now handled it this way: I export a function that does the setup. At the beginning, I check whether the script has been called from the command line, and if so, I simply call the function. At the same time, I can also call it directly from another module and pass a callback.
if (require.main === module) {
  // Called from the command line
  runSetup(function (err, res) {
    // do callback handling
  });
}

function runSetup (callback) {
  // do the setup
}

exports.runSetup = runSetup;
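In a test suite, the exported function can then be awaited before the tests start. A minimal sketch, assuming Mocha-style hooks and Node's util.promisify (neither is part of the original answer):

// in a test file
const { promisify } = require('util');
const { runSetup } = require('./setupDB');

before(async function () {
  this.timeout(30000);           // DB setup can be slow
  await promisify(runSetup)();   // resolves once the setup callback fires
});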
The make-runnable npm module can also help with this.
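For completeness, a sketch of the make-runnable approach as I remember its README (unverified; I am assuming a promise-returning export is handled):

// setupDB.js
function runSetup() {
  return new Promise(function (resolve) {
    // do the setup, then resolve
    resolve('setup done');
  });
}
exports.runSetup = runSetup;

// must stay the last line of the file
require('make-runnable');
// shell usage: node setupDB.js runSetup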

How to run remote external commands on meteor server from MeteorJS Application?

I created some CasperJS scripts that log into Duolingo, click on a module, and open it as if I were playing there.
I created a simple MeteorJS application, and I want a button click to execute that CasperJS script. I am looking for someone with that experience to help me or orient me in the right direction, because I don't have much of an idea of what I can use to achieve this little personal game.
I have read about RPC (Remote Procedure Call) in MeteorJS, and I have read that with PHP and NodeJS you can run a function that executes the script as if I were typing the commands to run it.
I have found these resources:
ShellJS: https://github.com/shelljs/shelljs
and NodeJS child process: https://nodejs.org/api/child_process.html#child_process_child_process_exec_command_options_callback.
but I don't have much experience; I am doing this to learn more about CasperJS and MeteorJS.
What I need is to be able to run this command -> "casperjs duolingo.js --engine=slimerjs --disk-cache=no" from my MeteorJS app, so I can continue creating my little automation bot to play Duolingo in its entirety.
Thank you very much for your help.
It is "simple" if you know what to do :-)
Just so you know what will happen:
1.) You create a method on the server side which can run external processes
2.) You create a Meteor remote method which can be called by the client
3.) You create the action on the client and call the remote Meteor method
4.) You bind the click event to call the action on the client
Method to call external processes
process_exec_sync = function (command) {
  // Load Future from fibers
  var Future = Npm.require("fibers/future");
  // Load child_process
  var child = Npm.require("child_process");
  // Create a new future
  var future = new Future();
  // Run the command; we block on the future below
  child.exec(command, function (error, stdout, stderr) {
    // return an object to identify error and success
    var result = {};
    // test for error
    if (error) {
      result.error = error;
    }
    // return stdout
    result.stdout = stdout;
    future.return(result);
  });
  // wait for the future
  return future.wait();
};
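On newer Meteor versions, the same synchronous wrapper can be written with Meteor.wrapAsync instead of touching fibers/future directly. A sketch of that alternative (my own, not part of the original answer):

var child = Npm.require("child_process");

// wrapAsync turns a callback-style function into one that blocks the
// current fiber and throws on error
var execSync = Meteor.wrapAsync(function (command, callback) {
  child.exec(command, function (error, stdout, stderr) {
    callback(error, { stdout: stdout, stderr: stderr });
  });
});
// usage inside a method: var result = execSync('casperjs duolingo.js ...');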
Meteor remote server method
// define server methods so that the clients will have access to server components
Meteor.methods({
  runCasperJS: function () {
    // This method call won't return immediately, it will wait for the
    // asynchronous code to finish, so we call unblock to allow this client
    // to queue other method calls (see Meteor docs)
    this.unblock();
    // run the synchronous system command
    var result = process_exec_sync('casperjs duolingo.js --engine=slimerjs --disk-cache=no');
    // check for error
    if (result.error) {
      throw new Meteor.Error("exec-fail", "Error running CasperJS: " + result.error.message);
    }
    // success
    return true;
  }
});
Client event and remote method call
Template.mytemplate.events({
  'click #run-casper': function (e) {
    // try to run the remote system call
    Meteor.call('runCasperJS', function (err, res) {
      // check the result
      if (err) {
        // Do some error notification
      } else {
        // Do some success action
      }
    });
  }
});
Summary
You need to place the server-side methods into files in the directory "yourproject/server" (e.g. main.js), and the client part into the template with the button you wish to press (rename mytemplate to the one you defined).
Hope this gets you what you need.
Cheers
Tom

NodeJS sometimes gets killed because it runs out of memory while streaming/piping files

Problem
I have a NodeJS server that uses the request module.
I use request's pipe() for serving files.
Sometimes the app throws an exception, all downloads are cancelled, and I have to restart the app:
Out of memory: Kill process 9342 (nodejs) score 793 or sacrifice child
Killed process 9342 (nodejs) total-vm:1333552kB, anon-rss:410648kB, file-rss:0kB
I wrote another script which restarts the server automatically (with child_process & fork) when it ends unexpectedly; that script sometimes throws this error:
FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory
Server data
RAM: 500MB (I know that this is not much, but it's cheap)
Ubuntu 12.04.5 LTS
NodeJS version: v0.10.36
Assumptions
Too many downloads in parallel
Something wrong with pipe() related to RAM
Regarding 1:
When somebody downloads a big file, a bit of it is loaded into RAM (I know nothing about the internals; say 20 MB at once, please correct me if I'm way off). If 400 MB is available and there are 20 concurrent downloads at the same speed, the server crashes because it can't load more than 400 MB at once into RAM.
Regarding 2:
In addition to pipe(), I use the following code to track current & cancelled downloads:
req.on("close", function () {
  currentDownloads--;
});
The pipe() doesn't close properly and the RAM it used doesn't get cleared.
Questions
If any of my assumptions is right, how could I fix it?
If not, what could it be, or rather, where could the cause lie (is it NodeJS, the request module, is my code wrong or bad, are there better methods)?
Full Code
var currentDownloads = 0;

app.post("/", function (req, res) {
  var open = false;
  req.on("close", function () {
    if (open) {
      currentDownloads--;
      open = false;
    }
  });
  request.get(url)
    .on("error", function (err) {
      log("err " + err);
      if (open) {
        currentDownloads--;
        open = false;
      }
    })
    .on("response", function () {
      open = true;
      currentDownloads++;
    })
    .pipe(res);
});
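One thing worth checking in code like this: if the client disconnects, the upstream request keeps running and buffering. A sketch of aborting it on close (abort() is part of the request module's API; the rest is my assumption, not the original code):

app.post("/", function (req, res) {
  var upstream = request.get(url);
  upstream.pipe(res);
  req.on("close", function () {
    // stop the upstream transfer so its buffers can be freed
    upstream.abort();
  });
});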

CasperJS, parallel browsing WITH the testing framework

Question: I would like to know if it's possible to do parallel browsing with the testing framework in one script file, i.e. with the tester module and the casperjs test command.
I've seen some people create two casper instances:
CasperJS simultaneous requests and https://groups.google.com/forum/#!topic/casperjs/Scx4Cjqp7hE , but as said in the docs, we can't create a new casper instance in a test script.
So I tried doing something similar (a simple example) with a casper testing script (just copy and execute it, it will work):
var url1 = "http://casperjs.readthedocs.org/en/latest/testing.html",
    url2 = "http://casperjs.readthedocs.org/en/latest/testing.html";

var casperActions = {
  process1: function () {
    casper.test.begin('\n********* First process with our test suite: ***********\n', function suite(test) {
      "use strict";
      casper.start()
        .thenOpen(url1, function () {
          this.echo("1", "INFO");
        });
      casper.wait(10000, function () {
        casper.test.comment("If parallel, it won't be printed before the comment of the second process!");
      })
      .run(function () {
        this.test.comment('----------------------- First process over ------------------------\n');
        test.done();
      });
    });
  },
  process2: function () {
    casper.test.begin('\n********* Second process with our test suite: ***********\n', function suite(test) {
      "use strict";
      casper.start()
        .thenOpen(url1, function () {
          this.echo("2", "INFO");
        });
      casper.test.comment("Hi, if parallel, I'm first!");
      casper.run(function () {
        this.test.comment('----------------------- Second process over ------------------------\n');
        test.done();
      });
    });
  }
};

['process1', 'process2'].forEach(function (href) {
  casperActions[href]();
});
But it's not parallel; they are executed one by one.
Currently, I do some parallel browsing, but with node, so not in the file itself, using child processes. So if you split my previous code into two files (proc1.js and proc2.js, just the two scenarios, i.e. the casper.test.begin{...} blocks) and launch the code below via node, something like this will work (on Linux; I still have to find the equivalent syntax for Windows):
var exec = require("child_process").exec;

exec('casperjs test proc1.js', function (err, stdout, stderr) {
  console.log('stdout: ' + stdout);
  console.log('endprocess1');
});
exec('casperjs test proc2.js', function (err, stdout, stderr) {
  console.log('stdout: ' + stdout);
  console.log('endprocess2');
});
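A sketch of generalizing this to a whole folder of test files with a concurrency cap (the proc*.js naming and the limit of 4 are assumptions of mine):

var exec = require("child_process").exec;
var fs = require("fs");

// every proc*.js file in the current folder
var queue = fs.readdirSync(".").filter(function (f) {
  return /^proc.*\.js$/.test(f);
});
var running = 0;
var LIMIT = 4; // at most 4 casperjs processes at a time

function next() {
  while (running < LIMIT && queue.length > 0) {
    var file = queue.shift();
    running++;
    exec('casperjs test ' + file, function (err, stdout, stderr) {
      console.log(stdout);
      running--;
      next(); // start another file as soon as one finishes
    });
  }
}
next();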
My problem is that the redirections and opening new URLs take quite long, so I want some of them to be executed in parallel. I could create XXX files and launch them in parallel with node, but I don't want XXX files with 5 lines of code each, so if someone has succeeded (if it's possible) in opening URLs in parallel in the same testing file without node (so without multiple processes), please teach me!
And I would like to know the difference between chaining instructions and re-using the casper object each time;
so between this:
casper.test.begin('\n********* First process with our test suite: ***********\n', function suite(test) {
  "use strict";
  casper.start()
    .thenOpen(url1, function () {
      this.echo("1", "INFO");
    })
    .wait(10000, function () {
      casper.test.comment("If parallel, it won't be printed before the comment of the second process!");
    })
    .run(function () {
      this.test.comment('----------------------- First process over ------------------------\n');
      test.done();
    });
});
And this:
casper.test.begin('\n********* First process with our test suite: ***********\n', function suite(test) {
  "use strict";
  casper.start();
  casper.thenOpen(url1, function () {
    this.echo("1", "INFO");
  });
  casper.wait(10000, function () {
    casper.test.comment("If parallel, it won't be printed before the comment of the second process!");
  });
  casper.run(function () {
    this.test.comment('----------------------- First process over ------------------------\n');
    test.done();
  });
});
If I chain my instructions, will one failing step (a rejected promise) block the whole chain instead of executing every casper step?
So it would be better to chain instructions with dependent steps [like thenClick(selector)] and use the casper object with independent steps (like opening a new URL), wouldn't it?
Edit: I tried, and if a step fails, chained or not, it stops all the following steps, so I don't see the difference between using chained steps or not...
Well, chaining or using the casper object each time is just a matter of taste; it does the same thing, and we can't launch several instances of casper in a testing script. If you have a loop which opens some links, you'll have to wait for each page to load sequentially.
To launch parallel browsing with the testing framework, you have to use multiple processes, so using node does the trick.
After digging, I finally split the files with too many redirections so that none is longer than my main scenario, which can't be split. A folder with 15 files is executed in parallel in 2-4 minutes on my local machine.
There's no official support for parallel browsing in CasperJS right now. There are multiple workarounds; I've set up several different environments and am about to test which one is best.
I've seen one person working with multiple casper instances this way:
var google = require('casper').create();
var yahoo = require('casper').create();

google.start('http://google.com/');
yahoo.start('http://yahoo.com/', function () {
  this.echo(google.getTitle());
});

google.run(function () {});
yahoo.run(function () {});

setTimeout(function () {
  yahoo.exit();
}, 5000);
Currently I am running multiple caspers in node using 'child_process'. It is very heavy on both CPU and memory.

NODE fs.readFile, JSON.parse and fs.writeFile

I'm writing an app in Node and have been running into a rare but detrimental occurrence.
So I have a schedule.txt that I write to when the user makes a change, but I also read it every second and parse it for use throughout the program.
Rarely, what happens is that as a user is writing to the file (asynchronously), the app (based on the timer) reads the same file, attempts to parse it, and fails.
I know that from a design standpoint maybe this is just bound to happen... but I'm wondering if there is a quick fix I can do for now. Would using writeFileSync help my situation (make it more 'atomic')? I just want to make sure that the app doesn't read the file while another process is still writing to it.
TIA!
Niko
Seems like you'd want to serialize your reads and writes. If it were me, I might try having a "manager" object which encapsulates the serialization, which you'd use like this:
var fileManager = require('./file-manager');

// somewhere in the program
fileManager.scheduleWrite(data, function (err) {
  // now the write is done
});

// somewhere else in the program
fileManager.scheduleRead(function (err, data) {
  // `data` contains the data
});
Then implement it using Q or a similar promise library, like this:
// in file-manager.js
var Q = require('q');

var wait = Q();

module.exports = {
  scheduleWrite: function (data, cb) {
    wait = wait.then(function () {
      // write data and call cb()
    });
  },
  scheduleRead: function (cb) {
    wait = wait.then(function () {
      // read data and call cb(data)
    });
  }
};
The wait var will "stack up" into a serialized chain of tasks where the next one won't start until the previous one completes.
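A slightly more concrete sketch of the same idea, with the file name and JSON handling carried over from the question as assumptions (Q.nfcall adapts Node-style functions to promises):

// file-manager.js
var fs = require('fs');
var Q = require('q');

var FILE = 'schedule.txt'; // assumption: the file named in the question
var wait = Q();

function enqueue(task) {
  // append the task, then swallow its error so one failure
  // doesn't stall the whole queue
  var result = wait.then(task);
  wait = result.catch(function () {});
  return result;
}

module.exports = {
  scheduleWrite: function (data, cb) {
    enqueue(function () {
      return Q.nfcall(fs.writeFile, FILE, JSON.stringify(data));
    }).then(function () { cb(null); }, cb);
  },
  scheduleRead: function (cb) {
    enqueue(function () {
      return Q.nfcall(fs.readFile, FILE, 'utf8');
    }).then(function (txt) { cb(null, JSON.parse(txt)); }, cb);
  }
};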
