Here is my Node.js code:
var spawn = require('child_process').spawn;

var rec = spawn('rec.exe');
rec.stdout.setEncoding('ascii');
rec.stdout.on('data', function (data) {
    console.log(data);
    // do something
});
And my C++ program (compiled with Cygwin) writes to stdout using printf().
if (bufsize != 0 && rec[0] == '0')
{
    printf("%s\n", rec);
    //printf("Received,write into exchange file\n");
    //fp=fopen("recvdata.txt","w");
    //fprintf(fp,"%s\n",rec);
    //fclose(fp);
}
But the data event is never emitted.
I'm sure the problem is on the C++ side, because the Node code works fine with other commands such as ping.
Then I noticed that in this case, the event is emitted:
if (fd == -1)
{
    printf("Failed,again\n");
    return 0;
}
So the output only shows up once the process exits, but that's not what I want.
Can someone help me? Thanks.
It sounds like you've got a buffering issue with your C++ program: it's probably not flushing stdout's buffer with each line, so all the output "bunches up" until the process exits. In C/C++ the usual fixes are to call fflush(stdout) after each write, or to change the buffering mode with setvbuf(); a search on "stdout buffering" will bring up the details.
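If changing the C++ side isn't an option, a possible workaround (the same trick used in the udevadm answer further down) is to spawn the program through stdbuf so its stdout becomes line-buffered. This is only a sketch and assumes stdbuf from coreutils is available in the Cygwin environment:
var spawn = require('child_process').spawn;
// stdbuf -oL asks the child to line-buffer its stdout, so each printf'd
// line should reach the pipe immediately instead of waiting for exit.
var rec = spawn('stdbuf', ['-oL', 'rec.exe']);
rec.stdout.setEncoding('ascii');
rec.stdout.on('data', function (data) {
    console.log(data);
});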
Related
Hi Linux kernel/net guru,
I'm looking for a way to hook and print out the netlink (NL) messages exchanged between wpa_supplicant and the kernel. For now I have just inserted several printk calls to print them, but that is quite painful.
Please let me know if you have a better idea.
Thanks.
This is not a great answer given that the OP is using wpa_supplicant specifically, but it might help people drawn here by accident.
If you are using libnl (wpa_supplicant doesn't), all you have to do in userspace, once the socket has been initialized, is:
error = nl_socket_modify_cb(sk, NL_CB_MSG_IN, NL_CB_DEBUG, NULL, NULL);
if (error < 0)
    log_err("Could not register debug cb for incoming packets.");

error = nl_socket_modify_cb(sk, NL_CB_MSG_OUT, NL_CB_DEBUG, NULL, NULL);
if (error < 0)
    log_err("Could not register debug cb for outgoing packets.");
The userspace client will print all messages whenever it sends or receives them.
(Also, you can alternatively call nl_msg_dump(msg, stderr) whenever you want.)
For stuff that doesn't use libnl, you can always copy the relevant functions from libnl and call them. See nl_msg_dump() in libnl's source code (libnl/lib/msg.c).
When I run my Node script in Sublime Text 3 (as a build system ... Ctrl-B), if I add a listener to stdin's data event, the process stays running until killed. This makes sense, since there's potentially still work to do.
process.stdin.on('data', (d) => {
    // ... do some work with `d`
});
However, I expected that if I removed the listener to that data event, my process would naturally exit. But it doesn't!
// This program never exits naturally.
function processData(d) {
    // ... do some work with `d`, then...
    process.stdin.removeListener('data', processData);
}
process.stdin.on('data', processData);
Even if you remove the event handler immediately after adding it, the process still sticks around...
function processData() {}
process.stdin.on('data', processData);
process.stdin.removeListener('data', processData);
In this exact case, I could use the once() function instead of on(), but that doesn't clear this up for me. What am I missing? Why does the stdin stream prevent the process from exiting, given it has no listeners of any kind?
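For reference, the once() variant mentioned above would look like this; it is just a sketch of that alternative, and the listener is removed automatically after the first 'data' event:
process.stdin.once('data', (d) => {
    // ... do some work with `d`; once() removes this listener
    // automatically after the first chunk arrives.
});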
I am checking for USB drive removal on Linux. I am simply monitoring the output of a command-line process with child_process.spawn. But for some reason the child's stdout data event doesn't fire until about 20 lines have been printed, which makes it unable to detect a removed drive. After removing the drive many times, the output does finally come through, but obviously that won't do.
Original:
var udevmonitor = require("child_process").spawn("udevadm", ["monitor", "--udev"]);
udevmonitor.stdout.on("data", function(data) {
    return console.log(data.toString());
});
Pretty simple. So I figure it's an issue with the pipe Node is using internally. Instead of the pipe, I'll just use a simple PassThrough stream; that could solve the problem and give me real-time output. That code is:
var PassThrough = require('stream').PassThrough;
var stdout = new PassThrough();
require("child_process").spawn("udevadm", ["monitor", "--udev"], { stdio: ['pipe', stdout, 'pipe'] });
stdout.on("data", function(data) {
    console.log(data.toString());
});
But that gives me an error:
child_process.js:922 throw new TypeError('Incorrect value for stdio stream: ' + stdio);
The documentation says you can pass a stream in. I don't see what I'm doing wrong and stepping through the child_process source didn't help.
Can someone help? You can run this yourself, provided you're on Linux. Run the code and insert a USB drive. Perhaps you can run the command 'udevadm monitor --udev' in another terminal to see what should happen. Remove and reinsert the drive a few times and eventually Node will print the output.
mscdex, I love you. Changing the spawn command to
spawn("stdbuf", ["-oL", "-eL", "udevadm", "monitor", "--udev"]);
did the trick. I really appreciate your help!
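For completeness, a minimal sketch of the fixed version: stdbuf -oL/-eL puts udevadm's stdout and stderr into line-buffered mode, so each event line is flushed to the pipe as soon as it is printed.
var spawn = require("child_process").spawn;
var udevmonitor = spawn("stdbuf", ["-oL", "-eL", "udevadm", "monitor", "--udev"]);
udevmonitor.stdout.on("data", function(data) {
    // Each udev event line now arrives as soon as udevadm prints it.
    console.log(data.toString());
});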
I've been using REPL recently, but it looks as though there is no way to call the 'exit' listener without typing into the actual shell. Is there a way to call this from within the code?
var shell = require('repl').start();
shell.on('exit', function() {
    console.log("Done");
});
setTimeout(shell.exit, 180000); // This is pretty much what I want
Is this possible? The documentation for REPL is seemingly quite sparse - I'm not entirely sure if there's anything undocumented which might be useful here.
Thanks in advance.
Looks like the event listeners are actually available via shell['_events'].exit(). Seems to exit the shell just fine, and then execute the listener code.
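Putting that together with the code in the question, a sketch of the workaround might look like this; it relies on the undocumented _events property, so it could break between Node versions:
var shell = require('repl').start();
shell.on('exit', function() {
    console.log("Done");
});
// Invoke the stored 'exit' handler directly after 3 minutes, as described above.
setTimeout(function() {
    shell['_events'].exit();
}, 180000);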
Consider:
node -e "setTimeout(function() {console.log('abc'); }, 2000);"
This will actually wait for the timeout to fire before the program exits.
I am basically wondering if this means that node is intended to wait for all timeouts to complete before quitting.
Here is my situation. My client has a Node.js server he's going to run on Windows from a shortcut icon. If the Node app encounters an exceptional condition, it will typically exit instantly, not leaving enough time to see in the console what the error was, and this is bad.
My approach is to wrap the entire program with a try catch, so now it looks like this: try { (function () { ... })(); } catch (e) { console.log("EXCEPTION CAUGHT:", e); }, but of course this will also cause the program to immediately exit.
So at this point I want to leave about 10 seconds for the user to take a peek or screenshot of the exception before it quits.
I figured I should just use a blocking sleep() from an npm module, but I discovered in testing that setting a timeout also seems to work (i.e. why bother with a module if something built-in works?). I guess the significance of this isn't big, but I'm just curious whether it is specified somewhere that Node will actually wait for all timeouts to complete before quitting, so that I can feel safe doing this.
In general, node will wait for all timeouts to fire before quitting normally. Calling process.exit() will exit before the timeouts.
The details are part of libuv, but the documentation makes a vague comment about it:
http://nodejs.org/api/all.html#all_ref
you can call ref() to explicitly request the timer hold the program open
Putting all of the facts together, setTimeout by default is designed to hold the event loop open (so if that's the only thing pending, the program will wait). You can programmatically disable or re-enable the behavior.
Late answer, but a definite yes - Node.js will wait around for setTimeout to finish - see this documentation. Coincidentally, there is also a way to not wait around for a setTimeout, and that is by calling unref on the object returned from setTimeout or setInterval.
To summarize: if you want Node.js to wait until the timeout has fired, there's nothing you need to do. If you want Node.js not to wait for a particular timeout, call unref on it.
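A minimal sketch of the unref()/ref() behaviour both answers describe: an unref'd timer no longer keeps the process alive on its own, and calling ref() restores the default behaviour.
var t = setTimeout(function() {
    // If this timer is the only pending work, this line may never run,
    // because the unref'd timer does not hold the event loop open.
    console.log('timer fired');
}, 2000);
t.unref();  // let the process exit even if the timer is still pending
// t.ref(); // would make the process wait for the timer again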
If Node didn't wait for all setTimeout or setInterval calls to complete, you wouldn't be able to use them in simple scripts.
Once you tell Node to listen for an event, as with setTimeout or some async I/O call, the event loop will keep running until it is told to exit.
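A small sketch of the exception noted above: calling process.exit() ends the program immediately, so the pending timer's callback never runs.
setTimeout(function() {
    console.log('never printed');
}, 2000);
process.exit(0); // exits right away, skipping the pending timeout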
Rather than wrapping everything in a try/catch, you can bind an event listener to process, just as in the example in the docs:
process.on('uncaughtException', function(err) {
    console.log('Caught exception: ' + err);
});

setTimeout(function() {
    console.log('This will still run.');
}, 500);

// Intentionally cause an exception, but don't catch it.
nonexistentFunc();
console.log('This will not run.');
In the uncaughtException event, you can then add a setTimeout to exit after 10 seconds:
process.on('uncaughtException', function(err) {
    console.log('Caught exception: ' + err);
    setTimeout(function() { process.exit(1); }, 10000);
});
If this exception is something you can recover from, you may want to look at domains: http://nodejs.org/api/domain.html
edit:
There may actually be another issue at hand: your client application doesn't do enough (or any?) logging. You can use log4js-node to write to a temp file or some application-specific location.
Easy solution:
Make a batch (.bat) file that starts Node.js.
Make a shortcut out of it.
Why this is best: this way your client runs Node.js inside a command-line window, and even if the Node.js program exits, the command-line window stays open.
Making the bat file:
Make a text file.
Put START cmd.exe /k "node abc.js" in it.
Save it.
Rename it to abc.bat.
Make a shortcut or whatever.
Opening it will open a command-line window and run the Node.js file.
Using setTimeout for this is a bad idea.
The odd ones out are when you call process.exit() or there's an uncaught exception, as pointed out by Jim Schubert. Other than that, node will wait for the timeout to complete.
Node does remember timers, but only if it can keep track of them. At least that is my experience.
If you use setTimeout in an arrow / anonymous function, I would recommend keeping track of your timers in an array, like:
() => {
    timers.push(setTimeout(doThisLater, 2000));
}
and make sure let timers = []; isn't declared in a function that will go out of scope, i.e. declare it globally.
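A fuller sketch of that pattern (doThisLater and scheduleWork are hypothetical names used only for illustration): the timer handles are kept in a module-level array so they stay reachable and can be cleared or inspected later.
// Declared at module level so the array outlives any single function call.
let timers = [];

function doThisLater() {
    console.log('done');
}

function scheduleWork() {
    timers.push(setTimeout(doThisLater, 2000));
}

scheduleWork();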