flushing output from child process or reopening after .end - node.js

I am using child_process.spawn to create a process and piping input to its stdin.
However, I don't get any .on('data', ...) event until I do a
child.stdin.end()
But this closes the pipe for further input, and any subsequent write fails with:
unhandledRejection Error: write after end
So is there a way to either
force data to get flushed to the child's stdout, or
reopen the pipe to the child's stdin after stdin.end() has been called?
Further notes and code are in this GitHub issue:
https://github.com/callemall/fasttext-js/issues/4

This was my problem: I needed to terminate each input with a \n.
Doh!
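For reference, a minimal sketch of the working pattern (the command name here is a placeholder, not the actual binary from the issue): keep stdin open, write newline-terminated lines, and listen for data on stdout.

const { spawn } = require('child_process');

// Hypothetical command; substitute whatever executable you are driving.
const child = spawn('my-predictor', ['--stdin']);

child.stdout.on('data', (chunk) => {
  console.log('got:', chunk.toString());
});

// Each input must end with "\n" so the child sees a complete line;
// do NOT call child.stdin.end() between writes.
child.stdin.write('first query\n');
child.stdin.write('second query\n');

// Only end stdin when you are completely done sending input.
// child.stdin.end();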

Related

Detecting when a child process is waiting for stdin

I am making a terminal program that is able to run any executable (please ignore the safety concerns). I need to detect when the child process is waiting for user input (from stdin). I start the child process using:
process = subprocess.Popen(command, close_fds=False, shell=True, **file_descriptors)
I can think of 2 ways of detecting if the child process is waiting for stdin:
Writing a character then a backspace and checking if the child has processed those 2 bytes. But here it says that "CMD does support the backspace key". So I need to find a character that, when printed to the screen, will delete whatever is in the stdin buffer in the command prompt.
The second method is to use the pywin32 library and the WaitForInputIdle function as described here. I looked at the source code for the subprocess library and found that it uses pywin32 and keeps a reference to the process handle. So I tried this:
win32event.WaitForInputIdle(proc._handle, 100)
But I got this error:
(1471, 'WaitForInputIdle', 'Unable to finish the requested operation because the specified process is not a GUI process.')
Also, in the Windows API documentation here it says: "WaitForInputIdle waits only once for a process to become idle; subsequent WaitForInputIdle calls return immediately, whether the process is idle or busy." I think that means I can't use the function for its purpose more than once, which wouldn't solve my problem.
Edit:
This only needs to work on Windows, but later I might try to make my program compatible with Linux as well. Also, I am using pipes for stdin/stdout/stderr.
Why I need to know if the child is waiting for stdin:
Currently, when the user presses the enter key, I send all of the data they have written so far to stdin and prevent the user from changing it. The problem is when the child process is sleeping/calculating and the user writes some input and wants to change it before the process starts reading from stdin again.
Basically, let's take this program:
sleep(10)
input("Enter value:")
and let's say that I enter "abc\n". When using cmd, it will allow me to press backspace and delete the input if the child is still sleeping. Currently my program marks all of the text as read-only when it detects the "\n" and sends it to stdin.
from threading import Thread, Lock
from time import sleep

class STDINHandle:
    def __init__(self, read_handle, write_handle):
        self.handled_write = False
        self.working = Lock()
        self.write_handle = write_handle
        self.read_handle = read_handle

    def check_child_reading(self):
        with self.working:
            # Reset the flag
            self.handled_write = True
            # Write a character that cmd will ignore
            self.write_handle.write("\r")
            thread = Thread(target=self.try_read)
            thread.start()
            sleep(0.1)
            # We need to stop the other thread by giving it data to read
            if self.handled_write:
                # Writing only 1 "\r" fails for some reason.
                # For good measure we write 10 "\r"s
                self.write_handle.write("\r" * 10)
                return True
            return False

    def try_read(self):
        # If this read completes, the child has consumed the probe character
        data = self.read_handle.read(1)
        self.handled_write = False

    def write(self, text):
        self.write_handle.write(text)
I did a bit of testing and I think cmd ignores "\r" characters. I couldn't find a case where cmd interprets it as an actual character (unlike what happened when I tried "\b"). The idea is to send a "\r" character and test whether it stays in the pipe: if it stays in the pipe, the child hasn't processed it; if we can't read it back, the child has processed it. But there is a problem: we need to stop the read if the child isn't consuming stdin, otherwise the pending read will interfere with the next write to stdin. To do that we write more "\r"s to the pipe.
Note: I might have to change the timing on the sleep(0.1) line.
I am not sure this is a good solution, but you can give it a try if you are interested. I just assumed that we execute the child process for its output, given two inputs: data and TIMEOUT.
process = subprocess.Popen(command, close_fds=False, shell=True, **file_descriptors)
try:
    output, _ = process.communicate(data, TIMEOUT)
except subprocess.TimeoutExpired:
    print("Timeout expired while waiting for the child process.")
    # Do whatever you want here
    return None
cmd_output = output.decode()
You can find more examples for TimeoutExpired here.

Node.js get fd file descriptor

Is fs.open/close the only way to get the file descriptor, or is there a faster/more efficient way?
The reason I close the file is because I assume I should; otherwise I'd end up leaving tons of files open.
fs.open('/my/file.txt', 'r', function (e, fd) {
  console.log(fd); // 12
  fs.close(fd, function () {
    fs.fsync(fd, function () {
      // more code ...
    });
  });
});
It seems a bit silly to open and close the file just to get an fd, but fs.fsync requires an fd (a number) instead of a string.
I can call close and not wait for the callback.
This way it is asynchronous (non-blocking):
fs.open('/my/file.txt', 'r', function (e, fd) {
  console.log(fd); // 12
  fs.close(fd, function () {});
  fs.fsync(fd, function () {
    // more code ...
  });
});
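Note that calling fs.fsync on a descriptor that has already been closed will fail with EBADF. A minimal sketch of the usual ordering (the path is a placeholder, and 'r+' is used so the descriptor is writable): open, fsync, then close.

const fs = require('fs');

fs.open('/my/file.txt', 'r+', function (err, fd) {
  if (err) throw err;
  // Flush any pending writes for this descriptor to disk first...
  fs.fsync(fd, function (err) {
    if (err) throw err;
    // ...and only then release the descriptor.
    fs.close(fd, function (err) {
      if (err) throw err;
    });
  });
});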

tail -f implementation in node.js

I have created an implementation of tail -f in node.js using socket.io and the fs.watch function.
I read the file using fs.readFile, convert it into an array of lines and return it to the client, storing the current length in a variable.
Then whenever the "file changed" event fires, I re-read the whole file, convert it into an array of lines, compare the old length with the current length, and slice it like
fileContent.slice(oldLength, fileContent.length)
This gives me the changed content, so it runs perfectly fine.
Problem: I am reading the whole file every time it changes, which is not efficient if the file is large. So is there any way of opening a file once and then reading only the changed content whenever there is a change?
I have also tried spawning a child process for "tail -f":
var spawn = require('child_process').spawn;
var child = spawn('tail', ['-f', logfile]);

child.stdout.on('data', function (data) {
  var linesArray = data.toString().split("\n");
  console.log("Data sent" + linesArray[0]);
  io.emit('changed', {
    data: linesArray,
  });
});
The problems with this are:
The on("data") event fires multiple times when I save the logfile by writing some content.
On first load, it correctly returns the last ten lines of the file. But if there is a change, it returns the whole content again and again.
So if you have any idea how to solve this problem, let me know. Till then I will dig through the internet.
So, I got the solution by reading someone else's code. The solution was to use fs.open, which will open the file, and then instead of reading the whole file we can read a particular block from the file using the fs.read() function.
To learn about fs.open/fs.read, read this: nodejs-file-system.
Official docs: fs.read
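A minimal sketch of that approach (the log file path is a placeholder, and the socket.io emit is left as a comment): keep a byte offset, and on each change read only the bytes appended since the last read.

const fs = require('fs');

const logfile = '/var/log/my-app.log'; // placeholder path
let position = fs.statSync(logfile).size; // start at the current end of the file

fs.watch(logfile, (eventType) => {
  if (eventType !== 'change') return;

  fs.stat(logfile, (err, stats) => {
    if (err) return console.error(err);
    if (stats.size <= position) return; // nothing new (or the file was truncated)

    const length = stats.size - position;
    const buffer = Buffer.alloc(length);

    fs.open(logfile, 'r', (err, fd) => {
      if (err) return console.error(err);
      // Read only the newly appended bytes, starting at the saved offset.
      fs.read(fd, buffer, 0, length, position, (err, bytesRead) => {
        if (!err) {
          position += bytesRead;
          const linesArray = buffer.toString('utf8', 0, bytesRead).split('\n');
          console.log('changed:', linesArray);
          // io.emit('changed', { data: linesArray }); // if using socket.io
        }
        fs.close(fd, () => {});
      });
    });
  });
});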

Node.JS write to stdout without a newline

I have the feeling this is probably not possible:
I am trying to print text to the terminal without a new line.
I have tried process.stdout.write and the npm package jetty, but they all seem to automatically append a new line at the end.
Is it possible to write to stdout without having an automatic newline?
Just to be clear: I am not concerned about browsers, I am only interested in UNIX/Linux writing what in C/C++ would be the equivalent of:
std::cout << "blah";
printf("blah");
process.stdout.write() does not automatically add a new line. If you post precise details about why you think it does, we can probably tell you how you are getting confused. While console.log() does add a newline, process.stdout.write() has no frills and will not write anything you don't explicitly pass to it.
Here's a shell session providing supporting evidence:
echo 'process.stdout.write("123")' > program.js
node program.js | wc -c
3
According to this link about process.stdout.write(), a console.log equivalent could look like this:
console.log = function (msg) {
  process.stdout.write(`${msg}\n`);
};
So process.stdout.write should meet your request...
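As a quick illustration (a minimal sketch, assuming nothing beyond core Node.js), writing without a newline lets you build up a line or overwrite it in place with a carriage return:

// Prints "blah" with no trailing newline, like printf("blah") in C.
process.stdout.write('blah');

// A simple progress counter that rewrites the same line using "\r".
let count = 0;
const timer = setInterval(() => {
  process.stdout.write(`\rprocessed ${++count} items`);
  if (count === 5) {
    clearInterval(timer);
    process.stdout.write('\n'); // finish the line when done
  }
}, 200);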

fork() and STDOUT/STDERR to the console from child processes

I'm writing a program that forks multiple child processes and I'd like all of these child processes to be able to write lines to STDERR and STDOUT without the output getting garbled. I'm not doing anything fancy, just emitting lines that end with a newline (which, at least in my understanding, would be an atomic operation on Linux). From perlfaq:
Both the main process and the backgrounded one (the "child" process) share the same STDIN, STDOUT and STDERR filehandles. If both try to access them at once, strange things can happen. You may want to close or reopen these for the child. You can get around this with opening a pipe (see open) but on some systems this means that the child process cannot outlive the parent.
It says I should "close or reopen" these filehandles for the child. Closing is simple, but what does it mean by "reopen"? I've tried something like this from within my child processes and it doesn't work (the output still gets garbled):
open(SAVED_STDERR, '>&', \*STDERR) or die "Could not create copy of STDERR: $!";
close(STDERR);
# re-open STDERR
open(STDERR, '>&SAVED_STDERR') or die "Could not re-open STDERR: $!";
So, what am I doing wrong with this? What would the pipe example it alludes to look like? Is there a better way to coordinate output from multiple processes together to the console?
Writes to a filehandle are NOT atomic for STDOUT and STDIN. There are special cases for things like fifos but that's not your current situation.
When it says re-open STDOUT, what that means is "create a new STDOUT instance". This new instance isn't the same as the one from the parent. It's how you can have multiple terminals open on your system and not have all the STDOUT go to the same place.
The pipe solution would connect the child to the parent via a pipe (like | in the shell) and you'd need to have the parent read out of the pipe and multiplex the output itself. The parent would be responsible for reading from the pipe and ensuring that it doesn't interleave output from the pipe and output destined to the parent's STDOUT at the same time. There's an example and writeup here of pipes.
A snippet:
use IO::Handle;

pipe(PARENTREAD, PARENTWRITE);
pipe(CHILDREAD, CHILDWRITE);

PARENTWRITE->autoflush(1);
CHILDWRITE->autoflush(1);

if ($child = fork) { # Parent code
    chomp($result = <PARENTREAD>);
    print "Got a value of $result from child\n";
    waitpid($child, 0);
} else {             # Child code
    print PARENTWRITE "FROM CHILD\n";
    exit;
}
See how the child doesn't write to stdout but rather uses the pipe to send a message to the parent, who does the writing with its stdout. Be sure to take a look as I omitted things like closing unneeded file handles.
While this doesn't help with your garbling problem, it took me a long time to find a way to launch a child process that can be written to by the parent process while having the child's stderr and stdout sent directly to the screen (this avoids the nasty blocking issues you can hit when trying to read from two different FDs without something fancy like select).
Once I figured it out, the solution was trivial:
use IPC::Open3;

my $pid = open3(*CHLD_IN, ">&STDERR", ">&STDOUT", 'some child program');

# write to child
print CHLD_IN "some message";
close(CHLD_IN);

waitpid($pid, 0);
Everything from "some child program" will be emitted to stdout/stderr, and you can simply pump data by writing to CHLD_IN and trust that it'll block if the child's buffer fills. To callers of the parent program, it all just looks like stderr/stdout.
