I am running a process using the subprocess module. After starting the process, I read its output line by line in a while loop guarded by poll(). stdout becomes empty after some time, yet the while loop keeps running because poll() is still None. When the process exits, all of the remaining output is read at once.
I tried changing bufsize. When I used strace to trace the process, I found that once the main process starts the child process, the child's stdout stops producing output at the same point every time.
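A common cause of this symptom is that the child switches to block buffering when its stdout is a pipe rather than a terminal; the parent's bufsize only affects the parent's side and cannot fix that. A minimal sketch of line-by-line reading (the inline `print` child is a stand-in for the real command; running it with `-u` forces unbuffered output, the same idea as `stdbuf -oL` or a pseudo-terminal for non-Python children):

```python
import subprocess
import sys

# A stand-in child that prints two lines; substitute the real command here.
# "-u" makes the child's stdout unbuffered even though it writes to a pipe.
proc = subprocess.Popen(
    [sys.executable, "-u", "-c", "print('line 1'); print('line 2')"],
    stdout=subprocess.PIPE,
    bufsize=1,      # line-buffered on the parent's side of the pipe
    text=True,
)

# Iterating over proc.stdout blocks until a full line arrives (or EOF),
# so no busy loop around poll() is needed.
lines = []
for line in proc.stdout:
    lines.append(line.rstrip())
    print("got:", lines[-1])

proc.wait()
```

If lines still arrive only when the child exits, the buffering is happening inside the child, and it has to be addressed there.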
In Node.js's child process documentation, the following is mentioned:
By default, pipes for stdin, stdout, and stderr are established between the parent Node.js process and the spawned child. These pipes have limited (and platform-specific) capacity. If the child process writes to stdout in excess of that limit without the output being captured, the child process will block waiting for the pipe buffer to accept more data.
I've been trying to spawn (child_process.spawn) a console application from Node. The application is interactive and outputs a lot of data on initiation. Apparently, when you pipe, if the process outputting data gets ahead of the process consuming it, the OS pauses the writer. For example, when you run ps ax | cat, if the ps process gets ahead of the cat process, ps will pause until cat can accept input again. But it should eventually pipe all the data.
In my case, this console application (written in C) seems to pause completely when this happens. I could be wrong, but this seems to be the case because the piped output stops midway no matter what I do. This happens even when I'm piping from bash, so there's no problem with Node.js here. Maybe this application is not very compatible with how piping works on Linux.
There's nothing I can do regarding this console application but is there some other way in node to get these data chunks without piping?
Edited:
The main code
const js = spawn('julius', ['-C', 'julius.jconf', '-dnnconf', 'dnn.jconf']);
I tried
js.stdout.on("data", (data) => { console.log(`stdout: ${data}`); });
and
js.stdout.pipe(process.stdout);
to capture the output. Same result, the output is being cut off.
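The pipe-capacity limit quoted from the Node docs above is an OS property, not a Node one. A small Python sketch (used here only because it is compact) of the same situation: a child writes about 1 MiB, far more than a typical 64 KiB pipe buffer holds, and only runs to completion because the parent keeps draining the pipe:

```python
import subprocess
import sys

# The child writes 16384 * 64 = 1048576 bytes to stdout -- far more than
# a pipe buffer holds. If the parent never read, the child would block
# mid-write; reading continuously lets it finish.
child = subprocess.Popen(
    [sys.executable, "-c",
     "import sys\n"
     "for i in range(16384):\n"
     "    sys.stdout.write('x' * 64)\n"],
    stdout=subprocess.PIPE,
)

total = 0
while True:
    chunk = child.stdout.read(65536)
    if not chunk:
        break
    total += len(chunk)

child.wait()
print(total)  # 1048576
```

If the output stops midway even under `bash` with a reader attached, the write side is stalling for some reason internal to the application, which no amount of parent-side plumbing can fix.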
I have a Python script which needs to capture the stdout of, and write to the stdin of, an already running process, meaning that by the time my script starts, the target process has been running for a long while.
The process runs as a daemon, so it makes no sense to start it again.
How do I do that?
I was playing on my virtual machine with some exploit-learning tricks when I came across a script that printed two lines and then exited to the prompt, and after about 10 seconds it printed into my prompt like this:
[!] Wait for the "Done" message (even if you'll get the prompt back).
user#ubuntu:~/tests$ [+] Done! Now run ./exp
How is this possible? Is clone involved, or something like that?
The program informs you that you should wait for the "Done" message even if you get the prompt back earlier.
This is because some other process is running, detached, in the background.
The process you started has finished, which is why you are getting the prompt back. But it spawned another (background) process, e.g. via fork() or some other mechanism. By the time you get your prompt back, that other process is still running, and you are told to wait for it to finish.
When it does, it prints "Done" to the standard output (stdout) it inherited from its parent -- which is (by default) the same terminal you used to start the initial process.
Not the smoothest design -- the main process could wait for the spawned process to finish before giving you that prompt back, since it is apparently important that other process finishes before you carry on. Perhaps the author didn't know how to do that. ;-)
The process responsible for printing the messages is running in the background (a background process).
In general, running a process in the background detaches only its stdin; stdout and stderr stay connected to the parent shell's terminal, so all of its output remains visible there.
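The behavior can be reproduced in a few lines of Python on a POSIX system: the parent forks a child that outlives its own work (in the exploit script, the parent exits here and the shell prints its prompt), and the child later prints to the stdout it inherited:

```python
import os
import time

pid = os.fork()
if pid == 0:
    # Child: keeps running after the parent's work is done, still writing
    # to the stdout it inherited from the parent (normally the terminal).
    time.sleep(0.5)
    print("[+] Done! Now run ./exp", flush=True)
    os._exit(0)

# Parent: finishes immediately -- this is the point where the shell would
# print its prompt back while the child is still running.
print('[!] Wait for the "Done" message (even if you get the prompt back).')
```

Run from an interactive shell, the parent's exit brings the prompt back, and "Done" lands on top of it half a second later, exactly as in the transcript above.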
I have to write a console app which starts another process (a GUI). Then, with another app or an option of the same one, I have to be able to stop the child process. In addition, if the child process is closed from the GUI, I need to be informed so I can do the final tasks (and likewise if it is killed).
I suppose it is good to keep first (parent) app running while child (GUI) is working and continue with final tasks. For example in .Net this is made with Process.WaitForExit() after Process.Start().
Read the wait(2) and exit(2) system call man pages. wait(2) suspends the calling process until one of its children has exited, and exit(2) does the reciprocal: it terminates the program and has the kernel notify the parent process, passing it the exit code supplied.
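A minimal illustration of that wait(2)/exit(2) pairing, using Python's os wrappers around the same system calls (POSIX only); this is the same rendezvous that .NET's Process.WaitForExit() performs under the hood:

```python
import os

pid = os.fork()
if pid == 0:
    # Child: exit(2) -- terminate immediately and hand the exit code
    # to the kernel for the parent to collect.
    os._exit(42)

# Parent: wait(2) -- block until a child terminates, then decode its status.
done_pid, status = os.wait()
code = os.WEXITSTATUS(status) if os.WIFEXITED(status) else None
print(done_pid == pid, code)
```

The parent blocks in os.wait() until the child is gone, then recovers the exit code (42 here) from the status word.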
I implemented a simple C shell that takes commands like sleep 3 &. I also implemented it to "listen" for SIGCHLD signals once a job completes.
But how do I get the job id and command printed out, the way the Ubuntu shell does, once the job completes?
I would advise against catching SIGCHLD signals.
A neater way to do that is to call waitpid with the WNOHANG option. If it returns 0, you know that the job with that particular pid is still running; otherwise that process has terminated, and you fetch its exit code from the status parameter and print the message accordingly.
Moreover, bash doesn't print the job completion status at the time the job completes, but rather at the time when the next command is issued, so this is a perfect fit for waitpid.
A small disadvantage of that approach is that the job process will stay as a zombie in the period between its termination and the time you call waitpid, but that probably shouldn't matter for a shell.
You need to remember the child pid (from the fork) and the command executed in your shell (in some sort of table or map structure). Then, when you get a SIGCHLD, you look up the child pid, and that gives you the corresponding command.
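The table-plus-WNOHANG approach can be sketched in Python, whose os.waitpid mirrors the C call; the `jobs` dict and the "[1]+ Done" format are simplified stand-ins for whatever the shell actually stores and prints before each prompt:

```python
import os
import time

# Hypothetical job table: pid -> (job id, command line), filled in at fork time.
jobs = {}

# Launch a background "job" (an immediately exiting child stands in for
# `sleep 3 &`; a real shell would fork and exec the command here).
pid = os.fork()
if pid == 0:
    os._exit(0)
jobs[pid] = (1, "sleep 3 &")

time.sleep(0.2)  # give the child time to finish; it now sits as a zombie

# Before printing the next prompt, reap finished jobs without blocking.
finished = []
for job_pid in list(jobs):
    done_pid, status = os.waitpid(job_pid, os.WNOHANG)
    if done_pid != 0:                      # 0 means "still running"
        job_id, cmd = jobs.pop(done_pid)
        finished.append(f"[{job_id}]+ Done: {cmd}")

for line in finished:
    print(line)
```

Because the sweep happens just before the prompt is redrawn, this matches bash's behavior of reporting completion on the next command, and the zombie window mentioned above lasts only until that sweep runs.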