This question already has answers here:
How would you stream output from a Process?
(2 answers)
Run command, stream stdout/stderr and capture results
(2 answers)
Closed 8 months ago.
When someone just wants to get the output of a simple command like ls, one can write something like this:
use std::process::{Command, Stdio};

let output = Command::new("ls")
    .stdout(Stdio::piped())
    .output()
    .unwrap();
let stdout = String::from_utf8(output.stdout).unwrap();
However, if I want to get the output of a program that runs forever, like http-server, it won't stop until I kill it manually. So if I use the method above, my program will be stuck there forever. What should I do in such a situation? More specifically, how do I get the output of the first five seconds, or the first five lines?
Use .stdout(Stdio::piped()).spawn().unwrap().stdout.unwrap() and read from it. If you want to read the first five lines, use a BufReader and BufRead::lines(). The first five seconds is trickier: you'll either need to do the reading in a separate thread, or read without blocking.
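For the first-five-lines case, here is a minimal runnable sketch of that approach; it uses `yes` as a stand-in for a never-terminating program like http-server:

```rust
use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};

/// Spawn `cmd` and return its first `n` lines of output, then kill it.
fn first_lines(cmd: &str, args: &[&str], n: usize) -> Vec<String> {
    let mut child = Command::new(cmd)
        .args(args)
        .stdout(Stdio::piped())
        .spawn()
        .expect("failed to spawn");

    let reader = BufReader::new(child.stdout.take().expect("no stdout handle"));
    let lines: Vec<String> = reader
        .lines()
        .take(n)
        .map(|l| l.expect("read error"))
        .collect();

    // The child is (probably) still running; kill it so it doesn't linger.
    child.kill().ok();
    child.wait().ok();
    lines
}

fn main() {
    // `yes hello` prints "hello" forever; we only take five lines.
    for line in first_lines("yes", &["hello"], 5) {
        println!("{}", line);
    }
}
```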
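For the first-five-seconds case, here is a sketch of the separate-thread approach mentioned above. The producer command (a shell loop printing one line per second) is only a stand-in for the real program:

```rust
use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

/// Spawn `cmd` and collect whatever lines it prints within `window`.
fn read_for(cmd: &str, args: &[&str], window: Duration) -> Vec<String> {
    let mut child = Command::new(cmd)
        .args(args)
        .stdout(Stdio::piped())
        .spawn()
        .expect("failed to spawn");

    let stdout = child.stdout.take().expect("no stdout handle");
    let (tx, rx) = mpsc::channel();

    // Blocking reads happen on a separate thread; lines come back over
    // a channel so the main thread can enforce the deadline.
    thread::spawn(move || {
        for line in BufReader::new(stdout).lines() {
            if tx.send(line).is_err() {
                break; // receiver dropped: main thread is done
            }
        }
    });

    let deadline = Instant::now() + window;
    let mut lines = Vec::new();
    loop {
        let now = Instant::now();
        if now >= deadline {
            break;
        }
        match rx.recv_timeout(deadline - now) {
            Ok(Ok(line)) => lines.push(line),
            _ => break, // timeout, read error, or reader thread gone
        }
    }

    child.kill().ok();
    child.wait().ok();
    lines
}

fn main() {
    // A producer that prints one line per second, forever.
    let lines = read_for(
        "sh",
        &["-c", "while true; do echo tick; sleep 1; done"],
        Duration::from_secs(5),
    );
    println!("captured {} lines in 5 seconds", lines.len());
}
```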
This question already has answers here:
Programmatically get parent pid of another process?
(7 answers)
Closed 5 months ago.
I know that the PCB is a data structure which includes the parent
process ID (PPID), the process ID (PID), pointers, etc. Is there any way
to find the PID of the parent process without using the getppid function?
The fields of /proc/self/stat (as documented in proc(5)) include the PPID. Take care when parsing: comm may contain spaces and other unusual characters.
(But I second @JohnZwinck's comment. Why?)
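A minimal Rust sketch of that parsing; splitting at the last ')' sidesteps the problem of spaces (or even parentheses) inside comm:

```rust
use std::fs;

/// Read the parent PID of the current process from /proc/self/stat,
/// without calling getppid().
fn ppid_from_proc() -> i32 {
    // Format (see proc(5)): "pid (comm) state ppid ..."
    // comm may itself contain spaces and parentheses, so split at the
    // LAST ')' rather than naively on whitespace.
    let stat = fs::read_to_string("/proc/self/stat").expect("no /proc?");
    let (_, after_comm) = stat.rsplit_once(')').expect("malformed stat");
    let mut fields = after_comm.split_whitespace();
    let _state = fields.next().expect("missing state field");
    fields
        .next()
        .expect("missing ppid field")
        .parse()
        .expect("ppid is not a number")
}

fn main() {
    println!("parent pid: {}", ppid_from_proc());
}
```

(Linux only, of course: it depends on procfs being mounted at /proc.)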
This question already has answers here:
Command line command to auto-kill a command after a certain amount of time
(15 answers)
Closed 3 years ago.
I am trying to run a script that takes input from a text file and, based on the number of entries in it, executes a command that many times.
Below is an overview:
cat /tmp/file.txt | while read name
do
<<execute a command using value of $name>>
done
What is happening is that sometimes the command executed for a particular $name gets hung due to known issues. In such cases I need the command for each value of $name to run for only X seconds; if it cannot complete within that time, the process should be terminated and the loop should move on to the next value.
I was able to make use of sleep and kill, but they terminated the entire loop. I want the remaining values to be processed even if the command hangs on one row/value.
Please advise.
Sounds like you might want timeout (from GNU coreutils):
timeout 4 <command>
If the command has not finished after the given number of seconds, timeout kills it and exits with status 124, and your loop simply continues with the next value.
This question already has answers here:
How to kill a child process after a given timeout in Bash?
(9 answers)
simple timeout on I/O for command for linux
(3 answers)
Closed 5 years ago.
Here's my situation: I've made a script that runs in a while loop, but sometimes (say, after 20-30 iterations) it stops unexpectedly.
I tried to debug it but couldn't.
I noticed that it stops while executing a command, and when it stops it simply does nothing. Is there a way to tell another script when the first script has stalled, i.e. hasn't executed any command in, say, 120 seconds? Maybe by constantly observing the output of the first script: when it produces no output, the second script kills the first one and restarts it. Sorry for my bad English; I hope I was clear.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
I have some code running. Because of its complexity and length, I thought I'd write some code to make my life easier. The code is run with >commandA
output
results
are
popping
...
here
I want to count the number of times banana appears in the output of commandA (which is running) and when the count is 10, I want to stop the processing (using CTRL+Z) and
echo "************we reached 10**********************"
and start again.
I am writing the code in Perl on a Unix system.
EDIT: I cannot use the grep function here, as the command has already run (or will be run, but without a grep function). Before the command runs, I will turn on my program to look for the specific words in the terminal output. It would be very easy to use grep, but I don't know which function in Perl actually takes the output to the terminal as stdin.
You can start the other program by opening a pipe from it to your Perl program, and then read its output line by line until you reach the terminating condition:
open my $pipe, 'commandA |'
or die "Error opening pipe from commandA: $!\n";
my $n = 0;
while (<$pipe>) {
$n++ if /banana/;
last if $n >= 10;
}
close $pipe; # kills the command with SIGPIPE if it's not done yet
print "commandA printed 'banana' ", ($n >= 10 ? "at least 10" : $n), " times.\n";
There are a couple of pitfalls to note here, though. One is that closing the pipe will only kill the other program when it next tries to print something. If the other program might run for a long time without generating any output, you may want to kill it explicitly.
For this, you will need to know its process ID, but, conveniently, that's exactly what open returns when you open a pipe. However, you may want to use the multi-arg version of open, so that the PID returned will be that of the actual commandA process, rather than of a shell used to launch it:
my $pid = open my $pipe, '-|', 'commandA', @args
or die "Error opening pipe from commandA: $!\n";
# ...
kill 'INT', $pid; # make sure the process dies
close $pipe;
Another pitfall is output buffering. Most programs don't actually send their output directly to the output stream, but will buffer it until enough has accumulated or until the buffer is explicitly flushed. The reason you don't usually notice this is that, by default, many programs (including Perl) will flush their output buffer at the end of every output line (i.e. whenever a \n is printed) if they detect that the output stream goes to an interactive terminal (i.e. a tty).
However, when you pipe the output of a program to another program, the I/O libraries used by the first program may notice that the output goes to a pipe rather than to a tty, and may enable more aggressive output buffering. Often this won't be a problem, but in some problematic cases it could add a substantial delay between the time when the other programs prints a string and the time when your program receives it.
Unfortunately, if you can't modify the other program, there's not much you can easily do about this. It is possible to replace the pipe with something called a "pseudo-tty", which looks like an interactive terminal to the other command, but that gets a bit complicated. There's a CPAN module to simplify it a bit, though, called IO::Pty.
(If you can modify the other program, it's a lot easier. For example, if it's another Perl script, you can just add $| = 1; at the beginning of the script to enable output autoflushing.)
Whenever I need to limit shell command output, I use less to paginate the results:
cat file_with_long_content | less
which works fine and dandy. But what I'm curious about is that less still works even if the output is never-ending. Consider having the following script in a file inf.sh:
while true; do date; done
then I run
sh inf.sh | less
And it's still able to paginate the results. So is it correct to say that the pipe streams the result, rather than waiting for the command to finish before outputting anything?
Yes, when you run sh inf.sh | less the two commands are run in parallel. Data written into the pipe by the first process is buffered (by the kernel) until it is read by the second. If the buffer is full (i.e., if the first command writes to the pipe faster than the second can read) then the next write operation will block until further space is available. A similar condition occurs when reading from an empty pipe: if the pipe buffer is empty but the input end is still open, a read will block for more data.
See the pipe(7) manual for details.
It is correct. Pipes are streams.
You can code your own version of the less tool in very few lines of C code. Take the time to do it, including a short research on files and pipes, and you'll emerge with the understanding to answer your own question and more :).
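As a sketch of that exercise (in Rust here rather than C, but the structure is identical): read the stream line by line and pause every screenful. Because it reads its input incrementally, it handles a never-ending pipe exactly the way less does. The /dev/tty handling is deliberately simplified:

```rust
use std::fs::File;
use std::io::{self, BufRead, Read, Write};

/// Copy `input` to `out` line by line, calling `pause` after every
/// `page` lines. Reading is incremental, so an endless stream works.
fn page_stream<R: BufRead, W: Write>(
    input: R,
    out: &mut W,
    page: usize,
    mut pause: impl FnMut(),
) -> io::Result<()> {
    for (i, line) in input.lines().enumerate() {
        writeln!(out, "{}", line?)?;
        if (i + 1) % page == 0 {
            pause();
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    // Keypresses must come from the terminal, not stdin: stdin is the
    // pipe being paginated. If there is no terminal, don't pause.
    let mut tty = File::open("/dev/tty").ok();
    let stdin = io::stdin();
    let mut stdout = io::stdout();
    page_stream(stdin.lock(), &mut stdout, 24, || {
        if let Some(t) = tty.as_mut() {
            eprint!("--More--");
            let mut byte = [0u8; 1];
            let _ = t.read(&mut byte); // wait for Enter (canonical mode)
        }
    })
}
```

Try it with sh inf.sh piped into the compiled binary: the first screenful appears immediately, even though the producer never finishes.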