When reading from std::io::stdin(), input is buffered until EOF is encountered. I'd like to process lines as they arrive, rather than waiting for everything to be buffered.
Given a shell function bar that runs echo bar every second forever, I'm testing this with bar | thingy. It won't print anything until I ^C.
Here's what I'm currently working with:
use std::io;
use std::io::timer;
use std::time::Duration;

fn main() {
    let mut reader = io::stdin();
    let interval = Duration::milliseconds(1000);

    loop {
        match reader.read_line() {
            Ok(l) => print!("{}", l),
            Err(_) => timer::sleep(interval),
        }
    }
}
When reading from std::io::stdin(), input is buffered until EOF is encountered
Why do you say this? Your code appears to work as you want. If I compile it and run it:
$ ./i
hello
hello
goodbye
goodbye
yeah!
yeah!
The first of each pair of lines is me typing into the terminal and hitting enter (which is what read_line looks for). The second is what your program outputs.
Err(_) => timer::sleep(interval)
This is a bad idea - when input is closed (such as by using ^D), your program does not terminate.
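For completeness, a minimal sketch of the same loop in modern Rust (the APIs in the question predate Rust 1.0): lines() yields each line as it arrives and the iterator ends at EOF, so the program terminates cleanly instead of sleeping and retrying:

use std::io::{self, BufRead};

fn main() {
    let stdin = io::stdin();
    for line in stdin.lock().lines() {
        match line {
            Ok(l) => println!("{}", l),
            Err(e) => {
                eprintln!("read error: {}", e);
                break; // stop on a real read error instead of spinning
            }
        }
    }
    // the iterator ends at EOF, so falling out of the loop exits the program
}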
Edit
I created a script bar:
#!/bin/bash
set -eu
i=0
while true; do
    echo $i
    sleep 1
done
And then ran it:
./bar | ./i
0
0
0
Your program continues to work.
Related
use std::process::Command;

fn main() {
    let output = Command::new("/bin/bash")
        .args(&["-c", "docker", "build", "-t", "postgres:latest", "-", "<>", "dockers/PostgreSql"])
        .output()
        .expect("failed to execute process");
    println!("{:?}", output);
}
The above code runs fine, but it prints the output only after the docker script has completely finished. I want to see the command's output in my Linux terminal as it happens, not all at once at the end.
I tried all the combinations given in the documentation and read it many times, but I still don't understand: how do I redirect stdout to my terminal window?
According to the documentation, stdout has a default behavior that depends on how you launch your subprocess:
Defaults to inherit when used with spawn or status, and defaults to piped when used with output.
So stdout is piped when you call output(). What does piped mean? It means that the child process's output will be directed to the parent process (our Rust program in this case). std::process::Command is kind enough to give us this as a string:
use std::process::{Command, Stdio};
let output = Command::new("echo")
.arg("Hello, world!")
.stdout(Stdio::piped())
.output()
.expect("Failed to execute command");
assert_eq!(String::from_utf8_lossy(&output.stdout), "Hello, world!\n");
// Nothing echoed to console
Now that we understand where stdout is currently going, if you want the output streamed to the console instead, launch the process using spawn():
use std::process::Command;

fn main() {
    let output = Command::new("/bin/bash")
        .args(&["-c", "echo hello world"])
        .spawn()
        .expect("failed to execute process");
    println!("{:?}", output);
}
Notice also that in this latter example, I pass the full echo hello world command as one string. This is because bash -c takes a single string, splits it on whitespace, and runs it. If you were in your console executing a docker command through a bash shell, you would say:
bash -c "docker run ..."
The quotes above tell the shell to keep the third argument together and not split it on spaces. The equivalent in our Rust array is to pass the full command as a single string (assuming you wish to call it through bash -c, of course).
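Applied to the docker case, a sketch might look like the following. Note that this simplifies the build command from the question (the stray - and <> arguments are dropped because their intent isn't clear), so treat the exact docker invocation as an assumption:

use std::process::Command;

fn main() {
    // The whole docker command is one string, so bash -c parses it itself;
    // spawn() leaves stdout/stderr inherited, so output streams live.
    let mut child = Command::new("/bin/bash")
        .args(&["-c", "docker build -t postgres:latest dockers/PostgreSql"])
        .spawn()
        .expect("failed to execute process");
    // wait so the parent doesn't exit before the build finishes
    let status = child.wait().expect("failed to wait on child");
    println!("{:?}", status);
}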
I had luck with the other answer, but I had to modify it a little. For me, I needed to add the wait method:
use std::{io, process::Command};

fn main() -> io::Result<()> {
    let mut o = Command::new("rustc").arg("-V").spawn()?;
    o.wait()?;
    Ok(())
}
otherwise the parent program will end before the child.
https://doc.rust-lang.org/std/process/struct.Child.html#method.wait
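As an aside, status() should do the same job in one step: per the documentation quoted earlier, it defaults to inherit just like spawn, and it also waits for the child itself. A sketch:

use std::{io, process::Command};

fn main() -> io::Result<()> {
    // status() spawns with inherited stdio and waits for the child,
    // combining the spawn + wait steps above.
    let status = Command::new("rustc").arg("-V").status()?;
    println!("child exited with: {}", status);
    Ok(())
}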
I don't have much experience with Perl and would appreciate any/all feedback...
[Before I start: I do not have access/authority to change the existing perl scripts.]
I run a couple perl scripts several times a day, but I would like to begin capturing their output in a file.
The first perl script does not take any arguments, and I'm able to "tee" its output without issue:
/asdf/loc1/rebuild-stuff.pl 2>&1 | tee $mytmpfile1
The second perl script hangs with this command:
/asdf/loc1/create-site.pl --record=${newsite} 2>&1 | tee $mytmpfile2
FYI, the following command does NOT hang:
/asdf/loc1/create-site.pl --record=${newsite} 2>&1
I'm wondering if /asdf/loc1/create-site.pl is trying to process the | tee $mytmpfile2 as additional command-line arguments? I'm not permitted to share the entire script, but here's the beginning of its main routine:
...
my $fullpath = $0;
$0 =~ s%.*/%%;
# Parse command-line options.
...
Getopt::Long::config ('no_ignore_case','bundling');
GetOptions ('h|help'               => \$help,
            'n|dry-run|just-print' => \$preview,
            'q|quiet|no-mail'      => \$quiet,
            'r|record=s'           => \$record,
            'V|noverify'           => \$skipverify,
            'v|version'            => \$version) or exit 1;
...
Does the above code provide any clues? Other than modifying the script, do you have any tips for allowing me to capture its output in a file?
It's not hanging. You are "suffering from buffering". Like most programs, Perl buffers STDOUT by default: it is flushed on newline when connected to a terminal, and block-buffered otherwise. So when STDOUT isn't connected to a terminal, you won't get any output until 4 KiB or 8 KiB of output has accumulated (depending on your version of Perl) or the program exits.
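This behavior isn't Perl-specific. Purely as an illustration, here is a Rust sketch that reproduces block buffering with an explicit BufWriter (8 KiB by default): nothing reaches the pipe until the buffer fills or the writer is dropped at exit:

use std::io::{self, BufWriter, Write};
use std::{thread, time::Duration};

fn main() -> io::Result<()> {
    let stdout = io::stdout();
    let mut out = BufWriter::new(stdout.lock()); // 8 KiB buffer by default
    for i in 0..5 {
        writeln!(out, "line {}", i)?; // sits in the buffer for now
        // out.flush()?;              // uncommenting makes each line appear immediately
        thread::sleep(Duration::from_secs(1));
    }
    Ok(())
    // the buffer is flushed when `out` is dropped here, i.e. at program exit
}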
You could add $| = 1; to the script to disable buffering for STDOUT. If your program ends with a true value or exits using exit, you can do that without changing the .pl file. Simply use the following wrapper:
perl -e'
    $| = 1;
    $0 = shift;
    do($0);
    my $e = $@ || $! || "$0 didn\x27t return a true value\n";
    die($e) if $e;
' -- prog args | ...
Or you could fool the program into thinking it's connected to a terminal using unbuffer.
unbuffer prog args | ...
I'd like to print the PID of any process I run in my command at the beginning (even if it's not a background process).
I came up with this:
fn() {
    eval $1
}
Now whenever I run a command, I want it to be processed as
fn "actual_command_goes_here"&; wait
So that this will trigger a background process (which will print the PID) and then run the actual command. Example:
$ fn "sleep 5; date +%s; sleep 5; date +%s; sleep 2"&; wait
[1] 29901
1479143885
1479143890
[1] + 29901 done fn "..."
My question is: is there any way to create a wrapper function for all commands in Bash/ZSH? That is, when ever I run ls, it should actually run fn "ls"&; wait.
Question
Why is nothing printed when using an anonymous pipe, unless I print the actual data from the pipe?
Example
use strict;
use warnings;
my $child_process_id = 0;
my $vmstat_command = 'vmstat 7|';
$child_process_id = open(VMSTAT, $vmstat_command) || die "Error when executing \"$vmstat_command\": $!";
while (<VMSTAT>) {
    print "hi";
}
close VMSTAT or die "bad command: $! $?";
Appears to hang
use strict;
use warnings;
my $child_process_id = 0;
my $vmstat_command = 'vmstat 7|';
$child_process_id = open(VMSTAT, $vmstat_command) || die "Error when executing \"$vmstat_command\": $!";
while (<VMSTAT>) {
    print "hi" . $_;
    # ^^^ Added this
}
close VMSTAT or die "bad command: $! $?";
Prints
hiprocs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
hi r b swpd free buff cache si so bi bo in cs us sy id wa st
hi 1 0 0 7264836 144200 307076 0 0 0 1 0 14 0 0 100 0 0
etc...
Expected behaviour
Would be to print hi for every line of output of vmstat for the first example.
Versions
perl, v5.10.0
GNU bash, version 3.2.51
Misc
It also appears to hang when using chomp before printing the line (which I thought only removes newlines).
I feel like I'm missing something fundamental to how the pipe is read and processed, but I could not find a similar question. If there is one, then dupe this and I'll have a look at it.
Any further information needed just ask.
Alter
print "hi";
to
print "hi\n";
and it also "works"
The reason it fails is that output is line-buffered by default, so nothing is flushed until a newline is printed.
Setting $| will flush the buffer straight away:
If set to nonzero, forces a flush right away and after every write or print on the currently selected output channel. Default is 0 (regardless of whether the channel is really buffered by the system or not; "$|" tells you only whether you've asked Perl explicitly to flush after each write). STDOUT will typically be line buffered if output is to the terminal and block buffered otherwise. Setting this variable is useful primarily when you are outputting to a pipe or socket, such as when you are running a Perl program under rsh and want to see the output as it's happening. This has no effect on input buffering. See the getc entry in the perlfunc manpage for that. (Mnemonic: when you want your pipes to be piping hot.)
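The same no-newline effect is easy to reproduce in other languages. For illustration, Rust's stdout is also line-buffered, so a sketch like this shows each print! immediately only because of the explicit flush; remove the flush and the "hi"s appear all at once at the final newline:

use std::io::{self, Write};
use std::{thread, time::Duration};

fn main() {
    for _ in 0..5 {
        print!("hi"); // no newline: stays in the line buffer
        io::stdout().flush().unwrap(); // comment this out and nothing appears until the end
        thread::sleep(Duration::from_secs(1));
    }
    println!(); // a newline always forces a flush
}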
I am running my tests using TAP::Harness. When I run the tests from the command line on a Linux system, I get the test results on STDOUT as they run, but when I try to capture the output to a file as well as STDOUT using perl harness.pl | tee out.tap, the results are buffered and displayed only at the end. I tried passing a file handle to new, but the results are still buffered before being written to the file. Is there a way not to buffer the output? I have a long-running suite and would like to look at the results while the tests are running, as well as capture the output.
TAP::Harness version 3.22 and perl version 5.8.8
here is the sample code
harness.pl
#!/usr/bin/perl
use strict;
use warnings;
use TAP::Harness;
$|++;

my @tests = ('del.t',);
my $harness = TAP::Harness->new( {
    verbosity => 1,
} );
$harness->runtests(@tests);
and the test del.t
use Test::More qw /no_plan/;
$|++;

my $count = 1;
for (1 .. 20) {
    ok ( $count++ == $_, "Pass $_");
    sleep 1 if ( $count % 5 == 0 );
}
Using script instead of tee does what you want:
script -c 'perl harness.pl' file
Found a simple change to make tee work as well: Specify a formatter_class:
my $harness = TAP::Harness->new( {
    verbosity       => 1,
    formatter_class => 'TAP::Formatter::Console',
} );
This is because TAP::Harness normally uses a different default formatter when the output is not a tty, and that formatter is what causes the buffering you're seeing.
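That not-a-tty check is a common pattern rather than a TAP::Harness quirk. For illustration, here is a sketch of the same check in Rust using the standard IsTerminal trait (stable since Rust 1.70):

use std::io::{self, IsTerminal};

fn main() {
    // Many tools pick a different default depending on whether stdout
    // is a terminal or a pipe; that is exactly why `| tee` changed
    // which formatter TAP::Harness selected.
    if io::stdout().is_terminal() {
        println!("stdout is a tty: console-friendly default");
    } else {
        println!("stdout is piped: file-oriented default");
    }
}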