Create child process that duplicates stderr of main process - rust

Is there a stable way I can create a child process that hangs out in the background and inherits stderr, stdin and stdout? From what I can see, creating a child requires launching a separate program. Instead, I want a child process that lasts as long as the main process and only serves to duplicate stderr so I can read from it.
Here's an example of creating a process from the linked docs:
use std::process::Command;

let output = Command::new("sh")
    .arg("-c")
    .arg("echo hello")
    .output()
    .unwrap_or_else(|e| panic!("failed to execute process: {}", e));
let hello = output.stdout;
What I'd like to do:
use std::process::Command;
use std::thread;

// create a process that hangs out in the background and inherits
// stderr, stdin and stdout from the main process
let leech = Command::new(/* ??? */);
// ....
// panic occurs somewhere in the program
if thread::panicking() {
    let output = leech.output().unwrap().stderr;
}
// screen clears
// print stderr of output
I need to create a leech of sorts because panics displayed on the main screen are lost to the terminal graphics: the library clears the screen, which also wipes away panic messages. If I could duplicate stderr and somehow read it, I could reprint the panic message after the terminal is restored to its pre-program state.

I believe this is easier to do with a wrapper program, instead of launching something from the Rust program itself. Here's an example of how to do it with a shell script:
#!/bin/bash
# Redirection magic from http://stackoverflow.com/a/6317938/667984
{ errors=$(./my_rust_program 2>&1 1>&$original_out); } {original_out}>&1
if [[ $? -ne 0 ]]; then
    echo
    echo "--terminal reset shenanigans--"
    echo
    echo "$errors" >&2
fi
When used with this rust program:
fn main() {
    println!("Normal program output");
    panic!("oops");
}
It prints:
Normal program output
--terminal reset shenanigans--
thread '<main>' panicked at 'oops', my_rust_program.rs:3
I believe you can create one in stable Rust as well, but since you mention sh in your question I assume you are in a Unix environment anyway, and the shell script version should be simpler.
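If you would rather keep the wrapper in Rust than in bash, here is a rough sketch of the same idea on stable Rust (not part of the original answer; it assumes the wrapped binary is the ./my_rust_program from above):

use std::process::{Command, Stdio};

fn main() {
    // Run the real program: stdin and stdout stay attached to the terminal,
    // but stderr is piped back to this wrapper so it survives the screen clear.
    let output = Command::new("./my_rust_program")
        .stdin(Stdio::inherit())
        .stdout(Stdio::inherit())
        .stderr(Stdio::piped())
        .output()
        .expect("failed to run ./my_rust_program");

    if !output.status.success() {
        // The terminal has been restored by now; reprint the captured panic.
        eprintln!("{}", String::from_utf8_lossy(&output.stderr));
    }
}

The explicit .stdin and .stdout calls override output()'s default of capturing everything, so only stderr is collected by the wrapper.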

Related

What is the Rust equivalent of the system function in C++? [duplicate]

Is there a way to invoke a system command, like ls or fuser in Rust? How about capturing its output?
std::process::Command allows for that.
There are multiple ways to spawn a child process and execute an arbitrary command on the machine:
spawn — runs the program and returns a Child handle (the process keeps running in the background)
output — runs the program, waits for it to finish, and returns its captured output together with the exit status
status — runs the program, waits for it to finish, and returns its exit status
One simple example from the docs:
use std::process::Command;

Command::new("ls")
    .arg("-l")
    .arg("-a")
    .spawn()
    .expect("ls command failed to start");
A fuller example from the docs that also captures the output:
use std::process::Command;

let output = Command::new("/bin/cat")
    .arg("file.txt")
    .output()
    .expect("failed to execute process");

println!("status: {}", output.status);
println!("stdout: {}", String::from_utf8_lossy(&output.stdout));
println!("stderr: {}", String::from_utf8_lossy(&output.stderr));

assert!(output.status.success());
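The third method from the list above, status, isn't shown in the docs excerpts here; a minimal sketch (it inherits the parent's stdin/stdout/stderr and only reports how the command exited):

use std::process::Command;

let status = Command::new("ls")
    .arg("-l")
    .status()
    .expect("failed to run ls");

println!("ls exited with: {}", status);
assert!(status.success());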
It is indeed possible! The relevant module is std::run. (Note that std::run existed only in pre-1.0 Rust; in current Rust this functionality lives in std::process.)
let mut options = std::run::ProcessOptions::new();
let process = std::run::Process::new("ls", &[your, arguments], options);
ProcessOptions’ standard file descriptors default to None (create a new pipe), so you can just use process.output() (for example) to read from its output.
If you want to run the command and get all its output after it’s done, there’s wait_with_output for that.
Process::new, as of yesterday, returns an Option<Process> instead of a Process, by the way.

How to handle updates from a continuous process pipe in Perl

I am trying to follow log files in Perl on Fedora but unfortunately, Fedora uses journalctl to read binary log files that I cannot parse directly. This, according to my understanding, means I can only read Fedora's log files by calling journalctl.
I tried using IO::Pipe to do this, but the problem is that $p->reader(..) waits until journalctl --follow is done writing output (which will be never since --follow is like tail -F) and then allows me to print everything out which is not what I want. I would like to be able to set a callback function to be called each time a new line is printed to the process pipe so that I can parse/handle each new log event.
use IO::Pipe;

my $p = IO::Pipe->new();
$p->reader("journalctl --follow"); # Waits for process to exit
while (<$p>) {
    print;
}
I assume that journalctl is working like tail -f. If this is correct, a simple open should do the job:
use Fcntl; # Import SEEK_CUR
my $pid = open my $fh, '-|', 'journalctl --follow'
    or die "Error $! starting journalctl";
while (kill 0, $pid) {
    while (<$fh>) {
        print $_;           # Print log line
    }
    sleep 1;                # Wait some time for new lines to appear
    seek($fh, 0, SEEK_CUR); # Reset EOF
}
open opens a filehandle for reading the output of the called command: http://perldoc.perl.org/functions/open.html
seek is used to reset the EOF marker: http://perldoc.perl.org/functions/seek.html Without reset, all subsequent <$fh> calls will just return EOF even if the called script issued additional output in the meantime.
kill 0,$pid will be true as long as the child process started by open is alive.
You may replace sleep 1 by usleep from Time::HiRes or select undef,undef,undef,$fractional_seconds; to wait less than a second depending on the frequency of incoming lines.
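For example (both variants are only sketches; pick an interval that matches how quickly your log grows):

use Time::HiRes qw(usleep);
# inside the outer loop, instead of sleep 1:
usleep(250_000);                   # 250 ms
# or, without loading an extra module:
select undef, undef, undef, 0.25;  # also 250 ms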
AnyEvent should also be able to do the job via its AnyEvent::Handle.
Update:
Adding use POSIX ":sys_wait_h"; at the beginning and waitpid($pid, WNOHANG) to the outer loop condition would also detect (and reap) a zombie journalctl process:
while (kill(0, $pid) and waitpid($pid, WNOHANG) != $pid) {
A daemon might also want to check if $pid is still a child of the current process ($$) and if it's still the original journalctl process.
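Putting the update together, a sketch of the whole loop with the zombie check applied (same journalctl command as above):

use Fcntl qw(SEEK_CUR);
use POSIX ":sys_wait_h";

my $pid = open my $fh, '-|', 'journalctl --follow'
    or die "Error $! starting journalctl";

# Stop as soon as journalctl exits, reaping it if it has become a zombie
while (kill(0, $pid) and waitpid($pid, WNOHANG) != $pid) {
    while (<$fh>) {
        print $_;               # handle each new log line here
    }
    sleep 1;                    # wait for new lines to appear
    seek($fh, 0, SEEK_CUR);     # reset EOF so <$fh> keeps working
}
close $fh;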
I have no access to journalctl, but if you avoid IO::Pipe and open the piped output directly then the data will not be buffered
use strict;
use warnings 'all';
open my $follow_fh, '-|', 'journalctl --follow' or die $!;
print while <$follow_fh>;

What is the Perl equivalent of PHP's proc_open(), proc_close(), etc.?

Using PHP's proc_open(), I can start a process, read from STDOUT and STDERR (separately) an arbitrary number of bytes at a time using fread() while the process is running, detect when the process is done using feof() on the STDOUT and STDERR pipes, and then use proc_close() to get the exit code of the process. I've done all of this in PHP. It works well, and gives me a lot of control.
Is there a way to do all of these things in Perl? To summarize, I need to be able to do the following:
start an external process
read STDOUT and STDERR separately
read STDOUT and STDERR an arbitrary number of bytes at a time while the process is running (i.e. without having to wait for the process to finish)
detect when the process is finished
get the exit code of the process
Thanks in advance for your answers.
You could roll your own solution using Perl's system call interface, but it's easier to use the built-in module IPC::Open3. As for your list:
Start an external process:
use IPC::Open3;
use IO::Handle;
use strict;

my $stdin  = IO::Handle->new;   # child's stdin (write to it, or close it if unused)
my $stdout = IO::Handle->new;
my $stderr = IO::Handle->new;
my $pid = open3($stdin, $stdout, $stderr, 'my-command', 'arg1', 'arg2');
Read STDOUT and STDERR separately, an arbitrary number of bytes at a time:
my $line = <$stdout>;
# Or
sysread $stderr, my $buffer, 1024;
Detect when the process is finished:
use POSIX qw(:sys_wait_h);
waitpid $pid, 0;       # Waits for the process to terminate
waitpid $pid, WNOHANG; # Checks whether the process has terminated (non-blocking)
Get the exit code of the process:
my $status = $?;          # After waitpid indicates the process has exited
my $exit_code = $? >> 8;  # The high byte of $? holds the actual exit code
Be sure to read the IPC::Open3 documentation; as it warns, it's easy to deadlock yourself when you have separate stdout and stderr pipes if you're not careful: if the child fills one pipe and blocks while the parent is blocked reading the other, neither side can make progress.
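One common way around that deadlock (a sketch only, reusing the $stdout, $stderr and $pid handles from step 1) is to multiplex the two pipes with IO::Select:

use IO::Select;

my $sel = IO::Select->new($stdout, $stderr);
while ($sel->count) {
    for my $fh ($sel->can_read) {
        my $n = sysread $fh, my $buf, 4096;
        if (!$n) {                # EOF (or read error): stop watching this handle
            $sel->remove($fh);
            next;
        }
        print $fh == $stdout ? "child stdout: $buf" : "child stderr: $buf";
    }
}
waitpid $pid, 0;                  # reap the child; the wait status lands in $?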
You want this module: IPC::Open3
You want IPC::Run; it captures the IO and returns the exit value.
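A minimal IPC::Run sketch, reusing the hypothetical my-command from above (run returns true when every child exits with status 0, and the raw wait status is left in $?):

use IPC::Run qw(run);

my ($in, $out, $err) = ('', '', '');
run ['my-command', 'arg1', 'arg2'], \$in, \$out, \$err
    or die "my-command failed, wait status $?";
print "stdout: $out";
print "stderr: $err";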

How can I change the current directory in a thread-safe manner in Perl?

I'm using Thread::Pool::Simple to create a few working threads. Each working thread does some stuff, including a call to chdir followed by an execution of an external Perl script (from the jbrowse genome browser, if it matters). I use capturex to call the external script and die on its failure.
I discovered that when I use more than one thread, things start to get messy. After some research, it seems that the current directory of some threads is not the correct one.
Perhaps chdir propagates between threads (i.e. isn't thread-safe)?
Or perhaps it's something with capturex?
So, how can I safely set the working directory for each thread?
** UPDATE **
Following the suggestions to change the directory in the executed process, I'd like to ask how exactly I should pass these two commands to capturex?
Currently I have:
my @args = ( "bin/flatfile-to-json.pl", "--gff=$gff_file", "--tracklabel=$track_label", "--key=$key", @optional_args );
capturex( [0], @args );
How do I add another command to @args?
Will capturex continue to die on errors from either command?
I think that you can solve your "how do I chdir in the child before running the command" problem pretty easily by abandoning IPC::System::Simple as not the right tool for the job.
Instead of doing
my $output = capturex($cmd, @args);
do something like:
use autodie qw(open close);

my $pid = open my $fh, '-|';
unless ($pid) { # this is the child
    chdir($wherever);
    exec($cmd, @args) or exit 255;
}
my $output = do { local $/; <$fh> };
# If the child exited with an error or couldn't be run, the exception will
# be raised here (via autodie; feel free to replace it with
# your own handling)
close($fh);
If you were getting a list of lines instead of scalar output from capturex, the only thing that needs to change is the second-to-last line (to my @output = <$fh>;).
More info on forking-open is in perldoc perlipc.
The good thing about this in preference to capture("chdir wherever ; $cmd @args") is that it doesn't give the shell a chance to do bad things to your @args.
Updated code (doesn't capture output)
my $pid = fork;
die "Couldn't fork: $!" unless defined $pid;
unless ($pid) { # this is the child
    chdir($wherever);
    open STDOUT, ">/dev/null"; # optional: silence subprocess output
    open STDERR, ">/dev/null"; # even more optional
    exec($cmd, @args) or exit 255;
}
wait;
die "Child error $?" if $?;
I don't think "current working directory" is a per-thread property. I'd expect it to be a property of the process.
It's not clear exactly why you need to use chdir at all though. Can you not launch the external script setting the new process's working directory appropriately instead? That sounds like a more feasible approach.
