Command child process output stream - rust

I have an Actix Web application, and what I want to achieve in one of my routes is to run an interactive command:
Run the command and stream its output.
Get the first data that is streamed.
Return that to the user without killing the child process, letting it keep running until it exits.
The Rust child process is not streaming the data correctly. For example, I have the code below:
let mut command_output = command
    .arg("arg1")
    .arg("arg2")
    .arg("arg3")
    .spawn()
    .unwrap();
match &mut command_output.stdout {
    _ => {
        // ...
    }
}
println!("the data {:?}", command_output.stdout);
let status = command_output.wait();
I want that child process to stream the data. Right now, first a line is printed to the terminal that says "the data None", and then another line is printed with the output I was expecting, but it does not come from my println!. I don't know where it comes from; it is probably printed internally by the process, but I need to access that data, and it is not printed by my println!.
I tried to read the stdout with a BufReader and changed command.stdout() to different values like Stdio::inherit() and Stdio::piped(), but streaming is still not working as expected. I can't println! that value; it is printed by the process internally, I think.

To capture output from a child process, you need to call Command::stdout, supplying the argument Stdio::piped().
If you do not do that, the child process inherits the parent's stdout, and its output is displayed on the terminal instead of being captured by the parent. In that scenario the child's stdout field is None, which is what you are getting when you print it.
If you do call command.stdout(Stdio::piped()), then the child's stdout will be Some(stdout), where stdout is an instance of std::process::ChildStdout. This struct implements the Read trait, so you can read from it. For example, to read line by line:
use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};

let mut child = Command::new("ls")
    .arg("-ltr")
    .stdout(Stdio::piped())
    .spawn()
    .unwrap();
let stdout = child.stdout.take().unwrap();
let mut bufread = BufReader::new(stdout);
let mut buf = String::new();
// read_line returns Ok(0) at end of stream.
while let Ok(n) = bufread.read_line(&mut buf) {
    if n > 0 {
        println!("Line: {}", buf.trim());
        buf.clear();
    } else {
        break;
    }
}
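If, as in the original question, you need to return the first piece of output while the child keeps running, here is a minimal sketch (not part of the answer above; first_line_then_detach and the ls command are placeholders): read one line, then hand the reader and the child off to a background thread.

use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};
use std::thread;

fn first_line_then_detach() -> std::io::Result<String> {
    let mut child = Command::new("ls")
        .arg("-ltr")
        .stdout(Stdio::piped())
        .spawn()?;
    let mut reader = BufReader::new(child.stdout.take().unwrap());
    // Block only until the first line arrives.
    let mut first = String::new();
    reader.read_line(&mut first)?;
    // Drain the rest and reap the child in the background so the caller
    // can return immediately while the process keeps running.
    thread::spawn(move || {
        for _ in reader.lines() {}
        let _ = child.wait();
    });
    Ok(first)
}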

Related

Rust unix process

I am trying to use fork() to create a child process and communicate with it through pipes, using the nix library and os_pipe. When I execute the program it just waits, and I never see the message the child process sends to the parent. I tried the drop function to close the pipe, but it did not work. This is my code:
use nix::sys::wait::wait;
use nix::unistd::ForkResult::{Child, Parent};
use nix::unistd::{fork, getpid, getppid, pipe};
use std::io::prelude::*;

fn main() {
    let pid = fork();
    let (mut reader, mut writer) = os_pipe::pipe().unwrap();
    match pid.expect("Error during creating child") {
        Parent { child } => {
            println!("From parent, {}", getpid());
            wait().unwrap();
            let mut data = String::new();
            reader.read_to_string(&mut data).unwrap();
            println!("data: {}", data);
        }
        Child => {
            println!("from child, {}", getpid());
            writer.write_all("hello".as_bytes()).unwrap();
        }
    }
}
Thanks!!!
First, you call fork() and then you create the pipes. As soon as fork() returns there are two processes (think of it as fork() returning twice). That means each process creates a separate pipe pair, unrelated to the other. Just swap those two lines so there is only one shared pipe (also, you forgot an unsafe):
let (mut reader, mut writer) = os_pipe::pipe().unwrap();
let pid = unsafe { fork() };
Then, after the fork there are two copies of each end of the pipe, but since the parent only reads and the child only writes, you should close (drop) the unused ends:
match pid.expect("Error during creating child") {
    Parent { child } => {
        drop(writer);
        //...
    }
    Child => {
        drop(reader);
        //...
    }
}
If you fail to close the writer in the parent, then even when the child ends the pipe will still be open for writing, so the parent will never reach the end of the pipe and read_to_string() will never finish. The drop(reader) is not actually needed here, but it is good practice anyway: a failure to write to the pipe can be used by the child to detect that the parent has died.
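Putting both fixes together, a corrected version of the question's program might look like this (a sketch, assuming the same nix and os_pipe crates):

use nix::sys::wait::wait;
use nix::unistd::ForkResult::{Child, Parent};
use nix::unistd::{fork, getpid};
use std::io::prelude::*;

fn main() {
    // Create the pipe *before* forking so both processes share it.
    let (mut reader, mut writer) = os_pipe::pipe().unwrap();
    let pid = unsafe { fork() };
    match pid.expect("Error during creating child") {
        Parent { child: _ } => {
            drop(writer); // close the unused write end
            println!("From parent, {}", getpid());
            wait().unwrap();
            let mut data = String::new();
            reader.read_to_string(&mut data).unwrap();
            println!("data: {}", data);
        }
        Child => {
            drop(reader); // close the unused read end
            println!("from child, {}", getpid());
            writer.write_all(b"hello").unwrap();
        }
    }
}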

Run command, stream stdout/stderr and capture results

I'm trying to use std::process::Command to run a command and stream its stdout and stderr while also capturing a copy of stdout/stderr. I found I can use spawn.
This code will capture the output, but won't stream it to stdout/stderr while it's happening:
let mut child = command
    .envs(env)
    .stdout(Stdio::piped()) // <=== Difference here
    .spawn()
    .unwrap();
let output = child.wait_with_output().unwrap();
println!("Done {}", std::str::from_utf8(&output.stdout).unwrap());
This code will stream the output but not capture it:
let mut child = command
    .envs(env)
    .spawn()
    .unwrap();
let output = child.wait_with_output().unwrap();
println!("Done {}", std::str::from_utf8(&output.stdout).unwrap());
Is there a way to capture a command's output while also streaming it to the parent stdout/stderr?
There might be a less verbose way to do this, but this is the solution I came up with.
Spawn the process with piped io for stdout and stderr. Spawn one thread for stdout and one for stderr. In each thread, read from the pipe, write directly to stdout or stderr, and then send the contents over a channel.
In the main thread, wait for the process to finish, then join the threads and finally drain each channel to get the contents of stdout and stderr.
use std::io::{BufRead, BufReader};
use std::process::Stdio;
use std::thread;

let mut child = command
    .stdout(Stdio::piped())
    .stderr(Stdio::piped())
    .spawn()
    .unwrap();
let child_stdout = child
    .stdout
    .take()
    .expect("Internal error, could not take stdout");
let child_stderr = child
    .stderr
    .take()
    .expect("Internal error, could not take stderr");
let (stdout_tx, stdout_rx) = std::sync::mpsc::channel();
let (stderr_tx, stderr_rx) = std::sync::mpsc::channel();
let stdout_thread = thread::spawn(move || {
    let stdout_lines = BufReader::new(child_stdout).lines();
    for line in stdout_lines {
        let line = line.unwrap();
        println!("{}", line);
        stdout_tx.send(line).unwrap();
    }
});
let stderr_thread = thread::spawn(move || {
    let stderr_lines = BufReader::new(child_stderr).lines();
    for line in stderr_lines {
        let line = line.unwrap();
        eprintln!("{}", line);
        stderr_tx.send(line).unwrap();
    }
});
let status = child
    .wait()
    .expect("Internal error, failed to wait on child");
stdout_thread.join().unwrap();
stderr_thread.join().unwrap();
// Note: lines() strips newlines, so join("") concatenates the lines directly.
let stdout = stdout_rx.into_iter().collect::<Vec<String>>().join("");
let stderr = stderr_rx.into_iter().collect::<Vec<String>>().join("");
The channel isn't strictly needed. I originally wanted to mutate a string, but I'm new to threads in Rust and couldn't find any examples showing how to mutate a string in a thread and then read it back in main.
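For reference, a minimal sketch of that shared-string approach (my illustration, not code from either answer): wrap the String in an Arc<Mutex<...>>, let the thread append to it, and read it back in main after joining.

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let captured = Arc::new(Mutex::new(String::new()));
    let writer = Arc::clone(&captured);
    let handle = thread::spawn(move || {
        // Each line a reader thread produces would be appended here.
        for line in ["line one", "line two"].iter() {
            let mut guard = writer.lock().unwrap();
            guard.push_str(line);
            guard.push('\n');
        }
    });
    handle.join().unwrap();
    // After joining, main has exclusive access again.
    println!("{}", captured.lock().unwrap());
}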
I'm accepting the other solution, as it really answered my main question. I just wanted to post back to give everyone a fully-featured answer that does exactly what I originally asked.
This is similar to how I stream the compilation and execution output on Rust Explorer.
To stream the output you can pipe the stdout and read it line by line using BufReader.
use std::io::BufRead;
use std::io::BufReader;
use std::process::Command;
use std::process::Stdio;

fn main() {
    // Run a command that produces output slowly.
    let mut child = Command::new("bash")
        .args([
            "-c",
            "echo 'Hello'; sleep 3s; echo 'World'"
        ])
        .stdout(Stdio::piped())
        .spawn()
        .unwrap();
    let stdout = child.stdout.take().unwrap();
    // Stream output.
    let lines = BufReader::new(stdout).lines();
    for line in lines {
        println!("{}", line.unwrap());
    }
}

Writing to stdin & reading from stdout in Rust Command process

I'll try to simplify what I'm trying to accomplish as much as possible, but in a nutshell here is my problem:
I am trying to spawn the node shell as a process in Rust. I would like to pass JavaScript code to the process's stdin and read the Node.js output from the process's stdout. This is an interactive usage where the node shell is spawned once and keeps receiving JS instructions and executing them.
I do not wish to launch the Node.js app using a file argument.
I have read quite a bit about std::process::Command and tokio, and about why we can't write to and read from a piped process on one thread using only the standard library. One of the solutions that kept coming up online (in order to not block the main thread while reading/writing) is to use a thread for reading the output. Most solutions did not involve a continuous write/read flow.
What I have done is to spawn two threads, one that keeps writing to stdin and one that keeps reading from stdout. That way, I thought, I wouldn't be blocking the main thread. However, my issue is that only one thread is ever actively used: when I have a thread for stdin, the stdout thread does not even receive data.
Here is the code; the comments provide more details:
pub struct Runner {
    handle: Child,
    pub input: Arc<Mutex<String>>,
    pub output: Arc<Mutex<String>>,
    input_thread: JoinHandle<()>,
    output_thread: JoinHandle<()>,
}

impl Runner {
    pub fn new() -> Runner {
        let mut handle = Command::new("node")
            .stdin(Stdio::piped())
            .stdout(Stdio::piped())
            .spawn()
            .expect("Failed to spawn node process!");
        // begin stdout thread part
        let mut stdout = handle.stdout.take().unwrap();
        let output = Arc::new(Mutex::new(String::new()));
        let out_clone = Arc::clone(&output);
        let output_thread = spawn(move || loop {
            // code here never executes...why ?
            let mut buf: [u8; 1] = [0];
            let mut output = out_clone.lock().unwrap();
            let what_i_read = stdout.read(&mut buf);
            println!("reading: {:?}", what_i_read);
            match what_i_read {
                Err(err) => {
                    println!("{}] Error reading from stream: {}", line!(), err);
                    break;
                }
                Ok(bytes_read) => {
                    if bytes_read != 0 {
                        let char = String::from_utf8(buf.to_vec()).unwrap();
                        output.push_str(char.as_str());
                    } else if output.len() != 0 {
                        println!("result: {}", output);
                        out_clone.lock().unwrap().clear();
                    }
                }
            }
        });
        // begin stdin thread block
        let mut stdin = handle.stdin.take().unwrap();
        let input = Arc::new(Mutex::new(String::new()));
        let input_clone = Arc::clone(&input);
        let input_thread = spawn(move || loop {
            let mut in_text = input_clone.lock().unwrap();
            if in_text.len() != 0 {
                println!("writing: {}", in_text);
                stdin.write_all(in_text.as_bytes()).expect("!write");
                stdin.write_all("\n".as_bytes()).expect("!write");
                in_text.clear();
            }
        });
        Runner {
            handle,
            input,
            output,
            input_thread,
            output_thread,
        }
    }

    // this function should receive commands
    pub fn execute(&mut self, str: &str) {
        let input = Arc::clone(&self.input);
        let mut input = input.lock().unwrap();
        input.push_str(str);
    }
}
In the main thread, I'd like to use it like this:
let mut runner = Runner::new();
runner.execute("console.log('foo')");
println!("{:?}", runner.output);
I am still new to Rust, but at least I'm past the point where the borrow checker makes me bang my head against the wall; I am starting to find it more pleasant now :)

How can I asynchronously read from both stdout and stderr of a subprocess using Tokio? [duplicate]

I'm making a small ncurses application in Rust that needs to communicate with a child process. I already have a prototype written in Common Lisp. I'm trying to rewrite it because CL uses a huge amount of memory for such a small tool.
I'm having some trouble figuring out how to interact with the sub-process.
What I'm currently doing is roughly this:
Create the process:
let mut program = match Command::new(command)
    .args(arguments)
    .stdin(Stdio::piped())
    .stdout(Stdio::piped())
    .stderr(Stdio::piped())
    .spawn()
{
    Ok(child) => child,
    Err(_) => {
        println!("Cannot run program '{}'.", command);
        return;
    }
};
Pass it to an infinite (until user exits) loop, which reads and handles input and listens for output like this (and writes it to the screen):
fn listen_for_output(program: &mut Child, output_viewer: &TextViewer) {
    match program.stdout {
        Some(ref mut out) => {
            let mut buf_string = String::new();
            match out.read_to_string(&mut buf_string) {
                Ok(_) => output_viewer.append_string(buf_string),
                Err(_) => return,
            };
        }
        None => return,
    };
}
The call to read_to_string, however, blocks the program until the process exits. From what I can see, read_to_end and read also seem to block. If I run something like ls, which exits right away, it works; but with something that doesn't exit, like python or sbcl, it only continues once I kill the subprocess manually.
Based on this answer, I changed the code to use BufReader:
fn listen_for_output(program: &mut Child, output_viewer: &TextViewer) {
    match program.stdout.as_mut() {
        Some(out) => {
            let buf_reader = BufReader::new(out);
            for line in buf_reader.lines() {
                match line {
                    Ok(l) => {
                        output_viewer.append_string(l);
                    }
                    Err(_) => return,
                };
            }
        }
        None => return,
    }
}
However, the problem remains the same: it will read all the lines that are available and then block. Since the tool is supposed to work with any program, there is no way to guess when the output will end before trying to read. There doesn't appear to be a way to set a timeout for BufReader either.
Streams are blocking by default. TCP/IP streams, filesystem streams, pipe streams, they are all blocking. When you tell a stream to give you a chunk of bytes it will stop and wait until it has the given amount of bytes or until something else happens (an interrupt, an end of stream, an error).
The operating systems are eager to return data to the reading process, so if all you want is to wait for the next line and handle it as soon as it comes in, then the method suggested by Shepmaster in Unable to pipe to or from spawned child process more than once (and also in his answer here) works.
Though in theory it doesn't have to work, because the operating system is allowed to make the BufReader wait for more data in read, in practice operating systems prefer early "short reads" to waiting.
This simple BufReader-based approach becomes even more dangerous when you need to handle multiple streams (like the stdout and stderr of a child process) or multiple processes. For example, the BufReader-based approach might deadlock when a child process waits for you to drain its stderr pipe while your process is blocked waiting on its empty stdout.
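As a concrete illustration, here is a sketch of that failure mode (assuming a Unix-like system with bash; this program intentionally never finishes):

use std::io::Read;
use std::process::{Command, Stdio};

fn main() {
    // The child writes far more to stderr than a pipe buffer can hold,
    // then prints to stdout. We block draining stdout first, so the child
    // is stuck writing stderr and never reaches "done": deadlock.
    let mut child = Command::new("bash")
        .args(["-c", "head -c 1000000 /dev/zero >&2; echo done"])
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
        .unwrap();
    let mut out = String::new();
    child.stdout.take().unwrap().read_to_string(&mut out).unwrap();
    println!("{}", out); // never reached
}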
Similarly, you can't use BufReader when you don't want your program to wait on the child process indefinitely. Maybe you want to display a progress bar or a timer while the child is still working and giving you no output.
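For example, a reader thread plus a channel lets the main thread keep a timer ticking while the child stays silent. A sketch, assuming bash is available:

use std::io::{BufRead, BufReader, Write};
use std::process::{Command, Stdio};
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    let mut child = Command::new("bash")
        .args(["-c", "sleep 2; echo done"])
        .stdout(Stdio::piped())
        .spawn()
        .unwrap();
    let stdout = child.stdout.take().unwrap();
    // The reader thread does the blocking reads and forwards lines.
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        for line in BufReader::new(stdout).lines() {
            if tx.send(line.unwrap()).is_err() {
                break;
            }
        }
        // Dropping tx disconnects the channel, ending the loop below.
    });
    // The main thread wakes up every 100 ms even with no output.
    let start = Instant::now();
    loop {
        match rx.recv_timeout(Duration::from_millis(100)) {
            Ok(line) => println!("child said: {}", line),
            Err(mpsc::RecvTimeoutError::Timeout) => {
                print!("\rworking... {:.1}s", start.elapsed().as_secs_f32());
                std::io::stdout().flush().unwrap();
            }
            Err(mpsc::RecvTimeoutError::Disconnected) => break,
        }
    }
    child.wait().unwrap();
}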
You can't use the BufReader-based approach if your operating system happens not to be eager in returning data to the process (prefers "full reads" to "short reads"), because in that case the last few lines printed by the child process might end up in a gray zone: the operating system got them, but they're not large enough to fill the BufReader's buffer.
BufReader is limited to what the Read interface allows it to do with the stream; it's no less blocking than the underlying stream is. In order to be efficient, it reads the input in chunks, telling the operating system to fill as much of its buffer as it has available.
You might be wondering why reading data in chunks is so important here, and why the BufReader can't just read the data byte by byte. The problem is that to read data from a stream we need the operating system's help, and we are not the operating system: we work isolated from it, so as not to mess with it if something goes wrong with our process. So in order to call into the operating system there needs to be a transition to "kernel mode", which might also incur a "context switch". That is why calling the operating system to read every single byte is expensive. We want as few OS calls as possible, and so we get the stream data in batches.
To wait on a stream without blocking you'd need a non-blocking stream. MIO promises to have the required non-blocking stream support for pipes, most probably with PipeReader, but I haven't checked it out so far.
The non-blocking nature of a stream makes it possible to read data in chunks regardless of whether the operating system prefers "short reads" or not, because a non-blocking stream never blocks: if there is no data in the stream, it simply tells you so.
In the absence of a non-blocking stream you'll have to resort to spawning threads, so that the blocking reads are performed in a separate thread and thus don't block your primary thread. You might also want to read the stream byte by byte in order to react to a line separator immediately, in case the operating system does not prefer "short reads". Here's a working example: https://gist.github.com/ArtemGr/db40ae04b431a95f2b78.
P.S. Here's an example of a function that allows you to monitor the standard output of a program via a shared vector of bytes:
use std::io::Read;
use std::process::{Command, Stdio};
use std::sync::{Arc, Mutex};
use std::thread;

/// Pipe streams are blocking, we need separate threads to monitor them without blocking the primary thread.
fn child_stream_to_vec<R>(mut stream: R) -> Arc<Mutex<Vec<u8>>>
where
    R: Read + Send + 'static,
{
    let out = Arc::new(Mutex::new(Vec::new()));
    let vec = out.clone();
    thread::Builder::new()
        .name("child_stream_to_vec".into())
        .spawn(move || loop {
            let mut buf = [0];
            match stream.read(&mut buf) {
                Err(err) => {
                    println!("{}] Error reading from stream: {}", line!(), err);
                    break;
                }
                Ok(got) => {
                    if got == 0 {
                        break;
                    } else if got == 1 {
                        vec.lock().expect("!lock").push(buf[0])
                    } else {
                        println!("{}] Unexpected number of bytes: {}", line!(), got);
                        break;
                    }
                }
            }
        })
        .expect("!thread");
    out
}

fn main() {
    let mut cat = Command::new("cat")
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
        .expect("!cat");
    let out = child_stream_to_vec(cat.stdout.take().expect("!stdout"));
    let err = child_stream_to_vec(cat.stderr.take().expect("!stderr"));
    let mut stdin = match cat.stdin.take() {
        Some(stdin) => stdin,
        None => panic!("!stdin"),
    };
}
fn main() {
let mut cat = Command::new("cat")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.spawn()
.expect("!cat");
let out = child_stream_to_vec(cat.stdout.take().expect("!stdout"));
let err = child_stream_to_vec(cat.stderr.take().expect("!stderr"));
let mut stdin = match cat.stdin.take() {
Some(stdin) => stdin,
None => panic!("!stdin"),
};
}
With a couple of helpers I'm using it to control an SSH session:
try_s! (stdin.write_all (b"echo hello world\n"));
try_s! (wait_forˢ (&out, 0.1, 9., |s| s == "hello world\n"));
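The wait_forˢ helper itself is not shown in the answer; a hypothetical equivalent would poll the shared buffer until a predicate matches or a timeout expires:

use std::sync::{Arc, Mutex};
use std::time::{Duration, Instant};

// Hypothetical stand-in for the wait_forˢ helper used above.
fn wait_for_output(
    out: &Arc<Mutex<Vec<u8>>>,
    poll_secs: f64,
    timeout_secs: f64,
    pred: impl Fn(&str) -> bool,
) -> Result<(), String> {
    let deadline = Instant::now() + Duration::from_secs_f64(timeout_secs);
    loop {
        // Snapshot the bytes collected by the monitoring thread so far.
        let snapshot = String::from_utf8_lossy(&out.lock().unwrap()).into_owned();
        if pred(&snapshot) {
            return Ok(());
        }
        if Instant::now() >= deadline {
            return Err(format!("timed out waiting for output, got {:?}", snapshot));
        }
        std::thread::sleep(Duration::from_secs_f64(poll_secs));
    }
}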
P.S. Note that await on a read call in async-std is blocking as well; it's just that instead of blocking a system thread it only blocks a chain of futures (a stack-less green thread, essentially). poll_read is the non-blocking interface. In async-std#499 I've asked the developers whether there's a short-read guarantee from these APIs.
P.S. There might be a similar concern in Nom: "we would want to tell the IO side to refill according to the parser's result (Incomplete or not)"
P.S. Might be interesting to see how stream reading is implemented in crossterm. For Windows, in poll.rs, they are using the native WaitForMultipleObjects. In unix.rs they are using mio poll.
Tokio's Command
Here is an example of using tokio 0.2:
use std::process::Stdio;
use futures::StreamExt; // 0.3.1
use tokio::{io::BufReader, prelude::*, process::Command}; // 0.2.4, features = ["full"]

#[tokio::main]
async fn main() {
    let mut cmd = Command::new("/tmp/slow.bash")
        .stdout(Stdio::piped()) // Can do the same for stderr
        .spawn()
        .expect("cannot spawn");
    let stdout = cmd.stdout().take().expect("no stdout");
    // Can do the same for stderr

    // To print out each line
    // BufReader::new(stdout)
    //     .lines()
    //     .for_each(|s| async move { println!("> {:?}", s) })
    //     .await;

    // To print out each line *and* collect it all into a Vec
    let result: Vec<_> = BufReader::new(stdout)
        .lines()
        .inspect(|s| println!("> {:?}", s))
        .collect()
        .await;
    println!("All the lines: {:?}", result);
}
Tokio-Threadpool
Here is an example of using tokio 0.1 and tokio-threadpool. We start the process in a thread using the blocking function, and we convert that to a stream with stream::poll_fn.
use std::process::{Command, Stdio};
use tokio::{prelude::*, runtime::Runtime}; // 0.1.18
use tokio_threadpool; // 0.1.13

fn stream_command_output(
    mut command: Command,
) -> impl Stream<Item = Vec<u8>, Error = tokio_threadpool::BlockingError> {
    // Ensure that the output is available to read from and start the process
    let mut child = command
        .stdout(Stdio::piped())
        .spawn()
        .expect("cannot spawn");
    let mut stdout = child.stdout.take().expect("no stdout");

    // Create a stream of data
    stream::poll_fn(move || {
        // Perform blocking IO
        tokio_threadpool::blocking(|| {
            // Allocate some space to store anything read
            let mut data = vec![0; 128];
            // Read 1-128 bytes of data
            let n_bytes_read = stdout.read(&mut data).expect("cannot read");
            if n_bytes_read == 0 {
                // Stdout is done
                None
            } else {
                // Only return as many bytes as we read
                data.truncate(n_bytes_read);
                Some(data)
            }
        })
    })
}

fn main() {
    let output_stream = stream_command_output(Command::new("/tmp/slow.bash"));
    let mut runtime = Runtime::new().expect("Unable to start the runtime");
    let result = runtime.block_on({
        output_stream
            .map(|d| String::from_utf8(d).expect("Not UTF-8"))
            .fold(Vec::new(), |mut v, s| {
                print!("> {}", s);
                v.push(s);
                Ok(v)
            })
    });
    println!("All the lines: {:?}", result);
}
There are numerous possible tradeoffs that can be made here. For example, always allocating 128 bytes isn't ideal, but it's simple to implement.
Support
For reference, here's slow.bash:
#!/usr/bin/env bash
set -eu

val=0
while [[ $val -lt 10 ]]; do
    echo $val
    val=$(($val + 1))
    sleep 1
done
See also:
How do I synchronously return a value calculated in an asynchronous Future in stable Rust?
If Unix support is sufficient, you can also make the two output streams non-blocking and poll over them as you would a TcpStream, using the set_nonblocking function.
The ChildStdout and ChildStderr handles returned by the Command spawn wrap a file descriptor, so you can directly modify the read behavior of these handles to make them non-blocking.
Based on the work of jcreekmore/timeout-readwrite-rs and anowell/nonblock-rs, I use this wrapper to modify the stream handles:
extern crate libc;
use std::io::Read;
use std::os::unix::io::AsRawFd;
use libc::{fcntl, F_GETFL, F_SETFL, O_NONBLOCK};

fn set_nonblocking<H>(handle: &H, nonblocking: bool) -> std::io::Result<()>
where
    H: Read + AsRawFd,
{
    let fd = handle.as_raw_fd();
    let flags = unsafe { fcntl(fd, F_GETFL, 0) };
    if flags < 0 {
        return Err(std::io::Error::last_os_error());
    }
    let flags = if nonblocking {
        flags | O_NONBLOCK
    } else {
        flags & !O_NONBLOCK
    };
    let res = unsafe { fcntl(fd, F_SETFL, flags) };
    if res != 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(())
}
You can manage the two streams like any other non-blocking streams. The following example is based on the polling crate, which makes it really easy to handle read events, and uses BufReader for line reading:
use std::process::{Command, Stdio};
use std::path::PathBuf;
use std::io::{BufReader, BufRead};
use std::thread;

extern crate polling;
use polling::{Event, Poller};

fn main() -> Result<(), std::io::Error> {
    let path = PathBuf::from("./worker.sh").canonicalize()?;
    let mut child = Command::new(path)
        .stdin(Stdio::null())
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
        .expect("Failed to start worker");

    let handle = thread::spawn({
        let stdout = child.stdout.take().unwrap();
        set_nonblocking(&stdout, true)?;
        let mut reader_out = BufReader::new(stdout);

        let stderr = child.stderr.take().unwrap();
        set_nonblocking(&stderr, true)?;
        let mut reader_err = BufReader::new(stderr);

        move || {
            let key_out = 1;
            let key_err = 2;
            let mut out_closed = false;
            let mut err_closed = false;

            let poller = Poller::new().unwrap();
            poller.add(reader_out.get_ref(), Event::readable(key_out)).unwrap();
            poller.add(reader_err.get_ref(), Event::readable(key_err)).unwrap();

            let mut line = String::new();
            let mut events = Vec::new();
            loop {
                // Wait for at least one I/O event.
                events.clear();
                poller.wait(&mut events, None).unwrap();

                for ev in &events {
                    // stdout is ready for reading
                    if ev.key == key_out {
                        let len = match reader_out.read_line(&mut line) {
                            Ok(len) => len,
                            Err(e) => {
                                println!("stdout read returned error: {}", e);
                                0
                            }
                        };
                        if len == 0 {
                            println!("stdout closed (len is null)");
                            out_closed = true;
                            poller.delete(reader_out.get_ref()).unwrap();
                        } else {
                            print!("[STDOUT] {}", line);
                            line.clear();
                            // reload the poller
                            poller.modify(reader_out.get_ref(), Event::readable(key_out)).unwrap();
                        }
                    }

                    // stderr is ready for reading
                    if ev.key == key_err {
                        let len = match reader_err.read_line(&mut line) {
                            Ok(len) => len,
                            Err(e) => {
                                println!("stderr read returned error: {}", e);
                                0
                            }
                        };
                        if len == 0 {
                            println!("stderr closed (len is null)");
                            err_closed = true;
                            poller.delete(reader_err.get_ref()).unwrap();
                        } else {
                            print!("[STDERR] {}", line);
                            line.clear();
                            // reload the poller
                            poller.modify(reader_err.get_ref(), Event::readable(key_err)).unwrap();
                        }
                    }
                }

                if out_closed && err_closed {
                    println!("Stream closed, exiting process thread");
                    break;
                }
            }
        }
    });

    handle.join().unwrap();
    Ok(())
}
Additionally, used with a wrapper over an EventFd, it becomes possible to stop the process from another thread without blocking or active polling, using only a single thread.
EDIT: It seems the polling crate automatically sets the polled handles to non-blocking mode, based on my tests. The set_nonblocking function is still useful in case you want to use the nix::poll object directly.
I have encountered enough use cases where it was useful to interact with a subprocess over line-delimited text that I wrote a crate for it: interactive_process.
I expect the original problem has long since been solved, but I thought it might be helpful to others.

How do I prefix Command stdout with [stdout] and [stderr]?

Using the Command struct, how can I add a prefix to the stdout and stderr buffers?
I would like the output to look something like this:
[stdout] things are good.
[stderr] fatal: repository does not exist.
This would also be nice to apply to the program's main stdout so anything the program prints is prefixed like that.
Here is the code I currently have:
let output = Command::new("git").arg("clone").output().unwrap_or_else(|e| {
panic!("Failed to run git clone: {}", e)
});
I don't believe you can do what you truly want to do right now. Ideally, you'd be able to provide an implementor of Write to the Process::stdout method. Unfortunately, the set of choices for Stdio is sparse. Perhaps you can campaign to have this be a feature request for Rust 1.1, or create a crate to start fleshing out some of the details (like cross-platform compatibility).
If it is acceptable to remove the interleaving of stdout / stderr, then this solution could help:
use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};

fn main() {
    let mut child = Command::new("/tmp/output")
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
        .unwrap();

    if let Some(ref mut stdout) = child.stdout {
        for line in BufReader::new(stdout).lines() {
            let line = line.unwrap();
            println!("[stdout] {}", line);
        }
    }

    if let Some(ref mut stderr) = child.stderr {
        for line in BufReader::new(stderr).lines() {
            let line = line.unwrap();
            println!("[stderr] {}", line);
        }
    }

    let status = child.wait().unwrap();
    println!("Finished with status {:?}", status);
}
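If you do want to keep rough interleaving, a sketch (not from the original answer) is to read each stream on its own thread and prefix lines as they arrive:

use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};
use std::thread;

fn main() {
    let mut child = Command::new("/tmp/output")
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
        .unwrap();
    let stdout = child.stdout.take().unwrap();
    let stderr = child.stderr.take().unwrap();
    // One thread per stream, so neither pipe can fill up and stall the child.
    let t_out = thread::spawn(move || {
        for line in BufReader::new(stdout).lines() {
            println!("[stdout] {}", line.unwrap());
        }
    });
    let t_err = thread::spawn(move || {
        for line in BufReader::new(stderr).lines() {
            println!("[stderr] {}", line.unwrap());
        }
    });
    t_out.join().unwrap();
    t_err.join().unwrap();
    println!("Finished with status {:?}", child.wait().unwrap());
}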
