How to pipe from Command to File? [duplicate]

I am implementing I/O redirection in a shell written in Rust. I succeeded in piping between two child processes by using unsafe code with raw file descriptors and pipe() from the libc crate.
When I try to redirect the stdout of the last child process to a file that I have write permission for, it fails:
extern crate libc;
use std::process::{Command, Stdio};
use std::os::unix::io::{FromRawFd, IntoRawFd};
use std::fs::File;
use self::libc::c_int;
fn main() {
    let mut fds = [-1 as c_int, -1 as c_int];
    let fd1 = File::open("test1").unwrap().into_raw_fd();
    let fd2 = File::open("test2").unwrap().into_raw_fd();
    let fd3 = File::open("test3").unwrap().into_raw_fd();
    println!("{:?}, {:?}, {:?}", fd1, fd2, fd3);
    unsafe {
        libc::pipe(&mut fds[0] as *mut c_int);
        let cmd1 = Command::new("ls")
            .arg("/")
            .stdout(Stdio::from_raw_fd(fds[1]))
            .spawn()
            .unwrap();
        let mut cmd2 = Command::new("grep")
            .arg("etc")
            .stdin(Stdio::from_raw_fd(fds[0]))
            .stdout(Stdio::from_raw_fd(fd1))
            .spawn()
            .unwrap();
        let _result = cmd2.wait().unwrap();
    }
}
The output of the above code:
3, 4, 5
grep: write error: Bad file descriptor
It seems that the file descriptor isn't correctly returned, but if there were no files named test1, test2, and test3, File::open(_).unwrap() should panic instead of pretending to have opened a file.
The code works perfectly fine if redirection to the file is removed, i.e. only piping is used.

The documentation for File::open states (emphasis mine):
Attempts to open a file in read-only mode.
Switching to File::create appears to create the file and "etc" is written to it.
Additionally, you should:
- Not open two additional files; nothing ever closes those file descriptors, so you have a resource leak.
- Check the return value of pipe to handle errors.
- Check out the nix crate.
extern crate libc;
extern crate nix;
use std::process::{Command, Stdio};
use std::os::unix::io::{FromRawFd, IntoRawFd};
use std::fs::File;
use nix::unistd::pipe;
fn main() {
    let fds = pipe().unwrap();
    let fd1 = File::create("test1").unwrap().into_raw_fd();

    let (pipe_in, pipe_out, file_out) = unsafe {
        (Stdio::from_raw_fd(fds.0),
         Stdio::from_raw_fd(fds.1),
         Stdio::from_raw_fd(fd1))
    };

    Command::new("ls")
        .arg("/")
        .stdout(pipe_out)
        .spawn()
        .unwrap();

    let mut cmd2 = Command::new("grep")
        .arg("etc")
        .stdin(pipe_in)
        .stdout(file_out)
        .spawn()
        .unwrap();

    cmd2.wait().unwrap();
}

Since Rust 1.20.0 (released on 2017-08-31), you can now directly create Stdio from a File:
let file = File::create("out.txt").unwrap();
let stdio = Stdio::from(file);
let command = Command::new("foo").stdout(stdio);
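For completeness, here is a minimal sketch of the original ls / | grep etc > test1 pipeline built entirely from safe std APIs (assuming Rust 1.20+); this is an illustration, not the code from the question:

use std::fs::File;
use std::process::{Command, Stdio};

fn main() {
    // The standard library creates the OS pipe for us.
    let ls = Command::new("ls")
        .arg("/")
        .stdout(Stdio::piped())
        .spawn()
        .expect("failed to spawn ls");

    // The second child reads from the pipe and writes to the file.
    let file = File::create("test1").expect("failed to create test1");
    let mut grep = Command::new("grep")
        .arg("etc")
        .stdin(Stdio::from(ls.stdout.expect("ls stdout was not captured")))
        .stdout(Stdio::from(file))
        .spawn()
        .expect("failed to spawn grep");

    grep.wait().expect("failed to wait on grep");
}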

Related

Why does reading from an exited PTY process return "Input/output error" in Rust?

I'm attempting to read from a process that's backed by a PTY in Rust, but once all bytes have been read from the process then reading from the process returns an Input/output error instead of the expected EOF. Is there an obvious reason for this behaviour, and how might it be resolved so that read returns Ok(0) instead of an error, as per the contract for read?
Here is a minimal working example:
use std::io;
use std::io::Read;
use std::io::Write;
use std::fs::File;
use std::os::unix::io::FromRawFd;
use std::process::Command;
use std::process::Stdio;
extern crate nix;
use crate::nix::pty;
use crate::nix::pty::OpenptyResult;
fn main() {
    let OpenptyResult { master: controller_fd, slave: follower_fd } =
        pty::openpty(None, None)
            .expect("couldn't open a new PTY");

    let new_follower_stdio = || unsafe { Stdio::from_raw_fd(follower_fd) };
    let mut child = Command::new("ls")
        .stdin(new_follower_stdio())
        .stdout(new_follower_stdio())
        .stderr(new_follower_stdio())
        .spawn()
        .expect("couldn't spawn the new PTY process");

    {
        let mut f = unsafe { File::from_raw_fd(controller_fd) };
        let mut buf = [0; 0x100];
        loop {
            let n = f.read(&mut buf[..])
                .expect("couldn't read");
            if n == 0 {
                break;
            }
            io::stdout().write_all(&buf[..n])
                .expect("couldn't write to STDOUT");
        }
    }

    child.kill()
        .expect("couldn't kill the PTY process");
    child.wait()
        .expect("couldn't wait for the PTY process");
}
This gives the following output:
Cargo.lock Cargo.toml build.Dockerfile scripts src target
thread 'main' panicked at 'couldn't read: Os { code: 5, kind: Uncategorized, message: "Input/output error" }', src/main.rs:35:18
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
I've also tried using nix::unistd::dup to duplicate the follower_fd for stdin, stdout and stderr, but this didn't change the observed behaviour.
For reference, I'm using Rust 1.60.0 with the following Cargo.toml for this MWE:
[package]
name = "mwe"
version = "0.0.0"
[dependencies]
nix = "=0.24.1"
It seems that this error is expected behaviour for PTYs on Linux, and essentially signals EOF. This information is supported by a number of non-authoritative sources, but a good summary is provided by mosvy on the Unix StackExchange:
On Linux, a read() on the master side of a pseudo-tty will return -1 and set ERRNO to EIO when all the handles to its slave side have been closed, but will either block or return EAGAIN before the slave has been first opened.
I don't know if there's any standard spec or rationale for this, but it allows to (crudely) detect when the other side was closed, and simplifies the logic of programs like script which are just creating a pty and running another program inside it.
It is presumed that the EIO described here corresponds to the "Input/output error" returned above.
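One way to act on this (a sketch based on the explanation above, assuming the libc crate, which nix already pulls in, is added as a dependency) is to treat EIO from the controller side as EOF in the read loop:

// Replacement for the read loop in the example above: treat EIO from
// the PTY controller as EOF instead of panicking on it.
loop {
    match f.read(&mut buf[..]) {
        Ok(0) => break, // ordinary EOF
        Ok(n) => {
            io::stdout().write_all(&buf[..n])
                .expect("couldn't write to STDOUT");
        }
        // On Linux, read() on the controller returns EIO once every
        // handle to the follower side has been closed.
        Err(ref e) if e.raw_os_error() == Some(libc::EIO) => break,
        Err(e) => panic!("couldn't read: {}", e),
    }
}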

Gzip response is not decompressed [duplicate]

How to iterate over a gzipped file which contains a single text file (csv)?
Searching crates.io I found flate2 which has the following code example for decompression:
extern crate flate2;
use std::io::prelude::*;
use flate2::read::GzDecoder;
fn main() {
    let mut d = GzDecoder::new("...".as_bytes()).unwrap();
    let mut s = String::new();
    d.read_to_string(&mut s).unwrap();
    println!("{}", s);
}
How to stream a gzip csv file?
For streaming I/O operations Rust has the Read and Write traits. To iterate over input by lines you usually want the BufRead trait, which you can always get by wrapping a Read implementation in BufReader::new.
flate2 already operates with these traits; GzDecoder implements Read, and GzDecoder::new takes anything that implements Read.
Example decoding stdin (doesn't work well on playground of course):
extern crate flate2;
use std::io;
use std::io::prelude::*;
use flate2::read::GzDecoder;
fn main() {
    let stdin = io::stdin();
    let stdin = stdin.lock(); // or just open any normal file
    let d = GzDecoder::new(stdin).expect("couldn't decode gzip stream");

    for line in io::BufReader::new(d).lines() {
        println!("{}", line.unwrap());
    }
}
You can then decode your lines with your usual ("without gzip") logic; perhaps make it generic by taking any input implementing BufRead.
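For instance, a minimal sketch of such a generic helper (the name process_lines is made up for illustration):

use std::io::BufRead;

// Works for any buffered input: a GzDecoder wrapped in a BufReader,
// a plain file, locked stdin, ...
fn process_lines<R: BufRead>(input: R) {
    for line in input.lines() {
        let line = line.expect("couldn't read line");
        // your usual ("without gzip") per-line logic goes here
        println!("{}", line);
    }
}

Calling process_lines(io::BufReader::new(d)) in the example above behaves the same as the inline loop.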

How to use read_line function with Rust's serialport crate

I am working with the serialport crate on a Raspberry Pi. The given example with port.read works fine. However, port.read_to_end and port.read_to_string do not work; I get a timeout.
Can anybody explain this behavior? The two functions read all bytes until EOF. I am sending test strings with null termination.
I am more interested in a read_line function, but this is not directly supported by the serialport crate, is it? Can I use the BufRead trait for this?
Here is a minimal example with read_line. It works when TX and RX are connected.
use serialport;
use std::time::Duration;
use std::io::{BufRead, BufReader, Write}; // Write is needed for write() and flush()

fn main() {
    let mut serial_port = serialport::new("/dev/serial0", 9600)
        .timeout(Duration::from_millis(1000))
        .open()
        .expect("Failed to open serial port");

    let output = "This is a test.\n".as_bytes();
    serial_port.write(output).expect("Write failed!");
    serial_port.flush().unwrap();

    let mut reader = BufReader::new(serial_port);
    let mut my_str = String::new();
    reader.read_line(&mut my_str).unwrap();
    println!("{}", my_str);
}
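Note that read_line only returns once it sees a '\n' (or the port times out), so it fits newline-terminated input like the loopback test above. For the null-terminated test strings mentioned earlier, BufRead::read_until can match that framing instead; a sketch, reusing the reader from the example above:

// Read one null-terminated message instead of a '\n'-terminated line.
let mut buf = Vec::new();
reader.read_until(b'\0', &mut buf).expect("Read failed!");
if buf.last() == Some(&0) {
    buf.pop(); // drop the terminator before decoding
}
println!("{}", String::from_utf8_lossy(&buf));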


How would you stream output from a Process?

I believe I understand, in general, one way of doing this:
1. Create a Command
2. Use Stdio::piped() to create a new pair of output streams
3. Configure command.stdout() and command.stderr()
4. Spawn the process
5. Create a new thread and pass the stderr and stdout to it <-- ???
6. In the remote thread, continually poll for input and write it to the output stream.
7. In the main thread, wait for the process to finish.
Does that sound right?
My two actual questions:
Is there an easier way that doesn't involve a 'read thread' per process?
If there isn't an easier way, Read::read() requires &mut self; how do you pass that into a remote thread?
Please provide specific examples of how to actually stream the output, not just generic advice about how to do it...
To be more specific, here's the default example of using spawn:
use std::process::Command;

let mut child = Command::new("/bin/cat")
    .arg("file.txt")
    .spawn()
    .expect("failed to execute child");

let ecode = child.wait()
    .expect("failed to wait on child");

assert!(ecode.success());
How can the above example be changed to stream the output of child to the console, rather than just waiting for an exit code?
Although the accepted answer is correct, it doesn't cover the non-trivial case.
To stream output and handle it manually, use Stdio::piped() and manually handle the .stdout property on the child returned from calling spawn, like this:
use std::process::{Command, Stdio};
use std::path::Path;
use std::io::{BufReader, BufRead};

pub fn exec_stream<P: AsRef<Path>>(binary: P, args: Vec<&'static str>) {
    let mut cmd = Command::new(binary.as_ref())
        .args(&args)
        .stdout(Stdio::piped())
        .spawn()
        .unwrap();

    {
        let stdout = cmd.stdout.as_mut().unwrap();
        let stdout_reader = BufReader::new(stdout);
        let stdout_lines = stdout_reader.lines();

        for line in stdout_lines {
            println!("Read: {:?}", line);
        }
    }

    cmd.wait().unwrap();
}

#[test]
fn test_long_running_process() {
    exec_stream("findstr", vec!["/s", "sql", "C:\\tmp\\*"]);
}
See also Merge child process stdout and stderr regarding catching the output from stderr and stdout simultaneously.
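If stdout and stderr need to be consumed separately at the same time, one common pattern (a sketch, not taken from the linked answer) is to move each pipe into its own thread. This also answers the &mut self question above: each thread takes ownership of its reader, so no sharing is required.

use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};
use std::thread;

fn main() {
    let mut child = Command::new("ls")
        .arg("/")
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
        .unwrap();

    // take() moves each pipe out of the Child so it can be sent to a thread.
    let stdout = child.stdout.take().unwrap();
    let stderr = child.stderr.take().unwrap();

    let out_thread = thread::spawn(move || {
        for line in BufReader::new(stdout).lines() {
            println!("out: {}", line.unwrap());
        }
    });
    let err_thread = thread::spawn(move || {
        for line in BufReader::new(stderr).lines() {
            eprintln!("err: {}", line.unwrap());
        }
    });

    out_thread.join().unwrap();
    err_thread.join().unwrap();
    child.wait().unwrap();
}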
I'll happily accept any example of spawning a long running process and streaming output to the console, by whatever means.
It sounds like you want Stdio::inherit:
use std::process::{Command, Stdio};

fn main() {
    let mut cmd = Command::new("cat")
        .args(&["/usr/share/dict/web2"])
        .stdout(Stdio::inherit())
        .stderr(Stdio::inherit())
        .spawn()
        .unwrap();

    // It's streaming here

    let status = cmd.wait();
    println!("Exited with status {:?}", status);
}
