Piping input to stdin prevents read_line() blocking - rust

I'm trying to write a terminal program that pipes in a CSV file, parses the records, then launches a quiz based on them. My issue is that once I pipe the file into my command-line program using io::stdin(), I can no longer use io::stdin().read_line() to get user input, because it stops blocking and waiting for user input. Below is a minimal reproducible example:
use std::io;

fn main() {
    let stdin = io::stdin();
    println!("Please enter input");
    let mut user_input = String::new();
    stdin.read_line(&mut user_input).expect("Failed to get input");
    println!("The input was {}", user_input);
}
Using cargo run causes the normal blocking behavior. Using echo 'Hello World' | cargo run causes read_line() to no longer block anywhere throughout the program.
I assume it's not a bug and just how stdin works. Can anyone explain the fine detail of this behavior and if there is a workaround?

I assume it's not a bug and just how stdin works.
Correct. When input is piped in, stdin is connected to the pipe rather than to the terminal; once the pipe's contents have been consumed, every subsequent read returns EOF (Ok(0)) immediately instead of blocking. Assuming you're OK with targeting Unix systems, a workaround is to open the terminal device /dev/tty explicitly:
use std::io::{BufReader, BufRead};
use std::fs::File;

fn main() {
    // read piped stuff from stdin...

    // read interactive input from the user
    let mut input = BufReader::new(File::open("/dev/tty").unwrap());
    println!("Please enter input");
    let mut user_input = String::new();
    input.read_line(&mut user_input).expect("Failed to get input");
    println!("The input was {}", user_input);
}
Note that getting the input from the user this way is not the idiomatic way to write command-line programs, because it cannot be automated. Instead, consider supporting command-line options, which crates like clap make convenient.
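For example, here is a minimal sketch using clap's derive API (assuming clap 4 with the derive feature enabled; the --file option is hypothetical, standing in for the piped CSV). With this approach stdin stays attached to the terminal, so read_line() blocks as usual:

use clap::Parser;

/// A quiz runner that reads its records from a CSV file.
#[derive(Parser)]
struct Args {
    /// Path to the CSV file containing the quiz records (hypothetical option)
    #[arg(long)]
    file: std::path::PathBuf,
}

fn main() {
    let args = Args::parse();
    println!("Reading quiz records from {}", args.file.display());
    // ... parse the CSV and run the quiz; stdin remains free for user input
}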

Related

Why does reading from an exited PTY process return "Input/output error" in Rust?

I'm attempting to read from a process that's backed by a PTY in Rust, but once all bytes have been read from the process then reading from the process returns an Input/output error instead of the expected EOF. Is there an obvious reason for this behaviour, and how might it be resolved so that read returns Ok(0) instead of an error, as per the contract for read?
Here is a minimal working example:
use std::io;
use std::io::Read;
use std::io::Write;
use std::fs::File;
use std::os::unix::io::FromRawFd;
use std::process::Command;
use std::process::Stdio;

extern crate nix;
use crate::nix::pty;
use crate::nix::pty::OpenptyResult;

fn main() {
    let OpenptyResult { master: controller_fd, slave: follower_fd } =
        pty::openpty(None, None)
            .expect("couldn't open a new PTY");

    let new_follower_stdio = || unsafe { Stdio::from_raw_fd(follower_fd) };
    let mut child = Command::new("ls")
        .stdin(new_follower_stdio())
        .stdout(new_follower_stdio())
        .stderr(new_follower_stdio())
        .spawn()
        .expect("couldn't spawn the new PTY process");

    {
        let mut f = unsafe { File::from_raw_fd(controller_fd) };
        let mut buf = [0; 0x100];
        loop {
            let n = f.read(&mut buf[..])
                .expect("couldn't read");
            if n == 0 {
                break;
            }
            io::stdout().write_all(&buf[..n])
                .expect("couldn't write to STDOUT");
        }
    }

    child.kill()
        .expect("couldn't kill the PTY process");
    child.wait()
        .expect("couldn't wait for the PTY process");
}
This gives the following output:
Cargo.lock Cargo.toml build.Dockerfile scripts src target
thread 'main' panicked at 'couldn't read: Os { code: 5, kind: Uncategorized, message: "Input/output error" }', src/main.rs:35:18
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
I've also tried using nix::unistd::dup to duplicate the follower_fd for stdin, stdout and stderr, but this didn't change the observed behaviour.
For reference, I'm using Rust 1.60.0 with the following Cargo.toml for this MWE:
[package]
name = "mwe"
version = "0.0.0"
[dependencies]
nix = "=0.24.1"
It seems that this error is expected behaviour for PTYs on Linux, and essentially signals EOF. This information is supported by a number of non-authoritative sources, but a good summary is provided by mosvy on the Unix StackExchange:
On Linux, a read() on the master side of a pseudo-tty will return -1 and set ERRNO to EIO when all the handles to its slave side have been closed, but will either block or return EAGAIN before the slave has been first opened.
I don't know if there's any standard spec or rationale for this, but it allows to (crudely) detect when the other side was closed, and simplifies the logic of programs like script which are just creating a pty and running another program inside it.
It is presumed that the EIO described here corresponds to the "Input/output error" returned above.
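Given that, a workaround is to treat EIO from the controller side as end-of-file. Here is a minimal sketch of how the read loop in the example above could do this, assuming Linux, where EIO is raw OS error 5 (with the libc crate you could compare against libc::EIO instead of the hard-coded 5):

loop {
    let n = match f.read(&mut buf[..]) {
        Ok(n) => n,
        // On Linux, reading the controller after all follower handles
        // have been closed fails with EIO; treat that as EOF.
        Err(e) if e.raw_os_error() == Some(5) => 0,
        Err(e) => panic!("couldn't read: {}", e),
    };
    if n == 0 {
        break;
    }
    io::stdout().write_all(&buf[..n])
        .expect("couldn't write to STDOUT");
}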

Store command result in string doesn't always work

I'm trying to create a command-line program with Rust, and I will need the program to be able to store the result of commands in strings.
Here is the current program (source):
use std::process::{Command, Stdio};

fn main() {
    let output = Command::new("ls")
        // Tell the OS to record the command's output
        .stdout(Stdio::piped())
        // execute the command, wait for it to complete, then capture the output
        .output()
        // Blow up if the OS was unable to start the program
        .unwrap();

    // extract the raw bytes that we captured and interpret them as a string
    let stdout = String::from_utf8(output.stdout).unwrap();
    println!("{}", stdout);
}
This program works for some commands, for example ls, but not for others. For example, if I try ll or git branch (which is an example of what I'd like to achieve, by the way), I get this error:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/main.rs:10:10
Why does this happen and how can I correct it? My environment is Linux.
Command::new expects the name of a single executable, not a whole command line: Command::new("git branch") would look for an executable literally named git branch, and ll is typically a shell alias rather than a real executable, so in both cases the OS reports NotFound. Arguments must be passed separately via .arg()/.args(). The solution for git branch (for example) is below; see the sketch after the code for starting from a full command string:
use std::process::{Command, Stdio};

fn main() {
    let output = Command::new("git")
        .arg("branch")
        // Tell the OS to record the command's output
        .stdout(Stdio::piped())
        // execute the command, wait for it to complete, then capture the output
        .output()
        // Blow up if the OS was unable to start the program
        .unwrap();

    // extract the raw bytes that we captured and interpret them as a string
    let stdout = String::from_utf8(output.stdout).unwrap();
    println!("{}", stdout);
}
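If you need to start from a full command string, a minimal sketch is to split it into the program and its arguments first (the command_line variable is hypothetical; naive whitespace splitting breaks on quoted arguments, and shell aliases like ll still won't be found, because they exist only inside your interactive shell):

use std::process::Command;

fn main() {
    let command_line = "git branch"; // hypothetical input string
    let mut parts = command_line.split_whitespace();
    let program = parts.next().expect("empty command line");

    let output = Command::new(program)
        .args(parts) // the remaining whitespace-separated tokens become arguments
        .output()
        .expect("failed to run command");
    println!("{}", String::from_utf8_lossy(&output.stdout));
}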

stdout hangs on blinking cursor

I have a process struct, which holds a process handle:
pub struct Process {
    process: Child,
    init: bool,
}
I have a separate function where I can 'talk' to the engine:
fn talk_to_engine(&mut self, input: &String) -> String {
    let stdin = &mut self.process.stdin.as_mut().unwrap();
    stdin.write_all(input.as_bytes()).expect("Failed to write to process.");

    let mut s = String::new();
    return match self.process.stdout.as_mut().unwrap().read_to_string(&mut s) {
        Err(why) => panic!("stdout error: {}", why),
        Ok(_) => s,
    };
}
Yet when I run the function, I get a blinking cursor in the terminal and it does nothing.
EDIT: I call the init_engine function, which in turn calls the above-mentioned function:
/// Initialize the engine.
pub fn init_engine(&mut self, _protocol: String, depth: String) {
    //Stockfish::talk_to_engine(self, &protocol);
    let output = Stockfish::talk_to_engine(self, &format!("go depth {}", &depth));
    print!("{:?}", output);
    self.init = true;
}
If you call init_engine, say, like this: struct.init_engine("uci".to_string(), "1".to_string());
Without a full reproduction case, or even knowing what the input and the subprocess are, it's impossible to know for sure and hard to guess, especially as you apparently didn't try to find out exactly what was blocking.
But there are two possible problem points I can see here:
The driver only reads the output once all the input has been written. If the subprocess interleaves reading and writing, it could fill the output pipe's buffer entirely, then block forever on writing to its stdout, essentially deadlocking.
read_to_string reads the entire stream, meaning the subprocess must write everything out and then terminate, or at least close its stdout; until then, more output remains possible, and the driver will keep waiting for it.
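For example, if the engine speaks UCI like Stockfish (an assumption here), every reply to a go command ends with a bestmove line, so the driver can read line by line until that terminator instead of calling read_to_string. A minimal sketch, with the binary name and protocol details assumed:

use std::io::{BufRead, BufReader, Write};
use std::process::{Child, Command, Stdio};

fn talk_to_engine(child: &mut Child, input: &str) -> String {
    let stdin = child.stdin.as_mut().unwrap();
    // The trailing newline matters: without it the engine never sees a complete command.
    writeln!(stdin, "{}", input).expect("failed to write to engine");

    // Note: a fresh BufReader per call may discard buffered bytes if called
    // repeatedly; a real driver would keep one reader around.
    let reader = BufReader::new(child.stdout.as_mut().unwrap());
    let mut reply = String::new();
    for line in reader.lines() {
        let line = line.expect("failed to read from engine");
        let done = line.starts_with("bestmove");
        reply.push_str(&line);
        reply.push('\n');
        if done {
            break; // stop at the known terminator instead of waiting for EOF
        }
    }
    reply
}

fn main() {
    let mut child = Command::new("stockfish") // assumes the engine binary is on PATH
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()
        .expect("failed to spawn engine");
    println!("{}", talk_to_engine(&mut child, "go depth 1"));
}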

How do I write multiple strings into a file without them overwriting each other on rust 1.26 and onwards?

I'm having trouble writing a few strings into my .txt file without them overwriting each other.
This is an example:
for i in 1..100 {
    fs::write("*PATH*", i.to_string()).expect("Unable to write file");
}
I'm thinking it should write every single one right after the other, but it doesn't! It overwrites, and when I open the document it's just the last written number.
I couldn't find anything on the web since this way of writing into files seems to be rather new.
You can open the File before entering the loop. You can further simplify writing to the file by using the write! and writeln! macros, which let you use Rust's formatting functionality and avoid the explicit i.to_string().
Since you're performing a lot of small writes, also consider wrapping the file in a BufWriter to minimize the total number of system calls performed.
use std::fs::File;
use std::io::{BufWriter, Write};

fn main() {
    let path = "...";
    let f = File::create(path).expect("unable to create file");
    let mut f = BufWriter::new(f);

    for i in 1..100 {
        write!(f, "{}", i).expect("unable to write");
    }
}
If the file already exists and you want to continuously append to it every time you execute your program, then you can open it using OpenOptions, specifically enabling append mode with append(true):
use std::fs::OpenOptions;
use std::io::{BufWriter, Write};

fn main() {
    let path = "...";
    let f = OpenOptions::new()
        .write(true)
        .append(true) // implies write access; add .create(true) if the file might not exist
        .open(path)
        .expect("unable to open file");
    let mut f = BufWriter::new(f);

    for i in 1..100 {
        write!(f, "{}", i).expect("unable to write");
    }
}
I couldn't find anything on the web since this way of writing into files seems to be rather new.
It's not rather new, it's rather wrong (for this use case). Open the file beforehand and write to that handle so successive writes follow each other, instead of calling fs::write on every iteration, which reopens the file and replaces its entire contents each time. That is not only slow, it is exactly why your file keeps getting overwritten:
use std::fs::OpenOptions;
use std::io::prelude::*;
fn main() {
    let mut file = OpenOptions::new()
        .write(true)
        .open("/path/to/file")
        .expect("Could not open file");

    for i in 1..100 {
        file.write_all(i.to_string().as_bytes()).expect("Unable to write to file");
    }
}

How would you stream output from a Process?

I believe I understand, in general, one way of doing this:
Create a Command
Use Stdio::piped() to create a new pair of output streams
Configure command.stdout(), and command.stderr()
Spawn the process
Create a new thread and pass the stderr and stdout to it <-- ???
In the remote thread, continually poll for input and write it to the output stream.
In the main thread, wait for the process to finish.
Does that sound right?
My two actual questions:
Is there an easier way that doesn't involve a 'read thread' per process?
If there isn't an easier way, Read::read() requires &mut self; how do you pass that into a remote thread?
Please provide specific examples of how to actually stream the output, not just generic advice about how to do it...
To be more specific, here's the default example of using spawn:
use std::process::Command;

let mut child = Command::new("/bin/cat")
    .arg("file.txt")
    .spawn()
    .expect("failed to execute child");

let ecode = child.wait()
    .expect("failed to wait on child");

assert!(ecode.success());
How can the above example be changed to stream the output of child to the console, rather than just waiting for an exit code?
Although the accepted answer is correct, it doesn't cover the non-trivial case.
To stream output and handle it manually, use Stdio::piped() and manually handle the .stdout property on the child returned from calling spawn, like this:
use std::process::{Command, Stdio};
use std::path::Path;
use std::io::{BufReader, BufRead};

pub fn exec_stream<P: AsRef<Path>>(binary: P, args: Vec<&'static str>) {
    let mut cmd = Command::new(binary.as_ref())
        .args(&args)
        .stdout(Stdio::piped())
        .spawn()
        .unwrap();

    {
        let stdout = cmd.stdout.as_mut().unwrap();
        let stdout_reader = BufReader::new(stdout);
        let stdout_lines = stdout_reader.lines();

        for line in stdout_lines {
            println!("Read: {:?}", line);
        }
    }

    cmd.wait().unwrap();
}

#[test]
fn test_long_running_process() {
    exec_stream("findstr", vec!["/s", "sql", "C:\\tmp\\*"]);
}
See also Merge child process stdout and stderr regarding catching the output from stderr and stdout simultaneously.
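As for passing a reader that needs &mut self into another thread: one option is to take() ownership of the handle out of the Child and move it into the thread, so the thread owns it outright and no mutable borrow has to cross threads. A minimal sketch (the command and file name are illustrative):

use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};
use std::thread;

fn main() {
    let mut child = Command::new("cat")
        .arg("file.txt") // illustrative command and argument
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
        .unwrap();

    // take() moves the handle out of the Child, so the thread owns it.
    let stderr = child.stderr.take().unwrap();
    let stderr_thread = thread::spawn(move || {
        for line in BufReader::new(stderr).lines() {
            eprintln!("stderr: {}", line.unwrap());
        }
    });

    // Stream stdout on the main thread in the meantime.
    let stdout = child.stdout.take().unwrap();
    for line in BufReader::new(stdout).lines() {
        println!("stdout: {}", line.unwrap());
    }

    stderr_thread.join().unwrap();
    child.wait().unwrap();
}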
I'll happily accept any example of spawning a long running process and streaming output to the console, by whatever means.
It sounds like you want Stdio::inherit:
use std::process::{Command, Stdio};

fn main() {
    let mut cmd = Command::new("cat")
        .args(&["/usr/share/dict/web2"])
        // inherit is already the default for spawn(); shown explicitly here
        .stdout(Stdio::inherit())
        .stderr(Stdio::inherit())
        .spawn()
        .unwrap();

    // It's streaming here
    let status = cmd.wait();
    println!("Exited with status {:?}", status);
}
