Files and streams are closed automatically when they are dropped, but io::stdin() only provides a handle to the underlying stream, so I fail to see how to explicitly close stdin or stdout, or how to detect EOF on stdin, in my program.
Consider
use std::io;

fn main() {
    let stdin = io::stdin();
    let mut linebuffer = String::new();
    loop {
        match stdin.read_line(&mut linebuffer) {
            Ok(i) if i == 0 => { break; },
            Ok(i) => {
                println!("{} {}", linebuffer, i);
            },
            Err(e) => { panic!("{}", e); }
        }
        linebuffer.clear();
    }
}
Checking the number of bytes put into the buffer seems flaky, because the pipe might get flushed with zero bytes having been written to it. Reading from a closed stdin should cause an IOError, but it doesn't.
Somewhat related to that: How to explicitly close my own stdout / stderr?
Some time ago there was an ErrorKind::EndOfFile enum variant, which was emitted by a read operation when the source stream was closed. It didn't make it into the new I/O library implementation; instead, the Read trait was changed to return 0 read bytes upon EOF. And indeed, this is specified in the I/O reform RFC. So yes, checking for zero is a valid way to detect end of stream in current Rust.
By the way, you can write Ok(0) instead of Ok(i) if i == 0:
match stdin.read_line(&mut buf) {
    Ok(0) => break,
    ...
}
As for how to close stdout()/stderr(), it seems that the current API does not provide a way to do it, unfortunately. It is probably a feature worth an RFC or at least an RFC issue.
Regarding my own sub-question on how to close stdout/stderr: the correct way is to use the wait or wait_with_output method on a process::Child. Both methods close the subprocess's stdin before waiting for it to quit, eliminating the possibility of a deadlock between the two processes.
Related
I am developing a Rust Tokio library for ISO-TP, a CAN protocol that lets you send larger messages. The program is aimed at Linux only.
For this, I am using Tokio's AsyncFd structure. When write is called, I create the future and then poll it. The problem occurs when I do two consecutive writes, one after the other.
socket_tx1.write_packet(packet.clone())?.await?;
socket_tx1.write_packet(packet.clone())?.await?;
The first write will end successfully; however, the second will end with
std::io::ErrorKind::WouldBlock
which is OK and expected: the buffer is full and we should wait until it is clear and ready for the next write. poll does not guarantee that if it returns Ok, the following write will be successful.
The problem is that I don't know how to handle this behavior correctly.
I tried the following implementations:
impl Future for IsoTpWriteFuture {
    type Output = io::Result<()>;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        loop {
            let guard = ready!(self.socket.0.poll_write_ready(cx))?;
            match self.socket.0.get_ref().0.write(&self.packet) {
                Err(err) if err.kind() == io::ErrorKind::WouldBlock => continue,
                Ok(_) => return Poll::Ready(Ok(())),
                Err(err) => return Poll::Ready(Err(err)),
            }
        }
    }
}
This one works, but after I get WouldBlock the loop busy-waits, which I would like to avoid: since the socket is ready from poll's perspective, write is immediately called, WouldBlock is returned again, and the routine spins for some time before the write resolves.
The second implementation is more correct from my perspective, but it doesn't work right now, and I am not sure how to make it work.
impl Future for IsoTpWriteFuture {
    type Output = io::Result<()>;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        loop {
            let mut guard = ready!(self.socket.0.poll_write_ready(cx))?;
            match guard.try_io(|inner| inner.get_ref().0.write(&self.packet)) {
                Err(_would_block) => continue,
                Ok(_) => return Poll::Ready(Ok(())),
            }
        }
    }
}
This doesn't work because once try_io() encounters WouldBlock, it clears the readiness on the guard, and since the polling is edge-triggered, the future will hang at poll_write_ready and won't make progress.
Is it possible to poll for a change after the write returns WouldBlock? Or is the busy-waiting approach unavoidable?
In an asynchronous system, after you get a WouldBlock, you need to return control to the asynchronous runtime (tokio in this case), as well as tell it what it needs to wait on (the file descriptor being ready).
You say you are using Tokio's AsyncFd. It offers several methods, such as poll_read_ready, to check on the fd's readiness and register it with tokio if it needs to wait.
I have a process struct, which holds a process handle:
pub struct Process {
process: Child,
init: bool
}
I have a separate function where I can 'talk' to the engine.
fn talk_to_engine(&mut self, input: &String) -> String {
    let stdin = &mut self.process.stdin.as_mut().unwrap();
    stdin.write_all(input.as_bytes()).expect("Failed to write to process.");

    let mut s = String::new();
    return match self.process.stdout.as_mut().unwrap().read_to_string(&mut s) {
        Err(why) => panic!("stdout error: {}", why),
        Ok(_) => s,
    }
}
Yet when I run the function, I get a blinking cursor in the terminal and it does nothing.
EDIT: I call the init_engine function which in turn calls the above mentioned function:
/// Initialize the engine.
pub fn init_engine(&mut self, _protocol: String, depth: String) {
    //Stockfish::talk_to_engine(self, &protocol);
    let output = Stockfish::talk_to_engine(self, &format!("go depth {}", &depth));
    print!("{:?}", output);
    self.init = true;
}
You would call init_engine like this, say: struct.init_engine("uci".to_string(), "1".to_string());
Without a full reproduction case, or even knowing what the input and the subprocess are, it's impossible to know and hard to guess, especially as you apparently didn't even try to find out what exactly was blocking.
But there are two possible problem points I can see here:
The driver only reads the output once all the input has been consumed: if the subprocess interleaves reading and writing, it could fill the entirety of the output pipe's buffer, then block forever writing to stdout, basically deadlocking.
read_to_string reads the entirety of the stream, meaning the subprocess must write everything out and terminate, or at least close its stdout; otherwise more output remains possible, and the driver will keep waiting for it.
I am wrapping a C/C++ library in a Rust crate and calling into it using FFI (I am not using a subprocess).
This library logs to stdout/stderr (using, say, printf() or std::cout) but I would like to "catch" this output and use Rust's log crate to control the output.
Is it possible to redirect stdout/stderr of FFI calls to log?
Please find below an example illustrating the different steps to redirect/restore stderr (file descriptor 2).
The C-like style used here is intended to keep the example minimal; of course, you could use the libc crate and encapsulate all of this properly in a struct.
Note that, in trivial cases, you may repeat the redirect/invoke/obtain/restore sequence as many times as you want, provided you keep pipe_fd, saved_fd and log_file open.
However, non-trivial cases imply some kind of complication:

If the C code produces a quite long message, how can we detect that we have read it all?
- We could inject an end-marker into STDERR_FILENO after the message is produced in the invoke step, then read log_file until this marker is detected in the obtain step. (This adds some text processing.)
- We could recreate the pipe and log_file before each redirect step, close the PIPE_WRITE end before the invoke step, then read log_file until EOF is reached and close it in the obtain step. (This adds the overhead of more system calls.)

If the C code produces a very long message, wouldn't it exceed the pipe's internal buffer capacity (and then block when writing)?
- We could execute the invoke step in a separate thread and join() it after the obtain step has completed (end-marker or EOF reached), so that the invocation still looks serial from the application's point of view. (This adds the overhead of spawning/joining a thread.) An alternative is to put all the logging part of the application in a separate thread (spawned once and for all) and keep all the invocation steps serial. (If the logging part of the application does not have to be perceived as serial, this is OK; otherwise it just pushes the same problem one thread further.)
- We could fork() to perform the redirect and invoke steps in a child process (if the application data does not have to be altered, just read), get rid of the restore step, and wait() for the process after the obtain step has completed (end-marker or EOF reached), so that the invocation still looks serial from the application's point of view. (This adds the overhead of spawning/waiting for a process, and precludes altering the application data from the invoked code.)
// necessary for the redirection
extern "C" {
    fn pipe(fd: *mut i32) -> i32;
    fn close(fd: i32) -> i32;
    fn dup(fd: i32) -> i32;
    fn dup2(old_fd: i32, new_fd: i32) -> i32;
}

const PIPE_READ: usize = 0;
const PIPE_WRITE: usize = 1;
const STDERR_FILENO: i32 = 2;

fn main() {
    //
    // duplicate original stderr in order to restore it
    //
    let saved_stderr = unsafe { dup(STDERR_FILENO) };
    if saved_stderr == -1 {
        eprintln!("cannot duplicate stderr");
        return;
    }
    //
    // create resources (pipe + file reading from it)
    //
    let mut pipe_fd = [-1; 2];
    if unsafe { pipe(&mut pipe_fd[0]) } == -1 {
        eprintln!("cannot create pipe");
        return;
    }
    use std::os::unix::io::FromRawFd;
    let mut log_file = unsafe { std::fs::File::from_raw_fd(pipe_fd[PIPE_READ]) };
    //
    // redirect stderr to pipe/log_file
    //
    if unsafe { dup2(pipe_fd[PIPE_WRITE], STDERR_FILENO) } == -1 {
        eprintln!("cannot redirect stderr to pipe");
        return;
    }
    //
    // invoke some C code that should write to stderr
    //
    extern "C" {
        fn perror(txt: *const u8);
    }
    unsafe {
        dup(-1); // invalid syscall in order to set errno (used by perror)
        perror(&"something bad happened\0".as_bytes()[0]);
    };
    //
    // obtain the previous message
    //
    use std::io::Read;
    let mut buffer = [0_u8; 100];
    if let Ok(sz) = log_file.read(&mut buffer) {
        println!(
            "message ({} bytes): {:?}",
            sz,
            std::str::from_utf8(&buffer[0..sz]).unwrap(),
        );
    }
    //
    // restore initial stderr
    //
    unsafe { dup2(saved_stderr, STDERR_FILENO) };
    //
    // close resources
    //
    unsafe {
        close(saved_stderr);
        // pipe_fd[PIPE_READ] will be closed by log_file
        close(pipe_fd[PIPE_WRITE]);
    };
}
I'm trying to get a parent process and a child process to communicate with each other using a tokio::net::UnixStream. For some reason the child is unable to read whatever the parent writes to the socket, and presumably the other way around.
The function I have is similar to the following:
pub async fn run() -> Result<(), Error> {
    let mut socks = UnixStream::pair()?;
    match fork() {
        Ok(ForkResult::Parent { .. }) => {
            socks.0.write_u32(31337).await?;
            Ok(())
        }
        Ok(ForkResult::Child) => {
            eprintln!("Reading from master");
            let msg = socks.1.read_u32().await?;
            eprintln!("Read from master {}", msg);
            Ok(())
        }
        Err(_) => Err(Error),
    }
}
The socket doesn't get closed; otherwise I'd get an immediate error trying to read from socks.1. If I move the read into the parent process, it works as expected. The first line, "Reading from master", gets printed, but the second one never does.
I cannot change the communication paradigm, since I'll be using execve to start another binary that expects to be talking to a socketpair.
Any idea what I'm doing wrong here? Is it something to do with the async/await?
When you call the fork() system call, the child process is created with a single thread: the one that called fork().
The default executor in Tokio is a thread-pool executor, and the child process only gets one of the threads in the pool, so it won't work properly.
I found I was able to make your program work by setting the thread pool to contain only a single thread, like this:
use tokio::prelude::*;
use tokio::net::UnixStream;
use nix::unistd::{fork, ForkResult};
use nix::sys::wait::wait;
use std::io::{Error, ErrorKind};

// Limit to 1 thread
#[tokio::main(core_threads = 1)]
async fn main() -> Result<(), Error> {
    let mut socks = UnixStream::pair()?;
    match fork() {
        Ok(ForkResult::Parent { .. }) => {
            eprintln!("Writing!");
            socks.0.write_u32(31337).await?;
            eprintln!("Written!");
            wait().unwrap();
            Ok(())
        }
        Ok(ForkResult::Child) => {
            eprintln!("Reading from master");
            let msg = socks.1.read_u32().await?;
            eprintln!("Read from master {}", msg);
            Ok(())
        }
        Err(_) => Err(Error::new(ErrorKind::Other, "oh no!")),
    }
}
Another change I had to make was to force the parent to wait for the child to complete, by calling wait(), also something you probably do not want to be doing in a real async program.
Most of the advice I have read says that if you need to fork from a threaded program, you should either do it before creating any threads, or call execve() in the child immediately after forking (which is what you plan to do anyway).
I'm trying to get into Rust from a Python background and I'm having an issue with a PoC I'm messing around with. I've read through a bunch of blogs and documentation on how to handle errors in Rust, but I can't figure out how to implement it when I use unwrap and get a panic. Here is part of the code:
fn main() {
    let listener = TcpListener::bind("127.0.0.1:5432").unwrap();

    // The .0 at the end is indexing a tuple, FYI
    loop {
        let stream = listener.accept().unwrap().0;
        stream.set_read_timeout(Some(Duration::from_millis(100)));
        handle_request(stream);
    }
}
// Things change a bit in here
fn handle_request(stream: TcpStream) {
    let address = stream.peer_addr().unwrap();
    let mut reader = BufReader::new(stream);
    let mut payload = "".to_string();

    for line in reader.by_ref().lines() {
        let brap = line.unwrap();
        payload.push_str(&*brap);
        if brap == "" {
            break;
        }
    }

    println!("{0} -> {1}", address, payload);
    send_response(reader.into_inner());
}
The set_read_timeout on the stream handles the socket not receiving anything, as expected, but when the timeout triggers, the unwrap on line in the loop causes a panic. Can someone help me understand how I'm supposed to apply a match or Option to this code?
There seems to be a large disconnect here. unwrap or expect handle errors by panicking the thread. You aren't really supposed to "handle" a panic in 99.9% of Rust programs; you just let things die.
If you don't want a panic, don't use unwrap or expect. Instead, pass back the error via a Result or an Option, as described in the Error Handling section of The Rust Programming Language.
You can match (or any other pattern matching technique) on the Result or Option and handle an error appropriately for your case. One example of handling the error in your outer loop:
use std::net::{TcpStream, TcpListener};
use std::time::Duration;
use std::io::prelude::*;
use std::io::BufReader;

fn main() {
    let listener = TcpListener::bind("127.0.0.1:5432")
        .expect("Unable to bind to the port");

    loop {
        if let Ok((stream, _)) = listener.accept() {
            stream
                .set_read_timeout(Some(Duration::from_millis(100)))
                .expect("Unable to set timeout");
            handle_request(stream);
        }
    }
}
Note that I highly recommend using expect instead of unwrap in just about every case.