I use tokio::net::TcpStream to connect to a small TCP server: I write a few bytes and expect to read the server's response.
When I do the same with the nc command, it works perfectly:
[denis#docker-1 ~]$ echo "get" | nc 10.0.0.11 9090
[37e64dd7-91db-4c13-9f89-f1c87467ffb3][processed]
and the server logs show
Incoming peer instructions.
Waiting for peer instructions...
Reading bytes...
Got a few bytes [4]
Got a few bytes [[103, 101, 116, 10, 0, ...]]
Reading bytes...
Got a few bytes [0]
Got a few bytes [[0, 0, 0, 0, 0, 0,...]]
Writing some data back from peer : [37e64dd7-91db-4c13-9f89-f1c87467ffb3]
But from my Rust client, I can write the bytes; as soon as I try to read the data back from the server, though, everything blocks (even the write):
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;
#[tokio::main]
async fn main() {
let data = "set".to_string();
let stream = TcpStream::connect("10.0.0.11:9090").await.unwrap();
let (mut read, mut write) = tokio::io::split(stream);
let u2 = data.as_bytes();
write.write_all(u2).await.unwrap();
let mut msg: [u8; 1024] = [0; 1024];
let _response_size = read.read(&mut msg).await.unwrap();
println!("GOT = {:?}", msg);
}
Looking at the server logs (below), the server reads the 3 bytes sent by the client, but then it cannot read any further: it keeps waiting for the read that would return 0 bytes.
Incoming peer instructions.
Waiting for peer instructions...
Reading bytes...
Got a few bytes [3]
Got a few bytes [[115, 101, 116, 0, 0, ...]]
Reading bytes...
Here is the server code:
use tokio::io;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::{TcpListener, TcpStream};
use uuid::Uuid;
struct DataPool {
data: [u8; 1024],
size: usize,
}
async fn whirl_socket(socket: &mut TcpStream) -> Vec<DataPool> {
let mut pool: Vec<DataPool> = vec![];
let mut buf = [0; 1024];
// In a loop, read data from the socket until finished
loop {
println!("Reading bytes...");
buf = [0; 1024];
let n = match socket.read(&mut buf).await {
Ok(n) => n,
Err(e) => {
eprintln!("failed to read from socket; err = {:?}", e);
break;
}
};
println!("Got a few bytes [{}]", n);
println!("Got a few bytes [{:?}]", &buf);
pool.push(DataPool {
data: buf,
size: n,
});
if n == 0 {
break;
}
}
pool
}
async fn launch_server_listener() -> io::Result<()> {
println!("Listen to 9090...");
let listener = TcpListener::bind("10.0.0.11:9090").await?;
loop {
println!("Waiting for peer instructions...");
let (mut socket, _) = listener.accept().await?;
println!("Incoming peer instructions.");
tokio::spawn(async move {
let pool = whirl_socket(&mut socket).await;
let my_uuid = Uuid::new_v4();
// Write the data back
println!("Writing some data back from peer : [{}]", my_uuid);
let s = format!("[{}][processed]\n", my_uuid);
let u = s.as_bytes();
if let Err(e) = socket.write_all(u).await {
eprintln!("failed to write to socket; err = {:?}", e);
return;
}
});
}
}
async fn start_servers() -> Result<(), Box<dyn std::error::Error>> {
let _r = tokio::join!(launch_server_listener());
Ok(())
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
start_servers().await?;
Ok(())
}
A read of 0 bytes means the read stream has closed. So in your client code you need to close the write stream. You can do this with .shutdown() from the AsyncWriteExt trait:
write.write_all(u2).await.unwrap();
write.shutdown().await.unwrap();
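Putting it together, a minimal sketch of the fixed client (same address as in the question; untested against your exact server, but it compiles):
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;

#[tokio::main]
async fn main() {
    let stream = TcpStream::connect("10.0.0.11:9090").await.unwrap();
    let (mut read, mut write) = tokio::io::split(stream);

    write.write_all(b"set").await.unwrap();
    // Closing the write half sends a FIN, so the server's next read()
    // returns 0 and it stops waiting for more input.
    write.shutdown().await.unwrap();

    let mut msg = [0u8; 1024];
    let response_size = read.read(&mut msg).await.unwrap();
    println!("GOT = {:?}", &msg[..response_size]);
}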
Related
I am writing a client for a Unix domain socket that should listen for messages from a server. After hours of research I got it to work (it shows 1-2 messages), but after that tokio crashes with the error Error: Kind(WouldBlock). Here is my code:
use std::env::{var, VarError};
use std::io::{self, Write};
use tokio::net::UnixStream;
#[tokio::main]
async fn main() -> io::Result<()> {
let hypr_instance_sig = match var("HYPRLAND_INSTANCE_SIGNATURE") {
Ok(var) => var,
Err(VarError::NotPresent) => panic!("Is hyprland running?"),
Err(VarError::NotUnicode(_)) => panic!("wtf no unicode?"),
};
let socket_path = format!("/tmp/hypr/{hypr_instance_sig}/.socket2.sock");
let stream = UnixStream::connect(socket_path).await?;
loop {
stream.readable().await?;
let mut buf = [0; 4096];
stream.try_read(&mut buf)?;
io::stdout().lock().write_all(&buf)?;
}
}
Could someone please help me?
I fully agree with @Caesar's comment:
- don't use try_read, use read instead
- slice the buffer to the correct size after reading
- check for 0 bytes read, which indicates that the end of the stream was reached
use std::env::{var, VarError};
use std::io::{self, Write};
use tokio::io::AsyncReadExt;
use tokio::net::UnixStream;
#[tokio::main]
async fn main() -> io::Result<()> {
let hypr_instance_sig = match var("HYPRLAND_INSTANCE_SIGNATURE") {
Ok(var) => var,
Err(VarError::NotPresent) => panic!("Is hyprland running?"),
Err(VarError::NotUnicode(_)) => panic!("wtf no unicode?"),
};
let socket_path = format!("/tmp/hypr/{hypr_instance_sig}/.socket2.sock");
let mut stream = UnixStream::connect(socket_path).await?;
let mut buf = [0; 4096];
loop {
let num_read = stream.read(&mut buf).await?;
if num_read == 0 {
break;
}
let buf = &buf[..num_read];
io::stdout().lock().write_all(buf)?;
}
Ok(())
}
Disclaimer: I didn't test the code because I don't have the required socket file. It compiles, though.
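If you prefer line-oriented reading (Hyprland's socket2 events are newline-delimited), the same loop can be written with tokio's BufReader. A sketch under that assumption, equally untested:
use std::env::var;
use tokio::io::{AsyncBufReadExt, BufReader};
use tokio::net::UnixStream;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let sig = var("HYPRLAND_INSTANCE_SIGNATURE").expect("Is hyprland running?");
    let stream = UnixStream::connect(format!("/tmp/hypr/{sig}/.socket2.sock")).await?;
    let mut lines = BufReader::new(stream).lines();
    // next_line() returns Ok(None) once the server closes the stream.
    while let Some(line) = lines.next_line().await? {
        println!("{line}");
    }
    Ok(())
}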
I have a server that broadcasts messages to connected clients, but the messages don't get delivered and my test fails.
I'm using the following imports:
use anyhow::Result;
use std::path::{Path, PathBuf};
use std::process::Stdio;
use std::sync::Arc;
use tokio::io::AsyncWriteExt;
use tokio::net::{UnixListener, UnixStream};
use tokio::sync::broadcast::*;
use tokio::sync::Notify;
use tokio::task::JoinHandle;
This is how I set up and start my server:
pub struct Server {
#[allow(dead_code)]
tx: Sender<String>,
rx: Receiver<String>,
address: Arc<PathBuf>,
handle: Option<JoinHandle<Result<()>>>,
abort: Arc<Notify>,
}
impl Server {
pub fn new<P: AsRef<Path>>(address: P) -> Self {
let (tx, rx) = channel::<String>(400);
let address = Arc::new(address.as_ref().to_path_buf());
Self {
address,
handle: None,
tx,
rx,
abort: Arc::new(Notify::new()),
}
}
}
/// Start Server
pub async fn start(server: &mut Server) -> Result<()> {
tokio::fs::remove_file(server.address.as_path()).await.ok();
let listener = UnixListener::bind(server.address.as_path())?;
println!("[Server] Started");
let tx = server.tx.clone();
let abort = server.abort.clone();
server.handle = Some(tokio::spawn(async move {
loop {
let tx = tx.clone();
let abort1 = abort.clone();
tokio::select! {
_ = abort.notified() => break,
Ok((client, _)) = listener.accept() => {
tokio::spawn(async move { handle(client, tx, abort1).await });
}
}
}
println!("[Server] Aborted!");
Ok(())
}));
Ok(())
}
my handle function
/// Handle stream
async fn handle(mut stream: UnixStream, tx: Sender<String>, abort: Arc<Notify>) {
loop {
let mut rx = tx.subscribe();
let abort = abort.clone();
tokio::select! {
_ = abort.notified() => break,
result = rx.recv() => match result {
Ok(output) => {
stream.write_all(output.as_bytes()).await.unwrap();
stream.write(b"\n").await.unwrap();
continue;
}
Err(e) => {
println!("[Server] {e}");
break;
}
}
}
}
stream.write(b"").await.unwrap();
stream.flush().await.unwrap();
}
my connect function
/// Connect to server
async fn connect(address: Arc<PathBuf>, name: String) -> Vec<String> {
use tokio::io::{AsyncBufReadExt, BufReader};
let mut outputs = vec![];
let stream = UnixStream::connect(&*address).await.unwrap();
let mut breader = BufReader::new(stream);
let mut buf = vec![];
loop {
if let Ok(len) = breader.read_until(b'\n', &mut buf).await {
if len == 0 {
break;
} else {
let value = String::from_utf8(buf.clone()).unwrap();
print!("[{name}] {value}");
outputs.push(value)
};
buf.clear();
}
}
println!("[{name}] ENDED");
outputs
}
This is what I feed to the channel and want broadcast to all clients:
/// Feed data
pub fn feed(tx: Sender<String>, abort: Arc<Notify>) -> Result<JoinHandle<Result<()>>> {
use tokio::io::*;
use tokio::process::Command;
Ok(tokio::spawn(async move {
let mut child = Command::new("echo")
.args(&["1\n", "2\n", "3\n", "4\n"])
.stdout(Stdio::piped())
.stderr(Stdio::null())
.stdin(Stdio::null())
.spawn()?;
let mut stdout = BufReader::new(child.stdout.take().unwrap()).lines();
loop {
let sender = tx.clone();
tokio::select! {
result = stdout.next_line() => match result {
Err(e) => {
println!("[Server] FAILED to send an output to channel: {e}");
},
Ok(None) => break,
Ok(Some(output)) => {
let output = output.trim().to_string();
println!("[Server] {output}");
if !output.is_empty() {
if let Err(e) = sender.send(output) {
println!("[Server] FAILED to send an output to channel: {e}");
}
}
}
}
}
}
println!("[Server] Process Completed");
abort.notify_waiters();
Ok(())
}))
}
my failing test
#[tokio::test]
async fn test_server() -> Result<()> {
let mut server = Server::new("/tmp/testsock.socket");
start(&mut server).await?;
feed(server.tx.clone(), server.abort.clone()).unwrap();
let address = server.address.clone();
let client1 = connect(address.clone(), "Alpha".into());
let client2 = connect(address.clone(), "Beta".into());
let client3 = connect(address.clone(), "Delta".into());
let client4 = connect(address.clone(), "Gamma".into());
let (c1, c2, c3, c4) = tokio::join!(client1, client2, client3, client4,);
server.handle.unwrap().abort();
assert_eq!(c1.len(), 4, "Alpha");
assert_eq!(c2.len(), 4, "Beta");
assert_eq!(c3.len(), 4, "Delta");
assert_eq!(c4.len(), 4, "Gamma");
println!("ENDED");
Ok(())
}
Logs:
[Server] Started
[Server] 1
[Server] 2
[Server] 3
[Server] 4
[Server]
[Delta] 1
[Gamma] 1
[Alpha] 1
[Beta] 1
[Server] Process Completed
[Server] Aborted!
[Gamma] ENDED
[Alpha] ENDED
[Beta] ENDED
[Delta] ENDED
Well, not a full answer, but I want to suggest using task::spawn to generate a JoinHandle from a function; your handle could then be:
fn handle(mut stream: UnixStream, tx: Sender<String>, abort: Arc<Notify>) -> JoinHandle<()> {
let mut rx = tx.subscribe();
let abort = abort.clone();
task::spawn( async move {
loop {
tokio::select! {
_ = abort.notified() => break,
result = rx.recv() => match result {
Ok(output) => {
stream.write_all(output.as_bytes()).await.unwrap();
stream.write(b"\n").await.unwrap();
continue;
}
Err(e) => {
println!("[Server] {e}");
break;
}
}
}
}
stream.write(b"").await.unwrap();
stream.flush().await.unwrap();
})
}
I have not tested this, but I see some duplication in the code above: two loops, two select!s, and the abort check twice. Note also that this version calls tx.subscribe() once, before the loop; the original handle re-subscribes on every iteration, and a broadcast receiver only sees messages sent after subscribe() was called, so messages arriving between iterations are silently missed.
I have a really simple TCP client/server setup that I found somewhere online.
use std::net::{Shutdown,TcpListener, TcpStream};
use std::thread;
use std::io::{Read,Write,Error};
fn handle_client(mut stream: TcpStream)-> Result<(), Error> {
println!("incoming connection from: {}", stream.peer_addr()?);
let mut buf = [0;512];
loop {
let bytes_read = stream.read(&mut buf)?;
if bytes_read == 0 {return Ok(())}
let tmp = format!("{}", String::from_utf8_lossy(&buf).trim());
eprintln!("getting {}",tmp);
stream.write(&vec![1,2,3,4])?;
}
}
fn main() {
let listener = TcpListener::bind("0.0.0.0:8888").expect("Could not bind");
let mut i = 0;
for stream in listener.incoming() {
match stream {
Err(e)=> {eprintln!("failed: {}", e)}
Ok(stream) => {
thread::spawn(move || {
handle_client(stream).unwrap_or_else(|error| eprintln!("{:?}", error));
});
}
}
}
}
and the client looks like so:
use std::net::TcpStream;
use std::str;
use std::io::{self,BufRead,BufReader,Write};
fn main() {
let mut stream = TcpStream::connect("0.0.0.0:8888").expect("could not connect");
loop {
let mut input = String::new();
let mut buffer : Vec<u8> = Vec::new();
io::stdin().read_line(&mut input).expect("failed to read stdin");
stream.write(input.as_bytes()).expect("Failed to write to server");
let mut reader = BufReader::new(&stream);
reader.read_until(b'\n', &mut buffer).expect("Could not read into buffer");
println!("{}", str::from_utf8(&buffer).expect("msg: &str"))
}
println!("Hello, world!");
}
(Actually, VS Code tells me that the code inside the loop here is unreachable, which is false.)
But this basically just lets the client send a message to the server, which the server sends back.
I would like to send vectors back and forth instead, which seems like the simpler task.
So, I change the line in the server that writes to the stream to this:
stream.write(&vec![1,2,3,4])?;
and the line in the client that prints the message it gets from the server to:
println!("{:?}", &buffer)
But when I do this, nothing happens on the client side: nothing is printed, and the loop seems to be stuck somewhere.
I guess it has something to do with this line:
reader.read_until(b'\n', &mut buffer).expect("Could not read into buffer");
and I have read about other ways of reading. I tried the TcpStream method read_vectored, but I can't use it at all on the stream object I have.
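For what it's worth, read_until only returns once it sees the delimiter or the stream closes, and the four raw bytes the server writes never contain b'\n', so the client blocks forever. A minimal sketch of one way out, assuming you keep the newline framing: append the delimiter on the server side and strip it on the client.
// Server side: send the vector followed by the b'\n' delimiter
// that the client's read_until() is waiting for.
stream.write_all(&[1, 2, 3, 4, b'\n'])?;

// Client side: drop the trailing delimiter before printing the bytes.
reader.read_until(b'\n', &mut buffer).expect("Could not read into buffer");
if buffer.last() == Some(&b'\n') {
    buffer.pop();
}
println!("{:?}", &buffer);
Note that this framing breaks if the payload can itself contain the byte 10; a length prefix (write the length as a fixed-size integer, then exactly that many bytes) is the more robust scheme.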
I'm trying to make a client program that communicates with a server over a TcpStream wrapped in an openssl::ssl::SslStream (from crates.io). It should wait for reads and process data from the server as soon as it arrives, while at the same time being able to send messages to the server regardless of reading.
I tried several approaches:
- Passing a single stream to both the read and write threads. Both read and write methods take a mutable reference, so I couldn't pass one stream to two threads.
- Following In Rust how do I handle parallel read writes on a TcpStream, but TcpStream doesn't seem to have a clone method, and neither does SslStream.
- Making a copy of the TcpStream with as_raw_fd and from_raw_fd:
fn irc_read(mut stream: SslStream<TcpStream>) {
loop {
let mut buf = vec![0; 2048];
let resp = stream.ssl_read(&mut buf);
match resp {
// Process Message
}
}
}
fn irc_write(mut stream: SslStream<TcpStream>) {
thread::sleep(Duration::new(3, 0));
let msg = "QUIT\n";
let res = stream.ssl_write(msg.as_bytes());
let _ = stream.flush();
match res {
// Process
}
}
fn main() {
let ctx = SslContext::new(SslMethod::Sslv23).unwrap();
let read_ssl = Ssl::new(&ctx).unwrap();
let write_ssl = Ssl::new(&ctx).unwrap();
let raw_stream = TcpStream::connect((SERVER, PORT)).unwrap();
let mut fd_stream: TcpStream;
unsafe {
fd_stream = TcpStream::from_raw_fd(raw_stream.as_raw_fd());
}
let mut read_stream = SslStream::connect(read_ssl, raw_stream).unwrap();
let mut write_stream = SslStream::connect(write_ssl, fd_stream).unwrap();
let read_thread = thread::spawn(move || {
irc_read(read_stream);
});
let write_thread = thread::spawn(move || {
irc_write(write_stream);
});
let _ = read_thread.join();
let _ = write_thread.join();
}
This code compiles but panics on the second SslStream::connect:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Failure(Ssl(ErrorStack([Error { library: "SSL routines", function: "SSL23_GET_SERVER_HELLO", reason: "unknown protocol" }])))', ../src/libcore/result.rs:788
stack backtrace:
1: 0x556d719c6069 - std::sys::backtrace::tracing::imp::write::h00e948915d1e4c72
2: 0x556d719c9d3c - std::panicking::default_hook::_{{closure}}::h7b8a142818383fb8
3: 0x556d719c8f89 - std::panicking::default_hook::h41cf296f654245d7
4: 0x556d719c9678 - std::panicking::rust_panic_with_hook::h4cbd7ca63ce1aee9
5: 0x556d719c94d2 - std::panicking::begin_panic::h93672d0313d5e8e9
6: 0x556d719c9440 - std::panicking::begin_panic_fmt::hd0daa02942245d81
7: 0x556d719c93c1 - rust_begin_unwind
8: 0x556d719ffcbf - core::panicking::panic_fmt::hbfc935564d134c1b
9: 0x556d71899f02 - core::result::unwrap_failed::h66f79b2edc69ddfd
at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/obj/../src/libcore/result.rs:29
10: 0x556d718952cb - _<core..result..Result<T, E>>::unwrap::h49a140af593bc4fa
at /buildslave/rust-buildbot/slave/stable-dist-rustc-linux/build/obj/../src/libcore/result.rs:726
11: 0x556d718a5e3d - dbrust::main::h24a50e631826915e
at /home/lastone817/dbrust/src/main.rs:87
12: 0x556d719d1826 - __rust_maybe_catch_panic
13: 0x556d719c8702 - std::rt::lang_start::h53bf99b0829cc03c
14: 0x556d718a6b83 - main
15: 0x7f40a0b5082f - __libc_start_main
16: 0x556d7188d038 - _start
17: 0x0 - <unknown>
error: Process didn't exit successfully: `target/debug/dbrust` (exit code: 101)
The best workaround I've found so far is to use nonblocking mode: I wrapped the stream in a Mutex and passed it to both threads. The reading thread acquires the lock and calls read; if there is no message, it releases the lock so the writing thread can use the stream. With this method the reading thread busy-waits, causing 100% CPU consumption, so I don't think it is a good solution.
Is there a safe way to separate the read and write aspects of the stream?
I accomplished the split of an SSL stream into a read and a write part by using Rust's std::cell::UnsafeCell.
extern crate native_tls;
use native_tls::TlsConnector;
use std::cell::UnsafeCell;
use std::error::Error;
use std::io::Read;
use std::io::Write;
use std::marker::Sync;
use std::net::TcpStream;
use std::sync::Arc;
use std::sync::Mutex;
use std::thread;
struct UnsafeMutator<T> {
value: UnsafeCell<T>,
}
impl<T> UnsafeMutator<T> {
fn new(value: T) -> UnsafeMutator<T> {
return UnsafeMutator {
value: UnsafeCell::new(value),
};
}
fn mut_value(&self) -> &mut T {
return unsafe { &mut *self.value.get() };
}
}
unsafe impl<T> Sync for UnsafeMutator<T> {}
struct ReadWrapper<R>
where
R: Read,
{
inner: Arc<UnsafeMutator<R>>,
}
impl<R: Read> Read for ReadWrapper<R> {
fn read(&mut self, buf: &mut [u8]) -> Result<usize, std::io::Error> {
return self.inner.mut_value().read(buf);
}
}
struct WriteWrapper<W>
where
W: Write,
{
inner: Arc<UnsafeMutator<W>>,
}
impl<W: Write> Write for WriteWrapper<W> {
fn write(&mut self, buf: &[u8]) -> Result<usize, std::io::Error> {
return self.inner.mut_value().write(buf);
}
fn flush(&mut self) -> Result<(), std::io::Error> {
return self.inner.mut_value().flush();
}
}
pub struct Socket {
pub output_stream: Arc<Mutex<dyn Write + Send>>,
pub input_stream: Arc<Mutex<dyn Read + Send>>,
}
impl Socket {
pub fn bind(host: &str, port: u16, secure: bool) -> Result<Socket, Box<dyn Error>> {
let tcp_stream = match TcpStream::connect((host, port)) {
Ok(x) => x,
Err(e) => return Err(Box::new(e)),
};
if secure {
let tls_connector = TlsConnector::builder().build().unwrap();
let tls_stream = match tls_connector.connect(host, tcp_stream) {
Ok(x) => x,
Err(e) => return Err(Box::new(e)),
};
let mutator = Arc::new(UnsafeMutator::new(tls_stream));
let input_stream = Arc::new(Mutex::new(ReadWrapper {
inner: mutator.clone(),
}));
let output_stream = Arc::new(Mutex::new(WriteWrapper { inner: mutator }));
let socket = Socket {
output_stream,
input_stream,
};
return Ok(socket);
} else {
let mutator = Arc::new(UnsafeMutator::new(tcp_stream));
let input_stream = Arc::new(Mutex::new(ReadWrapper {
inner: mutator.clone(),
}));
let output_stream = Arc::new(Mutex::new(WriteWrapper { inner: mutator }));
let socket = Socket {
output_stream,
input_stream,
};
return Ok(socket);
}
}
}
fn main() {
let socket = Arc::new(Socket::bind("google.com", 443, true).unwrap());
let socket_clone = Arc::clone(&socket);
let reader_thread = thread::spawn(move || {
let mut res = vec![];
let mut input_stream = socket_clone.input_stream.lock().unwrap();
input_stream.read_to_end(&mut res).unwrap();
println!("{}", String::from_utf8_lossy(&res));
});
let writer_thread = thread::spawn(move || {
let mut output_stream = socket.output_stream.lock().unwrap();
output_stream.write_all(b"GET / HTTP/1.0\r\n\r\n").unwrap();
});
writer_thread.join().unwrap();
reader_thread.join().unwrap();
}
I'm trying to port this Python script that sends and receives input to a helper process to Rust:
import subprocess
data = chr(0x3f) * 1024 * 4096
child = subprocess.Popen(['cat'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
output, _ = child.communicate(data)
assert output == data
My attempt worked fine until the input exceeded 64k: presumably the OS's pipe buffer filled up before the input was fully written.
use std::io::Write;
const DATA: [u8; 1024 * 4096] = [0x3f; 1024 * 4096];
fn main() {
let mut child = std::process::Command::new("cat")
.stdout(std::process::Stdio::piped())
.stdin(std::process::Stdio::piped())
.spawn()
.unwrap();
match child.stdin {
Some(ref mut stdin) => {
match stdin.write_all(&DATA[..]) {
Ok(_size) => {}
Err(err) => panic!("{}", err),
}
}
None => unreachable!(),
}
let res = child.wait_with_output();
assert_eq!(res.unwrap().stdout.len(), DATA.len())
}
Is there a subprocess.communicate equivalent in Rust? Maybe a select equivalent? Can mio be used to solve this problem? Also, there seems to be no way to close stdin.
The goal here is to make a high performance system, so I want to avoid spawning a thread per task.
Well, it took a fair amount of code to get this done. I needed a combination of mio and nix, because mio wouldn't set AsRawFd items to nonblocking when they were pipes, so that had to be done first.
Here's the result:
extern crate mio;
extern crate bytes;
use mio::*;
use std::io;
use mio::unix::{PipeReader, PipeWriter};
use std::process::{Command, Stdio};
use std::os::unix::io::AsRawFd;
use nix::fcntl::FcntlArg::F_SETFL;
use nix::fcntl::{fcntl, O_NONBLOCK};
extern crate nix;
struct SubprocessClient {
stdin: PipeWriter,
stdout: PipeReader,
output : Vec<u8>,
input : Vec<u8>,
input_offset : usize,
buf : [u8; 65536],
}
// Sends a message and expects to receive the same exact message, one at a time
impl SubprocessClient {
fn new(stdin: PipeWriter, stdout : PipeReader, data : &[u8]) -> SubprocessClient {
SubprocessClient {
stdin: stdin,
stdout: stdout,
output : Vec::<u8>::new(),
buf : [0; 65536],
input : data.to_vec(),
input_offset : 0,
}
}
fn readable(&mut self, _event_loop: &mut EventLoop<SubprocessClient>) -> io::Result<()> {
println!("client socket readable");
match self.stdout.try_read(&mut self.buf[..]) {
Ok(None) => {
println!("CLIENT : spurious read wakeup");
}
Ok(Some(r)) => {
println!("CLIENT : We read {} bytes!", r);
self.output.extend(&self.buf[0..r]);
}
Err(e) => {
return Err(e);
}
};
return Ok(());
}
fn writable(&mut self, event_loop: &mut EventLoop<SubprocessClient>) -> io::Result<()> {
println!("client socket writable");
match self.stdin.try_write(&(&self.input)[self.input_offset..]) {
Ok(None) => {
println!("client flushing buf; WOULDBLOCK");
}
Ok(Some(r)) => {
println!("CLIENT : we wrote {} bytes!", r);
self.input_offset += r;
}
Err(e) => println!("not implemented; client err={:?}", e)
}
if self.input_offset == self.input.len() {
event_loop.shutdown();
}
return Ok(());
}
}
impl Handler for SubprocessClient {
type Timeout = usize;
type Message = ();
fn ready(&mut self, event_loop: &mut EventLoop<SubprocessClient>, token: Token,
events: EventSet) {
println!("ready {:?} {:?}", token, events);
if events.is_readable() {
let _x = self.readable(event_loop);
}
if events.is_writable() {
let _y = self.writable(event_loop);
}
}
}
pub fn from_nix_error(err: ::nix::Error) -> io::Error {
io::Error::from_raw_os_error(err.errno() as i32)
}
fn set_nonblock(s: &AsRawFd) -> io::Result<()> {
fcntl(s.as_raw_fd(), F_SETFL(O_NONBLOCK)).map_err(from_nix_error)
.map(|_| ())
}
const TEST_DATA : [u8; 1024 * 4096] = [40; 1024 * 4096];
pub fn echo_server() {
let mut event_loop = EventLoop::<SubprocessClient>::new().unwrap();
let process =
Command::new("cat")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.spawn().unwrap();
let raw_stdin_fd;
match process.stdin {
None => unreachable!(),
Some(ref item) => {
let err = set_nonblock(item);
match err {
Ok(()) => {},
Err(e) => panic!(e),
}
raw_stdin_fd = item.as_raw_fd();
},
}
let raw_stdout_fd;
match process.stdout {
None => unreachable!(),
Some(ref item) => {
let err = set_nonblock(item);
match err {
Ok(()) => {},
Err(e) => panic!(e),
}
raw_stdout_fd = item.as_raw_fd();},
}
//println!("listen for connections {:?} {:?}", , process.stdout.unwrap().as_raw_fd());
let mut subprocess = SubprocessClient::new(PipeWriter::from(Io::from_raw_fd(raw_stdin_fd)),
PipeReader::from(Io::from_raw_fd(raw_stdout_fd)),
&TEST_DATA[..]);
let stdout_token : Token = Token(0);
let stdin_token : Token = Token(1);
event_loop.register(&subprocess.stdout, stdout_token, EventSet::readable(),
PollOpt::level()).unwrap();
// Connect to the server
event_loop.register(&subprocess.stdin, stdin_token, EventSet::writable(),
PollOpt::level()).unwrap();
// Start the event loop
event_loop.run(&mut subprocess).unwrap();
let res = process.wait_with_output();
match res {
Err(e) => {panic!(e);},
Ok(output) => {
subprocess.output.extend(&output.stdout);
println!("Final output was {:}\n", output.stdout.len());
},
}
println!("{:?}\n", subprocess.output.len());
}
fn main() {
echo_server();
}
Basically, the only way to close stdin was to call process.wait_with_output, since ChildStdin has no explicit close primitive.
Once that returned, the remaining output could be appended to the output data vector.
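(On current Rust you can also close the pipe without waiting for the child: take the handle out of the Child and drop it. A tiny sketch, using only std:)
// Dropping the ChildStdin closes the write end of the pipe,
// which is what delivers EOF to `cat`.
drop(child.stdin.take());
let output = child.wait_with_output().unwrap();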
There's now a crate that does this: https://crates.io/crates/subprocess-communicate
In this particular example, you know that the input and output amounts are equivalent, so you don't need threads at all. You can just write a bit and then read a bit:
use std::io::{self, Cursor, Read, Write};
static DATA: [u8; 1024 * 4096] = [0x3f; 1024 * 4096];
const TRANSFER_LIMIT: u64 = 32 * 1024;
fn main() {
let mut child = std::process::Command::new("cat")
.stdout(std::process::Stdio::piped())
.stdin(std::process::Stdio::piped())
.spawn()
.expect("Could not start child");
let mut input = Cursor::new(&DATA[..]);
let mut output = Cursor::new(Vec::new());
match (child.stdin.as_mut(), child.stdout.as_mut()) {
(Some(stdin), Some(stdout)) => {
while input.position() < input.get_ref().len() as u64 {
io::copy(&mut input.by_ref().take(TRANSFER_LIMIT), stdin).expect("Could not copy input");
io::copy(&mut stdout.take(TRANSFER_LIMIT), &mut output).expect("Could not copy output");
}
},
_ => panic!("child process input and output were not opened"),
}
child.wait().expect("Could not join child");
let res = output.into_inner();
assert_eq!(res.len(), DATA.len());
assert_eq!(&*res, &DATA[..]);
}
If you didn't have that specific restriction, you would need to use select from the libc crate, which requires file descriptors for the pipes and so would probably restrict your code to Linux / OS X.
You could also start threads, one for each pipe (and reuse the parent thread for one of the pipes), but you've already ruled that out.
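For completeness, the thread version is short. A sketch with a single extra writer thread (same cat child and DATA as above; nothing beyond std is assumed):
use std::io::{Read, Write};

static DATA: [u8; 1024 * 4096] = [0x3f; 1024 * 4096];

fn main() {
    let mut child = std::process::Command::new("cat")
        .stdin(std::process::Stdio::piped())
        .stdout(std::process::Stdio::piped())
        .spawn()
        .expect("Could not start child");

    // Move the write end into its own thread so that reading and
    // writing cannot deadlock on a full pipe buffer.
    let mut stdin = child.stdin.take().expect("stdin was not piped");
    let writer = std::thread::spawn(move || {
        stdin.write_all(&DATA[..]).expect("Could not write input");
        // stdin is dropped here, closing the pipe and signalling EOF.
    });

    let mut output = Vec::new();
    child
        .stdout
        .take()
        .expect("stdout was not piped")
        .read_to_end(&mut output)
        .expect("Could not read output");

    writer.join().unwrap();
    child.wait().expect("Could not join child");
    assert_eq!(output.len(), DATA.len());
}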