std::sync::Arc of trait in Rust - multithreading

I am trying to implement a library for making TCP servers.
This is a very simplified version of the code that shows the problem:
#![crate_name="http_server2"]
#![crate_type="lib"]

use std::io::{TcpListener, Listener, Acceptor, TcpStream, IoResult, Reader, Writer};
use std::ops::Fn;
use std::sync::Arc;

pub trait Handler: Sized + Send {
    fn do_it(s: TcpStream) -> IoResult<()>;
}

fn serve(handler: Arc<Handler + Sized>) -> IoResult<()>
{
    let listener = TcpListener::bind("127.0.0.1", 1234);

    for stream in try!(listener.listen()).incoming() {
        let stream = try!(stream);
        let handler = handler.clone();
        spawn(proc() {
            handler.do_it(stream);
        });
    }
    Ok(())
}
The compiler completely ignores my Handler + Sized bound. If I implement a structure with the Handler trait and try to call serve with it, the sizedness hint is ignored as well ( http://is.gd/OWs22i ).
<anon>:13:1: 25:2 error: the trait `core::kinds::Sized` is not implemented for the type `Handler+'static+Sized`
<anon>:13 fn serve(handler: Arc<Handler + Sized>) -> IoResult<()>
<anon>:14 {
<anon>:15 let listener = TcpListener::bind("127.0.0.1", 1234);
<anon>:16
<anon>:17 for stream in try!(listener.listen()).incoming() {
<anon>:18 let stream = try!(stream);
...
<anon>:13:1: 25:2 note: the trait `core::kinds::Sized` must be implemented because it is required by `alloc::arc::Arc`
<anon>:13 fn serve(handler: Arc<Handler + Sized>) -> IoResult<()>
<anon>:14 {
<anon>:15 let listener = TcpListener::bind("127.0.0.1", 1234);
<anon>:16
<anon>:17 for stream in try!(listener.listen()).incoming() {
<anon>:18 let stream = try!(stream);
...
error: aborting due to previous error
How can I implement a single generic function with multithreading that will accept different handlers?

As I said in my comment above,
use std::io::{TcpListener, Listener, Acceptor, TcpStream, IoResult, Writer};
use std::sync::Arc;

pub trait Handler: Sized + Send {
    fn do_it(&self, s: TcpStream) -> IoResult<()>;
}

fn serve<T: Handler + Sized + Send + Sync>(handler: Arc<T>) -> IoResult<()> {
    let listener = TcpListener::bind("127.0.0.1", 1234);

    for stream in try!(listener.listen()).incoming() {
        let stream = try!(stream);
        let handler = handler.clone();
        spawn(proc() {
            let _ = handler.do_it(stream);
        });
    }
    Ok(())
}

struct Hello {
    x: u32,
}

impl Handler for Hello {
    fn do_it(&self, mut s: TcpStream) -> IoResult<()> { s.write_le_u32(self.x) }
}

fn main() {
    let s = Arc::new(Hello{x: 123,});
    let _ = serve(s);
}
compiles fine. (playpen)
Changes
Make do_it take &self.
Make serve generic, by adding a type parameter with the constraints you want.
Make the impl of Handler for Hello in do_it not discard the result of the write (remove ;).
Clarify with let _ = ... that we intentionally discard a result.
You will not be able to execute it in the playpen (application terminated abnormally with signal 31 (Bad system call)), as the playpen forbids IO (network IO in this case). It runs fine on my local box though.

Related

Using synchronous file-IO library in asynchronous code

I want to use a library with synchronous file IO in an asynchronous application. I also want all file operations to work asynchronously. Is that possible?
Something like this:
// function in other crate with synchronous API
fn some_api_fun_with_sync_io(r: &impl std::io::Read) -> Result<(), std::io::Error> {
    // ...
}

async fn my_fun() -> anyhow::Result<()> {
    let mut async_file = async_std::fs::File::open("test.txt").await?;

    // I want some magic here ))
    let mut sync_file = magic_async_to_sync_converter(async_file);

    some_api_fun_with_sync_io(&mut sync_file)?;
    Ok(())
}
I don't think this magic exists yet, but you can conjure it up yourself with async_std::task::block_on:
fn magic_async_to_sync_converter(async_file: AsyncFile) -> Magic {
    Magic(async_file)
}

struct Magic(AsyncFile);

impl SyncRead for Magic {
    fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
        block_on(self.0.read(buf))
    }
}

use std::io::Read as SyncRead;
use async_std::{
    fs::File as AsyncFile,
    io::ReadExt,
    task::{block_on, spawn_blocking},
};
But since some_api_fun_with_sync_io is now doing blocking io, you'll have to shove it into a blocking io thread with spawn_blocking:
spawn_blocking(move || some_api_fun_with_sync_io(sync_file)).await?;
You might want to revise your design and see whether you can do without this though. spawn_blocking is still marked as unstable by async_std.
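Putting the pieces together, my_fun from the question could look roughly like this. This is only a sketch: it reuses the Magic wrapper and the placeholder some_api_fun_with_sync_io from above, and passes the wrapper by value as in the spawn_blocking line; adjust the borrow to whatever your real API expects.
use async_std::{fs::File as AsyncFile, task::spawn_blocking};

async fn my_fun() -> anyhow::Result<()> {
    let async_file = AsyncFile::open("test.txt").await?;
    // Wrap the async file so it can be read through std::io::Read.
    let sync_file = magic_async_to_sync_converter(async_file);
    // Run the blocking call on async_std's blocking-IO thread pool
    // instead of stalling an async executor thread.
    spawn_blocking(move || some_api_fun_with_sync_io(sync_file)).await?;
    Ok(())
}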
Benchmarking @Caesar's idea:
use async_std::prelude::*;
use std::time::*;

struct AsyncToSyncWriteCvt<T: async_std::io::Write + Unpin> (T);

impl<T: async_std::io::Write + Unpin> std::io::Write for AsyncToSyncWriteCvt<T> {
    fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
        async_std::task::block_on(self.0.write(buf))
    }
    fn flush(&mut self) -> std::io::Result<()> {
        async_std::task::block_on(self.0.flush())
    }
}

fn test_sync<W: std::io::Write>(mut w: W) -> Result<(), std::io::Error> {
    for _ in 0..1000000 { w.write("test test test test ".as_bytes())?; }
    Ok(())
}

async fn test_async<T: async_std::io::Write + Unpin>(mut w: T) -> Result<(), std::io::Error> {
    for _ in 0..1000000 { w.write("test test test test ".as_bytes()).await?; }
    Ok(())
}

fn main() -> anyhow::Result<()> {
    async_std::task::block_on(async {
        // bench async -> sync IO
        let now = Instant::now();
        let async_file = async_std::fs::File::create("test1.txt").await?;
        let sync_file = AsyncToSyncWriteCvt(async_file);
        test_sync(sync_file)?;
        println!("Async -> sync: {:.2}s", now.elapsed().as_secs_f32());

        // bench sync IO
        let now = Instant::now();
        let sync_file = std::fs::File::create("test2.txt")?;
        test_sync(sync_file)?;
        println!("Sync: {:.2}s", now.elapsed().as_secs_f32());

        // bench async IO
        let now = Instant::now();
        let async_file = async_std::fs::File::create("test3.txt").await?;
        test_async(async_file).await?;
        println!("Async: {:.2}s", now.elapsed().as_secs_f32());

        Ok(())
    })
}
This code shows that "async -> sync" file writing is about as fast as "async" file writing, but slower than direct sync writing. A BufWriter speeds things up and closes the gap between sync and async.
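For instance, adding one more case to the same block_on block with a std::io::BufWriter around the converter batches the small writes, so block_on is only hit when the buffer actually flushes. This is a sketch; the 8 KiB capacity is an arbitrary choice.
        // bench async -> sync IO through a buffer
        let now = Instant::now();
        let async_file = async_std::fs::File::create("test4.txt").await?;
        let mut buffered = std::io::BufWriter::with_capacity(8 * 1024, AsyncToSyncWriteCvt(async_file));
        test_sync(&mut buffered)?;
        std::io::Write::flush(&mut buffered)?;
        println!("Async -> sync, buffered: {:.2}s", now.elapsed().as_secs_f32());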

Change a channel receiver type from Receiver<T> to Receiver<U>

I've got an interface, which defines a method that returns a receiver:
pub fn subscribe(to: &str) -> crossbeam_channel::Receiver<Message>;
I am using a library method that returns a Receiver, but of a different message type:
pub fn subscribe(to: &str) -> crossbeam_channel::Receiver<lib::Message>;
It is easy enough to convert lib::Message to Message, but how could I implement the interface, which would act as a wrapper for this library, such that the returned type is correct?
I've tried to create a new channel, but this doesn't work (I think) because the method returns and then no longer passes messages to the new channel, so the receiver will always be empty.
let sub_recv = subscription.receiver();
let (send, receiver) = crossbeam_channel::unbounded::<Message>();

for m in sub_recv.try_recv() {
    send.send(m.into()).map_err(|_| MQError::ConversionError)?;
}
Thanks
To convert between your types you should run a background task, because otherwise you would block the thread you are doing the conversion on.
Playground link
use crossbeam_channel::{Receiver, unbounded}; // 0.5.0

trait ReceiverCompatExt<T>
{
    fn convert(self) -> Receiver<T>;
}

impl<T, U> ReceiverCompatExt<U> for Receiver<T>
where U: From<T>,
      T: Send + 'static,
      U: Send + 'static,
{
    fn convert(self) -> Receiver<U> {
        let (sender, receiver) = unbounded();
        std::thread::spawn(move || {
            while let Ok(value) = self.recv() {
                if sender.send(value.into()).is_err() {
                    break;
                }
            }
        });
        receiver
    }
}
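With that extension trait in scope, the wrapper method from the question reduces to a one-liner. A sketch, assuming the library function is lib::subscribe and that Message implements From<lib::Message>:
pub fn subscribe(to: &str) -> crossbeam_channel::Receiver<Message> {
    lib::subscribe(to).convert()
}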

How to convert a Bytes Iterator into a Stream in Rust

I'm trying to build a feature which requires reading the contents of a file into a futures::stream::BoxStream, but I'm having a tough time figuring out what I need to do.
I have figured out how to read a file byte by byte via Bytes, which implements Iterator.
use std::fs::File;
use std::io::prelude::*;
use std::io::{BufReader, Bytes};

// TODO: Convert this to a async Stream
fn async_read() -> Box<dyn Iterator<Item = Result<u8, std::io::Error>>> {
    let f = File::open("/dev/random").expect("Could not open file");
    let reader = BufReader::new(f);
    let iter = reader.bytes().into_iter();
    Box::new(iter)
}

fn main() {
    ctrlc::set_handler(move || {
        println!("received Ctrl+C!");
        std::process::exit(0);
    })
    .expect("Error setting Ctrl-C handler");

    for b in async_read().into_iter() {
        println!("{:?}", b);
    }
}
However, I've been struggling quite a bit trying to figure out how I can turn this Box<dyn Iterator<Item = Result<u8, std::io::Error>>> into a Stream.
I would have thought something like this would work:
use futures::stream;
use std::fs::File;
use std::io::prelude::*;
use std::io::{BufReader, Bytes};

// TODO: Convert this to a async Stream
fn async_read() -> stream::BoxStream<'static, dyn Iterator<Item = Result<u8, std::io::Error>>> {
    let f = File::open("/dev/random").expect("Could not open file");
    let reader = BufReader::new(f);
    let iter = reader.bytes().into_iter();
    std::pin::Pin::new(Box::new(stream::iter(iter)))
}

fn main() {
    ctrlc::set_handler(move || {
        println!("received Ctrl+C!");
        std::process::exit(0);
    })
    .expect("Error setting Ctrl-C handler");

    while let Some(b) = async_read().poll() {
        println!("{:?}", b);
    }
}
But I keep getting a ton of compiler errors; I've tried other permutations but am generally getting nowhere.
One of the compiler errors:
  --> src/main.rs:14:24
   |
14 |     std::pin::Pin::new(Box::new(stream::iter(iter)))
   |                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected trait object `dyn std::iter::Iterator`, found enum `std::result::Result`
Anyone have any advice?
I'm pretty new to Rust, and specifically Streams/lower level stuff so I apologize if I got anything wrong, feel free to correct me.
For some additional background, I'm trying to do this so you can Ctrl-C out of a command in nushell.
I think you are overcomplicating it a bit: you can just return impl Stream from async_read; there is no need to box or pin (the same goes for the original Iterator-based version). Then you need an async runtime in order to poll the stream (in this example I just use the runtime provided by futures::executor::block_on), and you can call futures::stream::StreamExt::next() on the stream to get a future representing the next item.
Here is one way to do this:
use futures::prelude::*;
use std::{
    fs::File,
    io::{prelude::*, BufReader},
};

fn async_read() -> impl Stream<Item = Result<u8, std::io::Error>> {
    let f = File::open("/dev/random").expect("Could not open file");
    let reader = BufReader::new(f);
    stream::iter(reader.bytes())
}

async fn async_main() {
    while let Some(b) = async_read().next().await {
        println!("{:?}", b);
    }
}

fn main() {
    ctrlc::set_handler(move || {
        println!("received Ctrl+C!");
        std::process::exit(0);
    })
    .expect("Error setting Ctrl-C handler");

    futures::executor::block_on(async_main());
}
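If you specifically need the futures::stream::BoxStream type mentioned in the question (for example to store the stream in a struct field), you can box the same stream with StreamExt::boxed. A minimal sketch, reusing async_read from above:
use futures::stream::BoxStream;

fn async_read_boxed() -> BoxStream<'static, Result<u8, std::io::Error>> {
    // Note that the stream item is Result<u8, io::Error>; the second attempt
    // in the question put the Iterator type itself in the item position.
    async_read().boxed()
}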

How can I implement a pull-based system using Tokio?

I want to implement a pull-based system between a server and a client where the server will only push data when the client asks for it.
I was playing with Tokio and was able to create a push-based system where I was able to push a string at an interval of 1ms.
let done = listener
    .incoming()
    .for_each(move |socket| {
        let server_queue = _cqueue.clone();
        let (reader, mut writer) = socket.split();
        let sender = Interval::new_interval(std::time::Duration::from_millis(1))
            .for_each(move |_| {
                writer
                    .poll_write(server_queue.pull().borrow())
                    .map_err(|_| {
                        tokio::timer::Error::shutdown();
                    })
                    .unwrap();
                return Ok(());
            })
            .map_err(|e| println!("{}", e));
        tokio::spawn(sender);
        return Ok(());
    })
    .map_err(|e| println!("Future_error {}", e));
Is there a way to send only when the client asks for it without having to use a reader?
Let's think for a moment about the kinds of events that could lead to this "sending of data". There are several possibilities:
The client connects to the server. By contract, this is "asking for data". You've implemented this case.
The client sends an in-band message on the socket/pipe connecting the client and server. For that, you need to take the AsyncRead part of your socket, the AsyncWrite part that you've already used, and build a duplex channel so you can read and talk at the same time.
The client sends an out-of-band message, typically on another proto-host-port triplet and using a different protocol. Your current server recognizes it and sends the client that data. To do this, you need a reader for the other triplet, and you need a messaging structure in place to relay this to the one place having access to the AsyncWrite part of your socket.
The short answer is no, you cannot really act on an event that you're not listening for.
@Shepmaster I was just wondering if there was an existing library that can be used to handle this "neatly".
There is, and then there isn't.
Most libraries are centered around a specific problem. In your case, you've opted to work at the lowest possible level by having a TCP socket (implementing AsyncRead + AsyncWrite).
To do anything, you're going to need to decide on:
A transport format
A protocol
I tend to reach for this wrapper when I need a quick and dirty implementation of a duplex stream:
use futures::sync::mpsc::{UnboundedSender, unbounded};
use std::sync::{Arc};
use futures::{Sink, Stream, Future, future, stream};
use tokio::io::{AsyncRead, AsyncWrite};
use tokio::codec::{Framed, Encoder, Decoder};
use std::io;
use std::fmt::Debug;
use futures_locks::{RwLock as FutLock};

enum Message<T:Send+Debug+'static> {
    Content(T),
    Done
}

impl<T: Send + Debug + 'static> From<T> for Message<T> {
    fn from(message:T) -> Message<T> {
        Message::Content(message)
    }
}

struct DuplexStream<T:Send+Debug+'static> {
    writer: Arc<FutLock<UnboundedSender<Message<T>>>>,
    handlers: Arc<FutLock<Option<Box<dyn Stream<Item = Message<T>, Error = ()> + Send>>>>
}

impl<T:Send+Debug+'static> DuplexStream<T> {
    pub fn from<R,U>(framed_socket: Framed<R, U>) -> Arc<DuplexStream<T>>
        where U: Send + Encoder<Item = T> + Decoder<Item = T> + 'static, R: Send + AsyncRead + AsyncWrite + 'static {
        let (tx, rx) = framed_socket.split();
        // Assemble the combined upstream stream
        let (upstream_tx, upstream_rx) = unbounded();
        let upstream = upstream_rx.take_while(|item| match item {
            Message::Done => future::ok(false),
            _ => future::ok(true)
        }).fold(tx, |o, m| {
            o.send(match m {
                Message::Content(i) => i,
                _ => unreachable!()
            }).map_err(|_| {
                ()
            })
        }).map(|e| {
            Message::Done
        }).into_stream();
        // Assemble the downstream stream
        let downstream = rx.map_err(|_| ()).map(|r| {
            Message::Content(r)
        }).chain(stream::once(Ok(Message::Done)));
        Arc::new(DuplexStream {
            writer: Arc::new(FutLock::new(upstream_tx)),
            handlers: Arc::new(FutLock::new(Some(Box::new(upstream.select(downstream).take_while(|m| match m {
                Message::Content(_) => {
                    future::ok(true)
                },
                Message::Done => {
                    future::ok(false)
                }
            })))))
        })
    }
    pub fn start(self: Arc<Self>) -> Box<dyn Stream<Item = T, Error = io::Error> + Send> {
        Box::new(self.handlers
            .write()
            .map_err(|_| io::Error::new(io::ErrorKind::NotFound, "Stream closed"))
            .map(|mut handler| -> Box<dyn Stream<Item = T, Error = io::Error> + Send> {
                match handler.take() {
                    Some(e) => Box::new(e.map(|r| match r {
                        Message::Content(i) => i,
                        _ => unreachable!()
                    }).map_err(|_| io::Error::new(io::ErrorKind::NotFound, "Stream closed"))),
                    None => Box::new(stream::once(Err(io::Error::new(io::ErrorKind::AddrInUse, "Handler already taken"))))
                }
            }).into_stream().flatten()
        )
    }
    pub fn close(self: Arc<Self>) -> Box<dyn Future<Item = (), Error = io::Error> + Send> {
        self.inner_send(Message::Done)
    }
    pub fn send(self: Arc<Self>, message: T) -> Box<dyn Future<Item = (), Error = io::Error> + Send> {
        self.inner_send(message.into())
    }
    pub fn inner_send(self: Arc<Self>, message: Message<T>) -> Box<dyn Future<Item = (), Error = io::Error> + Send> {
        Box::new(self.writer.write()
            .map_err(|_| io::Error::new(io::ErrorKind::NotFound, "The mutex has disappeared")).and_then(|guard| {
                future::result(guard.unbounded_send(message).map_err(|_| io::Error::new(io::ErrorKind::BrokenPipe, "The sink has gone away")))
            }))
    }
}
This struct has a multitude of advantages, but a few drawbacks. The main advantage is that you can deal with the read and write part on the same object the same way you would in another language. The object itself implements Clone (since it's an Arc), every method is usable everywhere (particularly useful for old futures code) and as long as you keep a copy of it somewhere and don't call close() it'll keep running (as long as the underlying AsyncRead + AsyncWrite implementation is still there).
This does not absolve you from points 1 and 2, but you can (and should) leverage tokio::codec::Framed for point 1, and implement point 2 as business logic.
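For point 1, a minimal sketch using tokio 0.1's LinesCodec (any Encoder + Decoder pair with matching item types works; TcpStream here stands in for whatever AsyncRead + AsyncWrite transport you actually have):
use tokio::codec::{Framed, LinesCodec};
use tokio::net::TcpStream;

fn wrap(socket: TcpStream) -> Arc<DuplexStream<String>> {
    // Frame the raw byte stream into newline-delimited String messages,
    // then hand the framed transport to DuplexStream.
    DuplexStream::from(Framed::new(socket, LinesCodec::new()))
}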
An example (it's actually a test ;-) ) of the usage:
#[test]
fn it_writes() {
    let stream = DuplexStream::from(make_w());
    let stream_write = Arc::clone(&stream);
    let stream_read = Arc::clone(&stream);
    let dup = Arc::clone(&stream);
    tokio::run(lazy(move || {
        let stream_write = Arc::clone(&stream_write);
        stream_read.start().and_then(move |i| {
            let stream_write = Arc::clone(&stream_write);
            stream_write.send("foo".to_string()).map(|_| i)
        }).collect().map(|r| {
            assert_eq!(r, vec!["foo".to_string(), "bar".to_string(), "bazfoo".to_string(), "foo".to_string()])
        }).map_err(|_| {
            assert_eq!(true, false);
        })
    }));
}

What argument to pass and how to find out its type?

I started with the example for EventLoop from the mio webpage and added the main function:
extern crate mio;

use std::thread;
use mio::{EventLoop, Handler};

struct MyHandler;

impl Handler for MyHandler {
    type Timeout = ();
    type Message = u32;

    fn notify(&mut self, event_loop: &mut EventLoop<MyHandler>, msg: u32) {
        assert_eq!(msg, 123);
        event_loop.shutdown();
    }
}

fn main() {
    let mut event_loop = EventLoop::new().unwrap();
    let sender = event_loop.channel();

    // Send the notification from another thread
    thread::spawn(move || {
        let _ = sender.send(123);
    });

    let _ = event_loop.run(&mut MyHandler);
}
Then I had the idea to move the sending thread to a separate function "foo" and started to wonder what type is passed:
extern crate mio;

use std::thread;
use mio::{EventLoop, Handler};

struct MyHandler;

impl Handler for MyHandler {
    type Timeout = ();
    type Message = u32;

    fn notify(&mut self, event_loop: &mut EventLoop<MyHandler>, msg: u32) {
        assert_eq!(msg, 123);
        event_loop.shutdown();
    }
}

fn foo(s: &?) {
    let sender = s.clone();

    // Send the notification from another thread
    thread::spawn(move || {
        let _ = sender.send(123);
    });
}

fn main() {
    let mut event_loop = EventLoop::new().unwrap();
    let sender = event_loop.channel();

    foo(&sender);

    let _ = event_loop.run(&mut MyHandler);
}
So, I let the compiler tell me the type:
fn foo(s: &String) { ...
raises the error:
error: mismatched types:
expected `&collections::string::String`,
found `&mio::event_loop::Sender<_>`
OK, nice, but replacing &String with &mio::event_loop::Sender<u32> raises the error:
error: struct `Sender` is private
fn foo(s: &mio::event_loop::Sender<u32>) {
^
Hm, looks like a dead end, so I thought of passing event_loop instead:
fn foo(s: &mio::event_loop::EventLoop<u32>) {
    let sender = s.channel().clone();
    ...

fn main() { ...
    foo(&event_loop); ...
but that raises the error:
error: the trait `mio::handler::Handler` is not implemented for the type `u32` [E0277]
src/main.rs:18 fn foo(s: &mio::event_loop::EventLoop<u32>) {
which confuses me completely.
In e.g. C/C++ I would have just passed a pointer to either EventLoop or Sender.
What is Rust trying to tell me here? How to get it working in Rust?
Environment: rustc 1.0.0 (a59de37e9 2015-05-13) (built 2015-05-14), mio 0.3.5
The Sender type is re-exported as mio::Sender. The compiler knows that the actual type is mio::event_loop::Sender and reports that. There's currently no way to automatically figure out what type you need in general, but you can look at the documentation of the EventLoop::channel method and see that it returns a Sender. If you click on the Sender type in the documentation, you will end up at the documentation of mio::Sender.
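So the signature the question was looking for just uses the re-exported path; a sketch against mio 0.3, matching the code above:
fn foo(s: &mio::Sender<u32>) {
    let sender = s.clone();

    // Send the notification from another thread
    thread::spawn(move || {
        let _ = sender.send(123);
    });
}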
