Tokio echo server: cannot read and write in the same future - rust

I'm trying to build an echo server in Tokio. I've seen examples, but all of them seem to use io::copy from tokio-io, which I can't use because I want to modify the output.
However, I can't compile a server that uses the writer and the reader at the same time. I want to build a futures-based task that reads and writes in a loop (an echo server).
My current code is this:
extern crate futures;
extern crate futures_cpupool;
extern crate tokio;
extern crate tokio_io;

use futures::prelude::*;
use futures_cpupool::CpuPool;
use tokio_io::AsyncRead;
use futures::Stream;
use futures::stream;
use tokio_io::codec::*;
use std::rc::Rc;

fn main() {
    let pool = CpuPool::new_num_cpus();
    use std::net::*;
    let socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080);
    let listener = tokio::net::TcpListener::bind(&socket).unwrap();
    let server = listener.incoming().for_each(|socket| {
        let (writer, reader) = socket.framed(LinesCodec::new()).split();
        let writer = Rc::new(writer);
        let action = reader.for_each(|line| {
            println!("ECHO: {}", line);
            writer.send(line);
            Ok(())
        });
        pool.spawn(action); // `std::rc::Rc<futures::stream::SplitSink<tokio_io::codec::Framed<tokio::net::TcpStream, tokio_io::codec::LinesCodec>>>` cannot be shared between threads safely
        Ok(())
    });
    server.wait().unwrap();
}
You might say that I must use Arc because there are different threads involved. I've tried with Arc and Mutex, but another error arises and I can't figure out a way to make it compile:
extern crate futures;
extern crate futures_cpupool;
extern crate tokio;
extern crate tokio_io;

use futures::prelude::*;
use std::time;
use std::thread;
use futures_cpupool::CpuPool;
use tokio_io::AsyncRead;
use futures::Stream;
use tokio_io::codec::*;
use std::sync::Arc;
use std::sync::Mutex;

fn main() {
    let pool = CpuPool::new_num_cpus();
    use std::net::*;
    let socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080);
    let listener = tokio::net::TcpListener::bind(&socket).unwrap();
    let server = listener.incoming().for_each(|socket| {
        let (writer, reader) = socket.framed(LinesCodec::new()).split();
        let writer = Arc::new(Mutex::new(writer));
        let action = reader.for_each(move |line| {
            println!("ECHO: {}", line);
            writer.lock().unwrap().send(line); // cannot move out of borrowed content
            Ok(())
        });
        pool.spawn(action);
        Ok(())
    });
    server.wait().unwrap();
}
The error it reports is: cannot move out of borrowed content.
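For context, here is a minimal, self-contained sketch of why that error appears (this is not the echo server; FakeSink is an illustrative stand-in that mirrors the shape of futures 0.1's Sink::send, which takes self by value). Because a MutexGuard only ever hands out a borrow, calling a by-value method through it would have to move the sink out of the mutex:
use std::sync::{Arc, Mutex};

// Stand-in mirroring futures 0.1's fn send(self, item) -> Send<Self>.
struct FakeSink;

impl FakeSink {
    fn send(self, _item: String) -> FakeSink {
        self
    }
}

fn main() {
    let sink = Arc::new(Mutex::new(FakeSink));
    // Uncommenting the next line reproduces the problem:
    // error[E0507]: cannot move out of dereference of `MutexGuard<'_, FakeSink>`
    // sink.lock().unwrap().send("hello".to_string());
    let _ = sink;
}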

I finally found that forward was the answer to my question.
extern crate tokio;
extern crate tokio_io;
extern crate futures;

use futures::prelude::*;
use tokio_io::AsyncRead;
use futures::Stream;
use tokio_io::codec::*;

struct Cancellable {
    rx: std::sync::mpsc::Receiver<()>,
}

impl Future for Cancellable {
    type Item = ();
    type Error = std::sync::mpsc::RecvError;

    fn poll(&mut self) -> Result<Async<Self::Item>, Self::Error> {
        match self.rx.try_recv() {
            Ok(_) => Ok(Async::Ready(())),
            Err(_) => Ok(Async::NotReady),
        }
    }
}

fn main() {
    use std::net::*;
    let socket = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), 8080);
    let listener = tokio::net::TcpListener::bind(&socket).unwrap();
    let server = listener.incoming().for_each(|socket| {
        let (writer, reader) = socket.framed(LinesCodec::new()).split();
        let (tx, rx) = std::sync::mpsc::channel();
        let cancel = Cancellable { rx: rx };
        let action = reader
            .map(move |line| {
                println!("ECHO: {}", line);
                if line == "bye" {
                    println!("BYE");
                    tx.send(()).unwrap();
                }
                line
            })
            .forward(writer)
            .select2(cancel)
            .map(|_| {})
            .map_err(|err| {
                println!("error");
            });
        tokio::executor::current_thread::spawn(action);
        Ok(())
    }).map_err(|err| {
        println!("error = {:?}", err);
    });
    tokio::executor::current_thread::run(|_| {
        tokio::executor::current_thread::spawn(server);
    });
}
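For readers on current Tokio: with async/await the same echo-with-modification idea no longer needs forward or a hand-rolled Cancellable future, because the read and write halves of the framed socket can be used in one task. A minimal sketch, assuming tokio 1.x (full features), tokio-util 0.7 with the codec feature, and futures 0.3:
use futures::{SinkExt, StreamExt};
use tokio::net::TcpListener;
use tokio_util::codec::{Framed, LinesCodec};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    loop {
        let (socket, _addr) = listener.accept().await?;
        tokio::spawn(async move {
            // Framed is both a Stream of incoming lines and a Sink for outgoing lines.
            let mut framed = Framed::new(socket, LinesCodec::new());
            while let Some(Ok(line)) = framed.next().await {
                println!("ECHO: {}", line);
                // Modify the line here before echoing it back, if desired.
                if framed.send(line.clone()).await.is_err() {
                    break;
                }
                if line == "bye" {
                    break;
                }
            }
        });
    }
}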

Related


How do I set a timeout for HTTP request using asynchronous Hyper (>= 0.11)?
Here is an example of the code without a timeout:
extern crate hyper;
extern crate tokio_core;
extern crate futures;

use futures::Future;
use hyper::Client;
use tokio_core::reactor::Core;

fn main() {
    let mut core = Core::new().unwrap();
    let client = Client::new(&core.handle());
    let uri = "http://stackoverflow.com".parse().unwrap();
    let work = client.get(uri).map(|res| res.status());
    match core.run(work) {
        Ok(status) => println!("Status: {}", status),
        Err(e) => println!("Error: {:?}", e),
    }
}
Answering my own question with a working code example, based on the link provided by seanmonstar to the Hyper Guide / General Timeout:
extern crate hyper;
extern crate tokio_core;
extern crate futures;

use futures::Future;
use futures::future::Either;
use hyper::Client;
use tokio_core::reactor::Core;
use std::time::Duration;
use std::io;

fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();
    let client = Client::new(&handle);
    let uri: hyper::Uri = "http://stackoverflow.com".parse().unwrap();
    let request = client.get(uri.clone()).map(|res| res.status());
    let timeout = tokio_core::reactor::Timeout::new(Duration::from_millis(170), &handle).unwrap();
    let work = request.select2(timeout).then(|res| match res {
        Ok(Either::A((got, _timeout))) => Ok(got),
        Ok(Either::B((_timeout_error, _get))) => {
            Err(hyper::Error::Io(io::Error::new(
                io::ErrorKind::TimedOut,
                "Client timed out while connecting",
            )))
        }
        Err(Either::A((get_error, _timeout))) => Err(get_error),
        Err(Either::B((timeout_error, _get))) => Err(From::from(timeout_error)),
    });
    match core.run(work) {
        Ok(status) => println!("OK: {:?}", status),
        Err(e) => println!("Error: {:?}", e),
    }
}
Just FYI, this has gotten a lot easier with Tokio >= 1.0: there is now a dedicated timeout wrapper that can be applied to any future (such as a request). It wraps the original future's output in a Result whose Ok is that output and whose Err is a timeout error.
Thus your code in the question can now handle timeouts as follows:
extern crate tokio; // 1.7.1, full features

use hyper::Client;
use std::time::Duration;

#[tokio::main]
async fn main() {
    let client = Client::new();
    let uri = "http://stackoverflow.com".parse().unwrap();
    let work = client.get(uri);
    match tokio::time::timeout(Duration::from_millis(10), work).await {
        Ok(result) => match result {
            Ok(response) => println!("Status: {}", response.status()),
            Err(e) => println!("Network error: {:?}", e),
        },
        Err(_) => println!("Timeout: no response in 10 milliseconds."),
    };
}
(Of course, this code will always give you a timeout. To see the expected 301 response from the network, try raising the timeout to 200 milliseconds.)

"cannot recursively call into `Core`" when trying to achieve nested concurrency using Tokio

I'm building a service that periodically makes an HTTP request. I'm using tokio::timer::Delay as a periodic trigger and hyper to make the HTTP call.
Using them together gives me the following error:
thread 'tokio-runtime-worker-1' panicked at 'cannot recursively call into `Core`', libcore/option.rs:960:5
How can I make this work?
Below is a simplified version of the service.
main.rs
extern crate futures;
extern crate hyper;
extern crate tokio;
extern crate tokio_core;
extern crate tokio_timer;

use futures::{Future, Stream};
use hyper::Client;
use tokio_core::reactor::Core;
use std::time::{Duration, Instant};
use tokio::timer::Delay;
use std::io::{self, Write};

fn main() {
    let when = Instant::now() + Duration::from_secs(1);
    tokio::run({
        Delay::new(when)
            .map_err(|e| panic!("timer failed; err={:?}", e))
            .and_then(move |_| {
                let mut core = Core::new().unwrap();
                let client = Client::new(&core.handle());
                let uri = "http://httpbin.org/ip".parse().unwrap();
                let work = client.get(uri).and_then(|res| {
                    println!("Response: {}", res.status());
                    res.body()
                        .for_each(|chunk| io::stdout().write_all(&chunk).map_err(From::from))
                });
                core.run(work).unwrap();
                Ok(())
            })
    })
}
Cargo.toml
[dependencies]
futures = "0.1"
hyper = "0.11"
tokio-core = "0.1"
tokio-timer = "0.1"
tokio = "0.1"
serde = "1.0.19"
serde_derive = "1.0.19"
serde_json = "1.0.19"
hyper-tls = "0.1.3"
One primary conceptual issue I see is that you should not be creating arbitrary Cores. You want to share these as much as possible because that's how Tokio communicates between different futures.
Creating a single core and using it for the HTTP request and the overall command is the right thing to do.
hyper 0.11
hyper 0.11 is not compatible with the tokio crate. Instead, you need to use the component pieces of Tokio:
extern crate futures;
extern crate hyper;
extern crate tokio_core;
extern crate tokio_timer;

use futures::{Future, Stream};
use hyper::Client;
use std::{
    io::{self, Write},
    time::{Duration, Instant},
};
use tokio_core::reactor::Core;
use tokio_timer::Delay;

fn main() {
    let when = Instant::now() + Duration::from_secs(1);

    let mut core = Core::new().expect("Could not achieve criticality");
    let handle = core.handle();

    let command = Delay::new(when)
        .map_err(|e| panic!("timer failed; err={:?}", e))
        .and_then(move |_| {
            let client = Client::new(&handle);
            let uri = "http://httpbin.org/ip".parse().unwrap();
            client.get(uri).and_then(|res| {
                println!("Response: {}", res.status());
                res.body()
                    .for_each(|chunk| io::stdout().write_all(&chunk).map_err(From::from))
            })
        });

    core.run(command).expect("Meltdown occurred");
}
[dependencies]
futures = "0.1"
hyper = "0.11.27"
tokio-core = "0.1.17"
tokio-timer = "0.2.3"
hyper 0.12
Using hyper 0.12, it looks like this:
extern crate hyper;
extern crate tokio;

use hyper::Client;
use std::{
    error::Error,
    io::{self, Write},
    time::{Duration, Instant},
};
use tokio::{
    prelude::{Future, Stream},
    timer::Delay,
};

type MyError = Box<Error + Send + Sync>;

fn main() {
    let when = Instant::now() + Duration::from_secs(1);

    let command = Delay::new(when).from_err::<MyError>().and_then(move |_| {
        let client = Client::new();
        let uri = "http://httpbin.org/ip".parse().unwrap();
        client.get(uri).from_err::<MyError>().and_then(|res| {
            println!("Response: {}", res.status());
            res.into_body()
                .from_err::<MyError>()
                .for_each(|chunk| io::stdout().write_all(&chunk).map_err(From::from))
        })
    });

    tokio::run(command.map_err(|e| panic!("Error: {}", e)));
}
[dependencies]
hyper = "0.12.0"
tokio = "0.1.6"
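hyper 0.14
For completeness, a sketch of the same periodic request on current crates, where there is no Core to recursively call into at all; the timer and the request are simply awaited in turn. This assumes hyper 0.14 with the client, http1, and tcp features, and tokio 1.x with full features:
use hyper::{body, Client};
use std::{
    io::{self, Write},
    time::Duration,
};

#[tokio::main]
async fn main() {
    let client = Client::new();
    // A periodic trigger; the first tick completes immediately.
    let mut interval = tokio::time::interval(Duration::from_secs(1));
    loop {
        interval.tick().await;
        let uri = "http://httpbin.org/ip".parse().unwrap();
        match client.get(uri).await {
            Ok(res) => {
                println!("Response: {}", res.status());
                let bytes = body::to_bytes(res).await.expect("Error reading body");
                io::stdout().write_all(&bytes).expect("Error writing body");
            }
            Err(e) => eprintln!("Request error: {}", e),
        }
    }
}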

Getting multiple URLs concurrently with Hyper

I am trying to adapt the Hyper basic client example to get multiple URLs concurrently.
This is the code I currently have:
extern crate futures;
extern crate hyper;
extern crate tokio_core;

use std::io::{self, Write};
use std::iter;
use futures::{Future, Stream};
use hyper::Client;
use tokio_core::reactor::Core;

fn get_url() {
    let mut core = Core::new().unwrap();
    let client = Client::new(&core.handle());
    let uris: Vec<_> = iter::repeat("http://httpbin.org/ip".parse().unwrap())
        .take(50)
        .collect();
    for uri in uris {
        let work = client.get(uri).and_then(|res| {
            println!("Response: {}", res.status());
            res.body()
                .for_each(|chunk| io::stdout().write_all(&chunk).map_err(From::from))
        });
        core.run(work).unwrap();
    }
}

fn main() {
    get_url();
}
It doesn't seem to be acting concurrently (it takes a long time to complete). Am I giving the work to the core in the wrong way?
am I giving the work to the core in the wrong way?
Yes, you are giving one request to Tokio and requiring that it complete before starting the next request. You've taken asynchronous code and forced it to be sequential.
You need to give the reactor a single future that will perform different kinds of concurrent work.
Hyper 0.14
use futures::prelude::*;
use hyper::{body, client::Client};
use std::{
    io::{self, Write},
    iter,
};
use tokio;

const N_CONCURRENT: usize = 1;

#[tokio::main]
async fn main() {
    let client = Client::new();
    let uri = "http://httpbin.org/ip".parse().unwrap();
    let uris = iter::repeat(uri).take(50);

    stream::iter(uris)
        .map(move |uri| client.get(uri))
        .buffer_unordered(N_CONCURRENT)
        .then(|res| async {
            let res = res.expect("Error making request");
            println!("Response: {}", res.status());
            body::to_bytes(res).await.expect("Error reading body")
        })
        .for_each(|body| async move {
            io::stdout().write_all(&body).expect("Error writing body");
        })
        .await;
}
With N_CONCURRENT set to 1:
real 1.119 1119085us
user 0.012 12021us
sys 0.011 11459us
And set to 10:
real 0.216 216285us
user 0.014 13596us
sys 0.021 20640us
Cargo.toml
[dependencies]
futures = "0.3.17"
hyper = { version = "0.14.13", features = ["client", "http1", "tcp"] }
tokio = { version = "1.12.0", features = ["full"] }
Hyper 0.12
use futures::{stream, Future, Stream}; // 0.1.25
use hyper::Client; // 0.12.23
use std::{
    io::{self, Write},
    iter,
};
use tokio; // 0.1.15

const N_CONCURRENT: usize = 1;

fn main() {
    let client = Client::new();
    let uri = "http://httpbin.org/ip".parse().unwrap();
    let uris = iter::repeat(uri).take(50);

    let work = stream::iter_ok(uris)
        .map(move |uri| client.get(uri))
        .buffer_unordered(N_CONCURRENT)
        .and_then(|res| {
            println!("Response: {}", res.status());
            res.into_body()
                .concat2()
                .map_err(|e| panic!("Error collecting body: {}", e))
        })
        .for_each(|body| {
            io::stdout()
                .write_all(&body)
                .map_err(|e| panic!("Error writing: {}", e))
        })
        .map_err(|e| panic!("Error making request: {}", e));

    tokio::run(work);
}
With N_CONCURRENT set to 1:
real 0m2.279s
user 0m0.193s
sys 0m0.065s
And set to 10:
real 0m0.529s
user 0m0.186s
sys 0m0.075s
See also:
How can I perform parallel asynchronous HTTP GET requests with reqwest?

How can I read from a tokio TCP connection without using the tokio_proto crate?

I'm trying to write a TCP client to print incoming messages. I came up with the following code:
extern crate bytes;
extern crate futures;
extern crate tokio_core;
extern crate tokio_io;

use futures::Future;
use tokio_core::net::TcpStream;
use tokio_core::reactor::Core;
use tokio_io::AsyncRead;
use bytes::BytesMut;

fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();
    let connection = TcpStream::connect(&"127.0.0.1:8081".parse().unwrap(), &handle);
    let server = connection.and_then(move |mut stream| {
        let mut buf = BytesMut::with_capacity(1000);
        stream
            .read_buf(&mut buf)
            .map(|buf| print!("Buffer {:?}", buf))
            .map_err(|e| eprintln!("Error: {}", e));
        Ok(())
    });
    core.run(server).unwrap();
}
It compiles but it fails with a Buffer NotReady error.
Rust is a compiled language, which means that you should pay attention to the warnings that the compiler generates:
warning: unused `std::result::Result` which must be used
  --> src/main.rs:20:9
   |
20 | /         stream
21 | |             .read_buf(&mut buf)
22 | |             .map(|buf| print!("Buffer {:?}", buf))
23 | |             .map_err(|e| eprintln!("Error: {}", e));
   | |____________________________________________________^
   |
   = note: #[warn(unused_must_use)] on by default
Additionally, Tokio has an entire chapter dedicated to low-level I/O, which I'll assume you've read so as not to bore you with details you already know.
First, we take the connection future and convert it into a stream. A stream can yield multiple values; in this case, it yields one value for every successful read. The AsWeGetIt type is the simplest implementation of this.
We then print out each value of the stream using Stream::for_each. Conveniently, this converts the stream back into a future, which is what and_then needs.
extern crate bytes;
extern crate futures;
extern crate tokio_core;
extern crate tokio_io;

use futures::{Future, Poll, Stream};
use tokio_core::net::TcpStream;
use tokio_core::reactor::Core;
use tokio_io::AsyncRead;
use bytes::BytesMut;

struct AsWeGetIt<R>(R);

impl<R> Stream for AsWeGetIt<R>
where
    R: AsyncRead,
{
    type Item = BytesMut;
    type Error = std::io::Error;

    fn poll(&mut self) -> Poll<Option<Self::Item>, Self::Error> {
        let mut buf = BytesMut::with_capacity(1000);
        self.0
            .read_buf(&mut buf)
            .map(|async| async.map(|_| Some(buf)))
    }
}

fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();

    let address = "127.0.0.1:8081".parse().expect("Unable to parse address");
    let connection = TcpStream::connect(&address, &handle);

    let client = connection
        .and_then(|tcp_stream| {
            AsWeGetIt(tcp_stream).for_each(|buf| {
                println!("Buffer {:?}", buf);
                Ok(())
            })
        })
        .map_err(|e| eprintln!("Error: {}", e));

    core.run(client).expect("Unable to run the event loop");
}
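For comparison, on tokio 1.x the same client no longer needs a hand-written Stream, because AsyncReadExt::read_buf can simply be awaited in a loop. A sketch, assuming tokio 1.x with full features and bytes 1.x:
use bytes::BytesMut;
use tokio::{io::AsyncReadExt, net::TcpStream};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let mut stream = TcpStream::connect("127.0.0.1:8081").await?;
    let mut buf = BytesMut::with_capacity(1000);
    loop {
        // read_buf appends to the buffer and returns how many bytes arrived;
        // 0 means the peer closed the connection.
        let n = stream.read_buf(&mut buf).await?;
        if n == 0 {
            break;
        }
        println!("Buffer {:?}", buf);
        buf.clear();
    }
    Ok(())
}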

How do I append futures to a BufferUnordered stream?

I'm trying to append futures to the underlying stream of a BufferUnordered. At the moment I'm pushing them directly into that underlying stream, but the Fuse stream inside BufferUnordered is already empty, so pushing to it has no effect and the loop below never receives the 3rd response. Changing the definition of next at (1) to stream.buffer_unordered(1) seems to make it work, because then the underlying stream is not yet empty/finished.
extern crate url;
extern crate futures;
extern crate tokio_core;
extern crate reqwest;

use url::Url;
use futures::*;
use tokio_core::reactor::Core;
use reqwest::unstable::async::{Client, Response, Decoder};

fn main() {
    let mut core = Core::new().unwrap();
    let client = Client::new(&core.handle()).unwrap();

    let hyper = client.get("https://hyper.rs").unwrap().send();
    let google = client.get("https://google.com").unwrap().send();
    let stream = stream::futures_unordered(vec![future::ok(hyper), future::ok(google)]);

    let mut next = stream.buffer_unordered(5).into_future(); // (1)
    loop {
        match core.run(next) {
            Ok((None, _something)) => {
                println!("finished");
                break;
            }
            Ok((Some(response), mut next_requests)) => {
                {
                    let inner = next_requests.get_mut();
                    println!("{}", inner.is_empty());
                    println!("{}", response.status());
                    let yahoo = client.get("https://yahoo.com").unwrap().send();
                    inner.push(future::ok(yahoo)); // no effect here
                }
                next = next_requests.into_future();
            }
            Err((error, next_requests)) => {
                next = next_requests.into_future();
            }
        }
    }
}
How do I add more futures to BufferUnordered? Do I actually have to chain it or do something along these lines?
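One way this is commonly solved: instead of a fixed Vec, drive buffer_unordered from a futures mpsc channel, so new futures can be pushed in through the Sender side while the stream is being polled. Below is a minimal, self-contained sketch with futures 0.1 and tokio-core, using plain ready futures as stand-ins for the HTTP requests (the boxed helper is just for illustration):
extern crate futures; // 0.1
extern crate tokio_core; // 0.1

use futures::{future, sync::mpsc, Future, Stream};
use tokio_core::reactor::Core;

fn boxed(v: u32) -> Box<Future<Item = u32, Error = ()>> {
    Box::new(future::ok(v))
}

fn main() {
    let mut core = Core::new().unwrap();

    // The channel receiver is the underlying stream; it only ends once every
    // Sender has been dropped, so futures can keep arriving while it is polled.
    let (tx, rx) = mpsc::unbounded();
    tx.unbounded_send(boxed(1)).unwrap();
    tx.unbounded_send(boxed(2)).unwrap();

    let mut tx = Some(tx); // moved into the closure so more futures can be pushed
    let work = rx.buffer_unordered(5).for_each(move |value| {
        println!("got {}", value);
        if value == 1 {
            if let Some(sender) = tx.take() {
                // Append another future while the stream is already running...
                sender.unbounded_send(boxed(3)).unwrap();
            }
            // ...then drop the last Sender so the stream (and for_each) can finish.
        }
        Ok(())
    });

    core.run(work).unwrap();
}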
