How do I terminate a hyper server after fulfilling one request? - rust

I need a simple hyper server that serves a single request and then exits. This is my code so far, I believe that all I need is a way to get tx into hello, so I can use tx.send(()) and it should work the way I want it. However, I can't quite work out a way to do that without having the compiler yell at me.
use std::convert::Infallible;
use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};

async fn hello(_: Request<Body>) -> Result<Response<Body>, Infallible> {
    Ok(Response::new(Body::from("Hello World!")))
}

#[tokio::main]
pub async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let (tx, rx) = tokio::sync::oneshot::channel::<()>();

    let make_svc = make_service_fn(|_conn| {
        async { Ok::<_, Infallible>(service_fn(hello)) }
    });

    let addr = ([127, 0, 0, 1], 3000).into();
    let server = Server::bind(&addr).serve(make_svc);
    println!("Listening on http://{}", addr);

    let graceful = server.with_graceful_shutdown(async {
        rx.await.ok();
    });
    graceful.await?;
    Ok(())
}
Rust playground
Relevant crates:
tokio = { version = "0.2", features = ["full"] }
hyper = "0.13.7"
Since How to share mutable state for a Hyper handler? was answered, the hyper API has changed, and I am unable to compile the code when it is edited to work with the current version.

A straightforward solution would be to use global state for this, made possible by tokio's Mutex type, like so:
use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};
use lazy_static::lazy_static;
use std::convert::Infallible;
use std::sync::Arc;
use tokio::sync::oneshot::Sender;
use tokio::sync::Mutex;

lazy_static! {
    /// Channel used to send shutdown signal - wrapped in an Option to allow
    /// it to be taken by value (since oneshot channels consume themselves on
    /// send) and an Arc<Mutex> to allow it to be safely shared between threads
    static ref SHUTDOWN_TX: Arc<Mutex<Option<Sender<()>>>> = <_>::default();
}

async fn hello(_: Request<Body>) -> Result<Response<Body>, Infallible> {
    // Attempt to send a shutdown signal, if one hasn't already been sent
    if let Some(tx) = SHUTDOWN_TX.lock().await.take() {
        let _ = tx.send(());
    }
    Ok(Response::new(Body::from("Hello World!")))
}

#[tokio::main]
pub async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let (tx, rx) = tokio::sync::oneshot::channel::<()>();
    SHUTDOWN_TX.lock().await.replace(tx);

    let make_svc = make_service_fn(|_conn| async { Ok::<_, Infallible>(service_fn(hello)) });
    let addr = ([127, 0, 0, 1], 3000).into();
    let server = Server::bind(&addr).serve(make_svc);
    println!("Listening on http://{}", addr);

    let graceful = server.with_graceful_shutdown(async {
        rx.await.ok();
    });
    graceful.await?;
    Ok(())
}
In this version of the code, we store the sender half of the shutdown signal channel in a global variable protected by a mutex lock, and then attempt to consume the channel to send the signal on every request.
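If you would rather not have a global at all, the same Option<Sender> trick can be threaded through the service closures instead, in the style of the counter example further down. A minimal sketch, assuming the same hyper 0.13 / tokio 0.2 dependencies:

use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};
use std::convert::Infallible;
use std::sync::Arc;
use tokio::sync::{oneshot, Mutex};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let (tx, rx) = oneshot::channel::<()>();
    // Same Option-in-a-Mutex trick, but shared by cloning an Arc through
    // both closure layers instead of using a lazy_static
    let shutdown_tx = Arc::new(Mutex::new(Some(tx)));

    let make_svc = make_service_fn(move |_conn| {
        let shutdown_tx = shutdown_tx.clone();
        async move {
            Ok::<_, Infallible>(service_fn(move |_req: Request<Body>| {
                let shutdown_tx = shutdown_tx.clone();
                async move {
                    // Only the first request finds the sender still present
                    if let Some(tx) = shutdown_tx.lock().await.take() {
                        let _ = tx.send(());
                    }
                    Ok::<_, Infallible>(Response::new(Body::from("Hello World!")))
                }
            }))
        }
    });

    let addr = ([127, 0, 0, 1], 3000).into();
    let server = Server::bind(&addr).serve(make_svc);
    println!("Listening on http://{}", addr);
    server.with_graceful_shutdown(async { rx.await.ok(); }).await?;
    Ok(())
}

The cost is the double layer of clones and async move blocks, which is exactly the boilerplate the global version sidesteps.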

Related

How to connect a bevy game to an external TCP server using tokio's async TcpStream?

I want to send Events between the game client and server, and I already got that working, but I do not know how to do it with bevy.
I depend on tokio's async TcpStream, because I have to be able to split the stream into an OwnedWriteHalf and an OwnedReadHalf using stream.into_split().
My first idea was to just spawn a thread that handles the connection and sends the received events to a queue via mpsc::channel.
Then I add this queue as a bevy resource using app.insert_resource(Queue) and pull events from it in the game loop.
the Queue:
use tokio::sync::mpsc;

pub enum Instruction {
    Push(GameEvent),
    Pull(mpsc::Sender<Option<GameEvent>>),
}

#[derive(Clone, Debug)]
pub struct Queue {
    sender: mpsc::Sender<Instruction>,
}

impl Queue {
    pub fn init() -> Self {
        let (tx, rx) = mpsc::channel(1024);
        init(rx);
        Self { sender: tx }
    }

    pub async fn send(&self, event: GameEvent) {
        self.sender.send(Instruction::Push(event)).await.unwrap();
    }

    pub async fn pull(&self) -> Option<GameEvent> {
        println!("new pull");
        let (tx, mut rx) = mpsc::channel(1);
        self.sender.send(Instruction::Pull(tx)).await.unwrap();
        rx.recv().await.unwrap()
    }
}

fn init(mut rx: mpsc::Receiver<Instruction>) {
    tokio::spawn(async move {
        let mut queue: Vec<GameEvent> = Vec::new();
        loop {
            match rx.recv().await.unwrap() {
                Instruction::Push(ev) => {
                    queue.push(ev);
                }
                Instruction::Pull(sender) => {
                    sender.send(queue.pop()).await.unwrap();
                }
            }
        }
    });
}
But because all of this is async, I have to block on the pull() function in the sync game loop. I do this using the futures-lite crate:
fn event_pull(communication: Res<Communication>) {
    let ev = future::block_on(communication.event_queue.pull());
    println!("got event: {:?}", ev);
}
And this works fine, BUT after around 5 seconds the whole program just halts and does not receive any more events.
It seems that future::block_on() ends up blocking indefinitely.
Having the main function, in which bevy::prelude::App is built and run, be the async tokio::main function might also be part of the problem here.
It would probably be best to wrap the async TcpStream initialisation and the tokio::sync::mpsc::Sender (and thus also Queue::pull) in synchronous functions, but I do not know how to do this.
Can anyone help?
How to reproduce
The repo can be found here
Just compile both server and client, then run both in that order.
I got it to work by replacing every tokio::sync::mpsc with crossbeam::channel (which might itself be a problem, as it blocks) and by manually initializing the tokio runtime.
The init code now looks like this:
pub struct Communicator {
    pub event_bridge: bridge::Bridge,
    pub event_queue: event_queue::Queue,
    _runtime: Runtime,
}

impl Communicator {
    pub fn init(ip: &str) -> Self {
        let rt = tokio::runtime::Builder::new_multi_thread()
            .enable_io()
            .build()
            .unwrap();

        let (bridge, queue, game_rx) = rt.block_on(async move {
            let socket = TcpStream::connect(ip).await.unwrap();
            let (read, write) = socket.into_split();
            let reader = TcpReader::new(read);
            let writer = TcpWriter::new(write);

            let (bridge, tcp_rx, game_rx) = bridge::Bridge::init();
            reader::init(bridge.clone(), reader);
            writer::init(tcp_rx, writer);

            let event_queue = event_queue::Queue::init();
            return (bridge, event_queue, game_rx);
        });

        // forward game_rx events to the queue for the game loop
        let eq_clone = queue.clone();
        rt.spawn(async move {
            loop {
                let event = game_rx.recv().unwrap();
                eq_clone.send(event);
            }
        });

        Self {
            event_bridge: bridge,
            event_queue: queue,
            _runtime: rt,
        }
    }
}
And main.rs looks like this:
fn main() {
    let communicator = communication::Communicator::init("0.0.0.0:8000");
    communicator.event_bridge.push_tcp(TcpEvent::Register { name: String::from("luca") });

    App::new()
        .insert_resource(communicator)
        .add_system(event_pull)
        .add_plugins(DefaultPlugins)
        .run();
}

fn event_pull(communication: Res<communication::Communicator>) {
    let ev = communication.event_queue.pull();
    if let Some(ev) = ev {
        println!("got event: {:?}", ev);
    }
}
Perhaps there might be a better solution.
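One possible direction, sketched here with illustrative names (GameEvent and the connection setup are stand-ins, not taken from the repo): run the tokio runtime on its own dedicated thread and hand the game loop a plain std::sync::mpsc receiver, so the bevy system can drain events with try_iter() and never has to block:

use std::sync::mpsc;
use std::thread;

#[derive(Debug)]
pub enum GameEvent {
    Ping, // hypothetical variant for illustration
}

pub fn spawn_network_thread() -> mpsc::Receiver<GameEvent> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // The runtime lives entirely on this thread, so neither bevy's
        // main loop nor tokio's executor can stall the other
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_io()
            .build()
            .unwrap();
        rt.block_on(async move {
            // ... connect the TcpStream and read events here ...
            let _ = tx.send(GameEvent::Ping);
        });
    });
    rx
}

// In the bevy system, drain whatever has arrived without blocking:
// for ev in receiver.try_iter() {
//     println!("got event: {:?}", ev);
// }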

Shared mutable state in Hyper

I'm trying to create a counter in a Hyper web server that counts the number of requests it has received. I'm using an Arc<Mutex<u64>> to hold the count. However, I haven't been able to figure out the right combination of move and .clone() to satisfy the types of the closures. Here's some code that compiles, but resets the counter on each request:
extern crate hyper;

use hyper::rt::Future;
use hyper::service::service_fn_ok;
use hyper::{Body, Response, Server};
use std::sync::{Arc, Mutex};

fn main() {
    let addr = "0.0.0.0:3000".parse().unwrap();

    // FIXME want to create the counter here, not below
    let server = Server::bind(&addr)
        .serve(|| {
            service_fn_ok(|_req| {
                let counter = Arc::new(Mutex::new(0));
                use_counter(counter)
            })
        })
        .map_err(|e| eprintln!("Error: {}", e));

    hyper::rt::run(server)
}

fn use_counter(counter: Arc<Mutex<u64>>) -> Response<Body> {
    let mut data = counter.lock().unwrap();
    *data += 1;
    Response::new(Body::from(format!("Counter: {}\n", data)))
}
It turns out I was pretty close, and looking at a few other examples helped me realize the problem. Since there are two layers of closures at play here, I need to move the counter into the outer closure, clone it, and then move that clone into the inner closure and clone there again. To wit:
extern crate hyper; // 0.12.10

use hyper::rt::Future;
use hyper::service::service_fn_ok;
use hyper::{Body, Response, Server};
use std::sync::{Arc, Mutex};

fn main() {
    let addr = "0.0.0.0:3000".parse().unwrap();
    let counter = Arc::new(Mutex::new(0));

    let server = Server::bind(&addr)
        .serve(move || {
            // Clone the Arc once per connection...
            let counter = counter.clone();
            // ...and once more per request
            service_fn_ok(move |_req| use_counter(counter.clone()))
        })
        .map_err(|e| eprintln!("Error: {}", e));

    hyper::rt::run(server)
}

fn use_counter(counter: Arc<Mutex<u64>>) -> Response<Body> {
    let mut data = counter.lock().unwrap();
    *data += 1;
    Response::new(Body::from(format!("Counter: {}\n", data)))
}
Update February 2020: here's a version using hyper 0.13:
use hyper::{Body, Response, Server, Request};
use std::sync::{Arc, Mutex};
use hyper::service::{make_service_fn, service_fn};
use std::convert::Infallible;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let addr = "0.0.0.0:3000".parse()?;
let counter = Arc::new(Mutex::new(0));
let make_service = make_service_fn(move |_conn| {
let counter = counter.clone();
async move {
Ok::<_, Infallible>(service_fn(move |_req: Request<Body>| {
let counter = counter.clone();
async move {
Ok::<_, Infallible>(use_counter(counter))
}
}))
}
});
Server::bind(&addr).serve(make_service).await?;
Ok(())
}
fn use_counter(counter: Arc<Mutex<u64>>) -> Response<Body> {
let mut data = counter.lock().unwrap();
*data += 1;
Response::new(Body::from(format!("Counter: {}\n", data)))
}
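As an aside, for a counter that is just a u64 the Mutex is not strictly necessary; an Arc<AtomicU64> avoids the lock entirely. A sketch of use_counter rewritten that way, with the same closure wiring as above:

use hyper::{Body, Response};
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

fn use_counter(counter: Arc<AtomicU64>) -> Response<Body> {
    // fetch_add returns the previous value, so add 1 for the new count
    let count = counter.fetch_add(1, Ordering::SeqCst) + 1;
    Response::new(Body::from(format!("Counter: {}\n", count)))
}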

Using Tokio's mpsc and oneshot leads to deadlock

I want to write a SOCKS server which selects one of several internet gateways depending on the destination requested by the client. The general flow is:
Perform the SOCKS5 negotiation and derive the address information from the client
Ask an internal server to select the internet gateway and the destination's IP
Connect and do the communication
For this internal server, a Tokio task is spawned which waits on an mpsc queue. The received messages contain the SOCKS5 address info and the tx side of a oneshot channel used to hand back the result.
Another Tokio task periodically queries the internal server:
extern crate futures;
extern crate tokio_core;
extern crate tokio_timer;

use std::time;
use std::time::{Duration, Instant};
use std::fmt::Debug;
use tokio_core::reactor::{Core, Interval};
use tokio_timer::wheel;
use futures::{Future, Sink, Stream};
use futures::sync::{mpsc, oneshot};

type MsgRequest<A, E> = oneshot::Sender<Result<A, E>>;
type FutRequest<A, E> = mpsc::Sender<MsgRequest<A, E>>;

#[derive(Debug)]
struct Responder<A, E> {
    fut_tx: FutRequest<A, E>,
}

impl<A: 'static, E: 'static> Responder<A, E>
where
    E: Debug,
{
    fn query(&self) -> Result<A, E> {
        println!("enter query");
        let (res_tx, res_rx) = oneshot::channel::<Result<A, E>>();
        println!("send query");
        let fut_tx = self.fut_tx.clone();
        let res = fut_tx
            .send(res_tx)
            .then(|tx| {
                if let Ok(_tx) = tx {
                    println!("Sink flushed");
                }
                res_rx
            })
            .and_then(|x| Ok(x))
            .wait()
            .unwrap();
        res
    }
}

impl<A: 'static, E: 'static> Clone for Responder<A, E> {
    fn clone(&self) -> Self {
        Responder {
            fut_tx: self.fut_tx.clone(),
        }
    }
}

fn resolve(tx: oneshot::Sender<Result<u8, String>>) -> Result<(), ()> {
    println!("resolve");
    let delay = time::Duration::from_secs(10);
    wheel()
        .build()
        .sleep(delay)
        .then(|_| tx.send(Ok(0)))
        .wait()
        .unwrap();
    println!("resolve answered");
    Ok(())
}

fn main() {
    let mut lp = Core::new().unwrap();
    let handle = lp.handle();
    let (fut_tx, fut_rx) = mpsc::channel::<MsgRequest<u8, String>>(100);

    let resolver = fut_rx.for_each(|msg| resolve(msg));
    handle.spawn(resolver);

    let responder = Responder { fut_tx };
    let server = Interval::new_at(Instant::now(), Duration::new(2, 0), &handle)
        .unwrap()
        .for_each(move |_| {
            println!("Call query for_each");
            let rx = responder.clone();
            let _res = rx.query();
            Ok(())
        })
        .map_err(|_| ());
    handle.spawn(server);

    loop {
        lp.turn(None);
    }
}
Using Cargo.toml dependencies:
[dependencies]
futures = "0.1"
tokio-core = "0.1"
tokio-timer = "0.1"
This code deadlocks. The output is:
Call query for_each
enter query
send query
Sink flushed
Expected output is:
Call query for_each
enter query
send query
Sink flushed
resolve
resolve answered
Call query for_each
enter query
send query
Sink flushed
resolve
resolve answered
....
This indicates that the request carrying the tx end was successfully sent to the internal server, but the internal server never processes it. From my understanding, mpsc and oneshot can be used to transfer values between tasks, not only between threads, so the containing thread should not deadlock the way it does.
What's wrong here?
After having read Aaron's blog, the concept of futures is now much clearer to me. My first approach is not demand-driven and thus inadequate: query() calls wait() on the single thread that also drives the Core, which blocks the reactor, so the spawned resolver task is never polled and res_rx can never resolve. The function resolve() should actually return a future and not a result.
In order to close this question appropriately, here is my modified, further reduced minimal example to show the concept:
extern crate futures;
extern crate tokio_core;
extern crate tokio_timer;

use std::time;
use std::time::{Instant, Duration};
use tokio_core::reactor::{Core, Interval};
use tokio_timer::wheel;
use futures::{Future, Stream, Sink};
use futures::sync::{oneshot, mpsc};

type MsgRequest<A, E> = oneshot::Sender<Result<A, E>>;

fn main() {
    let mut lp = Core::new().unwrap();
    let handle = lp.handle();
    let (fut_tx, fut_rx) = mpsc::channel::<MsgRequest<u8, String>>(100);

    let handle2 = handle.clone();
    let resolver = fut_rx
        .and_then(move |tx| {
            println!("Got query...wait a bit");
            let delay = time::Duration::from_secs(5);
            handle2.spawn({
                wheel()
                    .build()
                    .sleep(delay)
                    .then(move |_| {
                        println!("Answer query");
                        tx.send(Ok(0)).unwrap();
                        println!("query answered");
                        Ok(())
                    })
            });
            Ok(())
        })
        .for_each(|_| Ok(()));
    handle.spawn(resolver);

    let server = Interval::new_at(Instant::now(), Duration::new(2, 0), &handle)
        .unwrap()
        .then(move |_| {
            let fut_tx = fut_tx.clone();
            let (res_tx, res_rx) = oneshot::channel::<Result<u8, String>>();
            println!("send query");
            fut_tx
                .send(res_tx)
                .then(|tx| {
                    if let Ok(_tx) = tx {
                        println!("Sink flushed");
                    }
                    res_rx
                })
        })
        .for_each(|res| {
            println!("Received result {:?}", res);
            Ok(())
        })
        .map_err(|_| ());
    handle.spawn(server);

    loop {
        lp.turn(None);
    }
}
It outputs as expected:
send query
Sink flushed
Got query...wait a bit
Answer query
query answered
Received result Ok(0)
send query
Sink flushed
Got query...wait a bit
Answer query
query answered
Received result Ok(0)
...
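For comparison, the same mpsc-carrying-a-oneshot pattern written against modern tokio (1.x) with async/await; this is a sketch, not part of the original answer. The key point carries over: the client awaits the oneshot instead of calling wait(), so nothing blocks the executor:

use std::time::Duration;
use tokio::sync::{mpsc, oneshot};

#[tokio::main]
async fn main() {
    // Each request message is just the oneshot sender to reply on
    let (req_tx, mut req_rx) = mpsc::channel::<oneshot::Sender<u8>>(100);

    // Internal server: answer each query after a short delay
    tokio::spawn(async move {
        while let Some(reply_tx) = req_rx.recv().await {
            tokio::time::sleep(Duration::from_millis(100)).await;
            let _ = reply_tx.send(0);
        }
    });

    // Client side: send a query and await the reply without blocking a thread
    for _ in 0..3 {
        let (reply_tx, reply_rx) = oneshot::channel();
        req_tx.send(reply_tx).await.unwrap();
        println!("Received result {:?}", reply_rx.await.unwrap());
    }
}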

Reuse hyper::client and tokio_core in Iron and Hyper

I make a client request inside an Iron handler. How can I reuse Tokio's Core and Hyper's Client? I'm using hyper 0.11.0 and tokio-core 0.1.
fn get_result(req: &mut Request) -> IronResult<Response> {
    let mut payload = String::new();
    req.body.read_to_string(&mut payload).unwrap();

    // Can we reuse core and client somehow? Making them global with lazy_static!() does not work.
    let mut core = tokio_core::reactor::Core::new().unwrap();
    let client = Client::new(&core.handle());

    let uri = "http://host:port/getResult".parse().unwrap();
    let mut req: hyper::Request = hyper::Request::new(hyper::Method::Post, uri);
    req.headers_mut().set(ContentType::json());
    req.headers_mut().set(ContentLength(payload.len() as u64));
    req.set_body(payload);

    let mut results: Vec<RequestFormat> = Vec::new();
    let work = client.request(req).and_then(|res| {
        res.body().for_each(|chunk| {
            let re: ResultFormat = serde_json::from_slice(&chunk).unwrap();
            results.push(re);
            Ok(())
        })
    });

    Ok(Response::with(
        (iron::status::Ok, serde_json::to_string(&results).unwrap()),
    ))
}
I created a Downloader struct that wraps the client and core. Below is a snippet.
use hyper;
use tokio_core;
use std::sync::mpsc;
use std::thread;
use futures::Future;
use futures::stream::Stream;
use std::io;
use time::precise_time_ns;
use hyper::Client;

pub struct Downloader {
    sender: mpsc::Sender<(hyper::Request, mpsc::Sender<hyper::Chunk>)>,
    #[allow(dead_code)]
    tr: thread::JoinHandle<hyper::Request>,
}

impl Downloader {
    pub fn new() -> Downloader {
        let (sender, receiver) = mpsc::channel::<(hyper::Request, mpsc::Sender<hyper::Chunk>)>();
        let tr = thread::spawn(move || {
            // The Core and Client are created once and reused for every request
            let mut core = tokio_core::reactor::Core::new().unwrap();
            let client = Client::new(&core.handle());
            loop {
                let (req, sender) = receiver.recv().unwrap();
                let begin = precise_time_ns();
                let work = client.request(req).and_then(|res| {
                    res.body().for_each(|chunk| {
                        sender
                            .send(chunk)
                            .map_err(|e| io::Error::new(io::ErrorKind::Other, e))?;
                        Ok(())
                    })
                });
                core.run(work)
                    .map_err(|e| println!("Error Is {:?}", e))
                    .ok();
                // This prints the same time as the whole request processing takes.
                debug!(
                    "Time taken In Download {:?} ms",
                    (precise_time_ns() - begin) / 1000000
                );
            }
        });
        Downloader { sender, tr }
    }

    pub fn download(&self, req: hyper::Request, results: mpsc::Sender<hyper::Chunk>) {
        self.sender.send((req, results)).unwrap();
    }
}
Now clients of this struct can keep it in a static variable:
lazy_static! {
    static ref DOWNLOADER: Mutex<downloader::Downloader> =
        Mutex::new(downloader::Downloader::new());
}

let (sender, receiver) = mpsc::channel();
DOWNLOADER.lock().unwrap().download(req, sender);
and then read the chunks through the receiving channel.
One may need to close the sender side by dropping it (drop(sender)).
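For completeness, here is a hypothetical read loop for the receiver half, assuming download was handed an mpsc::Sender<hyper::Chunk> as in the struct definition above (names are illustrative):

let mut body: Vec<u8> = Vec::new();
// recv() returns Err once the Downloader's worker drops its chunk sender,
// which ends the loop
while let Ok(chunk) = receiver.recv() {
    // hyper 0.11's Chunk derefs to [u8], so the bytes can be appended directly
    body.extend_from_slice(&chunk);
}
println!("downloaded {} bytes", body.len());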

How can I pass a socket as an argument to a function being called within a thread?

I'm going to have multiple functions that all need access to one main socket.
Would it be better to:
Pass this socket to each function that needs access to it
Have a globally accessible socket
Can someone provide an example of the best way to do this?
I come from a Python/Nim background where things like this are easily done.
Edit:
How can I pass a socket as an argument to a function being called within a thread?
Ex.
fn main() {
    let mut s = BufferedStream::new(TcpStream::connect(server).unwrap());
    let thread = Thread::spawn(move || {
        func1(s, arg1, arg2);
    });
    while true {
        func2(s, arg1);
    }
}
Answer for updated question
We can use TcpStream::try_clone:
use std::io::Read;
use std::net::{TcpStream, Shutdown};
use std::thread;
use std::time::Duration;

fn main() {
    let mut stream = TcpStream::connect("127.0.0.1:34254").unwrap();
    let stream2 = stream.try_clone().unwrap();

    let _t = thread::spawn(move || {
        // close this stream after one second
        thread::sleep(Duration::from_secs(1));
        stream2.shutdown(Shutdown::Read).unwrap();
    });

    // wait for some data, will get canceled after one second
    let mut buf = [0];
    stream.read(&mut buf).unwrap();
}
Original answer
It's usually (let's say 99.9% of the time) a bad idea to have any global mutable state, if you can help it. Just do as you said: pass the socket to the functions that need it.
use std::io::{self, Write};
use std::net::TcpStream;

fn send_name(stream: &mut TcpStream) -> io::Result<()> {
    stream.write(&[42])?;
    Ok(())
}

fn send_number(stream: &mut TcpStream) -> io::Result<()> {
    stream.write(&[1, 2, 3])?;
    Ok(())
}

fn main() {
    let mut stream = TcpStream::connect("127.0.0.1:31337").unwrap();
    let r = send_name(&mut stream).and_then(|_| send_number(&mut stream));
    match r {
        Ok(..) => println!("Yay, sent!"),
        Err(e) => println!("Boom! {}", e),
    }
}
You could also pass the TcpStream to a struct that manages it, and thus gives you a place to put similar methods.
use std::io::{self, Write};
use std::net::TcpStream;

struct GameService {
    stream: TcpStream,
}

impl GameService {
    fn send_name(&mut self) -> io::Result<()> {
        self.stream.write(&[42])?;
        Ok(())
    }

    fn send_number(&mut self) -> io::Result<()> {
        self.stream.write(&[1, 2, 3])?;
        Ok(())
    }
}

fn main() {
    let stream = TcpStream::connect("127.0.0.1:31337").unwrap();
    let mut service = GameService { stream };
    let r = service.send_name().and_then(|_| service.send_number());
    match r {
        Ok(..) => println!("Yay, sent!"),
        Err(e) => println!("Boom! {}", e),
    }
}
None of this is really Rust-specific, these are generally-applicable programming practices.
