I want to write a SOCKS server which selects one of several internet gateways depending on the destination requested by the client. The general flow is:
1. Perform the SOCKS5 negotiation and derive the address information from the client
2. Ask an internal server to select the internet gateway and the destination's IP
3. Connect and do the communication
For this internal server, a Tokio task is spawned which waits on an mpsc queue. Each received message should contain the SOCKS5 address info and the tx side of a oneshot channel over which the result is sent back.
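For concreteness, here is a sketch of the message shape this implies; SocksAddr and GatewayChoice are hypothetical names and the error type is just a placeholder:

extern crate futures; // 0.1

use std::net::SocketAddr;
use futures::sync::oneshot;

// Hypothetical address type derived from the SOCKS5 negotiation.
enum SocksAddr {
    Ip(SocketAddr),
    Domain(String, u16),
}

// Hypothetical reply type: the selected gateway plus the resolved destination.
struct GatewayChoice {
    gateway: SocketAddr,
    destination: SocketAddr,
}

// One queue entry: the request data and the oneshot sender used for the reply.
struct GatewayRequest {
    addr: SocksAddr,
    reply: oneshot::Sender<Result<GatewayChoice, String>>,
}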
Another Tokio task just periodically queries the internal server:
extern crate futures;
extern crate tokio_core;
extern crate tokio_timer;

use std::time;
use std::time::{Duration, Instant};
use std::fmt::Debug;

use tokio_core::reactor::{Core, Interval};
use tokio_timer::wheel;
use futures::{Future, Sink, Stream};
use futures::sync::{mpsc, oneshot};

type MsgRequest<A, E> = oneshot::Sender<Result<A, E>>;
type FutRequest<A, E> = mpsc::Sender<MsgRequest<A, E>>;

#[derive(Debug)]
struct Responder<A, E> {
    fut_tx: FutRequest<A, E>,
}

impl<A: 'static, E: 'static> Responder<A, E>
where
    E: Debug,
{
    fn query(&self) -> Result<A, E> {
        println!("enter query");
        let (res_tx, res_rx) = oneshot::channel::<Result<A, E>>();
        println!("send query");
        let fut_tx = self.fut_tx.clone();
        let res = fut_tx
            .send(res_tx)
            .then(|tx| {
                if let Ok(_tx) = tx {
                    println!("Sink flushed");
                }
                res_rx
            })
            .and_then(|x| Ok(x))
            .wait()
            .unwrap();
        res
    }
}

impl<A: 'static, E: 'static> Clone for Responder<A, E> {
    fn clone(&self) -> Self {
        Responder {
            fut_tx: self.fut_tx.clone(),
        }
    }
}

fn resolve(tx: oneshot::Sender<Result<u8, String>>) -> Result<(), ()> {
    println!("resolve");
    let delay = time::Duration::from_secs(10);
    wheel()
        .build()
        .sleep(delay)
        .then(|_| tx.send(Ok(0)))
        .wait()
        .unwrap();
    println!("resolve answered");
    Ok(())
}

fn main() {
    let mut lp = Core::new().unwrap();
    let handle = lp.handle();
    let (fut_tx, fut_rx) = mpsc::channel::<MsgRequest<u8, String>>(100);

    let resolver = fut_rx.for_each(|msg| resolve(msg));
    handle.spawn(resolver);

    let responder = Responder { fut_tx };
    let server = Interval::new_at(Instant::now(), Duration::new(2, 0), &handle)
        .unwrap()
        .for_each(move |_| {
            println!("Call query for_each");
            let rx = responder.clone();
            let _res = rx.query();
            Ok(())
        })
        .map_err(|_| ());
    handle.spawn(server);

    loop {
        lp.turn(None);
    }
}
Using Cargo.toml dependencies:
[dependencies]
futures = "0.1"
tokio-core = "0.1"
tokio-timer = "0.1"
This code deadlocks. The output is:
Call query for_each
enter query
send query
Sink flushed
Expected output is:
Call query for_each
enter query
send query
Sink flushed
resolve
resolve answered
Call query for_each
enter query
send query
Sink flushed
resolve
resolve answered
....
This indicates that the request with the tx end has been successfully sent to the internal server, but the internal server does not process it. From my understanding, mpsc and oneshot channels can be used to transfer values between tasks, not only between threads, so the containing thread should not deadlock the way it does.
What's wrong here?
After reading Aaron's blog, the concept of futures is now clearer to me. My first approach is not demand-driven and thus inadequate: the function resolve() should actually return a future and not a result.
In order to close this question appropriately, here is my modified, further reduced minimal example to show the concept:
extern crate futures;
extern crate tokio_core;
extern crate tokio_timer;

use std::time;
use std::time::{Instant, Duration};

use tokio_core::reactor::{Core, Interval};
use tokio_timer::wheel;
use futures::{Future, Stream, Sink};
use futures::sync::{oneshot, mpsc};

type MsgRequest<A, E> = oneshot::Sender<Result<A, E>>;

fn main() {
    let mut lp = Core::new().unwrap();
    let handle = lp.handle();
    let (fut_tx, fut_rx) = mpsc::channel::<MsgRequest<u8, String>>(100);

    let handle2 = handle.clone();
    let resolver = fut_rx
        .and_then(move |tx| {
            println!("Got query...wait a bit");
            let delay = time::Duration::from_secs(5);
            handle2.spawn({
                wheel()
                    .build()
                    .sleep(delay)
                    .then(move |_| {
                        println!("Answer query");
                        tx.send(Ok(0)).unwrap();
                        println!("query answered");
                        Ok(())
                    })
            });
            Ok(())
        })
        .for_each(|_| Ok(()));
    handle.spawn(resolver);

    let server = Interval::new_at(Instant::now(), Duration::new(2, 0), &handle)
        .unwrap()
        .then(move |_| {
            let fut_tx = fut_tx.clone();
            let (res_tx, res_rx) = oneshot::channel::<Result<u8, String>>();
            println!("send query");
            fut_tx
                .send(res_tx)
                .then(|tx| {
                    if let Ok(_tx) = tx {
                        println!("Sink flushed");
                    }
                    res_rx
                })
        })
        .for_each(|res| {
            println!("Received result {:?}", res);
            Ok(())
        })
        .map_err(|_| ());
    handle.spawn(server);

    loop {
        lp.turn(None);
    }
}
It outputs as expected:
send query
Sink flushed
Got query...wait a bit
Answer query
query answered
Received result Ok(0)
send query
Sink flushed
Got query...wait a bit
Answer query
query answered
Received result Ok(0)
...
Related
I need a simple hyper server that serves a single request and then exits. This is my code so far. I believe all I need is a way to get tx into hello so I can use tx.send(()), and then it should work the way I want. However, I can't quite work out a way to do that without having the compiler yell at me.
use std::convert::Infallible;

use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};

async fn hello(_: Request<Body>) -> Result<Response<Body>, Infallible> {
    Ok(Response::new(Body::from("Hello World!")))
}

#[tokio::main]
pub async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let (tx, rx) = tokio::sync::oneshot::channel::<()>();

    let make_svc = make_service_fn(|_conn| {
        async { Ok::<_, Infallible>(service_fn(hello)) }
    });

    let addr = ([127, 0, 0, 1], 3000).into();
    let server = Server::bind(&addr).serve(make_svc);
    println!("Listening on http://{}", addr);

    let graceful = server.with_graceful_shutdown(async {
        rx.await.ok();
    });

    graceful.await?;
    Ok(())
}
Rust playground
Relevant crates:
tokio = { version = "0.2", features = ["full"] }
hyper = "0.13.7"
Since How to share mutable state for a Hyper handler? was answered, the hyper API has changed, and I am unable to get that code to compile when adapting it to the current version.
A straightforward solution would be to use global state for this, made possible by tokio's Mutex type, like so:
use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};
use lazy_static::lazy_static;
use std::convert::Infallible;
use std::sync::Arc;
use tokio::sync::oneshot::Sender;
use tokio::sync::Mutex;

lazy_static! {
    /// Channel used to send shutdown signal - wrapped in an Option to allow
    /// it to be taken by value (since oneshot channels consume themselves on
    /// send) and an Arc<Mutex> to allow it to be safely shared between threads
    static ref SHUTDOWN_TX: Arc<Mutex<Option<Sender<()>>>> = <_>::default();
}

async fn hello(_: Request<Body>) -> Result<Response<Body>, Infallible> {
    // Attempt to send a shutdown signal, if one hasn't already been sent
    if let Some(tx) = SHUTDOWN_TX.lock().await.take() {
        let _ = tx.send(());
    }

    Ok(Response::new(Body::from("Hello World!")))
}

#[tokio::main]
pub async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let (tx, rx) = tokio::sync::oneshot::channel::<()>();
    SHUTDOWN_TX.lock().await.replace(tx);

    let make_svc = make_service_fn(|_conn| async { Ok::<_, Infallible>(service_fn(hello)) });

    let addr = ([127, 0, 0, 1], 3000).into();
    let server = Server::bind(&addr).serve(make_svc);
    println!("Listening on http://{}", addr);

    let graceful = server.with_graceful_shutdown(async {
        rx.await.ok();
    });

    graceful.await?;
    Ok(())
}
In this version of the code, we store the sender half of the shutdown-signal channel in a global variable protected by a mutex, and then, on every request, attempt to take the sender out of the Option and consume it to send the signal.
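If a global feels heavyweight, the same Option-behind-a-Mutex idea can also be threaded through the service closures instead of living in a static. The following is an untested sketch of that variant (same hyper 0.13 / tokio 0.2 assumptions as above, with the Arc simply cloned into each closure); it is not part of the original answer:

use std::convert::Infallible;
use std::sync::Arc;

use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};
use tokio::sync::oneshot::Sender;
use tokio::sync::Mutex;

async fn hello(
    _: Request<Body>,
    shutdown_tx: Arc<Mutex<Option<Sender<()>>>>,
) -> Result<Response<Body>, Infallible> {
    // Take the sender (if it is still there) and fire the shutdown signal.
    if let Some(tx) = shutdown_tx.lock().await.take() {
        let _ = tx.send(());
    }
    Ok(Response::new(Body::from("Hello World!")))
}

#[tokio::main]
pub async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let (tx, rx) = tokio::sync::oneshot::channel::<()>();
    let shutdown_tx = Arc::new(Mutex::new(Some(tx)));

    let make_svc = make_service_fn(move |_conn| {
        // One clone per connection...
        let shutdown_tx = shutdown_tx.clone();
        async move {
            // ...and one clone per request.
            Ok::<_, Infallible>(service_fn(move |req| hello(req, shutdown_tx.clone())))
        }
    });

    let addr = ([127, 0, 0, 1], 3000).into();
    let server = Server::bind(&addr).serve(make_svc);
    println!("Listening on http://{}", addr);

    let graceful = server.with_graceful_shutdown(async {
        rx.await.ok();
    });

    graceful.await?;
    Ok(())
}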
I am trying to understand how futures::sync::mpsc::Receiver works. In the below example, the receiver thread sleeps for two seconds and the sender sends every second.
I expect the sender to be blocked because of the wait and then to send once buffer space is freed.
What I see instead is that it deadlocks after a while. Increasing the channel's buffer only extends the time until it blocks.
What should I do to make the sender send data when buffer space is available, applying backpressure to the sender in the meantime? futures::sync::mpsc::channel has its own documentation, but I do not understand how to use it properly.
extern crate futures;
extern crate tokio_core;

use std::{thread, time};

use futures::sync::mpsc;
use futures::{Future, Sink, Stream};
use tokio_core::reactor::Core;

#[derive(Debug)]
struct Stats {
    pub success: usize,
    pub failure: usize,
}

fn main() {
    let mut core = Core::new().expect("Failed to create core");
    let remote = core.remote();

    let (tx, rx) = mpsc::channel(1);

    thread::spawn(move || loop {
        let tx = tx.clone();
        let delay = time::Duration::from_secs(1);
        thread::sleep(delay);

        let f = ::futures::done::<(), ()>(Ok(()));
        remote.spawn(|_| {
            f.then(|res| {
                println!("Sending");
                tx.send(res).wait();
                println!("Sent");
                Ok(())
            })
        });
    });

    let mut stats = Stats {
        success: 0,
        failure: 0,
    };

    let f2 = rx.for_each(|res| {
        println!("Received");
        let delay = time::Duration::from_secs(2);
        thread::sleep(delay);
        match res {
            Ok(_) => stats.success += 1,
            Err(_) => stats.failure += 1,
        }
        println!("stats = {:?}", stats);
        Ok(())
    });

    core.run(f2).expect("Core failed to run");
}
Never call wait inside of a future. That's blocking, and blocking should never be done inside a future.
Never call sleep inside of a future. That's blocking, and blocking should never be done inside a future.
Channel backpressure is implemented by the fact that send consumes the Sender and returns a future. The future yields the Sender back to you when there is room in the queue.
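To see just that contract in isolation, here is a small self-contained sketch (futures 0.1; the buffer size and values are arbitrary):

extern crate futures; // 0.1

use futures::sync::mpsc;
use futures::{Future, Sink, Stream};

fn main() {
    let (tx, rx) = mpsc::channel::<u32>(8);

    // `send` consumes the Sender; the returned future resolves to the Sender
    // again once the message has been accepted, and that hand-off is where the
    // backpressure lives. (`wait` is fine here: we are on a plain thread, not
    // inside a task.)
    let tx = tx.send(1).and_then(|tx| tx.send(2)).wait().unwrap();
    drop(tx); // close the channel so the stream below terminates

    let received: Vec<u32> = rx.collect().wait().unwrap();
    assert_eq!(received, vec![1, 2]);
}

The full example below applies the same idea inside a task, using fold to keep threading the returned Sender through an Interval stream: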
extern crate futures; // 0.1.25
extern crate tokio; // 0.1.11

use futures::{future, sync::mpsc, Future, Sink, Stream};
use std::time::Duration;
use tokio::timer::Interval;

#[derive(Debug)]
struct Stats {
    pub success: usize,
    pub failure: usize,
}

fn main() {
    tokio::run(future::lazy(|| {
        let (tx, rx) = mpsc::channel::<Result<(), ()>>(1);

        tokio::spawn({
            Interval::new_interval(Duration::from_millis(10))
                .map_err(|e| panic!("Interval error: {}", e))
                .fold(tx, |tx, _| {
                    tx.send(Ok(())).map_err(|e| panic!("Send error: {}", e))
                })
                .map(drop) // discard the tx
        });

        let mut stats = Stats {
            success: 0,
            failure: 0,
        };

        let i = Interval::new_interval(Duration::from_millis(20))
            .map_err(|e| panic!("Interval error: {}", e));

        rx.zip(i).for_each(move |(res, _)| {
            println!("Received");
            match res {
                Ok(_) => stats.success += 1,
                Err(_) => stats.failure += 1,
            }
            println!("stats = {:?}", stats);
            Ok(())
        })
    }));
}
I'm trying to make a Stream that would wait until a specific character is in the buffer. I know there's read_until() on BufRead, but I actually need a custom solution, as this is a stepping stone to implementing waiting until a specific string is in the buffer (or, for example, a regexp match happens).
In my project, where I first encountered the problem, the issue was that future processing just hung when I got a Ready(_) from the inner future and returned NotReady from my function. I discovered I shouldn't do that per the docs (last paragraph). However, what I didn't get is what the actual alternative promised in that paragraph is. I read all the published documentation on the Tokio site and it doesn't make sense to me at the moment.
So the following is my current code. Unfortunately I couldn't make it simpler or smaller, as it's already broken. The current result is this:
Err(Custom { kind: Other, error: Error(Shutdown) })
Err(Custom { kind: Other, error: Error(Shutdown) })
Err(Custom { kind: Other, error: Error(Shutdown) })
<ad infinitum>
The expected result is getting some Ok(Ready(_)) out of it, printing W and W' while waiting for the specific character to appear in the buffer.
extern crate futures;
extern crate tokio_core;
extern crate tokio_io;
extern crate tokio_io_timeout;
extern crate tokio_process;

use futures::stream::poll_fn;
use futures::{Async, Poll, Stream};
use tokio_core::reactor::Core;
use tokio_io::AsyncRead;
use tokio_io_timeout::TimeoutReader;
use tokio_process::CommandExt;

use std::process::{Command, Stdio};
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

struct Process {
    child: tokio_process::Child,
    stdout: Arc<Mutex<tokio_io_timeout::TimeoutReader<tokio_process::ChildStdout>>>,
}

impl Process {
    fn new(
        command: &str,
        reader_timeout: Option<Duration>,
        core: &tokio_core::reactor::Core,
    ) -> Self {
        let mut cmd = Command::new(command);
        let cat = cmd.stdout(Stdio::piped());
        let mut child = cat.spawn_async(&core.handle()).unwrap();

        let stdout = child.stdout().take().unwrap();
        let mut timeout_reader = TimeoutReader::new(stdout);
        timeout_reader.set_timeout(reader_timeout);
        let timeout_reader = Arc::new(Mutex::new(timeout_reader));

        Self {
            child,
            stdout: timeout_reader,
        }
    }
}

fn work() -> Result<(), ()> {
    let window = Arc::new(Mutex::new(Vec::new()));

    let mut core = Core::new().unwrap();

    let process = Process::new("cat", Some(Duration::from_secs(20)), &core);
    let mark = Arc::new(Mutex::new(b'c'));

    let read_until_stream = poll_fn({
        let window = window.clone();
        let timeout_reader = process.stdout.clone();
        move || -> Poll<Option<u8>, std::io::Error> {
            let mut buf = [0; 8];
            let poll;
            {
                let mut timeout_reader = timeout_reader.lock().unwrap();
                poll = timeout_reader.poll_read(&mut buf);
            }
            match poll {
                Ok(Async::Ready(0)) => Ok(Async::Ready(None)),
                Ok(Async::Ready(x)) => {
                    let mut window = window.lock().unwrap();
                    println!("W: {:?}", *window);
                    println!("buf: {:?}", &buf[0..x]);
                    window.extend(buf[0..x].into_iter().map(|x| *x));
                    println!("W': {:?}", *window);
                    if let Some(_) = window.iter().find(|c| **c == *mark.lock().unwrap()) {
                        Ok(Async::Ready(Some(1)))
                    } else {
                        Ok(Async::NotReady)
                    }
                }
                Ok(Async::NotReady) => Ok(Async::NotReady),
                Err(e) => Err(e),
            }
        }
    });

    let _stream_thread = thread::spawn(move || {
        for o in read_until_stream.wait() {
            println!("{:?}", o);
        }
    });

    match core.run(process.child) {
        Ok(_) => {}
        Err(e) => {
            println!("Child error: {:?}", e);
        }
    }

    Ok(())
}

fn main() {
    work().unwrap();
}
This is the complete example project.
If you need more data, you need to call poll_read again, until you either find what you were looking for or poll_read returns NotReady.
You might want to avoid looping in one task for too long, so you can build yourself a yield_task function to call instead when poll_read didn't return NotReady; it makes sure your task gets called again as soon as possible, after other pending tasks have run.
To use it, just write return yield_task();.
fn yield_inner() {
    use futures::task;
    task::current().notify();
}

#[inline(always)]
pub fn yield_task<T, E>() -> Poll<T, E> {
    yield_inner();
    Ok(Async::NotReady)
}
Also see futures-rs#354: Handle long-running, always-ready futures fairly.
With the new async/await API, futures::task::current is gone; instead you'll need a std::task::Context reference, which is provided as a parameter to the new std::future::Future::poll trait method.
If you're already manually implementing the std::future::Future trait you can simply insert:
context.waker().wake_by_ref();
return std::task::Poll::Pending;
Or build yourself a Future-implementing type that yields exactly once:
pub struct Yield {
    ready: bool,
}

impl core::future::Future for Yield {
    type Output = ();

    fn poll(
        self: core::pin::Pin<&mut Self>,
        cx: &mut core::task::Context<'_>,
    ) -> core::task::Poll<Self::Output> {
        let this = self.get_mut();
        if this.ready {
            core::task::Poll::Ready(())
        } else {
            cx.waker().wake_by_ref();
            this.ready = true; // ready next round
            core::task::Poll::Pending
        }
    }
}

pub fn yield_task() -> Yield {
    Yield { ready: false }
}
And then use it in async code like this:
yield_task().await;
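As an aside (not part of the original answer): recent tokio releases ship a ready-made equivalent, tokio::task::yield_now, so on a tokio runtime the hand-rolled Yield type above is usually unnecessary. A minimal sketch, assuming tokio 1.x with the macros and rt features enabled (e.g. features = ["full"]):

#[tokio::main]
async fn main() {
    for chunk in 0..3 {
        // ... do a bounded amount of work for this chunk ...
        println!("processed chunk {}", chunk);

        // Hand control back to the scheduler so other tasks get a chance to run.
        tokio::task::yield_now().await;
    }
}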
I'm experimenting with the futures API using the websocket library. I have this code:
use futures::future::Future;
use futures::future;
use futures::sink::Sink;
use futures::stream::Stream;
use futures::sync::mpsc::channel;
use futures::sync::mpsc::{Sender, Receiver};
use tokio_core::reactor::Core;
use websocket::{ClientBuilder, OwnedMessage};

pub fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();
    let handle_clone = handle.clone();
    let (send, recv): (Sender<String>, Receiver<String>) = channel(100);

    let f = ClientBuilder::new("wss://...")
        .unwrap()
        .async_connect(None, &handle_clone)
        .map_err(|e| println!("error: {:?}", e))
        .map(|(duplex, _)| duplex.split())
        .and_then(move |(sink, stream)| {
            // this task consumes the channel and writes messages to the websocket
            handle_clone.spawn(future::loop_fn(recv, |recv: Receiver<String>| {
                sink.send(OwnedMessage::Close(None))
                    .and_then(|_| future::ok(future::Loop::Break(())))
                    .map_err(|_| ())
            }));
            // the main task listens on the socket
            future::loop_fn(stream, |stream| {
                stream
                    .into_future()
                    .and_then(|_| future::ok(future::Loop::Break(())))
                    .map_err(|_| ())
            })
        });

    loop {
        core.turn(None)
    }
}
After connecting to the server, I want to run "listener" and "sender" tasks without one blocking the other. The problem is that I can't use sink in the new task; it fails with:
error[E0507]: cannot move out of captured outer variable in an `FnMut` closure
--> src/slack_conn.rs:29:17
|
25 | .and_then(move |(sink, stream)| {
| ---- captured outer variable
...
29 | sink.send(OwnedMessage::Close(None))
| ^^^^ cannot move out of captured outer variable in an `FnMut` closure
I could directly use duplex to send and receive, but that leads to worse errors.
Any ideas on how to make this work? Indeed, I'd be happy with any futures code that allows me to non-blockingly connect to a server and spawn two async tasks:
one that reads from the connection and takes some action (prints to screen etc.)
one that reads from an mpsc channel and writes to the connection
It's fine if I have to write it in a different style.
SplitSink implements Sink, which defines send to take ownership of self:
fn send(self, item: Self::SinkItem) -> Send<Self>
where
Self: Sized,
On the other hand, loop_fn requires that the closure be able to be called multiple times. These two things are fundamentally incompatible — how can you call something multiple times which requires consuming a value?
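For illustration only (and not the approach taken in the code below), one way to reconcile the two is to make the sink part of loop_fn's state, so each iteration receives it back from the future that send returns. A rough, untested sketch with simplified item and error types:

extern crate futures; // 0.1

use futures::future::{self, Either, Loop};
use futures::sync::mpsc;
use futures::{Future, Sink, Stream};

// Hypothetical helper: forward everything from `rx` into `sink` by threading
// the sink through the loop state instead of capturing it in the closure.
fn forward<S>(sink: S, rx: mpsc::Receiver<String>) -> impl Future<Item = (), Error = ()>
where
    S: Sink<SinkItem = String, SinkError = ()>,
{
    future::loop_fn((sink, rx), |(sink, rx)| {
        rx.into_future().map_err(|_| ()).and_then(|(item, rx)| {
            match item {
                // Got a message: send it, get the sink back, keep looping.
                Some(msg) => Either::A(
                    sink.send(msg).map(move |sink| Loop::Continue((sink, rx))),
                ),
                // Channel closed: stop the loop.
                None => Either::B(future::ok(Loop::Break(()))),
            }
        })
    })
}

fn main() {
    let (tx, rx) = mpsc::channel::<String>(8);
    let (out_tx, out_rx) = mpsc::channel::<String>(8);

    // Feed one message, then close the input channel by dropping `tx`.
    let tx = tx.send("hello".to_string()).wait().unwrap();
    drop(tx);

    // Run the forwarder to completion (fine here: we are not inside a task).
    forward(out_tx.sink_map_err(|_| ()), rx).wait().unwrap();

    let forwarded: Vec<String> = out_rx.collect().wait().unwrap();
    assert_eq!(forwarded, vec!["hello".to_string()]);
}

The actual answer below sidesteps the manual loop entirely by using send_all.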
Here's a completely untested piece of code that compiles — I don't have rogue WebSocket servers lying about.
#[macro_use]
extern crate quick_error;
extern crate futures;
extern crate tokio_core;
extern crate websocket;

use futures::{Future, Stream, Sink};
use futures::sync::mpsc::channel;
use tokio_core::reactor::Core;
use websocket::ClientBuilder;

pub fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();
    let (send, recv) = channel(100);

    let f = ClientBuilder::new("wss://...")
        .unwrap()
        .async_connect(None, &handle)
        .from_err::<Error>()
        .map(|(duplex, _)| duplex.split())
        .and_then(|(sink, stream)| {
            let reader = stream
                .for_each(|i| {
                    println!("Read a {:?}", i);
                    Ok(())
                })
                .from_err();

            let writer = sink
                .sink_from_err()
                .send_all(recv.map_err(Error::Receiver))
                .map(|_| ());

            reader.join(writer)
        });

    drop(send); // Close the sending channel manually

    core.run(f).expect("Unable to run");
}

quick_error! {
    #[derive(Debug)]
    pub enum Error {
        WebSocket(err: websocket::WebSocketError) {
            from()
            description("websocket error")
            display("WebSocket error: {}", err)
            cause(err)
        }
        Receiver(err: ()) {
            description("receiver error")
            display("Receiver error")
        }
    }
}
The points that stuck out during implementation were:
everything has to become a Future eventually
it's way easier to define an error type and convert to it
Knowing if the Item and Error associated types were "right" was tricky. I ended up doing a lot of "type assertions" ({ let x: &Future<Item = (), Error = ()> = &reader; }).
I am a beginner in Rust.
I have a long-running IO-bound process that I want to spawn and monitor via a REST API. I chose Iron for that, following this tutorial. Monitoring means getting its progress and its final result.
When I spawn it, I give it an id and map that id to a resource that I can GET to get the progress. I don't have to be exact with the progress; I can report the progress from 5 seconds ago.
My first attempt was to have a channel via which I send requests for progress and receive the status. I got stuck on where to store the receiver, as in my understanding it belongs to one thread only. I wanted to put it in the context of the request, but that won't work, as different threads handle subsequent requests.
What would be the idiomatic way to do this in Rust?
I have a sample project.
Later edit:
Here is a self-contained example which follows the same principle as the answer, namely a map where each thread updates its progress:
extern crate iron;
extern crate router;
extern crate rustc_serialize;

use iron::prelude::*;
use iron::status;
use router::Router;
use rustc_serialize::json;

use std::io::Read;
use std::sync::{Mutex, Arc};
use std::thread;
use std::time::Duration;
use std::collections::HashMap;

#[derive(Debug, Clone, RustcEncodable, RustcDecodable)]
pub struct Status {
    pub progress: u64,
    pub context: String
}

#[derive(RustcEncodable, RustcDecodable)]
struct StartTask {
    id: u64
}

fn start_process(status: Arc<Mutex<HashMap<u64, Status>>>, task_id: u64) {
    let c = status.clone();
    thread::spawn(move || {
        for i in 1..100 {
            {
                let m = &mut c.lock().unwrap();
                m.insert(task_id, Status { progress: i, context: "in progress".to_string() });
            }
            thread::sleep(Duration::from_secs(1));
        }
        let m = &mut c.lock().unwrap();
        m.insert(task_id, Status { progress: 100, context: "done".to_string() });
    });
}

fn main() {
    let status: Arc<Mutex<HashMap<u64, Status>>> = Arc::new(Mutex::new(HashMap::new()));
    let status_clone: Arc<Mutex<HashMap<u64, Status>>> = status.clone();

    let mut router = Router::new();
    router.get("/:taskId", move |r: &mut Request| task_status(r, &status.lock().unwrap()));
    router.post("/start", move |r: &mut Request| start_task(r, status_clone.clone()));

    fn task_status(req: &mut Request, statuses: &HashMap<u64, Status>) -> IronResult<Response> {
        let ref task_id = req.extensions.get::<Router>().unwrap().find("taskId").unwrap_or("/").parse::<u64>().unwrap();
        let payload = json::encode(&statuses.get(&task_id)).unwrap();
        Ok(Response::with((status::Ok, payload)))
    }

    // Receive a message by POST and play it back.
    fn start_task(request: &mut Request, statuses: Arc<Mutex<HashMap<u64, Status>>>) -> IronResult<Response> {
        let mut payload = String::new();
        request.body.read_to_string(&mut payload).unwrap();
        let task_start_request: StartTask = json::decode(&payload).unwrap();
        start_process(statuses, task_start_request.id);
        Ok(Response::with((status::Ok, json::encode(&task_start_request).unwrap())))
    }

    Iron::new(router).http("localhost:3000").unwrap();
}
One possibility is to use a global HashMap that associates each worker id with its progress (and result). Here is a simple example (without the REST stuff):
#[macro_use]
extern crate lazy_static;

use std::sync::Mutex;
use std::collections::HashMap;
use std::thread;
use std::time::Duration;

lazy_static! {
    static ref PROGRESS: Mutex<HashMap<usize, usize>> = Mutex::new(HashMap::new());
}

fn set_progress(id: usize, progress: usize) {
    // insert replaces the old value if there was one.
    PROGRESS.lock().unwrap().insert(id, progress);
}

fn get_progress(id: usize) -> Option<usize> {
    PROGRESS.lock().unwrap().get(&id).cloned()
}

fn work(id: usize) {
    println!("Creating {}", id);
    set_progress(id, 0);
    for i in 0..100 {
        set_progress(id, i + 1);
        // simulates work
        thread::sleep(Duration::new(0, 50_000_000));
    }
}

fn monitor(id: usize) {
    loop {
        if let Some(p) = get_progress(id) {
            if p == 100 {
                println!("Done {}", id);
                // to avoid leaks, remove id from PROGRESS.
                // maybe save that the task ended in a database.
                return
            } else {
                println!("Progress {}: {}", id, p);
            }
        }
        thread::sleep(Duration::new(1, 0));
    }
}

fn main() {
    let w = thread::spawn(|| work(1));
    let m = thread::spawn(|| monitor(1));
    w.join().unwrap();
    m.join().unwrap();
}
You need to register one channel per request thread, because if cloning Receivers were possible, the responses might end up on the wrong thread when two requests are running at the same time.
Instead of having your thread create a channel for answering requests, use a future. A future allows you to have a handle to an object before the object exists. You can change the input channel to receive a Promise, which you then fulfill; no output channel is necessary.
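A minimal sketch of that idea, assuming futures 0.1 for the oneshot "promise" and plain std threads standing in for the Iron handler threads (the Query and Progress types are made up for illustration):

extern crate futures; // 0.1

use std::sync::mpsc;
use std::thread;

use futures::sync::oneshot;
use futures::Future;

// Hypothetical request and response types.
struct Query(u64);
struct Progress(u64);

fn main() {
    // The worker owns the receiving end; every message carries the query and a
    // oneshot sender (the "promise") that the worker fulfills with the answer.
    let (req_tx, req_rx) = mpsc::channel::<(Query, oneshot::Sender<Progress>)>();

    let worker = thread::spawn(move || {
        for (Query(id), promise) in req_rx {
            // Fulfill the promise; ignore the error if the requester went away.
            let _ = promise.send(Progress(id * 10));
        }
    });

    // A request handler (e.g. an Iron handler on its own thread) creates the
    // promise, sends the query, and blocks until the promise is fulfilled.
    let (ans_tx, ans_rx) = oneshot::channel();
    req_tx.send((Query(1), ans_tx)).unwrap();
    let Progress(p) = ans_rx.wait().unwrap();
    println!("progress = {}", p);

    drop(req_tx); // close the request channel so the worker exits
    worker.join().unwrap();
}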