How do I use a channel receiver inside a nested closure?

I want to share some data between an async function and another async function running on a separate thread. In this case it's a cron job.
This is the solution I've come up with so far:
main.rs
use tokio::sync::mpsc;
use tokio::task;
use tokio_cron_scheduler::{Job, JobScheduler};
#[tokio::main]
pub async fn main() {
    let (tx, mut rx): (mpsc::Sender<i16>, mpsc::Receiver<i16>) = mpsc::channel(1000);

    task::spawn(async {
        let mut sched = JobScheduler::new();

        let job = Job::new_async("0 0/5 * * * *", |_uuid, _l| {
            Box::pin(async move {
                let devices = rx.recv().await.unwrap();
                api::insert_datapoint(devices).await;
            })
        }).unwrap();

        sched.add(job).expect("failed adding job to scheduler");
        sched.start().await.expect("failed starting scheduler");
    });
}
but I'm getting an error that prevents me from doing so:
cannot move out of `rx`, a captured variable in an `FnMut` closure
move out of `rx` occurs here (E0507)
Is there a way to solve this? Is my approach of using a channel for this task inherently wrong?
EDIT:
I've tried adding the move keyword to the outermost closure; however, this results in the same error.
Adding move before |_uuid, _l| does not make any difference either.

I ended up getting it working thanks to cdhowie's suggestion:
use tokio::task;
use tokio_cron_scheduler::{Job, JobScheduler};
#[tokio::main]
pub async fn main() {
    let (tx, rx): (async_channel::Sender<i16>, async_channel::Receiver<i16>) = async_channel::bounded(1000);

    task::spawn(async move {
        let mut sched = JobScheduler::new();

        let job = Job::new_async("0 0/5 * * * *", move |_uuid, _l| {
            // clone the receiver for this particular invocation of the job
            let rx = rx.clone();
            Box::pin(async move {
                if let Ok(devices) = rx.recv().await {
                    api::insert_datapoint(devices).await;
                }
            })
        }).unwrap();

        sched.add(job).expect("failed adding job to scheduler");
        sched.start().await.expect("failed starting scheduler");
    });
}
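For completeness, the same clone-per-invocation pattern should also work while staying on tokio's own mpsc by wrapping the receiver in an Arc<tokio::sync::Mutex<_>> and locking it inside each run. This is only a rough sketch (api::insert_datapoint and the schedule string are taken from the question; the rest is untested against tokio_cron_scheduler):

use std::sync::Arc;
use tokio::sync::{mpsc, Mutex};
use tokio_cron_scheduler::Job;

// Sketch: share a tokio mpsc::Receiver between job runs by cloning an Arc
// around it instead of cloning the receiver itself (which tokio does not allow).
fn make_job(rx: Arc<Mutex<mpsc::Receiver<i16>>>) -> Job {
    Job::new_async("0 0/5 * * * *", move |_uuid, _l| {
        let rx = Arc::clone(&rx); // cheap handle clone per invocation
        Box::pin(async move {
            // hold the lock only for the duration of this run
            if let Some(devices) = rx.lock().await.recv().await {
                api::insert_datapoint(devices).await;
            }
        })
    })
    .unwrap()
}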

Related

How can I run asynchronous tasks on a single thread in order?

I am working on a program using rust-tokio for asynchronous execution. The main function periodically calls a function that appends to a CSV file to log operations over time.
I would like to make the CSV creation function asynchronous and run it as a separate task so I can continue the main function if CSV creation is taking some time (like waiting for another application like Excel to release it).
Is there an elegant way to accomplish this?
LocalSet almost seems like it would do the job, but the tasks need to execute in order so the CSV is chronological. To me, the documentation doesn't seem to guarantee this.
Here's some pseudo code to illustrate the idea. Essentially, I'm thinking a queue of tasks that need to be completed.
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let local = task::LocalSetOrdered::new(); // This is a fictitious struct
    let mut data: usize = 10; // For simplicity, just store a single number
    loop {
        // Some operations here
        data = data + 1;
        let data_clone = data.clone();
        // Add a new task to complete after all prior tasks
        local.push(async move {
            match append_to_csv(data_clone).await {
                Ok(_) => Ok(()),
                Err(_) => Err(()),
            }
        });
        sleep(Duration::from_secs(60)).await;
    }
    Ok(())
}

async fn append_to_csv(data_in: usize) -> Result<(), Box<dyn Error>> {
    loop {
        let file = match OpenOptions::new().write(true).append(true).open(filename) {
            Ok(f) => f,
            Err(_) => {
                // Error opening the file, try again
                sleep(Duration::from_secs(60)).await;
                continue;
            }
        };
        let mut wtr = csv::Writer::from_writer(file);
        let date_time = Utc::now();
        wtr.write_field(format!("{}", date_time.format("%Y/%m/%d %H:%M")))?;
        wtr.write_field(format!("{}", data_in))?;
        wtr.write_record(None::<&[u8]>)?; // Finish the line
        return Ok(()); // Done with this append; exit the retry loop
    }
}
You could use a worker task to write to the CSV file, and a channel to pass it the data to be written:
use tokio::sync::mpsc::{channel, Receiver};

#[derive(Debug)]
pub struct CsvData(i32, &'static str);

async fn append_to_csv(mut rx: Receiver<CsvData>) {
    let mut wtr = csv::Writer::from_writer(std::io::stdout());
    while let Some(data) = rx.recv().await {
        wtr.write_record([&data.0.to_string(), data.1]).unwrap();
        wtr.flush().unwrap();
    }
}

#[tokio::main]
async fn main() {
    let (tx, rx) = channel(10);
    tokio::spawn(async {
        append_to_csv(rx).await;
    });
    for i in 0.. {
        tx.send(CsvData(i, "Hello world")).await.unwrap();
    }
}
The channel sender can be cloned if you need to write data sourced from multiple tasks.
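For instance, a second producer task could get its own clone of tx; a rough sketch reusing the CsvData type and the append_to_csv worker from above:

#[tokio::main]
async fn main() {
    let (tx, rx) = channel(10);
    let worker = tokio::spawn(append_to_csv(rx));

    // each producer owns a clone of the sender
    let tx2 = tx.clone();
    let producer_a = tokio::spawn(async move {
        for i in 0..10 {
            tx2.send(CsvData(i, "from task A")).await.unwrap();
        }
    });
    let producer_b = tokio::spawn(async move {
        for i in 0..10 {
            tx.send(CsvData(i, "from task B")).await.unwrap();
        }
    });

    // once both producers finish, every sender is dropped, recv() yields None,
    // and the worker task exits on its own
    producer_a.await.unwrap();
    producer_b.await.unwrap();
    worker.await.unwrap();
}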

Can tokio::select! allow defining an arbitrary number of branches?

I am writing an echo server that is able to listen on multiple ports. My working server below relies on select! to accept connections from two different listeners.
However, instead of defining the listeners as individual variables, is it possible to define select branches based on a Vec<TcpListener>?
use tokio::{io, net, select, spawn};

#[tokio::main]
async fn main() {
    let listener1 = net::TcpListener::bind("127.0.0.1:8001").await.unwrap();
    let listener2 = net::TcpListener::bind("127.0.0.1:8002").await.unwrap();

    loop {
        let (conn, _) = select! {
            v = listener1.accept() => v.unwrap(),
            v = listener2.accept() => v.unwrap(),
        };
        spawn(handle(conn));
    }
}

async fn handle(mut conn: net::TcpStream) {
    let (mut read, mut write) = conn.split();
    io::copy(&mut read, &mut write).await.unwrap();
}
While futures::future::select_all() works, it is not very elegant (IMHO) and it creates an allocation for each round. A better solution is to use streams (note this also allocates on each round, but this allocates much less):
use tokio::{io, net, spawn};
use tokio_stream::wrappers::TcpListenerStream;
use futures::stream::{StreamExt, SelectAll};

#[tokio::main]
async fn main() {
    let mut listeners = SelectAll::new();
    listeners.push(TcpListenerStream::new(net::TcpListener::bind("127.0.0.1:8001").await.unwrap()));
    listeners.push(TcpListenerStream::new(net::TcpListener::bind("127.0.0.1:8002").await.unwrap()));

    while let Some(conn) = listeners.next().await {
        let conn = conn.unwrap();
        spawn(handle(conn));
    }
}
You can use the select_all function from the futures crate, which takes an iterator of futures and awaits any of them (instead of all of them, like join_all does):
use futures::{future::select_all, FutureExt};
use tokio::{io, net, select, spawn};

#[tokio::main]
async fn main() {
    let mut listeners = [
        net::TcpListener::bind("127.0.0.1:8001").await.unwrap(),
        net::TcpListener::bind("127.0.0.1:8002").await.unwrap(),
    ];

    loop {
        let (result, index, _) = select_all(
            listeners
                .iter_mut()
                // note: `FutureExt::boxed` is called here because `select_all`
                // requires the futures to be pinned
                .map(|listener| listener.accept().boxed()),
        )
        .await;
        let (conn, _) = result.unwrap();
        spawn(handle(conn));
    }
}

deno_runtime: running multiple invocations on a single worker concurrently

I'm trying to run multiple invocations of the same script on a single deno MainWorker concurrently and wait for their results (since the scripts can be async). Conceptually, I want something like the loop in run_worker below.
type Tx = Sender<(String, Sender<String>)>;
type Rx = Receiver<(String, Sender<String>)>;

struct Runner {
    worker: MainWorker,
    futures: FuturesUnordered<Pin<Box<dyn Future<Output = (String, Result<Global<Value>, Error>)>>>>,
    response_futures: FuturesUnordered<Pin<Box<dyn Future<Output = (String, Result<(), SendError<String>>)>>>>,
    result_senders: HashMap<String, Sender<String>>,
}

impl Runner {
    fn new() ...

    async fn run_worker(&mut self, rx: &mut Rx, main_module: ModuleSpecifier, user_module: ModuleSpecifier) {
        self.worker.execute_main_module(&main_module).await.unwrap();
        self.worker.preload_side_module(&user_module).await.unwrap();
        loop {
            tokio::select! {
                msg = rx.recv() => {
                    if let Some((id, sender)) = msg {
                        let global = self.worker.js_runtime.execute_script("test", "mod.entry()").unwrap();
                        self.result_senders.insert(id, sender);
                        self.futures.push(Box::pin(async {
                            let resolved = self.worker.js_runtime.resolve_value(global).await;
                            return (id, resolved);
                        }));
                    }
                },
                script_result = self.futures.next() => {
                    if let Some((id, out)) = script_result {
                        self.response_futures.push(Box::pin(async {
                            let value = deserialize_value(out.unwrap(), &mut self.worker);
                            let res = self.result_senders.remove(&id).unwrap().send(value).await;
                            return (id.clone(), res);
                        }));
                    }
                },
                // also handle response_futures here
                else => break,
            }
        }
    }
}
The worker can't be borrowed mutably multiple times, so this won't work. So the worker has to be wrapped in a RefCell, and I've created a BorrowingFuture:
struct BorrowingFuture {
    worker: RefCell<MainWorker>,
    global: Global<Value>,
    id: String,
}
And its poll implementation:
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
    match Pin::new(&mut Box::pin(self.worker.borrow_mut().js_runtime.resolve_value(self.global.clone()))).poll(cx) {
        Poll::Ready(result) => Poll::Ready((self.id.clone(), result)),
        Poll::Pending => {
            cx.waker().clone().wake_by_ref();
            Poll::Pending
        }
    }
}
So the above
self.futures.push(Box::pin(async {
    let resolved = self.worker.js_runtime.resolve_value(global).await;
    return (id, resolved);
}));
would become
self.futures.push(Box::pin(BorrowingFuture { worker: self.worker, global: global.clone(), id: id.clone() }));
and this would have to be done for the response_futures above as well.
But I see a few issues with this.
Creating a new future on every poll and then polling that seems wrong, but it does work.
It probably has a performance impact because new objects are created constantly.
The same issue would happen for the response futures, which would call send on each poll, which seems completely wrong.
The waker.wake_by_ref is called on every poll, because there is no way to know when a script result
will resolve. This results in the future being polled thousands (and more) times per second (always creating a new object),
which could be the same as checking it in a loop, I guess.
Note: My current setup doesn't use select!, but an enum as Output from multiple Future implementations, pushed into a single FuturesUnordered, and then matched to handle the correct type (script, send, receive). I used select here because it's far less verbose and gets the point across.
Is there a way to do this better/more efficiently? Or is it just not the way MainWorker was meant to be used?
main for completeness:
#[tokio::main]
async fn main() {
    let main_module = deno_runtime::deno_core::resolve_url(MAIN_MODULE_SPECIFIER).unwrap();
    let user_module = deno_runtime::deno_core::resolve_url(USER_MODULE_SPECIFIER).unwrap();
    let (tx, mut rx) = channel(1);
    let (result_tx, mut result_rx) = channel(1);

    let handle = thread::spawn(move || {
        let runtime = tokio::runtime::Builder::new_multi_thread().enable_all().build().unwrap();
        let mut runner = Runner::new();
        runtime.block_on(runner.run_worker(&mut rx, main_module, user_module));
    });

    tx.send(("test input".to_string(), result_tx)).await.unwrap();
    let result = result_rx.recv().await.unwrap();
    println!("result from worker {}", result);
    handle.join().unwrap();
}

How to connect a bevy game to an external TCP server using tokio's async TcpStream?

I want to send Events between the game client and server and I already got it working, but I do not know how to do it with bevy.
I depend on tokio's async TcpStream, because I have to be able to split the stream into an OwnedWriteHalf and OwnedReadHalf using stream.into_split().
My first idea was to just spawn a thread that handles the connection and then sends the received events to a queue using mpsc::channel.
Then I include this queue in a bevy resource using app.insert_resource(Queue) and pull events from it in the game loop.
the Queue:
use tokio::sync::mpsc;

pub enum Instruction {
    Push(GameEvent),
    Pull(mpsc::Sender<Option<GameEvent>>),
}

#[derive(Clone, Debug)]
pub struct Queue {
    sender: mpsc::Sender<Instruction>,
}

impl Queue {
    pub fn init() -> Self {
        let (tx, rx) = mpsc::channel(1024);
        init(rx);
        Self { sender: tx }
    }

    pub async fn send(&self, event: GameEvent) {
        self.sender.send(Instruction::Push(event)).await.unwrap();
    }

    pub async fn pull(&self) -> Option<GameEvent> {
        println!("new pull");
        let (tx, mut rx) = mpsc::channel(1);
        self.sender.send(Instruction::Pull(tx)).await.unwrap();
        rx.recv().await.unwrap()
    }
}

fn init(mut rx: mpsc::Receiver<Instruction>) {
    tokio::spawn(async move {
        let mut queue: Vec<GameEvent> = Vec::new();
        loop {
            match rx.recv().await.unwrap() {
                Instruction::Push(ev) => {
                    queue.push(ev);
                }
                Instruction::Pull(sender) => {
                    sender.send(queue.pop()).await.unwrap();
                }
            }
        }
    });
}
But because all of this has to be async, I have to block on the pull() function in the sync game loop.
I do this using the futures-lite crate:
fn event_pull(
    communication: Res<Communication>
) {
    let ev = future::block_on(communication.event_queue.pull());
    println!("got event: {:?}", ev);
}
And this works fine, BUT after around 5 seconds the whole program just halts and does not receive any more events.
It seems that future::block_on() blocks indefinitely.
Having the main function, in which bevy::prelude::App gets built and run, be the async tokio::main function might also be a problem here.
It would probably be best to wrap the async TcpStream initialisation and tokio::sync::mpsc::Sender, and thus also Queue::pull, into synchronous functions, but I do not know how to do this.
Can anyone help?
How to reproduce
The repo can be found here
Just compile both server and client and then run both, in that order.
I got it to work by replacing every tokio::sync::mpsc with crossbeam::channel (which might be a problem, as it does block) and by manually initializing the tokio runtime.
The init code now looks like this:
pub struct Communicator {
    pub event_bridge: bridge::Bridge,
    pub event_queue: event_queue::Queue,
    _runtime: Runtime,
}

impl Communicator {
    pub fn init(ip: &str) -> Self {
        let rt = tokio::runtime::Builder::new_multi_thread()
            .enable_io()
            .build()
            .unwrap();

        let (bridge, queue, game_rx) = rt.block_on(async move {
            let socket = TcpStream::connect(ip).await.unwrap();
            let (read, write) = socket.into_split();
            let reader = TcpReader::new(read);
            let writer = TcpWriter::new(write);
            let (bridge, tcp_rx, game_rx) = bridge::Bridge::init();
            reader::init(bridge.clone(), reader);
            writer::init(tcp_rx, writer);
            let event_queue = event_queue::Queue::init();
            return (bridge, event_queue, game_rx);
        });

        // forward events from game_rx to the queue for the game loop
        let eq_clone = queue.clone();
        rt.spawn(async move {
            loop {
                let event = game_rx.recv().unwrap();
                eq_clone.send(event);
            }
        });

        Self {
            event_bridge: bridge,
            event_queue: queue,
            _runtime: rt,
        }
    }
}
And main.rs looks like this:
fn main() {
    let communicator = communication::Communicator::init("0.0.0.0:8000");
    communicator.event_bridge.push_tcp(TcpEvent::Register { name: String::from("luca") });

    App::new()
        .insert_resource(communicator)
        .add_system(event_pull)
        .add_plugins(DefaultPlugins)
        .run();
}

fn event_pull(
    communication: Res<communication::Communicator>
) {
    let ev = communication.event_queue.pull();
    if let Some(ev) = ev {
        println!("got event: {:?}", ev);
    }
}
Perhaps there might be a better solution.
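One possible improvement, if blocking inside the system is a concern, would be to expose the receiving end of the channel directly as a bevy resource and drain it with try_recv, which never blocks. This is only a rough sketch assuming a crossbeam channel carrying the GameEvent type from the question; the EventReceiver wrapper is made up for illustration:

use bevy::prelude::*;
use crossbeam::channel::Receiver;

// hypothetical resource wrapping the receiving half of the network channel
pub struct EventReceiver(pub Receiver<GameEvent>);

// drain everything that arrived since the last frame, without ever blocking
fn event_pull(receiver: Res<EventReceiver>) {
    while let Ok(ev) = receiver.0.try_recv() {
        println!("got event: {:?}", ev);
    }
}

Depending on the bevy version, the resource type may also need #[derive(Resource)].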

tokio::select! but for a Vec of futures

I have a Vec of futures which I want to execute concurrently (but not necessarily in parallel). Basically, I'm looking for some kind of select function that is similar to tokio::select! but takes a collection of futures, or, conversely, a function that is similar to futures::join_all but returns once the first future is done.
An additional requirement is that once a future finished I might want to add a new future to the Vec.
With such a function, my code would roughly look like this:
use std::future::Future;
use std::time::Duration;
use tokio::time::sleep;

async fn wait(millis: u64) -> u64 {
    sleep(Duration::from_millis(millis)).await;
    millis
}

// This pseudo-implementation simply removes the last
// future and awaits it. I'm looking for something that
// instead polls all futures until one is finished, then
// removes that future from the Vec and returns it.
async fn select<F, O>(futures: &mut Vec<F>) -> O
where
    F: Future<Output = O>,
{
    let future = futures.pop().unwrap();
    future.await
}

#[tokio::main]
async fn main() {
    let mut futures = vec![
        wait(500),
        wait(300),
        wait(100),
        wait(200),
    ];

    while !futures.is_empty() {
        let finished = select(&mut futures).await;
        println!("Waited {}ms", finished);
        if some_condition() {
            futures.push(wait(200));
        }
    }
}
This is exactly what futures::stream::FuturesUnordered is for (which I've found by looking through the source of StreamExt::for_each_concurrent):
use futures::{stream::FuturesUnordered, StreamExt};
use std::time::Duration;
use tokio::time::{sleep, Instant};

async fn wait(millis: u64) -> u64 {
    sleep(Duration::from_millis(millis)).await;
    millis
}

#[tokio::main]
async fn main() {
    let mut futures = FuturesUnordered::new();
    futures.push(wait(500));
    futures.push(wait(300));
    futures.push(wait(100));
    futures.push(wait(200));

    let start_time = Instant::now();
    let mut num_added = 0;
    while let Some(wait_time) = futures.next().await {
        println!("Waited {}ms", wait_time);
        if num_added < 3 {
            num_added += 1;
            futures.push(wait(200));
        }
    }

    println!("Completed all work in {}ms", start_time.elapsed().as_millis());
}
(playground)
Here's a working prototype based on streams and StreamExt::for_each_concurrent, as Martin Gallagher has suggested in a comment:
use std::time::Duration;
use tokio::sync::RwLock;
use tokio::time::sleep;
use futures::stream::{self, StreamExt};
use futures::{channel::mpsc, sink::SinkExt};

async fn wait(millis: u64) -> u64 {
    sleep(Duration::from_millis(millis)).await;
    millis
}

#[tokio::main]
async fn main() {
    let (mut sink, futures_stream) = mpsc::unbounded();

    let start_futures = vec![wait(500), wait(300), wait(100), wait(200)];
    let num_futures = RwLock::new(start_futures.len());

    sink.send_all(&mut stream::iter(start_futures.into_iter().map(Ok)))
        .await
        .unwrap();

    let sink_lock = RwLock::new(sink);

    futures_stream
        .for_each_concurrent(None, |fut| async {
            let wait_time = fut.await;
            println!("Waited {}", wait_time);
            if some_condition() {
                println!("Adding new future");
                let mut sink = sink_lock.write().await;
                sink.send(wait(100)).await.unwrap();
            } else {
                let mut num_futures = num_futures.write().await;
                *num_futures -= 1;
            }
            let num_futures = num_futures.read().await;
            if *num_futures <= 0 {
                // Close the sink to exit the for_each_concurrent
                sink_lock.write().await.close().await.unwrap();
            }
        })
        .await;
}
While this approach works, it has the drawback that we need to maintain a separate counter of remaining futures so that we can close the sink; there is no Vec of futures we can check for emptiness. Closing the sink also requires another lock.
Given that I'm fairly new to Rust I wouldn't be surprised if this approach could be made more elegant.
