Multithreaded Tokio client that sends data from one queue and stores received data in another, without blocking

I'm having difficulties making a Tokio client that receives packets from a server and stores them in a queue for the main thread to process, while also being able to send packets to the server from another queue at the same time.
I'm trying to make a very simple online game demonstration: a game client that sends data (its own modified state, like player movement) and receives data (game state modified by other players and the server, like an NPC or other players that also moved).
The idea is to have a network thread that accesses two Arcs holding Mutexes around Vec<bytes::Bytes> that store serialized data. One Arc is for IncomingPackets, and the other for OutgoingPackets. IncomingPackets would be filled with packets sent from the server to the client, to be read later by the main thread, and OutgoingPackets would be filled by the main thread with packets that should be sent to the server.
I can't seem to receive or send packets in another thread.
The client would only connect to the server, and the server would allow many clients (which would be served individually).
The explanations I've found around streams' usage and implementation are not newbie-friendly, but I think I should be using them somehow.
I wrote some code, but it does not work and is probably wrong.
(My original code does not compile, so treat this as pseudocode, sorry)
Playground
extern crate byteorder; // 1.3.4
extern crate futures; // 0.3.5
extern crate tokio; // 0.2.21
use bytes::Bytes;
use futures::future;
use std::error::Error;
use std::sync::{Arc, Mutex};
use tokio::net::TcpStream;
use byteorder::{BigEndian, WriteBytesExt};
use std::io;
use std::time::Duration;
use tokio::io::AsyncReadExt;
use tokio::io::AsyncWriteExt;
use tokio::net::tcp::{ReadHalf, WriteHalf};
//This is the SharedPackets struct that is located in the crate structures
struct SharedPackets {
    data: Mutex<Vec<bytes::Bytes>>,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let mut stream = TcpStream::connect("127.0.0.1:8080").await?;
    let (mut r, mut w) = stream.split();

    let mut inc: Vec<bytes::Bytes> = Vec::new();
    inc.push(Bytes::from("Wow"));
    let mut incoming_packets = Arc::new(SharedPackets {
        data: Mutex::new(inc),
    });

    let mut outg: Vec<bytes::Bytes> = Vec::new();
    outg.push(Bytes::from("Wow"));
    let mut outgoint_packets = Arc::new(SharedPackets {
        data: Mutex::new(outg),
    });

    let mut local_incoming_packets = Arc::clone(&incoming_packets);
    let mut local_outgoint_packets = Arc::clone(&outgoint_packets);

    let mut rarc = Arc::new(Mutex::new(r));
    let mut warc = Arc::new(Mutex::new(w));

    tokio::spawn(async move {
        //send and receive are both async functions that contain an infinite loop
        //they basically use AsyncWriteExt and AsyncReadExt to manipulate both halves of the stream
        //send reads the queue and write this data on the socket
        //recv reads the socket and write this data on the queue
        //both "queues" are manipulated by the main thread
        let mut read = &*rarc.lock().unwrap();
        let mut write = &*warc.lock().unwrap();

        future::try_join(
            send(&mut write, &mut local_outgoint_packets),
            recv(&mut read, &mut local_incoming_packets),
        )
        .await;
    });

    loop {
        //read & write other stuff on both incoming_packets & outgoint_packets
        //until the end of the program
    }
}

async fn recv(reader: &mut ReadHalf<'_>, queue: &mut Arc<SharedPackets>) -> Result<(), io::Error> {
    loop {
        let mut buf: Vec<u8> = vec![0; 4096];
        let n = match reader.read(&mut buf).await {
            Ok(n) if n == 0 => return Ok(()),
            Ok(n) => n,
            Err(e) => {
                eprintln!("failed to read from socket; err = {:?}", e);
                return Err(e);
            }
        };
    }
}

async fn send(writer: &mut WriteHalf<'_>, queue: &mut Arc<SharedPackets>) -> Result<(), io::Error> {
    loop {
        //task::sleep(Duration::from_millis(300)).await;
        {
            let a = vec!["AAAA"];
            for i in a.iter() {
                let mut byte_array = vec![];
                let str_bytes = i.as_bytes();
                WriteBytesExt::write_u32::<BigEndian>(&mut byte_array, str_bytes.len() as u32)
                    .unwrap();
                byte_array.extend(str_bytes);
                writer.write(&byte_array).await?;
            }
        }
    }
}
This does not compile:
error: future cannot be sent between threads safely
--> src/main.rs:46:5
|
46 | tokio::spawn(async move {
| ^^^^^^^^^^^^ future created by async block is not `Send`
|
::: /playground/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/task/spawn.rs:127:21
|
127 | T: Future + Send + 'static,
| ---- required by this bound in `tokio::spawn`
|
= help: within `impl futures::Future`, the trait `std::marker::Send` is not implemented for `std::sync::MutexGuard<'_, tokio::net::tcp::ReadHalf<'_>>`
note: future is not `Send` as this value is used across an await
--> src/main.rs:55:9
|
52 | let mut read = &*rarc.lock().unwrap();
| -------------------- has type `std::sync::MutexGuard<'_, tokio::net::tcp::ReadHalf<'_>>` which is not `Send`
...
55 | / future::try_join(
56 | | send(&mut write, &mut local_outgoint_packets),
57 | | recv(&mut read, &mut local_incoming_packets),
58 | | )
59 | | .await;
| |______________^ await occurs here, with `rarc.lock().unwrap()` maybe used later
60 | });
| - `rarc.lock().unwrap()` is later dropped here
help: consider moving this into a `let` binding to create a shorter lived borrow
--> src/main.rs:52:25
|
52 | let mut read = &*rarc.lock().unwrap();
| ^^^^^^^^^^^^^^^^^^^^^
error: future cannot be sent between threads safely
--> src/main.rs:46:5
|
46 | tokio::spawn(async move {
| ^^^^^^^^^^^^ future created by async block is not `Send`
|
::: /playground/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/task/spawn.rs:127:21
|
127 | T: Future + Send + 'static,
| ---- required by this bound in `tokio::spawn`
|
= help: within `impl futures::Future`, the trait `std::marker::Send` is not implemented for `std::sync::MutexGuard<'_, tokio::net::tcp::WriteHalf<'_>>`
note: future is not `Send` as this value is used across an await
--> src/main.rs:55:9
|
53 | let mut write = &*warc.lock().unwrap();
| -------------------- has type `std::sync::MutexGuard<'_, tokio::net::tcp::WriteHalf<'_>>` which is not `Send`
54 |
55 | / future::try_join(
56 | | send(&mut write, &mut local_outgoint_packets),
57 | | recv(&mut read, &mut local_incoming_packets),
58 | | )
59 | | .await;
| |______________^ await occurs here, with `warc.lock().unwrap()` maybe used later
60 | });
| - `warc.lock().unwrap()` is later dropped here
help: consider moving this into a `let` binding to create a shorter lived borrow
--> src/main.rs:53:26
|
53 | let mut write = &*warc.lock().unwrap();
| ^^^^^^^^^^^^^^^^^^^^^
I think this is the least of my problems, because I'm really new to Tokio.
I could not find an example of this; do you know of any performant approach to this problem?

Why don't you use channels for sending/receiving data to and from other tasks?
There are plenty of helpful examples here of how to share data between tasks.
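For instance, here is a minimal sketch of the channel-based version (not compiled or tested against a real server, and assuming the same tokio 0.2 / bytes 0.5 versions as the question): one task owns the read half, one owns the write half, and the main task only touches the two channels, so nothing blocks on a shared lock:
use bytes::Bytes;
use std::error::Error;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;
use tokio::sync::mpsc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let stream = TcpStream::connect("127.0.0.1:8080").await?;
    let (mut reader, mut writer) = stream.into_split();

    // main -> network: packets that should be sent to the server
    let (out_tx, mut out_rx) = mpsc::unbounded_channel::<Bytes>();
    // network -> main: packets received from the server
    let (in_tx, mut in_rx) = mpsc::unbounded_channel::<Bytes>();

    // writer task: drain the outgoing channel onto the socket
    tokio::spawn(async move {
        while let Some(packet) = out_rx.recv().await {
            if writer.write_all(&packet).await.is_err() {
                break;
            }
        }
    });

    // reader task: push everything read from the socket into the incoming channel
    tokio::spawn(async move {
        let mut buf = vec![0u8; 4096];
        loop {
            match reader.read(&mut buf).await {
                Ok(0) | Err(_) => break,
                Ok(n) => {
                    if in_tx.send(Bytes::copy_from_slice(&buf[..n])).is_err() {
                        break;
                    }
                }
            }
        }
    });

    // "main loop": queue an outgoing packet, then handle whatever comes back
    let _ = out_tx.send(Bytes::from("Wow"));
    while let Some(packet) = in_rx.recv().await {
        // update the game state from `packet` here
        println!("got {} bytes from the server", packet.len());
    }
    Ok(())
}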
EDIT: I looked at your code and noticed you're using the wrong mutex. You should use tokio::sync::Mutex when dealing with async code. Secondly, there were issues with the references in the Arcs. I've moved the creation of the Arcs into the spawned task and added cloning to the send/recv functions.
extern crate futures; // 0.3.5
extern crate tokio; // 0.2.21
extern crate byteorder; // 1.3.4

use std::error::Error;
use std::sync::Arc;
use tokio::sync::Mutex;
use tokio::net::TcpStream;
use futures::future;
use bytes::Bytes;
use std::io;
use std::time::Duration;
use tokio::io::AsyncWriteExt;
use tokio::io::AsyncReadExt;
use tokio::net::tcp::{ReadHalf, WriteHalf};
use byteorder::{BigEndian, WriteBytesExt};

//This is the SharedPackets struct that is located in the crate structures
struct SharedPackets {
    data: Mutex<Vec<bytes::Bytes>>
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let mut inc: Vec<bytes::Bytes> = Vec::new();
    inc.push(Bytes::from("Wow"));
    let mut incoming_packets = Arc::new(SharedPackets {
        data: Mutex::new(inc)
    });

    let mut outg: Vec<bytes::Bytes> = Vec::new();
    outg.push(Bytes::from("Wow"));
    let mut outgoint_packets = Arc::new(SharedPackets {
        data: Mutex::new(outg)
    });

    let mut local_incoming_packets = Arc::clone(&incoming_packets);
    let mut local_outgoint_packets = Arc::clone(&outgoint_packets);

    tokio::spawn(async move {
        let mut stream = TcpStream::connect("127.0.0.1:8080").await.unwrap();
        let (mut r, mut w) = stream.split();
        let mut rarc = Arc::new(Mutex::new(&mut r));
        let mut warc = Arc::new(Mutex::new(&mut w));

        //send and receive are both async functions that contain an infinite loop
        //they basically use AsyncWriteExt and AsyncReadExt to manipulate both halves of the stream
        //send reads the queue and write this data on the socket
        //recv reads the socket and write this data on the queue
        //both "queues" are manipulated by the main thread
        //let mut read = &*rarc.lock().await;
        //let mut write = &*warc.lock().await;

        future::try_join(
            send(warc.clone(), &mut local_outgoint_packets),
            recv(rarc.clone(), &mut local_incoming_packets),
        )
        .await;
    });

    loop {
        //read & write other stuff on both incoming_packets & outgoint_packets
        //until the end of the program
    }
}

async fn recv(readerw: Arc<Mutex<&mut ReadHalf<'_>>>, queue: &mut Arc<SharedPackets>) -> Result<(), io::Error> {
    let mut reader = readerw.lock().await;
    loop {
        let mut buf: Vec<u8> = vec![0; 4096];
        let n = match reader.read(&mut buf).await {
            Ok(n) if n == 0 => return Ok(()),
            Ok(n) => n,
            Err(e) => {
                eprintln!("failed to read from socket; err = {:?}", e);
                return Err(e);
            }
        };
    }
}

async fn send(writerw: Arc<Mutex<&mut WriteHalf<'_>>>, queue: &mut Arc<SharedPackets>) -> Result<(), io::Error> {
    let mut writer = writerw.lock().await;
    loop {
        //task::sleep(Duration::from_millis(300)).await;
        {
            let a = vec!["AAAA"];
            for i in a.iter() {
                let mut byte_array = vec![];
                let str_bytes = i.as_bytes();
                WriteBytesExt::write_u32::<BigEndian>(&mut byte_array, str_bytes.len() as u32).unwrap();
                byte_array.extend(str_bytes);
                writer.write(&byte_array).await?;
            }
        }
    }
}
Here is full code without errors, didn't test it though: Playground link
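The main loop in both versions is still just a placeholder. As a rough sketch (not part of this answer), the main task could talk to those shared queues through the tokio::sync::Mutex along these lines, assuming the SharedPackets type and imports from the code above; pump_queues is a hypothetical name:
async fn pump_queues(incoming: Arc<SharedPackets>, outgoing: Arc<SharedPackets>) {
    loop {
        // queue something for the network task to send
        outgoing.data.lock().await.push(Bytes::from("player moved"));

        // take everything the network task has received so far
        let received: Vec<Bytes> = incoming.data.lock().await.drain(..).collect();
        for packet in received {
            // update the game state from `packet` here
            let _ = packet;
        }

        // tokio 0.2 spelling; later versions call this tokio::time::sleep
        tokio::time::delay_for(Duration::from_millis(16)).await;
    }
}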

Here's an example that's a bit contrived, but it should help:
Playground link
use std::{sync::Arc, time::Duration};
use tokio::{self, net::TcpStream, sync::Mutex};

#[tokio::main]
async fn main() {
    let mut incoming_packets = Arc::new(Mutex::new(vec![b"Wow".to_vec()]));
    let mut local_incoming_packets = incoming_packets.clone();

    tokio::spawn(async move {
        for i in 0usize..10 {
            tokio::time::delay_for(Duration::from_millis(200)).await;
            let mut packets = local_incoming_packets.lock().await;
            packets.push(i.to_ne_bytes().to_vec());
        }
    });

    loop {
        tokio::time::delay_for(Duration::from_millis(200)).await;
        let packets = incoming_packets.lock().await;
        dbg!(packets);
    }
}
You can see that I have to clone outside of the async move block since that block takes ownership of everything inside it. I'm not sure about r and w but you might need to move those inside the block as well before you can pass mutable references to them. I can update my answer if you provide code that includes all of the proper use statements.
One thing that you need to remember is that main() can technically exit before the code that has been spawned.
Also, note that I used tokio::sync::Mutex so that you can yield while waiting to acquire the lock.
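For the first point, one simple way (a minimal sketch, not from the example above) is to keep the JoinHandle returned by tokio::spawn and await it before main returns:
#[tokio::main]
async fn main() {
    let handle = tokio::spawn(async {
        // background work goes here
    });

    // ... do other work on the main task ...

    // main will not return until the spawned task has finished
    handle.await.unwrap();
}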

Related

Problem regarding borrowing references in Rust

I am trying to write a program in which one thread writes to a queue and another thread reads from it.
But I am facing an issue with accessing the queue in the thread that reads from it.
Below is the code, which does not compile:
use ::std::collections::VecDeque;
use notify::{Config, RecommendedWatcher, RecursiveMode, Watcher};
use std::cell::RefCell;
use std::path::{Path, PathBuf};
use std::thread;
use std::time::Duration;

fn main() {
    //let path = std::env::args()
    //    .nth(1)
    //    .expect("Argument 1 needs to be a path");
    //println!("watching {}", path);
    let path = "c:\\testfolder";
    if let Err(e) = watch(path) {
        println!("error: {:?}", e)
    }
}

fn process_queue(queue: &VecDeque<String>) -> () {}

fn watch<P: AsRef<Path>>(path: P) -> notify::Result<()> {
    let (tx, rx) = std::sync::mpsc::channel();

    // Automatically select the best implementation for your platform.
    // You can also access each implementation directly e.g. INotifyWatcher.
    let mut watcher = RecommendedWatcher::new(tx, Config::default())?;

    // Add a path to be watched. All files and directories at that path and
    // below will be monitored for changes.
    let mut queue: VecDeque<String> = VecDeque::new();

    thread::spawn(|| {
        // everything in here runs
        process_queue(&queue)
    });

    watcher.watch(path.as_ref(), RecursiveMode::Recursive)?;

    for res in rx {
        match res {
            Ok(event) => {
                println!("changed: {:?}", event.paths);
                let os_str: String = String::from(event.paths[0].to_str().unwrap());
                //let my_str: String = os_str.unwrap().to_str().unwrap();
                //let s = os_str.into_os_string();
                queue.push_back(os_str);
            }
            Err(e) => println!("watch error: {:?}", e),
        }
    }
    Ok(())
}
The output from the Rust compiler
error[E0373]: closure may outlive the current function, but it borrows `queue`, which is owned by the current function
--> src\main.rs:43:19
|
43 | thread::spawn(|| {
| ^^ may outlive borrowed value `queue`
...
47 | process_queue(&queue)
| ----- `queue` is borrowed here
|
note: function requires argument type to outlive `'static`
--> src\main.rs:43:5
|
43 | / thread::spawn(|| {
44 | |
45 | | // everything in here runs
46 | |
47 | | process_queue(&queue)
48 | |
49 | | });
| |______^
help: to force the closure to take ownership of `queue` (and any other referenced variables), use the `move` keyword
|
43 | thread::spawn(move || {
| ++++
error[E0502]: cannot borrow `queue` as mutable because it is also borrowed as immutable
--> src\main.rs:63:17
|
43 | thread::spawn(|| {
| - -- immutable borrow occurs here
| _____|
| |
44 | |
45 | | // everything in here runs
46 | |
47 | | process_queue(&queue)
| | ----- first borrow occurs due to use of `queue` in closure
48 | |
49 | | });
| |______- argument requires that `queue` is borrowed for `'static`
...
63 | queue.push_back(os_str);
| ^^^^^^^^^^^^^^^^^^^^^^^ mutable borrow occurs here
From the errors I understand that the compiler does not allow both mutable and immutable references at the same time.
But I don't know how to implement what I am trying to do with these restrictions.
One way to get this to compile is to Box the VecDeque and pass a clone of it to your process_queue function.
Cloning the Box clones the VecDeque it holds, so the spawned thread gets its own heap-allocated copy of the queue as it was at spawn time, while the current thread can still mutate the original queue.
This would look like:
let mut queue = Box::new(VecDeque::new());
let queue_clone = queue.clone();

thread::spawn(|| {
    // queue_clone is now moved into the fn closure and is
    // not accessible to the code below
    process_queue(queue_clone)
});
and you can update process_queue to accept the correct type:
fn process_queue(queue: Box<VecDeque<String>>) -> () { }
Note that with this implementation, process_queue only runs once when the thread is spawned, and if you want to have process_queue do something every time the queue is changed, following the advice of others to use something like Channels makes the most sense.
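If the two threads really do need to see the same queue (rather than a copy), the usual alternative is Arc<Mutex<VecDeque<...>>>. A minimal, self-contained sketch, independent of the notify-based code above:
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    let queue = Arc::new(Mutex::new(VecDeque::<String>::new()));

    let consumer_queue = Arc::clone(&queue);
    let consumer = thread::spawn(move || {
        for _ in 0..5 {
            thread::sleep(Duration::from_millis(100));
            // lock, drain whatever is queued, then release the lock
            let mut q = consumer_queue.lock().unwrap();
            while let Some(item) = q.pop_front() {
                println!("processing {}", item);
            }
        }
    });

    for i in 0..10 {
        queue.lock().unwrap().push_back(format!("event {}", i));
        thread::sleep(Duration::from_millis(40));
    }
    consumer.join().unwrap();
}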
Thanks for all your responses.
From them I understand that using channels and moving the receiver loop into the other thread, as suggested by user4815162342, is the best solution.
I successfully implemented what I was trying to do using channels based on your suggestions.
The final working code is pasted below.
use std::thread;
use std::time::Duration;
use notify::{RecommendedWatcher, RecursiveMode, Watcher, Config};
use std::path::Path;
use std::path::PathBuf;

fn main() {
    //let path = std::env::args()
    //    .nth(1)
    //    .expect("Argument 1 needs to be a path");
    //println!("watching {}", path);
    let path = "c:\\testfolder";
    if let Err(e) = watch(path) {
        println!("error: {:?}", e)
    }
}

fn watch<P: AsRef<Path>>(path: P) -> notify::Result<()> {
    let (tx, rx) = std::sync::mpsc::channel();

    // Automatically select the best implementation for your platform.
    // You can also access each implementation directly e.g. INotifyWatcher.
    let mut watcher = RecommendedWatcher::new(tx, Config::default())?;

    // Add a path to be watched. All files and directories at that path and
    // below will be monitored for changes.
    let handle = thread::spawn(move || {
        // everything in here runs
        for res in rx {
            match res {
                Ok(event) => {
                    // println!("changed: {:?}", event.paths);
                    let os_str: String = String::from(event.paths[0].to_str().unwrap());
                    println!("file name: {}", os_str);
                },
                Err(e) => println!("watch error: {:?}", e),
            }
        }
    });

    watcher.watch(path.as_ref(), RecursiveMode::Recursive)?;
    handle.join();
    Ok(())
}
In your situation, using Rust's MPSC (multi-producer single-consumer, i.e. essentially a queue) would probably be best. You could also create a variable that is shared between multiple threads using the Arc and Mutex structs, but that would be overkill and can have a performance impact (only one thread can access the variable at a time).
Here is an example of a multi-threaded MPSC; I will let you adapt it to your infrastructure:
use std::{sync::mpsc, thread};

fn main() {
    let (sender, receiver) = mpsc::channel();

    let handle_1 = thread::spawn(|| {
        thread_1(sender);
    });
    let handle_2 = thread::spawn(|| {
        thread_2(receiver);
    });

    handle_1.join().unwrap();
    handle_2.join().unwrap();
}

// the enum must have the Send trait (automatically implemented)
enum Instruction {
    Print(String),
    Exit,
}

fn thread_1(sender: mpsc::Sender<Instruction>) {
    sender.send(Instruction::Print("I".to_owned())).unwrap();
    sender.send(Instruction::Print("like".to_owned())).unwrap();
    sender.send(Instruction::Print("Rust".to_owned())).unwrap();
    sender.send(Instruction::Print(".".to_owned())).unwrap();
    sender.send(Instruction::Exit).unwrap();
}

fn thread_2(receiver: mpsc::Receiver<Instruction>) {
    'global_loop: loop {
        for received in receiver.recv() {
            match received {
                Instruction::Print(string) => print!("{} ", string),
                Instruction::Exit => {
                    println!("");
                    break 'global_loop;
                }
            }
        }
    }
}

Sharing arrays between threads in Rust

I'm new to Rust and I'm struggling with some ownership semantics.
The goal is to do some nonsense measurements on multiplying 2 f64 arrays and writing the result in a third array.
In the single-threaded version, a single thread takes care of the whole range. In the multi-threaded version, each thread takes care of a segment of the range.
The single-threaded version is easy, but my problem is with the multithreaded version where I'm struggling with the ownership rules.
I was thinking of using raw pointers to bypass the borrow checker, but I'm still not able to make it compile.
#![feature(box_syntax)]
use std::time::SystemTime;
use rand::Rng;
use std::thread;

fn main() {
    let nCells = 1_000_000;
    let concurrency = 1;

    let mut one = box [0f64; 1_000_000];
    let mut two = box [0f64; 1_000_000];
    let mut res = box [0f64; 1_000_000];

    println!("Creating data");
    let mut rng = rand::thread_rng();
    for i in 0..nCells {
        one[i] = rng.gen::<f64>();
        two[i] = rng.gen::<f64>();
        res[i] = 0 as f64;
    }
    println!("Finished creating data");

    let rounds = 100000;
    let start = SystemTime::now();

    let one_raw = Box::into_raw(one);
    let two_raw = Box::into_raw(two);
    let res_raw = Box::into_raw(res);

    let mut handlers = Vec::new();
    for _ in 0..rounds {
        let sizePerJob = nCells / concurrency;
        for j in 0..concurrency {
            let from = j * sizePerJob;
            let to = (j + 1) * sizePerJob;
            handlers.push(thread::spawn(|| {
                unsafe {
                    unsafe {
                        processData(one_raw, two_raw, res_raw, from, to);
                    }
                }
            }));
        }
        for j in 0..concurrency {
            handlers.get_mut(j).unwrap().join();
        }
        handlers.clear();
    }

    let durationUs = SystemTime::now().duration_since(start).unwrap().as_micros();
    let durationPerRound = durationUs / rounds;
    println!("duration per round {} us", durationPerRound);
}

// Make sure we can find the function in the generated Assembly
#[inline(never)]
pub fn processData(one: *const [f64; 1000000],
                   two: *const [f64; 1000000],
                   res: *mut [f64; 1000000],
                   from: usize,
                   to: usize) {
    unsafe {
        for i in from..to {
            (*res)[i] = (*one)[i] * (*two)[i];
        }
    }
}
This is the error I'm getting
error[E0277]: `*mut [f64; 1000000]` cannot be shared between threads safely
--> src/main.rs:38:27
|
38 | handlers.push(thread::spawn(|| {
| ^^^^^^^^^^^^^ `*mut [f64; 1000000]` cannot be shared between threads safely
|
= help: the trait `Sync` is not implemented for `*mut [f64; 1000000]`
= note: required because of the requirements on the impl of `Send` for `&*mut [f64; 1000000]`
note: required because it's used within this closure
--> src/main.rs:38:41
|
38 | handlers.push(thread::spawn(|| {
| ^^
note: required by a bound in `spawn`
--> /home/pveentjer/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/thread/mod.rs:653:8
|
653 | F: Send + 'static,
| ^^^^ required by this bound in `spawn`
[edit] I know that spawning threads is very expensive. I'll convert this to a pool of worker threads that can be recycled once this code is up and running.
You can use chunks_mut or split_at_mut to get non-overlapping slices of one, two, and res. You can then safely access different slices from different threads. See: the documentation for chunks_mut and the documentation for split_at_mut.
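As an illustration, here is a minimal sketch (not taken from either answer) of the split_at_mut variant with two threads, using std::thread::scope, which is available on stable Rust since 1.63; process_data plays the role of processData from the question, and the chunks_mut version in the next answer generalizes this to any number of threads:
use std::thread;

fn main() {
    let one = vec![1.0f64; 8];
    let two = vec![2.0f64; 8];
    let mut res = vec![0.0f64; 8];

    let mid = res.len() / 2;
    let (one_lo, one_hi) = one.split_at(mid);
    let (two_lo, two_hi) = two.split_at(mid);
    // split_at_mut hands out two non-overlapping &mut slices, so each
    // thread gets exclusive access to its own half of `res`.
    let (res_lo, res_hi) = res.split_at_mut(mid);

    thread::scope(|s| {
        s.spawn(|| process_data(one_lo, two_lo, res_lo));
        s.spawn(|| process_data(one_hi, two_hi, res_hi));
    });

    println!("{:?}", res);
}

fn process_data(one: &[f64], two: &[f64], res: &mut [f64]) {
    for i in 0..res.len() {
        res[i] = one[i] * two[i];
    }
}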
I was able to compile it using scoped threads and chunks_mut. I have removed all the unsafe stuff because there is no need. See the code:
#![feature(box_syntax)]
#![feature(scoped_threads)]
use rand::Rng;
use std::thread;
use std::time::SystemTime;

fn main() {
    let nCells = 1_000_000;
    let concurrency = 2;

    let mut one = box [0f64; 1_000_000];
    let mut two = box [0f64; 1_000_000];
    let mut res = box [0f64; 1_000_000];

    println!("Creating data");
    let mut rng = rand::thread_rng();
    for i in 0..nCells {
        one[i] = rng.gen::<f64>();
        two[i] = rng.gen::<f64>();
        res[i] = 0 as f64;
    }
    println!("Finished creating data");

    let rounds = 1000;
    let start = SystemTime::now();

    for _ in 0..rounds {
        let size_per_job = nCells / concurrency;
        thread::scope(|s| {
            for it in one
                .chunks_mut(size_per_job)
                .zip(two.chunks_mut(size_per_job))
                .zip(res.chunks_mut(size_per_job))
            {
                let ((one, two), res) = it;
                s.spawn(|| {
                    processData(one, two, res);
                });
            }
        });
    }

    let durationUs = SystemTime::now().duration_since(start).unwrap().as_micros();
    let durationPerRound = durationUs / rounds;
    println!("duration per round {} us", durationPerRound);
}

// Make sure we can find the function in the generated Assembly
#[inline(never)]
pub fn processData(one: &[f64], two: &[f64], res: &mut [f64]) {
    for i in 0..one.len() {
        res[i] = one[i] * two[i];
    }
}

How to share a tokio::net::TcpStream and act on it concurrently?

I have a requirement to send and receive normal data on the same TcpStream, while also sending heartbeat data at regular intervals. In my current implementation I used Arc<Mutex<TcpStream>>, but it does not compile:
use anyhow::Result;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;

#[tokio::main]
async fn main() -> Result<()> {
    let stream = TcpStream::connect("127.0.0.1:8888").await.unwrap();
    let stream = Arc::new(Mutex::new(stream));

    let common_stream = stream.clone();
    let handler1 = tokio::spawn(async {
        loop {
            let mut stream = common_stream.lock().unwrap();
            let mut buf = [0u8; 10];
            stream.read_exact(&mut buf).await.unwrap();
            buf.reverse();
            stream.write(&buf).await.unwrap();
        }
    });

    let heartbeat_stream = stream.clone();
    let handler2 = tokio::spawn(async {
        loop {
            let mut stream = heartbeat_stream.lock().unwrap();
            stream.write_u8(1).await.unwrap();
            thread::sleep(Duration::from_millis(200));
        }
    });

    handler1.await?;
    handler2.await?;
    Ok(())
}
error: future cannot be sent between threads safely
--> src\main.rs:14:20
|
14 | let handler1 = tokio::spawn(async {
| ^^^^^^^^^^^^ future created by async block is not `Send`
|
= help: within `impl Future<Output = [async output]>`, the trait `Send` is not implemented for `std::sync::MutexGuard<'_, tokio::net::TcpStream>`
note: future is not `Send` as this value is used across an await
--> src\main.rs:20:31
|
16 | let mut stream = common_stream.lock().unwrap();
| ---------- has type `std::sync::MutexGuard<'_, tokio::net::TcpStream>` which is not `Send`
...
20 | stream.write(&buf).await.unwrap();
| ^^^^^^ await occurs here, with `mut stream` maybe used later
21 | }
| - `mut stream` is later dropped here
note: required by a bound in `tokio::spawn`
--> .cargo\registry\src\mirrors.tuna.tsinghua.edu.cn-df7c3c540f42cdbd\tokio-1.17.0\src\task\spawn.rs:127:21
|
127 | T: Future + Send + 'static,
| ^^^^ required by this bound in `tokio::spawn`
error: future cannot be sent between threads safely
--> src\main.rs:25:20
|
25 | let handler2 = tokio::spawn(async {
| ^^^^^^^^^^^^ future created by async block is not `Send`
|
= help: within `impl Future<Output = [async output]>`, the trait `Send` is not implemented for `std::sync::MutexGuard<'_, tokio::net::TcpStream>`
note: future is not `Send` as this value is used across an await
--> src\main.rs:28:31
|
27 | let mut stream = heartbeat_stream.lock().unwrap();
| ---------- has type `std::sync::MutexGuard<'_, tokio::net::TcpStream>` which is not `Send`
28 | stream.write_u8(1).await.unwrap();
| ^^^^^^ await occurs here, with `mut stream` maybe used later
...
31 | }
| - `mut stream` is later dropped here
note: required by a bound in `tokio::spawn`
--> .cargo\registry\src\mirrors.tuna.tsinghua.edu.cn-df7c3c540f42cdbd\tokio-1.17.0\src\task\spawn.rs:127:21
|
127 | T: Future + Send + 'static,
| ^^^^ required by this bound in `tokio::spawn`
How can these errors be fixed or is there another way to achieve the same goal?
Here is a solution that splits the stream into two halves, one for reading and one for writing, and then runs a single loop that:
waits for heartbeat ticks and sends a byte on the write half whenever one fires;
waits for data on the read half (10 bytes), reverses it, and writes it back on the write half.
This also does not spawn any extra tasks; everything runs in the current one, without locks.
use anyhow::Result;
use std::time::Duration;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;

#[tokio::main]
async fn main() -> Result<()> {
    let mut stream = TcpStream::connect("127.0.0.1:8888").await?;
    let (mut read, mut write) = stream.split();

    let mut heartbeat_interval = tokio::time::interval(Duration::from_millis(200));
    let mut buf = [0u8; 10];

    loop {
        tokio::select! {
            _ = heartbeat_interval.tick() => {
                write.write(&[1]).await?;
            }
            result = read.read_exact(&mut buf) => {
                let _bytes_read = result?;
                buf.reverse();
                write.write(&buf).await?;
            }
        }
    }
}
There are several errors in your code, although the idea behind it is mostly sound. You should use the async version of every tool where one is available. Some of the needed/desired changes:
Use tokio::time::sleep, because it is async; otherwise the call blocks the thread.
Use an async version of Mutex (the one from the futures crate, for example).
Use some kind of generic error handling (anyhow would help).
use futures::lock::Mutex;
use anyhow::Error;
use tokio::time::sleep;
use std::sync::Arc;
use std::time::Duration;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;

#[tokio::main]
async fn main() -> Result<(), Error> {
    let stream = TcpStream::connect("127.0.0.1:8888").await.unwrap();
    let stream = Arc::new(Mutex::new(stream));

    let common_stream = stream.clone();
    let handler1 = tokio::spawn(async move {
        loop {
            let mut stream = common_stream.lock().await;
            let mut buf = [0u8; 10];
            stream.read_exact(&mut buf).await.unwrap();
            buf.reverse();
            stream.write(&buf).await.unwrap();
        }
    });

    let heartbeat_stream = stream.clone();
    let handler2 = tokio::spawn(async move {
        loop {
            let mut stream = heartbeat_stream.lock().await;
            stream.write_u8(1).await.unwrap();
            sleep(Duration::from_millis(200)).await;
        }
    });

    handler1.await?;
    handler2.await?;
    Ok(())
}
Playground

What's the proper way to use variables defined in the main thread in a child thread in Rust?

I'm new to Rust and still reading the Rust book. Below is my program.
use clap::{App, Arg};

type GenericError = Box<dyn std::error::Error + Send + Sync + 'static>;
type GenericResult<T> = Result<T, GenericError>;

fn main() -> GenericResult<()> {
    let matches = App::new("test")
        .arg(Arg::new("latency")
            .takes_value(true))
        .get_matches();
    let latency_present = matches.is_present("latency");
    let latency = matches.value_of("latency").unwrap_or("1000:10,300:30");
    let latency_pairs: Vec<&str> = latency.split(",").collect();

    let checker = std::thread::spawn(move || -> GenericResult<()> {
        loop {
            if latency_present {
                for (i, latency_pair) in latency_pairs.iter().enumerate() {
                    // let latency_pair: Vec<&str> = latency_pair.split(":").collect();
                    // let latency = latency_pair[0].parse::<f64>().unwrap();
                }
            }
        }
    });

    checker.join().unwrap()?;
    Ok(())
}
When I run it, it tells me this:
error[E0597]: `matches` does not live long enough
--> src\main.rs:14:19
|
14 | let latency = matches.value_of("latency").unwrap_or("1000:10,300:30");
| ^^^^^^^--------------------
| |
| borrowed value does not live long enough
| argument requires that `matches` is borrowed for `'static`
...
30 | }
| - `matches` dropped here while still borrowed
I don't quite understand the error messages here. But I guess it's because I use latency_pairs in the checker thread and latency_pairs could get dropped while checker is still executing. Is my understanding correct? How to fix the error? I tried for (i, latency_pair) in latency_pairs.clone().iter().enumerate() { in order to pass a cloned value for the thread, but it doesn't help.
latency_pairs holds references into latency which in turn references matches. Thus cloning latency_pairs just clones the references into latency and matches.
Your code would require that latency's type is &'static str but it's actually &'a str where 'a is bound to matches' lifetime.
You can call to_owned() on latency to get an owned value and split the string inside the closure or you can call to_owned() on each of the splits collected in latency_pairs and move that Vec<String> into the closure:
let latency_pairs: Vec<String> = latency.split(",").map(ToOwned::to_owned).collect();
let checker = std::thread::spawn(move || -> GenericResult<()> {
    loop {
        if latency_present {
            for (i, latency_pair) in latency_pairs.iter().enumerate() {
                // let latency_pair: Vec<&str> = latency_pair.split(":").collect();
                // let latency = latency_pair[0].parse::<f64>().unwrap();
            }
        }
    }
});
If you need to use latency_pairs outside of the closure, you can clone it before moving it into the closure:
let latency_pairs: Vec<String> = latency.split(",").map(ToOwned::to_owned).collect();
let latency_pairs_ = latency_pairs.clone();
let checker = std::thread::spawn(move || -> GenericResult<()> {
    loop {
        if latency_present {
            for (i, latency_pair) in latency_pairs_.iter().enumerate() {
                // let latency_pair: Vec<&str> = latency_pair.split(":").collect();
                // let latency = latency_pair[0].parse::<f64>().unwrap();
            }
        }
    }
});
println!("{:?}", latency_pairs);

How do I call a function every X seconds with a parameter that doesn't implement Copy?

I'm creating a basic Minecraft server in Rust. As soon as a player logs in to my server, I need to send a KeepAlive packet every 20 seconds to ensure the client is still connected.
I thought this could be solved by spawning a new thread that sends the packet after sleeping for 20 seconds (line 35). Unfortunately, the Server type doesn't implement Copy.
Error
error[E0382]: borrow of moved value: `server`
--> src/main.rs:22:9
|
20 | let mut server = Server::from_tcpstream(stream).unwrap();
| ---------- move occurs because `server` has type `ozelot::server::Server`, which does not implement the `Copy` trait
21 | 'listenloop: loop {
22 | server.update_inbuf().unwrap();
| ^^^^^^ value borrowed here after move
...
35 | thread::spawn(move || {
| ------- value moved into closure here, in previous iteration of loop
Code:
use std::thread;
use std::io::Read;
use std::fs::File;
use std::time::Duration;
use std::io::BufReader;
use ozelot::{Server, clientbound, ClientState, utils, read};
use ozelot::serverbound::ServerboundPacket;
use std::net::{TcpListener, TcpStream};

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("0.0.0.0:25565")?;

    for stream in listener.incoming() {
        handle_client(stream?);
    }
    Ok(())
}

fn handle_client(stream: TcpStream) {
    let mut server = Server::from_tcpstream(stream).unwrap();
    'listenloop: loop {
        server.update_inbuf().unwrap();
        let packets = server.read_packet().unwrap_or_default();
        'packetloop: for packet in packets {
            match packet {
                //...
                ServerboundPacket::LoginStart(ref p) => {
                    //...
                    //send keep_alive every 20 seconds
                    thread::spawn(move || {
                        thread::sleep(Duration::from_secs(20));
                        send_keep_alive(server)
                    });

                    fn send_keep_alive(mut server: Server) {
                        server.send(clientbound::KeepAlive::new(728)).unwrap();
                    }
                    //...
                },
                //...
                _ => {},
            }
        }
        thread::sleep(Duration::from_millis(50));
    }
}
Notice how most of the functions in ozelot::Server take &mut self as a parameter. This means that they are called on a mutable reference to an ozelot::Server.
From the Rust docs:
you can only have one mutable reference to a particular piece of data in a particular scope
The issue is: how can we share the ozelot::Server between the main thread and the new thread?
A lot of things in the std::sync module solve exactly these kinds of problems. In your case, you want read access from the main thread and write access from the thread calling send_keep_alive. This calls for a Mutex, which gives exclusive read/write access to one thread at a time, and an Arc, which lets both threads own a handle to that Mutex.
Wrap the server in an Arc<Mutex>:
use std::sync::{Arc, Mutex};

// --snip--

fn handle_client(stream: TcpStream) {
    let mut server = Server::from_tcpstream(stream).unwrap();
    let mut server = Arc::new(Mutex::new(server));
Move a clone of the Arc into the thread, and lock the Mutex to gain access:
let mut server = Arc::clone(&server);
thread::spawn(move || {
    thread::sleep(Duration::from_secs(20));
    send_keep_alive(&mut server)
});
And make send_keep_alive receive server by mutable reference:
fn send_keep_alive(server: &mut Arc<Mutex<Server>>) {
    loop {
        let mut server = server.lock().unwrap();
        server.send(clientbound::KeepAlive::new(728)).unwrap();
    }
}
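Putting those pieces together, the spawn site inside handle_client might end up looking roughly like this; a sketch only, reusing the ozelot types from the question, with a hypothetical spawn_keep_alive helper that sleeps inside the loop so the packet really goes out every 20 seconds:
fn spawn_keep_alive(server: Arc<Mutex<Server>>) {
    thread::spawn(move || loop {
        thread::sleep(Duration::from_secs(20));
        // lock only for the duration of the send, then release the Mutex
        let mut server = server.lock().unwrap();
        server.send(clientbound::KeepAlive::new(728)).unwrap();
    });
}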
