We all know that using global variables can lead to subtle bugs. I need to migrate Python programs to Rust, keeping the algorithms intact as far as possible. Once I have demonstrated Python-Rust equivalence, there will be opportunities to debug and change the logic to fit Rust better. Here is a simple Python program using global variables, followed by my unsuccessful Rust version.
# global variable
a = 15

# function to perform addition
def add():
    global a
    a += 100

# function to perform subtraction
def subtract():
    global a
    a -= 100

# Using a global through functions
print("Initial value of a = ", a)
add()
print("a after addition = ", a)
subtract()
print("a after subtraction = ", a)
Here is a Rust program that runs, but I cannot get the closures to update the so-called global variable.
fn fmain() {
    // global variable
    let mut a = 15;
    // perform addition
    let add = || {
        let mut _name = a;
        // name += 100; // the program won't compile if this is uncommented
    };
    call_once(add);
    // perform subtraction
    let subtract = || {
        let mut _name = a;
        // name -= 100; // the program won't compile if this is uncommented
    };
    call_once(subtract);
    // Using a global through functions
    println!("Initial value of a = {}", a);
    add();
    println!("a after addition = {}", a);
    subtract();
    println!("a after subtraction = {}", a);
}

fn main() {
    fmain();
}

fn call_once<F>(f: F)
where
    F: FnOnce(),
{
    f();
}
My request: Re-create the Python logic in Rust.
Your Rust code is not using global variables; the a variable is stack-allocated. While Rust doesn't particularly endorse global variables, you can certainly use them. Translated to Rust that uses actual globals, your program would look like this:
use lazy_static::lazy_static;
use parking_lot::Mutex; // or std::sync::Mutex

// global variable
lazy_static! {
    static ref A: Mutex<u32> = Mutex::new(15);
}

// function to perform addition
fn add() {
    *A.lock() += 100;
}

// function to perform subtraction
fn subtract() {
    *A.lock() -= 100;
}

fn main() {
    // Using a global through functions
    println!("Initial value of a = {}", A.lock());
    add();
    println!("a after addition = {}", A.lock());
    subtract();
    println!("a after subtraction = {}", A.lock());
}
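If you'd rather avoid external crates: since Rust 1.63, Mutex::new is a const fn, so a plain static works with no dependencies at all. A minimal sketch of the same program:

use std::sync::Mutex;

// global variable, const-initialized (requires Rust 1.63+)
static A: Mutex<u32> = Mutex::new(15);

// function to perform addition
fn add() {
    *A.lock().unwrap() += 100;
}

// function to perform subtraction
fn subtract() {
    *A.lock().unwrap() -= 100;
}

fn main() {
    // Using a global through functions
    println!("Initial value of a = {}", A.lock().unwrap());
    add();
    println!("a after addition = {}", A.lock().unwrap());
    subtract();
    println!("a after subtraction = {}", A.lock().unwrap());
}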
If you prefer to use closures, you can do that too, but you'll need to use interior mutability to allow multiple closures to capture the same environment. For example, you could use a Cell:
use std::cell::Cell;

fn main() {
    let a = Cell::new(15);
    let add = || {
        a.set(a.get() + 100);
    };
    let subtract = || {
        a.set(a.get() - 100);
    };
    // Using a global through functions
    println!("Initial value of a = {}", a.get());
    add();
    println!("a after addition = {}", a.get());
    subtract();
    println!("a after subtraction = {}", a.get());
}
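Cell is a good fit for Copy types like the integer here; for non-Copy data you could reach for RefCell instead, which hands out checked borrows at runtime. A minimal sketch along the same lines, using a String instead of an integer:

use std::cell::RefCell;

fn main() {
    let a = RefCell::new(String::from("15"));
    let append = || {
        // borrow_mut() panics if another borrow is active, so keep borrows short
        a.borrow_mut().push_str(" + 100");
    };
    append();
    println!("a after append = {}", a.borrow());
}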
Dependency-less examples, written as an enum, a function, and a trait. EDIT: code improved as suggested in the comments, and corrected a match arm.
use std::sync::{Arc, Mutex, Once};

static START: Once = Once::new();
static mut ARCMUT: Vec<Arc<Mutex<i32>>> = Vec::new();

// as enum
enum Operation {
    Add,
    Subtract,
}

impl Operation {
    // static change
    fn result(self) -> i32 {
        let arc_clone = unsafe { ARCMUT[0].clone() };
        let mut unlock = arc_clone.lock().unwrap();
        match self {
            Operation::Add => *unlock += 100,
            Operation::Subtract => *unlock -= 100,
        }
        *unlock
    }

    // dynamic change
    fn amount(self, amount: i32) -> i32 {
        let arc_clone = unsafe { ARCMUT[0].clone() };
        let mut unlock = arc_clone.lock().unwrap();
        match self {
            Operation::Add => *unlock += amount,
            Operation::Subtract => *unlock -= amount,
        }
        *unlock
    }
}

// as a function
fn add() -> i32 {
    let arc_clone = unsafe { ARCMUT[0].clone() };
    let mut unlock = arc_clone.lock().unwrap();
    *unlock += 100;
    *unlock
}

// as a trait
trait OperationTrait {
    fn add(self) -> Self;
    fn subtract(self) -> Self;
    fn return_value(self) -> i32;
}

impl OperationTrait for i32 {
    fn add(self) -> Self {
        let arc_clone = unsafe { ARCMUT[0].clone() };
        let mut unlock = arc_clone.lock().unwrap();
        *unlock += self;
        self
    }

    fn subtract(self) -> Self {
        let arc_clone = unsafe { ARCMUT[0].clone() };
        let mut unlock = arc_clone.lock().unwrap();
        *unlock -= self;
        self
    }

    fn return_value(self) -> Self {
        let arc_clone = unsafe { ARCMUT[0].clone() };
        let unlock = arc_clone.lock().unwrap();
        *unlock
    }
}

fn main() {
    START.call_once(|| unsafe {
        ARCMUT = vec![Arc::new(Mutex::new(15))];
    });

    let test = Operation::Add.result();
    println!("{:?}", test);

    let test = Operation::Subtract.amount(100);
    println!("{:?}", test);

    let test = add();
    println!("{:?}", test);

    let test = 4000.add();
    println!("{:?}", test);
}
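A caveat: reaching into a static mut, as the unsafe blocks above do, is discouraged and triggers the static_mut_refs lint on recent compilers. On Rust 1.70+ the same one-time initialization can be written safely with std::sync::OnceLock; a sketch of the idea (not a drop-in replacement for the whole example):

use std::sync::{Arc, Mutex, OnceLock};

static ARCMUT: OnceLock<Arc<Mutex<i32>>> = OnceLock::new();

fn global() -> Arc<Mutex<i32>> {
    // get_or_init runs the closure at most once, like START.call_once above
    Arc::clone(ARCMUT.get_or_init(|| Arc::new(Mutex::new(15))))
}

fn add() -> i32 {
    let mut unlock = global().lock().unwrap();
    *unlock += 100;
    *unlock
}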
I have a bunch of math with real-time constraints. My main loop will call this function repeatedly, and it will always store results into an existing buffer. I want to spawn the threads at init time, then let them run, do their work, and wait for more data. For synchronization I will use a Barrier, and I have that part working. What I can't get working, despite trying various iterations of Arc and crossbeam, is splitting the thread spawning from the actual workload. This is what I have now:
pub const WORK_SIZE: usize = 524_288;
pub const NUM_THREADS: usize = 6;
pub const NUM_TASKS_PER_THREAD: usize = WORK_SIZE / NUM_THREADS;

fn main() {
    let mut work: Vec<f64> = Vec::with_capacity(WORK_SIZE);
    for i in 0..WORK_SIZE {
        work.push(i as f64);
    }
    crossbeam::scope(|scope| {
        let threads: Vec<_> = work
            .chunks(NUM_TASKS_PER_THREAD)
            .map(|chunk| scope.spawn(move |_| chunk.iter().cloned().sum::<f64>()))
            .collect();
        let threaded_time = std::time::Instant::now();
        let thread_sum: f64 = threads.into_iter().map(|t| t.join().unwrap()).sum();
        let threaded_micros = threaded_time.elapsed().as_micros() as f64;
        println!("threaded took: {:#?}", threaded_micros);

        let serial_time = std::time::Instant::now();
        let no_thread_sum: f64 = work.iter().cloned().sum();
        let serial_micros = serial_time.elapsed().as_micros() as f64;
        println!("serial took: {:#?}", serial_micros);

        assert_eq!(thread_sum, no_thread_sum);
        println!(
            "Threaded performance was {:?}",
            serial_micros / threaded_micros
        );
    })
    .unwrap();
}
But I can't find a way to spin these threads up in an init function and then pass work to them from a do_work function. I attempted something like this with Arcs and Mutexes but couldn't get that straight either. What I want is to turn this into something like the following:
use std::sync::{Arc, Barrier, Mutex};
use std::{slice::Chunks, thread::JoinHandle};

pub const WORK_SIZE: usize = 524_288;
pub const NUM_THREADS: usize = 6;
pub const NUM_TASKS_PER_THREAD: usize = WORK_SIZE / NUM_THREADS;

// simplified version of the work that the actual code base will do
fn do_work(data: &[f64], result: Arc<Mutex<f64>>, barrier: Arc<Barrier>) {
    loop {
        barrier.wait();
        let sum = data.into_iter().cloned().sum::<f64>();
        let mut result = *result.lock().unwrap();
        result += sum;
    }
}

fn init(
    mut data: Chunks<'_, f64>,
    result: &Arc<Mutex<f64>>,
    barrier: &Arc<Barrier>,
) -> Vec<std::thread::JoinHandle<()>> {
    let mut handles = Vec::with_capacity(NUM_THREADS);
    // spawn threads; in the actual code these would be stored in a lib crate struct
    for i in 0..NUM_THREADS {
        let result = result.clone();
        let barrier = barrier.clone();
        let chunk = data.nth(i).unwrap();
        handles.push(std::thread::spawn(|| {
            // Pass the particular thread the particular chunk it will operate on.
            do_work(chunk, result, barrier);
        }));
    }
    handles
}

fn main() {
    let mut work: Vec<f64> = Vec::with_capacity(WORK_SIZE);
    let mut result = Arc::new(Mutex::new(0.0));
    for i in 0..WORK_SIZE {
        work.push(i as f64);
    }
    let work_barrier = Arc::new(Barrier::new(NUM_THREADS + 1));
    let threads = init(work.chunks(NUM_TASKS_PER_THREAD), &result, &work_barrier);
    loop {
        work_barrier.wait();
        // the actual code base would do something with the summation stored in result
        println!("{:?}", result.lock().unwrap());
    }
}
I hope this expresses the intent clearly enough. The issue with this specific implementation is that the chunks don't seem to live long enough, and when I tried wrapping them in an Arc, the "argument doesn't live long enough" error just moved to the Arc::new(data.chunk(_)) line.
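One way to satisfy the borrow checker here is to give each thread an owned copy of the slice it operates on, so nothing borrowed from main has to outlive main. A working version along those lines: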
use std::sync::{Arc, Barrier, Mutex};
use std::thread;

pub const WORK_SIZE: usize = 524_288;
pub const NUM_THREADS: usize = 6;
pub const NUM_TASKS_PER_THREAD: usize = WORK_SIZE / NUM_THREADS;

// simplified version of the work that the actual code base will do
fn do_work(data: &[f64], result: Arc<Mutex<f64>>, barrier: Arc<Barrier>) {
    loop {
        barrier.wait();
        let sum = data.iter().sum::<f64>();
        *result.lock().unwrap() += sum;
    }
}

fn init(
    work: Vec<f64>,
    result: Arc<Mutex<f64>>,
    barrier: Arc<Barrier>,
) -> Vec<thread::JoinHandle<()>> {
    let mut handles = Vec::with_capacity(NUM_THREADS);
    // spawn threads; in the actual code these would be stored in a lib crate struct
    for i in 0..NUM_THREADS {
        let slice = work[i * NUM_TASKS_PER_THREAD..(i + 1) * NUM_TASKS_PER_THREAD].to_owned();
        let result = Arc::clone(&result);
        let w = Arc::clone(&barrier);
        handles.push(thread::spawn(move || {
            do_work(&slice, result, w);
        }));
    }
    handles
}

fn main() {
    let mut work: Vec<f64> = Vec::with_capacity(WORK_SIZE);
    let result = Arc::new(Mutex::new(0.0));
    for i in 0..WORK_SIZE {
        work.push(i as f64);
    }
    let work_barrier = Arc::new(Barrier::new(NUM_THREADS + 1));
    let _threads = init(work, Arc::clone(&result), Arc::clone(&work_barrier));
    loop {
        thread::sleep(std::time::Duration::from_secs(3));
        work_barrier.wait();
        // the actual code base would do something with the summation stored in result
        println!("{:?}", result.lock().unwrap());
    }
}
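Note that Barrier::new(NUM_THREADS + 1) counts the main thread as a participant: each work_barrier.wait() in main's loop releases all six workers for one more round of summing, and their partial sums accumulate in the shared result mutex.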
I have created a very simplified example of the code I am having an issue with:
use core::time;
use std::sync::Arc;
use std::thread;
use tokio::sync::{mpsc::Receiver, RwLock};

struct MyStruct {
    counter: Arc<RwLock<i32>>,
    rx: RwLock<Receiver<i32>>,
}

impl MyStruct {
    async fn start_here(&self) { // <--------- Lifetime error here on self
        while let Some(message) = self.rx.write().await.recv().await {
            tokio::spawn(self.do_some_work_then_update_counter());
        }
    }

    async fn do_some_work_then_update_counter(&self) {
        let dur = time::Duration::from_millis(10000);
        thread::sleep(dur);
        let mut counter = self.counter.write().await;
        *counter += 1;
    }
}
There is a receiver that is receiving messages from another part of the program, and I want to be able to process each message in its own task to prevent blocking the next message from being processed.
As you can imagine, it's a lifetime error, since the spawned task could outlive self.
One solution I have tried is this:
impl MyStruct {
    async fn start_here(&self) {
        while let Some(message) = self.rx.write().await.recv().await {
            let counter = self.counter.clone();
            tokio::spawn(do_some_work_then_update_counter(counter));
        }
    }
}

async fn do_some_work_then_update_counter(counter: Arc<RwLock<i32>>) {
    let dur = time::Duration::from_millis(10000);
    thread::sleep(dur);
    let mut counter = counter.write().await;
    *counter += 1;
}
This just doesn't seem like a good option; I want to keep do_some_work_then_update_counter as a method on MyStruct rather than a free function, since it modifies data on MyStruct.
I am wondering if there is a better solution to this?
You can, if you return an impl Future directly instead of using an async fn:
fn do_some_work_then_update_counter(&self) -> impl Future<Output = ()> {
    let counter = Arc::clone(&self.counter);
    async move {
        let dur = time::Duration::from_millis(10000);
        thread::sleep(dur);
        let mut counter = counter.write().await;
        *counter += 1;
    }
}
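Everything before the async move block runs immediately at the call site, while self is still alive; only the block becomes the returned future, and it owns its own Arc clone. Since the future no longer borrows self, it is 'static and can be handed straight to tokio::spawn. A sketch of the call site, assuming the same struct as above (note that on edition 2024 impl Trait captures in-scope lifetimes by default, so there you would opt out with precise capturing, e.g. impl Future<Output = ()> + use<>):

impl MyStruct {
    async fn start_here(&self) {
        while let Some(_message) = self.rx.write().await.recv().await {
            // the returned future owns a clone of the counter; no borrow of self escapes
            tokio::spawn(self.do_some_work_then_update_counter());
        }
    }
}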
Not much to explain. I don't understand what even has the lifetimes designated '1 and '2 in the compiler error message.
All the posts I have checked so far just say to use crossbeam for scoped threads, but this hasn't fixed my issue at all, and I don't think I even understand the finer issue here.
Any help is appreciated.
use crossbeam_utils::thread;

struct TestStruct {
    s: f64,
}

impl TestStruct {
    fn new() -> Self {
        Self { s: -1. }
    }

    fn fubar(&'static self) -> f64 {
        let thread_return_value = thread::scope(|scope|
            // lifetime may not live long enough
            // returning this value requires that `'1` must outlive `'2`
            // Question: what are the two lifetimes even of? I am probably just
            // a noob here.
            scope.spawn(move |_| { // same error with or without move
                // I have found that it doesn't matter what I put in this scope,
                // but the following is the closest to what I have in my actual
                // code.
                let mut psum = 0.;
                for _ in 0..10 { psum += self.s; }
                psum
            })
        ).unwrap();
        // do anything with thread_return_value
        return 0.; // just so it's explicitly not the problem here, return 0.
    }
}

fn main() {
    let test_item = TestStruct::new();
    // rustc E0597
    let stored_value = test_item.fubar();
    println!("{}", &stored_value);
    return;
}
Edit, after accepting the correct answer; a working minimal example:
#![feature(let_chains)]
use crossbeam_utils::thread;

struct TestStruct {
    s: f64,
}

impl TestStruct {
    fn new() -> Self {
        Self { s: -1. }
    }

    fn fubar(&self) -> f64 {
        let thread_return_value = thread::scope(|scope| {
            let th = scope.spawn(move |_| {
                let mut psum = 0.;
                for _ in 0..10 { psum += self.s; }
                psum
            });
            let psum = th.join().unwrap();
            psum
        }).unwrap();
        return thread_return_value;
    }
}

fn main() {
    let test_item = TestStruct::new();
    let stored_value = test_item.fubar();
    println!("{}", &stored_value);
    return;
}
The most obvious problem in your code is the &'static self lifetime. With it, you can only call this function on static (that is, global) values of this type. So just remove the 'static and write &self.
The real problem is that you are trying to return your scoped thread handle, the value returned by scope.spawn(), out of crossbeam's scope, and that is not allowed. That is why they are called scoped threads: they are confined to the enclosing scope.
Remember that in Rust, when a block ends without a ; the value of the last expression is returned as the value of the block itself.
You probably want to return the psum. If so, you need to wait for the handle to finish:
fn fubar(&self) -> f64 {
    let thread_return_value = thread::scope(|scope| {
        let th = scope.spawn(move |_| {
            let mut psum = 0.;
            for _ in 0..10 { psum += self.s; }
            psum
        }); // <--- here, add a ;
        let psum = th.join().unwrap(); // get the inner result
        psum // forward it to the outer scope
    }).unwrap();
    return thread_return_value;
}
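Incidentally, since Rust 1.63 the standard library ships scoped threads of its own, so the same method can be written without crossbeam; a minimal sketch:

fn fubar(&self) -> f64 {
    std::thread::scope(|scope| {
        let th = scope.spawn(|| {
            let mut psum = 0.;
            for _ in 0..10 { psum += self.s; }
            psum
        });
        // std::thread::scope returns the closure's value directly, no Result to unwrap
        th.join().unwrap()
    })
}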
In this Rust program, inside the run function, I am trying to pass pair_clone as a parameter to both threads, but I keep getting a mismatched-type error. I thought I was passing the pair, but the compiler says I'm passing an integer instead.
use std::sync::{Arc, Mutex, Condvar};

fn producer(pair: &(Mutex<bool>, Condvar), num_of_loops: u32) {
    let (mutex, cv) = pair;
    // prints "producing"
}

fn consumer(pair: &(Mutex<bool>, Condvar), num_of_loops: u32) {
    let (mutex, cv) = pair;
    // prints "consuming"
}

pub fn run() {
    println!("Main::Begin");
    let num_of_loops = 5;
    let num_of_threads = 4;
    let mut array_of_threads = vec!();
    let pair = Arc::new((Mutex::new(true), Condvar::new()));
    for pair in 0..num_of_threads {
        let pair_clone = pair.clone();
        array_of_threads.push(std::thread::spawn(move || producer(&pair_clone, num_of_loops)));
        array_of_threads.push(std::thread::spawn(move || consumer(&pair_clone, num_of_loops)));
    }
    for i in array_of_threads {
        i.join().unwrap();
    }
    println!("Main::End");
}
You have two main errors.
The first: you are using the name pair as the loop index. This makes pair the integer the compiler complains about.
The second: you are making one clone where you need two, one for the producer and one for the consumer.
After the edit:
use std::sync::{Arc, Mutex, Condvar};

fn producer(pair: &(Mutex<bool>, Condvar), num_of_loops: u32) {
    let (mutex, cv) = pair;
    // prints "producing"
}

fn consumer(pair: &(Mutex<bool>, Condvar), num_of_loops: u32) {
    let (mutex, cv) = pair;
    // prints "consuming"
}

pub fn run() {
    println!("Main::Begin");
    let num_of_loops = 5;
    let num_of_threads = 4;
    let mut array_of_threads = vec![];
    let pair = Arc::new((Mutex::new(true), Condvar::new()));
    for _ in 0..num_of_threads {
        let pair_clone1 = pair.clone();
        let pair_clone2 = pair.clone();
        array_of_threads.push(std::thread::spawn(move || producer(&pair_clone1, num_of_loops)));
        array_of_threads.push(std::thread::spawn(move || consumer(&pair_clone2, num_of_loops)));
    }
    for i in array_of_threads {
        i.join().unwrap();
    }
    println!("Main::End");
}
Note that I haven't paid any attention to code quality here; I just fixed the compile errors.
I was able to move forward with implementing my asynchronous UDP server. However, this error now shows up twice, because my variable data has type *mut u8, which is not Send:
error: future cannot be sent between threads safely
help: within `impl std::future::Future`, the trait `std::marker::Send` is not implemented for `*mut u8`
note: captured value is not `Send`
And the code (MRE):
use std::error::Error;
use std::time::Duration;
use std::env;
use tokio::net::UdpSocket;
use tokio::{sync::mpsc, task, time}; // 1.4.0
use std::alloc::{alloc, Layout};
use std::mem;
use std::mem::MaybeUninit;
use std::net::SocketAddr;

const UDP_HEADER: usize = 8;
const IP_HEADER: usize = 20;
const AG_HEADER: usize = 4;
const MAX_DATA_LENGTH: usize = (64 * 1024 - 1) - UDP_HEADER - IP_HEADER;
const MAX_CHUNK_SIZE: usize = MAX_DATA_LENGTH - AG_HEADER;
const MAX_DATAGRAM_SIZE: usize = 0x10000;

/// A wrapper for [ptr::copy_nonoverlapping] with a different argument order (same as the original memcpy)
unsafe fn memcpy(dst_ptr: *mut u8, src_ptr: *const u8, len: usize) {
    std::ptr::copy_nonoverlapping(src_ptr, dst_ptr, len);
}

// Different from https://doc.rust-lang.org/std/primitive.u32.html#method.next_power_of_two
// Returns the [exponent] of the smallest power of two greater than or equal to n.
const fn next_power_of_two_exponent(n: u32) -> u32 {
    return 32 - (n - 1).leading_zeros();
}

async fn run_server(socket: UdpSocket) {
    let mut missing_indexes: Vec<u16> = Vec::new();
    let mut peer_addr = MaybeUninit::<SocketAddr>::uninit();
    let mut data = std::ptr::null_mut(); // ptr for the file bytes
    let mut len: usize = 0; // total len of bytes that will be written
    let mut layout = MaybeUninit::<Layout>::uninit();
    let mut buf = [0u8; MAX_DATA_LENGTH];
    let mut start = false;
    let (debounce_tx, mut debounce_rx) = mpsc::channel::<(usize, SocketAddr)>(3300);
    let (network_tx, mut network_rx) = mpsc::channel::<(usize, SocketAddr)>(3300);
    loop {
        // Listen for events
        let debouncer = task::spawn(async move {
            let duration = Duration::from_millis(3300);
            loop {
                match time::timeout(duration, debounce_rx.recv()).await {
                    Ok(Some((size, peer))) => {
                        eprintln!("Network activity");
                    }
                    Ok(None) => {
                        if start == true {
                            eprintln!("Debounce finished");
                            break;
                        }
                    }
                    Err(_) => {
                        eprintln!("{:?} since network activity", duration);
                    }
                }
            }
        });
        // Listen for network activity
        let server = task::spawn({
            let debounce_tx = debounce_tx.clone();
            async move {
                while let Some((size, peer)) = network_rx.recv().await {
                    // Received a new packet
                    debounce_tx.send((size, peer)).await.expect("Unable to talk to debounce");
                    eprintln!("Received a packet {} from: {}", size, peer);
                    let packet_index: u16 = (buf[0] as u16) << 8 | buf[1] as u16;
                    if start == false { // first bytes of a new file: initialization // TODO: ADD A MUTEX to prevent many initializations
                        start = true;
                        let chunks_cnt: u32 = (buf[2] as u32) << 8 | buf[3] as u32;
                        let n: usize = MAX_DATAGRAM_SIZE << next_power_of_two_exponent(chunks_cnt);
                        unsafe {
                            layout.as_mut_ptr().write(Layout::from_size_align_unchecked(n, mem::align_of::<u8>()));
                            // /!\ data has type `*mut u8` which is not `Send`
                            data = alloc(layout.assume_init());
                            peer_addr.as_mut_ptr().write(peer);
                        }
                        // create a sorted vector with all the required indexes
                        let a: Vec<u16> = vec![0; chunks_cnt as usize]; //(0..chunks_cnt).map(|x| x as u16).collect();
                        missing_indexes = a;
                    }
                    missing_indexes[packet_index as usize] = 1;
                    unsafe {
                        let dst_ptr = data.offset((packet_index as usize * MAX_CHUNK_SIZE) as isize);
                        memcpy(dst_ptr, &buf[AG_HEADER], size - AG_HEADER);
                    };
                    println!("receiving packet {} from: {}", packet_index, peer);
                }
            }
        });
        // Prevent deadlocks
        drop(debounce_tx);
        match socket.recv_from(&mut buf).await {
            Ok((size, src)) => {
                network_tx.send((size, src)).await.expect("Unable to talk to network");
            }
            Err(e) => {
                eprintln!("couldn't receive a datagram: {}", e);
            }
        }
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let addr = env::args().nth(1).unwrap_or_else(|| "127.0.0.1:8080".to_string());
    let socket = UdpSocket::bind(&addr).await?;
    println!("Listening on: {}", socket.local_addr()?);
    run_server(socket).await; // the future must be awaited, or the server never runs
    Ok(())
}
Since I was converting from synchronous to asynchronous code, I know that multiple threads could potentially be writing to data, and that is probably why I hit this error. But I don't know what syntax I could use to "clone" the mut ptr and make it unique for each thread (and the same for the buffer).
As suggested by user4815162342, I think the best approach would be
to make the pointer Send by wrapping it in a struct and declaring unsafe impl Send for NewStruct {}.
Any help strongly appreciated!
PS: Full code can be found on my github repository
Short version
Thanks to the comment from user4815162342, I decided to wrap the mut ptr in a struct so it can be used with Send and Sync, which solved this part (there are still other issues, but they are beyond the scope of this question):
pub struct FileBuffer {
    data: *mut u8,
}

unsafe impl Send for FileBuffer {}
unsafe impl Sync for FileBuffer {}

//let mut data = std::ptr::null_mut(); // ptr for the file bytes
let mut fileBuffer: FileBuffer = FileBuffer { data: std::ptr::null_mut() };
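Keep in mind that unsafe impl Send/Sync is a promise to the compiler, not a fix: it asserts that you yourself guarantee the pointer is used in a thread-safe way (here, that only one task touches the allocation at a time). If that invariant is ever broken, the data race returns without any compiler diagnostics.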
Long version
use std::error::Error;
use std::time::Duration;
use std::env;
use tokio::net::UdpSocket;
use tokio::{sync::mpsc, task, time}; // 1.4.0
use std::alloc::{alloc, Layout};
use std::mem;
use std::mem::MaybeUninit;
use std::net::SocketAddr;

const UDP_HEADER: usize = 8;
const IP_HEADER: usize = 20;
const AG_HEADER: usize = 4;
const MAX_DATA_LENGTH: usize = (64 * 1024 - 1) - UDP_HEADER - IP_HEADER;
const MAX_CHUNK_SIZE: usize = MAX_DATA_LENGTH - AG_HEADER;
const MAX_DATAGRAM_SIZE: usize = 0x10000;

/// A wrapper for [ptr::copy_nonoverlapping] with a different argument order (same as the original memcpy)
unsafe fn memcpy(dst_ptr: *mut u8, src_ptr: *const u8, len: usize) {
    std::ptr::copy_nonoverlapping(src_ptr, dst_ptr, len);
}

// Different from https://doc.rust-lang.org/std/primitive.u32.html#method.next_power_of_two
// Returns the [exponent] of the smallest power of two greater than or equal to n.
const fn next_power_of_two_exponent(n: u32) -> u32 {
    return 32 - (n - 1).leading_zeros();
}

pub struct FileBuffer {
    data: *mut u8,
}

unsafe impl Send for FileBuffer {}
unsafe impl Sync for FileBuffer {}

async fn run_server(socket: UdpSocket) {
    let mut missing_indexes: Vec<u16> = Vec::new();
    let mut peer_addr = MaybeUninit::<SocketAddr>::uninit();
    //let mut data = std::ptr::null_mut(); // ptr for the file bytes
    let mut fileBuffer: FileBuffer = FileBuffer { data: std::ptr::null_mut() };
    let mut len: usize = 0; // total len of bytes that will be written
    let mut layout = MaybeUninit::<Layout>::uninit();
    let mut buf = [0u8; MAX_DATA_LENGTH];
    let mut start = false;
    let (debounce_tx, mut debounce_rx) = mpsc::channel::<(usize, SocketAddr)>(3300);
    let (network_tx, mut network_rx) = mpsc::channel::<(usize, SocketAddr)>(3300);
    loop {
        // Listen for events
        let debouncer = task::spawn(async move {
            let duration = Duration::from_millis(3300);
            loop {
                match time::timeout(duration, debounce_rx.recv()).await {
                    Ok(Some((size, peer))) => {
                        eprintln!("Network activity");
                    }
                    Ok(None) => {
                        if start == true {
                            eprintln!("Debounce finished");
                            break;
                        }
                    }
                    Err(_) => {
                        eprintln!("{:?} since network activity", duration);
                    }
                }
            }
        });
        // Listen for network activity
        let server = task::spawn({
            let debounce_tx = debounce_tx.clone();
            async move {
                while let Some((size, peer)) = network_rx.recv().await {
                    // Received a new packet
                    debounce_tx.send((size, peer)).await.expect("Unable to talk to debounce");
                    eprintln!("Received a packet {} from: {}", size, peer);
                    let packet_index: u16 = (buf[0] as u16) << 8 | buf[1] as u16;
                    if start == false { // first bytes of a new file: initialization // TODO: ADD A MUTEX to prevent many initializations
                        start = true;
                        let chunks_cnt: u32 = (buf[2] as u32) << 8 | buf[3] as u32;
                        let n: usize = MAX_DATAGRAM_SIZE << next_power_of_two_exponent(chunks_cnt);
                        unsafe {
                            layout.as_mut_ptr().write(Layout::from_size_align_unchecked(n, mem::align_of::<u8>()));
                            // /!\ data has type `*mut u8` which is not `Send`
                            fileBuffer.data = alloc(layout.assume_init());
                            peer_addr.as_mut_ptr().write(peer);
                        }
                        // create a sorted vector with all the required indexes
                        let a: Vec<u16> = vec![0; chunks_cnt as usize]; //(0..chunks_cnt).map(|x| x as u16).collect();
                        missing_indexes = a;
                    }
                    missing_indexes[packet_index as usize] = 1;
                    unsafe {
                        let dst_ptr = fileBuffer.data.offset((packet_index as usize * MAX_CHUNK_SIZE) as isize);
                        memcpy(dst_ptr, &buf[AG_HEADER], size - AG_HEADER);
                    };
                    println!("receiving packet {} from: {}", packet_index, peer);
                }
            }
        });
        // Prevent deadlocks
        drop(debounce_tx);
        match socket.recv_from(&mut buf).await {
            Ok((size, src)) => {
                network_tx.send((size, src)).await.expect("Unable to talk to network");
            }
            Err(e) => {
                eprintln!("couldn't receive a datagram: {}", e);
            }
        }
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let addr = env::args().nth(1).unwrap_or_else(|| "127.0.0.1:8080".to_string());
    let socket = UdpSocket::bind(&addr).await?;
    println!("Listening on: {}", socket.local_addr()?);
    run_server(socket).await; // the future must be awaited, or the server never runs
    Ok(())
}