Sending Nalgebra VectorN between threads - multithreading

I'm using Nalgebra's VectorN<f64, N> type in some single-threaded code which is working well. I'm now trying to multithread various parts of the algorithm, but have run into issues passing VectorNs into a thread::spawn call. For example, the following code fails to compile:
use std::thread;
use nalgebra::{VectorN, DefaultAllocator, DimName};
use nalgebra::allocator::Allocator;

struct Test<N>
where
    N: DimName,
    DefaultAllocator: Allocator<f64, N>,
{
    pub field: VectorN<f64, N>,
}

impl<N> Test<N>
where
    N: DimName,
    DefaultAllocator: Allocator<f64, N>,
{
    pub fn test(&self) {
        let handle = thread::spawn(move || {
            let thing = self.field;
            let thing2 = thing * 2.0;
            thing2
        });
        let res = handle.join().unwrap();
    }
}
With this error:
error[E0277]: `<nalgebra::base::default_allocator::DefaultAllocator as nalgebra::base::allocator::Allocator<f64, N>>::Buffer` cannot be sent between threads safely
--> trajectories/src/path/mod.rs:34:22
|
34 | let handle = thread::spawn(move || {
| ^^^^^^^^^^^^^ `<nalgebra::base::default_allocator::DefaultAllocator as nalgebra::base::allocator::Allocator<f64, N>>::Buffer` cannot be sent between threads safely
|
= help: within `nalgebra::base::matrix::Matrix<f64, N, nalgebra::base::dimension::U1, <nalgebra::base::default_allocator::DefaultAllocator as nalgebra::base::allocator::Allocator<f64, N>>::Buffer>`, the trait `std::marker::Send` is not
implemented for `<nalgebra::base::default_allocator::DefaultAllocator as nalgebra::base::allocator::Allocator<f64, N>>::Buffer`
= note: required because it appears within the type `nalgebra::base::matrix::Matrix<f64, N, nalgebra::base::dimension::U1, <nalgebra::base::default_allocator::DefaultAllocator as nalgebra::base::allocator::Allocator<f64, N>>::Buffer>`
= note: required by `std::thread::spawn`
I've tried various definitions for N and DefaultAllocator in the where clauses but haven't got far. Various search engines have turned up nothing useful around this issue.
If I replace VectorN<f64, N> with VectorN<f64, U3> (or any other U* type from Nalgebra), the above error goes away. I've read the Nalgebra generic programming guide, but that seems out of date and perhaps not what I need; I don't want complete genericity, just the ability to use VectorN with any dimension N. What trait bounds do I need to put on my struct so I can pass field into a thread?

I took a stab in the dark (based on the error messages given by the compiler) and managed to make this work by adding bounds to Allocator::Buffer like this:
use nalgebra::allocator::Allocator;
struct Test<N>
where
N: DimName,
DefaultAllocator: Allocator<f64, N>,
<DefaultAllocator as Allocator<f64, N>>::Buffer: Send + Sync,
{
pub field: VectorN<f64, N>,
}
// snip ...
I'm not sure this is the right way to do it and it certainly adds some noise, but it appears to now let me pass Nalgebra constructs into threads.
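For completeness, here is a fuller sketch of the same approach (my own illustration, not from the original post, assuming a nalgebra version where the allocator's Buffer is Clone): the Buffer: Send + Sync bound is repeated on the impl block, and the spawned closure is given an owned clone of the vector, because thread::spawn needs a 'static closure and so cannot capture &self.
use std::thread;

use nalgebra::allocator::Allocator;
use nalgebra::{DefaultAllocator, DimName, VectorN};

struct Test<N>
where
    N: DimName,
    DefaultAllocator: Allocator<f64, N>,
    <DefaultAllocator as Allocator<f64, N>>::Buffer: Send + Sync,
{
    pub field: VectorN<f64, N>,
}

impl<N> Test<N>
where
    N: DimName,
    DefaultAllocator: Allocator<f64, N>,
    <DefaultAllocator as Allocator<f64, N>>::Buffer: Send + Sync,
{
    pub fn test(&self) -> VectorN<f64, N> {
        // Clone so the spawned closure owns its data ('static requirement).
        let field = self.field.clone();
        let handle = thread::spawn(move || field * 2.0);
        handle.join().unwrap()
    }
}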

Related

Pin vs Box: Why is Box not enough?

I would like to see examples where keeping a type T in a Box would be unsafe, while keeping it in a Pin would be safe.
Initially, I thought that std::marker::PhantomPinned prevents an instance from being moved around (by forbidding it), but seemingly it does not, since:
use std::pin::Pin;
use std::marker::PhantomPinned;

#[derive(Debug)]
struct MyStruct {
    field: u32,
    _pin: PhantomPinned,
}

impl MyStruct {
    fn new(field: u32) -> Self {
        Self {
            field,
            _pin: PhantomPinned,
        }
    }
}

fn func(x: MyStruct) {
    println!("{:?}", x);
    func2(x);
}

fn func2(x: MyStruct) {
    println!("{:?}", x);
}

fn main() {
    let x = MyStruct::new(5);
    func(x);
}
this code compiles, despite the fact that it moves MyStruct from main into func and so on.
As for Box and Pin, they both keep their contents on the heap, so the data does not seem to be subject to moves in either case.
I would therefore appreciate it if someone could elaborate on this, since it is not covered in other questions or the docs: what is inherently wrong with just getting by with Box?
I think you misunderstand.
PhantomPinned does not make data immovable. It just says that once the data is pinned, it will never be able to be unpinned again.
Therefore, to make data with PhantomPinned unmovable, you have to Pin it first.
For example, if you create a pinned version of your MyStruct variable, you cannot unpin it:
fn main() {
    let pinned_x = Box::pin(MyStruct::new(5));
    let unpinned_x = Pin::into_inner(pinned_x);
}
error[E0277]: `PhantomPinned` cannot be unpinned
--> src/main.rs:20:38
|
20 | let unpinned_x = Pin::into_inner(pinned_x);
| --------------- ^^^^^^^^ within `MyStruct`, the trait `Unpin` is not implemented for `PhantomPinned`
| |
| required by a bound introduced by this call
|
= note: consider using `Box::pin`
note: required because it appears within the type `MyStruct`
--> src/main.rs:4:8
|
4 | struct MyStruct {
| ^^^^^^^^
note: required by a bound in `Pin::<P>::into_inner`
--> /home/martin/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/pin.rs:482:23
|
482 | impl<P: Deref<Target: Unpin>> Pin<P> {
| ^^^^^ required by this bound in `Pin::<P>::into_inner`
While with a normal struct, you can unpin it without a problem:
struct MyUnpinnableStruct;

fn main() {
    let pinned_x = Box::pin(MyUnpinnableStruct);
    let unpinned_x = Pin::into_inner(pinned_x);
}
Difference between Pin and Box
The two are completely different concepts. Pin makes sure that the data it points to cannot be moved. Box puts something on the heap.
As you can see from the previous examples, both are often used in conjunction, as the easiest way to prevent something from moving is to put it on the heap.
PhantomPinned causes types to be !Unpin, meaning that once they are pinned, they can no longer be unpinned.
You can try to use Pin on values on the stack, but you will quickly run into problems. While it works for structs that are Unpin:
struct MyUnpinnableStruct(u32);

fn main() {
    let y = MyUnpinnableStruct(7);
    {
        let pinned_y = Pin::new(&y);
    }
    // This moves y into the `drop` function
    drop(y);
}
It fails for structs that contain PhantomPinned:
fn main() {
    let x = MyStruct::new(5);
    {
        // This fails; pinning a reference to a stack object
        // will fail, because once we drop that reference the
        // object will be movable again. So we cannot `Pin` stack objects
        let pinned_x = Pin::new(&x);
    }
    // This moves x into the `drop` function
    drop(x);
}
error[E0277]: `PhantomPinned` cannot be unpinned
--> src/main.rs:24:33
|
24 | let pinned_x = Pin::new(&x);
| -------- ^^ within `MyStruct`, the trait `Unpin` is not implemented for `PhantomPinned`
| |
| required by a bound introduced by this call
|
= note: consider using `Box::pin`
note: required because it appears within the type `MyStruct`
--> src/main.rs:4:8
|
4 | struct MyStruct {
| ^^^^^^^^
note: required by a bound in `Pin::<P>::new`
--> /home/martin/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/pin.rs:482:23
|
482 | impl<P: Deref<Target: Unpin>> Pin<P> {
| ^^^^^ required by this bound in `Pin::<P>::new`
Box without Pin
While the content of Box is on the heap and therefore has a constant address, you can still move it back from the heap to the stack, which wouldn't be possible with a Pin object:
// Note that MyData does not implement Clone or Copy
struct MyData(u32);

impl MyData {
    fn print_addr(&self) {
        println!("Address: {:p}", self);
    }
}

fn main() {
    // On the heap
    let x_heap = Box::new(MyData(42));
    x_heap.print_addr();

    // Moved back on the stack
    let x_stack = *x_heap;
    x_stack.print_addr();
}
Address: 0x557452040ad0
Address: 0x7ffde8f7f0d4
Enforcing Pin
To make sure that an object is pinned in a member function, you can use the following syntax:
fn print_addr(self: Pin<&Self>)
Together with PhantomPinned, you now can be 100% sure that print_addr will always print the same address for the same object:
use std::{marker::PhantomPinned, pin::Pin};

struct MyData(u32, PhantomPinned);

impl MyData {
    fn print_addr(self: Pin<&Self>) {
        println!("Address: {:p}", self);
    }
}

fn main() {
    // On the heap
    let x_pinned = Box::pin(MyData(42, PhantomPinned));
    x_pinned.as_ref().print_addr();

    // Moved back on the stack
    let x_unpinned = Pin::into_inner(x_pinned); // FAILS!
    let x_stack = *x_unpinned;
    let x_pinned_again = Box::pin(x_stack);
    x_pinned_again.as_ref().print_addr();
}
In this example, there is absolutely no way to ever unpin x_pinned again, and print_addr can only be called on the pinned object.
Why is this useful? For example because you can now work with raw pointers, as is required in the Future trait.
But in general, Pin is only really useful if paired with unsafe code. Without unsafe code, the borrow checker is sufficient to keep track of your objects.
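As a rough illustration of that pairing (my own sketch, not from the original answer): a self-referential struct stores a raw pointer into itself, and Pin plus PhantomPinned is what makes it sound to rely on that pointer staying valid.
use std::{marker::PhantomPinned, pin::Pin};

// A value that stores a raw pointer into itself. Moving it would invalidate
// `ptr_to_data`, so it is made !Unpin and only ever handled through Pin.
struct SelfReferential {
    data: u32,
    ptr_to_data: *const u32,
    _pin: PhantomPinned,
}

impl SelfReferential {
    fn new(data: u32) -> Pin<Box<Self>> {
        let mut boxed = Box::pin(SelfReferential {
            data,
            ptr_to_data: std::ptr::null(),
            _pin: PhantomPinned,
        });
        let ptr = &boxed.data as *const u32;
        // SAFETY: we only write the pointer field; the value is never moved
        // out of the Pin afterwards, so the address stays valid.
        unsafe {
            boxed.as_mut().get_unchecked_mut().ptr_to_data = ptr;
        }
        boxed
    }

    fn read_through_ptr(self: Pin<&Self>) -> u32 {
        // SAFETY: the value is pinned, so the pointer still points at `data`.
        unsafe { *self.ptr_to_data }
    }
}

fn main() {
    let s = SelfReferential::new(42);
    println!("{}", s.as_ref().read_through_ptr()); // prints 42
}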

Creating a cyclic Tokio stream connected to a shared state

I am running into a problem that I do not really understand and hope that somebody might be able to see what I have misunderstood.
The problem is quite straightforward: I have a global state (shared between several tasks) and want an infinite cycle over a vector in that state. I will then zip that with an interval stream and hence get a regular emission of the next value in the stream.
If the vector in the state changes, the infinite stream should just reload the vector, start reading from the new one instead, and discard the old array.
Here is the code I have so far; the questions are at the end of the post.
use futures::stream::Stream;
use futures::{Async, Poll};
use std::iter::Cycle;
use std::sync::{Arc, Mutex};
use std::time::{Duration, Instant};
use tokio::timer::Interval;
We define a global state that holds an array that can be updated. Whenever the array is updated, we step the version and set the array.
struct State<T> {
    version: u32,
    array: Vec<T>,
}

impl<T> State<T> {
    fn new(array: Vec<T>) -> Self {
        Self {
            version: 0,
            array: Vec::new(),
        }
    }

    fn update(&mut self, array: Vec<T>) {
        self.version += 1;
        self.array = array;
    }
}
Now, we create a stream over the state. When initialized, it reads the array and version from the state, stores them, and then keeps an instance of std::iter::Cycle internally that cycles over the array.
struct StateStream<I> {
    state: Arc<Mutex<State<I::Item>>>,
    version: u32,
    iter: Cycle<I>,
}

impl<I> StateStream<I>
where
    I: Iterator,
{
    fn new(state: Arc<Mutex<State<I::Item>>>) -> Self {
        let (version, array) = {
            let locked_state = state.lock().unwrap();
            (locked_state.version, locked_state.array)
        };

        Self {
            state: state,
            version: version,
            iter: array.iter().cycle(),
        }
    }
}
We now implement Stream for StateStream. On each poll, it checks whether the version of the state has changed and, if it has, reloads the array and version. We then take the next item from the iterator and return it.
impl<I> Stream for StateStream<I>
where
    I: Iterator + Clone,
{
    type Item = I::Item;
    type Error = ();

    fn poll(&mut self) -> Poll<Option<Self::Item>, Self::Error> {
        let locked_state = self.state.lock().unwrap();
        if locked_state.version > self.version {
            self.iter = locked_state.array.clone().iter().cycle();
            self.version = locked_state.version;
        }
        Ok(Async::Ready(self.iter.next()))
    }
}
The main program looks like this. I do not update the vector here, but that is not important for the case at hand.
fn main() {
    let state = Arc::new(Mutex::new(State::new(vec![2, 3, 5, 7, 11, 13])));
    let primes = StateStream::new(state)
        .take(20)
        .zip(
            Interval::new(Instant::now(), Duration::from_millis(500))
                .map_err(|err| println!("Error: {}", err)),
        )
        .for_each(|(number, instant)| {
            println!("fire; number={}, instant={:?}", number, instant);
            Ok(())
        });

    tokio::run(primes);
}
When compiling this, I get the following errors:
cargo run --example cycle_stream_shared
Compiling tokio-testing v0.1.0 (/home/mats/crates/tokio-examples)
error[E0308]: mismatched types
--> examples/cycle_stream_shared.rs:66:19
|
66 | iter: array.iter().cycle(),
| ^^^^^^^^^^^^^^^^^^^^ expected type parameter, found struct `std::slice::Iter`
|
= note: expected type `std::iter::Cycle<I>`
found type `std::iter::Cycle<std::slice::Iter<'_, <I as std::iter::Iterator>::Item>>`
error[E0308]: mismatched types
--> examples/cycle_stream_shared.rs:81:25
|
81 | self.iter = locked_state.array.clone().iter().cycle();
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected type parameter, found struct `std::slice::Iter`
|
= note: expected type `std::iter::Cycle<I>`
found type `std::iter::Cycle<std::slice::Iter<'_, <I as std::iter::Iterator>::Item>>`
error: aborting due to 2 previous errors
For more information about this error, try `rustc --explain E0308`.
error: Could not compile `tokio-testing`.
To learn more, run the command again with --verbose.
Now, the error and the explanation say that the concrete type cannot be derived, but in this case I am using the generic struct Cycle<I> and expect I to be instantiated to std::slice::Iter<'_, I::Item>. Since std::slice::Iter implements Iterator, the type should satisfy all the trait bounds needed to match.
Some answers to similar questions exist, but nothing that seems to match this case:
“Expected type parameter” error in the constructor of a generic struct shows that the types do not match (the same point the explanation makes) because the generic struct definition allows any type, but the construction requires a specific type. In this case, we are using a generic type Cycle<I>, where I should implement the Iterator trait, and try to use a type std::slice::Iter<..> that does implement Iterator.
How do I return an instance of a trait from a method? talks about how to return an arbitrary type matching a trait, which is not the case here.
The other questions are mostly referring to these two, or variations of them.
Update: Changed it to be a generic type to demonstrate that it still does not work.
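For reference (not part of the original post), here is a minimal illustration of the distinction the compiler is drawing: inside a generic item the caller chooses the concrete I, so the constructor body cannot decide that I is some particular iterator type. One way around it is to fix the stored iterator type instead of keeping it generic; the names below are purely illustrative.
use std::iter::Cycle;
use std::vec::IntoIter;

// The shape of the problem: in a generic function the *caller* picks `I`,
// so the body cannot substitute its own iterator type for it.
//
//     fn make<I: Iterator>(v: Vec<I::Item>) -> Cycle<I> {
//         v.into_iter().cycle() // E0308: expected `I`, found `std::vec::IntoIter`
//     }
//
// Fixing the iterator type that the struct stores sidesteps that:
struct OwnedCycle<T> {
    iter: Cycle<IntoIter<T>>,
}

impl<T: Clone> OwnedCycle<T> {
    fn new(items: Vec<T>) -> Self {
        Self {
            iter: items.into_iter().cycle(),
        }
    }

    fn next_item(&mut self) -> Option<T> {
        self.iter.next()
    }
}

fn main() {
    let mut c = OwnedCycle::new(vec![1, 2, 3]);
    assert_eq!(c.next_item(), Some(1));
    assert_eq!(c.next_item(), Some(2));
    assert_eq!(c.next_item(), Some(3));
    assert_eq!(c.next_item(), Some(1)); // wraps around
}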

Spawning tasks with non-static lifetimes with tokio 0.1.x

I have a tokio core whose main task is running a websocket (client). When I receive some messages from the server, I want to execute a new task that will update some data. Below is a minimal failing example:
use tokio_core::reactor::{Core, Handle};
use futures::future::Future;
use futures::future;

struct Client {
    handle: Handle,
    data: usize,
}

impl Client {
    fn update_data(&mut self) {
        // spawn a new task that updates the data
        self.handle.spawn(future::ok(()).and_then(|x| {
            self.data += 1; // error here
            future::ok(())
        }));
    }
}

fn main() {
    let mut runtime = Core::new().unwrap();

    let mut client = Client {
        handle: runtime.handle(),
        data: 0,
    };

    let task = future::ok::<(), ()>(()).and_then(|_| {
        // under some conditions (omitted), we update the data
        client.update_data();
        future::ok::<(), ()>(())
    });

    runtime.run(task).unwrap();
}
Which produces this error:
error[E0477]: the type `futures::future::and_then::AndThen<futures::future::result_::FutureResult<(), ()>, futures::future::result_::FutureResult<(), ()>, [closure#src/main.rs:13:51: 16:10 self:&mut &mut Client]>` does not fulfill the required lifetime
--> src/main.rs:13:21
|
13 | self.handle.spawn(future::ok(()).and_then(|x| {
| ^^^^^
|
= note: type must satisfy the static lifetime
The problem is that new tasks that are spawned through a handle need to be 'static. The same issue is described here. Sadly, it is unclear to me how I can fix the issue. Even with some attempts using an Arc and a Mutex (which really shouldn't be needed for a single-threaded application), I was unsuccessful.
Since developments occur rather quickly in the tokio landscape, I am wondering what the current best solution is. Do you have any suggestions?
Edit:
The solution by Peter Hall works for the example above. Sadly, when I built the failing example I switched the tokio reactor, thinking they would be similar. Using tokio::runtime::current_thread:
use futures::future;
use futures::future::Future;
use futures::stream::Stream;
use std::cell::Cell;
use std::rc::Rc;
use tokio::runtime::current_thread::{Builder, Handle};

struct Client {
    handle: Handle,
    data: Rc<Cell<usize>>,
}

impl Client {
    fn update_data(&mut self) {
        // spawn a new task that updates the data
        let mut data = Rc::clone(&self.data);
        self.handle.spawn(future::ok(()).and_then(move |_x| {
            data.set(data.get() + 1);
            future::ok(())
        }));
    }
}

fn main() {
    // let mut runtime = Core::new().unwrap();
    let mut runtime = Builder::new().build().unwrap();

    let mut client = Client {
        handle: runtime.handle(),
        data: Rc::new(Cell::new(1)),
    };

    let task = future::ok::<(), ()>(()).and_then(|_| {
        // under some conditions (omitted), we update the data
        client.update_data();
        future::ok::<(), ()>(())
    });

    runtime.block_on(task).unwrap();
}
I obtain:
error[E0277]: `std::rc::Rc<std::cell::Cell<usize>>` cannot be sent between threads safely
--> src/main.rs:17:21
|
17 | self.handle.spawn(future::ok(()).and_then(move |_x| {
| ^^^^^ `std::rc::Rc<std::cell::Cell<usize>>` cannot be sent between threads safely
|
= help: within `futures::future::and_then::AndThen<futures::future::result_::FutureResult<(), ()>, futures::future::result_::FutureResult<(), ()>, [closure#src/main.rs:17:51: 20:10 data:std::rc::Rc<std::cell::Cell<usize>>]>`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<std::cell::Cell<usize>>`
= note: required because it appears within the type `[closure#src/main.rs:17:51: 20:10 data:std::rc::Rc<std::cell::Cell<usize>>]`
= note: required because it appears within the type `futures::future::chain::Chain<futures::future::result_::FutureResult<(), ()>, futures::future::result_::FutureResult<(), ()>, [closure#src/main.rs:17:51: 20:10 data:std::rc::Rc<std::cell::Cell<usize>>]>`
= note: required because it appears within the type `futures::future::and_then::AndThen<futures::future::result_::FutureResult<(), ()>, futures::future::result_::FutureResult<(), ()>, [closure#src/main.rs:17:51: 20:10 data:std::rc::Rc<std::cell::Cell<usize>>]>`
So it does seem like in this case I need an Arc and a Mutex even though the entire code is single-threaded?
In a single-threaded program, you don't need to use Arc; Rc is sufficient:
use std::{rc::Rc, cell::Cell};

struct Client {
    handle: Handle,
    data: Rc<Cell<usize>>,
}

impl Client {
    fn update_data(&mut self) {
        let data = Rc::clone(&self.data);
        self.handle.spawn(future::ok(()).and_then(move |_x| {
            data.set(data.get() + 1);
            future::ok(())
        }));
    }
}
The point is that you no longer have to worry about the lifetime because each clone of the Rc acts as if it owns the data, rather than accessing it via a reference to self. The inner Cell (or RefCell for non-Copy types) is needed because the Rc can't be dereferenced mutably, since it has been cloned.
The spawn method of tokio::runtime::current_thread::Handle requires that the future is Send, which is what is causing the problem in the update to your question. There is an explanation (of sorts) for why this is the case in this Tokio Github issue.
You can use tokio::runtime::current_thread::spawn instead of the method of Handle, which will always run the future in the current thread, and does not require that the future is Send. You can replace self.handle.spawn in the code above and it will work just fine.
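For example, a sketch of that substitution (using tokio 0.1's current_thread runtime; the surrounding setup is my own illustration):
use futures::future::{self, Future};
use std::{cell::Cell, rc::Rc};
use tokio::runtime::current_thread;

struct Client {
    data: Rc<Cell<usize>>,
}

impl Client {
    fn update_data(&mut self) {
        let data = Rc::clone(&self.data);
        // current_thread::spawn runs the future on the current thread's
        // executor, so it does not require the future to be Send.
        current_thread::spawn(future::ok(()).and_then(move |_| {
            data.set(data.get() + 1);
            future::ok(())
        }));
    }
}

fn main() {
    let mut runtime = current_thread::Runtime::new().unwrap();
    let mut client = Client {
        data: Rc::new(Cell::new(0)),
    };

    runtime
        .block_on(future::lazy(|| {
            // We are inside the runtime here, so current_thread::spawn works.
            client.update_data();
            future::ok::<(), ()>(())
        }))
        .unwrap();

    // Drive the spawned task to completion.
    runtime.run().unwrap();
    println!("data = {}", client.data.get());
}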
If you need to use the method on Handle then you will also need to resort to Arc and Mutex (or RwLock) in order to satisfy the Send requirement:
use std::sync::{Mutex, Arc};

struct Client {
    handle: Handle,
    data: Arc<Mutex<usize>>,
}

impl Client {
    fn update_data(&mut self) {
        let data = Arc::clone(&self.data);
        self.handle.spawn(future::ok(()).and_then(move |_x| {
            *data.lock().unwrap() += 1;
            future::ok(())
        }));
    }
}
If your data is really a usize, you could also use AtomicUsize instead of Mutex<usize>, but I personally find it just as unwieldy to work with.
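For reference, that variant would look something like this (same fragment style as the snippets above, with Handle and the futures imports as before):
use std::sync::{
    atomic::{AtomicUsize, Ordering},
    Arc,
};

struct Client {
    handle: Handle,
    data: Arc<AtomicUsize>,
}

impl Client {
    fn update_data(&mut self) {
        let data = Arc::clone(&self.data);
        self.handle.spawn(future::ok(()).and_then(move |_x| {
            // A single atomic increment; no lock to acquire or poison.
            data.fetch_add(1, Ordering::SeqCst);
            future::ok(())
        }));
    }
}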

Why do I get an error that "Sync is not satisfied" when moving self, which contains an Arc, into a new thread?

I have a struct which holds an Arc<Receiver<f32>> and I'm trying to add a method which takes ownership of self, and moves the ownership into a new thread and starts it. However, I'm getting the error
error[E0277]: the trait bound `std::sync::mpsc::Receiver<f32>: std::marker::Sync` is not satisfied
--> src/main.rs:19:9
|
19 | thread::spawn(move || {
| ^^^^^^^^^^^^^ `std::sync::mpsc::Receiver<f32>` cannot be shared between threads safely
|
= help: the trait `std::marker::Sync` is not implemented for `std::sync::mpsc::Receiver<f32>`
= note: required because of the requirements on the impl of `std::marker::Send` for `std::sync::Arc<std::sync::mpsc::Receiver<f32>>`
= note: required because it appears within the type `Foo`
= note: required because it appears within the type `[closure#src/main.rs:19:23: 22:10 self:Foo]`
= note: required by `std::thread::spawn`
If I change the struct to hold an Arc<i32> instead, or just a Receiver<f32>, it compiles, but not with an Arc<Receiver<f32>>. How does this work? The error doesn't make sense to me as I'm not trying to share it between threads (I'm moving it, not cloning it).
Here is the full code:
use std::sync::mpsc::{channel, Receiver, Sender};
use std::sync::Arc;
use std::thread;

pub struct Foo {
    receiver: Arc<Receiver<f32>>,
}

impl Foo {
    pub fn new() -> (Foo, Sender<f32>) {
        let (sender, receiver) = channel::<f32>();
        let sink = Foo {
            receiver: Arc::new(receiver),
        };
        (sink, sender)
    }

    pub fn run_thread(self) -> thread::JoinHandle<()> {
        thread::spawn(move || {
            println!("Thread spawned by 'run_thread'");
            self.run(); // <- This line gives the error
        })
    }

    fn run(mut self) {
        println!("Executing 'run'")
    }
}

fn main() {
    let (example, sender) = Foo::new();
    let handle = example.run_thread();
    handle.join();
}
How does this work?
Let's check the requirements of thread::spawn again:
pub fn spawn<F, T>(f: F) -> JoinHandle<T>
where
    F: FnOnce() -> T,
    F: Send + 'static, // <-- this line is important for us
    T: Send + 'static,
Since Foo contains an Arc<Receiver<_>>, let's check if and how Arc implements Send:
impl<T> Send for Arc<T>
where
    T: Send + Sync + ?Sized,
So Arc<T> implements Send if T implements Send and Sync. And while Receiver implements Send, it does not implement Sync.
So why does Arc have such strong requirements for T? T also has to implement Send because Arc can act like a container; if you could just hide something that doesn't implement Send in an Arc, send it to another thread and unpack it there... bad things would happen. The interesting part is to see why T also has to implement Sync, which is apparently also the part you are struggling with:
The error doesn't make sense to me as I'm not trying to share it between threads (I'm moving it, not cloning it).
The compiler can't know that the Arc in Foo is in fact not shared. Consider if you would add a #[derive(Clone)] to Foo later (which is possible without a problem):
fn main() {
    let (example, sender) = Foo::new();
    let clone = example.clone();
    let handle = example.run_thread();
    clone.run();
    // oopsie, now the same `Receiver` is used from two threads!
    handle.join();
}
In the example above there is only one Receiver which is shared between threads. And this is no good, since Receiver does not implement Sync!
To me this code raises the question: why the Arc in the first place? As you noticed, without the Arc, it works without a problem: you clearly state that Foo is the only owner of the Receiver. And if you are "not trying to share [the Receiver]" anyway, there is no point in having multiple owners.
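For illustration, here is the same program with the Arc removed (a sketch of that suggestion, not code from the original question): Foo owns the Receiver directly, Receiver<f32> is Send, and the spawn compiles.
use std::sync::mpsc::{channel, Receiver, Sender};
use std::thread;

pub struct Foo {
    receiver: Receiver<f32>,
}

impl Foo {
    pub fn new() -> (Foo, Sender<f32>) {
        let (sender, receiver) = channel::<f32>();
        (Foo { receiver }, sender)
    }

    pub fn run_thread(self) -> thread::JoinHandle<()> {
        // Receiver<f32> is Send, so moving the whole Foo into the thread is fine.
        thread::spawn(move || {
            println!("Thread spawned by 'run_thread'");
            self.run();
        })
    }

    fn run(self) {
        println!("Executing 'run'");
    }
}

fn main() {
    let (example, _sender) = Foo::new();
    example.run_thread().join().unwrap();
}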

How do I share a generic struct between threads using Arc<Mutex<MyStruct<T>>>?

I have some mutable state I need to share between threads. I followed the concurrency section of the Rust book, which shares a vector between threads and mutates it.
Instead of a vector, I need to share a generic struct that is ultimately monomorphized. Here is a distilled example of what I'm trying:
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;
use std::marker::PhantomData;

trait Memory {}

struct SimpleMemory;
impl Memory for SimpleMemory {}

struct SharedData<M: Memory> {
    value: usize,
    phantom: PhantomData<M>,
}

impl<M: Memory> SharedData<M> {
    fn new() -> Self {
        SharedData {
            value: 0,
            phantom: PhantomData,
        }
    }
}

fn main() {
    share(SimpleMemory);
}

fn share<M: Memory>(memory: M) {
    let data = Arc::new(Mutex::new(SharedData::<M>::new()));

    for i in 0..3 {
        let data = data.clone();
        thread::spawn(move || {
            let mut data = data.lock().unwrap();
            data.value += i;
        });
    }

    thread::sleep(Duration::from_millis(50));
}
The compiler complains with the following error:
error[E0277]: the trait bound `M: std::marker::Send` is not satisfied
--> src/main.rs:37:9
|
37 | thread::spawn(move || {
| ^^^^^^^^^^^^^
|
= help: consider adding a `where M: std::marker::Send` bound
= note: required because it appears within the type `std::marker::PhantomData<M>`
= note: required because it appears within the type `SharedData<M>`
= note: required because of the requirements on the impl of `std::marker::Send` for `std::sync::Mutex<SharedData<M>>`
= note: required because of the requirements on the impl of `std::marker::Send` for `std::sync::Arc<std::sync::Mutex<SharedData<M>>>`
= note: required because it appears within the type `[closure#src/main.rs:37:23: 40:10 data:std::sync::Arc<std::sync::Mutex<SharedData<M>>>, i:usize]`
= note: required by `std::thread::spawn`
I'm trying to understand why M would need to implement Send, and what the appropriate way to accomplish this is.
I'm trying to understand why M would need to implement Send, ...
Because, as stated by the Send documentation:
Types that can be transferred across thread boundaries.
If it's not Send, it is by definition not safe to send to another thread.
Almost all of the information you need is right there in the documentation:
thread::spawn requires the callable you give it to be Send.
You're using a closure, which is only Send if all the values it captures are Send. This is true in general of most types (they are Send if everything they're made of is Send, and similarly for Sync).
You're capturing data, which is an Arc<T>, which is only Send if T is Send.
T is a Mutex<U>, which is only Send if U is Send.
U is M. Thus, M must be Send.
In addition, note that thread::spawn also requires that the callable be 'static, so you need that too. It needs that because if it didn't require that, it'd have no guarantee that the value will continue to exist for the entire lifetime of the thread (which may or may not outlive the thread that spawned it).
..., and what the appropriate way to accomplish this is.
Same way as any other constraints: M: 'static + Send + Memory.
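A sketch of what that looks like applied to the example (the join loop replacing the sleep, and the final println, are my own additions):
use std::marker::PhantomData;
use std::sync::{Arc, Mutex};
use std::thread;

trait Memory {}

struct SimpleMemory;
impl Memory for SimpleMemory {}

struct SharedData<M: Memory> {
    value: usize,
    phantom: PhantomData<M>,
}

impl<M: Memory> SharedData<M> {
    fn new() -> Self {
        SharedData {
            value: 0,
            phantom: PhantomData,
        }
    }
}

// The only change from the question: `M` must also be Send + 'static so that
// Arc<Mutex<SharedData<M>>>, and the closure capturing it, are Send + 'static.
fn share<M: Memory + Send + 'static>(_memory: M) {
    let data = Arc::new(Mutex::new(SharedData::<M>::new()));

    let mut handles = Vec::new();
    for i in 0..3 {
        let data = Arc::clone(&data);
        handles.push(thread::spawn(move || {
            let mut data = data.lock().unwrap();
            data.value += i;
        }));
    }

    // Joining the handles replaces the sleep from the question.
    for handle in handles {
        handle.join().unwrap();
    }

    println!("final value = {}", data.lock().unwrap().value);
}

fn main() {
    share(SimpleMemory);
}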
