How to pass immutable parameters to a thread? (about lifetimes)

Let the thread state consist of immutable parameters Params and the rest of the (mutable) state State.
I am trying to mock up spawning a thread that does something controlled by the parameters Params:
use std::thread;

struct Params {
    x: i32,
}

struct State<'a> {
    params: &'a Params,
    y: i32,
}

impl<'a> State<'a> {
    fn new(params: &Params) -> State {
        State {
            params,
            y: 0,
        }
    }
    fn start(&mut self) -> thread::JoinHandle<()> {
        let params = self.params.clone();
        thread::spawn(move || { params; /* ... */ })
    }
}
But this does not work:
error[E0495]: cannot infer an appropriate lifetime due to conflicting requirements
--> test.rs:20:34
|
20 | let params = self.params.clone();
| ^^^^^
|
note: first, the lifetime cannot outlive the lifetime `'a` as defined on the impl at 12:6...
--> test.rs:12:6
|
12 | impl<'a> State<'a> {
| ^^
note: ...so that the types are compatible
--> test.rs:20:34
|
20 | let params = self.params.clone();
| ^^^^^
= note: expected `&&Params`
found `&&'a Params`
= note: but, the lifetime must be valid for the static lifetime...
note: ...so that the type `[closure#test.rs:21:23: 21:42 params:&Params]` will meet its required lifetime bounds
--> test.rs:21:9
|
21 | thread::spawn(move || { params; /* ... */ })
| ^^^^^^^^^^^^^
I understand why it does not work: the thread could run indefinitely, and params could be destroyed before the thread terminates. That's clearly an error.
Now, what is the proper way to make params live at least as long as the thread? In other words, help me correct the above code. What should I do with the lifetimes?

You got the right idea with using clone and a move closure, but you forgot one detail: Params isn't Clone! Therefore the compiler did the best it could when it saw self.params.clone() and cloned… the reference.
That's why the error message has two &s here:
= note: expected `&&Params`
found `&&'a Params`
Your issue is solved by using #[derive(Clone)] struct Params { /* … */ }.
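For reference, here is a sketch of the corrected code with that single change applied (the body of the spawned closure and the main function are only illustrative):

use std::thread;

#[derive(Clone)]
struct Params {
    x: i32,
}

struct State<'a> {
    params: &'a Params,
    y: i32,
}

impl<'a> State<'a> {
    fn new(params: &Params) -> State {
        State { params, y: 0 }
    }
    fn start(&mut self) -> thread::JoinHandle<()> {
        // `clone()` now resolves to `Params::clone` instead of cloning the
        // `&Params` reference, so the closure owns its own copy of the data
        // and satisfies the `'static` bound of `thread::spawn`.
        let params = self.params.clone();
        thread::spawn(move || {
            let _ = params.x; /* ... */
        })
    }
}

fn main() {
    let params = Params { x: 1 };
    let mut state = State::new(&params);
    let handle = state.start();
    handle.join().unwrap();
}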

Related

How can I use a channel between an async closure and my main thread in rust?

I am trying to use a channel to communicate between an event handler and the main thread of my program using async rust. The event handler in question is this one from the matrix-rust-sdk.
I can see that this exact pattern is used in the code here.
But when I tried literally the same thing in my own code, it gives me a really strange lifetime error.
error: lifetime may not live long enough
--> src/main.rs:84:13
|
80 | move |event: OriginalSyncRoomMessageEvent, room: Room| {
| ------------------------------------------------------
| | |
| | return type of closure `impl futures_util::Future<Output = ()>` contains a lifetime `'2`
| lifetime `'1` represents this closure's body
...
84 | / async move {
85 | | if let Room::Joined(room) = room {
86 | | if room.room_id() == room_id {
87 | | match event.content.msgtype {
... |
94 | | }
95 | | }
| |_____________^ returning this value requires that `'1` must outlive `'2`
|
= note: closure implements `Fn`, so references to captured variables can't escape the closure
I tried to make a much simpler example, and the weird lifetime error remains:
use tokio::sync::mpsc;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let (tx, _rx) = mpsc::channel(32);
    let closure = move || async {
        tx.send("hi");
    };
    Ok(())
}
Gives me:
error: lifetime may not live long enough
--> src/main.rs:9:27
|
9 | let closure = move || async {
| ___________________-------_^
| | | |
| | | return type of closure `impl Future<Output = ()>` contains a lifetime `'2`
| | lifetime `'1` represents this closure's body
10 | | tx.send("hi");
11 | | };
| |_____^ returning this value requires that `'1` must outlive `'2`
|
= note: closure implements `Fn`, so references to captured variables can't escape the closure
So how can I use a channel in an async closure? Why doesn't my code work when the code in the matrix-rust-sdk does?
I think you meant || async move instead of move || async.
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx, _rx) = mpsc::channel(32);
    let closure = || async move {
        tx.send("hi").await.unwrap();
    };
    closure().await;
}
I think in most cases, |args| async move {} is what you want when creating an async closure, but I don't completely understand the differences either.
For more info, this might help:
What is the difference between `|_| async move {}` and `async move |_| {}`.
I don't think your minimal example represents the actual problem in your real code, though. Here is a minimal example that does:
#[derive(Clone, Debug)]
struct RoomId(u32);

#[derive(Clone, Debug)]
struct Room {
    id: RoomId,
}

impl Room {
    fn room_id(&self) -> &RoomId {
        &self.id
    }
}

#[tokio::main]
async fn main() {
    let dm_room = Room { id: RoomId(42) };
    let dm_room_closure = dm_room.clone();

    let closure = move || {
        let room_id = dm_room_closure.room_id();

        async move {
            println!("{}", room_id.0);
        }
    };

    closure().await;
}
error: lifetime may not live long enough
--> src/main.rs:23:9
|
20 | let closure = move || {
| -------
| | |
| | return type of closure `impl Future<Output = ()>` contains a lifetime `'2`
| lifetime `'1` represents this closure's body
...
23 | / async move {
24 | | println!("{}", room_id.0);
25 | | }
| |_________^ returning this value requires that `'1` must outlive `'2`
|
= note: closure implements `Fn`, so references to captured variables can't escape the closure
The real problem here is caused by the fact that room_id contains a reference to dm_room_closure, but dm_room_closure does not get kept alive by the innermost async move context.
To fix this, make sure that the async move keeps dm_room_closure alive by moving it in as well. In this case, this is as simple as creating the room_id variable inside of the async move:
#[derive(Clone, Debug)]
struct RoomId(u32);

#[derive(Clone, Debug)]
struct Room {
    id: RoomId,
}

impl Room {
    fn room_id(&self) -> &RoomId {
        &self.id
    }
}

#[tokio::main]
async fn main() {
    let dm_room = Room { id: RoomId(42) };
    let dm_room_closure = dm_room.clone();

    let closure = move || {
        async move {
            let room_id = dm_room_closure.room_id();
            println!("{}", room_id.0);
        }
    };

    closure().await;
}
42
So I finally fixed the error. It turns out that something in Room doesn't implement Copy and therefore it was causing some sort of state sharing despite the Clones. I fixed it by passing the RoomId as a string. Since the lifetime error message is entirely opaque, there was no way to see which moved variable was actually causing the problem. Off to file a compiler bug report.

Lifetime issues when refactoring into function with closure back-call

I am writing a Rust application that uses the wgpu library to render stuff. How the library works is largely unimportant, since the errors I'm facing are lifetime-related.
The function that actually performs the rendering looks like this (you don't need to understand it; I listed it largely for completeness' sake):
pub fn render(&self) -> anyhow::Result<()> {
    let output = self.surface.get_current_texture()?;
    let view = output.texture.create_view(&wgpu::TextureViewDescriptor::default());
    let mut encoder = self.device.create_command_encoder(
        &wgpu::CommandEncoderDescriptor { label: Some("render encoder") }
    );

    let render_pass_descriptor = wgpu::RenderPassDescriptor {
        label: Some("render pass"),
        color_attachments: &[
            wgpu::RenderPassColorAttachment {
                view: &view,
                resolve_target: None,
                ops: wgpu::Operations {
                    load: wgpu::LoadOp::Clear(
                        wgpu::Color { r: 0.1, g: 0.2, b: 0.3, a: 1.0 }),
                    store: false,
                }
            },
        ],
        depth_stencil_attachment: None,
    };

    let mut render_pass = encoder.begin_render_pass(&render_pass_descriptor);
    render_pass.set_pipeline(&self.render_pipeline);
    render_pass.set_vertex_buffer(0, self.vertex_buffer.slice(..));
    render_pass.draw(0..self.num_vertecies, 0..1);

    // Explicitly drop, bc. it borrows the encoder.
    drop(render_pass);

    self.queue.submit(iter::once(encoder.finish()));
    output.present();
    Ok(())
}
I wanted to refactor this piece of code into a utility function, but keep the three calls on the render_pass object.
The utility function has this signature and does the same stuff the original function did, but instead of the three calls on render_pass, it just calls the render_pass_configurator closure:
pub fn submit_render_pass<F: FnOnce(wgpu::RenderPass)>(
    surface: &wgpu::Surface,
    device: &wgpu::Device,
    queue: &wgpu::Queue,
    clear_color: wgpu::Color,
    render_pass_configurator: F,
) -> anyhow::Result<()> { ... }
And the body of the original render() function is replaced with the call to this utility function:
util::submit_render_pass(&self.surface, &self.device, &self.queue,
    wgpu::Color { r: 0.1, g: 0.2, b: 0.3, a: 1.0 },
    |mut render_pass| {
        render_pass.set_pipeline(&self.render_pipeline);
        render_pass.set_vertex_buffer(0, self.vertex_buffer.slice(..));
        render_pass.draw(0..self.num_vertecies, 0..1);
    },
)
Seems straightforward to me, but of course Rust's borrow checker disagrees:
error[E0495]: cannot infer an appropriate lifetime for borrow expression due to conflicting requirements
--> src/gpu.rs:89:38
|
89 | render_pass.set_pipeline(&self.render_pipeline);
| ^^^^^^^^^^^^^^^^^^^^^
|
note: first, the lifetime cannot outlive the anonymous lifetime defined here...
--> src/gpu.rs:85:19
|
85 | pub fn render(&self) -> anyhow::Result<()> {
| ^^^^^
note: ...so that reference does not outlive borrowed content
--> src/gpu.rs:89:38
|
89 | render_pass.set_pipeline(&self.render_pipeline);
| ^^^^^^^^^^^^^^^^^^^^^
note: but, the lifetime must be valid for the anonymous lifetime #1 defined here...
--> src/gpu.rs:88:9
|
88 | / | mut render_pass: wgpu::RenderPass | {
89 | | render_pass.set_pipeline(&self.render_pipeline);
90 | | render_pass.set_vertex_buffer(0, self.vertex_buffer.slice(..));
91 | | render_pass.draw(0..self.num_vertecies, 0..1);
92 | | },
| |_________^
note: ...so that the types are compatible
--> src/gpu.rs:89:25
|
89 | render_pass.set_pipeline(&self.render_pipeline);
| ^^^^^^^^^^^^
= note: expected `&mut RenderPass<'_>`
found `&mut RenderPass<'_>`
(... and a similar error for the .slice(..) call.)
I understand that because of the render_pass.set_pipeline(&self.render_pipeline) call, render_pass may not ever live longer than &self. But it doesn't: render_pass gets dropped at the end of the closure, and &self lives on.
I tried adding lifetimes to the best of my ability, and I got the error to change only once, when I added an explicit lifetime to the utility function and changed the closure bound to F: FnOnce(wgpu::RenderPass<'a>) (but this error didn't make much sense to me either):
error[E0597]: `view` does not live long enough
--> src/gpu_util.rs:130:23
|
112 | pub fn submit_render_pass<'a, F: FnOnce(wgpu::RenderPass<'a>)>(
| -- lifetime `'a` defined here
...
130 | view: &view,
| ^^^^^ borrowed value does not live long enough
...
143 | render_pass_configurator(render_pass);
| ------------------------------------- argument requires that `view` is borrowed for `'a`
...
149 | }
| - `view` dropped here while still borrowed
Update
I got it to work by writing the render() function like this: (Self == GpuState)
pub fn render(&self) -> anyhow::Result<()> {
    fn configure_render_pass<'a>(s: &'a GpuState, mut render_pass: wgpu::RenderPass<'a>) {
        render_pass.set_pipeline(&s.render_pipeline);
        render_pass.set_vertex_buffer(0, s.vertex_buffer.slice(..));
        render_pass.draw(0..s.num_vertecies, 0..1);
    }

    util::submit_render_pass(&self.surface, &self.device, &self.queue,
        wgpu::Color { r: 0.1, g: 0.2, b: 0.3, a: 1.0 },
        |render_pass: wgpu::RenderPass<'_>| {
            configure_render_pass(self, render_pass);
        },
    )
}
I think what makes it work here is that I get a chance to explicitly tell the compiler that the captured self lives as long as the render_pass. But that's only my guess...
I'll leave the question open, if anyone comes up with a solution to make it work without the extra function declaration.
pub fn submit_render_pass<F: FnOnce(wgpu::RenderPass)>(
    surface: &wgpu::Surface,
    device: &wgpu::Device,
    queue: &wgpu::Queue,
    clear_color: wgpu::Color,
    render_pass_configurator: F,
) -> anyhow::Result<()> { ... }
First of all, you have a hidden lifetime parameter, which can give rise to very confusing errors. Add the #![deny(elided_lifetimes_in_paths)] lint setting to your code to force these to be explicit. (It's unfortunate that that lint isn't on by default.) You'll be required to change the code to…
pub fn submit_render_pass<F: FnOnce(wgpu::RenderPass<'???>)>(
And now we see part of the problem: what goes in the spot I've marked '???? The RenderPass borrows from the CommandEncoder (the signature of CommandEncoder::begin_render_pass() tells us that), but the CommandEncoder is a local variable inside submit_render_pass(), so borrows of it cannot have a lifetime that is nameable from outside the function.
To solve this problem, you need to use a HRTB to specify that the callback function must be able to accept any lifetime:
pub fn submit_render_pass<F>(
    surface: &wgpu::Surface,
    device: &wgpu::Device,
    queue: &wgpu::Queue,
    clear_color: wgpu::Color,
    render_pass_configurator: F,
) -> anyhow::Result<()>
where
    F: for<'encoder> FnOnce(wgpu::RenderPass<'encoder>)
{ ... }
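For completeness, here is a sketch of what the body of submit_render_pass could then look like, reusing the same wgpu calls and descriptor layout as the original render() above. It assumes the same wgpu version as the question's snippets and is only an illustration of the signature, not a verified implementation:

pub fn submit_render_pass<F>(
    surface: &wgpu::Surface,
    device: &wgpu::Device,
    queue: &wgpu::Queue,
    clear_color: wgpu::Color,
    render_pass_configurator: F,
) -> anyhow::Result<()>
where
    F: for<'encoder> FnOnce(wgpu::RenderPass<'encoder>),
{
    let output = surface.get_current_texture()?;
    let view = output.texture.create_view(&wgpu::TextureViewDescriptor::default());
    let mut encoder = device.create_command_encoder(
        &wgpu::CommandEncoderDescriptor { label: Some("render encoder") }
    );

    let render_pass_descriptor = wgpu::RenderPassDescriptor {
        label: Some("render pass"),
        color_attachments: &[
            wgpu::RenderPassColorAttachment {
                view: &view,
                resolve_target: None,
                ops: wgpu::Operations {
                    load: wgpu::LoadOp::Clear(clear_color),
                    store: false,
                }
            },
        ],
        depth_stencil_attachment: None,
    };

    // The RenderPass borrows `encoder` (and `view`), so its lifetime is local
    // to this function; the HRTB lets the caller's closure accept it anyway.
    let render_pass = encoder.begin_render_pass(&render_pass_descriptor);
    render_pass_configurator(render_pass);
    // The pass, and its borrow of the encoder, ends once the closure has
    // consumed it, so the encoder can be finished and submitted.

    queue.submit(std::iter::once(encoder.finish()));
    output.present();
    Ok(())
}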

"cannot infer an appropriate lifetime" when attempting to return a chunked response with hyper

I would like to return binary data in chunks of a specific size. Here is a minimal example.
I made a wrapper struct for hyper::Response to hold my data, such as the status, status text, headers, and the resource to return:
pub struct Response<'a> {
    pub resource: Option<&'a Resource>
}
This struct has a build method that creates the hyper::Response:
impl<'a> Response<'a> {
    pub fn build(&mut self) -> Result<hyper::Response<hyper::Body>, hyper::http::Error> {
        let mut response = hyper::Response::builder();
        match self.resource {
            Some(r) => {
                let chunks = r.data
                    .chunks(100)
                    .map(Result::<_, std::convert::Infallible>::Ok);
                response.body(hyper::Body::wrap_stream(stream::iter(chunks)))
            },
            None => response.body(hyper::Body::from("")),
        }
    }
}
There is also another struct holding the database content:
pub struct Resource {
    pub data: Vec<u8>
}
Everything works until I try to create a chunked response. The Rust compiler gives me the following error:
error[E0495]: cannot infer an appropriate lifetime due to conflicting requirements
--> src/main.rs:14:15
|
14 | match self.resource {
| ^^^^^^^^^^^^^
|
note: first, the lifetime cannot outlive the lifetime `'a` as defined on the impl at 11:6...
--> src/main.rs:11:6
|
11 | impl<'a> Response<'a> {
| ^^
note: ...so that the types are compatible
--> src/main.rs:14:15
|
14 | match self.resource {
| ^^^^^^^^^^^^^
= note: expected `Option<&Resource>`
found `Option<&'a Resource>`
= note: but, the lifetime must be valid for the static lifetime...
note: ...so that the types are compatible
--> src/main.rs:19:31
|
19 | response.body(hyper::Body::wrap_stream(stream::iter(chunks)))
| ^^^^^^^^^^^^^^^^^^^^^^^^
= note: expected `From<&[u8]>`
found `From<&'static [u8]>`
I don't know how to fulfill these lifetime requirements. How can I do this correctly?
The problem is not 'a itself, but the fact that std::slice::chunks() returns an iterator that borrows the original slice. You are trying to create a stream from this Chunks<'_, u8> value, but hyper::Body::wrap_stream requires the stream to be 'static. Even if your Resource did not have the 'a lifetime, you would still have r.data borrowed, and it would still fail.
Remember that here 'static does not mean that the value lives forever, but that it can be made to live as long as necessary. That is, the future must not hold any (non-'static) borrows.
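To make that concrete, here is a tiny, self-contained sketch with made-up names (requires_static is hypothetical; it only mirrors the 'static bound that hyper::Body::wrap_stream places on its stream):

// A hypothetical helper that, like hyper::Body::wrap_stream, only accepts
// values that could be kept alive indefinitely:
fn requires_static<T: 'static>(_value: T) {}

fn main() {
    let owned: Vec<u8> = vec![1, 2, 3];

    // An owned Vec<u8> satisfies `'static`: it contains no borrows, so it can
    // be moved somewhere that keeps it alive as long as necessary.
    requires_static(owned.clone());

    // A slice borrowed from a local does *not* satisfy `'static`, because the
    // borrow must end when `owned` is dropped. Uncommenting this line fails
    // with an error along the lines of "`owned` does not live long enough".
    // requires_static(&owned[..]);
}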
You could clone all the data, but if it is very big, that can be costly. If so, you could try using Bytes, which is just like Vec<u8> but reference counted.
It looks like there is no Bytes::chunks() function that returns an iterator of Bytes. Fortunately it is easy to do it by hand.
Lastly, remember that iterators in Rust are lazy, so they keep the original data borrowed, even if it is a Bytes. So we need to collect them into a Vec to actually own the data (playground):
pub struct Resource {
    pub data: Bytes,
}

impl<'a> Response<'a> {
    pub fn build(&mut self) -> Result<hyper::Response<hyper::Body>, hyper::http::Error> {
        let mut response = hyper::Response::builder();
        match self.resource {
            Some(r) => {
                let len = r.data.len();
                let chunks = (0..len)
                    .step_by(100)
                    .map(|x| {
                        let range = x..len.min(x + 100);
                        Ok(r.data.slice(range))
                    })
                    .collect::<Vec<Result<Bytes, std::convert::Infallible>>>();
                response.body(hyper::Body::wrap_stream(stream::iter(chunks)))
            }
            None => response.body(hyper::Body::from("")),
        }
    }
}
UPDATE: We can avoid the call to collect() if we notice that stream::iter() takes ownership of an IntoIterator that can be evaluated lazily, as long as we make it 'static. That can be done with a (cheap) clone of r.data moved into the closure (playground):
let data = r.data.clone();
let len = data.len();
let chunks = (0..len).step_by(100)
    .map(move |x| {
        let range = x .. len.min(x + 100);
        Result::<_, std::convert::Infallible>::Ok(data.slice(range))
    });
response.body(hyper::Body::wrap_stream(stream::iter(chunks)))

In Rust, I have a large number of receiver objects I'd like manage, however I'm running into lifetime issues using Select

Due to the possibility of there being a large number of objects, I'd like to have a way to add them to the select list, remove them for processing, and then add them back, all without having to rebuild the select list each time an object is added back for waiting. It looks something like this:
use std::collections::HashMap;
use crossbeam::{Select, Sender, Receiver};

struct WaitList <'a> {
    sel: Select<'a>,
    objects: HashMap<u128, Object>,
    sel_index: HashMap<usize, u128>,
}
impl<'a> WaitList<'a> {
    fn new () -> Self { Self { sel: Select::new(), objects: HashMap::new(), sel_index: HashMap::new() } }

    fn select(&self) -> &Object {
        let oper = self.sel.select();
        let index = oper.index();
        let id = self.sel_index.get(&index).unwrap();
        let obj = self.objects.get(&id).unwrap();
        obj.cmd = oper.recv(&obj.receiver).unwrap();
        self.sel.remove(index);
        obj
    }

    fn add_object(&self, object: Object) {
        let id = object.id;
        self.objects.insert(id, object);
        self.add_select(id);
    }
    fn add_select(&self, id: u128) {
        let idx = self.sel.recv(&self.objects.get(&id).unwrap().receiver);
        self.sel_index.insert(idx, id);
    }
}
Over time the select list will contain more dead entries than live ones, and I'll rebuild it at that point. But I'd like to not have to rebuild it every time. Here's the detailed error message:
Checking test-select v0.1.0 (/Users/bruce/Projects/rust/examples/test-select)
error[E0495]: cannot infer an appropriate lifetime for autoref due to conflicting requirements
--> src/main.rs:28:47
|
28 | let idx = self.sel.recv(&self.objects.get(&id).unwrap().receiver);
| ^^^
|
note: first, the lifetime cannot outlive the anonymous lifetime #1 defined on the method body at 27:5...
--> src/main.rs:27:5
|
27 | / fn add_select(&self, id: u128) {
28 | | let idx = self.sel.recv(&self.objects.get(&id).unwrap().receiver);
29 | | self.sel_index.insert(idx, id);
30 | | }
| |_____^
note: ...so that reference does not outlive borrowed content
--> src/main.rs:28:34
|
28 | let idx = self.sel.recv(&self.objects.get(&id).unwrap().receiver);
| ^^^^^^^^^^^^
note: but, the lifetime must be valid for the lifetime `'a` as defined on the impl at 9:6...
--> src/main.rs:9:6
|
9 | impl<'a> WaitList<'a> {
| ^^
note: ...so that the types are compatible
--> src/main.rs:28:28
|
28 | let idx = self.sel.recv(&self.objects.get(&id).unwrap().receiver);
| ^^^^
= note: expected `&mut crossbeam::Select<'_>`
found `&mut crossbeam::Select<'a>`
While I believe I understand the issue, that the borrow of the receiver from the hash table doesn't live long enough, I'm having a difficult time coming up with an alternative -- I'm not seeing a clean way to borrow the information. I considered creating a struct to contain the borrow, and using that in sel_index instead of a plain id, but that runs into the same lifetime problem.
struct SingleWaiter<'a> {
    id: u128,
    receiver: &'a Receiver::<Command>
}
I feel like I'm missing something or not understanding something, as it seems like it shouldn't be this difficult to do what I want. I can imagine that the choice of HashMap for holding the objects might be the issue; a Vec felt wrong, as I'm adding and inserting. BTW, the HashMap isn't normally part of the wait list; it is part of something else, but the problem remains the same regardless of where the HashMap lives.

How to create a thread manager?

I have a data stream that I want to process in the background, but I want to create a struct or some functions to manage this stream.
In C++ land, I would create a class that abstracts all of this away. It would have a start method which would initialize the data stream and start a thread for processing. It would have a stop method that stops the processing and joins the thread.
However, this isn't really Rusty, and it doesn't even work in Rust.
Example (Playground)
use std::thread;
use std::time::Duration;

struct Handler {
    worker_handle: Option<thread::JoinHandle<()>>,
    stop_flag: bool, // Should be atomic, but lazy for example
}

impl Handler {
    pub fn new() -> Handler {
        let worker_handle = None;
        let stop_flag = true;
        return Handler { worker_handle, stop_flag };
    }

    pub fn start(&mut self) {
        self.stop_flag = false;
        self.worker_handle = Some(std::thread::spawn(move || {
            println!("Spawned");
            self.worker_fn();
        }));
    }

    pub fn stop(&mut self) {
        let handle = match self.worker_handle {
            None => return,
            Some(x) => x,
        };
        self.stop_flag = true;
        handle.join();
    }

    fn worker_fn(&mut self) {
        while !self.stop_flag {
            println!("Working!");
        }
    }
}

fn main() {
    let mut handler = Handler::new();
    handler.start();
    thread::sleep(Duration::from_millis(10000));
    handler.stop();
}
Output:
error[E0495]: cannot infer an appropriate lifetime due to conflicting requirements
--> src/main.rs:20:54
|
20 | self.worker_handle = Some(std::thread::spawn(move || {
| ______________________________________________________^
21 | | println!("Spawned");
22 | | self.worker_fn();
23 | | }));
| |_________^
|
note: first, the lifetime cannot outlive the anonymous lifetime #1 defined on the method body at 18:5...
--> src/main.rs:18:5
|
18 | / pub fn start(&mut self) {
19 | | self.stop_flag = false;
20 | | self.worker_handle = Some(std::thread::spawn(move || {
21 | | println!("Spawned");
22 | | self.worker_fn();
23 | | }));
24 | | }
| |_____^
= note: ...so that the types are compatible:
expected &mut Handler
found &mut Handler
= note: but, the lifetime must be valid for the static lifetime...
note: ...so that the type `[closure#src/main.rs:20:54: 23:10 self:&mut Handler]` will meet its required lifetime bounds
--> src/main.rs:20:35
|
20 | self.worker_handle = Some(std::thread::spawn(move || {
| ^^^^^^^^^^^^^^^^^^
error: aborting due to previous error
Even if I cheat and remove the call to worker_fn, I still can't really work with JoinHandles like I might expect coming from C++ land.
error[E0507]: cannot move out of `self.worker_handle.0` which is behind a mutable reference
--> src/main.rs:27:28
|
27 | let handle = match self.worker_handle {
| ^^^^^^^^^^^^^^^^^^ help: consider borrowing here: `&self.worker_handle`
28 | None => return,
29 | Some(x) => x,
| -
| |
| data moved here
| move occurs because `x` has type `std::thread::JoinHandle<()>`, which does not implement the `Copy` trait
error: aborting due to previous error
So it's clear that I'm going outside of the usual Rust model and probably shouldn't be using this strategy.
But I still want to build some kind of interface that lets me simply spin up the data stream without worrying about managing threads, and I'm not really sure the best way to do this.
So it seems I have two core problems.
1) How can I create a function to be run in a thread which accepts data from an outside source, and can be signaled to quit safely? If I had an atomic bool for killing it, how would I share that between the threads?
2) How do I handle joining the thread when I'm done? The stop method needs to clean up the thread, but I don't know how to track a reference to it.
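For what it's worth, here is a minimal sketch of the pattern described above, using an Arc<AtomicBool> as the shared stop flag and Option::take() to move the JoinHandle out when joining. Treat it as one common shape for this kind of manager, not the only possible design:

use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

struct Handler {
    worker_handle: Option<thread::JoinHandle<()>>,
    stop_flag: Arc<AtomicBool>,
}

impl Handler {
    pub fn new() -> Handler {
        Handler {
            worker_handle: None,
            stop_flag: Arc::new(AtomicBool::new(true)),
        }
    }

    pub fn start(&mut self) {
        self.stop_flag.store(false, Ordering::SeqCst);
        // Clone the Arc so the worker thread owns its own handle to the flag;
        // the closure then no longer needs to borrow `self`, so it can be 'static.
        let stop_flag = Arc::clone(&self.stop_flag);
        self.worker_handle = Some(thread::spawn(move || {
            println!("Spawned");
            while !stop_flag.load(Ordering::SeqCst) {
                println!("Working!");
            }
        }));
    }

    pub fn stop(&mut self) {
        self.stop_flag.store(true, Ordering::SeqCst);
        // `take()` moves the JoinHandle out of the Option, which avoids the
        // E0507 "cannot move out of `self.worker_handle`" error above.
        if let Some(handle) = self.worker_handle.take() {
            handle.join().unwrap();
        }
    }
}

fn main() {
    let mut handler = Handler::new();
    handler.start();
    thread::sleep(Duration::from_millis(100));
    handler.stop();
}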
