In this code everything works except task_id. I want this script to count requests using task_id:
use std::thread;
use std::thread::sleep_ms;
use std::sync::mpsc;

#[macro_use] extern crate nickel;
use nickel::Nickel;

fn main() {
    let mut server = Nickel::new();
    let mut task_id: i64 = 0;

    server.utilize(router! {
        get "**" => |_req, _res| {
            task_id += 1;
            run_heavy_task(task_id);
            "Yo!"
        }
    });

    server.listen("127.0.0.1:6767");
}

fn run_heavy_task(task_id: i64) {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        println!("heavy task {} started!", task_id);
        sleep_ms(3000);
        println!("heavy task {} completed", task_id);
        let result = tx.send(());
    });
    //rx.recv();
    //println!("Task {} completed", task_id);
}
error:
can't capture dynamic environment in a fn item; use the || { ... } closure form instead
main.rs:13     task_id += 1;
Please help me solve this issue - how can I pass task_id into the closure?
To expand on Chris Morgan's answer, this is a self-contained example with the same error:
fn main() {
    let mut a = 0;

    fn router() {
        a += 1;
    }

    router();

    println!("{}", a)
}
The problem is that fn items are not allowed to capture their environment, period. Capturing environment is non-trivial, and there are multiple ways to get variables into a closure. Closures are actually structs with the appropriate member variables for each captured variable.
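To make that concrete, here is a hand-written sketch of roughly what the compiler generates for a capturing closure (the real type is unnameable and implements the Fn* traits; the names below are made up for illustration):

struct Increment {
    task_id: i64, // the captured variable becomes a field
}

impl Increment {
    // Roughly what calling the closure does.
    fn call(&mut self) {
        self.task_id += 1;
    }
}

fn main() {
    let mut inc = Increment { task_id: 0 };
    inc.call();
    inc.call();
    println!("{}", inc.task_id); // 2
}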
Review Chris Morgan's statement:
Bear in mind how it may be multi-threaded; at the very least you will need to use some form of mutex to get it to work.
(Emphasis mine). You are creating a function, but you don't get to control how or when it is called. For all you know, Nickel may choose to call it from multiple threads - that's up to the library. As it is, you do not ever call the code task_id += 1!
I'm no expert on Nickel, but it appears that having dynamic routing is not possible with the code you've posted. However, it should be possible to avoid using the macro and construct a handler yourself; you "just" need to implement Middleware. That handler may be able to contain state, like your task_id.
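As a rough illustration of "a handler that contains state" (this is not nickel's actual Middleware trait, whose exact signature you would need to look up; it only shows the counter living inside a handler struct and being updated atomically, so concurrent calls are safe):

use std::sync::atomic::{AtomicI64, Ordering};

struct CountingHandler {
    task_id: AtomicI64, // thread-safe state owned by the handler itself
}

impl CountingHandler {
    // Imagine the framework calling something like this for every request,
    // possibly from several threads at once.
    fn handle(&self) -> &'static str {
        let id = self.task_id.fetch_add(1, Ordering::SeqCst) + 1;
        println!("handling request, task_id = {}", id);
        "Yo!"
    }
}

fn main() {
    let handler = CountingHandler { task_id: AtomicI64::new(0) };
    handler.handle();
    handler.handle();
}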
Related
I am adding tests to the 'hello' web server from the rust book.
My issue/error is around how to test whether a Worker has processed a Job.
My idea is to pass an anonymous function which updates a bool from false to true.
I think ownership is an issue here. I tried wrapping f in a Box, thinking that would prevent the bool from being passed by value rather than by reference, but when it was wrapped that way I struggled to mutate the value of state_updated.
I also tried writing a basic struct to wrap and update the bool. I have since reverted back to a mut bool.
First question: What changes do I need to make to get the test to pass?
Second question: Is there a better way for me to test this?
Below is a minimal version which reproduces my issue.
The full code is available at the bottom of this page in the rust book.
My current test creates a Worker, sends a Job to the worker, and asserts on an expected change
that could only have occurred if the Worker has processed the Job.
I intend to iterate on this test to add proper thread cleanup in the future.
use std::sync::mpsc;
use std::sync::Arc;
use std::sync::Mutex;
use hello_server_help::Worker;
use std::thread;
use std::time::Duration;

#[test]
fn test_worker_processes_job() {
    let (sender, r) = mpsc::channel();
    let receiver = Arc::new(Mutex::new(r));
    let _ = Worker::new(0, receiver);

    let mut state_updated = false;
    let f = move || state_updated = true;
    sender.send(Box::new(f)).unwrap();

    thread::sleep(Duration::from_secs(1)); // primitive wait, for now

    assert_eq!(state_updated, true);
}
It's my understanding that f takes ownership of state_updated, yet at the assert at the end there is no error along the lines of "value used after move".
Running the tests gives me the output:
running 1 test
test test_worker_processes_job ... FAILED
failures:
---- test_worker_processes_job stdout ----
thread 'test_worker_processes_job' panicked at 'assertion failed: `(left == right)`
left: `false`,
right: `true`', tests/worker_tests.rs:19:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
The MRE implementation:
use std::sync::mpsc;
use std::sync::Arc;
use std::sync::Mutex;
use std::thread;

pub type Job = Box<dyn FnOnce() + Send + 'static>;

pub struct Worker {
    id: usize,
    handle: Option<thread::JoinHandle<()>>,
}

impl Worker {
    pub fn new(id: usize, receiver: Arc<Mutex<mpsc::Receiver<Job>>>) -> Worker {
        let thread = thread::spawn(move || loop {
            let job = receiver
                .lock()
                .expect("Error obtaining lock.")
                .recv()
                .unwrap();
            job();
        });

        Worker {
            id,
            handle: Some(thread),
        }
    }
}
state_updated is a boolean so it implements Copy, which is why you can move it into your closure and keep using it afterwards, and why you can't see the changes: the one that is modified by the closure is the copy and not the original.
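A minimal illustration of that behaviour, outside the Worker setup:

fn main() {
    let mut state_updated = false;
    let mut f = move || {
        // This mutates the closure's own copy of `state_updated` (bool is Copy).
        state_updated = true;
        println!("inside the closure: {}", state_updated); // prints "true"
    };
    f();
    // The original binding was copied into the closure, not borrowed, so it is unchanged.
    assert_eq!(state_updated, false);
}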
If you want to update a boolean in the thread and have it visible in the caller, you will need to make sure that you send a reference and you will need to have some synchronization mechanism. Two solutions:
Use an Arc<Mutex<bool>>:
use std::sync::Arc;
use std::sync::Mutex;
let state_updated = Arc::new(Mutex::new(false));
let state_ref = state_updated.clone();
let f = move || *state_ref.lock().unwrap() = true;
…
assert_eq!(*state_updated.lock().unwrap(), true);
Or use an AtomicBool:
use std::sync::atomic::AtomicBool;
use std::sync::atomic::Ordering;
let state_updated = AtomicBool::new(false);
let state_ref = &state_updated;
let f = move || state_ref.store(true, Ordering::Release);
…
assert_eq!(state_updated.load(Ordering::Acquire), true);
The compiler will complain that "state_ref does not live long enough", but you can get around that with a scoped thread (from rayon or crossbeam, for example), or with a bit of unsafe: let state_ref: &'static AtomicBool = unsafe { transmute(&state_updated) }; (just make sure you join the child thread before state_updated goes out of scope).
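For reference, here is what the scoped-thread route looks like with std::thread::scope (stabilized later, in Rust 1.63; crossbeam's scope API is similar). This is only a sketch of the idea:

use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

fn main() {
    let state_updated = AtomicBool::new(false);
    // The scope guarantees the spawned thread is joined before `state_updated`
    // goes out of scope, so a plain borrow is enough: no 'static, no unsafe.
    thread::scope(|s| {
        s.spawn(|| state_updated.store(true, Ordering::Release));
    });
    assert!(state_updated.load(Ordering::Acquire));
}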
It might however be better to use a channel for the return value:
use std::sync::mpsc;
let (rsend, rrecv) = mpsc::channel();
let f = move || rsend.send(());
…
assert_eq!(rrecv.recv_timeout(Duration::from_secs(1)), Ok(()));
That way you only wait until the result is available; the Duration is just a timeout in case the thread takes too long to compute the result.
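Putting that together with the Worker and Job from the MRE above, the whole test could look roughly like this (a sketch; the second channel exists only as a completion signal):

use std::sync::{mpsc, Arc, Mutex};
use std::time::Duration;

use hello_server_help::{Job, Worker};

#[test]
fn test_worker_processes_job() {
    let (sender, r) = mpsc::channel();
    let receiver = Arc::new(Mutex::new(r));
    let _worker = Worker::new(0, receiver);

    // A second channel whose only purpose is to signal "the job ran".
    let (done_tx, done_rx) = mpsc::channel();
    let f = move || done_tx.send(()).unwrap();
    sender.send(Box::new(f) as Job).unwrap();

    // Waits at most one second instead of always sleeping a full second.
    assert_eq!(done_rx.recv_timeout(Duration::from_secs(1)), Ok(()));
}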
I need to run an async function in actix::prelude::AsyncContext::run_interval, but I need to also pass in a struct member and return the result (not the future). This is a somewhat more complex version of this question here. As can be seen in the commented section below, I have tried a few approaches but all of them fail for one reason or another.
I have looked at a few related resources, including the AsyncContext trait and these StackOverflow questions: 3, 4.
Here is my example code (actix crate is required in Cargo.toml):
use std::time::Duration;
use actix::{Actor, Arbiter, AsyncContext, Context, System};

struct MyActor {
    id: i32
}

impl MyActor {
    fn new(id: i32) -> Self {
        Self {
            id: id,
        }
    }

    fn heartbeat(&self, ctx: &mut <Self as Actor>::Context) {
        ctx.run_interval(Duration::from_secs(1), |act, ctx| {
            //lifetime issue
            //let res = 0;
            //Arbiter::spawn(async {
            //    res = two(act.id).await;
            //});

            //future must return `()`
            //let res = Arbiter::spawn(two(act.id));

            //async closures unstable
            //let res = Arbiter::current().exec(async || {
            //    two(act.id).await
            //});
        });
    }
}

impl Actor for MyActor {
    type Context = Context<Self>;

    fn started(&mut self, ctx: &mut Self::Context) {
        self.heartbeat(ctx);
    }
}

// assume functions `one` and `two` live in another module
async fn one(id: i32) -> i32 {
    // assume something is done with id here
    let x = id;
    1
}

async fn two(id: i32) -> i32 {
    let x = id;
    // assume this may call other async functions
    one(x).await;
    2
}

fn main() {
    let mut system = System::new("test");

    system.block_on(async { MyActor::new(10).start() });

    system.run();
}
Rust version:
$ rustc --version
rustc 1.50.0 (cb75ad5db 2021-02-10)
Using Arbiter::spawn would work, but the issue is with the data being accessed from inside the async block that's passed to Arbiter::spawn. Since you're accessing act from inside the async block, that reference will have to live longer than the closure that calls Arbiter::spawn. In fact, it will have to have a lifetime of 'static, since the future produced by the async block could potentially live until the end of the program.
One way to get around this in this specific case, given that you need an i32 inside the async block, and an i32 is a Copy type, is to move it:
ctx.run_interval(Duration::from_secs(1), |act, ctx| {
    let id = act.id;
    Arbiter::spawn(async move {
        two(id).await;
    });
});
Since we're using async move, the id variable will be moved into the future, and will thus be available whenever the future is run. By assigning it to id first, we are actually copying the data, and it's the copy (id) that will be moved.
But this might not be what you want, if you're trying to get a more general solution where you can access the object inside the async function. In that case it gets a bit trickier, and you might want to consider not using async functions if possible. If you must, it might be possible to put the data you need in a separate object wrapped in std::rc::Rc, which can then be moved into the async block without duplicating the underlying data.
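A standalone sketch of that last idea, outside actix, is below; futures::executor::block_on (an assumed extra dependency) is used here only to drive the future, and the same async move pattern is what you would hand to Arbiter::spawn:

use std::rc::Rc;

use futures::executor::block_on; // assumed dependency, only used to run the future here

struct Shared {
    id: i32,
}

async fn two(id: i32) -> i32 {
    id + 1
}

fn main() {
    let shared = Rc::new(Shared { id: 10 });

    // Cloning the Rc copies a pointer, not the data, and the clone is moved
    // into the async block so the future owns everything it needs.
    let handle = Rc::clone(&shared);
    let fut = async move {
        let res = two(handle.id).await;
        println!("result: {}", res);
    };

    block_on(fut);
    println!("original still usable: {}", shared.id);
}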
I am trying to get a value from a thread, in this case a HashMap. I reduced the code to the following (I originally tried to share a HashMap containig a Vec):
use std::thread;
use std::sync::mpsc;
use std::sync::Mutex;
use std::sync::Arc;
use std::collections::HashMap;

fn main() {
    let (tx, rx) = mpsc::channel();
    let n_handle = thread::spawn(|| {
        tx.send(worker());
    });

    print!("{:?}", rx.recv().unwrap().into_inner().unwrap());
}

fn worker() -> Arc<Mutex<HashMap<String, i32>>> {
    let result: HashMap<String, i32> = HashMap::new();
    // some computation
    Arc::from(Mutex::from(result))
}
Still Rust says that:
std::sync::mpsc::Sender<std::sync::Arc<std::sync::Mutex<std::collections::HashMap<std::string::String, i32>>>> cannot be shared between threads safely
I read some confusing stuff about putting everything into Arc<Mutex<..>> which I also tried with the value:
let result: HashMap<String, Arc<Mutex<i32>>> = HashMap::new();
Can anyone point me to a document that explains the usage of the mpsc::channel with values such as HashMaps? I understand why it is not working, as the trait Sync is not implemented for the HashMap, which is required to share the stuff. Still I have no idea how to get it to work.
You can pass values between threads using an mpsc channel. To do so, you need to tag your thread::spawn closure with the move keyword, like this:
thread::spawn(move || {});
Since you did not tag it with the move keyword, the closure does not move the outer variables into the thread's scope but only borrows them. In that case every outer variable you use has to implement the Sync trait.
mpsc::Sender does not implement Sync, which is why you get the error "cannot be shared between threads safely".
The ideal solution in your case is to move the sender into the thread's scope with move, like this:
use std::collections::HashMap;
use std::sync::mpsc;
use std::sync::Arc;
use std::sync::Mutex;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        let _ = tx.send(worker());
    });

    let arc = rx.recv().unwrap();
    let hashmap_guard = arc.lock().unwrap();

    print!(
        "HashMap retrieved from thread: {:?}",
        hashmap_guard.get("Hello").unwrap()
    );
}

fn worker() -> Arc<Mutex<HashMap<String, i32>>> {
    let mut result: HashMap<String, i32> = HashMap::new();
    result.insert("Hello".to_string(), 2);
    // some computation
    Arc::new(Mutex::new(result))
}
Playground
For further info: I'd recommend reading The Rust Programming Language, specifically the chapter on concurrency. In it, you are introduced to Arc: especially if you want to share your data in between threads.
I needed to pass a resource between several functions that take a closure as an argument. The data is modified inside those closures, and I wanted the changes made to the variable to be reflected in the rest of them.
The first thing I thought was to use Rc. I had previously used Arc to handle the data between different threads, but since these functions aren't running in different threads I chose Rc instead.
Below is the most simplified code I have that shows my doubts. I used RefCell because I suspected that this syntax would not work the way I expected:
*Rc::make_mut(&mut rc_pref_temp)...
use std::sync::Arc;
use std::rc::Rc;
use std::sync::Mutex;
use std::cell::RefCell;
use std::cell::Cell;

fn main() {
    test2();
    println!("---");
    test();
}

#[derive(Debug, Clone)]
struct Prefe {
    name_test: RefCell<u64>,
}

impl Prefe {
    fn new() -> Prefe {
        Prefe {
            name_test: RefCell::new(3 as u64),
        }
    }
}

fn test2() {
    let mut prefe: Prefe = Prefe::new();
    let mut rc_pref = Rc::new(Mutex::new(prefe));
    println!("rc_pref Mutex: {:?}", rc_pref.lock().unwrap().name_test);

    let mut rc_pref_temp = rc_pref.clone();

    *rc_pref_temp.lock().unwrap().name_test.get_mut() += 1;
    println!("rc_pref_clone Mutex: {:?}", rc_pref_temp.lock().unwrap().name_test);

    *rc_pref_temp.lock().unwrap().name_test.get_mut() += 1;
    println!("rc_pref_clone Mutex: {:?}", rc_pref_temp.lock().unwrap().name_test);

    println!("rc_pref Mutex: {:?}", rc_pref.lock().unwrap().name_test);
}

fn test() {
    let mut prefe: Prefe = Prefe::new();
    let mut rc_pref = Rc::new(prefe);
    println!("rc_pref: {:?}", rc_pref.name_test);

    let mut rc_pref_temp = rc_pref.clone();

    *((*Rc::make_mut(&mut rc_pref_temp)).name_test).get_mut() += 1;
    println!("rc_pref_clone: {:?}", rc_pref_temp.name_test);

    *((*Rc::make_mut(&mut rc_pref_temp)).name_test).get_mut() += 1;
    println!("rc_pref_clone: {:?}", rc_pref_temp.name_test);

    println!("rc_pref: {:?}", rc_pref.name_test);
}
The code is simplified; the scenario where it is actually used is quite different. I mention this to avoid comments like "you could just lend the value to the function", because what interests me is why the cases shown behave the way they do.
stdout:
rc_pref Mutex: RefCell { value: 3 }
rc_pref_clone Mutex: RefCell { value: 4 }
rc_pref_clone Mutex: RefCell { value: 5 }
rc_pref Mutex: RefCell { value: 5 }
---
rc_pref: RefCell { value: 3 }
rc_pref_clone: RefCell { value: 4 }
rc_pref_clone: RefCell { value: 5 }
rc_pref: RefCell { value: 3 }
About test()
I'm new to Rust so I don't know if this crazy syntax is the right way.
*((*Rc::make_mut(&mut rc_pref_temp)).name_test).get_mut() += 1;
When running test() you can see that this syntax works, in that it increases the value, but the increase does not affect the other clones. I expected that, by using *Rc::make_mut(&mut rc_pref_temp)..., the clones of a shared reference would all reflect the same values.
If Rc has references to the same object, why do the changes to an object not apply to the rest of the clones? Why does this work this way? Am I doing something wrong?
Note: I use RefCell because in some tests I thought it might be needed.
About test2()
I've got it working as expected using Mutex with Rc, but I do not know if this is the correct way. I have some idea of how Mutex and Arc work, but after using this syntax:
*Rc::make_mut(&mut rc_pref_temp)...
With the use of Mutex in test2(), I wonder whether the Mutex is not only what allows changing the data, but also what is responsible for reflecting the changes in all the cloned references.
Do the shared references actually point to the same object? I want to think they do, but with the above code where the changes are not reflected without the use of Mutex, I have some doubts.
You need to read and understand the documentation for functions you use before you use them. Rc::make_mut says, emphasis mine:
Makes a mutable reference into the given Rc.
If there are other Rc or Weak pointers to the same value, then
make_mut will invoke clone on the inner value to ensure unique
ownership. This is also referred to as clone-on-write.
See also get_mut, which will fail rather than cloning.
You have multiple Rc pointers because you called rc_pref.clone(). Thus, when you call make_mut, the inner value will be cloned and the Rc pointers will now be disassociated from each other:
use std::rc::Rc;

fn main() {
    let counter = Rc::new(100);
    let mut counter_clone = counter.clone();

    println!("{}", Rc::strong_count(&counter));       // 2
    println!("{}", Rc::strong_count(&counter_clone)); // 2

    *Rc::make_mut(&mut counter_clone) += 50;

    println!("{}", Rc::strong_count(&counter));       // 1
    println!("{}", Rc::strong_count(&counter_clone)); // 1

    println!("{}", counter);       // 100
    println!("{}", counter_clone); // 150
}
The version with the Mutex works because it's completely different. You aren't calling a function which clones the inner value anymore. Of course, it doesn't make sense to use a Mutex when you don't have threads. The single-threaded equivalent of a Mutex is... RefCell!
I honestly don't know how you found Rc::make_mut; I've never even heard of it before. The module documentation for cell doesn't mention it, nor does the module documentation for rc.
I'd highly encourage you to take a step back and re-read through the documentation. The second edition of The Rust Programming Language has a chapter on smart pointers, including Rc and RefCell. Read the module-level documentation for rc and cell as well.
Here's what your code should look like. Note the usage of borrow_mut.
fn main() {
    let prefe = Rc::new(Prefe::new());
    println!("prefe: {:?}", prefe.name_test); // 3

    let prefe_clone = prefe.clone();

    *prefe_clone.name_test.borrow_mut() += 1;
    println!("prefe_clone: {:?}", prefe_clone.name_test); // 4

    *prefe_clone.name_test.borrow_mut() += 1;
    println!("prefe_clone: {:?}", prefe_clone.name_test); // 5

    println!("prefe: {:?}", prefe.name_test); // 5
}
I'm trying to share a mutable object between threads in Rust using Arc, but I get this error:
error[E0596]: cannot borrow data in a `&` reference as mutable
--> src/main.rs:11:13
|
11 | shared_stats_clone.add_stats();
| ^^^^^^^^^^^^^^^^^^ cannot borrow as mutable
This is the sample code:
use std::{sync::Arc, thread};

fn main() {
    let total_stats = Stats::new();
    let shared_stats = Arc::new(total_stats);

    let threads = 5;
    for _ in 0..threads {
        let mut shared_stats_clone = shared_stats.clone();
        thread::spawn(move || {
            shared_stats_clone.add_stats();
        });
    }
}

struct Stats {
    hello: u32,
}

impl Stats {
    pub fn new() -> Stats {
        Stats { hello: 0 }
    }

    pub fn add_stats(&mut self) {
        self.hello += 1;
    }
}
What can I do?
Arc's documentation says:
Shared references in Rust disallow mutation by default, and Arc is no exception: you cannot generally obtain a mutable reference to something inside an Arc. If you need to mutate through an Arc, use Mutex, RwLock, or one of the Atomic types.
You will likely want a Mutex combined with an Arc:
use std::{
    sync::{Arc, Mutex},
    thread,
};

struct Stats;

impl Stats {
    fn add_stats(&mut self, _other: &Stats) {}
}

fn main() {
    let shared_stats = Arc::new(Mutex::new(Stats));

    let threads = 5;
    for _ in 0..threads {
        let my_stats = shared_stats.clone();
        thread::spawn(move || {
            let mut shared = my_stats.lock().unwrap();
            shared.add_stats(&Stats);
        });
        // Note: Immediately joining, no multithreading happening!
        // THIS WAS A LIE, see below
    }
}
This is largely cribbed from the Mutex documentation.
How can I use shared_stats after the for loop? (I'm talking about the Stats object.) It seems that shared_stats cannot easily be converted back to Stats.
As of Rust 1.15, it's possible to get the value back. See my additional answer for another solution as well.
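For example, once every other clone of the Arc has been dropped (e.g. after joining the threads), something along these lines recovers the inner value; this is only a sketch, see the linked answer for the full approach:

use std::sync::{Arc, Mutex};

fn main() {
    let shared = Arc::new(Mutex::new(42));
    // ... clones handed to threads have all been dropped by this point ...
    let value = Arc::try_unwrap(shared)
        .expect("other Arc clones still exist")
        .into_inner()
        .expect("mutex was poisoned");
    println!("{}", value);
}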
[A comment in the example] says that there is no multithreading. Why?
Because I got confused! :-)
In the example code, the result of thread::spawn (a JoinHandle) is immediately dropped because it's not stored anywhere. When the handle is dropped, the thread is detached and may or may not ever finish. I was confusing it with JoinGuard, an old, removed API that joined when it was dropped. Sorry for the confusion!
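To actually wait for the spawned threads, keep the JoinHandles and join them. A sketch built on the Mutex example above, using a small counting Stats for illustration:

use std::{
    sync::{Arc, Mutex},
    thread,
};

struct Stats {
    hello: u32,
}

impl Stats {
    fn add_stats(&mut self) {
        self.hello += 1;
    }
}

fn main() {
    let shared_stats = Arc::new(Mutex::new(Stats { hello: 0 }));

    // Keep every JoinHandle instead of dropping it.
    let handles: Vec<_> = (0..5)
        .map(|_| {
            let my_stats = Arc::clone(&shared_stats);
            thread::spawn(move || my_stats.lock().unwrap().add_stats())
        })
        .collect();

    // Joining guarantees that all the updates have happened.
    for handle in handles {
        handle.join().unwrap();
    }

    assert_eq!(shared_stats.lock().unwrap().hello, 5);
}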
For a bit of editorial, I suggest avoiding mutability completely:
use std::{ops::Add, thread};

#[derive(Debug)]
struct Stats(u64);

// Implement addition on our type
impl Add for Stats {
    type Output = Stats;

    fn add(self, other: Stats) -> Stats {
        Stats(self.0 + other.0)
    }
}

fn main() {
    let threads = 5;

    // Start threads to do computation
    let threads: Vec<_> = (0..threads).map(|_| thread::spawn(|| Stats(4))).collect();

    // Join all the threads, fail if any of them failed
    let result: Result<Vec<_>, _> = threads.into_iter().map(|t| t.join()).collect();
    let result = result.unwrap();

    // Add up all the results
    let sum = result.into_iter().fold(Stats(0), |i, sum| sum + i);
    println!("{:?}", sum);
}
Here, we keep a reference to the JoinHandle and then wait for all the threads to finish. We then collect the results and add them all up. This is the common map-reduce pattern. Note that no thread needs any mutability, it all happens in the master thread.