Unable to use asynchronous actors - rust

I'm trying to use actors as documented in the actix documentation, but even the doc example is not working for me. I tried the following code, which compiles but never prints the message "Received fibo message":
use actix::prelude::*;

// #[derive(Message)]
// #[rtype(Result = "Result<u64, ()>")]
// struct Fibonacci(pub u32);
struct Fibonacci(pub u32);

impl Message for Fibonacci {
    type Result = Result<u64, ()>;
}

struct SyncActor;

impl Actor for SyncActor {
    // It's important to note that you use "SyncContext" here instead of "Context".
    type Context = SyncContext<Self>;
}

impl Handler<Fibonacci> for SyncActor {
    type Result = Result<u64, ()>;

    fn handle(&mut self, msg: Fibonacci, _: &mut Self::Context) -> Self::Result {
        println!("Received fibo message");
        if msg.0 == 0 {
            Err(())
        } else if msg.0 == 1 {
            Ok(1)
        } else {
            let mut i = 0;
            let mut sum = 0;
            let mut last = 0;
            let mut curr = 1;
            while i < msg.0 - 1 {
                sum = last + curr;
                last = curr;
                curr = sum;
                i += 1;
            }
            Ok(sum)
        }
    }
}

fn main() {
    System::new().block_on(async {
        // Start the SyncArbiter with 2 threads, and receive the address of the Actor pool.
        let addr = SyncArbiter::start(2, || SyncActor);
        // send 5 messages
        for n in 5..10 {
            // As there are 2 threads, there are at least 2 messages always being processed
            // concurrently by the SyncActor.
            println!("Sending fibo message");
            addr.do_send(Fibonacci(n));
        }
    });
}
This program prints the following five times:
Sending fibo message
Two remarks: first, I'm unable to use the rtype macro, so I implement Message myself. Second, the line addr.do_send(Fibonacci(n)) does not seem to send anything to my actor. However, if I use addr.send(Fibonacci(n)).await; my message gets sent and received on the actor side. But since I'm awaiting the send, the messages are processed sequentially instead of using the 2 threads I theoretically defined.
I also tried waiting with a thread::sleep after my main loop, but the messages did not arrive either.
I might be misunderstanding something, but this behaviour seems strange to me.
Cargo.toml file:
[dependencies]
actix = "0.11.1"
actix-rt = "2.2.0"

I finally managed to make it work, though I can't understand exactly why. Simply using tokio to wait for a Ctrl-C made it possible for me to call do_send/try_send and have the work happen in parallel.
fn main() {
    System::new().block_on(async {
        // Start the SyncArbiter with 4 threads, and receive the address of the Actor pool.
        let addr = SyncArbiter::start(4, || SyncActor);
        // send 15 messages
        for n in 5..20 {
            // As there are 4 threads, there are at least 4 messages always being processed
            // concurrently by the SyncActor.
            println!("Sending fibo message");
            addr.do_send(Fibonacci(n));
        }

        // This does not work:
        //thread::spawn(move || {
        //    thread::sleep(Duration::from_secs_f32(10f32));
        //}).join();

        // This made it work:
        tokio::signal::ctrl_c().await.unwrap();
        println!("Ctrl-C received, shutting down");
        System::current().stop();
    });
}

You don't have to use the tokio crate explicitly here. In your loop, just change the last line to addr.send(Fibonacci(n)).await.unwrap(). send returns a future, and that future must be awaited to resolve. As an aside, the derive probably failed because the attribute key is lowercase: #[rtype(result = "Result<u64, ()>")], not rtype(Result = ...).
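If awaiting each send in turn feels too sequential, you can also queue every request first and await them together, so the actor pool still works in parallel. A minimal sketch, assuming the futures crate is added to Cargo.toml and the actor definitions from the question:

// Inside the async block, after SyncArbiter::start:
let requests: Vec<_> = (5..10)
    .map(|n| addr.send(Fibonacci(n))) // start every request without awaiting yet
    .collect();

// join_all drives all the response futures concurrently, so the two
// SyncActor threads can process messages at the same time.
for result in futures::future::join_all(requests).await {
    println!("fibo result: {:?}", result.unwrap());
}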

Related

How can I share a Vector between 2 threads?

I am pretty new to Rust, and cannot manage to keep the values behind both Arcs updated in the two threads I'm spawning. The idea is that one thread loops over received events and, when it receives one, updates the object, which the other thread constantly watches. How can I achieve that in Rust, or if this method isn't adequate, what would be a better way to do it?
(The concrete idea is one thread listening for MIDI events and the other one re-rendering the received notes on a LED strip.)
Here's what I currently have:
main.rs
mod functions;
mod structs;

use crate::functions::*;
use crate::structs::*;
use portmidi as pm;
use rs_ws281x::{ChannelBuilder, ControllerBuilder, StripType};
use std::sync::{Arc, Mutex};
use std::{fs, thread, time};

const MIDI_TIMEOUT: u64 = 10;
const MIDI_CHANNEL: usize = 0;

#[tokio::main]
async fn main() {
    let config: Arc<std::sync::Mutex<Config>> = Arc::new(Mutex::new(
        toml::from_str(&fs::read_to_string("config.toml").unwrap()).unwrap(),
    ));
    let config_midi = config.clone();
    let config_leds = config.clone();
    let leds_status = Arc::new(Mutex::new(vec![0; config.lock().unwrap().leds.num_leds]));
    let leds_status_midi = Arc::clone(&leds_status);
    let leds_status_leds = Arc::clone(&leds_status);

    thread::spawn(move || {
        let config = config_midi.lock().unwrap();
        let midi_context = pm::PortMidi::new().unwrap();
        let device_info = midi_context
            .device(config.midi.id)
            .expect(format!("Could not find device with id {}", config.midi.id).as_str());
        println!("Using device {}) {}", device_info.id(), device_info.name());
        let input_port = midi_context
            .input_port(device_info, config.midi.buffer_size)
            .expect("Could not create input port");
        let mut leds_status = leds_status_midi.lock().unwrap();
        loop {
            if let Ok(_) = input_port.poll() {
                if let Ok(Some(events)) = input_port.read_n(config.midi.buffer_size) {
                    for event in events {
                        let event_type =
                            get_midi_event_type(event.message.status, event.message.data2);
                        match event_type {
                            MidiEventType::NoteOn => {
                                let key = get_note_position(event.message.data1, &config);
                                leds_status[key] = 1;
                            }
                            MidiEventType::NoteOff => {
                                let key = get_note_position(event.message.data1, &config);
                                leds_status[key] = 0;
                            }
                            _ => {}
                        }
                    }
                }
            }
            thread::sleep(time::Duration::from_millis(MIDI_TIMEOUT));
        }
    });

    thread::spawn(move || {
        let config = config_leds.lock().unwrap();
        let mut led_controller = ControllerBuilder::new()
            .freq(800_000)
            .dma(10)
            .channel(
                MIDI_CHANNEL,
                ChannelBuilder::new()
                    .pin(config.leds.pin)
                    .count(config.leds.num_leds as i32)
                    .strip_type(StripType::Ws2812)
                    .brightness(config.leds.brightness)
                    .build(),
            )
            .build()
            .unwrap();
        loop {
            let leds_status = leds_status_leds.lock().unwrap();
            print!("\x1b[2J\x1b[1;1H");
            println!(
                "{:?}",
                leds_status.iter().filter(|x| (**x) > 0).collect::<Vec<_>>()
            );
        }
    });
}
functions.rs
use crate::structs::MidiEventType;

pub fn get_note_position(note: u8, config: &crate::structs::Config) -> usize {
    let mut note_offset = 0;
    for i in 0..config.leds.offsets.len() {
        if note > config.leds.offsets[i][0] {
            note_offset = config.leds.offsets[i][1];
            break;
        }
    }
    note_offset -= config.leds.shift;
    let note_pos_raw = 2 * (note - 20) - note_offset;
    config.leds.num_leds - (note_pos_raw as usize)
}

pub fn get_midi_event_type(status: u8, velocity: u8) -> MidiEventType {
    if status == 144 && velocity > 0 {
        MidiEventType::NoteOn
    } else if status == 128 || (status == 144 && velocity == 0) {
        MidiEventType::NoteOff
    } else {
        MidiEventType::ControlChange
    }
}
structs.rs
use serde_derive::Deserialize;

#[derive(Deserialize, Debug)]
pub struct Config {
    pub leds: LedsConfig,
    pub midi: MidiConfig,
}

#[derive(Deserialize, Debug)]
pub struct LedsConfig {
    pub pin: i32,
    pub num_leds: usize,
    pub brightness: u8,
    pub offsets: Vec<Vec<u8>>,
    pub shift: u8,
    pub fade: i8,
}

#[derive(Deserialize, Debug)]
pub struct MidiConfig {
    pub id: i32,
    pub buffer_size: usize,
}

#[derive(Debug)]
pub enum MidiEventType {
    NoteOn,
    NoteOff,
    ControlChange,
}
Thank you very much!
The idea is that one thread loops over received events and, when it receives one, updates the object, which the other thread constantly watches.
That's a good way to do it, particularly if one of the threads needs to be near-realtime (e.g. live audio processing). You can use channels to achieve this: transfer the Sender to one thread and the Receiver to the other. In a realtime scenario, the receiver can loop until try_recv errs with Empty (limiting the number of iterations to prevent starvation of the processing code). For example, something like this, given an r: Receiver:
use std::sync::mpsc::TryRecvError;

// Process 100 messages max to not starve the thread of the other stuff
// it needs to be doing.
for _ in 0..100 {
    match r.try_recv() {
        Ok(msg) => { /* Process msg, applying it to the current state */ },
        Err(TryRecvError::Empty) => break,
        Err(TryRecvError::Disconnected) => {
            // The sender is gone, maybe this is our signal to terminate?
            return;
        },
    }
}
Alternatively, if one thread needs to act only when a message is received, it can simply iterate over the receiver, which will keep looping as long as messages are received and the channel is open:
for msg in r {
    // Handle the message
}
It really is that simple: if the channel is empty but there are senders alive, it will block until a message is received; once all senders are gone and the channel is empty, the loop terminates.
A channel can convey messages of exactly one type. If only one kind of message needs to be sent, a struct works; otherwise, an enum with a variant for each kind of message works well.
Given the sending side of the channel, s: Sender, you just call s.send(your_message_value).
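Putting the two halves together, here is a minimal end-to-end sketch; the MidiMsg type and its variants are made up for illustration:

use std::sync::mpsc;
use std::thread;

// Hypothetical message type: one enum variant per kind of event.
enum MidiMsg {
    NoteOn(u8),
    NoteOff(u8),
}

fn main() {
    let (s, r) = mpsc::channel();

    // The producer thread owns the Sender and pushes events as they arrive.
    thread::spawn(move || {
        s.send(MidiMsg::NoteOn(60)).unwrap();
        s.send(MidiMsg::NoteOff(60)).unwrap();
        // The Sender is dropped here, which closes the channel.
    });

    // The consumer owns the Receiver and iterates until the channel closes.
    for msg in r {
        match msg {
            MidiMsg::NoteOn(note) => println!("note {} on", note),
            MidiMsg::NoteOff(note) => println!("note {} off", note),
        }
    }
}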
Another option would be to create an Arc<Mutex<_>>, which it looks like you are already doing in your sample code. This is fine if the lock contention is not too high, but it can inhibit the ability of both threads to run concurrently, which is often the goal of multithreading. Channels tend to work better in message-passing scenarios because there is no need for a mutual-exclusion lock.
As a side note, you are using Tokio with an async main(), but you never actually do anything with any futures, so there's no reason to even use Tokio in this code.

Worker threads send many messages through a channel to main but only the first one is delivered

I've been trying to extend the thread pool example from the Multi-Threaded Web Server chapter in The Book. The original example works fine and dispatches messages to workers properly through the spsc channel (ingress), but now I want to return values (strings) from the worker threads through an mpsc channel (egress). Somehow the egress channel delivers only one message instead of 10. egress_tx.send() seems to be executed 10 times, but egress_rx.recv() gives me one message only, and then the program finishes (i.e. no deadlocks etc.). The worker threads are terminated properly in the Drop trait implementation (this code is not shown). I'd appreciate any suggestions for debugging such a problem: putting a breakpoint at recv() and trying to find something meaningful in its internals hasn't helped much.
type Job = Box<dyn FnOnce(usize) -> String + Send + 'static>;

enum Message {
    Run(Job),
    Halt,
}

struct Worker {
    id: usize,
    thread: Option<thread::JoinHandle<()>>,
}

pub struct ThreadPool {
    workers: Vec<Worker>,
    ingress_tx: Sender<Message>,
    pub egress_rx: Receiver<String>,
}

impl Worker {
    fn new(id: usize, rx: Arc<Mutex<mpsc::Receiver<Message>>>, tx: mpsc::Sender<String>) -> Worker {
        let thread = thread::spawn(move || loop {
            let msg = rx.lock().unwrap().recv().unwrap();
            match msg {
                Message::Run(job) => {
                    let s = job(id);
                    println!("Sending \"{}\"", s);
                    tx.send(s).unwrap();
                },
                Message::Halt => break,
            }
        });
        Worker { id, thread: Some(thread) }
    }
}

impl ThreadPool {
    pub fn new(size: usize) -> Result<ThreadPool, ThreadPoolError> {
        if size <= 0 {
            return Err(ThreadPoolError::ZeroSizedPool);
        }
        let (ingress_tx, ingress_rx) = mpsc::channel();
        let ingress_rx = Arc::new(Mutex::new(ingress_rx));
        let (egress_tx, egress_rx) = mpsc::channel();
        let mut workers = Vec::with_capacity(size);
        for id in 0..size {
            workers.push(Worker::new(id, ingress_rx.clone(), egress_tx.clone()));
        }
        Ok(ThreadPool { workers, ingress_tx, egress_rx })
    }

    pub fn execute<F>(&self, f: F)
    where
        F: FnOnce(usize) -> String + Send + 'static,
    {
        let j = Box::new(f);
        self.ingress_tx.send(Message::Run(j)).unwrap();
    }
}

fn run_me(id: usize, i: usize) -> String {
    format!("Worker {} is processing tile {}...", id, i).to_string()
}

#[cfg(test)]
mod threadpool_tests {
    use super::*;

    #[test]
    fn tp_test() {
        let tpool = ThreadPool::new(4).expect("Cannot create threadpool");
        for i in 0..10 {
            let closure = move |worker_id| run_me(worker_id, i);
            tpool.execute(closure);
        }
        for s in tpool.egress_rx.recv() {
            println!("{}", s);
        }
    }
}
And the output is:
Sending "Worker 0 is processing tile 0..."
Sending "Worker 0 is processing tile 2..."
Sending "Worker 3 is processing tile 1..."
Sending "Worker 3 is processing tile 4..."
Sending "Worker 2 is processing tile 3..."
Sending "Worker 2 is processing tile 6..."
Sending "Worker 1 is processing tile 5..."
Sending "Worker 0 is processing tile 7..."
Sending "Worker 0 is processing tile 9..."
Sending "Worker 3 is processing tile 8..."
Receiving "Worker 0 is processing tile 0..."
Process finished with exit code 0
In your code, you have for s in tpool.egress_rx.recv(), which isn't doing quite what you want. Instead of iterating over the values received on the channel, you receive one element (wrapped in a Result) and then iterate over that, since Result implements IntoIterator to yield the success value (or nothing, if it contains an error).
Simply changing this to for s in tpool.egress_rx should fix it, since channels also implement IntoIterator.
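One wrinkle to watch out for: you mention that ThreadPool implements Drop, and if it does, the compiler won't let you move egress_rx out of tpool; a bare iteration would also block until every worker hangs up its Sender. For the test, iterating by reference and taking the known number of results sidesteps both. A sketch:

// In the test, after submitting the 10 jobs:
for s in tpool.egress_rx.iter().take(10) {
    println!("{}", s);
}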

Is there an API to race N threads (or N closures on N threads) to completion?

Given several threads that complete with an Output value, how do I get the first Output that's produced? Ideally while still being able to get the remaining Outputs later in the order they're produced, and bearing in mind that some threads may or may not terminate.
Example:
struct Output(i32);

fn main() {
    let mut spawned_threads = Vec::new();

    for i in 0..10 {
        let join_handle: ::std::thread::JoinHandle<Output> = ::std::thread::spawn(move || {
            // pretend to do some work that takes some amount of time
            ::std::thread::sleep(::std::time::Duration::from_millis(
                (1000 - (100 * i)) as u64,
            ));
            Output(i) // then pretend to return the `Output` of that work
        });
        spawned_threads.push(join_handle);
    }

    // I can do this to wait for each thread to finish and collect all `Output`s
    let outputs_in_order_of_thread_spawning = spawned_threads
        .into_iter()
        .map(::std::thread::JoinHandle::join)
        .collect::<Vec<::std::thread::Result<Output>>>();

    // but how would I get the `Output`s in order of completed threads?
}
I could solve the problem myself using a shared queue/channels/similar, but are there built-in APIs or existing libraries which could solve this use case for me more elegantly?
I'm looking for an API like:
fn race_threads<A: Send>(
    threads: Vec<::std::thread::JoinHandle<A>>
) -> (::std::thread::Result<A>, Vec<::std::thread::JoinHandle<A>>) {
    unimplemented!("so far this doesn't seem to exist")
}
(Rayon's join is the closest I could find, but (a) it only races 2 closures rather than an arbitrary number of closures, and (b) its thread pool with work stealing doesn't make sense for my use case of having some closures that might run forever.)
It is possible to solve this use case using pointers from How to check if a thread has finished in Rust? just like it's possible to solve this use case using an MPSC channel, however here I'm after a clean API to race n threads (or failing that, n closures on n threads).
These problems can be solved by using a condition variable:
use std::sync::{Arc, Condvar, Mutex};

#[derive(Debug)]
struct Output(i32);

enum State {
    Starting,
    Joinable,
    Joined,
}

fn main() {
    let pair = Arc::new((Mutex::new(Vec::new()), Condvar::new()));
    let mut spawned_threads = Vec::new();
    let &(ref lock, ref cvar) = &*pair;

    for i in 0..10 {
        let my_pair = pair.clone();
        let join_handle: ::std::thread::JoinHandle<Output> = ::std::thread::spawn(move || {
            // pretend to do some work that takes some amount of time
            ::std::thread::sleep(::std::time::Duration::from_millis(
                (1000 - (100 * i)) as u64,
            ));
            let &(ref lock, ref cvar) = &*my_pair;
            let mut joinable = lock.lock().unwrap();
            joinable[i] = State::Joinable;
            cvar.notify_one();
            Output(i as i32) // then pretend to return the `Output` of that work
        });
        lock.lock().unwrap().push(State::Starting);
        spawned_threads.push(Some(join_handle));
    }

    let mut should_stop = false;
    while !should_stop {
        let locked = lock.lock().unwrap();
        let mut locked = cvar.wait(locked).unwrap();
        should_stop = true;
        for (i, state) in locked.iter_mut().enumerate() {
            match *state {
                State::Starting => {
                    should_stop = false;
                }
                State::Joinable => {
                    *state = State::Joined;
                    println!("{:?}", spawned_threads[i].take().unwrap().join());
                }
                State::Joined => (),
            }
        }
    }
}
(playground link)
I'm not claiming this is the simplest way to do it. The condition variable wakes the main thread every time a child thread is done; the vector tracks the state of each thread, and whenever one is (about to be) finished, it can be joined.
No, there is no such API.
You've already been presented with multiple options to solve your problem:
Use channels
Use a Condvar
Use futures
Sometimes when programming, you have to go beyond sticking pre-made blocks together. This is supposed to be a fun part of programming. I encourage you to embrace it. Go create your ideal API using the components available and publish it to crates.io.
I really don't see what's so terrible about the channels version:
use std::{sync::mpsc, thread, time::Duration};

#[derive(Debug)]
struct Output(i32);

fn main() {
    let (tx, rx) = mpsc::channel();

    for i in 0..10 {
        let tx = tx.clone();
        thread::spawn(move || {
            thread::sleep(Duration::from_millis((1000 - (100 * i)) as u64));
            tx.send(Output(i)).unwrap();
        });
    }

    // Don't hold on to the sender ourselves,
    // otherwise the loop would never terminate.
    drop(tx);

    for r in rx {
        println!("{:?}", r);
    }
}

How to daisy chain threads using channels in Rust?

I'm trying to implement the sieve of Eratosthenes in Rust using coroutines as a learning exercise (not homework), and I can't find any reasonable way of connecting each thread to the Receiver and Sender ends of two different channels.
The Receiver is involved in two distinct tasks, namely reporting the highest prime found so far, and supplying further candidate primes for the filter. This is fundamental to the algorithm.
Here is what I would like to do but can't because the Receiver cannot be transferred between threads. Using std::sync::Arc does not appear to help, unsurprisingly.
Please note that I do understand why this doesn't work
fn main() {
    let (basetx, baserx): (Sender<u32>, Receiver<u32>) = channel();
    let max_number = 103;
    thread::spawn(move|| {
        generate_natural_numbers(&basetx, max_number);
    });

    let oldrx = &baserx;
    loop {
        // we need the prime in this thread
        let prime = match oldrx.recv() {
            Ok(num) => num,
            Err(_) => { break; 0 }
        };
        println!("{}", prime);

        // create (newtx, newrx) in a deliberately unspecified way

        // now we need to pass the receiver off to the sieve thread
        thread::spawn(move || {
            sieve(oldrx, newtx, prime); // forwards numbers if not divisible by prime
        });
        oldrx = newrx;
    }
}
Equivalent working Go code:
func main() {
    channel := make(chan int)
    var ok bool = true
    var prime int = 0
    go generate(channel, 103)
    for true {
        prime, ok = <-channel
        if !ok {
            break
        }
        new_channel := make(chan int)
        go sieve(channel, new_channel, prime)
        channel = new_channel
        fmt.Println(prime)
    }
}
What is the best way to deal with a situation like this where a Receiver needs to be handed off to a different thread?
You don't really explain what problem you are having, but your code is close enough:
use std::sync::mpsc::{channel, Sender, Receiver};
use std::thread;

fn generate_numbers(tx: Sender<u8>) {
    for i in 2..100 {
        tx.send(i).unwrap();
    }
}

fn filter(rx: Receiver<u8>, tx: Sender<u8>, val: u8) {
    for v in rx {
        if v % val != 0 {
            tx.send(v).unwrap();
        }
    }
}

fn main() {
    let (base_tx, base_rx) = channel();
    thread::spawn(move || {
        generate_numbers(base_tx);
    });

    let mut old_rx = base_rx;
    loop {
        let num = match old_rx.recv() {
            Ok(v) => v,
            Err(_) => break,
        };
        println!("prime: {}", num);

        let (new_tx, new_rx) = channel();
        thread::spawn(move || {
            filter(old_rx, new_tx, num);
        });
        old_rx = new_rx;
    }
}
using coroutines
Danger, Danger, Will Robinson! These are not coroutines; they are full-fledged threads, and threads are a lot more heavyweight than coroutines.
What is the best way to deal with a situation like this where a Receiver needs to be handed off to a different thread?
Just... transfer ownership of the Receiver to the thread?

How to correctly exit the thread blocking on mpsc::Receiver

impl A {
    fn new() -> (A, std::sync::mpsc::Receiver<Data>) {
        let (sender, receiver) = std::sync::mpsc::channel();
        let objA = A { sender: sender }; // A spawns threads, clones and uses sender, etc.
        (objA, receiver)
    }
}

impl B {
    fn new() -> B {
        let (objA, receiver) = A::new();
        B {
            a: objA,
            join_handle: Some(std::thread::spawn(move || {
                loop {
                    match receiver.recv() {
                        Ok(data) => { /* Do something, inform main thread, etc. */ }
                        Err(_) => break,
                    }
                }
            })),
        }
    }
}

impl Drop for B {
    fn drop(&mut self) {
        // Want to do something like "sender.close()/receiver.close()" etc. so that the
        // thread above joins. But there is no such function. How do I break that thread?
        self.join_handle.take().unwrap().join().unwrap();
    }
}
Is there a way to exit cleanly under such a circumstance? The relevant property of channels is that when either the receiver or the sender is dropped, the other side notices: the receiver is woken up and recv() yields an error, in which case I break out of the infinite blocking loop above. But how do I trigger that explicitly, using this very property of channels, so that my thread exits deterministically, without resorting to extra flags in conjunction with try_recv()?
Why not send a specific message to shut down this thread? I do not know what your data looks like, but most of the time it will be an enum; by adding a variant like MyData::Shutdown, your receive loop can simply break when it sees it.
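A minimal, self-contained sketch of that idea (Data and the enum name are placeholders):

use std::sync::mpsc;
use std::thread;

struct Data; // stand-in for the question's payload type

enum MyData {
    Payload(Data),
    Shutdown,
}

fn main() {
    let (sender, receiver) = mpsc::channel();
    let handle = thread::spawn(move || loop {
        match receiver.recv() {
            Ok(MyData::Payload(_data)) => { /* do something, inform main thread, etc. */ }
            // An explicit shutdown message or a closed channel both end the loop.
            Ok(MyData::Shutdown) | Err(_) => break,
        }
    });
    sender.send(MyData::Payload(Data)).unwrap();
    sender.send(MyData::Shutdown).unwrap();
    handle.join().unwrap();
}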
You can wrap the a field of your B type in an Option. That way, in the Drop::drop method you can do drop(self.a.take()), which replaces the field with None and drops the sender. This closes the channel, and your thread can then be properly joined.
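A sketch of that approach, assuming the A and Data types from the question:

struct B {
    a: Option<A>, // was `a: A`
    join_handle: Option<std::thread::JoinHandle<()>>,
}

impl Drop for B {
    fn drop(&mut self) {
        // Dropping `a` drops its Sender, which closes the channel;
        // the worker's recv() then returns Err and its loop breaks.
        drop(self.a.take());
        self.join_handle.take().unwrap().join().unwrap();
    }
}

Other methods on B then access the wrapped value as self.a.as_ref().unwrap().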
You can create a new channel and swap your actual sender out with the dummy sender. The replaced sender is then dropped, so the thread can be joined:
use std::mem::replace;

impl Drop for B {
    fn drop(&mut self) {
        let (s, _) = channel();
        drop(replace(&mut self.a.sender, s));
        self.join_handle.take().unwrap().join().unwrap();
    }
}
Try it out in the playpen: http://is.gd/y7A9L0
I don't know what the overhead of creating and immediately dropping a channel is, but it's not free and unlikely to be optimized out (There's an Arc in there).
On a side note, your infinite loop with a match on receiver.recv() could be replaced by a for loop using the Receiver::iter method:
for _ in receiver.iter() {
    // do something with the value
}
