use std::iter;

fn worker_sum(from: u64, to: u64) -> u64 {
    range(from, to).fold(0u64, |sum, x| sum + x)
}

fn main() {
    let max = 5u64;
    let step = 2u64;
    let (sender, receiver) = channel::<u64>();
    for x in iter::range_step_inclusive(0u64, max, step) {
        let end = if x + step > max { max } else { x + step };
        //println!("{} -> {} = {}", x, end, worker_sum(x, end));
        let local_sender = sender.clone();
        spawn(proc() {
            local_sender.send(worker_sum(x, end));
        });
    }
    loop {
        match receiver.try_recv() {
            Ok(x) => println!("{}", x),
            Err(_) => break,
        }
    }
}
I get the error:
task '' failed at 'sending on a closed channel', /home/rustbuild/src/rust-buildbot/slave/nightly-linux/build/src/libsync/comm/mod.rs:573
I somewhat understand the problem, but how do I properly "select" from the channel? The documentation is really sparse, even though I'm using the nightly build, which is said to have improved the docs (since version 0.13).
So my questions are:
How do I solve the problem with as few structural changes to the code as possible?
How do I make the code idiomatic?
The problem you have here is that the channel becomes closed by the reading task before all data is sent. Your loop is:
loop {
    match receiver.try_recv() {
        Ok(x) => println!("{}", x),
        Err(_) => break,
    }
}
In this loop, the receiver breaks out as soon as it meets an error. Once the loop is broken, your function reaches the end of its scope and the receiver is destroyed. From then on, any attempt to send more data will fail.
The problem is that your receiver gets an Err(Empty) because the senders have not sent anything yet. You must wait for them, and only break when meeting an Err(Disconnected).
You need to change your code to something like this (explanations in comments):
use std::iter;

fn worker_sum(from: u64, to: u64) -> u64 {
    range(from, to).fold(0u64, |sum, x| sum + x)
}

fn main() {
    let max = 5u64;
    let step = 2u64;
    let (sender, receiver) = channel::<u64>();
    for x in iter::range_step_inclusive(0u64, max, step) {
        let end = if x + step > max { max } else { x + step };
        // here, each thread will own its own sender, and the channel will
        // be closed once all senders are destroyed.
        let local_sender = sender.clone();
        spawn(proc() {
            local_sender.send(worker_sum(x, end));
            // Once we reach here, the sender of this task is destroyed.
        });
    }
    // We destroy the sender of the main task,
    // because we don't want to wait for it:
    // it would deadlock the program.
    drop(sender);
    loop {
        match receiver.try_recv() {
            Ok(x) => println!("{}", x),
            // We break only if the channel is closed,
            // it means that all senders are finished.
            Err(e) if e == ::std::comm::Disconnected => { break; },
            _ => {}
        }
    }
}
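Not part of the original answer, but for comparison, here is a sketch of how the same program reads on modern (post-1.0) Rust, where the 0.13-era spawn(proc() ...), range, and range_step_inclusive APIs no longer exist; the logic is unchanged:

use std::sync::mpsc::channel;
use std::thread;

fn worker_sum(from: u64, to: u64) -> u64 {
    // Same exclusive range as range(from, to) in the old code.
    (from..to).sum()
}

fn main() {
    let max = 5u64;
    let step = 2u64;
    let (sender, receiver) = channel::<u64>();
    for x in (0..=max).step_by(step as usize) {
        let end = if x + step > max { max } else { x + step };
        let local_sender = sender.clone();
        thread::spawn(move || {
            // A send error only means the receiver is gone; ignore it here.
            let _ = local_sender.send(worker_sum(x, end));
        });
    }
    // Drop the main thread's sender so the channel closes once every
    // worker has finished and dropped its clone.
    drop(sender);
    // Iterating the receiver blocks for each value and ends cleanly
    // when all senders have been dropped, replacing the try_recv spin loop.
    for x in receiver {
        println!("{}", x);
    }
}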
Related
I wanted to know if it's possible to loop {} and, inside the loop, call a function that takes a parameter by value (not by reference) without copying it.
I am using the posix_mq crate, and I want to open an existing queue; if it does not exist, wait 1 second and try to open it again.
Here is my code:
pub fn open(&mut self) -> Result<(), posix_mq::error::Error> {
    let mut attempt = self.tools_.get_max_attempt(); // 30
    let queue_name: Name = Name::new(&self.queue_name_)?;
    loop {
        match Queue::open(queue_name) {
            Ok(q) => {
                self.my_queue_ = Some(q);
                Ok::<(), posix_mq::error::Error>(());
            }
            Err(e) => match e {
                posix_mq::error::Error::QueueNotFound() => {
                    let waiting_print = "Waiting for creation of the queue.".to_string();
                    self.tools_.update_printing_elements(waiting_print, false);
                    attempt -= 1;
                    if attempt <= 1 {
                        return Err(e);
                    }
                    thread::sleep(time::Duration::from_secs(1));
                },
                _ => { Err::<(), posix_mq::error::Error>(e); },
            }
        }
    }
}
I want Queue::open() to borrow queue_name instead of taking ownership of it, without creating a copy.
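One way to sidestep the move, sketched here using only the calls already shown in the snippet (and assuming it is acceptable to rebuild the Name on every attempt), is to construct the Name inside the loop so each Queue::open call consumes its own value:

loop {
    // Building the Name per attempt means Queue::open can take it by value
    // without leaving a moved-out variable behind for the next iteration.
    let queue_name: Name = Name::new(&self.queue_name_)?;
    match Queue::open(queue_name) {
        Ok(q) => {
            self.my_queue_ = Some(q);
            return Ok(());
        }
        Err(_e) => {
            // retry / error handling exactly as in the snippet above (elided)
        }
    }
}

Whether Name implements Clone is not assumed here; if it does, calling queue_name.clone() on each attempt would work just as well.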
The test code below considers a situation in which there are three different threads.
Each thread has to do certain asynchronous tasks that may take some time to finish.
This is "simulated" in the code below with a sleep.
On top of that, two of the threads collect information that they have to send to the third one for further processing. This is done using mpsc channels.
Because the threads depend on information obtained from outside the Rust application, and therefore out of our control, they may get interrupted. This is emulated by generating a random number, and the loop in each thread breaks when that happens.
What I'm trying to achieve is a system in which whenever one of the threads has an error (simulated with the random number = 9), every other thread is cancelled too.
use std::sync::mpsc::channel;
use std::sync::mpsc::{Sender, Receiver, TryRecvError};
use std::thread::sleep;
use tokio::time::Duration;
use rand::distributions::{Uniform, Distribution};
#[tokio::main]
async fn main() {
execution_cycle().await;
}
async fn execution_cycle() {
let (tx_first, rx_first) = channel::<Message>();
let (tx_second, rx_second) = channel::<Message>();
let handle_sender_first = tokio::spawn(sender_thread(tx_first));
let handle_sender_second = tokio::spawn(sender_thread(tx_second));
let handle_receiver = tokio::spawn(receiver_thread(rx_first, rx_second));
let mut thread_rng = rand::thread_rng();
let rng_generator = Uniform::from(1..10);
let mut cancel_from_cycle = rng_generator.sample(&mut thread_rng);
while !&handle_sender_first.is_finished() && !&handle_sender_second.is_finished() && !&handle_receiver.is_finished() {
cancel_from_cycle = rng_generator.sample(&mut thread_rng);
if (cancel_from_cycle == 9) {
println!("Aborting from the execution cycle.");
handle_receiver.abort();
handle_sender_first.abort();
handle_sender_second.abort();
}
}
if handle_sender_first.is_finished() {
println!("handle_sender_first finished.");
} else {
println!("handle_sender_first ongoing.");
}
if handle_sender_second.is_finished() {
println!("handle_sender_second finished.");
} else {
println!("handle_sender_second ongoing.");
}
if handle_receiver.is_finished() {
println!("handle_receiver finished.");
} else {
println!("handle_receiver ongoing.");
}
}
async fn sender_thread(tx: Sender<Message>) {
let mut thread_rng = rand::thread_rng();
let rng_generator = Uniform::from(1..20);
let mut random_id = rng_generator.sample(&mut thread_rng);
while random_id != 9 {
let msg = Message {
id: random_id,
text: "hello".to_owned()
};
println!("Sending message {}.", msg.id);
random_id = rng_generator.sample(&mut thread_rng);
println!("Generated id {}.", random_id);
let result = tx.send(msg);
match result {
Ok(res) => {},
Err(error) => {
println!("Sending error {:?}", error);
random_id = 9;
}
}
sleep(Duration::from_millis(2000));
}
}
async fn receiver_thread(rx_first: Receiver<Message>, rx_second: Receiver<Message>) {
let mut channel_open_first = true;
let mut channel_open_second = true;
let mut thread_rng = rand::thread_rng();
let rng_generator = Uniform::from(1..15);
let mut random_event = rng_generator.sample(&mut thread_rng);
while channel_open_first && channel_open_second && random_event != 9 {
channel_open_first = receiver_inner(&rx_first);
channel_open_second = receiver_inner(&rx_second);
random_event = rng_generator.sample(&mut thread_rng);
println!("Generated event {}.", random_event);
sleep(Duration::from_millis(800));
}
}
fn receiver_inner(rx: &Receiver<Message>) -> bool {
let value = rx.try_recv();
match value {
Ok(msg) => {
println!("Message {} received: {}", msg.id, msg.text);
},
Err(error) => {
if error != TryRecvError::Empty {
println!("{}", error);
return false;
} else { /* Channel is empty.*/ }
}
}
return true;
}
struct Message {
id: usize,
text: String,
}
In the working example here, it does exactly that; however, it does so only from inside the threads, and I would like to add a "kill switch" to the execution_cycle() method that cancels all three threads when a certain event takes place (the random number cancel_from_cycle == 9), in the simplest way possible. I tried drop(handler_sender), and also panic!() from execution_cycle(), but the spawned threads keep running, preventing the application from finishing. I also tried handle_receiver().abort() without success.
How can I achieve the desired result?
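One thing the snippet runs into, going by tokio's documented behaviour rather than anything stated above: JoinHandle::abort can only cancel a task at an .await point, and these tasks block with std::thread::sleep (the while loop in execution_cycle never awaits either), so they never give the runtime a chance to cancel them. Below is a sketch of a sender task that does yield, using tokio::time::sleep and the Message struct from the question:

use rand::distributions::{Distribution, Uniform};
use std::sync::mpsc::Sender;
use tokio::time::{sleep, Duration};

// Same struct as in the question.
struct Message {
    id: usize,
    text: String,
}

async fn sender_thread(tx: Sender<Message>) {
    loop {
        // Sample in a single statement so the non-Send ThreadRng is not
        // held across the .await below (tokio::spawn needs a Send future).
        let random_id: usize = Uniform::from(1..20).sample(&mut rand::thread_rng());
        if random_id == 9 {
            break;
        }
        println!("Sending message {}.", random_id);
        if tx.send(Message { id: random_id, text: "hello".to_owned() }).is_err() {
            break;
        }
        // An async sleep yields to the runtime; handle.abort() takes effect
        // at yield points like this one.
        sleep(Duration::from_millis(2000)).await;
    }
}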
I'm new to Rust.
I'm trying to write a file_sensor that will start a counter after a file is created. The plan is that if a second file is not received after a certain amount of time, the sensor will exit with a zero exit code.
I could write the code to continue that work, but I feel the code below illustrates the problem (I have also omitted, for example, the post function referred to).
I have been struggling with this problem for several hours; I've tried Arc and mutexes, and even global variables.
The Timer implementation is Ticktock-rs.
I need to be able to either get heartbeat in the match body for EventKind::Create(CreateKind::Folder), or file_count in the loop.
The code I've attached here runs, but file_count is always zero in the loop.
use std::env;
use std::path::Path;
use std::{thread, time};
use std::process::ExitCode;
use ticktock::Timer;
use notify::{
    Watcher,
    RecommendedWatcher,
    RecursiveMode,
    Result,
    event::{EventKind, CreateKind, ModifyKind, Event}
};

fn main() -> Result<()> {
    let now = time::Instant::now();
    let mut heartbeat = Timer::apply(
        |_, count| {
            *count += 1;
            *count
        },
        0,
    )
    .every(time::Duration::from_millis(500))
    .start(now);
    let mut file_count = 0;
    let args = Args::parse();
    let REQUEST_SENSOR_PATH = env::var("REQUEST_SENSOR_PATH").expect("$REQUEST_SENSOR_PATH} is not set");
    let mut watcher = notify::recommended_watcher(move |res: Result<Event>| {
        match res {
            Ok(event) => {
                match event.kind {
                    EventKind::Create(CreateKind::File) => {
                        file_count += 1;
                        // do something with file
                    }
                    _ => { /* something else changed */ }
                }
                println!("{:?}", event);
            },
            Err(e) => {
                println!("watch error: {:?}", e);
                ExitCode::from(101);
            },
        }
    })?;
    watcher.watch(Path::new(&REQUEST_SENSOR_PATH), RecursiveMode::Recursive)?;
    loop {
        let now = time::Instant::now();
        if let Some(n) = heartbeat.update(now) {
            println!("Heartbeat: {}, fileCount: {}", n, file_count);
            if n > 10 {
                heartbeat.set_value(0);
                // This function will reset timer when a file arrives
            }
        }
    }
    Ok(())
}
Your compiler warnings show you the problem:
warning: unused variable: `file_count`
--> src/main.rs:31:25
|
31 | file_count += 1;
| ^^^^^^^^^^
|
= note: `#[warn(unused_variables)]` on by default
= help: did you mean to capture by reference instead?
The problem here is that you use file_count inside a move || closure. file_count is an i32, which is Copy. Using it in a move || closure actually creates a copy of it, so assigning to it no longer updates the original variable.
Either way, it's impossible to modify a local variable of main() directly from an event handler. Event handlers require 'static lifetime if they reference things, because Rust cannot guarantee that the event handler won't outlive main.
One solution for this problem is to use reference counters and interior mutability. In this case, I will use Arc for reference counters and AtomicI32 for interior mutability. Note that notify::recommended_watcher requires thread safety, otherwise instead of an Arc<AtomicI32> we could have used an Rc<Cell<i32>>, which is the same thing but only for single-threaded environments, with a little less overhead.
use notify::{
    event::{CreateKind, Event, EventKind},
    RecursiveMode, Result, Watcher,
};
use std::time;
use std::{env, sync::atomic::Ordering};
use std::{path::Path, sync::Arc};
use std::{process::ExitCode, sync::atomic::AtomicI32};
use ticktock::Timer;

fn main() -> Result<()> {
    let now = time::Instant::now();
    let mut heartbeat = Timer::apply(
        |_, count| {
            *count += 1;
            *count
        },
        0,
    )
    .every(time::Duration::from_millis(500))
    .start(now);
    let file_count = Arc::new(AtomicI32::new(0));
    let REQUEST_SENSOR_PATH =
        env::var("REQUEST_SENSOR_PATH").expect("$REQUEST_SENSOR_PATH} is not set");
    let mut watcher = notify::recommended_watcher({
        let file_count = Arc::clone(&file_count);
        move |res: Result<Event>| {
            match res {
                Ok(event) => {
                    match event.kind {
                        EventKind::Create(CreateKind::File) => {
                            file_count.fetch_add(1, Ordering::AcqRel);
                            // do something with file
                        }
                        _ => { /* something else changed */ }
                    }
                    println!("{:?}", event);
                }
                Err(e) => {
                    println!("watch error: {:?}", e);
                    ExitCode::from(101);
                }
            }
        }
    })?;
    watcher.watch(Path::new(&REQUEST_SENSOR_PATH), RecursiveMode::Recursive)?;
    loop {
        let now = time::Instant::now();
        if let Some(n) = heartbeat.update(now) {
            println!(
                "Heartbeat: {}, fileCount: {}",
                n,
                file_count.load(Ordering::Acquire)
            );
            if n > 10 {
                heartbeat.set_value(0);
                // This function will reset timer when a file arrives
            }
        }
    }
}
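For completeness, a small sketch of the Rc<Cell<i32>> variant mentioned above. It shows the same pattern for single-threaded code; it would not compile with notify::recommended_watcher, which needs a thread-safe handler, so treat it purely as an illustration:

use std::cell::Cell;
use std::rc::Rc;

fn main() {
    let file_count = Rc::new(Cell::new(0i32));

    // The closure owns its own Rc clone; both handles point at the same Cell.
    let counter = Rc::clone(&file_count);
    let on_event = move || counter.set(counter.get() + 1);

    on_event();
    on_event();
    println!("fileCount: {}", file_count.get()); // prints 2
}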
Also, note that the ExitCode::from(101); gives you a warning. It does not actually exit the program; it only creates an exit-code value and then discards it again. You probably intended to write std::process::exit(101);, although I would discourage that because it does not clean up properly (it does not call any Drop implementations). I'd use a panic here instead; this is the exact use case panic is meant for.
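To make that last point concrete, a tiny standalone sketch (not the watcher code) of the three behaviours described above:

use std::process::ExitCode;

fn main() {
    // Only constructs a value and immediately discards it (hence the unused
    // warning); execution continues and the process still exits with status 0.
    ExitCode::from(101);

    // This would end the process right here with status 101, but it skips
    // every Drop implementation still on the stack:
    // std::process::exit(101);

    // Panicking unwinds the stack (running Drop) and makes the failure visible;
    // by default the process then exits with a non-zero status.
    panic!("fatal error, exiting");
}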
TL;DR: I'm trying to have an ID'd background thread that is controlled via that ID and web calls, and the background thread doesn't seem to be getting the message through any of the channel types I've tried.
I've tried both the std channels and tokio's, and of those I've tried all but the watch type from tokio. All have the same result, which probably means that I've messed something up somewhere without realizing it, but I can't find the issue:
use std::collections::{
hash_map::Entry::{Occupied, Vacant},
HashMap,
};
use std::sync::Arc;
use tokio::sync::mpsc::{self, UnboundedSender};
use tokio::sync::RwLock;
use tokio::task::JoinHandle;
use uuid::Uuid;
use warp::{http, Filter};
#[derive(Default)]
pub struct Switcher {
pub handle: Option<JoinHandle<bool>>,
pub pipeline_end_tx: Option<UnboundedSender<String>>,
}
impl Switcher {
pub fn set_sender(&mut self, tx: UnboundedSender<String>) {
self.pipeline_end_tx = Some(tx);
}
pub fn set_handle(&mut self, handle: JoinHandle<bool>) {
self.handle = Some(handle);
}
}
const ADDR: [u8; 4] = [0, 0, 0, 0];
const PORT: u16 = 3000;
type RunningPipelines = Arc<RwLock<HashMap<String, Arc<RwLock<Switcher>>>>>;
#[tokio::main]
async fn main() {
let running_pipelines = Arc::new(RwLock::new(HashMap::<String, Arc<RwLock<Switcher>>>::new()));
let session_create = warp::post()
.and(with_pipelines(running_pipelines.clone()))
.and(warp::path("session"))
.then(|pipelines: RunningPipelines| async move {
println!("session requested OK!");
let id = Uuid::new_v4();
let mut switcher = Switcher::default();
let (tx, mut rx) = mpsc::unbounded_channel::<String>();
switcher.set_sender(tx);
let t = tokio::spawn(async move {
println!("Background going...");
//This would be something processing in the background until it received the end signal
match rx.recv().await {
Some(v) => {
println!(
"Got end message:{} YESSSSSS#!##!!!!!!!!!!!!!!!!1111eleven",
v
);
}
None => println!("Error receiving end signal:"),
}
println!("ABORTING HANDLE");
true
});
let ret = HashMap::from([("session_id", id.to_string())]);
switcher.set_handle(t);
{
pipelines
.write()
.await
.insert(id.to_string(), Arc::new(RwLock::new(switcher)));
}
Ok(warp::reply::json(&ret))
});
let session_end = warp::delete()
.and(with_pipelines(running_pipelines.clone()))
.and(warp::path("session"))
.and(warp::query::<HashMap<String, String>>())
.then(
|pipelines: RunningPipelines, p: HashMap<String, String>| async move {
println!("session end requested OK!: {:?}", p);
match p.get("session_id") {
None => Ok(warp::reply::with_status(
"Please specify session to end",
http::StatusCode::BAD_REQUEST,
)),
Some(id) => {
let mut pipe = pipelines.write().await;
match pipe.entry(String::from(id)) {
Occupied(handle) => {
println!("occupied");
let (k, v) = handle.remove_entry();
drop(pipe);
println!("removed from hashmap, key:{}", k);
let s = v.write().await;
if let Some(h) = &s.handle {
if let Some(tx) = &s.pipeline_end_tx {
match tx.send("goodbye".to_string()) {
Ok(res) => {
println!(
"sent end message|{:?}| to fpipeline: {}",
res, id
);
//Added this to try to get it to at least Error on the other side
drop(tx);
},
Err(err) => println!(
"ERROR sending end message to pipeline({}):{}",
id, err
),
};
} else {
println!("no sender channel found for pipeline: {}", id);
};
h.abort();
} else {
println!(
"no luck finding the value in handle in the switcher: {}",
id
);
};
}
Vacant(_) => {
println!("no luck finding the handle in the pipelines: {}", id)
}
};
Ok(warp::reply::with_status("done", http::StatusCode::OK))
}
}
},
);
let routes = session_create
.or(session_end)
.recover(handle_rejection)
.with(warp::cors().allow_any_origin());
println!("starting server...");
warp::serve(routes).run((ADDR, PORT)).await;
}
async fn handle_rejection(
err: warp::Rejection,
) -> Result<impl warp::Reply, std::convert::Infallible> {
Ok(warp::reply::json(&format!("{:?}", err)))
}
fn with_pipelines(
pipelines: RunningPipelines,
) -> impl Filter<Extract = (RunningPipelines,), Error = std::convert::Infallible> + Clone {
warp::any().map(move || pipelines.clone())
}
Dependencies:
[dependencies]
warp = "0.3"
tokio = { version = "1", features = ["full"] }
uuid = { version = "0.8.2", features = ["serde", "v4"] }
Results when I boot up, send a "create" request, and then an "end" request with the received ID:
starting server...
session requested OK!
Background going...
session end requested OK!: {"session_id": "6b984a45-38d8-41dc-bf95-422f75c5a429"}
occupied
removed from hashmap, key:6b984a45-38d8-41dc-bf95-422f75c5a429
sent end message|()| to fpipeline: 6b984a45-38d8-41dc-bf95-422f75c5a429
You'll notice that the background thread starts (and doesn't end) when the "create" request is made, but when the "end" request is made, everything appears to complete successfully from the request (web) side, yet the background thread never receives the message. As I've said, I've tried all the different channel types and moved things around to get it into this configuration, i.e. flattened and made things as thread-safe as I could, or at least as I could think of. I'm greener than I would like in Rust, so any help would be VERY appreciated!
I think that the issue here is that you are sending the message and then immediately aborting the background task:
tx.send("goodbye".to_string());
//...
h.abort();
And the background task does not have time to process the message, as the abort is of higher priority.
What you need is to join the task, not to abort it.
Curiously, tokio task handles do not have a join() method; instead, you await the handle itself. But for that you need to own the handle, so first you have to extract it from the Switcher:
let mut s = v.write().await;
// steal the task handle
if let Some(h) = s.handle.take() {
    //...
    tx.send("goodbye".to_string());
    //...
    // join the task
    h.await.unwrap();
}
Note that joining a task may fail if the task was aborted or panicked. I'm just panicking (the unwrap) in the code above, but you may want to do something different.
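For instance, a sketch that inspects the join result instead of unwrapping (h is the handle taken from the Switcher above; the bool is the value the background task returns):

// Instead of h.await.unwrap():
match h.await {
    Ok(finished) => println!("pipeline task finished, returned {}", finished),
    // The Err case is a tokio::task::JoinError: the task panicked or was aborted.
    Err(join_err) => println!("pipeline task did not finish cleanly: {}", join_err),
}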
Or... you could choose not to wait for the task at all. In tokio, if you drop a task handle, the task is detached; it will then finish whenever it finishes.
I'm trying to implement the sieve of Eratosthenes in Rust using coroutines as a learning exercise (not homework), and I can't find any reasonable way of connecting each thread to the Receiver and Sender ends of two different channels.
The Receiver is involved in two distinct tasks, namely reporting the highest prime found so far, and supplying further candidate primes for the filter. This is fundamental to the algorithm.
Here is what I would like to do but can't because the Receiver cannot be transferred between threads. Using std::sync::Arc does not appear to help, unsurprisingly.
Please note that I do understand why this doesn't work.
fn main() {
    let (basetx, baserx): (Sender<u32>, Receiver<u32>) = channel();
    let max_number = 103;
    thread::spawn(move || {
        generate_natural_numbers(&basetx, max_number);
    });
    let oldrx = &baserx;
    loop {
        // we need the prime in this thread
        let prime = match oldrx.recv() {
            Ok(num) => num,
            Err(_) => { break; 0 }
        };
        println!("{}", prime);
        // create (newtx, newrx) in a deliberately unspecified way
        // now we need to pass the receiver off to the sieve thread
        thread::spawn(move || {
            sieve(oldrx, newtx, prime); // forwards numbers if not divisible by prime
        });
        oldrx = newrx;
    }
}
Equivalent working Go code:
func main() {
    channel := make(chan int)
    var ok bool = true
    var prime int = 0
    go generate(channel, 103)
    for true {
        prime, ok = <-channel
        if !ok {
            break
        }
        new_channel := make(chan int)
        go sieve(channel, new_channel, prime)
        channel = new_channel
        fmt.Println(prime)
    }
}
What is the best way to deal with a situation like this where a Receiver needs to be handed off to a different thread?
You don't really explain what problem you are having, but your code is close enough:
use std::sync::mpsc::{channel, Sender, Receiver};
use std::thread;

fn generate_numbers(tx: Sender<u8>) {
    for i in 2..100 {
        tx.send(i).unwrap();
    }
}

fn filter(rx: Receiver<u8>, tx: Sender<u8>, val: u8) {
    for v in rx {
        if v % val != 0 {
            tx.send(v).unwrap();
        }
    }
}

fn main() {
    let (base_tx, base_rx) = channel();
    thread::spawn(move || {
        generate_numbers(base_tx);
    });
    let mut old_rx = base_rx;
    loop {
        let num = match old_rx.recv() {
            Ok(v) => v,
            Err(_) => break,
        };
        println!("prime: {}", num);
        let (new_tx, new_rx) = channel();
        thread::spawn(move || {
            filter(old_rx, new_tx, num);
        });
        old_rx = new_rx;
    }
}
using coroutines
Danger, Danger, Will Robinson! These are not coroutines; they are full-fledged threads, which are a lot more heavyweight than coroutines.
What is the best way to deal with a situation like this where a Receiver needs to be handed off to a different thread?
Just... transfer ownership of the Receiver to the thread?