Rust GTK: Trigger ApplicationWindow/DrawingArea redraw on a timer?

I'm sure there's an easy way to do this but I don't know what it is. I have a very basic gtk::{Application, ApplicationWindow, DrawingArea}; setup. I want the DrawingArea::connect_draw closure to be triggered repeatedly on a timer, so it updates according to some changing state. (It would also be cool if it could be actively triggered by other threads, but a timer is fine.)
So far everything I've found that would work on a timer fails because it would mean moving the ApplicationWindow to another thread (it fails with "NonNull<GObject> cannot be shared between threads safely"). What I currently have triggers a redraw on generic events, so if I click my mouse on the window it will redraw, but it won't do so automatically.
That code is below; can someone show me how to make this work?
//BOILER PLATE SCROLL DOWN
extern crate cairo;
extern crate rand;
extern crate gtk;
extern crate gdk;
extern crate glib;
use std::{thread, time};
use gtk::prelude::*;
use gtk::{Application, ApplicationWindow, DrawingArea};
use std::sync::mpsc;
use std::sync::mpsc::{Receiver, Sender};
fn main(){
let app = Application::builder()
.application_id("org.example.HelloWorld")
.build();
let (tx, rx ) : (Sender<f64>, Receiver<f64>)= mpsc::channel();
gtk::init().expect("GTK init failed");
let draw_area = DrawingArea::new();
let _id = draw_area.connect_draw(move |_unused, f| {
let red = rx.recv().unwrap();
f.set_source_rgb(red,0.5, 0.5);
f.paint().expect("Painting failed");
Inhibit(false)
});
app.connect_activate(move |app| {
let win = ApplicationWindow::builder()
.application(app)
.default_width(320)
.default_height(200)
.title("Hello, World!")
.build();
win.add(&draw_area);
win.show_all();
//IMPORTANT STUFF STARTS HERE
win.connect_event(|w, _g|{ //HORRIBLE HACK HELP FIX
w.queue_draw();
Inhibit(false)
});
glib::timeout_add_seconds(1, ||{
println!("I wish I could redraw instead of printing this line");
Continue(true)
});
//fails with "`NonNull<GObject>` cannot be shared between threads safely" :
// glib::timeout_add_seconds(1, ||{
// win.queue_draw();
// Continue(true)
// });
//IMPORTANT STUFF ENDS HERE
});
thread::spawn(move || {
loop {
thread::sleep(time::Duration::from_millis(100));
tx.send(rand::random::<f64>()).unwrap();
}
});
app.run();
}
EDIT: I tried a mutex version; maybe I have implemented it wrong. The following code gives the same error (NonNull<GObject> cannot be shared between threads safely):
let mut_win = Mutex::new(win);
let arc_win = Arc::new(mut_win);
glib::timeout_add_seconds(1, move ||{
let mut w = arc_win.lock().unwrap();
(*w).queue_draw();
Continue(true)
});

Use glib::timeout_add_seconds_local() instead of the non-local version if you're doing everything on the same thread.
The generic version requires a Send-able closure and can be called from any thread at any time; the closure itself is invoked on your main thread. The local version can only be called from the main thread and panics otherwise.
By not requiring a Send-able closure, you can move a reference to your widgets into the closure.
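For example, here is a minimal sketch of the question's connect_activate handler using the local variant (this assumes the same gtk3-era gtk/glib bindings as the question, where glib::timeout_add_seconds_local and Continue are available):
app.connect_activate(move |app| {
    let win = ApplicationWindow::builder()
        .application(app)
        .default_width(320)
        .default_height(200)
        .title("Hello, World!")
        .build();
    win.add(&draw_area);
    win.show_all();
    // The _local variant does not require a Send closure, so the window
    // handle can be moved straight in; the callback runs on the GTK main thread.
    glib::timeout_add_seconds_local(1, move || {
        win.queue_draw();
        Continue(true)
    });
});
queue_draw then schedules the connect_draw handler to run on the next frame, so no event hack is needed.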

Okay, I eventually made it work, after stumbling onto "gtk-rs: how to update view from another thread". The key is to stash the window in a thread-local global (TBH I don't really understand what that means, but it works) and then access it through a standalone function.
I had to modify the linked answer a bit because of scope disagreements between my channel and my window. Eventually I just decided to deal with them separately.
I strongly suspect this is not the right way to do this, but at least it runs.
extern crate cairo;
extern crate rand;
extern crate gtk;
extern crate gdk;
extern crate glib;
use std::sync::{Arc, Mutex};
use std::{thread, time, u32};
use gtk::prelude::*;
use gtk::{Application, ApplicationWindow, DrawingArea};
use std::sync::mpsc;
use std::sync::mpsc::{Receiver, Sender};
use std::cell::RefCell;
const SIZE : usize = 400;
type Message = (usize, usize);
type Grid = [[bool; SIZE]; SIZE];
thread_local!(
static GLOBAL: RefCell<Option<ApplicationWindow>> = RefCell::new(None);
);
fn check_update_display(){
GLOBAL.with(|global|{
if let Some(win) = &*global.borrow() {
win.queue_draw();
}
})
}
fn main(){
let app = Application::builder()
.application_id("org.example.HelloWorld")
.build();
let (tx, rx ) : (Sender<Message>, Receiver<Message>) = mpsc::channel();
gtk::init().expect("GTK init failed");
let draw_area = DrawingArea::new();
let grid_mut = Arc::new(Mutex::new([[false; SIZE]; SIZE]));
let draw_grid_mut = Arc::clone(&grid_mut);
let _id = draw_area.connect_draw(move |_unused, f| {
let grid = *(draw_grid_mut.lock().unwrap());
f.set_source_rgb(0.0,0.0, 0.0);
f.paint().expect("Painting failed");
f.set_source_rgb(1.0,1.0, 1.0);
let mut count = 0;
for i in 0 .. SIZE{
for j in 0 .. SIZE {
if grid[i][j] {
count = count + 1;
f.move_to(i as f64, j as f64);
f.rectangle(i as f64 * 3.0, j as f64 * 3.0 , 1.0, 1.0);
}
}
}
f.stroke().unwrap();
Inhibit(false)
});
let reader_grid = Arc::clone(&grid_mut);
thread::spawn(move ||{
loop{
let mut g = reader_grid.lock().unwrap();
let (x, y) = rx.recv().unwrap();
g[x][y] = true;
drop(g);
thread::sleep(time::Duration::from_millis(10));
}
});
app.connect_activate(move |app| {
let win = ApplicationWindow::builder()
.application(app)
.default_width(320)
.default_height(200)
.title("steveburg")
.build();
win.add(&draw_area);
win.show_all();
GLOBAL.with(|global|{
*global.borrow_mut() = Some(win);
});
glib::timeout_add_seconds(1, move ||{
check_update_display();
Continue(true)
});
});
thread::spawn(move || {
steveburg(tx);
});
app.run();
}
fn random_pair() -> (i32, i32) {
let (x, y) = ((rand::random::<u32>() % 3) as i32 - 1, (rand::random::<u32>() % 3) as i32 - 1);
(x, y)
}
fn steveburg(tx : Sender<Message>){
let mut grid : Grid = [[false; SIZE]; SIZE];
loop{
let (mut x, mut y) = (SIZE/2, SIZE/2);
'drift: loop {
if x == 0 || x == SIZE - 1 || y == 0 || y == SIZE - 1 {
break 'drift;
}
for nx in 0 .. 3 {
for ny in 0 .. 3 {
if grid[x + nx -1][y + ny -1] {break 'drift}
}
}
let (xa, ya) = random_pair();
(x, y) = ((x as i32 + xa) as usize, (y as i32 + ya) as usize);
}
grid[x][y] = true;
tx.send((x, y)).unwrap();
thread::sleep(time::Duration::from_millis(10));
}
}

Related

How to create threads that last entire duration of program and pass immutable chunks for threads to operate on?

I have a bunch of math with real-time constraints. My main loop will call this function repeatedly, and it will always store results into an existing buffer. However, I want to spawn the threads at init time, let them run and do their work, and then have them wait for more data. For synchronization I will use a Barrier, and I have that part working. What I can't get working, despite trying various iterations of Arc and crossbeam, is splitting the thread spawning from the actual workload. This is what I have now.
pub const WORK_SIZE: usize = 524_288;
pub const NUM_THREADS: usize = 6;
pub const NUM_TASKS_PER_THREAD: usize = WORK_SIZE / NUM_THREADS;
fn main() {
let mut work: Vec<f64> = Vec::with_capacity(WORK_SIZE);
for i in 0..WORK_SIZE {
work.push(i as f64);
}
crossbeam::scope(|scope| {
let threads: Vec<_> = work
.chunks(NUM_TASKS_PER_THREAD)
.map(|chunk| scope.spawn(move |_| chunk.iter().cloned().sum::<f64>()))
.collect();
let threaded_time = std::time::Instant::now();
let thread_sum: f64 = threads.into_iter().map(|t| t.join().unwrap()).sum();
let threaded_micros = threaded_time.elapsed().as_micros() as f64;
println!("threaded took: {:#?}", threaded_micros);
let serial_time = std::time::Instant::now();
let no_thread_sum: f64 = work.iter().cloned().sum();
let serial_micros = serial_time.elapsed().as_micros() as f64;
println!("serial took: {:#?}", serial_micros);
assert_eq!(thread_sum, no_thread_sum);
println!(
"Threaded performace was {:?}",
serial_micros / threaded_micros
);
})
.unwrap();
}
But I can't find a way to spin these threads up in an init function and then pass work into them from a do_work function. I attempted to do something like this with Arcs and Mutexes but couldn't get everything straight there either. What I want to turn this into is something like the following:
use std::sync::{Arc, Barrier, Mutex};
use std::{slice::Chunks, thread::JoinHandle};
pub const WORK_SIZE: usize = 524_288;
pub const NUM_THREADS: usize = 6;
pub const NUM_TASKS_PER_THREAD: usize = WORK_SIZE / NUM_THREADS;
//simplified version of what actual work that code base will do
fn do_work(data: &[f64], result: Arc<Mutex<f64>>, barrier: Arc<Barrier>) {
loop {
barrier.wait();
let sum = data.into_iter().cloned().sum::<f64>();
*result.lock().unwrap() += sum;
}
}
fn init(
mut data: Chunks<'_, f64>,
result: &Arc<Mutex<f64>>,
barrier: &Arc<Barrier>,
) -> Vec<std::thread::JoinHandle<()>> {
let mut handles = Vec::with_capacity(NUM_THREADS);
//spawn threads, in actual code these would be stored in a lib crate struct
for i in 0..NUM_THREADS {
let result = result.clone();
let barrier = barrier.clone();
let chunk = data.nth(i).unwrap();
handles.push(std::thread::spawn(|| {
//Pass the particular thread the particular chunk it will operate on.
do_work(chunk, result, barrier);
}));
}
handles
}
fn main() {
let mut work: Vec<f64> = Vec::with_capacity(WORK_SIZE);
let mut result = Arc::new(Mutex::new(0.0));
for i in 0..WORK_SIZE {
work.push(i as f64);
}
let work_barrier = Arc::new(Barrier::new(NUM_THREADS + 1));
let threads = init(work.chunks(NUM_TASKS_PER_THREAD), &result, &work_barrier);
loop {
work_barrier.wait();
//actual code base would do something with summation stored in result.
println!("{:?}", result.lock().unwrap());
}
}
I hope this expresses the intent of what I need to do clearly enough. The issue with this specific implementation is that the chunks don't seem to live long enough, and when I tried wrapping them in an Arc it just moved the "argument doesn't live long enough" error to the Arc::new(data.chunk(_)) line.
use std::sync::{Arc, Barrier, Mutex};
use std::thread;
pub const WORK_SIZE: usize = 524_288;
pub const NUM_THREADS: usize = 6;
pub const NUM_TASKS_PER_THREAD: usize = WORK_SIZE / NUM_THREADS;
//simplified version of what actual work that code base will do
fn do_work(data: &[f64], result: Arc<Mutex<f64>>, barrier: Arc<Barrier>) {
loop {
barrier.wait();
let sum = data.iter().sum::<f64>();
*result.lock().unwrap() += sum;
}
}
fn init(
work: Vec<f64>,
result: Arc<Mutex<f64>>,
barrier: Arc<Barrier>,
) -> Vec<thread::JoinHandle<()>> {
let mut handles = Vec::with_capacity(NUM_THREADS);
//spawn threads, in actual code these would be stored in a lib crate struct
for i in 0..NUM_THREADS {
let slice = work[i * NUM_TASKS_PER_THREAD..(i + 1) * NUM_TASKS_PER_THREAD].to_owned();
let result = Arc::clone(&result);
let w = Arc::clone(&barrier);
handles.push(thread::spawn(move || {
do_work(&slice, result, w);
}));
}
handles
}
fn main() {
let mut work: Vec<f64> = Vec::with_capacity(WORK_SIZE);
let result = Arc::new(Mutex::new(0.0));
for i in 0..WORK_SIZE {
work.push(i as f64);
}
let work_barrier = Arc::new(Barrier::new(NUM_THREADS + 1));
let _threads = init(work, Arc::clone(&result), Arc::clone(&work_barrier));
loop {
thread::sleep(std::time::Duration::from_secs(3));
work_barrier.wait();
//actual code base would do something with summation stored in result.
println!("{:?}", result.lock().unwrap());
}
}

How to use a clone in a Rust thread

In this Rust program, inside the run function, I am trying to pass pair_clone as a parameter to both threads, but I keep getting a mismatched type error. I thought I was passing the pair, but it says I'm passing an integer instead.
use std::sync::{Arc, Mutex, Condvar};
fn producer(pair: &(Mutex<bool>, Condvar), num_of_loops: u32) {
let (mutex, cv) = pair;
//prints "producing"
}
fn consumer(pair: &(Mutex<bool>, Condvar), num_of_loops: u32) {
let (mutex, cv) = pair;
//prints "consuming"
}
pub fn run() {
println!("Main::Begin");
let num_of_loops = 5;
let num_of_threads = 4;
let mut array_of_threads = vec!();
let pair = Arc ::new((Mutex::new(true), Condvar::new()));
for pair in 0..num_of_threads {
let pair_clone = pair.clone();
array_of_threads.push(std::thread::spawn( move || producer(&pair_clone, num_of_loops)));
array_of_threads.push(std::thread::spawn( move || consumer(&pair_clone, num_of_loops)));
}
for i in array_of_threads {
i.join().unwrap();
}
println!("Main::End");
}
You have two main errors.
The first: you are using pair as the loop variable name, which shadows the Arc, so pair becomes the integer the compiler complains about.
The second: you are using one clone where you need two, one for the producer and one for the consumer.
After Edit
use std::sync::{Arc, Mutex, Condvar};
fn producer(pair: &(Mutex<bool>, Condvar), num_of_loops: u32) {
let (mutex, cv) = pair;
//prints "producing"
}
fn consumer(pair: &(Mutex<bool>, Condvar), num_of_loops: u32) {
let (mutex, cv) = pair;
//prints "consuming"
}
pub fn run() {
println!("Main::Begin");
let num_of_loops = 5;
let num_of_threads = 4;
let mut array_of_threads = vec![];
let pair = Arc ::new((Mutex::new(true), Condvar::new()));
for _ in 0..num_of_threads {
let pair_clone1 = pair.clone();
let pair_clone2 = pair.clone();
array_of_threads.push(std::thread::spawn( move || producer(&pair_clone1, num_of_loops)));
array_of_threads.push(std::thread::spawn( move || consumer(&pair_clone2, num_of_loops)));
}
for i in array_of_threads {
i.join().unwrap();
}
println!("Main::End");
}
Note that I haven't given any attention to code quality; I just fixed the compile errors.

Implement a monitoring thread without lock?

I have a struct that sends messages to a channel and also updates some of its own fields. How do I implement a monitoring thread that periodically looks (read-only) at its internal fields?
I can write it using an Arc<Mutex<T>> wrapper, but I feel it is not that efficient, since A::x could have been an i32 stored and updated on the stack. Is there any better way to do it without the locks?
use std::sync::{Arc, Mutex};
use std::sync::mpsc::{channel, Sender};
use std::{thread, time};
struct A {
x: Arc<Mutex<i32>>,
y: Sender<i32>,
}
impl A {
fn do_some_loop(&mut self) {
let sleep_time = time::Duration::from_millis(200);
// This is a long running thread.
for x in 1..1000000 {
*self.x.lock().unwrap() = x;
self.y.send(x);
thread::sleep(sleep_time);
}
}
}
fn test() {
let (sender, recever) = channel();
let x = Arc::new(Mutex::new(1));
let mut a = A { x: x.clone(), y: sender };
thread::spawn(move || {
// Monitor every 10 secs.
let sleep_time = time::Duration::from_millis(10000);
loop {
thread::sleep(sleep_time);
println!("{}", *x.lock().unwrap());
}
});
a.do_some_loop();
}
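For the integer case specifically, one lock-free alternative (a sketch of my own, assuming only the i32 needs to be observed) is std::sync::atomic::AtomicI32: the worker stores the value and the monitor loads it without taking a lock.
use std::sync::atomic::{AtomicI32, Ordering};
use std::sync::mpsc::{channel, Sender};
use std::sync::Arc;
use std::{thread, time};
struct A {
    x: Arc<AtomicI32>,
    y: Sender<i32>,
}
impl A {
    fn do_some_loop(&mut self) {
        let sleep_time = time::Duration::from_millis(200);
        for x in 1..1000000 {
            // Relaxed suffices: the monitor only needs an eventually-visible
            // snapshot of the counter, not any ordering with other data.
            self.x.store(x, Ordering::Relaxed);
            let _ = self.y.send(x);
            thread::sleep(sleep_time);
        }
    }
}
fn test() {
    let (sender, _receiver) = channel();
    let x = Arc::new(AtomicI32::new(1));
    let monitor_x = Arc::clone(&x);
    thread::spawn(move || loop {
        // Monitor every 10 secs.
        thread::sleep(time::Duration::from_millis(10000));
        println!("{}", monitor_x.load(Ordering::Relaxed));
    });
    let mut a = A { x, y: sender };
    a.do_some_loop();
}
This only works because the monitored field is a plain integer; for anything larger you're back to a Mutex/RwLock or to sending snapshots over a channel.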

Read from a channel or timeout?

With Rust 1.9, I'd like to read from an mpsc::channel or time out. Is there a clear idiom to make this work? I've seen the unstable approach described in mpsc::Select, but this GitHub discussion suggests it is not a robust approach. Is there a better recommended way for me to achieve receive-or-timeout semantics?
Rust 1.12 introduced Receiver::recv_timeout:
use std::sync::mpsc::channel;
use std::time::Duration;
fn main() {
let (.., rx) = channel::<bool>();
let timeout = Duration::new(3, 0);
println!("start recv");
let _ = rx.recv_timeout(timeout);
println!("done!");
}
I don't know how you'd do it with the standard library channels, but the chan crate provides a chan_select! macro:
#[macro_use]
extern crate chan;
use std::time::Duration;
fn main() {
let (_never_sends, never_receives) = chan::sync::<bool>(1);
let timeout = chan::after(Duration::from_millis(50));
chan_select! {
timeout.recv() => {
println!("timed out!");
},
never_receives.recv() => {
println!("Shouldn't have a value!");
},
}
}
I was able to get something working using the standard lib.
use std::sync::mpsc::channel;
use std::thread;
use std::time::{Duration, Instant};
use std::sync::mpsc::TryRecvError;
fn main() {
let (send, recv) = channel();
thread::spawn(move || {
send.send("Hello world!").unwrap();
thread::sleep(Duration::from_secs(1)); // block for one second
send.send("Delayed").unwrap();
});
println!("{}", recv.recv().unwrap()); // Received immediately
println!("Waiting...");
let mut resolved: bool = false;
let mut result: Result<&str, TryRecvError> = Ok("Null");
let now = Instant::now();
let timeout: u64= 2;
while !resolved {
result = recv.try_recv();
resolved = !result.is_err();
if now.elapsed().as_secs() as u64 > timeout {
break;
}
}
if result.is_ok(){
println!("Results: {:?}", result.unwrap());
}
println!("Time elapsed: {}", now.elapsed().as_secs());
println!("Resolved: {}", resolved.to_string());
}
This will spin for timeout seconds and will result in either the received value or an Err Result.

How do I use a Condvar to limit multithreading?

I'm trying to use a Condvar to limit the number of threads that are active at any given time. I'm having a hard time finding good examples on how to use Condvar. So far I have:
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
fn main() {
let thread_count_arc = Arc::new((Mutex::new(0), Condvar::new()));
let mut i = 0;
while i < 100 {
let thread_count = thread_count_arc.clone();
thread::spawn(move || {
let &(ref num, ref cvar) = &*thread_count;
{
let mut start = num.lock().unwrap();
if *start >= 20 {
cvar.wait(start);
}
*start += 1;
}
println!("hello");
cvar.notify_one();
});
i += 1;
}
}
The compiler error given is:
error[E0382]: use of moved value: `start`
--> src/main.rs:16:18
|
14 | cvar.wait(start);
| ----- value moved here
15 | }
16 | *start += 1;
| ^^^^^ value used here after move
|
= note: move occurs because `start` has type `std::sync::MutexGuard<'_, i32>`, which does not implement the `Copy` trait
I'm entirely unsure if my use of Condvar is correct. I tried staying as close as I could to the example in the Rust API docs. What is the proper way to implement this?
Here's a version that compiles:
use std::{
sync::{Arc, Condvar, Mutex},
thread,
};
fn main() {
let thread_count_arc = Arc::new((Mutex::new(0u8), Condvar::new()));
let mut i = 0;
while i < 100 {
let thread_count = thread_count_arc.clone();
thread::spawn(move || {
let (num, cvar) = &*thread_count;
let mut start = cvar
.wait_while(num.lock().unwrap(), |start| *start >= 20)
.unwrap();
// Before Rust 1.42, use this:
//
// let mut start = num.lock().unwrap();
// while *start >= 20 {
// start = cvar.wait(start).unwrap()
// }
*start += 1;
println!("hello");
cvar.notify_one();
});
i += 1;
}
}
The important part can be seen from the signature of Condvar::wait_while or Condvar::wait:
pub fn wait_while<'a, T, F>(
&self,
guard: MutexGuard<'a, T>,
condition: F
) -> LockResult<MutexGuard<'a, T>>
where
F: FnMut(&mut T) -> bool,
pub fn wait<'a, T>(
&self,
guard: MutexGuard<'a, T>
) -> LockResult<MutexGuard<'a, T>>
This says that wait_while / wait consumes the guard, which is why you get the error you did - you no longer own start, so you can't call any methods on it!
These functions are doing a great job of reflecting how Condvars work - you give up the lock on the Mutex (represented by start) for a while, and when the function returns you get the lock again.
The fix is to give up the lock and then grab the lock guard return value from wait_while / wait. I've also switched from an if to a while, as encouraged by huon.
For reference, the usual way to have a limited number of threads in a given scope is with a Semaphore.
Unfortunately, Semaphore was never stabilized, was deprecated in Rust 1.8 and was removed in Rust 1.9. There are crates available that add semaphores on top of other concurrency primitives.
let sema = Arc::new(Semaphore::new(20));
for i in 0..100 {
let sema = sema.clone();
thread::spawn(move || {
let _guard = sema.acquire();
println!("{}", i);
});
}
This isn't quite doing the same thing, since each thread does not print the total number of threads that were inside the scope when it entered.
I realized the code I provided didn't do exactly what I wanted it to, so I'm putting this edit of Shepmaster's code here for future reference.
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
fn main() {
let thread_count_arc = Arc::new((Mutex::new(0u8), Condvar::new()));
let mut i = 0;
while i < 150 {
let thread_count = thread_count_arc.clone();
thread::spawn(move || {
let x;
let &(ref num, ref cvar) = &*thread_count;
{
let start = num.lock().unwrap();
let mut start = if *start >= 20 {
cvar.wait(start).unwrap()
} else {
start
};
*start += 1;
x = *start;
}
println!("{}", x);
{
let mut counter = num.lock().unwrap();
*counter -= 1;
}
cvar.notify_one();
});
i += 1;
}
println!("done");
}
Running this in the playground should show more or less expected behavior.
You want to use a while loop, and re-assign start at each iteration, like:
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
fn main() {
let thread_count_arc = Arc::new((Mutex::new(0), Condvar::new()));
let mut i = 0;
while i < 100 {
let thread_count = thread_count_arc.clone();
thread::spawn(move || {
let &(ref num, ref cvar) = &*thread_count;
let mut start = num.lock().unwrap();
while *start >= 20 {
let current = cvar.wait(start).unwrap();
start = current;
}
*start += 1;
println!("hello");
cvar.notify_one();
});
i += 1;
}
}
See also some articles on the topic:
https://medium.com/@polyglot_factotum/rust-concurrency-five-easy-pieces-871f1c62906a
https://medium.com/@polyglot_factotum/rust-concurrency-patterns-condvars-and-locks-e278f18db74f
