In an application, I nest gtk event loops so that I can return a value from a callback.
However, I have an issue when a callback has already called gtk_main_quit(): multiple calls to this function do not seem to exit as many nestings of the event loop.
Here is an example of my issue:
extern crate gtk;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::Ordering;
use std::thread;
use gtk::{Button, ButtonExt, ContainerExt, Continue, Inhibit, WidgetExt, Window, WindowType};
fn main() {
gtk::init().unwrap();
let window = Window::new(WindowType::Toplevel);
let quit = Arc::new(AtomicBool::new(false));
window.connect_delete_event(|_, _| {
gtk::main_quit();
Inhibit(false)
});
let button = Button::new_with_label("Click");
let quit2 = quit.clone();
button.connect_clicked(move |_| {
let quit = quit2.clone();
thread::spawn(move || {
quit.store(true, Ordering::Relaxed);
});
println!("Run");
gtk::main();
});
window.add(&button);
window.show_all();
gtk::idle_add(move || {
if quit.load(Ordering::Relaxed) {
println!("Quit");
gtk::main_quit();
gtk::main_quit();
quit.store(false, Ordering::Relaxed);
}
Continue(true)
});
gtk::main();
}
As you can see in the gtk::idle_add call, I call gtk::main_quit() twice, which should exit the application when the button is pressed because gtk::main() was also called twice (once at the end of the main function, and once in the button's clicked callback).
But the application does not exit when I click the button.
The documentation of gtk seems to indicate that this is the expected behaviour:
Makes the innermost invocation of the main loop return when it regains control.
(emphasis is mine)
So, I believe that this does not exit the application because calling gtk::main_quit() twice won't allow the gtk main loop to "regain control".
My question is: what should I do between the two calls to gtk::main_quit() to exit both nested event loops?
Short answer: the solution is to replace the second gtk::main_quit with:
gtk::idle_add(|| {
gtk::main_quit();
Continue(false)
});
As before, the first gtk::main_quit() will arrange for the inner main loop to quit. Additionally, the idle handler will be picked up by the outer main loop, causing it to immediately terminate as well.
This can be generalized with a pseudo-recursive function that repeats the process as many times as necessary:
fn deep_main_quit(n: usize) {
if n == 0 {
return;
}
gtk::main_quit();
gtk::idle_add(move || {
deep_main_quit(n - 1);
Continue(false)
});
}
Note that the use of idle_add to continually check for a flag will result in busy-looping, which you almost certainly want to avoid. (On my machine, running your program takes up a full CPU core.) In general, the preferred approach is to wait on a condition variable. But if you just need to tell the GUI thread to do something, as shown in your code, you can just call glib::idle_add from a different thread. The provided callback will be queued and executed in the GUI thread.
With this change, one thread releasing two levels of gtk::main in the GUI thread would look like this:
fn main() {
gtk::init().unwrap();
let window = Window::new(WindowType::Toplevel);
window.connect_delete_event(|_, _| {
gtk::main_quit();
Inhibit(false)
});
let button = Button::new_with_label("Click");
button.connect_clicked(|_| {
thread::spawn(|| {
glib::idle_add(|| {
println!("Quit");
deep_main_quit(2);
Continue(false)
});
});
println!("Run");
gtk::main();
});
window.add(&button);
window.show_all();
gtk::main();
}
I've fixed my issue by manually creating a glib MainLoop and using it instead of nesting calls to the gtk event loop.
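In case it helps others, here is a rough sketch of that MainLoop approach; the run_nested_until_ready helper and the gtk::timeout_add stand-in below are illustrative, not the actual code from my application:

use gtk::Continue;

fn run_nested_until_ready() {
    // A nested main loop on the default context; GTK events keep being
    // processed while it runs.
    let nested = glib::MainLoop::new(None, false);

    let nested_clone = nested.clone();
    gtk::timeout_add(100, move || {
        // Pretend the value we were waiting for has become available.
        nested_clone.quit(); // ends only this nested loop
        Continue(false)
    });

    nested.run(); // blocks here until quit() is called
    // No gtk::main_quit() bookkeeping is needed to unwind the nesting.
}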
I referred to this and also tried the tungstenite library, but I was only able to run one server at a time; it captured the whole thread.
I tried running multiple servers on different threads, but then they never listened for anything and the program just exited.
Is there any way that I can run multiple WebSocket servers on different ports, and create and destroy a server at runtime?
Edit: If I run a server on the main thread and another one on a different thread, it works. It looks like I'd have to keep the main thread busy somehow... but is there any better way?
Here's some example code.
It uses:
use std::net::TcpListener;
use std::thread::spawn;
use tungstenite::accept;
This is the normal code, which blocks the main thread:
let server = TcpListener::bind("127.0.0.1:9002").expect("err: ");
for stream in server.incoming() {
spawn(move || {
let mut websocket = accept(stream.unwrap()).unwrap();
loop {
let msg = websocket.read_message().unwrap();
println!("{}", msg);
// We do not want to send back ping/pong messages.
if msg.is_binary() || msg.is_text() {
websocket.write_message(msg).unwrap();
}
}
});
}
Here's the same code moved onto a separate thread:
spawn(|| {
let server = TcpListener::bind("127.0.0.1:9001").expect("err: ");
for stream in server.incoming() {
spawn(move || {
let mut websocket = accept(stream.unwrap()).unwrap();
loop {
let msg = websocket.read_message().unwrap();
println!("{}", msg);
// We do not want to send back ping/pong messages.
if msg.is_binary() || msg.is_text() {
websocket.write_message(msg).unwrap();
}
}
});
}
});
But the above code needs the main thread to keep running somehow; I'm indeed able to run multiple servers on different threads, but I need something to occupy the main thread.
Rust programs terminate when the end of main() is reached. What you need to do is wait until your secondary threads have finished.
std::thread::spawn returns a JoinHandle, which has a join method which does exactly that - it waits (blocks) until the thread that the handle refers to finishes, and returns an error if the thread panicked.
So, to keep your program alive as long as any threads are running, you need to collect all of these handles, and join() them one by one. Unlike a busy-loop, this will not waste CPU resources unnecessarily.
use std::net::TcpListener;
use std::thread::spawn;
use tungstenite::accept;
fn main() {
let mut handles = vec![];
// Spawn 3 identical servers on ports 9001, 9002, 9003
for i in 1..=3 {
let handle = spawn(move || {
let server = TcpListener::bind(("127.0.0.1", 9000 + i)).expect("err: ");
for stream in server.incoming() {
spawn(move || {
let mut websocket = accept(stream.unwrap()).unwrap();
loop {
let msg = websocket.read_message().unwrap();
println!("{}", msg);
// We do not want to send back ping/pong messages.
if msg.is_binary() || msg.is_text() {
websocket.write_message(msg).unwrap();
}
}
});
}
});
handles.push(handle);
}
// Wait for each thread to finish before exiting
for handle in handles {
if let Err(e) = handle.join() {
eprintln!("{:?}", e)
}
}
}
When you do all the work in a thread (or threads) and the main thread has nothing left to do, it is usually set to wait for (join) that thread.
This has the additional advantage that if your secondary thread finishes or panics, then your program will also finish. Or you can wrap the whole create-thread/join-thread in a loop and make it more resilient:
fn main() {
loop {
let th = std::thread::spawn(|| {
// Do the real work here
std::thread::sleep(std::time::Duration::from_secs(1));
panic!("oh!");
});
if let Err(e) = th.join() {
eprintln!("Thread panic: {:?}", e)
}
}
}
Link to playground; I've changed the loop into a for _ in 0..3 because the playground does not like infinite loops.
In my application I have a blocking task that synchronously reads messages from a queue and feeds them to a running task.
All of this works fine, but the problem that I'm having is that the process does not terminate correctly, since the queue_reader task does not stop.
I've constructed a small example based on the tokio documentation at: https://docs.rs/tokio/1.20.1/tokio/task/fn.spawn_blocking.html
use tokio::sync::mpsc;
use tokio::task;
#[tokio::main]
async fn main() {
let (incoming_tx, mut incoming_rx) = mpsc::channel(2);
// Some blocking task that never ends
let queue_reader = task::spawn_blocking(move || {
loop {
// Stand in for receiving messages from queue
incoming_tx.blocking_send(5).unwrap();
}
});
let mut acc = 0;
// Some complex condition that determines whether the job is done
while acc < 95 {
tokio::select! {
Some(v) = incoming_rx.recv() => {
acc += v;
}
}
}
assert_eq!(acc, 95);
println!("Finalizing thread");
queue_reader.abort(); // This doesn't seem to terminate the queue_reader task
queue_reader.await.unwrap(); // <-- The process hangs on this task.
println!("Done");
}
At first I expected that queue_reader.abort() would terminate the task; however, it doesn't. My expectation is that tokio can only do this for tasks that use .await internally, because that hands control back to tokio. Is this right?
In order to terminate the queue_reader task I introduced a oneshot channel, over which I signal the termination, as shown in the next snippet.
use tokio::task;
use tokio::sync::{oneshot, mpsc};
#[tokio::main]
async fn main() {
let (incoming_tx, mut incoming_rx) = mpsc::channel(2);
// A new channel to communicate when the process must finish.
let (term_tx, mut term_rx) = oneshot::channel();
// Some blocking task that never ends
let queue_reader = task::spawn_blocking(move || {
// As long as termination is not signalled
while term_rx.try_recv().is_err() {
// Stand in for receiving messages from queue
incoming_tx.blocking_send(5).unwrap();
}
});
let mut acc = 0;
// Some complex condition that determines whether the job is done
while acc < 95 {
tokio::select! {
Some(v) = incoming_rx.recv() => {
acc += v;
}
}
}
assert_eq!(acc, 95);
// Signal termination
term_tx.send(()).unwrap();
println!("Finalizing thread");
queue_reader.await.unwrap();
println!("Done");
}
My question is, is this the canonical/best way to do this, or are there better alternatives?
Tokio cannot terminate CPU-bound/blocking tasks.
It is technically possible to kill OS threads, but generally it is not a good idea, as it's expensive to create new threads and it can leave your program in an invalid state. Even if Tokio decided this was something worth implementing, it would severely limit its implementation - it would be forced into a multithreaded model, just to support the possibility that you'd want to kill a blocking task before it's finished.
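To make the difference concrete, here is a small contrast sketch (not taken from the question's code): abort() cancels an ordinary async task at its next .await point, while a spawn_blocking closure, once started, runs to completion regardless:

use std::time::Duration;
use tokio::{task, time};

#[tokio::main]
async fn main() {
    // An async task: abort() takes effect at the task's next .await point.
    let async_task = task::spawn(async {
        loop {
            time::sleep(Duration::from_millis(100)).await;
        }
    });
    async_task.abort();
    assert!(async_task.await.unwrap_err().is_cancelled());

    // A blocking task: abort() cannot interrupt the closure once it has
    // started; awaiting it still waits for the closure to finish (or it
    // reports cancellation if the closure never got to run).
    let blocking_task = task::spawn_blocking(|| {
        std::thread::sleep(Duration::from_secs(1));
    });
    blocking_task.abort();
    let _ = blocking_task.await;
    println!("done");
}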
Your solution is pretty good; give your blocking task the responsibility for terminating itself and provide a way to tell it to do so. If this future was part of a library, you could abstract the mechanism away by returning a "handle" to the task that had a cancel() method.
Are there better alternatives? Maybe, but that would depend on other factors. Your solution is good and easily extended, for example if you later needed to send different types of signal to the task.
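As a sketch of that "handle" idea (QueueReaderHandle and spawn_queue_reader are made-up names here, not an existing API), it could look roughly like this:

use tokio::sync::{mpsc, oneshot};
use tokio::task::{self, JoinHandle};

// Hypothetical handle a library could return: it lets the caller request
// cancellation and wait for the blocking task to wind down.
struct QueueReaderHandle {
    cancel_tx: oneshot::Sender<()>,
    join: JoinHandle<()>,
}

impl QueueReaderHandle {
    async fn cancel(self) {
        let _ = self.cancel_tx.send(()); // ignore error if already stopped
        let _ = self.join.await;
    }
}

fn spawn_queue_reader(incoming_tx: mpsc::Sender<i32>) -> QueueReaderHandle {
    let (cancel_tx, mut cancel_rx) = oneshot::channel();
    let join = task::spawn_blocking(move || {
        // Stop when cancellation is signalled or the receiver is gone.
        while cancel_rx.try_recv().is_err() {
            if incoming_tx.blocking_send(5).is_err() {
                break;
            }
        }
    });
    QueueReaderHandle { cancel_tx, join }
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel(2);
    let handle = spawn_queue_reader(tx);

    let mut acc = 0;
    while acc < 95 {
        acc += rx.recv().await.unwrap();
    }

    drop(rx);              // stop accepting new messages
    handle.cancel().await; // signal the task and wait for it to finish
    println!("done, acc = {acc}");
}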
I set the timeout to 1 s, but the task runs for 3 s, and no panic occurs.
Code:
#[should_panic]
fn test_timeout() {
let rt = create_runtime();
let timeout_duration = StdDuration::from_secs(1);
let sleep_duration = StdDuration::from_secs(3);
let _guard = rt.enter();
let timeout = time::timeout(timeout_duration, async {
log("timeout running");
thread::sleep(sleep_duration);
log("timeout finsihed");
"Ding!".to_string()
});
rt.block_on(timeout).unwrap();
}
Using thread::sleep in asynchronous code is almost always wrong.
Conceptually, the timeout works like this:
tokio spawns a timer which would wake up after the specified duration.
tokio polls your future. If it returns Poll::Ready, the timer is thrown away and the future succeeds. If it returns Poll::Pending, tokio waits for the next event, i.e. for a wakeup of either your future or the timer.
If the future wakes up, tokio polls it again. If it returns Poll::Ready - again, the timer is thrown away and the future succeeds.
If the timer wakes up, tokio polls the future one last time; if it's still Poll::Pending, it times out and is not polled anymore, and timeout returns an error.
In your case, however, the future does not return Poll::Pending - it blocks inside thread::sleep. So, even though the timer could fire after one second has passed, tokio has no way to react: it waits for the future to return, the future returns only after the thread is unblocked, and, since there's no await inside the block, it returns Poll::Ready - so the timer isn't even checked.
To fix this, you're expected to use tokio::time::sleep for any pauses inside async code. With it, the future times out properly. To illustrate this, let's look at a self-contained example equivalent to your original code:
use core::time::Duration;
use tokio::time::timeout;
#[tokio::main]
async fn main() {
let timeout_duration = Duration::from_secs(1);
let sleep_duration = Duration::from_secs(3);
timeout(timeout_duration, async {
println!("timeout running");
std::thread::sleep(sleep_duration);
println!("timeout finsihed");
"Ding!".to_string()
})
.await
.unwrap_err();
}
Playground
As you've already noticed, this fails - unwrap_err panics when called on Ok, and timeout returns Ok since the future didn't time out properly.
But when replacing std::thread::sleep(...) with tokio::time::sleep(...).await...
use core::time::Duration;
use tokio::time::timeout;
#[tokio::main]
async fn main() {
let timeout_duration = Duration::from_secs(1);
let sleep_duration = Duration::from_secs(3);
timeout(timeout_duration, async {
println!("timeout running");
tokio::time::sleep(sleep_duration).await;
println!("timeout finsihed");
"Ding!".to_string()
})
.await
.unwrap_err();
}
...we get the expected behavior - playground.
I'm trying to create a daemon in Rust which runs a process on a schedule forever:
use scheduled_thread_pool::ScheduledThreadPool;
use std::time::Duration;
let pool = ScheduledThreadPool::new(1);
pool.execute_at_fixed_rate(
Duration::new(5, 0),
Duration::new(5, 0),
move || do_business_logic(),
);
This stops the threads as soon as processing reaches the end of main(). How can I keep it running forever?
You should not use the loop { } suggested in the comments.
This has drawbacks:
it will burn CPU needlessly
your program will run forever, even if your do_business_logic() panics.
Let's look at the docs of scheduled_thread_pool::execute_at_fixed_rate; under Panics it says:
If the closure panics, it will not be run again.
That doesn't sound very helpful if you want your program to notice whether do_business_logic panics.
So you could create a std::sync::mpsc channel and just wait to receive a value on it.
You move the sender into the closure you hand over to the thread pool.
If your closure panics, the sender will be dropped and the receiver will stop waiting, so you know something happened.
Some working code:
use scheduled_thread_pool::ScheduledThreadPool;
use std::time::Duration;
use std::sync::mpsc::*;
fn main() {
let pool = ScheduledThreadPool::new(1);
let (tx, rx): (Sender<u8>, Receiver<u8>) = channel();
let killswitch = std::sync::Arc::new(std::sync::Mutex::new(false));
let killswitch2 = killswitch.clone();
pool.execute_at_fixed_rate(
Duration::new(5, 0),
Duration::new(5, 0),
move || {
if *(killswitch2.lock().unwrap()) {
panic!("gotto go!");
}
println!("haha");
tx.send(1).unwrap();
}
);
for (count, _) in rx.iter().enumerate() {
println!("got one");
if count > 0 {
println!("that's boring, killing it");
*(killswitch.lock().unwrap()) = true;
}
}
println!("Have a nice day");
}
Clippy yelled at me for using a Mutex around a bool and suggested using an AtomicBool instead, but I think that is a different rabbit hole.
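For what it's worth, the flag handling with an AtomicBool would look roughly like this (a sketch of just the flag part, not a drop-in replacement for the whole program above):

use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};

fn main() {
    let killswitch = Arc::new(AtomicBool::new(false));
    let killswitch2 = killswitch.clone();

    // The scheduled closure would check the flag like this:
    let job = move || {
        if killswitch2.load(Ordering::Relaxed) {
            panic!("gotta go!");
        }
        println!("haha");
    };

    job(); // first run: the flag is not set yet

    // The receiving loop would flip the flag when it gets bored:
    killswitch.store(true, Ordering::Relaxed);
    // The next scheduled run would then panic, dropping the sender.
}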
Is it possible to force resume a sleeping thread which has been paused? For example, by calling sleep:
std::thread::sleep(std::time::Duration::from_secs(60 * 20));
I know that I can communicate between threads using std::sync::mpsc but if the thread is asleep, this does not force it to wake up before the time indicated.
I have thought of using std::sync::mpsc, and maybe Builder and the .name associated with the thread, but I do not know how to get the thread to wake up.
If you want to be woken up by an event, thread::sleep() is not the correct function to use, as it is not meant to be interrupted.
There are other methods of waiting while being able to be woken up by an event (this is usually called blocking). Probably the easiest way is to use a channel together with Receiver::recv_timeout(). Often it's also sufficient to send () through the channel. That way we just communicate a signal, but don't send actual data.
If you don't want to wake up after a specific timeout, but only when a signal arrives, just use Receiver::recv().
Example with timeout:
use std::thread;
use std::sync::mpsc::{self, RecvTimeoutError};
use std::time::Duration;
use std::io;
fn main() {
let (sender, receiver) = mpsc::channel();
thread::spawn(move || {
loop {
match receiver.recv_timeout(Duration::from_secs(2)) {
Err(RecvTimeoutError::Timeout) => {
println!("Still waiting... I'm bored!");
// we'll try later...
}
Err(RecvTimeoutError::Disconnected) => {
// no point in waiting anymore :'(
break;
}
Ok(_) => {
println!("Finally got a signal! ♥♥♥");
// doing work now...
}
}
}
});
loop {
let mut s = String::new();
io::stdin().read_line(&mut s).expect("reading from stdin failed");
if s.trim() == "start" {
sender.send(()).unwrap();
}
}
}
Here, the second thread is woken up at least every two seconds (the timeout), but also earlier once something was sent through the channel.
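For completeness, the recv()-only variant mentioned above (wake up only when a signal arrives, with no timeout) could look like this sketch:

use std::sync::mpsc;
use std::thread;

fn main() {
    let (sender, receiver) = mpsc::channel();

    let worker = thread::spawn(move || {
        // Blocks until a signal arrives or all senders are dropped.
        while receiver.recv().is_ok() {
            println!("Got a signal, doing work...");
        }
        println!("Channel closed, shutting down.");
    });

    sender.send(()).unwrap(); // wake the worker immediately
    drop(sender);             // closing the channel ends the worker loop
    worker.join().unwrap();
}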
park_timeout allows timed sleeps with wakeups from unpark, but it can also wake up spuriously before the timeout.
See the std::thread module documentation.
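A minimal sketch of the park_timeout/unpark approach, using the 20-minute duration from the question:

use std::thread;
use std::time::Duration;

fn main() {
    let sleeper = thread::spawn(|| {
        // Sleeps for up to 20 minutes, but unpark() ends the wait early.
        // park_timeout may also return spuriously, so real code should
        // re-check its condition and park again if needed.
        thread::park_timeout(Duration::from_secs(60 * 20));
        println!("woke up");
    });

    // Wake the sleeping thread right away from the main thread.
    sleeper.thread().unpark();
    sleeper.join().unwrap();
}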