Kill warp webserver on request in rust - multithreading

I'm learning Rust, and one thing I want to do is kill, or shut down, a webserver on GET /.
Is this something you can't do with warp, or is my implementation broken?
I've got the following code, but it just doesn't seem to want to respond to any HTTP requests.
pub async fn perform_oauth_flow(&self) {
    let (tx, rx) = channel::unbounded();
    let routes = warp::path::end().map(move || {
        println!("handling");
        tx.send("kill");
        Ok(warp::reply::with_status("OK", http::StatusCode::CREATED))
    });

    println!("Spawning server");
    let webserver_thread = thread::spawn(|| async {
        spawn(warp::serve(routes).bind(([127, 0, 0, 1], 3000)))
            .await
            .unwrap();
    });

    println!("waiting for result");
    let result = rx.recv().unwrap();
    println!("Got result");
    if result == "kill" {
        webserver_thread.join().unwrap().await;
    }
}

let webserver_thread = thread::spawn(|| async {
//                                      ^^^^^
Creating an async block does not execute the code inside; it just creates a Future that you need to .await. Your server never actually runs.
In general, mixing threads with async code is not going to work well. It is better to use your runtime's tasks; in the case of warp the runtime is tokio, so use tokio::spawn():
let webserver_thread = tokio::spawn(async move {
    spawn(warp::serve(routes).bind(([127, 0, 0, 1], 3000)))
        .await
        .unwrap();
});
// ...
if result == "kill" {
    webserver_thread.await;
}
You may also find it necessary to use tokio's async channels instead of synchronous channels.
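For instance, a minimal sketch with tokio::sync::mpsc, where receiving is an .await point instead of a call that blocks the runtime thread:
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // tokio's channels integrate with the runtime: while recv() is
    // pending, this thread is free to drive other tasks.
    let (tx, mut rx) = mpsc::unbounded_channel();

    tokio::spawn(async move {
        tx.send("kill").unwrap();
    });

    // Unlike a blocking std/crossbeam recv(), this suspends only the task.
    if let Some(msg) = rx.recv().await {
        println!("got: {}", msg);
    }
}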

There are two issues in your code:
As pointed out by @ChayimFriedman's answer, you never start the server because your async block never runs.
Even if you replace the threads with Tokio tasks, you never tell the server to exit. You need to use bind_with_graceful_shutdown so that you can notify the server to exit.
(Untested) complete example:
pub async fn perform_oauth_flow(&self) {
    let (tx, rx) = tokio::sync::oneshot::channel();
    // oneshot::Sender::send() consumes the sender, and warp may clone the
    // handler and call it more than once, so park the sender in an
    // Arc<Mutex<Option<_>>> and take() it on first use.
    let tx = std::sync::Arc::new(std::sync::Mutex::new(Some(tx)));
    let routes = warp::path::end().map(move || {
        println!("handling");
        if let Some(tx) = tx.lock().unwrap().take() {
            let _ = tx.send(());
        }
        warp::reply::with_status("OK", http::StatusCode::CREATED)
    });

    println!("Spawning server");
    // bind_with_graceful_shutdown() returns the bound address and the
    // server future; the server exits once the shutdown future resolves.
    let (_addr, server) = warp::serve(routes).bind_with_graceful_shutdown(
        ([127, 0, 0, 1], 3000),
        async {
            rx.await.ok();
        },
    );

    println!("waiting for result");
    server.await;
}


Receiver on tokio's mpsc channel only receives messages when buffer is full

I've spent a few hours trying to figure this out and I'm pretty much done. I found a question with a similar name, but in that case something blocking synchronously was interfering with tokio. That may well be the issue here too, but I have no idea what is causing it.
Here is a heavily stripped down version of my project which hopefully gets the issue across.
use std::io;

use futures_util::{
    SinkExt,
    stream::{SplitSink, SplitStream},
    StreamExt,
};
use tokio::{
    net::TcpStream,
    sync::mpsc::{channel, Receiver, Sender},
};
use tokio_tungstenite::{
    connect_async,
    MaybeTlsStream,
    tungstenite::Message,
    WebSocketStream,
};

#[tokio::main]
async fn main() {
    connect_to_server("wss://a_valid_domain.com".to_string()).await;
}

async fn read_line() -> String {
    loop {
        let mut str = String::new();
        io::stdin().read_line(&mut str).unwrap();
        str = str.trim().to_string();
        if !str.is_empty() {
            return str;
        }
    }
}

async fn connect_to_server(url: String) {
    let (ws_stream, _) = connect_async(url).await.unwrap();
    let (write, read) = ws_stream.split();
    let (tx, rx) = channel::<ChannelMessage>(100);

    tokio::spawn(channel_thread(write, rx));
    tokio::spawn(handle_std_input(tx.clone()));

    read_messages(read, tx).await;
}

#[derive(Debug)]
enum ChannelMessage {
    Text(String),
    Close,
}

// PROBLEMATIC FUNCTION
async fn channel_thread(
    mut write: SplitSink<WebSocketStream<MaybeTlsStream<TcpStream>>, Message>,
    mut rx: Receiver<ChannelMessage>,
) {
    while let Some(msg) = rx.recv().await {
        println!("{:?}", msg); // This only fires when the buffer is full
        match msg {
            ChannelMessage::Text(text) => write.send(Message::Text(text)).await.unwrap(),
            ChannelMessage::Close => {
                write.close().await.unwrap();
                rx.close();
                return;
            }
        }
    }
}

async fn read_messages(
    mut read: SplitStream<WebSocketStream<MaybeTlsStream<TcpStream>>>,
    tx: Sender<ChannelMessage>,
) {
    while let Some(msg) = read.next().await {
        let msg = match msg {
            Ok(m) => m,
            Err(_) => continue,
        };
        match msg {
            Message::Text(m) => println!("{}", m),
            Message::Close(_) => break,
            _ => {}
        }
    }
    if !tx.is_closed() {
        let _ = tx.send(ChannelMessage::Close).await;
    }
}

async fn handle_std_input(tx: Sender<ChannelMessage>) {
    loop {
        let str = read_line().await;
        if tx.is_closed() {
            break;
        }
        tx.send(ChannelMessage::Text(str)).await.unwrap();
    }
}
As you can see, what I'm trying to do is:
Connect to a websocket
Print incoming messages from the websocket
Forward any input from stdin to the websocket
Run a custom heartbeat solution, which was trimmed out here
The problem lies in the channel_thread() function, into which I move the websocket writer as well as the channel receiver. It only loops over the sent messages once the buffer is full.
I've spent a lot of time trying to solve this, any help is greatly appreciated.
Here, you make a blocking synchronous call in an async context:
async fn read_line() -> String {
    loop {
        let mut str = String::new();
        io::stdin().read_line(&mut str).unwrap();
        //          ^^^^^^^^^^^^^^^^^^^
        //          This is sync+blocking
        str = str.trim().to_string();
        if !str.is_empty() {
            return str;
        }
    }
}
You never ever make blocking synchronous calls in an async context, because that prevents the entire thread from running other async tasks. Your channel receiver task is likely also assigned to this thread, so it has to wait until all the blocking calls are done and whatever invokes this function yields back to the async runtime.
Tokio has its own async version of stdin, which you should use instead.
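For instance, a sketch of read_line() rewritten on top of tokio's stdin (this assumes tokio's io-std feature, enabled by full; note it builds a fresh BufReader per call, which is fine for a sketch but would drop any buffered bytes between calls):
use tokio::io::{AsyncBufReadExt, BufReader};

// An async replacement for the blocking read_line(): awaiting here
// yields to the runtime instead of stalling the whole thread on stdin.
async fn read_line() -> String {
    let mut reader = BufReader::new(tokio::io::stdin());
    loop {
        let mut str = String::new();
        reader.read_line(&mut str).await.unwrap();
        let str = str.trim().to_string();
        if !str.is_empty() {
            return str;
        }
    }
}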

Tokio channel sends, but doesn't receive

TL;DR: I'm trying to have a background thread that's ID'd and controlled via that ID through web calls, and the background thread doesn't seem to be getting the message through any of the types of channels I've tried.
I've tried both the std channels and tokio's, and of tokio's I've tried all but the watch type. All have the same result, which probably means that I've messed something up somewhere without realizing it, but I can't find the issue:
use std::collections::{
    hash_map::Entry::{Occupied, Vacant},
    HashMap,
};
use std::sync::Arc;

use tokio::sync::mpsc::{self, UnboundedSender};
use tokio::sync::RwLock;
use tokio::task::JoinHandle;
use uuid::Uuid;
use warp::{http, Filter};

#[derive(Default)]
pub struct Switcher {
    pub handle: Option<JoinHandle<bool>>,
    pub pipeline_end_tx: Option<UnboundedSender<String>>,
}

impl Switcher {
    pub fn set_sender(&mut self, tx: UnboundedSender<String>) {
        self.pipeline_end_tx = Some(tx);
    }

    pub fn set_handle(&mut self, handle: JoinHandle<bool>) {
        self.handle = Some(handle);
    }
}

const ADDR: [u8; 4] = [0, 0, 0, 0];
const PORT: u16 = 3000;

type RunningPipelines = Arc<RwLock<HashMap<String, Arc<RwLock<Switcher>>>>>;

#[tokio::main]
async fn main() {
    let running_pipelines = Arc::new(RwLock::new(HashMap::<String, Arc<RwLock<Switcher>>>::new()));

    let session_create = warp::post()
        .and(with_pipelines(running_pipelines.clone()))
        .and(warp::path("session"))
        .then(|pipelines: RunningPipelines| async move {
            println!("session requested OK!");
            let id = Uuid::new_v4();
            let mut switcher = Switcher::default();
            let (tx, mut rx) = mpsc::unbounded_channel::<String>();
            switcher.set_sender(tx);
            let t = tokio::spawn(async move {
                println!("Background going...");
                // This would be something processing in the background until it received the end signal
                match rx.recv().await {
                    Some(v) => {
                        println!(
                            "Got end message:{} YESSSSSS#!##!!!!!!!!!!!!!!!!1111eleven",
                            v
                        );
                    }
                    None => println!("Error receiving end signal:"),
                }
                println!("ABORTING HANDLE");
                true
            });
            let ret = HashMap::from([("session_id", id.to_string())]);
            switcher.set_handle(t);
            {
                pipelines
                    .write()
                    .await
                    .insert(id.to_string(), Arc::new(RwLock::new(switcher)));
            }
            Ok(warp::reply::json(&ret))
        });

    let session_end = warp::delete()
        .and(with_pipelines(running_pipelines.clone()))
        .and(warp::path("session"))
        .and(warp::query::<HashMap<String, String>>())
        .then(
            |pipelines: RunningPipelines, p: HashMap<String, String>| async move {
                println!("session end requested OK!: {:?}", p);
                match p.get("session_id") {
                    None => Ok(warp::reply::with_status(
                        "Please specify session to end",
                        http::StatusCode::BAD_REQUEST,
                    )),
                    Some(id) => {
                        let mut pipe = pipelines.write().await;
                        match pipe.entry(String::from(id)) {
                            Occupied(handle) => {
                                println!("occupied");
                                let (k, v) = handle.remove_entry();
                                drop(pipe);
                                println!("removed from hashmap, key:{}", k);
                                let s = v.write().await;
                                if let Some(h) = &s.handle {
                                    if let Some(tx) = &s.pipeline_end_tx {
                                        match tx.send("goodbye".to_string()) {
                                            Ok(res) => {
                                                println!(
                                                    "sent end message|{:?}| to fpipeline: {}",
                                                    res, id
                                                );
                                                // Added this to try to get it to at least Error on the other side
                                                drop(tx);
                                            }
                                            Err(err) => println!(
                                                "ERROR sending end message to pipeline({}):{}",
                                                id, err
                                            ),
                                        };
                                    } else {
                                        println!("no sender channel found for pipeline: {}", id);
                                    };
                                    h.abort();
                                } else {
                                    println!(
                                        "no luck finding the value in handle in the switcher: {}",
                                        id
                                    );
                                };
                            }
                            Vacant(_) => {
                                println!("no luck finding the handle in the pipelines: {}", id)
                            }
                        };
                        Ok(warp::reply::with_status("done", http::StatusCode::OK))
                    }
                }
            },
        );

    let routes = session_create
        .or(session_end)
        .recover(handle_rejection)
        .with(warp::cors().allow_any_origin());

    println!("starting server...");
    warp::serve(routes).run((ADDR, PORT)).await;
}

async fn handle_rejection(
    err: warp::Rejection,
) -> Result<impl warp::Reply, std::convert::Infallible> {
    Ok(warp::reply::json(&format!("{:?}", err)))
}

fn with_pipelines(
    pipelines: RunningPipelines,
) -> impl Filter<Extract = (RunningPipelines,), Error = std::convert::Infallible> + Clone {
    warp::any().map(move || pipelines.clone())
}
Dependencies:
[dependencies]
warp = "0.3"
tokio = { version = "1", features = ["full"] }
uuid = { version = "0.8.2", features = ["serde", "v4"] }
Results when I boot up, send a "create" request, and then an "end" request with the received ID:
starting server...
session requested OK!
Background going...
session end requested OK!: {"session_id": "6b984a45-38d8-41dc-bf95-422f75c5a429"}
occupied
removed from hashmap, key:6b984a45-38d8-41dc-bf95-422f75c5a429
sent end message|()| to fpipeline: 6b984a45-38d8-41dc-bf95-422f75c5a429
You'll notice that the background thread starts (and doesn't end) when the "create" request is made, but when the "end" request is made, everything appears to complete successfully from the request (web) side, yet the background thread never receives the message. As I've said, I've tried all the different channel types and moved things around to get it into this configuration, i.e. flattened and made thread-safe as much as I could, or at least as much as I could think of. I'm greener than I would like in Rust, so any help would be VERY appreciated!
I think that the issue here is that you are sending the message and then immediately aborting the background task:
tx.send("goodbye".to_string());
//...
h.abort();
And the background task does not have time to process the message, because the abort takes effect first.
What you need is to join the task, not to abort it.
Curiously, tokio task handles do not have a join() method; instead you await the handle itself. But for that you need to own the handle, so first you have to extract it from the Switcher:
let mut s = v.write().await;
// steal the task handle
if let Some(h) = s.handle.take() {
    //...
    tx.send("goodbye".to_string());
    //...
    // join the task
    h.await.unwrap();
}
Note that joining a task may fail if the task was aborted or panicked. I'm just unwrapping, and thus panicking, in the code above, but you may want to do something different.
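If panicking is not acceptable there, a minimal sketch of inspecting the join result instead (a JoinError can tell an aborted task apart from a panicked one):
#[tokio::main]
async fn main() {
    let h = tokio::spawn(async { true });

    // JoinHandle<T> resolves to Result<T, JoinError>.
    match h.await {
        Ok(finished) => println!("task finished with {:?}", finished),
        Err(e) if e.is_cancelled() => println!("task was aborted"),
        Err(e) => println!("task panicked: {}", e),
    }
}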
Or... you could choose not to wait for the task at all. In tokio, if you drop a task handle, the task is detached: it will finish whenever it finishes.

[tokio-rs][documentation] Multiple asynchronous "sub-apps" with shared state example?

A common pattern for Node.js apps is to split them into many "sub-apps" that share some state. Of course, all the "sub-apps" should be handled asynchronously.
Here's a simple example of such a Node app, with three "sub-apps":
An interval timer => Every 10 seconds, a shared itv_counter is incremented
A TCP server => For every TCP message received, a shared tcp_counter is incremented
A UDP server => For every UDP message received, a shared udp_counter is incremented
Every time one of the counters is incremented, all three counters must be printed (hence the need for the "sub-apps" to share state).
Here's an implementation in Node. The nice thing about Node is that you can assume that pretty much all I/O operations are handled asynchronously by default. There's no cognitive overhead for the developer.
const dgram = require('dgram');
const net = require('net');

const tcp_port = 3000;
const udp_port = 3001;

const tcp_listener = net.createServer();
const udp_listener = dgram.createSocket('udp4');

// state shared by the 3 asynchronous applications
const shared_state = {
  itv_counter: 0,
  tcp_counter: 0,
  udp_counter: 0,
};

// itv async app: increment itv_counter every 10 seconds and print shared state
setInterval(() => {
  shared_state.itv_counter += 1;
  console.log(`itv async app: ${JSON.stringify(shared_state)}`);
}, 10_000);

// tcp async app: increment tcp_counter every time a TCP message is received and print shared state
tcp_listener.on('connection', (client) => {
  client.on('data', (_data) => {
    shared_state.tcp_counter += 1;
    console.log(`tcp async app: ${JSON.stringify(shared_state)}`);
  });
});
tcp_listener.listen(tcp_port, () => {
  console.log(`TCP listener on port ${tcp_port}`);
});

// udp async app: increment udp_counter every time a UDP message is received and print shared state
udp_listener.on('message', (_message, _client) => {
  shared_state.udp_counter += 1;
  console.log(`udp async app: ${JSON.stringify(shared_state)}`);
});
udp_listener.on('listening', () => {
  console.log(`UDP listener on port ${udp_port}`);
});
udp_listener.bind(udp_port);
Now, here's an implementation in Rust with Tokio as the asynchronous runtime.
use std::sync::{Arc, Mutex};
use std::time::Duration;

use tokio::io::AsyncReadExt;
use tokio::net::{TcpListener, UdpSocket};

// state shared by the 3 asynchronous applications
#[derive(Clone, Debug)]
struct SharedState {
    state: Arc<Mutex<State>>,
}

#[derive(Debug)]
struct State {
    itv_counter: usize,
    tcp_counter: usize,
    udp_counter: usize,
}

impl SharedState {
    fn new() -> SharedState {
        SharedState {
            state: Arc::new(Mutex::new(State {
                itv_counter: 0,
                tcp_counter: 0,
                udp_counter: 0,
            })),
        }
    }
}

#[tokio::main]
async fn main() {
    let shared_state = SharedState::new();

    // itv async app: increment itv_counter every 10 seconds and print shared state
    let itv_shared_state = shared_state.clone();
    let itv_handle = tokio::spawn(async move {
        let mut interval = tokio::time::interval(Duration::from_secs(10));
        interval.tick().await;
        loop {
            interval.tick().await;
            let mut state = itv_shared_state.state.lock().unwrap();
            state.itv_counter += 1;
            println!("itv async app: {:?}", state);
        }
    });

    // tcp async app: increment tcp_counter every time a TCP message is received and print shared state
    let tcp_shared_state = shared_state.clone();
    let tcp_handle = tokio::spawn(async move {
        let tcp_listener = TcpListener::bind("127.0.0.1:3000").await.unwrap();
        println!("TCP listener on port 3000");
        while let Ok((mut tcp_stream, _)) = tcp_listener.accept().await {
            let tcp_shared_state = tcp_shared_state.clone();
            tokio::spawn(async move {
                let mut buffer = [0; 1024];
                while let Ok(byte_count) = tcp_stream.read(&mut buffer).await {
                    if byte_count == 0 {
                        break;
                    }
                    let mut state = tcp_shared_state.state.lock().unwrap();
                    state.tcp_counter += 1;
                    println!("tcp async app: {:?}", state);
                }
            });
        }
    });

    // udp async app: increment udp_counter every time a UDP message is received and print shared state
    let udp_shared_state = shared_state.clone();
    let udp_handle = tokio::spawn(async move {
        let udp_listener = UdpSocket::bind("127.0.0.1:3001").await.unwrap();
        println!("UDP listener on port 3001");
        let mut buffer = [0; 1024];
        while let Ok(_byte_count) = udp_listener.recv(&mut buffer).await {
            let mut state = udp_shared_state.state.lock().unwrap();
            state.udp_counter += 1;
            println!("udp async app: {:?}", state);
        }
    });

    itv_handle.await.unwrap();
    tcp_handle.await.unwrap();
    udp_handle.await.unwrap();
}
First of all, as I'm not super comfortable with Tokio and async Rust yet, there might be things that are dead wrong in this implementation, or bad practice. Please let me know if that's the case (e.g. I have no clue if the three JoinHandle .await are necessary at the very end). That said, it behaves the same as the Node implementation for my simple tests.
But I'm still not sure if it's equivalent under the hood in terms of asynchronicity. Should there be a tokio::spawn for every callback in the Node app? In that case, I should wrap tcp_stream.read() and udp_listener.recv() in another tokio::spawn to mimic the Node callbacks for TCP's on('data') and UDP's on('message'), respectively. Not sure...
What would be the tokio implementation that would be totally equivalent to the Node.js app in terms of asynchronicity? In general, what's a good rule of thumb to know when something should be wrapped in a tokio::spawn?
I see you have three different counters, one per task, so I think there is a meaningful way to treat your state struct as a token and pass it around between the tasks.
That way, every task is responsible for updating only its own counter.
As a suggestion, use tokio::sync::mpsc::channel and set up three channels, each one directed from one task to the next, as sketched below.
Of course, if the tasks update at different rates, there is a risk that some values are updated a little late, but in general cases that can be ignored.
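A minimal sketch of one way to read that suggestion, with the state struct circulating as a token through a ring of three channels (the real event sources are stubbed out with a sleep; all names here are illustrative):
use std::time::Duration;
use tokio::sync::mpsc;

#[derive(Debug, Default)]
struct State {
    itv_counter: usize,
    tcp_counter: usize,
    udp_counter: usize,
}

#[tokio::main]
async fn main() {
    // Three channels forming a ring: itv -> tcp -> udp -> back to itv.
    let (to_tcp, mut in_tcp) = mpsc::channel::<State>(1);
    let (to_udp, mut in_udp) = mpsc::channel::<State>(1);
    let (to_itv, mut in_itv) = mpsc::channel::<State>(1);

    // Seed the ring with the single state token.
    to_itv.send(State::default()).await.unwrap();

    // Each task owns one counter; in the real app it would also wait for
    // its own event (timer tick, TCP read, UDP recv) before updating.
    tokio::spawn(async move {
        while let Some(mut state) = in_itv.recv().await {
            tokio::time::sleep(Duration::from_millis(100)).await; // stand-in event
            state.itv_counter += 1;
            println!("itv async app: {:?}", state);
            if to_tcp.send(state).await.is_err() {
                break;
            }
        }
    });
    tokio::spawn(async move {
        while let Some(mut state) = in_tcp.recv().await {
            state.tcp_counter += 1;
            println!("tcp async app: {:?}", state);
            if to_udp.send(state).await.is_err() {
                break;
            }
        }
    });
    tokio::spawn(async move {
        while let Some(mut state) = in_udp.recv().await {
            state.udp_counter += 1;
            println!("udp async app: {:?}", state);
            if to_itv.send(state).await.is_err() {
                break;
            }
        }
    });

    // Let the token make a few laps, then exit.
    tokio::time::sleep(Duration::from_secs(1)).await;
}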

Rusoto async using FuturesOrdered combinator

I am trying to send off parallel asynchronous Rusoto SQS requests using FuturesOrdered:
use futures::prelude::*; // 0.1.26
use futures::stream::futures_unordered::FuturesUnordered;
use rusoto_core::{Region, HttpClient}; // 0.38.0
use rusoto_credential::EnvironmentProvider; // 0.17.0
use rusoto_sqs::{SendMessageBatchRequest, SendMessageBatchRequestEntry, Sqs, SqsClient}; // 0.38.0

fn main() {
    let client = SqsClient::new_with(
        HttpClient::new().unwrap(),
        EnvironmentProvider::default(),
        Region::UsWest2,
    );
    let messages: Vec<u32> = (1..12).map(|n| n).collect();
    let chunks: Vec<_> = messages.chunks(10).collect();
    let tasks: FuturesUnordered<_> = chunks
        .into_iter()
        .map(|c| {
            let batch = create_batch(c);
            client.send_message_batch(batch)
        })
        .collect();

    let tasks = tasks
        .for_each(|t| {
            println!("{:?}", t);
            Ok(())
        })
        .map_err(|e| println!("{}", e));

    tokio::run(tasks);
}

fn create_batch(ids: &[u32]) -> SendMessageBatchRequest {
    let queue_url = "https://sqs.us-west-2.amazonaws.com/xxx/xxx".to_string();
    let entries = ids
        .iter()
        .map(|id| SendMessageBatchRequestEntry {
            id: id.to_string(),
            message_body: id.to_string(),
            ..Default::default()
        })
        .collect();

    SendMessageBatchRequest {
        entries,
        queue_url,
    }
}
The tasks complete correctly, but tokio::run(tasks) doesn't stop. I assume that is because tasks.for_each() forces it to continue to run and look for more futures?
Why doesn't tokio::run(tasks) stop? Am I using FuturesOrdered correctly?
I am also a little worried about memory usage when creating up to 60,000 futures to complete and pushing them into the FuturesUnordered combinator.
I discovered that it was the SqsClient in the main function that was causing it to block, as the client still does some housekeeping even after the tasks are finished.
A solution provided by one of the Rusoto people was to add this just above tokio::run:
std::mem::drop(client);

How to terminate or suspend a Rust thread from another thread?

Editor's note — this example was created before Rust 1.0 and the specific types have changed or been removed since then. The general question and concept remains valid.
I have spawned a thread with an infinite loop and timer inside.
thread::spawn(|| {
    let mut timer = Timer::new().unwrap();
    let periodic = timer.periodic(Duration::milliseconds(200));
    loop {
        periodic.recv();
        // Do my work here
    }
});
After a time, based on some conditions, I need to terminate this thread from another part of my program. In other words, I want to exit from the infinite loop. How can I do this correctly? Additionally, how could I suspend this thread and resume it later?
I tried to use a global unsafe flag to break the loop, but I think this solution does not look nice.
For both terminating and suspending a thread you can use channels.
Terminating externally
On each iteration of a worker loop, we check if someone has notified us through a channel. If yes, or if the other end of the channel has gone out of scope, we break the loop.
use std::io::{self, BufRead};
use std::sync::mpsc::{self, TryRecvError};
use std::thread;
use std::time::Duration;

fn main() {
    println!("Press enter to terminate the child thread");
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || loop {
        println!("Working...");
        thread::sleep(Duration::from_millis(500));
        match rx.try_recv() {
            Ok(_) | Err(TryRecvError::Disconnected) => {
                println!("Terminating.");
                break;
            }
            Err(TryRecvError::Empty) => {}
        }
    });

    let mut line = String::new();
    let stdin = io::stdin();
    let _ = stdin.lock().read_line(&mut line);

    let _ = tx.send(());
}
Suspending and resuming
We use recv(), which suspends the thread until something arrives on the channel. In order to resume the thread, you need to send something through the channel; the unit value () in this case. If the transmitting end of the channel is dropped, recv() will return an error (Err(RecvError)) - we use this to exit the loop.
use std::io::{self, BufRead};
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    println!("Press enter to wake up the child thread");
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || loop {
        println!("Suspending...");
        match rx.recv() {
            Ok(_) => {
                println!("Working...");
                thread::sleep(Duration::from_millis(500));
            }
            Err(_) => {
                println!("Terminating.");
                break;
            }
        }
    });

    let mut line = String::new();
    let stdin = io::stdin();
    for _ in 0..4 {
        let _ = stdin.lock().read_line(&mut line);
        let _ = tx.send(());
    }
}
Other tools
Channels are the easiest and the most natural (IMO) way to do these tasks, but not the most efficient one. There are other concurrency primitives which you can find in the std::sync module. They belong to a lower level than channels but can be more efficient in particular tasks.
The ideal solution would be a Condvar from the std::sync module; you can use its wait_timeout method, as pointed out by @VladimirMatveev.
This is the example from the documentation:
use std::sync::{Arc, Mutex, Condvar};
use std::thread;
use std::time::Duration;
let pair = Arc::new((Mutex::new(false), Condvar::new()));
let pair2 = pair.clone();
thread::spawn(move|| {
let &(ref lock, ref cvar) = &*pair2;
let mut started = lock.lock().unwrap();
*started = true;
// We notify the condvar that the value has changed.
cvar.notify_one();
});
// wait for the thread to start up
let &(ref lock, ref cvar) = &*pair;
let mut started = lock.lock().unwrap();
// as long as the value inside the `Mutex` is false, we wait
loop {
let result = cvar.wait_timeout(started, Duration::from_millis(10)).unwrap();
// 10 milliseconds have passed, or maybe the value changed!
started = result.0;
if *started == true {
// We received the notification and the value has been updated, we can leave.
break
}
}
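And for the suspend/resume part of the question specifically, a minimal sketch in the same spirit (names are illustrative): the worker parks inside wait() while a shared flag is set, consuming no CPU until it is notified:
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let pair2 = pair.clone();

    thread::spawn(move || {
        let (lock, cvar) = &*pair2;
        loop {
            let mut paused = lock.lock().unwrap();
            while *paused {
                // wait() releases the lock and blocks until notify_one()
                paused = cvar.wait(paused).unwrap();
            }
            drop(paused);
            println!("Working...");
            thread::sleep(Duration::from_millis(200));
        }
    });

    thread::sleep(Duration::from_secs(1));
    *pair.0.lock().unwrap() = true; // suspend the worker
    thread::sleep(Duration::from_secs(1));
    *pair.0.lock().unwrap() = false; // resume it
    pair.1.notify_one();
    thread::sleep(Duration::from_secs(1));
}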
Having come back to this question several times myself, here's what I think addresses the OP's intent and follows the best practice above of getting the thread to stop itself. Building on the accepted answer, Crossbeam is a nice upgrade to mpsc in that it allows message endpoints to be cloned and moved. It also has a convenient tick function. The real point here is that it has a non-blocking try_recv().
I'm not sure how universally useful it'd be to put a message checker in the middle of an operational loop like this. I haven't found that Actix (or previously Akka) could really stop a thread without--as stated above--getting the thread to do it itself. So this is what I'm using for now (wide open to correction here, still learning myself).
// Cargo.toml:
// [dependencies]
// crossbeam-channel = "0.4.4"

use crossbeam_channel::{tick, unbounded, Receiver, Sender};
use std::time::Instant;

fn main() {
    let (tx, rx): (Sender<String>, Receiver<String>) = unbounded();
    // crossbeam allows clone and move of the receiver
    let rx2 = rx.clone();

    std::thread::spawn(move || {
        // OP:
        // let mut timer = Timer::new().unwrap();
        // let periodic = timer.periodic(Duration::milliseconds(200));
        let ticker: Receiver<Instant> = tick(std::time::Duration::from_millis(500));
        loop {
            // OP:
            // periodic.recv();
            crossbeam_channel::select! {
                recv(ticker) -> _ => {
                    // OP: Do my work here
                    println!("Hello, work.");

                    // Comms check: keep doing work?
                    // try_recv is non-blocking;
                    // rx, the single consumer, is clone-able in crossbeam
                    match rx2.try_recv() {
                        Err(_e) => {}
                        Ok(msg) => match msg.as_str() {
                            "END_THE_WORLD" => {
                                println!("Ending the world.");
                                break;
                            }
                            _ => {}
                        },
                    }
                }
            }
        }
    });

    // Let the work continue for 10 seconds, then tell that thread to end.
    std::thread::sleep(std::time::Duration::from_secs(10));
    println!("Goodbye, world.");
    let _ = tx.send("END_THE_WORLD".to_string());
}
Using strings as a message device is a tad cringeworthy, to me. The suspend and restart signalling could be done with an enum instead, as sketched below.
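A minimal sketch of that enum variant of the messaging (the variant names are hypothetical):
use crossbeam_channel::unbounded;

// Hypothetical control messages replacing the string protocol; matching
// on an enum is exhaustive, so a typo can't silently become a no-op.
#[derive(Debug)]
enum Control {
    Suspend,
    Resume,
    Terminate,
}

fn main() {
    let (tx, rx) = unbounded();
    tx.send(Control::Terminate).unwrap();
    match rx.recv().unwrap() {
        Control::Terminate => println!("Ending the world."),
        other => println!("ignoring {:?}", other),
    }
}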
