How to create a complex concurrent data structure in Rust

I need a struct containing several fields that should be usable in a concurrent context.
struct Foo {
number: u32,
collection: Vec<u32>,
}
In C I would add a Mutex to the struct and (un)lock it, but doing this in Rust using a Mutex<()> is not possible, as the struct is located somewhere inside an Arc. This means I need the Mutex around the fields if I want to mutate them.
However, this leads to problems if I want to extract functionality into private methods, as Mutex is not reentrant.
[Edit]
If the fields in question are needed in both the main method and the helper method, locking multiple times is not the solution, as I would like the whole block to be treated as one unit; otherwise it would be possible for another thread to change something in between.
[/Edit]
In the simple case like above, the solution would probably be to not make it concurrent at all and let the caller deal with it, i.e. use Mutex<Foo> and deal with the locking every time it is used.
However, in some situations a Condvar is needed, to wait on the collection being filled or emptied, in which case I need a MutexGuard to wait on.
I guess I could move the Condvar to the outside as well, but then I would need to be concerned with the inner workings of my struct on the outside, which I don't like at all.
I don't know if I'm going completely into the wrong direction here, but I couldn't find anything on this by searching the web. Everything only talks about locking simple datatypes.
The approaches mentioned above:
struct Foo {
lock: Mutex<()>,
number: u32,
collection: Vec<u32>,
}
struct Foo {
number: Mutex<u32>,
collection: Mutex<Vec<u32>>,
}
impl Foo {
pub fn add_if_bigger(&self, v: u32) {
let mut collection = self.collection.lock().unwrap();
if self.is_bigger(v) {
collection.push(v);
}
}
fn is_bigger(&self, v: u32) -> bool {
let collection = self.collection.lock().unwrap();
// already locked by this thread --^
// ...
}
}

Initializing a struct field-by-field. Is it possible to know if all the fields were initialized?

I'm following the example from the official documentation. I'll copy the code here for simplicity:
#[derive(Debug, PartialEq)]
pub struct Foo {
name: String,
list: Vec<u8>,
}
let foo = {
let mut uninit: MaybeUninit<Foo> = MaybeUninit::uninit();
let ptr = uninit.as_mut_ptr();
// Initializing the `name` field
// Using `write` instead of assignment via `=` to not call `drop` on the
// old, uninitialized value.
unsafe { addr_of_mut!((*ptr).name).write("Bob".to_string()); }
// Initializing the `list` field
// If there is a panic here, then the `String` in the `name` field leaks.
unsafe { addr_of_mut!((*ptr).list).write(vec![0, 1, 2]); }
// All the fields are initialized, so we call `assume_init` to get an initialized Foo.
unsafe { uninit.assume_init() }
};
What bothers me is the second unsafe comment: If there is a panic here, then the String in the name field leaks. This is exactly what I want to avoid. I modified the example so now it reflects my concerns:
use std::mem::MaybeUninit;
use std::ptr::addr_of_mut;
#[derive(Debug, PartialEq)]
pub struct Foo {
name: String,
list: Vec<u8>,
}
#[allow(dead_code)]
fn main() {
let mut uninit: MaybeUninit<Foo> = MaybeUninit::uninit();
let ptr = uninit.as_mut_ptr();
init_foo(ptr);
// this is wrong because it tries to read the uninitialized field
// I could avoid this call if the function `init_foo` returns a `Result`
// but I'd like to know which fields are initialized so I can cleanup
let _foo = unsafe { uninit.assume_init() };
}
fn init_foo(foo_ptr: *mut Foo) {
unsafe { addr_of_mut!((*foo_ptr).name).write("Bob".to_string()); }
// something happened and `list` field left uninitialized
return;
}
The code builds and runs. But using MIRI I see the error:
Undefined Behavior: type validation failed at .value.list.buf.ptr.pointer: encountered uninitialized raw pointer
The question is how I can figure out which fields are initialized and which are not? Sure, I could return a result with the list of field names or similar, for example. But I don't want to do it - my struct can have dozens of fields, it changes over time and I'm too lazy to maintain an enum that should reflect the fields. Ideally I'd like to have something like this:
if addr_initialized!((*ptr).name) {
clean(addr_of_mut!((*ptr).name));
}
Update: Here's an example of what I want to achieve. I'm doing some Vulkan programming (with ash crate, but that's not important). I want to create a struct that holds all the necessary objects, like Device, Instance, Surface, etc.:
struct VulkanData {
pub instance: Instance,
pub device: Device,
pub surface: Surface,
// 100500 other fields
}
fn init() -> Result<VulkanData, Error> {
// let vulkan_data = VulkanData{}; // can't do that because some fields are not default constructible.
let instance = create_instance(); // can fail
let device = create_device(instance); // can fail, in this case instance has to be destroyed
let surface = create_surface(device); // can fail, in this case instance and device have to be destroyed
//other initialization routines
Ok(VulkanData{instance, device, surface, ...})
}
As you can see, for every such object, there's a corresponding create_x function, which can fail. Obviously, if I fail in the middle of the process, I don't want to proceed. But I want to clear already created objects. As you mentioned, I could create a wrapper. But it's very tedious work to create wrappers for hundreds of types, I absolutely want to avoid this (btw, ash is already a wrapper over C-types). Moreover, because of the asynchronous nature of CPU-GPU communication, sometimes it makes no sense to drop an object, it can lead to errors. Instead, some form of a signal should come from the GPU that indicates that an object is safe to destroy. That's the main reason why I can't implement Drop for the wrappers.
But as soon as the struct is successfully initialized I know that it's safe to read any of its fields. That's why I don't want to use an Option - it adds some overhead and makes no sense in my particular example.
All that is trivially achievable in C++ - create an uninitialized struct (well, by default all Vulkan objects are initialized with VK_NULL_HANDLE), start to fill it field-by-field, if something went wrong just destroy the objects that are not null.
There is no general purpose way to tell if something is initialized or not. Miri can detect this because it adds a lot of instrumentation and overhead to track memory operations.
All that is trivially achievable in C++ - create an uninitialized struct (well, by default all Vulkan objects are initialized with VK_NULL_HANDLE), start to fill it field-by-field, if something went wrong just destroy the objects that are not null.
You could theoretically do the same in Rust, however this is quite unsafe and makes a lot of assumptions about the construction of the ash types.
If the functions didn't depend on each other, I might suggest something like this:
let instance = create_instance();
let device = create_device();
let surface = create_surface();
match (instance, device, surface) {
(Ok(instance), Ok(device), Ok(surface)) => {
Ok(VulkanData{
instance,
device,
surface,
})
}
(instance, device, surface) => {
// clean up the `Ok` ones and return some error
}
}
However, your functions are dependent on others succeeding (e.g. need the Instance to create a Device) and this also has the disadvantage that it would keep creating values when one already failed.
Creating wrappers with custom drop behavior is the most robust way to accomplish this. There is the vulkano crate that is built on top of ash that does this among other things. But if that's not to your liking you can use something like scopeguard to encapsulate drop logic on the fly.
use scopeguard::{guard, ScopeGuard}; // 1.1.0
fn init() -> Result<VulkanData, Error> {
let instance = guard(create_instance()?, destroy_instance);
let device = guard(create_device(&instance)?, destroy_device);
let surface = guard(create_surface(&device)?, destroy_surface);
Ok(VulkanData {
// use `into_inner` to escape the drop behavior
instance: ScopeGuard::into_inner(instance),
device: ScopeGuard::into_inner(device),
surface: ScopeGuard::into_inner(surface),
})
}
See a full example on the playground. No unsafe required.
I believe MaybeUninit is designed for the cases when you have all the information about its contents and can make the code safe "by hand".
If you need to figure out at runtime if a field has a value, then use Option<T>.
Per the documentation:
You can think of MaybeUninit<T> as being a bit like Option<T> but without any of the run-time tracking and without any of the safety checks.
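Following that advice, runtime tracking of partially initialized fields can be sketched with Option<T>. The types and initialization steps below are hypothetical stand-ins, not the real ash API:

```rust
// Hypothetical stand-ins for the Vulkan handle types.
struct Instance(String);
struct Device(String);

// Builder holds Option<T> per field, so it always knows exactly
// which fields were initialized and which were not.
#[derive(Default)]
struct Builder {
    instance: Option<Instance>,
    device: Option<Device>,
}

// The final struct has plain fields: once built, no Option overhead.
struct VulkanData {
    instance: Instance,
    device: Device,
}

fn init() -> Result<VulkanData, String> {
    let mut b = Builder::default();
    b.instance = Some(Instance("instance".into()));
    b.device = Some(Device("device".into()));
    // If any step above had failed, `b` records which fields exist;
    // a cleanup routine could inspect each Option and destroy only
    // the initialized ones before returning the error.
    Ok(VulkanData {
        instance: b.instance.take().ok_or("instance missing")?,
        device: b.device.take().ok_or("device missing")?,
    })
}

fn main() {
    assert!(init().is_ok());
}
```

The Options live only in the builder; the finished VulkanData has plain fields, matching the "once initialized, always readable" requirement from the question.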

Is it safe to `Send` struct containing `Rc` if strong_count is 1 and weak_count is 0?

I have a struct that is not Send because it contains Rc. Let's say that Arc has too big an overhead, so I want to keep using Rc. I would still like to occasionally Send this struct between threads, but only when I can verify that the Rc has strong_count 1 and weak_count 0.
Here is (hopefully safe) abstraction that I have in mind:
mod my_struct {
use std::rc::Rc;
#[derive(Debug)]
pub struct MyStruct {
reference_counted: Rc<String>,
// more fields...
}
impl MyStruct {
pub fn new() -> Self {
MyStruct {
reference_counted: Rc::new("test".to_string())
}
}
pub fn pack_for_sending(self) -> Result<Sendable, Self> {
if Rc::strong_count(&self.reference_counted) == 1 &&
Rc::weak_count(&self.reference_counted) == 0
{
Ok(Sendable(self))
} else {
Err(self)
}
}
// There are more methods, some may clone the `Rc`!
}
/// `Send`able wrapper for `MyStruct` that does not allow you to access it,
/// only unpack it.
pub struct Sendable(MyStruct);
// Safety: `MyStruct` is not `Send` because of `Rc`. `Sendable` can be
// only created when the `Rc` has strong count 1 and weak count 0.
unsafe impl Send for Sendable {}
impl Sendable {
/// Retrieve the inner `MyStruct`, making it not-sendable again.
pub fn unpack(self) -> MyStruct {
self.0
}
}
}
use crate::my_struct::MyStruct;
fn main() {
let handle = std::thread::spawn(|| {
let my_struct = MyStruct::new();
dbg!(&my_struct);
// Do something with `my_struct`, but at the end the inner `Rc` should
// not be shared with anybody.
my_struct.pack_for_sending().expect("Some Rc was still shared!")
});
let my_struct = handle.join().unwrap().unpack();
dbg!(&my_struct);
}
I did a demo on the Rust playground.
It works. My question is, is it actually safe?
I know that the Rc is owned only by a single owner and nobody can change that under my hands, because it can't be accessed by other threads and we wrap it into Sendable which does not allow access to the contained value.
But in some crazy world Rc could for example internally use thread local storage and this would not be safe... So is there some guarantee that I can do this?
I know that I must be extremely careful to not introduce some additional reason for the MyStruct to not be Send.
No.
There are multiple points that need to be verified to be able to send Rc across threads:
There can be no other handle (Rc or Weak) sharing ownership.
The content of Rc must be Send.
The implementation of Rc must use a thread-safe strategy.
Let's review them in order!
Guaranteeing the absence of aliasing
While your algorithm -- checking the counts yourself -- works for now, it would be better to simply ask Rc whether it is aliased or not.
fn is_aliased<T>(t: &mut Rc<T>) -> bool { Rc::get_mut(t).is_some() }
The implementation of get_mut will be adjusted should the implementation of Rc change in ways you have not foreseen.
Sendable content
While your implementation of MyStruct currently puts String (which is Send) into Rc, it could tomorrow change to Rc<str>, and then all bets are off.
Therefore, the sendable check needs to be implemented at the Rc level itself, otherwise you need to audit any change to whatever Rc holds.
fn sendable<T: Send>(mut t: Rc<T>) -> Result<Rc<T>, ...> {
if !is_aliased(&mut t) {
Ok(t)
} else {
...
}
}
Thread-safe Rc internals
And that... cannot be guaranteed.
Since Rc is not Send, its implementation can be optimized in a variety of ways:
The entire memory could be allocated using a thread-local arena.
The counters could be allocated using a thread-local arena, separately, so as to seamlessly convert to/from Box.
...
This is not the case at the moment, AFAIK, however the API allows it, so the next release could definitely take advantage of this.
What should you do?
You could make pack_for_sending unsafe, and dutifully document all assumptions that are counted on -- I suggest using get_mut to remove one of them. Then, on each new release of Rust, you'd have to double-check each assumption to ensure that your usage is still safe.
Or, if you do not mind making an allocation, you could write a conversion to Arc<T> yourself (see Playground):
fn into_arc<T>(this: Rc<T>) -> Result<Arc<T>, Rc<T>> {
Rc::try_unwrap(this).map(|t| Arc::new(t))
}
Or, you could write a RFC proposing a Rc <-> Arc conversion!
The API would be:
fn Rc<T: Send>::into_arc(this: Self) -> Result<Arc<T>, Rc<T>>
fn Arc<T>::into_rc(this: Self) -> Result<Rc<T>, Arc<T>>
This could be made very efficiently inside std, and could be of use to others.
Then, you'd convert from MyStruct to MySendableStruct, just moving the fields and converting Rc to Arc as you go, send to another thread, then convert back to MyStruct.
And you would not need any unsafe...
The only difference between Arc and Rc is that Arc uses atomic counters. The counters are only accessed when the pointer is cloned or dropped, so the difference between the two is negligible in applications which just share pointers between long-lived threads.
If you have never cloned the Rc, it is safe to send between threads. However, if you can guarantee that the pointer is unique then you can make the same guarantee about a raw value, without using a smart pointer at all!
This all seems quite fragile, for little benefit; future changes to the code might not meet your assumptions, and you will end up with Undefined Behaviour. I suggest that you at least try making some benchmarks with Arc. Only consider approaches like this when you measure a performance problem.
You might also consider using the archery crate, which provides a reference-counted pointer that abstracts over atomicity.

Create two mutable references that are thread safe to a struct in Rust

I'm trying to create async Reader and Writer in tokio; these require Send, and must be thread safe. (There does not seem to be a way to write single-threaded tokio code that avoids mutexes.)
The reader and writer both need to interact, e.g read data might result in a response.
I would like both Reader and Writer to have a thread safe pointer to a Session that can ensure communication between the two.
/// function on the
impl Session {
pub fn split(&mut self, sock: TcpStream) -> (Reader, Writer) {
let (read_half, write_half) = sock.split();
let session = Arc::new(RwLock::new(self)); // <- expected lifetime
( Reader::new(session, read_half), Writer::new(Arc::clone(session), write_half) )
}
...
}
pub struct Reader {
session: Arc<RwLock<&mut StompSession>>,
read_half: ReadHalf<TcpStream>,
...
pub struct Writer {
session: Arc<RwLock<&mut StompSession>>,
write_half: WriteHalf<TcpStream>,
....
I don't really understand the difference between Arc<RwLock<&mut StompSession>> and Arc<RwLock<StompSession>>; both can only be talking about pointers.
Naturally I came at this by struggling with the borrow checker, and the Rust book only has examples for RwLock with integers, not mutable "objects".
Let's start by clearing a few things:
does not seem to be a way to write single threaded tokio code that avoids mutexes
The Mutex requirement has nothing to do with single-threadedness but with mutable borrows. Whenever you spawn a future, that future is its own entity; it isn't magically part of your struct and most definitely does not know how to keep a &mut self. That is the point of the Mutex - it allows you to dynamically acquire a mutable reference to inner state - and the Arc allows you to have access to the Mutex itself in multiple places.
Their non-synchronized equivalents are Rc and Cell/RefCell, by the way, and their content (whether synchronized or unsynchronized) should be an owned type.
The Send requirement actually shows up when you use futures on top of tokio, as an Executor requires futures spawned on it to be Send (for obvious reasons - you could make a spawn_local method, but that'd cause more problems than it solves).
Now, back to your problem. I'm going to give you the shortest path to an answer to your problem. This, however, will not be completely the right way of doing things; since I do not know what protocol you're going to lay on top of TcpStream or what kind of requirements you have, I cannot point you in the right direction (yet). Comments are there for that reason - give me more requirements and I'll happily edit this!
Anyway. Back to the problem. Since Mutex<_> is better used with an owned type, we're going to do that right now and "fix" both your Reader and Writer:
pub struct Reader {
session: Arc<RwLock<Session>>,
read_half: ReadHalf<TcpStream>,
}
pub struct Writer {
session: Arc<RwLock<Session>>,
write_half: WriteHalf<TcpStream>,
}
Since we changed this, we also need to reflect it elsewhere, but to be able to do this we're going to need to consume self. It is fine, however, as we're going to have an Arc<Mutex<_>> copy in either object we return:
impl Session {
pub fn split(self, sock: TcpStream) -> (Reader, Writer) {
let (read_half, write_half) = sock.split();
let session = Arc::new(RwLock::new(self));
( Reader::new(session.clone(), read_half), Writer::new(session, write_half) )
}
}
And lo and behold, it compiles and each Writer/Reader pair now has its own borrowable (mutably and non-mutably) reference to our session!
The playground snippet highlights the changes made. As I said, it works now, but it's going to bite you in the ass the moment you try to do something as you're going to need something on top of both ReadHalf and WriteHalf to be able to use them properly.

What are the use cases of the newly proposed Pin type?

There is a new Pin type in unstable Rust and the RFC is already merged. It is said to be kind of a game changer when it comes to passing references, but I am not sure how and when one should use it.
Can anyone explain it in layman's terms?
What is pinning ?
In programming, pinning X means instructing X not to move.
For example:
Pinning a thread to a CPU core, to ensure it always executes on the same CPU,
Pinning an object in memory, to prevent a Garbage Collector from moving it (in C#, for example).
What is the Pin type about?
The Pin type's purpose is to pin an object in memory.
It enables taking the address of an object and having a guarantee that this address will remain valid for as long as the instance of Pin is alive.
What are the use cases?
The primary use case, for which it was developed, is supporting Generators.
The idea of generators is to write a simple function, with yield, and have the compiler automatically translate this function into a state machine. The state that the generator carries around is the "stack" variables that need to be preserved from one invocation to another.
The key difficulty of Generators that Pin is designed to fix is that Generators may end up storing a reference to one of their own data members (after all, you can create references to stack values) or a reference to an object ultimately owned by their own data members (for example, a &T obtained from a Box<T>).
This is a subcase of self-referential structs, which until now required custom libraries (and lots of unsafe). The problem with self-referential structs is that if the struct moves, the reference it contains still points to the old memory.
Pin apparently solves this years-old issue of Rust, as a library type. It creates the extra guarantee that as long as the Pin exists, the pinned value cannot be moved.
The usage, therefore, is to first create the struct you need, return it/move it at will, and then when you are satisfied with its place in memory initialize the pinned references.
One of the possible uses of the Pin type is self-referencing objects; an article by ralfj provides an example of a SelfReferential struct which would be very complicated without it:
use std::ptr;
use std::pin::Pin;
use std::marker::PhantomPinned;
struct SelfReferential {
data: i32,
self_ref: *const i32,
_pin: PhantomPinned,
}
impl SelfReferential {
fn new() -> SelfReferential {
SelfReferential { data: 42, self_ref: ptr::null(), _pin: PhantomPinned }
}
fn init(self: Pin<&mut Self>) {
let this: &mut Self = unsafe { self.get_unchecked_mut() };
// Set up self_ref to point to this.data.
this.self_ref = &mut this.data as *const i32;
}
fn read_ref(self: Pin<&Self>) -> Option<i32> {
let this: &Self = self.get_ref();
// Dereference self_ref if it is non-NULL.
if this.self_ref == ptr::null() {
None
} else {
Some(unsafe { *this.self_ref })
}
}
}
fn main() {
let mut data: Pin<Box<SelfReferential>> = Box::new(SelfReferential::new()).into();
data.as_mut().init();
println!("{:?}", data.as_ref().read_ref()); // prints Some(42)
}

Forced to use a Mutex when it's not required

I am writing a game and have a player list defined as follows:
pub struct PlayerList {
by_name: HashMap<String, Arc<Mutex<Player>>>,
by_uuid: HashMap<Uuid, Arc<Mutex<Player>>>,
}
This struct has methods for adding, removing, getting players, and getting the player count.
The NetworkServer and Server shares this list as follows:
NetworkServer {
...
player_list: Arc<Mutex<PlayerList>>,
...
}
Server {
...
player_list: Arc<Mutex<PlayerList>>,
...
}
This is inside an Arc<Mutex> because the NetworkServer accesses the list in a different thread (network loop).
When a player joins, a thread is spawned for them and they are added to the player_list.
Although the only operation I'm doing is adding to player_list, I'm forced to use Arc<Mutex<Player>> instead of the more natural Rc<RefCell<Player>> in the HashMaps because Mutex<PlayerList> requires it. I am not accessing players from the network thread (or any other thread) so it makes no sense to put them under a Mutex. Only the HashMaps need to be locked, which I am doing using Mutex<PlayerList>. But Rust is pedantic and wants to protect against all misuses.
As I'm only accessing Players in the main thread, locking every time to do that is both annoying and less performant. Is there a workaround instead of using unsafe or something?
Here's an example:
use std::cell::Cell;
use std::collections::HashMap;
use std::ffi::CString;
use std::rc::Rc;
use std::sync::{Arc, Mutex};
use std::thread;
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct Uuid([u8; 16]);
struct Player {
pub name: String,
pub uuid: Uuid,
}
struct PlayerList {
by_name: HashMap<String, Arc<Mutex<Player>>>,
by_uuid: HashMap<Uuid, Arc<Mutex<Player>>>,
}
impl PlayerList {
fn add_player(&mut self, p: Player) {
let name = p.name.clone();
let uuid = p.uuid;
let p = Arc::new(Mutex::new(p));
self.by_name.insert(name, Arc::clone(&p));
self.by_uuid.insert(uuid, p);
}
}
struct NetworkServer {
player_list: Arc<Mutex<PlayerList>>,
}
impl NetworkServer {
fn start(&mut self) {
let player_list = Arc::clone(&self.player_list);
thread::spawn(move || {
loop {
// fake network loop
// listen for incoming connections, accept player and add them to player_list.
player_list.lock().unwrap().add_player(Player {
name: "blahblah".into(),
uuid: Uuid([0; 16]),
});
}
});
}
}
struct Server {
player_list: Arc<Mutex<PlayerList>>,
network_server: NetworkServer,
}
impl Server {
fn start(&mut self) {
self.network_server.start();
// main game loop
loop {
// I am only accessing players in this loop in this thread. (main thread)
// so Mutex for individual player is not needed although rust requires it.
}
}
}
fn main() {
let player_list = Arc::new(Mutex::new(PlayerList {
by_name: HashMap::new(),
by_uuid: HashMap::new(),
}));
let network_server = NetworkServer {
player_list: Arc::clone(&player_list),
};
let mut server = Server {
player_list,
network_server,
};
server.start();
}
As I'm only accessing Players in the main thread, locking every time to do that is both annoying and less performant.
You mean that right now you are only accessing Players in the main thread, but at any time later you may accidentally introduce an access to them in another thread?
From the point of view of the language, if you can get a reference to a value, you may use the value. Therefore, if multiple threads have a reference to a value, this value should be safe to use from multiple threads. There is no way to enforce, at compile-time, that a particular value, although accessible, is actually never used.
This raises the question, however:
If the value is never used by a given thread, why does this thread have access to it in the first place?
It seems to me that you have a design issue. If you can manage to redesign your program so that only the main thread has access to the PlayerList, then you will immediately be able to use Rc<RefCell<...>>.
For example, you could instead have the network thread send a message to the main thread announcing that a new player connected.
At the moment, you are "Communicating by Sharing", and you could shift toward "Sharing by Communicating" instead. The former usually has synchronization primitives (such as mutexes, atomics, ...) all over the place, and may face contention/dead-lock issues, while the latter usually has communication queues (channels) and requires an "asynchronous" style of programming.
Send is a marker trait that governs which objects can have ownership transferred across thread boundaries. It is automatically implemented for any type that is entirely composed of Send types. It is also an unsafe trait because manually implementing this trait can cause the compiler to not enforce the concurrency safety that we love about Rust.
The problem is that Rc<RefCell<Player>> isn't Send and thus your PlayerList isn't Send and thus can't be sent to another thread, even when wrapped in an Arc<Mutex<>>. The unsafe workaround would be to unsafe impl Send for your PlayerList struct.
Putting this code into your playground example allows it to compile the same way as the original with Arc<Mutex<Player>>
struct PlayerList {
by_name: HashMap<String, Rc<RefCell<Player>>>,
by_uuid: HashMap<Uuid, Rc<RefCell<Player>>>,
}
unsafe impl Send for PlayerList {}
impl PlayerList {
fn add_player(&mut self, p: Player) {
let name = p.name.clone();
let uuid = p.uuid;
let p = Rc::new(RefCell::new(p));
self.by_name.insert(name, Rc::clone(&p));
self.by_uuid.insert(uuid, p);
}
}
Playground
The Nomicon is sadly a little sparse at explaining what rules have to be enforced by the programmer when unsafely implementing Send for a type containing Rcs, but accessing in only one thread seems safe enough...
For completeness, here's TRPL's bit on Send and Sync
I suggest solving this threading problem using a multi-sender-single-receiver channel. The network threads get a Sender<Player> and no direct access to the player list.
The Receiver<Player> gets stored inside the PlayerList. The only thread accessing the PlayerList is then the main thread, so you can remove the Mutex around it. Instead, in the place where the main thread used to lock the mutex, it dequeues all pending players from the Receiver<Player>, wraps them in Rc<RefCell<>>, and adds them to the appropriate collections.
Though looking at the bigger design, I wouldn't use a per-player thread in the first place. Instead I'd use some kind of single-threaded, event-loop-based design. (I didn't look into which Rust libraries are good in that area, but tokio seems popular.)
