I have written a simple future based on this tutorial which looks like this:
extern crate chrono; // 0.4.6
extern crate futures; // 0.1.25

use std::{io, thread};

use chrono::{DateTime, Duration, Utc};
use futures::{task, Async, Future, Poll};

pub struct WaitInAnotherThread {
    end_time: DateTime<Utc>,
    running: bool,
}

impl WaitInAnotherThread {
    pub fn new(how_long: Duration) -> WaitInAnotherThread {
        WaitInAnotherThread {
            end_time: Utc::now() + how_long,
            running: false,
        }
    }

    pub fn run(&mut self, task: task::Task) {
        let lend = self.end_time;

        thread::spawn(move || {
            while Utc::now() < lend {
                let delta_sec = lend.timestamp() - Utc::now().timestamp();
                if delta_sec > 0 {
                    thread::sleep(::std::time::Duration::from_secs(delta_sec as u64));
                }
                task.notify();
            }
            println!("the time has come == {:?}!", lend);
        });
    }
}

impl Future for WaitInAnotherThread {
    type Item = ();
    type Error = Box<io::Error>;

    fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
        if Utc::now() < self.end_time {
            println!("not ready yet! parking the task.");
            if !self.running {
                println!("side thread not running! starting now!");
                self.run(task::current());
                self.running = true;
            }
            Ok(Async::NotReady)
        } else {
            println!("ready! the task will complete.");
            Ok(Async::Ready(()))
        }
    }
}
So the question is: how do I replace pub fn run(&mut self, task: task::Task) with something that will not create a new thread for the future to resolve? It would be helpful if someone could rewrite my code with the run function replaced by something that doesn't use a separate thread; that would help me understand how things should be done. I also know that tokio has a timeout implementation, but I need this code for learning.
I think I understand what you mean.
Let's say you have two tasks, Main and Worker1. In this case you are polling Worker1 to wait for an answer; but there is a better way, which is to wait for the completion of Worker1. This can be done without any Future: you simply call the Worker1 function from Main, and when the worker is done, Main goes on. You need no future; you are simply calling a function, and the division into Main and Worker1 is just an over-complication.
Now, I think your question becomes relevant the moment you add at least one more worker. Call it Worker2, and suppose you want Main to resume computation as soon as one of the two tasks completes, without executing those tasks in another thread or process, perhaps because you are using asynchronous calls (which simply means the threading is done somewhere else, or you are low-level enough that you receive hardware interrupts).
Since Worker1 and Worker2 have to share the same thread, you need a way to save the current execution state of Main, create one for one of the workers, and, after a certain amount of work, time, or some other event (a scheduler), switch to the other worker, and so on. This is a multitasking system, and there are various software implementations of it in Rust; but with hardware support you can do things that software alone cannot (like having the hardware prevent one task from accessing the resources of another), plus you can have the CPU take care of the task switching and all... well, this is what threads and processes are.
Futures are not what you are looking for; they are higher-level, and you can find software schedulers that support them.
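To make that concrete, here is a minimal sketch of the direct-call approach described above, reusing the question's chrono dependency (worker1 is an illustrative name):

extern crate chrono; // 0.4

fn worker1(how_long: chrono::Duration) {
    let end = chrono::Utc::now() + how_long;
    // Main is blocked right here until the work is done.
    while chrono::Utc::now() < end {
        std::thread::sleep(std::time::Duration::from_millis(10));
    }
    println!("the time has come == {:?}!", end);
}

fn main() {
    worker1(chrono::Duration::seconds(2));
    // Main goes on here once worker1 returns; no Future, no extra thread.
}

Main resumes only when worker1 returns, so there is nothing to poll and nothing to schedule.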
I am trying to create a future polling for inputs from the crossterm crate, which does not provide an asynchronous API, as far as I know.
At first I tried to do something like the following:
use crossterm::event::poll as crossterm_poll;
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
use std::time::Duration;
use tokio::time::{sleep, timeout};

struct Polled {}

impl Polled {
    pub fn new() -> Polled {
        Polled {}
    }
}

impl Future for Polled {
    type Output = bool;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // If there are events pending, it returns Ok(true); else it returns instantly
        let poll_status = crossterm_poll(Duration::from_secs(0));
        if poll_status.is_ok() && poll_status.unwrap() {
            return Poll::Ready(true);
        }
        Poll::Pending
    }
}

pub async fn poll(d: Duration) -> Result<bool, ()> {
    let polled = Polled::new();
    match timeout(d, polled).await {
        Ok(b) => Ok(b),
        Err(_) => Err(()),
    }
}
It technically works, but obviously the program started using 100% CPU all the time, since the executor always tries to poll the future in case there's something new. Thus I wanted to add some asynchronous equivalent of sleep that would delay the next time the executor tries to poll the future, so I tried adding the following (right before returning Poll::Pending), which obviously did not work, since sleep_future::poll() just returns Pending:
let mut sleep_future = sleep(Duration::from_millis(50));
tokio::pin!(sleep_future);
sleep_future.poll(cx);
cx.waker().wake_by_ref();
The fact that poll is not async forbids the use of async functions, and I'm starting to wonder whether what I want to do is actually feasible, or whether I'm approaching my first problem the wrong way.
Is finding a way to do an async sleep the right way to go?
If not, what is? Am I missing something in the asynchronous paradigm?
Or is it just sometimes impossible to wrap some synchronous logic into a Future if the crate does not give you the necessary tools to do so ?
Thanks in advance anyway !
EDIT: I found a way to do what I want using an async block:
pub async fn poll(d: Duration) -> Result<bool, ()> {
    let mdr = async {
        loop {
            let a = crossterm_poll(Duration::from_secs(0));
            if a.is_ok() && a.unwrap() {
                break;
            }
            sleep(Duration::from_millis(50)).await;
        }
        true
    };
    match timeout(d, mdr).await {
        Ok(b) => Ok(b),
        Err(_) => Err(()),
    }
}
Is it the idiomatic way to do so ? Or did I miss something more elegant ?
Yes, using an async block is a good way to compose futures, like your custom poller and tokio's sleep.
However, if you did want to write your own Future which also invokes tokio's sleep, here's what you would need to do differently:
Don't call wake_by_ref() immediately — the sleep future will take care of that when its time comes, and that's how you avoid spinning (using 100% CPU).
You must construct the sleep() future once when you intend to sleep (not every time you're polled), then store it in your future (this will require pin-projection) and poll the same future again the next time you're polled. That's how you ensure you wait the intended amount of time and not shorter.
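For illustration, here is a sketch of that approach applied to the Polled future above; Box::pin is used to sidestep manual pin-projection (the pin-project crate is the more general tool), and the 50 ms interval is an arbitrary choice carried over from the question:

use crossterm::event::poll as crossterm_poll;
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
use std::time::Duration;
use tokio::time::{sleep, Sleep};

struct Polled {
    // Store the sleep so the *same* timer is polled across calls to poll().
    delay: Pin<Box<Sleep>>,
}

impl Polled {
    fn new() -> Polled {
        Polled {
            delay: Box::pin(sleep(Duration::from_millis(50))),
        }
    }
}

impl Future for Polled {
    type Output = bool;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // Check the synchronous source first.
        if matches!(crossterm_poll(Duration::from_secs(0)), Ok(true)) {
            return Poll::Ready(true);
        }
        // Poll the stored timer: it registers the waker itself, so there is
        // no cx.waker().wake_by_ref() here and therefore no busy spinning.
        if self.delay.as_mut().poll(cx).is_ready() {
            // The timer fired but there was still no event: arm a fresh one
            // and poll it once so it registers the waker for the next round.
            self.delay.set(sleep(Duration::from_millis(50)));
            let _ = self.delay.as_mut().poll(cx);
        }
        Poll::Pending
    }
}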
Async blocks are usually a much easier way to get the same result.
I am making my own channel implementation, but std::task::Context doesn't make it clear how the waker was generated.
My fake code:
struct MyAtomicWaker {
    lock: SpinLock,
    is_waked: AtomicBool,
    waker: std::task::Waker,
}

struct WeakAtomicWaker(Weak<MyAtomicWaker>);

impl MyAtomicWaker {
    fn is_waked(&self) -> bool {}
    fn weak(self: Arc<MyAtomicWaker>) -> WeakAtomicWaker;
    fn cancel(&self) {} // nullify the WeakAtomicWaker; the waker will not be waked by a future
}

impl WeakAtomicWaker {
    fn wake(self) {} // upgrade to Arc; can wake only once, when the waker is not cancelled
}

struct ReceiveFuture<T> {
    waker: Option<Arc<MyAtomicWaker>>,
}

impl<T> Drop for ReceiveFuture<T> {
    fn drop(&mut self) {
        if let Some(waker) = self.waker.take() {
            waker.cancel();
        }
    }
}

impl<T> Future for ReceiveFuture<T> {
    type Output = Result<(), SendError<T>>;

    fn poll(self: Pin<&mut Self>, ctx: &mut Context) -> Poll<Self::Output> {
        let _self = self.get_mut();

        if _self.waker.is_none() {
            // Wrap the waker in an Arc, store it inside _self, and send the
            // weak ref to the other side of the channel.
            let my_waker = _self.reg_waker(ctx.waker().clone());
            _self.waker.replace(my_waker);
        }

        // do some polling
        match _self.recv.try_recv() {
            Ok(item) => {
                // Cancel my waker; the future is ready.
                if let Some(waker) = _self.waker.take() {
                    waker.cancel();
                }
                return Poll::Ready(item);
            }
            Err(TryRecvError) => {
                if let Some(waker) = _self.waker.as_ref() {
                    if waker.is_waked() {
                        // The waker was triggered, but it's a false alarm; make a new one.
                        let my_waker = _self.reg_waker(ctx.waker().clone());
                        _self.waker.replace(my_waker);
                    } else {
                        // The waker has not been triggered; do we have to make a new one?
                    }
                }
                return Poll::Pending;
            }
            Err(...)
        }
    }
}
Is it necessary to register a new waker every time poll() is called? In my code there are a lot of timeouts and looping selects due to the combination of different futures.
I have a little experiment that works on the playground, but I'm not sure whether it will always work fine for both Tokio and async-std in various settings.
In my production code, I register a new waker and cancel the old waker in every poll() call. I don't know whether it is safe to only register a waker the first time and reuse it on the next polls.
Given the following order:
f.reg_waker(waker1)
f.poll() gets Poll::Pending
the combined future (or future::select) wakes up because another selected future is ready, but waker1 has not been woken
f.poll() gets Poll::Pending
some outsider calls waker1.wake();
is waker1.wake() guaranteed to wake up f after that?
I'm asking this because:
I have a Stream that multiplexes multiple receiving channels
My MPMC and MPSC channel implementations are lockless. Some channels inside a multiplex selection may be used as close-notification channels and seldom get a message. When such a channel is polled a lot (say a million times), a million wakers are thrown to the other side (which looks like a memory leak). Cancelling the previous wakers produced by the same future without a lock is more complex logic than an implementation with a lock.
For these reasons, I have a waker-cancelling solution that leads to a fairness problem, which needs to be avoided as much as possible.
I'm not interested in what the book states or what the API laws declare; I'm only interested in how the low level is implemented. Code showing why this works or why this does not work would be helpful. I code to implement a product; if necessary I will stick to a specific dependency or do some hacking to get the job done until I have a better way.
Yes, it is required to re-set the waker each time. Future::poll states (emphasis mine):
Note that on multiple calls to poll, only the Waker from the Context passed to the most recent call should be scheduled to receive a wakeup.
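As a practical note, the futures crate provides futures::task::AtomicWaker for exactly this pattern, and registering on every poll is its intended usage. A minimal sketch (Shared, ReceiveFuture, and notify are illustrative names, not part of any real channel API):

use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::task::{Context, Poll};

use futures::task::AtomicWaker;

struct Shared {
    ready: AtomicBool,
    waker: AtomicWaker,
}

struct ReceiveFuture {
    shared: Arc<Shared>,
}

impl Future for ReceiveFuture {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // Register the *current* waker on every poll, as the contract requires.
        // AtomicWaker keeps only the most recent waker, so nothing piles up
        // on the other side and there is nothing to cancel.
        self.shared.waker.register(cx.waker());
        if self.shared.ready.load(Ordering::Acquire) {
            Poll::Ready(())
        } else {
            Poll::Pending
        }
    }
}

// The sending side sets the flag, then wakes whichever waker was registered last.
fn notify(shared: &Shared) {
    shared.ready.store(true, Ordering::Release);
    shared.waker.wake();
}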
See also:
Why do I not get a wakeup for multiple futures when they use the same underlying socket?
Is it valid to wake a Rust future while it's being polled?
Given the fact that a waker can be woken in parallel with Future::poll:
Counter-evidence: suppose that every time, a waker had to be clone()d and re-registered for this future to wake up properly. That would make the previous waker invalid, so a concurrent wake-up from a different thread (e.g. a future::select block) would not be possible. The conclusion is not true, so it counter-proves the premise and supports this statement instead:
"A waker is always valid from the time of ctx.waker().clone() until waker.wake()." This serves my purpose: the waker does not need to be re-set every time if it has not yet been used to wake.
In addition, investigating tokio's waker implementation shows that every RawWaker produced by ctx.waker().clone() is just a ref count on a manually dropped memory entry on the heap; as long as waker clones held outside keep the ref count from dropping to zero, the real waker entry always exists.
In a rust async function is there any way to get access to the current Context without writing an explicit implementation of a Future?
Before actually answering the question, it is useful to remember what a Context is; whenever you are writing an implementation of a Future that depends on outside resources (say, I/O), you do not want to busy-wait anything. As a result, you'll most likely have implementations of Future where you'll return Pending and then wake it up. Context (and Waker) exist for that purpose.
However, this is what they are: low-level, implementation details. If you are using a Future already as opposed to writing a low-level implementation of one, the Waker will most likely be contained somewhere, but not directly accessible to you.
As a result of this, a Waker leaking directly is an implementation-detail leak 99.9% of the time and not actually recommended. Using a Waker as part of a bigger struct is perfectly fine, but that is the case where you'll need to implement your own Future from scratch. There is no other valid use case for this, and in normal circumstances you should never need direct access to a Waker.
Due to the limitations of the playground, I sadly cannot show you a live example of when it is useful to get this Waker; however, such a setup may be used in the following situation: let's assume we're building the front door of a house. We have a doorbell and a door, and we want to be notified when somebody rings the doorbell. However, we don't want to have to wait at the door for visitors.
We therefore make two objects: a FrontDoor and a Doorbell, and we give the option to wire() the Doorbell to connect the two.
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, RwLock};
use std::task::{Context, Poll, Waker};

pub struct FrontDoor {
    doorbell: Arc<RwLock<Doorbell>>,
}

impl FrontDoor {
    pub fn new() -> FrontDoor {
        FrontDoor {
            doorbell: Arc::new(RwLock::new(Doorbell {
                waker: None,
                visitor: false,
            })),
        }
    }

    pub fn wire(&self) -> Arc<RwLock<Doorbell>> {
        self.doorbell.clone() // We retrieve the bell
    }
}

impl Future for FrontDoor {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        self.doorbell
            .write()
            .map(|mut guard| match guard.visitor {
                true => Poll::Ready(()),
                false => {
                    // Remember the current waker so ring() can wake us later.
                    guard.waker = Some(cx.waker().clone());
                    Poll::Pending
                }
            })
            .unwrap_or(Poll::Pending)
    }
}

pub struct Doorbell {
    waker: Option<Waker>,
    pub visitor: bool,
}

impl Doorbell {
    pub fn ring(&mut self) {
        self.visitor = true;
        self.waker.as_ref().map(|waker| waker.wake_by_ref());
    }
}
Our FrontDoor implements Future, which means we can just throw it on an executor of your choice; waker is contained in the Doorbell object and allows us to "ring" and wake up our future.
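A hypothetical usage sketch, assuming a tokio runtime (any executor would do):

#[tokio::main]
async fn main() {
    let door = FrontDoor::new();
    let bell = door.wire();

    // Someone rings the bell a little later, from another task.
    tokio::spawn(async move {
        tokio::time::sleep(std::time::Duration::from_millis(100)).await;
        bell.write().unwrap().ring(); // sets visitor = true and wakes the future
    });

    door.await; // completes once ring() has been called
    println!("somebody's at the door!");
}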
I am writing a game and have a player list defined as follows:
pub struct PlayerList {
    by_name: HashMap<String, Arc<Mutex<Player>>>,
    by_uuid: HashMap<Uuid, Arc<Mutex<Player>>>,
}
This struct has methods for adding, removing, getting players, and getting the player count.
The NetworkServer and Server share this list as follows:
NetworkServer {
    ...
    player_list: Arc<Mutex<PlayerList>>,
    ...
}

Server {
    ...
    player_list: Arc<Mutex<PlayerList>>,
    ...
}
This is inside an Arc<Mutex> because the NetworkServer accesses the list in a different thread (network loop).
When a player joins, a thread is spawned for them and they are added to the player_list.
Although the only operation I'm doing is adding to player_list, I'm forced to use Arc<Mutex<Player>> instead of the more natural Rc<RefCell<Player>> in the HashMaps because Mutex<PlayerList> requires it. I am not accessing players from the network thread (or any other thread) so it makes no sense to put them under a Mutex. Only the HashMaps need to be locked, which I am doing using Mutex<PlayerList>. But Rust is pedantic and wants to protect against all misuses.
As I'm only accessing Players in the main thread, locking every time to do that is both annoying and less performant. Is there a workaround instead of using unsafe or something?
Here's an example:
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct Uuid([u8; 16]);

struct Player {
    pub name: String,
    pub uuid: Uuid,
}

struct PlayerList {
    by_name: HashMap<String, Arc<Mutex<Player>>>,
    by_uuid: HashMap<Uuid, Arc<Mutex<Player>>>,
}

impl PlayerList {
    fn add_player(&mut self, p: Player) {
        let name = p.name.clone();
        let uuid = p.uuid;
        let p = Arc::new(Mutex::new(p));
        self.by_name.insert(name, Arc::clone(&p));
        self.by_uuid.insert(uuid, p);
    }
}

struct NetworkServer {
    player_list: Arc<Mutex<PlayerList>>,
}

impl NetworkServer {
    fn start(&mut self) {
        let player_list = Arc::clone(&self.player_list);
        thread::spawn(move || {
            loop {
                // fake network loop:
                // listen for incoming connections, accept players and add them to player_list.
                player_list.lock().unwrap().add_player(Player {
                    name: "blahblah".into(),
                    uuid: Uuid([0; 16]),
                });
            }
        });
    }
}

struct Server {
    player_list: Arc<Mutex<PlayerList>>,
    network_server: NetworkServer,
}

impl Server {
    fn start(&mut self) {
        self.network_server.start();
        // main game loop
        loop {
            // I am only accessing players in this loop, in this thread (the main thread),
            // so a Mutex for each individual player is not needed, although Rust requires it.
        }
    }
}

fn main() {
    let player_list = Arc::new(Mutex::new(PlayerList {
        by_name: HashMap::new(),
        by_uuid: HashMap::new(),
    }));
    let network_server = NetworkServer {
        player_list: Arc::clone(&player_list),
    };
    let mut server = Server {
        player_list,
        network_server,
    };
    server.start();
}
As I'm only accessing Players in the main thread, locking every time to do that is both annoying and less performant.
You mean, as right now you are only accessing Players in the main thread, but at any time later you may accidentally introduce an access to them in another thread?
From the point of view of the language, if you can get a reference to a value, you may use the value. Therefore, if multiple threads have a reference to a value, this value should be safe to use from multiple threads. There is no way to enforce, at compile-time, that a particular value, although accessible, is actually never used.
This raises the question, however:
If the value is never used by a given thread, why does this thread have access to it in the first place?
It seems to me that you have a design issue. If you can manage to redesign your program so that only the main thread has access to the PlayerList, then you will immediately be able to use Rc<RefCell<...>>.
For example, you could instead have the network thread send a message to the main thread announcing that a new player connected.
At the moment, you are "Communicating by Sharing", and you could shift toward "Sharing by Communicating" instead. The former usually has synchronization primitives (such as mutexes, atomics, ...) all over the place and may face contention/deadlock issues, while the latter usually has communication queues (channels) and requires an "asynchronous" style of programming.
Send is a marker trait that governs which objects can have ownership transferred across thread boundaries. It is automatically implemented for any type that is entirely composed of Send types. It is also an unsafe trait because manually implementing this trait can cause the compiler to not enforce the concurrency safety that we love about Rust.
The problem is that Rc<RefCell<Player>> isn't Send and thus your PlayerList isn't Send and thus can't be sent to another thread, even when wrapped in an Arc<Mutex<>>. The unsafe workaround would be to unsafe impl Send for your PlayerList struct.
Putting this code into your playground example allows it to compile the same way as the original with Arc<Mutex<Player>>:
struct PlayerList {
    by_name: HashMap<String, Rc<RefCell<Player>>>,
    by_uuid: HashMap<Uuid, Rc<RefCell<Player>>>,
}

unsafe impl Send for PlayerList {}

impl PlayerList {
    fn add_player(&mut self, p: Player) {
        let name = p.name.clone();
        let uuid = p.uuid;
        let p = Rc::new(RefCell::new(p));
        self.by_name.insert(name, Rc::clone(&p));
        self.by_uuid.insert(uuid, p);
    }
}
The Nomicon is sadly a little sparse at explaining what rules have to be enforced by the programmer when unsafely implementing Send for a type containing Rcs, but accessing it from only one thread seems safe enough...
For completeness, here's TRPL's bit on Send and Sync
I suggest solving this threading problem using a multi-sender-single-receiver channel. The network threads get a Sender<Player> and no direct access to the player list.
The Receiver<Player> gets stored inside the PlayerList. The only thread accessing the PlayerList is the main thread, so you can remove the Mutex around it. Instead, in the place where the main thread used to lock the mutex, it dequeues all pending players from the Receiver<Player>, wraps them in an Rc<RefCell<>>, and adds them to the appropriate collections, as shown in the sketch below.
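Here is a minimal sketch of that design using std::sync::mpsc (names are illustrative; Player and Uuid are as in the question):

use std::cell::RefCell;
use std::collections::HashMap;
use std::rc::Rc;
use std::sync::mpsc::{channel, Receiver, Sender};
use std::thread;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct Uuid([u8; 16]);

struct Player {
    name: String,
    uuid: Uuid,
}

// No Mutex anywhere: only the main thread ever touches the maps.
struct PlayerList {
    incoming: Receiver<Player>,
    by_name: HashMap<String, Rc<RefCell<Player>>>,
    by_uuid: HashMap<Uuid, Rc<RefCell<Player>>>,
}

impl PlayerList {
    // Called from the main loop, in the place where the mutex used to be locked.
    fn drain_incoming(&mut self) {
        for p in self.incoming.try_iter() {
            let (name, uuid) = (p.name.clone(), p.uuid);
            let p = Rc::new(RefCell::new(p));
            self.by_name.insert(name, Rc::clone(&p));
            self.by_uuid.insert(uuid, p);
        }
    }
}

fn main() {
    let (tx, rx): (Sender<Player>, Receiver<Player>) = channel();
    let mut players = PlayerList {
        incoming: rx,
        by_name: HashMap::new(),
        by_uuid: HashMap::new(),
    };

    // The network thread only owns a Sender<Player>, never the list itself.
    thread::spawn(move || {
        tx.send(Player { name: "blahblah".into(), uuid: Uuid([0; 16]) }).unwrap();
    });

    loop {
        players.drain_incoming();
        // ... rest of the game loop; Rc<RefCell<Player>> is fine here ...
    }
}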
Though looking at the bigger design, I wouldn't use a per-player thread in the first place. Instead I'd use some kind of single-threaded, event-loop based design. (I didn't look into which Rust libraries are good in that area, but tokio seems popular.)
I have the following code:
extern crate futures;
extern crate futures_cpupool;
extern crate tokio_timer;

use std::time::Duration;

use futures::Future;
use futures_cpupool::CpuPool;
use tokio_timer::Timer;

fn work(foo: Foo) {
    std::thread::sleep(std::time::Duration::from_secs(10));
}

#[derive(Debug)]
struct Foo {}

impl Drop for Foo {
    fn drop(&mut self) {
        println!("Dropping Foo");
    }
}

fn main() {
    let pool = CpuPool::new_num_cpus();
    let foo = Foo {};

    let work_future = pool.spawn_fn(|| {
        let work = work(foo);
        let res: Result<(), ()> = Ok(work);
        res
    });

    println!("Created the future");

    let timer = Timer::default();
    let timeout = timer.sleep(Duration::from_millis(750)).then(|_| Err(()));

    let select = timeout.select(work_future).map(|(win, _)| win);

    match select.wait() {
        Ok(()) => {}
        Err(_) => {}
    }
}
It seems this code doesn't execute Foo::drop - no message is printed.
I expected foo to be dropped as soon as the timeout future resolves in select, as it's part of the environment of a closure passed to the dropped future.
How do I make it execute Foo::drop?
The documentation for CpuPool states:
The worker threads associated with a thread pool are kept alive so long as there is an open handle to the CpuPool or there is work running on them. Once all work has been drained and all references have gone away the worker threads will be shut down.
Additionally, you transfer ownership of foo from main to the closure, which then transfers it to work. work will drop foo at the end of the block. However, work is also performing a blocking sleep operation. This sleep counts as work running on the thread.
The sleep is still going when the main thread exits, which immediately tears down the program, and all the threads, without any time to clean up.
As pointed out in How to terminate or suspend a Rust thread from another thread? (and other questions in other languages), there's no safe way to terminate a thread.
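To demonstrate the point (not as a fix): if main outlives the blocking sleep, the worker thread gets to finish, work returns, and foo is dropped. For example, the end of main could be changed to:

match select.wait() {
    Ok(()) => {}
    Err(_) => {}
}
// Keep the process alive past the 10-second blocking sleep inside `work`;
// once `work` returns, `foo` goes out of scope and "Dropping Foo" is printed.
std::thread::sleep(Duration::from_secs(11));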
I expected foo to be dropped as soon as the timeout future resolves in select, as it's part of the environment of a closure passed to the dropped future.
The future doesn't actually "have" the closure or foo. All it has is a handle to the thread:
pub struct CpuFuture<T, E> {
    inner: Receiver<thread::Result<Result<T, E>>>,
    keep_running_flag: Arc<AtomicBool>,
}
Strangely, the docs say:
If the returned future is dropped then this CpuPool will attempt to cancel the computation, if possible. That is, if the computation is in the middle of working, it will be interrupted when possible.
However, I don't see any implementation of Drop for CpuFuture, so I don't see how that could be possible (or safe). Instead of Drop, the threadpool itself runs a Future. When that future is polled, it checks to see if the receiver has been dropped. This behavior is provided by the oneshot::Receiver. However, this has nothing to do with threads, which are outside the view of the future.
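For reference, a small sketch of the cancellation mechanism that last paragraph describes, using futures 0.1's futures::sync::oneshot (this shows the building block, not futures-cpupool's exact code):

extern crate futures; // 0.1

use futures::sync::oneshot;
use futures::{future, Async, Future};

fn main() {
    let (mut tx, rx) = oneshot::channel::<u32>();
    drop(rx); // the consumer lost interest, as when a CpuFuture is dropped

    // poll_cancel() must run inside a task context, hence future::lazy.
    future::lazy(move || {
        // Returns Ready once the Receiver has been dropped, letting the
        // worker skip the computation entirely instead of being killed.
        match tx.poll_cancel() {
            Ok(Async::Ready(())) => println!("receiver gone; skip the work"),
            Ok(Async::NotReady) => println!("receiver alive; do the work"),
            Err(()) => {}
        }
        Ok::<(), ()>(())
    })
    .wait()
    .unwrap();
}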