This code uses tracing events:
# Cargo.toml
[dependencies]
tracing = "0.1.3"
tracing-subscriber = { version = "0.2.9", features = ["chrono", "env-filter", "fmt"] }
tracing-appender = "0.1.1"
use tracing::{event, Level};
use tracing::dispatcher::{with_default, Dispatch};
use std::thread;
use tracing_appender::rolling::{RollingFileAppender};
use tracing_appender::non_blocking::{NonBlocking, WorkerGuard};
use tracing_subscriber::fmt::SubscriberBuilder;
pub static file_appender:RollingFileAppender = tracing_appender::rolling::never("/ws/sarvi-sjc/", "prefix.log");
pub static (non_blocking, _guard:WorkerGuard):(NonBlocking:WorkerGuard) = tracing_appender::non_blocking(file_appender);
pub static subscriber:SubscriberBuilder = tracing_subscriber::FmtSubscriber::builder()
.with_max_level(Level::TRACE)
.with_writer(non_blocking)
.finish();
pub static my_dispatch = Dispatch::new(subscriber);
with_default(&my_dispatch, || {
event!(Level::INFO, "chmod(...)");
});
I want the first few global static lines to be initialized and stored in a thread_local!() so that they are initialized only once per thread.
I should then be able to use that thread-specific Dispatch instance to scope event subscribers. The code above was taken from inside a function, where these were let statements. As static variables, one of them doesn't compile, and I had the same problem within thread_local!() as well.
pub static (non_blocking, _guard:WorkerGuard):(NonBlocking:WorkerGuard) = tracing_appender::non_blocking(file_appender);
error: expected identifier, found `(`
--> src/lib.rs:13:12
|
13 | pub static (non_blocking, _guard:WorkerGuard):(NonBlocking:WorkerGuard) = tracing_appender::non_blocking(file_appender);
| ^ expected identifier
The second problem was understanding how to initialize them in a thread-local fashion.
Wrap your static declarations with the thread_local! macro, then you can access each value using the with method, which will return a value unique to the thread.
use tracing::{
dispatcher::{with_default, Dispatch},
event, Level,
};
use tracing_appender::non_blocking::WorkerGuard;
use tracing_subscriber::FmtSubscriber;
fn make_dispatch() -> (Dispatch, WorkerGuard) {
let file_appender = tracing_appender::rolling::never("ws/sarvi-sjc/", "prefix.log");
let (non_blocking, guard) = tracing_appender::non_blocking(file_appender);
let subscriber = FmtSubscriber::builder()
.with_max_level(Level::TRACE)
.with_writer(non_blocking)
.finish();
(Dispatch::new(subscriber), guard)
}
thread_local!(static MY_DISPATCH: (Dispatch, WorkerGuard) = make_dispatch());
fn main() {
// Main thread:
let (my_dispatch, _guard) = make_dispatch();
with_default(&my_dispatch, || {
event!(Level::INFO, "main thread");
});
// Other thread:
std::thread::spawn(|| {
MY_DISPATCH.with(|(my_dispatch, _guard)| {
with_default(&my_dispatch, || {
event!(Level::INFO, "other thread");
});
});
})
.join()
.unwrap();
}
I made sure to store the WorkerGuard in thread local storage too, so that it does not go out of scope after MY_DISPATCH is initialized. This is because the documentation for tracing_appender::non_blocking states:
This function returns a tuple of NonBlocking and WorkerGuard. NonBlocking implements MakeWriter which integrates with tracing_subscriber. WorkerGuard is a drop guard that is responsible for flushing any remaining logs when the program terminates.
Note that the WorkerGuard returned by non_blocking must be assigned to a binding that is not _, as _ will result in the WorkerGuard being dropped immediately. Unintentional drops of WorkerGuard remove the guarantee that logs will be flushed during a program's termination, in a panic or otherwise.
This way, the guard will be dropped when the thread exits. However, keep in mind that Rust's built-in thread local storage has some weird quirks about initialization and destruction. See the documentation for std::thread::LocalKey. Notably:
On Unix systems when pthread-based TLS is being used, destructors will not be run for TLS values on the main thread when it exits. Note that the application will exit immediately after the main thread exits as well.
Therefore, on the main thread, you should call make_dispatch() directly rather than going through MY_DISPATCH, so that the guard is dropped when the program exits. (Note that in general things are not guaranteed to be dropped, especially during a panic or std::process::exit; there is still a chance that some logs could be lost, as is the nature of most non-blocking I/O.)
In my application I have a blocking task that synchronously reads messages from a queue and feeds them to a running task.
All of this works fine, but the problem that I'm having is that the process does not terminate correctly, since the queue_reader task does not stop.
I've constructed a small example based on the tokio documentation at: https://docs.rs/tokio/1.20.1/tokio/task/fn.spawn_blocking.html
use tokio::sync::mpsc;
use tokio::task;
#[tokio::main]
async fn main() {
let (incoming_tx, mut incoming_rx) = mpsc::channel(2);
// Some blocking task that never ends
let queue_reader = task::spawn_blocking(move || {
loop {
// Stand in for receiving messages from queue
incoming_tx.blocking_send(5).unwrap();
}
});
let mut acc = 0;
// Some complex condition that determines whether the job is done
while acc < 95 {
tokio::select! {
Some(v) = incoming_rx.recv() => {
acc += v;
}
}
}
assert_eq!(acc, 95);
println!("Finalizing thread");
queue_reader.abort(); // This doesn't seem to terminate the queue_reader task
queue_reader.await.unwrap(); // <-- The process hangs on this task.
println!("Done");
}
At first I expected that queue_reader.abort() would terminate the task, but it doesn't. My understanding is that tokio can only abort tasks that use .await internally, because that hands control back to tokio. Is this right?
In order to terminate the queue_reader task I introduced a oneshot channel, over which I signal the termination, as shown in the next snippet.
use tokio::task;
use tokio::sync::{oneshot, mpsc};
#[tokio::main]
async fn main() {
let (incoming_tx, mut incoming_rx) = mpsc::channel(2);
// A new channel to communicate when the process must finish.
let (term_tx, mut term_rx) = oneshot::channel();
// Some blocking task that never ends
let queue_reader = task::spawn_blocking(move || {
// As long as termination is not signalled
while term_rx.try_recv().is_err() {
// Stand in for receiving messages from queue
incoming_tx.blocking_send(5).unwrap();
}
});
let mut acc = 0;
// Some complex condition that determines whether the job is done
while acc < 95 {
tokio::select! {
Some(v) = incoming_rx.recv() => {
acc += v;
}
}
}
assert_eq!(acc, 95);
// Signal termination
term_tx.send(()).unwrap();
println!("Finalizing thread");
queue_reader.await.unwrap();
println!("Done");
}
My question is, is this the canonical/best way to do this, or are there better alternatives?
Tokio cannot terminate CPU-bound/blocking tasks.
It is technically possible to kill OS threads, but generally it is not a good idea, as it's expensive to create new threads and it can leave your program in an invalid state. Even if Tokio decided this was something worth implementing, it would severely limit its implementation: it would be forced into a multithreaded model, just to support the possibility that you'd want to kill a blocking task before it's finished.
Your solution is pretty good; give your blocking task the responsibility for terminating itself, and provide a way to tell it to do so. If this future were part of a library, you could abstract the mechanism away by returning a "handle" to the task that has a cancel() method.
Are there better alternatives? Maybe, but that would depend on other factors. Your solution is good and easily extended, for example if you later needed to send different types of signal to the task.
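If this were a library API, a minimal sketch might look like this (untested; QueueReaderHandle and its methods are illustrative names, not an existing API, built on the same oneshot mechanism as your snippet):
use tokio::sync::{mpsc, oneshot};
use tokio::task::{self, JoinHandle};

/// Hypothetical handle that owns the cancellation channel and join handle.
pub struct QueueReaderHandle {
    cancel_tx: oneshot::Sender<()>,
    join: JoinHandle<()>,
}

impl QueueReaderHandle {
    /// Spawn the blocking reader and return a handle that can stop it.
    pub fn spawn(incoming_tx: mpsc::Sender<i32>) -> Self {
        let (cancel_tx, mut cancel_rx) = oneshot::channel();
        let join = task::spawn_blocking(move || {
            while cancel_rx.try_recv().is_err() {
                if incoming_tx.blocking_send(5).is_err() {
                    break; // receiver dropped, nothing left to feed
                }
            }
        });
        QueueReaderHandle { cancel_tx, join }
    }

    /// Signal termination and wait for the task to finish.
    pub async fn cancel(self) {
        let _ = self.cancel_tx.send(());
        let _ = self.join.await;
    }
}
Callers then never see the channel plumbing; they just call handle.cancel().await when the job is done.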
I am developing a CLI program for rendering template files using the new MiniJinja library by mitsuhiko.
The program is here: https://github.com/benwilber/temple.
I would like to be able to extend the program by allowing the user to load custom Lua scripts for things like custom filters, functions, and tests. However, I am running into Rust lifetime errors that I've not been able to solve.
Basically, I would like to be able to register a Lua function as a custom filter function. But it's showing an error when compiling. Here is the code:
https://github.com/benwilber/temple/compare/0.3.1..lua
Error:
https://gist.github.com/c649a0b240cf299d3dbbe018c24cbcdc
How can I call a Lua function from the MiniJinja add_filter function? I would prefer to try to do this in the regular/safe way. But I'm open to unsafe alternatives if required.
Thanks!
Edit: Posted the same on Reddit and users.rust-lang.org
Lua uses state that is not safe to use from more than one thread.
A consequence of this is that LuaFunction is neither Sync nor Send.
This is reflected in this part of the error message:
help: within `LuaFunction<'_>`, the trait `Sync` is not implemented for `*mut rlua::ffi::lua_State`
In contrast a minijinja::Filter must implement Send + Sync + 'static.
(See https://docs.rs/minijinja/0.5.0/minijinja/filters/trait.Filter.html)
This means we can't share LuaFunctions (or even LuaContext) between calls to the Filters.
One option is to not pass your Lua state into the closure, and instead create a new Lua state on every call, something like this:
env.add_filter(
    "concat2",
    |_env: &Environment, s1: String, s2: String|
        -> anyhow::Result<String, minijinja::Error> {
        // Create a fresh Lua state for this call.
        let lua = rlua::Lua::new();
        lua.context(|lua_ctx| {
            lua_ctx.load(include_str!("temple.lua")).exec().unwrap();
            let globals = lua_ctx.globals();
            let temple: rlua::Table = globals.get("temple").unwrap();
            let filters: rlua::Table = temple.get("_filters").unwrap();
            let concat2: rlua::Function = filters.get("concat2").unwrap();
            let res: String = concat2.call::<_, String>((s1, s2)).unwrap();
            Ok(res)
        })
    },
);
This is likely to have relatively high overhead.
Another option is to create your rlua state on one thread and communicate with it via channels. That would look more like this:
use std::sync::mpsc::{channel, sync_channel, SyncSender};
use std::sync::Mutex;
use std::thread;

pub fn test() {
    let mut env = minijinja::Environment::new();
    let (to_lua_tx, to_lua_rx) = channel::<(String, String, SyncSender<String>)>();
thread::spawn(move|| {
let lua = rlua::Lua::new();
lua.context(move |lua_ctx| {
lua_ctx.load("some_code").exec().unwrap();
let globals = lua_ctx.globals();
let temple: rlua::Table = globals.get("temple").unwrap();
let filters: rlua::Table = temple.get("_filters").unwrap();
let concat2: rlua::Function = filters.get("concat2").unwrap();
while let Ok((s1,s2, channel)) = to_lua_rx.recv() {
let res: String = concat2.call::<_, String>((s1, s2)).unwrap();
channel.send(res).unwrap()
}
})
});
let to_lua_tx = Mutex::new(to_lua_tx);
env.add_filter(
"concat2",
move |_env: &minijinja::Environment,
s1: String,
s2: String|
-> anyhow::Result<String, minijinja::Error> {
let (tx,rx) = sync_channel::<String>(0);
to_lua_tx.lock().unwrap().send((s1,s2,tx)).unwrap();
let res = rx.recv().unwrap();
Ok(res)
}
);
}
It would even be possible to start multiple Lua states this way, but it would require a bit more plumbing.
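A rough sketch of that plumbing (untested, same caveats as the rest): inside test(), replace the single worker thread with several, all pulling requests from one shared receiver behind a mutex:
use std::sync::{Arc, Mutex};

let (to_lua_tx, to_lua_rx) = channel::<(String, String, SyncSender<String>)>();
let to_lua_rx = Arc::new(Mutex::new(to_lua_rx));
for _ in 0..4 {
    let rx = Arc::clone(&to_lua_rx);
    thread::spawn(move || {
        let lua = rlua::Lua::new();
        lua.context(move |lua_ctx| {
            lua_ctx.load("some_code").exec().unwrap();
            let globals = lua_ctx.globals();
            let temple: rlua::Table = globals.get("temple").unwrap();
            let filters: rlua::Table = temple.get("_filters").unwrap();
            let concat2: rlua::Function = filters.get("concat2").unwrap();
            loop {
                // Only one worker blocks on recv at a time; the mutex guard
                // is released as soon as a request arrives.
                let request = rx.lock().unwrap().recv();
                match request {
                    Ok((s1, s2, reply)) => {
                        let res: String = concat2.call::<_, String>((s1, s2)).unwrap();
                        reply.send(res).unwrap();
                    }
                    Err(_) => break, // all filter senders dropped
                }
            }
        })
    });
}
The filter closure itself stays the same, since it only ever talks to to_lua_tx.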
DISCLAIMER: This code is all untested - however, it builds with a stubbed version of minijinja and rlua in the playground. You probably want better error handling and might need some additional code to handle cleanly shutting down all the threads.
I'm trying to create a daemon in Rust which runs a process on a schedule forever:
use scheduled_thread_pool::ScheduledThreadPool;
use std::time::Duration;
fn main() {
    let pool = ScheduledThreadPool::new(1);
    pool.execute_at_fixed_rate(
        Duration::new(5, 0),
        Duration::new(5, 0),
        move || do_business_logic(),
    );
}
This stops the threads as soon as processing reaches the end of main(). How can I keep it running forever?
You should not use the loop { } suggested in the comments.
This has drawbacks:
it will burn CPU needlessly
your program will run forever, even if your do_business_logic() panics.
Let's look at the docs of scheduled_thread_pool::execute_at_fixed_rate; under Panics they say:
If the closure panics, it will not be run again.
That doesn't sound very helpful if you want your program to notice whether do_business_logic panics.
So you could create a std::sync::mpsc channel and just wait to receive a value on it.
You move the sender into the closure you hand over to the thread pool.
If your closure panics, the sender will be dropped and the receiver will stop waiting, so you know something happened.
Some working code:
use scheduled_thread_pool::ScheduledThreadPool;
use std::time::Duration;
use std::sync::mpsc::*;
fn main() {
let pool = ScheduledThreadPool::new(1);
let (tx, rx): (Sender<u8>, Receiver<u8>) = channel();
let killswitch = std::sync::Arc::new(std::sync::Mutex::new(false));
let killswitch2 = killswitch.clone();
pool.execute_at_fixed_rate(
Duration::new(5, 0),
Duration::new(5, 0),
move || {
if *(killswitch2.lock().unwrap()) {
panic!("gotto go!");
}
println!("haha");
tx.send(1).unwrap();
}
);
for (count, _) in rx.iter().enumerate() {
println!("got one");
if count > 0 {
println!("that's boring, killing it");
*(killswitch.lock().unwrap()) = true;
}
}
println!("Have a nice day");
}
Clippy yelled at me for using a Mutex around a bool and suggested using an AtomicBool instead, but I think that is a different rabbit hole.
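For completeness, here is a minimal sketch of what clippy is suggesting; the flag semantics are unchanged, just without the lock:
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

fn main() {
    let killswitch = Arc::new(AtomicBool::new(false));
    let killswitch2 = Arc::clone(&killswitch);
    let worker = std::thread::spawn(move || {
        // Check the flag without locking.
        while !killswitch2.load(Ordering::Relaxed) {
            // do work
        }
    });
    killswitch.store(true, Ordering::Relaxed); // tell it to stop
    worker.join().unwrap();
}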
I am using tungstenite to build a chat server, and the way I want to do it relies on having many threads that communicate with each other through mpsc. I want to start up a new thread for each user that connects to the server and connect them to a websocket, and also have that thread be able to read from mpsc so that the server can send messages out through that connection.
The problem is that the mpsc read blocks the thread, but I can't block the thread if I want to be reading from it. The only thing I could think of to work around that is to make two threads, one for inbound and one for outbound messages, but that requires me to share my websocket connection with both workers, which of course I cannot do.
Here's a heavily truncated version of my code where I try to make two workers in the Action::Connect arm of the match statement, which gives error[E0382]: use of moved value: 'websocket' for trying to move it into the second worker's closure:
extern crate tungstenite;
extern crate workerpool;
use std::net::{TcpListener, TcpStream};
use std::sync::mpsc::{self, Sender, Receiver};
use workerpool::Pool;
use workerpool::thunk::{Thunk, ThunkWorker};
use tungstenite::server::accept;
pub enum Action {
Connect(TcpStream),
Send(String),
}
fn main() {
let (main_send, main_receive): (Sender<Action>, Receiver<Action>) = mpsc::channel();
let worker_pool = Pool::<ThunkWorker<()>>::new(8);
{
// spawn thread to listen for users connecting to the server
let main_send = main_send.clone();
worker_pool.execute(Thunk::of(move || {
let listener = TcpListener::bind(format!("127.0.0.1:{}", 8080)).unwrap();
for (_, stream) in listener.incoming().enumerate() {
main_send.send(Action::Connect(stream.unwrap())).unwrap();
}
}));
}
let mut users: Vec<Sender<String>> = Vec::new();
// process actions from children
while let Some(act) = main_receive.recv().ok() {
match act {
Action::Connect(stream) => {
let mut websocket = accept(stream).unwrap();
let (user_send, user_receive): (Sender<String>, Receiver<String>) = mpsc::channel();
let main_send = main_send.clone();
// thread to read user input and propagate it to the server
worker_pool.execute(Thunk::of(move || {
loop {
let message = websocket.read_message().unwrap().to_string();
main_send.send(Action::Send(message)).unwrap();
}
}));
// thread to take server output and propagate it to the user
worker_pool.execute(Thunk::of(move || {
while let Some(message) = user_receive.recv().ok() {
websocket.write_message(tungstenite::Message::Text(message.clone())).unwrap();
}
}));
users.push(user_send);
}
Action::Send(message) => {
// take user message and echo to all users
for user in &users {
user.send(message.clone()).unwrap();
}
}
}
}
}
If I create just one thread for both input and output in that arm, then user_receive.recv() blocks the thread, so I can't read any messages with websocket.read_message() until I get an mpsc message from the main thread. How can I solve both problems? I considered cloning the websocket, but it doesn't implement Clone, and I don't know if just making a new connection with the same stream is a reasonable thing to try; it seems hacky.
The problem is that the mpsc read blocks the thread
You can use try_recv to avoid blocking the thread. Another implementation of mpsc is crossbeam_channel; that project is recommended as a replacement even by the author of mpsc.
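For illustration, a sketch of how a single worker could service both directions with try_recv. This assumes the underlying TcpStream was given a read timeout (e.g. via stream.set_read_timeout(...) before calling accept), so read_message returns instead of blocking forever:
use std::io::ErrorKind;

// Inside the Action::Connect arm, one worker handles both directions:
worker_pool.execute(Thunk::of(move || {
    loop {
        // Drain any pending outbound messages without blocking.
        while let Ok(message) = user_receive.try_recv() {
            websocket
                .write_message(tungstenite::Message::Text(message))
                .unwrap();
        }
        // Read with a timeout so we get back to the channel soon.
        match websocket.read_message() {
            Ok(msg) => main_send.send(Action::Send(msg.to_string())).unwrap(),
            Err(tungstenite::Error::Io(ref e))
                if e.kind() == ErrorKind::WouldBlock || e.kind() == ErrorKind::TimedOut => {}
            Err(e) => panic!("websocket error: {}", e),
        }
    }
}));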
I want to start up a new thread for each user that connects to the server
I think the async/await approach will be much better from most perspectives than the thread-per-client one. You can read more about it there.
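As a sketch of what that could look like with tokio-tungstenite (an assumption on my part, not from the question): each connection becomes a task, and splitting the socket gives independent read and write halves, so neither direction blocks the other:
use futures_util::{SinkExt, StreamExt};
use tokio::net::TcpStream;
use tokio::sync::mpsc;
use tokio_tungstenite::tungstenite::Message;

async fn handle_client(stream: TcpStream, mut outbound: mpsc::Receiver<String>) {
    let ws = tokio_tungstenite::accept_async(stream).await.unwrap();
    let (mut write, mut read) = ws.split();
    loop {
        tokio::select! {
            Some(msg) = read.next() => {
                // Forward the client's message to the server logic here.
                let _ = msg.unwrap();
            }
            Some(text) = outbound.recv() => {
                write.send(Message::Text(text)).await.unwrap();
            }
        }
    }
}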
I'm trying to write a simple game that runs in the browser, and I'm having a hard time modeling a game loop given the combination of restrictions imposed by the browser, rust, and wasm-bindgen.
A typical game loop in the browser follows this general pattern:
function mainLoop() {
update();
draw();
requestAnimationFrame(mainLoop);
}
If I were to model this exact pattern in rust/wasm-bindgen, it would look like this:
let main_loop = Closure::wrap(Box::new(move || {
update();
draw();
window.request_animation_frame(main_loop.as_ref().unchecked_ref()); // Not legal
}) as Box<FnMut()>);
Unlike JavaScript, I'm unable to reference main_loop from within itself, so this doesn't work.
An alternative approach that someone suggested is to follow the pattern illustrated in the game of life example. At a high level, it involves exporting a type that contains the game state and includes public tick() and render() functions that can be called from within a JavaScript game loop. This doesn't work for me because my game state requires lifetime parameters, since it effectively just wraps a specs World and a Dispatcher struct, the latter of which has lifetime parameters. Ultimately, this means that I can't export it using #[wasm_bindgen].
I'm having a hard time finding ways to work around these restrictions, and am looking for suggestions.
The easiest way to model this is likely to leave invocations of requestAnimationFrame to JS and instead just implement the update/draw logic in Rust.
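That split could look something like this sketch, where update and draw are your own functions as in the snippet above:
use wasm_bindgen::prelude::*;

// JS owns the loop:  function loop() { wasm.tick(); requestAnimationFrame(loop); }
#[wasm_bindgen]
pub fn tick() {
    update();
    draw();
}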
In Rust, however, what you can also do is exploit the fact that a closure which doesn't actually capture any variables is zero-sized, meaning that a Closure<T> of that closure won't allocate memory and you can safely forget it. For example, something like this should work:
#[wasm_bindgen]
pub fn main_loop() {
update();
draw();
let window = ...;
let closure = Closure::wrap(Box::new(|| main_loop()) as Box<Fn()>);
window.request_animation_frame(closure.as_ref().unchecked_ref());
closure.forget(); // not actually leaking memory
}
If your state has lifetimes inside of it, that is unfortunately incompatible with returning back to JS, because when you return all the way back to the JS event loop, all WebAssembly stack frames have been popped, meaning that any lifetime is invalidated. This means that any game state persisted across iterations of main_loop will need to be 'static.
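One way to satisfy that requirement is to keep the state in a 'static cell, for example like this sketch (GameState is a stand-in for your own type):
use std::cell::RefCell;
use wasm_bindgen::prelude::*;

thread_local! {
    // Lives for the whole program, so it survives returns to the JS event loop.
    static STATE: RefCell<GameState> = RefCell::new(GameState::new());
}

#[wasm_bindgen]
pub fn main_loop() {
    STATE.with(|state| {
        let mut state = state.borrow_mut();
        state.update();
        state.draw();
    });
    // ...re-request the animation frame as above...
}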
I'm a Rust novice, but here's how I addressed the same issue.
You can eliminate the problematic window.request_animation_frame recursion and implement an FPS cap at the same time by invoking window.request_animation_frame from a window.set_interval callback which checks a Rc<RefCell<bool>> or something to see if there's an animation frame request still pending. I'm not sure if the inactive tab behavior will be any different in practice.
I put the bool into my application state since I'm using an Rc<RefCell<...>> for that anyway for other event handling. I haven't checked that the code below compiles as is, but here are the relevant parts of how I'm doing this:
pub struct MyGame {
...
should_request_render: bool, // Don't request another render until the previous runs, init to false since we'll fire the first one immediately.
}
...
let window = web_sys::window().expect("should have a window in this context");
let application_reference = Rc::new(RefCell::new(MyGame::new()));
let request_animation_frame = { // request_animation_frame is not forgotten! Its ownership is moved into the timer callback.
let application_reference = application_reference.clone();
let request_animation_frame_callback = Closure::wrap(Box::new(move || {
let mut application = application_reference.borrow_mut();
application.should_request_render = true;
application.handle_animation_frame(); // handle_animation_frame being your main loop.
}) as Box<FnMut()>);
let window = window.clone();
move || {
window
.request_animation_frame(
request_animation_frame_callback.as_ref().unchecked_ref(),
)
.unwrap();
}
};
request_animation_frame(); // fire the first request immediately
let timer_closure = Closure::wrap(
Box::new(move || { // move both request_animation_frame and application_reference here.
let mut application = application_reference.borrow_mut();
if application.should_request_render {
application.should_request_render = false;
request_animation_frame();
}
}) as Box<FnMut()>
);
window.set_interval_with_callback_and_timeout_and_arguments_0(
timer_closure.as_ref().unchecked_ref(),
25, // minimum ms per frame
)?;
timer_closure.forget(); // this leaks it, you could store it somewhere or whatever, depends if it's guaranteed to live as long as the page
You can store the result of set_interval and the timer_closure in Options in your game state so that your game can clean itself up if needed for some reason (maybe? I haven't tried this, and it would seem to cause a free of self?). The circular reference won't erase itself unless broken (you're then storing Rcs to the application inside the application effectively). It should also enable you to change the max fps while running, by stopping the interval and creating another using the same closure.