I have an iced application that I want to control through a web UI. For this I use axum and spawn it in a separate tokio task. I would like to send a message from the axum endpoint to the iced application, but I can neither invoke the iced application's update method nor listen to the crossbeam channel from inside the iced application.
My code looks like this right now:
mod messages;
mod ui;
mod web;
use crossbeam_channel::unbounded;
use iced::Application;
use iced::Settings;
use tokio;
#[tokio::main]
async fn main() -> iced::Result {
    // `s` goes to the web server task; `r` is meant to be consumed by the
    // iced application, but I don't know how to get it in there
    let (s, r) = unbounded::<messages::WebToUIMessage>();
    let server_handle = tokio::task::spawn(async move { web::run_server(s).await });
    let result = ui::MediaHub::run(Settings::default());
    // We abort the server when the UI finishes
    server_handle.abort();
    result
}
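For reference, a minimal sketch of what the web module might look like (assuming axum 0.7; the /notify route, the notify handler, and the WebToUIMessage::Refresh variant are made up for illustration). The crossbeam Sender is Clone, so it can be handed to every handler through axum's State extractor; the open question is the other direction, i.e. getting `r` polled from inside the iced application, which would typically happen in a custom iced subscription.
use axum::{extract::State, routing::post, Router};
use crossbeam_channel::Sender;

use crate::messages::WebToUIMessage;

pub async fn run_server(sender: Sender<WebToUIMessage>) {
    // The sender becomes shared application state; axum clones it per handler call.
    let app = Router::new()
        .route("/notify", post(notify))
        .with_state(sender);

    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

// Illustrative handler: pushes a message onto the channel for the UI side.
async fn notify(State(sender): State<Sender<WebToUIMessage>>) {
    // send() on an unbounded crossbeam channel never blocks
    let _ = sender.send(WebToUIMessage::Refresh);
}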
Related
I'm new to Rust and encountered an issue while building an API with warp. I'm trying to pass some requests to another thread with a channel (trying to avoid using Arc/Mutex). However, I noticed that when I pass a std::sync::mpsc::Sender to a warp handler, I get this error:
"std::sync::mpsc::Sender cannot be shared between threads safely"
and
"the trait `Sync` is not implemented for `std::sync::mpsc::Sender`"
Can someone point me in the right direction?
use std::sync::mpsc::Sender;
pub async fn init_server(run_tx: Sender<Packet>) {
    let store = Store::new();
    let store_filter = warp::any().map(move || store.clone());
    let run_tx_filter = warp::any().map(move || run_tx.clone());
    let update_item = warp::get()
        .and(warp::path("v1"))
        .and(warp::path("auth"))
        .and(warp::path::end())
        .and(post_json())
        .and(store_filter.clone())
        .and(run_tx_filter.clone()) // where I'm trying to pass the Sender
        .and_then(request_token);
    let routes = update_item;
    println!("HTTP server started on port 3030");
    warp::serve(routes).run(([127, 0, 0, 1], 3030)).await;
}
pub async fn request_token(
    req: TokenRequest,
    store: Store,
    run_tx: Sender<Packet>,
) -> Result<impl warp::Reply, warp::Rejection> {
    let (tmp_tx, tmp_rx) = std::sync::mpsc::channel();
    run_tx
        .send(Packet::IsPlayerLoggedIn(req.address, tmp_tx))
        .unwrap();
    let logged_in = tmp_rx.recv().unwrap();
    if logged_in {
        return Ok(warp::reply::with_status(
            "Already logged in",
            http::StatusCode::BAD_REQUEST,
        ));
    }
    Ok(warp::reply::with_status("some token", http::StatusCode::OK))
}
I've looked through some of the examples for warp, and I was also wondering what some good resources are for getting knowledgeable about the crate. Thank you!
This is because you're using std::sync::mpsc::Sender, which is !Sync, so you won't be able to share it between threads. You don't want to use it anyway, since it's blocking.
When you use async functionality in Rust, you need to provide a runtime for it. The good news is that warp runs on the tokio runtime, so you have access to tokio::sync::mpsc. If you take a look at that link, the Sender for that mpsc implementation implements Sync and Send, so it's safe to share across threads.
TLDR:
Use tokio::sync::mpsc instead of std::sync::mpsc and this won't be an issue.
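For completeness, roughly what the handler from the question could look like after the switch (a sketch, assuming the Packet variant is changed to carry a tokio::sync::oneshot::Sender for the one-off reply; this is an adaptation, not the asker's exact types):
use tokio::sync::{mpsc, oneshot};

pub async fn request_token(
    req: TokenRequest,
    store: Store,
    run_tx: mpsc::Sender<Packet>,
) -> Result<impl warp::Reply, warp::Rejection> {
    // oneshot replaces the temporary std channel used for the single reply
    let (tmp_tx, tmp_rx) = oneshot::channel();
    run_tx
        .send(Packet::IsPlayerLoggedIn(req.address, tmp_tx))
        .await
        .expect("game thread has shut down");
    // awaiting the reply suspends the task instead of blocking the executor thread
    let logged_in = tmp_rx.await.expect("reply channel closed");
    if logged_in {
        return Ok(warp::reply::with_status(
            "Already logged in",
            http::StatusCode::BAD_REQUEST,
        ));
    }
    Ok(warp::reply::with_status("some token", http::StatusCode::OK))
}
If the thread consuming run_tx is not async, tokio's Receiver::blocking_recv works on that side, and the oneshot Sender's send is synchronous, so the reply can be sent without an executor.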
I am implementing a turn-based game server using tonic. I use gRPC streaming for the game so that players get sent updates about the moves their opponents make once they connect, more or less like this (simplified):
service Game {
  rpc Connect(ConnectRequest) returns (stream CommandList);
  rpc PerformAction(GameRequest) returns (CommandList);
}
The way I currently handle this is that I store a Sender object for each player that connects, so that I can update them later:
static CHANNELS: Lazy<DashMap<PlayerId, Sender<Result<CommandList, Status>>>> = (...)
#[tonic::async_trait]
impl MyGame for GameService {
    type ConnectStream = ReceiverStream<Result<CommandList, Status>>;

    async fn connect(
        &self,
        request: Request<ConnectRequest>,
    ) -> Result<Response<Self::ConnectStream>, Status> {
        let (tx, rx) = mpsc::channel(4);
        // Store channel to provide future updates:
        CHANNELS.insert(player_id, tx);
        Ok(Response::new(ReceiverStream::new(rx)))
    }
}
This way, when game actions come in, I can check the CHANNELS map to see which opponents are connected and send them an update with the new game state:
async fn perform_action(
    &self,
    request: Request<GameRequest>,
) -> Result<Response<CommandList>, Status> {
    if let Some(channel) = CHANNELS.get(&player_id) {
        // Send update to opponent
    }
}
This approach is generally working quite well. One immediate problem, however, is that the CHANNELS map grows indefinitely as players connect. I haven't been able to find an explicit callback in tonic for when a user disconnects from their gRPC streaming session, where I could clean up the map. Does something like that exist? Alternatively, is this a complete misuse of the API and should I be doing something totally different? :)
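One pattern that would fit the snippet above (a sketch, not an official tonic callback): when the client disconnects, tonic drops the ReceiverStream it was handed, which closes the channel, and tokio's Sender::closed() completes at that point, so a small watchdog task can do the cleanup. This assumes PlayerId is cheap to clone.
async fn connect(
    &self,
    request: Request<ConnectRequest>,
) -> Result<Response<Self::ConnectStream>, Status> {
    let (tx, rx) = mpsc::channel(4);
    CHANNELS.insert(player_id.clone(), tx.clone());

    // Watchdog: `closed()` resolves once the receiving half (the
    // ReceiverStream owned by tonic) is dropped, i.e. after the client
    // disconnects, so the map entry can be removed then.
    tokio::spawn(async move {
        tx.closed().await;
        CHANNELS.remove(&player_id);
    });

    Ok(Response::new(ReceiverStream::new(rx)))
}
Alternatively, a send to a disconnected player fails in perform_action once the receiver is gone, which is another signal that the entry can be dropped from the map.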
extern crate redis;
use redis::aio::ConnectionManager;
use std::env;
pub async fn connect() -> redis::aio::ConnectionManager {
    let redis_host_name =
        env::var("REDIS_HOSTNAME").expect("Missing environment variable REDIS_HOSTNAME");
    let redis_tls = env::var("REDIS_TLS").map(|redis_tls| redis_tls == "1").unwrap_or(false);
    let uri_scheme = match redis_tls {
        true => "rediss",
        false => "redis",
    };
    let redis_conn_url = format!("{}://{}", uri_scheme, redis_host_name);
    let client = redis::Client::open(redis_conn_url).expect("Invalid connection URL");
    ConnectionManager::new(client).await.unwrap()
}
I have a pub/sub system with Redis in Rust. For now it works perfectly when I create the connections separately (one to publish and another to subscribe), but if the Redis server goes down, it does not try to reconnect. To try to solve this, I found the ConnectionManager described at the following link:
https://docs.rs/redis/0.21.5/redis/aio/struct.ConnectionManager.html
which implements reconnection. The problem is that I can't find documentation on how to use it. With the above function I return a ConnectionManager, which I then want to pass to my routes through the app data in Actix Web. But I don't understand how to use that connection to publish and subscribe to the channel, since the compiler tells me those functions don't exist on ConnectionManager. Any ideas?
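For the publish side, ConnectionManager implements the crate's async ConnectionLike, so ordinary commands can be issued through it. A minimal sketch, assuming redis 0.21's async API (the channel name and payload are made up):
use redis::aio::ConnectionManager;

// Publishing works through the manager because it implements ConnectionLike,
// and the manager handles reconnection under the hood.
async fn publish(manager: &mut ConnectionManager, payload: &str) -> redis::RedisResult<()> {
    redis::cmd("PUBLISH")
        .arg("my_channel")
        .arg(payload)
        .query_async(manager)
        .await
}
Subscribing is a different story: pub/sub is not driven through ConnectionManager in this version, so the subscriber side typically keeps its own dedicated connection (for example one obtained from the Client and converted with into_pubsub()) together with its own reconnect loop.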
I am using tungstenite to build a chat server, and the way I want to do it relies on having many threads that communicate with each other through mpsc. I want to start up a new thread for each user that connects to the server and connect them to a websocket, and also have that thread be able to read from mpsc so that the server can send messages out through that connection.
The problem is that the mpsc read blocks the thread, but I can't block the thread if I also want to keep reading from the websocket. The only workaround I could think of is to make two threads, one for inbound and one for outbound messages, but that requires sharing my websocket connection between both workers, which of course I cannot do.
Here's a heavily truncated version of my code where I try to make two workers in the Action::Connect arm of the match statement, which gives error[E0382]: use of moved value: `websocket` when I try to move it into the second worker's closure:
extern crate tungstenite;
extern crate workerpool;
use std::net::{TcpListener, TcpStream};
use std::sync::mpsc::{self, Sender, Receiver};
use workerpool::Pool;
use workerpool::thunk::{Thunk, ThunkWorker};
use tungstenite::server::accept;
pub enum Action {
    Connect(TcpStream),
    Send(String),
}

fn main() {
    let (main_send, main_receive): (Sender<Action>, Receiver<Action>) = mpsc::channel();
    let worker_pool = Pool::<ThunkWorker<()>>::new(8);
    {
        // spawn thread to listen for users connecting to the server
        let main_send = main_send.clone();
        worker_pool.execute(Thunk::of(move || {
            let listener = TcpListener::bind(format!("127.0.0.1:{}", 8080)).unwrap();
            for (_, stream) in listener.incoming().enumerate() {
                main_send.send(Action::Connect(stream.unwrap())).unwrap();
            }
        }));
    }
    let mut users: Vec<Sender<String>> = Vec::new();
    // process actions from children
    while let Some(act) = main_receive.recv().ok() {
        match act {
            Action::Connect(stream) => {
                let mut websocket = accept(stream).unwrap();
                let (user_send, user_receive): (Sender<String>, Receiver<String>) = mpsc::channel();
                let main_send = main_send.clone();
                // thread to read user input and propagate it to the server
                worker_pool.execute(Thunk::of(move || {
                    loop {
                        let message = websocket.read_message().unwrap().to_string();
                        main_send.send(Action::Send(message)).unwrap();
                    }
                }));
                // thread to take server output and propagate it to the user
                worker_pool.execute(Thunk::of(move || {
                    while let Some(message) = user_receive.recv().ok() {
                        websocket.write_message(tungstenite::Message::Text(message.clone())).unwrap();
                    }
                }));
                users.push(user_send);
            }
            Action::Send(message) => {
                // take user message and echo to all users
                for user in &users {
                    user.send(message.clone()).unwrap();
                }
            }
        }
    }
}
If I create just one thread for both input and output in that arm, then user_receive.recv() blocks the thread, so I can't read any messages with websocket.read_message() until I get an mpsc message from the main thread. How can I solve both problems? I considered cloning the websocket, but it doesn't implement Clone, and I don't know whether just making a new connection with the same stream is a reasonable thing to try; it seems hacky.
The problem is that the mpsc read blocks the thread
You can use try_recv to avoid blocking the thread. Another implementation of mpsc is crossbeam_channel; that project is recommended as a replacement even by the author of std's mpsc.
I want to start up a new thread for each user that connects to the server
I think the async/await approach will be much better from most perspectives than the thread-per-client one. You can read more about it there.
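To make the "can't share the websocket" part concrete, here is a rough sketch of the per-client handling in the async style, using tokio-tungstenite (an assumption; the question uses blocking tungstenite, and handle_client together with its channel parameters is made up). Splitting the stream gives independent read and write halves, so the two directions can live in two tasks without cloning anything:
use futures_util::{SinkExt, StreamExt};
use tokio::net::TcpStream;
use tokio::sync::mpsc;
use tokio_tungstenite::tungstenite::Message;

async fn handle_client(
    stream: TcpStream,
    to_server: mpsc::Sender<String>,
    mut from_server: mpsc::Receiver<String>,
) {
    let websocket = tokio_tungstenite::accept_async(stream).await.unwrap();
    // One websocket, two halves: a Sink for writing and a Stream for reading.
    let (mut write, mut read) = websocket.split();

    // Outbound: forward server messages to this client's websocket.
    tokio::spawn(async move {
        while let Some(message) = from_server.recv().await {
            if write.send(Message::Text(message)).await.is_err() {
                break;
            }
        }
    });

    // Inbound: forward this client's text messages to the server.
    while let Some(Ok(message)) = read.next().await {
        if let Message::Text(text) = message {
            let _ = to_server.send(text).await;
        }
    }
}
The server side would keep the mpsc::Sender<String> paired with from_server in its users list, much like user_send in the original code.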
Does Rust currently have a library that implements functions similar to JavaScript's setTimeout and setInterval? That is, a library where multiple setTimeout and setInterval calls can be used to manage multiple tasks at the same time.
I feel that tokio is not particularly convenient to use for this. I imagine it being used like this:
fn callback1() {
    println!("callback1");
}

fn callback2() {
    println!("callback2");
}
set_interval(callback1, 10);
set_interval(callback1, 20);
set_timeout(callback1, 30);
Of course, I can simulate a function to make it work:
// just for test, not what I wanted at all
type rust_listener_callback = fn();

fn set_interval(func: rust_listener_callback, duration: i32) {
    func()
}

fn set_timeout(func: rust_listener_callback, duration: i32) {
    func();
}
If set_interval is implemented this way, composing multiple timers, adding and removing them dynamically, and cancelling them are not particularly convenient:
use std::time::Duration;
use tokio::time;

async fn set_interval(func: rust_listener_callback, duration: u64) {
    let mut interval = time::interval(Duration::from_millis(duration));
    tokio::spawn(async move {
        loop {
            interval.tick().await;
            func()
        }
    }).await;
}
// emm maybe loop can be removed, just a sample
What I want to know is whether there is a library to do this, instead of writing it myself.
I have some ideas about how I would write it myself: generally, all the callbacks are put into a task queue or task tree, and then tokio::time::delay_for can be used to execute them one by one, but the details are actually more complicated.
However, I think this general capability may already have been implemented somewhere and I just haven't found it yet, so I want to ask here. Thank you very much.
And importantly, I hope it can run on a single thread.
setTimeout can be done like this without the need for a crate:
tokio::spawn(async move {
    tokio::time::sleep(Duration::from_secs(5)).await;
    // code goes here
});
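The setInterval counterpart can follow the same pattern with tokio::time::interval; keeping the returned JoinHandle around also gives you cancellation, roughly like clearInterval. A sketch (the set_interval helper below is made up, not from a crate):
use std::time::Duration;
use tokio::task::JoinHandle;

// Made-up helper: runs `f` every `ms` milliseconds until the handle is aborted.
fn set_interval(f: impl Fn() + Send + 'static, ms: u64) -> JoinHandle<()> {
    tokio::spawn(async move {
        let mut interval = tokio::time::interval(Duration::from_millis(ms));
        interval.tick().await; // the first tick completes immediately; skip it
        loop {
            interval.tick().await;
            f();
        }
    })
}

#[tokio::main]
async fn main() {
    let handle = set_interval(|| println!("tick"), 10);
    tokio::time::sleep(Duration::from_millis(50)).await;
    handle.abort(); // the clearInterval equivalent
}
With #[tokio::main(flavor = "current_thread")] the same code stays on a single thread, which the question asked for.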
I asked myself the same question a few days ago, created a solution for this (for tokio runtimes), and found your stackoverflow post just now.
https://crates.io/crates/tokio-js-set-interval
code.rs
use std::time::Duration;
use tokio_js_set_interval::{set_interval, set_timeout};
#[tokio::main]
async fn main() {
    println!("hello1");
    set_timeout!(println!("hello2"), 0);
    println!("hello3");
    set_timeout!(println!("hello4"), 0);
    println!("hello5");
    // give enough time before tokio's runtime exits
    tokio::time::sleep(Duration::from_millis(1)).await;
}
But this must be used with caution. There is no guarantee that the futures will be executed (because tokio's runtime must run long enough). Use it only:
for educational purposes,
and if you have low-priority background tasks that you don't necessarily expect to be executed.
I created a library just for this, which allows setting many timeouts using only one tokio task (instead of spawning a new task for each timeout); this provides better performance and lower memory usage.
The library also supports cancelling timeouts, and provides some ways to optimize the performance of the timeouts.
Check it out:
https://crates.io/crates/set_timeout
Usage example:
#[tokio::main]
async fn main() {
    let scheduler = TimeoutScheduler::new(None);

    // schedule a future which will run after 1.234 seconds from now.
    scheduler.set_timeout(Duration::from_secs_f32(1.234), async move {
        println!("It works!");
    });

    // make sure that the main task doesn't end before the timeout is executed, because if the
    // main task returns the runtime stops running.
    tokio::time::sleep(Duration::from_secs(2)).await;
}