I am implementing a turn-based game server using tonic. I use gRPC streaming for the game so that players get sent updates about the moves their opponents make once they connect, more or less like this (simplified):
service Game {
  rpc Connect(ConnectRequest) returns (stream CommandList);
  rpc PerformAction(GameRequest) returns (CommandList);
}
The way I currently handle this is that I store a Sender object for each player that connects, so that I can update them later:
static CHANNELS: Lazy<DashMap<PlayerId, Sender<Result<CommandList, Status>>>> = (...)
#[tonic::async_trait]
impl MyGame for GameService {
    type ConnectStream = ReceiverStream<Result<CommandList, Status>>;

    async fn connect(
        &self,
        request: Request<ConnectRequest>,
    ) -> Result<Response<Self::ConnectStream>, Status> {
        let (tx, rx) = mpsc::channel(4);
        // Store channel to provide future updates:
        CHANNELS.insert(player_id, tx);
        Ok(Response::new(ReceiverStream::new(rx)))
    }
}
This way, when game actions come in, I can check the CHANNELS map to see which opponents are connected and send them an update with the new game state:
async fn perform_action(
    &self,
    request: Request<GameRequest>,
) -> Result<Response<CommandList>, Status> {
    if let Some(channel) = CHANNELS.get(&player_id) {
        // Send update to opponent
    }
}
This approach is generally working quite well. One immediate problem, however, is that the CHANNELS map grows without bound as players connect. I haven't been able to find an explicit callback in tonic for when a user disconnects from their gRPC streaming session, where I could clean up the map. Does something like that exist? Alternatively, is this a complete misuse of the API and should I be doing something totally different? :)
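The only workaround I can think of so far is to prune entries lazily: since the client dropping its stream also drops the Receiver, a failed send tells me that player is gone. A rough sketch of what I mean inside perform_action (opponent_id and update are placeholders, and I'm assuming the tokio mpsc Sender from above):

// Clone the Sender out of the map first, so no DashMap guard is held across the await.
let sender = CHANNELS.get(&opponent_id).map(|entry| entry.value().clone());
if let Some(sender) = sender {
    if sender.send(Ok(update)).await.is_err() {
        // The ReceiverStream (and its Receiver) was dropped when the client
        // disconnected, so the channel is closed: prune the stale entry.
        CHANNELS.remove(&opponent_id);
    }
}

But that only cleans up an entry when I happen to send to it, so an explicit disconnect hook would still be nicer.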
First off: I know running Rust on an ESP32 isn't a very common practice yet, and some (quite a bit of) trouble is to be expected. But I seem to have hit a roadblock.
What works:
flashing and running the code on an ESP32
passing along the certificates in the src/certificates directory
WiFi connection (simple WPA Personal, nothing fancy like WPA Enterprise)
publishing and subscribing to topics using MQTT
What doesn't work:
publishing and subscribing to AWS IoT (Core) using MQTT. This needs certificates, and as far as I'm aware I'm handling them properly (see code below).
Some additional info (see code below):
server.cert.crt is renamed from the AWS provided root-CA.crt
client.cert.pem is renamed from the AWS provided my-thing-rev1.cert.pem
client.private.key is renamed from the AWS provided my-thing-rev1.private.key
I also received my-thing-rev1.public.key and my-thing-rev1-Policy, but I don't think I need these...?
I know this is not the proper way of implementing this (I should not provide the certificates directly but instead use a service to get them), but this is a very basic POC
the code works fine if I don't want to connect to AWS, but instead use my own broker or broker.emqx.io for testing (even with the certificates included)
This is the code I'm currently using (heavily based on Rust on ESP32 STD demo app):
use embedded_svc::httpd::Result;
use embedded_svc::mqtt::client::{Connection, MessageImpl, QoS};
use esp_idf_svc::mqtt::client::{EspMqttClient, MqttClientConfiguration};
use esp_idf_svc::tls::X509;
use esp_idf_sys::EspError;
// other needed imports (not relevant here)
extern crate dotenv_codegen;
extern crate core;
const AWS_IOT_ENDPOINT: &str = dotenv!("AWS_IOT_ENDPOINT");
const AWS_IOT_CLIENT_ID: &str = dotenv!("AWS_IOT_CLIENT_ID");
const AWS_IOT_TOPIC: &str = dotenv!("AWS_IOT_TOPIC");
fn main() -> Result<()> {
    esp_idf_sys::link_patches();
    // other code
    let mqtt_client: EspMqttClient<ConnState<MessageImpl, EspError>> = test_mqtt_client()?;
    // more code
    Ok(())
}
fn convert_certificate(mut certificate_bytes: Vec<u8>) -> X509<'static> {
    // append NUL
    certificate_bytes.push(0);
    // convert the certificate
    let certificate_slice: &[u8] = unsafe {
        let ptr: *const u8 = certificate_bytes.as_ptr();
        let len: usize = certificate_bytes.len();
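        // the Vec is deliberately leaked (mem::forget below) so the raw slice stays valid for the 'static lifetime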
        mem::forget(certificate_bytes);
        slice::from_raw_parts(ptr, len)
    };
    // return the certificate file in the correct format
    X509::pem_until_nul(certificate_slice)
}
fn test_mqtt_client() -> Result<EspMqttClient<ConnState<MessageImpl, EspError>>> {
    info!("About to start MQTT client");

    let server_cert_bytes: Vec<u8> = include_bytes!("certificates/server.cert.crt").to_vec();
    let client_cert_bytes: Vec<u8> = include_bytes!("certificates/client.cert.pem").to_vec();
    let private_key_bytes: Vec<u8> = include_bytes!("certificates/client.private.key").to_vec();

    let server_cert: X509 = convert_certificate(server_cert_bytes);
    let client_cert: X509 = convert_certificate(client_cert_bytes);
    let private_key: X509 = convert_certificate(private_key_bytes);

    // TODO: fix the following error: `E (16903) esp-tls-mbedtls: mbedtls_ssl_handshake returned -0x7280`
    let conf = MqttClientConfiguration {
        client_id: Some(AWS_IOT_CLIENT_ID),
        crt_bundle_attach: Some(esp_idf_sys::esp_crt_bundle_attach),
        server_certificate: Some(server_cert),
        client_certificate: Some(client_cert),
        private_key: Some(private_key),
        ..Default::default()
    };

    let (mut client, mut connection) = EspMqttClient::new_with_conn(AWS_IOT_ENDPOINT, &conf)?;
    info!("MQTT client started");

    // Need to immediately start pumping the connection for messages, or else subscribe() and publish() below will not work
    // Note that when using the alternative constructor - `EspMqttClient::new` - you don't need to
    // spawn a new thread, as the messages will be pumped with a backpressure into the callback you provide.
    // Yet, you still need to efficiently process each message in the callback without blocking for too long.
    //
    // Note also that if you go to http://tools.emqx.io/ and then connect and send a message to the specified topic,
    // the client configured here should receive it.
    thread::spawn(move || {
        info!("MQTT Listening for messages");
        while let Some(msg) = connection.next() {
            match msg {
                Err(e) => info!("MQTT Message ERROR: {}", e),
                Ok(msg) => info!("MQTT Message: {:?}", msg),
            }
        }
        info!("MQTT connection loop exit");
    });

    client.subscribe(AWS_IOT_TOPIC, QoS::AtMostOnce)?;
    info!("Subscribed to all topics ({})", AWS_IOT_TOPIC);

    client.publish(
        AWS_IOT_TOPIC,
        QoS::AtMostOnce,
        false,
        format!("Hello from {}!", AWS_IOT_TOPIC).as_bytes(),
    )?;
    info!("Published a hello message to topic \"{}\".", AWS_IOT_TOPIC);

    Ok(client)
}
Here are the final lines of output when I try to run this on the device (it's set up to compile, flash to the device, and monitor (debug mode) when running cargo run):
I (16913) esp32_aws_iot_with_std: About to start MQTT client
I (16923) esp32_aws_iot_with_std: MQTT client started
I (16923) esp32_aws_iot_with_std: MQTT Listening for messages
I (16933) esp32_aws_iot_with_std: MQTT Message: BeforeConnect
I (17473) esp-x509-crt-bundle: Certificate validated
E (19403) MQTT_CLIENT: mqtt_message_receive: transport_read() error: errno=119 # <- This is the actual error
E (19403) MQTT_CLIENT: esp_mqtt_connect: mqtt_message_receive() returned -1
E (19413) MQTT_CLIENT: MQTT connect failed
I (19413) esp32_aws_iot_with_std: MQTT Message ERROR: ESP_FAIL
I (19423) esp32_aws_iot_with_std: MQTT Message: Disconnected
E (19433) MQTT_CLIENT: Client has not connected
I (19433) esp32_aws_iot_with_std: MQTT connection loop exit
I (24423) esp_idf_svc::eventloop: Dropped
I (24423) esp_idf_svc::wifi: Stop requested
I (24423) wifi:state: run -> init (0)
I (24423) wifi:pm stop, total sleep time: 10737262 us / 14862601 us
W (24423) wifi:<ba-del>idx
I (24433) wifi:new:<1,0>, old:<1,1>, ap:<1,1>, sta:<1,0>, prof:1
W (24443) wifi:hmac tx: ifx0 stop, discard
I (24473) wifi:flush txq
I (24473) wifi:stop sw txq
I (24473) wifi:lmac stop hw txq
I (24473) esp_idf_svc::wifi: Stopping
I (24473) esp_idf_svc::wifi: Disconnect requested
I (24473) esp_idf_svc::wifi: Stop requested
I (24483) esp_idf_svc::wifi: Stopping
I (24483) wifi:Deinit lldesc rx mblock:10
I (24503) esp_idf_svc::wifi: Driver deinitialized
I (24503) esp_idf_svc::wifi: Dropped
I (24503) esp_idf_svc::eventloop: Dropped
Error: ESP_FAIL
This error seems to indicate that the buffer holding the incoming data is full and can't hold any more data, but I'm not sure. And I definitely don't know how to fix it.
(I assume the actual certificate handling is done properly)
When I run the following command, I do get the message in AWS IoT (MQTT test client):
mosquitto_pub -h my.amazonawsIoT.com --cafile server.cert.crt --cert client.cert.pem --key client.private.key -i basicPubSub -t my/topic -m 'test'
Does anyone have some more experience with this who can point me in the right direction?
Is this actually a buffer error, and if so, how do I mitigate it? Do I need to increase the buffer size somehow? (It is running on a basic ESP32 revision 1, ESP32_Devkitc_v4, if that helps.) As far as I can tell this version has a 4MB flash size, so that might explain the buffer overflow, although I think this should be enough. The total memory used is under 35% of the total storage (App/part. size: 1347344/4128768 bytes, 32.63%).
UPDATE 1: I have been made aware that this data is stored in RAM, not in flash memory (it didn't cross my mind at the time), but I'm not entirely sure how large the RAM on my specific device is (ESP32 revision 1, ESP32_Devkitc_v4). My best guess is 320KB, but I'm not sure.
UPDATE 2: I've tried changing the buffer size like so:
let conf = MqttClientConfiguration {
    client_id: Some(AWS_IOT_CLIENT_ID),
    crt_bundle_attach: Some(esp_idf_sys::esp_crt_bundle_attach),
    server_certificate: Some(server_cert),
    client_certificate: Some(client_cert),
    private_key: Some(private_key),
    buffer_size: 50,     // added this (tried various sizes)
    out_buffer_size: 50, // added this (tried various sizes)
    ..Default::default()
};
I've tried various combinations, but this doesn't seem to change much: either I get the exact same error, or this one (when choosing smaller numbers, for example 10):
E (18303) MQTT_CLIENT: Connect message cannot be created
E (18303) MQTT_CLIENT: MQTT connect failed
E (18313) MQTT_CLIENT: Client has not connected
I'm not sure how big this buffer size should be (when sending simple timestamps to AWS IoT), and I can't find any documentation on what this number represents: is it in bits, kilobits, bytes? No idea.
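My best guess right now (judging by the underlying ESP-IDF esp_mqtt_client_config_t field of the same name, so treat this as an assumption) is that the value is in bytes, with a default of 1024, which would also explain why very small values like 10 or 50 can't even hold the CONNECT packet. Under that assumption, something in the low kilobytes would look like:

let conf = MqttClientConfiguration {
    client_id: Some(AWS_IOT_CLIENT_ID),
    crt_bundle_attach: Some(esp_idf_sys::esp_crt_bundle_attach),
    server_certificate: Some(server_cert),
    client_certificate: Some(client_cert),
    private_key: Some(private_key),
    buffer_size: 4096,     // bytes (receive buffer); the ESP-IDF default is 1024
    out_buffer_size: 4096, // bytes (send buffer)
    ..Default::default()
};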
I'm new to Rust and encountered an issue while building an API with warp. I'm trying to pass some requests to another thread with a channel (trying to avoid using Arc/Mutex). Still, I noticed that when I pass a std::sync::mpsc::Sender to a warp handler, I get this error:
"std::sync::mpsc::Sender cannot be shared between threads safely"
and
"the trait Sync is not implemented for std::sync::mpsc::Sender"
Can someone lead me in the right direction?
use std::sync::mpsc::Sender;
pub async fn init_server(run_tx: Sender<Packet>) {
    let store = Store::new();
    let store_filter = warp::any().map(move || store.clone());
    let run_tx_filter = warp::any().map(move || run_tx.clone());

    let update_item = warp::get()
        .and(warp::path("v1"))
        .and(warp::path("auth"))
        .and(warp::path::end())
        .and(post_json())
        .and(store_filter.clone())
        .and(run_tx_filter.clone()) // where I'm trying to send "Sender"
        .and_then(request_token);

    let routes = update_item;

    println!("HTTP server started on port 8080");
    warp::serve(routes).run(([127, 0, 0, 1], 3030)).await;
}
pub async fn request_token(
    req: TokenRequest,
    store: Store,
    run_tx: Sender<Packet>,
) -> Result<impl warp::Reply, warp::Rejection> {
    let (tmp_tx, tmp_rx) = std::sync::mpsc::channel();
    run_tx
        .send(Packet::IsPlayerLoggedIn(req.address, tmp_tx))
        .unwrap();
    let logged_in = tmp_rx.recv().unwrap();
    if logged_in {
        return Ok(warp::reply::with_status(
            "Already logged in",
            http::StatusCode::BAD_REQUEST,
        ));
    }
    Ok(warp::reply::with_status("some token", http::StatusCode::OK))
}
I've looked through some of the examples for warp, and was also wondering what some good resources are for getting to know the crate better. Thank you!
This is because you're using std::sync::mpsc::Sender, which does not implement Sync, so you won't be able to share it between threads. You don't want to use it here anyway, since it's blocking.
When you use async functionality in Rust, you need to provide a runtime for it. The good news is that warp runs on the tokio runtime, so you have access to tokio::sync::mpsc. If you take a look at those docs, the Sender in that mpsc implementation is both Sync and Send, so it's safe to share across threads.
TLDR:
Use tokio::sync::mpsc instead of std::sync::mpsc and this won't be an issue.
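A rough sketch of what the handler could look like after the switch, reusing the names from your code (note this assumes Packet is changed so IsPlayerLoggedIn carries a tokio::sync::oneshot::Sender<bool> for the reply, which the handler can then await instead of blocking on):

use tokio::sync::{mpsc, oneshot};

pub async fn request_token(
    req: TokenRequest,
    store: Store,
    run_tx: mpsc::UnboundedSender<Packet>,
) -> Result<impl warp::Reply, warp::Rejection> {
    // A oneshot channel gives us a single reply we can await without blocking the executor.
    let (tmp_tx, tmp_rx) = oneshot::channel();
    run_tx
        .send(Packet::IsPlayerLoggedIn(req.address, tmp_tx))
        .unwrap();
    let logged_in = tmp_rx.await.unwrap();
    if logged_in {
        return Ok(warp::reply::with_status(
            "Already logged in",
            http::StatusCode::BAD_REQUEST,
        ));
    }
    Ok(warp::reply::with_status("some token", http::StatusCode::OK))
}

The receiving side can keep running on its own thread: an UnboundedReceiver has a blocking_recv method for exactly that case.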
I would like to include processing time as metadata returned by my gRPC endpoints. This is so I can measure processing time on the caller's end and arrive at a measurement of queueing time, which can then be used to scale my service in and out.
As of right now, this leads to a bunch of boilerplate code in every endpoint, e.g.
async fn get_health_status(
    &self,
    _request: Request<HealthStatusRequest>,
) -> Result<Response<HealthStatusResponse>, Status> {
    let start = Instant::now();
    let health_status = self.get_health_status_impl();
    let elapsed = start.elapsed().as_secs_f64();
    PROMETHEUS_HANDLER.report_processing_time(elapsed, "get_health_status");

    let mut response = Response::new(HealthStatusResponse { health_status });
    response.metadata_mut().insert("duration", format!("{:.2}", elapsed).parse().unwrap());
    Ok(response)
}
Is there a better way to avoid quite so much boilerplate, perhaps with callback methods or some other way?
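The best I have come up with so far is factoring the repetition into a helper, but every endpoint still has to remember to call it, which is why I'm asking about callbacks or middleware. The helper is hypothetical, roughly:

fn with_timing<T>(endpoint: &'static str, body: impl FnOnce() -> T) -> Response<T> {
    let start = Instant::now();
    let inner = body();
    let elapsed = start.elapsed().as_secs_f64();
    PROMETHEUS_HANDLER.report_processing_time(elapsed, endpoint);

    let mut response = Response::new(inner);
    response
        .metadata_mut()
        .insert("duration", format!("{:.2}", elapsed).parse().unwrap());
    response
}

// in the endpoint:
// Ok(with_timing("get_health_status", || HealthStatusResponse {
//     health_status: self.get_health_status_impl(),
// }))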
I am using tungstenite to build a chat server, and the way I want to do it relies on having many threads that communicate with each other through mpsc. I want to start up a new thread for each user that connects to the server and connect them to a websocket, and also have that thread be able to read from mpsc so that the server can send messages out through that connection.
The problem is that the mpsc receive blocks the thread, but I can't block the thread if I also want to keep reading from the websocket. The only thing I could think of to work around that is to make two threads, one for inbound and one for outbound messages, but that requires sharing my websocket connection with both workers, which of course I cannot do.
Here's a heavily truncated version of my code where I try to make two workers in the Action::Connect arm of the match statement, which gives error[E0382]: use of moved value: `websocket` when I try to move it into the second worker's closure:
extern crate tungstenite;
extern crate workerpool;

use std::net::{TcpListener, TcpStream};
use std::sync::mpsc::{self, Sender, Receiver};
use workerpool::Pool;
use workerpool::thunk::{Thunk, ThunkWorker};
use tungstenite::server::accept;

pub enum Action {
    Connect(TcpStream),
    Send(String),
}

fn main() {
    let (main_send, main_receive): (Sender<Action>, Receiver<Action>) = mpsc::channel();
    let worker_pool = Pool::<ThunkWorker<()>>::new(8);
    {
        // spawn thread to listen for users connecting to the server
        let main_send = main_send.clone();
        worker_pool.execute(Thunk::of(move || {
            let listener = TcpListener::bind(format!("127.0.0.1:{}", 8080)).unwrap();
            for (_, stream) in listener.incoming().enumerate() {
                main_send.send(Action::Connect(stream.unwrap())).unwrap();
            }
        }));
    }
    let mut users: Vec<Sender<String>> = Vec::new();
    // process actions from children
    while let Some(act) = main_receive.recv().ok() {
        match act {
            Action::Connect(stream) => {
                let mut websocket = accept(stream).unwrap();
                let (user_send, user_receive): (Sender<String>, Receiver<String>) = mpsc::channel();
                let main_send = main_send.clone();
                // thread to read user input and propagate it to the server
                worker_pool.execute(Thunk::of(move || {
                    loop {
                        let message = websocket.read_message().unwrap().to_string();
                        main_send.send(Action::Send(message)).unwrap();
                    }
                }));
                // thread to take server output and propagate it to the user
                worker_pool.execute(Thunk::of(move || {
                    while let Some(message) = user_receive.recv().ok() {
                        websocket.write_message(tungstenite::Message::Text(message.clone())).unwrap();
                    }
                }));
                users.push(user_send);
            }
            Action::Send(message) => {
                // take user message and echo to all users
                for user in &users {
                    user.send(message.clone()).unwrap();
                }
            }
        }
    }
}
If I create just one thread for both input and output in that arm, then user_receive.recv() blocks the thread, so I can't read any messages with websocket.read_message() until I get an mpsc message from the main thread. How can I solve both problems? I considered cloning the websocket, but it doesn't implement Clone, and I don't know whether just making a new connection with the same stream is a reasonable thing to try; it seems hacky.
The problem is that the mpsc read blocks the thread
You can use try_recv to avoid blocking the thread. Another implementation of mpsc is crossbeam_channel, which is recommended as a replacement even by the author of mpsc.
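A rough sketch of a single-worker variant inside your Action::Connect arm: switch the stream to non-blocking after the handshake so read_message returns WouldBlock instead of parking the thread (a fuller version would also have to handle WouldBlock coming back from write_message):

use std::io::ErrorKind;
use std::time::Duration;

websocket.get_mut().set_nonblocking(true).unwrap();
worker_pool.execute(Thunk::of(move || loop {
    // drain any inbound frames without blocking
    match websocket.read_message() {
        Ok(msg) => main_send.send(Action::Send(msg.to_string())).unwrap(),
        Err(tungstenite::Error::Io(ref e)) if e.kind() == ErrorKind::WouldBlock => {}
        Err(_) => break, // connection closed or errored
    }
    // drain any outbound messages without blocking
    while let Ok(message) = user_receive.try_recv() {
        websocket
            .write_message(tungstenite::Message::Text(message))
            .unwrap();
    }
    std::thread::sleep(Duration::from_millis(10)); // avoid a busy loop
}));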
I want to start up a new thread for each user that connects to the server
I think the async/await approach will be much better than the thread-per-client one from most perspectives. You can read more about it there.
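If you do go the async route, the per-client code ends up shaped roughly like this (a sketch using tokio and tokio-tungstenite rather than tungstenite; treat the exact APIs as assumptions, since they shift between versions):

use futures_util::{SinkExt, StreamExt};
use tokio::sync::mpsc;
use tokio_tungstenite::tungstenite::Message;

async fn handle_client(stream: tokio::net::TcpStream, mut outbound: mpsc::UnboundedReceiver<String>) {
    let ws = tokio_tungstenite::accept_async(stream).await.unwrap();
    let (mut write, mut read) = ws.split();
    loop {
        tokio::select! {
            // a frame arrived from the client
            Some(Ok(_msg)) = read.next() => {
                // forward it to the rest of the server, e.g. through another channel
            }
            // the server has something to send to this client
            Some(text) = outbound.recv() => {
                write.send(Message::Text(text)).await.unwrap();
            }
            else => break,
        }
    }
}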
I have written a bot for the Discord chat service using the discord-rs library. This library gives me events when they arise in a single thread in a main loop:
fn start() {
    // ...
    loop {
        let event = match connection.recv_event() {
            Ok(event) => event,
            Err(err) => { ... },
        };
    }
}
I want to add some timers and other things which are calculated in their own threads and which must notify me to do something in the main loop's thread. I also want to add Twitter support. So it may look like this:
(Discord's network connection, Twitter network connection, some timer in another thread) -> main loop
This will look something like this:
fn start() {
    // ...
    loop {
        let event = match recv_events() {
            // 1. if Discord - do something with discord
            // 2. if timer - handle timer's notification
            // 3. if Twitter network connection - handle twitter
        };
    }
}
In raw C with sockets, this could be done by (e)polling them, but here I have no idea how to do that in Rust or whether it is even possible. I think I want something like a poll over a few different sources that would provide me objects of different types.
I guess this could be implemented if I provide a wrapper for mio's Evented trait and use mio's poll as described in the Deadline example.
Is there any way to combine these events?
This library gives me events when they arise in a single thread in a main loop
The "single thread" thing is only true for small bots. As soon as you reach the 2500-guild limit, Discord will refuse to connect your bot in the normal way. You'll have to use sharding. And I guess you're not going to provision new virtual servers for your bot shards. Chances are you will spawn new threads instead, one event loop per shard.
Here is how I do it, BTW:
fn event_loop(shard_id: u8, total_shards: u8) {
    loop {
        let bot = Discord::from_bot_token("...").expect("!from_bot_token");
        let (mut dc, ready_ev) = bot.connect_sharded(shard_id, total_shards).expect("!connect");
        // ...
    }
}
fn main() {
    let total_shards = 10;
    for shard_id in 0..total_shards {
        sleep(Duration::from_secs(6)); // There must be a five-second pause between connections from one IP.
        ThreadBuilder::new()
            .name(fomat!("shard " (shard_id)))
            .spawn(move || {
                loop {
                    if let Err(err) = catch_unwind(move || event_loop(shard_id, total_shards)) {
                        log!("shard " (shard_id) " panic: " (gstuff::any_to_str(&*err).unwrap_or("")));
                        sleep(Duration::from_secs(10));
                        continue; // Panic restarts the shard.
                    }
                    break;
                }
            })
            .expect("!spawn");
    }
}
I want to add some timers and other things which are calculated in their own threads and which must notify me to do something in the main loop's thread
Option 1. Don't.
Chances are you don't really need to come back to the Discord event loop! Let's say you want to post a reply, to update an embed, etc. You do not need the Discord event loop to do that!
Discord API goes in two parts:
1) Websocket API, represented by the Connection, is used to get events from Discord.
2) REST API, represented by the Discord interface, is used to send events out.
You can send events from pretty much anywhere. From any thread. Maybe even from your timers.
Discord is Sync. Wrap it in an Arc and share it with your timers and threads.
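A small sketch of that (some_channel_id is a placeholder, and send_message's signature here is the discord-rs 0.8 one, so double-check it against your version):

use std::sync::Arc;
use std::thread;
use std::time::Duration;

let discord = Arc::new(Discord::from_bot_token("...").expect("!from_bot_token"));
let discord_for_timer = Arc::clone(&discord);
thread::spawn(move || loop {
    thread::sleep(Duration::from_secs(60));
    // REST call straight from the timer thread, no event loop involved
    let _ = discord_for_timer.send_message(some_channel_id, "timer fired", "", false);
});
// meanwhile the main loop keeps calling connection.recv_event() as before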
Option 2. Seize the opportunity.
Even though recv_event doesn't have a timeout, Discord will be constantly sending you new events. Users are signing in, signing out, typing, posting messages, starting videogames, editing stuff and what not. Indeed, if the stream of events stops then you have a problem with your Discord connection (for my bot I've implemented a High Availability failover based on that signal).
You could share a deque with your threads and timers. Once a timer finishes, it posts a little something to the deque, then the event loop checks the deque for new things to do once Discord wakes it with a new event.
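For example (Task is a hypothetical enum of work items, nothing Discord-specific):

use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

let todo: Arc<Mutex<VecDeque<Task>>> = Arc::new(Mutex::new(VecDeque::new()));

let todo_for_timer = Arc::clone(&todo);
thread::spawn(move || loop {
    thread::sleep(Duration::from_secs(30));
    todo_for_timer.lock().unwrap().push_back(Task::Tick);
});

// in the Discord event loop, after each recv_event():
while let Some(task) = todo.lock().unwrap().pop_front() {
    // handle `task` here
}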
Option 3. Bird's-eye view.
As belst has pointed out, you could start a generic event loop, a loop "to rule them all", then lift Discord events into that loop. This is particularly interesting because with sharding you're going to have multiple event loops.
So, Discord event loop -> simple event filter -> channel -> main event loop.
Option 4. Sharded.
If you want your bot to stay online during code upgrades and restarts, then you should provision for a way to restart each shard separately (or otherwise implement a High Availability failover on the shard level, like I did). Because you can't immediately connect all your shards after a process restart, Discord won't let you.
If all your shards share the same process, then after that process restarts you have to wait five seconds before attaching a new shard. With 10 shards it's almost a minute of bot downtime.
One way to separate the shard restarts is to dedicate a process to every shard. Then when you need to upgrade the bot, you'd restart each process separately. That way you still have to wait five to six seconds per shard, but your users don't.
Even better is that you now need to restart the Discord event loop processes only for discord-rs upgrades and similar maintenance-related tasks. Your main event loop, on the other hand, can be restarted immediately and as often as you like. This should speed up the compile-run-test loop considerably.
So, Discord event loop, in a separate shard process -> simple event filter -> RPC or database -> main event loop, in a separate process.
In your case I would just start a thread for each service you need and then use an mpsc channel to send the events to the main loop.
Example:
use std::thread;
use std::sync::mpsc::channel;

enum Event {
    Discord(()),
    Twitter(()),
    Timer(()),
}

fn main() {
    let (tx, rx) = channel();

    // discord
    let txprime = tx.clone();
    thread::spawn(move || {
        loop {
            // discord loop
            txprime.send(Event::Discord(())).unwrap()
        }
    });

    // twitter
    let txprime = tx.clone();
    thread::spawn(move || {
        loop {
            // twitter loop
            txprime.send(Event::Twitter(())).unwrap()
        }
    });

    // timer
    let txprime = tx.clone();
    thread::spawn(move || {
        loop {
            // timer loop
            txprime.send(Event::Timer(())).unwrap()
        }
    });

    // Main loop
    loop {
        match rx.recv().unwrap() {
            Event::Discord(d) => unimplemented!(),
            Event::Twitter(t) => unimplemented!(),
            Event::Timer(t) => unimplemented!(),
        }
    }
}