I am trying to create a game using WebSockets and am having difficulties understanding Streams. I know they are meant to be awaited asynchronously; however, I would like a different approach.
I have a game loop, which is spawned in its own tokio task. At the beginning of every frame, the game loop should read all messages received from the WebSocket. I have yet to find a way to do this without blocking the task until a message arrives. Moreover, I don't know in advance how many messages there will be to read, as I want to read all messages received since the last frame.
I know I can read the messages from the WebSocket stream as shown below:
let (ws_sender, mut ws_rcv) = ws.split();
while let Some(result) = ws_rcv.next().await {
    // Handle result
}
My question is whether there is a way to read the messages like this:
let (ws_sender, mut ws_rcv) = ws.split();
for result in ws_rcv.available_messages() {
    // Handle result
}
This approach would only read the values that have already been received and wouldn't block the thread.
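One way to get close to that shape (there is no available_messages method on the stream itself; this is just a sketch assuming the tokio and futures_util crates) is to spawn a task that forwards incoming messages into a tokio mpsc channel, then drain that channel at the start of each frame with the non-blocking try_recv. Assuming ws is the same WebSocket as above:

use futures_util::StreamExt;
use tokio::sync::mpsc;

let (ws_sender, mut ws_rcv) = ws.split();
let (msg_tx, mut msg_rx) = mpsc::unbounded_channel();

// Forwarder task: pushes every received message into the channel as it arrives.
tokio::spawn(async move {
    while let Some(result) = ws_rcv.next().await {
        if msg_tx.send(result).is_err() {
            break; // the game loop has shut down
        }
    }
});

// At the start of each frame in the game loop:
while let Ok(result) = msg_rx.try_recv() {
    // Handle result; try_recv returns Err immediately when the channel is
    // empty, so this drains only what has already arrived and never blocks.
}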
I am new to Rust and am trying to understand the Dining Philosophers code here:
https://google.github.io/comprehensive-rust/exercises/day-4/solutions-morning.html
By the time the execution reaches the following lines in the main thread, isn't it possible that none of the spawned threads have started executing their logic, resulting in nothing in 'rx' and the program simply quitting?
for thought in rx {
    println!("{}", thought);
}
When iterating over a channel, it internally calls Receiver::recv, whose documentation specifies:
This function will always block the current thread if there is no data available and it’s possible for more data to be sent (at least one sender still exists). Once a message is sent to the corresponding Sender (or SyncSender), this receiver will wake up and return that message.
So the receiver will block until it has data available, or until all the senders have been dropped.
Yes, execution can reach for thought in rx { ... } before the threads have even started. However, this will still work because iterating over a Receiver will wait until there is a message and will only stop if all Senders have been destroyed (ergo it is no longer possible to receive any messages).
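A minimal standalone sketch (separate from the linked solution) that shows this behaviour: even though the spawned thread starts late, the loop over the receiver waits for its message and only ends once the sender has been dropped.

use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();

    let handle = thread::spawn(move || {
        // Simulate the spawned thread starting "late".
        thread::sleep(Duration::from_millis(500));
        tx.send("a thought").unwrap();
        // tx is dropped when this closure returns, which ends the loop below.
    });

    // Blocks until a message arrives; ends once all senders are gone.
    for thought in rx {
        println!("{thought}");
    }

    handle.join().unwrap();
}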
I have a Rust-based latency-sensitive application that subscribes to a stream of incoming data, deserializes it, processes the deserialized object, and then forwards it elsewhere.
Sometimes I receive bursts of messages, and this causes the latency to degrade a bit as processing gets "backed up." It would be great if I could parallelize the deserialization.
However, I need to preserve the order of the messages when I forward them along. Forwarding is extremely fast, almost negligible, so the fact that forwarding is serial is okay.
Naively, I could send a tuple of (sequence_number, data) over a channel to a pool of processor threads, and each thread could, upon processing, send a tuple of (sequence_number, processed) over a different channel to a single thread that simply forwards. The forwarding thread would also keep track of the next sequence_number to send. When it receives something over the channel, it saves it to a HashMap<u64, MyData>. Then, while the map contains the next sequence_number, it forwards.
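Roughly, the forwarding thread would look like this sketch (String stands in for my real data type, and forward is a placeholder for the actual forwarding step):

use std::collections::HashMap;
use std::sync::mpsc::Receiver;

// Placeholder for the real (fast) forwarding step.
fn forward(data: String) {
    println!("forwarding {data}");
}

// Receives (sequence_number, processed) tuples in arbitrary order and
// forwards them strictly in sequence order.
fn forward_in_order(rx: Receiver<(u64, String)>) {
    let mut next_seq: u64 = 0;
    let mut pending: HashMap<u64, String> = HashMap::new();

    for (seq, processed) in rx {
        pending.insert(seq, processed);
        // Flush everything that is now contiguous with what was already sent.
        while let Some(data) = pending.remove(&next_seq) {
            forward(data);
            next_seq += 1;
        }
    }
}

fn main() {
    let (tx, rx) = std::sync::mpsc::channel();
    // Simulate out-of-order completion from the worker pool.
    for seq in [2u64, 0, 1] {
        tx.send((seq, format!("message {seq}"))).unwrap();
    }
    drop(tx);
    forward_in_order(rx); // prints message 0, 1, 2 in order
}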
But it gives me pause that I couldn't find such a library on GitHub; it makes me think this could be a bad idea.
So I am wondering, is there a name for this sort of thing? Does it exist in Rust or some other language? Is there a better pattern I can follow?
Not sure of a common term, but you could use FuturesOrdered from the futures crate.
Here is an example (playground):
use rand::{thread_rng, Rng};
use futures::stream::FuturesOrdered;
use futures::StreamExt as _;
use core::time::Duration;

#[tokio::main]
async fn main() {
    let mut ord_futures = FuturesOrdered::new();
    for i in 0..100 {
        // receive
        ord_futures.push(async move {
            tokio::time::sleep(Duration::from_secs(thread_rng().gen_range(1..5))).await;
            println!("processed {i}");
            i
        });
    }
    while let Some(i) = ord_futures.next().await {
        // forward
        println!("received {i}");
    }
}
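If the deserialization is CPU-bound, the futures in the example above only run concurrently on a single task. A variation (sketched below, with deserialize as a hypothetical stand-in) is to push spawned blocking tasks instead, since a tokio JoinHandle is itself a future and FuturesOrdered still yields results in push order.

use futures::stream::FuturesOrdered;
use futures::StreamExt as _;

// Hypothetical CPU-bound work standing in for real deserialization.
fn deserialize(raw: u64) -> u64 {
    raw.wrapping_mul(2)
}

#[tokio::main]
async fn main() {
    let mut ord_futures = FuturesOrdered::new();
    for seq in 0..100u64 {
        // spawn_blocking runs the work on tokio's blocking thread pool,
        // so the items are actually processed in parallel...
        ord_futures.push(tokio::task::spawn_blocking(move || deserialize(seq)));
    }
    // ...while FuturesOrdered still yields the results in push order.
    while let Some(result) = ord_futures.next().await {
        let value = result.expect("worker panicked");
        println!("forwarding {value}");
    }
}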
I have a struct which roughly looks as follows:
struct Node {
    id: Arc<i32>,
    data: Arc<Mutex<i32>>, // Actually not i32, but that is not important for this question.
    rx: Receiver<()>,
    tx: Sender<()>,
}
I use Receiver and Sender from mpsc::channel.
I want to share this across multiple threads. I have one "user" thread in which the user of Node executes some functions on Node. This will cause some UDP messages to be sent to other computers, and this thread will block on rx.recv(). In the background I have one or more threads that perform a blocking receive call on UDP sockets. When they receive a message, they update the data field of the Node struct, and when a background thread notices that sufficiently many messages have been received, it sends () using tx.send() to let the user thread continue its execution.
To share a Node instance with another thread, I do something like this:
let node: Arc<Node> = ...
let node_for_background_thread = Arc::clone(&node);
let background_thread_handle = thread::spawn(move || {
    node_for_background_thread.start_receive_loop();
});
I need to access all fields of Node (e.g. id and data) in both the user thread and the background threads. That's why I want to share a single instance of Node across them. But neither Receiver nor Sender is Sync, so the above doesn't compile. I know I can clone the Sender to put an owned one in each background thread.
One solution I see is to not include rx and tx in Node. But then I would lose encapsulation since then the creator of Node instances would have to create the channel and also spawn the background threads. I want to keep it all encapsulated in Node if possible.
The code snippet above is where I could manually clone the Sender. I don't need to clone the Receiver, since I will only ever have one thread that will use it.
As I answered here: https://stackoverflow.com/a/65354846/6070255
You may use std::sync::mpsc::SyncSender from the standard library. The difference is that it implements the Sync trait, but it may block if there is no space in the internal buffer when sending a message.
For more information:
std::sync::mpsc::channel
std::sync::mpsc::sync_channel
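For illustration, here is one way this could be applied while keeping both channel ends inside Node (a sketch, not the only option). Note that wrapping the Receiver in a Mutex is an extra step beyond the SyncSender suggestion, needed only so that the whole struct is Sync; the user thread remains the only one that ever locks it.

use std::sync::mpsc::{sync_channel, Receiver, SyncSender};
use std::sync::{Arc, Mutex};
use std::thread;

struct Node {
    id: Arc<i32>,
    data: Arc<Mutex<i32>>,
    rx: Mutex<Receiver<()>>, // Mutex makes the Receiver shareable; only the user thread locks it
    tx: SyncSender<()>,      // SyncSender implements Sync, so it can be shared directly
}

impl Node {
    fn new() -> Self {
        let (tx, rx) = sync_channel(16); // bounded buffer; sends may block when it is full
        Node {
            id: Arc::new(0),
            data: Arc::new(Mutex::new(0)),
            rx: Mutex::new(rx),
            tx,
        }
    }

    // Stand-in for the background receive loop from the question.
    fn start_receive_loop(&self) {
        *self.data.lock().unwrap() += 1;
        let _ = self.tx.send(()); // signal the user thread
    }

    // Called by the user thread; blocks until a background thread signals.
    fn wait_for_progress(&self) {
        let _ = self.rx.lock().unwrap().recv();
    }
}

fn main() {
    let node = Arc::new(Node::new());
    let node_for_background_thread = Arc::clone(&node);
    let background_thread_handle = thread::spawn(move || {
        node_for_background_thread.start_receive_loop();
    });

    node.wait_for_progress();
    println!("id {}, data {}", node.id, node.data.lock().unwrap());
    background_thread_handle.join().unwrap();
}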
I have a TCP server listening on requests in an infinite loop:
use std::io::prelude::*;
use std::net::TcpStream;
use std::net::TcpListener;

fn main() {
    let listener = TcpListener::bind("0.0.0.0:7878").unwrap();
    for stream in listener.incoming() {
        let mut stream = stream.unwrap();
        let response = "HTTP/1.1 200 OK\r\n\r\nsdfuhgsjdfghsdfjk";
        stream.write(response.as_bytes()).unwrap();
        stream.flush().unwrap();
    }
}
How can I break the loop after a period of time (timeout)?
After the timeout has elapsed, I need the listening loop to stop:
right at that moment if there is no incoming stream (i.e. if no streams are incoming and there might not be any more in the future, I need the server to stop waiting in vain)
after processing one last stream if there is already one incoming at that moment
There are three possible solutions to cancel the blocking listener.incoming() call, detailed in the Stack Overflow question "Graceful exit TcpListener.incoming()":
Make the listener non-blocking (listener.set_nonblocking(true)) and check whether the timeout has expired when the iterator returns an io::ErrorKind::WouldBlock error (see the sketch after this list).
Use the nix crate's poll module to run an event loop that processes events. Adding an extra file descriptor that is written to when the timeout occurs would allow this to abort the loop.
On a Unix system, you could use let fd = listener.as_raw_fd(); before the loop, and then call libc::shutdown(fd, libc::SHUT_RD); to cause the incoming iterator to return an error.
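A rough sketch of the first option, assuming a fixed 30-second timeout (the short sleep just avoids a busy loop while waiting):

use std::io::{self, Write};
use std::net::TcpListener;
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    let listener = TcpListener::bind("0.0.0.0:7878").unwrap();
    listener.set_nonblocking(true).unwrap();

    let deadline = Instant::now() + Duration::from_secs(30);

    for stream in listener.incoming() {
        match stream {
            Ok(mut stream) => {
                // Depending on the platform, you may want the accepted socket
                // back in blocking mode before writing.
                let _ = stream.set_nonblocking(false);
                let response = "HTTP/1.1 200 OK\r\n\r\nsdfuhgsjdfghsdfjk";
                stream.write_all(response.as_bytes()).unwrap();
                stream.flush().unwrap();
            }
            Err(e) if e.kind() == io::ErrorKind::WouldBlock => {
                if Instant::now() >= deadline {
                    break; // timeout elapsed and nothing is pending
                }
                thread::sleep(Duration::from_millis(50));
            }
            Err(e) => panic!("unexpected error: {e}"),
        }
    }
}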
I also found the cancellable_io crate, which replaces TcpListener and implements cancellation.
IMHO it is unfortunate that Rust's std::net doesn't include a method to cancel the listener.
Perhaps this question should be marked as a duplicate of the mentioned Stack Overflow question, but I don't have the reputation to do so.
I'm trying to convert Node's sockets into streams using RxJS. The goal is to have each socket create its own stream and have all streams merge into one. As new sockets connect, a stream is created with socketStream = Rx.Observable.fromEvent(socket, 'message').
Then the stream is merged into a master stream with something like
mainStream = mainStream.merge(socketStream)
This appears to work fine; the problem is that after 200-250 client connections, the server throws RangeError: Maximum call stack size exceeded.
I have sample server and client code that demonstrates this behavior on a gist here:
Sample Server and Client
I suspect that as clients connect/disconnect, the main stream doesn't get cleaned up properly.
The problem is that you are merging your Observable recursively. Every time you do
cmdStream = cmdStream.merge(socketStream);
you are creating a new MergeObservable/MergeObserver pair.
Taking a look at the source, you can see that what you are basically doing with each subscription is subscribing to each of your previous streams in sequence, so it shouldn't be hard to see that at around 250 connections your call stack is probably at least 1000 calls deep.
A better way to approach this would be to use the flatMap operator and think of your connections as creating an Observable of Observables.
//Turn the connections themselves into an Observable
var connections = Rx.Observable.fromEvent(server, 'connection',
    socket => new JsonSocket(socket));

connections
    //flatten the messages into their own Observable
    .flatMap(socket => {
        return Rx.Observable.fromEvent(socket.__socket, 'message')
            //Handle the socket closing as well
            .takeUntil(Rx.Observable.fromEvent(socket.__socket, 'close'));
    }, (socket, msg) => {
        //Transform each message to include the socket as well.
        return { socket : socket.__socket, data : msg };
    })
    .subscribe(processData, handleError);
I haven't tested the above, but it should fix your stack overflow error.
I would probably also question the overall design of this. What exactly are you gaining by merging all the Observables together? You are still differentiating them by passing the socket object along with the message, so it would seem these could all be distinct streams.