We are using GStreamer to read the video stream off of network cameras, and we observed some strange behaviour: our program would hang unexpectedly, without any kind of notice.
We then narrowed it down to the fact that our client had a very, very bad internet connection, and we were able to reproduce the problem with just GStreamer and a tool that simulates bad networks.
I am attaching below both our tool to simulate bad networks and the example we've written to reproduce this behaviour.
Our tool to simulate bad networks uses tc to introduce a delay, packet loss and packet corruption.
Our example boils down to a (subclassed) bin that can intercept EoS messages posted inside it, and a pipeline that reads an RTSP feed. The bin is then dynamically added to and removed from the pipeline.
Are we doing something wrong? Could this be a bug in GStreamer itself?
crapnet.sh
# Add ~500 ms of delay (±480 ms, normally distributed), 30% packet loss
# (with 25% correlation) and 2% packet corruption on the wireless interface.
sudo tc qdisc add dev wlp2s0 root netem \
delay 500ms 480ms distribution normal \
loss 30% 25% \
corrupt 2%
src/custom_bin.rs
mod implementation {
use {glib::subclass::prelude::*, gstreamer::subclass::prelude::*};
use {
glib::{glib_object_impl, glib_object_subclass, subclass::simple::ClassStruct},
gstreamer::{subclass::ElementInstanceStruct, Bin, Message, MessageView},
std::sync::{mpsc, Mutex},
};
pub struct CustomBin {
pub(super) eos_guard: Mutex<Option<mpsc::SyncSender<()>>>,
}
impl ObjectImpl for CustomBin {
glib_object_impl!();
}
impl ElementImpl for CustomBin {}
impl BinImpl for CustomBin {
fn handle_message(&self, bin: &Bin, message: Message) {
let mut eos_guard = self.eos_guard.lock().unwrap();
if let MessageView::Eos(_) = message.view() {
if let Some(eos_guard) = eos_guard.take() {
return eos_guard.send(()).unwrap_or(());
}
}
self.parent_handle_message(bin, message)
}
}
impl ObjectSubclass for CustomBin {
const NAME: &'static str = "GstCustomBin";
type ParentType = Bin;
type Instance = ElementInstanceStruct<Self>;
type Class = ClassStruct<Self>;
glib_object_subclass!();
fn new() -> Self {
Self {
eos_guard: Mutex::new(None),
}
}
}
}
use {
glib::{prelude::*, subclass::prelude::*, translate::*},
gstreamer::prelude::*,
};
use {
glib::{glib_bool_error, glib_wrapper, subclass::simple::ClassStruct, BoolError, Object},
gstreamer::{
event, subclass::ElementInstanceStruct, Element, GhostPad, PadDirection, State, StateChangeError,
StateChangeSuccess,
},
std::{sync::mpsc, time::Duration},
};
glib_wrapper! {
/// A subclass of `gstreamer::Bin` that has customized behaviour for intercepting end-of-stream messages. This is necessary so that we can remove a branch from a `tee` and not bring down the entire pipeline.
pub struct CustomBin(
Object<
ElementInstanceStruct<implementation::CustomBin>,
ClassStruct<implementation::CustomBin>,
CustomBinClass
>
) @extends gstreamer::Bin, gstreamer::Element, gstreamer::Object;
match fn {
get_type => || implementation::CustomBin::get_type().to_glib(),
}
}
unsafe impl Send for CustomBin {}
unsafe impl Sync for CustomBin {}
impl CustomBin {
/// Instantiates the structure.
pub fn new(name: Option<&str>) -> Self {
Object::new(Self::static_type(), &[("name", &name)])
.expect("Failed to instantiate `CustomBin` as an `Object`")
.downcast()
.expect("Failed to downcast `Object` to `CustomBin`")
}
/// Adds a ghost sink pad for the target element. It is assumed that the element belongs to the bin.
pub fn add_ghost_sink_pad(&self, element: &Element) -> Result<(), BoolError> {
let target_pad = element
.get_static_pad("sink")
.ok_or_else(|| glib_bool_error!("Failed to get sink pad from [{}]", element.get_name()))?;
let ghost_pad = GhostPad::new(Some("sink"), PadDirection::Sink);
ghost_pad.set_target(Some(&target_pad))?;
self.add_pad(&ghost_pad)?;
Ok(())
}
/// Installs an end-of-stream message guard, which will drop the end-of-stream message and then signal it was dropped.
pub fn install_eos_guard(&self) -> mpsc::Receiver<()> {
let super_self = implementation::CustomBin::from_instance(self);
let mut eos_guard = super_self.eos_guard.lock().unwrap();
let (sender, receiver) = mpsc::sync_channel(1);
eos_guard.replace(sender);
receiver
}
/// Sends an end-of-stream event to this bin. It will become an end-of-stream message once it reaches the sink.
pub fn send_eos_event(&self) {
let super_self = implementation::CustomBin::from_instance(self);
let mut eos_guard = super_self.eos_guard.lock().unwrap();
if !self.send_event(event::Eos::new()) {
if let Some(eos_guard) = eos_guard.take() {
eos_guard.send(()).unwrap_or(());
}
}
}
/// Flushes the data in the bin by sending an EoS event and then intercepting the resulting EoS message.
pub fn flush(&self) -> Result<(), BoolError> {
let eos_guard = self.install_eos_guard();
self.send_eos_event();
eos_guard
.recv_timeout(Duration::from_secs(5))
.map_err(|error| glib_bool_error!("Timed out waiting for the EoS guard during flush (error [{:?}])", error))
}
pub fn set_null_state(&self) -> Result<StateChangeSuccess, StateChangeError> {
self.set_state(State::Null)
}
}
src/main.rs
mod custom_bin;
use custom_bin::CustomBin;
use gstreamer::prelude::*;
use {
glib::glib_bool_error,
gstreamer::{ElementFactory, PadProbeReturn, PadProbeType, Pipeline, State},
std::{error::Error, sync::mpsc, thread, time::Duration},
};
fn rtsp_location() -> String {
String::from("rtspt://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov")
}
fn video_location(index: usize) -> String {
format!("/home/valmirpretto/videos/video_{}.mp4", index)
}
fn instantiate_mp4_writer(location: &str) -> Result<CustomBin, Box<dyn Error>> {
let queue = ElementFactory::make("queue", None)?;
let h264parse = ElementFactory::make("h264parse", None)?;
let mp4mux = ElementFactory::make("mp4mux", None)?;
let filesink = ElementFactory::make("filesink", None)?;
filesink.set_property("location", &location)?;
let bin = CustomBin::new(None);
bin.add(&queue)?;
bin.add(&h264parse)?;
bin.add(&mp4mux)?;
bin.add(&filesink)?;
queue.link(&h264parse)?;
h264parse.link(&mp4mux)?;
mp4mux.link(&filesink)?;
bin.add_ghost_sink_pad(&queue)?;
Ok(bin)
}
fn main() -> Result<(), Box<dyn Error>> {
gstreamer::init()?;
let rtspsrc = ElementFactory::make("rtspsrc", None)?;
let rtph264depay = ElementFactory::make("rtph264depay", None)?;
let tee = ElementFactory::make("tee", None)?;
rtspsrc.set_property("location", &rtsp_location())?;
tee.set_property("allow-not-linked", &true)?;
let pipeline = Pipeline::new(None);
pipeline.add(&rtspsrc)?;
pipeline.add(&rtph264depay)?;
pipeline.add(&tee)?;
rtspsrc.connect_pad_added({
let rtph264depay = rtph264depay.downgrade();
move |_, src_pad| {
if let Some(rtph264depay) = rtph264depay.upgrade() {
let sink_pad = rtph264depay
.get_static_pad("sink")
.expect("Element rtph264depay without sink pad");
if src_pad.can_link(&sink_pad) {
src_pad
.link(&sink_pad)
.expect("Failed to link after we checked it could");
}
}
}
});
rtph264depay.link(&tee)?;
pipeline.set_state(State::Playing)?;
(0..).try_for_each::<_, Result<(), Box<dyn Error>>>(|index| {
println!("Iteration [{}]", index);
// Add bin to the pipeline
let bin = instantiate_mp4_writer(&video_location(index))?;
pipeline.add(&bin)?;
bin.sync_state_with_parent()?;
tee.link(&bin)?;
thread::sleep(Duration::from_secs(5));
// Remove bin from the pipeline
let bin_sink_pad = bin
.get_static_pad("sink")
.ok_or_else(|| glib_bool_error!("Bin [{}] did not have a sink pad", bin.get_name()))?;
let tee_src_pad = bin_sink_pad
.get_peer()
.ok_or_else(|| glib_bool_error!("Bin [{}] sink pad did not have a peer", bin.get_name()))?;
let (signal_sender, signal_receiver) = mpsc::sync_channel(1);
tee_src_pad.add_probe(PadProbeType::IDLE, {
let bin = bin.downgrade();
let pipeline = pipeline.downgrade();
let tee = tee.downgrade();
move |tee_src_pad, _| {
if let Some(bin) = bin.upgrade() {
if let Some(pipeline) = pipeline.upgrade() {
bin.flush().expect(&format!("Could not flush bin [{}]", bin.get_name()));
pipeline
.remove(&bin)
.expect(&format!("Could not remove bin [{}] from pipeline", bin.get_name()));
bin.set_null_state()
.expect(&format!("Could not set null state of bin [{}]", bin.get_name()));
signal_sender
.send(())
.expect(&format!("Could not signal that bin [{}] is done", bin.get_name()));
}
}
if let Some(tee) = tee.upgrade() {
tee.release_request_pad(tee_src_pad);
}
PadProbeReturn::Remove
}
});
signal_receiver
.recv_timeout(Duration::from_secs(5))
.map_err(|error| glib_bool_error!("Timed out waiting for EoS (error [{:?}])", error))?;
Ok(())
})?;
Ok(())
}
Related
I have a wrapper around Rodio's Sink called HAudioSink. I also implement a try_new_from_haudio function that, in short, creates a Sink instance, wraps it in an HAudioSink and immediately starts playing the first audio.
Sink's docs state: "Dropping the Sink stops all sounds. You can use detach if you want the sounds to continue playing". So when try_new_from_haudio returns, it drops the original sink and the sound stops when it shouldn't.
So my question here is: what should I do to keep it from being dropped when I create an instance of HAudioSink? Is ManuallyDrop the way to go?
struct HAudioSink {
sink: Sink,
}
impl HAudioSink {
pub fn try_new_from_haudio<T>(haudio: HAudio<T>) -> HResult<Self>
where
T: NativeType + Float + ToPrimitive,
{
let (_stream, stream_handle) = OutputStream::try_default()?;
let sink = Sink::try_new(&stream_handle).unwrap();
let nchannels = haudio.nchannels();
let nframes = haudio.nframes();
let sr = haudio.sr();
let mut data_interleaved: Vec<f32> = Vec::with_capacity(nchannels * nframes);
let values = haudio
.inner()
.inner()
.values()
.as_any()
.downcast_ref::<PrimitiveArray<T>>()
.unwrap();
for f in 0..nframes {
for ch in 0..nchannels {
data_interleaved.push(values.value(f + ch * nframes).to_f32().unwrap());
}
}
let source = SamplesBuffer::new(u16::try_from(nchannels).unwrap(), sr, data_interleaved);
sink.append(source);
Ok(HAudioSink { sink })
}
// Sleeps the current thread until the sound ends.
pub fn sleep_until_end(&self) {
self.sink.sleep_until_end();
}
}
#[cfg(test)]
mod tests {
use super::*;
//this test doesn't work
#[test]
fn play_test() {
let sink = HAudioSink::try_new_from_file("../testfiles/gs-16b-2c-44100hz.wav").unwrap();
sink.append_from_file("../testfiles/gs-16b-2c-44100hz.wav")
.unwrap();
sink.sleep_until_end();
}
}
If I put sink.sleep_until_end() inside try_new_from_haudio, just before returning Ok, it works.
Check the following link for a reproducible example of this issue: https://github.com/RustAudio/rodio/issues/476
The problem is that the same applies to _stream: "If this is dropped playback will end & attached OutputStreamHandles will no longer work."
See the docs on OutputStream. So you have to store it alongside your Sink:
pub struct SinkWrapper {
pub sink: Sink,
pub stream: OutputStream,
}
impl SinkWrapper {
pub fn new() -> Self {
let (stream, stream_handle) = OutputStream::try_default().unwrap();
let sink = Sink::try_new(&stream_handle).unwrap();
// Add a dummy source for the sake of the example.
let source = SineWave::new(440.0)
.take_duration(Duration::from_secs_f32(1.))
.amplify(2.);
sink.append(source);
Self { sink, stream }
}
}
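Applied to the original wrapper, that means keeping the OutputStream inside HAudioSink as well; a minimal sketch, assuming the rest of try_new_from_haudio stays as it is:
use rodio::{OutputStream, Sink};

pub struct HAudioSink {
    sink: Sink,
    // Keep the stream alive for as long as the sink; dropping it ends playback.
    _stream: OutputStream,
}

// In try_new_from_haudio, bind the stream instead of discarding it:
//     let (stream, stream_handle) = OutputStream::try_default()?;
//     ...
//     Ok(HAudioSink { sink, _stream: stream })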
I am just a beginner in Rust, and so far I have managed to satisfy the borrow checker in an async-heavy tokio app.
Basically I have a struct that holds a HashMap of games. I want to insert a newly created game into the hashmap and at the same time pass it to a game_loop that I spawn with tokio::spawn. The game loop will update the game, but I also want to be able to retrieve the game from the hashmap to run some functions that check its state, etc.
I've tried wrapping it in Arc and Mutexes and whatnot. At the moment I just clone it for the game_loop, but, as smarter people probably know, that only passes a clone of the original, and the entry in the hashmap won't update.
GameManager
pub struct GameManager {
games: HashMap<Uuid, Game>,
}
impl GameManager {
fn find_or_create_game(&mut self, user_options: &GameOptions) -> Uuid {
for g in self.games.values() {
println!("game id {:?}", g.id);
println!("game players {:?}", g.state.get_players());
if g.allows_joining() && g.matches_player_options(user_options) {
println!("Joining existing game");
return g.id;
}
}
let rng = ::rand::rngs::StdRng::from_seed(OsRng.gen());
let mut game = Game::new(Some(user_options.clone()), rng);
let game_id = game.id;
let (game_sender, game_receiver) = mpsc::unbounded_channel::<GameEvent>();
let broadcast = self.broadcast.clone();
self.game_channels.insert(game_id, game_sender.clone());
self.games.insert(game_id, game.clone()); // the clone means the map entry and the spawned loop hold two independent Game values
tokio::spawn(game_loop(game, broadcast, game_receiver));
return game_id;
}
}
game_loop
pub async fn game_loop(
mut game: Game,
broadcast: UnboundedSender<ServerEvent>,
mut receiver: UnboundedReceiver<GameEvent>,
) -> Result<(), io::Error> {
let dur = std::time::Duration::from_secs_f64(1.0 / game.state.options.fps as f64);
let mut interval = tokio::time::interval(dur);
loop {
interval.tick().await;
while let Some(is_event) = unconstrained(receiver.recv()).now_or_never() {
if let Some(event) = is_event {
handle_game_event(event, &mut game, &broadcast);
}
}
if game.has_ended() {
break;
} else {
game.tick();
let _ = broadcast.send(ServerEvent::Tick(game.get_tick()));
}
}
Ok(())
}
Okay yeah, I am a dummy. Thanks @Peterrabbit, though, for giving me the motivation to try Arc again. I had to wrap it in a Mutex, but altogether I am just happy that it now works.
So now it is:
pub struct GameManager {
games: HashMap<Uuid, Arc<Mutex<Game>>>,
}
impl GameManager {
async fn find_or_create_game(&mut self, user_options: &GameOptions) -> Uuid {
for g in self.games.values() {
let g = g.lock().await;
if g.allows_joining() && g.matches_player_options(user_options) {
println!("Joining existing game");
return g.id;
}
}
let rng = ::rand::rngs::StdRng::from_seed(OsRng.gen());
let game = Arc::new(Mutex::new(Game::new(Some(user_options.clone()), rng)));
let game_id = game.lock().await.id;
let (game_sender, game_receiver) = mpsc::unbounded_channel::<GameEvent>();
let broadcast = self.broadcast.clone();
self.game_channels.insert(game_id, game_sender.clone());
self.games.insert(game_id, game.clone());
tokio::spawn(game_loop(game.clone(), broadcast, game_receiver));
return game_id;
}
}
pub async fn game_loop(
mut game: Arc<Mutex<Game>>,
broadcast: UnboundedSender<ServerEvent>,
mut receiver: UnboundedReceiver<GameEvent>,
) -> Result<(), io::Error> {
let dur = std::time::Duration::from_secs_f64(1.0 / game.lock().await.state.options.fps as f64);
let mut interval = tokio::time::interval(dur);
loop {
interval.tick().await;
while let Some(is_event) = unconstrained(receiver.recv()).now_or_never() {
if let Some(event) = is_event {
handle_game_event(event, &mut game, &broadcast).await;
}
}
if game.lock().await.has_ended() {
break;
} else if game.lock().await.is_running() {
handle_game_event(GameEvent::Tick(), &mut game, &broadcast).await;
}
}
Ok(())
}
For threaded applications, the Rust standard library provides std::sync::mpsc::sync_channel, a buffered channel which blocks on the reading end when the buffer is empty and blocks on the writing end when the buffer is full. In particular, if you set the buffer size to 0, then any read or write will block until there is a matching write or read.
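For example, with a buffer size of 0 the standard channel acts as a pure rendezvous: the sending thread blocks inside send until another thread calls recv:
use std::{sync::mpsc, thread, time::Duration};

fn main() {
    let (tx, rx) = mpsc::sync_channel::<u32>(0);
    let sender = thread::spawn(move || {
        // Blocks here until the recv() below is reached.
        tx.send(42).unwrap();
        println!("send completed");
    });
    thread::sleep(Duration::from_millis(100)); // give the sender time to block
    let value = rx.recv().unwrap();
    println!("received {value}");
    sender.join().unwrap();
}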
For async code, there is futures::channel::mpsc::channel, but this does not have the same behavior. Here the minimum capacity is the number of senders on the channel, which is greater than 0. The sending end can still block (because it implements Sink, so we can use SinkExt::send and await it), but only after there's at least one thing already in the buffer.
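To illustrate the difference, even with a requested buffer of 0 the first send on a futures channel completes immediately, because the effective capacity is the buffer size plus the number of senders:
use futures::{channel::mpsc, executor::block_on, SinkExt};

fn main() {
    // Requested buffer is 0, but one sender exists, so the capacity is effectively 1.
    let (mut tx, _rx) = mpsc::channel::<u32>(0);
    block_on(async {
        // Completes immediately, without the receiver doing anything.
        tx.send(1).await.unwrap();
        println!("first send finished with no receiver activity");
        // A second send would only complete once the receiver takes the first item.
    });
}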
I took a look to see if there were any packages that provide the functionality I'm looking for, but I could not find anything. Tokio provides lots of nice async synchronization primitives, but none of them did quite what I'm looking for. Plus, my program is going to run in the browser, so I don't think I'm able to use a runtime like Tokio. Does anyone know of a package that fits my use case? I would try to implement this myself, since this almost feels like the most minimal use case for the Sink and Stream traits, but even a minimal implementation of these traits seems like it would be really complicated. Thoughts?
Edit: here's a minimal example of what I mean:
fn main() {
let (tx, rx) = blocking_channel();
let ft = async move {
tx.send(3).await;
println!("message sent and received");
};
let fr = async move {
let msg = rx.recv().await;
println!("received {}", msg);
};
block_on(async { join!(ft, fr) });
}
In this example, whichever future runs first will yield to the other, and only print after both rx.recv and tx.send have been called. Obviously, the receiving end can only progress after tx.send has been called, but I want the less obvious behavior of the transmitting end also having to wait.
Interesting question. I don't think something like that already exists.
Here is a quick proof-of-concept prototype I wrote for that. It's not the prettiest, but it seems to work. There might be a better struct layout than just wrapping everything in a RefCell<Option<...>>, though. And I don't particularly like the sender_dropped and receiver_dropped variables.
Be sure to unittest it properly if used in production!!!
extern crate alloc;
use alloc::rc::Rc;
use core::cell::RefCell;
use core::pin::Pin;
use core::task::{Poll, Waker};
use futures::SinkExt;
use futures::StreamExt;
struct Pipe<T> {
send_waker: RefCell<Option<Waker>>,
receive_waker: RefCell<Option<Waker>>,
value: RefCell<Option<T>>,
sender_dropped: RefCell<bool>,
receiver_dropped: RefCell<bool>,
}
impl<T> Pipe<T> {
fn new() -> Rc<Pipe<T>> {
Rc::new(Self {
value: RefCell::new(None),
send_waker: RefCell::new(None),
receive_waker: RefCell::new(None),
sender_dropped: RefCell::new(false),
receiver_dropped: RefCell::new(false),
})
}
}
impl<T> Pipe<T> {
fn wake_sender(&self) {
if let Some(waker) = self.send_waker.replace(None) {
waker.wake();
}
}
fn wake_receiver(&self) {
if let Some(waker) = self.receive_waker.replace(None) {
waker.wake();
}
}
}
pub struct PipeSender<T> {
pipe: Rc<Pipe<T>>,
}
pub struct PipeReceiver<T> {
pipe: Rc<Pipe<T>>,
}
pub fn create_pipe<T>() -> (PipeSender<T>, PipeReceiver<T>) {
let pipe = Pipe::new();
(PipeSender { pipe: pipe.clone() }, PipeReceiver { pipe })
}
impl<T> futures::Sink<T> for PipeSender<T> {
type Error = ();
fn poll_ready(
self: Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> Poll<Result<(), Self::Error>> {
let result = if *self.pipe.receiver_dropped.borrow() {
Poll::Ready(Err(()))
} else if self.pipe.receive_waker.borrow().is_some() && self.pipe.value.borrow().is_none() {
Poll::Ready(Ok(()))
} else {
self.pipe.send_waker.replace(Some(cx.waker().clone()));
Poll::Pending
};
// Wake potential receiver
self.pipe.wake_receiver();
result
}
fn start_send(self: Pin<&mut Self>, item: T) -> Result<(), Self::Error> {
let prev = self.pipe.value.replace(Some(item));
assert!(prev.is_none(), "A value got lost in the pipe.");
Ok(())
}
fn poll_flush(
self: Pin<&mut Self>,
_: &mut futures::task::Context<'_>,
) -> Poll<Result<(), Self::Error>> {
// Noop, start_send already completes the send
Poll::Ready(Ok(()))
}
fn poll_close(
self: Pin<&mut Self>,
_: &mut std::task::Context<'_>,
) -> Poll<Result<(), Self::Error>> {
// Noop, start_send already completes the send
Poll::Ready(Ok(()))
}
}
impl<T> futures::Stream for PipeReceiver<T> {
type Item = T;
fn poll_next(
self: Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Option<Self::Item>> {
let result = {
let value = self.pipe.value.replace(None);
if let Some(value) = value {
Poll::Ready(Some(value))
} else if *self.pipe.sender_dropped.borrow() {
Poll::Ready(None)
} else {
self.pipe.receive_waker.replace(Some(cx.waker().clone()));
Poll::Pending
}
};
// Wake potential sender
self.pipe.wake_sender();
result
}
}
impl<T> Drop for PipeSender<T> {
fn drop(&mut self) {
self.pipe.sender_dropped.replace(true);
self.pipe.wake_receiver();
}
}
impl<T> Drop for PipeReceiver<T> {
fn drop(&mut self) {
self.pipe.receiver_dropped.replace(true);
self.pipe.wake_sender();
}
}
#[tokio::main]
async fn main() {
use std::time::Duration;
let (mut sender, mut receiver) = create_pipe();
tokio::join!(
async move {
for i in 0..5u32 {
println!("Sending {i} ...");
if let Err(_) = sender.send(i).await {
println!("Stream closed.");
break;
}
println!("Sent {i}.");
}
println!("Sender closed.");
},
async move {
println!("Attempting to receive ...");
while let Some(val) = receiver.next().await {
println!("Received: {val}");
println!("\n=== Waiting ... ===\n");
tokio::time::sleep(Duration::from_secs(1)).await;
println!("Attempting to receive ...");
}
println!("Receiver closed.");
}
);
}
Output:
Sending 0 ...
Attempting to receive ...
Sent 0.
Sending 1 ...
Received: 0
=== Waiting ... ===
Attempting to receive ...
Sent 1.
Sending 2 ...
Received: 1
=== Waiting ... ===
Attempting to receive ...
Sent 2.
Sending 3 ...
Received: 2
=== Waiting ... ===
Attempting to receive ...
Sent 3.
Sending 4 ...
Received: 3
=== Waiting ... ===
Attempting to receive ...
Sent 4.
Sender closed.
Received: 4
=== Waiting ... ===
Attempting to receive ...
Receiver closed.
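As a starting point for the unit testing suggested above, here is a minimal sketch of a rendezvous test using the futures single-threaded executor (the pipe is Rc-based, so everything must stay within one task):
#[cfg(test)]
mod tests {
    use super::*;
    use futures::{executor::block_on, join, SinkExt, StreamExt};

    #[test]
    fn values_pass_through_in_order() {
        let (mut tx, mut rx) = create_pipe::<u32>();
        block_on(async {
            let send = async move {
                for i in 0..3 {
                    tx.send(i).await.expect("receiver dropped");
                }
                // Dropping the sender here ends the stream on the receiving side.
            };
            let recv = async move {
                let mut got = Vec::new();
                while let Some(v) = rx.next().await {
                    got.push(v);
                }
                got
            };
            let ((), got) = join!(send, recv);
            assert_eq!(got, vec![0, 1, 2]);
        });
    }
}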
I want to send events between the game client and the server, and I already got that working, but I do not know how to do it with bevy.
I depend on tokio's async TcpStream, because I have to be able to split the stream into an OwnedWriteHalf and an OwnedReadHalf using stream.into_split().
My first idea was to just spawn a task that handles the connection and sends the received events to a queue using mpsc::channel.
I then include this queue in a bevy resource using app.insert_resource(Queue) and pull events from it in the game loop.
the Queue:
use tokio::sync::mpsc;
pub enum Instruction {
Push(GameEvent),
Pull(mpsc::Sender<Option<GameEvent>>),
}
#[derive(Clone, Debug)]
pub struct Queue {
sender: mpsc::Sender<Instruction>,
}
impl Queue {
pub fn init() -> Self {
let (tx, rx) = mpsc::channel(1024);
init(rx);
Self{sender: tx}
}
pub async fn send(&self, event: GameEvent) {
self.sender.send(Instruction::Push(event)).await.unwrap();
}
pub async fn pull(&self) -> Option<GameEvent> {
println!("new pull");
let (tx, mut rx) = mpsc::channel(1);
self.sender.send(Instruction::Pull(tx)).await.unwrap();
rx.recv().await.unwrap()
}
}
fn init(mut rx: mpsc::Receiver<Instruction>) {
tokio::spawn(async move {
let mut queue: Vec<GameEvent> = Vec::new();
loop {
match rx.recv().await.unwrap() {
Instruction::Push(ev) => {
queue.push(ev);
}
Instruction::Pull(sender) => {
sender.send(queue.pop()).await.unwrap();
}
}
}
});
}
But because all of this has to be async, I have to block on the pull() function in the sync game loop.
I do this using the futures-lite crate:
fn event_pull(
communication: Res<Communication>
) {
let ev = future::block_on(communication.event_queue.pull());
println!("got event: {:?}", ev);
}
And this works fine, BUT after around 5 seconds the whole program just halts and does not receive any more events.
It seems that future::block_on() ends up blocking indefinitely.
Having the main function, in which the bevy::prelude::App gets built and run, be the async tokio::main function might also be part of the problem here.
It would probably be best to wrap the async TcpStream initialisation and the tokio::sync::mpsc::Sender, and thus also Queue::pull, in synchronous functions, but I do not know how to do this.
Can anyone help?
How to reproduce
The repo can be found here
Just compile both the server and the client and then run them in that order.
I got it to work by replacing every tokio::sync::mpsc with crossbeam::channel (which might be a problem, since it blocks) and by manually initializing the tokio runtime.
so the init code looks like this:
pub struct Communicator {
pub event_bridge: bridge::Bridge,
pub event_queue: event_queue::Queue,
_runtime: Runtime,
}
impl Communicator {
pub fn init(ip: &str) -> Self {
let rt = tokio::runtime::Builder::new_multi_thread()
.enable_io()
.build()
.unwrap();
let (bridge, queue, game_rx) = rt.block_on(async move {
let socket = TcpStream::connect(ip).await.unwrap();
let (read, write) = socket.into_split();
let reader = TcpReader::new(read);
let writer = TcpWriter::new(write);
let (bridge, tcp_rx, game_rx) = bridge::Bridge::init();
reader::init(bridge.clone(), reader);
writer::init(tcp_rx, writer);
let event_queue = event_queue::Queue::init();
return (bridge, event_queue, game_rx);
});
// forward game_rx events to the queue used by the game loop
let eq_clone = queue.clone();
rt.spawn(async move {
loop {
let event = game_rx.recv().unwrap();
eq_clone.send(event);
}
});
Self {
event_bridge: bridge,
event_queue: queue,
_runtime: rt,
}
}
}
And main.rs looks like this:
fn main() {
let communicator = communication::Communicator::init("0.0.0.0:8000");
communicator.event_bridge.push_tcp(TcpEvent::Register{name: String::from("luca")});
App::new()
.insert_resource(communicator)
.add_system(event_pull)
.add_plugins(DefaultPlugins)
.run();
}
fn event_pull(
communication: Res<communication::Communicator>
) {
let ev = communication.event_queue.pull();
if let Some(ev) = ev {
println!("ev");
}
}
There might well be a better solution, though.
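One possible direction, as a sketch under my own assumptions rather than code from the repo: since a bevy system should never block, the resource could hold the crossbeam Receiver directly and drain it with the non-blocking try_iter, while the tokio task sends into the matching Sender.
use bevy::prelude::*;
use crossbeam::channel::Receiver;

// `GameEvent` stands in for the crate's own event type.
#[derive(Debug)]
pub enum GameEvent { /* ... */ }

// Hypothetical resource holding the receiving half of the tokio -> bevy bridge.
// Inserted at startup with `app.insert_resource(EventQueue(rx))`, where the
// matching sender is moved into the tokio task that reads from the TcpStream.
pub struct EventQueue(pub Receiver<GameEvent>);

// Drains whatever is queued without ever blocking the frame.
fn event_pull(queue: Res<EventQueue>) {
    for ev in queue.0.try_iter() {
        println!("got event: {:?}", ev);
    }
}
That way the system never has to call block_on at all.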
I am using hyper 0.12 to build a proxy service. When receiving a response body from the upstream server I want to forward it back to the client ASAP, and save the contents in a buffer for later processing.
So I need a function that:
takes a Stream (a hyper::Body, to be precise)
returns a Stream that is functionally identical to the input stream
also returns some sort of Future<Item = Vec<u8>, Error = ...> that is resolved with the buffered contents of the input stream, when the output stream is completely consumed
I can't for the life of me figure out how to do this.
I guess the function I'm looking for will look something like this:
type BufferFuture = Box<Future<Item = Vec<u8>, Error = ()>>;
pub fn copy_body(body: hyper::Body) -> (hyper::Body, BufferFuture) {
let body2 = ... // ???
let buffer = body.fold(Vec::<u8>::new(), |mut buf, chunk| {
buf.extend_from_slice(&chunk);
// ...somehow send this chunk to body2 also?
});
(body2, buffer)
}
Below is what I have tried, and it works until send_data() fails (obviously).
type BufferFuture = Box<Future<Item = Vec<u8>, Error = ()>>;
pub fn copy_body(body: hyper::Body) -> (hyper::Body, BufferFuture) {
let (mut sender, body2) = hyper::Body::channel();
let consume =
body.map_err(|_| ()).fold(Vec::<u8>::new(), move |mut buf, chunk| {
buf.extend_from_slice(&chunk);
// What to do if this fails?
if sender.send_data(chunk).is_err() {}
Box::new(future::ok(buf))
});
(body2, Box::new(consume))
}
However, something tells me I am on the wrong track.
I have found Sink.fanout() which seems like it is what I want, but I do not have a Sink, and I don't know how to construct one. hyper::Body implements Stream but not Sink.
What I ended up doing was implementing a new type of stream that does what I need. This appeared to be necessary because hyper::Body does not implement Sink, nor does hyper::Chunk implement Clone (which is required for Sink.fanout()), so I cannot use any of the existing combinators.
First, a struct that holds everything we need, with methods to append a new chunk and to signal that the buffer is complete.
struct BodyClone<T> {
body: T,
buffer: Option<Vec<u8>>,
sender: Option<futures::sync::oneshot::Sender<Vec<u8>>>,
}
impl BodyClone<hyper::Body> {
fn flush(&mut self) {
if let (Some(buffer), Some(sender)) = (self.buffer.take(), self.sender.take()) {
if sender.send(buffer).is_err() {}
}
}
fn push(&mut self, chunk: &hyper::Chunk) {
use hyper::body::Payload;
let length = if let Some(buffer) = self.buffer.as_mut() {
buffer.extend_from_slice(chunk);
buffer.len() as u64
} else {
0
};
if let Some(content_length) = self.body.content_length() {
if length >= content_length {
self.flush();
}
}
}
}
Then I implemented the Stream trait for this struct.
impl Stream for BodyClone<hyper::Body> {
type Item = hyper::Chunk;
type Error = hyper::Error;
fn poll(&mut self) -> futures::Poll<Option<Self::Item>, Self::Error> {
match self.body.poll() {
Ok(Async::Ready(Some(chunk))) => {
self.push(&chunk);
Ok(Async::Ready(Some(chunk)))
}
Ok(Async::Ready(None)) => {
self.flush();
Ok(Async::Ready(None))
}
other => other,
}
}
}
Finally I could define an extension method on hyper::Body:
pub type BufferFuture = Box<Future<Item = Vec<u8>, Error = ()> + Send>;
trait CloneBody {
fn clone_body(self) -> (hyper::Body, BufferFuture);
}
impl CloneBody for hyper::Body {
fn clone_body(self) -> (hyper::Body, BufferFuture) {
let (sender, receiver) = futures::sync::oneshot::channel();
let cloning_stream = BodyClone {
body: self,
buffer: Some(Vec::new()),
sender: Some(sender),
};
(
hyper::Body::wrap_stream(cloning_stream),
Box::new(receiver.map_err(|_| ())),
)
}
}
This can be used as follows:
let (body, buffer): (hyper::Body, BufferFuture) = body.clone_body();
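For completeness, a rough sketch of how this might be wired into a hyper 0.12 handler; process_later is a hypothetical post-processing function, and the buffering future is spawned so it resolves once the client has drained the forwarded body:
use futures::Future;

// Hypothetical post-processing hook for the buffered bytes.
fn process_later(bytes: Vec<u8>) {
    println!("buffered {} bytes from upstream", bytes.len());
}

// Assumes the `CloneBody` trait above is in scope.
fn forward(upstream: hyper::Response<hyper::Body>) -> hyper::Response<hyper::Body> {
    let (parts, body) = upstream.into_parts();
    let (body, buffer) = body.clone_body();
    // Spawn the buffering future on hyper's executor; it resolves once the
    // client has fully consumed the forwarded body.
    hyper::rt::spawn(buffer.map(process_later));
    hyper::Response::from_parts(parts, body)
}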