Can't Pass Messages with Servo/ipc_channel for Rust

I'm stuck on implementing Servo's ipc_channel. I'm looking for a trivial example of the following. I've read through the documentation and the test.rs file, and I can't find an example that makes sense to me in either.
Here's what I want to do, in pseudocode.
I have a cargo crate that acts as a main process that creates a spawned process.
The main process would look like this -
let servername = CREATESERVER();
let (sender, receiver) = GETSERVER(servername);
let message = <Some message>
sender(<Send message>)
SPAWNSERVER(servername)
loop {
    print(receiver)
}
The spawned process should look like this -
let (sender, receiver) = GETSERVER(servernamefrommainprocess)
let messagespawn = <Some message in spawn process>
sender(messagespawn)
loop {
    print(receiver)
}
The issue is that IpcSender can connect to a server by name, but IpcReceiver can't, so I don't know how to receive messages from an IpcOneShotServer. The examples are wrapping IpcSenders within other channels in a way I don't understand.
Here's what I have for a trivial working repo, which is close to what I want and should give you an idea of what I'm looking to do.
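For what it's worth, here is a minimal sketch of the handshake pattern those examples use, based on ipc_channel's IpcOneShotServer, IpcSender::connect, and ipc::channel APIs (the function name child and the way the server name reaches the child are placeholders, not anything from the crate). Since only a sender can connect by name, the spawned process creates an ordinary channel and ships one end back through the one-shot server; that is the "wrapping" the examples do.

// Main process: a sketch, assuming ipc_channel's documented API.
use ipc_channel::ipc::{IpcOneShotServer, IpcSender};

fn main() {
    // Create a one-shot server; `name` is the string the child connects with.
    let (server, name) = IpcOneShotServer::<IpcSender<String>>::new().unwrap();

    // Spawn the child process here, passing `name` to it (e.g. as an argument).

    // accept() blocks until the child connects; its first message is an
    // IpcSender that the main process can use to talk to the child.
    let (_, to_child) = server.accept().unwrap();
    to_child.send("hello from the main process".to_string()).unwrap();
}

// Spawned process: `name` is the server name received from the main process
// (how it gets here, e.g. via argv, is up to you).
use ipc_channel::ipc::{self, IpcReceiver, IpcSender};

fn child(name: String) {
    // Connect to the one-shot server by name...
    let bootstrap: IpcSender<IpcSender<String>> = IpcSender::connect(name).unwrap();
    // ...create an ordinary channel, and ship its sending end to the parent.
    let (to_child, from_parent): (IpcSender<String>, IpcReceiver<String>) =
        ipc::channel().unwrap();
    bootstrap.send(to_child).unwrap();

    loop {
        println!("{}", from_parent.recv().unwrap());
    }
}

For the child-to-parent direction, the same trick works in reverse: put a second channel's IpcSender into the bootstrap message (e.g. send a tuple of senders), so each side ends up holding a sender for the other.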

Related

How to use tokio DuplexStream in spawned parallel tasks?

Suppose I use a pair of DuplexStreams in an application in order to allow bidirectional communication between server and client.
use tokio::io::duplex;
let (upstream, dwstream) = duplex(64*1024);
Then I use dwstream to send data to the client and read the data sent by the client from dwstream.
If all these operations are executed in one async block, of course there is no problem. But now, for concurrency, I spawn a new async task for reading the dwstream, and in another task I need to write data to the dwstream from time to time.
In order to move the dwstream into the new task, I have to wrap it in an Arc<Mutex<DuplexStream>> and clone that handle so each task can own one.
However, the read task has to hold the lock() the whole time it is reading, which in practice makes it impossible for the write task to ever acquire the stream.
let (upstream, dwstream) = duplex(64 * 1024);
let dwrc = Arc::new(Mutex::new(dwstream));
let dwrc1 = dwrc.clone();
// write task
tokio::spawn(async move {
    // ... some operations ...
    dwrc.lock().unwrap().write_all(...).await;
});
// read task
tokio::spawn(async move {
    let data = dwrc1.lock().unwrap().read().await; // here I think dwstream is locked for a long time
});
How can I resolve it?
You're supposed to write to one stream in one task, and read from the other stream in the other task. Currently, you're writing to and reading from the same stream (dwstream) across the two tasks.
use tokio::io::{duplex, AsyncReadExt, AsyncWriteExt};

let (mut upstream, mut dwstream) = duplex(64 * 1024);
// write task
tokio::spawn(async move {
    upstream.write_all(b"abc").await.unwrap();
});
// read task
tokio::spawn(async move {
    // read() fills a pre-sized buffer; an empty Vec would read zero bytes
    let mut buf = vec![0u8; 64];
    let _n = dwstream.read(&mut buf).await.unwrap();
});
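If you genuinely need to read and write the same stream from two tasks, a mutex still isn't necessary: here is a minimal sketch using tokio::io::split, which turns one bidirectional stream into independently owned read and write halves.

use tokio::io::{duplex, split, AsyncReadExt, AsyncWriteExt};

let (mut upstream, dwstream) = duplex(64 * 1024);
// Owned halves can be moved into separate tasks, no Arc<Mutex<..>> involved.
let (mut rd, mut wr) = split(dwstream);

// write task (sends toward upstream)
tokio::spawn(async move {
    wr.write_all(b"ping").await.unwrap();
});
// read task (receives whatever upstream writes)
tokio::spawn(async move {
    let mut buf = vec![0u8; 64];
    let _n = rd.read(&mut buf).await.unwrap();
});
// `upstream` remains available here as the other endpoint.
upstream.write_all(b"pong").await.unwrap();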

Why doesn't the second await get called on tokio::spawn?

I have the following function that connects to a database using sqlx:
async fn testConnect() -> anyhow::Result<PgPool> {
    delay_for(Duration::from_millis(3000)).await;
    let pool = PgPoolOptions::new()
        .max_connections(5)
        .connect(&"database_connection_string")
        .await?;
    Ok(pool)
}
And I run it on the tokio runtime:
let mut threaded_rt = runtime::Builder::new()
    .threaded_scheduler()
    .enable_all()
    .build()
    .unwrap();
threaded_rt.block_on(future::lazy(move |_| {
    let handle = tokio::spawn(testConnect());
    return handle;
}));
Any code after delay_for inside testConnect does not get executed. Why is this and how can I make both awaits run?
If I remove the delay_for line of code, the database connection code runs as expected.
I suspect that the following happens. This is analogous to starting a background worker thread and quitting without joining it:
- you spawn the task on tokio and return the handle
- block_on drives the tokio reactor for a little while, which is just enough for a normal connection, but not enough for the delay to expire
- nothing drives the reactor anymore, so the result of the spawned task is just dropped and the program exits
If so, you can fix it simply by calling threaded_rt.block_on(testConnect()) directly; the spawn() part seems to be completely pointless.
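A minimal sketch of that fix, sticking to the tokio 0.2-era API the question uses (the spawning variant is only needed if you really want a separate task):

// Drive the future to completion directly; block_on returns its output.
let pool = threaded_rt.block_on(testConnect())?;

// Or, if spawning is desired, keep the reactor alive by awaiting the
// JoinHandle: the first ? handles the JoinError, the second the sqlx error.
let pool = threaded_rt.block_on(async { tokio::spawn(testConnect()).await })??;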

How to cancel future/close stream in multithreaded tokio?

Based on tokio's example at https://github.com/tokio-rs/tokio/blob/master/examples/proxy.rs
let (mut ri, mut wi) = inbound.split();
let (mut ro, mut wo) = outbound.split();
let client_to_server = io::copy(&mut ri, &mut wo);
let server_to_client = io::copy(&mut ro, &mut wi);
try_join(client_to_server, server_to_client).await?;
Ok(())
I have a modified version so that I can handle the termination of each connection as in:
// Server will disconnect their side normally 8s later, from what I've observed
let server_to_client = io::copy(&mut ro, &mut wi).map(|f| {
    server_session_time = server_start_time.elapsed().unwrap();
    f
});
// Normally, this will stop first, as the client disconnects as soon as he has the results...
let client_to_server = io::copy(&mut ri, &mut wo).map(|f| {
    client_session_time = client_start_time.elapsed().unwrap();
    f
});
// Join on both
match try_join(client_to_server, server_to_client).await {...}
This has allowed me to time the client side of the connection correctly, since the client closes its connection as soon as it receives the answer, while the proxied server takes (in my case) about 8s to close its side.
Given this structure of code, is there any way to terminate the downstream connection from server_to_client once the client_to_server future exits (i.e., without waiting the ~8s I observe it takes to shut down)?
OK, with a few more examples I was able to understand what I had to do.
For anyone coming back to this question in the future: what you need is to implement the bidirectional copy yourself, based on the four futures of the individual reads and writes, combined with tokio::select!.
That gives you access to all of the streams, so when one of them terminates it is up to you whether to finish processing the others or to just stop.
As the code stands above, there is no way to "cancel" the "other" copy...
You can look at the implementation of io::copy https://github.com/tokio-rs/tokio-io/blob/master/src/copy.rs and at tokio::select https://docs.rs/tokio/0.2.20/tokio/macro.select.html to build your 4-way select.
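As a rough illustration of that idea (a sketch, not the answerer's exact code), a simpler variant selects over the two copy directions and shuts down the surviving writer as soon as the other direction finishes:

use tokio::io::{self, AsyncWriteExt};

let (mut ri, mut wi) = inbound.split();
let (mut ro, mut wo) = outbound.split();

// select! drops the losing branch's future, which cancels its pending copy.
let client_finished_first = tokio::select! {
    _ = io::copy(&mut ri, &mut wo) => true,
    _ = io::copy(&mut ro, &mut wi) => false,
};

// Both copy futures are gone here, so the write halves are free again and
// the slow side can be closed immediately instead of waiting ~8s.
if client_finished_first {
    let _ = wi.shutdown().await;
} else {
    let _ = wo.shutdown().await;
}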

How can I share or avoid sharing a websocket resource between two threads?

I am using tungstenite to build a chat server, and the way I want to do it relies on having many threads that communicate with each other through mpsc. I want to start up a new thread for each user that connects to the server and connect them to a websocket, and also have that thread be able to read from mpsc so that the server can send messages out through that connection.
The problem is that the mpsc read blocks the thread, but I can't block the thread if I also want to be reading from the websocket. The only workaround I could think of is to make two threads, one for inbound and one for outbound messages, but that requires me to share my websocket connection with both workers, which of course I cannot do.
Here's a heavily truncated version of my code where I try to make two workers in the Action::Connect arm of the match statement, which gives error[E0382]: use of moved value: 'websocket' for trying to move it into the second worker's closure:
extern crate tungstenite;
extern crate workerpool;

use std::net::{TcpListener, TcpStream};
use std::sync::mpsc::{self, Sender, Receiver};
use workerpool::Pool;
use workerpool::thunk::{Thunk, ThunkWorker};
use tungstenite::server::accept;

pub enum Action {
    Connect(TcpStream),
    Send(String),
}

fn main() {
    let (main_send, main_receive): (Sender<Action>, Receiver<Action>) = mpsc::channel();
    let worker_pool = Pool::<ThunkWorker<()>>::new(8);
    {
        // spawn thread to listen for users connecting to the server
        let main_send = main_send.clone();
        worker_pool.execute(Thunk::of(move || {
            let listener = TcpListener::bind(format!("127.0.0.1:{}", 8080)).unwrap();
            for (_, stream) in listener.incoming().enumerate() {
                main_send.send(Action::Connect(stream.unwrap())).unwrap();
            }
        }));
    }
    let mut users: Vec<Sender<String>> = Vec::new();
    // process actions from children
    while let Some(act) = main_receive.recv().ok() {
        match act {
            Action::Connect(stream) => {
                let mut websocket = accept(stream).unwrap();
                let (user_send, user_receive): (Sender<String>, Receiver<String>) = mpsc::channel();
                let main_send = main_send.clone();
                // thread to read user input and propagate it to the server
                worker_pool.execute(Thunk::of(move || {
                    loop {
                        let message = websocket.read_message().unwrap().to_string();
                        main_send.send(Action::Send(message)).unwrap();
                    }
                }));
                // thread to take server output and propagate it to the user
                worker_pool.execute(Thunk::of(move || {
                    while let Some(message) = user_receive.recv().ok() {
                        websocket.write_message(tungstenite::Message::Text(message.clone())).unwrap();
                    }
                }));
                users.push(user_send);
            }
            Action::Send(message) => {
                // take user message and echo to all users
                for user in &users {
                    user.send(message.clone()).unwrap();
                }
            }
        }
    }
}
If I create just one thread for both input and output in that arm, then user_receive.recv() blocks the thread, so I can't read any messages with websocket.read_message() until I get an mpsc message from the main thread. How can I solve both problems? I considered cloning the websocket, but it doesn't implement Clone, and I don't know whether just making a new connection with the same stream is a reasonable thing to try; it seems hacky.
The problem is that the mpsc read blocks the thread
You can use try_recv to avoid blocking the thread. Another implementation of mpsc is crossbeam_channel; that crate is a recommended replacement, even by the author of mpsc.
I want to start up a new thread for each user that connects to the server
I think the async/await approach will be much better than the thread-per-client one from most perspectives. You can read more about it here.
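To make the try_recv suggestion concrete, here is a minimal sketch of a single per-user loop. It assumes the websocket's underlying stream has been put into non-blocking mode (or given a read timeout) so read_message can't stall the loop; a real loop would also sleep briefly between iterations to avoid spinning.

// One thread per user: poll the websocket and the mpsc queue in turn.
loop {
    // With a non-blocking stream, a WouldBlock error just means "no frame yet".
    match websocket.read_message() {
        Ok(message) => main_send.send(Action::Send(message.to_string())).unwrap(),
        Err(tungstenite::Error::Io(ref e)) if e.kind() == std::io::ErrorKind::WouldBlock => {}
        Err(_) => break, // connection closed or errored
    }
    // Drain any pending outbound messages without blocking.
    while let Ok(outgoing) = user_receive.try_recv() {
        websocket.write_message(tungstenite::Message::Text(outgoing)).unwrap();
    }
}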

Converting a future to a stream in kube-rs library

I'm trying to implement a project where I can tail the logs of multiple Kubernetes containers simultaneously; think tmux split panes, with a tail running in each pane. Anyway, I'm far, far away from my actual project because I'm stuck right at the beginning. In the code below, the commented-out lp.follow = true line would keep the log stream open and stream logs forever, but I'm not sure how to actually consume it. I found a function called .into_stream() that I can tack onto the pods.log call, but then I'm not sure how to use the resulting stream. I'm not experienced enough to know whether this is a limitation of the kube library or whether I'm just doing something wrong. Here is the repo if you want to look at anything else: https://github.com/bloveless/kube-logger
I'd be forever grateful for any advice or resources I can look at. Thanks!
use kube::{
    api::Api,
    client::APIClient,
    config,
};
use kube::api::{LogParams, RawApi};
use futures::{FutureExt, Stream, future::IntoStream, StreamExt};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    std::env::set_var("RUST_LOG", "info,kube=trace");
    let config = config::load_kube_config().await?;
    let client = APIClient::new(config);

    // Manage pods
    let pods = Api::v1Pod(client).within("fritzandandre");

    let mut lp = LogParams::default();
    lp.container = Some("php".to_string());
    // lp.follow = true;
    lp.tail_lines = Some(100);

    let log_string = pods.log("fritzandandre-php-0", &lp).await?;
    println!("FnA Log: {}", log_string);

    Ok(())
}
Originally posted here https://www.reddit.com/r/learnrust/comments/eg49tx/help_with_futuresstreams_and_the_kubers_library/
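For the record, here is a minimal sketch of consuming a followed log stream. It assumes a kube version that exposes a log_stream method on Api returning a Stream of byte chunks; the exact signature has shifted between kube releases, so treat this as the shape of the solution rather than exact code.

use futures::StreamExt;

// lp.follow = true keeps the connection open; log_stream then yields
// chunks as the container writes them.
let mut lp = LogParams::default();
lp.container = Some("php".to_string());
lp.follow = true;

let mut logs = pods.log_stream("fritzandandre-php-0", &lp).await?;
while let Some(chunk) = logs.next().await {
    let chunk = chunk?;
    print!("{}", String::from_utf8_lossy(&chunk));
}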
