I want to write a program that passes messages from a local websocket to a remote one and vice versa, but when I add a while loop to accept connections and spawn tasks, I get an error. How can I fix this?
The exact same error also shows up with ws_local.
error[E0382]: use of moved value: `write_remote`
|
42 | let (mut write_remote, mut read_remote) = ws_remote.split();
| ---------------- move occurs because `write_remote` has type `SplitSink<WebSocketStream<tokio_tungstenite::MaybeTlsStream<tokio::net::TcpStream>>, Message>`, which does not implement the `Copy` trait
...
70 | let _handle_two = task::spawn(async move {
| ________________________________________________^
71 | | while let Some(msg) = read_local.next().await {
72 | | let msg = msg?;
73 | | if msg.is_text() || msg.is_binary() {
74 | | write_remote.send(msg).await;
| | ------------ use occurs due to use in generator
... |
78 | | Result::<(), tungstenite::Error>::Ok(())
79 | | });
| |_______^ value moved here, in previous iteration of loop
Here is my code:
#![cfg_attr(
all(not(debug_assertions), target_os = "windows"),
windows_subsystem = "windows"
)]
use tokio::net::{TcpListener, TcpStream};
use futures_util::{future, SinkExt, StreamExt, TryStreamExt};
use tokio_tungstenite::{
connect_async,
accept_async,
tungstenite::{Result},
};
use http::Request;
use tokio::sync::oneshot;
use futures::{
future::FutureExt, // for `.fuse()`
pin_mut,
select,
};
use tokio::io::AsyncWriteExt;
use std::io;
use std::net::SocketAddr;
use std::thread;
use tokio::spawn;
use tokio::task;
async fn client() -> Result<()> {
// Client
let request = Request::builder()
.method("GET")
.header("Host", "demo.piesocket.com")
// .header("Origin", "https://example.com/")
.header("Connection", "Upgrade")
.header("Upgrade", "websocket")
.header("Sec-WebSocket-Version", "13")
.header("Sec-WebSocket-Key", tungstenite::handshake::client::generate_key())
.uri("wss://demo.piesocket.com/v3/channel_1?api_key=VCXCEuvhGcBDP7XhiJJUDvR1e1D3eiVjgZ9VRiaV&notify_self")
.body(())?;
let (mut ws_remote, _) = connect_async(request).await?;
let (mut write_remote, mut read_remote) = ws_remote.split();
let listener = TcpListener::bind("127.0.0.1:4444").await.expect("Can't listen");
while let Ok((stream, _)) = listener.accept().await {
let mut ws_local = accept_async(stream).await.expect("Failed to accept");
let (mut write_local, mut read_local) = ws_local.split();
// read_remote.try_filter(|msg| future::ready(msg.is_text() || msg.is_binary()))
// .forward(write_local)
// .await
// .expect("Failed to forward messages");
// read_local.try_filter(|msg| future::ready(msg.is_text() || msg.is_binary()))
// .forward(write_remote)
// .await
// .expect("Failed to forward messages");
let _handle_one = task::spawn(async move {
while let Some(msg) = read_remote.next().await {
let msg = msg?;
if msg.is_text() || msg.is_binary() {
write_local.send(msg).await;
}
};
Result::<(), tungstenite::Error>::Ok(())
});
let _handle_two = task::spawn(async move {
while let Some(msg) = read_local.next().await {
let msg = msg?;
if msg.is_text() || msg.is_binary() {
write_remote.send(msg).await;
}
};
Result::<(), tungstenite::Error>::Ok(())
});
// handle_one.await.expect("The task being joined has panicked");
// handle_two.await.expect("The task being joined has panicked");
}
Ok(())
}
fn main() {
tauri::async_runtime::spawn(client());
tauri::Builder::default()
// .plugin(PluginBuilder::default().build())
.run(tauri::generate_context!())
.expect("failed to run app");
}
Looks like you need to make the split streams shareable across tasks. The body of the while loop can't keep moving write_remote (or write_local) into a freshly spawned task on every iteration; a non-Copy value can only be moved once, so you need an outer shared handle, e.g. Arc<Mutex<...>>, that each iteration can clone and hand to its task.
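The core of the fix, stripped of the websocket details, is this ownership pattern: wrap the non-Copy value in Arc<Mutex<...>> and clone the Arc once per iteration, so each spawned task owns its own handle. A minimal sketch using plain std threads, with a hypothetical Sink type standing in for the SplitSink:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Stand-in for the non-Copy SplitSink half.
struct Sink(Vec<String>);

fn forward_all(messages: &[&str]) -> usize {
    let sink = Arc::new(Mutex::new(Sink(Vec::new())));
    let mut handles = Vec::new();
    for msg in messages {
        let sink = Arc::clone(&sink); // cheap per-iteration clone of the handle
        let msg = msg.to_string();
        handles.push(thread::spawn(move || {
            // Each task owns its own Arc; the Mutex serializes access.
            sink.lock().unwrap().0.push(msg);
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let count = sink.lock().unwrap().0.len();
    count
}

fn main() {
    println!("forwarded {} messages", forward_all(&["a", "b", "c"]));
}
```

tokio's Mutex works the same way at the ownership level, except that locking is awaited rather than blocking.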
Here is a working example:
use std::sync::Arc;
use futures_util::{SinkExt, StreamExt};
use http::Request;
use tokio::task;
use tokio::{net::TcpListener, sync::Mutex};
use tokio_tungstenite::{accept_async, connect_async, tungstenite::Result};
async fn client() -> Result<()> {
// Client
let request = Request::builder()
.method("GET")
.header("Host", "demo.piesocket.com")
// .header("Origin", "https://example.com/")
.header("Connection", "Upgrade")
.header("Upgrade", "websocket")
.header("Sec-WebSocket-Version", "13")
.header("Sec-WebSocket-Key", tungstenite::handshake::client::generate_key())
.uri("wss://demo.piesocket.com/v3/channel_1?api_key=VCXCEuvhGcBDP7XhiJJUDvR1e1D3eiVjgZ9VRiaV&notify_self")
.body(())?;
let (ws_remote, _) = connect_async(request).await?;
let (write_remote, read_remote) = ws_remote.split();
let read_remote = Arc::new(Mutex::new(read_remote));
let write_remote = Arc::new(Mutex::new(write_remote));
let listener = TcpListener::bind("127.0.0.1:4444")
.await
.expect("Can't listen");
while let Ok((stream, _)) = listener.accept().await {
let ws_local = accept_async(stream).await.expect("Failed to accept");
let (mut write_local, mut read_local) = ws_local.split();
// read_remote.try_filter(|msg| future::ready(msg.is_text() || msg.is_binary()))
// .forward(write_local)
// .await
// .expect("Failed to forward messages");
// read_local.try_filter(|msg| future::ready(msg.is_text() || msg.is_binary()))
// .forward(write_remote)
// .await
// .expect("Failed to forward messages");
let read_remote = read_remote.clone();
let _handle_one = task::spawn(async move {
let mut read_remote = read_remote.lock_owned().await;
while let Some(msg) = read_remote.next().await {
let msg = msg?;
if msg.is_text() || msg.is_binary() {
write_local.send(msg).await?;
}
}
Result::<(), tungstenite::Error>::Ok(())
});
let write_remote = write_remote.clone();
let _handle_two = task::spawn(async move {
let mut write_remote = write_remote.lock_owned().await;
while let Some(msg) = read_local.next().await {
let msg = msg?;
if msg.is_text() || msg.is_binary() {
write_remote.send(msg).await?;
}
}
Result::<(), tungstenite::Error>::Ok(())
});
// handle_one.await.expect("The task being joined has panicked");
// handle_two.await.expect("The task being joined has panicked");
}
Ok(())
}
fn main() {
println!("Hello!");
}
Related
I'm listening to user input in a gtk-rs input element. input.connect_changed triggers when the input changes and input.connect_activate triggers when Enter is pressed.
use gtk::prelude::*;
use gtk::{Application, ApplicationWindow};
use std::process::{Command, Output};
fn main() {
let app = Application::builder()
.application_id("com.jwestall.ui-demo")
.build();
app.connect_activate(build_ui);
app.run();
}
fn run_command(command: &str) -> Output {
Command::new("sh")
.arg("-c")
.arg(command)
.output()
.unwrap_or_else(|_| panic!("failed to execute {}'", command))
}
fn build_ui(app: &Application) {
let input = gtk::Entry::builder()
.placeholder_text("input")
.margin_top(12)
.margin_bottom(12)
.margin_start(12)
.margin_end(12)
.build();
let window = ApplicationWindow::builder()
.application(app)
.title("gtk-app")
.child(&input)
.build();
window.show_all();
input.connect_changed(|entry| {
let input_text = entry.text();
let command = format!("xdotool search --onlyvisible --name {}", input_text);
let window_id_output = run_command(&command);
if window_id_output.status.success() {
println!(
"stdout: {}",
String::from_utf8_lossy(&window_id_output.stdout)
);
} else {
println!(
"stderr: {}",
String::from_utf8_lossy(&window_id_output.stderr)
);
}
});
input.connect_activate(move |entry| {
let input_text = entry.text();
// // `xdotool windowactivate` doesn't produce any output
let command = format!("xdotool windowactivate {}", window_id_output);
let window_activate_output = run_command(&command);
println!("window_activate: {}", window_activate_output);
window.hide();
window.close();
});
}
I want to set window_id_output in input.connect_changed, then use it in input.connect_activate (in the xdotool windowactivate {} command).
How can I use window_id_output this way in these two closures?
Rust Playground
As Sven Marnach said, you can use Rc<RefCell<..>> to share data between closures.
The simplest example is probably this one, which is likely close to how the gtk event loop works anyway:
use std::rc::Rc;
use std::cell::RefCell;
fn main() {
let a = Rc::new(RefCell::new(0));
let a_ref = Rc::clone(&a);
let closure_1 = move || {
let mut a = a_ref.borrow_mut();
*a += 1;
println!("closure_1: {}", &a);
};
let a_ref = Rc::clone(&a);
let closure_2 = move || {
let a = a_ref.borrow();
println!("closure_2: {}", &a);
};
for _ in 1..10 {
closure_1();
closure_2();
}
}
For your specific case, see a reduced example below (based on your code):
use std::cell::RefCell;
use std::rc::Rc;
use gtk::prelude::*;
use gtk::{Application, ApplicationWindow};
fn main() {
let app = Application::builder()
.application_id("com.jwestall.ui-demo")
.build();
app.connect_activate(build_ui);
app.run();
}
fn process(s: &str) -> String {
format!("you entered '{}'", s)
}
fn build_ui(app: &Application) {
let input = gtk::Entry::builder()
.placeholder_text("input")
.margin_top(12)
.margin_bottom(12)
.margin_start(12)
.margin_end(12)
.build();
let window = ApplicationWindow::builder()
.application(app)
.title("gtk-app")
.child(&input)
.build();
window.show_all();
let shared_var = Rc::new(RefCell::new(String::new()));
let shared_var_ref = Rc::clone(&shared_var);
input.connect_changed(move |entry| {
let input_text = entry.text();
let mut shared = shared_var_ref.borrow_mut();
*shared = process(&input_text);
});
let shared_var_ref = Rc::clone(&shared_var);
input.connect_activate(move |_entry| {
let shared = shared_var_ref.borrow();
println!("{}", shared);
window.hide();
window.close();
});
}
I am using the Redis streams feature in actix-web 4, and I want to create the consumer in the main function. This is my current code:
[dependencies]
actix-web = "4"
tokio = { version = "1", features = ["full"] }
redis = { version = "0.21", features = [
# "cluster",
"tokio-comp",
"tokio-native-tls-comp",
] }
#[actix_web::main]
async fn main() -> std::io::Result<()> {
utils::init::init_envfile();
env_logger::init_from_env(env_logger::Env::new());
let redis_pool = utils::init::init_redis_pool();
let mysql_pool = utils::init::init_mysql_pool();
let redist_stream_consumer = web::block(redis_stream_group);
HttpServer::new(move || {
App::new()
.app_data(web::Data::new(redis_pool.clone()))
.app_data(web::Data::new(mysql_pool.clone()))
.service(web::scope("/api").configure(controller::api::config))
})
.bind(("0.0.0.0", 7777))?
.run()
.await?;
redist_stream_consumer.await.unwrap();
Ok(())
}
fn redis_stream_group() {
let client = redis::Client::open("redis://127.0.0.1/").expect("client");
let mut con = client.get_connection().expect("con");
let key = "s.order";
let group_name = "g1";
let consumer_name = "c1";
let _: Result<(), _> = con.xgroup_create_mkstream(key, group_name, "$");
let opts = StreamReadOptions::default()
.group(group_name, consumer_name)
.count(1)
.block(0);
loop {
let read_reply: StreamReadReply =
con.xread_options(&[key], &[">"], &opts).expect("read err");
for StreamKey { key, ids } in read_reply.keys {
for StreamId { id, map } in &ids {
log::info!("id:{} | key:{} | data:{:?}", id, key, map);
}
let id_strs: Vec<&String> = ids.iter().map(|StreamId { id, map: _ }| id).collect();
let _: usize = con.xack(key, group_name, &id_strs).expect("ack err");
}
}
}
When I use cargo r, the program runs normally and receives the sent messages, but when I press Ctrl+C, the program won't exit.
Also, I'm not sure whether using web::block in the main function is correct, or whether there is a better way to run background work.
UPDATE: Tried using tokio::spawn, seems to work
#[actix_web::main]
async fn main() -> std::io::Result<()> {
let redis_pool = utils::init::init_redis_pool();
let mysql_pool = utils::init::init_mysql_pool();
for consumer_index in 1..=2 {
let c_redis_pool = redis_pool.clone();
tokio::spawn(async move {
let mut con = c_redis_pool.get().await.unwrap();
let key = "s.order";
let group_name = "g1";
let consumer_name = &format!("c{consumer_index}");
let _: Result<(), _> = con.xgroup_create_mkstream(key, group_name, "$").await;
let opts = StreamReadOptions::default()
.group(group_name, consumer_name)
.count(1)
.block(5000);
loop {
let read_reply: StreamReadReply = con
.xread_options(&[key], &[">"], &opts)
.await
.expect("err");
for StreamKey { key, ids } in read_reply.keys {
for StreamId { id, map } in &ids {
log::info!(
"consumer: {} | id:{} | key:{} | data:{:?}",
consumer_name,
id,
key,
map
);
}
let id_strs: Vec<&String> =
ids.iter().map(|StreamId { id, map: _ }| id).collect();
let _: usize = con
.xack(key, group_name, &id_strs)
.await
.expect("ack err");
}
}
});
}
let serve = HttpServer::new(move || {
...
}
This can be done with the standard library using std::thread: create the thread, and put whatever you want the other thread to do in a closure.
use std::thread;

fn main() {
    thread::spawn(|| {
        println!("doing things in the thread!");
    });
    println!("doing things outside the thread.... how boring");
}
If you want to pass data between them, you can use std::sync::mpsc to transfer data between the threads safely, using let (sender, receiver) = mpsc::channel();, like so:
use std::sync::mpsc;
use std::thread;

fn main() {
    let (sender, receiver) = mpsc::channel();
    thread::spawn(move || {
        let message = String::from("This message is from the thread");
        sender.send(message).unwrap();
    });
    let letter = receiver.recv().unwrap();
    println!("{}", letter);
}
Note that the main thread proceeds as normal until it reaches the .recv() call, at which point it either receives the data immediately, if the other thread has already sent it, or blocks until it does.
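To make the blocking behavior concrete, here is a small sketch contrasting try_recv, which returns immediately, with recv, which waits:

```rust
use std::sync::mpsc::{channel, TryRecvError};
use std::thread;

// Returns the first message received, after demonstrating that try_recv
// does not block on an empty channel.
fn demo() -> String {
    let (sender, receiver) = channel::<String>();
    // Nothing has been sent yet, so try_recv reports Empty without blocking.
    assert_eq!(receiver.try_recv().unwrap_err(), TryRecvError::Empty);
    thread::spawn(move || {
        sender.send(String::from("from the thread")).unwrap();
    });
    // recv blocks until the spawned thread sends.
    receiver.recv().unwrap()
}

fn main() {
    println!("{}", demo());
}
```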
In your example you could do something like:
use std::sync::mpsc;
use std::thread;
#[actix_web::main]
async fn main() -> std::io::Result<()> {
utils::init::init_envfile();
env_logger::init_from_env(env_logger::Env::new());
let port = get_env!("ACTIX_PORT", u16);
log::info!(
"starting HTTP server at http://{}:{}",
local_ipaddress::get().unwrap_or("localhost".to_string()),
port
);
let redis_pool = utils::init::init_redis_pool();
let mysql_pool = utils::init::init_mysql_pool();
let (consumer_sender,consumer_listener) = mpsc::channel();
thread::spawn(move || {
consumer_sender.send(redis_stream_group()).expect("You probably want to handle this case, but I'm too lazy");
});
let serve = HttpServer::new(move || {
let app_state = utils::init::AppState {
app_name: get_env!("APP_NAME", String),
pwd_secret: get_env!("PWD_SECRET", String),
jwt_secret: get_env!("JWT_SECRET", String),
jwt_exp: get_env!("JWT_EXP", i64),
};
App::new()
.app_data(web::Data::new(awc::Client::default()))
.app_data(web::Data::new(app_state))
.app_data(web::Data::new(redis_pool.clone()))
.app_data(web::Data::new(mysql_pool.clone()))
.wrap(actix_cors::Cors::default().allowed_origin_fn(|_, _| true))
.service(web::scope("/chat").configure(controller::chat::config))
.service(web::scope("/ws").configure(controller::ws::config))
.service(web::scope("/api").configure(controller::api::config))
});
if cfg!(debug_assertions) {
serve.bind(("0.0.0.0", port))?
} else {
let p = format!("/tmp/{}.socket", get_env!("APP_NAME", String));
let r = serve.bind_uds(&p)?;
let mut perms = std::fs::metadata(&p)?.permissions();
perms.set_readonly(false);
std::fs::set_permissions(&p, perms)?;
r
}
.run()
.await?;
let consumer = consumer_listener.recv().unwrap();
//then put things to do with the consumer here, or not idc
Ok(())
}
fn redis_stream_group() {
let client = redis::Client::open("redis://127.0.0.1/").expect("client");
let mut con = client.get_connection().expect("con");
let key = "s.order";
let group_name = "g1";
let consumer_name = "c1";
let _: Result<(), _> = con.xgroup_create_mkstream(key, group_name, "$");
let opts = StreamReadOptions::default()
.group(group_name, consumer_name)
.count(1)
.block(0);
loop {
let read_reply: StreamReadReply =
con.xread_options(&[key], &[">"], &opts).expect("read err");
for StreamKey { key, ids } in read_reply.keys {
for StreamId { id, map } in &ids {
log::info!("id:{} | key:{} | data:{:?}", id, key, map);
}
let id_strs: Vec<&String> = ids.iter().map(|StreamId { id, map: _ }| id).collect();
let _: usize = con.xack(key, group_name, &id_strs).expect("ack err");
}
}
}
I have a Rust program that creates temporary email addresses using the mail.tm API, and I want to use threads to create emails simultaneously to increase the speed. However, what I have tried only results in printing "Getting email..." x amount of times and then exiting. I am unsure what to do about this. Any help or suggestions are appreciated.
use json;
use rand::distributions::Alphanumeric;
use rand::{thread_rng, Rng};
use reqwest;
use reqwest::header::{HeaderMap, HeaderValue, ACCEPT, CONTENT_TYPE};
use std::{collections::HashMap, io, iter, vec::Vec};
use std::thread;
fn gen_address() -> Vec<String> {
let mut rng = thread_rng();
let address: String = iter::repeat(())
.map(|()| rng.sample(Alphanumeric))
.map(char::from)
.take(10)
.collect();
let password: String = iter::repeat(())
.map(|()| rng.sample(Alphanumeric))
.map(char::from)
.take(5)
.collect();
let body = reqwest::blocking::get("https://api.mail.tm/domains")
.unwrap()
.text()
.unwrap();
let domains = json::parse(&body).expect("Failed to parse domain json.");
let domain = domains["hydra:member"][0]["domain"].to_string();
let email = format!("{}@{}", &address, &domain);
vec![email, password]
}
fn gen_email() -> Vec<String> {
let client = reqwest::blocking::Client::new();
let address_info = gen_address();
let address = &address_info[0];
let password = &address_info[1];
let mut data = HashMap::new();
data.insert("address", &address);
data.insert("password", &password);
let mut headers = HeaderMap::new();
headers.insert(ACCEPT, HeaderValue::from_static("application/ld+json"));
headers.insert(
CONTENT_TYPE,
HeaderValue::from_static("application/ld+json"),
);
let res = client
.post("https://api.mail.tm/accounts")
.headers(headers)
.json(&data)
.send()
.unwrap();
vec![
res.status().to_string(),
address.to_string(),
password.to_string(),
]
}
fn main() {
fn get_amount() -> i32 {
let mut amount = String::new();
loop {
println!("How many emails do you want?");
io::stdin()
.read_line(&mut amount)
.expect("Failed to read line.");
let _amount: i32 = match amount.trim().parse() {
Ok(num) => return num,
Err(_) => {
println!("Please enter a number.");
continue;
}
};
}
}
let amount = get_amount();
let handle = thread::spawn(move || {
for _gen in 0..amount {
let handle = thread::spawn(|| {
println!("Getting email...");
let maildata = gen_email();
println!(
"Status: {}, Address: {}, Password: {}",
maildata[0], maildata[1], maildata[2]);
});
}
});
handle.join().unwrap();
}
Rust Playground example
I see a number of sub-threads being spawned from an outer thread. You need to keep those handles and join them: unless you join the sub-threads, the outer thread will exit early. I set up a Rust Playground to demonstrate ^^.
In the playground example, first run the code as-is and note the output; the function it's running is not_joining_subthreads(). Note that it terminates rather abruptly. Then modify the code to call joining_subthreads() instead. You should then see the sub-threads printing their stdout messages.
let handle = thread::spawn(move || {
let mut handles = vec![];
for _gen in 0..amount {
let handle = thread::spawn(|| {
println!("Getting email...");
let maildata = gen_email();
println!(
"Status: {}, Address: {}, Password: {}",
maildata[0], maildata[1], maildata[2]);
});
handles.push(handle);
}
handles.into_iter().for_each(|h| h.join().unwrap());
});
handle.join().unwrap();
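Incidentally, the outer wrapper thread adds nothing here; you can push the handles and join them directly from main. A stripped-down sketch of the join pattern, with the network call replaced by a hypothetical stub:

```rust
use std::thread;

// Hypothetical stub standing in for gen_email(), which does network I/O.
fn gen_email_stub(n: i32) -> String {
    format!("address-{}@example.test", n)
}

fn main() {
    let amount = 3;
    let mut handles = Vec::new();
    for i in 0..amount {
        handles.push(thread::spawn(move || {
            println!("Getting email...");
            gen_email_stub(i)
        }));
    }
    // join() returns each thread's result and keeps main alive until all finish.
    for handle in handles {
        let mail = handle.join().unwrap();
        println!("Got: {}", mail);
    }
}
```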
I want to use hyper with bb8 and tokio-postgres. In every request I want to acquire a new connection from the pool. Can anybody provide an example for this scenario?
Currently I do it like this:
fn main() {
let addr = "127.0.0.1:3000".parse().unwrap();
let pg_mgr =
PostgresConnectionManager::new("postgresql://auth:auth@localhost:5433/auth", NoTls);
rt::run(future::lazy(move || {
Pool::builder()
.build(pg_mgr)
.map_err(|e| eprintln!("Database error: {}", e))
.and_then(move |pool| {
let service = || service_fn(|req| router(req, pool.clone()));
let server = Server::bind(&addr)
.serve(service)
.map_err(|e| eprintln!("Server error: {}", e));
println!("Listening on http://{}", addr);
server
})
}))
}
fn router(
_req: Request<Body>,
_pool: Pool<PostgresConnectionManager<NoTls>>,
) -> Result<Response<Body>, hyper::Error> {
// do some stuff with pool
}
But it won't compile:
error[E0597]: `pool` does not live long enough
--> src/main.rs:22:63
|
22 | let service = || service_fn(|req| router(req, pool.clone()));
| -- -----------------------------^^^^----------
| | | |
| | | borrowed value does not live long enough
| | returning this value requires that `pool` is borrowed for `'static`
| value captured here
...
30 | })
| - `pool` dropped here while still borrowed
What am I doing wrong? How to make my case work correctly?
The solution is pretty simple, but to understand the problem I want to provide some additional info...
When you call and_then on a future, it passes the resolved value to the closure you supply, which gives that closure ownership of the data.
The method serve on hyper's builder (returned by Server::bind) expects the closure it is given to have a 'static lifetime.
Now to address the problem:
Good: passing the closure into serve by value moves it, transferring ownership.
Good: service_fn is defined outside of the and_then closure, so that function lives long enough.
Bad: the closure borrows the local variable pool in order to pass it to service_fn.
To resolve the problem, just move the local data into your closure like so:
let service = move || service_fn(|req| router(req, pool));
Solution found here
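The principle generalizes beyond hyper: a factory closure that may be called repeatedly cannot give away a value it captured; it has to clone it on every call so it keeps its own copy. A minimal sketch with a String standing in for the pool (names are illustrative, not hyper's API):

```rust
// Returns a factory that can be called any number of times; each call
// hands out an independent clone of the captured value.
fn make_service_factory(pool: String) -> impl Fn() -> String {
    move || pool.clone()
}

fn main() {
    let factory = make_service_factory(String::from("connection-pool"));
    let first = factory();
    let second = factory(); // still callable: the factory kept its own copy
    println!("{} / {}", first, second);
}
```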
The simplest solution looks like:
fn main() {
let addr = "127.0.0.1:3000".parse().unwrap();
let pg_mgr =
PostgresConnectionManager::new("postgresql://auth:auth@localhost:5433/auth", NoTls);
rt::run(future::lazy(move || {
Pool::builder()
.build(pg_mgr)
.map_err(|_| eprintln!("kek"))
.and_then(move |pool| {
let service = move || {
let pool = pool.clone();
service_fn(move |req| router(req, &pool))
};
let server = Server::bind(&addr)
.serve(service)
.map_err(|e| eprintln!("Server error: {}", e));
println!("Listening on http://{}", addr);
server
})
}))
}
fn router(
_req: Request<Body>,
_pool: &Pool<PostgresConnectionManager<NoTls>>,
) -> impl Future<Item = Response<Body>, Error = hyper::Error> {
// some stuff
}
It is also possible to construct service outside of rt::run with Arc and Mutex:
fn main() {
let addr = "127.0.0.1:3000".parse().unwrap();
let pg_mgr =
PostgresConnectionManager::new("postgresql://auth:auth@localhost:5433/auth", NoTls);
let pool: Arc<Mutex<Option<Pool<PostgresConnectionManager<NoTls>>>>> =
Arc::new(Mutex::new(None));
let pool2 = pool.clone();
let service = move || {
let pool = pool.clone();
service_fn(move |req| {
let locked = pool.lock().unwrap();
let pool = locked
.as_ref()
.expect("bb8 should be initialized before hyper");
router(req, pool)
})
};
rt::run(future::lazy(move || {
Pool::builder()
.build(pg_mgr)
.map_err(|_| eprintln!("kek"))
.and_then(move |pool| {
*pool2.lock().unwrap() = Some(pool);
let server = Server::bind(&addr)
.serve(service)
.map_err(|e| eprintln!("Server error: {}", e));
println!("Listening on http://{}", addr);
server
})
}))
}
fn router(
_req: Request<Body>,
_pool: &Pool<PostgresConnectionManager<NoTls>>,
) -> impl Future<Item = Response<Body>, Error = hyper::Error> {
// some stuff
}
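The Arc<Mutex<Option<..>>> trick in the second variant is a general pattern for a value that is created empty and initialized later, while consumers are wired up in advance. A minimal sketch with a String in place of the pool:

```rust
use std::sync::{Arc, Mutex};

// Reads the slot, panicking if it has not been initialized yet; mirrors the
// "bb8 should be initialized before hyper" expect in the code above.
fn read_slot(slot: &Arc<Mutex<Option<String>>>) -> String {
    let locked = slot.lock().unwrap();
    locked
        .as_ref()
        .expect("should be initialized before use")
        .clone()
}

fn main() {
    let slot: Arc<Mutex<Option<String>>> = Arc::new(Mutex::new(None));
    let consumer_handle = Arc::clone(&slot); // handed to the consumer up front

    // ...later, initialization fills the slot:
    *slot.lock().unwrap() = Some(String::from("pool"));

    println!("consumer sees: {}", read_slot(&consumer_handle));
}
```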
I'm building a multiplexer in Rust. It's one of my first applications and a great learning experience!
However, I'm facing a problem and I cannot figure out how to solve it in Rust:
Whenever a new channel is added to the multiplex, I have to listen for data on this channel.
The new channel is allocated on the stack when it is requested by the open() function.
However, this channel must not be allocated on the stack but on the heap somehow, because it should stay alive and should not be freed in the next iteration of my receiving loop.
Right now my code looks like this (v0.10-pre):
extern crate collections;
extern crate sync;
use std::comm::{Chan, Port, Select};
use std::mem::size_of_val;
use std::io::ChanWriter;
use std::io::{ChanWriter, PortReader};
use collections::hashmap::HashMap;
use sync::{rendezvous, SyncPort, SyncChan};
use std::task::try;
use std::rc::Rc;
struct MultiplexStream {
internal_port: Port<(u32, Option<(Port<~[u8]>, Chan<~[u8]>)>)>,
internal_chan: Chan<u32>
}
impl MultiplexStream {
fn new(downstream: (Port<~[u8]>, Chan<~[u8]>)) -> ~MultiplexStream {
let (downstream_port, downstream_chan) = downstream;
let (p1, c1): (Port<u32>, Chan<u32>) = Chan::new();
let (p2, c2):
(Port<(u32, Option<(Port<~[u8]>, Chan<~[u8]>)>)>,
Chan<(u32, Option<(Port<~[u8]>, Chan<~[u8]>)>)>) = Chan::new();
let mux = ~MultiplexStream {
internal_port: p2,
internal_chan: c1
};
spawn(proc() {
let mut pool = Select::new();
let mut by_port_num = HashMap::new();
let mut by_handle_id = HashMap::new();
let mut handle_id2port_num = HashMap::new();
let mut internal_handle = pool.handle(&p1);
let mut downstream_handle = pool.handle(&downstream_port);
unsafe {
internal_handle.add();
downstream_handle.add();
}
loop {
let handle_id = pool.wait();
if handle_id == internal_handle.id() {
// setup new port
let port_num: u32 = p1.recv();
if by_port_num.contains_key(&port_num) {
c2.send((port_num, None))
}
else {
let (p1_,c1_): (Port<~[u8]>, Chan<~[u8]>) = Chan::new();
let (p2_,c2_): (Port<~[u8]>, Chan<~[u8]>) = Chan::new();
/********************************/
let mut h = pool.handle(&p1_); // <--
/********************************/
/* the error is HERE ^^^ */
/********************************/
unsafe { h.add() };
by_port_num.insert(port_num, c2_);
handle_id2port_num.insert(h.id(), port_num);
by_handle_id.insert(h.id(), h);
c2.send((port_num, Some((p2_,c1_))));
}
}
else if handle_id == downstream_handle.id() {
// demultiplex
let res = try(proc() {
let mut reader = PortReader::new(downstream_port);
let port_num = reader.read_le_u32().unwrap();
let data = reader.read_to_end().unwrap();
return (port_num, data);
});
if res.is_ok() {
let (port_num, data) = res.unwrap();
by_port_num.get(&port_num).send(data);
}
else {
// TODO: handle error
}
}
else {
// multiplex
let h = by_handle_id.get_mut(&handle_id);
let port_num = handle_id2port_num.get(&handle_id);
let port_num = *port_num;
let data = h.recv();
try(proc() {
let mut writer = ChanWriter::new(downstream_chan);
writer.write_le_u32(port_num);
writer.write(data);
writer.flush();
});
// todo check if chan was closed
}
}
});
return mux;
}
fn open(self, port_num: u32) -> Result<(Port<~[u8]>, Chan<~[u8]>), ()> {
let res = try(proc() {
self.internal_chan.send(port_num);
let (n, res) = self.internal_port.recv();
assert!(n == port_num);
return res;
});
if res.is_err() {
return Err(());
}
let res = res.unwrap();
if res.is_none() {
return Err(());
}
let (p,c) = res.unwrap();
return Ok((p,c));
}
}
And the compiler raises this error:
multiplex_stream.rs:81:31: 81:35 error: `p1_` does not live long enough
multiplex_stream.rs:81 let mut h = pool.handle(&p1_);
^~~~
multiplex_stream.rs:48:16: 122:4 note: reference must be valid for the block at 48:15...
multiplex_stream.rs:48 spawn(proc() {
multiplex_stream.rs:49 let mut pool = Select::new();
multiplex_stream.rs:50 let mut by_port_num = HashMap::new();
multiplex_stream.rs:51 let mut by_handle_id = HashMap::new();
multiplex_stream.rs:52 let mut handle_id2port_num = HashMap::new();
multiplex_stream.rs:53
...
multiplex_stream.rs:77:11: 87:7 note: ...but borrowed value is only valid for the block at 77:10
multiplex_stream.rs:77 else {
multiplex_stream.rs:78 let (p1_,c1_): (Port<~[u8]>, Chan<~[u8]>) = Chan::new();
multiplex_stream.rs:79 let (p2_,c2_): (Port<~[u8]>, Chan<~[u8]>) = Chan::new();
multiplex_stream.rs:80
multiplex_stream.rs:81 let mut h = pool.handle(&p1_);
multiplex_stream.rs:82 unsafe { h.add() };
Does anyone have an idea how to solve this issue?
The problem is that the new channel that you create does not live long enough: its scope is only that of the else block. You need to ensure that it lives longer; its scope must be at least that of pool.
I haven't made the effort to understand precisely what your code is doing, but what I would expect to be the simplest way to ensure the lifetime of the ports is long enough is to place them into a vector at the same scope as pool, e.g. let mut ports = ~[];, inserting with ports.push(p1_); and then taking the reference as &ports[ports.len() - 1]. Sorry, that won't cut it: you can't add new items to a vector while references to its elements are active. You'll need to restructure things somewhat if you want that approach to work.
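For readers on modern Rust: the Port/Chan API is long gone, but the underlying lesson stands. The usual way out is to store the channel endpoints by value in a collection (as the code's HashMaps already do for the Chan halves) rather than holding references into a growing vector. A rough sketch with std::sync::mpsc, using hypothetical names:

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Receiver, Sender};

// Inserting an owned Sender into the map never invalidates anything,
// because no reference into the map is held across insertions.
fn open_port(
    by_port_num: &mut HashMap<u32, Sender<Vec<u8>>>,
    port_num: u32,
) -> Receiver<Vec<u8>> {
    let (tx, rx) = channel();
    by_port_num.insert(port_num, tx);
    rx
}

fn main() {
    let mut by_port_num = HashMap::new();
    let rx1 = open_port(&mut by_port_num, 1);
    let _rx2 = open_port(&mut by_port_num, 2); // the map can keep growing
    by_port_num[&1].send(vec![42]).unwrap();
    println!("port 1 got {:?}", rx1.recv().unwrap());
}
```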