I understand this simple example (taken from the Rust book):
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        let val = String::from("hi");
        tx.send(val).unwrap();
    });

    let received = rx.recv().unwrap();
    println!("Got: {}", received);
}
The channel's sender (tx) is moved into the newly spawned thread, and the new thread can then use it to send a message through the channel. The original thread keeps the receiver (rx) and can then receive the message from the channel.
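I also understand that, because the channel is multi-producer, the sender can be cloned so that several threads can send into the same channel. A minimal sketch of that (the loop count and message text are just illustrative):

use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    for i in 0..3 {
        // One clone of the sender per producer thread.
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(format!("hi from thread {i}")).unwrap();
        });
    }
    // Drop the original sender so the receiver loop ends once all clones are done.
    drop(tx);
    for received in rx {
        println!("Got: {}", received);
    }
}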
Edited after feedback from @Stargateur:
But what about a more complicated example with many threads? How does each thread get hold of a sender for every other thread, so it can message any of them? Are all of the channels meant to be created for all of the threads during program startup, in main()? So if, for example, there are 50 threads in the program, are 50 channels meant to be created in main(), with each of the 50 threads holding a clone of all 50 senders?
For example, if there were only 4 threads, is it meant to be done like this (this sketch doesn't compile as-is, because all four senders are moved into the first spawned closure)?
use std::sync::mpsc;
use std::thread;

fn main() {
    let (sender1, receiver1) = mpsc::channel();
    let (sender2, receiver2) = mpsc::channel();
    let (sender3, receiver3) = mpsc::channel();
    let (sender4, receiver4) = mpsc::channel();

    let builder1 = thread::Builder::new().name("thread1".into());
    builder1.spawn(move || {
        loop {
            // Am I meant to make a copy of every sender here so I can message whichever thread I need to?
            let sender1 = sender1.clone();
            let sender2 = sender2.clone();
            let sender3 = sender3.clone();
            let sender4 = sender4.clone();
            let val = String::from("hi");
            sender2.send(val).unwrap();
            if let Ok(received) = receiver1.try_recv() {
                println!("Got: {}", received);
            }
        }
    });

    let builder2 = thread::Builder::new().name("thread2".into());
    builder2.spawn(move || {
        loop {
            // Am I meant to make a copy of every sender here so I can message whichever thread I need to?
            let sender1 = sender1.clone();
            let sender2 = sender2.clone();
            let sender3 = sender3.clone();
            let sender4 = sender4.clone();
            let val = String::from("hi");
            sender1.send(val).unwrap();
            if let Ok(received) = receiver2.try_recv() {
                println!("Got: {}", received);
            }
        }
    });

    let builder3 = thread::Builder::new().name("thread3".into());
    builder3.spawn(move || {
        loop {
            // Am I meant to make a copy of every sender here so I can message whichever thread I need to?
            let sender1 = sender1.clone();
            let sender2 = sender2.clone();
            let sender3 = sender3.clone();
            let sender4 = sender4.clone();
            let val = String::from("hi");
            sender2.send(val).unwrap();
            if let Ok(received) = receiver3.try_recv() {
                println!("Got: {}", received);
            }
        }
    });

    let builder4 = thread::Builder::new().name("thread4".into());
    builder4.spawn(move || {
        loop {
            // Am I meant to make a copy of every sender here so I can message whichever thread I need to?
            let sender1 = sender1.clone();
            let sender2 = sender2.clone();
            let sender3 = sender3.clone();
            let sender4 = sender4.clone();
            let val = String::from("hi");
            sender2.send(val).unwrap();
            if let Ok(received) = receiver4.try_recv() {
                println!("Got: {}", received);
            }
        }
    });

    loop {}
}
So far, to achieve something like this, I've been putting the second thread's sender into a lazy_static. The first thread can then get the sender out of the lazy_static and send messages to the second thread.
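For comparison, here is the shape I imagine a channels-only version taking: each sender cloned once, before the spawn, with the clones moved into the closure (a compiling two-thread sketch; whether this is the intended pattern is exactly what I'm asking):

use std::sync::mpsc;
use std::thread;

fn main() {
    // One channel per thread, created up front in main().
    let (sender1, receiver1) = mpsc::channel::<String>();
    let (sender2, receiver2) = mpsc::channel::<String>();

    // Clone each sender once per thread, before spawning,
    // so every closure owns its own set of sender handles.
    let (to1, to2) = (sender1.clone(), sender2.clone());
    let thread1 = thread::spawn(move || {
        to2.send(String::from("hi from thread1")).unwrap();
        let _own_sender = to1; // thread1 keeps a handle to its own channel too
        if let Ok(received) = receiver1.recv() {
            println!("thread1 got: {}", received);
        }
    });

    let (to1, to2) = (sender1.clone(), sender2.clone());
    let thread2 = thread::spawn(move || {
        to1.send(String::from("hi from thread2")).unwrap();
        let _own_sender = to2;
        if let Ok(received) = receiver2.recv() {
            println!("thread2 got: {}", received);
        }
    });

    thread1.join().unwrap();
    thread2.join().unwrap();
}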
Related
I'm writing a command-line application using Tokio that controls its lifecycle by listening for keyboard interrupts (i.e. Ctrl+C); however, at the same time it must also monitor the other tasks that are spawned and potentially initiate an early shutdown if any of the tasks panic or otherwise encounter an error. To do this, I have wrapped tokio::select! in a while loop that terminates once the application has at least had a chance to shut down safely.
However, as soon as the select block polls the future returned by tokio::signal::ctrl_c, the main thread panics with the following message:
thread 'main' panicked at 'there is no signal driver running, must be called from the context of Tokio runtime'
...which is confusing, because this is all done inside a Runtime::block_on call. I haven't published this application (yet), but the problem can be reproduced with the following code:
use tokio::runtime::Builder;
use tokio::signal;
use tokio::sync::watch;
use tokio::task::JoinSet;

fn main() {
    let runtime = Builder::new_multi_thread().worker_threads(2).build().unwrap();
    runtime.block_on(async {
        let _rt_guard = runtime.enter();
        let (ping_tx, mut ping_rx) = watch::channel(0u32);
        let (pong_tx, mut pong_rx) = watch::channel(0u32);
        let mut tasks = JoinSet::new();

        let ping = tasks.spawn(async move {
            let mut val = 0u32;
            ping_tx.send(val).unwrap();
            while val < 10u32 {
                pong_rx.changed().await.unwrap();
                val = *pong_rx.borrow();
                ping_tx.send(val + 1).unwrap();
                println!("ping! {}", val + 1);
            }
        });

        let pong = tasks.spawn(async move {
            let mut val = 0u32;
            while val < 10u32 {
                ping_rx.changed().await.unwrap();
                val = *ping_rx.borrow();
                pong_tx.send(val + 1).unwrap();
                println!("pong! {}", val + 1);
            }
        });

        let mut interrupt = Box::pin(signal::ctrl_c());
        let mut interrupt_read = false;
        while !interrupt_read && !tasks.is_empty() {
            tokio::select! {
                biased;
                _ = &mut interrupt, if !interrupt_read => {
                    ping.abort();
                    pong.abort();
                    interrupt_read = true;
                },
                _ = tasks.join_next() => {}
            }
        }
    });
}
Rust Playground
This example is a bit contrived, but the important parts are:
I am intentionally using Runtime::block_on() instead of #[tokio::main] because I want to control the number of runtime threads at runtime.
(Curiously, though, this example works if rewritten to use #[tokio::main].)
I added let _rt_guard = runtime.enter() to ensure that the runtime context was set, but its presence or absence doesn't appear to make a difference.
I discovered the answer while I was finishing writing up the question, but since I couldn't find it by searching on the error message, I'll share it here.
As I noted, the example I gave works if run from within a function annotated with #[tokio::main]. Looking at the docs, the main macro expands to:
fn main() {
    tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap()
        .block_on(async {
            println!("Hello world");
        })
}
The example I gave did not call enable_all() on the builder (or, more specifically, enable_io()). Adding this call to the builder chain stops the panic from occurring. The runtime.enter() call I made can also be removed, since it wasn't really doing anything.
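In isolation, the fix is one extra call in the builder chain (sketched here against the builder from the original example):

let runtime = Builder::new_multi_thread()
    .worker_threads(2)
    .enable_io() // the I/O driver also powers signal handling, e.g. signal::ctrl_c()
    .build()
    .unwrap();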
Updated example (one of the tasks still panics due to a Receiver handle being dropped, but the ctrl_c await no longer panics, which is the key):
use tokio::runtime::Builder;
use tokio::signal;
use tokio::sync::watch;
use tokio::task::JoinSet;

fn main() {
    Builder::new_multi_thread()
        .worker_threads(2)
        .enable_io()
        .build()
        .unwrap()
        .block_on(async {
            let (ping_tx, mut ping_rx) = watch::channel(0u32);
            let (pong_tx, mut pong_rx) = watch::channel(0u32);
            let mut tasks = JoinSet::new();

            let ping = tasks.spawn(async move {
                let mut val = 0u32;
                ping_tx.send(val).unwrap();
                while val < 10u32 {
                    pong_rx.changed().await.unwrap();
                    val = *pong_rx.borrow();
                    ping_tx.send(val + 1).unwrap();
                    println!("ping! {}", val + 1);
                }
            });

            let pong = tasks.spawn(async move {
                let mut val = 0u32;
                while val < 10u32 {
                    ping_rx.changed().await.unwrap();
                    val = *ping_rx.borrow();
                    pong_tx.send(val + 1).unwrap();
                    println!("pong! {}", val + 1);
                }
            });

            let mut interrupt = Box::pin(signal::ctrl_c());
            let mut interrupt_read = false;
            while !interrupt_read && !tasks.is_empty() {
                tokio::select! {
                    biased;
                    _ = &mut interrupt, if !interrupt_read => {
                        ping.abort();
                        pong.abort();
                        interrupt_read = true;
                    },
                    _ = tasks.join_next() => {}
                }
            }
        });
}
Rust Playground
I'm making a simple chat using Tokio for learning. I've made a server with TcpListener that works fine when I use telnet connections. For example, if I open two terminals with telnet and send data, the server receives the connection and forwards the data to the address that is not the sender.
The problem I have is with the TcpStream client. For some reason (I think it is some kind of blocking problem) the stream does not send the data until I stop the app.
I've tried multiple configurations, but I think the simplest one that is close to working is the following.
use std::error::Error;
use std::io::stdin;
use std::sync::mpsc::channel;
use std::thread;

use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let mut stream = TcpStream::connect("127.0.0.1:8080").await?;
    println!("INFO: Socket Address: {:?}", stream.local_addr().unwrap());
    let (_read, mut writer) = stream.split();
    let (sender, receiver) = channel();

    // Dedicated OS thread for the blocking stdin reads.
    thread::spawn(move || loop {
        let mut line_buf = String::new();
        stdin().read_line(&mut line_buf).unwrap();
        let line = line_buf.trim_end().to_string();
        if line.len() > 0 {
            sender.send(line).unwrap();
        }
    });

    loop {
        let line = receiver.recv().unwrap();
        println!("DEBUG User Input {line}");
        writer.write_all(line.as_bytes()).await?;
    }
}
P.S.: this code doesn't have anything to receive the messages sent by the server. To manage that, I had something like:
// Fragment from the server side: `socket`, `addr`, `tx` and `rx` come
// from the surrounding accept loop (not shown).
tokio::spawn(async move {
    let (read, mut writer) = socket.split();
    let mut reader = BufReader::new(read);
    let mut line = String::new();
    loop {
        tokio::select! {
            result = reader.read_line(&mut line) => {
                if result.unwrap() == 0 {
                    break;
                }
                print!("DEBUG: Message Received: {line}");
                tx.send((line.clone(), addr)).unwrap();
                line.clear();
            }
            result = rx.recv() => {
                let (msg, other_addr) = result.unwrap();
                println!("INFO: Message Address: {addr}\n\n");
                if addr != other_addr {
                    writer.write_all(msg.as_bytes()).await.unwrap();
                }
            }
        }
    }
});
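For completeness, here is the client loop sketched with tokio::sync::mpsc instead of std::sync::mpsc, so the receive can be awaited rather than blocking a runtime thread (I'm assuming the blocking recv() is related to my problem, but I'm not sure):

use std::error::Error;
use std::io::stdin;
use std::thread;

use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream;
use tokio::sync::mpsc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let mut stream = TcpStream::connect("127.0.0.1:8080").await?;
    let (sender, mut receiver) = mpsc::unbounded_channel::<String>();

    // Keep the blocking stdin reads on a dedicated OS thread.
    thread::spawn(move || loop {
        let mut line_buf = String::new();
        stdin().read_line(&mut line_buf).unwrap();
        let line = line_buf.trim_end().to_string();
        if !line.is_empty() && sender.send(line).is_err() {
            break; // the receiver was dropped, stop reading
        }
    });

    // recv().await yields to the runtime instead of blocking the thread.
    while let Some(line) = receiver.recv().await {
        println!("DEBUG User Input {line}");
        stream.write_all(line.as_bytes()).await?;
    }
    Ok(())
}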
I have a Rust program that creates temporary email addresses using the mail.tm API, and I want to use threads to create the emails simultaneously to increase the speed. However, what I have tried only results in printing "Getting email.." a number of times and then exiting. I am unsure what to do about this. Any help or suggestions are appreciated.
use json;
use rand::distributions::Alphanumeric;
use rand::{thread_rng, Rng};
use reqwest;
use reqwest::header::{HeaderMap, HeaderValue, ACCEPT, CONTENT_TYPE};
use std::{collections::HashMap, io, iter, vec::Vec};
use std::thread;

fn gen_address() -> Vec<String> {
    let mut rng = thread_rng();
    let address: String = iter::repeat(())
        .map(|()| rng.sample(Alphanumeric))
        .map(char::from)
        .take(10)
        .collect();
    let password: String = iter::repeat(())
        .map(|()| rng.sample(Alphanumeric))
        .map(char::from)
        .take(5)
        .collect();
    let body = reqwest::blocking::get("https://api.mail.tm/domains")
        .unwrap()
        .text()
        .unwrap();
    let domains = json::parse(&body).expect("Failed to parse domain json.");
    let domain = domains["hydra:member"][0]["domain"].to_string();
    let email = format!("{}@{}", &address, &domain);
    vec![email, password]
}

fn gen_email() -> Vec<String> {
    let client = reqwest::blocking::Client::new();
    let address_info = gen_address();
    let address = &address_info[0];
    let password = &address_info[1];
    let mut data = HashMap::new();
    data.insert("address", &address);
    data.insert("password", &password);
    let mut headers = HeaderMap::new();
    headers.insert(ACCEPT, HeaderValue::from_static("application/ld+json"));
    headers.insert(
        CONTENT_TYPE,
        HeaderValue::from_static("application/ld+json"),
    );
    let res = client
        .post("https://api.mail.tm/accounts")
        .headers(headers)
        .json(&data)
        .send()
        .unwrap();
    vec![
        res.status().to_string(),
        address.to_string(),
        password.to_string(),
    ]
}
fn main() {
    fn get_amount() -> i32 {
        let mut amount = String::new();
        loop {
            println!("How many emails do you want?");
            io::stdin()
                .read_line(&mut amount)
                .expect("Failed to read line.");
            let _amount: i32 = match amount.trim().parse() {
                Ok(num) => return num,
                Err(_) => {
                    println!("Please enter a number.");
                    continue;
                }
            };
        }
    }

    let amount = get_amount();
    let handle = thread::spawn(move || {
        for _gen in 0..amount {
            let handle = thread::spawn(|| {
                println!("Getting email...");
                let maildata = gen_email();
                println!(
                    "Status: {}, Address: {}, Password: {}",
                    maildata[0], maildata[1], maildata[2]
                );
            });
        }
    });
    handle.join().unwrap();
}
Rust Playground example
I see a number of sub-threads being spawned from an outer thread. You want to keep those handles and join them: unless you join the sub-threads, the outer thread will exit early. I set up a Rust Playground to demonstrate.
In the playground example, first run the code as-is and note the output of the code - the function it's running is not_joining_subthreads(). Note that it terminates rather abruptly. Then modify the code to call joining_subthreads(). You should then see the subthreads printing out their stdout messages.
let handle = thread::spawn(move || {
    let mut handles = vec![];
    for _gen in 0..amount {
        let handle = thread::spawn(|| {
            println!("Getting email...");
            let maildata = gen_email();
            println!(
                "Status: {}, Address: {}, Password: {}",
                maildata[0], maildata[1], maildata[2]
            );
        });
        handles.push(handle);
    }
    handles.into_iter().for_each(|h| h.join().unwrap());
});
handle.join().unwrap();
I'm trying to upload a file to AWS in Rust; for that I'm using the rusoto_s3 S3 client. I managed to get the multipart upload code working when the parts are sent from a single thread; however, that is not what I want. I want to upload big files and send the parts from multiple threads, so after a little googling I came across rayon.
For info, multipart upload works as follows:
Initiate the multipart upload -> AWS will return an upload ID.
Use this ID when sending the different parts, passing the file chunk and a part number -> AWS will return an ETag.
Once all parts are sent, send a complete-upload request containing an array of the completed parts, each with its ETag and part number.
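For reference, this is how I understand those three steps mapping onto rusoto_s3 calls (a compressed, single-part, single-threaded sketch; upload_one_part and its parameters are illustrative names, not part of my real code):

use rusoto_s3::{
    CompleteMultipartUploadRequest, CompletedMultipartUpload, CompletedPart,
    CreateMultipartUploadRequest, S3Client, UploadPartRequest, S3,
};

async fn upload_one_part(
    s3: &S3Client,
    bucket: String,
    key: String,
    chunk: Vec<u8>,
) -> Result<(), Box<dyn std::error::Error>> {
    // Step 1: initiate the multipart upload -> upload ID.
    let create = CreateMultipartUploadRequest {
        bucket: bucket.clone(),
        key: key.clone(),
        ..Default::default()
    };
    let upload_id = s3
        .create_multipart_upload(create)
        .await?
        .upload_id
        .expect("no upload id returned");

    // Step 2: send a part with the ID and a part number -> ETag.
    let part = UploadPartRequest {
        bucket: bucket.clone(),
        key: key.clone(),
        upload_id: upload_id.clone(),
        part_number: 1,
        body: Some(chunk.into()),
        ..Default::default()
    };
    let e_tag = s3.upload_part(part).await?.e_tag;

    // Step 3: complete with the array of (ETag, part number) pairs.
    let complete = CompleteMultipartUploadRequest {
        bucket,
        key,
        upload_id,
        multipart_upload: Some(CompletedMultipartUpload {
            parts: Some(vec![CompletedPart {
                e_tag,
                part_number: Some(1),
            }]),
        }),
        ..Default::default()
    };
    s3.complete_multipart_upload(complete).await?;
    Ok(())
}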
I'm new to Rust, coming from a C++ and Java background. Here is my code:
#[tokio::test]
async fn if_multipart_then_upload_multiparts_dicom() {
    let now = Instant::now();
    dotenv().ok();
    let local_filename = "./files/test_big.DCM";
    let destination_filename = "24_time_test.dcm";
    let mut file = std::fs::File::open(local_filename).unwrap();
    const CHUNK_SIZE: usize = 7_000_000;
    let mut buffer = Vec::with_capacity(CHUNK_SIZE);
    let client = super::get_client().await;

    let create_multipart_request = CreateMultipartUploadRequest {
        bucket: client.bucket_name.to_owned(),
        key: destination_filename.to_owned(),
        ..Default::default()
    };

    // Start the multipart upload and note the upload_id generated
    let response = client
        .s3
        .create_multipart_upload(create_multipart_request)
        .await
        .expect("Couldn't create multipart upload");
    let upload_id = response.upload_id.unwrap();

    // Create upload parts
    let create_upload_part = |body: Vec<u8>, part_number: i64| -> UploadPartRequest {
        UploadPartRequest {
            body: Some(body.into()),
            bucket: client.bucket_name.to_owned(),
            key: destination_filename.to_owned(),
            upload_id: upload_id.to_owned(),
            part_number: part_number,
            ..Default::default()
        }
    };

    let completed_parts = Arc::new(Mutex::new(vec![]));

    rayon::scope(|scope| {
        let mut part_number = 1;
        loop {
            let maximum_bytes_to_read = CHUNK_SIZE - buffer.len();
            println!("maximum_bytes_to_read: {}", maximum_bytes_to_read);
            file.by_ref()
                .take(maximum_bytes_to_read as u64)
                .read_to_end(&mut buffer)
                .unwrap();
            println!("length: {}", buffer.len());
            println!("part_number: {}", part_number);
            if buffer.len() == 0 {
                // The file has ended.
                break;
            }
            let next_buffer = Vec::with_capacity(CHUNK_SIZE);
            let data_to_send = buffer;
            let completed_parts_cloned = completed_parts.clone();
            scope.spawn(move |_| {
                let part = create_upload_part(data_to_send.to_vec(), part_number);
                {
                    let part_number = part.part_number;
                    let client = executor::block_on(super::get_client());
                    let response = executor::block_on(client.s3.upload_part(part));
                    completed_parts_cloned.lock().unwrap().push(CompletedPart {
                        e_tag: response
                            .expect("Couldn't complete multipart upload")
                            .e_tag
                            .clone(),
                        part_number: Some(part_number),
                    });
                }
            });
            buffer = next_buffer;
            part_number = part_number + 1;
        }
    });

    let completed_upload = CompletedMultipartUpload {
        parts: Some(completed_parts.lock().unwrap().to_vec()),
    };

    let complete_req = CompleteMultipartUploadRequest {
        bucket: client.bucket_name.to_owned(),
        key: destination_filename.to_owned(),
        upload_id: upload_id.to_owned(),
        multipart_upload: Some(completed_upload),
        ..Default::default()
    };

    client
        .s3
        .complete_multipart_upload(complete_req)
        .await
        .expect("Couldn't complete multipart upload");

    println!(
        "time taken: {}, with chunk:: {}",
        now.elapsed().as_secs(),
        CHUNK_SIZE
    );
}
Here is a sample of the output and the error I'm getting:
maximum_bytes_to_read: 7000000
length: 7000000
part_number: 1
maximum_bytes_to_read: 7000000
length: 7000000
part_number: 2
maximum_bytes_to_read: 7000000
thread '<unnamed>' panicked at 'there is no reactor running, must be called from the context of a Tokio 1.x runtime', C:\Users\DNDT\.cargo\registry\src\github.com-1ecc6299db9ec823\tokio-1.2.0\src\runtime\blocking\pool.rs:85:33
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread '<unnamed>' panicked at 'there is no reactor running, must be called from the context of a Tokio 1.x runtime', C:\Users\DNDT\.cargo\registry\src\github.com-1ecc6299db9ec823\tokio-1.2.0\src\runtime\blocking\pool.rs:85:33
length: 7000000
I googled this error but I did not get a clear understanding of what it actually is:
there is no reactor running, must be called from the context of Tokio runtime
Here is what I found:
another question with the same error
and another question
This seems to be some compatibility issue: rusoto_s3 might be using a version of tokio that is not compatible with the version of tokio I have.
Here are some relevant dependencies:
tokio = { version = "1", features = ["full"] }
tokio-compat-02 = "0.1.2"
rusoto_s3 = "0.46.0"
rusoto_core = "0.46.0"
rusoto_credential = "0.46.0"
rayon = "1.5.0"
I think the main issue comes from wanting to run async code on a rayon thread. I tried changing my async code to blocking code using executor::block_on, and I also spent some time trying to make the compiler happy. I have multiple threads that all want to write to let completed_parts = Arc::new(Mutex::new(vec![]));, so I did some cloning there to make the compiler happy.
Also, in case the crates I use matter, here they are:
extern crate dotenv;
extern crate tokio;
use bytes::Bytes;
use dotenv::dotenv;
use futures::executor;
use futures::*;
use rusoto_core::credential::{EnvironmentProvider, ProvideAwsCredentials};
use rusoto_s3::util::{PreSignedRequest, PreSignedRequestOption};
use rusoto_s3::PutObjectRequest;
use rusoto_s3::StreamingBody;
use rusoto_s3::{
    CompleteMultipartUploadRequest, CompletedMultipartUpload, CompletedPart,
    CreateMultipartUploadRequest, UploadPartRequest, S3,
};
use std::io::Read;
use std::sync::{Arc, Mutex};
use std::time::Duration;
use std::time::Instant;
use tokio::fs;
New to Rust, so there are a lot of moving pieces to get this one right!
Thanks @Jmb for the discussion. I got rid of the threads and spawn a tokio task instead, as follows:
Create a vector to hold all the futures so we can wait for them:
let mut multiple_parts_futures = Vec::new();
Spawn the async task:
loop { // loop over the file chunks
    ...
    let send_part_task_future = tokio::task::spawn(async move {
        // Upload part
        ...
    });
    multiple_parts_futures.push(send_part_task_future);
}
And then, later, wait for all the futures:
let _results = futures::future::join_all(multiple_parts_futures).await;
Worth mentioning: the completed parts need to be sorted by part number:
let mut completed_parts_vector = completed_parts.lock().unwrap().to_vec();
completed_parts_vector.sort_by_key(|part| part.part_number);
The whole code is:
#[tokio::test]
async fn if_multipart_then_upload_multiparts_dicom() {
    let now = Instant::now();
    dotenv().ok();
    let local_filename = "./files/test_big.DCM";
    let destination_filename = generate_unique_name();
    let destination_filename_clone = destination_filename.clone();
    let mut file = std::fs::File::open(local_filename).unwrap();
    const CHUNK_SIZE: usize = 6_000_000;
    let mut buffer = Vec::with_capacity(CHUNK_SIZE);
    let client = super::get_client().await;

    let create_multipart_request = CreateMultipartUploadRequest {
        bucket: client.bucket_name.to_owned(),
        key: destination_filename.to_owned(),
        ..Default::default()
    };

    // Start the multipart upload and note the upload_id generated
    let response = client
        .s3
        .create_multipart_upload(create_multipart_request)
        .await
        .expect("Couldn't create multipart upload");
    let upload_id = response.upload_id.unwrap();
    let upload_id_clone = upload_id.clone();

    // Create upload parts
    let create_upload_part = move |body: Vec<u8>, part_number: i64| -> UploadPartRequest {
        UploadPartRequest {
            body: Some(body.into()),
            bucket: client.bucket_name.to_owned(),
            key: destination_filename_clone.to_owned(),
            upload_id: upload_id_clone.to_owned(),
            part_number: part_number,
            ..Default::default()
        }
    };

    let create_upload_part_arc = Arc::new(create_upload_part);
    let completed_parts = Arc::new(Mutex::new(vec![]));
    let mut part_number = 1;
    let mut multiple_parts_futures = Vec::new();

    loop {
        let maximum_bytes_to_read = CHUNK_SIZE - buffer.len();
        println!("maximum_bytes_to_read: {}", maximum_bytes_to_read);
        file.by_ref()
            .take(maximum_bytes_to_read as u64)
            .read_to_end(&mut buffer)
            .unwrap();
        println!("length: {}", buffer.len());
        println!("part_number: {}", part_number);
        if buffer.len() == 0 {
            // The file has ended.
            break;
        }
        let next_buffer = Vec::with_capacity(CHUNK_SIZE);
        let data_to_send = buffer;
        let completed_parts_cloned = completed_parts.clone();
        let create_upload_part_arc_cloned = create_upload_part_arc.clone();
        let send_part_task_future = tokio::task::spawn(async move {
            let part = create_upload_part_arc_cloned(data_to_send.to_vec(), part_number);
            {
                let part_number = part.part_number;
                let client = super::get_client().await;
                let response = client.s3.upload_part(part).await;
                completed_parts_cloned.lock().unwrap().push(CompletedPart {
                    e_tag: response
                        .expect("Couldn't complete multipart upload")
                        .e_tag
                        .clone(),
                    part_number: Some(part_number),
                });
            }
        });
        multiple_parts_futures.push(send_part_task_future);
        buffer = next_buffer;
        part_number = part_number + 1;
    }

    let client = super::get_client().await;
    println!("waiting for futures");
    let _results = futures::future::join_all(multiple_parts_futures).await;
    let mut completed_parts_vector = completed_parts.lock().unwrap().to_vec();
    completed_parts_vector.sort_by_key(|part| part.part_number);
    println!("futures done");

    let completed_upload = CompletedMultipartUpload {
        parts: Some(completed_parts_vector),
    };

    let complete_req = CompleteMultipartUploadRequest {
        bucket: client.bucket_name.to_owned(),
        key: destination_filename.to_owned(),
        upload_id: upload_id.to_owned(),
        multipart_upload: Some(completed_upload),
        ..Default::default()
    };

    client
        .s3
        .complete_multipart_upload(complete_req)
        .await
        .expect("Couldn't complete multipart upload");

    println!(
        "time taken: {}, with chunk:: {}",
        now.elapsed().as_secs(),
        CHUNK_SIZE
    );
}
I'm building a multiplexer in Rust. It's one of my first applications and a great learning experience!
However, I'm facing a problem and I cannot figure out how to solve it in Rust:
Whenever a new channel is added to the multiplex, I have to listen for data on this channel.
The new channel is allocated on the stack when it is requested by the open() function.
However, this channel must not be allocated on the stack but on the heap somehow, because it should stay alive and should not be freed in the next iteration of my receiving loop.
Right now my code looks like this (v0.10-pre):
extern crate collections;
extern crate sync;

use std::comm::{Chan, Port, Select};
use std::mem::size_of_val;
use std::io::{ChanWriter, PortReader};
use collections::hashmap::HashMap;
use sync::{rendezvous, SyncPort, SyncChan};
use std::task::try;
use std::rc::Rc;

struct MultiplexStream {
    internal_port: Port<(u32, Option<(Port<~[u8]>, Chan<~[u8]>)>)>,
    internal_chan: Chan<u32>
}

impl MultiplexStream {
    fn new(downstream: (Port<~[u8]>, Chan<~[u8]>)) -> ~MultiplexStream {
        let (downstream_port, downstream_chan) = downstream;
        let (p1, c1): (Port<u32>, Chan<u32>) = Chan::new();
        let (p2, c2):
            (Port<(u32, Option<(Port<~[u8]>, Chan<~[u8]>)>)>,
             Chan<(u32, Option<(Port<~[u8]>, Chan<~[u8]>)>)>) = Chan::new();

        let mux = ~MultiplexStream {
            internal_port: p2,
            internal_chan: c1
        };

        spawn(proc() {
            let mut pool = Select::new();
            let mut by_port_num = HashMap::new();
            let mut by_handle_id = HashMap::new();
            let mut handle_id2port_num = HashMap::new();

            let mut internal_handle = pool.handle(&p1);
            let mut downstream_handle = pool.handle(&downstream_port);
            unsafe {
                internal_handle.add();
                downstream_handle.add();
            }

            loop {
                let handle_id = pool.wait();

                if handle_id == internal_handle.id() {
                    // setup new port
                    let port_num: u32 = p1.recv();
                    if by_port_num.contains_key(&port_num) {
                        c2.send((port_num, None))
                    }
                    else {
                        let (p1_, c1_): (Port<~[u8]>, Chan<~[u8]>) = Chan::new();
                        let (p2_, c2_): (Port<~[u8]>, Chan<~[u8]>) = Chan::new();

                        /********************************/
                        let mut h = pool.handle(&p1_); // <--
                        /********************************/
                        /* the error is HERE ^^^        */
                        /********************************/

                        unsafe { h.add() };
                        by_port_num.insert(port_num, c2_);
                        handle_id2port_num.insert(h.id(), port_num);
                        by_handle_id.insert(h.id(), h);
                        c2.send((port_num, Some((p2_, c1_))));
                    }
                }
                else if handle_id == downstream_handle.id() {
                    // demultiplex
                    let res = try(proc() {
                        let mut reader = PortReader::new(downstream_port);
                        let port_num = reader.read_le_u32().unwrap();
                        let data = reader.read_to_end().unwrap();
                        return (port_num, data);
                    });
                    if res.is_ok() {
                        let (port_num, data) = res.unwrap();
                        by_port_num.get(&port_num).send(data);
                    }
                    else {
                        // TODO: handle error
                    }
                }
                else {
                    // multiplex
                    let h = by_handle_id.get_mut(&handle_id);
                    let port_num = handle_id2port_num.get(&handle_id);
                    let port_num = *port_num;
                    let data = h.recv();
                    try(proc() {
                        let mut writer = ChanWriter::new(downstream_chan);
                        writer.write_le_u32(port_num);
                        writer.write(data);
                        writer.flush();
                    });
                    // todo check if chan was closed
                }
            }
        });

        return mux;
    }

    fn open(self, port_num: u32) -> Result<(Port<~[u8]>, Chan<~[u8]>), ()> {
        let res = try(proc() {
            self.internal_chan.send(port_num);
            let (n, res) = self.internal_port.recv();
            assert!(n == port_num);
            return res;
        });
        if res.is_err() {
            return Err(());
        }
        let res = res.unwrap();
        if res.is_none() {
            return Err(());
        }
        let (p, c) = res.unwrap();
        return Ok((p, c));
    }
}
And the compiler raises this error:
multiplex_stream.rs:81:31: 81:35 error: `p1_` does not live long enough
multiplex_stream.rs:81 let mut h = pool.handle(&p1_);
^~~~
multiplex_stream.rs:48:16: 122:4 note: reference must be valid for the block at 48:15...
multiplex_stream.rs:48 spawn(proc() {
multiplex_stream.rs:49 let mut pool = Select::new();
multiplex_stream.rs:50 let mut by_port_num = HashMap::new();
multiplex_stream.rs:51 let mut by_handle_id = HashMap::new();
multiplex_stream.rs:52 let mut handle_id2port_num = HashMap::new();
multiplex_stream.rs:53
...
multiplex_stream.rs:77:11: 87:7 note: ...but borrowed value is only valid for the block at 77:10
multiplex_stream.rs:77 else {
multiplex_stream.rs:78 let (p1_,c1_): (Port<~[u8]>, Chan<~[u8]>) = Chan::new();
multiplex_stream.rs:79 let (p2_,c2_): (Port<~[u8]>, Chan<~[u8]>) = Chan::new();
multiplex_stream.rs:80
multiplex_stream.rs:81 let mut h = pool.handle(&p1_);
multiplex_stream.rs:82 unsafe { h.add() };
Does anyone have an idea how to solve this issue?
The problem is that the new channel that you create does not live long enough—its scope is that of the else block only. You need to ensure that it will live longer—its scope must be at least that of pool.
I haven't made the effort to understand precisely what your code is doing, but what I would expect to be the simplest way to ensure the lifetime of the ports is long enough is to place them into a vector at the same scope as pool, e.g. let mut ports = ~[];, inserting with ports.push(p1_); and then taking the reference as &ports[ports.len() - 1]. Sorry, that won't cut it: you can't add new items to a vector while references to its elements are active. You'll need to restructure things somewhat if you want that approach to work.