How to cancel tokio spawn threads from the method calling them - rust

The test code below considers a situation in which there are three different threads.
Each thread has to do certain asynchronous tasks that may take some time to finish.
This is "simulated" in the code below with a sleep.
On top of that, two of the threads collect information that they have to send to the third one for further processing. This is done using mpsc channels.
Because the threads depend on information obtained from outside the Rust application, which is out of our control, they may get interrupted. This is emulated by generating a random number, and the loop in each thread breaks when that happens.
What I'm trying to achieve is a system in which, whenever one of the threads has an error (simulated with the random number = 9), every other thread is cancelled too.
use std::sync::mpsc::channel;
use std::sync::mpsc::{Sender, Receiver, TryRecvError};
use std::thread::sleep;
use tokio::time::Duration;
use rand::distributions::{Uniform, Distribution};
#[tokio::main]
async fn main() {
execution_cycle().await;
}
async fn execution_cycle() {
let (tx_first, rx_first) = channel::<Message>();
let (tx_second, rx_second) = channel::<Message>();
let handle_sender_first = tokio::spawn(sender_thread(tx_first));
let handle_sender_second = tokio::spawn(sender_thread(tx_second));
let handle_receiver = tokio::spawn(receiver_thread(rx_first, rx_second));
let mut thread_rng = rand::thread_rng();
let rng_generator = Uniform::from(1..10);
let mut cancel_from_cycle = rng_generator.sample(&mut thread_rng);
while !&handle_sender_first.is_finished() && !&handle_sender_second.is_finished() && !&handle_receiver.is_finished() {
cancel_from_cycle = rng_generator.sample(&mut thread_rng);
if (cancel_from_cycle == 9) {
println!("Aborting from the execution cycle.");
handle_receiver.abort();
handle_sender_first.abort();
handle_sender_second.abort();
}
}
if handle_sender_first.is_finished() {
println!("handle_sender_first finished.");
} else {
println!("handle_sender_first ongoing.");
}
if handle_sender_second.is_finished() {
println!("handle_sender_second finished.");
} else {
println!("handle_sender_second ongoing.");
}
if handle_receiver.is_finished() {
println!("handle_receiver finished.");
} else {
println!("handle_receiver ongoing.");
}
}
async fn sender_thread(tx: Sender<Message>) {
let mut thread_rng = rand::thread_rng();
let rng_generator = Uniform::from(1..20);
let mut random_id = rng_generator.sample(&mut thread_rng);
while random_id != 9 {
let msg = Message {
id: random_id,
text: "hello".to_owned()
};
println!("Sending message {}.", msg.id);
random_id = rng_generator.sample(&mut thread_rng);
println!("Generated id {}.", random_id);
let result = tx.send(msg);
match result {
Ok(res) => {},
Err(error) => {
println!("Sending error {:?}", error);
random_id = 9;
}
}
sleep(Duration::from_millis(2000));
}
}
async fn receiver_thread(rx_first: Receiver<Message>, rx_second: Receiver<Message>) {
let mut channel_open_first = true;
let mut channel_open_second = true;
let mut thread_rng = rand::thread_rng();
let rng_generator = Uniform::from(1..15);
let mut random_event = rng_generator.sample(&mut thread_rng);
while channel_open_first && channel_open_second && random_event != 9 {
channel_open_first = receiver_inner(&rx_first);
channel_open_second = receiver_inner(&rx_second);
random_event = rng_generator.sample(&mut thread_rng);
println!("Generated event {}.", random_event);
sleep(Duration::from_millis(800));
}
}
fn receiver_inner(rx: &Receiver<Message>) -> bool {
let value = rx.try_recv();
match value {
Ok(msg) => {
println!("Message {} received: {}", msg.id, msg.text);
},
Err(error) => {
if error != TryRecvError::Empty {
println!("{}", error);
return false;
} else { /* Channel is empty.*/ }
}
}
return true;
}
struct Message {
id: usize,
text: String,
}
The working example here does exactly that; however, it does it only from inside the threads. I would like to add a "kill switch" in the execution_cycle() method that cancels all three threads when a certain event takes place (the random number cancel_from_cycle == 9), and to do that in the simplest way possible... I tried drop(handler_sender), and also panic!() from execution_cycle(), but the spawned threads keep running, preventing the application from finishing. I also tried handle_receiver.abort() without success.
How can I achieve the desired result?
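For context, here is a minimal, self-contained sketch (not taken from the question's code) of the abort()-based kill switch. The key assumption is that the spawned tasks reach .await points, e.g. via tokio::time::sleep, so the abort can actually take effect; std::thread::sleep blocks the worker thread and gives tokio no chance to cancel the task.
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    // Hypothetical workers standing in for sender_thread/receiver_thread.
    let worker = |name: &'static str| async move {
        loop {
            println!("{name} doing work...");
            sleep(Duration::from_millis(500)).await; // cancellation point
        }
    };
    let first: tokio::task::JoinHandle<()> = tokio::spawn(worker("first"));
    let second = tokio::spawn(worker("second"));

    // The supervising code decides to cancel everything.
    sleep(Duration::from_millis(1600)).await;
    println!("Aborting from the execution cycle.");
    first.abort();
    second.abort();

    // Awaiting an aborted handle reports the cancellation as a JoinError.
    for handle in [first, second] {
        match handle.await {
            Err(e) if e.is_cancelled() => println!("task was cancelled."),
            other => println!("task ended: {other:?}"),
        }
    }
}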

Related

How can I quit/pause/unpause a timer thread

I am trying to build a pausable pomodoro timer, but can't figure out how my threads should be set up. Specifically, I am having a hard time finding a way to pause/quit the timer thread. Here's what I am currently trying to do:
I have an input thread collecting crossterm input events and sending them via an mpsc channel to my main thread
My main thread runs the timer (via thread::sleep) and receives the input messages from the other thread
However, because sleep is a blocking operation, the main thread is often unable to react to my key events.
Any advice on how I could go about this?
Here is a minimal reproducible example:
pub fn run() {
enable_raw_mode().expect("Can run in raw mode");
let (input_worker_tx, input_worker_rx): (Sender<CliEvent<Event>>, Receiver<CliEvent<Event>>) =
mpsc::channel();
poll_input_thread(input_worker_tx);
main_loop(input_worker_rx);
}
fn main_loop(input_worker_rx: Receiver<CliEvent<Event>>) {
let stdout = std::io::stdout();
let backend = CrosstermBackend::new(stdout);
let mut terminal = Terminal::new(backend).expect("Terminal could be created");
terminal.clear().expect("Terminal could be cleared");
let mut remaining_time = 20;
while remaining_time > 0 {
if let InputEvent::Quit = handle_input(&input_worker_rx) {
break;
};
remaining_time -= 1;
let time = util::seconds_to_time(remaining_time);
view::render(&mut terminal, &time);
sleep(time::Duration::new(1, 0));
}
quit(&mut terminal, Some("Cya!"));
}
pub fn poll_input_thread(input_worker_tx: Sender<CliEvent<Event>>) {
let tick_rate = Duration::from_millis(200);
thread::spawn(move || {
let mut last_tick = Instant::now();
loop {
let timeout = tick_rate
.checked_sub(last_tick.elapsed())
.unwrap_or_else(|| Duration::from_secs(0));
if event::poll(timeout).expect("poll works") {
let crossterm_event = event::read().expect("can read events");
input_worker_tx
.send(CliEvent::Input(crossterm_event))
.expect("can send events");
}
if last_tick.elapsed() >= tick_rate && input_worker_tx.send(CliEvent::Tick).is_ok() {
last_tick = Instant::now();
}
}
});
}
pub fn handle_input(input_worker_rx: &Receiver<CliEvent<Event>>) -> InputEvent {
match input_worker_rx.try_recv() {
Ok(CliEvent::Input(Event::Key(key_event))) => match key_event {
KeyEvent {
code: KeyCode::Char('q'),
..
}
| KeyEvent {
code: KeyCode::Char('c'),
modifiers: KeyModifiers::CONTROL,
..
} => {
return InputEvent::Quit;
}
_ => {}
},
Ok(CliEvent::Tick) => {}
Err(TryRecvError::Empty) => {}
Err(TryRecvError::Disconnected) => eject("Input handler disconnected"),
_ => {}
}
InputEvent::Continue
}
Don't use sleep and don't use try_recv. Instead use recv_timeout with a timeout corresponding to your sleep duration:
pub fn run() {
enable_raw_mode().expect("Can run in raw mode");
let (input_worker_tx, input_worker_rx): (Sender<CliEvent<Event>>, Receiver<CliEvent<Event>>) =
mpsc::channel();
poll_input_thread(input_worker_tx);
main_loop(input_worker_rx);
}
fn main_loop(input_worker_rx: Receiver<CliEvent<Event>>) {
let stdout = std::io::stdout();
let backend = CrosstermBackend::new(stdout);
let mut terminal = Terminal::new(backend).expect("Terminal could be created");
terminal.clear().expect("Terminal could be cleared");
let mut target_time = Instant::now() + Duration::from_secs(20);
let mut paused = false;
let mut remaining = Duration::from_secs(0);
while paused || (target_time > Instant::now()) {
match handle_input(&input_worker_rx, Duration::from_secs(1)) {
InputEvent::Quit => break,
InputEvent::Pause => {
if paused {
target_time = Instant::now() + remaining;
} else {
remaining = target_time - Instant::now();
}
paused = !paused;
},
_ => {},
}
let time = (if paused { remaining } else { target_time - Instant::now() })
.as_secs();
view::render(&mut terminal, &time);
}
quit(&mut terminal, Some("Cya!"));
}
pub fn poll_input_thread(input_worker_tx: Sender<CliEvent<Event>>) {
let tick_rate = Duration::from_millis(200);
thread::spawn(move || {
let mut last_tick = Instant::now();
loop {
let timeout = tick_rate
.checked_sub(last_tick.elapsed())
.unwrap_or_else(|| Duration::from_secs(0));
if event::poll(timeout).expect("poll works") {
let crossterm_event = event::read().expect("can read events");
input_worker_tx
.send(CliEvent::Input(crossterm_event))
.expect("can send events");
}
if last_tick.elapsed() >= tick_rate && input_worker_tx.send(CliEvent::Tick).is_ok() {
last_tick = Instant::now();
}
}
});
}
pub fn handle_input(input_worker_rx: &Receiver<CliEvent<Event>>, timeout: Duration) -> InputEvent {
match input_worker_rx.recv_timeout(timeout) {
Ok(CliEvent::Input(Event::Key(key_event))) => match key_event {
KeyEvent {
code: KeyCode::Char('q'),
..
}
| KeyEvent {
code: KeyCode::Char('c'),
modifiers: KeyModifiers::CONTROL,
..
} => {
return InputEvent::Quit;
}
KeyEvent {
code: KeyCode::Char(' '),
..
} => {
return InputEvent::Pause;
}
_ => {}
},
Ok(CliEvent::Tick) => {}
Err(RecvTimeoutError::Timeout) => {}
Err(RecvTimeoutError::Disconnected) => eject("Input handler disconnected"),
_ => {}
}
InputEvent::Continue
}
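The core of the suggestion is that recv_timeout replaces the sleep: each loop iteration waits up to one second and wakes early if a message arrives. A stripped-down, standalone illustration (the channel and messages here are made up for the example):
use std::sync::mpsc::{channel, RecvTimeoutError};
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = channel::<&str>();
    // Stand-in for the input thread: it sends a quit command after a while.
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(2500));
        tx.send("quit").unwrap();
    });
    loop {
        // recv_timeout doubles as the per-tick sleep: it returns either a
        // message or RecvTimeoutError::Timeout after at most one second.
        match rx.recv_timeout(Duration::from_secs(1)) {
            Ok("quit") | Err(RecvTimeoutError::Disconnected) => break,
            Ok(other) => println!("got command: {other}"),
            Err(RecvTimeoutError::Timeout) => println!("tick"),
        }
    }
    println!("done");
}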

How to pass a value over a channel without borrow checking issues?

In the following code, I understand why I'm not allowed to do this (I think), but I'm not sure what to do to fix the issue. I'm simply trying to perform an action based upon an incoming message on a UdpSocket. However, by sending a reference to the slice over the channel, I get a problem where the buffer doesn't live long enough. I'm hoping for some suggestions, because I don't know enough about Rust to move forward.
fn main() -> std::io::Result<()> {
let (tx, rx) = mpsc::channel();
thread::spawn(move || loop {
match rx.try_recv() {
Ok(msg) => {
match msg {
"begin" => { /* run an operation */ }
"end" | _ => { /* kill the previous operation */ }
}
}
Err(_) => { /* error handling */ }
}
});
// start listener
let socket: UdpSocket = UdpSocket::bind("0.0.0.0:9001")?;
loop {
let mut buffer = [0; 100];
let (length, src_address) = socket.recv_from(&mut buffer)?;
println!("Received message of {} bytes from {}", length, src_address);
let cmd= str::from_utf8(&buffer[0..length]).unwrap(); // <- buffer does not live long enough
println!("Command: {}", cmd);
tx.send(cmd).expect("unable to send message to channel"); // Error goes away if I remove this.
}
}
Generally you should avoid sending non-owned values over a channel, since it's unlikely that a lifetime would be valid for both the sender and the receiver (it's possible to do, but you'd have to plan for it).
In this situation, you're trying to pass a &str across the channel, but since it just references buffer, which isn't guaranteed to still exist whenever rx receives it, you get a borrow-checking error. You would probably want to convert the &str into an owned String and pass that over the channel:
use std::net::UdpSocket;
use std::sync::mpsc;
fn main() {
let (tx, rx) = mpsc::channel();
std::thread::spawn(move || loop {
match rx.recv().as_deref() {
Ok("begin") => { /* run an operation */ }
Ok("end") => { /* kill the previous operation */ }
Ok(_) => { /* unknown */ }
Err(_) => { break; }
}
});
let socket = UdpSocket::bind("0.0.0.0:9001").unwrap();
loop {
let mut buffer = [0; 100];
let (length, src_address) = socket.recv_from(&mut buffer).unwrap();
let cmd = std::str::from_utf8(&buffer[0..length]).unwrap();
tx.send(cmd.to_owned()).unwrap();
}
}
As proposed in the comments, you can avoid allocating a string if you parse the value into a known value for an enum and send that across the channel instead:
use std::net::UdpSocket;
use std::sync::mpsc;
enum Command {
Begin,
End,
}
fn main() {
let (tx, rx) = mpsc::channel();
std::thread::spawn(move || loop {
match rx.recv() {
Ok(Command::Begin) => { /* run an operation */ }
Ok(Command::End) => { /* kill the previous operation */ }
Err(_) => { break; }
}
});
let socket = UdpSocket::bind("0.0.0.0:9001").unwrap();
loop {
let mut buffer = [0; 100];
let (length, src_address) = socket.recv_from(&mut buffer).unwrap();
let cmd = std::str::from_utf8(&buffer[0..length]).unwrap();
let cmd = match cmd {
"begin" => Command::Begin,
"end" => Command::End,
_ => panic!("unknown command")
};
tx.send(cmd).unwrap();
}
}
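A possible refinement (not from the original answer): implementing FromStr for Command keeps the parsing next to the type and lets the receive site report an unknown command instead of panicking.
use std::str::FromStr;

enum Command {
    Begin,
    End,
}

impl FromStr for Command {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "begin" => Ok(Command::Begin),
            "end" => Ok(Command::End),
            other => Err(format!("unknown command: {other}")),
        }
    }
}

// At the receive site this becomes, for example:
// match cmd.parse::<Command>() {
//     Ok(cmd) => tx.send(cmd).unwrap(),
//     Err(e) => eprintln!("{e}"),
// }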

Tokio channel sends, but doesn't receive

TL;DR I'm trying to have a background thread that's ID'd and controlled via that ID through web calls, and the background thread doesn't seem to be getting the message with any of the channel types I've tried.
I've tried both the std channels and tokio's, and of those I've tried all but the watch type from tokio. All have the same result, which probably means that I've messed something up somewhere without realizing it, but I can't find the issue:
use std::collections::{
hash_map::Entry::{Occupied, Vacant},
HashMap,
};
use std::sync::Arc;
use tokio::sync::mpsc::{self, UnboundedSender};
use tokio::sync::RwLock;
use tokio::task::JoinHandle;
use uuid::Uuid;
use warp::{http, Filter};
#[derive(Default)]
pub struct Switcher {
pub handle: Option<JoinHandle<bool>>,
pub pipeline_end_tx: Option<UnboundedSender<String>>,
}
impl Switcher {
pub fn set_sender(&mut self, tx: UnboundedSender<String>) {
self.pipeline_end_tx = Some(tx);
}
pub fn set_handle(&mut self, handle: JoinHandle<bool>) {
self.handle = Some(handle);
}
}
const ADDR: [u8; 4] = [0, 0, 0, 0];
const PORT: u16 = 3000;
type RunningPipelines = Arc<RwLock<HashMap<String, Arc<RwLock<Switcher>>>>>;
#[tokio::main]
async fn main() {
let running_pipelines = Arc::new(RwLock::new(HashMap::<String, Arc<RwLock<Switcher>>>::new()));
let session_create = warp::post()
.and(with_pipelines(running_pipelines.clone()))
.and(warp::path("session"))
.then(|pipelines: RunningPipelines| async move {
println!("session requested OK!");
let id = Uuid::new_v4();
let mut switcher = Switcher::default();
let (tx, mut rx) = mpsc::unbounded_channel::<String>();
switcher.set_sender(tx);
let t = tokio::spawn(async move {
println!("Background going...");
//This would be something processing in the background until it received the end signal
match rx.recv().await {
Some(v) => {
println!(
"Got end message:{} YESSSSSS#!##!!!!!!!!!!!!!!!!1111eleven",
v
);
}
None => println!("Error receiving end signal:"),
}
println!("ABORTING HANDLE");
true
});
let ret = HashMap::from([("session_id", id.to_string())]);
switcher.set_handle(t);
{
pipelines
.write()
.await
.insert(id.to_string(), Arc::new(RwLock::new(switcher)));
}
Ok(warp::reply::json(&ret))
});
let session_end = warp::delete()
.and(with_pipelines(running_pipelines.clone()))
.and(warp::path("session"))
.and(warp::query::<HashMap<String, String>>())
.then(
|pipelines: RunningPipelines, p: HashMap<String, String>| async move {
println!("session end requested OK!: {:?}", p);
match p.get("session_id") {
None => Ok(warp::reply::with_status(
"Please specify session to end",
http::StatusCode::BAD_REQUEST,
)),
Some(id) => {
let mut pipe = pipelines.write().await;
match pipe.entry(String::from(id)) {
Occupied(handle) => {
println!("occupied");
let (k, v) = handle.remove_entry();
drop(pipe);
println!("removed from hashmap, key:{}", k);
let s = v.write().await;
if let Some(h) = &s.handle {
if let Some(tx) = &s.pipeline_end_tx {
match tx.send("goodbye".to_string()) {
Ok(res) => {
println!(
"sent end message|{:?}| to fpipeline: {}",
res, id
);
//Added this to try to get it to at least Error on the other side
drop(tx);
},
Err(err) => println!(
"ERROR sending end message to pipeline({}):{}",
id, err
),
};
} else {
println!("no sender channel found for pipeline: {}", id);
};
h.abort();
} else {
println!(
"no luck finding the value in handle in the switcher: {}",
id
);
};
}
Vacant(_) => {
println!("no luck finding the handle in the pipelines: {}", id)
}
};
Ok(warp::reply::with_status("done", http::StatusCode::OK))
}
}
},
);
let routes = session_create
.or(session_end)
.recover(handle_rejection)
.with(warp::cors().allow_any_origin());
println!("starting server...");
warp::serve(routes).run((ADDR, PORT)).await;
}
async fn handle_rejection(
err: warp::Rejection,
) -> Result<impl warp::Reply, std::convert::Infallible> {
Ok(warp::reply::json(&format!("{:?}", err)))
}
fn with_pipelines(
pipelines: RunningPipelines,
) -> impl Filter<Extract = (RunningPipelines,), Error = std::convert::Infallible> + Clone {
warp::any().map(move || pipelines.clone())
}
Dependencies:
[dependencies]
warp = "0.3"
tokio = { version = "1", features = ["full"] }
uuid = { version = "0.8.2", features = ["serde", "v4"] }
Results when I boot up, send a "create" request, and then an "end" request with the received ID:
starting server...
session requested OK!
Background going...
session end requested OK!: {"session_id": "6b984a45-38d8-41dc-bf95-422f75c5a429"}
occupied
removed from hashmap, key:6b984a45-38d8-41dc-bf95-422f75c5a429
sent end message|()| to fpipeline: 6b984a45-38d8-41dc-bf95-422f75c5a429
You'll notice that the background thread starts (and doesn't end) when the "create" request is made, but when the "end" request is made, while everything appears to complete successfully from the request (web) side, the background thread doesn't ever receive the message. As I've said, I've tried all the different channel types and moved things around to get it into this configuration, i.e. flattened and made thread-safe as much as I could, or at least could think of. I'm greener than I would like in Rust, so any help would be VERY appreciated!
I think that the issue here is that you are sending the message and then immediately aborting the background task:
tx.send("goodbye".to_string());
//...
h.abort();
And the background task does not have time to process the message, because the abort takes effect before it gets the chance to run.
What you need is to join the task, not to abort it.
Curiously, tokio task handles do not have a join() method; instead, you await the handle itself. But for that you need to own the handle, so first you have to extract it from the Switcher:
let mut s = v.write().await;
//steal the task handle
if let Some(h) = s.handle.take() {
//...
tx.send("goodbye".to_string());
//...
//join the task
h.await.unwrap();
}
Note that joining a task may fail if the task was aborted or panicked. I'm just panicking on that error in the code above (via unwrap()), but you may want to do something different.
Or... you could choose not to wait for the task at all. In tokio, if you drop a task handle, the task is detached and will finish whenever it finishes.
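To see the difference in isolation (independent of the warp setup above), a minimal sketch: the receiving task only gets to process the message if the handle is awaited, while calling abort() right after send() typically cancels it while it is still parked at the .await.
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::unbounded_channel::<String>();
    let handle = tokio::spawn(async move {
        // Stand-in for the background pipeline: it waits for the end signal.
        match rx.recv().await {
            Some(msg) => println!("got end message: {msg}"),
            None => println!("channel closed"),
        }
        true
    });

    tx.send("goodbye".to_string()).unwrap();

    // Awaiting the handle lets the task wake up and process the message.
    // Calling handle.abort() here instead would usually cancel it before
    // the println above ever runs.
    let finished = handle.await.unwrap();
    println!("background task returned {finished}");
}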

handling user input

How can I have some part of my application read user input while also listening for a shutdown signal?
According to the tokio docs, to do this type of thing I should spawn a dedicated thread and use blocking IO there:
For interactive uses, it is recommended to spawn a thread dedicated to user input and use blocking IO directly in that thread.
And so I did, with something like this:
async fn read_input(mut rx: watch::Receiver<&str>) {
let mut line = String::new();
let stdin = io::stdin();
loop {
stdin.lock().read_line(&mut line).expect("Could not read line");
let op = line.trim_right();
if op == "EXIT" {
break;
} else if op == "send" {
// send_stuff();
}
line.clear();
}
}
The thing is, how can I check the receiver channel for a shutdown and break this loop?
If I await, the code will block.
Am I approaching this with the wrong concept/architecture?
To avoid managing your own thread, there would have to be a way to use some non-blocking OS API on stdin and wrap it for tokio (as of 1.12, tokio::io::Stdin uses a blocking variant).
Otherwise, if we follow the advice from the docs and spawn our own thread, this is how it could be done:
fn start_reading_stdin_lines(
sender: tokio::sync::mpsc::Sender<String>,
runtime: tokio::runtime::Handle
) {
std::thread::spawn(move || {
let stdin = std::io::stdin();
let mut line_buf = String::new();
while let Ok(_) = stdin.read_line(&mut line_buf) {
let line = line_buf.trim_end().to_string();
line_buf.clear();
let sender2 = sender.clone();
runtime.spawn(async move {
let result = sender2.send(line).await;
if let Err(error) = result {
println!("start_reading_stdin_lines send error: {:?}", error);
}
});
}
});
}
fn start_activity_until_shutdown(watch_sender: tokio::sync::watch::Sender<bool>) {
tokio::spawn(async move {
tokio::time::sleep(tokio::time::Duration::from_secs(10)).await;
println!("exiting after a signal...");
let result = watch_sender.send(true);
if let Err(error) = result {
println!("watch_sender send error: {:?}", error);
}
});
}
async fn read_input(
mut line_receiver: tokio::sync::mpsc::Receiver<String>,
mut watch_receiver: tokio::sync::watch::Receiver<bool>
) {
loop {
tokio::select! {
Some(line) = line_receiver.recv() => {
println!("line: {}", line);
// process the input
match line.as_str() {
"exit" => {
println!("exiting manually...");
break;
},
"send" => {
println!("send_stuff");
}
unexpected_line => {
println!("unexpected command: {}", unexpected_line);
}
}
}
Ok(_) = watch_receiver.changed() => {
println!("shutdown");
break;
}
}
}
}
#[tokio::main]
async fn main() {
let (line_sender, line_receiver) = tokio::sync::mpsc::channel(1);
start_reading_stdin_lines(line_sender, tokio::runtime::Handle::current());
let (watch_sender, watch_receiver) = tokio::sync::watch::channel(false);
// this will send a shutdown signal at some point
start_activity_until_shutdown(watch_sender);
read_input(line_receiver, watch_receiver).await;
}
Potential improvements:
if you are OK with tokio_stream wrappers, this could be combined more elegantly, with start_reading_stdin_lines producing a stream of lines and mapping them to typed commands; read_input could then be based on StreamMap instead of select! (a rough sketch follows after this list)
enabling the experimental stdin_forwarders feature makes reading lines easier, with a for loop over lines()
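A rough sketch of the first improvement, assuming the tokio-stream crate with its sync wrappers enabled. For simplicity it merges the two mapped streams with StreamExt::merge rather than a StreamMap, but the idea of turning both sources into typed events is the same:
use tokio_stream::wrappers::{ReceiverStream, WatchStream};
use tokio_stream::StreamExt;

enum AppEvent {
    Line(String),
    Shutdown,
}

async fn read_input_streams(
    line_receiver: tokio::sync::mpsc::Receiver<String>,
    watch_receiver: tokio::sync::watch::Receiver<bool>,
) {
    // Map both sources into a single event type so they can be merged.
    let lines = ReceiverStream::new(line_receiver).map(AppEvent::Line);
    let shutdowns = WatchStream::new(watch_receiver)
        // WatchStream yields the current value first, so skip `false`.
        .filter(|shutdown| *shutdown)
        .map(|_| AppEvent::Shutdown);
    let mut events = lines.merge(shutdowns);

    while let Some(event) = events.next().await {
        match event {
            AppEvent::Line(line) => println!("line: {line}"),
            AppEvent::Shutdown => {
                println!("shutdown");
                break;
            }
        }
    }
}
It would be called from main in place of read_input, e.g. read_input_streams(line_receiver, watch_receiver).await.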

Application on OSX cannot spawn more than 2048 threads

I have a Rust application on OSX firing up a large number of threads, as can be seen in the code below. However, after looking up the maximum number of threads my version of OSX allows a process to create, via the sysctl kern.num_taskthreads command, I can see that it is kern.num_taskthreads: 2048, which explains why I can't spin up more than 2048 threads.
How do I go about getting past this hard limit?
let threads = 300000;
let requests = 1;
for _x in 0..threads {
println!("{}", _x);
let request_clone = request.clone();
let handle = thread::spawn(move || {
for _y in 0..requests {
request_clone.lock().unwrap().push((request::Request::new(request::Request::create_request())));
}
});
child_threads.push(handle);
}
Before starting, I'd encourage you to read about the C10K problem. When you get into this scale, there are a lot more things you need to keep in mind.
That being said, I'd suggest looking at mio...
a lightweight IO library for Rust with a focus on adding as little overhead as possible over the OS abstractions.
Specifically, mio provides an event loop, which allows you to handle a large number of connections without spawning threads. Unfortunately, I don't know of an HTTP library that currently supports mio. You could create one and be a hero to the Rust community!
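To make the suggestion concrete, below is a rough, hypothetical sketch of a mio event loop using the current mio API (which has changed substantially since this answer was written): it registers a single non-blocking connection with a Poll and reacts to readiness events instead of dedicating a thread to each connection. The address is a placeholder.
use mio::net::TcpStream;
use mio::{Events, Interest, Poll, Token};
use std::error::Error;

fn main() -> Result<(), Box<dyn Error>> {
    let mut poll = Poll::new()?;
    let mut events = Events::with_capacity(1024);

    // One non-blocking connection; a real load generator would register
    // thousands of these, each with its own Token.
    let addr = "127.0.0.1:8080".parse()?;
    let mut stream = TcpStream::connect(addr)?;
    poll.registry()
        .register(&mut stream, Token(0), Interest::READABLE | Interest::WRITABLE)?;

    loop {
        // Block until at least one registered socket is ready.
        poll.poll(&mut events, None)?;
        for event in events.iter() {
            if event.token() == Token(0) && event.is_writable() {
                // The connection attempt has completed (successfully or not);
                // a real program would check for errors and start writing here.
                return Ok(());
            }
        }
    }
}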
Not sure how helpful this will be, but I was trying to create a small pool of threads that will create connections and then send them over to an event loop via a channel for reading.
I'm sure this code is probably pretty bad, but here it is anyway as an example. It uses the Hyper library, like you mentioned.
extern crate hyper;
use std::io::Read;
use std::thread;
use std::thread::{JoinHandle};
use std::sync::{Arc, Mutex};
use std::sync::mpsc::channel;
use hyper::Client;
use hyper::client::Response;
use hyper::header::Connection;
const TARGET: i32 = 100;
const THREADS: i32 = 10;
struct ResponseWithString {
index: i32,
response: Response,
data: Vec<u8>,
complete: bool
}
fn main() {
// Create a client.
let url: &'static str = "http://www.gooogle.com/";
let mut threads = Vec::<JoinHandle<()>>::with_capacity((TARGET * 2) as usize);
let conn_count = Arc::new(Mutex::new(0));
let (tx, rx) = channel::<ResponseWithString>();
for _ in 0..THREADS {
// Move var references into thread context
let conn_count = conn_count.clone();
let tx = tx.clone();
let t = thread::spawn(move || {
loop {
let idx: i32;
{
// Lock, increment, and release
let mut count = conn_count.lock().unwrap();
*count += 1;
idx = *count;
}
if idx > TARGET {
break;
}
let mut client = Client::new();
// Creating an outgoing request.
println!("Creating connection {}...", idx);
let res = client.get(url) // Get URL...
.header(Connection::close()) // Set headers...
.send().unwrap(); // Fire!
println!("Pushing response {}...", idx);
tx.send(ResponseWithString {
index: idx,
response: res,
data: Vec::<u8>::with_capacity(1024),
complete: false
}).unwrap();
}
});
threads.push(t);
}
let mut responses = Vec::<ResponseWithString>::with_capacity(TARGET as usize);
let mut buf: [u8; 1024] = [0; 1024];
let mut completed_count = 0;
loop {
if completed_count >= TARGET {
break; // No more work!
}
match rx.try_recv() {
Ok(r) => {
println!("Incoming response! {}", r.index);
responses.push(r)
},
_ => { }
}
for r in &mut responses {
if r.complete {
continue;
}
// Read the Response.
let res = &mut r.response;
let data = &mut r.data;
let idx = &r.index;
match res.read(&mut buf) {
Ok(i) => {
if i == 0 {
println!("No more data! {}", idx);
r.complete = true;
completed_count += 1;
}
else {
println!("Got data! {} => {}", idx, i);
for x in 0..i {
data.push(buf[x]);
}
}
}
Err(e) => {
panic!("Oh no! {} {}", idx, e);
}
}
}
}
}

Resources