Best nested parallelization approach with Rust Futures

I'm not sure if this is even a good question to ask, but I hope the answer can also cover whether nested parallelization is a bad idea, in addition to my main questions. Thank you!
Suppose I have tasks a, b, c and d. Each of these tasks has its own subtasks: a1, a2, and so on.
Sometimes a1 might have a bunch of subtasks of its own. The subtasks are nested inside a, b, c and d because they depend on being processed together as a whole, so there is no way to decouple them into independent tasks e, f, g and so on.
I am currently using Futures and a ThreadPool to process these tasks concurrently and in parallel (such that a, b, ... run in parallel, and a1, a2, ... run in parallel).
However, I was greeted with
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" }', src/my_folder/my_file.rs:123:123
The code that produced this error is
let thread_pool = ThreadPool::new().unwrap();
So my questions are:
1. Is nested parallelization correct in this case? (If not, what should I do? If yes, how should I do it?)
2. How do I ensure that thread pools are created safely, so that the unwrap does not fail when resources are inadequate?
3. If (2) is the wrong thing to do, how should I implement my code instead?
let tasks: Vec<_> = block_transactions.into_iter()
    .map(|encoded_tx| {
        let pool = core_pool.clone();
        let rpc_url = core_rpc_url.clone();
        let block = core_block.clone();
        tokio::spawn(async move {
            let sol_client = RpcClient::new(rpc_url.clone());
            let mut tx = crate::models::transactions::Transaction {
                hash: "".to_string(),
                block: block.number.clone(),
                is_confirmed: true,
                status: TransactionStatus::Success,
                fee: 0,
                timestamp: block.timestamp.clone()
            };
            if let Some(decoded_tx) = (&encoded_tx.transaction).decode() {
                // Each item in the signatures array is a digital signature
                // of the given message.
                // Additional safety net
                if decoded_tx.signatures.len() >= 1 {
                    // Signatures should be Vec<Signature>.
                    // The first one, signatures[0], is the hash that is used to
                    // identify the transaction (e.g. in the explorer), and you can
                    // get the base58-encoded string using the .to_string() method.
                    // Seed first signature
                    tx.hash = decoded_tx.signatures[0].to_string();
                    // Seed the transaction
                    let tx_res = crate::actions::transactions::create_or_ignore_transaction(
                        &*pool.get().unwrap(),
                        &tx,
                    );
                    // Seed everything that depends ONLY on the created transaction
                    if let Ok(created_tx) = tx_res {
                        // Spawn the futures threader first
                        let thread_pool = ThreadPool::new().unwrap();
                        // Seed subsequents
                        let tst_pool = pool.clone();
                        let tst_dtx = decoded_tx.clone();
                        let tst_tx = tx.clone();
                        let transaction_signatures_task = async move {
                            let transaction_signatures: Vec<_> = tst_dtx
                                .signatures
                                .as_slice()
                                .into_par_iter()
                                .filter(|signature| &signature.to_string() != &tst_tx.hash)
                                .map(|signature| {
                                    // Subsequent signatures lead to the same tx, just
                                    // signed from a different key pair.
                                    InsertableTransactionSignature {
                                        transaction_hash: tst_tx.hash.clone(),
                                        signature: signature.to_string().clone(),
                                        timestamp: tst_tx.timestamp.clone(),
                                    }
                                })
                                .collect();
                            let cts_result =
                                crate::actions::transaction_signatures::create_transaction_signatures(
                                    &*tst_pool.get().unwrap(),
                                    transaction_signatures.as_slice(),
                                );
                            if let Ok(created_ts) = cts_result {
                                if created_ts.len() != (tst_dtx.signatures.len() - 1) {
                                    eprintln!("[processors/transaction] WARN: Looks like there's a signature \
                                        creation count mismatch for tx {}", tst_tx.hash.clone());
                                }
                            } else {
                                let cts_result_err = cts_result.err().unwrap();
                                eprintln!(
                                    "[processors/transaction] FATAL: signature seeding error for \
                                    tx {} due to: {}",
                                    tst_tx.hash.clone(),
                                    cts_result_err.to_string()
                                );
                            }
                        };
                        let tst_handle = thread_pool
                            .spawn_with_handle(transaction_signatures_task)
                            .unwrap();
                        // A message contains a header.
                        // The message header contains three unsigned 8-bit values.
                        // The first value is the number of required signatures in the
                        // containing transaction. The second value is the number of those
                        // corresponding account addresses that are read-only.
                        // The third value in the message header is the number of read-only
                        // account addresses not requiring signatures.
                        // Identify which are required addresses, addresses requesting write
                        // access, then addresses with read-only access.
                        let req_sig_count =
                            (decoded_tx.message.header.num_required_signatures.clone()) as usize;
                        let rw_sig_count = (decoded_tx
                            .message
                            .header
                            .num_readonly_signed_accounts
                            .clone()) as usize;
                        let ro_sig_count = (decoded_tx
                            .message
                            .header
                            .num_readonly_unsigned_accounts
                            .clone()) as usize;
                        // tx.message.account_keys is a compact-array of account addresses.
                        // The addresses that require signatures appear at the beginning of the
                        // account address array, with addresses requesting write access first
                        // and read-only accounts following. The addresses that do not require
                        // signatures follow the addresses that do, again with read-write
                        // accounts first and read-only accounts following.
                        let at_pool = pool.clone();
                        let at_tx = tx.clone();
                        let at_dtx = decoded_tx.clone();
                        let at_block = block.clone();
                        let accounts_task = async move {
                            crate::processors::account::process_account_data(
                                &at_pool,
                                &sol_client,
                                &at_tx.hash,
                                &req_sig_count,
                                &rw_sig_count,
                                &ro_sig_count,
                                &at_dtx.message.account_keys,
                                &at_block.timestamp)
                                .await;
                        };
                        let at_handle = thread_pool
                            .spawn_with_handle(accounts_task)
                            .unwrap();
                        // Each instruction specifies a single program, a subset of
                        // the transaction's accounts that should be passed to the program,
                        // and a data byte array that is passed to the program. The program
                        // interprets the data array and operates on the accounts specified
                        // by the instructions. The program can return successfully, or
                        // with an error code. An error return causes the entire
                        // transaction to fail immediately.
                        let it_pool = pool.clone();
                        let it_ctx = created_tx.clone();
                        let it_dtx = decoded_tx.clone();
                        let instruction_task = async move {
                            crate::processors::instruction::process_instructions(
                                &it_pool,
                                &it_dtx,
                                &it_ctx,
                                it_dtx.message.instructions.as_slice(),
                                it_dtx.message.account_keys.as_slice())
                                .await;
                        };
                        let it_handle = thread_pool
                            .spawn_with_handle(instruction_task)
                            .unwrap();
                        // Ensure we have the tx meta as well
                        let tmt_pool = pool.clone();
                        let tmt_block = block.clone();
                        let tmt_meta = encoded_tx.meta.clone();
                        let tx_meta_task = async move {
                            if let Some(tx_meta) = tmt_meta {
                                let tm_thread_pool = ThreadPool::new().unwrap();
                                // Process the transaction's meta and validate the various
                                // data structures. Seed the accounts, alternate tx hashes,
                                // logs, balance inputs first.
                                // pub log_messages: Option<Vec<String>>,
                                let tlt_pool = tmt_pool.clone();
                                let tlt_txm = tx_meta.clone();
                                let tlt_tx = tx.clone();
                                let transaction_log_task = async move {
                                    if let Some(log_messages) = &tlt_txm.log_messages {
                                        let transaction_logs = log_messages
                                            .into_par_iter()
                                            .enumerate()
                                            .map(|(idx, log_msg)| {
                                                InsertableTransactionLog {
                                                    transaction_hash: tlt_tx.hash.clone(),
                                                    data: log_msg.clone(),
                                                    line: idx as i32,
                                                    timestamp: tlt_tx.timestamp.clone()
                                                }
                                            })
                                            .collect::<Vec<InsertableTransactionLog>>();
                                        let tl_result = crate::actions::transaction_logs::batch_create(
                                            &*tlt_pool.get().unwrap(),
                                            transaction_logs.as_slice()
                                        );
                                        if let Err(err) = tl_result {
                                            eprintln!("[processors/transaction] WARN: Problem pushing \
                                                transaction logs for tx {} due to {}", tlt_tx.hash.clone(),
                                                err.to_string());
                                        }
                                    }
                                };
                                let tlt_handle = tm_thread_pool
                                    .spawn_with_handle(transaction_log_task)
                                    .unwrap();
                                // Gather and seed all account inputs
                                let ait_pool = tmt_pool.clone();
                                let ait_txm = tx_meta.clone();
                                let ait_tx = tx.clone();
                                let ait_dtx = decoded_tx.clone();
                                let account_inputs_task = async move {
                                    let account_inputs: Vec<InsertableAccountInput> =
                                        (0..ait_dtx.message.account_keys.len())
                                            .into_par_iter()
                                            .map(|i| {
                                                let current_account_hash =
                                                    ait_dtx.message.account_keys[i].clone().to_string();
                                                InsertableAccountInput {
                                                    transaction_hash: ait_tx.hash.clone(),
                                                    account: current_account_hash.to_string(),
                                                    token_id: "".to_string(),
                                                    pre_balance: (ait_txm.pre_balances[i] as i64),
                                                    post_balance: Option::from(ait_txm.post_balances[i] as i64),
                                                    timestamp: block.timestamp.clone()
                                                }
                                            }).collect();
                                    let result = crate::actions::account_inputs::batch_create(
                                        &*ait_pool.get().unwrap(),
                                        account_inputs);
                                    if let Err(error) = result {
                                        eprintln!("[processors/transaction] FATAL: Problem indexing \
                                            account inputs for tx {} due to {}", ait_tx.hash.clone(),
                                            error.to_string());
                                    }
                                };
                                let ait_handle = tm_thread_pool
                                    .spawn_with_handle(account_inputs_task)
                                    .unwrap();
                                // If there are token balances for this transaction:
                                // pub pre_token_balances: Option<Vec<UiTransactionTokenBalance>>,
                                // pub post_token_balances: Option<Vec<UiTransactionTokenBalance>>,
                                let tai_pool = tmt_pool.clone();
                                let tai_txm = tx_meta.clone();
                                let tai_tx = tx.clone();
                                let tai_block = tmt_block.clone();
                                let tai_dtx = decoded_tx.clone();
                                let token_account_inputs_task = async move {
                                    if let (Some(pre_token_balances), Some(post_token_balances)) =
                                        (tai_txm.pre_token_balances, tai_txm.post_token_balances)
                                    {
                                        super::account_input::process_token_account_inputs(
                                            &tai_pool,
                                            &tai_tx.hash,
                                            &pre_token_balances,
                                            &post_token_balances,
                                            &tai_dtx.message.account_keys,
                                            &tai_block.timestamp,
                                        ).await;
                                    }
                                };
                                let tai_handle = tm_thread_pool
                                    .spawn_with_handle(token_account_inputs_task)
                                    .unwrap();
                                let iit_pool = tmt_pool.clone();
                                let iit_dtx = decoded_tx.clone();
                                let iit_txm = tx_meta.clone();
                                let iit_tx = tx.clone();
                                let inner_instructions_task = async move {
                                    if let Some(inner_instructions) = iit_txm.inner_instructions {
                                        crate::processors::inner_instruction::process(
                                            &iit_pool,
                                            inner_instructions.as_slice(),
                                            &iit_dtx,
                                            &iit_tx,
                                            iit_dtx.message.account_keys.as_slice())
                                            .await;
                                    }
                                };
                                let iit_handle = tm_thread_pool
                                    .spawn_with_handle(inner_instructions_task)
                                    .unwrap();
                                let tasks_future =
                                    future::join_all(vec![tlt_handle, ait_handle, tai_handle, iit_handle]);
                                tasks_future.await;
                                // Update the tx's metadata and proceed with instructions processing
                                let update_result =
                                    crate::actions::transactions::update(&*tmt_pool.get().unwrap(), &tx);
                                if let Err(update_err) = update_result {
                                    eprintln!(
                                        "[blockchain_syncer] FATAL: Problem updating tx: {}",
                                        update_err
                                    );
                                }
                            } else {
                                eprintln!(
                                    "[processors/transaction] WARN: tx {} has no metadata!",
                                    tx.hash
                                );
                            }
                        };
                        let tmt_handle = thread_pool
                            .spawn_with_handle(tx_meta_task)
                            .unwrap();
                        let future_batch =
                            future::join_all(vec![at_handle, it_handle, tst_handle, tmt_handle]);
                        future_batch.await;
                    } else {
                        let tx_err = tx_res.err();
                        if let Some(err) = tx_err {
                            eprintln!(
                                "[blockchain_syncer] WARN: Problem pushing tx {} to DB \
                                due to: {}",
                                tx.hash, err
                            )
                        } else {
                            eprintln!(
                                "[blockchain_syncer] FATAL: Problem pushing tx {} to DB \
                                due to an unknown error",
                                tx.hash
                            );
                        }
                    }
                } else {
                    eprintln!(
                        "[blockchain_syncer] FATAL: a transaction in block {} has no hashes!",
                        &block.number
                    );
                }
            } else {
                eprintln!(
                    "[blockchain_syncer] FATAL: Unable to obtain \
                    account information vec from chain for tx {}",
                    &tx.hash
                );
            }
        })
    })
    .collect();
future::join_all(tasks).await;
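Nested concurrency itself is not the problem here; creating a new ThreadPool inside every task is. futures::executor::ThreadPool::new() spawns one OS thread per logical core, and this code builds one pool per transaction (plus another per transaction meta), so a busy block can ask the OS for thousands of threads; the OS eventually refuses with EAGAIN, which surfaces as the Os { code: 35, kind: WouldBlock } panic. A minimal sketch of one way to restructure, assuming the futures and tokio crates already in use above (process_block, tx_id and the placeholder subtask bodies are illustrative, not the original code): build one pool up front, handle the creation error once at startup, and hand out clones, which all share the same worker threads.
use futures::executor::ThreadPool;
use futures::future;
use futures::task::SpawnExt;

async fn process_block(block_transactions: Vec<u64>) {
    // One pool for the whole process. Creating it once, up front, means a
    // failure is visible at startup instead of mid-sync, and the worker
    // count stays fixed no matter how many tasks are spawned.
    let thread_pool = ThreadPool::builder()
        .pool_size(8) // ThreadPool::new() defaults to the logical core count
        .create()
        .expect("failed to build the shared thread pool");

    let tasks: Vec<_> = block_transactions
        .into_iter()
        .map(|tx_id| {
            let pool = thread_pool.clone(); // clones share the same workers
            tokio::spawn(async move {
                // Nested subtasks reuse the shared pool instead of building new ones.
                let sig_task = pool
                    .spawn_with_handle(async move { /* seed signatures for tx_id */ })
                    .unwrap();
                let acc_task = pool
                    .spawn_with_handle(async move { /* seed accounts for tx_id */ })
                    .unwrap();
                future::join_all(vec![sig_task, acc_task]).await;
                let _ = tx_id; // placeholder for the real transaction data
            })
        })
        .collect();
    future::join_all(tasks).await;
}
Since this code already runs on a Tokio runtime, the extra pool could also be dropped entirely: tokio::spawn the subtasks as well, and the nesting just queues more work onto the runtime's existing workers instead of creating new threads.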

Related

Sending SOL from one wallet to another never arrives

I'm trying to send SOL from one account to another in the Devnet using system_instruction::transfer but the funds never move from the wallet_a to wallet_b. Am I missing something? The wallet A has 1.5 solana.
I also tried to use the private key in the sender but didn't work.
https://solanacookbook.com/references/basic-transactions.html#how-to-send-spl-tokens
use solana_sdk::signer::keypair::Keypair;
use solana_client::rpc_client::RpcClient;
use std::str::FromStr;
use solana_program::pubkey::Pubkey;
use solana_program::system_instruction;

fn main() {
    let wallet_a = "DVJM5LZEMWwypvgBRysQBUKXE6Jc2wqDcJouGerZnbZz";
    let wallet_b = "9c6YTLYHxnRzvQtUwd41qGUBhdSWkGJFtedbxWg8D6eZ";
    let from: Pubkey = Pubkey::from_str(&wallet_a).unwrap();
    let dest: Pubkey = Pubkey::from_str(&wallet_b).unwrap();
    let amount: u64 = 1000000000;
    send_sol(&from, &dest, amount);
}

fn send_sol(from_wallet: &Pubkey, to_wallet: &Pubkey, amount: u64) {
    println!("From: {} to: {}", from_wallet, to_wallet);
    let result = system_instruction::transfer(from_wallet, to_wallet, amount);
    println!("{:?}", result);
}
This is the output:
From: DVJM5LZEMWwypvgBRysQBUKXE6Jc2wqDcJouGerZnbZz to: 9c6YTLYHxnRzvQtUwd41qGUBhdSWkGJFtedbxWg8D6eZ
Instruction { program_id: 11111111111111111111111111111111, accounts: [AccountMeta { pubkey: DVJM5LZEMWwypvgBRysQBUKXE6Jc2wqDcJouGerZnbZz, is_signer: true, is_writable: true }, AccountMeta { pubkey: 9c6YTLYHxnRzvQtUwd41qGUBhdSWkGJFtedbxWg8D6eZ, is_signer: false, is_writable: true }], data: [2, 0, 0, 0, 0, 202, 154, 59, 0, 0, 0, 0] }
All you've done in your send_sol function is generate the instruction to send SOL. You need to put that instruction in a Transaction and then sign and send it with the RpcClient (note that the sender has to be a Keypair, since the transaction must be signed):
// Assumes these imports, in addition to the question's:
// use solana_sdk::commitment_config::CommitmentConfig;
// use solana_sdk::native_token::LAMPORTS_PER_SOL;
// use solana_sdk::signature::{Keypair, Signer};
// use solana_sdk::transaction::Transaction;
let rpc_url = String::from("https://api.devnet.solana.com");
let connection = RpcClient::new_with_commitment(rpc_url, CommitmentConfig::confirmed());

// The sender must be a Keypair, not just a Pubkey: signing needs the secret key
let from = Keypair::new();
let frompubkey = from.pubkey();

// Airdropping some SOL to the 'from' account
match connection.request_airdrop(&frompubkey, LAMPORTS_PER_SOL) {
    Ok(sig) => loop {
        if let Ok(confirmed) = connection.confirm_transaction(&sig) {
            if confirmed {
                println!("Transaction: {} Status: {}", sig, confirmed);
                break;
            }
        }
    },
    Err(_) => println!("Error requesting airdrop"),
};

// Creating the transfer SOL instruction (topubkey and lamports_to_send as in the question)
let ix = system_instruction::transfer(&frompubkey, &topubkey, lamports_to_send);

// Putting the transfer SOL instruction into a transaction, signed by `from`
let recent_blockhash = connection.get_latest_blockhash().expect("Failed to get latest blockhash.");
let txn = Transaction::new_signed_with_payer(&[ix], Some(&frompubkey), &[&from], recent_blockhash);

// Sending the transfer SOL transaction
match connection.send_and_confirm_transaction(&txn) {
    Ok(sig) => loop {
        if let Ok(confirmed) = connection.confirm_transaction(&sig) {
            if confirmed {
                println!("Transaction: {} Status: {}", sig, confirmed);
                break;
            }
        }
    },
    Err(e) => println!("Error transferring SOL: {}", e),
}

Update the values of child objects at runtime

How can I update a child widget of a gtk::Grid (a gtk::Label in the example) at runtime?
In the example code, after the SpinButton value changes, I add a freshly recreated Grid (fn grid()) with updated children (I don't remove the old Grid in the example).
In the real project I need to add a Grid whose child Label can be updated without recreating the Grid. New values will be continuously read from a database.
Example:
pub fn test(parent: &gtk::Box) {
    let parent_grid = gtk::Grid::new();
    let parent_spin = gtk::SpinButton::with_range(0.0, 10.0, 1.0);
    parent_grid.add(&parent_spin);
    let parent_value = Rc::new(RefCell::new(0));
    let parent_value_clone = parent_value.clone();
    let parent_grid_rc = Rc::new(RefCell::new(parent_grid));
    let parent_grid_rc_clone = parent_grid_rc.clone();
    parent_spin.connect_value_changed(move |x| {
        *parent_value_clone.borrow_mut() = x.value_as_int();
        (*parent_grid_rc_clone.borrow_mut()).add(&grid(*parent_value.borrow()));
    });
    (*parent_grid_rc.borrow()).show_all();
    parent.add(&(*parent_grid_rc.borrow()));
}

fn grid(value: i32) -> gtk::Grid {
    let grid = gtk::Grid::new();
    let label_box = gtk::Label::new(Some("Value: "));
    // THIS LABEL MUST BE UPDATED DURING RUNTIME
    let value_box = Rc::new(RefCell::new(gtk::Label::new(Some(&format!("{}", value)))));
    grid.add(&label_box);
    grid.add(&(*value_box.borrow()));
    grid.show_all();
    grid
}
I'm new to this, so if there are other methods for creating dynamic objects in Rust-GTK and modifying their children, I'd love to hear about them.
The glib library got me out of my trouble.
I didn't find any clear examples on the internet, so let my solution be here.
pub fn test(parent: &gtk::Box) {
    let parent_grid = gtk::Grid::new();
    let parent_spin = gtk::SpinButton::with_range(0.0, 10.0, 1.0);
    parent_grid.add(&parent_spin);
    let parent_value = Rc::new(RefCell::new(0));
    let parent_value_clone = parent_value.clone();
    parent_spin.connect_value_changed(move |x| {
        *parent_value_clone.borrow_mut() = x.value_as_int();
    });
    let grid = gtk::Grid::new();
    let label_box = gtk::Label::new(Some("Value: "));
    let value_box = gtk::Label::new(Some("Value"));
    grid.add(&label_box);
    grid.add(&value_box);
    grid.show_all();
    // Poll the shared value every 2 seconds and refresh the label on the main thread
    let update_label = move || {
        println!("value={}", *parent_value.borrow());
        value_box.set_text(&format!("{}", *parent_value.borrow()));
        glib::Continue(true)
    };
    glib::timeout_add_seconds_local(2, update_label);
    parent_grid.add(&grid);
    parent_grid.show_all();
    parent.add(&parent_grid);
}

fn grid(value: Rc<RefCell<i32>>) -> gtk::Grid {
    let grid = gtk::Grid::new();
    let label_box = gtk::Label::new(Some("Value: "));
    let value_box = gtk::Label::new(Some("Value"));
    grid.add(&label_box);
    grid.add(&value_box);
    // Note: the i32 is copied out of the RefCell here, so the closure below
    // keeps re-setting this one snapshot rather than a live value.
    let value = *value.borrow();
    grid.show_all();
    let update_label = move || {
        value_box.set_text(&format!("{}", value));
        glib::Continue(true)
    };
    glib::timeout_add_seconds_local(1, update_label);
    grid
}
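Since the real values are read continuously from a database, an alternative to polling on a timer is to push each new value to the GTK main thread over a glib channel. A minimal sketch, matching the pre-0.16 glib API used above (glib::Continue); the worker thread and the test_with_channel name are illustrative, not part of the original solution:
use std::thread;
use std::time::Duration;

pub fn test_with_channel(parent: &gtk::Box) {
    let value_box = gtk::Label::new(Some("0"));
    parent.add(&value_box);

    // Channel whose receiver runs on the GTK main thread
    let (sender, receiver) = glib::MainContext::channel::<i32>(glib::PRIORITY_DEFAULT);

    // A worker thread standing in for the database reader
    thread::spawn(move || {
        for i in 0.. {
            // send() fails once the receiver is gone; stop the worker then
            if sender.send(i).is_err() {
                break;
            }
            thread::sleep(Duration::from_secs(2));
        }
    });

    // Update the label whenever a new value arrives
    receiver.attach(None, move |value| {
        value_box.set_text(&value.to_string());
        glib::Continue(true)
    });
}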

What is the equivalent of "BLOCKCHAIN_INTERFACE 3.1.0" in near-sdk 4.0.0-pre.4?

In near-sdk 3.1.0 we use the BLOCKCHAIN_INTERFACE to perform a DAO remote upgrade with the following method:
#[cfg(target_arch = "wasm32")]
pub fn upgrade(self) {
    // assert!(env::predecessor_account_id() == self.minter_account_id);
    // input is code: Vec<u8> on REGISTER 0
    // log!("bytes.length {}", code.unwrap().len());
    const GAS_FOR_UPGRADE: u64 = 20 * TGAS; // gas occupied by this fn
    const BLOCKCHAIN_INTERFACE_NOT_SET_ERR: &str = "Blockchain interface not set.";
    // after upgrade we call *pub fn migrate()* on the NEW CODE
    let current_id = env::current_account_id().into_bytes();
    let migrate_method_name = "migrate".as_bytes().to_vec();
    let attached_gas = env::prepaid_gas() - env::used_gas() - GAS_FOR_UPGRADE;
    unsafe {
        BLOCKCHAIN_INTERFACE.with(|b| {
            // Load input (new contract code) into register 0
            b.borrow()
                .as_ref()
                .expect(BLOCKCHAIN_INTERFACE_NOT_SET_ERR)
                .input(0);
            // prepare self-call promise
            let promise_id = b
                .borrow()
                .as_ref()
                .expect(BLOCKCHAIN_INTERFACE_NOT_SET_ERR)
                .promise_batch_create(current_id.len() as _, current_id.as_ptr() as _);
            // 1st action: deploy/upgrade code (takes code from register 0)
            b.borrow()
                .as_ref()
                .expect(BLOCKCHAIN_INTERFACE_NOT_SET_ERR)
                .promise_batch_action_deploy_contract(promise_id, u64::MAX as _, 0);
            // 2nd action: schedule a call to "migrate()".
            // Will execute on the **new code**
            b.borrow()
                .as_ref()
                .expect(BLOCKCHAIN_INTERFACE_NOT_SET_ERR)
                .promise_batch_action_function_call(
                    promise_id,
                    migrate_method_name.len() as _,
                    migrate_method_name.as_ptr() as _,
                    0 as _,
                    0 as _,
                    0 as _,
                    attached_gas,
                );
        });
    }
}
To use the BLOCKCHAIN_INTERFACE I use this import:
use near_sdk::env::BLOCKCHAIN_INTERFACE;
In near-sdk 4.0.0-pre.4 I can't use this interface to do the remote upgrade; how can I solve this?
I read something about MockedBlockchain, but I can't use it: the import doesn't exist, the methods are private, and it's documented as being only for tests.
That blockchain interface was removed completely, so there is no need to go through it at all anymore. You can use near_sdk::sys to call each low-level method directly. Here is the contract code, migrated:
#[cfg(target_arch = "wasm32")]
pub fn upgrade(self) {
    use near_sdk::sys;
    // assert!(env::predecessor_account_id() == self.minter_account_id);
    // input is code: Vec<u8> on REGISTER 0
    // log!("bytes.length {}", code.unwrap().len());
    const GAS_FOR_UPGRADE: u64 = 20 * TGAS; // gas occupied by this fn
    // after upgrade we call *pub fn migrate()* on the NEW CODE
    let current_id = env::current_account_id().into_bytes();
    let migrate_method_name = "migrate".as_bytes().to_vec();
    let attached_gas = env::prepaid_gas() - env::used_gas() - GAS_FOR_UPGRADE;
    unsafe {
        // Load input (new contract code) into register 0
        sys::input(0);
        // prepare self-call promise
        let promise_id =
            sys::promise_batch_create(current_id.len() as _, current_id.as_ptr() as _);
        // 1st action: deploy/upgrade code (takes code from register 0)
        sys::promise_batch_action_deploy_contract(promise_id, u64::MAX as _, 0);
        // 2nd action: schedule a call to "migrate()".
        // Will execute on the **new code**
        sys::promise_batch_action_function_call(
            promise_id,
            migrate_method_name.len() as _,
            migrate_method_name.as_ptr() as _,
            0 as _,
            0 as _,
            0 as _,
            attached_gas,
        );
    }
}
Let me know if this solves your problem completely or if there is anything else I can help with :)

wgpu-rs: `thread 'main' panicked at 'Texture[1] does not exist'`

I create a wgpu::TextureView within a render method as below:
let mut encoder = self.device.create_command_encoder(...);
let texture_view = self
    .surface
    .get_current_frame()?
    .output
    .texture
    .create_view(&wgpu::TextureViewDescriptor::default());
let mut render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
    color_attachments: &[wgpu::RenderPassColorAttachment {
        view: &texture_view,
        ...
    }],
    ...
});
render_pass.set_/* pipeline, bind_group, vertex_buffer, index_buffer */(...);
render_pass.draw_indexed(...);
self.queue.submit(std::iter::once(encoder.finish()));
But when I run the program, it panics:
thread 'main' panicked at 'Texture[1] does not exist', /home/doliphin/.cargo/registry/src/github.com-1ecc6299db9ec823/wgpu-core-0.10.0/src/hub.rs:129:32
There is an open issue for this.
It can be solved by forcing the SurfaceTexture to be dropped after the TextureView: locals drop in reverse declaration order, so moving the view and the render pass into an inner scope ends their borrows before the queue submission, while the SurfaceTexture stays alive until after it.
let mut encoder = self.device.create_command_encoder(...);
let surface_texture = self.surface.get_current_frame()?.output; // SurfaceTexture
{
    let texture_view = surface_texture
        .texture
        .create_view(&wgpu::TextureViewDescriptor::default()); // TextureView
    let mut render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
        color_attachments: &[wgpu::RenderPassColorAttachment {
            view: &texture_view,
            ...
        }],
        ...
    });
    render_pass.set_/* pipeline, bind_group, vertex_buffer, index_buffer */(...);
    render_pass.draw_indexed(...);
}
// drop(render_pass);
// drop(texture_view);
self.queue.submit(std::iter::once(encoder.finish()));
// drop(surface_texture);
// drop(encoder);
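The scoped version works purely because of Rust's drop rules, which can be seen in isolation. A standalone sketch of that rule (the Loud type is illustrative and has nothing to do with wgpu):
struct Loud(&'static str);

impl Drop for Loud {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    let _surface_texture = Loud("surface_texture");
    {
        let _texture_view = Loud("texture_view");
        let _render_pass = Loud("render_pass");
    } // inner scope ends: prints "dropping render_pass", then "dropping texture_view"
    println!("submit");
} // prints "dropping surface_texture" last, after the submit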

Why do threads containing an MPSC channel never join?

I have this example, collected from the internet:
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Transaction enum
enum Transaction {
    Widthdrawl(String, f64),
    Deposit(String, f64),
}

fn main() {
    // A banking send/receive example...
    // Set the number of customers
    let n_customers = 10;
    // Create a "customer" and a "banker"
    let (customers, banker) = mpsc::channel();
    let handles = (0..n_customers + 1)
        .into_iter()
        .map(|i| {
            // Create another "customer"
            let customer = customers.clone();
            // Create the customer thread
            let handle = thread::Builder::new()
                .name(format!("{}{}", "thread", i).into())
                .spawn(move || {
                    // Define the transaction
                    let trans_type = match i % 2 {
                        0 => Transaction::Deposit(
                            thread::current().name().unwrap().to_string(),
                            (i + 5) as f64 * 10.0,
                        ),
                        _ => Transaction::Widthdrawl(
                            thread::current().name().unwrap().to_string(),
                            (i + 10) as f64 * 5.0,
                        ),
                    };
                    // Send the transaction
                    customer.send(trans_type).unwrap();
                });
            handle
        })
        .collect::<Vec<Result<thread::JoinHandle<_>, _>>>();
    // Wait for threads to finish
    for handle in handles {
        handle.unwrap().join().unwrap()
    }
    // Create a bank thread
    let bank = thread::spawn(move || {
        // Create a value
        let mut balance: f64 = 10000.0;
        println!("Initially, Bank value: {}", balance);
        // Perform the transactions in order
        //banker.recv_timeout(Duration::new(5, 0)); <-- TIMEOUT line...
        banker.into_iter().for_each(|i| {
            let mut customer_name: String = "None".to_string();
            match i {
                // Subtract for withdrawals
                Transaction::Widthdrawl(cust, amount) => {
                    customer_name = cust;
                    println!(
                        "Customer name {} doing withdrawal of amount {}",
                        customer_name, amount
                    );
                    balance = balance - amount;
                }
                // Add for deposits
                Transaction::Deposit(cust, amount) => {
                    customer_name = cust;
                    println!(
                        "Customer name {} doing deposit of amount {}",
                        customer_name, amount
                    );
                    balance = balance + amount;
                }
            }
            println!("Customer is {}, Bank value: {}", customer_name, balance);
        });
    });
    // Let the bank finish
    bank.join().unwrap(); // THE THREAD DOES NOT END!!
}
The bank thread never joins, so main never ends.
If I remove the join and uncomment the timeout line above, the bank thread sometimes does not wait for the customer threads to send (which, I think, is OK).
//banker.recv_timeout(Duration::new(5, 0)); <-- TIMEOUT line...
What could be the reason for the bank thread not joining, and what would be a better way to make it understand that no more customer messages are coming? (I suspect the timeout is not a reliable mechanism here.)
You need to drop the original sender after all producers are done; the receiver's iterator ends only once every Sender (the original plus all clones) has been dropped, so the consumer then stops on its own:
// Wait for threads to finish
for handle in handles {
    handle.unwrap().join().unwrap()
}
drop(customers);
// Create a bank thread
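In the original code the spawned threads drop their clones when they finish, but the original customers sender lives in main until the very end, so the channel never closes and the bank thread blocks forever. A minimal sketch of the rule, independent of the banking example:
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let tx = tx.clone();
            thread::spawn(move || tx.send(i).unwrap())
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    // Drop the original sender; without this, the loop below never ends.
    drop(tx);
    // The iterator yields the buffered messages, then stops: all senders are gone.
    for msg in rx {
        println!("got {}", msg);
    }
}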
