Sending SOL from one wallet to another never arrives - rust

I'm trying to send SOL from one account to another on Devnet using system_instruction::transfer, but the funds never move from wallet_a to wallet_b. Am I missing something? Wallet A has 1.5 SOL.
I also tried using the sender's private key, but that didn't work either.
https://solanacookbook.com/references/basic-transactions.html#how-to-send-spl-tokens
use solana_sdk::signer::keypair::Keypair;
use solana_client::rpc_client::RpcClient;
use std::str::FromStr;
use solana_program::pubkey::Pubkey;
use solana_program::system_instruction;

fn main() {
    let wallet_a = "DVJM5LZEMWwypvgBRysQBUKXE6Jc2wqDcJouGerZnbZz";
    let wallet_b = "9c6YTLYHxnRzvQtUwd41qGUBhdSWkGJFtedbxWg8D6eZ";
    let from: Pubkey = Pubkey::from_str(wallet_a).unwrap();
    let dest: Pubkey = Pubkey::from_str(wallet_b).unwrap();
    let amount: u64 = 1000000000;
    send_sol(&from, &dest, amount);
}

fn send_sol(from_wallet: &Pubkey, to_wallet: &Pubkey, amount: u64) {
    println!("From: {} to: {}", from_wallet, to_wallet);
    let result = system_instruction::transfer(from_wallet, to_wallet, amount);
    println!("{:?}", result);
}
This is the output:
From: DVJM5LZEMWwypvgBRysQBUKXE6Jc2wqDcJouGerZnbZz to: 9c6YTLYHxnRzvQtUwd41qGUBhdSWkGJFtedbxWg8D6eZ
Instruction { program_id: 11111111111111111111111111111111, accounts: [AccountMeta { pubkey: DVJM5LZEMWwypvgBRysQBUKXE6Jc2wqDcJouGerZnbZz, is_signer: true, is_writable: true }, AccountMeta { pubkey: 9c6YTLYHxnRzvQtUwd41qGUBhdSWkGJFtedbxWg8D6eZ, is_signer: false, is_writable: true }], data: [2, 0, 0, 0, 0, 202, 154, 59, 0, 0, 0, 0] }

So far, all you've done inside your send_sol function is build the instruction to send SOL. You need to put that instruction into a Transaction and then sign and send it with the RpcClient:
let rpc_url = String::from("https://api.devnet.solana.com");
let connection = RpcClient::new_with_commitment(rpc_url, CommitmentConfig::confirmed());

// Airdrop some SOL to the 'from' account
match connection.request_airdrop(&frompubkey, LAMPORTS_PER_SOL) {
    Ok(sig) => loop {
        if let Ok(confirmed) = connection.confirm_transaction(&sig) {
            if confirmed {
                println!("Transaction: {} Status: {}", sig, confirmed);
                break;
            }
        }
    },
    Err(_) => println!("Error requesting airdrop"),
};

// Create the transfer SOL instruction
let ix = system_instruction::transfer(&frompubkey, &topubkey, lamports_to_send);

// Put the transfer SOL instruction into a transaction
let recent_blockhash = connection.get_latest_blockhash().expect("Failed to get latest blockhash.");
let txn = Transaction::new_signed_with_payer(&[ix], Some(&frompubkey), &[&from], recent_blockhash);

// Send the transfer SOL transaction
match connection.send_and_confirm_transaction(&txn) {
    Ok(sig) => loop {
        if let Ok(confirmed) = connection.confirm_transaction(&sig) {
            if confirmed {
                println!("Transaction: {} Status: {}", sig, confirmed);
                break;
            }
        }
    },
    Err(e) => println!("Error transferring SOL: {}", e),
}
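Putting it together, here's a minimal end-to-end sketch (assuming the solana-sdk and solana-client crates; Keypair::new() below is a stand-in for loading wallet A's actual keypair, e.g. via solana_sdk::signature::read_keypair_file). The crucial point is that from must be the sender's Keypair so the transaction can be signed; a bare Pubkey can't authorize the transfer:

use std::str::FromStr;

use solana_client::rpc_client::RpcClient;
use solana_sdk::{
    commitment_config::CommitmentConfig,
    native_token::LAMPORTS_PER_SOL,
    pubkey::Pubkey,
    signature::{Keypair, Signer},
    system_instruction,
    transaction::Transaction,
};

fn main() {
    let connection = RpcClient::new_with_commitment(
        String::from("https://api.devnet.solana.com"),
        CommitmentConfig::confirmed(),
    );

    // The sender must be a Keypair, not just a Pubkey: the transaction
    // has to be signed with wallet A's secret key. (Placeholder keypair
    // here; load wallet A's real one, e.g. with read_keypair_file.)
    let from: Keypair = Keypair::new();
    let frompubkey: Pubkey = from.pubkey();
    let topubkey: Pubkey =
        Pubkey::from_str("9c6YTLYHxnRzvQtUwd41qGUBhdSWkGJFtedbxWg8D6eZ").unwrap();
    let lamports_to_send: u64 = LAMPORTS_PER_SOL; // 1 SOL = 1_000_000_000 lamports

    let ix = system_instruction::transfer(&frompubkey, &topubkey, lamports_to_send);
    let recent_blockhash = connection
        .get_latest_blockhash()
        .expect("Failed to get latest blockhash.");
    let txn =
        Transaction::new_signed_with_payer(&[ix], Some(&frompubkey), &[&from], recent_blockhash);

    match connection.send_and_confirm_transaction(&txn) {
        Ok(sig) => println!("Sent: https://explorer.solana.com/tx/{}?cluster=devnet", sig),
        Err(e) => println!("Error transferring SOL: {}", e),
    }
}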


iter_mut() doesn't run whenever I try to add status in Bevy 0.8.1

I added collision between the player and the ground, and I want to add a jumping mechanic to my game using on_ground. However, whenever I add the status to the query, the loop stops iterating entirely.
fn collision_detection(
    ground: Query<&Transform, (With<Ground>, Without<Player>)>,
    mut player: Query<(&mut Transform, &mut PlayerStatus), With<Player>>,
) {
    let player_size = Vec2::new(PLAYER_SIZE_X, PLAYER_SIZE_Y);
    let ground_size = Vec2::new(GROUND_SIZE_X, GROUND_SIZE_Y);
    for ground in ground.iter() {
        for (mut player, mut status) in player.iter_mut() {
            if collide(
                player.translation,
                player_size,
                ground.translation,
                ground_size,
            )
            .is_some()
            {
                status.on_ground = true;
                println!("ON GROUND")
            } else {
                status.on_ground = false;
            }
            if status.on_ground {
                player.translation.y += GRAVITY;
            }
        }
    }
}
For some reason, this part wouldn't run
for (mut player, mut status) in player.iter_mut() {
    if collide(
        player.translation,
        player_size,
        ground.translation,
        ground_size,
    )
    .is_some()
    {
        status.on_ground = true;
        println!("ON GROUND")
    } else {
        status.on_ground = false;
    }
    if status.on_ground {
        player.translation.y += GRAVITY;
    }
}
It works if I only do this though:
for mut player in player.iter_mut() {
    if collide(
        player.translation,
        player_size,
        ground.translation,
        ground_size,
    )
    .is_some()
    {
        player.translation.y += GRAVITY;
    }
}
If you have only one player, you can use get_single_mut() instead of iter_mut() on the query. It returns a Result, so you can easily check in your function whether the player entity was found at all, and if not, send yourself some nice debugging message :) A loop body that silently never runs usually means the query matches no entity; adding &mut PlayerStatus to the query restricts it to entities that actually have that component, so the player was probably spawned without one.
if let Ok((mut player, mut status)) = player.get_single_mut() {
    // do your collision check
} else {
    // player not found in the query
}
https://docs.rs/bevy/latest/bevy/prelude/struct.Query.html#method.get_single_mut
Edit:
Looking at your comment above: if you have an already spawned entity, you can always add new components to it using .insert_bundle or .insert.
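A minimal sketch of both options (assuming a PlayerStatus component with an on_ground field, Bevy 0.8 API):

use bevy::prelude::*;

#[derive(Component)]
struct Player;

#[derive(Component)]
struct PlayerStatus {
    on_ground: bool,
}

// At spawn time, include the status component directly...
fn spawn_player(mut commands: Commands) {
    commands
        .spawn()
        .insert(Transform::default())
        .insert(Player)
        .insert(PlayerStatus { on_ground: false });
}

// ...or patch it onto an already-spawned player that is missing it.
fn add_status_later(
    mut commands: Commands,
    players: Query<Entity, (With<Player>, Without<PlayerStatus>)>,
) {
    for entity in players.iter() {
        commands.entity(entity).insert(PlayerStatus { on_ground: false });
    }
}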

Best nested parallelization approach with Rust Futures

Not sure if this is even a good question to ask, but I hope the answer can also cover whether nested parallelization is a bad idea, in addition to my main questions. Thank you so much!
Suppose I have tasks a, b, c and d. Each of these tasks has its own subtasks, a1, a2, and so on.
Sometimes a1 might have a bunch of subtasks of its own as well. The subtasks are nested inside a, b, c and d because they have to be processed together as a whole, so there is no way to decouple them into separate top-level tasks e, f, g, and so on.
I am currently using Futures and ThreadPool to process these tasks in a concurrent, parallel way (such that a, b, ... run in parallel, and a1, a2 run in parallel).
However, I was greeted with
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" }', src/my_folder/my_file.rs:123:123
The line that failed is
let thread_pool = ThreadPool::new().unwrap();
So the questions are:
1. Is nested parallelization the right approach in this case? (If not, what should I do instead? If yes, how should I do it?)
2. How do I ensure that thread pools are created safely, so that the unwrap doesn't fail when resources are inadequate?
3. If (2) is the wrong thing to do, how should I implement my code instead?
Here is the code:
let tasks: Vec<_> = block_transactions.into_iter()
    .map(|encoded_tx| {
        let pool = core_pool.clone();
        let rpc_url = core_rpc_url.clone();
        let block = core_block.clone();
        tokio::spawn(async move {
            let sol_client = RpcClient::new(rpc_url.clone());
            let mut tx = crate::models::transactions::Transaction {
                hash: "".to_string(),
                block: block.number.clone(),
                is_confirmed: true,
                status: TransactionStatus::Success,
                fee: 0,
                timestamp: block.timestamp.clone()
            };
            if let Some(decoded_tx) = (&encoded_tx.transaction).decode() {
                // Each item in the signatures array is a digital signature
                // of the given message.
                // Additional safety net
                if decoded_tx.signatures.len() >= 1 {
                    // Signatures should be Vec<Signature>.
                    // The first one, signatures[0], is the hash that is used to
                    // identify the transaction (eg. in the explorer), and you can
                    // get the base58-encoded string using the .to_string() method.
                    // Seed first signature
                    tx.hash = decoded_tx.signatures[0].to_string();
                    // Seed the transaction
                    let tx_res = crate::actions::transactions::create_or_ignore_transaction(
                        &*pool.get().unwrap(),
                        &tx,
                    );
                    // Seed everything that depends ONLY on the created transaction
                    if let Ok(created_tx) = tx_res {
                        // Spawn the futures threader first
                        let thread_pool = ThreadPool::new().unwrap();
                        // Seed subsequents
                        let tst_pool = pool.clone();
                        let tst_dtx = decoded_tx.clone();
                        let tst_tx = tx.clone();
                        let transaction_signatures_task = async move {
                            let transaction_signatures: Vec<_> = tst_dtx
                                .signatures
                                .as_slice()
                                .into_par_iter()
                                .filter(|signature| &signature.to_string() != &tst_tx.hash)
                                .map(|signature| {
                                    // Subsequent signatures lead to the same tx, just
                                    // signed from a different key pair.
                                    InsertableTransactionSignature {
                                        transaction_hash: tst_tx.hash.clone(),
                                        signature: signature.to_string().clone(),
                                        timestamp: tst_tx.timestamp.clone(),
                                    }
                                })
                                .collect();
                            let cts_result =
                                crate::actions::transaction_signatures::create_transaction_signatures(
                                    &*tst_pool.get().unwrap(),
                                    transaction_signatures.as_slice(),
                                );
                            if let Ok(created_ts) = cts_result {
                                if created_ts.len() != (tst_dtx.signatures.len() - 1) {
                                    eprintln!("[processors/transaction] WARN: Looks like there's a signature \
                                        creation count mismatch for tx {}", tst_tx.hash.clone());
                                }
                            } else {
                                let cts_result_err = cts_result.err().unwrap();
                                eprintln!(
                                    "[processors/transaction] FATAL: signature seeding error for \
                                    tx {} due to: {}",
                                    tst_tx.hash.clone(),
                                    cts_result_err.to_string()
                                );
                            }
                        };
                        let tst_handle = thread_pool
                            .spawn_with_handle(transaction_signatures_task)
                            .unwrap();
                        // A message contains a header.
                        // The message header contains three unsigned 8-bit values.
                        // The first value is the number of required signatures in the
                        // containing transaction. The second value is the number of those
                        // corresponding account addresses that are read-only.
                        // The third value in the message header is the number of read-only
                        // account addresses not requiring signatures.
                        // Identify which are required addresses, addresses requesting write
                        // access, then addresses with read-only access.
                        let req_sig_count =
                            (decoded_tx.message.header.num_required_signatures.clone()) as usize;
                        let rw_sig_count = (decoded_tx
                            .message
                            .header
                            .num_readonly_signed_accounts
                            .clone()) as usize;
                        let ro_sig_count = (decoded_tx
                            .message
                            .header
                            .num_readonly_unsigned_accounts
                            .clone()) as usize;
                        // tx.message.account_keys is a compact-array of account addresses.
                        // The addresses that require signatures appear at the beginning of the
                        // account address array, with addresses requesting write access first
                        // and read-only accounts following. The addresses that do not require
                        // signatures follow the addresses that do, again with read-write
                        // accounts first and read-only accounts following.
                        let at_pool = pool.clone();
                        let at_tx = tx.clone();
                        let at_dtx = decoded_tx.clone();
                        let at_block = block.clone();
                        let accounts_task = async move {
                            crate::processors::account::process_account_data(
                                &at_pool,
                                &sol_client,
                                &at_tx.hash,
                                &req_sig_count,
                                &rw_sig_count,
                                &ro_sig_count,
                                &at_dtx.message.account_keys,
                                &at_block.timestamp)
                                .await;
                        };
                        let at_handle = thread_pool
                            .spawn_with_handle(accounts_task)
                            .unwrap();
                        // Each instruction specifies a single program, a subset of
                        // the transaction's accounts that should be passed to the program,
                        // and a data byte array that is passed to the program. The program
                        // interprets the data array and operates on the accounts specified
                        // by the instructions. The program can return successfully, or
                        // with an error code. An error return causes the entire
                        // transaction to fail immediately.
                        let it_pool = pool.clone();
                        let it_ctx = created_tx.clone();
                        let it_dtx = decoded_tx.clone();
                        let instruction_task = async move {
                            crate::processors::instruction::process_instructions(
                                &it_pool,
                                &it_dtx,
                                &it_ctx,
                                it_dtx.message.instructions.as_slice(),
                                it_dtx.message.account_keys.as_slice())
                                .await;
                        };
                        let it_handle = thread_pool
                            .spawn_with_handle(instruction_task)
                            .unwrap();
                        // Ensure we have the tx meta as well
                        let tmt_pool = pool.clone();
                        let tmt_block = block.clone();
                        let tmt_meta = encoded_tx.meta.clone();
                        let tx_meta_task = async move {
                            if let Some(tx_meta) = tmt_meta {
                                let tm_thread_pool = ThreadPool::new().unwrap();
                                // Process the transaction's meta and validate the various
                                // data structures. Seed the accounts, alternate tx hashes,
                                // logs, balance inputs first
                                // pub log_messages: Option<Vec<String>>,
                                let tlt_pool = tmt_pool.clone();
                                let tlt_txm = tx_meta.clone();
                                let tlt_tx = tx.clone();
                                let transaction_log_task = async move {
                                    if let Some(log_messages) = &tlt_txm.log_messages {
                                        let transaction_logs =
                                            log_messages.into_par_iter().enumerate()
                                                .map(|(idx, log_msg)| {
                                                    InsertableTransactionLog {
                                                        transaction_hash: tlt_tx.hash.clone(),
                                                        data: log_msg.clone(),
                                                        line: idx as i32,
                                                        timestamp: tlt_tx.timestamp.clone()
                                                    }
                                                })
                                                .collect::<Vec<InsertableTransactionLog>>();
                                        let tl_result = crate::actions::transaction_logs::batch_create(
                                            &*tlt_pool.get().unwrap(),
                                            transaction_logs.as_slice()
                                        );
                                        if let Err(err) = tl_result {
                                            eprintln!("[processors/transaction] WARN: Problem pushing \
                                                transaction logs for tx {} due to {}", tlt_tx.hash.clone(),
                                                err.to_string());
                                        }
                                    }
                                };
                                let tlt_handle = tm_thread_pool
                                    .spawn_with_handle(transaction_log_task)
                                    .unwrap();
                                // Gather and seed all account inputs
                                let ait_pool = tmt_pool.clone();
                                let ait_txm = tx_meta.clone();
                                let ait_tx = tx.clone();
                                let ait_dtx = decoded_tx.clone();
                                let account_inputs_task = async move {
                                    let account_inputs: Vec<InsertableAccountInput> = (0..ait_dtx.message.account_keys.len())
                                        .into_par_iter()
                                        .map(|i| {
                                            let current_account_hash =
                                                ait_dtx.message.account_keys[i].clone().to_string();
                                            InsertableAccountInput {
                                                transaction_hash: ait_tx.hash.clone(),
                                                account: current_account_hash.to_string(),
                                                token_id: "".to_string(),
                                                pre_balance: (ait_txm.pre_balances[i] as i64),
                                                post_balance: Option::from(ait_txm.post_balances[i] as i64),
                                                timestamp: block.timestamp.clone()
                                            }
                                        }).collect();
                                    let result = crate::actions::account_inputs::batch_create(
                                        &*ait_pool.get().unwrap(),
                                        account_inputs);
                                    if let Err(error) = result {
                                        eprintln!("[processors/transaction] FATAL: Problem indexing \
                                            account inputs for tx {} due to {}", ait_tx.hash.clone(),
                                            error.to_string());
                                    }
                                };
                                let ait_handle = tm_thread_pool
                                    .spawn_with_handle(account_inputs_task)
                                    .unwrap();
                                // If there are token balances for this transaction
                                // pub pre_token_balances: Option<Vec<UiTransactionTokenBalance>>,
                                // pub post_token_balances: Option<Vec<UiTransactionTokenBalance>>,
                                let tai_pool = tmt_pool.clone();
                                let tai_txm = tx_meta.clone();
                                let tai_tx = tx.clone();
                                let tai_block = tmt_block.clone();
                                let tai_dtx = decoded_tx.clone();
                                let token_account_inputs_task = async move {
                                    if let (Some(pre_token_balances),
                                        Some(post_token_balances)) =
                                        (tai_txm.pre_token_balances, tai_txm.post_token_balances)
                                    {
                                        super::account_input::process_token_account_inputs(
                                            &tai_pool,
                                            &tai_tx.hash,
                                            &pre_token_balances,
                                            &post_token_balances,
                                            &tai_dtx.message.account_keys,
                                            &tai_block.timestamp,
                                        ).await;
                                    }
                                };
                                let tai_handle = tm_thread_pool
                                    .spawn_with_handle(token_account_inputs_task)
                                    .unwrap();
                                let iit_pool = tmt_pool.clone();
                                let iit_dtx = decoded_tx.clone();
                                let iit_txm = tx_meta.clone();
                                let iit_tx = tx.clone();
                                let inner_instructions_task = async move {
                                    if let Some(inner_instructions) = iit_txm.inner_instructions {
                                        crate::processors::inner_instruction::process(
                                            &iit_pool,
                                            inner_instructions.as_slice(),
                                            &iit_dtx,
                                            &iit_tx,
                                            iit_dtx.message.account_keys.as_slice())
                                            .await;
                                    }
                                };
                                let iit_handle = tm_thread_pool
                                    .spawn_with_handle(inner_instructions_task)
                                    .unwrap();
                                let tasks_future = future::join_all(vec![tlt_handle, ait_handle, tai_handle, iit_handle]);
                                tasks_future.await;
                                // Update the tx's metadata and proceed with instructions processing
                                let update_result =
                                    crate::actions::transactions::update(&*tmt_pool.get().unwrap(), &tx);
                                if let Err(update_err) = update_result {
                                    eprintln!(
                                        "[blockchain_syncer] FATAL: Problem updating tx: {}",
                                        update_err
                                    );
                                }
                            } else {
                                eprintln!(
                                    "[processors/transaction] WARN: tx {} has no metadata!",
                                    tx.hash
                                );
                            }
                        };
                        let tmt_handle = thread_pool
                            .spawn_with_handle(tx_meta_task)
                            .unwrap();
                        let future_batch =
                            future::join_all(vec![at_handle, it_handle, tst_handle, tmt_handle]);
                        future_batch.await;
                    } else {
                        let tx_err = tx_res.err();
                        if let Some(err) = tx_err {
                            eprintln!(
                                "[blockchain_syncer] WARN: Problem pushing tx {} to DB \
                                due to: {}",
                                tx.hash, err
                            )
                        } else {
                            eprintln!(
                                "[blockchain_syncer] FATAL: Problem pushing tx {} to DB \
                                due to an unknown error",
                                tx.hash
                            );
                        }
                    }
                } else {
                    eprintln!(
                        "[blockchain_syncer] FATAL: a transaction in block {} has no hashes!",
                        &block.number
                    );
                }
            } else {
                eprintln!(
                    "[blockchain_syncer] FATAL: Unable to obtain \
                    account information vec from chain for tx {}",
                    &tx.hash
                );
            }
        })
    })
    .collect();
future::join_all(tasks).await;
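For reference, os error 35 on macOS is EAGAIN: the OS refused to create another thread. In the code above, every spawned transaction task calls ThreadPool::new() (and tx_meta_task builds yet another pool per transaction), so the thread count grows with the number of transactions until thread creation fails. A minimal sketch of one way around that, assuming futures' executor::ThreadPool (cloning a ThreadPool is cheap because clones are handles to the same pool; num_cpus is an assumed extra crate):

use futures::executor::{block_on, ThreadPool};
use futures::future::join_all;
use futures::task::SpawnExt;

fn main() {
    // Build ONE pool up front, instead of one pool per task.
    let thread_pool = ThreadPool::builder()
        .pool_size(num_cpus::get()) // assumed crate: num_cpus
        .create()
        .expect("failed to build the shared thread pool");

    let handles: Vec<_> = (0..10_000u64)
        .map(|i| {
            // A clone is a handle to the SAME pool, so no new OS
            // threads are created per task.
            let pool = thread_pool.clone();
            pool.spawn_with_handle(async move { i * 2 })
                .expect("spawning on the shared pool failed")
        })
        .collect();

    // Each RemoteHandle resolves to its task's output.
    let results = block_on(join_all(handles));
    assert_eq!(results[3], 6);
}

The same idea applies inside a Tokio runtime: nested spawning itself is fine, and tokio::spawn can be used for the subtasks directly; it's the unbounded creation of fresh pools that exhausts OS resources.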

custom window manager: Some GTK+ 3 windows receive focus but will not accept mouse clicks

As the title says: I'm writing a custom X11 window manager in Rust, using the xcb library. A specific window -- the "configuration" window for cairo-dock -- will not take button 1 clicks when focused, despite my ungrabbing button 1 on that window.
Previously, I thought that said window was not holding focus, but that turns out not to be correct. Instead, the window in question is receiving focus but not letting any button 1 clicks through.
Relevant code for setting focus:
#[allow(clippy::single_match)]
fn set_focus(&mut self, window: xproto::Window) {
    if window != self.root && window != self.focus {
        let prev = self.focus;
        // ungrab focus from the previous window
        xproto::ungrab_button(
            self.conn,
            xproto::BUTTON_INDEX_1 as u8,
            self.focus,
            0
        );
        // make sure we don't accidentally have button 1 grabbed
        xproto::ungrab_button(
            self.conn,
            xproto::BUTTON_INDEX_1 as u8,
            window,
            0
        );
        // See https://github.com/i3/i3/blob/b61a28f156aad545d5c54b9a6f40ef7cae1a1c9b/src/x.c#L1286-L1337
        if self.needs_take_focus(window)
            && self.doesnt_take_focus(window)
            && window != base::NONE
            && window != self.root
        {
            let client_message = xproto::ClientMessageEvent::new(
                32,
                window,
                self.atom("WM_PROTOCOLS"),
                xproto::ClientMessageData::from_data32(
                    [
                        self.atom("WM_TAKE_FOCUS"),
                        self.last_timestamp,
                        0,
                        0,
                        0
                    ]
                )
            );
            xproto::send_event(self.conn, false, window, base::NONE as u32, &client_message);
        } else {
            debug!("{} can be focused normally", window);
            xproto::set_input_focus(
                self.conn,
                xproto::INPUT_FOCUS_PARENT as u8,
                window,
                self.last_timestamp,
            );
        }
        self.replace_prop(
            self.root,
            self.atom("_NET_ACTIVE_WINDOW"),
            32,
            &[window]
        );
        debug!("updating _NET_WM_STATE with _NET_WM_STATE_FOCUSED!");
        self.remove_prop(prev, self.atom("_NET_WM_STATE"), self.atom("_NET_WM_STATE_FOCUSED"));
        self.append_prop(window, self.atom("_NET_WM_STATE"), self.atom("_NET_WM_STATE_FOCUSED"));
        self.focus = window;
        debug!("focused window: {}", self.focus);
    } else if window == self.root {
        self.remove_prop(self.focus, self.atom("_NET_WM_STATE"), self.atom("_NET_WM_STATE_FOCUSED"));
        debug!("focusing root -> NONE");
        self.replace_prop(self.root, self.atom("_NET_ACTIVE_WINDOW"), 32, &[base::NONE]);
        xproto::set_input_focus(self.conn, 0, base::NONE, base::CURRENT_TIME);
        self.focus = xcb::NONE;
    }
}

fn append_prop(&self, window: xproto::Window, prop: u32, atom: u32) {
    // TODO: Check result
    xproto::change_property(
        self.conn,
        xproto::PROP_MODE_APPEND as u8,
        window,
        prop,
        xproto::ATOM_ATOM,
        32,
        &[atom]
    );
}

fn remove_prop(&self, window: xproto::Window, prop: u32, atom: u32) {
    let cookie = xproto::get_property(self.conn, false, window, prop, xproto::GET_PROPERTY_TYPE_ANY, 0, 4096);
    match cookie.get_reply() {
        Ok(res) => {
            match res.value::<u32>() {
                [] => {},
                values => {
                    let mut new_values: Vec<u32> = Vec::from(values);
                    new_values.retain(|value| value != &atom);
                    self.replace_prop(window, prop, 32, &new_values);
                },
            }
        },
        Err(err) => error!("couldn't get props to remove from: {:#?}", err),
    }
}

fn needs_take_focus(&self, window: xproto::Window) -> bool {
    let properties_cookie = xproto::get_property(
        self.conn,
        false,
        window,
        self.atom("WM_PROTOCOLS"),
        xproto::ATOM_ANY,
        0,
        2048
    );
    match properties_cookie.get_reply() {
        Ok(protocols) => {
            let mut needs_help = false;
            for proto in protocols.value::<u32>().iter() {
                match self.atom_by_id_checked(proto) {
                    Some("WM_TAKE_FOCUS") => {
                        needs_help = true
                    },
                    _ => (),
                }
            }
            needs_help
        },
        // FIXME
        Err(_) => false,
    }
}

fn doesnt_take_focus(&self, window: xproto::Window) -> bool {
    match xcb_util::icccm::get_wm_hints(self.conn, window).get_reply() {
        Ok(hints) => {
            if let Some(input) = hints.input() {
                input
            } else {
                false
            }
        },
        // FIXME
        Err(_) => false,
    }
}
It turns out my problem was that I wasn't ungrabbing button 1 correctly. Focus was being passed correctly all along (see the question's edit history); I had simply forgotten that the initial grab carried a modifier mask, and an ungrab only releases a grab whose modifiers match. Thank you so much to Uli Schlachter in the comments for helping me figure it out.
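For anyone hitting the same wall: an UngrabButton request only releases a grab whose (button, modifiers) pair matches the original GrabButton, so ungrabbing with a modifier mask of 0 does nothing if the grab was registered with a non-empty mask. A minimal sketch of the fix for the code above; MOD_MASK_ANY releases the button for every modifier combination:

// UngrabButton is matched against the original GrabButton by (button, modifiers).
// Passing 0 only releases a grab registered with an empty modifier set;
// MOD_MASK_ANY releases the grab for all modifier combinations at once.
xproto::ungrab_button(
    self.conn,
    xproto::BUTTON_INDEX_1 as u8,
    window,
    xproto::MOD_MASK_ANY as u16,
);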

Reducing match indentation for deeply nested properties

I need to refer to a value deep within a structure which includes an Option nested in a struct property, nested in a Result.
My current (working) solution is:
let raw = &packet[16..];
match PacketHeaders::from_ip_slice(raw) {
    Err(_value) => {
        /* ignore */
    },
    Ok(value) => {
        match value.ip {
            Some(Version4(header)) => {
                let key = format!("{}.{}.{}.{},{}.{}.{}.{}",
                    header.source[0], header.source[1], header.source[2], header.source[3],
                    header.destination[0], header.destination[1], header.destination[2], header.destination[3],
                );
                let Count { packets, bytes } = counts.entry(key).or_insert(Count { packets: 0, bytes: 0 });
                *packets += 1;
                *bytes += packet.len();
                if p > 1000 { /* exit after 1000 packets */
                    for (key, value) in counts {
                        println!("{},{},{}", key, value.packets, value.bytes);
                    }
                    return ();
                }
                p += 1;
            }
            _ => {
                /* ignore */
            }
        }
    }
}
(The problem with my current code is the excessive nesting and the two matches.)
All I want is PacketHeaders::from_ip_slice(ip) >> Ok >> ip >> Some >> Version4.
How can I get this, or ignore a failure nicely (NOT crash/exit) for each captured packet?
Patterns nest, and the pattern for a struct looks like a struct literal, with the addition that .. means "match all the rest, which I don't care about", similar to how the _ pattern means "match anything". That means you can do
match PacketHeaders::from_ip_slice(raw) {
    Ok(PacketHeaders { ip: Some(Version4(header)), .. }) => {
        /* handle */
    }
    _ => {
        /* ignore `Err(_)` and other `Ok(_)` */
    },
}
and if you're going to ignore all but one case, you can use if let:
if let Ok(PacketHeaders { ip: Some(Version4(header)), .. }) = PacketHeaders::from_ip_slice(raw) {
    /* handle */
}
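To see how the pattern nests using nothing but std types, here's a self-contained toy version (Headers, Ip, and parse are all hypothetical stand-ins for the etherparse types):

enum Ip {
    V4([u8; 4]),
    V6([u16; 8]),
}

struct Headers {
    ip: Option<Ip>,
    payload_len: usize,
}

fn parse(ok: bool) -> Result<Headers, ()> {
    if ok {
        Ok(Headers { ip: Some(Ip::V4([127, 0, 0, 1])), payload_len: 42 })
    } else {
        Err(())
    }
}

fn main() {
    // One pattern walks through Result -> struct -> Option -> enum variant.
    if let Ok(Headers { ip: Some(Ip::V4(octets)), .. }) = parse(true) {
        println!("v4 source: {:?}", octets);
    }
}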
Code ended up being:
let raw = &packet[16..];
if let Ok(PacketHeaders { ip: Some(Version4(header)), link: _, vlan: _, transport: _, payload: _ }) = PacketHeaders::from_ip_slice(raw) {
    let key = format!("{}.{}.{}.{},{}.{}.{}.{}",
        header.source[0], header.source[1], header.source[2], header.source[3],
        header.destination[0], header.destination[1], header.destination[2], header.destination[3],
    );
    let Count { packets, bytes } = counts.entry(key).or_insert(Count { packets: 0, bytes: 0 });
    *packets += 1;
    *bytes += packet.len();
}

Why do threads containing a MPSC channel never join?

I have this example, collected from the internet:
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Transaction enum
enum Transaction {
    Widthdrawl(String, f64),
    Deposit(String, f64),
}

fn main() {
    // A banking send/receive example...
    // set the number of customers
    let n_customers = 10;
    // Create a "customer" and a "banker"
    let (customers, banker) = mpsc::channel();
    let handles = (0..n_customers + 1)
        .into_iter()
        .map(|i| {
            // Create another "customer"
            let customer = customers.clone();
            // Create the customer thread
            let handle = thread::Builder::new()
                .name(format!("{}{}", "thread", i).into())
                .spawn(move || {
                    // Define Transaction
                    let trans_type = match i % 2 {
                        0 => Transaction::Deposit(
                            thread::current().name().unwrap().to_string(),
                            (i + 5) as f64 * 10.0,
                        ),
                        _ => Transaction::Widthdrawl(
                            thread::current().name().unwrap().to_string(),
                            (i + 10) as f64 * 5.0,
                        ),
                    };
                    // Send the Transaction
                    customer.send(trans_type).unwrap();
                });
            handle
        })
        .collect::<Vec<Result<thread::JoinHandle<_>, _>>>();
    // Wait for threads to finish
    for handle in handles {
        handle.unwrap().join().unwrap()
    }
    // Create a bank thread
    let bank = thread::spawn(move || {
        // Create a value
        let mut balance: f64 = 10000.0;
        println!("Initially, Bank value: {}", balance);
        // Perform the transactions in order
        //banker.recv_timeout(Duration::new(5, 0)); <-- TIMEOUT line...
        banker.into_iter().for_each(|i| {
            let mut customer_name: String = "None".to_string();
            match i {
                // Subtract for Widthdrawls
                Transaction::Widthdrawl(cust, amount) => {
                    customer_name = cust;
                    println!(
                        "Customer name {} doing withdrawal of amount {}",
                        customer_name, amount
                    );
                    balance = balance - amount;
                }
                // Add for deposits
                Transaction::Deposit(cust, amount) => {
                    customer_name = cust;
                    println!(
                        "Customer name {} doing deposit of amount {}",
                        customer_name, amount
                    );
                    balance = balance + amount;
                }
            }
            println!("Customer is {}, Bank value: {}", customer_name, balance);
        });
    });
    // Let the bank finish
    bank.join().unwrap(); // THE THREAD DOES NOT END!!
}
The bank thread never joins, so main never ends.
If I remove the join and uncomment the timeout line above, the bank thread sometimes does not wait for the customer threads' sends (which, I think, is OK).
//banker.recv_timeout(Duration::new(5, 0)); <-- TIMEOUT line...
What could be the reason for the bank thread not joining, and what would be a better way to make it understand that no more customer messages will be coming? (I think the timeout may not be a reliable approach here.)
The receiver's iterator only finishes once every Sender has been dropped, and main still holds the original customers sender, so the bank thread blocks forever. I need to drop the tx side of the channel after all producers are done, and then the consumer stops:
// Wait for threads to finish
for handle in handles {
    handle.unwrap().join().unwrap()
}
drop(customers);
// Create a bank thread
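Here's the same shutdown pattern in a minimal, self-contained form (plain std, nothing else): the receiver's iterator terminates exactly when the last Sender is dropped, which is why the explicit drop(customers) matters.

use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    let producers: Vec<_> = (0..4)
        .map(|i| {
            let tx = tx.clone(); // each producer owns a Sender clone
            thread::spawn(move || tx.send(i).unwrap())
        })
        .collect();

    for p in producers {
        p.join().unwrap();
    }

    // Without this, the loop below would block forever: `rx` only reports
    // "channel closed" after ALL Senders (including this original) are gone.
    drop(tx);

    for msg in rx {
        println!("got {}", msg);
    }
    println!("all senders dropped, receiver loop ended");
}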
