Trying to output sound with Rust - audio

I'm trying to output a simple sound (a square wave). Everything seems to work, except that no sound reaches my speakers. I found cpal as a library for handling audio, but maybe there is a better way to manage streams. Here is my code:
use cpal::Sample;
use cpal::traits::{DeviceTrait, HostTrait, StreamTrait};

fn main() {
    let err_fn = |err| eprintln!("an error occurred on the output audio stream: {}", err);
    let host = cpal::default_host();
    let device = host.default_output_device().expect("no output device available");
    let supported_config = device.default_output_config().unwrap();
    println!("Device: {}, Using config: {:?}", device.name().expect("flute"), supported_config);
    let config = supported_config.into();
    let stream = device.build_output_stream(&config, write_silence, err_fn).unwrap();
    stream.play().unwrap();
    std::thread::sleep(std::time::Duration::from_millis(3000));
}

fn write_silence(data: &mut [f32], _: &cpal::OutputCallbackInfo) {
    let mut counter = 0;
    for sample in data.iter_mut() {
        let s = if (counter / 20) % 2 == 0 { &1.0 } else { &0.0 };
        counter = counter + 1;
        *sample = Sample::from(s);
    }
    println!("{:?}", data);
}
The println!("{:?}", data) at the end of the callback does print the samples of a square waveform.
I'm running this in the Linux (Crostini) container on a Chromebook, which could be the problem.
Does anyone know why it's not working? And is cpal a good starting point for audio processing?
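Unrelated to the missing sound, note that the code assumes the device's default sample format is f32. cpal reports the format at runtime, and its documented pattern is to branch on it before building the stream. A minimal sketch of that (write_square_f32, write_square_i16, and write_square_u16 are hypothetical callbacks, each taking a buffer of the matching sample type):
// Sketch: build the stream against the sample format the device
// actually reports instead of assuming f32.
let sample_format = supported_config.sample_format();
let config: cpal::StreamConfig = supported_config.into();
let stream = match sample_format {
    cpal::SampleFormat::F32 => device.build_output_stream(&config, write_square_f32, err_fn),
    cpal::SampleFormat::I16 => device.build_output_stream(&config, write_square_i16, err_fn),
    cpal::SampleFormat::U16 => device.build_output_stream(&config, write_square_u16, err_fn),
}
.unwrap();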

Related

iter_mut() doesn't run whenever I try to add status in Bevy 0.8.1

I added collision between the player and the ground, and I want to add a jumping mechanic to my game using on_ground. However, whenever I add status to the query, it just stops iterating entirely.
fn collision_detection(
    ground: Query<&Transform, (With<Ground>, Without<Player>)>,
    mut player: Query<(&mut Transform, &mut PlayerStatus), With<Player>>,
) {
    let player_size = Vec2::new(PLAYER_SIZE_X, PLAYER_SIZE_Y);
    let ground_size = Vec2::new(GROUND_SIZE_X, GROUND_SIZE_Y);
    for ground in ground.iter() {
        for (mut player, mut status) in player.iter_mut() {
            if collide(
                player.translation,
                player_size,
                ground.translation,
                ground_size,
            )
            .is_some()
            {
                status.on_ground = true;
                println!("ON GROUND")
            } else {
                status.on_ground = false;
            }
            if status.on_ground {
                player.translation.y += GRAVITY;
            }
        }
    }
}
For some reason, this part wouldn't run:
for (mut player, mut status) in player.iter_mut() {
    if collide(
        player.translation,
        player_size,
        ground.translation,
        ground_size,
    )
    .is_some()
    {
        status.on_ground = true;
        println!("ON GROUND")
    } else {
        status.on_ground = false;
    }
    if status.on_ground {
        player.translation.y += GRAVITY;
    }
}
It works if I only do this though:
for mut player in player.iter_mut() {
    if collide(
        player.translation,
        player_size,
        ground.translation,
        ground_size,
    )
    .is_some()
    {
        player.translation.y += GRAVITY;
    }
}
If you have only one player, you can use get_single_mut() instead of iter_mut() on the query.
It returns a Result, so you can easily check in your function whether the player entity was found at all, and if not, send yourself a nice debugging message :)
if let Ok((mut player, mut status)) = player.get_single_mut() {
    // do your collision check
} else {
    // player not found in the query
}
https://docs.rs/bevy/latest/bevy/prelude/struct.Query.html#method.get_single_mut
Edit:
Looking at your comment above: if you have an already spawned entity, you can always add new components to it using .insert_bundle or .insert. A minimal sketch of that is below.
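Sketch, assuming the Player and PlayerStatus types from the question (the system name is hypothetical):
// Hypothetical system: attach PlayerStatus to the already-spawned
// player so the (&mut Transform, &mut PlayerStatus) query matches it.
// Without the component, iter_mut() yields nothing, which is why the
// loop body never runs.
fn add_status(mut commands: Commands, player: Query<Entity, (With<Player>, Without<PlayerStatus>)>) {
    if let Ok(entity) = player.get_single() {
        commands.entity(entity).insert(PlayerStatus { on_ground: false });
    }
}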

What is the equivalent of "BLOCKCHAIN_INTERFACE" from near-sdk 3.1.0 in near-sdk 4.0.0-pre.4?

In near-sdk 3.1.0 we use the BLOCKCHAIN_INTERFACE to perform a DAO remote upgrade with the following method:
#[cfg(target_arch = "wasm32")]
pub fn upgrade(self) {
    // assert!(env::predecessor_account_id() == self.minter_account_id);
    //input is code:<Vec<u8> on REGISTER 0
    //log!("bytes.length {}", code.unwrap().len());
    const GAS_FOR_UPGRADE: u64 = 20 * TGAS; //gas occupied by this fn
    const BLOCKCHAIN_INTERFACE_NOT_SET_ERR: &str = "Blockchain interface not set.";
    //after upgrade we call *pub fn migrate()* on the NEW CODE
    let current_id = env::current_account_id().into_bytes();
    let migrate_method_name = "migrate".as_bytes().to_vec();
    let attached_gas = env::prepaid_gas() - env::used_gas() - GAS_FOR_UPGRADE;
    unsafe {
        BLOCKCHAIN_INTERFACE.with(|b| {
            // Load input (new contract code) into register 0
            b.borrow()
                .as_ref()
                .expect(BLOCKCHAIN_INTERFACE_NOT_SET_ERR)
                .input(0);
            //prepare self-call promise
            let promise_id = b
                .borrow()
                .as_ref()
                .expect(BLOCKCHAIN_INTERFACE_NOT_SET_ERR)
                .promise_batch_create(current_id.len() as _, current_id.as_ptr() as _);
            //1st action, deploy/upgrade code (takes code from register 0)
            b.borrow()
                .as_ref()
                .expect(BLOCKCHAIN_INTERFACE_NOT_SET_ERR)
                .promise_batch_action_deploy_contract(promise_id, u64::MAX as _, 0);
            // 2nd action, schedule a call to "migrate()".
            // Will execute on the **new code**
            b.borrow()
                .as_ref()
                .expect(BLOCKCHAIN_INTERFACE_NOT_SET_ERR)
                .promise_batch_action_function_call(
                    promise_id,
                    migrate_method_name.len() as _,
                    migrate_method_name.as_ptr() as _,
                    0 as _,
                    0 as _,
                    0 as _,
                    attached_gas,
                );
        });
    }
}
To use the BLOCKCHAIN_INTERFACE I use this import:
use near_sdk::env::BLOCKCHAIN_INTERFACE;
In near-sdk 4.0.0-pre.4 I can't use this interface to perform the remote upgrade. How can I solve this?
I read something about MockedBlockchain, but I can't use it: the import doesn't exist or the methods are private, and it also says it's only for #[cfg(test)].
Yes, that blockchain interface was removed completely, so there is no need to go through it at all anymore. You can just use near_sdk::sys to call each low-level method directly. Here is the contract code migrated:
#[cfg(target_arch = "wasm32")]
pub fn upgrade(self) {
    use near_sdk::sys;
    // assert!(env::predecessor_account_id() == self.minter_account_id);
    //input is code:<Vec<u8> on REGISTER 0
    //log!("bytes.length {}", code.unwrap().len());
    const GAS_FOR_UPGRADE: u64 = 20 * TGAS; //gas occupied by this fn
    const BLOCKCHAIN_INTERFACE_NOT_SET_ERR: &str = "Blockchain interface not set.";
    //after upgrade we call *pub fn migrate()* on the NEW CODE
    let current_id = env::current_account_id().into_bytes();
    let migrate_method_name = "migrate".as_bytes().to_vec();
    let attached_gas = env::prepaid_gas() - env::used_gas() - GAS_FOR_UPGRADE;
    unsafe {
        // Load input (new contract code) into register 0
        sys::input(0);
        //prepare self-call promise
        let promise_id =
            sys::promise_batch_create(current_id.len() as _, current_id.as_ptr() as _);
        //1st action, deploy/upgrade code (takes code from register 0)
        sys::promise_batch_action_deploy_contract(promise_id, u64::MAX as _, 0);
        // 2nd action, schedule a call to "migrate()".
        // Will execute on the **new code**
        sys::promise_batch_action_function_call(
            promise_id,
            migrate_method_name.len() as _,
            migrate_method_name.as_ptr() as _,
            0 as _,
            0 as _,
            0 as _,
            attached_gas,
        );
    }
}
Let me know if this solves your problem completely or if there is anything else I can help with :)
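One small note: both snippets assume a TGAS constant defined elsewhere in the contract. If you need it, the conventional definition is:
// Assumed definition (not shown in the snippets above):
// 1 TGas = 10^12 gas units on NEAR.
const TGAS: u64 = 1_000_000_000_000;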

Gtk-rs: Set label within glib::timeout_add

I'm new to Rust and having trouble with the scope of objects/variables in GTK. I have the following code, which works, but I need to set a label in the GTK window to the text of the variable watch_text:
use adw::subclass::prelude::AdwApplicationWindowImpl;
use gtk::prelude::*;
use gtk::subclass::prelude::*;
use gtk::{gio, glib, CompositeTemplate};
use glib::{clone, DateTime, timeout_add};
use std::time::Duration;
use std::sync::{Arc, Mutex};

fn setup_signals(&self) {
    let imp = imp::FurWindow::from_instance(self);
    let running = Arc::new(Mutex::new(false));
    imp.start_button.connect_clicked(clone!(@weak self as this, @strong running => move |_| {
        if !*running.lock().unwrap() {
            let mut secs: u32 = 0;
            let mut mins: u32 = 0;
            let mut hrs: u32 = 0;
            this.inapp_notification("Starting Timer!");
            *running.lock().unwrap() = true;
            let stopwatch = DateTime::now_local();
            let duration = Duration::new(1, 0);
            let timer_repeat = timeout_add(duration, clone!(@strong running as running_clone => move || {
                if *running_clone.lock().unwrap() {
                    secs += 1;
                    if secs > 59 {
                        secs = 0;
                        mins += 1;
                        if mins > 59 {
                            mins = 0;
                            hrs += 1;
                        }
                    }
                    let watch_text: &str = &format!("{:02}:{:02}:{:02}", hrs, mins, secs).to_string();
                    println!("{}", watch_text);
                    // **Here the println works, everything prints correctly,
                    // but I need to add watch_text to the label "watch"
                    // this.set_watch_time(watch_text);
                }
                Continue(*running_clone.lock().unwrap())
            }));
        } else {
            this.inapp_notification("Stopping Timer!");
            *running.lock().unwrap() = false;
        }
    }));
}
The issue is that in the commented section, no matter how I try to access or clone imp.watch, I get an error NonNull<GObject> cannot be sent between threads safely. How can I set the label text to watch_text?
The issue is that timeout_add() requires the passed callback to be Send. That is usually nice, because with this function you can pass values from a worker thread to the GUI thread, to be processed there so the interface is updated accordingly.
But GUI objects are not Send: they live in the GUI thread and must be used only from the GUI thread, so they cannot be used inside a timeout_add() callback.
That is precisely why there is timeout_add_local(). It works just like the other one, except that it does not require Send, and it must be called from the GUI thread, or else it will panic.
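As a minimal sketch, the inner timer from the question could then update the label directly (assuming the set_watch_time method you commented out exists on the window):
// Sketch: timeout_add_local() runs the closure on the GUI thread,
// so it may touch GTK objects and needs no Send bound.
let timer_repeat = glib::timeout_add_local(duration, clone!(@strong running as running_clone, @weak this => @default-return Continue(false), move || {
    if *running_clone.lock().unwrap() {
        // ... update secs/mins/hrs exactly as before ...
        let watch_text = format!("{:02}:{:02}:{:02}", hrs, mins, secs);
        this.set_watch_time(&watch_text);
    }
    Continue(*running_clone.lock().unwrap())
}));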

Best nested parallelization approach with Rust Futures

I'm not sure if this is even a good question to ask, but I hope the answer can also cover whether nested parallelization is bad or not, in addition to my main questions. Thank you so much!
Suppose I have tasks a, b, c, and d. Each of these tasks has its own subtasks a1, a2, and so on, and sometimes a1 has a bunch of subtasks of its own. The subtasks are nested inside a, b, c, and d because they must be processed together as a whole, so there is no way to decouple them into independent tasks e, f, g, and so on.
I am currently using futures and ThreadPool to process these tasks in a concurrent, parallel way (such that a, b, ... run in parallel, and a1, a2, ... run in parallel).
However, I was greeted with
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" }', src/my_folder/my_file.rs:123:123
The line that failed is:
let thread_pool = ThreadPool::new().unwrap();
So the questions are:
Is nested parallelization correct in this case? (If not, what should I do? If yes, how should I do it?)
How do I ensure that thread pools are created safely, so that the unwrap doesn't fail when resources are inadequate?
If (2) is the wrong thing to do, how should I implement my code?
let tasks: Vec<_> = block_transactions.into_iter()
    .map(|encoded_tx| {
        let pool = core_pool.clone();
        let rpc_url = core_rpc_url.clone();
        let block = core_block.clone();
        tokio::spawn(async move {
            let sol_client = RpcClient::new(rpc_url.clone());
            let mut tx = crate::models::transactions::Transaction {
                hash: "".to_string(),
                block: block.number.clone(),
                is_confirmed: true,
                status: TransactionStatus::Success,
                fee: 0,
                timestamp: block.timestamp.clone()
            };
            if let Some(decoded_tx) = (&encoded_tx.transaction).decode() {
                // Each item in the signatures array is a digital signature
                // of the given message
                // Additional safety net
                if decoded_tx.signatures.len() >= 1 {
                    // Signatures should be Vec<Signature>.
                    // The first one, signatures[0], is the hash that is used to
                    // identify the transaction (eg. in the explorer), and you can
                    // get the base58-encoded string using the .to_string() method.
                    // Seed first signature
                    tx.hash = decoded_tx.signatures[0].to_string();
                    // Seed the transaction
                    let tx_res = crate::actions::transactions::create_or_ignore_transaction(
                        &*pool.get().unwrap(),
                        &tx,
                    );
                    // Seed everything that depends ONLY on the created transaction
                    if let Ok(created_tx) = tx_res {
                        // Spawn the futures threader first
                        let thread_pool = ThreadPool::new().unwrap();
                        // Seed subsequents
                        let tst_pool = pool.clone();
                        let tst_dtx = decoded_tx.clone();
                        let tst_tx = tx.clone();
                        let transaction_signatures_task = async move {
                            let transaction_signatures: Vec<_> = tst_dtx
                                .signatures
                                .as_slice()
                                .into_par_iter()
                                .filter(|signature| &signature.to_string() != &tst_tx.hash)
                                .map(|signature| {
                                    // Subsequent signatures lead to the same tx, just
                                    // signed from a different key pair.
                                    InsertableTransactionSignature {
                                        transaction_hash: tst_tx.hash.clone(),
                                        signature: signature.to_string().clone(),
                                        timestamp: tst_tx.timestamp.clone(),
                                    }
                                })
                                .collect();
                            let cts_result =
                                crate::actions::transaction_signatures::create_transaction_signatures(
                                    &*tst_pool.get().unwrap(),
                                    transaction_signatures.as_slice(),
                                );
                            if let Ok(created_ts) = cts_result {
                                if created_ts.len() != (tst_dtx.signatures.len() - 1) {
                                    eprintln!("[processors/transaction] WARN: Looks like there's a signature \
                                        creation count mismatch for tx {}", tst_tx.hash.clone());
                                }
                            } else {
                                let cts_result_err = cts_result.err().unwrap();
                                eprintln!(
                                    "[processors/transaction] FATAL: signature seeding error for \
                                    tx {} due to: {}",
                                    tst_tx.hash.clone(),
                                    cts_result_err.to_string()
                                );
                            }
                        };
                        let tst_handle = thread_pool
                            .spawn_with_handle(transaction_signatures_task)
                            .unwrap();
                        // A message contains a header,
                        // The message header contains three unsigned 8-bit values.
                        // The first value is the number of required signatures in the
                        // containing transaction. The second value is the number of those
                        // corresponding account addresses that are read-only.
                        // The third value in the message header is the number of read-only
                        // account addresses not requiring signatures.
                        // identify which are required addresses, addresses requesting write
                        // access, then address with readonly access
                        let req_sig_count =
                            (decoded_tx.message.header.num_required_signatures.clone()) as usize;
                        let rw_sig_count = (decoded_tx
                            .message
                            .header
                            .num_readonly_signed_accounts
                            .clone()) as usize;
                        let ro_sig_count = (decoded_tx
                            .message
                            .header
                            .num_readonly_unsigned_accounts
                            .clone()) as usize;
                        // tx.message.account_keys is a compact-array of account addresses,
                        // The addresses that require signatures appear at the beginning of the
                        // account address array, with addresses requesting write access first
                        // and read-only accounts following. The addresses that do not require
                        // signatures follow the addresses that do, again with read-write
                        // accounts first and read-only accounts following.
                        let at_pool = pool.clone();
                        let at_tx = tx.clone();
                        let at_dtx = decoded_tx.clone();
                        let at_block = block.clone();
                        let accounts_task = async move {
                            crate::processors::account::process_account_data(
                                &at_pool,
                                &sol_client,
                                &at_tx.hash,
                                &req_sig_count,
                                &rw_sig_count,
                                &ro_sig_count,
                                &at_dtx.message.account_keys,
                                &at_block.timestamp)
                                .await;
                        };
                        let at_handle = thread_pool
                            .spawn_with_handle(accounts_task)
                            .unwrap();
                        // Each instruction specifies a single program, a subset of
                        // the transaction's accounts that should be passed to the program,
                        // and a data byte array that is passed to the program. The program
                        // interprets the data array and operates on the accounts specified
                        // by the instructions. The program can return successfully, or
                        // with an error code. An error return causes the entire
                        // transaction to fail immediately.
                        let it_pool = pool.clone();
                        let it_ctx = created_tx.clone();
                        let it_dtx = decoded_tx.clone();
                        let instruction_task = async move {
                            crate::processors::instruction::process_instructions(
                                &it_pool,
                                &it_dtx,
                                &it_ctx,
                                it_dtx.message.instructions.as_slice(),
                                it_dtx.message.account_keys.as_slice())
                                .await;
                        };
                        let it_handle = thread_pool
                            .spawn_with_handle(instruction_task)
                            .unwrap();
                        // Ensure we have the tx meta as well
                        let tmt_pool = pool.clone();
                        let tmt_block = block.clone();
                        let tmt_meta = encoded_tx.meta.clone();
                        let tx_meta_task = async move {
                            if let Some(tx_meta) = tmt_meta {
                                let tm_thread_pool = ThreadPool::new().unwrap();
                                // Process the transaction's meta and validate the various
                                // data structures. Seed the accounts, alternate tx hashes,
                                // logs, balance inputs first
                                // pub log_messages: Option<Vec<String>>,
                                let tlt_pool = tmt_pool.clone();
                                let tlt_txm = tx_meta.clone();
                                let tlt_tx = tx.clone();
                                let transaction_log_task = async move {
                                    if let Some(log_messages) = &tlt_txm.log_messages {
                                        let transaction_logs = log_messages
                                            .into_par_iter()
                                            .enumerate()
                                            .map(|(idx, log_msg)| {
                                                InsertableTransactionLog {
                                                    transaction_hash: tlt_tx.hash.clone(),
                                                    data: log_msg.clone(),
                                                    line: idx as i32,
                                                    timestamp: tlt_tx.timestamp.clone()
                                                }
                                            })
                                            .collect::<Vec<InsertableTransactionLog>>();
                                        let tl_result = crate::actions::transaction_logs::batch_create(
                                            &*tlt_pool.get().unwrap(),
                                            transaction_logs.as_slice()
                                        );
                                        if let Err(err) = tl_result {
                                            eprintln!("[processors/transaction] WARN: Problem pushing \
                                                transaction logs for tx {} due to {}", tlt_tx.hash.clone(),
                                                err.to_string());
                                        }
                                    }
                                };
                                let tlt_handle = tm_thread_pool
                                    .spawn_with_handle(transaction_log_task)
                                    .unwrap();
                                // Gather and seed all account inputs
                                let ait_pool = tmt_pool.clone();
                                let ait_txm = tx_meta.clone();
                                let ait_tx = tx.clone();
                                let ait_dtx = decoded_tx.clone();
                                let account_inputs_task = async move {
                                    let account_inputs: Vec<InsertableAccountInput> =
                                        (0..ait_dtx.message.account_keys.len())
                                            .into_par_iter()
                                            .map(|i| {
                                                let current_account_hash =
                                                    ait_dtx.message.account_keys[i].clone().to_string();
                                                InsertableAccountInput {
                                                    transaction_hash: ait_tx.hash.clone(),
                                                    account: current_account_hash.to_string(),
                                                    token_id: "".to_string(),
                                                    pre_balance: (ait_txm.pre_balances[i] as i64),
                                                    post_balance: Option::from(ait_txm.post_balances[i] as i64),
                                                    timestamp: block.timestamp.clone()
                                                }
                                            })
                                            .collect();
                                    let result = crate::actions::account_inputs::batch_create(
                                        &*ait_pool.get().unwrap(),
                                        account_inputs);
                                    if let Err(error) = result {
                                        eprintln!("[processors/transaction] FATAL: Problem indexing \
                                            account inputs for tx {} due to {}", ait_tx.hash.clone(),
                                            error.to_string());
                                    }
                                };
                                let ait_handle = tm_thread_pool
                                    .spawn_with_handle(account_inputs_task)
                                    .unwrap();
                                // If there are token balances for this transaction
                                // pub pre_token_balances: Option<Vec<UiTransactionTokenBalance>>,
                                // pub post_token_balances: Option<Vec<UiTransactionTokenBalance>>,
                                let tai_pool = tmt_pool.clone();
                                let tai_txm = tx_meta.clone();
                                let tai_tx = tx.clone();
                                let tai_block = tmt_block.clone();
                                let tai_dtx = decoded_tx.clone();
                                let token_account_inputs_task = async move {
                                    if let (Some(pre_token_balances), Some(post_token_balances)) =
                                        (tai_txm.pre_token_balances, tai_txm.post_token_balances)
                                    {
                                        super::account_input::process_token_account_inputs(
                                            &tai_pool,
                                            &tai_tx.hash,
                                            &pre_token_balances,
                                            &post_token_balances,
                                            &tai_dtx.message.account_keys,
                                            &tai_block.timestamp,
                                        ).await;
                                    }
                                };
                                let tai_handle = tm_thread_pool
                                    .spawn_with_handle(token_account_inputs_task)
                                    .unwrap();
                                let iit_pool = tmt_pool.clone();
                                let iit_dtx = decoded_tx.clone();
                                let iit_txm = tx_meta.clone();
                                let iit_tx = tx.clone();
                                let inner_instructions_task = async move {
                                    if let Some(inner_instructions) = iit_txm.inner_instructions {
                                        crate::processors::inner_instruction::process(
                                            &iit_pool,
                                            inner_instructions.as_slice(),
                                            &iit_dtx,
                                            &iit_tx,
                                            iit_dtx.message.account_keys.as_slice())
                                            .await;
                                    }
                                };
                                let iit_handle = tm_thread_pool
                                    .spawn_with_handle(inner_instructions_task)
                                    .unwrap();
                                let tasks_future =
                                    future::join_all(vec![tlt_handle, ait_handle, tai_handle, iit_handle]);
                                tasks_future.await;
                                // Update the tx's metadata and proceed with instructions processing
                                let update_result =
                                    crate::actions::transactions::update(&*tmt_pool.get().unwrap(), &tx);
                                if let Err(update_err) = update_result {
                                    eprintln!(
                                        "[blockchain_syncer] FATAL: Problem updating tx: {}",
                                        update_err
                                    );
                                }
                            } else {
                                eprintln!(
                                    "[processors/transaction] WARN: tx {} has no metadata!",
                                    tx.hash
                                );
                            }
                        };
                        let tmt_handle = thread_pool
                            .spawn_with_handle(tx_meta_task)
                            .unwrap();
                        let future_batch =
                            future::join_all(vec![at_handle, it_handle, tst_handle, tmt_handle]);
                        future_batch.await;
                    } else {
                        let tx_err = tx_res.err();
                        if let Some(err) = tx_err {
                            eprintln!(
                                "[blockchain_syncer] WARN: Problem pushing tx {} to DB \
                                due to: {}",
                                tx.hash, err
                            )
                        } else {
                            eprintln!(
                                "[blockchain_syncer] FATAL: Problem pushing tx {} to DB \
                                due to an unknown error",
                                tx.hash
                            );
                        }
                    }
                } else {
                    eprintln!(
                        "[blockchain_syncer] FATAL: a transaction in block {} has no hashes!",
                        &block.number
                    );
                }
            } else {
                eprintln!(
                    "[blockchain_syncer] FATAL: Unable to obtain \
                    account information vec from chain for tx {}",
                    &tx.hash
                );
            }
        })
    })
    .collect();
future::join_all(tasks).await;
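One observation on question (2): ThreadPool::new() spawns a full set of OS threads, and the code above creates a fresh pool per transaction (plus another one inside tx_meta_task), so under load the process can hit its OS thread limit, which is what that WouldBlock error reports. A hedged sketch of one alternative: build a single pool up front, handle the Result instead of unwrapping, and clone the handle into each task (futures' ThreadPool is cheaply cloneable).
use futures::executor::ThreadPool;

// Build one shared pool before the loop; create() returns io::Result,
// so resource exhaustion can be reported instead of panicking.
let thread_pool = match ThreadPool::builder().pool_size(8).create() {
    Ok(pool) => pool,
    Err(e) => {
        eprintln!("[blockchain_syncer] FATAL: could not create thread pool: {}", e);
        return;
    }
};
// Then, inside each tokio::spawn, move a clone of this pool instead of
// calling ThreadPool::new() per transaction:
let task_pool = thread_pool.clone();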

read in chunks with async-std

I'm trying to implement something similar to reading a file in Java with an AsynchronousByteChannel, like:
AsynchronousFileChannel channel = AsynchronousFileChannel.open(path...
channel.read(buffer,... new CompletionHandler<Integer, ByteBuffer>() {
    @Override
    public void completed(Integer result) {
        ...use buffer
    }
i.e. read as much as the OS gives, process it, ask for more, and so on.
What would be the most straightforward way to achieve this with async_std?
You can use the read method that async_std's Read trait provides (brought into scope via the prelude):
use async_std::prelude::*;

let mut reader = obtain_read_somehow();
let mut buf = [0; 4096]; // or however large you want it

loop {
    // read() returns an io::Result<usize>; `?` propagates any error
    let byte_count = reader.read(&mut buf).await?;
    if byte_count == 0 {
        // 0 bytes read means we're done (EOF)
        break;
    }
    // call whatever handler function on the bytes read
    handle(&buf[..byte_count]);
}
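For a concrete, runnable version against a file, here is a minimal sketch (the data.bin path and the handle function are just placeholders):
use async_std::fs::File;
use async_std::prelude::*; // brings the async read() method into scope

// Process one chunk; here we just report its size.
fn handle(chunk: &[u8]) {
    println!("got {} bytes", chunk.len());
}

fn main() -> std::io::Result<()> {
    async_std::task::block_on(async {
        let mut file = File::open("data.bin").await?;
        let mut buf = [0u8; 4096];
        loop {
            let n = file.read(&mut buf).await?;
            if n == 0 {
                break; // EOF
            }
            handle(&buf[..n]);
        }
        Ok(())
    })
}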
