I am trying to process live system audio using cpal in Rust. This is a simplified version for now, part of a bigger project. In theory, the build_input_stream_raw() data_callback should deliver f32 amplitudes from e.g. a song/video/stream. However, the data (&cpal::Data) it receives is always 0.0.
use cpal::traits::{DeviceTrait, HostTrait, StreamTrait};

pub fn main() {
    let host = cpal::default_host();
    let device = host.default_input_device().unwrap();
    let config = device
        .default_input_config()
        .expect("Failed to get default input config");
    let sample_format = config.sample_format();

    let err_fn = move |err| {
        eprintln!("an error occurred on stream: {}", err);
    };

    let stream = device
        .build_input_stream_raw(
            &config.into(),
            sample_format,
            move |data: &cpal::Data, cb: &cpal::InputCallbackInfo| {
                // amplitudes as f32 vec
            },
            err_fn,
            None,
        )
        .unwrap();

    stream.play().unwrap();
    std::thread::sleep(std::time::Duration::from_secs(10));
    drop(stream);
}
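For completeness, the callback body I expected to work looks like this (a minimal sketch replacing the empty closure above, assuming the stream's sample format is F32):

move |data: &cpal::Data, _: &cpal::InputCallbackInfo| {
    // Reinterpret the raw buffer as f32 amplitudes; as_slice returns
    // None if the stream's sample format is not f32.
    if let Some(samples) = data.as_slice::<f32>() {
        let peak = samples.iter().fold(0.0f32, |acc, &s| acc.max(s.abs()));
        println!("peak amplitude: {}", peak); // always prints 0.0 for me
    }
}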
I already read the documentation and several general cpal examples online, without any significant progress. My current solution does not rely on null sinks, as I want it to be as universal as possible.
If there is a better tool for this purpose, I am open to suggestions.
cpal version: 0.15.0
audio system: ALSA
I'm using the btleplug crate on Windows 11 to scan for peripherals and print out a list of devices' MAC addresses and names. For some reason, all names are missing. The code I'm using is:
use btleplug::api::{Central, Manager as _, Peripheral as _, ScanFilter};
use btleplug::platform::Manager;
use std::error::Error;
use std::time::Duration;
use tokio::time;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let manager = Manager::new().await?;

    // get adapter
    let adapters = manager.adapters().await?;
    let central = adapters.into_iter().nth(0).unwrap();

    // start scanning for devices
    central.start_scan(ScanFilter::default()).await?;
    time::sleep(Duration::from_secs(2)).await;

    // get a list of devices that have been discovered
    let devices = central.peripherals().await?;
    for device in devices {
        // print the address and name of each device
        let properties = device.properties().await.unwrap().unwrap();
        let names: Vec<String> = properties
            .local_name
            .iter()
            .map(|name| name.to_string())
            .collect();
        println!("{}: {:?}", device.address(), names);
    }
    Ok(())
}
And the output I get is
6E:A4:49:71:1C:01: []
D3:81:93:EA:E4:B9: []
2C:76:79:7E:4B:72: []
E9:84:B3:82:CF:5A: []
25:DB:2D:F0:7B:5F: []
17:EB:FE:FD:1C:1B: []
...
The example usage in the docs uses the local_name attribute assuming it exists, and I've followed that. How do I get the names to appear?
Most Bluetooth LE devices that "advertise" their presence do not, however, transmit their "full local name" or "shortened local name". You can check this with various apps on your mobile phone, for example nRF Connect.
There are several reasons for this. On the one hand, the amount of information that can be transmitted in the advertisements is severely limited, and other data may be more important. On the other hand, such a unique name can pose a major privacy problem.
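If you just want readable output when no name was advertised, you can fall back to a placeholder instead of collecting into a Vec. A minimal sketch of the loop body from your example (local_name is an Option<String>):

let properties = device.properties().await.unwrap().unwrap();
// substitute a placeholder when no local name was advertised
let name = properties
    .local_name
    .unwrap_or_else(|| "(unknown)".to_string());
println!("{}: {}", device.address(), name);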
I have a tokio TcpStream. I want to pass some type T over this stream. This type T implements Serialize and Deserialize. How can I obtain a Sink<T> and a Stream<T>?
I found the crates tokio_util and tokio_serde, but I can't figure out how to use them to do what I want.
I don't know your code structure or the codec you're planning on using, but I've figured out how to glue everything together into a workable example.
Your Sink<T> and Stream<Item=T> are going to be provided by the Framed type in tokio-serde. This layer deals with passing your messages through serde. The type takes four generic parameters: Transport, Item (the stream item), SinkItem, and Codec. Codec is a wrapper for the specific serializer and deserializer you want to use; you can view the provided options here. Item and SinkItem are just going to be your message type, which must implement Serialize and Deserialize.
Transport needs to be a Sink<SinkItem> and/or a Stream<Item=Item> itself in order for the frame to implement any useful traits. This is where tokio-util comes in. It provides various Framed* types which let you convert things implementing AsyncRead/AsyncWrite into streams and sinks respectively. To construct these frames, you need to specify a codec that delimits frames on the wire. For simplicity my example just uses LengthDelimitedCodec, but there are other options provided as well.
Without further ado, here's an example of how you can take a tokio::net::TcpStream and split it into a Sink<T> and a Stream<Item=T>. Note that T is wrapped in a Result on the stream side, because the serde layer can fail if the message is malformed.
use futures::{SinkExt, StreamExt};
use serde::{Deserialize, Serialize};
use tokio::net::{
    tcp::{OwnedReadHalf, OwnedWriteHalf},
    TcpListener, TcpStream,
};
use tokio_serde::{formats::Json, Framed};
use tokio_util::codec::{FramedRead, FramedWrite, LengthDelimitedCodec};

#[derive(Serialize, Deserialize, Debug)]
struct MyMessage {
    field: String,
}

type WrappedStream = FramedRead<OwnedReadHalf, LengthDelimitedCodec>;
type WrappedSink = FramedWrite<OwnedWriteHalf, LengthDelimitedCodec>;

// We use the unit type in place of the message types since we're
// only dealing with one half of the IO
type SerStream = Framed<WrappedStream, MyMessage, (), Json<MyMessage, ()>>;
type DeSink = Framed<WrappedSink, (), MyMessage, Json<(), MyMessage>>;

fn wrap_stream(stream: TcpStream) -> (SerStream, DeSink) {
    let (read, write) = stream.into_split();
    let stream = WrappedStream::new(read, LengthDelimitedCodec::new());
    let sink = WrappedSink::new(write, LengthDelimitedCodec::new());
    (
        SerStream::new(stream, Json::default()),
        DeSink::new(sink, Json::default()),
    )
}

#[tokio::main]
async fn main() {
    let listener = TcpListener::bind("0.0.0.0:8080")
        .await
        .expect("Failed to bind server to addr");

    tokio::task::spawn(async move {
        let (stream, _) = listener
            .accept()
            .await
            .expect("Failed to accept incoming connection");
        let (mut stream, mut sink) = wrap_stream(stream);

        println!(
            "Server received: {:?}",
            stream
                .next()
                .await
                .expect("No data in stream")
                .expect("Failed to parse ping")
        );

        sink.send(MyMessage {
            field: "pong".to_owned(),
        })
        .await
        .expect("Failed to send pong");
    });

    let stream = TcpStream::connect("127.0.0.1:8080")
        .await
        .expect("Failed to connect to server");
    let (mut stream, mut sink) = wrap_stream(stream);

    sink.send(MyMessage {
        field: "ping".to_owned(),
    })
    .await
    .expect("Failed to send ping to server");

    println!(
        "Client received: {:?}",
        stream
            .next()
            .await
            .expect("No data in stream")
            .expect("Failed to parse pong")
    );
}
Running this example yields:
Server received: MyMessage { field: "ping" }
Client received: MyMessage { field: "pong" }
Note that it's not required that you split the stream. You could instead construct a tokio_util::codec::Framed out of the TcpStream and build a tokio_serde::Framed over it with tokio_serde::formats::SymmetricalJson<MyMessage>; that Framed then implements both Sink and Stream accordingly (sketched below). Also, a lot of the functionality in this example is feature-gated, so be sure to enable the appropriate features according to the docs.
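For reference, that unsplit variant could look roughly like this. This is a sketch reusing the MyMessage type from above; SymmetricallyFramed is just an alias for Framed with matching Item and SinkItem, and I alias tokio_util's Framed as CodecFramed so it doesn't clash with the tokio_serde::Framed already imported:

use tokio::net::TcpStream;
use tokio_serde::{formats::SymmetricalJson, SymmetricallyFramed};
use tokio_util::codec::{Framed as CodecFramed, LengthDelimitedCodec};

type Wrapped = CodecFramed<TcpStream, LengthDelimitedCodec>;
type Serded = SymmetricallyFramed<Wrapped, MyMessage, SymmetricalJson<MyMessage>>;

fn wrap_stream_unsplit(stream: TcpStream) -> Serded {
    // One frame over the whole stream; the result implements both
    // Sink<MyMessage> and Stream<Item = Result<MyMessage, _>>.
    let wrapped = CodecFramed::new(stream, LengthDelimitedCodec::new());
    SymmetricallyFramed::new(wrapped, SymmetricalJson::default())
}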
I am creating a custom plugin with application/x-rtp caps on both the sink and src pads. I want to add a custom payload to the RTP header extension. My plugin definition is:
glib::wrapper! {
    pub struct CustomPlugin(ObjectSubclass<imp::CustomPlugin>)
        @extends gst_base::BaseTransform, gst::Element, gst::Object;
}
In transform_ip, I just want to add custom data to the extension.
Currently I have:
fn transform_ip(
    &self,
    element: &Self::Type,
    buf: &mut gst::BufferRef,
) -> Result<gst::FlowSuccess, gst::FlowError> {
    let mut out_frame: BufferMap<Writable> = buf.map_writable().map_err(|_| {
        gst::element_error!(
            element,
            gst::CoreError::Failed,
            ["Failed to map input buffer writable"]
        );
        gst::FlowError::Error
    })?;

    let mut_buf: &mut Buffer; // = out_frame...
    let mut rtp_buffer = RTPBuffer::from_buffer_writable(mut_buf).unwrap();
    rtp_buffer.add_extension_onebyte_header(1u8, &[1u8]).unwrap();
    Ok(gst::FlowSuccess::Ok)
}
The line containing let mut_buf: &mut Buffer; // = out_frame... needs to be fixed.
My questions are:
Am I going in the right direction? Do I need to use a different base class? The current base class is gst_base::BaseTransform.
I am new to multimedia and Rust. Is there a good resource apart from the GStreamer documentation? It is a good source, but I find it difficult to follow, though the Rust tutorial is fantastic, with detailed explanations.
GStreamer change to fix the above issue
The GStreamer team made a change to the interface: RTPBuffer::from_buffer_writable now accepts the &mut gst::BufferRef we already have.
The final code looks like:
fn transform_ip(
    &self,
    element: &Self::Type,
    buf: &mut gst::BufferRef,
) -> Result<gst::FlowSuccess, gst::FlowError> {
    let mut rtp_buffer = RTPBuffer::from_buffer_writable(buf).unwrap();
    rtp_buffer.add_extension_onebyte_header(1u8, &[1u8]).unwrap();
    Ok(gst::FlowSuccess::Ok)
}
And we have the fix. Thanks to the GStreamer team for resolving this so fast.
I have a glutin window and OpenGL context and am trying to add sound to my program. I stumbled upon rodio as it seemed the best fit for my project.
Following the docs, I wrote a helper function for loading a file as a sound source and implemented it in my program:
use rodio::Decoder;
use std::{
    error::Error,
    fs::File,
    io::BufReader,
};

fn load_sound_file(path: &str) -> Result<Decoder<BufReader<File>>, Box<dyn Error>> {
    // load a sound from a file
    let file = BufReader::new(File::open(path)?);
    // decode that sound into a source
    let source = Decoder::new(file)?;
    Ok(source)
}
use glutin::event_loop::EventLoop;
use rodio::{OutputStream, Sink};

fn main() {
    let el = EventLoop::new();
    ...
    let (_stream, stream_handle) = OutputStream::try_default().unwrap();
    let sink = Sink::try_new(&stream_handle).unwrap();
    let beep_sound = load_sound_file("beep.ogg").unwrap();
    ...
    el.run(move |event, _, control_flow| {
        sink.append(beep_sound);
        ...
    });
}
This doesn't work, however, as the compiler complains that it cannot move out of `beep_sound`, a captured variable in an `FnMut` closure, because beep_sound has type Decoder<BufReader<File>>, which doesn't implement the Copy trait.
I then tried using a clone of beep_sound:
...
el.run(move |event, _, control_flow| {
    let beep_clone = beep_sound.clone();
    sink.append(beep_clone);
    ...
});
But then the compiler complains that there's no method named `clone` found for struct `Decoder<BufReader<File>>` in the current scope.
I'm still inexperienced when it comes to Rust closures and ownership and some help would be much appreciated.
I'm working on an nrf52-dk board, trying to blink an LED and get SPI working at the same time.
I could have one or the other working at a time, but not both at once.
I'm pretty sure that nrf52810_hal::pac::Peripherals::take() shouldn't be called more than once, as writes through its data would affect all references, but the struct gets moved when I specify pins.
I'm unsure how I'd be able to make this work without using up the variable.
In the following example, "PERIPHERALS are null!" is always printed, and the code panics in the statements that follow it.
I do need PERIPHERALS to be static mut, as I need them in another "thread": an interrupt-called function, to which I'm unable to pass data.
#![no_main]
#![no_std]

#[allow(unused_extern_crates)]
use panic_halt as _;

use asm_delay::AsmDelay;
use cortex_m_rt::entry;
use cortex_m_semihosting::hprint;
use hal::gpio::Level;
use hal::pac::interrupt;
use nrf52810_hal as hal;
use nrf52810_hal::prelude::*;
use nrf52810_pac as nrf;
use nrf52810_pac::{Interrupt, NVIC};

static mut PERIPHERALS: Option<nrf::Peripherals> = None;

#[entry]
unsafe fn main() -> ! {
    let p = hal::pac::Peripherals::take().unwrap();
    let port0 = hal::gpio::p0::Parts::new(p.P0);
    let spiclk = port0.p0_25.into_push_pull_output(Level::Low).degrade();
    let spimosi = port0.p0_24.into_push_pull_output(Level::Low).degrade();
    let spimiso = port0.p0_23.into_floating_input().degrade();
    let pins = hal::spim::Pins {
        sck: spiclk,
        miso: Some(spimiso),
        mosi: Some(spimosi),
    };
    let spi = hal::Spim::new(
        p.SPIM0,
        pins,
        hal::spim::Frequency::K500,
        hal::spim::MODE_0,
        0,
    );

    let reference_data = "Hello World!".as_bytes();
    let mut eh_spi = embedded_hal_spy::new(spi, |_| {});
    use embedded_hal::blocking::spi::Write;
    match eh_spi.write(reference_data) {
        Ok(_) => {}
        Err(_) => {}
    }

    PERIPHERALS = nrf::Peripherals::take();
    if PERIPHERALS.is_none() {
        hprint!("PERIPHERALS are null!").unwrap();
    }

    NVIC::unmask(Interrupt::SWI0_EGU0);
    let mut d = AsmDelay::new(asm_delay::bitrate::U32BitrateExt::mhz(74));

    PERIPHERALS
        .as_ref()
        .unwrap()
        .P0
        .dir
        .write(|w| w.pin20().output());
    PERIPHERALS
        .as_ref()
        .unwrap()
        .P0
        .out
        .write(|w| w.pin20().low());

    loop {
        NVIC::pend(Interrupt::SWI0_EGU0);
        d.delay_ms(100u32);
    }
}

#[interrupt]
fn SWI0_EGU0() {
    static mut LED_STATE: bool = false;
    flip_led(LED_STATE);
    *LED_STATE = !*LED_STATE;
}

fn flip_led(led_state: &mut bool) {
    match led_state {
        true => unsafe {
            PERIPHERALS
                .as_ref()
                .unwrap()
                .P0
                .out
                .write(|w| w.pin20().low());
        },
        false => unsafe {
            PERIPHERALS
                .as_ref()
                .unwrap()
                .P0
                .out
                .write(|w| w.pin20().high());
        },
    }
}
Do you actually need to access all of Peripherals from your interrupt context? Probably not; you most likely need only specific peripherals there. You can move those out of the Peripherals struct and into a static. Then you'll have the peripherals you need in main as local variables, and everything else in a static.
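For example, if the interrupt only needs the LED pin, only that pin has to live in a static. Here's a minimal sketch of the pattern, assuming cortex-m's critical-section Mutex and nrf52810-hal's degraded Pin type (flip_led is my own name, replacing your unsafe version):

use core::cell::RefCell;
use cortex_m::interrupt::{free, Mutex};
use nrf52810_hal::gpio::{Output, Pin, PushPull};
use nrf52810_hal::prelude::*;

// Only the resource the interrupt needs lives in a static.
static LED: Mutex<RefCell<Option<Pin<Output<PushPull>>>>> =
    Mutex::new(RefCell::new(None));

// In main, after configuring the pin once:
//     let led = port0.p0_20.into_push_pull_output(Level::Low).degrade();
//     free(|cs| LED.borrow(cs).replace(Some(led)));

// Called from the interrupt handler instead of touching PERIPHERALS:
fn flip_led(state: bool) {
    free(|cs| {
        if let Some(led) = LED.borrow(cs).borrow_mut().as_mut() {
            if state {
                led.set_low().ok();
            } else {
                led.set_high().ok();
            }
        }
    });
}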
But there's an even better solution, in my opinion: use RTIC. It's designed to handle exactly this use case. It allows you to specify which resources you need in which context and makes those resources available there. You can even safely share resources between different contexts; RTIC will automatically protect them with mutexes, as required.
I can't recommend RTIC highly enough. For me, the only reason not to use it would be if my program doesn't have any interrupt handlers.
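To give a flavor, a blinky bound to the same SWI0_EGU0 interrupt could be declared roughly like this in RTIC 1.x. This is an untested sketch (crate-level attributes and the panic handler are omitted); the led resource and blink task are names I made up:

#[rtic::app(device = nrf52810_pac)]
mod app {
    use nrf52810_hal::gpio::{p0::Parts, Level, Output, Pin, PushPull};
    use nrf52810_hal::prelude::*;

    #[shared]
    struct Shared {}

    #[local]
    struct Local {
        led: Pin<Output<PushPull>>,
    }

    #[init]
    fn init(cx: init::Context) -> (Shared, Local, init::Monotonics) {
        // RTIC hands out the PAC peripherals exactly once, in init.
        let port0 = Parts::new(cx.device.P0);
        let led = port0.p0_20.into_push_pull_output(Level::Low).degrade();
        (Shared {}, Local { led }, init::Monotonics())
    }

    // The led resource is available only to this handler; no static mut needed.
    #[task(binds = SWI0_EGU0, local = [led, state: bool = false])]
    fn blink(cx: blink::Context) {
        if *cx.local.state {
            cx.local.led.set_low().ok();
        } else {
            cx.local.led.set_high().ok();
        }
        *cx.local.state = !*cx.local.state;
    }
}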