Receive Bluetooth Audio Signal as Microphone Input for SpeechRecognizer - android-studio

I am using a Raspberry Pi to take a microphone input and filter background noise using a pretrained ML model. I would like to stream the filtered audio to an Android Device and use SpeechRecognizer in Android Studio to transcribe the text. However, from what I've seen in the docs, RecognizerIntent is the only interface for the SpeechRecognizer, which only sources directly from the microphone. Is there any way I can receive the Bluetooth audio as a microphone input so that it can be fed to the SpeechRecognizer?
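Not part of the original question, but one commonly suggested direction: if the Raspberry Pi presents itself to Android as a Bluetooth HFP/HSP headset (i.e. a hands-free device with a microphone), Android can route the SCO voice link as the active audio input, and SpeechRecognizer will then transcribe whatever arrives over that link. The sketch below assumes exactly that pairing setup; the class name ScoSpeechHelper and the RecognitionListener passed in are illustrative, not existing APIs.

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.content.IntentFilter;
    import android.media.AudioManager;
    import android.speech.RecognitionListener;
    import android.speech.RecognizerIntent;
    import android.speech.SpeechRecognizer;

    public class ScoSpeechHelper {

        // Sketch only: assumes the Pi is already paired as an HFP/HSP headset.
        // Requires the RECORD_AUDIO and BLUETOOTH permissions.
        public static void startRecognitionOverSco(final Context context,
                                                   final RecognitionListener listener) {
            final AudioManager am =
                    (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);

            context.registerReceiver(new BroadcastReceiver() {
                @Override
                public void onReceive(Context ctx, Intent intent) {
                    int state = intent.getIntExtra(
                            AudioManager.EXTRA_SCO_AUDIO_STATE,
                            AudioManager.SCO_AUDIO_STATE_ERROR);
                    if (state == AudioManager.SCO_AUDIO_STATE_CONNECTED) {
                        // SCO is up: the Bluetooth device is now the active
                        // microphone, so SpeechRecognizer hears the streamed audio.
                        SpeechRecognizer recognizer =
                                SpeechRecognizer.createSpeechRecognizer(ctx);
                        recognizer.setRecognitionListener(listener);
                        Intent recIntent =
                                new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
                        recIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
                        recognizer.startListening(recIntent);
                        ctx.unregisterReceiver(this);
                    }
                }
            }, new IntentFilter(AudioManager.ACTION_SCO_AUDIO_STATE_UPDATED));

            am.startBluetoothSco();     // bring up the SCO audio link
            am.setBluetoothScoOn(true); // route audio capture over SCO
        }
    }

Keep in mind that SCO voice links are narrowband (8 kHz, or 16 kHz with wideband speech), so the filtered audio will be resampled and compressed on the way in; this also relates to the low-quality-audio question further down.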

Related

How to get data from an STM32 USB audio device and send it to the DAC?

I am using an STM32F103RCT6 board with STM32CubeIDE. I enabled the USB audio device, the code works, and Windows recognizes the board as an audio device (speaker).
I have searched and read the documentation, but I have no idea what the code is doing.
The only generated call is MX_USB_DEVICE_Init();
1- Is it now receiving data from the PC and saving it somewhere in a RAM buffer?
2- How can I get access to the data and send it to the DAC?
I am not going to use I2C and a codec because I don't have a codec IC, so I just want to use the DAC to generate the audio for the speakers.
Thanks!

Why do I get low-quality audio input on a Raspberry Pi 3 using a Bluetooth headset?

I'm working on a project that implements speech recognition on a Raspberry Pi.
I use a Bluetooth microphone for audio input with PulseAudio as the driver, but I get low-quality sound that keeps me from moving on to the next step. Does anyone know how to solve this?

No audio data sent from the PC to an STM32F4 USB audio class device

I'm working on an audio project. We use an STM32F407 as a USB audio device to get audio data from the PC and then send it out through the I2S module. We are using an STM32F4 Discovery kit and STM32CubeMX. After generating the code by following this video, I changed nothing and flashed it to the kit; my PC identifies the STM audio device, but no data is sent to my kit when playing music, except MuteCMD. My questions are:
I don't know which callback function is called when data streams from the PC to the kit.
Why does the PC identify my kit as an audio output device, yet the volume-control callback isn't called when I change the volume on the PC, and no music data is sent to my device? Only the mute-control callback is called, when I mute the PC.
This is my configuration in STM32CubeMX:
pinout configuration figure
USB device configuration figure 1
USB device configuration figure 2
USB device configuration figure 3
PC identifying the AUDIO device figure
choosing the PC's audio output device figure
failure to play the test tone figure
You should set USBD_AUDIO_FREQ to 22050 (or 44100, or 11025). Your value is 22100, and it seems Windows or the built-in audio drivers can't use that frequency.
I had the exact same problem.
My project was generated from STM32Cube.
Windows recognized the F7-DISCO board as a sound card but failed to play test sounds.
I changed USBD_AUDIO_FREQ to 48000 and the PID to 0x5730 (22320 in decimal).
After that, everything worked fine.

Can anyone explain how voice commands work via a Bluetooth remote (Nexus Player remote) in Android (Nexus Player)?

Can anyone please elaborate on the following questions?
How does the Bluetooth stack handle audio data?
How are audio commands processed?
Do we need any service to process the audio data?
Thanks in advance.
Basically, voice commands over BLE require:
an audio codec to reduce the required bandwidth (ADPCM and SBC are common; Opus is emerging),
an audio streaming method over BLE,
decoding, and getting the audio stream from the BLE daemon to a command-processing framework.
In the Android world, the command-processing framework is Google sauce (closed source) that most easily gets its audio from an ALSA device. What is left to be done is getting audio from the remote to an ALSA device.
So for audio streaming, you either:
use a custom L2CAP channel or a custom GATT service; this requires a custom Android service app and/or modifications to Bluedroid to handle them, plus a way to inject the audio stream into ALSA, most probably with a "loopback" audio device driver, or
declare the audio as custom HID reports; this way Bluedroid injects them back into the kernel, and a custom HID driver then processes these reports and exposes an audio device.
Audio over BLE is not standardized, so implementations do not all do the same thing. In the Nexus Player's case, the implementation uses HID: it streams ADPCM audio, chunked into HID reports. There is a special HID driver, "hid-atv-remote.c", in the Android Linux kernel that exposes an ALSA device in addition to the input device. Bluedroid has no knowledge of the audio; all it does is forward HID reports from BLE to UHID.
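To make the ADPCM step above concrete, here is a minimal sketch of a standard IMA ADPCM decoder (4-bit codes to 16-bit PCM). It is illustrative only: on the Nexus Player the decoding happens in the hid-atv-remote kernel driver, not in an app, and a given remote may use a different ADPCM variant or block framing.

    public class ImaAdpcmDecoder {

        private static final int[] INDEX_TABLE = {
            -1, -1, -1, -1, 2, 4, 6, 8,
            -1, -1, -1, -1, 2, 4, 6, 8
        };

        private static final int[] STEP_TABLE = {
            7, 8, 9, 10, 11, 12, 13, 14, 16, 17,
            19, 21, 23, 25, 28, 31, 34, 37, 41, 45,
            50, 55, 60, 66, 73, 80, 88, 97, 107, 118,
            130, 143, 157, 173, 190, 209, 230, 253, 279, 307,
            337, 371, 408, 449, 494, 544, 598, 658, 724, 796,
            876, 963, 1060, 1166, 1282, 1411, 1552, 1707, 1878, 2066,
            2272, 2499, 2749, 3024, 3327, 3660, 4026, 4428, 4871, 5358,
            5894, 6484, 7132, 7845, 8630, 9493, 10442, 11487, 12635, 13899,
            15289, 16818, 18500, 20350, 22385, 24623, 27086, 29794, 32767
        };

        private int predictor = 0; // last decoded PCM sample
        private int index = 0;     // current position in STEP_TABLE

        // Decodes one packed byte (two 4-bit codes, low nibble first)
        // into two 16-bit PCM samples.
        public short[] decodeByte(byte packed) {
            return new short[] {
                decodeNibble(packed & 0x0F),
                decodeNibble((packed >> 4) & 0x0F)
            };
        }

        private short decodeNibble(int code) {
            int step = STEP_TABLE[index];
            int diff = step >> 3;
            if ((code & 4) != 0) diff += step;
            if ((code & 2) != 0) diff += step >> 1;
            if ((code & 1) != 0) diff += step >> 2;
            predictor += ((code & 8) != 0) ? -diff : diff;

            // Clamp to the 16-bit PCM range and update the step index.
            predictor = Math.max(-32768, Math.min(32767, predictor));
            index = Math.max(0, Math.min(88, index + INDEX_TABLE[code]));
            return (short) predictor;
        }
    }

A real decoder would also reset the predictor and step index from per-block headers; this version simply carries state across bytes to keep the sketch short.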

How to query the capability of the audio output channel configuration via the Android API?

I am writing a sound player to play multi-channel audio files, and I need to know whether the Android device it runs on can physically support multi-channel playback, meaning the final output will not be downmixed to 2.0 stereo.
Is there an API to get this information?
For example, some devices can play audio via the MHL or HDMI interface; in this case, the multi-channel query should return true.
And some devices will always downmix audio to stereo; in this case, the result should return false.
Thanks~
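No answer was posted here, but for reference, one way to approximate this check on API level 23+ is sketched below, using AudioManager.getDevices() and AudioDeviceInfo.getChannelCounts(). Whether the framework really avoids downmixing still depends on the sink and the audio path, and getChannelCounts() can return an empty array when a device does not report fixed channel counts, so treat this as a heuristic rather than a guarantee.

    import android.content.Context;
    import android.media.AudioDeviceInfo;
    import android.media.AudioManager;

    public class MultiChannelCheck {

        // Returns true if any currently attached output device (e.g. HDMI)
        // advertises more than two channels. Requires API level 23+.
        public static boolean hasMultiChannelOutput(Context context) {
            AudioManager am =
                    (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
            for (AudioDeviceInfo device : am.getDevices(AudioManager.GET_DEVICES_OUTPUTS)) {
                for (int channels : device.getChannelCounts()) {
                    if (channels > 2) {
                        return true;
                    }
                }
            }
            return false;
        }
    }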
