A2DP/SCO - PCM/HCI - audio

I wanted to know what exactly the difference is between routing A2DP/SCO packets through PCM and through HCI.
Do both PCM and HCI use the ALSA framework to decode the packets before sending them to the speakers?
Does PCM require special hardware while HCI does not?

A paper titled "Audio Streaming over Bluetooth" (PDF) from the 2008 Ottawa Linux Symposium may shed some more light on this.
In particular (quoting from page 194):
The audio data transferred over the SCO channel can be provided via the normal Host Controller Interface (HCI) hardware driver or via a PCM back-channel. In case of a desktop computer, the HCI will be used. In case of an embedded device (for example a mobile phone), the SCO channel will be directly connected via a PCM interface to the main audio codec.

A2DP uses ACL packets; voice calls (hands-free) use SCO packets over the air.
HCI can transport both ACL and SCO; this is the case, for example, when a BT dongle is plugged into a PC through USB. BT chips often have a PCM interface to which SCO data can be routed, but usually it is not accessible unless you can reach the pins of the chip. The PCM interface can be connected to an analogue input/output.
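For the HCI-routed (desktop) case, the host sees SCO audio as just another data stream: BlueZ exposes it through a dedicated socket type. Below is a minimal sketch in C (assuming the BlueZ development headers, linking with -lbluetooth, and a hypothetical headset address) that opens a SCO connection and reads raw audio frames delivered over HCI:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <bluetooth/bluetooth.h>
#include <bluetooth/sco.h>

int main(void)
{
    struct sockaddr_sco addr;
    unsigned char buf[255];

    int sk = socket(AF_BLUETOOTH, SOCK_SEQPACKET, BTPROTO_SCO);
    if (sk < 0) {
        perror("socket");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sco_family = AF_BLUETOOTH;
    str2ba("00:11:22:33:44:55", &addr.sco_bdaddr); /* hypothetical headset */

    /* Sets up the SCO link; audio then arrives as small fixed-size
     * frames interleaved with ACL traffic on the same HCI transport. */
    if (connect(sk, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        close(sk);
        return 1;
    }

    ssize_t n = read(sk, buf, sizeof(buf));
    printf("read %zd bytes of SCO audio\n", n);

    close(sk);
    return 0;
}

On an embedded design with PCM routing, none of this audio would show up on the host socket; the chip pushes the SCO stream straight out of its PCM pins to the codec.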

Related

PCM data handling with Raspberry Pi 4 and audio codec (TLV320AIC3104)

I'm working on a project with a Raspberry Pi 4 Compute Module and IO board.
Here is a schematic of the project:
https://drive.google.com/file/d/1mg5IhAKTUE2Athzafis1KSsEJS1T7DXG/view?usp=sharing
I have to handle audio and voice to make calls with the Raspberry Pi. My setup consists of an audio box with a headset connected to it; the signal passes through a TLV320AIC3104 audio codec connected via PCM to the Raspberry Pi, and calls are then made through a USB modem, which has an integrated audio card. I need two-way communication.
So far, I can make calls with the Raspberry Pi and have audio and voice with the caller, using a USB headset.
I am using the TLV320AIC3104EVM-K evaluation board (https://www.ti.com/lit/ug/slau218a/slau218a.pdf?ts=1664262721095&ref_url=https%253A%252F%252Fwww.ti.com%252Ftool%252FTLV320AIC3104EVM-K) to learn how to connect the audio codec to the Raspberry Pi, but so far I have been unable to get anything. When the TLV320AIC3104EVM-K is connected to the PC via USB, I can use the headset through the audio box without problems.
The thing is, I can't figure out how to connect the audio codec to the Raspberry Pi to get the headset audio, and then finish the setup by passing the signal on to the modem and back.
I understand that the connection between the TLV320AIC3104EVM-K and the Raspberry Pi is actually an I2S connection. I have connected the GPIO PCM pins (GPIO 18, 19, 20, 21). I have configured the Raspberry Pi in /etc/asound.conf to use card 0 as the default, that card being the modem's.
/etc/asound.conf
pcm.!default {
    type hw
    card 0
}
ctl.!default {
    type hw
    card 0
}
And I have the GPIO pins configured to ALT0.
What am I missing here?
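Before changing /etc/asound.conf further, it can help to verify what ALSA actually registered. Here is a small diagnostic sketch in C (not part of the original setup; assumes alsa-lib, compile with -lasound) that lists the sound cards, so you can see whether an I2S codec card ever appears next to the modem's card:

#include <stdio.h>
#include <stdlib.h>
#include <alsa/asoundlib.h>

int main(void)
{
    int card = -1;  /* -1 asks snd_card_next() for the first card */

    while (snd_card_next(&card) == 0 && card >= 0) {
        char *name = NULL;
        if (snd_card_get_name(card, &name) == 0) {
            printf("card %d: %s\n", card, name);
            free(name);
        }
    }
    return 0;
}

If no codec card shows up here, the problem is at the driver level (on a Pi, an I2S codec typically needs a matching device tree overlay before any asound.conf changes matter).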

How to analyze live Bluetooth packets in Wireshark?

If I use the BlueZ hcitool, e.g. hcitool scan, then I can see the packets properly in Wireshark under the bluetooth0 interface. And I am sure that BlueZ is using the on-board Bluetooth chip.
I have written my own application with my own Bluetooth stack (I am not using BlueZ) for a USB Bluetooth dongle (using libusb), but when I start Wireshark on the bluetooth1 interface, Wireshark does not show any packets.
Should my application send packets to Wireshark? If so, can someone please direct me on how to see my Bluetooth packets in Wireshark?

Can anyone explain how voice commands work via a Bluetooth remote (Nexus Player remote) in Android (Nexus Player)?

Can anyone please elaborate on the following questions?
How does the Bluetooth stack handle audio data?
How are audio commands processed?
Do we need any service to process audio data?
Thanks in advance.
Basically, voice commands over BLE require:
an audio codec to reduce the required bandwidth (ADPCM and SBC are common; Opus is emerging),
some method of streaming audio over BLE,
decoding, and getting the audio stream from the BLE daemon into a command-processing framework.
In the Android world, the command-processing framework is Google sauce (closed source) that most easily gets its audio from an ALSA device. What is left to be done is getting audio from the remote to an ALSA device.
So for audio streaming, either you:
use a custom L2CAP channel or a custom GATT service; this requires a custom Android service app and/or modifications to Bluedroid to handle it, plus a way to inject the audio stream into ALSA, most probably via a "loop" audio device driver,
or declare the audio as custom HID reports; this way Bluedroid injects them back into the kernel, and you add a custom HID driver that processes these reports and exposes an audio device.
Audio over BLE is not standardized, so implementations do not all do the same thing. In the Nexus Player's case, the implementation uses HID: it streams an ADPCM audio stream, chunked into HID reports. There is a special HID driver, hid-atv-remote.c, in the Android Linux kernel that exposes an ALSA device in addition to the input device. Bluedroid has no knowledge of the audio; all it does is forward HID reports from BLE to UHID.
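To give a sense of what that HID driver does with each report payload, here is a sketch of the standard IMA ADPCM decode step in C. The report framing, nibble order, and predictor/index seeding are device-specific assumptions here; hid-atv-remote.c carries the real logic:

#include <stdint.h>

/* Standard IMA ADPCM tables. */
static const int ima_index_table[16] = {
    -1, -1, -1, -1, 2, 4, 6, 8,
    -1, -1, -1, -1, 2, 4, 6, 8
};
static const int ima_step_table[89] = {
    7, 8, 9, 10, 11, 12, 13, 14, 16, 17,
    19, 21, 23, 25, 28, 31, 34, 37, 41, 45,
    50, 55, 60, 66, 73, 80, 88, 97, 107, 118,
    130, 143, 157, 173, 190, 209, 230, 253, 279, 307,
    337, 371, 408, 449, 494, 544, 598, 658, 724, 796,
    876, 963, 1060, 1166, 1282, 1411, 1552, 1707, 1878, 2066,
    2272, 2499, 2749, 3024, 3327, 3660, 4026, 4428, 4871, 5358,
    5894, 6484, 7132, 7845, 8630, 9493, 10442, 11487, 12635, 13899,
    15289, 16818, 18500, 20350, 22385, 24623, 27086, 29794, 32767
};

struct ima_state { int predictor, index; };

/* Decode one 4-bit ADPCM nibble into a 16-bit PCM sample. */
static int16_t ima_decode_nibble(struct ima_state *s, uint8_t nibble)
{
    int step = ima_step_table[s->index];
    int diff = step >> 3;                 /* approximates (nibble + 0.5) * step / 4 */

    if (nibble & 1) diff += step >> 2;
    if (nibble & 2) diff += step >> 1;
    if (nibble & 4) diff += step;
    if (nibble & 8) s->predictor -= diff;
    else            s->predictor += diff;

    /* Clamp to the 16-bit range and keep the step index in bounds. */
    if (s->predictor > 32767) s->predictor = 32767;
    else if (s->predictor < -32768) s->predictor = -32768;
    s->index += ima_index_table[nibble & 0x0f];
    if (s->index < 0) s->index = 0;
    else if (s->index > 88) s->index = 88;

    return (int16_t)s->predictor;
}

/* Decode one report payload: two samples per byte (nibble order assumed). */
static void ima_decode_report(struct ima_state *s, const uint8_t *buf,
                              int len, int16_t *out)
{
    for (int i = 0; i < len; i++) {
        out[2 * i]     = ima_decode_nibble(s, buf[i] >> 4);
        out[2 * i + 1] = ima_decode_nibble(s, buf[i] & 0x0f);
    }
}

The decoded 16-bit samples are what the driver hands to ALSA; the same state must persist across reports, since the predictor is differential.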

ALSA driver for USB modem external audio codec on SoC

I have a USB modem which exports a PCM interface, fed into an audio codec controlled over I2C.
The codec is supported as an SoC ALSA codec, and I'm developing a driver to manage the sound levels through ALSA mixers.
I think I have two options:
either create a dummy SoC sound card with the codec as an aux device (snd_soc_aux_dev). The codec's configuration is fixed in the init() function, and ALSA does not manage the PCM interface, just the levels. This way I am not using the functions already implemented in the codec's driver to set clocks, rates, and formats;
or create a modem sound card which exports a DAI with the correct rate and format parameters. This way I can use the codec driver's implementation of all those functions (a rough sketch of this option follows below).
Where should this kind of driver live? As an extension to the USB driver, or as an SoC one?
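For illustration, here is a rough sketch of the second option as a minimal ASoC machine driver in C. It uses the older (pre-5.0) snd_soc_dai_link field layout (newer kernels express the same link through snd_soc_dai_link_component), the codec and DAI names are placeholders that must match whatever your codec driver registers, and since the modem itself drives the PCM bus, the CPU side is ASoC's dummy DAI:

#include <linux/module.h>
#include <linux/platform_device.h>
#include <sound/soc.h>

static struct snd_soc_dai_link modem_dai_link = {
    .name           = "modem-pcm",
    .stream_name    = "Modem PCM",
    .cpu_dai_name   = "snd-soc-dummy-dai",    /* modem clocks the bus itself */
    .platform_name  = "snd-soc-dummy",
    .codec_name     = "example-codec.0-001a", /* placeholder I2C device name */
    .codec_dai_name = "example-codec-dai",    /* placeholder codec DAI name */
    /* Fixed format/clocking as dictated by the modem's PCM interface. */
    .dai_fmt = SND_SOC_DAIFMT_DSP_A | SND_SOC_DAIFMT_CBM_CFM,
};

static struct snd_soc_card modem_card = {
    .name      = "modem-audio",
    .owner     = THIS_MODULE,
    .dai_link  = &modem_dai_link,
    .num_links = 1,
};

static int modem_audio_probe(struct platform_device *pdev)
{
    modem_card.dev = &pdev->dev;
    return devm_snd_soc_register_card(&pdev->dev, &modem_card);
}

static struct platform_driver modem_audio_driver = {
    .driver = { .name = "modem-audio" },
    .probe  = modem_audio_probe,
};
module_platform_driver(modem_audio_driver);
MODULE_LICENSE("GPL");

This keeps the driver on the SoC/ASoC side: the card's hw_params path exercises the codec driver's clock, rate, and format callbacks, while the USB modem driver stays untouched.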

How does USB to Bluetooth DTR/RTS work?

A number of Bluetooth Arduino shields (Bluefruit EZ-Link, SparkFun Bluetooth Silver) support DTR/RTS and have dedicated output pins to wire up. How do they work? Do they require special drivers (e.g. on Linux)? Can any Bluetooth receiver be used or modified to provide DTR/RTS? Since setting DTR/RTS is vendor-specific, does it depend on the transmitter side, or only on the receiver (the Bluetooth shield)?
The only idea I have is that special USB drivers are needed that send special AT commands to let the BT receiver know the actual DTR/RTS value.
The Bluetooth SPP specification (https://developer.bluetooth.org/TechnologyOverview/Documents/SPP_SPEC.pdf) states in section 4.1, RS232 Control Signals, that "all devices are required to send information on all changes in RS232 control signals".
And since the Bluefruit EZ-Link does not use any special drivers on the computer side, it must be that the standard BT virtual serial port drivers that manage the ports created for the BT connection to the Arduino handle the control signals properly and send them over to the BT shield connected to the Arduino. Hence no work should be needed on the computer side; it only depends on the receiver: whether it has the control signals accessible on any of its output pins and operates them as it should, or not (as is usually the case, unfortunately).
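On Linux, that standard driver is the rfcomm TTY: the kernel's RFCOMM layer implements the usual modem-control ioctls and forwards changes to the remote side as RFCOMM modem-status (MSC) commands, per the SPP requirement quoted above. A minimal sketch in C (the device node /dev/rfcomm0 is an assumption) that pulses DTR the way a host would to reset an Arduino:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <termios.h>

int main(void)
{
    int fd = open("/dev/rfcomm0", O_RDWR | O_NOCTTY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    int dtr = TIOCM_DTR;
    ioctl(fd, TIOCMBIC, &dtr);   /* clear DTR: the falling edge triggers auto-reset */
    usleep(100 * 1000);          /* hold it low for ~100 ms */
    ioctl(fd, TIOCMBIS, &dtr);   /* set DTR again */

    close(fd);
    return 0;
}

Whether the pulse actually reaches the Arduino's reset pin then depends entirely on the shield: the module must map the received MSC signal change onto a physical output pin.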
