How to write a PCM codec device driver with long/short frame sync (not I2S) on a Raspberry Pi?
I know how to write an ALSA sound device driver with I2S (a platform device driver and a codec driver), but I suppose the Raspberry Pi only supports I2S by default.
Finally, I found the answer! PCM has several clock/frame-sync modes, and I2S is only one of them; the others are DSP mode A and B and left- and right-justified, and Raspbian supports all of them.
To use another PCM mode:
1. The codec driver must support it.
2. Select it in the snd_soc_dai_link of your machine (platform) driver, as in the sketch below.
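A minimal sketch of the relevant part of an older-style machine driver, assuming hypothetical device names (the real names depend on your board and codec):

/* Select DSP mode A (short frame sync) instead of I2S on the DAI link. */
static struct snd_soc_dai_link my_dai_link = {
    .name           = "pcm-codec-link",
    .stream_name    = "PCM Playback/Capture",
    .cpu_dai_name   = "bcm2835-i2s.0",        /* the Pi's PCM/I2S block */
    .codec_name     = "my-codec.1-001a",      /* placeholder codec device */
    .codec_dai_name = "my-codec-dai",         /* placeholder codec DAI */
    .dai_fmt        = SND_SOC_DAIFMT_DSP_A    /* short FS, data one BCLK after FS */
                    | SND_SOC_DAIFMT_NB_NF    /* normal bit/frame clock polarity */
                    | SND_SOC_DAIFMT_CBS_CFS, /* codec is clock/frame slave */
};

For a long frame sync, SND_SOC_DAIFMT_DSP_B aligns the data MSB with the frame-sync edge instead.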
More information: I2S DSP modes, ALSA Device Drivers, DAI hardware audio formats.
Related
I'm working on a project with a Raspberry Pi 4 Compute Module and IO board.
This is a schema of the project:
https://drive.google.com/file/d/1mg5IhAKTUE2Athzafis1KSsEJS1T7DXG/view?usp=sharing
I have to handle audio and voice to make calls with the Raspberry Pi. My setup is: an audio box with a headset connected to it passes the signal through a TLV320AIC3104 audio codec connected via PCM to the Raspberry Pi, and calls are then made through a USB modem, which has an integrated audio card. I need two-way communication.
So far, I can make calls with the Raspberry Pi and exchange audio and voice with the caller using a USB headset.
I am using the TLV320AIC3104EVM-K evaluation board (https://www.ti.com/lit/ug/slau218a/slau218a.pdf) to learn how to connect the audio codec to the Raspberry Pi, but so far I have not been able to get anything. When the TLV320AIC3104EVM-K is connected to the PC via USB, I can use the headset through the audio box without problems.
The thing is, I can't figure out how to connect the audio codec to the Raspberry Pi to get the headset audio, and then complete the setup by passing the data to the modem and back.
I understand that the connection between the TLV320AIC3104EVM-K and the Raspberry Pi is actually an I2S connection. I have connected the GPIO PCM ports (GPIO 18, 19, 20, 21). In /etc/asound.conf I have configured card 0, the modem's card, as the default.
/etc/asound.conf
pcm.!default {
    type hw
    card 0
}

ctl.!default {
    type hw
    card 0
}
And I have the GPIO ports configured to ALT0.
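(My assumption is that in ALT0 these pins map to PCM_CLK on GPIO 18, PCM_FS on GPIO 19, PCM_DIN on GPIO 20 and PCM_DOUT on GPIO 21, and that the interface itself also needs to be enabled in /boot/config.txt:)

dtparam=i2s=on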
What am I missing here?
I want to use ALSA on a BeagleBone Black to send audio through USB audio out and receive it on my computer.
I have seen that there are some gadgets in a legacy folder in the kernel, and some tutorials on how to set up mass-storage and network gadgets, but I am confused about the current state of audio gadgets and what to compile and configure for this.
Can you explain the various components and configurations needed to make this happen: which kernel modules, drivers, scripts, and configuration might be required?
You need to enable the USB gadget subsystem in the Linux kernel for your BeagleBone Black, assuming of course that your BeagleBone has a USB device controller and a USB device connector. There is more information here:
https://www.lynxbee.com/usb-audio-gadget-driver/
USB devices contain so-called USB descriptors, which tell the USB host (the PC) what type of device it is talking to. The audio gadget provides one such descriptor type; it tells the host that this device (in this case the BeagleBone) should work as an audio device.
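As a sketch of what the setup can look like, assuming a kernel built with CONFIG_USB_CONFIGFS and the UAC2 audio function (CONFIG_USB_CONFIGFS_F_UAC2), a gadget can be composed through configfs roughly like this (example IDs; the UDC name is board-specific):

modprobe libcomposite
cd /sys/kernel/config/usb_gadget
mkdir g1 && cd g1
echo 0x1d6b > idVendor        # example vendor ID (Linux Foundation)
echo 0x0104 > idProduct       # example product ID
mkdir -p configs/c.1
mkdir -p functions/uac2.0     # UAC2 audio function
ln -s functions/uac2.0 configs/c.1/
ls /sys/class/udc             # find your controller's name
echo musb-hdrc.0.auto > UDC   # bind; use the name printed above

The BeagleBone should then enumerate on the host as a USB sound card, while on the BeagleBone side the gadget appears as an ALSA device you can play into. The legacy all-in-one module g_audio (modprobe g_audio) sets up an equivalent gadget without configfs.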
Can anyone please elaborate on the following questions?
How does the Bluetooth stack handle audio data?
How are audio commands processed?
Do we need any service to process audio data?
Thanks in advance.
Basically, voice commands over BLE require:
some audio codec to reduce the required bandwidth (ADPCM and SBC are common, Opus is emerging),
some method of streaming the audio through BLE,
decoding, and getting the audio stream from the BLE daemon to a command-processing framework.
In the Android world, the command-processing framework is Google sauce (closed source) that most easily gets its audio from an ALSA device. What is left to be done is getting the audio from the remote to an ALSA device.
So for audio streaming, you either:
use a custom L2CAP channel or a custom GATT service; this requires a custom Android service app and/or modifications to Bluedroid to handle them, plus a way to inject the audio stream into ALSA, most probably with a "loopback" audio device driver,
or declare the audio as custom HID reports; this way Bluedroid injects them back into the kernel, and you then add a custom HID driver that processes these reports and exposes an audio device.
Audio over BLE is not standardized, so implementations do not all do the same thing. In the Nexus Player's case, the implementation uses HID: it streams an ADPCM audio stream, chunked into HID reports. There is a special HID driver, hid-atv-remote.c, in the Android Linux kernel that exposes an ALSA device in addition to the input device. Bluedroid has no knowledge of the audio; all it does is forward HID reports from BLE to UHID.
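To give a sense of the decoding step, here is the standard IMA ADPCM nibble decoder (a generic sketch; the exact framing a given remote uses may differ):

#include <stdint.h>

/* Standard IMA ADPCM tables. */
static const int8_t index_table[16] = {
    -1, -1, -1, -1, 2, 4, 6, 8,
    -1, -1, -1, -1, 2, 4, 6, 8,
};
static const int16_t step_table[89] = {
        7,     8,     9,    10,    11,    12,    13,    14,    16,    17,
       19,    21,    23,    25,    28,    31,    34,    37,    41,    45,
       50,    55,    60,    66,    73,    80,    88,    97,   107,   118,
      130,   143,   157,   173,   190,   209,   230,   253,   279,   307,
      337,   371,   408,   449,   494,   544,   598,   658,   724,   796,
      876,   963,  1060,  1166,  1282,  1411,  1552,  1707,  1878,  2066,
     2272,  2499,  2749,  3024,  3327,  3660,  4026,  4428,  4871,  5358,
     5894,  6484,  7132,  7845,  8630,  9493, 10442, 11487, 12635, 13899,
    15289, 16818, 18500, 20350, 22385, 24623, 27086, 29794, 32767,
};

/* Decode one 4-bit ADPCM nibble into a 16-bit PCM sample.
 * 'pred' and 'index' carry the decoder state between calls. */
static int16_t ima_decode(uint8_t nibble, int *pred, int *index)
{
    int step = step_table[*index];
    int diff = step >> 3;

    if (nibble & 1) diff += step >> 2;
    if (nibble & 2) diff += step >> 1;
    if (nibble & 4) diff += step;
    if (nibble & 8) *pred -= diff;
    else            *pred += diff;

    if (*pred > 32767)  *pred = 32767;
    if (*pred < -32768) *pred = -32768;

    *index += index_table[nibble & 0x0f];
    if (*index < 0)  *index = 0;
    if (*index > 88) *index = 88;

    return (int16_t)*pred;
}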
I have a USB modem that exports a PCM interface, which is fed to an audio codec controlled over I2C.
The codec is supported as a SoC ALSA codec, and I'm developing a driver to manage the sound levels through ALSA mixers.
I think I have two options:
either create a dummy SoC sound card with the codec as an aux device (snd_soc_aux_dev). The codec's configuration is fixed in the init() function, and ALSA does not manage the PCM interface, just the levels. This way I'm not using the functions already implemented in the codec's driver to set clocks, rates, and formats;
or create a modem sound card which exports a DAI with the correct rate and format parameters. This way I can use the codec driver's implementation of all the functions.
Where should I put this kind of driver: as an extension of the USB driver, or as an SoC one?
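To make option 1 concrete, this is roughly the registration I have in mind (a sketch against an older ASoC API; field names and the init() signature vary between kernel versions, and all device names here are placeholders):

#include <linux/module.h>
#include <sound/soc.h>

/* One-time fixed codec setup; ALSA will only manage the mixer levels. */
static int modem_codec_init(struct snd_soc_dapm_context *dapm)
{
    /* set sysclk, fixed rate/format, power up paths, etc. */
    return 0;
}

static struct snd_soc_aux_dev modem_aux_dev = {
    .name       = "modem-codec",              /* placeholder */
    .codec_name = "tlv320aic3x-codec.0-001b", /* placeholder I2C device */
    .init       = modem_codec_init,
};

/* A card with no DAI links: the codec is bound only as an aux device. */
static struct snd_soc_card modem_card = {
    .name         = "modem-audio",
    .owner        = THIS_MODULE,
    .aux_dev      = &modem_aux_dev,
    .num_aux_devs = 1,
};

The card would then be registered with snd_soc_register_card() from whichever driver ends up owning it, which is really the question above.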
What is the difference between ALSA and PulseAudio?
What is the use of ALSA and PulseAudio as applied to A2DP streaming when my device works as a sink device?
Thanks, Yogesh
ALSA is a driver layer and API for interfacing directly with the sound device. It allows low-level reading, writing, and device control, as well as some MIDI support. PulseAudio is a sound server which can use ALSA to access the sound card, and it also provides network audio and more advanced software mixing/scheduling.
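To illustrate the difference in abstraction level, this is what writing PCM data straight to a device through alsa-lib looks like (minimal sketch; error handling omitted):

#include <alsa/asoundlib.h>
#include <string.h>

int main(void)
{
    snd_pcm_t *pcm;
    short buf[2 * 480]; /* 10 ms of interleaved stereo S16 at 48 kHz */

    /* Open the default playback device and set hardware parameters. */
    snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0);
    snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                       SND_PCM_ACCESS_RW_INTERLEAVED,
                       2, 48000, 1, 500000);

    memset(buf, 0, sizeof(buf));       /* silence */
    for (int i = 0; i < 100; i++)      /* ~1 s of audio */
        snd_pcm_writei(pcm, buf, 480); /* 480 frames per call */

    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}

PulseAudio sits on top of exactly this kind of access and multiplexes it between applications.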
I do not know much about PulseAudio's implementation of A2DP (if any); however, this is not something that ALSA is designed to support.