ALSA/PulseAudio - Bluetooth

What is the difference between ALSA and PulseAudio?
What is the use of ALSA and PulseAudio as applied to A2DP streaming when my device works as a sink?
Thanks, Yogesh

ALSA is a driver and API for interfacing directly with the sound device. It allows low-level reading, writing and device control, as well as some MIDI support. PulseAudio is a sound server that can use ALSA to access the sound card, and it also provides network audio and more advanced software mixing/scheduling.
I do not know much about PulseAudio's implementation of A2DP (if any); however, this is not something that ALSA is designed to support.
J
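To make the low-level path described above concrete, here is a minimal sketch that writes one second of silence straight to an ALSA PCM device, with no sound server involved; the "default" device name and the stream parameters are just illustrative choices:

    /* Build with: gcc demo.c -o demo -lasound */
    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_pcm_t *pcm;
        static short buf[48000 * 2];  /* one second of stereo silence */

        /* Open the playback PCM device directly (the ALSA "low level"). */
        if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return 1;

        /* 48 kHz, stereo, S16LE, up to 0.5 s of device buffering. */
        if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                               SND_PCM_ACCESS_RW_INTERLEAVED,
                               2, 48000, 1, 500000) < 0)
            return 1;

        snd_pcm_writei(pcm, buf, 48000);  /* counted in frames, not bytes */
        snd_pcm_drain(pcm);
        snd_pcm_close(pcm);
        return 0;
    }

On a desktop with PulseAudio installed, "default" is usually an alias for the Pulse ALSA plugin, so even this "direct" path may end up going through the server.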

Related

Multiple Audio codecs in ALSA

I have a query regarding ALSA codec registration. I need to connect two audio codecs to the two I2S interfaces of the host, one codec per interface.
Can someone point me to a link or a reference implementation for this?
Thanks
For the configuration side, you can refer to this link:
https://unix.stackexchange.com/questions/293450/alsa-send-audio-to-two-audio-devices. But for that, your I2S driver should support multiple devices.
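In case the link goes stale, here is a hedged sketch of the kind of configuration that answer describes, built from ALSA's plug, route and multi plugins; "hw:0,0" and "hw:1,0" are placeholders for your two I2S-attached cards:

    # Hypothetical ~/.asoundrc: duplicate stereo playback to two devices.
    pcm.!default {
        type plug
        slave.pcm "both"
    }

    pcm.both {
        type route
        slave.pcm {
            type multi
            slaves.a.pcm "hw:0,0"
            slaves.a.channels 2
            slaves.b.pcm "hw:1,0"
            slaves.b.channels 2
            bindings.0.slave a
            bindings.0.channel 0
            bindings.1.slave a
            bindings.1.channel 1
            bindings.2.slave b
            bindings.2.channel 0
            bindings.3.slave b
            bindings.3.channel 1
        }
        # send the stereo input to both channel pairs of the multi slave
        ttable.0.0 1
        ttable.1.1 1
        ttable.0.2 1
        ttable.1.3 1
    }

With this in place, anything playing to the default device is duplicated on both cards.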

Can anyone explain how voice commands work via a Bluetooth remote (Nexus Player remote) in Android (Nexus Player)?

Can anyone please elaborate on the following questions?
How does the Bluetooth stack handle audio data?
How are audio commands processed?
Do we need any service to process the audio data?
Thanks in advance.
Basically, voice commands over BLE require:
some audio codec to reduce the required bandwidth (ADPCM and SBC are common, Opus is emerging),
some method of streaming audio through BLE,
decoding, and getting the audio stream from the BLE daemon into a command processing framework.
In the Android world, the command processing framework is Google sauce (closed source) that most easily gets its audio from an ALSA device. What is left to be done is getting the audio from the remote into an ALSA device.
So for audio streaming, either you:
use a custom L2CAP channel or a custom GATT service; this requires a custom Android service app and/or modifications to Bluedroid to handle it, plus a way to inject the audio stream into ALSA, most probably via a "loop" audio device driver,
or declare the audio as custom HID reports; this way Bluedroid injects them back into the kernel, and a custom HID driver then processes those reports and exposes an audio device.
Audio over BLE is not standard, so implementations do not all do the same thing. In the Nexus Player's case, the implementation uses HID: it streams ADPCM audio, chunked into HID reports. A special HID driver, "hid-atv-remote.c" in the Android Linux kernel, exposes an ALSA device in addition to the input device. Bluedroid has no knowledge of the audio; all it does is forward HID reports from BLE to UHID. A sketch of the ADPCM decoding step follows.
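For illustration, here is a decoder for a single 4-bit nibble of standard IMA ADPCM, the classic form of the codec named above. The exact ADPCM variant and report framing used by the remote are assumptions here, so treat this as a sketch of the technique, not the hid-atv-remote.c implementation:

    #include <stdint.h>

    /* Standard IMA ADPCM tables. */
    static const int8_t index_adjust[16] = {
        -1, -1, -1, -1, 2, 4, 6, 8,
        -1, -1, -1, -1, 2, 4, 6, 8,
    };

    static const int16_t step_table[89] = {
            7,     8,     9,    10,    11,    12,    13,    14,    16,    17,
           19,    21,    23,    25,    28,    31,    34,    37,    41,    45,
           50,    55,    60,    66,    73,    80,    88,    97,   107,   118,
          130,   143,   157,   173,   190,   209,   230,   253,   279,   307,
          337,   371,   408,   449,   494,   544,   598,   658,   724,   796,
          876,   963,  1060,  1166,  1282,  1411,  1552,  1707,  1878,  2066,
         2272,  2499,  2749,  3024,  3327,  3660,  4026,  4428,  4871,  5358,
         5894,  6484,  7132,  7845,  8630,  9493, 10442, 11487, 12635, 13899,
        15289, 16818, 18500, 20350, 22385, 24623, 27086, 29794, 32767,
    };

    struct ima_state {
        int16_t predictor;   /* last decoded sample */
        int8_t  step_index;  /* position in step_table */
    };

    /* Decode one 4-bit ADPCM nibble into a 16-bit PCM sample. */
    static int16_t ima_decode_nibble(struct ima_state *st, uint8_t nibble)
    {
        int step = step_table[st->step_index];
        int diff = step >> 3;
        int predictor, idx;

        if (nibble & 1) diff += step >> 2;
        if (nibble & 2) diff += step >> 1;
        if (nibble & 4) diff += step;
        if (nibble & 8) diff = -diff;

        predictor = st->predictor + diff;
        if (predictor > 32767)  predictor = 32767;
        if (predictor < -32768) predictor = -32768;
        st->predictor = (int16_t)predictor;

        idx = st->step_index + index_adjust[nibble & 0x0f];
        if (idx < 0)  idx = 0;
        if (idx > 88) idx = 88;
        st->step_index = (int8_t)idx;

        return st->predictor;
    }

A driver in the HID-report scheme would split each report payload into nibbles, run them through a decoder like this, and feed the resulting 16-bit samples to the ALSA device it exposes.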

Is the sound system in Linux layered like the OSI model?

I'm new to Linux and especially to its sound system. I've read many articles on this subject, but I'm still confused. I know that ALSA provides audio functionality to the rest of the system. This means ALSA is the lowest "layer" of the sound system (after the hardware itself). I also know that ALSA by itself can only handle one application at a time. So here are my questions:
1) Is PulseAudio a bridge that lets multiple apps use ALSA?
2) Are GStreamer, Phonon and Xine the same kind of bridge programs as PulseAudio?
3) Does ALSA convert the analogue signal into a digital signal?
My questions may seem stupid. Thank you.
The OSI model isn't really a good fit for ALSA, as ALSA really only provides layer 1.
PulseAudio is an audio server and is the single client of an ALSA device interface. It provides something analogous to layer 7 of the OSI model to applications. It mixes the audio output streams from each client application connection down to a single stream for output. It also provides an ALSA-compatible interface to audio client software (e.g. GStreamer and Xine) which acts as a proxy and connects to the audio server.
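To make that client/server relationship concrete, here is a minimal PulseAudio client using the pa_simple API (build with gcc demo.c $(pkg-config --cflags --libs libpulse-simple)); every application opens a stream like this, and the server mixes them all down to the one ALSA device it holds:

    #include <string.h>
    #include <pulse/simple.h>

    int main(void)
    {
        static const pa_sample_spec spec = {
            .format   = PA_SAMPLE_S16LE,
            .rate     = 44100,
            .channels = 2,
        };
        static short buf[44100 * 2];  /* one second of stereo silence */
        int err;

        /* Connect to the default server and sink as a playback stream. */
        pa_simple *s = pa_simple_new(NULL, "demo", PA_STREAM_PLAYBACK,
                                     NULL, "playback", &spec,
                                     NULL, NULL, &err);
        if (!s)
            return 1;

        memset(buf, 0, sizeof buf);
        pa_simple_write(s, buf, sizeof buf, &err);  /* counted in bytes */

        pa_simple_drain(s, &err);
        pa_simple_free(s);
        return 0;
    }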
Analogue-to-digital (and digital-to-analogue) conversion takes place in hardware, in what is referred to, rather confusingly, as a CoDec.

ALSA driver for USB modem external audio codec on SoC

I have a USB modem which exports a PCM interface, fed to an I2C audio codec.
The codec is supported as a SoC ALSA codec, and I'm developing a driver to manage the sound levels through ALSA mixers.
I think I have two options:
either create a dummy SoC sound card with the codec as an aux device (snd_soc_aux_dev). The codec's configuration is fixed in the init() function, and ALSA manages only the levels, not the PCM interface. This way I'm not using all the functions already implemented in the codec's driver to set clocks, rates and formats;
or create a modem sound card which exports a DAI with the correct rate and format parameters. This way I can use the codec driver's implementation of all those functions.
Where should I put this kind of driver: as an extension of the USB driver, or as an SoC one?
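For orientation, here is a rough sketch of the second option as a minimal ASoC machine driver. Every name below is a placeholder, and the string-based dai_link fields follow the classic ASoC API; recent kernels describe the endpoints with snd_soc_dai_link_component instead:

    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <sound/soc.h>

    static struct snd_soc_dai_link modem_dai_link = {
        .name           = "modem-pcm",
        .stream_name    = "Modem PCM",
        .cpu_dai_name   = "modem-cpu-dai",  /* DAI fed by the USB modem */
        .codec_name     = "codec.1-001a",   /* the I2C codec device */
        .codec_dai_name = "codec-hifi",
        .dai_fmt        = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_CBS_CFS,
    };

    static struct snd_soc_card modem_card = {
        .name      = "modem-audio",
        .owner     = THIS_MODULE,
        .dai_link  = &modem_dai_link,
        .num_links = 1,
    };

    static int modem_audio_probe(struct platform_device *pdev)
    {
        modem_card.dev = &pdev->dev;
        return devm_snd_soc_register_card(&pdev->dev, &modem_card);
    }

    static struct platform_driver modem_audio_driver = {
        .driver = { .name = "modem-audio" },
        .probe  = modem_audio_probe,
    };
    module_platform_driver(modem_audio_driver);

    MODULE_LICENSE("GPL");

Registering a real DAI link like this lets the codec driver's clock, rate and format callbacks run, which is what the first (aux-device) option gives up.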

Spotify Streaming - Wireless Bluetooth Codec

As I understand it, streaming over Bluetooth is handled by the A2DP profile. While the SBC codec is the default, A2DP also supports AAC, MP3, and a few other codecs.
My question is: since Spotify files are in the Ogg Vorbis format (Ogg container, Vorbis codec), what is the best way to handle streaming via Bluetooth without quality loss? Is there a specific A2DP implementation? Are folks like Jambox etc. just using the SBC implementation?
Spotify's streaming format is an implementation detail hidden from all clients; assuming that it is Ogg Vorbis is not something you should do, and in some circumstances it is actually a false assumption.
Since you've managed to use every single Spotify tag in your question, I don't know which platform you're developing for. However, the correct thing to do is to take the PCM data the Spotify playback library gives you and feed it to whatever playback stack your target platform provides. On Android, iOS, macOS, etc., the system will handle audio output devices for you, including Bluetooth streaming.
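To make that concrete, here is a sketch shaped like the old libspotify C API's music_delivery callback. libspotify is long deprecated, the sp_audioformat layout here is only an approximation of its header, and enqueue_pcm is a hypothetical stand-in for your platform's audio output path:

    #include <stddef.h>
    #include <stdint.h>

    /* Shapes mirroring the deprecated libspotify C API (normally in api.h). */
    typedef struct sp_session sp_session;
    typedef struct {
        int sample_type;  /* 16-bit native-endian PCM in practice */
        int sample_rate;  /* e.g. 44100 */
        int channels;     /* e.g. 2 */
    } sp_audioformat;

    /* Hypothetical stand-in for the platform audio stack
     * (AudioTrack, CoreAudio, PulseAudio, ...). */
    static int enqueue_pcm(const void *frames, size_t bytes)
    {
        (void)frames; (void)bytes;
        return 1;  /* pretend everything was queued */
    }

    /* The library hands you raw PCM here; return the frames consumed.
     * A2DP routing and re-encoding (SBC/AAC/...) happen in the OS audio
     * stack, never in this callback. */
    static int music_delivery(sp_session *session,
                              const sp_audioformat *format,
                              const void *frames, int num_frames)
    {
        (void)session;
        size_t bytes = (size_t)num_frames * format->channels * sizeof(int16_t);
        return enqueue_pcm(frames, bytes) ? num_frames : 0;
    }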
