Multiple audio codecs in ALSA - Linux

I have a query regarding ALSA codec registration. I need to connect two audio codecs to the two I2S interfaces of the host; each codec is connected to its own I2S interface.
Can someone point me to a link or reference implementation for this?
Thanks

For editing the configuration and so on, you can refer to this link:
https://unix.stackexchange.com/questions/293450/alsa-send-audio-to-two-audio-devices. But for that to work, your I2S driver must support multiple devices.
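As a rough sketch of what the userspace side can look like, assuming each I2S/codec link registers as its own sound card (the device names hw:0,0 and hw:1,0 below are placeholders; check the output of aplay -l on your board), the two codecs simply become two independent ALSA PCM devices that can be opened separately:

#include <alsa/asoundlib.h>

static snd_pcm_t *open_playback(const char *name)
{
    snd_pcm_t *pcm = NULL;

    if (snd_pcm_open(&pcm, name, SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return NULL;

    /* 48 kHz, stereo, 16-bit, ~0.5 s latency; tune for your codecs */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 48000, 1, 500000) < 0) {
        snd_pcm_close(pcm);
        return NULL;
    }
    return pcm;
}

int main(void)
{
    /* Placeholder device names: first and second I2S-attached codec */
    snd_pcm_t *codec_a = open_playback("hw:0,0");
    snd_pcm_t *codec_b = open_playback("hw:1,0");

    if (!codec_a || !codec_b)
        return 1;

    /* Each handle can now be fed independently with snd_pcm_writei(). */

    snd_pcm_close(codec_a);
    snd_pcm_close(codec_b);
    return 0;
}

The kernel side, i.e. tying each I2S controller to its codec so that both cards show up at all, is handled in the ASoC machine driver or in a device-tree sound node; the simple-audio-card binding is a common starting point for one codec per I2S link.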

Related

Can anyone explain how voice commands work via a Bluetooth remote (Nexus Player remote) in Android (Nexus Player)?

Can anyone please elaborate on the following questions?
How does the Bluetooth stack handle audio data?
How are audio commands processed?
Do we need any service to process the audio data?
Thanks in advance.
Basically, voice commands over BLE require:
some audio codec for reducing the required bandwidth (ADPCM and SBC are common, Opus is emerging),
some audio streaming method through BLE,
decoding, and getting the audio stream from the BLE daemon to a command-processing framework.
In the Android world, the command-processing framework is Google sauce (closed source) that most easily gets its audio from an ALSA device. What is left to be done is getting audio from the remote to an ALSA device.
So for audio streaming, you either:
use a custom L2CAP channel or a custom GATT service; this requires a custom Android service app and/or modifications to Bluedroid to handle it, and it will need a way to inject the audio stream as an ALSA device, most probably via a "loopback" audio device driver, or
declare the audio as custom HID reports; this way Bluedroid injects them back into the kernel, and you then add a custom HID driver that processes these reports and exposes an audio device.
Audio over BLE is not standardized, so implementations do not all do the same thing. In the Nexus Player case, the implementation uses HID: it streams ADPCM audio, chunked into HID reports. There is a special HID driver, "hid-atv-remote.c", in the Android Linux kernel that exposes an ALSA device in addition to the input device. Bluedroid has no knowledge of the audio; all it does is forward HID reports from BLE to UHID.
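On the host side, the last step then reduces to reading from that ALSA capture device. A minimal sketch, assuming the HID driver exposes a capture PCM and using a placeholder card name (list the real one with arecord -l):

#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *cap = NULL;
    short buf[160];                          /* 20 ms at 8 kHz mono */

    /* "hw:CARD=ATVRemote" is a placeholder for the capture device the
     * HID audio driver registers; the real name varies per driver. */
    if (snd_pcm_open(&cap, "hw:CARD=ATVRemote",
                     SND_PCM_STREAM_CAPTURE, 0) < 0)
        return 1;

    /* Remote voice audio is typically 8 or 16 kHz mono after ADPCM decode */
    if (snd_pcm_set_params(cap, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           1, 8000, 1, 500000) < 0)
        return 1;

    for (;;) {
        snd_pcm_sframes_t n = snd_pcm_readi(cap, buf, 160);
        if (n < 0)
            n = snd_pcm_recover(cap, n, 0);  /* ride out overruns */
        if (n < 0)
            break;
        /* hand buf (n frames) to the command-processing framework here */
    }

    snd_pcm_close(cap);
    return 0;
}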

Can PulseAudio/ALSA work without built-in soundcard?

I am new to PulseAudio and ALSA, so please go easy on me. This might seem like a dumb question, but it is quite important to have it answered.
I am developing an application on an ARM i.MX6 board (let's call it BOARD1) with built-in sound card support. With ALSA, I am able to play audio through Headset_OUT. Now we want to move to a new board (let's call it BOARD2), which does not have a built-in sound card. The idea is to connect a Bluetooth module to BOARD2 and stream the audio to a Bluetooth speaker.
My question is: is it possible to use PulseAudio to send/receive audio to an external (Bluetooth) audio device without a local embedded sound card (i.e. is it possible to do the audio encoding/decoding purely in software with a PulseAudio and GStreamer combination)?
Regards

Is the sound system in Linux a layered system like the OSI model?

I'm new to Linux and especially to its sound system. I've read many articles about this subject but I'm still confused. I know that ALSA provides audio functionality to the rest of the system, which means that ALSA is the lowest "layer" in the sound system (after the hardware itself). I also know that ALSA by itself can only handle one application at a time. So here are my questions:
1) Is PulseAudio a bridge that lets multiple apps use ALSA?
2) Are GStreamer, Phonon and Xine the same kind of bridge programs as PulseAudio?
3) Does ALSA convert the analog signal into a digital signal?
My questions may seem stupid. Thank you.
The OSI model isn't really a good fit for ALSA, which really only provides layer 1.
PulseAudio is an audio server and is the single client of an ALSA device interface. It provides something analogous to layer 7 of the OSI model to applications: it mixes the audio output streams from each client application connection down to a single stream for output. It also provides an ALSA-compatible interface for audio client software (e.g. GStreamer and Xine), which acts as a proxy and connects to the audio server.
Analogue to digital (and digital to analogue) conversion takes place in hardware in what is referred to, rather confusingly, as a CoDec.
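To make the layering concrete: a client application can keep talking the plain ALSA API, and on a typical PulseAudio system the "default" PCM resolves to PulseAudio's ALSA plugin, so the stream is proxied to the server and mixed there. A minimal sketch (the tone, rate and latency values are arbitrary choices):

#include <alsa/asoundlib.h>
#include <math.h>

int main(void)
{
    snd_pcm_t *pcm = NULL;
    short buf[1024 * 2];                     /* 1024 stereo frames */

    /* "default" usually resolves to the PulseAudio ALSA plugin when the
     * server is running, so this write path goes app -> server -> card. */
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 44100, 1, 500000) < 0)
        return 1;

    /* one second of a 440 Hz tone, written 1024 frames at a time */
    for (int pos = 0; pos < 44100; pos += 1024) {
        for (int i = 0; i < 1024; i++) {
            short s = (short)(3000 * sin(2 * M_PI * 440.0 * (pos + i) / 44100.0));
            buf[2 * i] = buf[2 * i + 1] = s;
        }
        if (snd_pcm_writei(pcm, buf, 1024) < 0)
            break;
    }

    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}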

Spotify Streaming - Wireless Bluetooth Codec

As I understand it, streaming via Bluetooth is handled via the A2DP profile. While the SBC codec is the default, A2DP also supports AAC, MP3, and a few other codecs.
My question is: since Spotify files are in the Ogg Vorbis format (Ogg container, Vorbis codec), what is the best way to handle streaming via Bluetooth without quality loss? Is there a specific A2DP implementation? Are folks like Jambox, etc. just using the SBC implementation?
Spotify's streaming format is an implementation detail hidden from all clients, and assuming that it's Ogg Vorbis is not something you should do; in some circumstances it is actually a false assumption.
Since you've managed to use every single Spotify tag in your question, I don't know which platform you're developing for. However, the correct thing to do is take the PCM data the Spotify playback library gives you and feed it to whatever playback stack your target platform provides. On Android, iOS, macOS, etc., the system will handle audio output devices for you, including Bluetooth streaming.
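As a sketch of that idea on a Linux/ALSA target (the callback name and signature below are hypothetical, not the actual Spotify playback API; on Android or iOS the same buffers would go to AudioTrack or an AudioQueue instead):

#include <alsa/asoundlib.h>

/* opened elsewhere with snd_pcm_open()/snd_pcm_set_params() to match the
 * stream's sample rate and channel count */
static snd_pcm_t *out;

/* Hypothetical delivery callback: a streaming library hands us interleaved
 * 16-bit PCM frames and expects to be told how many were consumed. */
int on_pcm_delivery(const void *frames, int num_frames)
{
    snd_pcm_sframes_t written = snd_pcm_writei(out, frames, num_frames);

    if (written < 0)
        written = snd_pcm_recover(out, (int)written, 0);

    return written < 0 ? 0 : (int)written;
}

The point is that the Bluetooth re-encoding (SBC, AAC, or whatever the sink negotiates) happens below this layer, inside the platform's audio stack, so the application never deals with the A2DP codec itself.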

ALSA/Pulse Audio

What is the difference between ALSA and PulseAudio?
What is the use of ALSA and PulseAudio as applied to A2DP streaming when my device works as a sink device?
Thanks, Yogesh
ALSA is a driver and API for interfacing directly with the sound device. It allows low-level reading, writing and device control, as well as some MIDI support. PulseAudio is a sound server which can use ALSA to access the sound card, and it also provides network audio and more advanced software mixing/scheduling.
I do not know much about PulseAudio's implementation of A2DP (if any); however, this is not something that ALSA is designed to support.
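As a small illustration of the split, here is a minimal sketch using PulseAudio's "simple" API: the application only hands samples to the sound server, and the server decides which sink they end up on (a local ALSA card, the network, or, with the Bluetooth modules loaded, an A2DP device).

#include <pulse/simple.h>
#include <pulse/error.h>
#include <string.h>

int main(void)
{
    static const pa_sample_spec spec = {
        .format   = PA_SAMPLE_S16LE,
        .rate     = 44100,
        .channels = 2,
    };
    static short silence[44100 * 2];        /* one second of stereo silence */
    int error = 0;

    /* NULL device means "whatever the default sink is right now"; the
     * server, not the app, picks and drives the actual output device. */
    pa_simple *s = pa_simple_new(NULL, "demo-app", PA_STREAM_PLAYBACK,
                                 NULL, "playback", &spec, NULL, NULL, &error);
    if (!s)
        return 1;

    memset(silence, 0, sizeof(silence));
    if (pa_simple_write(s, silence, sizeof(silence), &error) < 0) {
        pa_simple_free(s);
        return 1;
    }

    pa_simple_drain(s, &error);
    pa_simple_free(s);
    return 0;
}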
J
