Set volume of an ALSA stream - Linux

I need to be able to set the volume of my ALSA stream (a snd_pcm_t from the PCM interface). This is a common operation, and I don't understand why there is no easy way to do it. How can I do this? Streaming pre-attenuated data is not an option, since the data is buffered and a volume change would only take effect once the buffer drains, making adjustments choppy. In DirectSound and WinMM it's a simple function call. Have I missed something? Should I use the mixer interface? The control interface? I see no connection between snd_pcm_t and the control interface. Am I using the wrong API?
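Since the question raises the mixer interface: card-level volume can indeed be set through ALSA's simple mixer API, although it adjusts a mixer element on the card rather than the individual snd_pcm_t. A minimal sketch, assuming the "default" card and a "Master" element (both assumptions; a true per-stream control needs the softvol plugin, which comes up further down):

    #include <alsa/asoundlib.h>

    /* Sketch: set a card-level playback volume via the simple mixer API.
     * "default" and "Master" are assumptions; substitute the element the
     * softvol plugin creates if you want a per-stream control. */
    static int set_volume_percent(const char *card, const char *elem_name,
                                  long percent)
    {
        snd_mixer_t *mixer;
        snd_mixer_selem_id_t *sid;
        snd_mixer_elem_t *elem;
        long min, max;

        if (snd_mixer_open(&mixer, 0) < 0)
            return -1;
        snd_mixer_attach(mixer, card);
        snd_mixer_selem_register(mixer, NULL, NULL);
        snd_mixer_load(mixer);

        snd_mixer_selem_id_alloca(&sid);
        snd_mixer_selem_id_set_index(sid, 0);
        snd_mixer_selem_id_set_name(sid, elem_name);
        elem = snd_mixer_find_selem(mixer, sid);
        if (!elem) {
            snd_mixer_close(mixer);
            return -1;
        }

        /* Map the requested percentage onto the element's raw range. */
        snd_mixer_selem_get_playback_volume_range(elem, &min, &max);
        snd_mixer_selem_set_playback_volume_all(
            elem, min + (max - min) * percent / 100);

        snd_mixer_close(mixer);
        return 0;
    }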

I am porting to PulseAudio instead; it seems to perform well and has better documentation.
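For comparison, once the usual PulseAudio async boilerplate is in place, per-stream volume is a single call. A minimal sketch, not a complete program: ctx and stream are assumed to be an already-connected pa_context and an already-created playback pa_stream.

    #include <pulse/pulseaudio.h>

    /* Sketch: scale one stream's volume. A "factor" of 1.0 is normal
     * (100%) volume; ctx and stream are assumed to be set up already. */
    static void set_stream_volume(pa_context *ctx, pa_stream *stream,
                                  double factor)
    {
        pa_cvolume vol;
        const pa_sample_spec *spec = pa_stream_get_sample_spec(stream);

        /* Same volume on every channel of the stream. */
        pa_cvolume_set(&vol, spec->channels,
                       (pa_volume_t)(PA_VOLUME_NORM * factor));

        pa_operation *op = pa_context_set_sink_input_volume(
            ctx, pa_stream_get_index(stream), &vol, NULL, NULL);
        if (op)
            pa_operation_unref(op);
    }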

Related

Alternative to ALSA dmix

In an embedded Linux project I have exactly two processes that need to access the audio device. So far I'm using ALSA dmix for that. However, dmix is giving me a lot of trouble (as explained in this question).
Now I'm wondering: are there any simple alternatives to dmix? I can imagine that PulseAudio does a much better job, but I'm not sure whether bringing a general-purpose sound server into a small embedded project, just to mix two audio streams, isn't overkill.

ALSA individual PCM volume control

I'm playing PCM sounds simultaneously using the ALSA device "plug:dmix" (one call to snd_pcm_open() for each sound), but I can't find a way to control each sound's volume separately (like DirectSound does). I've only managed to set the Master and PCM volumes using the simple mixer interface. Is there a way to do this using ALSA's C API? I know how to do it by hand, but I'd prefer to use the library API if it is possible at all. I've already searched the documentation and related questions but can't find an answer.
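The softvol plugin is the usual way to get exactly this: layered over dmix, it creates one mixer control per PCM definition, which can then be set with the same simple mixer API used for Master and PCM. A minimal ~/.asoundrc sketch (the PCM and control names here are illustrative):

    # Opening "sound1" plays through dmix, and a "Sound1 Volume" mixer
    # control appears that attenuates only this stream. Define one such
    # PCM per independently controlled sound.
    pcm.sound1 {
        type softvol
        slave.pcm "plug:dmix"
        control {
            name "Sound1 Volume"
            card 0
        }
    }

Note that softvol creates its control the first time the PCM is opened, so it won't show up in the mixer until then.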

Is the sound system in Linux a layered system like the OSI model?

I'm new to Linux and especially to its sound system. I've read many articles on the subject but I'm still confused. I know that ALSA provides audio functionality to the rest of the system, which means that ALSA is the lowest "layer" of the sound system (just above the hardware itself). I also know that ALSA by itself can only handle one application at a time. So here are my questions:
1) Is PulseAudio a bridge that lets multiple apps use ALSA?
2) Are GStreamer, Phonon and Xine bridge programs in the same sense as PulseAudio?
3) Does ALSA convert the analog signal into a digital signal?
My questions may seem stupid. Thank you.
The OSI model isn't really a good fit for ALSA, as ALSA only provides layer 1.
PulseAudio is an audio server and is the single client of an ALSA device interface. It provides something analogous to layer 7 of the OSI model to applications: it mixes the audio output streams from each client application connection down to a single stream for output, and it provides an ALSA-compatible interface to audio client software (e.g. GStreamer and Xine) which acts as a proxy and connects to the audio server.
Analogue-to-digital (and digital-to-analogue) conversion takes place in hardware, in what is referred to, rather confusingly, as a CoDec.

Multiple sound streams with configurable volume levels using the ALSA lib

I would like to use the ALSA library to play multiple sound streams, with each stream having its own adjustable volume level. I'd like to avoid higher-level abstractions like PulseAudio, since this is to be used on an ARM target board with single-channel output, and I'd like to avoid compiling PulseAudio and the issues associated with it. Please suggest the possible ways such an implementation could be done. Any guidance on the use of the ALSA plugins dmix/softvol is welcome.
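One possible shape for this, assuming softvol PCMs named "sound1", "sound2", ... have been defined over dmix as in the configuration sketched under the previous question: each stream is opened on its own softvol device, and its volume is adjusted through the mixer control that softvol creates. A playback-side sketch:

    #include <alsa/asoundlib.h>

    /* Sketch: play a buffer of 48 kHz mono S16_LE samples on one of the
     * softvol devices ("sound1", "sound2", ...); the device names are
     * assumptions and must match the asound.conf definitions. */
    static int play_on(const char *device, const short *samples,
                       snd_pcm_uframes_t nframes)
    {
        snd_pcm_t *pcm;
        int err = snd_pcm_open(&pcm, device, SND_PCM_STREAM_PLAYBACK, 0);
        if (err < 0)
            return err;

        /* Mono, 48 kHz, allow resampling, 0.5 s of buffering. */
        err = snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                                 SND_PCM_ACCESS_RW_INTERLEAVED,
                                 1, 48000, 1, 500000);
        if (err < 0) {
            snd_pcm_close(pcm);
            return err;
        }

        snd_pcm_sframes_t n = snd_pcm_writei(pcm, samples, nframes);
        if (n < 0)
            n = snd_pcm_recover(pcm, (int)n, 0);

        snd_pcm_drain(pcm);
        snd_pcm_close(pcm);
        return n < 0 ? (int)n : 0;
    }

Call this from one thread per stream (or use poll-based I/O) so the sounds play simultaneously; dmix mixes them behind the scenes.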

Manipulating the audio input buffer on Ubuntu Linux

Suppose that I want to write an audio filter in C++ that is applied to all audio, or to a specific microphone/source. Where should I start with this on Ubuntu?
Edit: to be clear, I don't understand how to do this, or what the roles of PulseAudio, ALSA and GStreamer are.
ALSA provides an API for accessing and controlling audio and MIDI hardware. One portion of ALSA is a series of kernel-mode device drivers, whilst the other is a user-space library that applications link against. ALSA is single-client.
PulseAudio is a framework that facilitates multiple client applications accessing a single audio interface (recall that ALSA is single-client). It provides a daemon process which 'owns' the audio interface and provides an IPC transport for audio between the daemon and the applications using it. This is used heavily in open-source desktop environments. Use of Pulse is largely transparent to applications: they continue to access audio input and output using the ALSA API, with audio transport and mixing handled by the daemon. There is also JACK, which is targeted more towards 'professional' audio applications (perhaps a bit of a misnomer; what is meant here is low-latency music production tools).
GStreamer is a general-purpose multimedia framework based on the signal-graph pattern, in which components have a number of input and output pins and provide a transformation function. A graph of these components is built to implement operations such as media decoding, with special nodes for audio and video input or output. It is similar in concept to CoreAudio and DirectShow. VLC and libAV are both open-source alternatives that operate along similar lines. Your choice between these is a matter of API style and implementation language: GStreamer, in particular, is an OO API implemented in C, while VLC is C++.
The obvious way of implementing what you describe is to write a GStreamer/libAV/VLC component. If you want to process the audio and then route it to another application, this can be achieved by looping it back through Pulse or JACK.
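As a concrete starting point for the GStreamer route, a pipeline that pulls from the default PulseAudio source, applies a gain, and plays it back can be built in a few lines. A minimal sketch (the stock volume element stands in for whatever filter component you'd write):

    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);

        /* Capture from the default Pulse source, attenuate, play back.
         * A custom filter element would replace "volume" here. */
        GstElement *pipeline = gst_parse_launch(
            "pulsesrc ! audioconvert ! volume volume=0.5 "
            "! audioconvert ! pulsesink", NULL);
        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        /* Block until an error or end-of-stream is posted on the bus. */
        GstBus *bus = gst_element_get_bus(pipeline);
        GstMessage *msg = gst_bus_timed_pop_filtered(
            bus, GST_CLOCK_TIME_NONE,
            (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

        if (msg)
            gst_message_unref(msg);
        gst_object_unref(bus);
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }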
ALSA provides a plug-in mechanism, but I suspect that implementing one from the ALSA documentation alone will be tough going.
The de facto architecture for building effects plug-ins of the type you describe is Steinberg's VST. There are plenty of open-source hosts and examples of plug-ins that can be used on Linux, and, crucially, there is decent documentation. As with a GStreamer/libAV/VLC component, you should be able to route audio in and out of it.
Out of these, VST is probably the easiest to pick up.
