What is pro-audio in the Linux sound architecture?

I am learning about PipeWire. In some of the articles, I saw that PipeWire brings together audio, video and pro-audio under Linux. What is pro-audio here?

PipeWire's Pro Audio setting is a mode of PipeWire in which direct access to the ALSA nodes is granted, making it possible to define properties such as latency and more.
It is made with professional use in mind, where low latency is extremely important: studio recording or other forms of audio monitoring.
To check the current latency settings, use the pw-top command; to learn more about the nodes, use pw-cli ls Node or pw-dump <node-id>, where the node id can be found with pw-cli.
The QUANT column in pw-top is the current buffer size (quantum, and therefore latency) of each listed node.
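For example, assuming a recent PipeWire installation that ships the pw-metadata tool, you can force a smaller global quantum and watch the QUANT column change in pw-top (the numbers are only illustrative; how low you can go without dropouts depends on your hardware):
pw-metadata -n settings 0 clock.force-quantum 128   # request 128-sample buffers
pw-top                                              # watch QUANT/RATE per node
pw-metadata -n settings 0 clock.force-quantum 0     # return to the dynamic default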

Related

What is the simplest way to implement small group, low latency, one-to-many audio broadcast

I have a Linode server and need to broadcast one-to-many audio (they can hear but cannot talk back) to a group of three to five people. I looked at WebRTC and the Janus server, but it seems like complete overkill. Using commercial applications like Skype, Discord etc. results in low audio quality, and it is mono. The best possible audio quality and low latency (on a par with that of Skype, Discord etc.) are essential.
Any pointers would be greatly appreciated.
I can recommend building such a system based on Icecast streaming. It's an old, proven technology with latency close to real time.
You could use any set of Icecast-enabled tools for that.
As an example, here's what you can do with tools by our company:
Larix Broadcaster mobile app allows streaming in audio-only mode.
Nimble Streamer software media server can take Larix's input and produce an Icecast stream. You can use any Icecast-enabled server here instead.
SLDP Player can play the Icecast stream produced by Nimble Streamer or any other Icecast-enabled server.
That can also be built with other companies' products, so you can pick the right tools yourself.
A super simple setup would be to just use the command-line tool ffmpeg (it also has an API); see the doc at https://trac.ffmpeg.org/wiki/ffserver
On the machine where your source audio lives, just launch either ffmpeg or ffserver:
ffserver -f /etc/ffserver.conf
In that config, put the location of the source audio and the output URL it will publish to ... then your client receivers can use ffplay with
ffplay <stream URL>
ffmpeg is a free, open-source industry workhorse for audio/video manipulation ... it's the underlying technology that several more visible tools like VLC use under the covers.
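Note that ffserver has since been removed from FFmpeg itself, so on a current system the simplest equivalent is to let ffmpeg push straight to an Icecast mount point. A rough sketch (the host name, mount name and source password are placeholders to replace with your own):
ffmpeg -re -i source.wav -c:a libopus -b:a 128k -f ogg -content_type application/ogg icecast://source:hackme@your.server:8000/live.ogg
Each listener then just plays the mount, e.g. ffplay http://your.server:8000/live.ogg, or any other player that understands Icecast streams.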

manipulating audio input buffer on Ubuntu Linux

Suppose that I want to code an audio filter in C++ that is applied to all audio or to a specific microphone/source; where should I start with this on Ubuntu?
Edit: to be clear, I don't understand how to do this or what the roles of PulseAudio, ALSA and GStreamer are.
ALSA provides an API for accessing and controlling audio and MIDI hardware. One portion of ALSA is a series of kernel-mode device drivers, whilst the other is a user-space library that applications link against. ALSA is single-client.
PulseAudio is a framework that facilitates multiple client applications accessing a single audio interface (ALSA being single-client). It provides a daemon process which 'owns' the audio interface and provides an IPC transport for audio between the daemon and the applications using it. This is used heavily in open-source desktop environments. Use of Pulse is largely transparent to applications - they continue to access the audio input and output using the ALSA API, with the audio transport and mixing handled by the daemon. There is also JACK, which is targeted more towards 'professional' audio applications - perhaps a bit of a misnomer, although what is meant here is low-latency music production tools.
GStreamer is a general-purpose multimedia framework based on the signal-graph pattern, in which components have a number of input and output pins and provide a transformation function. A graph of these components is built to implement operations such as media decoding, with special nodes for audio and video input or output. It is similar in concept to CoreAudio and DirectShow. VLC and libAV are both open-source alternatives that operate along similar lines. Your choice between these is a matter of API style and implementation language. GStreamer, in particular, is an OO API implemented in C. VLC is C++.
The obvious way of implementing the problem you describe is to implement a GStreamer/libAV/VLC component. If you want to process the audio and then route it to another application, this can be achieved by looping it back through Pulse or JACK.
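To make the 'loop it back through Pulse' idea concrete, here is a minimal sketch using the PulseAudio 'simple' API (assuming the libpulse-dev package is installed; a real filter would use the asynchronous API or a GStreamer element to keep latency down, and you should wear headphones to avoid feedback). It captures the default source, applies a trivial gain reduction as the 'filter', and writes the result to the default sink:
// build: g++ filter.cpp -o filter $(pkg-config --cflags --libs libpulse-simple)
#include <pulse/simple.h>
#include <pulse/error.h>
#include <cstdio>
#include <cstdint>
#include <vector>

int main() {
    pa_sample_spec ss{};
    ss.format   = PA_SAMPLE_S16LE;
    ss.rate     = 44100;
    ss.channels = 2;

    int err = 0;
    pa_simple* in  = pa_simple_new(nullptr, "filter", PA_STREAM_RECORD,   nullptr, "capture",  &ss, nullptr, nullptr, &err);
    pa_simple* out = pa_simple_new(nullptr, "filter", PA_STREAM_PLAYBACK, nullptr, "playback", &ss, nullptr, nullptr, &err);
    if (!in || !out) { std::fprintf(stderr, "pa_simple_new: %s\n", pa_strerror(err)); return 1; }

    std::vector<int16_t> buf(1024);
    for (;;) {
        if (pa_simple_read(in, buf.data(), buf.size() * sizeof(int16_t), &err) < 0) break;
        for (auto& s : buf)                       // the "filter": halve the volume
            s = static_cast<int16_t>(s / 2);
        if (pa_simple_write(out, buf.data(), buf.size() * sizeof(int16_t), &err) < 0) break;
    }
    pa_simple_free(in);
    pa_simple_free(out);
    return 0;
}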
ALSA provides a plug-in mechanism, but I suspect that implementing this from the ALSA documentation will be tough going.
The de facto architecture for building effects plug-ins of the type you describe is Steinberg's VST. There are plenty of open-source hosts and examples of plug-ins that can be used on Linux, and crucially, there is decent documentation. As with GStreamer/libAV/VLC, you would be able to route audio in and out of this.
Out of these, VST is probably the easiest to pick up.

Audio hooking or a custom audio driver for audio processing and routing to the default audio device

I have developed a pretty complex piece of audio software for my client, with plugins for Winamp, Windows Media Player and VST. Now the client is interested in some method to avoid maintaining the multitude of plugins; we have no way to support all the media players out there.
The client does not care about Unix/Mac yet, so I can look only at Windows XP and Vista/7.
Basically, what we need is a way to always reliably intercept as many audio stream protocols as possible (well, except maybe ASIO, that's another story, I guess), then pass this audio through our custom effects engine and then route it back to the default audio device, whatever it is.
Now I am thinking, what options do I have (theoretically).
I could use hooks. I would need to globally hook the older waveOut API and also DirectSound.
But will this still work on Vista/7?
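For reference, here is roughly what I have in mind for the hooking route. It is only a sketch, assuming the Microsoft Detours library, with ProcessBuffer standing in for our effects engine; to be 'global' the DLL would still have to be injected into every player process, and DirectSound would additionally need its COM interfaces hooked:
// hook DLL sketch - link against detours.lib and winmm.lib
#include <windows.h>
#include <mmsystem.h>
#include <detours.h>

void ProcessBuffer(char* data, DWORD bytes);   // hypothetical entry point into our effects engine

static MMRESULT (WINAPI* RealWaveOutWrite)(HWAVEOUT, LPWAVEHDR, UINT) = waveOutWrite;

static MMRESULT WINAPI HookedWaveOutWrite(HWAVEOUT hwo, LPWAVEHDR hdr, UINT cbwh) {
    ProcessBuffer(hdr->lpData, hdr->dwBufferLength);   // the PCM that is about to be played
    return RealWaveOutWrite(hwo, hdr, cbwh);
}

BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID) {
    if (reason == DLL_PROCESS_ATTACH) {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach(&(PVOID&)RealWaveOutWrite, HookedWaveOutWrite);
        DetourTransactionCommit();
    }
    return TRUE;                                       // a real hook also detaches on DLL_PROCESS_DETACH
}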
I could use a virtual driver, like the author of the Virtual Audio Cable did:
http://software.muzychenko.net/eng/vac.htm
Seems a pretty daunting task. Anyway, the client will contact the author of VAC to see if he agrees to sell his source code for a reasonable price.
This driver could install itself as the default audio output device, intercept the audio stream from Windows, and pass it back to the default device. Hmm, but what about the various DirectSound audio buffers - do I have to mix them myself, or is there any way I could tell the Windows mixer to mix them all for me and pass me a single mixed audio stream?
It seems this custom driver will of course kill all hardware audio acceleration, but we can live with that if we warn our customers about this issue.
As I understand it, the most current Windows driver standard is WDF.
But maybe it does not work for audio on Windows Vista/7?
I know, Vista/7 has a different audio stack from XP.
If I can do it using WDF, what driver should I write - kernel mode or user mode?
Maybe I am missing more elegant and simple options to intercept, process and route audio on Windows?
Try the Virtual Audio Streaming SDK. It is also a virtual sound card and lets you read/process audio data in real time.
http://www.virtualaudiostreaming.net/sdk-license.html

Low level audio programming

I wonder: does audio software like Cubase and Audacity use PlaySound calls?
Where can I learn about low-level audio programming? From what I've found on the web, MCI seems to be the lowest-level audio API in Windows...
Thanks
Edit: I'm not asking for information specific to Windows only.
There are several audio APIs to choose from. The oldest and most widely supported is the waveOut API - look for functions starting with waveOut in MSDN.
A slightly newer one is DirectSound, which is geared more towards games; its main feature over waveOut is positional 3D sound, which professional audio software doesn't use (it was also supposed to have lower latency than waveOut, but that never really materialized).
For low-latency audio, there is ASIO. Professional audio apps support this API, but not all drivers do (it's a standard feature in professional sound cards, but not gaming or on-board hardware). ASIO can provide much lower latency than waveOut or DirectSound.
Finally, there's the kernel streaming interface, which is the lowest-level audio interface still accessible from user-mode code. This is a direct pipe into Windows's internal mixer, which combines the output from all apps that are currently playing sound into the signal that gets sent to the sound card. It's scarcely documented, though. There's a driver called ASIO4ALL (just google it) that provides ASIO support on sound cards without ASIO drivers by implementing the ASIO API on top of the kernel streaming interface.
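As an illustration of the waveOut route, playing one second of a sine tone looks roughly like this (a minimal sketch, not how Cubase does it: real software keeps several buffers queued and refills them from a callback instead of sleeping):
// build with a Windows compiler and link against winmm.lib
#include <windows.h>
#include <mmsystem.h>
#include <vector>
#include <cmath>

int main() {
    const int rate = 44100;
    WAVEFORMATEX fmt{};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 1;
    fmt.nSamplesPerSec  = rate;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    // one second of a 440 Hz sine wave
    std::vector<short> samples(rate);
    for (int i = 0; i < rate; ++i)
        samples[i] = static_cast<short>(30000 * std::sin(2.0 * 3.14159265358979 * 440.0 * i / rate));

    HWAVEOUT dev = nullptr;
    waveOutOpen(&dev, WAVE_MAPPER, &fmt, 0, 0, CALLBACK_NULL);

    WAVEHDR hdr{};
    hdr.lpData         = reinterpret_cast<LPSTR>(samples.data());
    hdr.dwBufferLength = static_cast<DWORD>(samples.size() * sizeof(short));
    waveOutPrepareHeader(dev, &hdr, sizeof(hdr));
    waveOutWrite(dev, &hdr, sizeof(hdr));   // queue the buffer for playback

    Sleep(1200);                            // crude: wait for playback to finish
    waveOutUnprepareHeader(dev, &hdr, sizeof(hdr));
    waveOutClose(dev);
    return 0;
}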
I'm a little late to the game here, but I posted a Windows API history last week that might add a little more context. The choice of API really depends on your needs. If you want to avoid 3rd party libraries, it really only comes down to MME, XAudio2, and Core Audio (WASAPI).
A Brief History of Windows Audio APIs
Hope this helps!
Actually, if you are looking for more than Windows-only output support, then the best way to start is to review Phil Burk's PortAudio, available as of this writing at http://www.portaudio.com/ .
ASIO is a good quality interface, but it's proprietary and owned by Steinberg.
There are many lower-level interfaces to audio output than MCI in modern Windows. These include, at least, DirectSound, XAudio and WASAPI.
I recommend avoiding the Windows APIs as much as possible, and learning PortAudio instead.

Best cross-platform audio library for synchronizing audio playback

I'm writing a cross-platform program that involves scrolling a waveform along with uncompressed wav/aiff audio playback. Low latency and accuracy are pretty important. What is the best cross-platform audio library for audio playback when synchronizing to an external clock? By that I mean that I would like to be able to write the playback code so it sends events to a listener many times per second that include the "hearing frame" at the moment of the notification.
That's all I need to do. No recording, no mixing, no 3d audio, nothing. Just playback with the best possible hearing frame notifications available.
Right now I am considering RtAudio and PortAudio, mostly the former since it uses ALSA.
The target platforms, in order of importance, are Mac OSX 10.5/6, Ubuntu 10.11, Windows XP/7.
C/C++ are both fine.
Thanks for your help!
The best-performing cross-platform library for this is JACK. Properly configured, JACK on Linux can easily outperform Windows ASIO (in terms of low-latency processing without dropouts). But you cannot expect normal users to use JACK (the daemon has to be started by the user before the app is started, and it can be a bit tricky to set up). If you are making an app specifically for pro-audio, I would highly recommend looking into JACK.
Edit:
PortAudio is not as high-performance, but it is much simpler for the user (no special configuration should be needed on their end, unlike JACK). Most open-source cross-platform audio programs that I have used use PortAudio (much more so than OpenAL), but unlike JACK I have not used it personally. It is callback based, and looks pretty straightforward, though.
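For the "hearing frame" requirement specifically, PortAudio's callback hands you a PaStreamCallbackTimeInfo whose outputBufferDacTime says when the first sample of that buffer will actually reach the DAC; combined with Pa_GetStreamTime() on the GUI thread, you can estimate which source frame is being heard at any moment. A rough sketch of the idea (it outputs silence in place of real wav/aiff data, and error checking is omitted):
#include <portaudio.h>
#include <atomic>
#include <cstdio>

// Shared between the audio callback and the GUI thread.
struct PlayerState {
    std::atomic<long long> framesWritten{0};     // frames handed to PortAudio so far
    std::atomic<long long> bufferStartFrame{0};  // source frame at the start of the last buffer
    std::atomic<double>    bufferDacTime{0.0};   // DAC time of that buffer's first sample
};

static int playCallback(const void*, void* output, unsigned long frames,
                        const PaStreamCallbackTimeInfo* timeInfo,
                        PaStreamCallbackFlags, void* user) {
    auto* st = static_cast<PlayerState*>(user);
    float* out = static_cast<float*>(output);
    for (unsigned long i = 0; i < frames; ++i) out[i] = 0.0f;   // real code: copy wav frames here
    st->bufferStartFrame = st->framesWritten.load();
    st->bufferDacTime    = timeInfo->outputBufferDacTime;
    st->framesWritten   += frames;
    return paContinue;
}

// Estimate the frame the listener is hearing right now (call from the GUI thread).
static long long hearingFrame(PaStream* stream, const PlayerState& st, double rate) {
    double now = Pa_GetStreamTime(stream);
    return st.bufferStartFrame + static_cast<long long>((now - st.bufferDacTime) * rate);
}

int main() {
    const double rate = 44100.0;
    PlayerState st;
    Pa_Initialize();
    PaStream* stream = nullptr;
    Pa_OpenDefaultStream(&stream, 0, 1, paFloat32, rate,
                         paFramesPerBufferUnspecified, playCallback, &st);
    Pa_StartStream(stream);
    for (int i = 0; i < 20; ++i) {                   // roughly 10 notifications per second
        std::printf("hearing frame: %lld\n", hearingFrame(stream, st, rate));
        Pa_Sleep(100);
    }
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}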
OpenAL may be an option for you.
