Reading Microphone Data by Polling using ALSA [or V4L2] - linux

I am trying to read data from multiple microphones in Linux (Ubuntu 14.04). I have a specific constraint that reading from the microphones must be done via polling (so no waiting until there is data, even though the data arrives at a high rate). I wanted to know whether that is possible in Linux. Unfortunately audio capture is not my area of expertise, and I would like to know whether ALSA is a good choice for this. To better illustrate the problem, here is the pseudo-code I had in mind:
open_the_audio_device();
set_the_parameters_of_the_audio_device();
while (!done)
{
    poll_result = poll_the_devices(); // other non-audio devices are also polled here preferably, something like using select on all the different file descriptors of audio, video, socket, etc.
    if (poll_success_for_audio_device)
        receive_audio_from_the_device_that_has_data();
    else
        do_some_other_very_fast_stuff_and_start_loop_again();
}
close_the_device();
My questions are twofold:
Is ALSA a good choice for this?
Can it be done with a library that gives me a file descriptor, so that I can use it with the select function? If so, that would be optimal, because other non-audio devices are also handled with select.
Thank you for your attention.

To prevent the snd_pcm_read*() calls from blocking, enable non-blocking mode with snd_pcm_nonblock().
To get pollable file descriptors, call snd_pcm_poll_descriptors_count() and snd_pcm_poll_descriptors().
It is possible to have multiple descriptors because some plugins might implement notifications differently.
To translate the result of a poll() on those descriptors back into a POLLIN/POLLOUT value, call snd_pcm_poll_descriptors_revents().
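Putting those pieces together, a minimal sketch of a capture loop that matches the pseudo-code in the question (device name, format, rate and latency are placeholders; most error handling is omitted):

#include <alsa/asoundlib.h>
#include <poll.h>

int main(void)
{
    snd_pcm_t *pcm;
    short buf[256];                        /* 256 frames, mono S16 */

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0)
        return 1;
    snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE, SND_PCM_ACCESS_RW_INTERLEAVED,
                       1, 48000, 1, 20000); /* mono, 48 kHz, ~20 ms latency */
    snd_pcm_nonblock(pcm, 1);              /* reads never block */
    snd_pcm_start(pcm);

    int nfds = snd_pcm_poll_descriptors_count(pcm);
    struct pollfd pfds[nfds];
    snd_pcm_poll_descriptors(pcm, pfds, nfds);
    /* other file descriptors (sockets, video, ...) can be appended to this same array */

    for (;;) {
        if (poll(pfds, nfds, 0) > 0) {     /* timeout 0: never wait */
            unsigned short revents;
            snd_pcm_poll_descriptors_revents(pcm, pfds, nfds, &revents);
            if (revents & POLLIN) {
                snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, 256);
                if (n == -EPIPE)
                    snd_pcm_recover(pcm, n, 1);   /* overrun: recover and continue */
                /* else: process n captured frames */
            }
        } else {
            /* do_some_other_very_fast_stuff_and_start_loop_again(); */
        }
    }
    snd_pcm_close(pcm);
    return 0;
}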

Related

Is MMAP what I need from ALSA to play simultaneous, immediate sounds in my game?

I'm new to ALSA and I've managed to get PCM sound played in SND_PCM_ACCESS_RW_INTERLEAVED mode. My problem is that I just can't find a way to make that mode useful for what I'm trying to do. (If someone can tell me how, I'll be glad to read). I've been reading there is this MMAP mode, but it's not as easy to find simple examples for it. I wonder if it is what I need and how I could implement it.
What I want to do is have my little game (a simple space shoot-'em-up) immediately play a sound when I shoot or get shot. If an enemy shoots while another sound is being played, the sounds should add up and saturate as necessary, but no sound event should be interrupted. In other words, I need to be able to edit the very byte that's about to be played.
In my useless attempts to try MMAP (without really knowing how it works in practice; just following vague theoretical instructions), I set up everything just like for SND_PCM_ACCESS_RW_INTERLEAVED, but change it to SND_PCM_ACCESS_MMAP_INTERLEAVED. Then I call snd_pcm_avail_update, which seems to work and returns a large number of available frames. After that, I call snd_pcm_mmap_begin, passing the parameters, having previously filled "frames" with a reasonable number (10, for example). The function fails and returns error code -77. I haven't been able to find out what that means. The areas array remains unmodified.
What does that error mean? Where can I get a list of the errors? How can I overcome it? Is there a good, simple, example of how to use MMAP (or some other thing) to perform something more or less like what I'm trying to do?
I appreciate your help :)
ALSA returns negative values on error. -77 is most likely -EBADFD, which indicates that the device is in an invalid state (under/overrun, or not running at all). In case of an underrun, you're probably using too small a buffer size.
In any case, there's no way to modify audio data that you've already submitted to the ALSA driver (snd_pcm_mmap_commit/writei/writen). The trick to getting audio to play immediately is simply to use very small buffer sizes; less than 10 ms will do. For this you'll want to use hw: devices; other device types usually add latency.
You still have to mix sounds together manually before you pass them to ALSA; a minimal sketch is shown below.
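A simple saturating mix of two 16-bit buffers (names are illustrative): sum the samples and clip to the int16 range instead of letting them wrap.

#include <stdint.h>

static void mix_s16(int16_t *dst, const int16_t *a, const int16_t *b, int n)
{
    for (int i = 0; i < n; i++) {
        int32_t s = (int32_t)a[i] + (int32_t)b[i];
        if (s > INT16_MAX) s = INT16_MAX;   /* saturate instead of wrapping */
        if (s < INT16_MIN) s = INT16_MIN;
        dst[i] = (int16_t)s;
    }
}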
There's a nice mmap example in the comments on this question: Alsa api: how to use mmap in c?.
That being said, ALSA is a valid choice for this kind of application, but you don't necessarily need to use memory mapping. Read/write access doesn't introduce additional latency; it just copies the audio around a bit more.
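A hedged sketch of the "small buffer on a hw: device" idea above; the device name "hw:0,0" and the ~10 ms figure are just examples, and the card must support the requested format and rate since no resampling is allowed:

#include <alsa/asoundlib.h>

static int open_low_latency_playback(snd_pcm_t **pcm)
{
    int err = snd_pcm_open(pcm, "hw:0,0", SND_PCM_STREAM_PLAYBACK, 0);
    if (err < 0)
        return err;
    /* format, access, channels, rate, no resampling, ~10 ms total latency */
    return snd_pcm_set_params(*pcm, SND_PCM_FORMAT_S16_LE,
                              SND_PCM_ACCESS_RW_INTERLEAVED,
                              2, 48000, 0, 10000);
}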

Designing a Linux char device driver so multiple processes can read

I notice that for serial devices, e.g. /dev/ttyUSB0, multiple processes can open the device but only one process gets the bytes (whichever reads them first).
However, for the Linux input API, e.g. /dev/input/event0, multiple processes can open the device, and all of the processes are able to read the input events.
My current goal:
I'd like to write a driver for several multi-position switches (e.g. like a slider switch with 3 or 4 possible positions), where apps can get a notification of any switch position changes. Ideally I'd like to use the Linux input API, however it seems that the Linux input API has no support for the concept of multi-position switches. So I'm looking at making a custom driver with similar capabilities to the Linux input API.
Two questions:
From a driver design point-of-view, why is there that difference in behaviour between Linux input API and Linux serial devices? I reckon it could be useful for multiple processes to all be able to open one serial port and all listen to incoming bytes.
What is a good way to write a Linux character device driver so that it's like the Linux input API, so multiple processes can open the device and read all the data?
The distinction is partly historical and partly due to the different expectation models.
The event subsystem is designed for unidirectional notification of simple events from multiple writers into the system with very little (or no) configuration options.
The tty subsystem is intended for bidirectional end-to-end communication of potentially large amounts of data and provides a reasonably flexible (albeit fairly baroque) configuration mechanism.
Historically, the tty subsystem was the main mechanism for communicating with the system: you plugged your "teletype" into a serial port and bits went in and out. Different teletypes from different vendors used different protocols, and thus the termios interface was born. To make the system perform well in a multi-user context, buffering was added in the kernel (and made configurable). The expectation model of the tty subsystem is that of a point-to-point link between moderately intelligent endpoints that agree on what the data passing between them will look like.
While there are circumstances where "single writer, multiple readers" would make sense in the tty subsystem (a GPS receiver connected to a serial port, continually reporting its position, for instance), that's not the main purpose of the system. But you can easily accomplish this "multiple readers" pattern in userspace.
The event system on the other hand, is basically an interrupt mechanism intended for things like mice and keyboards. Unlike teletypes, input devices are unidirectional and provide little or no control over the data they produce. There is also little point in buffering the data. Nobody is going to be interested in where the mouse moved ten minutes ago.
I hope that answers your first question.
For your second question: "it depends". What do you want to accomplish? And what is the "longevity" of the data? You also have to ask yourself whether it makes sense to put the complexity in the kernel or if it wouldn't be better to put it in userspace.
Getting data out to multiple readers isn't particularly difficult. You could create a receive buffer per reader and fill each of them as the data comes in. Things get a little more interesting if the data comes in faster than the readers can consume it, but even that is mostly a solved problem. Look at the network stack for inspiration!
If your device is simple and just produces events, maybe you just want to be an input driver?
Your second question is a lot more difficult to answer without knowing more about what you want to accomplish.
Update after you added your specific goal:
When I do position switches, I usually just create a character device and implement poll and read. If you want to be fancy and have a lot of switches, you could do mmap but I wouldn't bother.
Userspace just opens your /dev/foo, reads the current state and starts polling. When your switches change state, you just wake up the readers and they'll read again. All your readers will wake up, they'll all read the new state and everyone will be happy.
Be careful to only wake up readers when your switches are 'settled'. Many position switches are very noisy and they'll bounce around a fair bit.
In other words: I would ignore the input system altogether for this. As you surmise, position switches are not really "inputs".
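A hedged sketch of that poll/read scheme (all names here are illustrative, not an existing driver): readers sleep in poll() on a wait queue and are woken once the debounced switch state changes.

#include <linux/fs.h>
#include <linux/poll.h>
#include <linux/wait.h>
#include <linux/uaccess.h>

static DECLARE_WAIT_QUEUE_HEAD(switch_waitq);
static u8 switch_state;            /* current (settled) position */
static unsigned long switch_seq;   /* bumped on every settled change */

/* Called from the debounce timer once the switch has settled. */
static void switch_report(u8 new_state)
{
    switch_state = new_state;
    switch_seq++;
    wake_up_interruptible(&switch_waitq);
}

static ssize_t switch_read(struct file *filp, char __user *buf,
                           size_t count, loff_t *ppos)
{
    u8 state = switch_state;

    if (count < sizeof(state))
        return -EINVAL;
    if (copy_to_user(buf, &state, sizeof(state)))
        return -EFAULT;
    /* remember which change this reader has seen (a simplification) */
    filp->private_data = (void *)switch_seq;
    return sizeof(state);
}

static unsigned int switch_poll(struct file *filp, poll_table *wait)
{
    poll_wait(filp, &switch_waitq, wait);
    if ((unsigned long)filp->private_data != switch_seq)
        return POLLIN | POLLRDNORM;   /* a change this reader has not read yet */
    return 0;
}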
How a character device handles these kinds of semantics is completely up to the driver to define and implement.
It would certainly be possible, for example, to implement a driver for a serial device that will deliver all read data to every process that has the character driver open. And it would also be possible to implement an input device driver that delivers events to only one process, whichever one is queued up to receive the latest event. It's all a matter of coding the appropriate implementation.
The difference comes down to a simple question: "what makes sense?" For a serial device, it's been decided that it makes more sense for any read data to be handled by a single process. For an input device, it's been decided that it makes more sense to deliver all input events to every process that has the input device open. It would be reasonable to expect that, for example, one process might only care about a particular input event, say pointer button #3 pressed, while another process wants to process all pointer motion events. So, in this situation, it might make more sense to distribute all input events to all concerned parties.
I am ignoring some side issues for simplicity, such as, if serial data were delivered to all reading processes, what should happen when one of them stops reading from the device. That, too, would be factored in when deciding how to implement the semantics of a particular device.
What is a good way to write a Linux character device driver so that it's like the Linux input API, so multiple processes can open the device and read all the data?
See the .open member of struct file_operations for the char device. Whenever userspace opens the device, then the .open function is called. It can add the open file to a list of open files for the device (and then .release removes it).
The char device data struct should most likely use a kernel struct list_head to keep a list of open files:
struct my_dev_data {
    ...
    struct cdev cdev;
    struct list_head file_open_list;
    ...
};
Data for each file:
struct file_data {
    struct my_dev_data *dev_data;
    struct list_head file_open_list;
    ...
};
In the .open function, add the open file to dev_data->file_open_list (use a mutex to protect these operations as needed):
static int my_dev_input_open(struct inode *inode, struct file *filp)
{
    struct my_dev_data *dev_data;
    struct file_data *file_data;

    dev_data = container_of(inode->i_cdev, struct my_dev_data, cdev);
    ...
    /* Allocate memory for file data and channel data */
    file_data = devm_kzalloc(&dev_data->pdev->dev,
                             sizeof(struct file_data), GFP_KERNEL);
    ...
    /* Add open file data to list */
    INIT_LIST_HEAD(&file_data->file_open_list);
    list_add(&file_data->file_open_list, &dev_data->file_open_list);
    ...
    file_data->dev_data = dev_data;
    filp->private_data = file_data;

    return 0;
}
The .release function should remove the open file from dev_data->file_open_list, and release the memory of file_data.
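For completeness, a minimal sketch of such a .release, assuming a hypothetical mutex named lock in my_dev_data protects the list, matching the devm allocation in .open above:

static int my_dev_input_release(struct inode *inode, struct file *filp)
{
    struct file_data *file_data = filp->private_data;
    struct my_dev_data *dev_data = file_data->dev_data;

    mutex_lock(&dev_data->lock);          /* hypothetical lock protecting the list */
    list_del(&file_data->file_open_list);
    mutex_unlock(&dev_data->lock);

    devm_kfree(&dev_data->pdev->dev, file_data);
    return 0;
}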
Now that the .open and .release functions maintain the list of open files, it is possible for all open files to read data. Two possible strategies:
A separate read buffer for each open file. When data is received, it is copied into the buffers of all open files.
A single read buffer for the device, but a separate read pointer for each open file. Data can be freed from the buffer once it has been read through all open files.
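As an illustration of the first strategy, a hedged sketch of pushing newly received data to every open file; the kfifo member, the lock mutex and the readq wait queue are assumptions, not part of the code above:

#include <linux/kfifo.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/wait.h>

static void my_dev_push_to_readers(struct my_dev_data *dev_data,
                                   const u8 *buf, size_t len)
{
    struct file_data *fd;

    mutex_lock(&dev_data->lock);
    list_for_each_entry(fd, &dev_data->file_open_list, file_open_list)
        kfifo_in(&fd->fifo, buf, len);    /* silently drops data if a reader's FIFO is full */
    mutex_unlock(&dev_data->lock);

    wake_up_interruptible(&dev_data->readq);   /* readers sleeping in read/poll */
}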
Serial to input/event
You could try looking at the serial mouse driver source code.
This seems to be what you're searching for: building an input/event device from a ttySx.
Simpler: create a server instead of a driver.
Historically, the first character device I remember is /dev/lp0.
To be able to write to it from many sources, without overlap or other conflicts,
an LPR server was written.
To share a device, you have to:
open the device in exclusive (rw) mode,
create a socket (UNIX or TCP) to listen on,
redirect requests from the socket's clients to the device and maybe
store the device status (from reading the device's answers),
send the device status to the socket's clients when required.
A minimal read-only sketch of such a fan-out server follows.
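This sketch assumes a serial device at /dev/ttyUSB0 and a UNIX socket at /tmp/serial-fanout.sock (both placeholders): one process owns the device and rebroadcasts whatever it reads to every connected client. Writing back to the device, status caching and most error handling are left out.

#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    int dev = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);   /* placeholder device */
    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int clients[64], nclients = 0;

    signal(SIGPIPE, SIG_IGN);                /* a dead client must not kill the server */
    strncpy(addr.sun_path, "/tmp/serial-fanout.sock", sizeof(addr.sun_path) - 1);
    unlink(addr.sun_path);
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 8);

    for (;;) {
        fd_set rfds;
        int maxfd = dev > srv ? dev : srv;

        FD_ZERO(&rfds);
        FD_SET(dev, &rfds);
        FD_SET(srv, &rfds);
        for (int i = 0; i < nclients; i++) {
            FD_SET(clients[i], &rfds);
            if (clients[i] > maxfd)
                maxfd = clients[i];
        }
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
            break;

        if (FD_ISSET(srv, &rfds) && nclients < 64)
            clients[nclients++] = accept(srv, NULL, NULL);   /* new reader */

        if (FD_ISSET(dev, &rfds)) {
            char buf[512];
            ssize_t n = read(dev, buf, sizeof(buf));
            if (n <= 0)
                break;                                       /* device went away */
            for (int i = 0; i < nclients; i++)
                write(clients[i], buf, n);                   /* broadcast to every reader */
        }

        for (int i = 0; i < nclients; i++) {                 /* drop clients that hung up */
            char tmp[64];
            if (FD_ISSET(clients[i], &rfds) &&
                read(clients[i], tmp, sizeof(tmp)) <= 0) {
                close(clients[i]);
                clients[i--] = clients[--nclients];
            }
        }
    }
    return 0;
}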

Adding audio effects (reverb etc..) to a BackgroundAudioPlayer driven streaming audio app

I have a Windows Phone 8 app which plays audio streams from a remote location or local files using the BackgroundAudioPlayer. I now want to be able to add audio effects, for example reverb or echo, etc.
Please could you advise me on how to do this? I haven't been able to find a way of hooking extra audio processing code into the audio pipeline, even though I've read much about WASAPI and XAudio2 and looked at many code examples.
Note that the app is written in C#, but from my previous experience with writing audio processing code, I know that I should be writing the audio code in native C++. Roughly speaking, I need to find a point at which there is an audio buffer containing raw PCM data that I can use as input for my audio processing code, which will then write either back to the same buffer or to another buffer that is read by the next stage of audio processing. There need to be ways of synchronizing what happens in my code with the rest of the phone's audio processing mechanisms, and, of course, the process needs to be very fast so as not to cause audio glitches. Or something like that; I'm used to how VST works, not how such things might work in the Windows Phone world.
Looking forward to seeing what you suggest...
Kind regards,
Matt Daley
I need to find a point at which there is an audio buffer containing raw PCM data
AFAIK there's no such point. This MSDN page hints that audio/video decoding is performed not by the OS, but by the Qualcomm chip itself.
You can use something like Mp3Sharp for decoding. This way the MP3 will be decoded on the CPU by your managed code; you can then process it however you like and feed the PCM into the media stream source. The main downside is battery life: the hardware-provided codecs should be much more power-efficient.

Sync two soundcards

I have a program written in C++ that uses RtAudio (DirectSound) to capture and play back audio at a 48 kHz sample rate.
The input capture uses a callback option. The callback writes data to a ringbuffer.
The output is a blocking write function in a separate thread that reads from the ringbuffer.
If the input and output devices are the same, the audio loops through perfectly.
Now I want to get audio from device 1 and play it back on device 2. Each device has its own sample clock set to 48 kHz, but they are not in sync. After a couple of seconds the input and output drift out of sync.
Is it possible to sync two independent audio devices?
There are two challenges you face:
getting the two devices to start at the same time.
getting the two devices to stay in sync.
Both of these tasks are difficult. In the pro audio world, #2 is accomplished with special hardware to sync the word clocks of multiple devices. It can also be done with a high-quality video signal. I believe it can also be done with FireWire devices, but I'm not sure how that works. In practice, I have used devices with no sync ("wild") and gotten very reasonable sync for up to an hour or two. Depending on what you are trying to do, the sync should not drift more than a few milliseconds over the course of a few minutes. If it does, you can consider your hardware broken (of course, cheap hardware is often broken).
As for #1, I'm not sure this is possible in any reliable sense with DirectSound. To the extent that it's possible with any audio API, it is difficult at best: both cards have streams that require some time to set up, open and start playing. In general, the solution is to use an API where this time is very low (ASIO, for example). This works reasonably well for applications like video, but I don't know if it really solves the problem in general.
If you really need to solve this problem, you could open both cards, start playing silence, and use the timing information generated by the cards to establish the delay between putting data into a card and its eventual playback (this will be different for each card, and probably different each time you run), then use that data to calculate when to start actual playback. I don't know if RtAudio supplies the necessary timing information, but PortAudio does. This document may help.
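For reference, a minimal PortAudio sketch of reading that timing information (format, rate and buffer size are illustrative assumptions): outputBufferDacTime minus currentTime approximates how far in the future the first frame of the current buffer will hit the DAC, i.e. the per-card output delay you would compensate for. Printing from the audio callback is for demonstration only.

#include <portaudio.h>
#include <stdio.h>
#include <string.h>

static int timing_cb(const void *input, void *output,
                     unsigned long frames,
                     const PaStreamCallbackTimeInfo *t,
                     PaStreamCallbackFlags flags, void *user)
{
    /* how far in the future the first frame of this buffer hits the DAC */
    printf("output delay: %.3f ms\n",
           (t->outputBufferDacTime - t->currentTime) * 1000.0);
    memset(output, 0, frames * 2 * sizeof(float));   /* play silence (stereo float) */
    return paContinue;
}

int main(void)
{
    PaStream *stream;
    Pa_Initialize();
    Pa_OpenDefaultStream(&stream, 0, 2, paFloat32, 48000, 256, timing_cb, NULL);
    Pa_StartStream(stream);
    Pa_Sleep(2000);                                  /* observe for two seconds */
    Pa_StopStream(stream);
    Pa_Terminate();
    return 0;
}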

Simulate Microphone (virtual mic)

I've got a problem where I need to "simulate" microphone output.
Data will be coming over the network, decoded into PCM and basically needs to be written into the mic - which then other programs can read/record/whatever.
I've been reading up on ALSA, but information is pretty sparse. The file plugin seems promising - I was thinking of having a named pipe as "infile" which I could then deliver data to from my application. I can't get it to work, however (VLC/Audacity just segfault).
pcm.testing {
    type file
    slave {
        pcm {
            type hw
            card 0
            device 0
        }
    }
    infile "/dev/urandom"
    format "raw"
}
Are there any better ways of doing this? Any suggestions on ALSA plugins (particularly the file plugin)?
Your sound will come over the network, but what would cache it until something wants to read it? Or would the data be discarded?
In general, something like the configuration below (only barely tested) should work as a virtual mic, but I think it will always read the file from the beginning when the device is opened, and you need to check how it handles end of file. Perhaps you could try using pipes, but then caching/discarding the incoming data needs to be handled by the app reading from the network.
pcm.virtmic {
    type file
    format "raw"
    slave.pcm "default"
    file '/dev/null'
    infile '/dev/urandom'
}
See the ALSA docs for more options.
Again, I'm not sure this tool is what you really need for the task. It would have been really nifty if you could start a command with the 'infile' option, like you can with 'file', but unfortunately you can't...
Hope that helps.
UPDATE: slave.pcm must not be "null" but some real device. It seems to be used for timing (using null causes the recording process to block forever). This device could force you to a given sample rate though, so be careful; using "default" is a sane choice. infile needs to provide raw sound data with the correct/matching format and rate. BTW, you can look at the ALSA server, JACK (jackd) and other sound systems and libraries for alternative solutions to your task.
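For what it's worth, with a definition like the one above in ~/.asoundrc, recording from the virtual mic should look roughly like this (the options are illustrative and must match the raw data you feed into infile):
arecord -D virtmic -f S16_LE -c 2 -r 44100 capture.wav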
