How to write a dummy ALSA-compliant device driver? - linux

I want to write a dummy ALSA-compliant driver as a loadable kernel module. When accessed by aplay/arecord through ALSA-lib, it must behave as a normal 7.1-channel audio device, providing at least all the basic controls - sampling rate, number of channels, format, etc...
Underneath it will just take every channel from the audio stream and send it over the network as a UDP packet stream.
It must be capable of being loaded multiple times, ultimately exposing as many audio devices under /dev as wanted. That way we will have multiple virtual sound cards in the system.
What should be the minimal structure of such a kernel module?
Can you give me an example skeleton (at least the interfaces) to be 100% ALSA compliant?
ALSA driver examples are so poor...

I think I've just found what I need.
There are no better ALSA interface examples than the "dummy" and "aloop" templates under the sound/drivers directory in the kernel tree:
https://alsa-project.org/main/index.php/Matrix:Module-dummy
https://www.alsa-project.org/main/index.php/Matrix:Module-aloop
I'll need to implement the network part only.
EDIT:
Adding yet another project for a very simple but essential virtual ALSA driver:
https://alsa-project.org/main/index.php/Minivosc
EDIT 2020_09_25:
Yet another great ALSA example:
https://www.openpixelsystems.org/posts/2020-06-27-alsa-driver/
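For the record, stripped to its bones, the structure those examples share looks roughly like the sketch below. It targets recent kernels (managed buffers, no explicit ioctl op), the netdummy_* names are made up, most callbacks are stubs, and the error handling, the remove path and the timer/network side that dummy.c and aloop implement properly are all left out, so treat it as an outline rather than a working driver:

/* Sketch of a virtual ALSA card, loosely following sound/drivers/dummy.c.
 * netdummy_* names are illustrative; a real driver needs a timer that
 * advances the ring-buffer position and (here) pushes periods out as UDP. */
#include <linux/module.h>
#include <linux/platform_device.h>
#include <sound/core.h>
#include <sound/pcm.h>
#include <sound/initval.h>

static struct platform_device *netdummy_pdev;

static const struct snd_pcm_hardware netdummy_hw = {
    .info = SNDRV_PCM_INFO_INTERLEAVED | SNDRV_PCM_INFO_MMAP |
            SNDRV_PCM_INFO_MMAP_VALID,
    .formats = SNDRV_PCM_FMTBIT_S16_LE,
    .rates = SNDRV_PCM_RATE_8000_48000,
    .rate_min = 8000, .rate_max = 48000,
    .channels_min = 1, .channels_max = 8,          /* up to 7.1 */
    .buffer_bytes_max = 64 * 1024,
    .period_bytes_min = 64, .period_bytes_max = 16 * 1024,
    .periods_min = 2, .periods_max = 1024,
};

static int netdummy_open(struct snd_pcm_substream *ss)
{
    ss->runtime->hw = netdummy_hw;
    return 0;
}

static int netdummy_close(struct snd_pcm_substream *ss) { return 0; }
static int netdummy_prepare(struct snd_pcm_substream *ss) { return 0; }

static int netdummy_trigger(struct snd_pcm_substream *ss, int cmd)
{
    /* On START/STOP, (de)activate a timer that reads periods from the
     * substream buffer and sends them over the network. */
    return 0;
}

static snd_pcm_uframes_t netdummy_pointer(struct snd_pcm_substream *ss)
{
    return 0;    /* report the current ring-buffer position */
}

static const struct snd_pcm_ops netdummy_ops = {
    .open = netdummy_open, .close = netdummy_close,
    .prepare = netdummy_prepare, .trigger = netdummy_trigger,
    .pointer = netdummy_pointer,
};

static int netdummy_probe(struct platform_device *pdev)
{
    struct snd_card *card;
    struct snd_pcm *pcm;
    int err;

    err = snd_card_new(&pdev->dev, SNDRV_DEFAULT_IDX1, SNDRV_DEFAULT_STR1,
                       THIS_MODULE, 0, &card);
    if (err < 0)
        return err;
    err = snd_pcm_new(card, "netdummy", 0, 1, 0, &pcm); /* 1 playback, 0 capture */
    if (err < 0)
        goto error;
    snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &netdummy_ops);
    snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_VMALLOC, NULL,
                                   0, 64 * 1024);
    err = snd_card_register(card);
    if (err < 0)
        goto error;
    platform_set_drvdata(pdev, card);
    return 0;
error:
    snd_card_free(card);
    return err;
}

static struct platform_driver netdummy_driver = {
    .probe = netdummy_probe,
    .driver = { .name = "snd_netdummy" },
};

static int __init netdummy_init(void)
{
    int err = platform_driver_register(&netdummy_driver);
    if (err)
        return err;
    netdummy_pdev = platform_device_register_simple("snd_netdummy", -1, NULL, 0);
    return PTR_ERR_OR_ZERO(netdummy_pdev);
}

static void __exit netdummy_exit(void)
{
    platform_device_unregister(netdummy_pdev);
    platform_driver_unregister(&netdummy_driver);
}

module_init(netdummy_init);
module_exit(netdummy_exit);
MODULE_LICENSE("GPL");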

Install alsa-base and alsa-utils, then load the dummy driver:
modprobe snd-dummy
Use alsamixer, or mocp (which requires installing moc), to configure it; the dummy audio device is then added successfully.

Related

Sharing a microphone audio stream on Linux

For what it's worth, my scenario is an accessibility application, not any kind of malicious eavesdropping. Within this scenario there are also various research and development use cases, all of which would greatly benefit from the microphone audio stream being readable by multiple unrelated processes running simultaneously, such as recording tools and/or different versions of my own code.
Problem Statement
I am reading the microphone input stream using a high-level Python API, as follows:
import sounddevice

audio_stream = sounddevice.InputStream(
    device=self.microphone_device,
    channels=max(self.channels),
    samplerate=self.audio_props['sample_rate'],
    blocksize=int(self.audio_props['frame_elements_size']),
    callback=self.audio_callback)
I would like to learn whether it is possible (on linux) to read the microphone audio stream simultaneously to another program such as Google Meet / Zoom reading it. I.e. effectively share the audio stream.
With the mentioned Python wrapper, it is no big surprise that when the above code is started while a video call is in progress, it simply fails to open the stream:
Expression 'paInvalidSampleRate' failed in
'src/hostapi/alsa/pa_linux_alsa.c', line: 2043
Expression 'PaAlsaStreamComponent_InitialConfigure( &self->playback, outParams, self->primeBuffers, hwParamsPlayback, &realSr )'
failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 2716
Expression 'PaAlsaStream_Configure( stream, inputParameters, outputParameters, sampleRate, framesPerBuffer, &inputLatency, &outputLatency, &hostBufferSizeMode )'
failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 2837
Admittedly, I am not yet very well versed in ALSA terminology or the sound stack on Linux in general.
My question is, can this be accomplished directly using ALSA library API, or otherwise via other sound stacks or sound system configuration? Or if all else is not meant to work, via a proxy program/driver that is able to expose an audio buffer to multiple consumers without incurring noticeable degradation in audio stream latency?
You can do this directly with ALSA. Dsnoop should do the trick. It is a plugin included with ALSA that allows sharing input streams.
From the page I linked above:
dsnoop is the equivalent of the dmix plugin, but for recording sound. The dsnoop plugin allows several applications to record from the same device simultaneously.
From the ALSA docs:
If you want to use multiple input(capture) clients you need to use the dsnoop plugin:
You can poke around there for details on how to use it. This issue on GitHub will also help you get started; it details how to configure the dsnoop interface so you can read from it with pyaudio.
Update
To configure ALSA, edit /etc/asound.conf with something like this (from the ALSA docs on dsnoop):
pcm.mixin {
    type dsnoop
    ipc_key 5978293   # must be unique for all dmix plugins!!!!
    ipc_key_add_uid yes
    slave {
        pcm "hw:1,0"
        channels 2
        period_size 1024
        buffer_size 4096
        rate 44100
        periods 0
        period_time 0
    }
    bindings {
        1 1
        1 0
    }
}
You can test to see if your configuration works with something like this:
arecord -d 30 -f cd -t wav -D pcm.mixin test.wav
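Any ALSA client can then open the dsnoop PCM defined above by its name ("mixin") instead of the raw hw device, and several clients can do so at the same time. A minimal C sketch of such a client (the format, rate and latency values here are assumptions and should match the slave section of the config):

#include <alsa/asoundlib.h>
#include <stdio.h>

/* Build with: gcc -o snoop_cap snoop_cap.c -lasound
 * Run two instances at once to verify that the sharing works. */
int main(void)
{
    snd_pcm_t *pcm;
    short buf[2 * 1024];                 /* 1024 frames, 2 channels, S16_LE */
    int err;

    if ((err = snd_pcm_open(&pcm, "mixin", SND_PCM_STREAM_CAPTURE, 0)) < 0) {
        fprintf(stderr, "open: %s\n", snd_strerror(err));
        return 1;
    }
    if ((err = snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                                  SND_PCM_ACCESS_RW_INTERLEAVED,
                                  2, 44100, 1, 100000)) < 0) {
        fprintf(stderr, "set_params: %s\n", snd_strerror(err));
        return 1;
    }
    for (int i = 0; i < 100; i++) {      /* roughly 2.3 s of audio */
        snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, 1024);
        if (n < 0)
            n = snd_pcm_recover(pcm, n, 0);
        /* ... hand buf off to the rest of the pipeline ... */
    }
    snd_pcm_close(pcm);
    return 0;
}

With the Python wrapper from the question, the same idea should apply: once the dsnoop PCM is defined, it should show up as another input and can be passed as the device= argument instead of the hardware device.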
So, this is more an audio question than a python question I guess. :)
Depending on the API, streams can be device-exclusive or not. ASIO for professional audio, for example, is often device-exclusive, so just one application (like a DAW) has access to it. On Windows, for example, you can turn this on and off as described here:
https://help.ableton.com/hc/en-us/articles/209770485-Disabling-exclusive-mode-for-ASIO-interfaces
Most Python packages like pyaudio just provide bindings for PortAudio, which does the heavy lifting, so also have a look at the PortAudio documentation. PortAudio "combines" all the different APIs like ASIO, ALSA, WASAPI, Core Audio, and so on.
For ALSA to create more than one stream at the same time you might need dmix; have a look at this Stack Exchange question:
https://unix.stackexchange.com/questions/355662/alsa-doesnt-work-when-multiple-applications-are-opened

PortAudio unreliable: Expression '...' failed

I'm currently experimenting with real-time signal processing, so I went and tried out PortAudio (from C).
I have two audio interfaces on my computer, onboard sound (Intel HD Audio) and a USB audio interface. Both generally work fine under ALSA on Linux. I also tried the USB audio interface under JACK on Linux and this also works perfectly.
What I do:
My code just initializes PortAudio, opens and starts a stream (one channel, paInt32 sample format, defaultLowInputLatency / defaultLowOutputLatency, though I tried changing to paFloat32 or defaultHighInputLatency / defaultHighOutputLatency, which didn't improve anything).
On each invocation of the callback, it copies sizeof(int32_t) * frameCount bytes via memcpy from the input to the output buffer, then returns paContinue. It does nothing else in the callback. No memory allocation, no system calls, nothing it could block on. It just outputs what it has read. The code is very simple, still I can't get it running.
Replacing the memcpy with a loop copying frameCount elements of type int32_t over from the input to the output buffer didn't change anything.
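For reference, the callback boils down to something like this (reconstructed from the description above; wireCallback is an illustrative name, not my exact code):

#include <portaudio.h>
#include <stdint.h>
#include <string.h>

/* One input channel, one output channel, paInt32 samples: copy the input
 * straight to the output and keep the stream running. */
static int wireCallback(const void *input, void *output,
                        unsigned long frameCount,
                        const PaStreamCallbackTimeInfo *timeInfo,
                        PaStreamCallbackFlags statusFlags,
                        void *userData)
{
    memcpy(output, input, sizeof(int32_t) * frameCount);
    return paContinue;
}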
What I've tried:
The following scenarios were tried out with PortAudio.
1. Input and output via USB audio interface, callback mechanism on PortAudio, ALSA backend.
2. Input and output via USB audio interface, blocking I/O on PortAudio with 1024 samples buffer size, ALSA backend.
3. Input via USB audio interface, output via onboard sound, callback mechanism on PortAudio, ALSA backend.
4. Input via USB audio interface, output via onboard sound, blocking I/O on PortAudio with 1024 samples buffer size, ALSA backend.
5. Input and output via USB audio interface, callback mechanism on PortAudio, JACK backend.
6. Input and output via USB audio interface, blocking I/O on PortAudio with 1024 samples buffer size, JACK backend.
The problems I encountered:
The results were as follows. (Numbers represent the scenarios described above.)
1. No output from the device.
2. Output from the device unsteady (interrupted); lots of buffer underruns all the time.
3. No output from the device.
4. Output from the device unreliable. Sometimes it works, sometimes it doesn't (without changing anything, just running the executable multiple times). When it works, latency starts off low, but increases over time and becomes very noticeable.
5. No output from the device.
6. No output from the device.
Between tries, I checked whether ALSA was still responsive (sometimes it got completely "locked up", so that no application could output sound any longer), rebooted the system whenever that happened, and then continued testing.
More details, that might be useful when tracking the problem down:
In the scenarios where there is no output at all, I get the following error messages when using ALSA as backend.
Expression 'err' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 3350
Expression 'ContinuePoll( self, StreamDirection_In, &pollTimeout, &pollCapture )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 3876
Expression 'PaAlsaStream_WaitForFrames( stream, &framesAvail, &xrun )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 4248
I get the following error message when using JACK as backend.
Cannot lock down 42435354 byte memory area (Cannot allocate memory)
In addition, no matter what method I use, I always get these warnings.
ALSA lib pcm.c:2267:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2267:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2267:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
When using ALSA, I also get one or two complaints about underruns.
ALSA lib pcm.c:7905:(snd_pcm_recover) underrun occurred
The PortAudio functions that I call (Pa_Initialize, Pa_OpenStream, Pa_StartStream, Pa_StopStream, Pa_CloseStream, Pa_Terminate, in this order), all return paNoError.
The paex_read_write_wire.c (blocking I/O) example that comes with PortAudio can usually access the device, but also experiences lots of underruns (like my test case 2).
In either case, there's nothing interesting showing up in dmesg. (Just checked that, since ALSA has a kernel-level component.)
My question:
Does anyone know what the problem is here and how I could fix it? Or at least, how I could narrow it down a bit more?
When you write only a single block of samples, the playback device will run out of samples just when you're about to write the next block.
You should fill up the playback device's buffer with zero samples before you start the read/write loop.
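A rough sketch of that priming for the blocking-I/O variant, assuming mono paInt32 as in the question (the stream is already opened and started; four blocks of silence and the omitted error handling are arbitrary simplifications):

#include <portaudio.h>
#include <stdint.h>

#define FRAMES 1024

static int32_t silence[FRAMES];          /* zero-initialized by default */
static int32_t block[FRAMES];

void wire_loop(PaStream *stream)
{
    /* Prime the playback buffer so it does not underrun right away. */
    for (int i = 0; i < 4; i++)
        Pa_WriteStream(stream, silence, FRAMES);

    for (;;) {                           /* run until interrupted */
        Pa_ReadStream(stream, block, FRAMES);
        Pa_WriteStream(stream, block, FRAMES);
    }
}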
I'm unclear on the details, but for me switching from default low input/output latency to default high input/output latency cured this problem, and I can't perceive a change in latency.

Bi-directional sniffing/snooping on an ALSA MIDI SysEx exchange

Does anyone know of a good way to get a bi-directional dump of MIDI SysEx data on Linux? (between a Yamaha PSR-E413 MIDI keyboard and a copy of the Yamaha MusicSoft Downloader running in Wine)
I'd like to reverse-engineer the protocol used to copy MIDI files to and from my keyboard's internal memory and, to do that, I need to do some recording of valid exchanges between the two.
The utility does work in Wine (with a little nudging) but I don't want to have to rely on a cheap, un-scriptable app in Wine when I could be using a FUSE filesystem.
Here's the current state of things:
My keyboard connects to my PC via a built-in USB-MIDI bridge. USB dumpers/snoopers are a possibility, but I'd prefer to avoid them if possible. I don't want to have to decode yet another layer of protocol encoding before I even get started.
I run only Linux. However, if there really is no other option than a Windows-based dumper/snooper, I can try getting the USB 1.1 pass-through working on my WinXP VirtualBox VM.
I run bare ALSA for my audio system with dmix for waveform audio mixing.
If a sound server is necessary, I'm willing to experiment with JACK.
No PulseAudio please. It took long enough to excise it from my system.
If the process involves ALSA MIDI routing:
a virtual pass-through device I can select from inside the Downloader is preferred, because the Downloader's port often only appears in an ALSA patch-bay GUI like patchage an instant before it starts communicating with the keyboard.
Neither KMIDIMon nor GMIDIMonitor support snooping bi-directionally as far as I can tell.
virmidi isn't relevant and I haven't managed to get snd-seq-dummy working.
I suppose I could patch ALSA to get dumps if I really must, but that's really an option of last resort.
The vast majority of my programming experience is in Python, PHP, Javascript, and shell script.
I have almost no experience programming in C.
I've never even seen a glimpse of kernel-mode code.
I'd prefer to keep my system stable and my uptime high.
This question has been unanswered for some time, and while I do not have an exact answer to your problem, I may have something that can push you (or others with similar problems) in the right direction.
I had a similar albeit less complex problem when I wanted to sniff the data used to set and read presets on an Akai LPK25 MIDI keyboard. Similar to your setup, the software to set up the keyboard can run in Wine, but I also had no luck finding a sniffing setup for Linux.
For lack of an existing solution, I rolled my own using ALSA MIDI routing over virmidi ports. I understand why you see them as useless, because without additional software they cannot help with sniffing MIDI traffic.
My solution was programming a MIDI relay/bridge in Java where I read input from a virmidi port, display the data and send it further to the keyboard. The answer from the keyboard (if any) is also read, displayed and finally transmitted back to the virmidi port. The application in Wine can be set up to use the virmidi port for communication, and in theory this process is completely transparent (except for potential latency issues). The application is written in a generic way and not hardcoded to my problem.
I was only dealing with SysEx messages of about 20 bytes in length, so I am not sure how well the software works for sniffing the transfer of large amounts of data. But maybe you can modify it / write your own program following the example.
Sources available here: https://github.com/hiben/MIDISpy
(Java 1.6, ant build file included, source is under BSD license)
I like using aseqdump for that.
http://www.linuxcommand.org/man_pages/aseqdump1.html
You could use virtual MIDI devices for this purpose. You have to load snd_seq_dummy so that it creates at least two ports:
$ sudo modprobe -r snd_seq_dummy
$ sudo modprobe snd_seq_dummy ports=1 duplex=1
Then you should have a device named Midi Through:
$ aconnect -i -o -l
client 0: 'System' [type=kernel]
0 'Timer '
1 'Announce '
client 14: 'Midi Through' [type=kernel]
0 'Midi Through Port-0:A'
1 'Midi Through Port-0:B'
client 131: 'VMPK Input' [type=user,pid=50369]
0 'in '
client 132: 'VMPK Output' [type=user,pid=50369]
0 'out '
I will take the port and device numbers from this example. You have to inspect them yourself according to your setup.
Now you plug your favourite MIDI device into the Midi Through ports:
$ aconnect 132:0 14:0
$ aconnect 14:0 131:0
At this point you have a connection where you can spy on both devices at the same time. You can use aseqdump to spy on the MIDI conversation. There are different possibilities; I suggest spying on the connection between the loopback devices and the real device. This also allows rawmidi connections to the loopback devices.
$ aseqdump -p 14:0,132:0 | tee dump.log
Now everything is set up for use. You just have to be careful about port names in your MIDI application. It should read MIDI data from Midi Through Port-0:B and write data to Midi Through Port-0:B.
Some additional hints: you can use the graphical frontend patchage for connecting and inspecting the MIDI connections via drag and drop. If you do this you will see that every Midi Through port occurs twice, once as input and once as output. Both have to be connected in order to make this setup work.
If you want to use GMidiMonitor or some other application, you can spy on both streams intermixed (without direction information) using aconnect. Suppose 129:0 is the MIDI monitor port:
$ aconnect 14:0 129:0
$ aconnect 132:0 129:0
If you want to have exact direction information you could add another GMidiMonitor instance that you connect only to one of the ports. The missing messages come from the other port.
What about using gmidimonitor? See http://home.gna.org/gmidimonitor/

PCM voice data on serial port to sound device conversion in linux

I have a telephony modem which provides voice to my interfacing application via a serial USB port (ttyUSB0) as 16-bit PCM at 8000 Hz. I am able to capture this data and play it with Audacity. I want this port to be detected as a sound device in Linux (I am on Ubuntu). Is that possible? Are there any other options?
I'm guessing you are using a Huawei 3G modem or something similar which provides ttyUSB1 for audio. Make sure you have the serial driver bound to it. Then simply pass the port itself as a "file" for input to any program of your choice; you need root access for that. You have already figured out the audio settings, so that should be enough. I have voice calling working in Ubuntu 11.10 with a Huawei modem, so let me know if I can help any further.
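For example, if the stream really is raw 16-bit, 8000 Hz, mono PCM, something along these lines should play it straight from the port (the device path, the mono assumption, and the need to put the tty into raw mode first are guesses to adapt to your setup):
stty -F /dev/ttyUSB1 raw
aplay -t raw -f S16_LE -c 1 -r 8000 /dev/ttyUSB1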
OK, I see this is a very old question, but the answers helped point me in the right direction, so I decided to help others.
One way to achieve what you are looking for (in addition to the one below) is to write a dynamic kernel module:
Have it register as a sound device, and check that a GSM module is present (which module it is exactly can be recognized from dmesg or lsmod output).
Then establish communication between the user-space representation as a sound card and the serial USB module.
The other way is to take the module that you identified via dmesg/lsmod and extend its functionality as a sound card.
Both are tricky tasks, because:
in the first case you have to resolve inter-module communication at the kernel level, which is, let's say, a little hard even for a programmer with the right background in the subject;
the second case is hard in that you have to deal with:
the USB stack (which is a little unpleasant for human beings) and
the sound subsystem (which is a little burdensome because of historical issues).
Without being an experienced kernel programmer, there is little chance of success.

Can v4l2 be used to read audio and video from the same device?

I have a capture card that captures SDI video with embedded audio. I have source code for a Linux driver, which I am trying to enhance to add video4linux2 support. My changes are based on the vivi example.
The problem I've come up against is that all the examples I can find deal with only video or only audio. Even on the client side, everything seems to assume v4l is just video, like ffmpeg's libavdevice.
Do I need to have my driver create two separate devices, a v4l2 device and an alsa device? It seems like this makes the job of keeping audio and video in sync much more difficult.
I would prefer some way for each buffer passed between the driver and the app (through v4l2's mmap interface) to contain a frame, plus some audio that matches up (with respect to time) with that frame.
Or perhaps have each buffer contain a flag indicating if it is a video frame, or a chunk of audio. Then the time stamps on the buffers could be used to sync things up.
But I don't see a way to do this with the V4L2 API spec, nor do I see any examples of v4l2-enabled apps (gstreamer, ffmpeg, transcode, etc) reading both audio and video from a single device.
Generally, the audio capture part of a device shows up as a separate device. It's usually a different physical device (possibly sharing a card), which makes sense. I'm not sure how much help that is, but it's how all of the software I'm familiar with works...
There are some spare or reserved fields in the v4l2 buffers that can be used to pass audio or other data from the driver to the calling application via pointers to mmaped buffers.
I modified the BT8x8 driver to use this approach to pass data from an A/D card synchronized to the video on Ubuntu 6.06.
It worked OK, but the effort of maintaining my modified driver caused me to abandon this approach.
If you are still interested I could dig out the details.
If you want your driver to play nicely with GStreamer etc., a separate audio device is generally what is expected.
On most cheap v4l2 capture cards, the audio is only an analog pass-through with a volume control, requiring a jumper cable to capture the audio via the sound card's line input.