How to create artificial microphone input in Linux?

I'm working on an audio recognition project.
For testing, I'd like to be able to have a program:
load audio data from a file,
provide it to the Linux kernel as if it were coming from a microphone,
have any user-space program that samples the microphone obtain data sourced from my file.
Is that possible in Linux without having to write a new kernel module?

EDIT: I guess that solution won't work... but see my comment below.
This should be simple under Linux.
Here are the steps (a complete sketch follows below):
Make a named pipe with mkfifo (mkfifo ~/audio_out.pipe).
Cat the audio file into this pipe (cat test.wav > ~/audio_out.pipe).
Get the program you want to listen to take its input from this pipe. You may have to make a symlink for programs that are not flexible enough to read from an arbitrary device.
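Put together, a minimal sketch of the pipe approach (paths and file names are examples):

    mkfifo ~/audio_out.pipe             # create the named pipe once
    cat test.wav > ~/audio_out.pipe &   # feed the file; blocks until a reader opens the pipe
    ln -s ~/audio_out.pipe /tmp/mic     # optional: symlink for a program that expects a fixed path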
I hope I got your question right.

Related

Linux: how to dump audio output PCM bit stream like tcpdump

I am trying to do some audio debugging on my Linux system.
I learned how to record the sound of the currently playing media, but how can I get the PCM data without going through the DAC/ADC?
I mean, just like the wireshark or tcpdump tools, is there some sort of alsadump that I can make use of?
I want to do a bit-exact comparison of the output PCM data to make sure the audio processing algorithm (which is an executable binary) worked correctly.
Thanks a lot.
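One approach worth sketching (an assumption, not from this thread): ALSA's file plugin can wrap a playback device and copy the exact PCM stream passing through it to disk, which behaves like a dump. In ~/.asoundrc, something like:

    pcm.pcmdump {
        type file
        slave.pcm "hw:0,0"          # the real playback device; card/device numbers are assumptions
        file "/tmp/playback.raw"    # raw PCM written here, bit-exact as played
        format "raw"
    }

Playing through it (aplay -D pcmdump test.wav) then leaves the PCM samples in /tmp/playback.raw for comparison.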

Beaglebone Black Custom Audio Cape DMA/IRQ trouble

I'm running a BBB straight out of the box, running Debian. The kernel version is 3.8.13-bone-47.
I'm working with a cape that is very similar to the one here. The difference is that I'm using a TLV320AIC3106 instead of the AIC3104, and I have only enabled audio out; I'm not interested in recording audio in this application.
My pinout for my application is identical to the cape in the link above.
I've followed the link here to get the cape up and running. Everything that I have matches the output of the tutorial up until I try to play a sample wave file.
When I play a sample wave file, I get the following message: aplay: pcm_write:1710: write error: Input/output error
Running dmesg gives me ALSA sound/core/pcm_lib.c:1010 playback write error (DMA or IRQ trouble?)
Where I'm having trouble is that I don't understand how DMA comes into play. Is this a DMA problem? Is it a symptom of something else going wrong, like my I2C? Am I missing a configuration somewhere else?
Any thoughts on how to track this down are appreciated.
I realize it has been covered in multiple places before, but it can never be stressed enough: when you're sending out information, make sure it goes to the right address over I2C. I figured out this morning that the audio codec was at address 0x1B, while the driver was addressing 0x18. A small but critical difference.
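A quick way to check where the codec actually responds is i2cdetect from i2c-tools (the bus number here is an assumption; adjust for your board):

    i2cdetect -y -r 1    # scan I2C bus 1; the codec should show up at its real address (0x1b in this case)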
The easy fix is to edit the BB-BONE-AUDI-02-00A0.dts file:
Edit line 65 to <0x1B>. Recompile using: dtc -O dtb -o BB-BONE-AUDI-02-00A0.dtbo -b 0 -@ BB-BONE-AUDI-02-00A0.dts
Move the generated file to the /lib/firmware directory.
Insert it using echo BB-BONE-AUDI-02 > /sys/devices/bone_capemgr*/slots
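Putting those steps together (a sketch; run as root, and the capemgr path varies between kernel versions):

    dtc -O dtb -o BB-BONE-AUDI-02-00A0.dtbo -b 0 -@ BB-BONE-AUDI-02-00A0.dts
    cp BB-BONE-AUDI-02-00A0.dtbo /lib/firmware/
    echo BB-BONE-AUDI-02 > /sys/devices/bone_capemgr*/slots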
After applying this simple fix, it seems to work. I can't say for certain, because I still have to get the audio amplifier circuit up and running. At least aplay will play the file without crashing on me, which is a start.

Linux tee command with multiple fifo. fifo blocks tee

I am trying to develop a program to play and record some RTMP streams. The program is developed in Qt.
I am using rtmpdump and mplayer. Since both run as separate processes, I am using a FIFO to pass the stream from rtmpdump to mplayer. I need separate processes because mplayer has to be controlled by the user, so mplayer runs in slave mode.
This is working fine for playing the stream.
Now I want to record the stream to another file. I know that I can use mplayer to do that, but a single mplayer cannot, as it supports either playing or recording, not both. So I thought of using the tee command to split the stream and use two mplayer processes, one for recording and one for playing.
The stream now flows like this:
rtmpdump | tee fifo_for_playing fifo_for_recording
One mplayer reads fifo_for_playing and another reads fifo_for_recording.
The problem is that the mplayer which is supposed to record starts only when the user presses the record button, so fifo_for_recording blocks tee as long as it is not opened, and playing does not start either.
Can anybody suggest a solution or a better way to achieve this? What I am trying to do is tee with a non-blocking FIFO, so that even if one FIFO is not opened for reading, it does not block tee.
FIFOs have no real buffer (only a small kernel one, 64 KiB by default on Linux). If you write to one and nobody is reading, you block, as you're finding out.
You could write a little program that reads the FIFO and buffers it in memory or on disk. Maybe the dd program can do that?
Or you could call rtmpdump with the -stop option in a loop and have it write its output to a file, then process the files the old-fashioned way, without the FIFO.
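A minimal sketch of the buffering idea (names and the stream URL are examples): drain the recording FIFO into a file in the background so tee never stalls, and give the recording mplayer that file when the user presses record:

    mkfifo /tmp/play.fifo /tmp/rec.fifo
    cat /tmp/rec.fifo > /tmp/rec_buffer.flv &          # always-on drain: tee can always write
    rtmpdump -r "$STREAM_URL" | tee /tmp/rec.fifo > /tmp/play.fifo &
    mplayer -slave /tmp/play.fifo                      # playback starts immediately
    # when the user hits record, the stream so far is already buffered in /tmp/rec_buffer.flv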

Simulate Microphone (virtual mic)

I've got a problem where I need to "simulate" microphone output.
Data will be coming over the network, decoded into PCM, and basically needs to be written into the mic, which other programs can then read/record/whatever.
I've been reading up on ALSA, but information is pretty sparse. The file plugin seems promising: I was thinking of using a named pipe as "infile", which I could then deliver data to from my application. I can't get it to work, however (vlc/audacity just segfault).
pcm.testing {
    type file
    slave {
        pcm {
            type hw
            card 0
            device 0
        }
    }
    infile "/dev/urandom"
    format "raw"
}
Are there any better ways of doing this? Any suggestions on ALSA plugins (particularly the file plugin)?
Your sound will come over the network; what would cache it until something wants to read, or would the data be discarded?
In general, something like the below (only barely tested) should work as a virtual mic, but I think it will always read the file from the beginning when the device is opened, and you need to check how it handles end of file. Perhaps you could try using pipes, but then caching/discarding the incoming data needs to be handled by the app reading from the network.
pcm.virtmic {
    type file
    format "raw"
    slave.pcm "default"
    file '/dev/null'
    infile '/dev/urandom'
}
See the ALSA docs for more options.
Again, I'm not sure if this tool is what you really need for the task. It would have been really nifty if you could start a command with the 'infile' option, like you can with 'file', but unfortunately you can't...
Hope that helps.
UPDATE: slave.pcm must not be "null" but some real device. It seems to be used for timing (or something like that; I don't know exactly), but using null causes the recording process to block forever. This device could force you to a given sample rate, though, so be careful; "default" is a sane value. infile needs to provide raw sound data with the correct/matching format and rate. By the way, you can look at the ALSA server, jackd and other sound systems and libraries for alternative solutions to your task.
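A quick way to exercise such a virtual mic (a sketch; the format, rate and channel values are assumptions and must match whatever raw data infile points at):

    # convert a wav to raw PCM in the matching format (paths are examples)
    ffmpeg -i input.wav -f s16le -ar 44100 -ac 2 /tmp/mic.raw
    # with infile set to /tmp/mic.raw instead of /dev/urandom, record through the device:
    arecord -D virtmic -f S16_LE -r 44100 -c 2 -d 5 /tmp/captured.wav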

How to screen capture screenshots or movies on the Linux framebuffer

How can the Linux framebuffer, on Cell Linux, be captured to obtain either screenshots or movies?
Is there a tool to do this for a running program, or must the program writing to, and presumably controlling, the framebuffer also handle capture and recording? If so, how would the program do so?
There are many tools for doing so, for example FBGrab and fbdump; look at the sources of those two. It would be pretty easy to extend either one, or to write your own that captures video instead of just snapshots.
However, I would recommend that the program writing to the framebuffer also be the one recording, in order to synchronize capturing frames with writing them (and not capture partway through a write, or skip frames, or ...).
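For a one-off snapshot, FBGrab is about as simple as it gets (the device path is an assumption; /dev/fb0 is the usual first framebuffer):

    fbgrab -d /dev/fb0 screenshot.png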
You could use ffmpeg or avconv (e.g. avconv -f fbdev -i /dev/fb0 mymovie.flv).
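The ffmpeg equivalent is nearly identical (a sketch; the frame rate is an assumption):

    ffmpeg -f fbdev -framerate 25 -i /dev/fb0 mymovie.flv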
