ALSA: Creating a virtual microphone - Linux

How can I correctly set up a .asoundrc file to create a virtual microphone?
I have the following .asoundrc file, but it doesn't work:
pcm.audinp {
    type file
    slave.pcm front
    file /dev/null
    infile /writable/home/user/virtualmic.pipe
    format "raw"
}
pcm.!default {
    type plug
    slave.pcm "audinp"
}
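For reference, a hypothetical way to exercise such a pipe-backed PCM once it is defined (the path is taken from the question; the sample format, rate, and channel count are assumptions and must match whatever raw data is written into the pipe):

    # Create the pipe and feed raw S16_LE samples into it in one shell...
    mkfifo /writable/home/user/virtualmic.pipe
    cat sample.raw > /writable/home/user/virtualmic.pipe &
    # ...and capture from the virtual microphone in another:
    arecord -D audinp -f S16_LE -c 2 -r 48000 test.wav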

Related

ALSA: combining type "Dsnoop" with type "fifo"

I am trying to get my Linux audio system to handle audio via ALSA. These are the requirements:
- Handle echo cancellation. I am using https://github.com/voice-engine/ec, following the steps mentioned there, and it works independently without a problem.
- Divide the echo-cancelled stream in two:
  - one going into a noise-cancelling plugin (again, works independently); the audio output of this is used by a different program,
  - the other to a compressor; the audio output of this will be the default device.
Problem
I am facing problems in using "dsnoop" to share/divide the audio stream in two. When the dsnoop plugin's slave is set to be a FIFO, it throws an error.
executed:
sudo arecord -D default -f cd defRecording.wav -c 1 -r 32000
error:
ALSA lib pcm_direct.c:1809:(_snd_pcm_direct_get_slave_ipc_offset) Invalid type 'fifo' for slave PCM
arecord: main:828: audio open error: Invalid argument
This is the current asound.conf:
pcm.!default {
    type asym
    playback.pcm "playback"
    capture.pcm "capture"
}
pcm.playback {
    type plug
    slave.pcm "eci"
}
# Stream Output 1: Final
pcm.capture {
    type plug
    slave.pcm "compressor"
}
# Stream Output 2: Final
pcm.capture2 {
    type plug
    slave.pcm "werman"
}
# Stream output 2: Noise Cancellation
pcm.werman {
    type ladspa
    slave.pcm "array";
    path "/usr/lib/ladspa";
    plugins [{
        label noise_suppressor_mono
        input {
            # VAD Threshold %
            controls [ 1 ]
        }
    }]
}
# Stream output 1: Compressor
pcm.compressor {
    type ladspa
    slave.pcm "array";
    path "/usr/lib/ladspa";
    plugins [{
        label dysonCompress
        input {
            # peak limit, release time, fast ratio, ratio
            controls [0 1 0.5 0.99]
        }
    }]
}
# Used to share the record device
pcm.array {
    type dsnoop
    slave {
        pcm "eco"
        channels 1
    }
    ipc_key 666666
}
# Writes audio coming from any sort of player to ec.input; this is read by the
# echo cancellation software.
pcm.eci {
    type plug
    slave {
        format S16_LE
        rate 32000
        channels 1
        pcm {
            type file
            slave.pcm null
            file "/tmp/ec.input"
            format "raw"
        }
    }
}
# Read the FIFO output which contains the echo-cancelled audio
pcm.eco {
    type plug
    slave.pcm {
        type fifo
        infile "/tmp/ec.output"
        rate 32000
        format S16_LE
        channels 1
    }
    #ipc_key 666666
}
Note:
eco is used to read the FIFO file which contains the echo-cancelled audio coming from the cancellation software. This software's input is hw:0; it records audio directly from the microphone, then processes it and passes it over to ec.output.
dsnoop works well when slave.pcm is a hardware device, but as soon as I point it to something else it fails.
Is there a workaround or any other solution to tackle this problem?
dsnoop can only have a hardware slave, so it cannot take a FIFO plugin as input.
To solve this, I made the echo cancellation software output data into two different FIFO files and then read each of them via ALSA.
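A minimal sketch of that workaround, assuming the echo canceller can be pointed at two (hypothetical) output FIFOs /tmp/ec.output1 and /tmp/ec.output2; each consumer then reads its own FIFO directly instead of sharing one stream through dsnoop:

    # One FIFO reader per consumer; mirrors the eco definition above.
    pcm.eco1 {
        type plug
        slave.pcm {
            type fifo
            infile "/tmp/ec.output1"   # hypothetical path
            rate 32000
            format S16_LE
            channels 1
        }
    }
    pcm.eco2 {
        type plug
        slave.pcm {
            type fifo
            infile "/tmp/ec.output2"   # hypothetical path
            rate 32000
            format S16_LE
            channels 1
        }
    }

pcm.compressor and pcm.werman would then use "eco1" and "eco2" respectively as their slave.pcm, and pcm.array can be dropped.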

Add LADSPA stereo postprocessing into PipeWire

How do I add a LADSPA plugin to the PipeWire configuration so it can be used for audio postprocessing?
There are a number of existing LADSPA plugins.
The LADSPA plugin must work on stereo (two-channel) audio.
There is an existing PipeWire module, called filter-chain, that can encapsulate any number of LADSPA plugins.
First we need to add the filter-chain module in our build system. In a Yocto bitbake recipe it is added like this:
RDEPENDS_libpipewire += " \
    ${PN}-modules-client-node \
+   ${PN}-modules-filter-chain \
    .....
Then add an appropriate block to pipewire.conf, using filter-chain to load the specific LADSPA plugin when PipeWire is started:
{ name = libpipewire-module-filter-chain
    args = {
        node.name = "processing_name"
        node.description = "Specific postprocessing"
        media.name = "Specific postprocessing"
        filter.graph = {
            nodes = [
                {
                    type = ladspa
                    name = plugin_name
                    plugin = libplugin_name
                    label = plugin_name   # this needs to correspond to the LADSPA plugin code
                    control = {
                        "Some control" = 1234
                    }
                }
            ]
        }
        capture.props = {
            node.passive = true
            media.class = Audio/Sink
            audio.channels = 2
            audio.position = [ FL, FR ]
        }
        playback.props = {
            media.class = Audio/Source
            audio.channels = 2
            audio.position = [ FL, FR ]
        }
    }
}
The main point of integration is the label part in the node block. This must correspond to the label defined in the LADSPA plugin's code. I think the LADSPA ID can be used instead.
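If in doubt, the labels (and LADSPA IDs) a given plugin library exports can be listed with the analyseplugin tool from the ladspa-sdk; the filename below is the placeholder used in the config above:

    analyseplugin /usr/lib/ladspa/libplugin_name.so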
The capture/playback props determine whether the LADSPA plugins get stereo channels for processing, and they describe the types of nodes that are created for input and output.
Every postprocessing node implicitly has two nodes: one for input and another for output.
Afterwards the LADSPA plugin nodes need to be connected by the session manager of choice. In the case of WirePlumber, we may use a Lua script to detect the plugin nodes and connect them to the appropriate sinks (an ALSA sink, for example) and client nodes.
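For a quick manual test without a session-manager script, the links can also be made with pw-link; the sink name below is a stand-in, so list the real port names first:

    # List available output and input ports to find the real names:
    pw-link -o
    pw-link -i
    # Hypothetical link commands; substitute the names printed above:
    pw-link "processing_name:output_FL" "alsa_output.example-card:playback_FL"
    pw-link "processing_name:output_FR" "alsa_output.example-card:playback_FR"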
Example graph: (image of the resulting PipeWire node graph not included)

ALSA: Use plugin in c code while playing sound

I'm playing a WAV sound file with some C code such as this. It uses the snd_pcm_* APIs.
I would like to use the equalizer plugin:
libasound_module_ctl_equal.so, libasound_module_pcm_equal.so
found in "libasound2-plugin-equal"
How can I integrate and call an ALSA plugin from C code?
You need to make it part of the ALSA chain, e.g. in ~/.asoundrc add:
pcm.plugequal {
    type equal;
    slave.pcm "plughw:0,0";
}
pcm.!default {
    type plug;
    slave.pcm plugequal;
}
Then you can play a sound file with the command:
aplay some.wav
For the ctl device you can add the following to ~/.asoundrc:
ctl.!default {
    type equal;
}
You can just call
alsamixer
The answer is simpler than I imagined:
if (snd_pcm_open(&pcm_handle, "equal", SND_PCM_STREAM_PLAYBACK, 0) < 0) {
    /* handle the error */
}
You can pass the name of the plugin PCM to snd_pcm_open, as long as it is defined in the right configuration file.
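A minimal sketch of that approach, assuming a PCM with the given name exists in ~/.asoundrc (here "equal"; "plugequal" from the snippet above would work the same way), playing one second of silence through the equalizer chain:

    /* Minimal sketch: open a PCM by name and play one second of silence
     * through it. Error handling is abbreviated.
     * Build with: gcc play_eq.c -o play_eq -lasound
     */
    #include <stdio.h>
    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_pcm_t *pcm;
        static short buf[48000 * 2];   /* 1 s of stereo S16_LE silence */

        if (snd_pcm_open(&pcm, "equal", SND_PCM_STREAM_PLAYBACK, 0) < 0) {
            fprintf(stderr, "cannot open PCM\n");
            return 1;
        }
        /* Let the plug layer convert to whatever the slave supports. */
        if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                               SND_PCM_ACCESS_RW_INTERLEAVED,
                               2, 48000, 1, 500000) < 0) {
            fprintf(stderr, "cannot configure PCM\n");
            return 1;
        }
        snd_pcm_writei(pcm, buf, 48000);   /* count is in frames, not bytes */
        snd_pcm_drain(pcm);
        snd_pcm_close(pcm);
        return 0;
    }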

What is the of_node member in struct device?

The explanation in struct device says
Associated device tree node.
But I didn't clearly understand this.
Can anyone provide an example?
of_node is related to Open Firmware; it holds the information of a device tree node.
A device tree is like a config file (named nodes and properties) which describes the hardware in detail.
The main advantage of a device tree is that you don't have to keep modifying the kernel for specific hardware. All you have to do is define your hardware in device tree format and feed it to the bootloader. The bootloader, for example U-Boot, passes the device tree information to the kernel, and the kernel initializes the devices based on the information it received from the bootloader.
Below is an example of a device tree:
/ {
    compatible = "acme,coyotes-revenge";
    cpus {
        cpu@0 {
            compatible = "arm,cortex-a9";
        };
        cpu@1 {
            compatible = "arm,cortex-a9";
        };
    };
    serial@101F0000 {
        compatible = "arm,pl011";
    };
    serial@101F2000 {
        compatible = "arm,pl011";
    };
    interrupt-controller@10140000 {
        compatible = "arm,pl190";
    };
    external-bus {
        ethernet@0,0 {
            compatible = "smc,smc91c111";
        };
        i2c@1,0 {
            compatible = "acme,a1234-i2c-bus";
            rtc@58 {
                compatible = "maxim,ds1338";
            };
        };
        flash@2,0 {
            compatible = "samsung,k8f1315ebm", "cfi-flash";
        };
    };
};
struct device_node (the type of of_node) contains a list of struct property entries holding all the properties of the device tree node, plus pointers to related nodes (parent, siblings, children) and a name field. Each struct property in turn has a name (e.g., reg) and a value. That's how driver code can get different data, like addresses, out of the device tree.
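As a sketch of how this is used (the driver name and the "bus-width" property are made up for the example; the compatible string is taken from the device tree above), a platform driver reaches its matched node through dev->of_node and reads properties from it:

    /* Illustrative platform driver reading a property via of_node. */
    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <linux/of.h>

    static int demo_probe(struct platform_device *pdev)
    {
        struct device_node *np = pdev->dev.of_node;  /* node matched for us */
        u32 bus_width;

        /* Read a hypothetical "bus-width" property from the matched node. */
        if (of_property_read_u32(np, "bus-width", &bus_width))
            return -EINVAL;

        dev_info(&pdev->dev, "bus-width = %u\n", bus_width);
        return 0;
    }

    static const struct of_device_id demo_of_match[] = {
        { .compatible = "acme,a1234-i2c-bus" },   /* from the example above */
        { }
    };
    MODULE_DEVICE_TABLE(of, demo_of_match);

    static struct platform_driver demo_driver = {
        .probe = demo_probe,
        .driver = {
            .name = "demo",
            .of_match_table = demo_of_match,
        },
    };
    module_platform_driver(demo_driver);
    MODULE_LICENSE("GPL");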

What content type should I use with a .cfa sound file to play it with J2ME?

I'm using the following code to play a sound file in J2ME:
try
{
    String ctype_1 = "audio/x-wav"; // #LINE 1
    temp = Manager.createPlayer(this.getClass().getResourceAsStream(audioFileString), ctype_1);
    if (temp != null) {
        temp.setLoopCount(1);
        temp.realize();
        temp.prefetch();
        return temp;
    }
}
catch (Exception ex)
{
    ex.printStackTrace();
    System.out.println("MediaException in my sound ==>" + ex);
}
At the comment // #LINE 1 we have to give the content type (here the type is for a .wav file).
My question is: what content type should I give for a sound file with the .caf extension?
AFAIK, .cfa files aren't supported by Java ME. Go through these articles:
MMAPI for Java ME
Finding audio format in Java ME
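Whatever the extension, you can check at runtime which content types the device's MMAPI implementation actually supports; a minimal sketch (passing null means "for any protocol"):

    // Print every content type this device's MMAPI implementation supports.
    String[] types = javax.microedition.media.Manager.getSupportedContentTypes(null);
    for (int i = 0; i < types.length; i++) {
        System.out.println(types[i]);
    }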
