I'm playing a WAV sound file with some C code. It uses the following APIs:
snd_pcm_*
I would like to use the equalizer plugin:
libasound_module_ctl_equal.so, libasound_module_pcm_equal.so
found in the "libasound2-plugin-equal" package.
How can I integrate and call an ALSA plugin from C code?
You need to make it part of the ALSA chain, e.g. in ~/.asoundrc add:
pcm.plugequal {
    type equal;
    slave.pcm "plughw:0,0";
}
pcm.!default {
    type plug;
    slave.pcm plugequal;
}
Then you can use this command to play a sound file:
aplay some.wav
For the ctl device you can add the following to ~/.asoundrc:
ctl.!default {
    type equal;
}
Then you can just call:
alsamixer
The answer is simpler than I imagined:
snd_pcm_open(&pcm_handle, "equal", SND_PCM_STREAM_PLAYBACK, 0);
You can pass the name of the plugin PCM to snd_pcm_open, provided the right definitions are in the default configuration file.
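For completeness, here is a minimal playback sketch built around that call. It is only an illustration, not the original poster's program: it assumes the ~/.asoundrc above and therefore opens the PCM by the name defined there ("plugequal"; use "equal" if that is what your configuration defines), then pushes one second of silence through the equalizer chain.

/* Build with: cc eqtest.c -o eqtest -lasound   (the file name is made up) */
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm_handle;
    static short buf[48000 * 2];               /* one second of S16_LE stereo silence */

    /* "plugequal" must match the PCM name defined in ~/.asoundrc */
    if (snd_pcm_open(&pcm_handle, "plugequal", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    /* convenience wrapper for hw/sw params: S16_LE, 2 channels, 48 kHz, 0.5 s max latency */
    if (snd_pcm_set_params(pcm_handle, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED, 2, 48000, 1, 500000) < 0)
        return 1;

    snd_pcm_writei(pcm_handle, buf, 48000);    /* frame count, not byte count */
    snd_pcm_drain(pcm_handle);
    snd_pcm_close(pcm_handle);
    return 0;
}

In a real player you would replace the silent buffer with the decoded WAV samples from your own read/write loop; the equalizer settings themselves are still controlled through alsamixer -D equal (or whatever ctl you configured).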
I am trying to get my Linux audio system to handle audio via ALSA. These are the requirements:
Handle echo cancellation. I am using https://github.com/voice-engine/ec, following the steps mentioned there, and it works independently without a problem.
Divide the echo-cancelled stream into two:
One goes into a noise-cancelling plugin (again, this works independently); its audio output is used by a different program.
The other goes to a compressor; its audio output will be the default device.
Problem
I am facing problems using "dsnoop" to share/divide the audio stream into two. When the dsnoop plugin's slave is set to a FIFO, it throws an error.
executed:
sudo arecord -D default -f cd defRecording.wav -c 1 -r 32000
error:
ALSA lib pcm_direct.c:1809:(_snd_pcm_direct_get_slave_ipc_offset) Invalid type 'fifo' for slave PCM
arecord: main:828: audio open error: Invalid argument
This is the current asound.conf:
pcm.!default {
    type asym
    playback.pcm "playback"
    capture.pcm "capture"
}
pcm.playback {
    type plug
    slave.pcm "eci"
}
# Stream Output 1: Final
pcm.capture {
    type plug
    slave.pcm "compressor"
}
# Stream Output 2: Final
pcm.capture2 {
    type plug
    slave.pcm "werman"
}
# Stream output 2: Noise Cancellation
pcm.werman {
    type ladspa
    slave.pcm "array";
    path "/usr/lib/ladspa";
    plugins [{
        label noise_suppressor_mono
        input {
            # VAD Threshold %
            controls [ 1 ]
        }
    }]
}
# Stream output 1: Compressor
pcm.compressor {
    type ladspa
    slave.pcm "array";
    path "/usr/lib/ladspa";
    plugins [{
        label dysonCompress
        input {
            # peak limit, release time, fast ratio, ratio
            controls [0 1 0.5 0.99]
        }
    }]
}
# Used to share the record device
pcm.array {
    type dsnoop
    slave {
        pcm "eco"
        channels 1
    }
    ipc_key 666666
}
# Writes audio coming from any sort of player to ec.input, this is read by the echo
# cancellation software.
pcm.eci {
    type plug
    slave {
        format S16_LE
        rate 32000
        channels 1
        pcm {
            type file
            slave.pcm null
            file "/tmp/ec.input"
            format "raw"
        }
    }
}
# Read FIFO output which contains echo cancelled audio
pcm.eco {
    type plug
    slave.pcm {
        type fifo
        infile "/tmp/ec.output"
        rate 32000
        format S16_LE
        channels 1
    }
    #ipc_key 666666
}
Note:
eco is used to read the FIFO file which contains the echo-cancelled audio coming from the cancellation software. That software's input is hw:0: it records audio directly from the microphone, processes it, and writes the result to ec.output.
Dsnoop works well when the slave.pcm is a hardware device, but as soon as I point it at something else it fails.
Is there a workaround or any other solution to tackle this problem?
Dsnoop can only have a hardware slave, so it cannot take a FIFO plugin as its input.
To solve this, I made the echo cancellation software write its output into two different FIFO files and then read each of them via ALSA.
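If you want to sanity-check one of the resulting streams from C rather than with arecord, a capture sketch along these lines can be used. It is only an illustration: "eco" is the FIFO-reading PCM from the asound.conf above, so substitute whichever PCM you end up defining for each FIFO, and the parameters match the 32 kHz mono S16_LE stream used there.

/* Build with: cc readtest.c -o readtest -lasound   (the file name is made up) */
#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    static short buf[32000];                   /* one second of S16_LE mono at 32 kHz */

    /* "eco" is the FIFO-backed PCM from asound.conf; change it to suit your setup */
    if (snd_pcm_open(&pcm, "eco", SND_PCM_STREAM_CAPTURE, 0) < 0) {
        fprintf(stderr, "cannot open capture PCM\n");
        return 1;
    }
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED, 1, 32000, 1, 500000) < 0) {
        fprintf(stderr, "cannot set parameters\n");
        return 1;
    }

    snd_pcm_start(pcm);                                    /* begin capturing */
    snd_pcm_sframes_t got = snd_pcm_readi(pcm, buf, 32000); /* blocks until data arrives */
    printf("read %ld frames\n", (long) got);

    snd_pcm_close(pcm);
    return 0;
}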
How do I add a LADSPA plugin to the pipewire configuration to be used for audio postprocessing?
There are a number of existing ladspa plugins.
The ladspa plugin must work on stereo (two-channel) audio.
There is an existing pipewire module, called filter-chain, that can encapsulate any number of ladspa plugins.
First we need to add the filter-chain module to our build system. In a Yocto bitbake recipe it is added like this:
RDEPENDS_libpipewire += " \
${PN}-modules-client-node \
+ ${PN}-modules-filter-chain \
.....
Then add an appropriate pipewire.conf block that uses filter-chain to load the specific ladspa plugin when pipewire is started:
{ name = libpipewire-module-filter-chain
    args = {
        node.name = "processing_name"
        node.description = "Specific postprocessing"
        media.name = "Specific postprocessing"
        filter.graph = {
            nodes = [
                {
                    type = ladspa
                    name = plugin_name
                    plugin = libplugin_name
                    label = plugin_name   # this needs to correspond to the ladspa plugin code
                    control = {
                        "Some control" = 1234
                    }
                }
            ]
        }
        capture.props = {
            node.passive = true
            media.class = Audio/Sink
            audio.channels = 2
            audio.position = [ FL, FR ]
        }
        playback.props = {
            media.class = Audio/Source
            audio.channels = 2
            audio.position = [ FL, FR ]
        }
    }
}
The main point of integration is the label field in the node block. This must correspond to the label in the ladspa plugin code. I think the ladspa id can be used instead.
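If you are unsure which label (or numeric id) a given plugin library actually exports, a small standalone lister like the sketch below can help. It is not part of pipewire; it just walks the standard ladspa_descriptor entry point. The ladspa.h header from the ladspa-sdk is assumed to be installed, and the file name list_labels.c is made up.

/* Build with: cc list_labels.c -o list_labels -ldl */
#include <stdio.h>
#include <dlfcn.h>
#include <ladspa.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s /usr/lib/ladspa/plugin.so\n", argv[0]);
        return 1;
    }

    void *lib = dlopen(argv[1], RTLD_NOW);
    if (!lib) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* every ladspa library exports this entry point */
    LADSPA_Descriptor_Function fn =
        (LADSPA_Descriptor_Function) dlsym(lib, "ladspa_descriptor");
    if (!fn) {
        fprintf(stderr, "%s is not a ladspa library\n", argv[1]);
        return 1;
    }

    const LADSPA_Descriptor *d;
    for (unsigned long i = 0; (d = fn(i)) != NULL; ++i)
        printf("label=%s  id=%lu  name=%s\n", d->Label, d->UniqueID, d->Name);

    dlclose(lib);
    return 0;
}

Whatever this prints as label is the value to put into the label field of the filter-chain node.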
The capture/playback props determine whether the ladspa plugins get stereo channels for processing, and they describe the type of nodes that are created for output and input.
Every post-processing filter implicitly has two nodes: one for input and another for output.
Afterwards the ladspa plugin needs to be connected by the session manager of choice. In the case of wireplumber we may use a Lua script to detect the plugin nodes and connect them to the appropriate sinks (an ALSA sink, for example) and client nodes.
Example graph: (image not included)
How can I correctly change a .asoundrc file to realize a virtual microphone?
I have the following .asoundrc file, but it doesn't work:
pcm.audinp {
    type file
    slave.pcm front
    file /dev/null
    infile /writable/home/user/virtualmic.pipe
    format "raw"
}
pcm.!default {
    type plug
    slave.pcm "audinp"
}
I'm using the following code to play a sound file in J2ME:
try
{
    String ctype_1 = "audio/x-wav"; // #LINE 1
    temp = Manager.createPlayer(this.getClass().getResourceAsStream(audioFileString), ctype_1);
    if (temp != null) {
        temp.setLoopCount(1);
        temp.realize();
        temp.prefetch();
        return temp;
    }
}
catch (Exception ex)
{
    ex.printStackTrace();
    System.out.println("MediaException in my sound ==>" + ex);
}
At the comment // #LINE 1 we have to give the content type (in this case, the type for a .wav file).
My question is: what content type should I give for a sound file with the .caf extension?
AFAIK the .caf format isn't supported by Java ME. Go through these articles:
MMAPI for Java ME
Finding audio format in Java ME
I am downloading a text string from a web service into an RBuf8 using this kind of code (it works..)
void CMyApp::BodyReceivedL( const TDesC8& data ) {
    int newLength = iTextBuffer.Length() + data.Length();
    if (iTextBuffer.MaxLength() < newLength)
    {
        iTextBuffer.ReAllocL(newLength);
    }
    iTextBuffer.Append(data);
}
I want to then convert the RBuf8 into a char* string I can display in a label or whatever.. or for the purposes of debug, display in
RDebug::Printf("downloading text %S", charstring);
edit for clarity..
My conversion function looks like this..
void CMyApp::DownloadCompleteL()
{
    RBuf16 buf;
    buf.CreateL(iTextBuffer.Length());
    buf.Copy(iTextBuffer);
    RDebug::Printf("downloaded text %S", buf);
    iTextBuffer.SetLength(0);
    iTextBuffer.ReAlloc(0);
}
But this still causes a crash. I am using S60 3rd Edition FP2 v1.1
What you may need is something to the effect of:
RDebug::Print( _L( "downloaded text %S" ), &buf );
This tutorial may help you.
void RBuf16::Copy(const TDesC8&) will take an 8-bit descriptor and convert it into a 16-bit descriptor.
You should be able to display any 16-bit descriptor on the screen. If it doesn't seem to work, post the specific API you're using.
When an API can take a variable number of parameters (like void RDebug::Printf(const char*, ...)), %S is used for "pointer to a 16-bit descriptor". Note the uppercase %S.
Thanks, the %S is a helpful reminder.
However, this doesn't seem to work; the conversion function shown in the edit above still causes a crash.
You have to supply a pointer to the descriptor in RDebug::Printf, so it should be:
RDebug::Print(_L("downloaded text %S"), &buf);
Although the use of _L is discouraged; the _LIT macro is preferred.
As stated by quickrecipesonsymbainosblogspotcom, you need to pass a pointer to the descriptor.
RDebug::Printf("downloaded text %S", &buf); //note the address-of operator
This works because RBuf8 is derived from TDes8 (and the same with the 16-bit versions).