BUTT - Streaming ALSA loopback audio using BUTT to an Icecast2 server - Linux

I've set up a radio for the local network, and everything (Icecast2, BUTT (Broadcast Using This Tool)) is ready and working except one thing. BUTT is designed to capture audio from an input device, but that is not what I want: I want to stream desktop audio. I've created a loopback device using modprobe snd-aloop. I actually managed to route the audio from a specific program to that loopback device, and I could hear it in the web player, but the sound is stuttering, speeding up and going back to normal; it's hard to describe. It's as if the decoder cannot catch up. All of this happens within less than a second.
Output from pactl list short sinks:
1 alsa_output.pci-0000_00_1f.3.analog-stereo module-alsa-card.c s16le 2ch 48000Hz RUNNING
3 alsa_output.platform-snd_aloop.0.analog-stereo module-alsa-card.c s16le 2ch 44100Hz IDLE
11 alsa_output.pci-0000_01_00.1.hdmi-stereo module-alsa-card.c s16le 2ch 44100Hz IDLE
I have created ~/.asoundrc, as most of the tutorials encouraged me to do, and I believe this problem can be solved through this file, but I am not familiar enough with ALSA and its features. The sample rate of the sound card is 48000 Hz, but BUTT is forcing me to select 44100.
Contents of ~/.asoundrc:
pcm.!default {
    type asym
    playback.pcm "LoopAndReal"
    #capture.pcm "looprec"
    capture.pcm "hw:0,0"
}
pcm.looprec {
    type hw
    card "Loopback"
    device 1
    subdevice 0
}
pcm.LoopAndReal {
    type plug
    slave.pcm mdev
    route_policy "duplicate"
}
pcm.mdev {
    type multi
    slaves.a.pcm pcm.MixReale
    slaves.a.channels 2
    slaves.b.pcm pcm.MixLoopback
    slaves.b.channels 2
    bindings.0.slave a
    bindings.0.channel 0
    bindings.1.slave a
    bindings.1.channel 1
    bindings.2.slave b
    bindings.2.channel 0
    bindings.3.slave b
    bindings.3.channel 1
}
pcm.MixReale {
    type dmix
    ipc_key 1024
    slave {
        pcm "hw:0,0"
        #rate 48000
        rate 44100
        periods 128
        period_time 0
        period_size 1024 # must be power of 2
        buffer_size 8192
    }
}
pcm.MixLoopback {
    type dmix
    ipc_key 1025
    slave {
        pcm "hw:Loopback,0,0"
        #rate 48000
        rate 44100
        periods 128
        period_time 0
        period_size 1024 # must be power of 2
        buffer_size 8192
    }
}
Thanks in advance for any help.

I've managed to pipe my desktop audio to the mic input using PulseAudio alone, with no ALSA involved, thanks to this thread: https://unix.stackexchange.com/questions/82259/how-to-pipe-audio-output-to-mic-input

Related

How to ensure that the FFmpeg libraries use / do not use the GPU

My library (Linux, Debian) uses the FFmpeg libraries (avformat, avcodec, swscale, etc.) to read video streams from network cameras. I need to capture each video frame from a network camera, decode it, scale it, and store it in memory; another thread then passes this data to the calling program for display.
The problem is that all of this runs on the CPU and consumes a huge amount of CPU resources. How can I enforce use of the GPU accelerator for the processing?
I have this video card: VGA compatible controller: Intel Corporation HD Graphics 620 (rev 02)
My decode thread looks like this (I omit declarations, error handling, etc., so please don't look for mistakes :))
fmt = avformat_alloc_context();
// initialising, setting options with av_dict_set
// finding the video stream index
***
// finding the decoder and allocating its context
frame = av_frame_alloc();
while (av_read_frame(ctx->fmt, &pkt) >= 0)
{
    avcodec_send_packet(ctx->dec_ctx, &pkt);
    avcodec_receive_frame(ctx->dec_ctx, frame);
    ***
    // get a buffer allocated to store the frame data
    buff = get_free_buffer(ctx);
    sws_scale(ctx->sws, (const uint8_t * const *)frame->data,
              frame->linesize, 0, ctx->dec_ctx->height, buff->data,
              buff->linesize);
    av_packet_unref(&pkt);
}
You can find HW-accelerated ffmpeg recoding commands on the internet; I am using:
ffmpeg -vaapi_device /dev/dri/renderD128 -i "inputfile" -vf format=nv12,hwupload -c:v h264_vaapi -f mp4 -qp 18 -map 0 "outputfile.mp4"
You can list the HW accelerators with the command ffmpeg -hwaccels, find the DRI device path with ls /dev/dri/, and find the video codec/encoder (h264_vaapi in the example above) with ffmpeg -encoders. The -f mp4 parameter may not be necessary to define the file format; -qp sets the quality (in this case similar to the original); -map 0 will try to use all streams of the input file, not just the stream with the highest quality and the first/default subtitle...
On the other hand, when I do not define the HW accelerator device and use the default encoder libx264, I can see the CPU is maxed out, so no HW acceleration is likely being used.

ALSA: snd_pcm_writei returns EAGAIN

I'm struggling to understand what I'm doing wrong in my audio playback routine.
I have a thread that takes buffers from other threads and plays them the same way this ALSA example program does: https://www.alsa-project.org/alsa-doc/alsa-lib/_2test_2pcm_8c-example.html
I'm referring to the write_loop() function. This is the device configuration of the pcm.c example program (output of snd_pcm_dump()):
ALSA <-> PulseAudio PCM I/O Plugin
Its setup is:
stream : PLAYBACK
access : RW_INTERLEAVED
format : S16_LE
subformat : STD
channels : 1
rate : 44100
exact rate : 44100 (44100/1)
msbits : 16
buffer_size : 22050
period_size : 4410
period_time : 100000
tstamp_mode : NONE
tstamp_type : GETTIMEOFDAY
period_step : 1
avail_min : 4410
period_event : 0
start_threshold : 22050
stop_threshold : 22050
silence_threshold: 0
silence_size : 0
boundary : 6206523236469964800
Placing some printf() calls around snd_pcm_writei(), I see that it executes 5 times straight, and on every subsequent loop snd_pcm_writei() takes 100 ms to complete. This is exactly what I was expecting to see.
This is device setup of my program:
ALSA <-> PulseAudio PCM I/O Plugin
Its setup is:
stream : PLAYBACK
access : RW_INTERLEAVED
format : FLOAT_LE
subformat : STD
channels : 1
rate : 44100
exact rate : 44100 (44100/1)
msbits : 32
buffer_size : 13230
period_size : 4410
period_time : 100000
tstamp_mode : NONE
tstamp_type : GETTIMEOFDAY
period_step : 1
avail_min : 4410
period_event : 0
start_threshold : 4410
stop_threshold : 13230
silence_threshold: 0
silence_size : 0
boundary : 7447827883763957760
What happens is that snd_pcm_writei() runs 5 times (and this is OK), but after that, on every new loop, it returns immediately with -EAGAIN. It keeps retrying for 100 ms (at 100% CPU usage) to play the same buffer; eventually the buffer gets played and snd_pcm_writei() returns a positive number, then for the next audio buffer I immediately get -EAGAIN again, for 100 ms, and so on. The audio playback itself, however, is fine.
What I don't understand is why it doesn't wait 100 ms to play the new buffer instead of returning -EAGAIN immediately (I cannot find anything in the ALSA docs about snd_pcm_writei() returning -EAGAIN).
Thanks in advance for any help!
A PCM device can be in blocking mode (waiting) or in non-blocking mode (returning -EAGAIN).
This mode can be set with a flag when calling snd_pcm_open(), or with snd_pcm_nonblock().

Pulseaudio/alsa : slow playback device wake-up

I have a Debian machine (3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux) to which I have some audio devices connected. The stereo devices work well, but I have a problem with a mono headset. When I run the command
aplay -v -D plughw:2,0 ~/piano2.wav
the device waits 3-4 seconds before starting to output sound. If I re-run the command within the following 5 seconds, the sound plays immediately, but if I wait a bit longer, I have to wait 3-4 seconds again before hearing anything.
Here is the output when I run the command above:
Playing WAVE '/home/console/piano2.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Stereo
Plug PCM: Route conversion PCM (sformat=S16_LE)
Transformation table:
0 <- 0*0.5 + 1*0.5
Its setup is:
stream : PLAYBACK
access : RW_INTERLEAVED
format : S16_LE
subformat : STD
channels : 2
rate : 48000
exact rate : 48000 (48000/1)
msbits : 16
buffer_size : 24000
period_size : 6000
period_time : 125000
tstamp_mode : NONE
period_step : 1
avail_min : 6000
period_event : 0
start_threshold : 24000
stop_threshold : 24000
silence_threshold: 0
silence_size : 0
boundary : 6755399441055744000
Slave: Rate conversion PCM (16000, sformat=S16_LE)
Converter: libspeex (builtin)
Protocol version: 10002
Its setup is:
stream : PLAYBACK
access : MMAP_INTERLEAVED
format : S16_LE
subformat : STD
channels : 1
rate : 48000
exact rate : 48000 (48000/1)
msbits : 16
buffer_size : 24000
period_size : 6000
period_time : 125000
tstamp_mode : NONE
period_step : 1
avail_min : 6000
period_event : 0
start_threshold : 24000
stop_threshold : 24000
silence_threshold: 0
silence_size : 0
boundary : 6755399441055744000
Slave: Hardware PCM card 2 'Jabra PRO 9460' device 0 subdevice 0
Its setup is:
stream : PLAYBACK
access : MMAP_INTERLEAVED
format : S16_LE
subformat : STD
channels : 1
rate : 16000
exact rate : 16000 (16000/1)
msbits : 16
buffer_size : 8000
period_size : 2000
period_time : 125000
tstamp_mode : NONE
period_step : 1
avail_min : 2000
period_event : 0
start_threshold : 8000
stop_threshold : 8000
silence_threshold: 0
silence_size : 0
boundary : 9007199254740992000
appl_ptr : 0
hw_ptr : 0
And this is my .asoundrc file:
pcm.!default {
    type plug
    slave {
        pcm "hw:0,0"
    }
}
pcm.device1 {
    type plug
    slave {
        pcm "hw:1,0"
    }
}
pcm.device2 {
    type plug
    slave {
        pcm "hw:2,0"
    }
}
pcm.device3 {
    type plug
    slave {
        pcm "hw:3,0"
    }
}
ctl.!default {
    type hw
    card 0
}
ctl.device1 {
    type hw
    card 1
}
ctl.device2 {
    type hw
    card 2
}
ctl.device3 {
    type hw
    card 3
}
Does anybody have an idea why I get such a delay while my device wakes up?
Thanks

Alsa amixer lists both playback and capture device when using softvol [closed]

Closed 8 years ago.
I'm using ALSA with dmix and softvol to mix multiple sound sources and control their individual volumes. This works, but one thing is bothering me. The mixer control shows up when I play back a wave file with aplay, but its description mentions both a playback and a capture channel, and I want to use separate controls for playback and capture. This is my amixer output:
Simple mixer control 'SpeechPlayback',0
Capabilities: volume volume-joined
Playback channels: Mono
Capture channels: Mono
Limits: 0 - 255
Mono: 255 [100%]
This is the .asoundrc I use:
pcm.!default pcm.snd_card0
pcm.snd_card0 {
    type hw
    card 0
    device 0
}
ctl.snd_card0 {
    type hw
    card 0
    device 0
}
pcm.dmix0 {
    type dmix
    ipc_key 1024
    ipc_key_add_uid true
    slave.pcm "snd_card0"
    slave {
        period_time 0
        period_size 256
        rate 44100
        format S16_LE
    }
}
ctl.dmix0 {
    type hw
    card 0
    device 0
}
pcm.dsnoop0 {
    type dsnoop
    ipc_key 2048
    ipc_key_add_uid true
    slave.pcm "snd_card0"
    slave {
        period_time 0
        period_size 256
        rate 8000
    }
}
ctl.dsnoop0 {
    type hw
    card 0
    device 0
}
############################################################################
# Volume controls for the different PCM devices
# controls become available after first playback
# volume e.g.: amixer set Ring 80%
############################################################################
pcm.ring {
    type plug
    slave {
        channels 1
        rate 44100
        pcm {
            type softvol
            slave.pcm "dmix0"
            control {
                name "Ring"
                count 1
            }
        }
    }
}
pcm.speech_play {
    type plug
    slave {
        channels 1
        rate 44100
        pcm {
            type softvol
            slave.pcm "dmix0"
            control {
                name "SpeechPlayback"
                count 1
            }
        }
    }
}
pcm.speech_capture {
    type plug
    slave {
        channels 1
        rate 8000
        pcm {
            type softvol
            slave.pcm "dsnoop0"
            control {
                name "SpeechCapture"
                count 1
            }
        }
    }
}
Does anybody know how to separate the playback controls from the capture controls? I have tried asym but could not find a configuration that works for me.
Best regards,
Jeroen van der Laan
The naming of the control determines its direction: renaming "Ring" to "Ring Playback Volume" will create a Ring control with only a playback option.
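Applied to the config above, that would look something like the following untested sketch; only the control names change, with the "Playback Volume" / "Capture Volume" suffixes telling softvol which direction to register:

```
pcm.speech_play {
    type plug
    slave {
        channels 1
        rate 44100
        pcm {
            type softvol
            slave.pcm "dmix0"
            control {
                name "Speech Playback Volume"
                count 1
            }
        }
    }
}
pcm.speech_capture {
    type plug
    slave {
        channels 1
        rate 8000
        pcm {
            type softvol
            slave.pcm "dsnoop0"
            control {
                name "Speech Capture Volume"
                count 1
            }
        }
    }
}
```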

audio stream sampling rate in linux

I'm trying to read and store samples from an audio microphone in Linux using C/C++. Using the OSS PCM ioctls, I set up the device with a certain sampling rate, say 10 kHz, using the SOUND_PCM_WRITE_RATE ioctl, etc. The device gets set up correctly and I'm able to read from it after setup using read():
int got = read(itsFd, b.getDataPtr(), b.sizeBytes());
The problem I have is that, after setting the appropriate sampling rate, a thread continuously reads from /dev/dsp1 and stores these samples, but the number of samples I get for 1 second of recording is way off the sampling rate, always orders of magnitude more than the rate I set. Any ideas where to begin figuring out what the problem might be?
EDIT:
Partial source code:
/////////main loop
while (goforever) {
    // grab a buffer:
    AudioBuffer<uint16> buffer;
    agb->grab(buffer);
    pthread_mutex_lock(&qmutex_data);
    rec.push(buffer);
    pthread_mutex_unlock(&qmutex_data);
    if (tim.getSecs() >= 5)
        goforever = false;
}
////////////grab function:
template <class T>
void AudioGrabber::grab(AudioBuffer<T>& buf) const
{
    AudioBuffer<T> b(itsBufsamples.getVal(),
                     itsStereo.getVal() ? 2U : 1U,
                     float(itsFreq.getVal()),
                     NO_INIT);
    int got = read(itsFd, b.getDataPtr(), b.sizeBytes());
    if (got != int(b.sizeBytes()))
        PLERROR("Error reading from device: got %d of %u requested bytes",
                got, b.sizeBytes());
    buf = b;
}
Just because you ask for a 10kHz sampling rate, it doesn't mean that your hardware supports it. Many sound cards only support one or two sampling rates - mine for example only supports these:
$ grep -rH rates /proc/asound/ | cut -d : -f 2- | sort -u
rates [0x160]: 44100 48000 96000
rates [0x560]: 44100 48000 96000 192000
rates [0x5e0]: 44100 48000 88200 96000 192000
Therefore, you have to check the rate that the SOUND_PCM_WRITE_RATE ioctl() returns in its argument to verify that you got the rate you wanted, as mentioned here:
SOUND_PCM_WRITE_RATE
Sets the sampling rate in samples per second. Remember that all sound cards have a limit on the range; the driver will round the rate to the nearest speed supported by the hardware, returning the actual (rounded) rate in the argument.
