PCM channel mapping to physical speakers/mics (ALSA library) - audio

My requirements are:
Read the number of channels of the playback interface
Read the number of channels of each capture interface
Map WAV channels to specific speaker/mic ins and outs
When it comes to speakers, it could possibly be achieved by inspecting the output of the alsa-info command:
[ 2.254295] input: HDA Intel PCH Front Mic as /devices/pci0000:00/0000:00:1b.0/sound/card2/input10
[ 2.254441] input: HDA Intel PCH Rear Mic as /devices/pci0000:00/0000:00:1b.0/sound/card2/input11
[ 2.254543] input: HDA Intel PCH Line as /devices/pci0000:00/0000:00:1b.0/sound/card2/input12
[ 2.254726] input: HDA Intel PCH Line Out Front as /devices/pci0000:00/0000:00:1b.0/sound/card2/input13
[ 2.254789] input: HDA Intel PCH Line Out Surround as /devices/pci0000:00/0000:00:1b.0/sound/card2/input14
[ 2.254845] input: HDA Intel PCH Line Out CLFE as /devices/pci0000:00/0000:00:1b.0/sound/card2/input15
[ 2.254904] input: HDA Intel PCH Line Out Side as /devices/pci0000:00/0000:00:1b.0/sound/card2/input16
[ 2.254966] input: HDA Intel PCH Front Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card2/input17
As I understand it, this is the mapping between the PCI device and the input names provided by the sound card driver. This output could tell us about:
The number of IOs on the sound card
The number of playback vs. capture IOs (by inspecting each name, e.g. searching for 'Mic')
Is there any way to refer to:
/devices/pci0000:00/0000:00:1b.0/sound/card2/input13
... directly, by playing a one-channel WAV into it?
Generally I would like to be able to list all sound interfaces and collect the parameters that will allow me to play via SDL to any physical speaker, and to record any physical microphone onto a particular WAV channel. I managed to partially achieve this goal by:
Determining the device with aplay -l. In my example:
card 0: Device [USB Sound Device], device 0: USB Audio [USB Audio]
Subdevices: 0/1
Subdevice #0: subdevice #0
Determining the number of capture channels (so far by inspecting the physical device - there is one Line-In and one Mic). However, the output of cat /proc/asound/card0/stream0 gives me:
Capture:
Status: Stop
Interface 2
Altset 1
Format: S16_LE
Channels: 2
Endpoint: 5 IN (ASYNC)
Rates: 44100, 48000
Bits: 16
So it tells me that I have one capture interface with 2 channels (but I expected 2 capture interfaces - one for Line-In and a second for the stereo Mic).
So I know that if the mic is connected to the interface, then I should expect a 2-channel WAV, and each channel will correspond to one of the Mic channels.
Quite a similar story when it comes to the playback interface. Here is cat /proc/asound/card0/stream0 for playback:
Playback:
Status: Running
Interface = 1
Altset = 2
Packet Size = 196
Momentary freq = 48000 Hz (0x30.0000)
Interface 1
Altset 1
Format: S16_LE
Channels: 8
Endpoint: 6 OUT (ADAPTIVE)
Rates: 44100, 48000
Bits: 16
Interface 1
Altset 2
Format: S16_LE
Channels: 2
Endpoint: 6 OUT (ADAPTIVE)
Rates: 44100, 48000
Bits: 16
Interface 1
Altset 3
Format: S16_LE
Channels: 4
Endpoint: 6 OUT (ADAPTIVE)
Rates: 44100, 48000
Bits: 16
Interface 1
Altset 4
Format: S16_LE
Channels: 6
Endpoint: 6 OUT (ADAPTIVE)
Rates: 44100, 48000
Bits: 16
Interface 1
Altset 5
Format: S16_LE
Channels: 2
Endpoint: 6 OUT (ADAPTIVE)
Rates: 96000
Bits: 16
In that case I have physical connectors for a 7.1 speaker setup plus headphones. So I would expect to have control over 10 channels, but I only have 8 (the headphone output is always a duplicate, as if it were 2.1). Is there any way to access these channels separately?
There is also an S/PDIF input/output physical interface. Should I always expect duplicated PCMs on each physical interface, or is there a way to separate these streams? I'd like to squeeze as much I/O out of this sound card as I can :)
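As a side note on the single-speaker question above: ALSA's route plugin can bind a one-channel stream to one slave channel, which may let a mono WAV drive a single physical output. A minimal sketch, with the assumption that the 8-channel playback device is hw:0,0 and with the pcm name front_left_only invented for illustration:

```
# Hypothetical ~/.asoundrc entry; "hw:0,0" is an assumed device name -
# match it against your own aplay -l output first.
pcm.front_left_only {
    type route
    slave {
        pcm "hw:0,0"
        channels 8
    }
    ttable.0.0 1    # source channel 0 -> slave channel 0 (front left)
}
```

Then aplay -D front_left_only mono.wav should, in principle, play only through the first physical channel; changing the ttable output index selects a different speaker.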

Related

Sound through HDMI does not work under PipeWire on Fedora 35

I recently installed Fedora 35. I used an HDMI cable to use a TV as a second screen. I am able to use the video, but the audio does not work on the TV when using this computer.
Fedora 35 currently uses PipeWire + WirePlumber by default, as described at https://fedoraproject.org/wiki/Changes/WirePlumber
I already tried to switch to pipewire-media-session as described there, but it did not work.
The sound through HDMI works at the ALSA level: I can play a test sound using speaker-test:
$ speaker-test -c2 -f440 -tsine -Dhdmi:CARD=PCH,DEV=0
gnome-settings shows me "HDMI/DisplayPort - Internal audio" as an option to use, but there is no sound.
However, the sound does not work through PipeWire on GNOME. Here is some more information:
$ aplay -L
null
Discard all samples (playback) or generate zero samples (capture)
pipewire
PipeWire Sound Server
default
Default ALSA Output (currently PipeWire Media Server)
sysdefault:CARD=PCH
HDA Intel PCH, ALC3234 Analog
Default Audio Device
front:CARD=PCH,DEV=0
HDA Intel PCH, ALC3234 Analog
Front output / input
surround21:CARD=PCH,DEV=0
HDA Intel PCH, ALC3234 Analog
2.1 Surround output to Front and Subwoofer speakers
surround40:CARD=PCH,DEV=0
HDA Intel PCH, ALC3234 Analog
4.0 Surround output to Front and Rear speakers
surround41:CARD=PCH,DEV=0
HDA Intel PCH, ALC3234 Analog
4.1 Surround output to Front, Rear and Subwoofer speakers
surround50:CARD=PCH,DEV=0
HDA Intel PCH, ALC3234 Analog
5.0 Surround output to Front, Center and Rear speakers
surround51:CARD=PCH,DEV=0
HDA Intel PCH, ALC3234 Analog
5.1 Surround output to Front, Center, Rear and Subwoofer speakers
surround71:CARD=PCH,DEV=0
HDA Intel PCH, ALC3234 Analog
7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
hdmi:CARD=PCH,DEV=0
HDA Intel PCH, HDMI 0
HDMI Audio Output
hdmi:CARD=PCH,DEV=1
HDA Intel PCH, HDMI 1
HDMI Audio Output
hdmi:CARD=PCH,DEV=2
HDA Intel PCH, HDMI 2
HDMI Audio Output
Any help is appreciated.
You are probably using a video card shared with your CPU (integrated graphics). Try adding support like this.
Open a terminal and type:
sudo nano /etc/modprobe.d/alsa-base.conf
Add this at the end of the file:
options snd-hda-intel model=auto
Activate IOMMU at boot:
sudo nano /etc/default/grub
Change GRUB_CMDLINE_LINUX="" to:
GRUB_CMDLINE_LINUX="intel_iommu=on,igfx_off"
Save the file and update GRUB:
sudo update-grub
Reboot

aplay quits before entire song is read

Problem statement: I do not receive the entire song in the output when it is dumped to a file (I can't hear the song through the jack, but I can dump the file contents).
Background: I am new to ALSA programming and I have an embedded board with a limited set of commands. I have gone through the links here: ALSA tutorial required, but I couldn't figure out this timing-related issue.
Setup:
OS: linux 4.14.70
aplay: version 1.1.4 by Jaroslav Kysela <perex@perex.cz>
Advanced Linux Sound Architecture Driver Version k4.14.70.
The audio box involved has separate hardware and a separate DSP for standalone processing
Flow of information: Linux -> DSP core
The input song is communicated to the Linux core, which loads it into a DMA area -> the DSP reads it from DMA into a separate DMA ring-buffer area and writes it to the I2S output path, into a file.
I could see that the size of the song is 960000 bytes, with a sample rate of 48000 Hz, S16_LE format, 2 channels, 16-bit depth -> which calculates as shown below, as per the page https://www.colincrawley.com/audio-duration-calculator/:
Bit Rate: 1536 kbps
Duration:
0 Hours : 0 Minutes : 5 Seconds . 34 Milliseconds
From my logs, my DSP core processes the song only for a period of approx. 1 second before the aplay application sends an ioctl call to close the audio interface on the Linux side.
My questions are:
How does aplay understand time? For a 5-second song, how can we be sure that it has run for a period of 5 seconds?
Is there a way to wait until the entire song has been transmitted to the DSP core for processing before the close ioctl command is issued?
Some more info about the input file I am feeding:
stream : PLAYBACK
access : RW_INTERLEAVED
format : S16_LE
subformat : STD
channels : 2
rate : 48000
exact rate : 48000 (48000/1)
msbits : 16
buffer_size : 24000
period_size : 6000
period_time : 125000
tstamp_mode : NONE
tstamp_type : MONOTONIC
period_step : 1
avail_min : 6000
period_event : 0
start_threshold : 24000
stop_threshold : 24000
silence_threshold: 0
silence_size : 0
boundary : 6755399441055744000
appl_ptr : 0
hw_ptr : 0
I would be happy to provide more information to help understand why the aplay application closes the song early, but please be aware that it's a closed-source project.
Command I use:
aplay input.wav -c 2 -r 48000 -t wav
input size: 960044 bytes (including wav header)
output size: 306 KB observed before IOCTL call to close the audio interface occurs.
For an input.wav file of 960044 bytes, i.e. 938 KB,
time aplay input.wav returns:
real 0m0.988s
user 0m0.012s
sys 0m0.080s
To find the duration of wav file:
fileLength/(sampleRate*channel*bits per sample/8) = 960000/((48000 * 2 * 16)/8) = 5 seconds
If I run the same song on my Ubuntu machine, it is as expected:
real 0m5.452s
user 0m0.025s
sys 0m0.029s
Any hints on why this could occur? As seen above, the aplay application quits in 0.98 seconds, but the song should play for 5 seconds.
Looks like there was some problem with the custom audio hardware that was affecting the timing of processing the song. It now seems to be fixed. Basically put, the sound hardware has to allow sufficient delays, at least as per the AXI protocol, as the bytes are being read; in this case it did not. It's now resolved. Thanks for the attention.

Define a clean and working asound.conf for my embedded device

I am currently using a very complex asound.conf file from a reference design BSP. I would like to define my own asound.conf.
My current needs on my embedded device:
Play mono files only, at a 44100 Hz rate. In speaker mode I have only one output speaker.
When I plug in a jack, I must be able to hear the sound on both headphone channels.
I also need to be able to record sound from a microphone, in mono, at an 11025 Hz rate.
My available audio card :
# aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: wm8960audio [wm8960-audio], device 0: HiFi wm8960-hifi-0 []
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: wm8960audio [wm8960-audio], device 1: HiFi-ASRC-FE (*) []
Subdevices: 1/1
Subdevice #0: subdevice #0
#
I am not using the same rate for output and input, but the ASRC device allows me to keep good performance with different rates. That's why I want to use device 1 and not device 0.
I tried to define my config as follows:
# cat /etc/asound.conf
pcm_slave.out {
    pcm {
        type hw
        card 0
        device 1
    }
    channels 2
    period_time 0
    period_size 512
    buffer_size 1024
    rate 44100
}
pcm.out_mono {
    ipc_key 1042
    type dmix
    slave out
    bindings.0 0
    bindings.0 1
}
pcm_slave.in {
    pcm {
        type hw
        card 0
        device 1
    }
    channels 2
    rate 11025
}
pcm.in_mono {
    ipc_key 1043
    type dsnoop
    slave in
    bindings.0 1
}
It's working great with the speaker (so with one speaker only) and CPU performance is very good; I play the sound using the out_mono pcm. But in jack mode I am able to hear the sound in one headphone only when I use the out_mono pcm. In asound.conf I tried to say that I want to redirect the mono sound to both outputs, but it is not working:
bindings.0 0
bindings.0 1
The second bindings line overwrites the first one... So I am looking for a way to hear the sound on both outputs. Of course, if I use the default pcm instead of out_mono, the sound works perfectly on both outputs.
Did I misunderstand something in asound conf definition?
The dmix plugin has a 1:1 mapping of its own channels to slave channels.
To allow other conversions, use the plug plugin. Its bindings can be configured with ttable, but the defaults should be OK:
pcm.out_mono {
    type plug
    slave.pcm {
        ipc_key 1042
        type dmix
        slave out
    }
}
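If the defaults still route the mono stream to a single slave channel, the duplication can be spelled out with plug's ttable (a sketch reusing the ipc_key and slave names from this answer):

```
pcm.out_mono {
    type plug
    slave.pcm {
        ipc_key 1042
        type dmix
        slave out
    }
    ttable.0.0 1    # mono source channel 0 -> left
    ttable.0.1 1    # mono source channel 0 -> right
}
```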

DTS or AC-3 realtime encoder over hdmi via pulseaudio [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 2 years ago.
Here is my problem: my laptop (Debian 8) is connected to my TV via HDMI, which is itself connected to my 5.1 home theater via an S/PDIF optical cable.
S/PDIF only allows mono/stereo channels using PCM encoding, or multichannel audio using a Dolby-style compressed format, i.e. DTS or AC-3 encoding.
My system correctly detects the constraints:
cat /proc/asound/card0/eld#0.0
monitor_present 1
eld_valid 1
monitor_name LG TV
connection_type HDMI
eld_version [0x2] CEA-861D or below
edid_version [0x3] CEA-861-B, C or D
manufacture_id 0x6d1e
product_id 0x1
port_id 0x0
support_hdcp 0
support_ai 1
audio_sync_delay 0
speakers [0xffff] FL/FR LFE FC RL/RR RC FLC/FRC RLC/RRC FLW/FRW FLH/FRH TC FCH
sad_count 4
sad0_coding_type [0x1] LPCM
sad0_channels 2
sad0_rates [0x14e0] 32000 44100 48000 96000 192000
sad0_bits [0xe0000] 16 20 24
sad1_coding_type [0x2] AC-3
sad1_channels 6
sad1_rates [0xe0] 32000 44100 48000
sad1_max_bitrate 640000
sad2_coding_type [0xa] E-AC-3/DD+ (Dolby Digital Plus)
sad2_channels 6
sad2_rates [0xe0] 32000 44100 48000
sad3_coding_type [0x7] DTS
sad3_channels 6
sad3_rates [0xc0] 44100 48000
sad3_max_bitrate 1536000
I already looked on the net; the majority of topics are really outdated (2012 at best). I found a first solution, the a52 ALSA plugin, but unfortunately it seems that it does not work, or its config is not read by PulseAudio.
#####
# Description: To use ALSA's a52 plugin with PulseAudio. The defaults are
# channels 6 (possible values 2, 4, 6), bitrate 448 kbit/s, and a sampling
# rate of 48000 Hz (44100 or 48000 possible).
# Put this in ~/.asoundrc .
pcm.a52hdmi {
    args [CARD]
    args.CARD {
        type string
        default 0
    }
    type rate
    slave {
        pcm {
            type a52
            bitrate 640
            rate 48000
            channels 6
            card $CARD
        }
        rate 48000 # required for PulseAudio
    }
}
I found a way to view my films using mpv; it works because, if I understand correctly, it bypasses PulseAudio:
mpv --fullscreen --speed=24000/25025 --hwdec=vaapi --deinterlace=yes --af scaletempo,lavcac3enc=tospdif=yes:bitrate=640:minch=2
But I would really like PulseAudio itself to output AC-3 or DTS, to get 5.1 sound through S/PDIF.
I found a first solution, but I get some noise and crackling on the audio:
https://github.com/darealshinji/dcaenc
I found another solution :
https://www.linuxquestions.org/questions/linux-hardware-18/alsa-sb-omni-surround-5-1-iec958-is-routed-to-the-analog-output-not-the-digital-output-4175609669/
But it seems that ALSA is not able to assign the correct device numbers :( (note that I changed "device 2" to "device $DEV" and added it to the input params).
Result:
hdmi:CARD=HDMI,DEV=0 HDA Intel HDMI, HDMI 0 (HDMI Audio Output)
hdmi:CARD=HDMI,DEV=1 HDA Intel HDMI, HDMI 1 (HDMI Audio Output)
hdmi:CARD=HDMI,DEV=2 HDA Intel HDMI, HDMI 2 (HDMI Audio Output)
hdmi:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 3 (HDMI Audio Output)
hdmi:CARD=HDMI,DEV=4 HDA Intel HDMI, HDMI 4 (HDMI Audio Output)
...
a52:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 (IEC958 (AC3) Digital Surround 5.1 with all software conversions)
a52:CARD=HDMI,DEV=7 HDA Intel HDMI, HDMI 1 (IEC958 (AC3) Digital Surround 5.1 with all software conversions)
a52:CARD=HDMI,DEV=8 HDA Intel HDMI, HDMI 2 (IEC958 (AC3) Digital Surround 5.1 with all software conversions)
a52:CARD=HDMI,DEV=9 HDA Intel HDMI, HDMI 3 (IEC958 (AC3) Digital Surround 5.1 with all software conversions)
a52:CARD=HDMI,DEV=10 HDA Intel HDMI, HDMI 4 (IEC958 (AC3) Digital Surround 5.1 with all software conversions)
a52upmix:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 (IEC958 (AC3) Digital Surround 2.0 -> 5.1 with all software conversions)
a52upmix:CARD=HDMI,DEV=7 HDA Intel HDMI, HDMI 1 (IEC958 (AC3) Digital Surround 2.0 -> 5.1 with all software conversions)
a52upmix:CARD=HDMI,DEV=8 HDA Intel HDMI, HDMI 2 (IEC958 (AC3) Digital Surround 2.0 -> 5.1 with all software conversions)
a52upmix:CARD=HDMI,DEV=9 HDA Intel HDMI, HDMI 3 (IEC958 (AC3) Digital Surround 2.0 -> 5.1 with all software conversions)
a52upmix:CARD=HDMI,DEV=10 HDA Intel HDMI, HDMI 4 (IEC958 (AC3) Digital Surround 2.0 -> 5.1 with all software conversions)
dcahdmi:CARD=HDMI,DEV=0 HDA Intel HDMI, HDMI 0 (DTS Encoding through HDMI)
dcahdmi:CARD=HDMI,DEV=1 HDA Intel HDMI, HDMI 1 (DTS Encoding through HDMI)
dcahdmi:CARD=HDMI,DEV=2 HDA Intel HDMI, HDMI 2 (DTS Encoding through HDMI)
dcahdmi:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 3 (DTS Encoding through HDMI)
dcahdmi:CARD=HDMI,DEV=4 HDA Intel HDMI, HDMI 4 (DTS Encoding through HDMI)
...
Full config : https://pastebin.com/ZtF9npBD
I hope to hear from you soon ;)

How to configure different ALSA defaults for capture through one device and playback through another?

I'm looking for some help in configuring the audio on a Raspberry Pi as all my Googling efforts have fallen short so far!
My setup:
Raspberry PI 3 (running Debian Jessie)
USB WebCam (Logitech) which I'm using to capture audio
External speaker in 3.5mm audio jack which is used for playback
So far I've managed to configure ALSA to, by default, capture via the USB Webcam and playback via the 3.5mm jack. For example, the following works fine:
# Capture from Webcam
arecord test.wav
# Playback through 3.5mm jack
aplay test.wav
By default this captures audio in 8-bit, 8 kHz, mono. However, I'd like the default capture to use 16-bit, 16 kHz, mono settings, and this is where I'm stuck.
Here's my working ~/.asoundrc file:
pcm.!default {
    type asym
    playback.pcm {
        type hw
        card 1
        device 0
    }
    capture.pcm {
        type plug
        slave {
            pcm {
                type hw
                card 0
                device 0
            }
        }
    }
}
And my /etc/modprobe.d/alsa-base.conf:
options snd_usb_audio index=0
options snd_bcm2835 index=1
options snd slots=snd-usb-audio,snd-bcm2835
And the output of cat /proc/asound/cards:
0 [U0x46d0x825 ]: USB-Audio - USB Device 0x46d:0x825
USB Device 0x46d:0x825 at usb-3f980000.usb-1.4, high speed
1 [ALSA ]: bcm2835 - bcm2835 ALSA
bcm2835 ALSA
I've followed various guides to set the format, rate and channels attributes without any success. For example, this didn't work:
pcm.!default {
    type asym
    playback.pcm {
        type hw
        card 1
        device 0
    }
    capture.pcm {
        type plug
        slave {
            pcm {
                type hw
                card 0
                device 0
            }
            format S16_LE
            rate 16000
            channels 1
        }
    }
}
(I've also tried moving those attributes inside the pcm block in one of many desperate attempts!)
In truth I have no experience with audio on Linux at all and am utterly lost; any guidance would be hugely appreciated!
aplay uses whatever sample format the file actually has, but arecord creates a new file, so you have to specify the sample format if you do not want the silly defaults:
arecord -f S16_LE -r 16000 -c 1 test.wav
