I'm writing an app for an embedded device. The device is connected to an 8-microphone board, so 8 channels are transferred to the device. In ALSA this device is visible as hw:3,0.
I've opened the hw:3,0 stream and checked the number of allowed channels with snd_pcm_hw_params_test_channels(). The result was 1 - 8.
What happens if I open the stream and set the number of channels to 4? Does ALSA drop the remaining 4 channels, so I get a buffer of CH1 | CH2 | CH3 | CH4 samples, or do I still get CH1 | ... | CH8 in the buffer?
Thank you for your help,
Renegade
The hw_params_* constraints are managed by the driver, so the driver is told that the stream has four channels and is then responsible for configuring the hardware to deliver four samples per frame.
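As a minimal sketch of that flow (the S16_LE format and 48 kHz rate are placeholders, not from the question; only the channel count matters here):

#include <alsa/asoundlib.h>

/* Open the 8-mic capture device but ask for only 4 channels. */
int open_capture_4ch(snd_pcm_t **pcm)
{
    snd_pcm_hw_params_t *hw;
    unsigned int rate = 48000;
    int err;

    if ((err = snd_pcm_open(pcm, "hw:3,0", SND_PCM_STREAM_CAPTURE, 0)) < 0)
        return err;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(*pcm, hw);
    snd_pcm_hw_params_set_access(*pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(*pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_rate_near(*pcm, hw, &rate, 0);

    /* With channels = 4, every frame returned by snd_pcm_readi() contains
     * exactly 4 samples; which physical inputs those are is up to the
     * driver and the hardware. */
    if ((err = snd_pcm_hw_params_set_channels(*pcm, hw, 4)) < 0)
        return err;

    return snd_pcm_hw_params(*pcm, hw);
}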
I have a WAV audio file with 3 channels: 2 for the sound (stereo) and 1 with short pulses used to synchronize the recording with an external device (for a scientific experiment; the details aren't important).
The thing is: I have a sound card with 4 audio inputs and outputs, but when I try to play the file there is an error about the number of channels:
sounddevice.PortAudioError: Error opening OutputStream: Invalid number of channels [PaErrorCode -9998]
This occurs with both SoundDevice and Pygame.play, and also when I added a 4th blank channel.
[Photo of the list of audio devices connected to the computer]
Thanks!
I have an Ice Lake laptop. The A/V system has several HDMI 2.0 outputs and each appears to have 8-channel audio. However, I can't figure out how to get multi-channel audio to play through these outputs. amixer shows playback channel maps for these outputs, for example:
numid=41,iface=PCM,name='Playback Channel Map',device=5
; type=INTEGER,access=rw---R--,values=8,min=0,max=36,step=0
: values=0,0,0,0,0,0,0,0
| container
| chmap-variable=FL,FR
It appears that it should be possible to change these channel maps, but I can't find out how to do so. Running amixer -c0 cset numid=41 1,1,1,1,1,1,1,1 doesn't change anything. Does anyone know what syntax amixer wants to change these channel maps?
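For reference, channel maps are typically manipulated through the PCM chmap API rather than through amixer cset; a rough sketch, assuming the driver actually accepts a writable map (the device name "hw:0,5" and the 7.1 layout below are placeholders):

#include <alsa/asoundlib.h>
#include <stdlib.h>

int set_8ch_map(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    snd_pcm_chmap_t *map;
    unsigned int rate = 48000;
    int err;

    if ((err = snd_pcm_open(&pcm, "hw:0,5", SND_PCM_STREAM_PLAYBACK, 0)) < 0)
        return err;

    /* Configure 8 channels first; snd_pcm_hw_params() leaves the PCM in the
     * PREPARED state, which is when a chmap can be changed. */
    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, 0);
    snd_pcm_hw_params_set_channels(pcm, hw, 8);
    if ((err = snd_pcm_hw_params(pcm, hw)) < 0)
        goto out;

    map = malloc(sizeof(*map) + 8 * sizeof(map->pos[0]));
    map->channels = 8;
    map->pos[0] = SND_CHMAP_FL;  map->pos[1] = SND_CHMAP_FR;
    map->pos[2] = SND_CHMAP_RL;  map->pos[3] = SND_CHMAP_RR;
    map->pos[4] = SND_CHMAP_FC;  map->pos[5] = SND_CHMAP_LFE;
    map->pos[6] = SND_CHMAP_SL;  map->pos[7] = SND_CHMAP_SR;

    err = snd_pcm_set_chmap(pcm, map);
    free(map);
out:
    snd_pcm_close(pcm);
    return err;
}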
I am using a Circular Microphone Board that has 4 I2S outputs with 8 channels in total, and I am feeding this audio into a BeagleBone AI. Right now I am able to input 2 channels and record (arecord) the audio with the mcasp1_axr0 interface.
However, I want to record all 8 channels now, so I need 4 interfaces. My question is: must these interfaces be mcasp1_axr0, mcasp1_axr1, mcasp1_axr2 and mcasp1_axr3, or can they be, for example, mcasp1_axr0, mcasp1_axr1, mcasp1_axr10 and mcasp1_axr11?
Thanks in advance
I have two channels of analog audio (left and right) connected to an ALSA device, and I let the sound card do the sampling.
The result is one digital stream of interleaved audio. My problem occurs when I try to play it: sometimes the channels are swapped and sometimes they are not. It seems to depend on which channel was sampled first, or on when playback began.
To clarify my situation:
I have three sound cards: cards A and B are sampling analog audio, and I send one digitized audio channel from each to card C over the LAN. So, for example, I send only the left channel from card A to card C and simultaneously the right channel from card B to card C.
On card C, I reassemble those data packets into an interleaved stream: I take the first sample (which is from card A) and then the sample from card B. This way I can play the buffer as interleaved audio, and card C then plays data from this buffer. Assuming the sound card starts by playing samples to the left channel, I should have no problem, but sometimes the channels are swapped and I can't figure out why.
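In code, the reassembly step is essentially this (a sketch; the 16-bit sample type is an assumption):

#include <stdint.h>
#include <stddef.h>

/* Merge one mono stream from card A and one from card B into an interleaved
 * stereo buffer.  Frame i of the output is { A[i], B[i] }: whichever stream
 * is written to slot 0 of each frame ends up on the left channel, so if a
 * packet from B is ever written into slot 0, the channels come out swapped. */
void interleave_stereo(const int16_t *from_a, const int16_t *from_b,
                       int16_t *out, size_t frames)
{
    for (size_t i = 0; i < frames; i++) {
        out[2 * i]     = from_a[i];   /* channel 0: left  */
        out[2 * i + 1] = from_b[i];   /* channel 1: right */
    }
}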
I'm controlling all of this with an ARM processor.
Is there a way I can access ALSA's internal frame buffer, or otherwise specify which sample in the stream will be played first?
This leads to another question: in the WAV format, for example, how does the player know which part of the data belongs to the left channel and which to the right?
WAV is rather easy: channels are stored in the order of the bits set in dwChannelMask (a bitmask listing which speakers are present), from the least significant bit upwards. So if the bitmask is 0x3, bits 0 and 1 are set, and you'll have two audio streams in the WAV: the first is left (bit 0x1) and the second is right (bit 0x2). If the bitmask were 0xB, there would be a third audio stream, a bass (LFE) channel (0x8).
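A small sketch of that mapping (the speaker bits are the standard WAVE_FORMAT_EXTENSIBLE values):

#include <stdint.h>
#include <stdio.h>

/* Channels appear in the file in increasing bit order of the speakers set
 * in dwChannelMask. */
static const char *speaker_names[18] = {
    "FL", "FR", "FC", "LFE", "BL", "BR", "FLC", "FRC", "BC",
    "SL", "SR", "TC", "TFL", "TFC", "TFR", "TBL", "TBC", "TBR"
};

void print_channel_order(uint32_t mask)
{
    int stream = 0;
    for (int bit = 0; bit < 18; bit++) {
        if (mask & (1u << bit))
            printf("stream %d -> %s\n", stream++, speaker_names[bit]);
    }
}

/* print_channel_order(0x3) prints FL then FR;
 * print_channel_order(0xB) adds LFE as the third stream. */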
ALSA is Linux audio, and that's just not as well designed. There's no such thing as the internal ALSA stream buffer.
I'm new to USB development, and I'm quite confused about what data rates are realistic.
I'm trying to develop an external sound card built around an AVR32 processor, which supports USB Full Speed (12 Mb/s). I'll use USB Audio Class 1 to send the audio data to a PC. I need to send 24-bit, 48 kHz, 2-channel audio as IN to the computer, but also 24-bit, 48 kHz, 1-channel audio as OUT from the computer, streaming both ways.
That gives me a data rate of 24 bit * 48 kHz * 3 channels ≈ 3.5 Mb/s, which should be possible using USB Full Speed?
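Working that out per 1 ms full-speed frame (plain arithmetic from the figures above; both payloads fit comfortably under the 1023-byte full-speed isochronous packet limit):

#include <stdio.h>

int main(void)
{
    int samples_per_frame = 48000 / 1000;       /* 48 samples per 1 ms frame */
    int in_bytes  = samples_per_frame * 3 * 2;  /* 288 bytes/frame, 2-ch IN  */
    int out_bytes = samples_per_frame * 3 * 1;  /* 144 bytes/frame, 1-ch OUT */
    long total_bps = (long)(in_bytes + out_bytes) * 8 * 1000;

    printf("IN %d B/frame, OUT %d B/frame, total %ld bit/s\n",
           in_bytes, out_bytes, total_bps);     /* about 3.46 Mbit/s */
    return 0;
}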
I understand that the Audio Class sends data via isochronous transfers, but I'm confused about how many transactions (e.g. IN = 256 bytes) can be made in one frame. According to the USB specification (http://www.usb.org/developers/docs/usb20_docs/#usb20spec, table 5-4), it seems to be possible to send more than one transaction per frame?
Is it possible to send both IN and OUT packets within one frame?
Thanks in advance!