Playing an audio file with more than 2 channels

I have a WAV audio file with 3 channels: 2 for the sound (stereo) and 1 carrying short pulses used to synchronize the recording with an external device (for a scientific experiment; the details aren't important).
The thing is: I have a sound card with 4 audio inputs and outputs, but when I try to play the file I get an error about the number of channels:
sounddevice.PortAudioError: Error opening OutputStream: Invalid number of channels [PaErrorCode -9998]
This occurs with both sounddevice and pygame's play, and also when I added a fourth blank channel.
[Photo of the list of devices connected to the computer]
Thanks!
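For reference, a quick check with sounddevice (my sketch, not from the original post): the -9998 error usually means the stream was opened on a device, often the default one, that reports fewer output channels than requested, and query_devices shows what each device actually supports.

    import sounddevice as sd

    print(sd.query_devices())              # every device with its channel counts

    default_out = sd.query_devices(kind='output')
    print(default_out['name'], default_out['max_output_channels'])

    # If the 4-output card shows up as, say, index 3 (hypothetical), target it:
    # sd.default.device = 3               # or: sd.play(data, fs, device=3)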

Related

How do I record audio using a microphone and play it on a speaker simultaneously, storing only one second of audio, in Linux

I am working on an audio algorithm that needs 256 samples of audio from a microphone; I have to process those 256 samples and play the result on a speaker. I have done it using two wave files that already exist on disk; now I need to do it in real time.
I need a solution for this.
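One minimal sketch of such a loop, assuming the sounddevice library (the post doesn't name one): a full-duplex stream with a 256-sample block size, where the callback is the place to put the actual processing.

    import sounddevice as sd

    BLOCK = 256  # samples per processing block, as in the question

    def callback(indata, outdata, frames, time, status):
        if status:
            print(status)
        # Placeholder "algorithm": pass the input through; replace with real DSP.
        outdata[:] = indata

    # Full-duplex stream: each 256-sample block is processed as it arrives.
    with sd.Stream(samplerate=48000, blocksize=BLOCK, channels=1,
                   callback=callback):
        sd.sleep(10_000)  # run for ten seconds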

Two audio inputs at the same time using a phone

How do I take two different audio inputs at the same time from a mobile phone?
I tried using my phone's built-in mic as the primary mic and my headphones as the secondary mic, but failed to capture two different audio inputs.

Do McASP input pins need to be in order?

I am using a circular microphone board that has 4 I2S outputs with 8 channels in total, and I am feeding this audio into a BeagleBone AI. I am currently able to input 2 channels and record the audio (with arecord) using the mcasp1_axr0 interface.
However, I now want to record all 8 channels, so I need 4 interfaces, and my question is: must these interfaces be mcasp1_axr0, mcasp1_axr1, mcasp1_axr2, and mcasp1_axr3, or can they be, for example, mcasp1_axr0, mcasp1_axr1, mcasp1_axr10, and mcasp1_axr11?
Thanks in advance

Recording composite video to an audio file

I'm trying to record a raw composite video signal to an audio file by connecting the yellow RCA cable from a player to the mic input on my PC. The plan is to later plug the cable into my audio output, connect it to the video input of an old CRT TV, and play the signal back so I can view the original video.
But that didn't work, and I can only see random white lines.
Is that due to frequency limits in the audio format or in the onboard audio chip, or do the analog-to-digital and digital-to-analog conversions during recording and playback damage the signal?
Video signals operate in ranges above 1 MHz, whereas high-quality audio signals max out at ~96 kHz. Video signals would likely need to be encoded in a format that an audio recorder could pick up, then decoded back into a video signal before a television could render them properly. This answer on the Sound Design exchange may be of interest to you.
A very high-bitrate uncompressed audio file may be able to store a low-fidelity video signal: a black-and-white signal could be stored at sub-VHS quality, but it could at least be a resolvable image. Recording component video may also be possible, although syncing the separate tracks would be hard.
I tried it. The sampling rate was 192 kHz, which can capture frequencies up to 192/2 = 96 kHz.
I succeeded in capturing part of the luminance signal, but the color signal sits at a much higher frequency, so it can't be recorded with a sound card, and the video is very distorted.
However, we might be able to capture it more clearly using a sound card with a higher sampling rate.
https://m.youtube.com/watch?v=-Q_YraNAGhw&feature=youtu.be
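As a rough illustration of what this experiment does (my sketch, not the poster's code, with a hypothetical "capture.wav"): reshape the sound-card capture into 64 µs scan lines and map amplitude to brightness. At 192 kHz one PAL line is only about 12 samples wide, and nothing here locks onto the sync pulses, which is why the recovered picture is so coarse and distorted.

    import soundfile as sf
    import matplotlib.pyplot as plt

    data, rate = sf.read("capture.wav")      # e.g. rate == 192000
    if data.ndim > 1:
        data = data[:, 0]                    # keep one channel

    samples_per_line = int(rate * 64e-6)     # 64 us PAL line; ~12 samples at 192 kHz
    n_lines = len(data) // samples_per_line
    frame = data[:n_lines * samples_per_line].reshape(n_lines, samples_per_line)

    # Amplitude maps roughly to luminance; sync pulses appear as dark stripes.
    plt.imshow(frame, cmap="gray", aspect="auto")
    plt.show()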

ALSA - Which channel is sampled first

I have two channels of analog audio (left and right) connected to an ALSA device, and I let the sound card do the sampling.
The result is one digital stream of interleaved audio. My problem occurs when I try to play it back: sometimes the channels are swapped and sometimes they are not. It seems to depend on which channel was sampled first, or on when the playback began.
To clarify my situation:
I have three sound cards: cards A and B sample analog audio, and I send one digitized audio channel from each of them to card C over LAN. So, for example, I send only the left channel from card A to card C and, simultaneously, the right channel from card B to card C.
On card C, I reassemble those data packets into an interleaved stream: I take the first sample (which is from card A) and then a sample from card B, so I can play the buffer as interleaved audio. Card C then plays the data from this buffer. Provided the sound card starts playback with the left channel, I should have no problem, but sometimes the channels end up swapped and I can't figure out why.
I'm controlling all of this with an ARM processor.
Is there a way to access ALSA's internal frame buffer, or to specify which sample in the stream is played first?
This leads to another question: how does a player know, for example in the WAV format, which part of the data belongs to the left channel and which to the right?
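For illustration, the reassembly step described above could look like this in NumPy (my sketch, assuming equal-length buffers from the two cards):

    import numpy as np

    a = np.arange(5, dtype=np.int16)        # samples from card A (intended left)
    b = np.arange(5, dtype=np.int16) + 100  # samples from card B (intended right)

    # Frame i is (a[i], b[i]); in an interleaved stereo stream the first
    # sample of every frame belongs to the left channel.
    stereo = np.column_stack((a, b))
    print(stereo)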
WAV is rather easy: channels are stored in the LSB-first order in which they appear in dwChannelMask (a bitmask listing which speakers are present). So if the bitmask is 0x3, bits 0 and 1 are set, and you'll have two audio streams in the WAV: the first is the left channel (bit 0x1) and the second is the right (bit 0x2). If the bitmask were 0xB, there would be a third stream, a bass (low-frequency) channel (0x8).
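A short sketch of reading that mask (mine, not from the answer; "example.wav" is a hypothetical file): parse the fmt chunk and, for a WAVE_FORMAT_EXTENSIBLE file, list the channels in the LSB-first order of dwChannelMask.

    import struct

    SPEAKERS = {0x1: "front left", 0x2: "front right",
                0x4: "front center", 0x8: "low frequency (bass)"}

    def channel_order(path):
        with open(path, "rb") as f:
            assert f.read(4) == b"RIFF"
            f.read(4)                              # RIFF chunk size
            assert f.read(4) == b"WAVE"
            while True:                            # walk the chunks to find "fmt "
                cid, size = struct.unpack("<4sI", f.read(8))
                if cid == b"fmt ":
                    fmt = f.read(size)
                    break
                f.seek(size + (size & 1), 1)       # skip chunk (word-aligned)
        tag, channels = struct.unpack_from("<HH", fmt, 0)
        if tag == 0xFFFE and size >= 40:           # WAVE_FORMAT_EXTENSIBLE
            mask = struct.unpack_from("<I", fmt, 20)[0]  # dwChannelMask
            return [SPEAKERS.get(1 << b, hex(1 << b))
                    for b in range(32) if mask & (1 << b)][:channels]
        return ["plain PCM: order is implicit (stereo = left, then right)"]

    print(channel_order("example.wav"))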
ALSA is Linux audio, and it's just not as well designed. There's no such thing as an internal ALSA stream buffer.
