How is it that a single input to a headphone can be split into separate channels? How does this splitting happen? To be more specific, how is surround sound created by headphones from the same single input?
If you look at the TRS (Tip, Ring, Sleeve) connector jack on the end of your headphone cable, you'll see it is made up of separate contact sections (the tip, the ring, and the sleeve) divided by insulating bands.
The input will normally be a stereo signal, with the left and right channels carried separately.
From memory, I think the tip carries the left channel and the ring the right, but that doesn't matter so much with regard to your question.
As for surround sound, any "surround sound" from headphones is simulated as part of the stereo image.
"Surround sound" is usually achieved via a surrounding array of speakers, rather than via headphones.
I should also add that the above processes are analogue and have nothing whatsoever to do with bytes; any digital signal sent from your computer is converted to analogue before it reaches the headphone socket.
I'm trying to record a raw composite video signal to an audio file by connecting the yellow RCA cable from a player to the mic input on my PC. The idea is that I can then plug the cable into my audio output, connect it to the video input of an old CRT TV, and play the signal back to the TV so that I can view the original video.
But that didn't work, and I can only see random white lines.
Is that due to frequency limits in the audio format or in the onboard audio chip, or is the analog-to-digital conversion (and the conversion back when playing) damaging the signal?
Video signals operate in ranges above 1 MHz, whereas high-quality audio signals max out at ~96 kHz. Video signals would likely need to be encoded in a format that an audio recorder could pick up, then decoded back into a video signal before a television could render it properly. This answer on the Sound Design exchange may be of interest to you.
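To make the gap concrete, here is a back-of-envelope calculation (using nominal NTSC line timing, not measured values) of how few samples a 192 kHz sound card gets per scanline:

    #include <stdio.h>

    /* Back-of-envelope: how much of an NTSC scanline can a sound card
     * resolve? The figures below are nominal, not measured. */
    int main(void) {
        double line_duration_s = 63.5e-6;  /* one NTSC scanline       */
        double sample_rate_hz  = 192000.0; /* a high-end sound card   */

        double samples_per_line = line_duration_s * sample_rate_hz;
        printf("samples per scanline: %.1f\n", samples_per_line);
        /* ~12 samples per line, i.e. roughly a dozen "pixels" of
         * horizontal resolution, which is why only coarse luminance
         * detail survives. */
        return 0;
    }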
A very high bitrate uncompressed audio file may be able to store a low-fidelity video signal. A black-and-white signal could be stored at sub-VHS quality, but it would at least be a resolvable image. Recording component video may also be possible, although syncing the separate tracks would be hard.
I tried it.
The sampling rate is 192 kHz, which can record frequencies up to 192/2 = 96 kHz.
I succeeded in capturing part of the luminance signal.
The color signal is at a much higher frequency, so we can't record it using a sound card.
The video is very distorted.
However, we may be able to capture it more clearly using a sound card with a higher sampling rate.
https://m.youtube.com/watch?v=-Q_YraNAGhw&feature=youtu.be
As far as I understand, BLE uses two 1-bit fields for applying sequence numbers to packets (SN, NESN). From my (admittedly basic) knowledge of (wireless) communication, a 1-bit sequence number is perfectly fine as long as a sender does not continue sending data until the last message is acknowledged by the receiver.
Because of that, it is trivial to understand how BLE works with a one-packet-per-interval scheme. However, BLE allows multiple packets in a single connection interval. So far I couldn't find any information on how this scenario is handled without allocating more bits for larger sequence numbers.
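To show where I'm coming from, this is my own sketch of the 1-bit SN/NESN bookkeeping as I currently picture it (the names and structure are mine, not from the spec, so it may well be wrong):

    #include <stdbool.h>

    /* My mental model of the per-link SN/NESN bookkeeping. */
    typedef struct {
        bool sn;    /* sequence number of the packet we transmit next */
        bool nesn;  /* next expected sequence number from the peer    */
    } ll_state;

    /* Called for an incoming data packet carrying header bits
     * (rx_sn, rx_nesn). */
    void on_receive(ll_state *s, bool rx_sn, bool rx_nesn,
                    bool *is_new_data, bool *peer_acked_us) {
        /* The packet is new data if its SN matches what we expect. */
        *is_new_data = (rx_sn == s->nesn);
        if (*is_new_data)
            s->nesn = !s->nesn;   /* acknowledge by advancing NESN */

        /* The peer acknowledged our last packet if its NESN has
         * moved past our current SN. */
        *peer_acked_us = (rx_nesn != s->sn);
        if (*peer_acked_us)
            s->sn = !s->sn;       /* move on to the next packet    */
        /* otherwise: retransmit the previous packet, same SN      */
    }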
Any pointers on where I'm going wrong, or where I can read up on this, would be appreciated.
I have two channels of analog audio (left and right) connected to an ALSA device, and I let the sound card do the sampling.
The result is one digital stream of interleaved audio. My problem occurs when I try to play it back: sometimes the channels are swapped and sometimes they are not. It seems to depend on which channel was sampled first, or on the time at which playback began.
To clarify my situation:
I have three sound cards: cards A and B are sampling analog audio, and I send one digitized audio channel from each of them to card C over LAN. So, for example, I send only the left channel from card A to card C and simultaneously only the right channel from card B to card C.
On card C, I reassemble those data packets into an interleaved stream: I take the first sample (which is from card A) and then a sample from card B, so that I can play the buffer as interleaved audio. Card C then plays data from this buffer. Given that the sound card starts by playing samples to the left channel, I should have no problem. But sometimes it swaps the channels, and I can't figure out why.
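In code, the reassembly on card C is roughly this (simplified; the networking and buffering around it are omitted):

    #include <stdint.h>
    #include <stddef.h>

    /* Take one sample from card A's packet and one from card B's,
     * alternately, to build an interleaved stereo buffer. */
    void interleave(const int16_t *from_a, const int16_t *from_b,
                    int16_t *out, size_t frames) {
        for (size_t i = 0; i < frames; i++) {
            out[2 * i]     = from_a[i]; /* intended as left  */
            out[2 * i + 1] = from_b[i]; /* intended as right */
        }
    }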
I'm controlling all of this with an ARM processor.
Is there a way I can access ALSA's internal frame buffer, or otherwise determine which part of the stream will be played first?
This leads to another question: in the WAV format, for example, how does the player know which part of the data is for the left channel and which for the right?
WAV is rather easy: channels are stored in LSB order as they appear in dwChannelMask (a bitmask listing the speakers present). So if the bitmask is 0x3, bits 0 and 1 are set, and you'll have two audio streams in the WAV: the first is left (bitmask 0x1) and the second is right (bitmask 0x2). If the bitmask were 0xB, there would be a third audio stream, a bass channel (0x8).
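A sketch of that lookup (the field name follows WAVEFORMATEXTENSIBLE; the abbreviated speaker table is mine):

    #include <stdint.h>
    #include <stdio.h>

    /* Channels appear in the stream ordered from the lowest set bit
     * of dwChannelMask upward. */
    static const char *speaker_names[] = {
        "front left", "front right", "front center",
        "low frequency (bass)", "back left", "back right",
        /* further positions omitted */
    };

    void print_channel_order(uint32_t channel_mask) {
        int stream_index = 0;
        for (int bit = 0; bit < 6; bit++)
            if (channel_mask & (1u << bit))
                printf("stream %d -> %s\n", stream_index++,
                       speaker_names[bit]);
    }

    int main(void) {
        print_channel_order(0x3); /* stereo: left, then right   */
        print_channel_order(0xB); /* left, right, then the bass */
        return 0;
    }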
ALSA is Linux audio, and it's just not as well designed; there is no such thing as an internal ALSA stream buffer you could inspect.
I am a bit stuck: how can I make my Arduino record into .wav files?
The Arduino is connected to a microphone, and I am using the Arduino ADC.
Any ideas? Will I be able to play the files back on my PC?
Many questions cross my mind:
1- Is this possible using an Arduino Uno?
2- Is this possible using just a microphone connected to the Arduino ADC?
3- If yes, how can I get the WAV format?
The idea is like this:
microphone --> Uno ADC --> Arduino (library making the WAV sound) --> storing the data to an SD card connected via SPI (or maybe connecting a Raspberry Pi as a storage device)
Also another question:
4- Do I need an amplifier, given that the analog output from the microphone is very weak, so the ADC can't detect the variation?
In another blog I had seen that I should connect the microphone to a level shifter, because the analog output is AC, so I have to map the negative peak to 0, the zero point to 512, and the positive peak to 1023 (for a 10-bit ADC). (Really, I'm not sure about this part.)
Doing some research I found this library, https://github.com/TMRh20/TMRpcm/wiki/Advanced-Features#recording-audio , which is supposed to do the job, I mean making a WAV file from the analog input.
So any help would be appreciated.
Thanks in advance,
Salah Laaroussi
Yes, although a bit complex, it is very possible to do this with an Uno.
The biggest hurdles to overcome are the limited amount of RAM and the clock speed. You will have to set up twin buffers to handle writing to the SD card; a rough sketch of the idea follows. Make sure the card has a high enough write speed, or the entire program will come to a screeching halt as you run out of memory.
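A minimal sketch of the twin-buffer scheme (the buffer size and the SD write routine are illustrative placeholders, not a specific library's API):

    #include <stdint.h>
    #include <stdbool.h>

    #define BUF_SIZE 256   /* small, to fit the Uno's 2 KB of RAM */

    /* Twin buffers: the ADC interrupt fills one while the main
     * loop flushes the other to the SD card. */
    volatile uint8_t  buf[2][BUF_SIZE];
    volatile uint16_t pos = 0;
    volatile uint8_t  filling = 0;  /* buffer the ISR writes into */
    volatile bool     ready[2] = { false, false };

    void adc_isr(uint8_t sample) {  /* runs at the sample rate */
        buf[filling][pos++] = sample;
        if (pos == BUF_SIZE) {      /* buffer full: swap */
            ready[filling] = true;
            filling ^= 1;
            pos = 0;
        }
    }

    /* Hypothetical SD block-write routine. */
    extern void sd_write_block(const uint8_t *data, uint16_t len);

    void loop_once(void) {          /* called from the main loop */
        for (uint8_t i = 0; i < 2; i++)
            if (ready[i]) {
                sd_write_block((const uint8_t *)buf[i], BUF_SIZE);
                ready[i] = false;   /* too slow here = lost samples */
            }
    }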
APC Mag has a great article detailing the circuit and code:
http://apcmag.com/arduino-projects-digital-audio-recorder.htm/
There are many things you haven't prepared yet:
the output of the microphone still requires a biasing circuit (assuming you know about electronics: e.g. a resistor + a capacitor);
the output of the microphone is still very weak (on the order of millivolts), which the Arduino is incapable of capturing, so you need a pre-amplifier;
the design of the pre-amplifier should also include a DC offset, which lifts the output of the microphone entirely above 0 V DC and into the range of the Arduino ADC; otherwise the Arduino will capture only the part of the waveform above 0 V DC (see the sketch after this list).
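To make the last point concrete, here is the usual conversion once the signal is biased to mid-scale, assuming a 10-bit ADC whose reading sits near 512 at silence:

    #include <stdint.h>

    /* Convert a 10-bit ADC reading (0..1023, biased so silence sits
     * near 512) into a signed 16-bit PCM sample for a WAV file. */
    int16_t adc_to_pcm(uint16_t adc_reading) {
        int16_t centered = (int16_t)adc_reading - 512; /* remove bias */
        return (int16_t)(centered * 64); /* scale 10-bit to 16-bit   */
    }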
I am trying to record what is currently being played out to the speaker, using the following ALSA APIs:
snd_pcm_mmap_writei()
snd_pcm_mmap_readi()
Both functions are called one after the other in the same thread. The writei() function returns quickly (I believe it returns once playback buffer space is available), while the readi() does not return until the designated number of samples has been captured. But the samples captured are not what has just been played out. I am guessing that ALSA is not in duplex mode, i.e., it has to finish playback first and then start to record, which records nothing meaningful, just clicks. The speaker still plays the sound correctly.
All HW/SW parameters are set up correctly. If I do audio capture only, I get a good sound wave.
The PCM handles are opened in normal mode (not non-blocking, not async).
Does anybody have suggestions on how to make this work?
You do not need to use the mmap functions; the normal writei/readi calls suffice.
To handle two PCM streams at the same time, run them in separate threads, or use non-blocking mode so that the same event loop can handle both devices.
You need to fill the playback buffer before the data is played, and capture data can be read only after the capture buffer has been filled, so the overall latency is the playback buffer size plus the capture period size plus any hardware delays and sound propagation delays.
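For illustration, a minimal blocking loop driving two PCM handles at once might look like this (the device name, rate, and buffer sizes are placeholders, and error handling is trimmed):

    #include <alsa/asoundlib.h>

    int main(void) {
        snd_pcm_t *cap, *play;
        short buf[2 * 1024];  /* 1024 stereo S16 frames */

        snd_pcm_open(&cap,  "default", SND_PCM_STREAM_CAPTURE,  0);
        snd_pcm_open(&play, "default", SND_PCM_STREAM_PLAYBACK, 0);

        /* S16, interleaved, 2 channels, 48 kHz, resampling allowed,
         * 500 ms of buffering */
        snd_pcm_set_params(cap,  SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 48000, 1, 500000);
        snd_pcm_set_params(play, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 48000, 1, 500000);

        for (;;) {
            snd_pcm_sframes_t n = snd_pcm_readi(cap, buf, 1024);
            if (n < 0)
                n = snd_pcm_recover(cap, n, 0); /* handle xruns     */
            if (n > 0)
                snd_pcm_writei(play, buf, n);   /* blocks as needed */
        }
    }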