I have a Raspberry Pi with an I2S MEMS microphone attached. I'm recording audio from it using SoX, and I'm trying to increase my ALSA buffer_size.
My ALSA buffer_size is currently 65536, but I want to increase this. Is there any theoretical limit to the buffer size? How large can it be?
Thanks!
The theoretical limit is 2^32 frames. But the practical limit is whatever your hardware actually supports.
To read the current maximum buffer size, call snd_pcm_hw_params_get_buffer_size_max() ("current" because it might be constrained by other hardware parameters, such as the sample rate or number of channels).
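As a minimal sketch (in C; "default" is a placeholder device name and most error handling is omitted), the query looks like this:

```c
#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *params;
    snd_pcm_uframes_t max_frames;

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0)
        return 1;

    snd_pcm_hw_params_alloca(&params);
    snd_pcm_hw_params_any(pcm, params);   /* start from the full configuration space */

    /* Constrain rate/channels/format first if you want the maximum
       under those constraints rather than the absolute maximum. */
    snd_pcm_hw_params_get_buffer_size_max(params, &max_frames);
    printf("maximum buffer size: %lu frames\n", (unsigned long)max_frames);

    snd_pcm_close(pcm);
    return 0;
}
```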
I am looking at a tutorial on jitter buffers. It has a diagram (not reproduced here) in which 'd' denotes the buffering delay applied to incoming packets.
My understanding is that in the case of an adaptive jitter buffer, 'd' may vary over time. My question is: since the system keeps asking for audio data (as in VoIP), will the amount of data the adaptive jitter buffer gives to the system vary over time?
I have tried to send 12-bit audio, to be listened to in real time, through an HC-05 SPP Bluetooth module hooked up to an Arduino and a DAC, using a Python RFCOMM socket. I have since learned that the Serial Port Profile is not great for this purpose due to its low bandwidth. I figured I could send the data in advance and then play it out through the DAC, but I doubt an Arduino could hold an array the size of a WAV file, and maybe not even an MP3 file; in any case, that would defeat the purpose of controlling the audio (play, pause, rewind, etc.) from my computer.

Would it be more realistic and worthwhile to use an A2DP enabled bluetooth module? Or is it still possible to listen to acceptable quality 12-16 bit audio in real time with SPP?

I have tried lower-bit-depth songs, adjusted the baud rates of the Arduino and HC-05 serial ports, and adjusted the magnitude of the values output by the DAC to the audio port, and I still get crackly audio. It seems the problem comes down to the low bitrate transfer speed of SPP, or am I wrong?
Is it realistic to stream 12-16 bit audio through SPP bluetooth in realtime?
Sure, at some awfully slow sample rate <= 8 kHz. You'd be better off sending 8-bit audio at a higher sample rate.
Would it be more realistic and worthwhile to use an A2DP enabled bluetooth module?
Yes, absolutely, without question. That's what it's designed for, as I mentioned in your other question.
Or is it still possible to listen to acceptable quality 12-16 bit audio in real time with SPP?
Acceptable is subjective. If it's just voice, you can get away with it. If you want reasonable audio quality for music, then no: almost universally, it's not acceptable.
It seems the problem comes down to the low bitrate transfer speed of SPP, or am I wrong?
Without any code to inspect and debug, it's impossible to say what the specific problem you're seeing is. In any case, the low bandwidth alone will prevent good-quality audio.
If you must continue to use SPP and simple codecs like PCM, at least use differential PCM to save a bit more bandwidth.
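For illustration, a bare-bones DPCM encoder sketch in C (the 8-bit delta width and the clamping scheme are my own assumptions, not any standard codec):

```c
#include <stdint.h>

/* Encode n 16-bit samples as n 8-bit deltas. The decoder reconstructs
   each sample as prev + delta, so both sides must clamp identically. */
void dpcm_encode(const int16_t *in, int8_t *out, int n)
{
    int16_t prev = 0;
    for (int i = 0; i < n; i++) {
        int delta = in[i] - prev;
        if (delta > 127)  delta = 127;    /* clamp to the 8-bit delta range */
        if (delta < -128) delta = -128;
        out[i] = (int8_t)delta;
        prev += (int8_t)delta;            /* track what the decoder will see */
    }
}
```

This halves the bandwidth relative to raw 16-bit PCM, at the cost of slope overload on fast transients.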
I am using Python sockets on my PC to connect to a Bluetooth HC-05 module. I want to send music to the HC-05 by converting a WAV file to a string array, which will later be converted to integers ranging from 0-65535 on an Arduino. The Arduino and HC-05 communicate via serial at 9600 baud. Those ints will then be passed to a DAC via I2C. I am wondering if there could be a memory issue when sending an enormous number of strings from my PC. Is it possible the original quality of the sound will be distorted as a result of the different rates of sending/receiving data across the devices? Or will the sound signal just be delayed at the DAC?
The Arduino and HC-05 communicate via serial at 9600 baud.
This is going to be too low to be usable for audio.
9600 baud with common 8N1 framing gives you 7680 bits of data per second (each 8-bit data byte costs 10 bits on the wire). At 16 bits per sample, you're looking at a sample rate of 480 Hz, which is far too low for intelligible audio. It's barely even high enough to reproduce sound at all.
You need to:
Increase the baud rate. Ideally you'll want at least 57600 baud, for 46 kbit/sec of data. If higher baud rates are available, use them.
Use fewer bits for each sample. Using 4 bits per sample at 57.6 kbaud will give you a respectable sample rate of 11.5 kHz. The audio will sound a little tinny at 4 bits, but it'll be intelligible; see the packing sketch below.
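A rough sketch in C of the PC-side 4-bit reduction (the nibble-packing layout here is just an illustration, not a standard format; the receiver must unpack in the same order):

```c
#include <stdint.h>

/* Keep the top 4 bits of each 16-bit sample and pack two samples per
   byte, halving the bytes on the wire vs. 8-bit samples. n must be even. */
void pack4(const int16_t *in, uint8_t *out, int n)
{
    for (int i = 0; i < n; i += 2) {
        uint8_t hi = (uint16_t)(in[i]     + 32768) >> 12;  /* offset to unsigned, keep top nibble */
        uint8_t lo = (uint16_t)(in[i + 1] + 32768) >> 12;
        out[i / 2] = (uint8_t)((hi << 4) | lo);
    }
}
```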
I want to capture audio on Linux with low latency in a program I'm writing. I've run some experiments using the ALSA API, using snd_pcm_readi() to capture sound, then immediately using snd_pcm_writei() to play it back. I've tried playing with the number of frames captured and the buffer size, but I don't seem to be able to get the latency down to less than a second or so.

Am I better off using PulseAudio or JACK? Can those be used to play the captured audio?
To reduce capture latency, reduce the period size of the capture device.
To reduce playback latency, reduce the buffer size of the playback device.
JACK can play the captured audio (just connect the input ports to the output ports), but you still have to configure its period/buffer sizes.
Also see Relation between period size of speaker and mic and Recording from ALSA - understanding memory mapping.
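As a minimal sketch (in C, assuming a 48 kHz, 16-bit stereo stream; the sizes are illustrative and the driver may round them), requesting a small period on capture or a small buffer on playback looks like this:

```c
#include <alsa/asoundlib.h>

/* Request low-latency hw_params on an already-opened PCM handle. */
static int set_low_latency(snd_pcm_t *pcm, snd_pcm_stream_t stream)
{
    snd_pcm_hw_params_t *hw;
    snd_pcm_uframes_t period = 64;    /* ~1.3 ms at 48 kHz */
    snd_pcm_uframes_t buffer = 128;   /* two periods */
    unsigned int rate = 48000;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(pcm, hw, 2);
    snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, 0);

    if (stream == SND_PCM_STREAM_CAPTURE)
        snd_pcm_hw_params_set_period_size_near(pcm, hw, &period, 0);
    else
        snd_pcm_hw_params_set_buffer_size_near(pcm, hw, &buffer);

    return snd_pcm_hw_params(pcm, hw);   /* the driver may round the values */
}
```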
I've been doing some work on low-latency audio programming. My experience is: first, your capture buffer should be small, on the order of a 10 ms period (for example, a 512-frame period at a 48000 Hz sample rate is about 10.7 ms).

Then you should configure your output device's start_threshold to at least 2 × the period size (1 × the period size if you don't do much processing of the recorded data).

For the record device, as CL. said, a relatively small period size is better, but not so small that you cause too many IRQs.

You can also switch your process to the SCHED_FIFO scheduling policy.

Then, hopefully, you will get about 20 ms of total latency.
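A sketch of those two adjustments in C (the two-period threshold and the priority value of 50 are illustrative assumptions, not fixed rules):

```c
#include <sched.h>
#include <alsa/asoundlib.h>

/* Apply the start_threshold and SCHED_FIFO tweaks described above.
   period_size is whatever your hw_params setup negotiated. */
static void tune_for_latency(snd_pcm_t *playback, snd_pcm_uframes_t period_size)
{
    snd_pcm_sw_params_t *sw;
    snd_pcm_sw_params_alloca(&sw);
    snd_pcm_sw_params_current(playback, sw);

    /* Don't start the DAC until two periods are queued, so the first
       period can't underrun while the next one is still being captured. */
    snd_pcm_sw_params_set_start_threshold(playback, sw, 2 * period_size);
    snd_pcm_sw_params(playback, sw);

    /* Real-time FIFO scheduling; requires root or an rtprio rlimit. */
    struct sched_param sp = { .sched_priority = 50 };
    sched_setscheduler(0, SCHED_FIFO, &sp);
}
```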
I believe you should first ensure that you are running a Linux kernel which actually allows you to achieve low latency in typical operation.
There are several kernel compile-time configuration options which you might look into:
CONFIG_HZ_1000
CONFIG_IRQ_FORCED_THREADING
CONFIG_PREEMPT
CONFIG_PREEMPT_RT_FULL (available only with the RT patch)
Apart from that, there are more things you can do to optimize your audio latency on Linux. Some starting reference points can be found here:
http://wiki.linuxaudio.org/wiki/real_time_info
I want to modulate digital data into audio, transmit it through any audio channel, and demodulate it at the destination back into data. To do this I hope to use a computer sound card and a software modem, without any hardware implementation. On the internet, I found that this can be done with a technique called Audio Frequency-Shift Keying (AFSK). I want to know whether I can obtain a bit rate higher than 1200 bps from AFSK, and if not, what the reason behind this limitation is.
Is there any technique more efficient than AFSK for this purpose?
The most common currently-used form of AFSK is the Bell202 modem at 1200 baud. There are a few other standards which also use 1200 baud, and some that run at less than 1200 bits per second, but none that I know of that run faster.
However, as far as I know, there's no reason you couldn't write a software modem to transmit and receive at a higher baud rate. Bell202 uses bit stuffing (allowing the data stream to use the same tone no more than 5 bits in a row) to help keep the transmitter and receiver from falling out of sync with each other, so a higher baud rate might require bit stuffing at a lower threshold (every 4 or 3 bits).
Another consideration is that your sound card's sample rate should be an exact multiple of the baud rate you choose. This is one of the reasons 1200 baud is so common: 48000 Hz, a near-universal sample rate for audio hardware, is an exact multiple of 1200.
So 1200 baud isn't a limit. It's just a standard.
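To make that concrete, here's a minimal Bell202-style modulator sketch in C (the amplitude and sample format are simplified assumptions; a real modem also needs bit stuffing, framing, and a demodulator):

```c
#include <math.h>
#include <stdint.h>

#define SAMPLE_RATE     48000
#define BAUD            1200
#define SAMPLES_PER_BIT (SAMPLE_RATE / BAUD)   /* exactly 40 */

/* Emit SAMPLES_PER_BIT sine samples per input bit: 1200 Hz for a mark (1),
   2200 Hz for a space (0). The phase carries across bit boundaries so the
   frequency switch is continuous and doesn't click. */
void afsk_modulate(const uint8_t *bits, int nbits, int16_t *out)
{
    double phase = 0.0;
    int k = 0;
    for (int i = 0; i < nbits; i++) {
        double freq = bits[i] ? 1200.0 : 2200.0;
        double step = 2.0 * M_PI * freq / SAMPLE_RATE;
        for (int j = 0; j < SAMPLES_PER_BIT; j++) {
            out[k++] = (int16_t)(32000.0 * sin(phase));
            phase += step;
        }
    }
}
```

Note how 48000 / 1200 divides evenly: each bit occupies exactly 40 samples, which keeps the bit clock aligned with the sample clock.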