How to quantize PPQ time to a position in the audio buffer

I'm currently working on a sound sequencer and decided to use PPQN as the time grid. Now I have this problem:
Say the tempo is 120 BPM, PPQ is 96 (default values) and the sample rate is 44100 Hz. That gives 96 × 120 / 60 = 192 pulses per second, so one pulse is 1/192 s ≈ 5.208 ms, which is 44100 / 192 = 229.6875 samples. So the position in the buffer will be fractional. I can truncate the fractional part, so that some pulses end up one sample longer than others, or interpolate it somehow.
Any ideas, or does somebody know the proper way?
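One common approach (a sketch, not taken from any particular sequencer; the constants are the question's values and the function name is only illustrative) is to keep the exact fractional samples-per-pulse value and truncate only when emitting each pulse position, so rounding error never accumulates:

```python
# Sketch: schedule PPQN pulses at exact fractional sample positions,
# truncating to an integer only at the last step.
SAMPLE_RATE = 44100.0  # Hz
BPM = 120.0
PPQ = 96

pulses_per_second = PPQ * BPM / 60.0                 # 192.0
samples_per_pulse = SAMPLE_RATE / pulses_per_second  # 229.6875

def pulse_sample_positions(n_pulses):
    """Integer buffer positions of the first n_pulses pulses."""
    positions = []
    exact = 0.0
    for _ in range(n_pulses):
        positions.append(int(exact))  # truncate only for output
        exact += samples_per_pulse    # keep the exact running total
    return positions

print(pulse_sample_positions(4))  # [0, 229, 459, 689]
```

Consecutive pulses come out 229 or 230 samples apart, but the grid never drifts: pulse 192 lands exactly on sample 44100, i.e. one second in.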

Related

How do I find sampleCount knowing the length of an audio file and the sampleRate?

I have been looking for a long time for how to find sampleCount, but there is no answer. Is there an algorithm or formula for the calculation? The known values: the duration is 850 ms, the file size is 37 KB, the resolution of the WAV file, and a sampleRate of 48000. As a check, the sampleCount should come out to 40681, as it does in the file I have. I need this so that I can calculate sampleCount for other audio files. I am waiting for your help.
Update: I found it and get 40800. I multiplied the rate by the time in seconds.
Yes, the sample count is equal to the sample rate, multiplied by the duration.
So for an audio file that is exactly 850 milliseconds, at 48 kHz sample rate:
850 * 48000 = 40800 samples
Now, with MP3s you have to be careful. There is some padding at the beginning of the file for cleanly initializing the decoder, and the amount of padding can vary based on the encoder and its configuration. (You can read all about the troubles this has caused on the Wikipedia page for "gapless playback".) Additionally, your MP3 duration will be determined by MP3 frame boundaries, not arbitrary PCM boundaries... assuming your decoder/player does not support gapless playback.
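The duration/sample-count relationship from the answer can be sketched as (the function name is illustrative):

```python
def sample_count(duration_seconds, sample_rate_hz):
    """Total PCM samples per channel = duration * sample rate."""
    return round(duration_seconds * sample_rate_hz)

print(sample_count(0.850, 48000))  # 40800, as in the answer above
```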

How are samples aligned in an audio file?

I'm trying to better understand how samples are aligned in the audio file.
Let's say we have a 2s audio file with sampling rate = 3.
I think there are three possible ways to align those samples. Looking at the picture below, can you tell me which one is correct?
Also, is this standard for all audio files, or do different formats have different rules?
Cheers!
The sampling rate in audio tells you how many samples there are in one second; the unit is hertz. Strictly speaking, the correct answer would be (1), as you have 3 samples within one second. Assuming there's no latency, PCM and other formats place the first sample at time 0. The next "cycle" (the next second) also starts at zero, on the same principle as a clock.
To get the total length of the audio (the follow-up question in the comment), you simply divide the number of samples by the sample rate. Here is an example from a 30 s WAV using soxi, one of the canonical tools used in the community for sound manipulation:
Input File : 'book_00396_chp_0024_reader_11416_5_door_Freesound_validated_380721_0-door_Freesound_validated_381380_0-9IfN8dUgGaQ_snr10_fileid_1138.wav'
Channels : 1
Sample Rate : 16000
Precision : 16-bit
Duration : 00:00:30.00 = 480000 samples ~ 2250 CDDA sectors
File Size : 960k
Bit Rate : 256k
Sample Encoding: 16-bit Signed Integer PCM
480000 samples / (16000 samples / second) = 30 seconds exactly. Citing the manual, the duration is "Equivalent to number of samples divided by the sample-rate."
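The same calculation as a small sketch (the function name is illustrative):

```python
def duration_seconds(n_samples, sample_rate_hz):
    """Duration = number of samples / sample rate (per channel)."""
    return n_samples / sample_rate_hz

# The soxi example: 480000 samples at 16000 Hz
print(duration_seconds(480000, 16000))  # 30.0
```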

What is the bit rate?

I am new to audio programming, but I am wondering about the formula for bit rate.
According to the wiki https://en.wikipedia.org/wiki/Bit_rate#Audio,
bit rate = sample rate × bit depth × channels
where
the sample rate is the number of samples (or snapshots taken) per second obtained by a digital audio device, and
the bit depth is the number of bits of information in each sample.
So why does bit rate = sample rate × bit depth × channels?
From my perspective, if bit depth = 2 bits and sample rate = 3 Hz,
then I can transfer 6 bits of data in 1 second.
For example:
Sample data = 00 // at 1/3 second
Sample data = 01 // at 2/3 second
Sample data = 10 // at 3/3 second
So I transfer 000110 in 1 second. Is that correct logic?
Bit rate is the expected number of bits per interval (e.g. per second).
Sound frequencies are measured in hertz, where 1 hertz = 1 cycle per second. So to get the full sound data that represents 1 second of audio, you calculate how many bits need to be sent (or, for media players, they check the bit rate in the file format's header so they can read and play back correctly).
Why are channels involved (isn't sample rate × bit depth enough)?
In digital audio, samples are sent for each "ear" (the left/right channels). There will always be double the number of samples in stereo sound versus mono. Usually there is a flag to specify whether the sound is stereo or mono.
Logic example (ignoring bit depth, assuming 1 bit per sample):
Speech saying "hello" is recorded at 200 samples/sec but delivered at a rate of 100 samples/sec. What happens?
If the stereo flag is set, each ear gets 100 samples per second (the correct total of 200 is played).
If it is treated as mono, the speech will sound slowed to half speed: only 100 samples are played per second at the expected rate, but a full second was recorded at 200 samples/sec, so you get half of "hello" in one second and the other half in the next (i.e. slowed speech).
Mismatches like these give you the slow/double-speed adventures of the "new to audio programming" experience. The fix is either setting the channel count or setting the bit rate correctly. Good luck.
The 'sample rate' is the rate at which each channel is sampled.
So 'sample rate X bit depth' will give you the bit rate for a single channel.
You then need to multiply that by the number of channels to get the total bit rate flowing through the system.
For example, the CD standard has a sample rate of 44100 samples per second and a bit depth of 16, giving a bit rate of 705600 bits per second per channel and a total bit rate of 1411200 bits per second for stereo.
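Both answers boil down to the same arithmetic; a minimal sketch (the function name is illustrative):

```python
def bit_rate(sample_rate_hz, bit_depth, channels):
    """Uncompressed PCM bit rate in bits per second."""
    return sample_rate_hz * bit_depth * channels

print(bit_rate(44100, 16, 1))  # 705600  (CD audio, one channel)
print(bit_rate(44100, 16, 2))  # 1411200 (CD audio, stereo)
print(bit_rate(3, 2, 1))       # 6       (the asker's 2-bit, 3 Hz example)
```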

discrete fourier transform frequency bound?

For an 8 kHz WAV sound I took a 20 ms sample, which is 160 samples of data, and plotted the FFT spectrum in Audacity.
It shows magnitudes at 3000 and 4000 Hz as well; shouldn't it only give magnitudes up to 80 Hz, because there are 160 samples of data?
For a sample rate of Fs = 8 kHz the FFT will give meaningful results from DC to Nyquist (= Fs / 2), i.e. 0 to 4 kHz. The width of each FFT bin will be 1 / 20 ms = 50 Hz.
Actually, Audacity shows the peak at 4503 Hz, which suggests it resolves down to 1 Hz bins. By the way, if I take the 20 ms and repeat it 50 times to make a 1 s sample, will the FFT then have 1 Hz bins? Also, Audacity has an option for the window; as far as I know, if you use windowing then the FFT size should be a power of 2, like 1, 2, 4, 8, etc., but it shows the exact frequencies, so why does it use windowing?
The sampling rate must be at least twice the highest frequency you want to capture (the Nyquist rate).
For different frequency ranges you would need different sampling rates.
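The bin arithmetic from the accepted answer can be sketched as follows (pure Python, function name illustrative): for an N-point FFT of real input at sample rate fs, the meaningful bins run from DC to Nyquist, with bin k centered at k * fs / N.

```python
def fft_bin_freqs(n_samples, sample_rate_hz):
    """Center frequencies of FFT bins from DC to Nyquist."""
    bin_width = sample_rate_hz / n_samples
    return [k * bin_width for k in range(n_samples // 2 + 1)]

freqs = fft_bin_freqs(160, 8000)  # the question's 20 ms at 8 kHz
print(freqs[1])   # 50.0   -> 50 Hz bin width
print(freqs[-1])  # 4000.0 -> Nyquist, matching the answer
```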

setting timestamps for audio samples in directshow graph

I am developing a DirectShow audio decoder filter to decode AC3 audio.
The filter is used in a live graph, decoding a TS multicast.
The demuxer (MainConcept) provides me with the demuxed audio data, but does not provide timestamps for the samples.
How can I get/compute the correct timestamps for the audio?
I found this forum post:
http://www.ureader.com/msg/14712447.aspx
In it, a member gives the following formula for calculating the timestamps for audio, given its format (sample rate, number of channels, bits per sample):
With PCM audio, duration_in_secs = 8 * buffer_size / wBitsPerSample / nChannels / nSamplesPerSec, or duration_in_secs = buffer_size / nAvgBytesPerSec (since, for PCM audio, nAvgBytesPerSec = wBitsPerSample * nChannels * nSamplesPerSec / 8).
The only thing you need to add is a tracking variable that tells you what sample number in the stream that you are at, so you can use it to offset the start time and end time by the duration (duration_in_secs) when doing linear streaming. For seek operations you would of course need to know or calculate the sample number into the stream.
Don't forget that the units for timestamps in DirectShow are typed as REFERENCE_TIME, a 64-bit integer. Each unit is equal to 100 nanoseconds. That is why, in video filters, you see the value 10,000,000 being divided by the relevant number of frames per second (FPS) to calculate timestamps for each frame: 10,000,000 units equal 1 second in a REFERENCE_TIME variable.
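Putting the PCM formula and the 100 ns units together, a sketch (in Python for brevity; the function name is illustrative, and the parameter names mirror the WAVEFORMATEX members quoted above):

```python
UNITS_PER_SECOND = 10_000_000  # REFERENCE_TIME units (100 ns) per second

def buffer_duration_ref_time(buffer_size_bytes, wBitsPerSample,
                             nChannels, nSamplesPerSec):
    """Duration of a PCM buffer in REFERENCE_TIME units."""
    nAvgBytesPerSec = wBitsPerSample * nChannels * nSamplesPerSec // 8
    return buffer_size_bytes * UNITS_PER_SECOND // nAvgBytesPerSec

# A 4096-byte buffer of 16-bit stereo 44.1 kHz PCM (~23.2 ms):
print(buffer_duration_ref_time(4096, 16, 2, 44100))  # 232199
```

Each buffer's start time is the running sum of the durations of the buffers before it, per the tracking-variable advice above.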
Each AC-3 frame embeds data for 6 * 256 samples. The sampling rate can be 32 kHz, 44.1 kHz or 48 kHz (as defined by the AC-3 specification, Digital Audio Compression Standard (AC-3, E-AC-3)). The frames themselves do not carry timestamps, so you need to assume a continuous stream and increment the timestamps accordingly. Since, as you mentioned, the source is live, you might need to re-adjust the timestamps on data starvation.
Each AC-3 frame is of fixed length (which you can identify from the bitstream header), so you might also check whether the demultiplexer is giving you a single AC-3 frame or several in a batch.
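A sketch of the resulting timestamp sequence, assuming (per the answer above) 6 * 256 samples per frame and a continuous stream; the helper name is hypothetical:

```python
SAMPLES_PER_AC3_FRAME = 6 * 256  # 1536 PCM samples per frame
UNITS_PER_SECOND = 10_000_000    # REFERENCE_TIME units per second

def frame_timestamps(n_frames, sample_rate_hz, start=0):
    """(start, stop) REFERENCE_TIME pairs for consecutive AC-3 frames."""
    per_frame = SAMPLES_PER_AC3_FRAME * UNITS_PER_SECOND // sample_rate_hz
    return [(start + i * per_frame, start + (i + 1) * per_frame)
            for i in range(n_frames)]

# At 48 kHz each frame is 1536 / 48000 s = 32 ms = 320000 units:
print(frame_timestamps(3, 48000))
# [(0, 320000), (320000, 640000), (640000, 960000)]
```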