AAC RTP timestamps and synchronization - audio

I am currently streaming audio (AAC-HBR at 8 kHz) and video (H.264) using RTP. Both feeds work fine individually, but when put together they get out of sync pretty fast (less than 15 seconds).
I am not sure how to increment the timestamp in the audio RTP header. I thought it should be either the time difference between two RTP packets (around 127 ms) or a constant increment of 1/8000 s (0.125 ms). Neither worked; instead I managed to find a sweet spot: when I increment the timestamp by 935 for each packet it stays synchronized for about a minute.

The AAC frame size is 1024 samples, so try incrementing by 1024 per frame, which at 8 kHz corresponds to (1/8000) * 1024 = 128 ms, or by a multiple of that in case your packet carries multiple AAC frames.
Does that help?

A bit late, but I thought I'd put up my answer.
The timestamp increment between audio RTP packets == the number of audio samples contained in each RTP packet.
For AAC, each frame consists of 1024 samples, so the timestamp on the RTP packet should increase by 1024.
The difference between the clock times of 2 RTP packets = (1/8000) * 1024 = 128 ms, i.e. the sender should send the RTP packets 128 ms apart.
A bit more information for other sampling rates:
AAC sampled at 44100 Hz means 44100 samples of signal in 1 second.
So 1024 samples take (1000 ms / 44100) * 1024 = 23.21995 ms.
So the timestamp difference between 2 RTP packets = 1024, but
the difference in clock time between 2 RTP packets in the RTP session should be 23.21995 ms.
To correlate with another example:
For the G.711 family (PCM, PCMU, PCMA), the sampling frequency = 8 kHz.
So a 20 ms packet should contain 8000/50 == 160 samples.
Hence RTP timestamps are incremented by 160.
The difference in clock time between 2 RTP packets should be 20 ms.
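To put the arithmetic above in one place, here is a minimal sketch (the frame sizes and sample rates are the ones used in this answer, assuming one codec frame per RTP packet):

    # RTP timestamp increment vs. wall-clock spacing per packet,
    # assuming one codec frame per RTP packet.
    def rtp_increment_and_spacing(samples_per_frame, sample_rate_hz):
        timestamp_increment = samples_per_frame                   # RTP timestamp units
        spacing_ms = samples_per_frame * 1000.0 / sample_rate_hz  # wall-clock gap between packets
        return timestamp_increment, spacing_ms

    print(rtp_increment_and_spacing(1024, 8000))    # AAC at 8 kHz    -> (1024, 128.0)
    print(rtp_increment_and_spacing(1024, 44100))   # AAC at 44.1 kHz -> (1024, ~23.22)
    print(rtp_increment_and_spacing(160, 8000))     # G.711, 20 ms    -> (160, 20.0)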

IMHO video and audio de-sync on Android is difficult to fight if they are taken from different media recorders. They simply start capturing at different frames, and there is (as it seems) no way to find out how big the de-sync is and adjust for it with the audio or video timestamps on the fly.

Related

Problem understanding audio stream number of samples when decoded with ffmpeg

The two streams I am decoding are an audio stream (ADTS AAC, 1 channel, 44100 Hz, 8-bit, 128 bps) and a video stream (H.264), which are received in an MPEG-TS stream. I noticed something that doesn't make sense to me when I decode the AAC audio frames and try to line up the audio/video stream timestamps. I'm decoding the PTS for each video and audio frame; however, I only get a PTS in the audio stream every 7 frames.
When I decode a single audio frame I always get back 1024 samples. The frame rate is 30 fps, so I see 30 frames, each with 1024 samples, which equals 30,720 samples and not the expected 44,100 samples. This is a problem when computing the timeline, as the timestamps on the frames are slightly different between the audio and video streams. It's very close, but since I compute the timestamps via (1024 samples * 1,000 / 44,100 * 10,000 ticks) it's never going to line up exactly with the 30 fps video.
Am I doing something wrong here with decoding the ffmpeg audio frames, or misunderstanding audio samples?
In my particular application these timestamps are critical, as I am trying to line up LTC timestamps, which are decoded at the audio frame level, with video frames.
FFProbe.exe:
Video:
r_frame_rate=30/1
avg_frame_rate=30/1
codec_time_base=1/60
time_base=1/90000
start_pts=7560698279
start_time=84007.758656
Audio:
r_frame_rate=0/0
avg_frame_rate=0/0
codec_time_base=1/44100
time_base=1/90000
start_pts=7560686278
start_time=84007.625311
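For reference, a minimal sketch of the per-frame arithmetic implied by the numbers above (the 1024-sample frames, the 44100 Hz sample rate, the 1/90000 time_base and the 30 fps figure all come from the question and the ffprobe output; the derived frame rates simply follow from them):

    # Per-frame arithmetic: 1024-sample AAC frames at 44100 Hz,
    # PTS expressed in a 1/90000 time_base, video at 30 fps.
    samples_per_frame = 1024
    sample_rate = 44100
    time_base = 90000                                            # ticks per second

    frame_duration_s = samples_per_frame / sample_rate           # ~0.02322 s per audio frame
    ticks_per_audio_frame = frame_duration_s * time_base         # ~2089.8 ticks
    audio_frames_per_second = sample_rate / samples_per_frame    # ~43.07 audio frames/s, not 30
    ticks_per_video_frame = time_base / 30                       # 3000 ticks per video frame

    print(frame_duration_s, ticks_per_audio_frame, audio_frames_per_second, ticks_per_video_frame)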

About definitions of audio codec terms

When I was studying the Cocoa Audio Queue documentation, I came across several audio codec terms. They are defined in a structure named AudioStreamBasicDescription.
Here are the terms:
1. Sample rate
2. Packet
3. Frame
4. Channel
I know about sample rate and channel, but I am confused by the other two. What do they mean?
You can also answer this question by example. For example, I have a dual-channel PCM-16 source with a sample rate of 44.1 kHz, which means there are 2 * 44100 = 88200 PCM samples per second. But what about packet and frame?
Thank you in advance!
You are already familiar with the sample rate definition.
The sampling frequency or sampling rate, fs, is defined as the number of samples obtained in one second (samples per second), thus fs = 1/T.
So for a sampling rate of 44100 Hz, you have 44100 samples per second (per audio channel).
The number of frames per second in video is a similar concept to the number of samples per second in audio: frames for our eyes, samples for our ears. Additional info here.
If you have 16-bit stereo PCM it means you have 16 * 44100 * 2 = 1411200 bits per second => ~172 kB per second => around 10 MB per minute.
Now to the definitions, in reworded terms from Apple:
Sample: a single number representing the value of one audio channel at one point in time.
Frame: a group of one or more samples, with one sample for each channel, representing the audio on all channels at a single point in time.
Packet: a group of one or more frames, representing the audio format's smallest encoding unit, and the audio for all channels across a short amount of time.
As you can see there is a subtle difference between audio and video frame notions. In one second you have for stereo audio at 44.1 kHz: 88200 samples and thus 44100 frames.
Compressed formats like MP3 and AAC pack multiple frames into packets (these packets can then be written into an MP4 file, for example, where they can be efficiently interleaved with video content). Dealing with larger packets helps the encoder identify bit patterns for better coding efficiency.
MP3, for example, uses packets of 1152 frames, which are the basic atomic unit of an MP3 stream. PCM audio is just a series of samples, so it can be divided down to the individual frame, and it really has no packet size at all.
For AAC you can have 1024 (or 960) frames per packet. This is described in the Apple document you pointed at:
The number of frames in a packet of audio data. For uncompressed audio, the value is 1. For variable bit-rate formats, the value is a larger fixed number, such as 1024 for AAC. For formats with a variable number of frames per packet, such as Ogg Vorbis, set this field to 0.
In MPEG-based file formats a packet is referred to as a data frame (not to be confused with the audio frame notion above). See Brad's comment for more information on the subject.
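A small sketch of these relationships, using the 16-bit stereo 44.1 kHz PCM example and the frames-per-packet figures already mentioned:

    # Sample / frame / packet arithmetic for the examples above.
    sample_rate = 44100          # frames per second (one frame = one sample per channel)
    channels = 2
    bits_per_sample = 16

    samples_per_second = sample_rate * channels                          # 88200 samples/s
    bytes_per_second = sample_rate * channels * bits_per_sample // 8     # 176400 B/s (~172 kB/s)

    frames_per_packet = {"PCM": 1, "AAC": 1024, "MP3": 1152}             # smallest encoding units
    aac_packet_duration_ms = frames_per_packet["AAC"] * 1000.0 / sample_rate   # ~23.22 ms

    print(samples_per_second, bytes_per_second, aac_packet_duration_ms)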

How to prevent data throttling with audio codec streaming

I am sampling an incoming audio stream at 8 ksps. I have a codec that takes ~1.6 ms to encode a packet of data (80 samples) into an encoded packet (5 samples). At this rate I get 8000 * 1.662e-3 ≈ 13 samples every encoding cycle, but I need 80 samples every cycle. How do I keep the stream continuous? My only guess is to slow down the bitrate of the outgoing encoded stream, but I'm not sure how to calculate this in general such that buffers on the incoming side don't fill up and the receiving side's buffers don't get starved.
This seems like a basic tenet of streaming but I can't find any info on methods. Thanks for any help!
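A minimal sketch of the rate arithmetic in the question (the closing comment, that the encoder keeps up once incoming samples are buffered into 80-sample packets, is an inference rather than something stated in the post):

    # Rate arithmetic behind the figures in the question.
    sample_rate = 8000            # incoming samples per second
    samples_per_packet = 80
    encode_time_s = 1.662e-3      # time the codec needs to encode one 80-sample packet

    packet_duration_s = samples_per_packet / sample_rate          # 0.010 s of audio per packet
    samples_arriving_during_encode = sample_rate * encode_time_s  # ~13 samples

    # The encoder only needs to run once every packet_duration_s (10 ms) and each
    # run takes ~1.6 ms, so (with incoming samples buffered until a full 80-sample
    # packet is available) it can keep up with the input rate.
    print(packet_duration_s, samples_arriving_during_encode)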

How to calculate effective time offset in RTP

I have to calculate the time offset between packets in RTP streams. With a video stream encoded with the Theora codec I have timestamp fields like
2856000
2940000
3024000
...
So I assume that the transmission offset is 84000. With the Speex audio codec I have timestamp fields like
38080
38400
38720
...
So I assume that the transmission offset is 320. Why are the values so different? Are they microseconds, milliseconds, or something else? Can I generalize a formula to calculate the delay between packets in microseconds that works with any codec? Thank you.
RTP timestamps are media dependent: they use the sampling rate of the codec in use. You have to convert them to milliseconds before comparing them with your clock or with timestamps from other RTP streams.
Added:
To convert the timestamp to seconds, just divide it by the sample rate. For most audio codecs, the sample rate is 8 kHz.
See here for a few examples.
Note that video codecs typically use 90000 for the timestamp rate.
Instead of guessing at the clock rate, look at the a=rtpmap line in the SDP for the payload in use. Example:
m=audio 5678 RTP/AVP 0 8 99
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=rtpmap:99 AAC-LD/16000
If the payload is 0 or 8, the timestamp clock is 8 kHz. If it's 99, it's 16 kHz. Note that the rtpmap line has an optional 'channels' parameter, as in "a=rtpmap:<payload> <name>/<rate>[/<channels>]".
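A minimal sketch of pulling the clock rate out of such rtpmap lines (the SDP snippet is the one above; the parse_rtpmap helper is hypothetical, not from any library):

    # Map payload type -> (codec name, clock rate in Hz) from a=rtpmap lines.
    def parse_rtpmap(sdp_text):
        rates = {}
        for line in sdp_text.splitlines():
            line = line.strip()
            if not line.startswith("a=rtpmap:"):
                continue
            payload, desc = line[len("a=rtpmap:"):].split(None, 1)
            name_and_rate = desc.split("/")                  # name/rate[/channels]
            rates[int(payload)] = (name_and_rate[0], int(name_and_rate[1]))
        return rates

    sdp = """m=audio 5678 RTP/AVP 0 8 99
    a=rtpmap:0 PCMU/8000
    a=rtpmap:8 PCMA/8000
    a=rtpmap:99 AAC-LD/16000"""

    print(parse_rtpmap(sdp))   # {0: ('PCMU', 8000), 8: ('PCMA', 8000), 99: ('AAC-LD', 16000)}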
Been researching this question for about an hour for the audio case. It seems the answer is: the RTP timestamp is incremented by the number of audio time units (sampling instants) in a packet. Take this example where you have a stream of encoded, 2-channel audio, sampled at 44100 Hz before the audio was encoded. Say you send 512 audio samples (256 time units, because we have 2-channel audio) in every packet. Assuming the first packet has a timestamp of 0 (it should be random, though, according to the RTP spec, RFC 3550), the second timestamp would be 256, and the third 512. The receiver can convert the value back to an actual time by dividing the timestamp by the audio sample rate, so the first packet would be T0, the second equals 256/44100 = 0.0058 seconds, the third equals 512/44100 = 0.0116 seconds, etc.
Someone please correct me if I'm wrong; I'm not sure why there aren't any articles online that state it this way. I guess it would be more complicated if the resolution of the RTP timestamp were different from the sample rate of the audio stream. Nevertheless, converting the timestamp to a different resolution is not complicated. Use the same example as before, but change the resolution of the RTP timestamp to 90 kHz, as in MPEG-4 Audio (RFC 3016). From the source side, the first timestamp is 0, the second is 90000 * (256/44100) = 522, and the third is 1044. On the receiver, the time is 0 for the first packet, 522/90000 = 0.0058 s for the second, and 1044/90000 = 0.0116 s for the third. Again, someone please correct me if I'm wrong.
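A short sketch of those conversions (the 44100 Hz sample rate, the 256-unit increment and the 90 kHz RTP clock are the numbers used in this answer):

    # Convert an RTP timestamp delta to seconds and rescale it between clock rates.
    def ts_to_seconds(ts_delta, clock_rate_hz):
        return ts_delta / clock_rate_hz

    def rescale(ts_delta, from_rate_hz, to_rate_hz):
        return ts_delta * to_rate_hz / from_rate_hz

    audio_rate = 44100        # sample rate of the stream
    rtp_clock = 90000         # RTP timestamp clock (e.g. MPEG-4 audio per RFC 3016)
    per_packet = 256          # sampling instants per packet (512 samples / 2 channels)

    print(ts_to_seconds(per_packet, audio_rate))              # ~0.0058 s
    print(rescale(per_packet, audio_rate, rtp_clock))         # ~522.4
    print(ts_to_seconds(rescale(per_packet, audio_rate, rtp_clock), rtp_clock))  # ~0.0058 s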

setting timestamps for audio samples in directshow graph

I am developing a DirectShow audio decoder filter to decode AC-3 audio.
The filter is used in a live graph, decoding a TS multicast.
The demuxer (MainConcept) provides me with the demuxed audio data, but does not provide timestamps for the samples.
How can I get/compute the correct timestamp for the audio?
I found this forum post:
http://www.ureader.com/msg/14712447.aspx
In it, a member gives the following formula for calculating audio timestamps, given the audio's format (sample rate, number of channels, bits per sample):
With PCM audio, duration_in_secs = 8 * buffer_size / wBitsPerSample / nChannels / nSamplesPerSec, or duration_in_secs = buffer_size / nAvgBytesPerSec (since, for PCM audio, nAvgBytesPerSec = wBitsPerSample * nChannels * nSamplesPerSec / 8).
The only thing you need to add is a tracking variable that tells you what sample number in the stream that you are at, so you can use it to offset the start time and end time by the duration (duration_in_secs) when doing linear streaming. For seek operations you would of course need to know or calculate the sample number into the stream.
Don't forget that timestamps in DirectShow are typed as REFERENCE_TIME, a long integer (Int64), where each unit equals 100 nanoseconds. That is why in video filters you see the value 10,000,000 being divided by the frame rate (FPS) to calculate the timestamp for each frame: 10,000,000 equals 1 second in a REFERENCE_TIME variable.
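A minimal sketch of that bookkeeping, in Python rather than actual DirectShow C++ (the WAVEFORMATEX-style field names mirror the quoted formula; the example buffer size is made up for illustration):

    # Running sample counter -> REFERENCE_TIME (100 ns units) start/stop stamps,
    # using duration_in_secs = buffer_size / nAvgBytesPerSec from the quoted post.
    REFERENCE_TIME_PER_SECOND = 10_000_000

    def pcm_buffer_timestamps(sample_index, buffer_size_bytes,
                              wBitsPerSample, nChannels, nSamplesPerSec):
        nAvgBytesPerSec = wBitsPerSample * nChannels * nSamplesPerSec // 8
        duration_secs = buffer_size_bytes / nAvgBytesPerSec
        start = round(sample_index * REFERENCE_TIME_PER_SECOND / nSamplesPerSec)
        stop = round(start + duration_secs * REFERENCE_TIME_PER_SECOND)
        samples_in_buffer = buffer_size_bytes * 8 // (wBitsPerSample * nChannels)
        return start, stop, sample_index + samples_in_buffer    # next sample_index

    # e.g. 16-bit stereo at 48 kHz, a 19200-byte buffer (100 ms of audio):
    print(pcm_buffer_timestamps(0, 19200, 16, 2, 48000))         # (0, 1000000, 4800)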
Each AC-3 frame embeds data for 6 * 256 samples. The sampling rate can be 32 kHz, 44.1 kHz or 48 kHz (as defined by the AC-3 specification, Digital Audio Compression Standard (AC-3, E-AC-3)). The frames themselves do not carry timestamps, so you need to assume a continuous stream and increment the timestamps accordingly. Since, as you mentioned, the source is live, you might need to re-adjust timestamps on data starvation.
Each AC-3 frame is of a fixed length (which you can identify from the bitstream header), so you might also want to check whether the demultiplexer is giving you a single AC-3 frame or several in a batch.
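As a rough sketch of the timestamping this implies (the 6 * 256 samples per frame and the 100 ns REFERENCE_TIME units come from the answers above; the 48 kHz sample rate is just one of the three allowed values, and in a real filter it would be read from the bitstream header):

    # Timestamp increment per AC-3 frame in REFERENCE_TIME (100 ns) units.
    SAMPLES_PER_AC3_FRAME = 6 * 256      # 1536 samples per frame
    REFERENCE_TIME_PER_SECOND = 10_000_000

    def ac3_frame_times(frame_index, sample_rate_hz=48000):
        frame_duration = SAMPLES_PER_AC3_FRAME * REFERENCE_TIME_PER_SECOND // sample_rate_hz
        start = frame_index * frame_duration
        return start, start + frame_duration

    # At 48 kHz each frame covers 32 ms, i.e. 320000 REFERENCE_TIME units:
    print(ac3_frame_times(0))    # (0, 320000)
    print(ac3_frame_times(1))    # (320000, 640000)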
