FFmpeg adding an extra 128 ms to each audio chunk while converting WAV to AAC

I have a stream of audio bytes and am doing a live stream using HLS. First, I convert a few audio bytes to WAV chunks and then convert each WAV chunk to AAC. While converting to AAC, FFmpeg adds an extra 128 ms to every chunk. Because of this extra 128 ms per chunk, the audio length grows significantly over time compared to the original audio length.
I tried reading the audio chunks in multiples of 1024 samples for the AAC conversion, but it didn't work.
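One way to verify that the growth comes from encoder priming rather than the chunking itself is to compare container durations before and after encoding. A minimal sketch, assuming a single chunk saved as chunk.wav (the file names are illustrative):
ffprobe -v error -show_entries format=duration -of csv=p=0 chunk.wav
ffmpeg -i chunk.wav -c:a aac chunk.m4a
ffprobe -v error -show_entries format=duration -of csv=p=0 chunk.m4a
If every encoded chunk is longer by roughly the same fixed amount, the delay is per-encode priming; encoding the chunks as one continuous AAC stream (or trimming the known delay from each chunk) avoids the accumulation.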

Related

Mux segmented mpegts audio and video to single clip with error correction

I have a recording as a collection of files in mpegts format, like
audio: a-1.ts, a-2.ts, a-3.ts, a-4.ts
video: v-1.ts, v-2.ts, v-3.ts
I need to make a single video clip in mp4 or mkv format.
However, there are two problems:
audio and video segments each have different durations, and the number of audio segments differs from the number of video segments, although the total duration of audio and video matches. Hence I cannot concat audio/video segments pairwise with ffmpeg and merge them afterwards; I get sync issues that increase progressively
a few segments are corrupt or missing. So if I concat the audio and video streams separately using ffmpeg, I get streams of different lengths. When I merge these streams using ffmpeg, A/V synchronization is correct only until the first missing packet is encountered.
It's OK if video freezes for a while or there is silence for a while as long as most of the video is in sync with audio.
I've checked with tsduck and PCR seems to be present in all audio and video segments, yet I could not find a way to merge the streams using the MPEG-TS PCR as a sync reference. Please advise how I can achieve this.
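For reference, a minimal sketch of the separate-stream concat approach described above, using ffmpeg's concat demuxer; the list files a.txt and v.txt are assumed to contain lines like file 'a-1.ts'. This is the approach that loses sync once a segment is missing, shown only to make the question concrete:
ffmpeg -f concat -safe 0 -i a.txt -c copy audio.ts
ffmpeg -f concat -safe 0 -i v.txt -c copy video.ts
ffmpeg -i video.ts -i audio.ts -map 0:v -map 1:a -c copy merged.mkv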

How is the AAC encoder priming delay handled in HLS?

As per Apple, in AAC encoding 2112 priming samples are added at the beginning of audio. When creating HLS stream with AAC audio, will these priming samples be added to the beginning of each HLS segment or only to the first HLS segment? And, how does this AAC encoder delay affect HLS DISCONTINUITY tags later in the HLS stream?
https://developer.apple.com/library/archive/documentation/QuickTime/QTFF/QTFFAppenG/QTFFAppenG.html
It depends on the AAC variant you use.
For 'old-style' AAC-LC you only have priming samples at the beginning of the stream and not at the beginning of each segment.
But the delay is carried through the entire stream.
Typically a new piece of media is displayed after a DISCONTINUITY tag - for example an advertisement - so you will receive another set of priming samples.
Your AAC audio decoder needs to discard the priming samples (the first 2112 PCM output samples) after startup and after a DISCONTINUITY.
If you use the more modern xHE-AAC - you don't have to worry about priming samples anymore.
Another wrinkle: in the early days it was simply assumed that AAC-LC has 2112 priming samples.
Now the number can differ, and it can be signaled in the MP4 container as an edit list.
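If you need to check what a particular file actually signals, a hedged example with ffprobe (field availability varies by muxer and ffprobe version); when an edit list is present, ffmpeg-based tools often report a shifted, sometimes negative, start_time for the audio stream that reflects the priming delay:
ffprobe -v error -select_streams a:0 -show_entries stream=codec_name,start_time -of default=noprint_wrappers=1 input.m4a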

Problem understanding audio stream number of samples when decoded with ffmpeg

The two streams I am decoding are an audio stream (ADTS AAC, 1 channel, 44100 Hz, 8-bit, 128bps) and a video stream (H.264), received in an MPEG-TS stream, but I noticed something that doesn't make sense to me when I decode the AAC audio frames and try to line up the audio/video stream timestamps. I'm decoding the PTS for each video and audio frame; however, I only get a PTS in the audio stream every 7 frames.
When I decode a single audio frame I always get back 1024 samples. The frame rate is 30 fps, so I see 30 frames each with 1024 samples, which equals 30,720 samples and not the expected 44,100 samples. This is a problem when computing the timeline, as the timestamps on the frames are slightly different between the audio and video streams. It's very close, but since I compute the timestamps via (1024 samples * 1,000 / 44,100 * 10,000 ticks) it's never going to line up exactly with the 30 fps video.
Am I doing something wrong here with decoding the ffmpeg audio frames, or am I misunderstanding audio samples?
And in my particular application these timestamps are critical, as I am trying to line up LTC timestamps, which are decoded at the audio frame level, with video frames.
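For reference, the arithmetic implied above (these figures follow directly from the numbers in the question):
1024 samples / 44100 Hz ≈ 23.22 ms per AAC frame
44100 / 1024 ≈ 43.07 audio frames per second, so the decoder does not produce 30 audio frames per second
1024 * 90000 / 44100 ≈ 2089.8 ticks per audio frame in the 90 kHz time_base, versus 90000 / 30 = 3000 ticks per video frame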
FFProbe.exe:
Video:
r_frame_rate=30/1
avg_frame_rate=30/1
codec_time_base=1/60
time_base=1/90000
start_pts=7560698279
start_time=84007.758656
Audio:
r_frame_rate=0/0
avg_frame_rate=0/0
codec_time_base=1/44100
time_base=1/90000
start_pts=7560686278
start_time=84007.625311

Remove audio streams from a .m2ts video file

I have a video which has 3 audio streams in the file. The first one is English and the others are in different languages. How can I get rid of the extra audio streams without losing quality in the video or the English stream?
I think ffmpeg should be used, but I don't know how to do it.
Video
Bit rate mode: Variable
Overall bit rate: 38.6 Mb/s
Chroma subsampling: 4:2:0
Audio
Format: DTS-HD
Compression mode: Lossless
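A minimal sketch using ffmpeg stream copying, assuming the English track is the first audio stream in a file named input.m2ts (the stream indexes are assumptions; check them with ffprobe first):
ffmpeg -i input.m2ts -map 0:v -map 0:a:0 -c copy output.mkv
-c copy copies the video and the selected audio track without re-encoding, so no quality is lost; MKV is used here because it accepts DTS-HD audio.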

Convert an audio file into a pcap with codec G722

I need to convert an audio file (any common format) into an RTP stream saved in a .pcap file with the G.722 codec.
The generated .pcap file will be sent with SIPp using:
<exec play_pcap_audio="g722.pcap"/>
I know it is also possible to send a .wav file with the following command, if the .wav is correctly encoded:
<exec rtp_stream="g711.wav"/>
But it seems that it is not possible to encode a .wav with G.722.
There are multiple solutions on the web and SO on how to convert a .pcap into an audio file, but I'm actually looking for the opposite.
Steps to convert WAV audio to a .pcap file (a command-line sketch follows the list):
Split audio to 20 ms chunks
Encode each chunk with G.722 encoder
Create RTP header for each encoded chunk
Save RTP stream to .pcap
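One hedged way to approximate these steps without writing custom code is to let ffmpeg do the G.722 encoding and RTP packetization while tcpdump captures the packets; the loopback interface, port, and file names below are assumptions:
tcpdump -i lo -w g722.pcap udp port 5004 &
ffmpeg -re -i sample.wav -ar 16000 -acodec g722 -f rtp rtp://127.0.0.1:5004
Start the capture before ffmpeg so the first packets are not lost; whether SIPp accepts the resulting capture as-is depends on how strictly it checks the RTP payload type and the 20 ms packetization interval.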
I've never used SIPp, but if it can process an encoded G.722 stream, then use ffmpeg for encoding:
ffmpeg -i sample.wav -ar 16000 -acodec g722 sample.g722
Alternatively, get a softphone that supports WAV files as a source and the G.722 codec, make a call with only G.722 enabled, and capture the RTP stream to a pcap.
