With FFMPEG, I'm sending a stream from Computer A to Computer B via UDP.
It is sent as an MPEGTS stream, encoded with libx264 and aac.
Computer B takes this stream with FFMPEG and puts it into an m3u8 playlist.
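For reference, the sending side looks roughly like this (input and address are placeholders, not the literal command):
ffmpeg -i input -vcodec libx264 -acodec aac -f mpegts udp://computerB:1234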
After a random time (2-35 minutes), the message
[mpegts @ 0533f000] AAC bitstream not in ADTS format and extradata missing
av_interleaved_write_frame(): Invalid data found when processing input
appears.
What I figure is that the receiving FFMPEG can't read the ADTS header of the audio in this particular packet, and since it can't mux video and audio together anymore, it stops creating the .ts files and just stops running.
Here's the cmdline of the receiving stream:
ffmpeg -i udp://address -vcodec copy -acodec copy -map 0 -f segment -segment_list playlist.m3u8 -analyzeduration 100000 -probesize 100000 -segment_list_flags +live-cache -segment_time 8 -segment_wrap 10 out%03d.ts
Now I need the answer to either of these two questions:
1) Can I put something in my command line to avoid this particular problem, or
2) Can I tell FFMPEG to simply ignore the error for this particular packet, quite possibly producing weird audio or none at all, and move on to the next one?
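One thing worth trying for question 1: re-encode the audio on the receiving side instead of copying it, so the segment muxer always gets well-formed AAC frames with extradata. A sketch (the only change from the command above is the audio re-encode; the 128k bitrate is a guess):
ffmpeg -i udp://address -vcodec copy -acodec aac -b:a 128k -map 0 -f segment -segment_list playlist.m3u8 -analyzeduration 100000 -probesize 100000 -segment_list_flags +live-cache -segment_time 8 -segment_wrap 10 out%03d.ts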
A bit of history: I am using Plex as my media server, but for reasons unknown, it has issues transcoding the DTS-HD MA 7.1 audio to EAC3 stereo and keeps buffering (the server has plenty of horsepower on all fronts: CPU/RAM/drive space & speed, and gigabit network connections for all devices). The playback device (a TCL Roku TV with a 3rd-party soundbar connected via HDMI ARC) doesn't support the built-in 7.1 audio, so I get silence if I play the file back directly from a USB stick.
Also, I am by no means an ffmpeg guru; I figured out what I do know via Google University and asking questions, so please be kind and forgive me if I ask follow-up questions that may seem n00b-ish, and please provide example commands (preferably in the context of my command below, so that I have a known point of reference to start with).
I have a movie with 4K (HEVC Main 10 HDR) video and DTS-HD MA 7.1 audio. I want to leave the video and 7.1 audio untouched, but add a second audio track in EAC3 or, if necessary, just AC3 in stereo.
So what I am looking for is as follows:
video.mkv
Existing->4k video file (no change)
Existing->7.1 audio (no change)
Convert and add->stereo audio as a 2nd audio track to the output.mkv file
Below is the command I've historically used with ffmpeg to convert and replace the audio with stereo, but since I'd prefer to leave the 7.1 audio in place, this doesn't work:
ffmpeg -i "D:\video.mkv" -c:v copy -c:a aac -b:a 128k "D:\output.mkv"
And if this cannot be done as a single command, please also let me know what steps I do need to take to be able to do it.
Thanks in advance,
Mike
ffmpeg -i input.mkv -map 0 -map 0:a -c copy -c:a:1 eac3 output.mkv
-map 0 selects all streams.
-map 0:a selects all audio streams. This combines with -map 0, so now you have 1 video and 2 audio streams selected.
-c copy stream copies all streams.
-c:a:1 eac3 encodes output audio stream #1 (the second audio stream; numbering starts at 0) with the eac3 encoder. This overrides -c copy for this particular stream.
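Since the goal is a stereo track, you may also want to downmix the new stream to two channels; -ac accepts the same per-stream specifier, so a downmixing variant would be:
ffmpeg -i input.mkv -map 0 -map 0:a -c copy -c:a:1 eac3 -ac:a:1 2 output.mkv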
I have been trying to implement HTTP live streaming using mpeg-dash but need guidance on some issues.
Provided:
I have an encoded audio and video stream in an input buffer.
A direct MPEG-2 transport stream for the above is also available in a buffer.
Current approach:
Save the transport stream into chunks of fixed length.
use ffmpeg to extract video stream.
ffmpeg -i latest_chunk.ts -s 720x480 -c:v libx264 -b:v 600k -y -an output_video_stream.mp4
use ffmpeg to extract audio stream.
ffmpeg -i latest_chunk.ts -c:a aac -b:a 128k -y -vn output_audio_stream.mp4
use mp4box to create dash segments and mpd.
mp4box -dash 7000 -profile live output_video_stream.mp4 output_audio_stream.mp4 -out manifest.mpd
A server running continuously in another thread serves the generated mpd and segments.
Issues:
The above approach gives a considerable amount of latency. Can this be done more efficiently?
I want to know if there is a method to take the encoded stream buffers directly as input and produce MPEG-DASH segments and an mpd; the HTTP server will do the rest. If there is, please provide an example.
Also, I provided the length of the transport stream chunks (in seconds) to mp4box as the argument -mpd-refresh 12, but the player only requests the mpd once, plays the segments, and stops. The generated mpd file also does not include the minimumUpdatePeriod attribute:
mp4box -dash 7000 -profile live -mpd-refresh 12 output_video_stream.mp4 output_audio_stream.mp4 -out manifest.mpd
Does MPEG-DASH support MPEG-2 encoded media streams?
Any advice/solution/reference for the same is appreciated.
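Two notes that may help, with the caveat that neither is tested against this exact setup. On the mpd-refresh issue: if I recall correctly, MP4Box only writes a dynamic MPD (with minimumUpdatePeriod) when run in live mode, i.e. -dash-live rather than -dash:
mp4box -dash-live 7000 -profile live -mpd-refresh 12 output_video_stream.mp4 output_audio_stream.mp4 -out manifest.mpd
On latency: ffmpeg ships its own dash muxer, which can consume the transport stream directly (e.g. over a pipe) and emit segments plus a live MPD in one process, skipping the chunk/extract/mp4box round-trip. A sketch, assuming ffmpeg 4.1+ and the TS buffer fed on stdin; segment duration and window size are placeholders:
ffmpeg -i pipe:0 -map 0:v -map 0:a -c:v libx264 -b:v 600k -s 720x480 -c:a aac -b:a 128k -f dash -seg_duration 7 -use_template 1 -use_timeline 1 -window_size 10 -streaming 1 manifest.mpd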
I have a .mov file (codec = motion jpeg) that has an audio stream that includes small pulses at every second.
When I convert this file to mp4 using ffmpeg I notice that all my pulses are now off by one frame.
I simply used "ffmpeg -i source_file.mov target_file.mp4"
Here is an image of the comparison between the audio signals:
A1 is the original audio (.mov) and A2 is the mp4 output audio of ffmpeg.
As you can see the pulses are one frame late compared to the original.
I know that the h264 codec is lossy, but a one-frame offset seems like a big loss if you ask me.
Is there any option I could use with ffmpeg to get a better audio stream?
Here is the input file: https://www.dropbox.com/s/6y5g7lo5dvu0ub1/BBB_09_tree_trunk_009_ANIM_001.mov?dl=0
Here is the output file:
https://www.dropbox.com/s/10zuzwn0qs8l853/BBB_09_tree_trunk_009_ANIM_001.mp4?dl=0
If you copy the audio over, you shouldn't get the shift.
ffmpeg -i source_file.mov -c:a copy target_file.mp4
I've been working on this issue for my own needs and my file format has to be mp4. I'm working from mxf files. I've tried several options and found this to give the most accurate result (I've removed specifics for simplicity):
ffmpeg -ss 00:00:00.021 -i "input.mxf" -itsoffset -0.044 -i "input.mxf" -c:v libx264 -c:a aac -map 0:a -map 1:v "output.mp4"
Starting the first file at 21ms and mapping it as the audio, then shifting the video back 44ms, gave me the most accurate sync (within several samples). I don't know why 22ms wasn't as accurate (when that's what the priming-sample issue seems to equate to), and I found nothing that let me work at a more granular level, in samples. A filter with a PTS offset had no effect. Perhaps it works differently with different file formats. It's also worth noting that the same command without the -itsoffset gave the same sync result with one difference: the video stream duration was 1 frame and 1ms off the audio and container durations. With the -itsoffset, the durations were only 1ms different. You can use 22ms to achieve an accurate duration, but check your sync; it might be out that slightest bit more.
Also worth noting that I stumbled across some developer commentary on the -itsoffset flag which clarified that it doesn't work on audio, it works on video. It seems like the answer above suggests mapping the offset against the audio, which apparently is not how the function is built to work. https://trac.ffmpeg.org/ticket/1349
Try MPEG-2 audio: -acodec mp2. It worked for me.
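In the context of the question's command, that would be something like the following (note that some players may be picky about MP2 audio in an MP4 container, so verify playback):
ffmpeg -i source_file.mov -acodec mp2 target_file.mp4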
I'm using ffmpeg to extract audio from MPEG Transport Stream file recorded by DVB-S card. The command:
ffmpeg -i video.ts -vn audio.wav
The source file seems to be corrupted. I noticed the corruption happens from time to time, especially for videos longer than 1 hour. I've got errors like these:
[mp2 @ 0x1bb5500] Header missing
Error while decoding stream #0:1
[mpegts @ 0x17eaf40] Continuity check failed for pid 5261 expected 2 got 6
The problem is that the resulting audio.wav is shorter than the source video (40m33s vs 40m59s, respectively). I'm looking for a way to preserve the original length in the resulting audio file.
I tried the recent ffmpeg under Windows and avconv under Ubuntu, output format was MP3 and WAV. For every case I've got the same results.
I didn't find a way to do it with ffmpeg; however, I found ProjectX, a tool which tries to fix a broken TS stream. Website: http://project-x.sourceforge.net/
With:
java -jar ProjectX.jar -demux my_video.ts
the stream is demuxed into audio and video files which are guaranteed to have the same length. I simply mux them back using ffmpeg.
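For completeness, the remux step is something like this; the demuxed file names vary with ProjectX's settings, so treat these as placeholders:
ffmpeg -i my_video.m2v -i my_video.mp2 -c copy fixed.ts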
I'm using ffmpeg to build a short hunk of video from a machine-generated png. This is working, but the video now needs to have a soundtrack (an [audio] field) for some of the other things I'm doing with it. I don't actually want any sound in the video, so: is there a way to get ffmpeg to simply set up an empty soundtrack property in the video, perhaps as part of the call that creates the video? I guess I could make an n-second long silent mp3 and bash it in, but is there a simpler / more direct way? Thanks!
Thanks to @Alvaro for the links; one of these worked after a bit of massaging. It does seem to be a two-step process: first make the soundtrack-less video, and then do:
ffmpeg -ar 44100 -acodec pcm_s16le -f s16le -ac 2 -channel_layout stereo -i /dev/zero -i in.mp4 -vcodec copy -acodec libfaac -shortest out.mp4
The silence comes from /dev/zero and -shortest makes the process stop at the end of the video. Argument order is significant here; -shortest needs to be down near the output file spec.
This assumes that your ffmpeg installation has libfaac installed, which it might not. But, otherwise, this seems to be working.
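On newer ffmpeg builds, where libfaac has been dropped, the same effect can likely be had with the built-in aac encoder and the anullsrc source (a sketch, untested here):
ffmpeg -i in.mp4 -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -c:v copy -c:a aac -shortest out.mp4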
I guess you need to create a media file properly, with both an audio and a video stream. As far as I know, there is not a direct way.
If you know your video duration, first create the dummy audio, and afterwards, when you create the video, join the audio part.
On Super User you can find more info: link1, link2
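A sketch of that two-step idea, assuming a 10-second video (duration and file names are placeholders): first generate a silent track, then mux it in without re-encoding:
ffmpeg -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -t 10 -c:a aac silence.m4a
ffmpeg -i in.mp4 -i silence.m4a -c copy out.mp4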