I'm posting today because I have a problem with ffmpeg. I have a small program that creates MP4 files for me, often with sound and sometimes without; until now I have used ffmpeg to concatenate them.
I just realized that if one of the MP4 files contains no audio, the audio track of the whole final file goes out of sync. This only happens when some of the MP4 files have no audio. It's also useful to know that the program gives me many MP4 files (>20), and I can't know in advance which ones have audio...
Here is the code I use. I read on the forum that the MP4 format handles concatenation badly and that it is better to go through the TS format, which is why I do this:
for f in *.mp4; do
    ffmpeg -i "$f" -c copy -bsf:v h264_mp4toannexb -f mpegts "$f.ts"
done
CONCAT=$(ls *.ts | paste -sd '|' -)
ffmpeg -analyzeduration 2147483647 -probesize 2147483647 -i "concat:$CONCAT" -c copy output.ts
ffmpeg -analyzeduration 2147483647 -probesize 2147483647 -i output.ts -acodec copy -vcodec copy output.mp4
I noticed that when the final file goes out of sync, this error appears:
Input #0, mpegts, from 'concat:video0.mp4.ts|video01.mp4.ts|video012.mp4.ts|video0123.mp4.ts|video01234.mp4.ts|video012345.mp4.ts|video0123456.mp4.ts|video01234567.mp4.ts|video012345678.mp4.ts|video0123456789.mp4.ts|video012345678910.mp4.ts|video01234567891011.mp4.ts|video0123456789101112.mp4.ts|video012345678910111213.mp4.ts|video01234567891011121314.mp4.ts|video0123456789101112131415.mp4.ts|video012345678910111213141516.mp4.ts|video01234567891011121314151617.mp4.ts|video0123456789101112131415161718.mp4.ts|video012345678910111213141516171819.mp4.ts|zzzz.mp4.ts':
Duration: 00:00:05.02, start: 1.420222, bitrate: 63089 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, progressive), 640x1136, 30 fps, 240 tbr, 90k tbn, 60 tbc
Stream #0:1[0x101](und): Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, mono, fltp, 145 kb/s
Output #0, mpegts, to 'output.ts':
Metadata:
encoder : Lavf58.20.100
Stream #0:0: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709, progressive), 640x1136, q=2-31, 30 fps, 240 tbr, 90k tbn, 90k tbc
Stream #0:1(und): Audio: aac (LC) ([15][0][0][0] / 0x000F), 44100 Hz, mono, fltp, 145 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[mpegts # 0x7f9c68800000] DTS 130180 < 573000 out of order <-- HERE
[mpegts # 0x7f9c68800000] DTS 130180 < 654000 out of order <-- HERE
[mpegts # 0x7f9c68800000] DTS 130180 < 1477180 out of order <-- HERE
I've tried several combinations from answers to other questions, but nothing worked for me.
I don't know whether the problem comes from the step where I convert the .mp4 files to .ts (perhaps I forget to set an appropriate codec) or from the concat command.
I'm almost sure there is a simple way to fix this issue, but unfortunately I don't have enough ffmpeg knowledge.
Thanks for your help :)
EDIT 1 :
I changed the for loop of the script to:
for f in *.mp4; do
    TEST=$(ffprobe -i "$f" -show_streams -select_streams a -loglevel error)
    if [ -n "$TEST" ]; then
        ffmpeg -i "$f" -c copy -bsf:v h264_mp4toannexb -f mpegts "$f.ts"
    fi
done
This only makes the script ignore the files that don't have audio, but that way it works. So maybe adding dummy audio to those files would do the job, as @Gyan suggested in the comments, but how do I do that?
EDIT 2: SOLVED
I found out how to add dummy audio to my silent files, which solved my problem. This is my final working for loop:
for f in *.mp4; do
    TEST=$(ffprobe -i "$f" -show_streams -select_streams a -loglevel error)
    if [ -n "$TEST" ]; then
        ffmpeg -i "$f" -c copy -bsf:v h264_mp4toannexb -f mpegts "$f.ts"
    else
        ffmpeg -f lavfi -i anullsrc -i "$f" -shortest -c:v copy -c:a aac -map 0:a -map 1:v "$f.a.ts"
        ffmpeg -i "$f.a.ts" -c copy -bsf:v h264_mp4toannexb -f mpegts "$f.ts"
        rm "$f.a.ts"
    fi
done
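For what it's worth, the two commands in the else branch can probably be collapsed into a single pass. A minimal sketch, assuming the video is H.264 as in the loop above (silent.mp4 is a hypothetical stand-in for one of the audio-less files):

```shell
# Sketch: add silent AAC audio and remux to mpegts in one ffmpeg call.
# Assumes an H.264 video stream; silent.mp4 is a hypothetical example file.
f=silent.mp4
if command -v ffmpeg >/dev/null 2>&1 && [ -f "$f" ]; then
    ffmpeg -y -f lavfi -i anullsrc=r=44100:cl=mono -i "$f" \
        -shortest -c:v copy -c:a aac \
        -map 1:v -map 0:a \
        -bsf:v h264_mp4toannexb -f mpegts "$f.ts"
fi
```

This avoids the intermediate $f.a.ts file; -shortest stops the otherwise endless anullsrc source when the copied video ends.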
Related
I am trying to use multiple filters in ffmpeg, but it does not allow more than one -af, so I decided to try doing it with -filter_complex.
sudo ffmpeg -f alsa -i default:CARD=Device \
-filter_complex \
"lowpass=5000,highpass=200; \
volume=+5dB; \
afftdn=nr=0.01:nt=w;" \
-c:a libmp3lame -b:a 128k -ar 48000 -ac 1 -t 00:00:05 -y $recdir/audio_$(date '+%Y_%m_%d_%H_%M_%S').mp3
It should work, but for some reason I get an error:
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, alsa, from 'default:CARD=Device':
Duration: N/A, start: 1625496748.441207, bitrate: 1536 kb/s
Stream #0:0: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
[AVFilterGraph # 0xaaab0a8b14e0] No such filter: ''
Error initializing complex filters.
Invalid argument
I have tried different quoting and other variations; nothing helps.
ffmpeg -f alsa -i default:CARD=Device \
-filter_complex \
"lowpass=5000,highpass=200,volume=+5dB,afftdn=nr=0.01:nt=w" \
-c:a libmp3lame -b:a 128k -ar 48000 -ac 1 -t 00:00:05 -y $recdir/audio_$(date '+%Y_%m_%d_%H_%M_%S').mp3
If you end your filtergraph with ; then ffmpeg expects another filter. That is why you got the error No such filter: ''. Avoid ending with ;.
You have a linear series of simple filters so separate the filters with commas. This also means you can still use -af instead of -filter_complex if you prefer.
See FFmpeg Filtering Introduction to see the difference between ; and ,.
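A small sketch of the distinction (both filter strings below are illustrative, not taken from the question):

```shell
# A comma joins filters into one linear chain; a semicolon separates
# chains, each of which then needs its own input/output labels.
chain="lowpass=5000,highpass=200,volume=5dB,afftdn=nr=0.01:nt=w"
graph="[0:a]lowpass=5000[lp];[lp]volume=5dB[out]"
# A trailing ';' makes ffmpeg expect one more chain, hence "No such filter: ''".
printf '%s\n' "-af \"$chain\""
printf '%s\n' "-filter_complex \"$graph\""
```

The labeled-graph form is only needed once a semicolon introduces a second chain; a single comma-separated chain works fine with plain -af.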
I want to add a second audio stream to an mp4 video file already containing sound.
The second audio stream is a little longer than the video, but I want the final product to be the same length.
I tried using the -shortest option, but then the second audio stream I wanted to add was not audible at all.
I think -shortest only allows for one stream, so what can I do to keep the video the same length and keep both audio streams?
Here is the full command I used before asking this question:
ffmpeg -i input.mp4 -i input.m4a -map 0:v -map 0:a -shortest output.mp4
Output of ffmpeg -i output.mp4:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'output.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.45.100
Duration: 00:00:32.08, start: 0.000000, bitrate: 1248 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 480x600 [SAR 1:1 DAR 4:5], 1113 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
handler_name : SoundHandler
You have to map the audio from the 2nd input as well.
ffmpeg -i input.mp4 -i input.m4a -map 0:v -map 0:a -map 1:a -shortest -fflags +shortest -max_interleave_delta 100M output.mp4
See https://stackoverflow.com/a/55804507/ for explanation of how to make shortest effective.
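As a quick check (a sketch; output.mp4 is the file produced by the command above), counting the audio streams with ffprobe should give 2 once both inputs are mapped:

```shell
# Count the audio streams in the muxed file with ffprobe.
if command -v ffprobe >/dev/null 2>&1 && [ -f output.mp4 ]; then
    n=$(ffprobe -v error -select_streams a -show_entries stream=index \
            -of csv=p=0 output.mp4 | wc -l)
    echo "audio streams: $n"
fi
```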
I have a long audio part
and a short video part which I want to mux together.
I'm trying the following command to mux:
Video_0-0002.h264 - whole file (2 secs long)
Audio.wav - from 4 till 6 seconds
ffmpeg -y -i /Documents/viz-1/01/Video_0-0002.h264 -i /Documents/viz-1/01/Audio.wav -codec:v copy -f mp4 -af atrim=4:6 -strict experimental -movflags faststart /Documents/viz-1/01/Video_0-0001.mp4
But the audio is messed up... How can I do it correctly?
I also tried the following, but it sounds like there is silence at the end.
ffmpeg -y -i Video_0-0003.h264 -i Audio.wav -c:v copy -af atrim=6:8,asetpts=PTS-STARTPTS -strict experimental -movflags +faststart Video_0-0003.mp4
Input #0, h264, from 'Video_0-0003.h264':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264 (Main), yuv420p(progressive), 388x388 [SAR 1:1 DAR 1:1], 30 fps, 30 tbr, 1200k tbn, 60 tbc
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, wav, from 'Audio.wav':
Duration: 00:00:16.98, bitrate: 1411 kb/s
Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
Output #0, mp4, to 'Video_0-0003.mp4':
Metadata:
encoder : Lavf57.56.100
Stream #0:0: Video: h264 (Main) ([33][0][0][0] / 0x0021), yuv420p(progressive), 388x388 [SAR 1:1 DAR 1:1], q=2-31, 30 fps, 30 tbr, 1200k tbn, 1200k tbc
Stream #0:1: Audio: aac (LC) ([64][0][0][0] / 0x0040), 44100 Hz, stereo, fltp, 128 kb/s
Metadata:
encoder : Lavc57.64.101 aac
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #1:0 -> #0:1 (pcm_s16le (native) -> aac (native))
Press [q] to stop, [?] for help
[mp4 # 0x7fca8f015000] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[mp4 # 0x7fca8f015000] Starting second pass: moving the moov atom to the beginning of the file
frame= 60 fps=0.0 q=-1.0 Lsize= 242kB time=00:00:02.02 bitrate= 982.2kbits/s speed= 21x
video:207kB audio:32kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.382400%
[aac # 0x7fca8f017400] Qavg: 1076.270
Try
ffmpeg -y -i /Documents/viz-1/01/Video_0-0002.h264 -i /Documents/viz-1/01/Audio.wav -c:v copy -af atrim=4:6,asetpts=PTS-STARTPTS -strict experimental -movflags +faststart /Documents/viz-1/01/Video_0-0001.mp4
You can try to cut the audio to the video's timing, and then merge the video and audio tracks.
Use -vn and -an in separate ffmpeg processes.
ffmpeg -i video.mp4 -c:v h264 -an -y video.h264
ffmpeg -i video.mp4 -c:a aac -t 00:01:00 -vn -y audio.aac
And to merge the tracks:
ffmpeg -i audio.aac -i video.h264 -c:v copy -c:a copy -f mp4 -y out.mp4
So I have 2 files of the same video. One of the video files has two different audio streams
Stream #0:1(eng): Audio: flac, 48000 Hz, stereo, s16 (default)
Stream #0:2(jpn): Audio: flac, 48000 Hz, stereo, s16
I'm looking to map both of those streams and "paste" them onto the other, identical file, so it would look something like this when opened in mpc-hc or similar.
If you don't want to keep the existing audio in the single-audio video, then use
ffmpeg -i uni.mkv -i dual.mkv -c copy -map 0 -map -0:a -map 1:a swapped.mkv
(Remove -map -0:a if you do want to keep it)
I have multiple audio tracks and subtitles to extract in a single .mkv file. I'm new to ffmpeg commands, this is what I've tried (audio):
ffmpeg -i VIDEO.mkv -vn -acodec copy AUDIO.aac
It only extracts one audio stream. What I want is to tell ffmpeg to extract every audio and subtitle stream to a destination, keeping each file's original name and extension (because I don't know which extensions the audio streams need; sometimes they may be .flac or .aac).
I'm not sure about the solutions I found online, because they're quite complicated, and I need explanations to understand how they work so that I can adapt the command in the future. By the way, I plan to run the code from the Windows CMD. Thanks.
There is no option yet in ffmpeg to automatically extract all streams into an appropriate container, but it is certainly possible to do manually.
You only need to know the appropriate containers for the formats you want to extract.
Default stream selection only chooses one stream per stream type, so you have to manually map each stream with the -map option.
1. Get input info
Using ffmpeg or ffprobe you can get the info in each individual stream, and there is a wide variety of formats (xml, json, cvs, etc) available to fit your needs.
ffmpeg example
ffmpeg -i input.mkv
The resulting output (I cut out some extra details; the stream numbers and format info are what matter):
Input #0, matroska,webm, from 'input.mkv':
Metadata:
Duration: 00:00:05.00, start: 0.000000, bitrate: 106 kb/s
Stream #0:0: Video: h264 (High 4:4:4 Predictive), yuv444p, 320x240 [SAR 1:1 DAR 4:3], 25 fps, 25 tbr, 1k tbn, 50 tbc (default)
Stream #0:1: Audio: vorbis, 44100 Hz, mono, fltp (default)
Stream #0:2: Audio: aac, 44100 Hz, mono, fltp (default)
Stream #0:3: Audio: flac, 44100 Hz, mono, fltp (default)
Stream #0:4: Subtitle: ass (default)
ffprobe example
ffprobe -v error -show_entries stream=index,codec_name,codec_type input.mkv
The resulting output:
[STREAM]
index=0
codec_name=h264
codec_type=video
[/STREAM]
[STREAM]
index=1
codec_name=vorbis
codec_type=audio
[/STREAM]
[STREAM]
index=2
codec_name=aac
codec_type=audio
[/STREAM]
[STREAM]
index=3
codec_name=flac
codec_type=audio
[/STREAM]
[STREAM]
index=4
codec_name=ass
codec_type=subtitle
[/STREAM]
2. Extract the streams
Using the info from one of the commands above:
ffmpeg -i input.mkv \
-map 0:v -c copy video_h264.mkv \
-map 0:a:0 -c copy audio0_vorbis.oga \
-map 0:a:1 -c copy audio1_aac.m4a \
-map 0:a:2 -c copy audio2.flac \
-map 0:s -c copy subtitles.ass
In this case, the example above is the same as:
ffmpeg -i input.mkv \
-map 0:0 -c copy video_h264.mkv \
-map 0:1 -c copy audio0_vorbis.oga \
-map 0:2 -c copy audio1_aac.m4a \
-map 0:3 -c copy audio2.flac \
-map 0:4 -c copy subtitles.ass
I prefer the first example because the input file index:stream specifier:stream index is more flexible and efficient; it is also less prone to incorrect mapping.
See documentation on stream specifiers and the -map option to fully understand the syntax. Additional info is in the answer to FFmpeg mux video and audio (from another video) - mapping issue.
These examples will stream copy (re-mux) so no re-encoding will occur.
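As a sanity check (a sketch; audio2.flac is one of the files extracted above), ffprobe should report the same codec after a stream copy:

```shell
# Verify that -c copy kept the original codec.
if command -v ffprobe >/dev/null 2>&1 && [ -f audio2.flac ]; then
    ffprobe -v error -select_streams a:0 -show_entries stream=codec_name \
            -of csv=p=0 audio2.flac   # a stream-copied FLAC should print: flac
fi
```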
Container formats
A partial list to match the stream with the output extension for some common formats:
| Video Format | Extensions |
| --- | --- |
| H.264 | .mp4, .m4v, .mov, .h264, .264 |
| H.265/HEVC | .mp4, .h265, .265 |
| VP8/VP9 | .webm |
| AV1 | .mp4 |
| MPEG-4 | .mp4, .avi |
| MPEG-2 | .mpg, .vob, .ts |
| DV | .dv, .avi, .mov |
| Theora | .ogv/.ogg |
| FFV1 | .mkv |
| Almost anything | .mkv, .nut |

| Audio Format | Extensions |
| --- | --- |
| AAC | .m4a, .aac |
| MP3 | .mp3 |
| PCM | .wav |
| Vorbis | .oga/.ogg |
| Opus | .opus, .oga/.ogg, .mp4 |
| FLAC | .flac, .oga/.ogg |
| Almost anything | .mka, .nut |

| Subtitle Format | Extensions |
| --- | --- |
| Subrip/SRT | .srt |
| SubStation Alpha/ASS | .ass |
You would first list all the audio streams:
ffmpeg -i VIDEO.mkv
and then, based on that output, you can compile the command to extract each audio track individually.
With some shell scripting you can then automate this in a script file so that it works generically for any MKV file.
Subtitles are pretty much the same: they are printed in the info, and you can then extract them with something similar to:
ffmpeg -threads 4 -i VIDEO.mkv -vn -an -codec:s:0.2 srt myLangSubtitle.srt
0.2 is the identifier that you have to read from the info.
I solved it like this:
ffprobe -show_entries stream=index,codec_type:stream_tags=language -of compact $video1 2>&1 | { while read line; do if $(echo "$line" | grep -q -i "stream #"); then echo "$line"; fi; done; while read -d $'\x0D' line; do if $(echo "$line" | grep -q "time="); then echo "$line" | awk '{ printf "%s\r", $8 }'; fi; done; }
Output:
Only set the $video1 variable before running the command.
Enjoy!
If someone arrives at this question with a modern version of ffmpeg, it looks like an option for this has since been added.
I needed to convert a file by maintaining all tracks:
ffmpeg -i "${input_file}" -vcodec hevc -crf 28 -map 0 "${output_file}"
To achieve what the original question asked, probably this could be used:
mappings="`ffmpeg -i \"${filein}\" |& awk 'BEGIN { i = 1 }; /Stream.*Audio/ {gsub(/^ *Stream #/, \"-map \"); gsub(/\(.*$/, \" -acodec mp3 audio\"i\".mp3\"); print; i +=1}'`"
ffmpeg -i "${input_file}" ${mappings}
The first line (mappings=...) extracts the existing audio streams and converts each into "-map X:Y -acodec mp3 FILENAME", while the second executes the extraction.
The following script extracts all audio streams from the files in the current directory:
ls |parallel "ffmpeg -i {} 2>&1 |\
sed -n 's/.*Stream \#\(.\+\)\:\(.\+\)\: Audio\: \([a-zA-Z0-9]\+\).*$/-map \1:\2 -c copy \"{.}.\1\2.\3\"/p' |\
xargs -n5 ffmpeg -i {} "