I'm trying to concatenate 5 videos where the first and last have no audio track. I have tried the following command:
ffmpeg -i 1-copyright/copyright2018640x480.mp4 -i 2-openingtitle/EOTIntroFINAL640x480.mp4 -i 3-videos/yelling.mp4 -i 4-endtitle/EOTOutroFINAL640x480.mp4 -i 5-learnabout/Niambi640.mp4 -filter_complex "[0:v:0] [0:a:0] [1:v:0] [1:a:0] [2:v:0] [2:a:0] [3:v:0] [3:a:0] [4:v:0] [4:a:0] concat=n=5:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" output_video.mp4
and I get the output error:
Stream specifier ':a:0' in filtergraph description [0:v:0] [0:a:0] [1:v:0] [1:a:0] [2:v:0] [2:a:0] [3:v:0] [3:a:0] [4:v:0] [4:a:0] concat=n=5:v=1:a=1 [v] [a] matches no streams.
I know the first and last videos have no audio, but I don't know how to write the statement so the audio track is ignored for those inputs. I have tried removing the [0:a:0], but that just throws another error:
Stream specifier ':v:0' in filtergraph description [0:v:0] [1:v:0] [1:a:0] [2:v:0] [2:a:0] [3:v:0] [3:a:0] [4:v:0] [4:a:0] concat=n=5:v=1:a=1 [v] [a] matches no streams.
It doesn't make sense and I'm kind of lost.
If you're concatenating audio as well, then every video input must be paired with an audio stream. If a file doesn't have any audio, a dummy silent track can be used instead.
Use
ffmpeg -i 1-copyright/copyright2018640x480.mp4 -i 2-openingtitle/EOTIntroFINAL640x480.mp4 \
  -i 3-videos/yelling.mp4 -i 4-endtitle/EOTOutroFINAL640x480.mp4 \
  -i 5-learnabout/Niambi640.mp4 -f lavfi -t 0.1 -i anullsrc -filter_complex \
  "[0:v:0][5:a][1:v:0][1:a:0][2:v:0][2:a:0][3:v:0][3:a:0][4:v:0][5:a] concat=n=5:v=1:a=1 [v][a]" \
  -map "[v]" -map "[a]" output_video.mp4
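The same idea can be exercised end to end with synthetic inputs (a sketch; testsrc/sine/anullsrc stand in for the real files, and all filenames here are made up):

```shell
# Two video-only clips and one clip with audio, each 2 seconds long
ffmpeg -v error -y -f lavfi -i testsrc=d=2:s=320x240:r=25 -pix_fmt yuv420p first.mp4
ffmpeg -v error -y -f lavfi -i testsrc=d=2:s=320x240:r=25 \
       -f lavfi -i "sine=f=440:d=2" -pix_fmt yuv420p -c:a aac middle.mp4
ffmpeg -v error -y -f lavfi -i testsrc=d=2:s=320x240:r=25 -pix_fmt yuv420p last.mp4

# anullsrc (input 3) supplies silence for the clips that lack audio;
# mono/44100 matches the sine-based track so all concat segments agree
ffmpeg -v error -y -i first.mp4 -i middle.mp4 -i last.mp4 \
       -f lavfi -t 2 -i anullsrc=channel_layout=mono:sample_rate=44100 \
       -filter_complex \
       "[0:v:0][3:a][1:v:0][1:a:0][2:v:0][3:a]concat=n=3:v=1:a=1[v][a]" \
       -map "[v]" -map "[a]" joined.mp4
```

The result should be a single ~6-second file with one continuous audio track.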
I need to stream output that ffmpeg generates (audio rendered to video); for this I'm using -filter_complex avectorscope.
I am testing the pipe with ffplay; something like this works:
ffmpeg -i video.mp4 -f h264 - | ffplay -
and something like this also works fine (writing to a file):
ffmpeg -i input.mp3 -filter_complex "[0:a]avectorscope=s=1920x1080,format=yuv420p[v]" \
-map "[v]" -map 0:a -vcodec libx264 avectorscope.mp4
But what I really need is something like this:
ffmpeg -i input.mp3 -filter_complex "[0:a]avectorscope=s=1920x1080,format=yuv420p[v]" \
-map "[v]" -map 0:a -vcodec libx264 -f h264 - | ffplay -
but when I try that I get this error:
Automatic encoder selection failed for output stream #0:1. Default encoder for format h264 (codec none) is probably disabled. Please choose an encoder manually.
Error selecting an encoder for stream 0:1
pipe:: Invalid data found when processing input
So I can encode it to a file but not encode it to a pipe with the same flags.
I also tried other formats (-f flv and -f mpegts) and they also work (for ffplay), but they don't work with other tools that require a raw H.264 stream as input.
I hope someone can help!
-f h264 represents a raw H.264 bitstream so audio can't be included.
Use a container that supports multiplexing, e.g. Nut:
ffmpeg -i input.mp3 -filter_complex "[0:a]avectorscope=s=1920x1080,format=yuv420p[v]" \
  -map "[v]" -map 0:a -vcodec libx264 -f nut - | ffplay -f nut -
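Without a display, the pipe can be sanity-checked by sending the Nut stream into ffprobe instead of ffplay (a sketch; a synthetic sine tone stands in for input.mp3, and the scope size is reduced to keep it fast):

```shell
# Short test tone standing in for input.mp3
ffmpeg -v error -y -f lavfi -i "sine=f=440:d=1" input.wav

# Mux the scope video plus the original audio into Nut and pipe it on;
# ffprobe should see both a video and an audio stream on the other end
ffmpeg -v error -i input.wav \
  -filter_complex "[0:a]avectorscope=s=320x240,format=yuv420p[v]" \
  -map "[v]" -map 0:a -c:v libx264 -f nut - 2>/dev/null |
  ffprobe -v error -f nut -show_entries stream=codec_type -of csv=p=0 -
```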
I searched Google and Stack Overflow for how to add background music to a video, and many answers suggested using
ffmpeg -i input.mp4 -i audio.mp3 -shortest output.mp4
I have been trying to achieve this, but it just does not work. When I try to add -map options like
ffmpeg -i "input.mp4" -i bg.mp3 -map 0:v:0 -map 1:a:0 oo.mp4
the video's sound is replaced by bg.mp3.
And if I use -map 0 -map 1:a:0, or don't provide -map at all, the audio is not added.
How do I add the background music? I don't get any errors either.
-map is a selector: it picks streams from the input files, it doesn't combine them. To merge two audio streams, you need an audio filter:
ffmpeg -i input.mp4 -i audio.mp3 -lavfi "[0:a][1:a]amerge[out]" -map 0:v -map [out]:a -shortest output.mp4
-lavfi: same as -filter_complex; a complex filtergraph is needed because you have two inputs
[0:a][1:a] takes the audio streams from the first and second inputs
-map 0:v selects the video stream from the first input, unprocessed
-map [out]:a selects the processed audio stream from the filtergraph
Note that amerge ends with its shortest input, so the "shortest" behavior is effectively built in.
If you have problems, you might want to check also the amix filter, the audio codecs of your files, and the volume filter to adjust the volume of the inputs in the filtergraph.
Additional references:
https://ffmpeg.org/ffmpeg-filters.html#amerge
https://ffmpeg.org/ffmpeg-filters.html#amix
https://ffmpeg.org/ffmpeg-filters.html#volume
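As a concrete sketch of the amix + volume route mentioned above (sine tones stand in for the video's audio and the music; all filenames are made up):

```shell
# Test tones standing in for the video's audio (2 s) and the music (3 s)
ffmpeg -v error -y -f lavfi -i "sine=f=440:d=2" voice.wav
ffmpeg -v error -y -f lavfi -i "sine=f=220:d=3" music.wav

# Turn the music down with the volume filter, then mix;
# duration=shortest makes amix stop when the shorter input ends
ffmpeg -v error -y -i voice.wav -i music.wav \
  -lavfi "[1:a]volume=0.3[bg];[0:a][bg]amix=inputs=2:duration=shortest[out]" \
  -map "[out]" mixed.wav
```

The output should run for about 2 seconds, the length of the shorter input.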
If the video is longer than the music, you can add -stream_loop -1 before the audio input to loop the music until the end of the video:
ffmpeg -i video_with_audio.mkv -stream_loop -1 -i background_music.mp3 -lavfi "[0:a][1:a]amerge[out]" -map 0:v -map [out]:a -shortest video_with_audio_and_background_music.mkv
If you want to increase or decrease the music volume, use the volume filter in the filtergraph (apad keeps the music track going if it is shorter):
ffmpeg -i video_with_audio.mkv -stream_loop -1 -i background_music.mp3 -lavfi "[1:a]volume=0.2,apad[A];[0:a][A]amerge[out]" -map 0:v -map [out]:a -shortest video_with_audio_and_background_music.mkv
Is it possible to concatenate multiple files if some of them are videos with audio and some are audio only? The end result should look like this:
--------------------------------------------------------
|###(v/a)### | ### (a) ### | ### (a) ### | ###(v/a)### |
--------------------------------------------------------
v/a: video + audio
a : audio only (blank screen)
I tried to do it with the following command:
ffmpeg -i chunk1.mp4 -i chunk2.m4a -i chunk3.mp4 \
  -filter_complex "[0:v:0] [0:a:0] [1:v:0] [1:a:0] [2:v:0] [2:a:0] concat=n=3:v=1:a=1 [v] [a]" \
  -map "[v]" -map "[a]" -strict -2 result.mp4
So I tried to use only the audio track from input 1 ([1:a:0]), but unfortunately I'm getting this error message:
Stream specifier ':v:0' in filtergraph description [0:v:0] [0:a:0] [1:v:0] [1:a:0] [2:v:0] [2:a:0] concat=n=3:v=1:a=1 [v] [a] matches no streams.
I thought this must be possible somehow, since I can also combine a long audio file with a short video with ffmpeg. The result is then a video file where the last frame freezes while the audio plays on. I would like to achieve the same result, with either a frozen last frame or simply black frames. Is this possible?
For the command given in the question, use
ffmpeg -i chunk1.mp4 -i chunk2.m4a -i chunk3.mp4 -filter_complex \
  "color=black:s=WxH:r=N:d=T[1v]; \
   [0:v:0] [0:a:0] [1v] [1:a:0] [2:v:0] [2:a:0] concat=n=3:v=1:a=1 [v] [a]" \
  -map "[v]" -map "[a]" -strict -2 result.mp4
where WxH is the resolution of the videos, N is the framerate, and T is the duration of the audio file.
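Filled in with synthetic stand-ins (320x240, 25 fps, 2-second segments; the chunk files are generated here rather than taken from the question), the same trick looks like this:

```shell
# chunk1: video with audio; chunk2: audio only
ffmpeg -v error -y -f lavfi -i testsrc=d=2:s=320x240:r=25 \
       -f lavfi -i "sine=f=440:d=2" -pix_fmt yuv420p -c:a aac chunk1.mp4
ffmpeg -v error -y -f lavfi -i "sine=f=220:d=2" -c:a aac chunk2.m4a

# Generate black video for the audio-only segment, then concat
ffmpeg -v error -y -i chunk1.mp4 -i chunk2.m4a -filter_complex \
  "color=black:s=320x240:r=25:d=2[1v]; \
   [0:v:0] [0:a:0] [1v] [1:a:0] concat=n=2:v=1:a=1 [v] [a]" \
  -map "[v]" -map "[a]" result.mp4
```

result.mp4 should be about 4 seconds long: 2 seconds of test pattern followed by 2 seconds of black with the audio-only track.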
I have a project that requires merging a video file with another audio file. The expected output is a video file that has both the audio from the actual video and the merged audio file. The length of the output should be the same as that of the actual video file.
Is there a single-line FFmpeg command to achieve this using copy and -map parameters?
The video format I will be using is either FLV or MP4, and the audio file format will be MP3.
This can also be achieved without using -map:
ffmpeg -i video.mp4 -i audio.mp3 output.mp4
In case you want output.mp4 to stop as soon as one of the inputs (audio or video) ends, then use
-shortest
For example: ffmpeg -i video.mp4 -i audio.mp3 -shortest output.mp4
This ensures that the output stops as soon as the first of the inputs is completed.
Since you asked to do it with -map, this is how:
ffmpeg -i video.mp4 -i audio.mp3 -map 0:0 -map 1:0 -shortest output.mp4
Now, since you want to retain the audio of the video file, consider you want to merge audio.mp3 and video.mp4. These are the steps:
Extract audio from the video.mp4
ffmpeg -i video.mp4 1.mp3
Merge both audio.mp3 and 1.mp3
ffmpeg -i audio.mp3 -i 1.mp3 -filter_complex amerge -c:a libmp3lame -q:a 4 audiofinal.mp3
Remove the audio from video.mp4 (this step is not required, but it is cleaner)
ffmpeg -i video.mp4 -an videofinal.mp4
Now merge audiofinal.mp3 and videofinal.mp4
ffmpeg -i videofinal.mp4 -i audiofinal.mp3 -shortest final.mp4
Note: some versions of ffmpeg may prompt you to use '-strict -2'; if that happens, use:
ffmpeg -i videofinal.mp4 -i audiofinal.mp3 -shortest -strict -2 final.mp4
hope this helps.
You cannot do that with a single command.
1. Get the audio from video file, the audio file name is a.mp3
ffmpeg.exe -i video.mp4 a.mp3
2. Merge the two audio files (audio.mp3 + a.mp3 = audiofinal.mp3)
ffmpeg.exe -i audio.mp3 -i a.mp3 -filter_complex amerge -c:a libmp3lame -q:a 4 audiofinal.mp3
3. Merge the video and audio files (video.mp4 + audiofinal.mp3 = output.mp4)
ffmpeg.exe -i video.mp4 -i audiofinal.mp3 -map 0:v -map 1:a -c copy -y output.mp4
I don't think extracting the audio from the video is necessary. We can just use -filter_complex amix to merge both audios:
ffmpeg -i videowithaudio.mp4 -i audiotooverlay.mp3 -filter_complex "[0:a][1:a]amix[a]" -map 0:v -map "[a]" -shortest videowithbothaudios.mp4
"[0:a][1:a]amix[a]" mixes the audio of the first input file with the audio of the second input file into [a].
-map 0:v the video stream of the first input file.
-map "[a]" the mixed audio from the filtergraph.
-shortest the length of the output is the length of the shortest input
Use case:
add background music to your video
you rendered a video but muted part of it, and you don't want to render it again (because it's too long); instead you render only the audio track (fast) and want to merge it with the original video
Assuming:
you have your video with your speech (or just an audio track, whatever)
your music file is not too loud; otherwise, you will not hear yourself
Steps:
1) Extract audio from the video
ffmpeg -i example.mkv 1.mp3
example.mkv - your file
2) Merge both audio.mp3 and 1.mp3
ffmpeg -i audio.mp3 -i 1.mp3 -filter_complex amerge -c:a libmp3lame -q:a 4 audiofinal.mp3
audiofinal.mp3 - audio with music
3) Delete audio from original
ffmpeg -i example.mkv -c copy -an example-nosound.mkv
example-nosound.mkv - your video without audio
4) Merge with proper audio
ffmpeg -i example-nosound.mkv -i audiofinal.mp3 -c:v copy final.mkv
final.mkv - your perfect video.
This is very easy with FFmpeg:
ffmpeg -i vid.mp4 -i audio.mp3 -codec:a libmp3lame -ar 44100 -b:a 64k -ac 1 -q:v 1 -pix_fmt yuv420p -map 0:0 -map 1:0 output.mp4
If you are not able to merge the video and audio directly, first remove the sound from the video using this command:
ffmpeg -i video.mp4 -an videofinal.mp4
I have two ffmpeg commands:
ffmpeg -i d:\1.mp4 -i d:\1.mp4 -filter_complex "[0:0] [0:1] [1:0] [1:1] concat=n=2:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" d:\3.mp4
and
ffmpeg -i d:\1.mp4 -vf scale=320:240 d:\3.mp4
How can I use both of them in a single command?
For posterity:
The accepted answer does not work if the input sources are of different sizes (which is the primary reason why you need to scale before combining).
What you need to do is to first scale and then pipe that video output into the concat filter like so:
ffmpeg -i input1.mp4 -i input2.mp4 -filter_complex \
"[0:v]scale=1024:576:force_original_aspect_ratio=1[v0]; \
[1:v]scale=1024:576:force_original_aspect_ratio=1[v1]; \
[v0][0:a][v1][1:a]concat=n=2:v=1:a=1[v][a]" -map [v] -map [a] output.mp4
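This can be checked with two synthetic clips of genuinely different sizes (a sketch; both inputs are 16:9 here, so force_original_aspect_ratio lands both on the same target size):

```shell
# Two clips at different resolutions, both with audio
ffmpeg -v error -y -f lavfi -i testsrc=d=1:s=640x360:r=25 \
       -f lavfi -i "sine=f=440:d=1" -pix_fmt yuv420p -c:a aac input1.mp4
ffmpeg -v error -y -f lavfi -i testsrc=d=1:s=320x180:r=25 \
       -f lavfi -i "sine=f=220:d=1" -pix_fmt yuv420p -c:a aac input2.mp4

# Scale each video stream to a common size first, then concat
ffmpeg -v error -y -i input1.mp4 -i input2.mp4 -filter_complex \
  "[0:v]scale=1024:576:force_original_aspect_ratio=1[v0]; \
   [1:v]scale=1024:576:force_original_aspect_ratio=1[v1]; \
   [v0][0:a][v1][1:a]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" output.mp4
```

The output should be a single 1024x576 clip of roughly 2 seconds.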
Had this problem today and spent a good three hours trying to figure it out; unfortunately, the accepted answer did not work, as noted in the comments.
ffmpeg -i d:\1.mp4 -i d:\2.mp4 -filter_complex "concat=n=2:v=1:a=1 [v] [a]; \
[v]scale=320:200[v2]" -map "[v2]" -map "[a]" d:\3.mp4
First we concatenate everything and pipe the result to [v] [a] (see the filtergraph syntax docs; it's the output of the concat filter). Next we take [v], scale it, and output to [v2]. Lastly we take [v2] and [a] and mux them into the d:\3.mp4 file.
Construct a custom filtergraph and move the resize step closer to the video source. For example, let's deal with a more complex graph in order to grasp the spirit of the filtergraph language:
ffmpeg.exe -i Movie_oriented_minus_90.mov -i Movie_pause.mp4 -i Sound_pause.aac -filter_complex "[0:v:0]scale=1920:1080[c1]; [c1]vflip[c2]; [c2]hflip[clip]; [clip] [0:a:0] [1:v:0] [2:a:0] concat=n=2:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" -c:v libx264 -q:v 0 -acodec mp3 -s 1920x1080 Movie_oriented_plus_90_with_pause.mp4
Stage 1: Movie_oriented_minus_90 is source 0. Its video stream [0:v:0] is fed into the scale filter and produced as [c1]; then [c1] is flipped vertically into [c2], and [c2] is flipped horizontally into [clip], rotating it 180 degrees.
Stage 2: The first segment is concatenated with the second. The first segment is [clip] (the processed stream from source 0) plus the sound from the original video [0:a:0]; the second segment is the video from source 1 [1:v:0] plus the audio [2:a:0] from source 2 (30 seconds of silence made with -filter_complex "aevalsrc=0:d=30" in a separate ffmpeg run).
Stage 3: The resulting video sequence [v] and audio [a] are then compressed with the x264 codec into the target mp4 file.
So the main problem in your question was that you tried to concatenate streams of different sizes and only then applied the resize to the already aggregated stream, which of course can't consist of media samples with different sizes.