I am using FFmpeg to join a bunch of videos in a folder.
All of the videos are the same type.
I have been using
ffmpeg -f concat -i file_list.txt -codec copy out.mp4
and I also tried to re-encode:
ffmpeg -f concat -i mylist.txt -c:v libx264 -c:a libvorbis out.mp4
and also this:
ffmpeg -f concat -safe 0 -i file_list.txt -c copy out.mp4
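(For reference, file_list.txt here is assumed to be in the standard concat demuxer format, one file directive per line; the filenames below are placeholders:)
file 'clip1.mp4'
file 'clip2.mp4'
file 'clip3.mp4'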
All of them result in the same issue: the video gets stuck at each joint for about 1 minute per clip while the audio keeps playing.
I downloaded the videos from Reddit using youtube-dl and changed the aspect ratio to 16:9 with:
ffmpeg -i $f -lavfi '[0:v]scale=ih*16/9:-1,boxblur=luma_radius=min(h\,w)/20:luma_power=1:chroma_radius=min(cw\,ch)/20:chroma_power=1[bg];[bg][0:v]overlay=(W-w)/2:(H-h)/2,crop=h=iw*9/16' -vb 800k fix/$f;
I have noticed the videos have different bit rates.
I still don't know what's wrong. I don't have much experience with FFmpeg, so I'd appreciate any help.
I am using a MacBook Air (M1).
Related
In my previous question:
How to make FFmpeg automatically inject mp3 audio tracks in the single cycled muted video
I got a detailed explanation from llogan of how to broadcast a looped short muted video on YouTube while automatically injecting audio tracks into it without interrupting the stream.
I plan to enhance the flow, and the next question I'm facing is how to dynamically add text to the broadcast.
Prerequisites:
the YouTube broadcast is up and running via ffmpeg
a short 3-minute video is playing in an infinite loop
audio tracks from a playlist are automatically picked up by the ffmpeg concat demuxer and injected into the video one by one
This is the basic command that starts the stream:
ffmpeg -re -fflags +genpts -stream_loop -1 -i video.mp4
-re -f concat -i input.txt
-map 0:v -map 1:a -c:v libx264 -tune stillimage -vf format=yuv420p -c:a copy
-g 20 -b:v 2000k -maxrate 2000k -bufsize 8000k
-f flv rtmp://a.rtmp.youtube.com/live2/my-key
Improvements I want to make
I plan to store some metadata in the audio files (basically an artist name and a song name).
The moment a particular song starts playing, the artist/song name should be taken from the metadata and displayed on the video as text for as long as that song is playing.
When the current song finishes and a new one starts, the previous artist/song text should be replaced with the new one, and so on.
My question is: how do I properly read the metadata and add it to the existing broadcast setup using ffmpeg?
This is a fairly broad question and I don't have a complete solution. But I can provide a partial answer containing several commands that you can use to help implement a solution.
Update text on video on demand
See Can you insert text from a file in real time with ffmpeg streaming?
Get title & artist metadata
With ffprobe:
ffprobe -v error -show_entries format_tags=title -of default=nw=1:nk=1 input.mp3
ffprobe -v error -show_entries format_tags=artist -of default=nw=1:nk=1 input.mp3
Or combined: format_tags=title,artist (note that title will display first, then artist, regardless of order in the command).
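For example, the combined command would be:
ffprobe -v error -show_entries format_tags=title,artist -of default=nw=1:nk=1 input.mp3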
Get duration of a song
See How to get video duration in seconds?
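For example, with ffprobe (the duration is printed in seconds as a decimal):
ffprobe -v error -show_entries format=duration -of default=nw=1:nk=1 input.mp3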
What you need to figure out
The hard part is knowing when to update the file referenced by the textfile option of the drawtext filter, as shown in Update text on video on demand above.
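A naive sketch of that timing loop, combining the ffprobe commands above (the playlist path and song.txt name are hypothetical, and drawtext is assumed to read song.txt via textfile=song.txt:reload=1; note this loop will drift relative to the concat input over time, which is exactly the hard part):
#!/bin/bash
for audio in playlist/*.mp3; do
  # write "title artist" to a temp file, then rename it so drawtext never reads a partial file
  ffprobe -v error -show_entries format_tags=title,artist -of default=nw=1:nk=1 "$audio" > song.txt.tmp
  mv song.txt.tmp song.txt
  # wait for the track's duration before moving to the next one
  duration=$(ffprobe -v error -show_entries format=duration -of default=nw=1:nk=1 "$audio")
  sleep "$duration"
done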
Lazy solution
Pre-make a video per song including the title and artist info. Simple Bash example:
audio=input.mp3; ffmpeg -stream_loop -1 -i video.mp4 -i "$audio" -filter_complex "[0:v]scale=1280:720:force_original_aspect_ratio=increase,crop=1280:720,setsar=1,fps=25,drawtext=text='$(ffprobe -v error -show_entries format_tags=title,artist -of default=nw=1:nk=1 $audio)':fontsize=18:fontcolor=white:x=10:y=h-th-10,format=yuv420p[v]" -map "[v]" -map 1:a -c:v libx264 -c:a aac -ac 2 -ar 44100 -g 50 -b:v 2000k -maxrate 2000k -bufsize 6000k -shortest "${audio%.*}.mp4"
Now that you've already done the encoding, and everything conforms to the same attributes for proper concatenation, you can probably just stream copy your playlist to YouTube (but I didn't test):
ffmpeg -re -f concat -i input.txt -c copy -f flv rtmp://a.rtmp.youtube.com/live2/my-key
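Here input.txt would list the pre-made per-song videos in concat demuxer format, along the lines of (filenames hypothetical):
ffconcat version 1.0
file 'song1.mp4'
file 'song2.mp4'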
Refer to your previous question on how to dynamically update the playlist.
References:
FFmpeg Wiki: Streaming to YouTube
Resizing videos with ffmpeg to fit into specific size
How to concatenate videos in ffmpeg with different attributes?
I have several videos and photos and need to merge them with a cross-dissolve effect. The algorithm is as follows:
Create videos from the images and add silent audio to them (so they also have an audio stream):
ffmpeg -y -f lavfi -i anullsrc -loop 1 -i /tmp/media/import-2020-Aug-19-Wednesday-05-40-34/ea5c93fd-d946-4742-b8f7-ea9ae4d43441.jpg -c:v libx264 -t 10 -pix_fmt yuv420p -vf scale=750:1280 /tmp/media/import-2020-Aug-19-Wednesday-05-40-34/ea5c93fd-d946-4742-b8f7-ea9ae4d43441.mp4
Combine all the videos and audio into one using this command:
ffmpeg
-i /tmp/media/import-2020-Aug-19-Wednesday-05-40-34/temp_68d437c0-f5e2-4651-b07e-91533480b6ef.mp4
-i /tmp/media/import-2020-Aug-19-Wednesday-05-40-34/temp_48f3c111-610d-40c7-ac71-6ce2fbb16184.mp4
-i /tmp/media/import-2020-Aug-19-Wednesday-05-40-34/temp_1593b5d8-7e16-417d-9372-2267581cd504.mp4
-i /tmp/media/import-2020-Aug-19-Wednesday-05-40-34/temp_1ac7f6be-1b12-4e31-b904-1491cc9b9494.mp4
-i /tmp/media/import-2020-Aug-19-Wednesday-05-40-34/temp_ea5c93fd-d946-4742-b8f7-ea9ae4d43441.mp4
-filter_complex
"[0:v]trim=start=0:end=8.032,setpts=PTS-STARTPTS[clip0];
[1:v]trim=start=2:end=13.047,setpts=PTS-STARTPTS[clip1];
[2:v]trim=start=2:end=13.558,setpts=PTS-STARTPTS[clip2];
[3:v]trim=start=2:end=13.186,setpts=PTS-STARTPTS[clip3];
[4:v]trim=start=2,setpts=PTS-STARTPTS[clip4];
[0:v]trim=start=9.032:end=10.032,setpts=PTS-STARTPTS[out0];
[1:v]trim=start=14.047:end=15.047,setpts=PTS-STARTPTS[out1];
[2:v]trim=start=14.558:end=15.558,setpts=PTS-STARTPTS[out2];
[3:v]trim=start=14.186:end=15.186,setpts=PTS-STARTPTS[out3];
[1:v]trim=start=0:end=2,setpts=PTS-STARTPTS[in1];
[2:v]trim=start=0:end=2,setpts=PTS-STARTPTS[in2];
[3:v]trim=start=0:end=2,setpts=PTS-STARTPTS[in3];
[4:v]trim=start=0:end=2,setpts=PTS-STARTPTS[in4];
[in1]format=pix_fmts=yuva420p,fade=t=in:st=0:d=2:alpha=1[fadein1];
[in2]format=pix_fmts=yuva420p,fade=t=in:st=0:d=2:alpha=1[fadein2];
[in3]format=pix_fmts=yuva420p,fade=t=in:st=0:d=2:alpha=1[fadein3];
[in4]format=pix_fmts=yuva420p,fade=t=in:st=0:d=2:alpha=1[fadein4];
[out0]format=pix_fmts=yuva420p,fade=t=out:st=0:d=2:alpha=1[fadeout0];
[out1]format=pix_fmts=yuva420p,fade=t=out:st=0:d=2:alpha=1[fadeout1];
[out2]format=pix_fmts=yuva420p,fade=t=out:st=0:d=2:alpha=1[fadeout2];
[out3]format=pix_fmts=yuva420p,fade=t=out:st=0:d=2:alpha=1[fadeout3];
[fadein1]fifo[fadein1fifo];
[fadein2]fifo[fadein2fifo];
[fadein3]fifo[fadein3fifo];
[fadein4]fifo[fadein4fifo];
[fadeout0]fifo[fadeout0fifo];
[fadeout1]fifo[fadeout1fifo];
[fadeout2]fifo[fadeout2fifo];
[fadeout3]fifo[fadeout3fifo];
[fadeout0fifo][fadein1fifo]overlay[crossfade0];
[fadeout1fifo][fadein2fifo]overlay[crossfade1];
[fadeout2fifo][fadein3fifo]overlay[crossfade2];
[fadeout3fifo][fadein4fifo]overlay[crossfade3];
[clip0][crossfade0][clip1][crossfade1][clip2][crossfade2][clip3][crossfade3][clip4]concat=n=9[output];
[0:a][1:a]acrossfade=d=10:c1=tri:c2=tri[A1];
[A1][2:a]acrossfade=d=10:c1=tri:c2=tri[A2];
[A2][3:a]acrossfade=d=10:c1=tri:c2=tri[A3];
[A3][4:a]acrossfade=d=10:c1=tri:c2=tri[audio] "
-vsync 0 -map "[output]" -map "[audio]" /tmp/media/final/some_filename_d0d2aab0-792a-4540-b2d3-e64abe98bf5c.mp4
And it all works pretty well, but if I have, for example:
picture
video
video
picture
then the sound from the second video is mapped to the first picture, and the sound from the third video to the second video. The third video actually plays without sound.
It seems like this happens because the silent audio of the first picture is pretty short. Am I right?
If so, how can I increase its duration?
I would much appreciate any help with this!
Assuming 5 inputs of 10 seconds each, all with audio streams*, and ffmpeg 4.3 or newer, use the xfade and acrossfade filters. Each xfade offset is the previous offset plus the incoming clip's length minus the fade duration, hence 8, 16, 24 and 32 here.
ffmpeg
-i in1.mp4
-i in2.mp4
-i in3.mp4
-i in4.mp4
-i in5.mp4
-filter_complex
" [0][1]xfade=transition=fade:duration=2:offset=8[V01];
[V01][2]xfade=transition=fade:duration=2:offset=16[V02];
[V02][3]xfade=transition=fade:duration=2:offset=24[V03];
[V03][4]xfade=transition=fade:duration=2:offset=32[video];
[0:a][1:a]acrossfade=d=2:c1=tri:c2=tri[A01];
[A01][2:a]acrossfade=d=2:c1=tri:c2=tri[A02];
[A02][3:a]acrossfade=d=2:c1=tri:c2=tri[A03];
[A03][4:a]acrossfade=d=2:c1=tri:c2=tri[audio]"
-vsync 0 -map "[video]" -map "[audio]" out.mp4
*if there's no existing audio stream, add one using the command in step 1.
If the existing audio stream of a file isn't 10 seconds long, use these filters on it before acrossfade.
[input]aresample=async=1:first_pts=0,apad,atrim=0:10[filtered]
and then use this filtered stream as input.
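As a minimal sketch of how the padded stream plugs in (two inputs only, with in2.mp4's audio assumed to be shorter than 10 seconds; not tested):
ffmpeg -i in1.mp4 -i in2.mp4
-filter_complex
" [0][1]xfade=transition=fade:duration=2:offset=8[video];
  [1:a]aresample=async=1:first_pts=0,apad,atrim=0:10[a1];
  [0:a][a1]acrossfade=d=2:c1=tri:c2=tri[audio]"
-vsync 0 -map "[video]" -map "[audio]" out.mp4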
I'm trying to calculate the audio + visual difference between a harshly compressed video file and one that hasn't been compressed.
I'm using pipes because ultimately I want this to take its source from a camera stream.
I've managed to get the video results that I'm looking for, but I'm struggling with the audio.
I've added a line to invert the phase of the compressed audio so that, when the two are summed in the blend, they should almost cancel each other out, but that doesn't happen.
ffmpeg -i input.avi -f avi -c:v libxvid -qscale:v 30 -c:a wmav1 - | \
ffmpeg -i - -f avi -af "aeval='-val(0)':c=same" - | \
ffmpeg -i input.avi -i - -filter_complex "blend=all_mode=difference" -c:v libx264 -crf 18 -f avi - | \
ffplay -
I can still hear all the audio, when what I should be hearing is solely the compression artifacts. Thanks.
To preface, I'm not sure your method would identify audio compression 'artifacts'.
Your command doesn't perform any audio comparison; it only inverts a single channel. Also, the audio and video are compressed twice, and the codecs the last ffmpeg command receives are the default AVI codecs, mpeg4 and mp3.
Use
ffmpeg -i input.avi -f matroska -c:v libxvid -qscale:v 30 -c:a wmav1 - |\
ffmpeg -i input.avi -i - -filter_complex "[0][1]blend=all_mode=difference;[1]aselect=gt(n\,0),asetpts=PTS-STARTPTS[1a];[0][1a]amerge,aeval=val(0)-val(1):c=mono" -c:v rawvideo -c:a pcm_s16le -f matroska - |\
ffplay -
I assume your audio is mono. If your audio has N channels, your aeval will need N expressions, where the Mth expression is val(M-1)-val(N+M-1).
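For example, if both audio streams are stereo (N=2), the aeval in the command above would become (a sketch, not tested):
aeval=val(0)-val(2)|val(1)-val(3):c=stereo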
I also trim out the first encoded audio frame in order to mitigate encoder delay that Paul mentioned, and it seems to work here.
There might be some delay introduced with encoded audio samples. Also your command is incorrect.
I have a webcam and a separate mic, and I want to record what is happening.
It almost works; however, the audio seems to play too quickly, and parts are missing while it plays over the video.
This is the command I am currently using to get it partially working:
ffmpeg -thread_queue_size 1024 -f alsa -ac 1 -i plughw:1,0 -f video4linux2 -thread_queue_size 1024 -re -s 1280x720 -i /dev/video0 -r 25 -f avi -q:a 2 -acodec libmp3lame -ab 96k out.mp4
I have tried other arguments, but I'm unsure whether the issue is the formats I am using or incorrect parameter settings.
Also, the next part would be how to stream it. Every time I try going through RTP it complains about multiple streams. I tried HTTP as well, but didn't like the format: http://localhost:50000/live_feed or rtsp://localhost:5000.
Edit:
I am running this on a Raspberry Pi 3.
I am trying to cut a video into 2 parts and then reassemble them with ffmpeg, but the final output has a small audio glitch right where the segments meet. I am using the following command to split the video 1.mp4 into 2 parts:
ffmpeg -i 1.mp4 -ss 00:00:00 -t 00:00:02 -async 1 1-1.mp4
and
ffmpeg -i 1.mp4 -ss 00:00:02 -t 00:00:02 -async 1 1-2.mp4
Once I have the 2 parts, I concatenate them back together with:
ffmpeg -f concat -i files.txt -c copy output.mp4
files.txt correctly lists both files. Can anyone point me to where the problem might be?
Thanks
The glitch is likely due to the audio priming sample showing up in between.
Since you're re-encoding the segments, you can do this in one command:
ffmpeg -i 1.mp4 -filter_complex
"[0]trim=duration=2[v1];[0]trim=2:4,setpts=PTS-STARTPTS[v2];
[0]atrim=duration=2[a1];[0]atrim=2:4,asetpts=PTS-STARTPTS[a2];
[v1][a1][v2][a2]concat=n=2:v=1:a=1[v][a]"
-map "[v]" -map "[a]" output.mp4
I had the same problem for about 3 weeks.
Just merge the mp3 files using sox:
sox in1.mp3 in2.mp3 in3.mp3 out.mp3
When I used concat with FFmpeg it created 12.5 ms audio gaps (I saw them in Audacity); I don't know why.
Maybe for your case it would be better to extract the audio and video into two separate files with ffmpeg, merge them separately (the video using FFmpeg and the audio using sox), and then put the streams back together into one container (mp4) file.
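A minimal sketch of that workflow, assuming two segments named 1-1.mp4 and 1-2.mp4 as in the question (not tested):
# 1. Split each segment into its video and audio streams
ffmpeg -i 1-1.mp4 -an -c:v copy v1.mp4
ffmpeg -i 1-1.mp4 -vn -q:a 2 a1.mp3
ffmpeg -i 1-2.mp4 -an -c:v copy v2.mp4
ffmpeg -i 1-2.mp4 -vn -q:a 2 a2.mp3
# 2. Concatenate the video with ffmpeg (video_list.txt lists v1.mp4 and v2.mp4) and the audio with sox
ffmpeg -f concat -safe 0 -i video_list.txt -c copy video-only.mp4
sox a1.mp3 a2.mp3 audio.mp3
# 3. Remux the two results into one container
ffmpeg -i video-only.mp4 -i audio.mp3 -map 0:v -map 1:a -c:v copy -c:a aac output.mp4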