How to make video and audio duration the same with ffmpeg?

I am merging a few user-generated videos together with ffmpeg-concat and sometimes run into an audio sync issue. I figured out that it fails when the audio and video durations mismatch. For example:
ffprobe -v error -select_streams v:0 -show_entries stream=duration -of default=noprint_wrappers=1:nokey=1 IMG_7679.mov
16.666016
ffprobe -v error -select_streams a:0 -show_entries stream=duration -of default=noprint_wrappers=1:nokey=1 IMG_7679.mov
16.670998
The question is: how do I make the audio and video durations equal prior to concat, without losing content?
Or maybe ffmpeg's classic concat demuxer solves this issue somehow and I should use it?

You can use the trim and/or atrim filters to cut part of the video or audio:
[v]trim=0:3.23,setpts=PTS-STARTPTS[vout]
[a]atrim=0:3.23,asetpts=PTS-STARTPTS[aout]
setpts and asetpts reset the timestamps so each trimmed segment starts at zero.
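To pick the trim end point, you can take the shorter of the two stream durations. A small sketch (untested against ffmpeg itself; the duration values are hard-coded from the ffprobe output in the question, and out.mov is a hypothetical output name):

```shell
# Stream durations reported by ffprobe in the question.
vdur=16.666016
adur=16.670998

# Trim both streams to the shorter duration so they end together.
cut=$(awk -v v="$vdur" -v a="$adur" 'BEGIN { printf "%.6f", (v < a) ? v : a }')
echo "$cut"

# The value would then feed the trim/atrim end point, e.g. (untested):
# ffmpeg -i IMG_7679.mov -filter_complex \
#   "[0:v]trim=0:$cut,setpts=PTS-STARTPTS[v];[0:a]atrim=0:$cut,asetpts=PTS-STARTPTS[a]" \
#   -map "[v]" -map "[a]" out.mov
```

Trimming to the shorter stream drops only the trailing ~5 ms mismatch rather than cutting real content.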

Using android FFMPEG library for limited video frames [duplicate]

I need to create multiple thumbnails (ex. 12) from a video at equal times using ffmpeg.
So for example if the video is 60 seconds - I need to extract a screenshot every 5 seconds.
I'm using the following command to get the frame at the 5th second.
ffmpeg -ss 5 -i video.webm -frames:v 1 -s 120x90 thumbnail.jpeg
Is there a way to get multiple thumbnails with one command?
Get duration (optional)
Get duration using ffprobe. This is an optional step but is helpful if you will be scripting or automating the next commands.
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 input.mp4
Example result:
60.000000
Output one frame every 5 seconds
Using the select filter:
ffmpeg -i input.mp4 -vf "select='not(mod(t,5))',setpts=N/FRAME_RATE/TB" output_%04d.jpg
or
ffmpeg -i input.mp4 -vf "select='not(mod(t,5))'" -vsync vfr output_%04d.jpg
Files will be named output_0001.jpg, output_0002.jpg, output_0003.jpg, etc. See image muxer documentation for more info and options.
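As a side note, the %04d in the output pattern is an ordinary zero-padded sequence counter, which a quick shell check illustrates:

```shell
# The image muxer substitutes a sequence number for %04d, zero-padded to 4 digits.
for i in 1 2 3; do
  printf 'output_%04d.jpg\n' "$i"
done
```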
To adjust JPEG quality see How can I extract a good quality JPEG image from a video with ffmpeg?
Output specific number of equally spaced frames
This will output 12 frames from a 60 second duration input:
ffmpeg -i input.mp4 -vf "select='not(mod(t,60/12))'" -vsync vfr output_%04d.jpg
You must manually enter the duration of the input (shown as 60 in the example above). See an automatic method immediately below.
Using ffprobe to automatically provide duration value
Bash example:
input=input.mp4; ffmpeg -i "$input" -vf "select='not(mod(t,$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 $input)/12))'" -vsync vfr output_%04d.jpg
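The arithmetic inside mod(t,...) can be sanity-checked on its own. A sketch with the duration hard-coded from the ffprobe example above:

```shell
# Interval between equally spaced frames = duration / number of thumbnails.
duration=60.000000
count=12
interval=$(awk -v d="$duration" -v n="$count" 'BEGIN { printf "%g", d / n }')
echo "$interval"
```

For a 60-second input and 12 thumbnails, the select expression fires every 5 seconds.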
With scale filter
Example using the scale filter:
ffmpeg -i input.mp4 -vf "select='not(mod(t,60/12))',scale=120:-1" -vsync vfr output_%04d.jpg
A PHP example that generates 3 evenly spaced thumbnails via the fps filter:
$ffmpegPath = exec('which ffmpeg');
$ffprobePath = exec('which ffprobe');
$command = "$ffprobePath -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 $input_video";
$video_duration = shell_exec($command);
$thumbnails_output = 'output%02d.png';
$command = "$ffmpegPath -i $input_video -vf fps=3/$video_duration $thumbnails_output";
shell_exec($command);

How to take metadata from .mp3 file and put it to a video as a text using FFmpeg?

In my previously opened topic,
How to make FFmpeg automatically inject mp3 audio tracks in the single cycled muted video,
I got a detailed explanation from @llogan of how to broadcast a looped short muted video on YouTube, automatically injecting audio tracks into it without interrupting the broadcast.
I plan to enhance the flow, and the next question I face is how to dynamically add text to the broadcast.
Prerequisites:
the YouTube broadcast is up and running via ffmpeg
a short 3-minute video is playing in an infinite loop
audio tracks from a playlist are automatically taken by ffmpeg's concat demuxer and injected into the video one by one
This is the basic command that starts the broadcast:
ffmpeg -re -fflags +genpts -stream_loop -1 -i video.mp4 -re -f concat \
  -i input.txt -map 0:v -map 1:a -c:v libx264 -tune stillimage -vf format=yuv420p \
  -c:a copy -g 20 -b:v 2000k -maxrate 2000k -bufsize 8000k \
  -f flv rtmp://a.rtmp.youtube.com/live2/my-key
Improvements I want to bring
I plan to store some metadata in the audio files (basically the artist name and the song name)
The moment a particular song starts playing, the artist/song name should be taken from the metadata and displayed on the video as text for as long as the song plays
When the current song finishes and a new one starts, the previous artist/song text should be replaced with the new one, and so on
My question is: how do I properly read the metadata and add it to the existing broadcast setup using ffmpeg?
This is a fairly broad question and I don't have a complete solution. But I can provide a partial answer containing several commands that you can use to help implement a solution.
Update text on video on demand
See Can you insert text from a file in real time with ffmpeg streaming?
Get title & artist metadata
With ffprobe:
ffprobe -v error -show_entries format_tags=title -of default=nw=1:nk=1 input.mp3
ffprobe -v error -show_entries format_tags=artist -of default=nw=1:nk=1 input.mp3
Or combined: format_tags=title,artist (note that title will display first, then artist, regardless of order in the command).
Get duration of a song
See How to get video duration in seconds?
What you need to figure out
The hard part is knowing when to update the file referenced by the textfile option of the drawtext filter, as shown in Update text on video on demand above.
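As a sketch of that step (the artist and title values are hard-coded placeholders here; in practice they would come from the ffprobe commands above), the per-song update could simply rewrite the file that drawtext reads:

```shell
# Write "ARTIST - TITLE" into the file referenced by drawtext's textfile option.
# With drawtext's reload=1 option, the running ffmpeg re-reads this file,
# so overwriting it updates the on-screen text without restarting the stream.
artist="Some Artist"   # placeholder value
title="Some Song"      # placeholder value
printf '%s - %s\n' "$artist" "$title" > songinfo.txt
cat songinfo.txt
```

The remaining work is triggering this write at the moment each new song begins.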
Lazy solution
Pre-make a video per song including the title and artist info. Simple Bash example:
audio=input.mp3; ffmpeg -stream_loop -1 -i video.mp4 -i "$audio" -filter_complex "[0:v]scale=1280:720:force_original_aspect_ratio=increase,crop=1280:720,setsar=1,fps=25,drawtext=text='$(ffprobe -v error -show_entries format_tags=title,artist -of default=nw=1:nk=1 $audio)':fontsize=18:fontcolor=white:x=10:y=h-th-10,format=yuv420p[v]" -map "[v]" -map 1:a -c:v libx264 -c:a aac -ac 2 -ar 44100 -g 50 -b:v 2000k -maxrate 2000k -bufsize 6000k -shortest "${audio%.*}.mp4"
Now that you have already done the encoding, and everything is conformed to the same attributes for proper concatenation, you can probably just stream copy your playlist to YouTube (untested):
ffmpeg -re -f concat -i input.txt -c copy -f flv rtmp://a.rtmp.youtube.com/live2/my-key
Refer to your previous question on how to dynamically update the playlist.
References:
FFmpeg Wiki: Streaming to YouTube
Resizing videos with ffmpeg to fit into specific size
How to concatenate videos in ffmpeg with different attributes?

Display only lines from output that contain a specified word

I'm looking for a way to get only the lines that contain a specified word from an output, in this case all lines that contain the word Stream.
I've tried:
streams=$(ffprobe -i "movie.mp4" | grep "Stream")
but that didn't get any results.
Or do I need to output it to a file and then extract the lines I'm looking for?
@paulsm4 was spot on: the output goes to stderr.
streams=$(ffprobe -i "movie.mp4" |& grep "Stream")
Note the |&, which pipes stderr as well as stdout (Bash); the portable equivalent is 2>&1 |.
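To see why the plain pipe failed, here is a minimal demonstration with a stand-in function that, like ffprobe's default report, writes to stderr (fake_probe and its lines are made up for illustration):

```shell
# Stand-in for ffprobe: prints its report to stderr, not stdout.
fake_probe() {
  echo "  Stream #0:0: Video: h264" >&2
  echo "  Stream #0:1: Audio: aac" >&2
}

# A plain pipe only carries stdout, so grep counts zero matches:
missed=$(fake_probe | grep -c "Stream")

# Redirecting stderr into stdout first (the portable form of bash's |&) works:
streams=$(fake_probe 2>&1 | grep "Stream")

echo "missed: $missed"
echo "$streams"
```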
No need for grep. Just use ffprobe directly to get whatever info you need.
Output all info
ffprobe -loglevel error -show_format -show_streams input.mp4
Video info only
ffprobe -loglevel error -show_streams -select_streams v input.mp4
Audio info only
ffprobe -loglevel error -show_streams -select_streams a input.mp4
Width x height
See Getting video dimension / resolution / width x height from ffmpeg
Duration
See How to get video duration?
Format / codec
Is there a way to use ffmpeg to determine the encoding of a file before transcoding?
Using ffprobe to check audio-only files
Info on frames
See Get video frames information with ffmpeg
More info and examples
See FFmpeg Wiki: ffprobe

Using ffmpeg in a script to detect and fix mp3 with sample rate != 44.1k

I only want to touch files that aren't 44.1 kHz (as MP3 is lossy, re-encoding/resampling files that don't need it is not good).
I have started playing with ffprobe (assuming this is the best way?) but got stuck on the syntax. Using:
ffprobe -show_streams -select_streams a format=sample_rate -of default=noprint_wrappers=1:nokey=1 myfile.mp3
It's not happy with this syntax, saying "myfile.mp3 provided as input filename, but 'format=sample_rate' was already specified."
Is there a better way to achieve this? If not, can someone help me with my ffprobe syntax?
Remove -show_streams, add -loglevel error, and change format=sample_rate to -show_entries stream=sample_rate.
ffprobe -loglevel error -select_streams a -show_entries stream=sample_rate -of default=noprint_wrappers=1:nokey=1 myfile.mp3
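Wrapped in a script, the check-then-convert logic could look like this sketch (sr is hard-coded where the ffprobe command above would supply it, and the re-encode command is left as a comment):

```shell
# Placeholder for the value the ffprobe command above would return.
sr=48000

# Only files that are not already 44.1 kHz get re-encoded; everything
# else is left untouched to avoid a needless lossy re-encode.
if [ "$sr" != "44100" ]; then
  action="resample"   # real command would be e.g.: ffmpeg -i in.mp3 -ar 44100 out.mp3
else
  action="skip"
fi
echo "$action"
```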

ffmpeg modify audio length/size ( stretch or shrink)

I am developing a web app where people can record videos. I have been able to send chunks of audio and video to the server successfully, where I am trying to combine them and return a single proper file.
My problem is that if the recording is one hour long, then after merging the chunks:
video length: 1:00:00, audio length: 00:59:30.
This is not an issue of the audio not getting recorded (I have checked that); the problem is that somehow, when I merge the chunks of audio, the audio shrinks.
I find that it is a progressive sync issue which gets worse and worse as time increases.
I have searched the net for a solution; most places say -async. I have tried using it, but to no avail. Is the usage below correct?
ffmpeg -i audio.wav -async 1 -i video.webm -y -strict -2 v.mp4
(v.mp4 is the final file that I provide to the users.)
I found a solution (or a temp fix, depending on how you look at it).
It involves a combination of ffmpeg and ffprobe: measure both durations, then stretch the audio (ratio < 1):
ffprobe -i a.mp3 -show_entries format=duration -v quiet -print_format json
ffprobe -i v.mp4 -show_entries format=duration -v quiet -print_format json
ffmpeg -i a.mp3 -filter:a atempo="0.9194791304347826" aSync.mp3   # audio is being stretched
ffmpeg -i aSync.mp3 -i v.mp4 final.mp4
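The atempo factor is just the ratio of the two durations. A sketch with the durations from the question (video 1:00:00, audio 00:59:30) hard-coded in place of the ffprobe JSON output:

```shell
# Durations in seconds (from the question: video 1:00:00, audio 00:59:30).
vdur=3600
adur=3570

# atempo < 1 slows the audio down, i.e. stretches it to the video's length.
tempo=$(awk -v a="$adur" -v v="$vdur" 'BEGIN { printf "%.6f", a / v }')
echo "$tempo"

# Then (untested): ffmpeg -i a.mp3 -filter:a "atempo=$tempo" aSync.mp3
# Note that atempo accepts factors in the 0.5-2.0 range (wider in newer ffmpeg).
```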
