Using the Android FFmpeg library for limited video frames [duplicate]

I need to create multiple thumbnails (e.g. 12) from a video at equal intervals using ffmpeg.
So, for example, if the video is 60 seconds long, I need to extract a screenshot every 5 seconds.
I'm using the following command to get the frame at the 5th second.
ffmpeg -ss 5 -i video.webm -frames:v 1 -s 120x90 thumbnail.jpeg
Is there a way to get multiple thumbnails with one command?

Get duration (optional)
Get duration using ffprobe. This is an optional step but is helpful if you will be scripting or automating the next commands.
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 input.mp4
Example result:
60.000000
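If you plan to script the later commands, you can capture this value in a shell variable (a minimal Bash sketch; input.mp4 is just a placeholder name):
duration=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 input.mp4)
echo "$duration"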
Output one frame every 5 seconds
Using the select filter:
ffmpeg -i input.mp4 -vf "select='not(mod(t,5))',setpts=N/FRAME_RATE/TB" output_%04d.jpg
or
ffmpeg -i input.mp4 -vf "select='not(mod(t,5))'" -vsync vfr output_%04d.jpg
Files will be named output_0001.jpg, output_0002.jpg, output_0003.jpg, etc. See image muxer documentation for more info and options.
To adjust JPEG quality see How can I extract a good quality JPEG image from a video with ffmpeg?
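Alternatively, the fps filter can produce one frame every 5 seconds as well (the PHP answer further below takes the same approach); this sketch should be equivalent to the select version above for evenly spaced sampling:
ffmpeg -i input.mp4 -vf fps=1/5 output_%04d.jpg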
Output specific number of equally spaced frames
This will output 12 frames from a 60 second duration input:
ffmpeg -i input.mp4 -vf "select='not(mod(t,60/12))'" -vsync vfr output_%04d.jpg
You must manually enter the duration of the input (shown as 60 in the example above). See an automatic method immediately below.
Using ffprobe to automatically provide duration value
Bash example:
input=input.mp4; ffmpeg -i "$input" -vf "select='not(mod(t,$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$input")/12))'" -vsync vfr output_%04d.jpg
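The same one-liner, unrolled into a short script for readability (no new behavior; the file name and thumbnail count are the same placeholders as above):
input=input.mp4
count=12
duration=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$input")
ffmpeg -i "$input" -vf "select='not(mod(t,$duration/$count))'" -vsync vfr output_%04d.jpg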
With scale filter
Example using the scale filter:
ffmpeg -i input.mp4 -vf "select='not(mod(t,60/12))',scale=120:-1" -vsync vfr output_%04d.jpg

A PHP variant of the same idea, using the fps filter to spread a fixed number of frames over the whole duration:
$ffmpegPath = exec('which ffmpeg'); $ffprobePath = exec('which ffprobe');
// Get the duration in seconds; trim the trailing newline returned by shell_exec().
$command = "$ffprobePath -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 $input_video";
$video_duration = trim(shell_exec($command));
// fps=3/duration yields 3 evenly spaced thumbnails; change 3 to the count you need.
$thumbnails_output = 'output%02d.png';
$command = "$ffmpegPath -i $input_video -vf fps=3/$video_duration $thumbnails_output";
shell_exec($command);

Related

How to take metadata from .mp3 file and put it to a video as a text using FFmpeg?

In my previous topic:
How to make FFmpeg automatically inject mp3 audio tracks in the single cycled muted video
I got a detailed explanation from llogan on how to broadcast a looped short muted video on YouTube, automatically injecting audio tracks into it without interrupting the stream.
I plan to enhance the flow, and the next question I face is how to dynamically add text to the broadcast.
Prerequisites:
the YouTube broadcast is up and running via ffmpeg
a short 3-minute video is playing in an infinite loop
audio tracks from a playlist are automatically picked up via "ffmpeg concat" and injected into the video one by one
This is the basic command that starts the broadcast:
ffmpeg -re -fflags +genpts -stream_loop -1 -i video.mp4 -re -f concat -i input.txt -map 0:v -map 1:a -c:v libx264 -tune stillimage -vf format=yuv420p -c:a copy -g 20 -b:v 2000k -maxrate 2000k -bufsize 8000k -f flv rtmp://a.rtmp.youtube.com/live2/my-key
Improvements I want to bring
I plan to store some metadata in the audio files (basically an artist name and a song name).
The moment a particular song starts playing, the artist/song name should be read from the metadata and displayed on the video as text for as long as the song plays.
When the current song finishes and a new one starts, the previous artist/song text should be replaced with the new one, and so on.
My question is: how do I properly read the metadata and add it to the existing broadcast setup using ffmpeg?
This is a fairly broad question and I don't have a complete solution. But I can provide a partial answer containing several commands that you can use to help implement a solution.
Update text on video on demand
See Can you insert text from a file in real time with ffmpeg streaming?
Get title & artist metadata
With ffprobe:
ffprobe -v error -show_entries format_tags=title -of default=nw=1:nk=1 input.mp3
ffprobe -v error -show_entries format_tags=artist -of default=nw=1:nk=1 input.mp3
Or combined: format_tags=title,artist (note that title will display first, then artist, regardless of order in the command).
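If you need the values in shell variables for scripting, a minimal sketch (input.mp3 is a placeholder):
title=$(ffprobe -v error -show_entries format_tags=title -of default=nw=1:nk=1 input.mp3)
artist=$(ffprobe -v error -show_entries format_tags=artist -of default=nw=1:nk=1 input.mp3)
echo "$artist - $title"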
Get duration of a song
See How to get video duration in seconds?
What you need to figure out
The hard part is knowing when to update the file referenced by the textfile option of the drawtext filter, as shown in Update text on video on demand above.
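One possible (untested) Bash sketch of that idea, assuming the broadcast's drawtext filter is set up with textfile=songinfo.txt:reload=1 so the text is re-read on every frame; the playlist path and file names are illustrative:
for audio in playlist/*.mp3; do
  # Refresh the text file that drawtext re-reads while streaming.
  ffprobe -v error -show_entries format_tags=title,artist -of default=nw=1:nk=1 "$audio" > songinfo.txt
  # Wait for the length of the current song before updating again.
  duration=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$audio")
  sleep "$duration"
done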
Lazy solution
Pre-make a video per song including the title and artist info. Simple Bash example:
audio=input.mp3; ffmpeg -stream_loop -1 -i video.mp4 -i "$audio" -filter_complex "[0:v]scale=1280:720:force_original_aspect_ratio=increase,crop=1280:720,setsar=1,fps=25,drawtext=text='$(ffprobe -v error -show_entries format_tags=title,artist -of default=nw=1:nk=1 "$audio")':fontsize=18:fontcolor=white:x=10:y=h-th-10,format=yuv420p[v]" -map "[v]" -map 1:a -c:v libx264 -c:a aac -ac 2 -ar 44100 -g 50 -b:v 2000k -maxrate 2000k -bufsize 6000k -shortest "${audio%.*}.mp4"
Now that you already did the encoding, and everything is conformed to the same attributes for proper concatenation, you can probably just stream copy your playlist to YouTube (but I didn't test):
ffmpeg -re -f concat -i input.txt -c copy -f flv rtmp://a.rtmp.youtube.com/live2/my-key
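Here input.txt is the usual concat demuxer playlist, something along these lines (file names are illustrative):
file 'song1.mp4'
file 'song2.mp4'
file 'song3.mp4'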
Refer to your previous question on how to dynamically update the playlist.
References:
FFmpeg Wiki: Streaming to YouTube
Resizing videos with ffmpeg to fit into specific size
How to concatenate videos in ffmpeg with different attributes?

How to overlap and merge multiple audio files using ffmpeg?

I am trying to merge multiple audio files into a single file, but instead of concatenating, which I can do using the following command:
ffmpeg -v debug -i file1.wav -i file2.wav -i file3.wav -filter_complex [0:0][1:0][2:0]concat=n=3:v=0:a=1[out] -map [out] output.wav
While this command works fine for concatenating, I want to overlap, say, the last 100ms of the first file with the first 100ms of the next file.
I am now trying to use 'acrossfade' filter that ffmpeg provides but I am not having any success with it.
ffmpeg -v debug -i file1.wav -i file2.wav -i file3.wav -filter_complex [0:a]acrossfade=d=0.100:c1=exp:c2=exp,[1:a]acrossfade=d=0.100:c1=exp:c2=exp,[2:a]acrossfade=d=0.100:c1=exp:c2=exp
This is what I have come up with so far, but it does not work; it throws a 'Buffer is too short (n=0) for frame_length=1' error.
The documentation is not very helpful. Does anyone have any idea what can be done?
Thanks in advance!
acrossfade is meant to create a transition between two inputs. So, each pair of inputs has to have acrossfade applied with the result being used as an input for the next acrossfade.
ffmpeg -v debug -i file1.wav -i file2.wav -i file3.wav -filter_complex "[0:a][1:a]acrossfade=d=0.100:c1=exp:c2=exp[a01];[a01][2:a]acrossfade=d=0.100:c1=exp:c2=exp" out.wav
Edit: your inputs are 16000 Hz, and your crossfade duration is 0.1s (!), which is less than 2 audio frames at the input sampling rate. Default frame size is 1024 samples. So, frame size needs to be lowered.
ffmpeg -v debug -i file1.wav -i file2.wav -i file3.wav -filter_complex "[0:a]asetnsamples=256[0a];[1:a]asetnsamples=256[1a];[2:a]asetnsamples=256[2a];[0a][1a]acrossfade=d=0.100:c1=exp:c2=exp[a01];[a01][2a]acrossfade=d=0.100:c1=exp:c2=exp" out.wav

avconv option for scale causes "Invalid frame size" error

I use avconv to get a preview image of a video clip. The command is
avconv -i video.mp4 -vframes 1 -s 100x100 cover.jpeg
It works, but I would like the output JPEG to have proportional dimensions, for example half the size of the input video. So I execute
avconv -i video.mp4 -vframes 1 -s 'iw/2:ih/2' cover.jpeg
but this just causes an Invalid frame size: iw/2:ih/2 error. How should I pass advanced scale options to avconv on the command line?
Try
avconv -i video.mp4 -vframes 1 -vf "scale=iw/2:ih/2" cover.jpeg
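For reference, ffmpeg accepts the same scale filter syntax, should you switch tools:
ffmpeg -i video.mp4 -vframes 1 -vf "scale=iw/2:ih/2" cover.jpeg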

ffmpeg modify audio length/size ( stretch or shrink)

I am developing a web app where people can record videos. I have been able to send chunks of audio and video to the server successfully, where I am trying to combine them and return a single proper file.
My problem: if the recording is one hour long, then after merging the chunks the
video length is 1:00:00, but the audio length is 00:59:30.
This is not an issue of the audio not getting recorded (I have checked that); the problem is that somehow, when I merge the chunks of audio, the audio shrinks.
I find that it is a progressive sync issue which gets worse and worse as time goes on.
I have searched the net for a solution; most places suggest -async. I have tried using it, but to no avail. Is the usage below correct?
ffmpeg -i audio.wav -async 1 -i video.webm -y -strict -2 v.mp4
(v.mp4 is the final file that I provide to the users.)
I found a solution (or a temporary fix, depending on how you look at it).
It involves a combination of ffmpeg and ffprobe. I stretched the audio (ratio < 1):
ffprobe -i a.mp3 -show_entries format=duration -v quiet -print_format json
ffprobe -i v.mp4 -show_entries format=duration -v quiet -print_format json
ffmpeg -i a.mp3 -filter:a atempo="0.9194791304347826" aSync.mp3   # the audio is stretched (slowed) to match the video duration
ffmpeg -i aSync.mp3 -i v.mp4 final.mp4
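A small sketch to derive the atempo ratio automatically instead of hard-coding it, assuming the goal is to stretch a.mp3 to the duration of v.mp4 (awk handles the floating-point division):
a=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 a.mp3)
v=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 v.mp4)
ratio=$(awk "BEGIN{print $a/$v}")   # a value below 1 slows the audio, lengthening it
ffmpeg -i a.mp3 -filter:a atempo="$ratio" aSync.mp3
ffmpeg -i aSync.mp3 -i v.mp4 final.mp4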

How to overlay/downmix two audio files using ffmpeg

Can I overlay/downmix two audio mp3 files into one mp3 output file using ffmpeg?
stereo + stereo → stereo
Normal downmix
Use the amix filter:
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex amix=inputs=2:duration=longest output.mp3
Or the amerge filter:
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex amerge=inputs=2 -ac 2 output.mp3
Downmix each input into specific output channel
Use the amerge and pan filters:
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex "amerge=inputs=2,pan=stereo|c0<c0+c1|c1<c2+c3" output.mp3
mono + mono → stereo
Use the join filter:
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex join=inputs=2:channel_layout=stereo output.mp3
Or amerge:
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex amerge=inputs=2 output.mp3
mono + mono → mono
Use the amix filter:
ffmpeg -i input0.mp3 -i input1.mp3 -filter_complex amix=inputs=2:duration=longest output.mp3
More info and examples
See FFmpeg Wiki: Audio Channels
Check this out:
ffmpeg -y -i ad_sound/whistle.mp3 -i ad_sound/4s.wav -filter_complex "[0:0][1:0] amix=inputs=2:duration=longest" -c:a libmp3lame ad_sound/outputnow.mp3
I think it will help.
The amix filter helps to mix multiple audio inputs into a single output.
If you run the following command:
ffmpeg -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex amix=inputs=3:duration=first:dropout_transition=3 OUTPUT
This command mixes 3 input audio streams into a single output with the same duration as the first input and a dropout transition time of 3 seconds (the concrete example further below uses two MP3 files).
The amix filter accepts the following parameters:
inputs: The number of inputs. If unspecified, it defaults to 2.
duration: How to determine the end-of-stream. One of:
longest: The duration of the longest input (default).
shortest: The duration of the shortest input.
first: The duration of the first input.
dropout_transition: The transition time, in seconds, for volume renormalization when an input stream ends. The default value is 2 seconds.
For example, I ran the following command on Ubuntu 16.04.1 with FFmpeg version 3.2.1-1:
ffmpeg -i background.mp3 -i bSound.mp3 -filter_complex amix=inputs=2:duration=first:dropout_transition=0 -codec:a libmp3lame -q:a 0 OUTPUT.mp3
-codec:a libmp3lame -q:a 0 was used to set a variable bit rate. Remember that you may need to install the libmp3lame library. The command will work even without the -codec:a libmp3lame -q:a 0 part.
Reference: https://ffmpeg.org/ffmpeg-filters.html#amix
To merge two audio files with different volumes and different durations, the following command works:
ffmpeg -y -i audio1.mp3 -i audio2.mp3 -filter_complex "[0:0]volume=0.09[a];[1:0]volume=1.8[b];[a][b]amix=inputs=2:duration=longest" -c:a libmp3lame output.mp3
Here duration can be changed to longest or shortest, and you can adjust the volume levels to your needs.
If you're looking to add background music to a voice track, use the following command; in the gaps between speech, the music automatically becomes louder:
ffmpeg -i bgmusic.mp3 -i audio.mp3 -filter_complex "[1:a]asplit=2[sc][mix];[0:a][sc]sidechaincompress=threshold=0.003:ratio=20[bg];[bg][mix]amerge[final]" -map [final] final.mp3
Here threshold decides how easily the music is ducked while the voice is present: the lower the threshold, the more the music is suppressed. ratio controls how strongly the music is compressed: the higher the ratio, the stronger the compression.
If they have different lengths, you can use apad to add silence to the shorter one.
With Bash
set 'amovie=a.mp3 [gg]; amovie=b.mp3 [hh]; [gg][hh] amerge'
ffmpeg -f lavfi -i "$1" -q 0 c.mp3
Here set stores the filtergraph in the positional parameter $1, which is then passed as a lavfi input.
Example
For MobileFFmpeg, you can use the following command arguments:
// Merge the recorded audio with the background audio; -shortest stops at the shorter input
let commandValue = "-y -i \(recordedAudioPath) -i \(backgroundAudio) -filter_complex [0:a][1:a]amerge=inputs=2[a] -map [a] -ac 2 -shortest -preset ultrafast \(outputPath)"
MobileFFmpeg.execute(commandValue)
