Beeping out portions of an audio file using ffmpeg

I'm trying to use ffmpeg to beep out sections of an audio file (say 10-15 and 20-30). However, only the first portion (10-15) gets beeped, while the next portion just gets muted.
ffmpeg -i input.mp3 -filter_complex "[0]volume=0:enable='between(t,10,15)+between(t,20,30)'[main];sine=d=5:f=800,adelay=10s,pan=stereo|FL=c0|FR=c0[beep];[main][beep]amix=inputs=2" output.wav
I'm using this as my reference but am not able to make much progress.
Edit: Well, sine=d=5 clearly specifies the duration as 5 seconds (my bad). It seems this command can only add a beep to one specific portion; how can I change it to add beeps to different sections with varying durations?

ffmpeg -i input.mp3 -filter_complex
"volume=enable='between(t,5,10)':volume=0[main];sine=d=5:f=800,adelay=5s,pan=stereo|FL=c0|FR=c0[beep];[main][beep]amix=inputs=2,
volume=enable='between(t,15,20)':volume=0[main];sine=d=5:f=800,adelay=15s,pan=stereo|FL=c0|FR=c0[beep];[main][beep]amix=inputs=2,
volume=enable='between(t,40,50)':volume=0[main];sine=d=10:f=800,adelay=40s,pan=stereo|FL=c0|FR=c0[beep];[main][beep]amix=inputs=2" output.wav
The above command beeps out 5-10, 15-20 and 40-50, and it seems to work. Note that the graph needs -filter_complex rather than -af, since it contains multiple chains and source filters. Separate the settings for each beep with a comma and change the values in all three places: between, sine=d=x (where x is the duration) and adelay=ys (where y is the delay, i.e. when the beep starts). The between window is then (t, y, y+x).
References: Mute specified sections of an audio file using ffmpeg and FFMPEG: Adding beep sound to another audio file in specific time portions
I'd love to know an easier/more convenient way of doing this, so I'm not marking this as the answer.
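A tidier single-graph alternative (a sketch along the same lines, untested): mute all the sections with one volume filter, as in the original command, generate one sine source per beep, and mix everything with a single amix call, so each section only needs its window, duration and delay set once:
ffmpeg -i input.mp3 -filter_complex
"[0]volume=0:enable='between(t,5,10)+between(t,15,20)+between(t,40,50)'[main];
sine=d=5:f=800,adelay=5s,pan=stereo|FL=c0|FR=c0[b1];
sine=d=5:f=800,adelay=15s,pan=stereo|FL=c0|FR=c0[b2];
sine=d=10:f=800,adelay=40s,pan=stereo|FL=c0|FR=c0[b3];
[main][b1][b2][b3]amix=inputs=4" output.wav
As elsewhere on this page, the command is broken into multiple lines for readability; make it one line when executing.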

Related

FFMPEG reducing Generation Loss when inserting many videos into another video

I am trying to insert many miniclip videos (miniclipX.mp4) into a main.mp4 video. Although I have been able to do this using this solution, I seem to suffer from generation loss.
The command I am using (within a python script, in a loop at many different intervals) is:
ffmpeg -i main.mp4 -i miniclipX.mp4 -filter_complex "[0:v]drawbox=t=fill:enable='between(t,5,6.4)'[bg];[1:v]setpts=PTS+5/TB[fg];[bg][fg]overlay=x=(W-w)/2:y=(H-h)/2:eof_action=pass;[1:a]adelay=5s:all=1[a1];[0:a][a1]amix" output.mp4
(Then renaming output.mp4 to main.mp4 within a loop)
Would there be any way to either:
A) reduce generation loss by using certain flags, or
B) include many different input files and many different -filter_complex graphs in a single command to achieve what I am after?
Because you did not provide the ffmpeg log (and therefore there is no info about your ffmpeg or your inputs), for this answer I'll assume all videos are the same width and height.
Example to show miniclip1.mp4 at 5 seconds and miniclip2.mp4 at 10 seconds:
ffmpeg -i main.mp4 -i miniclip1.mp4 -i miniclip2.mp4 -filter_complex
"[1:v]setpts=PTS+5/TB[offset1];[0:v][offset1]overlay=x=(W-w)/2:y=(H-h)/2:eof_action=pass[bg];
[2:v]setpts=PTS+10/TB[offset2];[bg][offset2]overlay=x=(W-w)/2:y=(H-h)/2:eof_action=pass;
[1:a]adelay=5s:all=1[a1];
[2:a]adelay=10s:all=1[a2];
[0:a][a1][a2]amix=inputs=3"
output.mp4
Command was broken into multiple lines so it is easier to read. Make it one line when executing.
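As for (A): if you do keep the per-clip loop, you can at least make every intermediate generation lossless, so that only the final export is lossy. A sketch of one loop iteration, assuming lossless H.264 (libx264 with -qp 0) plus FLAC audio in Matroska intermediates, since MP4 is less friendly to lossless audio:
ffmpeg -i main.mkv -i miniclipX.mp4 -filter_complex
"[0:v]drawbox=t=fill:enable='between(t,5,6.4)'[bg];[1:v]setpts=PTS+5/TB[fg];
[bg][fg]overlay=x=(W-w)/2:y=(H-h)/2:eof_action=pass;
[1:a]adelay=5s:all=1[a1];[0:a][a1]amix"
-c:v libx264 -qp 0 -preset ultrafast -c:a flac output.mkv
Again, make it one line when executing. Convert main.mp4 to main.mkv once before the first pass, expect the lossless intermediates to be much larger than the MP4s, and only encode back to MP4 at the very end.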

Mixing various audio and video sources into a single video

I've already read FFmpeg - Overlay one video onto another video?, How to overlay 2 videos at different time over another video in single ffmpeg command?, FFmpeg - Multiple videos with 4 areas and different play times (and many similar questions tagged [ffmpeg] about setpts). The following code works, but I'm sure it can be simplified into a more elegant solution.
I'd like to mix multiple sources (image and sound), with different starting points:
t (seconds)       0   1   2   3   4   5   6   7   8   9   10  11  12  13
test.png          [-------------------------------]
a.mp3                     [-------]
without_sound.mp4                             [-------------------] (overlay at x,y=200,200)
b.mp3                                 [---]
with_sound.mp4                [---------------------------------------] (overlay at x,y=100,100)
This works:
ffmpeg -i test.png
-t 2 -i a.mp3
-t 5 -i without_sound.mp4
-t 1 -i b.mp3
-t 10 -i with_sound.mp4
-filter_complex "
[0]setpts=PTS-STARTPTS[s0];
[1]adelay=2000|2000[s1];
[2]setpts=PTS-STARTPTS+7/TB[s2];
[3]adelay=5000|5000[s3];
[4]setpts=PTS-STARTPTS+3/TB[s4];
[4:a]adelay=3000|3000[t4];
[s1][s3][t4]amix=inputs=3[outa];
[s0][s4]overlay=100:100[o2];
[o2][s2]overlay=200:200[outv]
" -map [outa] -map [outv]
out.mp4 -y
but:
is it normal that we have to use both setpts and adelay? I tried without adelay and the sound was not shifted. Put differently, is there a way to simplify:
[4]setpts=PTS-STARTPTS+3/TB[s4];
[4:a]adelay=3000^|3000[t4];
?
is there a way to do it with setpts and asetpts only? When I replaced adelay=5000|5000 with asetpts=PTS-STARTPTS+5/TB (and likewise for the other one), it didn't give the expected time-shifting (see below)
in similar questions/answers I often see overlay=...:enable='between(t,...,...)'; here it seems it is not needed. Why?
More generally, how would you simplify this "mix multiple audio and video" ffmpeg code?
More details about the second bullet point: if we replace adelay with asetpts,
-filter_complex "
[0]setpts=PTS-STARTPTS[s0];
[1]asetpts=PTS-STARTPTS+2/TB[s1];
[2]setpts=PTS-STARTPTS+7/TB[s2];
[3]asetpts=PTS-STARTPTS+5/TB[s3];
[4]setpts=PTS-STARTPTS+3/TB[s4];
[4:a]asetpts=PTS-STARTPTS+3/TB[t4];
[s1][s3][t4]amix=inputs=3[outa];
[s0][s4]overlay=100:100[o2];
[o2][s2]overlay=200:200[outv]
it doesn't work: [3] should begin at 0'05" and [4:a] at 0'03", but they all begin at the same time as [1], i.e. at 0'02".
It seems that amix only takes the first asetpts into consideration and discards the others; is that true?
is it normal that we have to use both setpts and adelay?
Yes, the former is for video streams; the latter, for audio. asetpts is not suitable for use with amix since the latter ignores starting time offsets. adelay fills in with silence from 0 to the desired offset.
I often see overlay=...:enable='between(t,...,...)', here it seems it is not needed, why?
Overlay syncs its main and overlay video frames by timestamps. enable is needed if one wishes to disable overlay when synced frames are available for both inputs.
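That said, the adelay lines themselves can be shortened. A sketch of the same graph, assuming an ffmpeg recent enough to support adelay's all option and its s (seconds) suffix (both used elsewhere on this page), which avoids writing the delay once per channel:
-filter_complex "
[0]setpts=PTS-STARTPTS[s0];
[1]adelay=2s:all=1[s1];
[2]setpts=PTS-STARTPTS+7/TB[s2];
[3]adelay=5s:all=1[s3];
[4]setpts=PTS-STARTPTS+3/TB[s4];
[4:a]adelay=3s:all=1[t4];
[s1][s3][t4]amix=inputs=3[outa];
[s0][s4]overlay=100:100[o2];
[o2][s2]overlay=200:200[outv]
" -map [outa] -map [outv]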

mkv file out of sync with linear drift

I have a bunch of mkv files, with FLAC as the audio codec and FFV1 as the video one.
The files were created using an EasyCap acquisition dongle from a VCR analog source. Specifically, I used VLC's "open acquisition device" prompt and selected PAL. Then, I converted the files (audio PCM, video raw YUV) to (FLAC, FFV1) using
ffmpeg.exe -i input.avi -acodec flac -vcodec ffv1 -level 3 -threads 4 -coder 1 -context 1 -g 1 -slices 24 -slicecrc 1 output.mkv
Now, the files are progressively out of sync. It may be because, while the video (maybe) has a constant framerate, the FLAC track has a variable one. So, is there a way to sync the audio track to the video, or something similar? Can FFmpeg do this? Thanks
EDIT
Following Mulvya's hint, I plotted the difference in sync at various times; the first column shows the seconds elapsed, the second shows the difference in seconds. The plot behaves linearly, with a constant slope of 0.0078. NOTE: measurements taken by hand, with a chronometer.
EDIT 2
Playing around with VirtualDub, I found that changing the framerate to 25 fps from the original 24.889 (Video->Frame rate...->Change frame rate to) and using the track converted to WAV definitely works. Two problems, though: VirtualDub crashes when importing the original FFV1/FLAC mkv file, so I had to convert the video to H264 to try it out; moreover, I find it difficult to use an external encoder to save VirtualDub's output.
So, could I avoid using VirtualDub and simply use ffmpeg for this? Here's the exported vdscript:
VirtualDub.audio.SetSource("E:\\4_track2.wav", "");
VirtualDub.audio.SetMode(0);
VirtualDub.audio.SetInterleave(1,500,1,0,0);
VirtualDub.audio.SetClipMode(1,1);
VirtualDub.audio.SetEditMode(1);
VirtualDub.audio.SetConversion(0,0,0,0,0);
VirtualDub.audio.SetVolume();
VirtualDub.audio.SetCompression();
VirtualDub.audio.EnableFilterGraph(0);
VirtualDub.video.SetInputFormat(0);
VirtualDub.video.SetOutputFormat(7);
VirtualDub.video.SetMode(3);
VirtualDub.video.SetSmartRendering(0);
VirtualDub.video.SetPreserveEmptyFrames(0);
VirtualDub.video.SetFrameRate2(25,1,1);
VirtualDub.video.SetIVTC(0, 0, 0, 0);
VirtualDub.video.SetCompression();
VirtualDub.video.filters.Clear();
VirtualDub.audio.filters.Clear();
The first line imports the WAV-converted audio track.
Can I set up an equivalent pipeline in ffmpeg (possibly using FLAC, not WAV)? SetFrameRate2 is maybe the key here.
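For reference, a sketch of what an ffmpeg-only equivalent might look like, assuming the fix really is just reinterpreting the 24.889 fps video as 25 fps: retime the video frames with setpts and re-encode (same FFV1 settings as above) while copying the FLAC track untouched. Untested; the two rates are taken from the VirtualDub experiment:
ffmpeg.exe -i output.mkv -vf "setpts=PTS*24.889/25" -r 25 -vcodec ffv1 -level 3 -coder 1 -context 1 -g 1 -slices 24 -slicecrc 1 -acodec copy synced.mkv
This shortens the video's duration by the same factor as VirtualDub's SetFrameRate2(25,1,1), so the linear drift against the audio should disappear.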

How to Simply Remove Duplicate Frames from a Video using ffmpeg

First of all, I'd preface this by saying I'm NO EXPERT with video manipulation, although I've been fiddling with ffmpeg for years (in a fairly limited way). Hence, I'm not too flash with all the language folk often use, or how it affects what I'm trying to do in my manipulations... but I'll have a go with this anyway...
I've checked a few links here, for example:
ffmpeg - remove sequentially duplicate frames
...but the content didn't really help me.
I have some hundreds of video clips that have been created under both Windows and Linux using both ffmpeg and other similar applications. However, they have some problems with times in the video where the display is 'motionless'.
As an example, let's say we have some web site that streams a live video into, say, a Flash video player/plugin in a web browser. In this case, we're talking about a traffic camera video stream, for example.
There's an instance of ffmpeg running that is capturing a region of the (Windows) desktop into a video file, viz:-
ffmpeg -hide_banner -y -f dshow ^
-i video="screen-capture-recorder" ^
-vf "setpts=1.00*PTS,crop=448:336:620:360" ^
-an -r 25 -vcodec libx264 -crf 0 -qp 0 ^
-preset ultrafast SAMPLE.flv
Let's say the actual 'display' that is being captured looks like this:-
123456789 XXXXX 1234567 XXXXXXXXXXX 123456789 XXXXXXX
^---a---^ ^-P-^ ^--b--^ ^----Q----^ ^---c---^ ^--R--^
...where each character position represents a (sequence of) frame(s). Owing to a poor internet connection, a "single frame" can be displayed for an extended period (the 'X' characters being an (almost) exact copy of the immediately previous frame). So this means we have segments of the captured video where the image doesn't change at all (to the naked eye, anyway).
How can we deal with the duplicate frames?... and how does our approach change if the 'duplicates' are NOT the same to ffmpeg but LOOK more-or-less the same to the viewer?
If we simply remove the duplicate frames, the 'pacing' of the video is lost, and what used to take, maybe, 5 seconds to display now takes a fraction of a second, giving a very jerky, unnatural motion, although there are no duplicate images in the video. This seems to be achievable using ffmpeg with the 'mpdecimate' filter, viz:-
ffmpeg -i SAMPLE.flv ^ ... (i)
-r 25 ^
-vf mpdecimate,setpts=N/FRAME_RATE/TB DEC_SAMPLE.mp4
That reference I quoted uses a command that shows which frames 'mpdecimate' will remove when it considers them to be 'the same', viz:-
ffmpeg -i SAMPLE.flv ^ ... (ii)
-vf mpdecimate ^
-loglevel debug -f null -
...but knowing that (complicatedly formatted) information, how can we re-organize the video without executing multiple runs of ffmpeg to extract 'slices' of video for re-combining later?
In that case, I'm guessing we'd have to run something like:-
- user specifies a 'threshold duration' for the duplicates (maybe run for 1 sec only)
- determine & save main video information (fps, etc. - assuming constant frame rate)
- map the (frame/time where duplicates start) -> no. of frames/duration of duplicates
- if the duration of duplicates is less than the user threshold, don't consider this period a 'series of duplicate frames' and move on
- extract the 'non-duplicate' video segments (a, b & c in the diagram above)
- create a 'new video' (empty) with the original video's specs
- for each video segment:
  - extract the last frame of the segment
  - create a short video clip that repeats the frame just extracted (duration = user spec. = 1 sec)
  - append (current video segment + short clip) to the 'new video' and repeat
...but in my case, a lot of the captured videos might be 30 minutes long and have hundreds of 10 sec long pauses, so the 'rebuilding' of the videos will take a long time using this method.
This is why I'm hoping there's some "reliable" and "more intelligent" way to use ffmpeg (with/without the 'mpdecimate' filter) to do the 'decimate' function in only a couple of passes or so... Maybe there's a way that the required segments could even be specified (in a text file, for example) so that, as ffmpeg runs, it will stop/restart its transcoding at specified times/frame numbers?
Short of this, is there another application (for use on Windows or Linux) that could do what I'm looking for, without having to manually set start/stop points or extract/combine video segments by hand...?
I've been trying to do all this with ffmpeg N-79824-gcaee88d under Win7-SP1 and (a different version I don't currently remember) under Puppy Linux Slacko 5.6.4.
Thanks a heap for any clues.
I assume what you want is to keep the frames with motion plus up to 1 second of duplicate frames, but discard the rest.
ffmpeg -i in.mp4 -vf
"select='if(gt(scene,0.01),st(1,t),lte(t-ld(1),1))',setpts=N/FRAME_RATE/TB"
trimmed.mp4
What the select filter expression does is make use of an if-then-else operator:
gt(scene,0.01) checks if the current frame has detected motion relative to the previous frame. The value will have to be calibrated based on manual observation by seeing which value accurately captures actual activity as compared to sensor/compression noise or visual noise in the frame. See here on how to get a list of all scene change values.
If the frame is evaluated to have motion, the then clause evaluates st(1,t). The function st(val,expr) stores the value of expr in a variable numbered val and it also returns that expression value as its result. So, the timestamp of the kept frames will keep on being updated in that variable until a static frame is encountered.
The else clause checks the difference between the current frame timestamp and the timestamp of the stored value. If the difference is less than 1 second, the frame is kept, else discarded.
The setpts sanitizes the timestamps of all selected frames.
Edit: I tested my command with a video input I synthesized and it worked.
I've done a bit of work on this question... and have found the following works pretty well...
It seems like the input video has to have a "constant frame rate" for things to work properly, so the first command is:-
ffmpeg -i test.mp4 ^
-vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" ^
-vsync cfr test01.mp4
I then need to look at the 'scores' for each frame. Such a listing is produced by:-
ffmpeg -i test01.mp4 ^
-vf "select='gte(scene,0)',metadata=print" -f null -
I'll look at all those scores and average them (mean) - a bit dodgy, but it seems to work OK. In this example, that average score is '0.021187'.
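For the record, here's one way to automate that averaging on the Linux side (a sketch, assuming a shell with awk and an ffmpeg whose metadata filter supports file=- for routing the per-frame scene scores to stdout):
ffmpeg -i test01.mp4 -vf "select='gte(scene,0)',metadata=print:file=-" -f null - 2>/dev/null |
awk -F= '/scene_score/ { sum += $2; n++ } END { if (n) print sum / n }'
The awk part just accumulates every lavfi.scene_score value and prints the mean.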
I then have to select a 'persistence' value - how long to let the 'duplicated' frames run. If you force it to keep only one frame, the entire video tends to run much too quickly... so I've been using 0.2 seconds as a starting point.
So the next command becomes:-
ffmpeg -i test01.mp4 ^
-vf "select='if(gt(scene,0.021187),st(1,t),lte(t-ld(1),0.20))',
setpts=N/FRAME_RATE/TB" output.mp4
After that, the resultant 'output.mp4' video seems to work pretty well. It's only a bit of fiddling with the 'persistence' value that might need to be done to compromise between having a smoother-playing video and scenes that change a bit abruptly.
I've put together some Perl code that works Ok, which I'll work out how to post, if folks are interested in it... eventually(!)
Edit: Another advantage of doing this 'decimating', is that files are of shorter duration (obviously) AND they are smaller in size. For example, a sample video that ran for 00:07:14 and was 22MB in size went to 00:05:35 and 11MB.
Variable frame rate encoding is totally possible, but I don't think it does what you think it does. I am assuming that you wish to remove these duplicate frames to save space/bandwidth? If so, it will not work, because the codec is already doing it. Codecs use reference frames and only encode what has changed from the reference, so the duplicate frames take almost no space to begin with. Basically, each frame is encoded as a packet of data saying: copy the previous frame and make this change. The X frames have zero changes, so it only takes a few bytes to encode each one.

FFMPEG: 4-channel audio workflow suggestions?

I've got a bunch of stereo files recorded for a documentary with a Zoom recorder in 4-channel mode. Basically it's sets of pairs of stereo files: file A would be a stereo file with a lav or boom mic recording, and file B, of identical length, would be a proper stereo track recorded by the Zoom itself.
Now I'm trying to convert all this into something I can correctly ingest into an editing suite. The A files are a mess, but I came up with an ffmpeg script which downconverts them to mono and then reconverts them back to stereo (to get rid of inconsistencies). Now, how do I merge two stereo files into a single WAV or AIFF file containing two separate stereo channels? I browsed around for any workflows and/or standards on that but can't really find anything useful.
Any ideas on how to do that with ffmpeg (or anything else, really) would be appreciated!
Don't know if FCP-X reads multi-track WAVs, but you can output to a multi-track MOV.
ffmpeg -i file1.wav -i file2.wav -c copy -map 0 -map 1 file.mov
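If the editing suite would rather have a single multichannel track than a multi-track file, a variant worth trying (a sketch; whether the suite splits the four channels back into two stereo pairs is up to the suite):
ffmpeg -i file1.wav -i file2.wav -filter_complex "[0:a][1:a]amerge=inputs=2" -c:a pcm_s24le 4ch.wav
amerge joins the two stereo streams into one 4-channel stream; pcm_s24le is just an illustrative choice of WAV codec.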
