I'm trying to synchronise some audio with video. The audio is recorded via an app in the M4U format; at the same time as the recording starts, two cameras are triggered and start recording. When the recording stops, both the audio and the cameras stop. These are out of sync by quite a bit, at least a second. The file lengths are the same, but the audio clearly starts recording earlier than the video.
I'm trying to synchronise these. I can do it manually in Audacity, but I'd like to get it close via FFmpeg.
I've had a good look around and can find commands for delaying the start of the audio track, but not for cutting off the first few seconds of the audio. I'm trying something like this:
ffmpeg -framerate 24 -i .\seq\%%04d.jpg -itsoffset -3 -i audio.m4u -map 0:v -map 1:a -c:v libx264 -pix_fmt yuv420p -c:v libx264 -crf 23 -profile:v high -movflags faststart -b 5000k -shortest out.mp4
Any clues how to remove the first few seconds from the audio input?
The atrim filter does just that:
ffmpeg -framerate 24 -i .\seq\%%04d.jpg -i audio.m4u -map 0:v -map 1:a -af atrim=3,asetpts=N/SR/TB -c:v libx264 -pix_fmt yuv420p -crf 23 -profile:v high -movflags faststart -shortest out.mp4
The asetpts is added to reset the timestamps of the trimmed audio.
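You can sanity-check the atrim approach on a synthetic source before touching the real files. A sketch, assuming an ffmpeg build with the lavfi input device; the file name trimmed.wav is made up:

```shell
# Generate a 10-second test tone, cut the first 3 seconds with atrim,
# and confirm the result is about 7 seconds long.
ffmpeg -v error -y -f lavfi -i "sine=frequency=440:duration=10" \
  -af "atrim=3,asetpts=N/SR/TB" -c:a pcm_s16le trimmed.wav
ffprobe -v error -show_entries format=duration -of csv=p=0 trimmed.wav
```

An equivalent demuxer-level trim is to place -ss 3 before -i audio.m4u, which seeks the input before it reaches any filters.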
I'm using the following command to combine two video files, overlaying the second one at a certain point in the first file. The result is what I want, except the audio from the overlaid file is missing.
ffmpeg.exe -y -hide_banner -ss 00:00:00.067 -i promo.mov -i tag.mov -filter_complex "[1:v]setpts=PTS+6.5/TB[a];[0:v][a]overlay=enable=gte(t\,6.5)[out]" -map [out] -map 0:a -map 1:a -c:v mpeg2video -c:a pcm_s16le -ar 48000 -af loudnorm=I=-20:print_format=summary -preset ultrafast -q:v 0 -t 10 complete.mxf
Without -map 0:a I get no audio at all, but the second -map 1:a does not pass the audio from -i tag.mov.
I have also tried amix but that combines audio from both clips starting at the beginning, and I want the audio from the second file to begin when that file starts overlaying.
It would also be helpful if I could make the audio from the first clip drop lower at the time of the overlay.
amix doesn't support introducing an input mid-way, so the workaround is to add leading silence. You can use the adelay filter to do this.
make the audio from the first clip drop lower at the time of the overlay
This is possible using the sidechaincompress filter, which takes two inputs and lowers the volume of the first input based on the volume of the second input.
So use,
ffmpeg.exe -y -hide_banner -ss 00:00:00.067 -i promo.mov -i tag.mov -filter_complex "[1:v]setpts=PTS+6.5/TB[1v];[0:v][1v]overlay=enable=gte(t\,6.5)[vout];[1:a]adelay=6.5s,apad,asplit=2[1amix][1aref];[0:a][1aref]sidechaincompress[0asc];[0asc][1amix]amix=inputs=2:duration=first[aout]" -map [vout] -map [aout] -c:v mpeg2video -c:a pcm_s16le -ar 48000 -af loudnorm=I=-20:print_format=summary -preset ultrafast -q:v 0 -t 10 complete.mxf
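The ducking part of that graph can be tried in isolation on synthetic tones. A minimal sketch, assuming an ffmpeg build with lavfi; the file name ducked.wav is made up, and the delay is written in milliseconds (adelay=2000) since the 2s form needs a newer ffmpeg:

```shell
# Tone 0 plays for the full 6 s; tone 1 starts at 2 s (adelay), is padded with
# silence (apad), and split in two: one copy drives the compressor's sidechain,
# the other is mixed into the output so it is actually audible.
ffmpeg -v error -y -f lavfi -i "sine=frequency=220:duration=6" \
  -f lavfi -i "sine=frequency=880:duration=4" \
  -filter_complex "[1:a]adelay=2000,apad,asplit=2[mix][sc];[0:a][sc]sidechaincompress[duck];[duck][mix]amix=inputs=2:duration=first[aout]" \
  -map "[aout]" ducked.wav
```

The low tone should dip in level from the 2-second mark onward, when the delayed high tone starts feeding the sidechain.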
I'm trying to use ffmpeg to render a video that takes an audio file and an image as inputs and produces a video with the same dimensions as the image, with the audio playing for the duration of the video (basically a music video).
I have this working for flac and mp3 files, my ffmpeg command for mp3 is below:
ffmpeg -loop 1 -framerate 2 -i "front.png" -i "testMP3file.mp3" -vf "scale=2*trunc(iw/2):2*trunc(ih/2),setsar=1" -c:v libx264 -preset medium -tune stillimage -crf 18 -c:a copy -shortest -pix_fmt yuv420p -strict -2 "testMP3fileOutput1.mp4"
How can I take WAV audio files as input instead of MP3? Is there a different codec I need to specify? This post talks about downloading libfaac and using that; is there any way to take a WAV audio file as input using just ffmpeg, without downloading a separate library?
Just use a WAV file as input and change -c:a copy to -c:a aac (or omit -c:a if you want to use the default encoder which is -c:a aac for MP4 output):
ffmpeg -loop 1 -framerate 2 -i "front.png" -i "testMP3file.wav" -vf "scale=2*trunc(iw/2):2*trunc(ih/2),setsar=1,format=yuv420p" -c:v libx264 -preset medium -tune stillimage -crf 18 -c:a aac -shortest "testMP3fileOutput1.mp4"
Other changes:
No need for -strict -2: it does nothing. You can remove that from your command too.
I replaced -pix_fmt yuv420p with format=yuv420p so all your filtering is contained in the filtergraph.
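ffmpeg decodes WAV (PCM) natively, so no extra library is needed; MP4 simply can't carry raw PCM sensibly, hence the re-encode to AAC. You can verify the resulting audio codec with ffprobe. A sketch on synthetic inputs, with made-up file names:

```shell
# A 2 fps colour frame stands in for the PNG, a sine tone for the WAV.
ffmpeg -v error -y -f lavfi -i "color=c=blue:size=320x240:rate=2:duration=2" \
  -f lavfi -i "sine=frequency=440:duration=2" \
  -c:v libx264 -pix_fmt yuv420p -c:a aac -shortest wav_to_mp4_test.mp4
# The audio stream of the MP4 should now report "aac".
ffprobe -v error -select_streams a:0 -show_entries stream=codec_name -of csv=p=0 wav_to_mp4_test.mp4
```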
I want to create a video from a combination of these files: a single audio file, a still image as the background, and multiple image frames shown at several times. I achieved this with a video file following this help. Now I have made a failed attempt at the same approach with audio, and got an error, which is unsurprising because I still lack knowledge of FFmpeg.
Following is my failed attempt, with the error: Output with label 'v2' does not exist in any defined filter graph, or was already used elsewhere.
ffmpeg -y -loop 1 -i bg.jpg -i img/%07d.png -i dia.mp3 -c:v libx264 -tune stillimage -pix_fmt yuv420p -c:a aac -b:a 128k -shortest -vf "[0:v]scale=1280:1280:force_original_aspect_ratio=increase,crop=1280:1280[v1],[v1][2]overlay=10:10:enable='between(t,0,6)'[v2]" -map "[v2]" out.mp4 2>&1
Use
ffmpeg -y -loop 1 -i bg.jpg -i img/%07d.png -i dia.mp3 -c:v libx264 -tune stillimage -pix_fmt yuv420p -c:a aac -b:a 128k -shortest -filter_complex "[0:v]scale=1280:1280:force_original_aspect_ratio=increase,crop=1280:1280[v1];[v1][1]overlay=10:10:enable='between(t,0,6)'" out.mp4 2>&1
The pad numbering is wrong, and there should be a semicolon, not a comma, after the background image's scale/crop chain. A graph with labelled inputs from more than one file also has to be given with -filter_complex rather than -vf.
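In a filtergraph, commas join filters into one linear chain while semicolons separate chains. The corrected graph shape can be checked on synthetic sources; a sketch with made-up colours and sizes standing in for bg.jpg and the PNG frames:

```shell
# Grey canvas stands in for the background image, a small red square for the overlay.
ffmpeg -v error -y -f lavfi -i "color=c=gray:size=600x400:duration=1" \
  -f lavfi -i "color=c=red:size=50x50:duration=1" \
  -filter_complex "[0:v]scale=1280:1280:force_original_aspect_ratio=increase,crop=1280:1280[v1];[v1][1:v]overlay=10:10:enable='between(t,0,6)'" \
  -c:v libx264 -pix_fmt yuv420p -t 1 graph_shape_test.mp4
# The output frame should be exactly 1280x1280 after the scale/crop chain.
ffprobe -v error -select_streams v:0 -show_entries stream=width,height -of csv=p=0 graph_shape_test.mp4
```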
So I have a video called 1.mkv and would like to mix in a variety of different audio clips at certain points. To do this I'm using the -filter_complex option. However, I'm running into some problems: when ffmpeg tries to mix in the first audio stream, the audio works for a short while during that clip and then all audio cuts out. I'm running ffmpeg version 2.8.15-0 (which is up to date with my distro). Another "weird" thing about the video output is that in xplayer the video freezes after the audio cuts out, and works again if you skip far enough ahead in the video (not sure if this is helpful, but it might give some extra clues).
Full command:
ffmpeg -i "1.mkv" -i "5.wav" -i "2.wav" -i "3.wav" -i "6.wav" -i "7.wav" -i "4.wav" -i "9.wav" -i "8.wav" -i "10.wav" -filter_complex "[0:0]setdar=4/3[v0];
[2:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=20000|20000,volume=0.5[ad2];
[4:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=900000|900000,volume=0.5[ad4];
[3:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=300000|300000,volume=0.5[ad3];
[1:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=720000|720000,volume=0.5[ad1];
[7:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=1140000|1140000,volume=0.5[ad7];
[9:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=1260000|1260000,volume=0.5[ad9];
[8:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=1020000|1020000,volume=0.5[ad8];
[5:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=960000|960000,volume=0.5[ad5];
[6:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=480000|480000,volume=0.5[ad6];
[0:1]volume=1[ad0];
[ad0][ad2][ad4][ad3][ad1][ad7][ad9][ad8][ad5][ad6]amix=inputs=10:duration=first:dropout_transition=0,dynaudnorm[a0]" -map "[v0]" -map "[a0]" -c:v libx264 -ar 44100 -c:a libmp3lame -preset ultrafast -crf 17 -b:v 1M out2.flv
Partial command:
ffmpeg -i "1.mkv" -i "2.wav" -filter_complex "[0:0]setdar=4/3[v0];
[1:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=20000|20000,volume=0.5[ad2];
[0:1]volume=1[ad0];
[ad0][ad2]amix=inputs=2:duration=first:dropout_transition=0,dynaudnorm[a0]" -map "[v0]" -map "[a0]" -c:v libx264 -ar 44100 -c:a libmp3lame -preset ultrafast -crf 17 -b:v 1M out2.flv
So I managed to solve this by playing around with the audio filters. The fix was converting the mono stream into stereo before applying the audio filters. I considered deleting the question, but I'll leave it up in case someone has the same problem in the future.
Mono to stereo:
[1][1]amerge=inputs=2[a1]
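Feeding the same mono stream into both inputs of amerge duplicates it into two channels. A quick synthetic check that the result really is two-channel, with a made-up output name:

```shell
# Feed one mono tone into amerge twice to produce a 2-channel stream.
ffmpeg -v error -y -f lavfi -i "sine=frequency=440:duration=1" \
  -filter_complex "[0:a][0:a]amerge=inputs=2[st]" \
  -map "[st]" mono_to_stereo_test.wav
ffprobe -v error -select_streams a:0 -show_entries stream=channels -of csv=p=0 mono_to_stereo_test.wav
```

ffmpeg will warn that the input channel layouts overlap, which is harmless here; pan=stereo|c0=c0|c1=c0 is an alternative that avoids the warning.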
I have a script that takes a video file as input (generally AVI or MP4) and converts it to a "lower quality" MKV video optimized for web streaming.
The ffmpeg command I use is this one:
ffmpeg -fflags +genpts -i file:"$input" -sn -codec:v:0 libx264 -force_key_frames expr:gte\(t,n_forced*5\) -vf "scale=trunc(min(max(iw\,ih*dar)\,1280)/2)*2:trunc(ow/dar/2)*2" -pix_fmt yuv420p -preset superfast -crf 23 -b:v 1680000 -maxrate 1680000 -bufsize 3360000 -vsync vfr -profile:v high -level 41 -map_metadata -1 -threads 8 -codec:a:0 libmp3lame -ac 2 -ab 320000 -af "aresample=async=1" -y "$output"
The problem is that this command only includes the first audio track of my video. I have some dual language videos (italian and english) for which I want to include both languages.
Is there a simple ffmpeg command option that automatically includes all audio tracks found in a video?
Add -map 0:a to include all audio streams.
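A quick way to confirm the effect: build a file with two audio streams, then remux it with -map 0:a and count the audio streams that survive. A sketch with made-up names, assuming an ffmpeg build with lavfi:

```shell
# Create a clip with one video stream and two audio streams.
ffmpeg -v error -y -f lavfi -i "testsrc=size=160x120:duration=1" \
  -f lavfi -i "sine=frequency=440:duration=1" \
  -f lavfi -i "sine=frequency=220:duration=1" \
  -map 0:v -map 1:a -map 2:a -c:v libx264 -pix_fmt yuv420p -c:a aac -shortest two_tracks.mp4
# -map 0:a selects every audio stream of input 0, so both tracks come through.
ffmpeg -v error -y -i two_tracks.mp4 -map 0:a -c:a copy all_tracks.mka
ffprobe -v error -select_streams a -show_entries stream=index -of csv=p=0 all_tracks.mka | wc -l
```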