I have a video that isn't 1920x1080, so I need to make it that size.
I tried the following command:
ffmpeg -i "video.avi" -filter_complex "nullsrc=size=1920x1080 [0:v]; [0:v] overlay=shortest=1:x=200:y=100" -r 30 -c:v libx264 -preset fast -crf 18 -profile:v high -bf 2 -flags +cgop -coder 1 -pix_fmt yuv420p -strict -2 -c:a aac -b:a 384k "video.mp4"
But I got a green frame around the video, like this: http://i.imgur.com/QNVUGb5.jpg
I can't find a way to make the green any other color.
How can I change the green frame to a black frame?
Thanks.
Use the pad filter instead
It is simpler to just use the pad filter to add the frame:
ffmpeg -i input -filter_complex "pad=1920:1080:(ow-iw)/2:(oh-ih)/2" output
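pad fills the added area with black by default, which is the color you want here. If you ever need a different color, pad takes a color option; for example (gray is just an illustration):
ffmpeg -i input -filter_complex "pad=1920:1080:(ow-iw)/2:(oh-ih)/2:color=gray" output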
Less efficient alternatives
Alternatively, if you want to use the overlay filter to add padding then you can use the color source filter instead of nullsrc:
ffmpeg -f lavfi -i color=s=1920x1080:c=black -i video.mp4 -filter_complex "[0][1]overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2:shortest=1" output.mp4
If you still want to use nullsrc then refer to the chromakey filter, but this is inefficient and not a recommended method.
Related
I want to create a video from a combination of files: a single audio file, a still image as the background, and multiple image frames shown at several points in time. I have achieved this with a video file before. Now I have made a failed attempt to do the same with audio, but got an error, which is no surprise because I still lack knowledge of FFmpeg.
Following is my failed attempt, which produces the error: Output with label 'v2' does not exist in any defined filter graph, or was already used elsewhere.
ffmpeg -y -loop 1 -i bg.jpg -i img/%07d.png -i dia.mp3 -c:v libx264 -tune stillimage -pix_fmt yuv420p -c:a aac -b:a 128k -shortest -vf "[0:v]scale=1280:1280:force_original_aspect_ratio=increase,crop=1280:1280[v1],[v1][2]overlay=10:10:enable='between(t,0,6)'[v2]" -map "[v2]" out.mp4 2>&1
Use
ffmpeg -y -loop 1 -i bg.jpg -i img/%07d.png -i dia.mp3 -c:v libx264 -tune stillimage -pix_fmt yuv420p -c:a aac -b:a 128k -shortest -filter_complex "[0:v]scale=1280:1280:force_original_aspect_ratio=increase,crop=1280:1280[v1];[v1][1]overlay=10:10:enable='between(t,0,6)'" out.mp4 2>&1
Labelled filtergraphs with multiple inputs belong in -filter_complex, not -vf. The input pad numbering is also wrong ([2] is the audio; the overlay should take [1], the image sequence), and there should be a semicolon, not a comma, after the background image's scale/crop chain.
I have a list of images: 1.png, 2.png... and a list of audio files 1.mp3, 2.mp3...
I'd like to generate a video file where audio clips are concatenated, and each image is displayed over the corresponding audio clip:
Think of the images as slides in a slideshow, and the audio as narration for the slide.
Are there any frameworks which would allow me to do this? I'd like to use FFmpeg CLI or something high level if possible.
The lazy method is to make each segment and then concatenate them with the concat demuxer:
ffmpeg -loop 1 -i 1.png -i 1.mp3 -c:v libx264 -tune stillimage -vf format=yuv420p -c:a aac -shortest 1.mp4
ffmpeg -loop 1 -i 2.png -i 2.mp3 -c:v libx264 -tune stillimage -vf format=yuv420p -c:a aac -shortest 2.mp4
ffmpeg -f concat -i input.txt -c copy -movflags +faststart output.mp4
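Here input.txt is the list file for the concat demuxer; assuming the two segments above, it would contain:
file '1.mp4'
file '2.mp4'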
This assumes the image files are the same width & height, and the audio files have the same channel layout & sample rate.
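If they differ, each segment can be normalized while it is being made; a sketch assuming a 1280x720 target (the size and the 44100 Hz stereo audio are arbitrary choices):
ffmpeg -loop 1 -i 1.png -i 1.mp3 -c:v libx264 -tune stillimage -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,format=yuv420p" -af "aformat=sample_rates=44100:channel_layouts=stereo" -c:a aac -shortest 1.mp4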
A more complicated method is to use the concat filter, which allows you to do it all in one command, but you'll have to enter the duration of each image segment to match the corresponding audio duration.
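A sketch of that single-command approach with two slides, assuming 1.mp3 runs 5 seconds and 2.mp3 runs 7 seconds (replace the -t values with your real audio durations):
ffmpeg -loop 1 -t 5 -i 1.png -loop 1 -t 7 -i 2.png -i 1.mp3 -i 2.mp3 -filter_complex "[0:v][1:v]concat=n=2:v=1:a=0,format=yuv420p[v];[2:a][3:a]concat=n=2:v=0:a=1[a]" -map "[v]" -map "[a]" -c:v libx264 -c:a aac -movflags +faststart output.mp4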
I'm trying to correctly map and centrally-orient a single showwaves (or showfreqs) overlay against two symmetrically-scrolling showspectrum overlays with ffmpeg, e.g.
ffmpeg -i input.mp3 -filter_complex "[0:a]showspectrum=color=fiery:saturation=1:slide=scroll:scale=log:win_func=gauss:overlap=1:s=960x1080,pad=1920:1080[vs]; [0:a]showspectrum=color=fiery:saturation=2:slide=rscroll:scale=log:win_func=gauss:overlap=1:s=960x1080[ss]; [0:a]showwaves=s=960x540:mode=p2p[sw]; [vs][ss]overlay=w[out]; [out][sw]overlay=w[out]" -map "[out]" -map 0:a -c:v libx264 -preset fast -crf 18 -c:a copy output.mkv
The showwaves overlay is stubbornly fixed in the upper right quadrant. The intent is to have it display horizontally across the center.
Bonus points if you can help me thicken the lines drawn by the showwaves filter.
Use
ffmpeg -i input.mp3 -filter_complex "[0:a]showspectrum=color=fiery:saturation=1:slide=scroll:scale=log:win_func=gauss:overlap=1:s=960x1080,pad=1920:1080[vs]; [0:a]showspectrum=color=fiery:saturation=2:slide=rscroll:scale=log:win_func=gauss:overlap=1:s=960x1080[ss]; [0:a]showwaves=s=1920x540:mode=p2p,inflate[sw]; [vs][ss]overlay=w[out]; [out][sw]overlay=0:(H-h)/2[out]" -map "[out]" -map 0:a -c:v libx264 -preset fast -crf 18 -c:a copy output.mkv
Coordinates are set for the showwaves overlay, and its size is changed to span the full width. The inflate filter is added to simulate "thickness"; in terms of quality, YMMV.
So I have a video called 1.mkv and would like to mix in a variety of different audio clips at certain points. To do this I'm using the -filter_complex option. However, I'm running into problems: when ffmpeg tries to mix in the first audio stream, the audio works for a short while when the clip is playing and then all audio cuts out. I'm running ffmpeg version 2.8.15-0 (which is up to date with my distro). Another "weird" thing about the video output is that in xplayer the video will freeze after the audio cuts out, and will work if you skip far enough ahead in the video (not sure if this is helpful, but it might give some extra clues).
Full command:
ffmpeg -i "1.mkv" -i "5.wav" -i "2.wav" -i "3.wav" -i "6.wav" -i "7.wav" -i "4.wav" -i "9.wav" -i "8.wav" -i "10.wav" -filter_complex "[0:0]setdar=4/3[v0];
[2:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=20000|20000,volume=0.5[ad2];
[4:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=900000|900000,volume=0.5[ad4];
[3:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=300000|300000,volume=0.5[ad3];
[1:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=720000|720000,volume=0.5[ad1];
[7:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=1140000|1140000,volume=0.5[ad7];
[9:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=1260000|1260000,volume=0.5[ad9];
[8:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=1020000|1020000,volume=0.5[ad8];
[5:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=960000|960000,volume=0.5[ad5];
[6:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=480000|480000,volume=0.5[ad6];
[0:1]volume=1[ad0];
[ad0][ad2][ad4][ad3][ad1][ad7][ad9][ad8][ad5][ad6]amix=inputs=10:duration=first:dropout_transition=0,dynaudnorm[a0]" -map "[v0]" -map "[a0]" -c:v libx264 -ar 44100 -c:a libmp3lame -preset ultrafast -crf 17 -b:v 1M out2.flv
Partial command:
ffmpeg -i "1.mkv" -i "2.wav" -filter_complex "[0:0]setdar=4/3[v0];
[1:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=20000|20000,volume=0.5[ad2];
[0:1]volume=1[ad0];
[ad0][ad2]amix=inputs=2:duration=first:dropout_transition=0,dynaudnorm[a0]" -map "[v0]" -map "[a0]" -c:v libx264 -ar 44100 -c:a libmp3lame -preset ultrafast -crf 17 -b:v 1M out2.flv
So I managed to solve this by playing around with the audio filters. The fix was converting the mono stream to stereo before applying the audio filters. I was considering deleting the question, but I'll leave it up in case someone has the same problem in the future.
Mono to stereo:
[1][1]amerge=inputs=2[a1]
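For reference, a sketch of that fix slotted into the partial command above, assuming 2.wav is the mono stream:
ffmpeg -i "1.mkv" -i "2.wav" -filter_complex "[0:0]setdar=4/3[v0];
[1][1]amerge=inputs=2[a1];
[a1]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=20000|20000,volume=0.5[ad2];
[0:1]volume=1[ad0];
[ad0][ad2]amix=inputs=2:duration=first:dropout_transition=0,dynaudnorm[a0]" -map "[v0]" -map "[a0]" -c:v libx264 -ar 44100 -c:a libmp3lame -preset ultrafast -crf 17 -b:v 1M out2.flv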
I'm trying to synchronise some audio with video. The audio is recorded via an app in the M4U format; at the same time as the recording starts, 2 cameras are triggered and start recording. When the recording stops, both the audio and the cameras stop. They are out of sync by quite a bit, at least a second. The file lengths are the same, but the audio clearly starts recording earlier than the video.
I'm trying to synchronise these. I can do it manually in Audacity, but I'd like to get it close with FFmpeg.
I've had a good look around and can find commands for delaying the start of the audio track, but not for cutting off the first few seconds of the audio. I'm trying something like this:
ffmpeg -framerate 24 -i .\seq\%%04d.jpg -itsoffset -3 -i audio.m4u -map 0:v -map 1:a -c:v libx264 -pix_fmt yuv420p -c:v libx264 -crf 23 -profile:v high -movflags faststart -b 5000k -shortest out.mp4
Any clues on how to remove the first few seconds from the audio input?
The atrim filter does just that:
ffmpeg -framerate 24 -i .\seq\%%04d.jpg -i audio.m4u -map 0:v -map 1:a -af atrim=3,asetpts=N/SR/TB -c:v libx264 -pix_fmt yuv420p -crf 23 -profile:v high -movflags faststart -shortest out.mp4
The asetpts filter is added to reset the timestamps of the trimmed audio.
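If you prefer not to filter, input seeking does the same job; here is a sketch with -ss placed before the audio input so its first 3 seconds are skipped on decode:
ffmpeg -framerate 24 -i .\seq\%%04d.jpg -ss 3 -i audio.m4u -map 0:v -map 1:a -c:v libx264 -pix_fmt yuv420p -crf 23 -profile:v high -movflags faststart -shortest out.mp4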