ffmpeg add all audio tracks to video conversion (mkv) - linux

I have a script that takes a video file as input (generally AVI or MP4) and converts it to a "lower quality" MKV video optimized for web streaming.
The ffmpeg command I use is this one:
ffmpeg -fflags +genpts -i file:"$input" -sn -codec:v:0 libx264 -force_key_frames expr:gte\(t,n_forced*5\) -vf "scale=trunc(min(max(iw\,ih*dar)\,1280)/2)*2:trunc(ow/dar/2)*2" -pix_fmt yuv420p -preset superfast -crf 23 -b:v 1680000 -maxrate 1680000 -bufsize 3360000 -vsync vfr -profile:v high -level 41 -map_metadata -1 -threads 8 -codec:a:0 libmp3lame -ac 2 -ab 320000 -af "aresample=async=1" -y "$output"
The problem is that this command only includes the first audio track of my video. I have some dual-language videos (Italian and English) for which I want to include both languages.
Is there a simple ffmpeg command option that automatically includes all audio tracks found in a video?

Add -map 0:a to include all audio streams. Note that using any explicit -map disables ffmpeg's automatic stream selection, so map the video as well (e.g. -map 0:v:0 -map 0:a), and change -codec:a:0 to -codec:a so the encoder applies to every audio track rather than just the first.
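A minimal sketch of the adjusted flags, with placeholder file names standing in for the script's $input and $output (the other options from the original command are omitted for brevity, and the command is echoed rather than run):

```shell
#!/bin/sh
# Placeholder names; the real script uses $input and $output.
input="in.mp4"
output="out.mkv"

# -map 0:v:0 keeps the first video stream and -map 0:a keeps every
# audio stream; -codec:a (without the :0 index) makes the audio
# encoder apply to all mapped audio tracks.
cmd="ffmpeg -i $input -map 0:v:0 -map 0:a -codec:v libx264 -codec:a libmp3lame -ac 2 -ab 320000 -y $output"

# Echoed rather than executed so the sketch needs no media files.
echo "$cmd"
```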

ffmpeg : how to take wav as audio input for creating a video?

I'm trying to use ffmpeg for rendering video where an audio file and image are taken as inputs, and turned into a video with the same dimensions as the image with the audio file playing for the duration of the video (basically a music video).
I have this working for flac and mp3 files, my ffmpeg command for mp3 is below:
ffmpeg -loop 1 -framerate 2 -i "front.png" -i "testMP3file.mp3" -vf "scale=2*trunc(iw/2):2*trunc(ih/2),setsar=1" -c:v libx264 -preset medium -tune stillimage -crf 18 -c:a copy -shortest -pix_fmt yuv420p -strict -2 "testMP3fileOutput1.mp4"
How can I take WAV audio files as input instead of MP3? Is there a different codec I need to specify? This post talks about downloading libfaac and using that; is there any way to take a WAV audio file as input using just ffmpeg, without downloading a separate library?
Just use a WAV file as input and change -c:a copy to -c:a aac (or omit -c:a if you want to use the default encoder which is -c:a aac for MP4 output):
ffmpeg -loop 1 -framerate 2 -i "front.png" -i "testMP3file.wav" -vf "scale=2*trunc(iw/2):2*trunc(ih/2),setsar=1,format=yuv420p" -c:v libx264 -preset medium -tune stillimage -crf 18 -c:a aac -shortest "testMP3fileOutput1.mp4"
Other changes:
No need for -strict -2: the native AAC encoder is no longer experimental, so that option does nothing. You can remove it from your MP3 command too.
I replaced -pix_fmt yuv420p with format=yuv420p so all your filtering is contained in the filtergraph.

FFMPEG 1 Audio + 1 Still image + multiple image frames at several times

I want to create a video from a combination of all these files: a single audio file, a still image in the background, and multiple image frames overlaid at several times. I achieved this with a video file thanks to this help. Now I have made a failed attempt to do the same with audio instead, but got an error, which is no surprise since I still lack knowledge of FFmpeg.
Following is my failed attempt, which gives the error: Output with label 'v2' does not exist in any defined filter graph, or was already used elsewhere.
ffmpeg -y -loop 1 -i bg.jpg -i img/%07d.png -i dia.mp3 -c:v libx264 -tune stillimage -pix_fmt yuv420p -c:a aac -b:a 128k -shortest -vf "[0:v]scale=1280:1280:force_original_aspect_ratio=increase,crop=1280:1280[v1],[v1][2]overlay=10:10:enable='between(t,0,6)'[v2]" -map "[v2]" out.mp4 2>&1
Use
ffmpeg -y -loop 1 -i bg.jpg -i img/%07d.png -i dia.mp3 -c:v libx264 -tune stillimage -pix_fmt yuv420p -c:a aac -b:a 128k -shortest -filter_complex "[0:v]scale=1280:1280:force_original_aspect_ratio=increase,crop=1280:1280[v1];[v1][1]overlay=10:10:enable='between(t,0,6)'" out.mp4 2>&1
The pad numbering is wrong: [v1][2] feeds input 2 (the MP3) to overlay instead of input 1 (the image sequence). There should also be a semicolon, not a comma, after the background image's scale/crop chain, and a multi-input filtergraph belongs in -filter_complex rather than -vf.

Generate Video From Images and Audio

I have a list of images: 1.png, 2.png... and a list of audio files 1.mp3, 2.mp3...
I'd like to generate a video file where audio clips are concatenated, and each image is displayed over the corresponding audio clip:
Think of the images as slides in a slideshow, and the audio as narration for the slide.
Are there any frameworks which would allow me to do this? I'd like to use FFmpeg CLI or something high level if possible.
Lazy method is to make each segment then concatenate with the concat demuxer:
ffmpeg -loop 1 -i 1.png -i 1.mp3 -c:v libx264 -tune stillimage -vf format=yuv420p -c:a aac -shortest 1.mp4
ffmpeg -loop 1 -i 2.png -i 2.mp3 -c:v libx264 -tune stillimage -vf format=yuv420p -c:a aac -shortest 2.mp4
ffmpeg -f concat -i input.txt -c copy -movflags +faststart output.mp4
This assumes the image files are the same width & height, and the audio files have the same channel layout & sample rate.
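The input.txt passed to the concat demuxer is just a list of file directives; a sketch that generates it for the per-segment outputs above:

```shell
#!/bin/sh
# Build the concat demuxer list for segments 1.mp4, 2.mp4, 3.mp4.
# Each line is a "file" directive; single quotes guard against
# spaces in file names.
for i in 1 2 3; do
    printf "file '%s.mp4'\n" "$i"
done > input.txt

cat input.txt
```

ffmpeg -f concat -i input.txt then stitches the listed segments together without re-encoding (-c copy).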
More complicated method is to use the concat filter which allows you to do it all in one command, but you'll have to enter the duration of each image segment to match the corresponding audio duration.

FFMPEG amix filter causes main audio stream to cut out

So I have a video called 1.mkv and would like to mix in a variety of different audio clips at certain points. To do this I'm using the -filter_complex option. However, I'm running into some problems: when ffmpeg tries to mix in the first audio stream, the audio works for a short while when the clip is playing and then all audio cuts out. I'm running ffmpeg version 2.8.15-0 (which is up to date with my distro). Another "weird" thing about the video output is that in xplayer the video will freeze after the audio cuts out, and will work if you skip far enough ahead in the video (not sure if this is helpful but it might give some extra clues).
Full command:
ffmpeg -i "1.mkv" -i "5.wav" -i "2.wav" -i "3.wav" -i "6.wav" -i "7.wav" -i "4.wav" -i "9.wav" -i "8.wav" -i "10.wav" -filter_complex "[0:0]setdar=4/3[v0];
[2:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=20000|20000,volume=0.5[ad2];
[4:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=900000|900000,volume=0.5[ad4];
[3:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=300000|300000,volume=0.5[ad3];
[1:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=720000|720000,volume=0.5[ad1];
[7:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=1140000|1140000,volume=0.5[ad7];
[9:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=1260000|1260000,volume=0.5[ad9];
[8:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=1020000|1020000,volume=0.5[ad8];
[5:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=960000|960000,volume=0.5[ad5];
[6:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=480000|480000,volume=0.5[ad6];
[0:1]volume=1[ad0];
[ad0][ad2][ad4][ad3][ad1][ad7][ad9][ad8][ad5][ad6]amix=inputs=10:duration=first:dropout_transition=0,dynaudnorm[a0]" -map "[v0]" -map "[a0]" -c:v libx264 -ar 44100 -c:a libmp3lame -preset ultrafast -crf 17 -b:v 1M out2.flv
partial command
ffmpeg -i "1.mkv" -i "2.wav" -filter_complex "[0:0]setdar=4/3[v0];
[1:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=20000|20000,volume=0.5[ad2];
[0:1]volume=1[ad0];
[ad0][ad2]amix=inputs=2:duration=first:dropout_transition=0,dynaudnorm[a0]" -map "[v0]" -map "[a0]" -c:v libx264 -ar 44100 -c:a libmp3lame -preset ultrafast -crf 17 -b:v 1M out2.flv
So I managed to solve this by playing around with the audio filters. The fix was converting the mono stream into stereo before applying the audio filters. I was considering deleting the question, but I'll leave it up in case someone has the same problem in the future.
mono to stereo
[1][1]amerge=inputs=2[a1]
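Applied to the partial command above, the upmix replaces the plain chain on the wav input; a sketch of the resulting filtergraph (labels follow the question's naming, and the string is echoed rather than passed to ffmpeg so no media files are needed):

```shell
#!/bin/sh
# Sketch: the mono wav (input 1) merged with itself into a stereo
# stream [a1] before the format/delay/volume chain; the rest of the
# graph is unchanged from the partial command.
filter="[0:0]setdar=4/3[v0];\
[1:a][1:a]amerge=inputs=2[a1];\
[a1]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,adelay=20000|20000,volume=0.5[ad2];\
[0:1]volume=1[ad0];\
[ad0][ad2]amix=inputs=2:duration=first:dropout_transition=0,dynaudnorm[a0]"

# Echoed rather than executed so the sketch needs no media files.
echo "$filter"
```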

FFMpeg Shorten Audio Input

I'm trying to synchronise some audio with video - the audio is recorded via an app in the M4U format, and at the same time as the recording starts, 2 cameras are triggered and start recording. When the recording stops, both the audio and the cameras stop recording. These are out of sync by quite a bit, at least a second. The file time lengths are the same, but the audio clearly starts recording earlier than the video.
I can synchronise these manually in Audacity, but I'd like to get it close via FFmpeg.
I've been having a good look around and can find commands for delaying the start of the audio track, but not for cutting off its first few seconds. I'm trying something like this:
ffmpeg -framerate 24 -i .\seq\%%04d.jpg -itsoffset -3 -i audio.m4u -map 0:v -map 1:a -c:v libx264 -pix_fmt yuv420p -crf 23 -profile:v high -movflags faststart -b:v 5000k -shortest out.mp4
Any clues how to remove the first few seconds from the audio input?
The atrim filter does just that:
ffmpeg -framerate 24 -i .\seq\%%04d.jpg -i audio.m4u -map 0:v -map 1:a -af atrim=3,asetpts=N/SR/TB -c:v libx264 -pix_fmt yuv420p -crf 23 -profile:v high -movflags faststart -shortest out.mp4
The asetpts is added to reset the timestamps of the trimmed audio.
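If no other filtering is needed, an input-side alternative is -ss placed before the audio input, which skips that input's first seconds at demux time. A sketch using the question's file names (echoed rather than run, so no media files are needed):

```shell
#!/bin/sh
# -ss placed before -i applies only to the following input, so the
# audio starts 3 seconds in while the image sequence is untouched.
cmd="ffmpeg -framerate 24 -i seq/%04d.jpg -ss 3 -i audio.m4u -map 0:v -map 1:a -c:v libx264 -pix_fmt yuv420p -shortest out.mp4"

# Echoed rather than executed so the sketch needs no media files.
echo "$cmd"
```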
