ffmpeg HLS multisize with no audio

I need to implement HLS video conversion and I've found a shell script that works almost perfectly. Here it is:
VIDEO_IN=test.mov
VIDEO_OUT=master
HLS_TIME=4
FPS=25
GOP_SIZE=100
PRESET_P=veryslow
V_SIZE_1=960x540
V_SIZE_2=416x234
V_SIZE_3=640x360
V_SIZE_4=768x432
V_SIZE_5=1280x720
V_SIZE_6=1920x1080
# HLS
ffmpeg -i $VIDEO_IN -y \
-preset $PRESET_P -keyint_min $GOP_SIZE -g $GOP_SIZE -sc_threshold 0 -r $FPS -c:v libx264 -pix_fmt yuv420p \
-map v:0 -s:0 $V_SIZE_1 -b:v:0 2M -maxrate:0 2.14M -bufsize:0 3.5M \
-map v:0 -s:1 $V_SIZE_2 -b:v:1 145k -maxrate:1 155k -bufsize:1 220k \
-map v:0 -s:2 $V_SIZE_3 -b:v:2 365k -maxrate:2 390k -bufsize:2 640k \
-map v:0 -s:3 $V_SIZE_4 -b:v:3 730k -maxrate:3 781k -bufsize:3 1278k \
-map v:0 -s:4 $V_SIZE_4 -b:v:4 1.1M -maxrate:4 1.17M -bufsize:4 2M \
-map v:0 -s:5 $V_SIZE_5 -b:v:5 3M -maxrate:5 3.21M -bufsize:5 5.5M \
-map v:0 -s:6 $V_SIZE_5 -b:v:6 4.5M -maxrate:6 4.8M -bufsize:6 8M \
-map v:0 -s:7 $V_SIZE_6 -b:v:7 6M -maxrate:7 6.42M -bufsize:7 11M \
-map v:0 -s:8 $V_SIZE_6 -b:v:8 7.8M -maxrate:8 8.3M -bufsize:8 14M \
-map a:0 -map a:0 -map a:0 -map a:0 -map a:0 -map a:0 -map a:0 -map a:0 -map a:0 -c:a aac -b:a 128k -ac 1 -ar 44100 \
-f hls -hls_time $HLS_TIME -hls_playlist_type vod -hls_flags independent_segments \
-master_pl_name $VIDEO_OUT.m3u8 \
-hls_segment_filename HLS/stream_%v/s%06d.ts \
-strftime_mkdir 1 \
-var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2 v:3,a:3 v:4,a:4 v:5,a:5 v:6,a:6 v:7,a:7 v:8,a:8" HLS/stream_%v.m3u8
It works as expected when test.mov has an audio track. But if the input is, for example, a screencast recorded with QuickTime and has no audio track, the command fails with this error:
Stream map 'a:0' matches no streams.
To ignore this, add a trailing '?' to the map.
I tried to do what it recommends and added ? to all the audio mappings, like:
-map a:0?
In that case it failed on -var_stream_map, which doesn't accept optional mappings.
I've also found how to add an empty audio track ("adding silent audio in ffmpeg"), but I had no luck combining it with the script above.
Can anyone help me change the script so that it accepts files both with and without audio?
P.S. I honestly read the official ffmpeg documentation, but it didn't help at all.

It seems this is due to the HLS muxer not parsing/using the trailing '?'. Unfortunately, there is currently nothing you can do differently in your call to ffmpeg.
The workaround I use is to call
ffprobe -show_streams -select_streams a -i $VIDEO_IN
If the resulting list of streams is empty, there is no audio stream, and you can adjust your ffmpeg call accordingly.
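A minimal sketch of that workaround, assuming a POSIX shell and ffprobe on the PATH. The function names and rendition count are illustrative, not part of the original script:

```shell
#!/bin/sh
# Sketch: probe the input for audio streams, then build a matching
# -var_stream_map value. Function names here are illustrative.

has_audio() {
  # Prints "yes" if ffprobe reports at least one audio stream, else "no".
  probe=$(ffprobe -v error -select_streams a \
          -show_entries stream=codec_type -of csv=p=0 "$1")
  [ -n "$probe" ] && echo yes || echo no
}

build_stream_map() {
  # $1 = number of video renditions, $2 = "yes" if the input has audio.
  # Pairs each video rendition with an audio stream only when audio exists.
  n=$1; audio=$2; map=""; i=0
  while [ "$i" -lt "$n" ]; do
    if [ "$audio" = yes ]; then
      map="$map v:$i,a:$i"
    else
      map="$map v:$i"
    fi
    i=$((i + 1))
  done
  echo "${map# }"   # strip the leading space
}

build_stream_map 3 yes   # prints: v:0,a:0 v:1,a:1 v:2,a:2
build_stream_map 3 no    # prints: v:0 v:1 v:2
```

The resulting string can then be passed to -var_stream_map, and the -map a:0 repetitions dropped when has_audio says "no".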

Related

FFMPEG how to combine -filter_complex with h264 and output to stdout

I need to stream output that is generated with ffmpeg (audio to video); for this I'm using -filter_complex avectorscope.
I am testing the pipe with ffplay, something like this works:
ffmpeg -i video.mp4 -f h264 - | ffplay -
and something like this also works fine (writing to a file):
ffmpeg -i input.mp3 -filter_complex "[0:a]avectorscope=s=1920x1080,format=yuv420p[v]" \
-map "[v]" -map 0:a -vcodec libx264 avectorscope.mp4
But what I really need is something like this:
ffmpeg -i input.mp3 -filter_complex "[0:a]avectorscope=s=1920x1080,format=yuv420p[v]" \
-map "[v]" -map 0:a -vcodec libx264 -f h264 - | ffplay -
but when I try that I get this error:
Automatic encoder selection failed for output stream #0:1. Default encoder for format h264 (codec none) is probably disabled. Please choose an encoder manually.
Error selecting an encoder for stream 0:1
pipe:: Invalid data found when processing input
So I can encode it to a file but not encode it to a pipe with the same flags.
I also tried other formats (-f flv and -f mpegts) and they also work (for ffplay), but they don't work with other tools that require a raw H.264 stream as input.
I hope someone can help!
-f h264 represents a raw H.264 bitstream, so audio can't be included.
Use a container that supports multiplexing, e.g. nut:
ffmpeg -i input.mp3 -filter_complex "[0:a]avectorscope=s=1920x1080,format=yuv420p[v]" \
-map "[v]" -map 0:a -vcodec libx264 -f nut - | ffplay -f nut -

Optimize ffmpeg overlay and loop filters

I have a 30-second video, video.mp4, and an audio file that can vary in length, audio.mp3.
The goal is an output video that loops video.mp4 for the total length of audio.mp3, with an overlay of the waveform of audio.mp3. What I've done so far, in a bash script, is this:
# calculate length of the audio and of the video
tot=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 audio.mp3)
vid=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 video.mp4)
# how many copies of the base video do we need to cover the audio?
repeattime=$(echo "scale=0; ($tot+$vid-1)/$vid" | bc)
# ffmpeg final command
ffmpeg -stream_loop $repeattime -i video.mp4 -i audio.mp3 -filter_complex "[1:a]showwaves=s=1280x100:colors=Red:mode=cline:rate=25:scale=sqrt[outputwave]; [0:v][outputwave] overlay=0:main_h-overlay_h [out]" -map '[out]' -map '1:a' -c:a copy -y output.mp4
Is there a better way to do it in a single ffmpeg command? I know ffmpeg has a loop filter, but it loops frames and I don't know the number of frames in video.mp4. Also, using $repeattime can result in more loops than needed (because the calculation rounds up).
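As a side note on the rounding: with scale=0, bc truncates rather than rounds, and the ($tot+$vid-1)/$vid trick only gives a true ceiling for integer durations. A sketch of a ceiling that also handles the fractional durations ffprobe reports, using awk (the hard-coded values are examples, not taken from the question):

```shell
#!/bin/sh
# Sketch: ceiling division for possibly fractional durations via awk.
# tot and vid would normally come from ffprobe; example values here.
tot=70.5   # total audio length in seconds (example value)
vid=30.0   # base video length in seconds (example value)

ceil_div() {
  # Prints ceil($1 / $2) even when the operands are fractional.
  awk -v a="$1" -v b="$2" 'BEGIN {
    q = a / b
    print (q == int(q)) ? q : int(q) + 1
  }'
}

repeattime=$(ceil_div "$tot" "$vid")
echo "$repeattime"   # 70.5 / 30.0 rounds up to 3 loops
```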
-shortest helps you:
#!/bin/bash
ffmpeg -hide_banner -stream_loop -1 -i "input 1.mp4" -i "input 1.mp3" -filter_complex "
[1:a]showwaves=s=1280x100:colors=Red:mode=cline:rate=25:scale=sqrt[outputwave];
[0:v][outputwave] overlay=0:main_h-overlay_h [v]
" -map [v] -map 1:a -c:a copy -shortest -y output.mp4
You can use the shortest option in overlay.
ffmpeg -stream_loop -1 -i video.mp4 -i audio.mp3 -filter_complex "[1:a]showwaves=s=1280x100:colors=Red:mode=cline:rate=25:scale=sqrt[outputwave]; [0:v][outputwave] overlay=0:main_h-overlay_h:shortest=1 [out]" -map '[out]' -map '1:a' -c:a copy -y output.mp4

How to convert MKV movies 5.1 audio tracks to 2.0 (Stereo) but keep the original ones

To solve a problem I had where 5.1 movies had really quiet dialogue, I'm using FFmpeg to convert every audio track of my MKV movies to a 2.0 track with audio normalization, leaving video and subtitles intact.
Here's what the command looks like:
for /r %%i in (*.mkv) do (
#ffmpeg.exe -hide_banner -v 32 -stats -y -i "%%i" -map 0:v -map 0:a -map 0:s? -c:s copy -c:v copy -acodec ac3 -ac 2 -ar 48000 -ab 640k -af %aproc2% -f matroska "%%~ni [Stereo].mkv"
)
What I'd like to do now is have these converted audio tracks added to the MKV alongside the 5.1 tracks, rather than replacing the originals, which I may want in the future.
I'm not really an FFmpeg expert, so I'm looking for some help.
Use
for /r %%i in (*.mkv) do (
#ffmpeg.exe -hide_banner -v 32 -stats -y -i "%%i" -map 0:v -map 0:a -map 0:a -map 0:s? -c:s copy -c:v copy -c:a:0 ac3 -ac:a:0 2 -ar:a:0 48000 -ab:a:0 640k -filter:a:0 %aproc2% -c:a:1 copy -f matroska "%%~ni [Stereo].mkv"
)
The audio is mapped twice. All audio options have an output stream specifier attached, so they only apply to the first audio output, and the codec for the second audio output is set to copy.
For inputs with multiple audio tracks, you'll need multiple commands:
for /r %%i in (*.mkv) do (
#ffmpeg.exe -hide_banner -v 32 -stats -y -i "%%i" -map 0:a -c:a ac3 -ac 2 -ar 48000 -ab 640k -filter:a %aproc2% -f matroska "%%~dpni [Stereo].mka"
#ffmpeg.exe -hide_banner -v 32 -stats -y -i "%%i" -i "%%~dpni [Stereo].mka" -map 0:v -map 0:a -map 1:a -map 0:s? -c copy -f matroska "%%~ni [Stereo].mkv"
)

How to input an audio file, generate video, split, crop and overlay to output a kaleidoscope effect

I need to create an FFMPEG script which reads in an audio file ("testloop.wav" in this example), generates a video from the waveform using the "showcqt" filter, and then crops and overlays the output from that to generate a kaleidoscope effect. This is the code I have so far - the generation of the initial video and the output section work correctly, but there is a fault in the split, crop and overlay section which I cannot trace.
ffmpeg -i "testloop.wav" -i "testloop.wav" \
-filter_complex "[0:a]showcqt,format=yuv420p[v]" -map "[v]" \
"split [tmp1][tmp2]; \
[tmp1] crop=iw:(ih/3)*2:0:0, pad=0:ih+ih/2 [top]; \
[tmp2] crop=iw:ih/3:0:(ih/3)*2, hflip [bottom]; \
[top][bottom] overlay=0:(H/3)*2" \
-map 1:a:0 -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart "${i%.wav}.mp4"
You can't split a filter graph across separate quoted strings or define multiple filter_complexes. Also, there's no need to feed the input twice.
ffmpeg -i "testloop.wav" \
-filter_complex "[0:a]showcqt,format=yuv420p, \
split [tmp1][tmp2]; \
[tmp1] crop=iw:(ih/3)*2:0:0, pad=0:ih+ih/2 [top]; \
[tmp2] crop=iw:ih/3:0:(ih/3)*2, hflip [bottom]; \
[top][bottom] overlay=0:(H/3)*2" \
-c:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p \
-c:a aac -strict -2 -b:a 384k -ar 48000 -movflags +faststart out.mp4
(I'm not debugging the logic of the effect you're trying to achieve. Only the syntax)

ffmpeg error "Buffer queue overflow, dropping." when merging two videos with delay

I want to merge two videos (for example, the iPhone video from https://peach.blender.org/trailer-page/). The videos are placed on a background image with the overlay filter, and the second video starts 3 seconds later.
I also need the audio to be mixed.
Here is my code:
ffmpeg \
-loop 1 -i background.png \
-itsoffset 0 -i trailer_iphone.m4v \
-itsoffset 3 -i trailer_iphone.m4v \
\
-y \
-t 36 \
-filter_complex "
[2:a] adelay=3000 [2delayed];
[1:a][2delayed] amerge=inputs=2 [audio];
[0][1:v] overlay=10:10:enable='between(t,0,33)' [lv1];
[lv1][2:v] overlay=10:300:enable='between(t,0,36)' [video]
" \
\
-threads 0 \
-map "[video]" -map "[audio]" \
-vcodec libx264 -acodec aac \
merged-video.mp4
I get the error message:
[Parsed_overlay_3 @ 0x7fe892502ac0] [framesync @ 0x7fe892502b88] Buffer queue overflow, dropping.
And the merged video has many dropped frames.
I know there are some other postings with this error message, but the suggested solutions don't work for me.
How can I fix the problem?
FFmpeg is dropping frames from [2:v] because the processing of the [0][1:v] overlay is taking longer than the frame-drop threshold.
Insert a fifo filter on [2:v] to avoid this:
ffmpeg -loop 1 -i background.png \
-itsoffset 0 -i trailer_iphone.m4v \
-itsoffset 3 -i trailer_iphone.m4v \
-t 36 -filter_complex \
"[2:a]adelay=3000[2delayed];[1:a][2delayed]amerge=inputs=2[audio];
[0][1:v]overlay=10:10:enable='between(t,0,33)'[lv1];
[2:v]fifo[2f];[lv1][2f]overlay=10:300:enable='between(t,0,36)'[video]" \
-threads 0 -map "[video]" -map "[audio]" -vcodec libx264 -acodec aac merged-video.mp4
(For stereo audio, it should be adelay=3000|3000)
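Since adelay takes one delay value per channel, separated by '|', a small sketch of building that argument for an arbitrary channel count (assuming a POSIX shell; the function name and values are illustrative):

```shell
#!/bin/sh
# Sketch: build an adelay argument with one delay value per audio channel.
# adelay expects delays separated by '|', e.g. "3000|3000" for stereo.

build_adelay() {
  # $1 = delay in milliseconds, $2 = number of channels.
  delay_ms=$1; channels=$2; arg=""; i=0
  while [ "$i" -lt "$channels" ]; do
    arg="$arg$delay_ms|"
    i=$((i + 1))
  done
  echo "${arg%|}"   # strip the trailing separator
}

build_adelay 3000 2   # stereo: prints 3000|3000
build_adelay 3000 6   # 5.1:    prints 3000|3000|3000|3000|3000|3000
```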
