Improve avconv load and speed? - linux

When I convert a video with avconv the CPU load goes above 95%. Is there any way to reduce the conversion time?

Try using -threads auto, or set it explicitly to the number of cores/threads your CPU has.
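For example, to pin the thread count to the machine's logical CPU count (a minimal sketch; nproc from GNU coreutils and the input/output file names are assumptions, and avconv also accepts a plain number for -threads):
# Use one encoding thread per logical CPU (hypothetical file names)
CORES=$(nproc)
avconv -y -i input.mp4 -threads "$CORES" -vcodec libx264 -b 512k output.mp4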
Here is my full script:
#!/bin/sh
infile=$1
tmpfile="$1-tmp.mp4"
outfile="$1-new.mp4"
options="-vcodec libx264 -b 512k -flags +loop+mv4 -cmp 256 \
-partitions +parti4x4+parti8x8+partp4x4+partp8x8+partb8x8 \
-me_method hex -subq 7 -trellis 1 -refs 5 -bf 3 \
-flags2 +bpyramid+wpred+mixed_refs+dct8x8 -coder 1 -me_range 16 \
-g 250 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -qmin 10 \
-qmax 51 -qdiff 4"
# Half size
options=$options" -vf scale=iw*0.5:-1"
# Copy audio
options=$options" -codec:a copy"
# Log level (info is the default; use quiet/error to silence output)
options=$options" -loglevel info "
# echo "Options : $options"
avconv -y -i "$infile" -threads auto $options "$outfile"
# avconv -y -i "$infile" -an -pass 1 -threads auto $options "$tmpfile"
# avconv -y -i "$infile" -acodec aac -strict experimental -ar 44100 -ab 96k -pass 2 -threads auto $options "$tmpfile"
# qt-faststart "$tmpfile" "$outfile"
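If the encode itself is the bottleneck, another hedged option is to trade some compression efficiency for speed via an x264 preset instead of the long hand-tuned flag list above (a sketch, assuming your avconv/ffmpeg build exposes -preset for libx264):
# Sketch: same scaling and audio copy as above, but let a fast preset
# choose the x264 analysis settings instead of the manual flags
avconv -y -i "$infile" -threads auto \
    -vcodec libx264 -preset veryfast -b 512k \
    -vf scale=iw*0.5:-1 -codec:a copy "$outfile"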

Related

How to put a red frame before each video programmatically when concatenating videos using ffmpeg

To merge files with ffmpeg I'm creating a text file like this:
file 'a.mkv'
file 'b.mkv'
file 'c.mkv'
Then I'm running this command to concat these videos:
ffmpeg -f concat -i file.txt -c copy merged.mkv
But the thing is, I want to insert a red frame for three seconds before each video.
Now I can get these videos' frame dimensions, create such a red video, and modify the text file like this:
file 'red.mkv'
file 'a.mkv'
file 'red.mkv'
file 'b.mkv'
file 'red.mkv'
file 'c.mkv'
But this is not a programmatic approach, so is there any way I can concat the videos and put red frames for three seconds before every video?
I want to generate that red-frame video at run time.
bash script:
#!/bin/bash
intro=a.mkv
main=b.mkv
outro=c.mkv
red=red.mkv
VID=$(ffprobe -v 0 -select_streams v:0 -show_entries stream=codec_name -of default=nw=1:nk=1 "$main")
WID=$(ffprobe -v 0 -select_streams v:0 -show_entries stream=width -of default=nw=1:nk=1 "$main")
HEI=$(ffprobe -v 0 -select_streams v:0 -show_entries stream=height -of default=nw=1:nk=1 "$main")
SAR=$(ffprobe -v 0 -select_streams v:0 -show_entries stream=sample_aspect_ratio -of default=nw=1:nk=1 "$main")
if [ "$SAR" = "N/A" ]; then SAR=1; fi
FPS=$(ffprobe -v 0 -select_streams v:0 -show_entries stream=r_frame_rate -of default=nw=1:nk=1 "$main")
# if video has variable framerate, you can get something like 22840/769
FPS=30
TBN=$(ffprobe -v 0 -select_streams v:0 -show_entries stream=time_base -of default=nw=1:nk=1 "$main")
TBN=${TBN#*/}
FMT=$(ffprobe -v 0 -select_streams v:0 -show_entries stream=pix_fmt -of default=nw=1:nk=1 "$main")
echo $VID $WID $HEI $SAR $FPS $TBN $FMT
AUD=$(ffprobe -v 0 -select_streams a:0 -show_entries stream=codec_name -of default=nw=1:nk=1 "$main")
CHL=$(ffprobe -v 0 -select_streams a:0 -show_entries stream=channel_layout -of default=nw=1:nk=1 "$main")
SRA=$(ffprobe -v 0 -select_streams a:0 -show_entries stream=sample_rate -of default=nw=1:nk=1 "$main")
echo $AUD $CHL $SRA
ffmpeg \
-f lavfi -i "color=c=red:s=${WID}x${HEI}:r=${FPS}:d=3" \
-f lavfi -i "anullsrc=cl=$CHL:r=$SRA" \
-c:v $VID -c:a $AUD -video_track_timescale $TBN -shortest $red -y
echo --- "$red"
ffprobe -v 0 -select_streams v:0 -show_entries stream=codec_name,width,height,sample_aspect_ratio,r_frame_rate,avg_frame_rate,time_base,pix_fmt -of csv=print_section=0 "$red"
ffprobe -v 0 -select_streams a:0 -show_entries stream=codec_name,sample_rate,channels -of csv=print_section=0 "$red"
l="list.txt"
echo "file '$intro'" > $l
for f in $main $outro; do
  [[ ! -f $f ]] && continue
  echo "file '$red'" >> $l
  echo "file '$f'" >> $l
done
cat $l
ffmpeg -f concat -i list.txt -c copy output.mp4 -y
mpv output.mp4
You may have to adapt something in this code for your own inputs.
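If you have an arbitrary number of inputs rather than a fixed intro/main/outro, the list can be built entirely in a loop (a sketch along the same lines, reusing the $red clip generated above and assuming every video should be preceded by it):
l="list.txt"
: > "$l"                       # start with an empty list
for f in a.mkv b.mkv c.mkv; do
  [[ ! -f $f ]] && continue
  echo "file '$red'" >> "$l"   # red clip before every video
  echo "file '$f'" >> "$l"
done
ffmpeg -f concat -i "$l" -c copy output.mp4 -y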

Improve the performance of making a slideshow video using ffmpeg

I have 10 images (1080x1920) and 2 videos (1080x1920).
I want to merge all of them into a final video using this command:
INPUT_DATA=(_img1.jpg vid1.mp4 _img2.jpg vid2.mp4 _img3.jpg _img4.jpg _img5.jpg _img6.jpg _img7.jpg _img8.jpg _img9.jpg _img10.jpg)
IMAGE_TIME_SPAN=5
INPUT_DIR="input"
OUTPUT_DIR="output"
BACKGROUND_MUSIC="$INPUT_DIR/background.mp3"
OUTPUT_STEP1="$OUTPUT_DIR/tmp_video.mp4"
OUTPUT_STEP3="$OUTPUT_DIR/tmp_background.mp3"
OUTPUT_FILE="$OUTPUT_DIR/final.mp4"
SCALE_SIZE="1080x1920"
ffmpeg -y \
-f lavfi -t 1 -i anullsrc \
-i $INPUT_DIR/${INPUT_DATA[0]} \
-i $INPUT_DIR/${INPUT_DATA[1]} \
-i $INPUT_DIR/${INPUT_DATA[2]} \
-i $INPUT_DIR/${INPUT_DATA[3]} \
-i $INPUT_DIR/${INPUT_DATA[4]} \
-i $INPUT_DIR/${INPUT_DATA[5]} \
-i $INPUT_DIR/${INPUT_DATA[6]} \
-i $INPUT_DIR/${INPUT_DATA[7]} \
-i $INPUT_DIR/${INPUT_DATA[8]} \
-i $INPUT_DIR/${INPUT_DATA[9]} \
-i $INPUT_DIR/${INPUT_DATA[10]} \
-i $INPUT_DIR/${INPUT_DATA[11]} \
-filter_complex \
"[1:v]fade=t=in:st=0:d=4:alpha=1,zoompan=z='zoom+0.0009':d=25*$IMAGE_TIME_SPAN:s=$SCALE_SIZE[img1]; \
[2:v]fade=t=in:st=0:d=4:alpha=1,scale=$SCALE_SIZE[v1]; \
[3:v]fade=t=in:st=0:d=4:alpha=1,zoompan=z='zoom+0.0009':d=25*$IMAGE_TIME_SPAN:s=$SCALE_SIZE[img2]; \
[4:v]fade=t=in:st=0:d=4:alpha=1,scale=$SCALE_SIZE[v2]; \
[5:v]fade=t=in:st=0:d=4:alpha=1,zoompan=z='zoom+0.0009':d=25*$IMAGE_TIME_SPAN:s=$SCALE_SIZE[img3]; \
[6:v]fade=t=in:st=0:d=4:alpha=1,zoompan=z='zoom+0.0009':d=25*$IMAGE_TIME_SPAN:s=$SCALE_SIZE[img4]; \
[7:v]fade=t=in:st=0:d=4:alpha=1,zoompan=z='zoom+0.0009':d=25*$IMAGE_TIME_SPAN:s=$SCALE_SIZE[img5]; \
[8:v]fade=t=in:st=0:d=4:alpha=1,zoompan=z='zoom+0.0009':d=25*$IMAGE_TIME_SPAN:s=$SCALE_SIZE[img6]; \
[9:v]fade=t=in:st=0:d=4:alpha=1,zoompan=z='zoom+0.0009':d=25*$IMAGE_TIME_SPAN:s=$SCALE_SIZE[img7]; \
[10:v]fade=t=in:st=0:d=4:alpha=1,zoompan=z='zoom+0.0009':d=25*$IMAGE_TIME_SPAN:s=$SCALE_SIZE[img8]; \
[11:v]fade=t=in:st=0:d=4:alpha=1,zoompan=z='zoom+0.0009':d=25*$IMAGE_TIME_SPAN:s=$SCALE_SIZE[img9]; \
[12:v]fade=t=in:st=0:d=4:alpha=1,zoompan=z='zoom+0.0009':d=25*$IMAGE_TIME_SPAN:s=$SCALE_SIZE[img10]; \
[img1][0:a][v1][0:a][img2][0:a][v2][0:a][img3][0:a][img4][0:a][img5][0:a][img6][0:a][img7][0:a][img8][0:a][img9][0:a][img10][0:a]concat=n=12:v=1:a=1" \
-pix_fmt yuv420p -c:v libx264 \
$OUTPUT_STEP1
I run this command on a virtual machine (Linux, 4 GB of memory) and it takes ~11 min to finish! That is too long and I want to reduce the processing time to less than 5 min.
Any suggestions for better performance?
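No answer was posted in the original thread. One lever that often helps on a small VM (an assumption, not a measured result for this exact command) is to make the libx264 stage cheaper with a faster preset and an explicit thread count, keeping the filter graph unchanged; a minimal single-image sketch of the changed output options (hypothetical paths):
# Apply the same output options to the full command above
ffmpeg -y -i input/_img1.jpg \
    -vf "zoompan=z='zoom+0.0009':d=25*5:s=1080x1920" \
    -pix_fmt yuv420p -c:v libx264 -preset ultrafast -crf 23 -threads $(nproc) \
    output/test_segment.mp4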

ffmpeg add music to video playing after video

I'm using ffmpeg to create a slideshow of images.
I have to add music in the background that plays for the length of the slideshow (cutting the audio if it is longer than the video, or repeating it if it is shorter than the video).
This is my script:
ffmpeg -y \
-loop 1 -i in1.png \
-loop 1 -i in2.png \
-loop 1 -i in3.png \
-loop 1 -i in4.png \
-loop 1 -i in5.png \
-filter_complex \
"[0:v]trim=duration=6,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1,setsar=1:1[v0]; \
[1:v]trim=duration=6,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1,setsar=1:1[v1]; \
[2:v]trim=duration=6,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1,setsar=1:1[v2]; \
[3:v]trim=duration=6,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1,setsar=1:1[v3]; \
[4:v]trim=duration=6,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1,setsar=1:1[v4]; \
[v0][v1][v2][v3][v4]concat=n=5:v=1:a=0,setsar=1:1[v]" -i music.mp3 -shortest -map "[v]" -aspect 16:9 -r 24 shortSlideshow1234.mp4;
This generates output, but the slideshow is silent and there is no music in the video.
You also need to map the audio:
ffmpeg -y \
-loop 1 -framerate 24 -i in1.png \
-loop 1 -framerate 24 -i in2.png \
-loop 1 -framerate 24 -i in3.png \
-loop 1 -framerate 24 -i in4.png \
-loop 1 -framerate 24 -i in5.png \
-i music.mp3 \
-filter_complex \
"[0:v]trim=duration=6,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1,setsar=1[v0]; \
[1:v]trim=duration=6,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1,setsar=1[v1]; \
[2:v]trim=duration=6,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1,setsar=1[v2]; \
[3:v]trim=duration=6,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1,setsar=1[v3]; \
[4:v]trim=duration=6,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1,setsar=1[v4]; \
[v0][v1][v2][v3][v4]concat=n=5:v=1:a=0[v]" -map "[v]" -map 5:a shortSlideshow1234.mp4
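The question also asks for the music to repeat when it is shorter than the slideshow. One way to cover that (a sketch with only two images, assuming an ffmpeg build recent enough to support -stream_loop) is to loop the audio input and cut the output at the video's end with -shortest:
ffmpeg -y \
    -loop 1 -framerate 24 -i in1.png \
    -loop 1 -framerate 24 -i in2.png \
    -stream_loop -1 -i music.mp3 \
    -filter_complex \
    "[0:v]trim=duration=6,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1,setsar=1[v0]; \
     [1:v]trim=duration=6,fade=t=in:st=0:d=1,fade=t=out:st=5:d=1,setsar=1[v1]; \
     [v0][v1]concat=n=2:v=1:a=0[v]" \
    -map "[v]" -map 2:a -shortest shortSlideshow12.mp4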

ffmpeg not working in script - moov atom not found

I made a simple script that divides an flv file into multiple parts, converts them all to .mp4 individually, and then merges all of them to form a final mp4 file. I did this to save time and convert large files in parallel.
However, I am stuck because the ffmpeg command that normally works on the command line doesn't work when run via the script.
I am kind of stuck here and would like some assistance.
#!/bin/bash
#sleep 5
filenametmp=$1;
filename=`echo "$filenametmp" | awk '{split($0,a,"."); print a[1]}'`
echo $filename
output="$filename-output"
filenamewithoutpath=`echo "$output" | awk '{split($0,a,"/"); print a[4]}'`
echo $output $filenamewithoutpath
/usr/bin/ffmpeg -i $filenametmp -c copy -map 0 -segment_time $2 -f segment $output%01d.flv
#sleep 10
#echo "/bin/ls -lrt /root/storage/ | /bin/grep $filenamewithoutpath | /usr/bin/wc -l"
filecounttmp=`/bin/ls -lrt /opt/storage/ | /bin/grep $filenamewithoutpath | /usr/bin/wc -l`
filecount=`expr $filecounttmp - 1`
echo $filecount
for i in `seq 0 $filecount`
do
  suffix=`expr 0000 + $i`
  filenametoconvert="$output$suffix.flv"
  convertedfilename="$output$suffix.mp4"
  echo $filenametoconvert
  /usr/bin/ffmpeg -i $filenametoconvert -c:v libx264 -crf 23 -preset medium -vsync 1 -r 25 -c:a aac -strict -2 -b:a 64k -ar 44100 -ac 1 $convertedfilename > /dev/null 2>&1 &
done
sleep 5
concatstring=""
for j in `seq 0 $filecount`
do
  suffix=`expr 0000 + $j`
  convertedfilenamemp4="$output$suffix.mp4"
  #concatstring=`concat:$concatstring|$convertedfilenamemp4`
  echo "file" $convertedfilenamemp4 >> $filename.txt
  #ffmpeg -i concat:"$concatstring" -codec copy $filename.mp4
  #ffmpeg -f concat -i $filename.txt -c copy $filename.mp4
done
echo $concatstring
ffmpeg -f concat -i $filename.txt -c copy $filename.mp4
rm $output*
rm $filename.txt
I run it on any flv file like this:
./ff.sh /opt/storage/tttttssssssssss_573f5b1cd473202daf2bf694.flv 20
I get this error message:
moov atom not found
I am on Ubuntu 14.04 LTS with a standard installation of ffmpeg.
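No fix was given in the thread. One plausible cause (an assumption, not something stated in the original post) is that the per-segment conversions are launched in the background with &, and sleep 5 is not long enough for them to finish, so the concat step reads half-written .mp4 files whose moov atom has not been written yet (MP4 writes it at the end of encoding). Replacing the fixed sleep with wait makes the script block until every background job has exited:
# In the script above, replace "sleep 5" with "wait":
wait   # block until all backgrounded ffmpeg conversions have finished
ffmpeg -f concat -i "$filename.txt" -c copy "$filename.mp4"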

ffmpeg generate m3u8 from mp4 (Resume option)

I have an mp4 file (or a file in another, non-mp4 format) and I need to generate .ts files and an m3u8 playlist.
I am using this command and it works fine:
ffmpeg -i foo.mp4 -codec copy -vbsf h264_mp4toannexb -map 0 -f segment
-segment_list out.m3u8 -segment_time 10 out%03d.ts
Now I need to generate many .ts files simultaneously, so I need a "resume option".
Please see the example below:
One thread (first 20 seconds (0-20))
ffmpeg -i foo.mp4 -codec copy -vbsf h264_mp4toannexb -map 0 -f segment
-segment_list out.m3u8 -segment_time 10 out%03d.ts
Second thread (20 seconds to 40 seconds)
ffmpeg -i foo.mp4 ......
Third thread (40 seconds to 60 seconds)
ffmpeg -i foo.mp4 ......
I have lots of processor cores to do these jobs.
In summary, I need to generate the .ts files and the m3u8 as fast as possible.
I need help or advice to resolve my problem.
As a proof of concept I built a little script that uses the -ss and -t options:
<?php
// generate all the commands using -ss and -t <seconds>
$startTime = new DateTime("00:00:00");
for ($i = 1; $i < 20; $i++) {
    $data = $startTime->format('H:i:s');
    $exec = 'ffmpeg -i "<FILE>" -ss '.$data.' -t 10 -c:v copy -bsf h264_mp4toannexb -flags -global_header -map 0 -f segment -segment_time 10 -segment_start_number '.$i.' -segment_list '.sprintf("%04d", $i).'_test.m3u8 -segment_format mpegts '.$i.'stream%05d.ts';
    shell_exec($exec);
    $startTime->modify('+10 seconds');
    echo "\n";
}
// cycle through all m3u8 files and create a master HLS playlist
$files = glob('*.{m3u8}', GLOB_BRACE);
sort($files);
$ret = "#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-ALLOW-CACHE:YES
#EXT-X-TARGETDURATION:11
";
foreach ($files as $file) {
    $ret .= shell_exec('sed -n -e 6,7p '.$file);
}
$ret .= "#EXT-X-ENDLIST";
file_put_contents('final.m3u8', $ret);
?>
Thanks
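The PHP proof of concept runs the chunks one after another; since the stated goal is to use many cores, the same -ss/-t idea can be launched in parallel from a shell instead (a sketch, assuming 19 chunks of 10 seconds as in the PHP loop; the per-chunk playlists still have to be merged into a master m3u8 afterwards, as the PHP code does):
#!/bin/bash
# One background ffmpeg job per 10-second chunk, mirroring the PHP command line
for i in $(seq 1 19); do
  start=$(( (i - 1) * 10 ))
  ffmpeg -i foo.mp4 -ss "$start" -t 10 -c:v copy -bsf:v h264_mp4toannexb \
      -flags -global_header -map 0 -f segment -segment_time 10 \
      -segment_start_number "$i" \
      -segment_list "$(printf '%04d' "$i")_test.m3u8" \
      -segment_format mpegts "${i}stream%05d.ts" -y &
done
wait   # all chunks done; now build the master playlist as in the PHP code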
