FFmpeg preset is not found? Linux CentOS 6

I ran this command
ffmpeg -i v-16418145218d8d7abdaabec46beb22ecffd2f5d1625.flv -y -acodec aac -ac 2 -ab 160k -vcodec libx264 -vpre iPod640 -vpre slow -f mp4 -threads 0 OUTPUT.mp4
Got this response:
[flv @ 0x10ff670] Estimating duration from bitrate, this may be inaccurate
Seems stream 0 codec frame rate differs from container frame rate: 1000.00 (1000/1) -> 25.00 (25/1)
Input #0, flv, from 'v-16418145218d8d7abdaabec46beb22ecffd2f5d1625.flv':
Metadata:
duration : 14
width : 320
height : 240
videodatarate : 500
framerate : 25
videocodecid : 2
audiodatarate : 0
audiosamplerate : 22050
audiosamplesize : 16
stereo : true
audiocodecid : 2
filesize : 912970
Duration: 00:00:13.92, start: 0.000000, bitrate: 576 kb/s
Stream #0.0: Video: flv, yuv420p, 320x240, 512 kb/s, 25 tbr, 1k tbn, 1k tbc
Stream #0.1: Audio: mp3, 22050 Hz, 2 channels, s16, 64 kb/s
File for preset 'iPod640' not found
But after doing a find, this is what I found.
/usr/share/ffmpeg/libx264-ipod320.ffpreset
/usr/share/ffmpeg/libx264-ipod640.ffpreset **** IT'S HERE ****
/usr/share/ffmpeg/libx264-lossless_fast.ffpreset
/usr/share/ffmpeg/libx264-lossless_max.ffpreset
/usr/share/ffmpeg/libx264-lossless_medium.ffpreset
/usr/share/ffmpeg/libx264-lossless_slow.ffpreset
/usr/share/ffmpeg/libx264-lossless_slower.ffpreset
/usr/share/ffmpeg/libx264-lossless_ultrafast.ffpreset
/usr/share/ffmpeg/libx264-main.ffpreset
/usr/share/ffmpeg/libx264-max.ffpreset
/usr/share/ffmpeg/libx264-medium.ffpreset
/usr/share/ffmpeg/libx264-medium_firstpass.ffpreset
/usr/share/ffmpeg/libx264-normal.ffpreset
/usr/share/ffmpeg/libx264-placebo.ffpreset
/usr/share/ffmpeg/libx264-placebo_firstpass.ffpreset
/usr/share/ffmpeg/libx264-slow.ffpreset
/usr/share/ffmpeg/libx264-slow_firstpass.ffpreset
/usr/share/ffmpeg/libx264-slower.ffpreset
I also tried with -vpre libx264-ipod640 and still no luck; I get "preset 'libx264-ipod640' not found". Do I have to enable presets somehow, e.g. rebuild ffmpeg with some --enable option?
** EDIT: My ffmpeg version info **
FFmpeg version 0.6.5, Copyright (c) 2000-2010 the FFmpeg developers
built on Jan 29 2012 17:52:15 with gcc 4.4.5 20110214 (Red Hat 4.4.5-6)
configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --disable-avisynth --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --enable-avfilter --enable-avfilter-lavf --enable-libdc1394 --enable-libdirac --enable-libfaac --enable-libfaad --enable-libfaadbin --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-vdpau --enable-version3 --enable-x11grab
libavutil 50.15. 1 / 50.15. 1
libavcodec 52.72. 2 / 52.72. 2
libavformat 52.64. 2 / 52.64. 2
libavdevice 52. 2. 0 / 52. 2. 0
libavfilter 1.19. 0 / 1.19. 0
libswscale 0.11. 0 / 0.11. 0
libpostproc 51. 2. 0 / 51. 2. 0

Found the solution while experimenting: I had the presets in the wrong order. It has to be -vcodec first, then -vpre (speed preset), then -vpre (profile preset, e.g. ipod640).
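As a sketch, assuming the preset is referenced by the lowercase name of its file (libx264-ipod640.ffpreset → ipod640), the reordered command looks like this:
ffmpeg -i v-16418145218d8d7abdaabec46beb22ecffd2f5d1625.flv -y -acodec aac -ac 2 -ab 160k \
  -vcodec libx264 -vpre slow -vpre ipod640 \
  -f mp4 -threads 0 OUTPUT.mp4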


RTMP_ReadPacket, failed to read RTMP packet header rtmp://a.rtmp.youtube.com/live2: Unknown error occurred

Here is the script I am trying to run; I believe the issue is in here:
#!/bin/bash
GIF=/home/stream1/85012216.gif
STREAM_KEY=thisisasecret
URL=rtmp://a.rtmp.youtube.com/live2
FPS=30
KEYINT=$(expr $FPS \* 3)
$FFMPEG -f alsa -ac 2 -i hw:Loopback,1,0 -fflags +genpts -r $FPS -i $GIF \
-vcodec libx264 -x264opts keyint=$KEYINT:min-keyint=$KEYINT:scenecut=-1 -b:v 1000k \
-preset veryfast -pix_fmt yuv420p -s 854x480 \
-c:a libfdk_aac -b:a 96k -ar 44100 \
-f flv $URL
The error:
ffmpeg version N-92337-g8e50215b5e Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 7 (Ubuntu 7.3.0-27ubuntu1~18.04)
configuration: --enable-shared --enable-gpl --enable-nonfree --enable-pthreads --enable-postproc --enable-libtheora --enable-version3 --enable-libx264 --enable-libfdk-aac --disable-stripping --disable-encoder=libschroedinger --enable-librtmp --enable-gnutls --enable-avfilter --enable-libfreetype --disable-decoder=amrnb --enable-fontconfig --disable-mips32r2 --disable-mipsdspr2 --disable-htmlpages --disable-podpages --disable-altivec --enable-libass --enable-omx --enable-omx-rpi
libavutil 56. 23.100 / 56. 23.100
libavcodec 58. 36.100 / 58. 36.100
libavformat 58. 21.100 / 58. 21.100
libavdevice 58. 6.100 / 58. 6.100
libavfilter 7. 43.100 / 7. 43.100
libswscale 5. 4.100 / 5. 4.100
libswresample 3. 4.100 / 3. 4.100
libpostproc 55. 4.100 / 55. 4.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, alsa, from 'hw:Loopback,1,0':
Duration: N/A, start: 1541258646.286883, bitrate: 1536 kb/s
Stream #0:0: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
Input #1, gif, from '/home/stream1/85012216.gif':
Duration: N/A, bitrate: N/A
Stream #1:0: Video: gif, bgra, 500x281, 16.67 fps, 16.67 tbr, 100 tbn, 100 tbc
RTMP_ReadPacket, failed to read RTMP packet header
rtmp://a.rtmp.youtube.com/live2: Unknown error occurred
System details:
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
-f flv $URL should be -f flv $URL/$STREAM_KEY. The stream key is never appended to the URL, so YouTube refuses the RTMP connection.
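For completeness, a sketch of the corrected script. FFMPEG is assumed to point at the ffmpeg binary; the excerpt above never defines it:
#!/bin/bash
FFMPEG=ffmpeg   # assumption: the original excerpt never sets this variable
GIF=/home/stream1/85012216.gif
STREAM_KEY=thisisasecret
URL=rtmp://a.rtmp.youtube.com/live2
FPS=30
KEYINT=$(expr $FPS \* 3)
# Same encoding settings as before; only the destination now includes the stream key.
$FFMPEG -f alsa -ac 2 -i hw:Loopback,1,0 -fflags +genpts -r $FPS -i $GIF \
  -vcodec libx264 -x264opts keyint=$KEYINT:min-keyint=$KEYINT:scenecut=-1 -b:v 1000k \
  -preset veryfast -pix_fmt yuv420p -s 854x480 \
  -c:a libfdk_aac -b:a 96k -ar 44100 \
  -f flv "$URL/$STREAM_KEY"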

Extract the time of the video with ffmpeg in Ubuntu

I am using Ubuntu 14.04.5 LTS 32 bit and ffmpeg to extract the duration of a video.
In Windows 10 the command works:
/usr/bin/ffmpeg -i video.mp4 -vstats 2>&1
On Linux, ffmpeg instead returns this error in red:
At least one output file must be specified
The complete output is here:
ffmpeg version 2.4.3-1ubuntu1~trusty6 Copyright (c) 2000-2014 the FFmpeg developers
built on Nov 22 2014 17:07:19 with gcc 4.8 (Ubuntu 4.8.2-19ubuntu1)
configuration: --prefix=/usr --extra-version='1ubuntu1~trusty6' --build-suffix=-ffmpeg --toolchain=hardened
--extra-cflags= --extra-cxxflags= --libdir=/usr/lib/i386-linux-gnu --shlibdir=/usr/lib/i386-linux-gnu --incdir=/
usr/include/i386-linux-gnu --enable-gpl --enable-shared --disable-stripping --enable-avresample
--enable-avisynth --enable-fontconfig --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray
--enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfreetype --enable-libfribidi
--enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus
--enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libspeex --enable-libssh
--enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp
--enable-opengl --enable-x11grab --enable-libxvid --enable-libx265 --enable-libdc1394 --enable-libiec61883
--enable-libzvbi --enable-libzmq --enable-frei0r --enable-libx264 --enable-libsoxr --enable-openal
--enable-libopencv
libavutil 54. 7.100 / 54. 7.100
libavcodec 56. 1.100 / 56. 1.100
libavformat 56. 4.101 / 56. 4.101
libavdevice 56. 0.100 / 56. 0.100
libavfilter 5. 1.100 / 5. 1.100
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 0.100 / 3. 0.100
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 0.100 / 53. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: mp41isom
creation_time : 2017-01-29 15:42:02
location : -21.6646-46.7394/
location-eng : -21.6646-46.7394/
Duration: 00:01:49.43, start: 0.000000, bitrate: 1171 kb/s
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv), 640x360 [SAR 1:1 DAR 16:9], 992 kb/s,
15.01 fps, 15 tbr, 30k tbn, 60 tbc (default)
Metadata:
creation_time : 2017-01-29 15:42:02
handler_name : VideoHandler
encoder : AVC Coding
Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 174 kb/s (default)
Metadata:
creation_time : 2017-01-29 15:42:02
handler_name : SoundHandler
At least one output file must be specified
How can I fix this problem?
Use
/usr/bin/ffmpeg -i video.mp4 -vstats -map 0:v -f null - 2>&1
ffmpeg always requires at least one output, so -f null - decodes the video and discards it. Or you could just use ffprobe:
ffprobe -show_entries format=duration -of compact=p=0:nk=1 video.mp4
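If the goal is just to capture the duration in a script, a minimal sketch (assuming ffprobe is on the PATH):
# Prints only the duration in seconds, with no wrapper or key name.
DURATION=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 video.mp4)
echo "Duration: ${DURATION} seconds"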

Volume adjust and channel merge on video using FFmpeg

I'm trying to convert a recorded AVI (1 video stream, 2 audio streams[1st is stereo, 2nd is mono]) video file to H264/AAC.
I want the second audio stream to be at 60% volume and the first at 100%.
I also want to merge the first and second audio stream.
Output should be H264 with AAC audio.
The command I tried to use is:
"ffmpeg.exe"
-i "input.avi"
-filter_complex "
[0:a:0]aformat=channel_layouts=stereo,volume=1.0[a1];
[0:a:1]aformat=channel_layouts=mono,volume=0.6[a2];
[a1][a2]amerge,pan=stereo|c0<c0+c2|c1<c1+c2[out]
"
-map 0:v
-map "[out]"
-c:v libx264 -preset slow -crf 26
-c:a aac -strict experimental -ab 128000 -ac 2 -ar 48000
"output.mp4"
But I get the error:
[AVFilterGraph @ 0000000ceb630c00] The following filters could not choose their formats: Parsed_amerge_4
Consider inserting the (a)format filter near their input or output.
Error configuring complex filters.
Error number -5 occurred
What am I doing wrong and how can it be fixed?
Complete Console Log:
ffmpeg version N-76456-g6df2c94 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 5.2.0 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-lzma --enable-decklink --enable-zlib
libavutil 55. 5.100 / 55. 5.100
libavcodec 57. 14.100 / 57. 14.100
libavformat 57. 14.100 / 57. 14.100
libavdevice 57. 0.100 / 57. 0.100
libavfilter 6. 14.101 / 6. 14.101
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.100 / 2. 0.100
libpostproc 54. 0.100 / 54. 0.100
[avi @ 000000b1cfa6bc20] non-interleaved AVI
Guessed Channel Layout for Input Stream #0.1 : stereo
Guessed Channel Layout for Input Stream #0.2 : mono
Input #0, avi, from 'input.avi':
Metadata:
encoder : DxtoryCore ver2.0.0.122
ISRC : Video:Lagarith Lossless Codec Audio0:Lautsprecher (2- USB Audio CODEC ) Audio1:Mikrofon (2- USB Audio CODEC )
Duration: 00:27:55.60, start: 0.000000, bitrate: 181780 kb/s
Stream #0:0: Video: lagarith (LAGS / 0x5347414C), rgb24, 1920x1017, 179663 kb/s, 30 fps, 30 tbr, 30 tbn, 30 tbc
Stream #0:1: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 2 channels, s16, 1411 kb/s
Stream #0:2: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 1 channels, s16, 705 kb/s
[Parsed_amerge_4 @ 000000b1cfaf3200] No channel layout for input 1
Last message repeated 1 times
[AVFilterGraph @ 000000b1cfa70c00] The following filters could not choose their formats: Parsed_amerge_4
Consider inserting the (a)format filter near their input or output.
Error configuring complex filters.
Error number -5 occurred
It seems this can be fixed by splitting each audio stream into separate channels first and then feeding them back into the amerge filter:
"ffmpeg.exe"
-i "input.avi"
-filter_complex "
[0:a:0]volume=1.0,channelsplit=channel_layout=stereo[a1][a2];
[0:a:1]volume=0.6,channelsplit=channel_layout=mono[a3];
[a1][a2][a3]amerge=inputs=3,pan=stereo|c0<c0+c2|c1<c1+c2[out]
"
-map 0:v
-map "[out]"
-c:v libx264 -preset slow -crf 26
-c:a aac -strict experimental -ab 128000 -ac 2 -ar 48000
"output.mp4"

Applying fades between images ffmpeg command

I am trying to create a video slideshow that fades out/in between each image, solely from an ffmpeg CLI command. After researching this for hours, the only way I discovered to do it was to use the -filter_complex argument, pass in all the images, and specify a complex filter that defines multiple fades out and back in, timed to happen between frames. The command I have so far:
ffmpeg -y -framerate 1/5 \
-loop 1 -i img-1.jpg \
-loop 1 -i img-2.jpg \
-loop 1 -i img-3.jpg \
-filter_complex \
"[1:v]fade=out:4:d=1,fade=in:5:d=1[fad1]; \
[2:v]fade=out:9:d=1,fade=in:10:d=1[fad2]; \
[3:v]fade=out:14:d=1,fade=in:15:d=1[fad3];" \
-c:v libx264 -r 25 -pix_fmt yuv420p test.mp4
Here's the output from executing this command:
ffmpeg version 2.6.4 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 5.1.1 (GCC) 20150618 (Red Hat 5.1.1-4)
configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --optflags='-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --enable-bzlib --disable-crystalhd --enable-frei0r --enable-gnutls --enable-ladspa --enable-libass --enable-libcdio --enable-libdc1394 --disable-indev=jack --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-openal --enable-libopencv --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libv4l2 --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-x11grab --enable-avfilter --enable-avresample --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect
libavutil 54. 20.100 / 54. 20.100
libavcodec 56. 26.100 / 56. 26.100
libavformat 56. 25.101 / 56. 25.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 11.102 / 5. 11.102
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Input #0, image2, from 'img-1.jpg':
Duration: 00:00:05.00, start: 0.000000, bitrate: 141 kb/s
Stream #0:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 1280x720 [SAR 72:72 DAR 16:9], 0.20 fps, 0.20 tbr, 0.20 tbn, 0.20 tbc
Input #1, image2, from 'img-2.jpg':
Duration: 00:00:00.04, start: 0.000000, bitrate: 17789 kb/s
Stream #1:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 1280x720 [SAR 67:67 DAR 16:9], 25 fps, 25 tbr, 25 tbn, 25 tbc
Input #2, image2, from 'img-3.jpg':
Duration: 00:00:00.04, start: 0.000000, bitrate: 17764 kb/s
Stream #2:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 1280x720 [SAR 62:62 DAR 16:9], 25 fps, 25 tbr, 25 tbn, 25 tbc
[AVFilterGraph @ 0xbc2a00] No such filter: ''
Error configuring filters.
All I am trying to do is create a video slideshow with fades/transitions between images. Any help is greatly appreciated!
The best approach I found, since it seems impossible to accomplish this in one command: first, create an MPEG from each of your images, applying the fades with ordinary video filters (-vf):
ffmpeg -y -loop 1 -i image-1.jpg \
  -vf "fade=t=in:st=0:d=0.5,fade=t=out:st=4.5:d=0.5" \
  -c:v mpeg2video -t 5 -q:v 1 -pix_fmt yuv420p temp-1.mpeg
If you want to do this in one command, it's not the prettiest solution, but you can chain the commands as "command1 && command2 && ...", assuming the above is command1. Once these intermediate MPEGs are created, you can concatenate them nicely into a video:
ffmpeg -i temp-1.mpeg -i temp-2.mpeg -filter_complex '[0:v][1:v] concat=n=2:v=1 [v]' -map '[v]' -c:v libx264 -r 30 -s 1280x720 -aspect 16:9 -q:v 1 -pix_fmt yuv420p output.mp4
In the "concat=n=2" portion of this concatentation command, the '2' refers to the number of inputs you have. This will give you a video slideshow of images with a fade in of 0.5 seconds at the beginning and a fade out of 0.5 seconds at the end which gives the effect of fading between images.
Also, you can add panning/zooming to each image by adding the "zoompan" video filter to the first command when converting the image to the intermediate mpeg. For example, your first command would become:
ffmpeg -y -loop 1 -i image-1.jpg \
  -vf "zoompan=z='min(zoom+0.0015,1.5)':d=125,fade=t=in:st=0:d=0.5,fade=t=out:st=4.5:d=0.5" \
  -c:v mpeg2video -t 5 -q:v 1 -pix_fmt yuv420p temp-1.mpeg
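Tying the two steps together, a minimal sketch assuming three images named image-1.jpg through image-3.jpg and 5-second segments:
#!/bin/bash
# Step 1: render one faded 5-second segment per image.
for i in 1 2 3; do
  ffmpeg -y -loop 1 -i "image-$i.jpg" \
    -vf "fade=t=in:st=0:d=0.5,fade=t=out:st=4.5:d=0.5" \
    -c:v mpeg2video -t 5 -q:v 1 -pix_fmt yuv420p "temp-$i.mpeg"
done
# Step 2: concatenate the segments into the final slideshow.
ffmpeg -y -i temp-1.mpeg -i temp-2.mpeg -i temp-3.mpeg \
  -filter_complex '[0:v][1:v][2:v] concat=n=3:v=1 [v]' -map '[v]' \
  -c:v libx264 -r 30 -s 1280x720 -aspect 16:9 -pix_fmt yuv420p output.mp4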

Convert yuv video to png frames using ffmpeg

I want to convert a yuv video to png frames using ffmpeg.
The command I use is
/root/bin/ffmpeg -i pirkagia_max_vid_qual_one.yuv -s 720x576 -r 25 -pix_fmt yuv420p -f image2 one/image-%3d.png
I get the following response:
ffmpeg version git-2015-08-07-8015150 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-16)
configuration: --prefix=/root/ffmpeg_build --extra-cflags=-I/root/ffmpeg_build/include --extra-ldflags=-L/root/ffmpeg_build/lib --bindir=/root/bin --pkg-config-flags=--static --enable-gpl --enable-nonfree --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265
libavutil 54. 30.100 / 54. 30.100
libavcodec 56. 57.100 / 56. 57.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 32.100 / 5. 32.100
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
[IMGUTILS @ 0x7fffe8e84760] Picture size 0x0 is invalid
[IMGUTILS @ 0x7fffe8e84310] Picture size 0x0 is invalid
[rawvideo @ 0x3aaf160] Could not find codec parameters for stream 0 (Video: rawvideo (I420 / 0x30323449), yuv420p, -4 kb/s): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
pirkagia_max_vid_qual_one.yuv: could not find codec parameters
Input #0, rawvideo, from 'pirkagia_max_vid_qual_one.yuv':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, -4 kb/s, 25 tbr, 25 tbn, 25 tbc
Output #0, image2, to 'one/image-%3d.png':
Output file #0 does not contain any stream
Any idea?
In your case the rawvideo demuxer needs additional information. Since there is no header in your input specifying the video parameters, you must supply them yourself so the data can be decoded correctly, and they must be given as input options, before -i. Example:
ffmpeg -pixel_format yuv420p -video_size 720x576 -framerate 25 -i …
Also, yuv420p is incompatible with the PNG encoder, so remove it as an output option and an appropriate pixel format will be selected automatically.
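Putting that together for this file, a sketch using the size and rate from the original command:
ffmpeg -pixel_format yuv420p -video_size 720x576 -framerate 25 \
  -i pirkagia_max_vid_qual_one.yuv one/image-%3d.png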
A second posted approach drives ffmpeg from Python. Note that, as written, it extracts per-frame .yuv files rather than PNGs:
import os

num = 1
video_name = ['out.yuv']
short = ['yuv']
for i in range(num):
    saveroot = 'images/' + short[i]
    savepath = 'images/' + short[i] + '/im%03d.yuv'
    # Create the output directory if it does not exist yet.
    if not os.path.exists(saveroot):
        os.makedirs(saveroot)
    # Raw video has no header, so pixel format and frame size are passed as input options.
    cmd = ('ffmpeg -y -pix_fmt yuv420p -s 1920x1024 -i '
           + 'videos_crop/' + video_name[i] + ' ' + savepath)
    print(cmd)
    os.system(cmd)
