Unknown V4L2 pixel format equivalent for yuvj420p - linux

I am trying to pipe an mp4 video located at Videos/video.mp4 to a virtual webcam device located at /dev/video0.
I tried running:
ffmpeg -re -i Videos/video.mp4 -map 0:v -f v4l2 /dev/video0
and I keep getting the following error:
[video4linux2,v4l2 @ 0x5580cf270100] Unknown V4L2 pixel format equivalent for yuvj420p
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Error initializing output stream 0:0 --
Conversion failed!
Full log:
ffmpeg version 4.2.2-1+b1 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 9 (Debian 9.2.1-28)
configuration: --prefix=/usr --extra-version=1+b1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Videos/video.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: isommp42
creation_time : 2020-03-23T04:24:01.000000Z
com.android.version: 8.1.0
Duration: 00:01:00.14, start: 0.000000, bitrate: 20048 kb/s
Stream #0:0(eng): Video: h264 (Baseline) (avc1 / 0x31637661), yuvj420p(pc, smpte170m), 1920x1080, 19898 kb/s, SAR 1:1 DAR 16:9, 29.43 fps, 29.58 tbr, 90k tbn, 180k tbc (default)
Metadata:
rotate : 270
creation_time : 2020-03-23T04:24:01.000000Z
handler_name : VideoHandle
Side data:
displaymatrix: rotation of 90.00 degrees
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 96 kb/s (default)
Metadata:
creation_time : 2020-03-23T04:24:01.000000Z
handler_name : SoundHandle
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
Press [q] to stop, [?] for help
[video4linux2,v4l2 @ 0x5580cf270100] Unknown V4L2 pixel format equivalent for yuvj420p
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Error initializing output stream 0:0 --
Conversion failed!
The desired result is that the mp4 video is seen by apps that try to view the webcam. I am running this on a desktop without a webcam or video capture interface, which is why I am using /dev/video0.

Add -vf format=yuv420p (or the alias -pix_fmt yuv420p).
The v4l2 output device doesn't support yuvj420p, which is the pixel format of your input. In most cases ffmpeg will automatically choose a supported pixel format, but it is unable to do so for V4L2 output, so you have to set it manually:
ffmpeg -re -i Videos/video.mp4 -map 0:v -vf format=yuv420p -f v4l2 /dev/video0
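If you want to double-check which pixel format the virtual device ends up advertising to applications, v4l2-ctl from the v4l-utils package can list it (this assumes v4l-utils is installed; it is not part of FFmpeg):
v4l2-ctl --device=/dev/video0 --list-formats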

Related

add black&silence to beginning of a video

Hi, I am struggling to add black & silence to the beginning of a video with ffmpeg. I searched a lot, but the solutions I found look too complex for me.
The command below is what I found to add black & silence to the end of a video; how can I adapt it to add them to the beginning instead?
ffmpeg -i input.mp4 -f lavfi -i color=s=1920x1080:d=10 -filter_complex [0:v][1]concat -af [0]apad -shortest output.mp4
It looks like I need to use adelay instead of apad. Below is the command that makes sense to me, but the audio is not delayed.
ffmpeg -i input.mp4 -f lavfi -i color=s=1920x1080:d=10 -filter_complex [1][0:v]concat -af [0]adelay=10 output.mp4
Here is the input info and ffmpeg version:
ffmpeg -i input.mp4
ffmpeg version 4.2.1-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi --enable-libzimg
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.29.100
Duration: 00:01:00.00, start: 0.000998, bitrate: 2526 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080, 2394 kb/s, 24 fps, 24 tbr, 16k tbn, 48 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 124 kb/s (default)
Metadata:
handler_name : SoundHandler
At least one output file must be specified
Thanks!
There are several methods to do this. The first method is simple and easy but re-encodes the main video. The other method is slightly more complicated but does not re-encode the main video, so the quality is preserved; it will also be faster for long videos.
tpad & adelay filters
Using the tpad and adelay filters:
ffmpeg -i input.mp4 -filter_complex "[0:v]tpad=start_duration=2[v];[0:a]adelay=2s:all=true[a]" -map "[v]" -map "[a]" output.mp4
If your ffmpeg is older than version 4.2 then change adelay=2s:all=true to adelay=2000|2000.
color & anullsrc filters with concat demuxer
Make 2 seconds of black and silence that match the attributes of the input, using the color and anullsrc filters:
ffmpeg -f lavfi -i color=size=1920x1080:rate=24:duration=2 -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -video_track_timescale 16k -shortest black.mp4
Make join.txt containing:
file 'black.mp4'
file 'input.mp4'
Concatenate with the concat demuxer:
ffmpeg -f concat -i join.txt -c copy output.mp4
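Since -c copy performs no re-encoding, the concat demuxer only behaves well when both files share the same codec parameters. One way to compare them is ffprobe, which ships with FFmpeg (the exact field list here is just a suggestion):
ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,pix_fmt,width,height,r_frame_rate,time_base black.mp4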

FFmpeg ignoring output pixel format

I am using ffmpeg 4.2.2 on an Ubuntu 20.04 machine to clone the video stream of a USB webcam so that multiple applications can use the same feed. To achieve this, I simply clone it to a v4l2 loopback device:
ffmpeg -f v4l2 -i /dev/video0 -codec copy -f v4l2 /dev/video1
So far, this works reasonably well. I am able to successfully access /dev/video1 which presents the same feed as /dev/video0.
Note: To make this work you need to ensure that the v4l2loopback kernel module is loaded:
modprobe v4l2loopback devices=1
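To confirm the loopback device was created, v4l2-ctl --list-devices (from the v4l-utils package, not part of FFmpeg) should list it alongside the physical camera:
v4l2-ctl --list-devices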
Next I'd like to convert the pixel format of the dummy device, as the application that is accessing it can only handle yuv422p or RGB, whereas my source device /dev/video0 provides yuv420p. I thought that this would be a simple task that could easily be handled by presenting ffmpeg with an additional -pix_fmt argument on the output device, like so:
ffmpeg -f v4l2 -i /dev/video0 -codec copy -f v4l2 -pix_fmt yuv422p /dev/video1
While ffmpeg starts cloning the stream without any warnings or errors, it is still outputting in yuv420p instead:
joel#joel-ubuntu:~$ ffmpeg -f v4l2 -i /dev/video0 -codec copy -f v4l2 -pix_fmt yuv422p /dev/video1
ffmpeg version 4.2.2-1ubuntu1 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 9 (Ubuntu 9.3.0-3ubuntu1)
configuration: --prefix=/usr --extra-version=1ubuntu1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
[video4linux2,v4l2 @ 0x55ca407b9700] Time per frame unknown
Input #0, video4linux2,v4l2, from '/dev/video0':
Duration: N/A, start: 6726.737520, bitrate: N/A
Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 640x480, 29.25 tbr, 1000k tbn, 1000k tbc
Output #0, video4linux2,v4l2, to '/dev/video1':
Metadata:
encoder : Lavf58.29.100
Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 640x480, q=2-31, 29.25 tbr, 1000k tbn, 1000k tbc
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
frame= 76 fps= 34 q=-1.0 Lsize=N/A time=00:00:02.52 bitrate=N/A speed=1.14x
No matter what -pix_fmt I pass, I always end up with yuv420p on the output.
I did several tests with both proper USB UVC webcams and DroidCam. The output pixel format never changes as requested, and this is not specific to yuv422p; other formats are ignored as well. Why is this happening? What am I missing?
Note: I have verified that ffmpeg is capable of the yuv422p pixel format (it is listed when executing ffmpeg -pix_fmts).
You can't change pixel formats when using -c:v copy. Change to -c:v rawvideo.
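-codec copy passes the captured frames through untouched, so any pixel format requested on the output is ignored. A minimal sketch of the adjusted command, re-encoding with the rawvideo encoder so the conversion can actually happen:
ffmpeg -f v4l2 -i /dev/video0 -c:v rawvideo -pix_fmt yuv422p -f v4l2 /dev/video1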

Using FFMPEG to split a 16 channel audio input source into 4 separate 4 channel audio feeds for streaming

I hope someone can help
I am currently trying to split a 16ch Dante audio feed from a separate machine into 4 different audio streams that I can then TX via RTMP to Wowza for MPEG-DASH encoding. At present I am just trying to split them into files; I will add the RTMP streaming later.
The biggest issue I am encountering at the moment is that FFmpeg returns this error for my command:
Filter channelsplit:WR has an unconnected output
Here is my current command:
ffmpeg -f dshow -i audio="Dante Via Receive (Dante Via)" -filter_complex "[0:a]channelsplit=channel_layout=hexadecagonal[FL][FR][FC][BL][BR][BC][SL][SR][TFL][TFC][TFR][TBL][TBC][TBR][WL][WR]" -map "[FL][FR][FC][BL]" 1-4.wav -map "[BR][BC][SL][SR]" 5-8.wav -map "[TFL][TFC][TFR][TBL]" 9-12.wav -map "[TBC][TBR][WL][WR]" 13-16.wav
and here is the full FFMPEG output
ffmpeg version git-2019-12-26-b0d0d7e Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 9.2.1 (GCC) 20191125
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf
libavutil 56. 37.100 / 56. 37.100
libavcodec 58. 65.100 / 58. 65.100
libavformat 58. 35.101 / 58. 35.101
libavdevice 58. 9.101 / 58. 9.101
libavfilter 7. 69.101 / 7. 69.101
libswscale 5. 6.100 / 5. 6.100
libswresample 3. 6.100 / 3. 6.100
libpostproc 55. 6.100 / 55. 6.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, dshow, from 'audio=Dante Via Receive (Dante Via)':
Duration: N/A, start: 103082.790000, bitrate: 1411 kb/s
Stream #0:0: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
File '1-4.wav' already exists. Overwrite? [y/N] y
File '5-8.wav' already exists. Overwrite? [y/N] y
File '9-12.wav' already exists. Overwrite? [y/N] y
File '13-16.wav' already exists. Overwrite? [y/N] y
Filter channelsplit:WR has an unconnected output
I'm also getting the issue where FFmpeg guesses that the channel layout is stereo, which is incorrect, but I'm having problems figuring out how to define the input stream as 16 channels of audio.
Any help with this would be gratefully received
Cheers
M
ffmpeg -f dshow -channels 16 -i audio="Dante Via Receive (Dante Via)" -filter_complex "[0:a]channelmap=0|1|2|3[1-4];[0:a]channelmap=4|5|6|7[5-8];[0:a]channelmap=8|9|10|11[9-12];[0:a]channelmap=12|13|14|15[13-16]" -map "[1-4]" 1-4.wav -map "[5-8]" 5-8.wav -map "[9-12]" 9-12.wav -map "[13-16]" 13-16.wav
Try adding the -channels 16 dshow input option.
Filter output labels can't be combined in -map, so do all mixing with filters and give each -map a single label.
channelsplit only outputs channels as individual streams, and it does not mix multiple channels into a single stream, so channelmap is used instead.
I don't have dshow so I couldn't test this.
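If the mapping works, each output file should report four channels. One way to verify is ffprobe, which is bundled with FFmpeg:
ffprobe -v error -show_entries stream=channels,channel_layout 1-4.wav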

ffmpeg replace part of audio file with looped audio

I am quite new to ffmpeg and I am trying to replace a part of a first audio file with a second file. The second file can be too short, so some sort of loop is needed.
After some research I came up with the following command arguments, and it gives me the correct output as long as I only do one replacement. But I would like to do multiple replacements, so any help on what I am doing wrong? Any suggestions or remarks on the way of working are also very welcome.
(Any typos in the commands below can be ignored; I generate the command by script, and for ease of use I simplified the names.)
Works (One replacement):
"ffmpeg.exe" -y -i "first.wav" -i "second.wav" -filter_complex "[1:a][1:a][1:a]concat=n=3:v=0:a=1,asetpts=PTS-STARTPTS[replaceBase];[0:a]atrim=0:3,asetpts=PTS-STARTPTS[partA];[replaceBase]atrim=0:2,asetpts=PTS-STARTPTS[replaceA];[0:a]atrim=start=5,asetpts=PTS-STARTPTS[partB];[partA][replaceA][partB]concat=n=3:v=0:a=1[aout]" -map "[aout]" Out.wav
Does not work (multiple replacements):
"ffmpeg.exe" -y -i "first.wav" -i "second.wav" -filter_complex "[1:a][1:a][1:a]concat=n=3:v=0:a=1,asetpts=PTS-STARTPTS[replaceBase];[0:a]atrim=0:3,asetpts=PTS-STARTPTS[partA];[replaceBase]atrim=0:2,asetpts=PTS-STARTPTS[replaceA];[0:a]atrim=5:4,asetpts=PTS-STARTPTS[partB];[replaceBase]atrim=0:2,asetpts=PTS-STARTPTS[replaceB];[0:a]atrim=start=6,asetpts=PTS-STARTPTS[partC];[partA][replaceA][partB][replaceB][PartC]concat=n=4:v=0:a=1[aout]" -map "[aout]" Out.wav
ffmpeg version N-76860-g72eaf72 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 5.2.0 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libdcadec --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
libavutil 55. 9.100 / 55. 9.100
libavcodec 57. 16.100 / 57. 16.100
libavformat 57. 19.100 / 57. 19.100
libavdevice 57. 0.100 / 57. 0.100
libavfilter 6. 15.100 / 6. 15.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, wav, from '3897583stereo.wav':
Duration: 00:00:12.07, bitrate: 256 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 8000 Hz, 2 channels, s16, 256 kb/s
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, wav, from 'beep-021.wav':
Metadata:
encoder : Lavf57.19.100
Duration: 00:00:00.30, bitrate: 1413 kb/s
Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 2 channels, s16, 1411 kb/s
[wav @ 057242c0] Invalid stream specifier: replaceBase.
Last message repeated 1 times
Stream specifier 'STREAM CUT matches no streams.
Thanks in advance!
I managed to find a workaround (or maybe just how it should be done) by splitting the looped stream with asplit. Remarks on the way of processing are still welcome...
"ffmpeg.exe" -y -i "first.wav" -i "second.wav" -filter_complex "[1:a][1:a][1:a]concat=n=3:v=0:a=1,asetpts=PTS-STARTPTS[replaceBase];[replaceBase]asplit=2 [replaceA][replaceB];[0:a]atrim=0:3,asetpts=PTS-STARTPTS[partA];[replaceA]atrim=0:2,asetpts=PTS-STARTPTS[replaceTrimmedA];[0:a]atrim=5:6,asetpts=PTS-STARTPTS[partB];[replaceB]atrim=0:2,asetpts=PTS-STARTPTS[replaceTrimmedB];[0:a]atrim=start=8,asetpts=PTS-STARTPTS[partC];[partA][replaceTrimmedA][partB][replaceTrimmedB][PartC]concat=n=4:v=0:a=1[aout]" -map "[aout]" Out.wav
Regards,

FFMPEG command issue

I am having an issue with FFMPEG. To be exact, I'm trying to generate a number of 'meaningful' thumbnails from a video file.
I have found this command on the internet:
ffmpeg -ss 3 -i input.mp4 -vf "select=gt(scene\,0.4)" -frames:v 5 -vsync vfr fps=fps=1/600 out%02d.jpg
Sadly it doesn't work for me, as I'm getting:
[NULL @ 0x86c2420] Unable to find a suitable output format for 'fps=fps=1/600'
fps=fps=1/600: Invalid argument
I have tried including "fps=fps=1/600" inside -vf, which resulted in only one picture being generated. What am I doing wrong?
EDIT:
This is an example of a full output:
$ ffmpeg -ss 3 -i video.ogg -vf "select=gt(scene\,0.4)" -frames:v 5 -vsync vfr fps=fps=1/600 out%02d.jpg
ffmpeg version 2.5.3 Copyright (c) 2000-2015 the FFmpeg developers
built on Jan 10 2015 23:26:13 with gcc 4.9.2 (GCC) 20141224 (prerelease)
configuration: --prefix=/usr --disable-debug --disable-static --disable-stripping --enable-avisynth --enable-avresample --enable-fontconfig --enable-gnutls --enable-gpl --enable-libass --enable-libbluray --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libv4l2 --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-runtime-cpudetect --enable-shared --enable-swresample --enable-vdpau --enable-version3 --enable-x11grab
libavutil 54. 15.100 / 54. 15.100
libavcodec 56. 13.100 / 56. 13.100
libavformat 56. 15.102 / 56. 15.102
libavdevice 56. 3.100 / 56. 3.100
libavfilter 5. 2.103 / 5. 2.103
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
[theora @ 0x9b59140] 7 bits left in packet 82
[ogg @ 0x9b586e0] Broken file, keyframe not correctly marked.
Last message repeated 2 times
Input #0, ogg, from 'video.ogg':
Duration: 00:09:56.46, start: 0.000000, bitrate: 2237 kb/s
Stream #0:0: Video: theora, yuv420p, 854x480, 24 tbr, 24 tbn, 24 tbc
Stream #0:1: Audio: vorbis, 48000 Hz, stereo, fltp, 192 kb/s
[NULL @ 0x9b97660] Unable to find a suitable output format for 'fps=fps=1/600'
fps=fps=1/600: Invalid argument
All I had to do was add -vf before "fps=fps=1/600".
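For reference: as far as I know, giving -vf twice makes the second one override the first, so the select filter would be dropped. Chaining both filters inside a single -vf keeps them both active; a sketch of the combined command:
ffmpeg -ss 3 -i input.mp4 -vf "select=gt(scene\,0.4),fps=1/600" -frames:v 5 -vsync vfr out%02d.jpg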
