Unsupported presentation (0x20400003) - Azure

So I'm trying to get a really simple livestreaming system running through Azure Media Services. I've got ffmpeg installed on a Raspberry Pi with a USB camera, and I'm just trying to get the camera feed received through Azure so I can start tinkering with the Media Player. The ffmpeg command appears to run without a hitch, but whenever I attempt to preview the stream, I get the following error:
"The video playback was aborted due to a corruption problem or because the video used features your browser did not support. 0x20400003"
The 0x0400003 part of the code is listed in the docs (http://amp.azure.net/libs/amp/latest/docs/index.html#error-codes)
as meaning the presentation of the video is not supported, but I can't find what that actually means in terms of what's wrong.
I'm using the following ffmpeg command for encoding:
ffmpeg -v verbose -framerate 30 -r 30 -i /dev/video0 -vcodec libx264 -preset ultrafast -acodec libfdk-aac -ab 48k -b:v 500k -maxrate 500k -bufsize 500k -r 30 -g 60 -keyint_min 60 -sc_threshold 0 -f flv rtmp://{Azure channel address}/channel5
which results in the following output:
ffmpeg version N-83743-gd757ddb Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.9.2 (Raspbian 4.9.2-10)
configuration: --enable-gpl --enable-libx264 --enable-nonfree --enable-libfdk-aac
libavutil 55. 47.101 / 55. 47.101
libavcodec 57. 82.100 / 57. 82.100
libavformat 57. 66.103 / 57. 66.103
libavdevice 57. 3.100 / 57. 3.100
libavfilter 6. 74.100 / 6. 74.100
libswscale 4. 3.101 / 4. 3.101
libswresample 2. 4.100 / 2. 4.100
libpostproc 54. 2.100 / 54. 2.100
[video4linux2,v4l2 @ 0x1f7a430] fd:3 capabilities:84200001
[video4linux2,v4l2 @ 0x1f7a430] Querying the device for the current frame size
[video4linux2,v4l2 @ 0x1f7a430] Setting frame size to 640x480
[video4linux2,v4l2 @ 0x1f7a430] The driver changed the time per frame from 1/30 to 1/15
Input #0, video4linux2,v4l2, from '/dev/video0':
Duration: N/A, start: 169752.581724, bitrate: 73728 kb/s
Stream #0:0: Video: rawvideo, 1 reference frame (YUY2 / 0x32595559), yuyv422, 640x480, 73728 kb/s, 15 fps, 15 tbr, 1000k tbn, 1000k tbc
Stream mapping:
Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[graph 0 input from stream 0:0 @ 0x1f89eb0] w:640 h:480 pixfmt:yuyv422 tb:1/30 fr:30/1 sar:0/1 sws_param:flags=2
[auto_scaler_0 @ 0x1f8a9c0] w:iw h:ih flags:'bicubic' interl:0
[format @ 0x1f8a040] auto-inserting filter 'auto_scaler_0' between the filter 'Parsed_null_0' and the filter 'format'
[graph 0 input from stream 0:0 @ 0x1f89eb0] TB:0.033333 FRAME_RATE:30.000000 SAMPLE_RATE:nan
[auto_scaler_0 @ 0x1f8a9c0] w:640 h:480 fmt:yuyv422 sar:0/1 -> w:640 h:480 fmt:yuv422p sar:0/1 flags:0x4
No pixel format specified, yuv422p for H.264 encoding chosen.
Use -pix_fmt yuv420p for compatibility with outdated media players.
[libx264 @ 0x1f7d650] using cpu capabilities: ARMv6 NEON
[libx264 @ 0x1f7d650] profile High 4:2:2, level 3.0, 4:2:2 8-bit
[libx264 @ 0x1f7d650] 264 - core 148 r2762 90a61ec - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=60 keyint_min=31 scenecut=0 intra_refresh=0 rc_lookahead=0 rc=cbr mbtree=0 bitrate=500 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=500 vbv_bufsize=500 nal_hrd=none filler=0 ip_ratio=1.40 aq=0
Output #0, flv, to 'rtmp://{Azure Channel Address}/channel5':
Metadata:
encoder : Lavf57.66.103
Stream #0:0: Video: h264 (libx264), 1 reference frame ([7][0][0][0] / 0x0007), yuv422p, 640x480, q=-1--1, 500 kb/s, 30 fps, 1k tbn, 30 tbc
Metadata:
encoder : Lavc57.82.100 libx264
Side data:
cpb: bitrate max/min/avg: 500000/0/500000 buffer size: 500000 vbv_delay: -1
[flv @ 0x1f7c1c0] Failed to update header with correct duration.
[flv @ 0x1f7c1c0] Failed to update header with correct filesize.
frame= 2155 fps=7.5 q=-1.0 Lsize= 4392kB time=00:01:11.80 bitrate= 501.1kbits/s speed=0.25x
video:4350kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.974120%
I'm not sure whether the header errors are significant, as the program continues to run as expected, but please let me know if there's anything here that's blatantly an issue, or whether there's a meaningful explanation for what the presentation issue is.

Ok, here is a quick helper for Raspberry Pi live streaming to Azure Media Services that worked out well for me.
There are a few tricks you can do here to make it work a lot better. The problem is mostly with the ffmpeg command (see the tweak just below), but you can optimize the encoding as well by using the hardware acceleration support on the Pi (if you have a Pi 2 or higher).
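One concrete thing jumps out of your log: with the yuyv422 camera input and no -pix_fmt set, libx264 chose yuv422p and the High 4:2:2 profile, which Azure Media Player (and most browser decoders) won't play; ffmpeg even hints at this ("Use -pix_fmt yuv420p for compatibility..."). A minimal tweak of your original command, just forcing 4:2:0, would look like this (I've dropped the audio options since /dev/video0 provides no audio stream):
ffmpeg -v verbose -framerate 30 -r 30 -i /dev/video0 -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -b:v 500k -maxrate 500k -bufsize 500k -g 60 -keyint_min 60 -sc_threshold 0 -f flv rtmp://{Azure channel address}/channel5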
I initially followed this guide to build FFMPEG.
https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
I had to compile the x264 codec first.
When compiling FFmpeg I had to use "make -j4" to compile on all 4 cores of the latest Raspberry Pi B+ or 3, which made it compile much faster.
Compilation still took a long time on the Pi, so I let it run overnight.
Once I had a compiled FFmpeg, I used the new H.264 OpenMAX (OMX) hardware acceleration feature:
add "--enable-omx --enable-omx-rpi" to the ./configure options (as sketched below)
use the FFmpeg encoder option "-c:v h264_omx"
see for details - https://ubuntu-mate.community/t/hardware-h264-video-encoding-with-libav-openmax-il/4997/11
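To make that concrete, here is a rough sketch of the build steps (the option set is illustrative and assumes x264 is already installed; merge it with whatever you used from the Ubuntu compilation guide above):
cd ffmpeg
./configure --enable-gpl --enable-nonfree --enable-libx264 --enable-omx --enable-omx-rpi
make -j4
sudo make install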
Once I had that working, I did a quick test to make sure I could successfully capture an MP4 file:
ffmpeg -framerate 30 -r 30 -s 640x360 -i /dev/video0 -vcodec h264_omx -preset ultrafast -acodec libfaac -ab 48k -b:v 2000k -bufsize 500k -g 60 -keyint_min 60 -sc_threshold 0 out.mp4
Finally, I went with the Smooth Streaming protocol support (which is a lot more reliable than RTMP):
ffmpeg -i /dev/video1 -pix_fmt yuv420p -f ismv -movflags isml+frag_keyframe -video_track_timescale 10000000 -frag_duration 2000000 -framerate 30 -r 30 -c:v h264_omx -preset ultrafast -map 0:v:0 -b:v:0 2000k -minrate:v:0 2000k -maxrate:v:0 2000k -bufsize 2500k -s:v:0 640x360 -map 0:v:0 -b:v:1 500k -minrate:v:1 500k -maxrate:v:1 500k -s:v:1 480x360 -g 60 -keyint_min 60 -sc_threshold 0 -c:a libfaac -ab 48k -map 0:a? -threads 0 "http://***your-account-***channel.mediaservices.windows.net/ingest.isml/Streams(video)"
DEEP EXPLANATION OF WHAT IS GOING ON ON THE FFMPEG COMMAND LINE (note this breakdown uses a file-based, multi-bitrate example rather than the live capture command above):
ffmpeg
-re **READ INPUT AT NATIVE FRAMERATE
-stream_loop -1 **LOOP INFINITE
-i C:\Video\tears_of_steel_1080p.mov **INPUT FILE IS THIS MOV FILE
-movflags isml+frag_keyframe **OUTPUT IS SMOOTH STREAMING THIS SETS THE FLAGS
-f ismv **OUTPUT ISMV SMOOTH
-threads 0 ** SETS THE THREAD COUNT TO USE FOR ALL STREAMS. YOU CAN USE A STREAM SPECIFIC COUNT AS WELL
-c:a aac ** SET TO AAC CODEC
-ac 2 ** SET THE OUTPUT TO STEREO
-b:a 64k ** SET THE BITRATE FOR THE AUDIO
-c:v libx264 ** SET THE VIDEO CODEC
-preset fast ** USE THE FAST PRESET FOR X264
-profile:v main **USE THE MAIN PROFILE
-g 48 ** GOP SIZE IS 48 frames
-keyint_min 48 ** KEY INTERVAL IS SET TO 48 FRAMES
-sc_threshold 0 ** DISABLE SCENE-CUT DETECTION SO KEYFRAMES STAY ON THE FIXED GOP BOUNDARY
-map 0:v ** MAP THE FIRST VIDEO TRACK OF THE FIRST INPUT FILE
-b:v:0 5000k **SET THE OUTPUT TRACK 0 BITRATE
-minrate:v:0 5000k ** SET OUTPUT TRACK 0 MIN RATE TO SIMULATE CBR
-maxrate:v:0 5000k ** SET OUTPUT TRACK 0 MAX RATE TO SIMULATE CBR
-s:v:0 1920x1080 **SCALE THE OUTPUT OF TRACK 0 to 1920x1080.
-map 0:v ** MAP THE FIRST VIDEO TRACK OF THE FIRST INPUT FILE
-b:v:1 3000k ** SET THE OUTPUT TRACK 1 BITRATE TO 3Mbps
-minrate:v:1 3000k -maxrate:v:1 3000k ** SET THE MIN AND MAX RATE TO SIMULATE CBR OUTPUT
-s:v:1 1280x720 ** SCALE THE OUTPUT OF TRACK 1 to 1280x720
-map 0:v -b:v:2 1800k ** REPEAT THE ABOVE STEPS FOR THE REST OF THE OUTPUT TRACKS
-minrate:v:2 1800k -maxrate:v:2 1800k -s:v:2 854x480
-map 0:v -b:v:3 1000k -minrate:v:3 1000k -maxrate:v:3 1000k -s:v:3 640x480
-map 0:v -b:v:4 600k -minrate:v:4 600k -maxrate:v:4 600k -s:v:4 480x360
-map 0:a:0 ** FINALLY TAKE THE SOURCE AUDIO FROM THE FIRST SOURCE AUDIO TRACK.
 http://<yourchannel>.channel.mediaservices.windows.net/ingest.isml/Streams(stream0)
Hope that helps get you started in the right direction. Let me know if you have any more questions.

Related

How can I mux a MKV and MKA file and get it to play in a browser?

I'm using ffmpeg to merge .mkv and .mka files into .mp4 files. My current command looks like this:
ffmpeg -i video.mkv -i audio.mka output_path.mp4
The audio and video files are pre-signed URLs from Amazon S3. Even on a server with sufficient resources, this process is going very slowly. I've researched situations where you can tell ffmpeg to skip re-encoding each frame, but I think that in my situation it actually does need to re-encode each frame.
I've downloaded 2 sample files to my MacBook Pro and have installed ffmpeg locally via Homebrew. When I run the command
ffmpeg -i video.mkv -i audio.mka -c copy output.mp4
I get the following output:
ffmpeg version 3.3.2 Copyright (c) 2000-2017 the FFmpeg developers
built with Apple LLVM version 8.1.0 (clang-802.0.42)
configuration: --prefix=/usr/local/Cellar/ffmpeg/3.3.2 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-vda
libavutil 55. 58.100 / 55. 58.100
libavcodec 57. 89.100 / 57. 89.100
libavformat 57. 71.100 / 57. 71.100
libavdevice 57. 6.100 / 57. 6.100
libavfilter 6. 82.100 / 6. 82.100
libavresample 3. 5. 0 / 3. 5. 0
libswscale 4. 6.100 / 4. 6.100
libswresample 2. 7.100 / 2. 7.100
libpostproc 54. 5.100 / 54. 5.100
Input #0, matroska,webm, from '319_audio_1498590673766.mka':
Metadata:
encoder : GStreamer matroskamux version 1.8.1.1
creation_time : 2017-06-27T19:10:58.000000Z
Duration: 00:00:03.53, start: 2.831000, bitrate: 50 kb/s
Stream #0:0(eng): Audio: opus, 48000 Hz, stereo, fltp (default)
Metadata:
title : Audio
Input #1, matroska,webm, from '319_video_1498590673766.mkv':
Metadata:
encoder : GStreamer matroskamux version 1.8.1.1
creation_time : 2017-06-27T19:10:58.000000Z
Duration: 00:00:03.97, start: 2.851000, bitrate: 224 kb/s
Stream #1:0(eng): Video: vp8, yuv420p(progressive), 640x480, SAR 1:1 DAR 4:3, 30 tbr, 1k tbn, 1k tbc (default)
Metadata:
title : Video
[mp4 @ 0x7fa4f0806800] Could not find tag for codec vp8 in stream #0, codec not currently supported in container
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Stream mapping:
Stream #1:0 -> #0:0 (copy)
Stream #0:0 -> #0:1 (copy)
Last message repeated 1 times
So it appears that the specific encodings I'm working with are VP8 video and Opus audio, which I believe are incompatible with the .mp4 output container. I would appreciate answers that cover ways of optimally merging VP8 and Opus into .mp4 output, or answers that point me toward output media formats that are compatible with both VP8 and Opus and are playable on web and mobile devices, so that I can bypass the re-encoding step altogether.
EDIT:
Just wanted to provide a benchmark after following LordNeckbeard's advice:
4 min 41 sec video transcoded locally on my Mac
LordNeckbeard's approach: 15 min 55 sec (955 seconds)
Current approach: 18 min 49 sec (1129 seconds)
~18% speed increase
You can use ffmpeg to mux and/or re-encode MKV and MKA into web-browser-compatible formats such as WebM or MP4.
WebM mux: If the input formats are VP8/VP9 video with Vorbis or Opus audio
You can just mux into WebM if your inputs are VP8 or VP9 video and Vorbis or Opus audio, such as the inputs in your question. This should be fast because it will not re-encode:
ffmpeg -i video.mkv -i audio.mka -c copy output.webm
Default stream selection behavior is to select one stream per stream type, so with -map you can tell it which streams to choose and prevent mistakes. For example, if both inputs contain multiple streams but you only want the first video stream from video.mkv and the first audio stream from audio.mka:
ffmpeg -i video.mkv -i audio.mka -map 0:v:0 -map 1:a:0 -c copy output.webm
MP4 mux: If the input formats are H.264/H.265 video and AAC audio
ffmpeg -i video.mkv -i audio.mka -c copy -movflags +faststart output.mp4
-movflags +faststart was added because you mentioned web playback. This will allow the video to begin playback before it is completely downloaded by the client.
WebM Re-encode: If the input formats are not compatible with WebM
You'll need to re-encode:
ffmpeg -i video.mkv -i audio.mka -c:v libvpx-vp9 -crf 33 -b:v 0 -c:a libopus output.webm
VP9 encoding is really slow. If you want VP8 instead, use -c:v libvpx. For more info see FFmpeg Wiki: VP8 and FFmpeg Wiki: VP9.
If you don't have libopus support, use libvorbis instead.
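For example, a VP8/Vorbis variant of the command above might look like the following. Note that libvpx's rate control works differently from VP9's: -crf is used together with a bitrate cap rather than -b:v 0 (the values here are just a starting point):
ffmpeg -i video.mkv -i audio.mka -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis output.webm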
MP4 Re-encode: If the input formats are not compatible with MP4
ffmpeg -i video.mkv -i audio.mka -c:v libx264 -crf 23 -preset medium -c:a aac -movflags +faststart output.mp4
For video, control quality with -crf and encoding speed with -preset. See FFmpeg Wiki: H.264 and FFmpeg Wiki: AAC for more info.
If your target devices are limited in the H.264 profiles they support you can add -profile:v main or -profile:v baseline.
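As a sketch, the same re-encode targeting more limited devices could look like this (the -level value is an assumption; check what your target devices actually support):
ffmpeg -i video.mkv -i audio.mka -c:v libx264 -profile:v baseline -level 3.0 -crf 23 -preset medium -c:a aac -movflags +faststart output.mp4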
ffprobe for scripting
You can make a script to automate this. ffprobe can be used to determine the formats:
$ ffprobe -loglevel error -select_streams v:0 -show_entries stream=codec_name -of csv=p=0 video.mkv
h264
$ ffprobe -loglevel error -select_streams a:0 -show_entries stream=codec_name -of csv=p=0 audio.mka
aac
The ffprobe outputs can be used as variables in an if/then statement.
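A minimal sketch of such a script, reusing the commands from this answer (the file names are assumed to match yours, and the WebM branch would work the same way):
#!/bin/sh
vcodec=$(ffprobe -loglevel error -select_streams v:0 -show_entries stream=codec_name -of csv=p=0 video.mkv)
acodec=$(ffprobe -loglevel error -select_streams a:0 -show_entries stream=codec_name -of csv=p=0 audio.mka)
if [ "$vcodec" = "h264" ] && [ "$acodec" = "aac" ]; then
  # streams are already MP4-compatible: just mux
  ffmpeg -i video.mkv -i audio.mka -c copy -movflags +faststart output.mp4
else
  # otherwise re-encode to H.264/AAC
  ffmpeg -i video.mkv -i audio.mka -c:v libx264 -crf 23 -preset medium -c:a aac -movflags +faststart output.mp4
fi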

Concat multiple (self-generated) videos using ffmpeg on raspbian linux

I am a very talented sleep talker, so I decided to write a solution that records the things I say at night to make funny videos with subtitles. The project is nearly done, but I have a big problem with concatenating the videos I generated earlier.
The video parts are generated from single png frames using this command:
ffmpeg -y -framerate 15 -i "${images_file_path}" -c:v libx264 -r 30 -pix_fmt yuv420p "${video_file_path}"
Then the sound is added using this command (got this from #9049970 and #11779490):
ffmpeg -y -i "${video_file_path}" -i "${mp3_file_path}" -map 0:v -map 1:a -vcodec copy -acodec copy -shortest "${final_video_file_path}"
All this causes no problems so far, but I think it may be relevant to know how the videos are generated. I can watch all of these and get valid video and sound - the full source code of this first part can be found here.
Now I have added a feature that generates "full videos" containing a title and a variable number of previously generated "video parts", using this command:
ffmpeg -f concat -i "${video_list_path}" -filter_complex "${filter_string} concat=n=${input_file_counter}:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" "${full_video_path}"
But something is wrong with it and I get this error:
Invalid file index 1 in filtergraph description [0:v:0] [1:v:0] [2:v:0] [2:a:0] [3:v:0] [4:v:0] [4:a:0] [5:v:0] [6:v:0] [6:a:0] [7:v:0] concat=n=8:v=1:a=1 [v] [a].
The full output is:
ffmpeg version N-77213-g7c1c453 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.9.2 (Raspbian 4.9.2-10)
configuration: --enable-shared --enable-gpl --prefix=/usr --enable-nonfree --enable-libmp3lame --enable-libfaac --enable-libx264 --enable-version3 --disable-mmx
libavutil 55. 10.100 / 55. 10.100
libavcodec 57. 17.100 / 57. 17.100
libavformat 57. 20.100 / 57. 20.100
libavdevice 57. 0.100 / 57. 0.100
libavfilter 6. 20.100 / 6. 20.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0xc2e900] Auto-inserting h264_mp4toannexb bitstream filter
Input #0, concat, from '/usr/sleeptalk/records-rendered/3enguzpuu2gw0ogk8wkko/videos.txt':
Duration: N/A, start: 0.000000, bitrate: 61 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080, 58 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 2 kb/s
Metadata:
handler_name : SoundHandler
Invalid file index 1 in filtergraph description [0:v:0] [1:v:0] [2:v:0] [2:a:0] [3:v:0] [4:v:0] [4:a:0] [5:v:0] [6:v:0] [6:a:0] [7:v:0] concat=n=8:v=1:a=1 [v] [a].
I also wrote a test case so you can reproduce this on your local machine. Download the files from my dropbox. Also, the full script that renders the "final move" can be found here.
Would be great to get an idea - I've been struggling to fix this for the last two days.
You're using both the concat demuxer and the concat filter. Skip the latter, because
a) it's unnecessary, and
b) the concat demuxer does not present the listed files as separate inputs, so the filtergraph indices beyond 0 don't refer to anything. Also, the concat filter needs an equal number of streams per input file, and their input assignment has to be pair-wise, i.e. [0:v:0] [0:a:0] [1:v:0] [1:a:0] [2:v:0] [2:a:0]....
Instead, use
ffmpeg -f concat -i "${video_list_textfile}" -c copy "${full_video_path}"
where ${video_list_textfile} is a text file of the form
file 'file1.mp4'
file 'file2.mp4'
file 'file3.mp4'
...
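If it helps, one hypothetical way to generate that list file and run the concat demuxer in one go (the ./parts directory is an assumption; -safe 0 is only needed on newer ffmpeg builds when the listed paths contain directories):
printf "file '%s'\n" ./parts/*.mp4 > videos.txt
ffmpeg -f concat -safe 0 -i videos.txt -c copy full_video.mp4
Note that -c copy only works because all the parts are generated by the same commands and therefore share codec parameters.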

Merging video and audio stream, where audio drifts

I want to record audio and video with my Raspberry Pi 2 B+.
I tried to accomplish this with one ffmpeg command, but it is too slow and I could not get it working correctly.
I have a Raspberry Pi camera module and a Cirrus audio card. On the Raspberry Pi I have compiled a new kernel with support for the audio card. I also compiled ffmpeg on the Raspberry Pi with ALSA support.
~$ ffmpeg
ffmpeg version N-71470-g2db24cf Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.6 (Debian 4.6.3-14+rpi1)
configuration: --arch=armel --target-os=linux --enable-gpl --extra-libs=-lasound --enable-nonfree
libavutil 54. 22.101 / 54. 22.101
libavcodec 56. 34.100 / 56. 34.100
libavformat 56. 30.100 / 56. 30.100
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 14.100 / 5. 14.100
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Hyper fast Audio and Video encoder
usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...
Now I try to record an audio stream and a video stream 'at the same time'.
I do this by running a shell script:
raspivid -t 60000 -vs -w 1280 -h 720 -b 5000000 -fps 25 -o video.h264 &
arecord -Dhw:sndrpiwsp -r 44100 -c 2 -d 60 -f S32_LE audio.aac
I also tried with -r 22050 and -f S16_LE.
When running this, it sometimes prints what I think is an overrun warning:
overrun!!! (at least 1038.725 ms long)
At the end of the script I have two files: a video file and an audio file.
Now I want to merge those two together using ffmpeg:
ffmpeg -i video.h264 -i audio.aac -c:v copy -c:a aac -strict experimental output.mp4
this gives the output:
ffmpeg version N-71470-g2db24cf Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.6 (Debian 4.6.3-14+rpi1)
configuration: --arch=armel --target-os=linux --enable-gpl --extra-libs=-lasound --enable-nonfree
libavutil 54. 22.101 / 54. 22.101
libavcodec 56. 34.100 / 56. 34.100
libavformat 56. 30.100 / 56. 30.100
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 14.100 / 5. 14.100
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Input #0, h264, from 'video_1min_3.h264':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264 (High), yuv420p, 1280x720, 25 fps, 25 tbr, 1200k tbn, 50 tbc
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, wav, from 'audio_1min_3.aac':
Duration: 00:01:00.00, bitrate: 705 kb/s
Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 22050 Hz, 2 channels, s16, 705 kb/s
[mp4 @ 0x3230f20] Codec for stream 0 does not use global headers but container format requires global headers
Output #0, mp4, to 'output_1min_3.mp4':
Metadata:
encoder : Lavf56.30.100
Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 1280x720, q=2-31, 25 fps, 25 tbr, 1200k tbn, 1200k tbc
Stream #0:1: Audio: aac ([64][0][0][0] / 0x0040), 22050 Hz, stereo, fltp, 128 kb/s
Metadata:
encoder : Lavc56.34.100 aac
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #1:0 -> #0:1 (pcm_s16le (native) -> aac (native))
Press [q] to stop, [?] for help
frame= 1822 fps=310 q=-1.0 Lsize= 33269kB time=00:01:12.84 bitrate=3741.7kbits/s
video:32300kB audio:941kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.086073%
So finally I have a file output.mp4: a movie with audio that is in sync at the beginning but drifts to a difference of about 4 seconds, where the audio is ahead of the video.
I hope you can help me solve this issue so the audio does not drift away anymore.
Thanks in advance.
(I tried to be as clear as possible.)
We can try to use the -async and -vsync options to correct the audio and video time shift.
For example, I have used the options below to reduce a 2-second lag seen in the audio:
./ffmpeg -async 1 -i "weatherinput.mov" -strict -2 -vcodec libx264 -movflags +faststart -vprofile high -preset slow -b:v 500k -maxrate 500k -bufsize 1000k -threads 0 -b:a 128k -pix_fmt yuv420p "weatheroutput.mp4"
We can also use the -vsync option if required, apart from -itsoffset.
Other combinations of -async, -vsync and -itsoffset can likewise be used to avoid the drift.
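As a sketch, if the shift were a constant 2 seconds with the audio ahead, delaying the audio input with -itsoffset would look like this (the offset value is an assumption - measure your own shift):
ffmpeg -i video.h264 -itsoffset 2 -i audio.wav -map 0:v -map 1:a -c:v copy -c:a aac -strict experimental output.mp4
Bear in mind that -itsoffset only corrects a fixed offset; progressive drift like yours usually needs -async (which stretches or squeezes the audio to match its timestamps) or a fix to the capture clocks themselves.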

FFMpeg Concatenation Filters: Stream specifier ':0' in filtergraph matches no streams

I am developing an application that relies heavily on FFMpeg to perform various transformations on audio files. I am currently testing my FFMpeg configuration on the command line.
I am trying to concatenate multiple audio files which are in different formats (Primarily MP3, MP2 & WAV). I have been using the official TRAC documentation (https://trac.ffmpeg.org/wiki/How%20to%20concatenate%20(join%2C%20merge)%20media%20files#differentcodec) to help me with this and have created the following command:
ffmpeg -i OHIn.wav -i OHOut.wav -filter_complex '[0:0] [1:0] concat=n=2:a=1 [a]' -map '[a]' output.wav
However, when I run this on Mac OS X using version 2.0.1 of FFMpeg, I get the following error message:
Stream specifier ':0' in filtergraph description [0:0] [1:0] concat=n=2:a=1 [a] matches no streams.
Here is my full output from the terminal:
~/ffmpeg -i OHIn.wav -i OHOut.wav -filter_complex '[0:0] [1:0] concat=n=2:a=1 [a]' -map '[a]' output.wav
ffmpeg version 2.0.1 Copyright (c) 2000-2013 the FFmpeg developers
built on Aug 15 2013 10:56:46 with llvm-gcc 4.2.1 (LLVM build 2336.11.00)
configuration: --prefix=/Volumes/Ramdisk/sw --enable-gpl --enable-pthreads --enable-version3 --enable-libspeex --enable-libvpx --disable-decoder=libvpx --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-avfilter --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-filters --enable-libgsm --arch=x86_64 --enable-runtime-cpudetect
libavutil 52. 38.100 / 52. 38.100
libavcodec 55. 18.102 / 55. 18.102
libavformat 55. 12.100 / 55. 12.100
libavdevice 55. 3.100 / 55. 3.100
libavfilter 3. 79.101 / 3. 79.101
libswscale 2. 3.100 / 2. 3.100
libswresample 0. 17.102 / 0. 17.102
libpostproc 52. 3.100 / 52. 3.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, wav, from 'OHIn.wav':
Duration: 00:00:06.71, bitrate: 1411 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, wav, from 'OHOut.wav':
Duration: 00:00:07.19, bitrate: 1411 kb/s
Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
Stream specifier ':0' in filtergraph description [0:0] [1:0] concat=n=2:a=1 [a] matches no streams.
I do not understand why this does not work. FFMpeg shows that the streams 0:0 and 1:0 exist in the source files. The only other similar problems I've found online surround the use of single quotes on Windows; however, my testing confirms that does not apply to the Mac command line.
Any help would be much appreciated.
You need to tell the concat filter the number of output video streams. The default is v=1 for video and a=0 for audio, but you have no video streams. It's best to not rely on the defaults. Manually list the number of input segments (n), output video streams (v), and output audio streams (a).
ffmpeg -i input0.wav -i input1.wav -filter_complex "[0:a][1:a]concat=n=2:v=0:a=1[a]" -map "[a]" output.wav
Notice that I added v=0.
See the concat filter documentation for more info.
In addition to upvoting LordNeckbeard's response (which solved my problem, btw): I wanted to provide a working example of a Bash shell script showing how I concatenate three mp3 files (an intro, middle and outro, each having the same bitrate of 160 kbps and sample rate of 44.1 kHz) into one result mp3. The reason why my filtergraph reads:
[0:a] [1:a] [2:a]
instead of something like:
[0:0] [1:0] [2:0]
is because some mp3s had artwork, which ffmpeg sees as two streams for each input mp3 file: one audio (for the music itself) and one video (for the image artwork file).
The :a portion lets ffmpeg know that you want only the audio stream(s) from that input file passed along to the concat filter, so any video streams get ignored. The benefit of doing this is that you don't need to know the position of the video stream (so you don't accidentally pass it), which you would otherwise look up by running a command like:
ffprobe control-intro-recording.mp3
Anyways, I digress; here's the shell script:
#!/bin/sh
ffmpeg -i ./source-files/control-intro-recording.mp3 \
-i ./source-files/control-middle-1-hour-recording-with-artwork-160-kbps.mp3 \
-i ./source-files/control-outro-recording.mp3 \
-filter_complex '[0:a] [1:a] [2:a] concat=n=3:v=0:a=1 [a]' \
-map '[a]' ./output-files/control-output-with-artwork-160-kbps-improved.mp3
I ran into this Stream specifier ':0' in filtergraph description [0:0] [1:0]... error trying to combine two video files. LordNeckbeard's answer helped me diagnose the issue. I mention it as a separate answer in case a future querent like myself encounters this situation with video files.
It turned out that one of my videos didn't have an audio track. Adding a silent audio track with
ffmpeg -f lavfi -i aevalsrc=0 -i title-slide.mp4 -shortest -c:v copy \
-c:a mp3 -strict experimental title.mp4
got me going.

Crossdevice encoding static file to stream in browser using FFMPEG (segmented h264 ?)

I'm building a mediacenter application in NodeJS, which is going pretty well.
(you can check it out on Github: https://github.com/jansmolders86/mediacenterjs )
I'm using FFMPEG to transcode local (static) movies to a stream which I then send to the browser.
At first I used h264 with Flash, which worked in browsers, but I really need it to work on Android and iOS (so no Flash), and preferably on a Raspberry Pi.
But getting it to play on all devices is driving me absolutely insane!
I have all these bits of the puzzle, gathered from countless hours of reading articles, tutorials and Stack Overflow posts, which led me to the conclusion that I need to produce the following:
Use the H264 video codec to transcode to MP4
Move the moov atom ('-movflags') to make the MP4 streamable
Segment the stream so Apple can play it as well.
But I'm getting nowhere with this. Every time I produce a set of FFMPEG settings, they either don't work, or work on some devices rather than all.
Some of my failed attempts were:
My Flash attempt -> Main problem (not running on iOS):
'-y','-ss 0','-b 800k','-vcodec libx264','-acodec mp3'\
'-ab 128','-ar 44100','-bufsize 62000', '-maxrate 620k'\
metaDuration,tDuration,'-f flv
My HLS attempt -> Main problem (not running in the browser):
'-r 15','-b:v 128k','-c:v libx264','-x264opts level=41'\
'-threads 4','-s 640x480','-map 0:v','-map 0:a:0','-c:a mp3'\
'-b:a 160000','-ac 2','-f hls','-hls_time 10','-hls_list_size 6'\
'-hls_wrap 18','-start_number 1'
My MP4 attempt -> Main problem (duration is shortened and the later part of the video speeds by):
'-y','-ss 0','-b 800k','-vcodec libx264','-acodec mp3'\
'-ab 128','-ar 44100','-bufsize 62000', '-maxrate 620k'\
metaDuration,tDuration,'-f mp4','-movflags','frag_keyframe+empty_moov'
Second MP4 attempt -> Main problem (duration is shortened and the later part of the video speeds by):
'-y','-vcodec libx264','-pix_fmt yuv420p','-b 1200k','-flags +loop+mv4'\
'-cmp 256','-partitions +parti4x4+parti8x8+partp4x4+partp8x8+partb8x8'\
'-me_method hex','-subq 7','-trellis 1','-refs 5','-bf 3','-coder 1'\
'-me_range 16','-g 150','-keyint_min 25','-sc_threshold 40'\
'-i_qfactor 0.71','-acodec mp3','-qmin 10','-qdiff 4','-qmax 51'\
'-ab 128k','-ar 44100','-threads 2','-f mp4','-movflags','frag_keyframe+empty_moov'])
Here is an example of the FFMPEG log running with these settings:
file conversion error ffmpeg version N-52458-gaa96439 Copyright (c) 2000-2013 the FFmpeg developers
built on Apr 24 2013 22:19:32 with gcc 4.8.0 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
libavutil 52. 27.101 / 52. 27.101
libavcodec 55. 6.100 / 55. 6.100
libavformat 55. 3.100 / 55. 3.100
libavdevice 55. 0.100 / 55. 0.100
libavfilter 3. 60.101 / 3. 60.101
libswscale 2. 2.100 / 2. 2.100
libswresample 0. 17.102 / 0. 17.102
libpostproc 52. 3.100 / 52. 3.100
[avi @ 02427900] non-interleaved AVI
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, avi, from 'C:/temp/the avengers.avi':
Duration: 00:00:34.00, start: 0.000000, bitrate: 1433 kb/s
Stream #0:0: Video: cinepak (cvid / 0x64697663), rgb24, 320x240, 15 tbr, 15 tbn, 15 tbc
Stream #0:1: Audio: pcm_u8 ([1][0][0][0] / 0x0001), 22050 Hz, mono, u8, 176 kb/s
Please use -b:a or -b:v, -b is ambiguous
[libx264 @ 02527c60] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 @ 02527c60] profile High, level 2.0
[libx264 @ 02527c60] 264 - core 130 r2274 c832fe9 - H.264/MPEG-4 AVC codec - Copyleft 2003-2013 - http://www.videolan.org/x264.html - options: cabac=1 ref=5 deblock=1:0:0 analyse=0x3:0x133 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=2 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=150 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=abr mbtree=1 bitrate=1200 ratetol=1.0 qcomp=0.60 qpmin=10 qpmax=51 qpstep=4 ip_ratio=1.41 aq=1:1.00
Output #0, mp4, to 'pipe:1':
Metadata:
encoder : Lavf55.3.100
Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 320x240, q=10-51, 1200 kb/s, 15360 tbn, 15 tbc
Stream #0:1: Audio: mp3 (i[0][0][0] / 0x0069), 44100 Hz, mono, s16p, 128 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (cinepak -> libx264)
Stream #0:1 -> #0:1 (pcm_u8 -> libmp3lame)
Press [q] to stop, [?] for help
frame= 106 fps=0.0 q=10.0 size= 1kB time=00:00:06.94 bitrate= 1.4kbits/s
frame= 150 fps=149 q=14.0 size= 1kB time=00:00:09.87 bitrate= 1.0kbits/s
frame= 191 fps=126 q=16.0 size= 1kB time=00:00:12.61 bitrate= 0.8kbits/s
frame= 244 fps=121 q=16.0 size= 2262kB time=00:00:16.14 bitrate=1147.6kbits/s
frame= 303 fps=120 q=14.0 size= 2262kB time=00:00:20.08 bitrate= 922.2kbits/s
frame= 354 fps=117 q=15.0 size= 3035kB time=00:00:23.48 bitrate=1058.6kbits/s
frame= 402 fps=113 q=15.0 size= 3035kB time=00:00:26.67 bitrate= 932.1kbits/s
frame= 459 fps=113 q=16.0 size= 4041kB time=00:00:30.43 bitrate=1087.7kbits/s
frame= 510 fps=103 q=2686559.0 Lsize= 5755kB time=00:00:33.93 bitrate=1389.3kbits/s
video:5211kB audio:531kB subtitle:0 global headers:0kB muxing overhead 0.235111%
[libx264 @ 02527c60] frame I:6 Avg QP:10.55 size: 25921
[libx264 @ 02527c60] frame P:245 Avg QP:12.15 size: 14543
[libx264 @ 02527c60] frame B:259 Avg QP:15.55 size: 6242
[libx264 @ 02527c60] consecutive B-frames: 6.1% 73.7% 14.7% 5.5%
[libx264 @ 02527c60] mb I I16..4: 19.9% 6.2% 73.9%
[libx264 @ 02527c60] mb P I16..4: 6.0% 0.2% 12.0% P16..4: 35.4% 9.6% 16.3% 7.0% 5.6% skip: 7.8%
[libx264 @ 02527c60] mb B I16..4: 0.7% 0.0% 4.3% B16..8: 27.6% 17.2% 17.0% direct:17.3% skip:15.9% L0:39.4% L1:43.2% BI:17.4%
[libx264 @ 02527c60] final ratefactor: 11.41
[libx264 @ 02527c60] 8x8 transform intra:1.6% inter:4.0%
[libx264 @ 02527c60] coded y,uvDC,uvAC intra: 93.0% 97.0% 94.9% inter: 58.4% 58.7% 50.6%
[libx264 @ 02527c60] i16 v,h,dc,p: 15% 26% 54% 5%
[libx264 @ 02527c60] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 16% 17% 39% 4% 4% 3% 1% 6% 9%
[libx264 @ 02527c60] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 34% 21% 4% 2% 2% 2% 2% 5%
[libx264 @ 02527c60] i8c dc,h,v,p: 51% 24% 19% 6%
[libx264 @ 02527c60] Weighted P-Frames: Y:4.1% UV:1.2%
[libx264 @ 02527c60] ref P L0: 68.2% 9.8% 11.0% 5.6% 4.6% 0.8% 0.0%
[libx264 @ 02527c60] ref B L0: 87.7% 8.0% 3.9% 0.4%
[libx264 @ 02527c60] ref B L1: 97.8% 2.2%
[libx264 @ 02527c60] kb/s:1255.36
Lastly, this is my Node code firing up FFMPEG. (I use the module fluent-ffmpeg: https://github.com/schaermu/node-fluent-ffmpeg )
var proc = new ffmpeg({ source: movie, nolog: true, timeout: 15000 })
    .addOptions(['-r 15', '-b:v 128k', '-c:v libx264', '-x264opts level=41', '-threads 4', '-s 640x480', '-map 0:v', '-map 0:a:0', '-c:a mp3', '-b:a 160000', '-ac 2', '-f hls', '-hls_time 10', '-hls_list_size 6', '-hls_wrap 18', '-start_number 1 stream.m3u8'])
    .writeToStream(res, function (retcode, error) {
        if (!error) {
            console.log('file has been converted successfully', retcode.green);
        } else {
            console.log('file conversion error', error.red);
        }
    });
So to conclude this very long and code-heavy question: I hope this does not come off as a lazy request, but could someone show/explain to me which FFMPEG settings could/should work on all platforms (modern browsers, Android and iOS), producing a stream of a static file which I can send to an HTML5 player?
[EDIT] What I need if a generic option isn't available:
If this is not possible, as some posts might suggest, I would love to see a set of FFMPEG settings that gets the job done properly as far as MP4 streaming is concerned (e.g. encoding a streamable MP4).
The streaming MP4 needs the following:
A shifted moov atom
It needs to be h264
Thanks very much for your help!
There is no format that can play on every device and browser. HTML5 is getting us closer, but there is still debate over formats and codecs. My friends at Zencoder have a new blog post (HERE) that addresses this exact issue.
EDIT: you asked for more specifics. Again, it depends on what platforms you wish to target. I will cover a couple here.
This should play on all modern browsers that support the h.264 codec. It should also play on iPhone4 and above:
ffmpeg -i ~/Dropbox/Test\ Content/bigbuckbunny/bigbuckbunny_1500.mp4 -vcodec libx264 -profile:v main -b:v 512k -s 1280x720 -r:v 30 -acodec libfdk_aac -b:a 128k -movflags faststart -y movie1.mp4
The iPhone 3GS does not support the main profile, and its max supported resolution is 640x480. This command will encode for that older device:
ffmpeg -i ~/Dropbox/Test\ Content/bigbuckbunny/bigbuckbunny_1500.mp4 -vcodec libx264 -profile:v baseline -b:v 512k -s 640x432 -r:v 30 -acodec libfdk_aac -b:a 128k -movflags faststart -y movie2.mp4
I encoded some sample files and created a web page here:
http://szatmary.org/stackoverflow/18758133/
The HTML looks like this:
<!DOCTYPE html>
<html>
<body>
<br>ffmpeg -i ~/Dropbox/Test\ Content/bigbuckbunny/bigbuckbunny_1500.mp4 -vcodec libx264 -profile:v main -b:v 512k -s 1280x720 -r:v 30 -acodec libfdk_aac -b:a 128k -movflags faststart -y movie1.mp4<br>
<video controls>
<source src="movie1.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<br>ffmpeg -i ~/Dropbox/Test\ Content/bigbuckbunny/bigbuckbunny_1500.mp4 -vcodec libx264 -profile:v baseline -b:v 512k -s 640x432 -r:v 30 -acodec libfdk_aac -b:a 128k -movflags faststart -y movie2.mp4<br>
<video controls>
<source src="movie2.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
</body>
</html>
Here is the command broken down into what each element means:
command + input file (should be obvious):
ffmpeg -i ~/Dropbox/Test\ Content/bigbuckbunny/bigbuckbunny_1500.mp4
Use libx264 to encode the video:
-vcodec libx264
Set the h.264 profile to main. baseline will allow for playback on older devices, but you will sacrifice a little quality:
-profile:v main
Set the bitrate to 512 kilobits per second. Choose a value based on the available bandwidth. Higher for LAN/WiFi, lower for 3G/LTE
-b:v 512k
Scale the video to 720p resolution (Again depends on target platform)
-s 1280x720
Encode at 30 frames per second:
-r:v 30
Use libfdk_aac to encode the audio. Or use libmp3lame if you want mp3. AAC is highly recommended. It has much better support on ios and produces higher quality audio:
-acodec libfdk_aac
Set audio bitrate to 128 kilobits per second. You may adjust this for bandwidth as well. with AAC you can probably go as low as 32k
-b:a 128k
Set the audio sampling rate to 48000 samples per second. If using mp3, use 44100 for iOS:
-r:a 48000
This tells ffmpeg to place the moov atom at the start of the mp4 file.
-movflags faststart
Output file (-y tells ffmpeg it can overwrite the file without asking)
-y movie1.mp4
As far as I know, you won't find a setting that works on every device. I'd recommend checking the user agent and then using different settings for different devices. This way you could also use device-optimized settings.
I'm thinking about two workarounds to enable seeking. I'm assuming that, like me, you don't want to store files transcoded in advance.
The first one is just faking the seek from the browser: use a custom timeline control and, when seeking, change the video src to a URL including the desired start time, and pass that to ffmpeg. Of course this completely defeats browser prefetching. I implemented this in my project and it works fine (server-side sketch below).
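For the server side of that workaround, a sketch of the ffmpeg invocation could stay close to what you already have: seek before the input and pipe a fragmented MP4 out (the start time of 120 seconds is a placeholder for whatever the player requests):
ffmpeg -ss 120 -i movie.mp4 -c:v libx264 -preset veryfast -c:a libmp3lame -f mp4 -movflags frag_keyframe+empty_moov pipe:1
Putting -ss before -i makes the seek fast, at the cost of starting on the nearest keyframe rather than the exact frame.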
The second option is a bit more technical, and I'm not sure how to do it. The idea would be to transcode all available videos in advance, extract the moov atom, save it to disk and write it manually when streaming. This seems quite hard to do, but not impossible.
