How can I mux a MKV and MKA file and get it to play in a browser?

I'm using ffmpeg to merge .mkv and .mka files into .mp4 files. My current command looks like this:
ffmpeg -i video.mkv -i audio.mka output_path.mp4
The audio and video files are pre-signed urls from Amazon S3. Even on a server with sufficient resources, this process is going very slowly. I've researched situations where you can tell ffmpeg to skip re-encoding each frame, but I think that in my situation it actually does need to re-encode each frame.
I've downloaded 2 sample files to my MacBook Pro and installed ffmpeg locally via Homebrew. When I run the command
ffmpeg -i video.mkv -i audio.mka -c copy output.mp4
I get the following output:
ffmpeg version 3.3.2 Copyright (c) 2000-2017 the FFmpeg developers
built with Apple LLVM version 8.1.0 (clang-802.0.42)
configuration: --prefix=/usr/local/Cellar/ffmpeg/3.3.2 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-vda
libavutil 55. 58.100 / 55. 58.100
libavcodec 57. 89.100 / 57. 89.100
libavformat 57. 71.100 / 57. 71.100
libavdevice 57. 6.100 / 57. 6.100
libavfilter 6. 82.100 / 6. 82.100
libavresample 3. 5. 0 / 3. 5. 0
libswscale 4. 6.100 / 4. 6.100
libswresample 2. 7.100 / 2. 7.100
libpostproc 54. 5.100 / 54. 5.100
Input #0, matroska,webm, from '319_audio_1498590673766.mka':
Metadata:
encoder : GStreamer matroskamux version 1.8.1.1
creation_time : 2017-06-27T19:10:58.000000Z
Duration: 00:00:03.53, start: 2.831000, bitrate: 50 kb/s
Stream #0:0(eng): Audio: opus, 48000 Hz, stereo, fltp (default)
Metadata:
title : Audio
Input #1, matroska,webm, from '319_video_1498590673766.mkv':
Metadata:
encoder : GStreamer matroskamux version 1.8.1.1
creation_time : 2017-06-27T19:10:58.000000Z
Duration: 00:00:03.97, start: 2.851000, bitrate: 224 kb/s
Stream #1:0(eng): Video: vp8, yuv420p(progressive), 640x480, SAR 1:1 DAR 4:3, 30 tbr, 1k tbn, 1k tbc (default)
Metadata:
title : Video
[mp4 @ 0x7fa4f0806800] Could not find tag for codec vp8 in stream #0, codec not currently supported in container
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Stream mapping:
Stream #1:0 -> #0:0 (copy)
Stream #0:0 -> #0:1 (copy)
Last message repeated 1 times
So it appears that the specific encodings I'm working with are vp8 videos and opus audio files, which I believe are incompatible with the .mp4 output container. I would appreciate answers that cover ways of optimally merging vp8 and opus into .mp4 output or answers that point me in the direction of output media formats that are both compatible with vp8 & opus and are playable on web and mobile devices so that I can bypass the re-encoding step altogether.
EDIT:
Just wanted to provide a benchmark after following LordNeckbeard's advice:
A 4 min 41 s video transcoded locally on my Mac:
LordNeckbeard's approach: 15 min 55 s (955 seconds)
Current approach: 18 min 49 s (1129 seconds)
≈18% speed increase

You can use ffmpeg to mux and/or re-encode MKV and MKA into browser-compatible formats such as WebM or MP4.
WebM mux: If the input formats are VP8/VP9 video with Vorbis or Opus audio
You can just mux into WebM if your inputs are VP8 or VP9 video and Vorbis or Opus audio, such as the inputs in your question. This should be fast because it will not re-encode:
ffmpeg -i video.mkv -i audio.mka -c copy output.webm
Default stream selection behavior is to select one stream per stream type, so with -map you can tell it which streams to choose to prevent mistakes. For example, if both inputs contain multiple streams, but you only want the first video stream from video.mkv and the first audio stream from audio.mka:
ffmpeg -i video.mkv -i audio.mka -map 0:v:0 -map 1:a:0 -c copy output.webm
MP4 mux: If the input formats are H.264/H.265 video and AAC audio
ffmpeg -i video.mkv -i audio.mka -c copy -movflags +faststart output.mp4
-movflags +faststart was added because you mentioned web playback. This will allow the video to begin playback before it is completely downloaded by the client.
WebM Re-encode: If the input formats are not compatible with WebM
You'll need to re-encode:
ffmpeg -i video.mkv -i audio.mka -c:v libvpx-vp9 -crf 33 -b:v 0 -c:a libopus output.webm
VP9 is really slow. If you want VP8 instead, use -c:v libvpx. For more info see FFmpeg Wiki: VP8 and FFmpeg Wiki: VP9.
If you don't have libopus support, use libvorbis instead.
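If you take the VP8/Vorbis route instead, a minimal sketch along the lines of the FFmpeg Wiki's suggested VP8 starting point (the -crf 10 with -b:v 1M pairing comes from the wiki, not from testing here) would be:
ffmpeg -i video.mkv -i audio.mka -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis output.webm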
MP4 Re-encode: If the input formats are not compatible with MP4
ffmpeg -i video.mkv -i audio.mka -c:v libx264 -crf 23 -preset medium -c:a aac -movflags +faststart output.mp4
For video, control quality with -crf and encoding speed with -preset. See FFmpeg Wiki: H.264 and FFmpeg Wiki: AAC for more info.
If your target devices are limited in the H.264 profiles they support you can add -profile:v main or -profile:v baseline.
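For example, a hedged variant of the re-encode above for such devices might look like this (the level value, and the explicit yuv420p pixel format that the baseline profile requires, are assumptions rather than tested settings):
ffmpeg -i video.mkv -i audio.mka -c:v libx264 -profile:v baseline -level 3.0 -pix_fmt yuv420p -crf 23 -preset medium -c:a aac -movflags +faststart output.mp4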
ffprobe for scripting
You can make a script to automate this. ffprobe can be used to determine the formats:
$ ffprobe -loglevel error -select_streams v:0 -show_entries stream=codec_name -of csv=p=0 video.mkv
h264
$ ffprobe -loglevel error -select_streams a:0 -show_entries stream=codec_name -of csv=p=0 audio.mka
aac
The ffprobe outputs can be used as variables in an if/then statement.
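For example, a minimal shell sketch (the branching logic and file names are assumptions layered on the commands above, not a tested script) could switch between remuxing and re-encoding based on those codec names:
#!/bin/sh
# Probe the first video and first audio stream of each input.
vcodec=$(ffprobe -loglevel error -select_streams v:0 -show_entries stream=codec_name -of csv=p=0 video.mkv)
acodec=$(ffprobe -loglevel error -select_streams a:0 -show_entries stream=codec_name -of csv=p=0 audio.mka)

if [ "$vcodec" = "h264" ] && [ "$acodec" = "aac" ]; then
  # Already MP4-compatible: remux without re-encoding.
  ffmpeg -i video.mkv -i audio.mka -c copy -movflags +faststart output.mp4
elif { [ "$vcodec" = "vp8" ] || [ "$vcodec" = "vp9" ]; } && { [ "$acodec" = "opus" ] || [ "$acodec" = "vorbis" ]; }; then
  # Already WebM-compatible: remux without re-encoding.
  ffmpeg -i video.mkv -i audio.mka -c copy output.webm
else
  # Fall back to an MP4 re-encode.
  ffmpeg -i video.mkv -i audio.mka -c:v libx264 -crf 23 -preset medium -c:a aac -movflags +faststart output.mp4
fi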

ffmpeg complex filtering: how to get around

Alright, I have my own compiled ffmpeg with --enable-lv2, which allows 3rd-party plugins to work. The plugin I use is https://github.com/lucianodato/speech-denoiser; it wraps around this RNN noise reduction library: https://github.com/GregorR/rnnoise-models
The following commands work:
(1) ffmpeg -i input.mov -filter_complex '[0:a]lv2=plugin=https\\://github.com/lucianodato/speech-denoiser[audio]' -map "[audio]" output.wav
(2) ffmpeg -i input.mov -filter_complex '[0:v]copy[video]' -map "[video]" output.mov
But when I do the combination, that doesn't work.
ffmpeg -i input.mov -filter_complex '[0:a]lv2=plugin=https\\://github.com/lucianodato/speech-denoiser[audio];[0:v]copy[video]' -map "[audio]" -map "[video]" output.mov
I think the error is essentially this:
Channel layout change is not supported
Error while filtering: Not yet implemented in FFmpeg, patches welcome
Failed to inject frame into filter network: Not yet implemented in FFmpeg, patches welcome
Error while processing the decoded data for stream #0:0
My guess: this 3rd-party filter is not configured to work with any output stream other than audio.
My question: can I somehow trick this 3rd-party plugin that it is outputting to an audio file, while still outputting everything to a video file?
Note: I know, I can simply split this up in 2 commands and be done with it, so I'm wondering if I can accomplish this via one ffmpeg command. How I would split it up in 2 commands is as follows:
ffmpeg -i out_cropped.mov -af 'lv2=plugin=https\\://github.com/lucianodato/speech-denoiser' -vcodec copy out_cropped_denoised.wav
&&
ffmpeg -i out_cropped.mov -i out_cropped_denoised.wav -c:v copy -map 0:v:0 -map 1:a:0 out_cropped_denoised.mov
But I want to be able to put it all in one complex filter (ideally) or at least in one ffmpeg command.
Appendix: here is the full interaction
ffmpeg -i input.mov -filter_complex '[0:a]lv2=plugin=https\\://github.com/lucianodato/speech-denoiser[audio];[0:v]copy[video]' -map "[audio]" -map "[video]" output.mov
ffmpeg version N-95577-g68f623d644 Copyright (c) 2000-2019 the FFmpeg developers
built with Apple clang version 11.0.0 (clang-1100.0.33.8)
configuration: --prefix=/usr/local --enable-gpl --enable-nonfree --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libopus --enable-libxvid --enable-lv2 --samples=fate-suite/
libavutil 56. 35.101 / 56. 35.101
libavcodec 58. 60.100 / 58. 60.100
libavformat 58. 33.100 / 58. 33.100
libavdevice 58. 9.100 / 58. 9.100
libavfilter 7. 65.100 / 7. 65.100
libswscale 5. 6.100 / 5. 6.100
libswresample 3. 6.100 / 3. 6.100
libpostproc 55. 6.100 / 55. 6.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mov':
Metadata:
major_brand : qt
minor_version : 512
compatible_brands: qt
encoder : Lavf58.29.100
Duration: 00:16:19.11, start: 0.000000, bitrate: 1341 kb/s
Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1080x960, 1262 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc (default)
Metadata:
handler_name : Core Media Video
encoder : Lavc58.54.100 libx264
Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 69 kb/s (default)
Metadata:
handler_name : Core Media Audio
File 'output.mov' already exists. Overwrite? [y/N] y
(Note: I typed yes, and then this came.)
Stream mapping:
Stream #0:0 (h264) -> copy
Stream #0:1 (aac) -> lv2
lv2 -> Stream #0:0 (aac)
copy -> Stream #0:1 (libx264)
Press [q] to stop, [?] for help
[out_0_0 @ 0x7fa6811066c0] Channel layout change is not supported
Error while filtering: Not yet implemented in FFmpeg, patches welcome
Failed to inject frame into filter network: Not yet implemented in FFmpeg, patches welcome
Error while processing the decoded data for stream #0:0
I forgot to post an answer here, but I recompiled the ffmpeg project.
And then I could use this command: ffmpeg -i out_cropped.mov -af 'lv2=plugin=https\\://github.com/lucianodato/speech-denoiser' -vcodec copy out_cropped_denoised.wav
I remember writing a compilation guide for myself, since compiling seemed like a scary thing to do. And it was (just a little), but ultimately it was perfectly doable.
Here's the guide.
How to compile ffmpeg, lv2 and speech-denoiser on a Mac and denoise your audio files (and put them into videos)!
Helpful guide for compiling ffmpeg on MacOS:
CompilationGuide/macOS – FFmpeg
Install dependencies
brew install automake fdk-aac git lame libass libtool libvorbis libvpx \
opus sdl shtool texi2html theora wget x264 x265 xvid nasm
Install lilv (dependency for lv2)
brew install lilv #because of ERROR: lilv-0 not found using pkg-config when doing ./configure right away
Configure ffmpeg
./configure --prefix=/usr/local --enable-gpl --enable-nonfree --enable-libass \
--enable-libfdk-aac --enable-libfreetype --enable-libmp3lame \
--enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libopus --enable-libxvid --enable-lv2 \
--samples=fate-suite/
Make & Install
make
sudo make install
Install speech denoiser dependencies + the project itself
brew update
brew cask uninstall oclint
brew install lv2 meson ninja pkg-config autoconf m4 libtool automake
#Download and install speech denoiser
git clone https://github.com/lucianodato/speech-denoiser.git
cd speech-denoiser
chmod +x install.sh && ./install.sh
Check to see if the install exists
lv2ls #You got this command from installing lilv
Output: https://github.com/lucianodato/speech-denoiser
(yep a URL)
Use your command!
#audio to denoised audio
ffmpeg -i out_cropped.mov -af 'lv2=plugin=https\\://github.com/lucianodato/speech-denoiser' -vcodec copy out_cropped_denoised.wav
#for if you want to put it with a video
&&
ffmpeg -i out_cropped.mov -i out_cropped_denoised.wav -c:v copy -map 0:v:0 -map 1:a:0 out_cropped_denoised.mov

Unsupported presentation (0x20400003)

So, I'm trying to get a really simple livestreaming system running through Azure Media Services. I've got ffmpeg installed on a Raspberry Pi w/ a USB camera, and I'm just trying to get the camera feed received through Azure so I can start tinkering with the Media Player. The ffmpeg command appears to run without a hitch, but whenever I attempt to preview the stream, I get the following error:
"The video playback was aborted due to a corruption problem or because the video used features your browser did not support. 0x20400003"
The 0x0400003 part of the code is listed in the docs (http://amp.azure.net/libs/amp/latest/docs/index.html#error-codes)
as meaning the presentation of the video is not supported, but I can't find what that actually means in terms of what's wrong.
I'm using the following ffmpeg command for encoding,
ffmpeg -v verbose -framerate 30 -r 30 -i /dev/video0 -vcodec libx264 -preset ultrafast -acodec libfdk-aac -ab 48k -b:v 500k -maxrate 500k -bufsize 500k -r 30 -g 60 -keyint_min 60 -sc_threshold 0 -f flv rtmp://{Azure channel address}/channel5
which results in the following output:
ffmpeg version N-83743-gd757ddb Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.9.2 (Raspbian 4.9.2-10)
configuration: --enable-gpl --enable-libx264 --enable-nonfree --enable-libfdk-aac
libavutil 55. 47.101 / 55. 47.101
libavcodec 57. 82.100 / 57. 82.100
libavformat 57. 66.103 / 57. 66.103
libavdevice 57. 3.100 / 57. 3.100
libavfilter 6. 74.100 / 6. 74.100
libswscale 4. 3.101 / 4. 3.101
libswresample 2. 4.100 / 2. 4.100
libpostproc 54. 2.100 / 54. 2.100
[video4linux2,v4l2 @ 0x1f7a430] fd:3 capabilities:84200001
[video4linux2,v4l2 @ 0x1f7a430] Querying the device for the current frame size
[video4linux2,v4l2 @ 0x1f7a430] Setting frame size to 640x480
[video4linux2,v4l2 @ 0x1f7a430] The driver changed the time per frame from 1/30 to 1/15
Input #0, video4linux2,v4l2, from '/dev/video0':
Duration: N/A, start: 169752.581724, bitrate: 73728 kb/s
Stream #0:0: Video: rawvideo, 1 reference frame (YUY2 / 0x32595559), yuyv422, 640x480, 73728 kb/s, 15 fps, 15 tbr, 1000k tbn, 1000k tbc
Stream mapping:
Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[graph 0 input from stream 0:0 @ 0x1f89eb0] w:640 h:480 pixfmt:yuyv422 tb:1/30 fr:30/1 sar:0/1 sws_param:flags=2
[auto_scaler_0 @ 0x1f8a9c0] w:iw h:ih flags:'bicubic' interl:0
[format @ 0x1f8a040] auto-inserting filter 'auto_scaler_0' between the filter 'Parsed_null_0' and the filter 'format'
[graph 0 input from stream 0:0 @ 0x1f89eb0] TB:0.033333 FRAME_RATE:30.000000 SAMPLE_RATE:nan
[auto_scaler_0 @ 0x1f8a9c0] w:640 h:480 fmt:yuyv422 sar:0/1 -> w:640 h:480 fmt:yuv422p sar:0/1 flags:0x4
No pixel format specified, yuv422p for H.264 encoding chosen.
Use -pix_fmt yuv420p for compatibility with outdated media players.
[libx264 @ 0x1f7d650] using cpu capabilities: ARMv6 NEON
[libx264 @ 0x1f7d650] profile High 4:2:2, level 3.0, 4:2:2 8-bit
[libx264 @ 0x1f7d650] 264 - core 148 r2762 90a61ec - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=60 keyint_min=31 scenecut=0 intra_refresh=0 rc_lookahead=0 rc=cbr mbtree=0 bitrate=500 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=500 vbv_bufsize=500 nal_hrd=none filler=0 ip_ratio=1.40 aq=0
Output #0, flv, to 'rtmp://{Azure Channel Address}/channel5':
Metadata:
encoder : Lavf57.66.103
Stream #0:0: Video: h264 (libx264), 1 reference frame ([7][0][0][0] / 0x0007), yuv422p, 640x480, q=-1--1, 500 kb/s, 30 fps, 1k tbn, 30 tbc
Metadata:
encoder : Lavc57.82.100 libx264
Side data:
cpb: bitrate max/min/avg: 500000/0/500000 buffer size: 500000 vbv_delay: -1
[flv @ 0x1f7c1c0] Failed to update header with correct duration.
[flv @ 0x1f7c1c0] Failed to update header with correct filesize.
frame= 2155 fps=7.5 q=-1.0 Lsize= 4392kB time=00:01:11.80 bitrate= 501.1kbits/s speed=0.25x
video:4350kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.974120%
I'm not sure if the header errors are significant, as the program continues to run as expected, but please let me know if there's anything here that's blatantly an issue, or whether there's a meaningful explanation for what the presentation issues are.
OK, here is a quick helper for Raspberry Pi live streaming to Azure Media Services that worked out well for me.
There are a few tricks you can do here to make it work a lot better. The problem is mostly with the FFMPEG command, but you can optimize encoding as well by using the hardware acceleration support on the Pi (if you have a Pi 2 or higher).
I initially followed this guide to build FFMPEG.
https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
I had to compile the x264 codec first.
When compiling FFMPEG, I had to use "make -j4" to compile on all 4 cores of the latest Raspberry Pi B+ or 3. That made it compile much faster.
Compilation took a long time on the PI anyways, so I let it run overnight.
Once I had a compiled FFMPEG, I used the new H264 Open Max (OMX) acceleration feature.
add "-enable-omx -enable-omx-rpi" to ./configure options
use FFMPEG Encoder option "-c:v h264_omx"
see for details - https://ubuntu-mate.community/t/hardware-h264-video-encoding-with-libav-openmax-il/4997/11
Once I had that working, I did a quick test to make sure I could successfully capture an MP4 file:
ffmpeg -framerate 30 -r 30 -s 640x360 -i /dev/video0 -vcodec h264_omx -preset ultrafast -acodec libfaac -ab 48k -b:v 2000k -bufsize 500k -g 60 -keyint_min 60 -sc_threshold 0 out.mp4
Finally I went with the Smooth Streaming protocol support (which is a lot more reliable than RTMP).
ffmpeg -i /dev/video1 -pix_fmt yuv420p -f ismv -movflags isml+frag_keyframe -video_track_timescale 10000000 -frag_duration 2000000 -framerate 30 -r 30 -c:v h264_omx -preset ultrafast -map 0:v:0 -b:v:0 2000k -minrate:v:0 2000k -maxrate:v:0 2000k -bufsize 2500k -s:v:0 640x360 -map 0:v:0 -b:v:1 500k -minrate:v:1 500k -maxrate:v:1 500k -s:v:1 480x360 -g 60 -keyint_min 60 -sc_threshold 0 -c:a libfaac -ab 48k -map 0:a? -threads 0 "http://***your-account-***channel.mediaservices.windows.net/ingest.isml/Streams(video)"
DEEP EXPLANATION OF WHAT IS GOING ON IN A MULTI-BITRATE SMOOTH STREAMING FFMPEG COMMAND LINE LIKE THE ONE ABOVE:
ffmpeg
-re **READ INPUT AT NATIVE FRAMERATE
-stream_loop -1 **LOOP INFINITE
-i C:\Video\tears_of_steel_1080p.mov **INPUT FILE IS THIS MOV FILE
-movflags isml+frag_keyframe **OUTPUT IS SMOOTH STREAMING THIS SETS THE FLAGS
-f ismv **OUTPUT ISMV SMOOTH
-threads 0 ** SETS THE THREAD COUNT TO USE FOR ALL STREAMS. YOU CAN USE A STREAM SPECIFIC COUNT AS WELL
-c:a aac ** SET TO AAC CODEC
-ac 2 ** SET THE OUTPUT TO STEREO
-b:a 64k ** SET THE BITRATE FOR THE AUDIO
-c:v libx264 ** SET THE VIDEO CODEC
-preset fast ** USE THE FAST PRESET FOR X264
-profile:v main **USE THE MAIN PROFILE
-g 48 ** GOP SIZE IS 48 frames
-keyint_min 48 ** KEY INTERVAL IS SET TO 48 FRAMES
-sc_threshold 0 ** DISABLE SCENE-CUT DETECTION SO KEYFRAMES STAY ON THE FIXED GOP GRID
-map 0:v ** MAP THE FIRST VIDEO TRACK OF THE FIRST INPUT FILE
-b:v:0 5000k **SET THE OUTPUT TRACK 0 BITRATE
-minrate:v:0 5000k ** SET OUTPUT TRACK 0 MIN RATE TO SIMULATE CBR
-maxrate:v:0 5000k ** SET OUTPUT TRACK 0 MAX RATE TO SIMULATE CBR
-s:v:0 1920x1080 **SCALE THE OUTPUT OF TRACK 0 to 1920x1080.
-map 0:v ** MAP THE FIRST VIDEO TRACK OF THE FIRST INPUT FILE
-b:v:1 3000k ** SET THE OUTPUT TRACK 1 BITRATE TO 3Mbps
-minrate:v:1 3000k -maxrate:v:1 3000k ** SET THE MIN AND MAX RATE TO SIMULATE CBR OUTPUT
-s:v:1 1280x720 ** SCALE THE OUTPUT OF TRACK 1 to 1280x720
-map 0:v -b:v:2 1800k ** REPEAT THE ABOVE STEPS FOR THE REST OF THE OUTPUT TRACKS
-minrate:v:2 1800k -maxrate:v:2 1800k -s:v:2 854x480
-map 0:v -b:v:3 1000k -minrate:v:3 1000k -maxrate:v:3 1000k -s:v:3 640x480
-map 0:v -b:v:4 600k -minrate:v:4 600k -maxrate:v:4 600k -s:v:4 480x360
-map 0:a:0 ** FINALLY TAKE THE SOURCE AUDIO FROM THE FIRST SOURCE AUDIO TRACK.
 http://<yourchannel>.channel.mediaservices.windows.net/ingest.isml/Streams(stream0)
Hope that helps get you started in the right direction. Let me know if you have any more questions.

FFmpeg not copying all audio streams

I'm having trouble getting ffmpeg to copy all audio streams from a .mp4 file. After hours of searching online, it appears this should copy all streams (as shown in example 4 here):
ffmpeg -i in.mp4 -map 0 -c copy out.mp4
in.mp4 contains 3 streams:
Video
Audio track 1
Audio track 2
out.mp4 (which should be identical to in.mp4) contains only 2 streams:
Video
Audio track 1
FFmpeg does appear to correctly identify all 3 streams, but doesn't copy all of them over. Output from FFmpeg:
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (copy)
Stream #0:2 -> #0:2 (copy)
Edit: Output from ffmpeg -v 9 -loglevel 99 -i in.mp4:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'in.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.36.100
Duration: 00:00:06.03, start: 0.000000, bitrate: 5582 kb/s
Stream #0:0(und), 1, 1/15360: Video: h264 (Main), 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt470bg/unknown/unknown, left), 1920x1080 (0x0) [SAR 1:1 DAR 16:9], 0/1, 5317 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und), 1, 1/48000: Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
handler_name : SoundHandler
Stream #0:2(und), 1, 1/48000: Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s
Metadata:
handler_name : SoundHandler
Successfully opened the file.
At least one output file must be specified
[AVIOContext @ 0000000001c2b9e0] Statistics: 153350 bytes read, 2 seeks
Edit 2 (solved): I managed to find the correct syntax from this ticket. For any others that are interested, the correct syntax is:
ffmpeg -i in.mp4 -vcodec copy -c:a copy -map 0 out.mp4
This will copy all streams.
FFmpeg has an option to map all streams to the output: use -map 0 to map every stream from the input to the output.
The full command might look like:
ffmpeg -i in.mp4 -c copy -map 0 out.mp4
For more info see the documentation on stream selection and the -map option.
Apparently this is a popular question, so I'm posting my solution as an answer (was previously a comment reply) so that others can see.
I managed to find the correct syntax from this ticket. The correct syntax is:
ffmpeg -i in.mp4 -vcodec copy -c:a copy -map 0:0 -map 0:1 -map 0:2 out.mp4
This will copy all 3 streams.
OK, I read pretty deep into the ffmpeg man page and found this which should be useful:
Note that currently each output stream can only contain channels from
a single input stream; you can't for example use "-map_channel" to
pick multiple input audio channels contained in different streams
(from the same or different files) and merge them into a single output
stream. It is therefore not currently possible, for example, to turn
two separate mono streams into a single stereo stream. However
splitting a stereo stream into two single channel mono streams is
possible.
If you need this feature, a possible workaround is to use the amerge
filter. For example, if you need to merge a media (here input.mkv)
with 2 mono audio streams into one single stereo channel audio stream
(and keep the video stream), you can use the following command:
ffmpeg -i input.mkv -filter_complex "[0:1] [0:2] amerge" -c:a pcm_s16le -c:v copy output.mkv
You may want to read through and experiment with the man page instructions on man ffmpeg-filters to understand just what level of complexity you're getting into for naming channels and expected output.
[Edit: As Mulvya noted, this answers a question, but it was not quite the original poster's question.]
First I tried this broader answer here: https://stackoverflow.com/a/54616353/1422630
But I had trouble with an unsupported subtitle track, so I ended up having to use this command:
avconv -i INFILE -c copy -map 0:a -map 0:v OUTFILE
I understand that, since I asked it to copy, it copied only what I mapped (and that mapped all the audio, of course), as I don't care about the subtitles being embedded at all. If you want to map the subtitles too, just add -map 0:s.
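For completeness, a sketch of that same command with subtitles mapped as well (keeping the placeholder INFILE/OUTFILE names; untested here) would be:
avconv -i INFILE -c copy -map 0:a -map 0:v -map 0:s OUTFILE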
It seems that specific ffmpeg versions ignore the -c copy option and skip the audio stream copy, resulting in a final file with no audio, i.e. they do not copy the audio tracks and produce video with no sound.
The ffmpeg affected is for example used on Synology Disk Station devices:
ffmpeg version 2.7.7 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.9.3 (crosstool-NG 1.20.0) 20150311 (prerelease)
To resolve that, without analyzing the file structure and manually mapping all audio streams with -map 0:1, -map 0:2, etc., I found a very simple command that handles it automatically:
ffmpeg -i INFILE -map 0 -c copy -c:a copy OUTFILE
This is different from -c:v -c:a, as it preserves chapters and subtitles together with the video and all audio tracks in different languages, such as English, Spanish, French, Russian or Chinese.
Also, in case you have a more badly broken file that a simple copy does not fix, please try this command, which can potentially fix more of the errors that crash video players or cause stuck video or audio:
ffmpeg -err_detect ignore_err -i INFILE -map 0 -c copy -c:a copy OUTFILE

Concat multiple (self-generated) videos using ffmpeg on raspbian linux

I am a very talented sleep talker, so I decided to write a solution that records the things I say at night, to make funny videos with subtitles from them. The project is nearly done, but I have a big problem with concatenating the videos I generated before.
The video parts are generated from single png frames using this command:
ffmpeg -y -framerate 15 -i "${images_file_path}" -c:v libx264 -r 30 -pix_fmt yuv420p "${video_file_path}"
Then the sound is added using this command (got this from #9049970 and #11779490):
ffmpeg -y -i "${video_file_path}" -i "${mp3_file_path}" -map 0:v -map 1:a -vcodec copy -acodec copy -shortest "${final_video_file_path}"
All this is causing no problems so far, but I think it may be relevant to know how the videos are generated. I can watch all this and get valid video and sound - the full source code of this first part can be found here.
Now I added a feature that is able to generate "full videos" containing a title and a various number of previously generated "video parts" using this command:
ffmpeg -f concat -i "${video_list_path}" -filter_complex "${filter_string} concat=n=${input_file_counter}:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" "${full_video_path}"
But something is wrong with it and I get this error:
Invalid file index 1 in filtergraph description [0:v:0] [1:v:0] [2:v:0] [2:a:0] [3:v:0] [4:v:0] [4:a:0] [5:v:0] [6:v:0] [6:a:0] [7:v:0] concat=n=8:v=1:a=1 [v] [a].
The full output is:
ffmpeg version N-77213-g7c1c453 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.9.2 (Raspbian 4.9.2-10)
configuration: --enable-shared --enable-gpl --prefix=/usr --enable-nonfree --enable-libmp3lame --enable-libfaac --enable-libx264 --enable-version3 --disable-mmx
libavutil 55. 10.100 / 55. 10.100
libavcodec 57. 17.100 / 57. 17.100
libavformat 57. 20.100 / 57. 20.100
libavdevice 57. 0.100 / 57. 0.100
libavfilter 6. 20.100 / 6. 20.100
libswscale 4. 0.100 / 4. 0.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0xc2e900] Auto-inserting h264_mp4toannexb bitstream filter
Input #0, concat, from '/usr/sleeptalk/records-rendered/3enguzpuu2gw0ogk8wkko/videos.txt':
Duration: N/A, start: 0.000000, bitrate: 61 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080, 58 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 2 kb/s
Metadata:
handler_name : SoundHandler
Invalid file index 1 in filtergraph description [0:v:0] [1:v:0] [2:v:0] [2:a:0] [3:v:0] [4:v:0] [4:a:0] [5:v:0] [6:v:0] [6:a:0] [7:v:0] concat=n=8:v=1:a=1 [v] [a].
I also wrote a test case so you can reproduce this on your local machine. Download the files from my dropbox. Also, the full script that renders the "final move" can be found here.
Would be great to get an idea; I have struggled to fix this for the last two days.
You're using both the concat demuxer as well as the concat filter. Skip the latter, because
a) it's unnecessary, and
b) I don't believe the demuxer presents all input files as separate inputs, so the indices beyond 0 don't make sense. Also, the concat filter needs an equal number of streams per input file, and their input assignment has to be pair-wise, i.e. [0:v:0] [0:a:0] [1:v:0] [1:a:0] [2:v:0] [2:a:0]....
Instead, use
ffmpeg -f concat -i "${video_list_textfile}" -c copy "${full_video_path}"
where ${video_list_textfile} is a text file of the form
file 'file1.mp4'
file 'file2.mp4'
file 'file3.mp4'
...

FFMpeg Concatenation Filters: Stream specifier ':0' in filtergraph matches no streams

I am developing an application that relies heavily on FFMpeg to perform various transformations on audio files. I am currently testing my FFMpeg configuration on the command line.
I am trying to concatenate multiple audio files which are in different formats (Primarily MP3, MP2 & WAV). I have been using the official TRAC documentation (https://trac.ffmpeg.org/wiki/How%20to%20concatenate%20(join%2C%20merge)%20media%20files#differentcodec) to help me with this and have created the following command:
ffmpeg -i OHIn.wav -i OHOut.wav -filter_complex '[0:0] [1:0] concat=n=2:a=1 [a]' -map '[a]' output.wav
However, when I run this on Mac OS X using version 2.0.1 of FFMpeg, I get the following error message:
Stream specifier ':0' in filtergraph description [0:0] [1:0] concat=n=2:a=1 [a] matches no streams.
Here is my full output from the terminal:
~/ffmpeg -i OHIn.wav -i OHOut.wav -filter_complex '[0:0] [1:0] concat=n=2:a=1 [a]' -map '[a]' output.wav
ffmpeg version 2.0.1 Copyright (c) 2000-2013 the FFmpeg developers
built on Aug 15 2013 10:56:46 with llvm-gcc 4.2.1 (LLVM build 2336.11.00)
configuration: --prefix=/Volumes/Ramdisk/sw --enable-gpl --enable-pthreads --enable-version3 --enable-libspeex --enable-libvpx --disable-decoder=libvpx --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-avfilter --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-filters --enable-libgsm --arch=x86_64 --enable-runtime-cpudetect
libavutil 52. 38.100 / 52. 38.100
libavcodec 55. 18.102 / 55. 18.102
libavformat 55. 12.100 / 55. 12.100
libavdevice 55. 3.100 / 55. 3.100
libavfilter 3. 79.101 / 3. 79.101
libswscale 2. 3.100 / 2. 3.100
libswresample 0. 17.102 / 0. 17.102
libpostproc 52. 3.100 / 52. 3.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, wav, from 'OHIn.wav':
Duration: 00:00:06.71, bitrate: 1411 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, wav, from 'OHOut.wav':
Duration: 00:00:07.19, bitrate: 1411 kb/s
Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
Stream specifier ':0' in filtergraph description [0:0] [1:0] concat=n=2:a=1 [a] matches no streams.
I do not understand why this does not work. FFMpeg shows that the streams 0:0 and 1:0 exist in the source files. The only other similar problems online have surrounded the use of the single quote in Windows; however, my testing confirms that does not apply to my Mac command line.
Any help would be much appreciated.
You need to tell the concat filter the number of output video streams. The default is v=1 for video and a=0 for audio, but you have no video streams. It's best to not rely on the defaults. Manually list the number of input segments (n), output video streams (v), and output audio streams (a).
ffmpeg -i input0.wav -i input1.wav -filter_complex "[0:a][1:a]concat=n=2:v=0:a=1[a]" -map "[a]" output.wav
Notice that I added v=0.
See the concat filter documentation for more info.
In addition to upvoting Lord Neckbeard's response (which, btw, solved my problem): I wanted to provide a working example of a Bash shell script, showing how I concatenate three mp3 files (an intro, middle and outro, each having the same bitrate of 160 kbps and sample rate of 44.1 kHz) into one result mp3. The reason why my filter graph reads:
[0:a] [1:a] [2:a]
instead of something like:
[0:0] [1:0] [2:0]
is because some mp3s had artwork, which ffmpeg sees as two streams for each input mp3 file: one audio (for the music itself) and one video (for the image artwork file).
The :a portion lets ffmpeg know that you want it to use only the audio stream(s) it reads for that input file and to pass those along to the concat filter, so any video streams get ignored. The benefit of doing this is that you don't need to know the position of the video stream (so that you don't accidentally pass it), which you can check by running a command like:
ffprobe control-intro-recording.mp3.
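A more targeted probe (just a sketch, using the same example file) that prints only each stream's index and type could be:
ffprobe -loglevel error -show_entries stream=index,codec_type -of csv=p=0 control-intro-recording.mp3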
Anyways, I digress, here's the shell script:
#!/bin/sh
ffmpeg -i ./source-files/control-intro-recording.mp3 \
-i ./source-files/control-middle-1-hour-recording-with-artwork-160-kbps.mp3 \
-i ./source-files/control-outro-recording.mp3 \
-filter_complex '[0:a] [1:a] [2:a] concat=n=3:v=0:a=1 [a]' \
-map '[a]' ./output-files/control-output-with-artwork-160-kbps-improved.mp3
I ran into this Stream specifier ':0' in filtergraph description [0:0] [1:0]... error trying to combine two video files. @LordNeckbeard's answer helped me diagnose the issue. I mention it as a separate answer in case a future querent like myself encounters this situation with video files.
It turned out that one of my videos didn't have an audio track. Adding an audio track with
ffmpeg -f lavfi -i aevalsrc=0 -i title-slide.mp4 -shortest -c:v copy \
-c:a mp3 -strict experimental title.mp4
got me going.
