ffmpeg error when cutting video (aac bitstream error) - audio

I'm trying to use ffmpeg to cut a 5 minute chunk out of a video. For some reason, on this particular video, I get an "aac bitstream error". The resulting video is 5 minutes long but has no audio or video.
ffmpeg -i testvideo.mp4 -ss 00:05:00 -t 00:10:00 -c:v copy -c:a copy testvideo_5min_test.mp4
ffmpeg version N-55540-g93f4277 Copyright (c) 2000-2013 the FFmpeg developers
built on Aug 14 2013 12:15:34 with gcc 4.3.2 (Debian 4.3.2-1.1)
configuration: --enable-libx264 --enable-gpl --enable-shared --enable-libfaac --enable-nonfree
libavutil 52. 42.100 / 52. 42.100
libavcodec 55. 28.100 / 55. 28.100
libavformat 55. 13.102 / 55. 13.102
libavdevice 55. 3.100 / 55. 3.100
libavfilter 3. 82.100 / 3. 82.100
libswscale 2. 4.100 / 2. 4.100
libswresample 0. 17.103 / 0. 17.103
libpostproc 52. 3.100 / 52. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'testvideo.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: mp42mp41
creation_time : 2013-05-10 17:42:36
Duration: 00:35:21.47, start: 0.000000, bitrate: 8684 kb/s
Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 8490 kb/s, 29.97 fps, 29.97 tbr, 29970 tbn, 59.94 tbc
Metadata:
creation_time : 2013-05-10 17:42:36
handler_name : Mainconcept MP4 Video Media Handler
Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 189 kb/s
Metadata:
creation_time : 2013-05-10 17:42:36
handler_name : Mainconcept MP4 Sound Media Handler
File 'testvideo_5min_test.mp4' already exists. Overwrite ? [y/N] y
Output #0, mp4, to 'testvideo_5min_test.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: mp42mp41
encoder : Lavf55.13.102
Stream #0:0(eng): Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 8490 kb/s, 29.97 fps, 29970 tbn, 29970 tbc
Metadata:
creation_time : 2013-05-10 17:42:36
handler_name : Mainconcept MP4 Video Media Handler
Stream #0:1(eng): Audio: aac ([64][0][0][0] / 0x0040), 48000 Hz, stereo, 189 kb/s
Metadata:
creation_time : 2013-05-10 17:42:36
handler_name : Mainconcept MP4 Sound Media Handler
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[mp4 @ 0x8088740] aac bitstream error   5886kB time=00:01:13.63 bitrate=8442.8kbits/s
[mp4 @ 0x8088740] aac bitstream error   8357kB time=00:04:15.24 bitrate=8612.9kbits/s
[mp4 @ 0x8088740] aac bitstream error   6128kB time=00:05:00.25 bitrate=8625.0kbits/s
[mp4 @ 0x8088740] aac bitstream error   6415kB time=00:07:12.56 bitrate=8643.7kbits/s
frame=17952 fps=2429 q=-1.0 Lsize= 635531kB time=00:10:00.01 bitrate=8677.0kbits/s
video:621056kB audio:13870kB subtitle:0 global headers:0kB muxing overhead 0.095223%
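Not part of the original post, but two things stand out: -t sets the output duration, so -t 00:10:00 copies ten minutes rather than five, and the aac bitstream errors are raised while the AAC stream is being copied, which re-encoding just the audio usually sidesteps. A sketch of a variant command under those assumptions (libfaac is available per this build's configuration; the audio bitrate is illustrative):
ffmpeg -ss 00:05:00 -i testvideo.mp4 -t 00:05:00 -c:v copy -c:a libfaac -b:a 192k testvideo_5min_test.mp4
Putting -ss before -i performs the seek on the input, which is much faster for long files.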

Related

After converting a video to HLS with ffmpeg, the new video has no picture and only audio

I am having problems with HLS stream creation; sometimes the created video has only audio and displays a black screen.
Here is my command:
/opt/nodejs/ffmpeg -i "https://******-v1-post-content.s3.us-east-2.amazonaws.com/104/posts/win/video/item-1615842876280.mov" -codec: copy -start_number 0 -hls_time 10 -hls_list_size 0 -f hls /tmp/item-1615842876280.m3u8
The output of the command:
stderr: ffmpeg version 4.2.3-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 8 (Debian 8.3.0-6)
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'https://*****-v1-post-content.s3.us-east-2.amazonaws.com/104/posts/win/video/item-1615842876280.mov':
Metadata:
major_brand : qt
minor_version : 0
compatible_brands: qt
creation_time : 2021-03-13T08:49:02.000000Z
Duration: 00:00:03.50, start: 0.000000, bitrate: 7984 kb/s
Stream #0:0(und): Video: hevc (Main) (hvc1 / 0x31637668), yuv420p(tv, bt709), 1920x1080, 7881 kb/s, 29.97 fps, 29.97 tbr, 600 tbn, 600 tbc (default)
Metadata:
rotate : 90
creation_time : 2021-03-13T08:49:02.000000Z
handler_name : Core Media Video
encoder : HEVC
Side data:
displaymatrix: rotation of -90.00 degrees
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 83 kb/s (default)
Metadata:
creation_time : 2021-03-13T08:49:02.000000Z
handler_name : Core Media Audio
[hls @ 0x6684ec0] Opening '/tmp/item-16158428762800.ts' for writing
Output #0, hls, to '/tmp/item-1615842876280.m3u8':
Metadata:
major_brand : qt
minor_version : 0
compatible_brands: qt
encoder : Lavf58.29.100
Stream #0:0(und): Video: hevc (Main) (hvc1 / 0x31637668), yuv420p(tv, bt709), 1920x1080, q=2-31, 7881 kb/s, 29.97 fps, 29.97 tbr, 90k tbn, 600 tbc (default)
Metadata:
rotate : 90
creation_time : 2021-03-13T08:49:02.000000Z
handler_name : Core Media Video
encoder : HEVC
Side data:
displaymatrix: rotation of -90.00 degrees
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 83 kb/s (default)
Metadata:
creation_time : 2021-03-13T08:49:02.000000Z
handler_name : Core Media Audio
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (copy)
Thank you!
Sevada
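Not from the original thread, but a plausible cause is the codec: the source video is HEVC, and with -codec: copy that HEVC stream is copied straight into the HLS segments, which many players cannot decode, so they show a black screen while playing the audio. A sketch that re-encodes the video to H.264 while keeping the same HLS options (everything beyond the original command is an assumption):
/opt/nodejs/ffmpeg -i "https://******-v1-post-content.s3.us-east-2.amazonaws.com/104/posts/win/video/item-1615842876280.mov" -c:v libx264 -preset veryfast -crf 23 -c:a copy -start_number 0 -hls_time 10 -hls_list_size 0 -f hls /tmp/item-1615842876280.m3u8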

FFMPEG Merge Multiple Videos: No Audio Source for one of them

I've written a node.js script to merge multiple video files into a single file. I've encountered a scenario in which no audio is provided for one of the input video files.
I first executed ffprobe so that I can access what I'll refer to as the "video file spec". In this scenario, I created a basic module to help me better understand my problem:
Evaluation from all processes: [
{
fileName: 'input-0.mp4',
isVideoAvailable: true,
isAudioAvailable: false,
width: 1920,
height: 1080,
sampleRateAspectRatio: '1/1',
audioVolume: 1,
duration: '13.140000'
},
{
fileName: 'input-1.mp4',
isVideoAvailable: true,
isAudioAvailable: true,
width: 1920,
height: 1080,
sampleRateAspectRatio: '1/1',
audioVolume: 1,
duration: '17.160000'
},
{
fileName: 'input-2.mp4',
isVideoAvailable: true,
isAudioAvailable: true,
width: 1920,
height: 1080,
sampleRateAspectRatio: '1/1',
audioVolume: 1,
duration: '20.280000'
},
{
fileName: 'input-3.mp4',
isVideoAvailable: true,
isAudioAvailable: true,
width: 1920,
height: 1080,
sampleRateAspectRatio: '1/1',
audioVolume: 1,
duration: '19.020000'
},
{
fileName: 'input-4.mp4',
isVideoAvailable: true,
isAudioAvailable: true,
width: 1920,
height: 1080,
sampleRateAspectRatio: '1/1',
audioVolume: 1,
duration: '9.480000'
}
]
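Not part of the original post, but per-file information like this can also be pulled directly from ffprobe's structured output, e.g.:
ffprobe -v error -show_entries stream=index,codec_type,width,height,sample_aspect_ratio,duration -of json input-0.mp4
An input with no stream of codec_type "audio" is how a missing audio track shows up.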
This next block of code contains the parameters that I've hard-coded in this case. The screen resolution and aspect ratio are set manually, since I discovered differing settings in each video I've been processing. These parameters allow FFmpeg to execute successfully under normal circumstances:
let ffmpegParameters = [
'-i',
'input-0.mp4',
'-i',
'input-1.mp4',
'-i',
'input-2.mp4',
'-i',
'input-3.mp4',
'-i',
'input-4.mp4',
'-f',
'lavfi',
'-t',
'0.1',
'-i',
'anullsrc',
'-filter_complex',
'[0:v]scale=1920:1080,setsar=1/1[v0];[0:a]volume=1.0[a0];[1:v]scale=1920:1080,setsar=1/1[v1];[1:a]volume=1.0[a1];[2:v]scale=1920:1080,setsar=1/1[v2];[2:a]volume=1.0[a2];[3:v]scale=1920:1080,setsar=1/1[v3];[3:a]volume=1.0[a3];[4:v]scale=1920:1080,setsar=1/1[v4];[4:a]volume=1.0[a4];[v0][a0][v1][a1][v2][a2][v3][a3][v4][a4]concat=n=5:v=1:a=1[v][a]',
'-map',
'[v]',
'-map',
'[a]',
'-c:v',
'libx264',
'-vsync',
'2',
'output.mp4'
]
A comment on a different thread suggested supplying a dummy audio source in cases such as mine. I've added that, to no avail:
'-f',
'lavfi',
'-t',
'0.1',
'-i',
'anullsrc',
I do not know how to adjust the complex filter to account for the first video containing no audio. I've included the entire log below:
Logs:
ffmpeg version git-2020-02-03-1c15111
Copyright (c) 2000-2020 the FFmpeg developers
built with Apple clang version 11.0.0 (clang-1100.0.33.8)
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-appkit --enable-avfoundation --enable-coreimage --enable-audiotoolbox
libavutil 56. 38.100 / 56. 38.100
libavcodec 58. 67.100 / 58. 67.100
libavformat 58. 37.100 / 58. 37.100
libavdevice 58. 9.103 / 58. 9.103
libavfilter 7. 73.100 / 7. 73.100
libswscale 5. 6.100 / 5. 6.100
libswresample 3. 6.100 / 3. 6.100
libpostproc 55. 6.100 / 55. 6.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input-0.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.37.100
Duration: 00:00:14.80, start: 1.620000, bitrate: 1499 kb/s
Stream #0:0(und): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1498 kb/s, 25 fps, 25 tbr, 1200k tbn, 2400k tbc (default)
Metadata:
handler_name : VideoHandler
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'input-1.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.37.100
Duration: 00:00:18.48, start: 0.000000, bitrate: 977 kb/s
Stream #1:0(eng): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p, 1440x876 [SAR 1:1 DAR 120:73], 903 kb/s, 15.21 fps, 16.67 tbr, 16k tbn, 32k tbc (default)
Metadata:
handler_name : VideoHandler
Stream #1:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 69 kb/s (default)
Metadata:
handler_name : SoundHandler
Input #2, mov,mp4,m4a,3gp,3g2,mj2, from 'input-2.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.37.100
Duration: 00:00:22.68, start: 0.000000, bitrate: 1795 kb/s
Stream #2:0(eng): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1718 kb/s, 29.54 fps, 50 tbr, 16k tbn, 32k tbc (default)
Metadata:
handler_name : VideoHandler
Stream #2:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 69 kb/s (default)
Metadata:
handler_name : SoundHandler
Input #3, mov,mp4,m4a,3gp,3g2,mj2, from 'input-3.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.37.100
Duration: 00:00:54.60, start: 0.000000, bitrate: 404 kb/s
Stream #3:0(eng): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p, 1440x876 [SAR 1:1 DAR 120:73], 330 kb/s, 15.24 fps, 16.67 tbr, 16k tbn, 32k tbc (default)
Metadata:
handler_name : VideoHandler
Stream #3:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 69 kb/s (default)
Metadata:
handler_name : SoundHandler
Input #4, mov,mp4,m4a,3gp,3g2,mj2, from 'input-4.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.37.100
Duration: 00:00:09.36, start: 0.000000, bitrate: 1794 kb/s
Stream #4:0(eng): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1717 kb/s, 29.38 fps, 50 tbr, 16k tbn, 32k tbc (default)
Metadata:
handler_name : VideoHandler
Stream #4:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 69 kb/s (default)
Metadata:
handler_name : SoundHandler
Stream specifier ':a' in filtergraph description [0:v]scale=1920:1080,setsar=1/1[v0];[0:a]volume=1.0[a0];[1:v]scale=1920:1080,setsar=1/1[v1];[1:a]volume=1.0[a1];[2:v]scale=1920:1080,setsar=1/1[v2];[2:a]volume=1.0[a2];[3:v]scale=1920:1080,setsar=1/1[v3];[3:a]volume=1.0[a3];[4:v]scale=1920:1080,setsar=1/1[v4];[4:a]volume=1.0[a4];[v0][a0][v1][a1][v2][a2][v3][a3][v4][a4]concat=n=5:v=1:a=1[v][a] matches no streams.
When I removed the stream specifier [a0], I received a different error:
FFmpeg Video Merge - STDERR: [Parsed_setsar_3 # 0x7f87c7709100] Media type mismatch between the 'Parsed_setsar_3' filter output pad 0 (video) and the 'Parsed_concat_14' filter input pad 1 (audio)
[AVFilterGraph # 0x7f87c7430c00] Cannot create the link setsar:0 -> concat:1
My question is: how should the filter_complex value in my parameter list be adjusted to deal with the first video, which has no audio?
Stream specifier ':a' in filtergraph description … matches no streams
You are referencing a stream that does not exist. In this case it is due to [0:a]volume=1.0[a0]. You are attempting to select audio from input-0.mp4, but this input has no audio.
Media type mismatch
I don't know your exact command so I can't point out the actual cause, but your video and audio filter labels are likely mixed up somewhere.
Working Example
ffmpeg -i input-0.mp4 -i input-1.mp4 -i input-2.mp4 -i input-3.mp4 -i input-4.mp4 -f lavfi -t 0.1 -i anullsrc=channel_layout=mono:sample_rate=44100 -filter_complex "[0:v]scale=1920:1080,setsar=1/1[v0];[1:v]scale=1920:1080,setsar=1[v1];[2:v]scale=1920:1080,setsar=1[v2];[3:v]scale=1920:1080,setsar=1[v3];[4:v]scale=1920:1080,setsar=1[v4];[v0][5][v1][1:a][v2][2:a][v3][3:a][v4][4:a]concat=n=5:v=1:a=1[v][a]" -map "[v]" -map "[a]" -movflags +faststart output.mp4
Since volume=1 does absolutely nothing, you can eliminate that filter.
For the video input without audio, pair it with the anullsrc output in concat (as with [v0][5] in the example above).
The concat filter will automatically select a common sample rate and channel layout for the audio streams, but I still prefer to set them manually in anullsrc so I know for sure what I'm going to get.
Your inputs vary in DAR, so the 1440x876 videos will look squished in the output. You can avoid this by adding scale + crop or pad; since they all have the same SAR, refer to Resizing videos with ffmpeg to fit into static size.
Upscaling is usually not a great idea. Consider downscaling to 1280x720 instead, since half of the inputs are already that size.
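A sketch of the scale + pad approach mentioned above, applied to a single input and assuming the suggested 1280x720 target (not from the original answer; the same chain would replace the per-input scale=1920:1080,setsar=... steps inside the filter_complex):
ffmpeg -i input-1.mp4 -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1" -c:a copy input-1-padded.mp4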

ffmpeg can't concat movies from two different devices

I've got a bunch of movies from two different Panasonic devices. As long as I concat movies from only ONE device, the final movie is smooth.
However, when I add a clip from the second device, the final movie plays the clips from the first device fine, but as soon as it reaches the clip from the other device it plays only audio over a still image.
ffmpeg -f concat -i mylist.txt -c copy final_movie.MP4
Example ffprobe:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'S6810001.MP4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.27.100
Duration: 00:00:10.62, start: 0.021333, bitrate: 1131 kb/s
Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 998 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 132 kb/s (default)
Metadata:
handler_name : SoundHandle
Second device movie clip:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'a/T00004.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.27.100
Duration: 00:00:33.18, start: 0.000000, bitrate: 1190 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 929 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: ac3 (ac-3 / 0x332D6361), 48000 Hz, stereo, fltp, 256 kb/s (default)
Metadata:
handler_name : SoundHandler
Side data:
audio service type: main
Why is the video stuck on a still image? How should I prepare the clips so they can be joined correctly?
You're trying to concat videos that likely have different profiles. Also, the audio formats are different, but they should be the same if you want to stream copy.
Use ffprobe to view more detailed info about each input:
ffprobe -loglevel error -show_streams S6810001.MP4
ffprobe -loglevel error -show_streams a/T00004.mp4
In this example, assuming the video in S6810001.MP4 has the Main profile and a/T00004.mp4 has the Baseline profile, you can "conform" a/T00004.mp4 to be more like S6810001.MP4, or vice versa (note that audio can have a profile too, so make sure you're looking at the right section of the ffprobe output). This example command uses the same video profile and the same audio format:
ffmpeg -i a/T00004.mp4 -profile:v main -c:a aac a/T00004_encoded.mp4
Now use a/T00004_encoded.mp4 as your second input instead of a/T00004.mp4:
ffmpeg -f concat -i mylist.txt -c copy final_movie.MP4
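For completeness (not shown in the original answer), mylist.txt for the concat demuxer would then look something like this, with the re-encoded file substituted for the original:
file 'S6810001.MP4'
file 'a/T00004_encoded.mp4'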

Is there a method to identify the audio channel layout?

A video file can have many audio channels, each contained in its own stream, e.g.:
libavutil 52. 38.100 / 52. 38.100
libavcodec 55. 18.102 / 55. 18.102
libavformat 55. 12.100 / 55. 12.100
libavdevice 55. 3.100 / 55. 3.100
libavfilter 3. 79.101 / 3. 79.101
libswscale 2. 3.100 / 2. 3.100
libswresample 0. 17.102 / 0. 17.102
libpostproc 52. 3.100 / 52. 3.100
Guessed Channel Layout for Input Stream #0.1 : mono
Guessed Channel Layout for Input Stream #0.2 : mono
Guessed Channel Layout for Input Stream #0.3 : mono
Guessed Channel Layout for Input Stream #0.4 : mono
Guessed Channel Layout for Input Stream #0.5 : mono
Guessed Channel Layout for Input Stream #0.6 : mono
Guessed Channel Layout for Input Stream #0.7 : mono
Guessed Channel Layout for Input Stream #0.8 : mono
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/data/tmp/ff6.mov':
Metadata:
major_brand : qt
minor_version : 537199360
compatible_brands: qt
creation_time : 2015-01-16 17:54:47
Duration: 00:01:54.00, start: 0.000000, bitrate: 160836 kb/s
Stream #0:0(eng): Video: prores (apch / 0x68637061), yuv422p10le, 1920x1080, 154650 kb/s, SAR 1:1 DAR 16:9, 25 fps, 25 tbr, 25 tbn, 25 tbc
Metadata:
creation_time : 2015-01-16 17:54:47
handler_name : Apple Alias Data Handler
timecode : 00:59:55:00
Stream #0:1(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, mono, s16, 768 kb/s
Metadata:
creation_time : 2015-01-16 17:54:47
handler_name : Apple Alias Data Handler
Stream #0:2(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, mono, s16, 768 kb/s
Metadata:
creation_time : 2015-01-16 17:54:47
handler_name : Apple Alias Data Handler
Stream #0:3(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, mono, s16, 768 kb/s
Metadata:
creation_time : 2015-01-16 17:54:47
handler_name : Apple Alias Data Handler
Stream #0:4(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, mono, s16, 768 kb/s
Metadata:
creation_time : 2015-01-16 17:54:47
handler_name : Apple Alias Data Handler
Stream #0:5(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, mono, s16, 768 kb/s
Metadata:
creation_time : 2015-01-16 17:54:47
handler_name : Apple Alias Data Handler
Stream #0:6(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, mono, s16, 768 kb/s
Metadata:
creation_time : 2015-01-16 17:54:47
handler_name : Apple Alias Data Handler
Stream #0:7(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, mono, s16, 768 kb/s
Metadata:
creation_time : 2015-01-16 17:54:47
handler_name : Apple Alias Data Handler
Stream #0:8(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, mono, s16, 768 kb/s
Metadata:
creation_time : 2015-01-16 17:54:47
handler_name : Apple Alias Data Handler
Stream #0:9(eng): Data: none (tmcd / 0x64636D74)
Metadata:
creation_time : 2015-01-16 17:55:17
handler_name : Apple Alias Data Handler
timecode : 00:59:55:00
Is there a method to identify which stream represents the left channel, which represents the right channel, and so on?
Thanks for reviewing my question.
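Not from the original thread, but ffprobe can at least print what each audio stream declares about itself, e.g.:
ffprobe -v error -select_streams a -show_entries stream=index,channels,channel_layout -of compact /data/tmp/ff6.mov
For discrete mono tracks like these, the container usually does not record which speaker each track is meant for, so the assignment (L, R, C, LFE, ...) often has to come from the production side or a naming convention.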

how can I transform 6 audio channels into one 5.1 channel with ffmpeg? [closed]

I have a ProRes file which has 6 mono audio channels.
Here's the ffmpeg console output:
D:\test-remapping>ffmpeg -i MelleParadis_PART1_CONSTANTE.mov
ffmpeg version N-60106-ge6d1c66 Copyright (c) 2000-2014 the FFmpeg developers
built on Jan 22 2014 22:01:26 with gcc 4.8.2 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
libavutil 52. 63.100 / 52. 63.100
libavcodec 55. 49.100 / 55. 49.100
libavformat 55. 25.101 / 55. 25.101
libavdevice 55. 5.102 / 55. 5.102
libavfilter 4. 1.100 / 4. 1.100
libswscale 2. 5.101 / 2. 5.101
libswresample 0. 17.104 / 0. 17.104
libpostproc 52. 3.100 / 52. 3.100
Guessed Channel Layout for Input Stream #0.1 : mono
Guessed Channel Layout for Input Stream #0.2 : mono
Guessed Channel Layout for Input Stream #0.3 : mono
Guessed Channel Layout for Input Stream #0.4 : mono
Guessed Channel Layout for Input Stream #0.5 : mono
Guessed Channel Layout for Input Stream #0.6 : mono
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'MelleParadis_PART1_CONSTANTE.mov':
Metadata:
major_brand : qt
minor_version : 537199360
compatible_brands: qt
creation_time : 2013-11-27 18:58:26
Duration: 00:07:34.32, start: 0.000000, bitrate: 117742 kb/s
Stream #0:0(eng): Video: prores (apcn / 0x6E637061), yuv422p10le, 1920x1080, 113098 kb/s, SAR 1:1 DAR 16:9, 25 fps, 25 tbr, 25 tbn, 25 tbc (default)
Metadata:
creation_time : 2013-11-27 18:58:26
handler_name : Gestionnaire d'alias Apple
timecode : 01:00:00:00
Stream #0:1(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, mono, s16, 768 kb/s (default)
Metadata:
creation_time : 2013-11-27 18:58:26
handler_name : Gestionnaire d'alias Apple
Stream #0:2(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, mono, s16, 768 kb/s (default)
Metadata:
creation_time : 2013-11-27 18:58:26
handler_name : Gestionnaire d'alias Apple
Stream #0:3(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, mono, s16, 768 kb/s (default)
Metadata:
creation_time : 2013-11-27 18:58:26
handler_name : Gestionnaire d'alias Apple
Stream #0:4(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, mono, s16, 768 kb/s (default)
Metadata:
creation_time : 2013-11-27 18:58:26
handler_name : Gestionnaire d'alias Apple
Stream #0:5(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, mono, s16, 768 kb/s (default)
Metadata:
creation_time : 2013-11-27 18:58:26
handler_name : Gestionnaire d'alias Apple
Stream #0:6(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, mono, s16, 768 kb/s (default)
Metadata:
creation_time : 2013-11-27 18:58:26
handler_name : Gestionnaire d'alias Apple
Stream #0:7(eng): Data: none (tmcd / 0x64636D74) (default)
Metadata:
creation_time : 2013-11-27 19:03:46
handler_name : Gestionnaire d'alias Apple
timecode : 01:00:00:00
I would like to transform them into a single 5.1 audio stream.
I tried this command:
D:\test-remapping>ffmpeg -i "MelleParadis_PART1_CONSTANTE.mov" -c copy -c:a ac3 -map 0 mlle5.1.mov
The console replies "NOT ENOUGH SPACE" and stops...
Of course there's still space left on my hard drive...
How could I map all six mono streams into 5.1?
Thanks in advance.
Pauline
From https://trac.ffmpeg.org/wiki/AudioChannelManipulation:
ffmpeg -i front_left.wav -i front_right.wav -i front_center.wav -i lfe.wav -i back_left.wav -i back_right.wav \
-filter_complex "[0:a][1:a][2:a][3:a][4:a][5:a] amerge=inputs=6" output.wav
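The wiki example merges six separate WAV files. A sketch adapting it to this single-file case (assuming the six mono streams are meant to be FL, FR, FC, LFE, BL, BR in that order, and using the join filter so the target layout can be named explicitly) could look like:
ffmpeg -i MelleParadis_PART1_CONSTANTE.mov -filter_complex "[0:a:0][0:a:1][0:a:2][0:a:3][0:a:4][0:a:5]join=inputs=6:channel_layout=5.1[a]" -map 0:v -map "[a]" -c:v copy -c:a ac3 mlle5.1.mov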

Resources