I have e.g. 3 mono WAV files. I would like to join them into one WAV file which has 3 channels (not 2.1). The duration of this file should be inherited from the longest of the mono files. I tried many commands, but none of them gave me the expected result. Could you help?
apad + join
One method is to use apad on the two shorter inputs and then combine them with the join filter:
ffmpeg -i front_left.wav -i front_right.wav -i front_center.wav -filter_complex "[0]apad[FL];[1]apad[FR];[FL][FR][2]join=inputs=3:channel_layout=3.0:map=0.0-FL|1.0-FR|2.0-FC" output.wav
apad + amerge + channelmap
Similar to the above, but channelmap (or pan) has to be added because amerge has no mapping functionality and assumes a 2.1 layout for three channels instead of 3.0:
ffmpeg -i front_left.wav -i front_right.wav -i front_center.wav -filter_complex "[0]apad[FL];[1]apad[FR];[FL][FR][2]amerge=inputs=3,channelmap=map=FL-FL|FR-FR|LFE-FC" output.wav
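Either method can be sanity-checked end to end. A minimal sketch of the join approach, using generated test tones as stand-ins for the three mono files (filenames mirror the commands above; frequencies and durations are arbitrary):

```shell
# Three mono inputs of different lengths; the 2 s center file is the longest.
ffmpeg -y -v error -f lavfi -i "sine=frequency=300:duration=1" front_left.wav
ffmpeg -y -v error -f lavfi -i "sine=frequency=400:duration=1" front_right.wav
ffmpeg -y -v error -f lavfi -i "sine=frequency=500:duration=2" front_center.wav

# Pad the two shorter inputs and join into one 3.0 stream; the output
# should inherit the 2 s duration of the longest (unpadded) input.
ffmpeg -y -v error -i front_left.wav -i front_right.wav -i front_center.wav -filter_complex \
  "[0]apad[FL];[1]apad[FR];[FL][FR][2]join=inputs=3:channel_layout=3.0:map=0.0-FL|1.0-FR|2.0-FC" out3.wav

# Confirm channel count and layout.
ffprobe -v error -select_streams a:0 -show_entries stream=channels,channel_layout \
  -of default=noprint_wrappers=1 out3.wav
```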
You can use ffprobe to get the file durations.
ffmpeg -layouts will provide a list of accepted channel names and layouts.
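For example, ffprobe can print just the duration in seconds (the first command only generates a test input; the filename is a stand-in):

```shell
# Generate a 2 s mono test file, then query its duration.
ffmpeg -y -v error -f lavfi -i "sine=frequency=440:duration=2" front_left.wav
ffprobe -v error -show_entries format=duration \
  -of default=noprint_wrappers=1:nokey=1 front_left.wav
```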
I'm trying to encode 6 arbitrary mono audio streams into a single AAC 5.1 track in an mp4 container (here with test streams):
ffmpeg -f lavfi -i testsrc=duration=10:size=100x100:rate=30 -f lavfi -i aevalsrc="-2+random(0)" -filter_complex "[1:a][1:a][1:a][1:a][1:a][1:a]join=inputs=6:channel_layout=5.1:map=0.0-FL|1.0-FR|2.0-FC|3.0-LFE|4.0-BL|5.0-BR[a]" -map '0:v' -map "[a]" -c:a aac -channel_layout 5.1 -t 10 testlfe.mp4
5 of the channels replicate the input audio just fine (modulo encoding). However, the LFE channel is lowpassed. Extracting with:
ffmpeg -i testlfe.mp4 -filter_complex "channelsplit=channel_layout=5.1:channels=LFE[LFE]" -map '[LFE]' testlfe.wav
I get a lowpassed rumble instead of the original full-bandwidth white noise
(from ffmpeg -i testlfe.wav -lavfi showspectrumpic=s=640x320 testlfe.png)
Is there a way to prevent the lowpass from happening?
I couldn't find any references on whether that's inherent to AAC 5.1 encoding, something that ffmpeg does, or inherent to the decoding process. (I decoded the same test files using something based on Microsoft Media Foundation, and the LFE channel was still lowpassed.)
Turns out, the AAC codec inherently limits the LFE bandwidth, so there's no way around it.
(Thanks to kesh in the comments.) Wikipedia's Advanced Audio Coding article claims the upper limit is 120 Hz, which matches my spectrogram, but doesn't cite a source. The actual ISO/IEC 13818-7:2006(en) standard costs a bunch of money to read, as usual, but the free glossary has this entry:
low frequency enhancement ( LFE ) channel:
limited bandwidth channel for low frequency audio effects in a multichannel system
Encode with
ffmpeg -i 6channels.wav -filter "channelmap=0|1|2|3|4|5:6.0(front)" -c:a libfdk_aac -ac 6 -profile:a aac_he -vbr 1 -cutoff 18000 -movflags +faststart 6channels-vbr1-fdk.m4a
It can also be done with the native aac encoder.
I am trying to use the ffmpeg library to take two FLAC files and replace the audio in File A with the audio in File B at a given timestamp.
For example, if File B were to be played at 00:02 and was a second long, the output would play as: (00:00-0:01) File A audio -> (00:02-0:03) File B audio -> (00:04-...) File A audio
To do this, I have tried the following:
ffmpeg -y -i original.flac -i replacement.flac -acodec copy -ss 2 -to 3 -write_xing 0 result.flac
But this only produces the original audio between the specified timestamps.
Is there any way to achieve this within ffmpeg?
The typical method for this would be the concat demuxer, but there are issues with the duration header in the output when extracting FLAC segments, so you can use the concat filter instead:
ffmpeg -y -i original.flac -i replacement.flac \
-filter_complex "[0]atrim=0:2[Apre];[0]atrim=5,asetpts=PTS-STARTPTS[Apost];\
[Apre][1][Apost]concat=n=3:v=0:a=1" out.flac
Where 2 is the insertion point in seconds, and 5 is the insertion point + B's duration.
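The end point can also be derived from the replacement's duration instead of being typed by hand. A sketch under that assumption, with generated stand-ins for original.flac and replacement.flac (tones and durations are arbitrary):

```shell
# Stand-ins: a 5 s "original" and a 1 s "replacement".
ffmpeg -y -v error -f lavfi -i "sine=frequency=440:duration=5" original.flac
ffmpeg -y -v error -f lavfi -i "sine=frequency=880:duration=1" replacement.flac

ins=2   # insertion point in seconds
durB=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 replacement.flac)
end=$(awk -v a="$ins" -v b="$durB" 'BEGIN{printf "%.6f", a + b}')

# Splice: original up to $ins, then the replacement, then original from $end on.
ffmpeg -y -v error -i original.flac -i replacement.flac -filter_complex \
  "[0]atrim=0:${ins}[Apre];[0]atrim=${end},asetpts=PTS-STARTPTS[Apost];[Apre][1][Apost]concat=n=3:v=0:a=1" out.flac
```

With these stand-ins the output should keep the original's 5 s total duration (2 s + 1 s + 2 s).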
Does anyone know if it is possible to use FFMPEG to output 2 audio files each to a different output device (i.e. sound card) using one command?
If so, how?
If not possible with FFMPEG is there any other free tool that allows this?
Thanks!
Use the absolute path of the destination folder when you define the outputs.
Tested on Windows: ffmpeg -i "input.mp3" D:\output1.mp3 C:\output2.mp3
Source: https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs
For two inputs, to different outputs, it's
ffmpeg -i A.mp3 -i B.mp3 -map 0 Aout.mp3 -map 1 Bout.mp3
Background: I would like to use MLT melt to render a project, but I'd like that render to result in separate audio and video files. I intend to use melt's avformat "consumer", which uses ffmpeg's libraries, so I'm formulating this question in terms of ffmpeg.
According to Useful FFmpeg Commands For Converting Audio & Video Files (labnol.org), the following is possible:
ffmpeg -i video.mp4 -t 00:00:50 -c copy small-1.mp4 -ss 00:00:50 -codec copy small-2.mp4
... which slices the "merged" audio+video file into two separate "chunk" files, each also audio+video, in a single call; that's not what I need.
Then, ffmpeg Documentation (ffmpeg.org), mentions this:
ffmpeg -i INPUT -map_channel 0.0.0 OUTPUT_CH0 -map_channel 0.0.1 OUTPUT_CH1
... which splits the entire duration of the two channels of a stereo audio file into two mono files; that's closer to what I need, except I want to split an A+V file into a stereo audio file and a video file.
So I tried this with elephantsdream_teaser.ogv:
ffmpeg -i /tmp/elephantsdream_teaser.ogv \
-map 0.0 -vcodec copy ele.ogv -map 0.1 -acodec copy ele.ogg
... but this fails with "Number of stream maps must match number of output streams" (even if zero-size ele.ogv and ele.ogg are created).
So my question is - is something like this possible with ffmpeg, and if it is, how can I do it?
Your command works, but you need to specify the mapping with colons instead of dots, like so:
ffmpeg -i /tmp/elephantsdream_teaser.ogv -map 0:0 -vcodec copy ele.ogv -map 0:1 -acodec copy ele.ogg
You might want to test with a more recent build of ffmpeg. Mine gave informative errors for your command:
[ogg @ 00000000043f8480] Invalid stream specifier: .0.
Last message repeated 3 times
Stream map '0.0' matches no streams.
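To see the colon syntax work end to end, here is a self-contained sketch that first builds a small A+V test clip from lavfi sources (codecs and filenames are arbitrary choices):

```shell
# Build a 2 s clip with one video stream (0:0) and one audio stream (0:1).
ffmpeg -y -v error -f lavfi -i "testsrc=duration=2:size=64x64:rate=10" \
  -f lavfi -i "sine=frequency=440:duration=2" -c:v mpeg4 -c:a aac -shortest clip.mp4

# Split: video and audio to separate files, without re-encoding.
ffmpeg -y -v error -i clip.mp4 -map 0:0 -c copy video_only.mp4 -map 0:1 -c copy audio_only.m4a
```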
I combine two stereo files into one 4-channel file with:
ffmpeg -i 1.wav -i 2.wav -filter_complex "amerge=inputs=2" -c:a pcm_s24le out.wav
This works fine, but when I open the file in QuickTime or edit it in other applications, the channel assignment shows as L/C/R/SURR. I want it to be QUAD: L/R/LS/RS. How can I tell ffmpeg to set these assignments?
Got it... change the filter to
"amerge=inputs=2,channelmap=0|1|2|3:channel_layout=quad"
and QuickTime will correctly show the channels as L/R/SL/SR.
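For reference, a self-contained version with generated stereo stand-ins for 1.wav and 2.wav, plus a layout check:

```shell
# Two 1 s stereo inputs (sine is mono; -ac 2 upmixes it to stereo).
ffmpeg -y -v error -f lavfi -i "sine=frequency=440:duration=1" -ac 2 1.wav
ffmpeg -y -v error -f lavfi -i "sine=frequency=880:duration=1" -ac 2 2.wav

# Merge to 4 channels and declare the quad layout (FL/FR/BL/BR).
ffmpeg -y -v error -i 1.wav -i 2.wav -filter_complex \
  "amerge=inputs=2,channelmap=0|1|2|3:channel_layout=quad" -c:a pcm_s24le out.wav

# Inspect the layout the file now reports.
ffprobe -v error -select_streams a:0 -show_entries stream=channel_layout \
  -of default=noprint_wrappers=1:nokey=1 out.wav
```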