ffmpeg output 2 different audio files each to a different output device at the same time

Does anyone know if it is possible to use FFMPEG to output 2 audio files each to a different output device (i.e. sound card) using one command?
If so, how?
If not possible with FFMPEG is there any other free tool that allows this?
Thanks!

Use the absolute path of the folder when you define the outputs.
Tested on Windows: ffmpeg -i "input.mp3" D:\output1.mp3 C:\output2.mp3
Source: https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs

For two inputs going to two different outputs, it's:
ffmpeg -i A.mp3 -i B.mp3 -map 0 Aout.mp3 -map 1 Bout.mp3
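If the goal really is to play to two sound devices rather than write two files, ffmpeg can also send audio straight to playback devices on builds that include them. A rough sketch for Linux/ALSA, where the device names hw:0 and hw:1 are assumptions (check aplay -l for the real ones):
ffmpeg -i A.mp3 -i B.mp3 -map 0:a -f alsa hw:0 -map 1:a -f alsa hw:1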

Related

FFmpeg: How to join multiple mono files into one multichannel file?

I have e.g. 3 mono wav files. I would like to join them into one wav file which has 3 channels (not 2.1). The duration of this wav should be inherited from the longest of the mono files. I tried many commands, but none of them gave the expected result. Could you help?
apad + join
One method is to use apad on the two shorter inputs and then mix them with the join filter:
ffmpeg -i front_left.wav -i front_right.wav -i front_center.wav -filter_complex "[0]apad[FL];[1]apad[FR];[FL][FR][2]join=inputs=3:channel_layout=3.0:map=0.0-FL|1.0-FR|2.0-FC" output.wav
apad + amerge + channelmap
Similar to above, but channelmap (or pan) has to be added because amerge has no mapping functionality and assumes 2.1 instead of 3.0:
ffmpeg -i front_left.wav -i front_right.wav -i front_center.wav -filter_complex "[0]apad[FL];[1]apad[FR];[FL][FR][2]amerge=inputs=3,channelmap=map=FL-FL|FR-FR|LFE-FC" output.wav
You can use ffprobe to get the file durations.
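For example, one way to print just the duration in seconds:
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 front_left.wav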
ffmpeg -layouts will provide a list of accepted channel names and layouts.

FFMPEG encode audio and forced subtitles at same time?

I'm using the latest static build of ffmpeg for Windows.
My input file (.mkv) is:
[video] - 1080, V_MPEG4/ISO/AVC, 14.6 Mbps, ID#0
[audio] - DTS 5.1, 1510 Kbps, ID#1
[subtitles] - S_TEXT/ASS Lossless English, ID#14
My problem is this: I convert the audio so that my target player, an XB1 console (media support faq), is able to play the audio/video. However, sometimes it's rather difficult to hear, or parts may be in a foreign language, so I want to force the English subtitles into the mix at the same time I convert the audio.
Currently, for the audio, I use the following command:
ffmpeg -i input.mkv -codec copy -acodec ac3 output.mkv
Can I somehow tie in the forced subtitles (onto the video) in order to save an extra process of taking the output.mkv and trying to force subtitles on?
Edit: I've tried using the following command to extract subtitles to be able to edit them
ffmpeg -i Movie.mkv -map 0:s:14 subs.srt
However I get the error: Stream map '0:s:14' matches no streams
Edit2: attempted to extract subtitles and succeeded with
ffmpeg -i input.mkv -map 0:14 -c copy subtitles.ass
but I'm still looking to force the subtitles onto the video!
Also, as a little bonus to this question: can I somehow extract the .ass file and edit it so it only produces subtitles for the foreign parts, so that English audio doesn't have subtitles during the movie but foreign audio does?
Cheers
Edit3:
When I try to use both commands at once (my earlier-mentioned audio conversion command & one from the ffmpeg wiki)
ffmpeg -i input.mkv -codec copy -acodec ac3 -vf "ass=subs.ass" output.mkv
I get the following error from ffmpeg,
Filtergraph 'ass=subs.ass' was defined for video output stream 0:0 but codec copy was selected.
Filtering and streamcopy cannot be used together.
Since your media player does not support subtitles, the text has to be burnt onto the video image. For that, use
ffmpeg -i input.mkv -vf "ass=subs.ass" -c:v libx264 -crf 20 -c:a ac3 output.mkv
This will re-encode the video, since text is being added. The CRF value controls the video quality. Lower values produce better quality but larger files. 18 to 28 is a decent range to try.
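If you would rather skip extracting subs.ass first, the subtitles filter can usually read the embedded track directly; a sketch, assuming the English track is the first subtitle stream in the file (si is a zero-based subtitle stream index):
ffmpeg -i input.mkv -vf "subtitles=input.mkv:si=0" -c:v libx264 -crf 20 -c:a ac3 output.mkv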

Mute specified sections of an audio file using ffmpeg

I have a JSON file containing regions that I want to mute in a given audio file. How can I process the audio file to mute the file between the listed sections?
The following command will mute two sections, from 5 to 10 seconds and from 15 to 20 seconds:
ffmpeg -i video.mp4 -af "volume=enable='between(t,5,10)':volume=0, volume=enable='between(t,15,20)':volume=0" ...
Description:
-af applies audio filters. The command works by specifying multiple volume filters that are enabled/disabled at the specified times. volume=enable='between(t,5,10)':volume=0 means: use a volume filter that is enabled between 5 and 10 seconds and sets the volume to 0.
Thanks to @aergistal, it worked for me:
command line:
ffmpeg -i input.mp4 -af "volume=enable='between(t,1,2)':volume=0" output.mp4
nodejs fluent-ffmpeg:
const ffmpeg = require('fluent-ffmpeg');
ffmpeg('input.mp4').audioFilters("volume=0:enable='between(t,1,2)'").output('output.mp4').run();
I came across this post because I was trying to see how to lower sections of audio in a video.
For example, I want the volume between minutes 34 and 35, and between minutes 37 and 40, to be 0.1 times the input volume. The command below works for me; I hope it helps others with the same task:
C:\ffmpeg-4.4-full_build\bin>ffmpeg -i in_video.mp4 -filter:a "volume=enable='between(t,34*60,35*60)':volume=0.1, volume=enable='between(t,37*60,40*60)':volume=0.1" -vcodec copy out_video.mp4
Note that the times passed to between() need to be in seconds.
Refer to the link below for more info about the volume audio filter.
https://trac.ffmpeg.org/wiki/AudioVolume
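Coming back to the JSON part of the original question: if the regions file is a plain array of start/end pairs, the filter string can be generated instead of typed by hand. A rough sketch using jq, assuming regions.json looks like [{"start":5,"end":10},{"start":15,"end":20}]:
AF=$(jq -r 'map("volume=enable='\''between(t,\(.start),\(.end))'\'':volume=0") | join(",")' regions.json)
ffmpeg -i input.mp4 -af "$AF" -c:v copy output.mp4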

Split a video file into separate video and audio files using a single ffmpeg call?

Background: I would like to use MLT melt to render a project, but I'd like that render to result with separate audio and video files. I'd intend to use melt's "consumer" avformat which uses ffmpeg's libraries, so I'm formulating this question as for ffmpeg.
According to Useful FFmpeg Commands For Converting Audio & Video Files (labnol.org), the following is possible:
ffmpeg -i video.mp4 -t 00:00:50 -c copy small-1.mp4 -ss 00:00:50 -codec copy small-2.mp4
... which slices the "merged" audio+video file into two separate "chunk" files, which are also audio+video files, in a single call; that's not what I need.
Then, ffmpeg Documentation (ffmpeg.org), mentions this:
ffmpeg -i INPUT -map_channel 0.0.0 OUTPUT_CH0 -map_channel 0.0.1 OUTPUT_CH1
... which splits the entire duration of the content of two channels of a stereo audio file, into two mono files; that's more like what I need, except I want to split an A+V file into a stereo audio file, and a video file.
So I tried this with elephantsdream_teaser.ogv:
ffmpeg -i /tmp/elephantsdream_teaser.ogv \
-map 0.0 -vcodec copy ele.ogv -map 0.1 -acodec copy ele.ogg
... but this fails with "Number of stream maps must match number of output streams" (even though zero-size ele.ogv and ele.ogg are created).
So my question is - is something like this possible with ffmpeg, and if it is, how can I do it?
Your command works, but you need to specify the mapping with colons instead of dots, as so:
ffmpeg -i /tmp/elephantsdream_teaser.ogv -map 0:0 -vcodec copy ele.ogv -map 0:1 -acodec copy ele.ogg
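Stream-type specifiers are a little more robust if the stream order ever differs between files; the equivalent split would be:
ffmpeg -i /tmp/elephantsdream_teaser.ogv -map 0:v:0 -c:v copy ele.ogv -map 0:a:0 -c:a copy ele.ogg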
You might want to test with a more recent build of ffmpeg. Mine gave correct errors for your command:
[ogg @ 00000000043f8480] Invalid stream specifier: .0.
Last message repeated 3 times
Stream map '0.0' matches no streams.

Add audio (with an offset) to video with FFMPEG

I have a 10 minute video and a 50 minute audio mp3.
The video starts at 500 seconds into the audio.
Using FFMPEG, how can I add the audio to the video but specify a 500-second audio offset (so that they sync up)?
EDIT:
At the bottom of this page it suggests how to specify an offset.
$ ffmpeg -i video_source -itsoffset delay -i audio_source -map 0:x -map 1:y .......
However, when I apply this, it still starts the audio from the start.
We are 8 years later, and the -itsoffset option does work.
Exactly as in your linked page:
ffmpeg -i input_1 -itsoffset 00:00:03 -i input_2
Note that you place the -itsoffset switch before the input you want to delay; in this case input_2 will be delayed.
So in your case, where the video starts later, you would add -itsoffset 00:08:20 before the video input.
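A complete command along those lines, with placeholder file names, would look roughly like this (the video is stream-copied, the mp3 is re-encoded to AAC for the MP4 container):
ffmpeg -itsoffset 00:08:20 -i video.mp4 -i audio.mp3 -map 0:v -map 1:a -c:v copy -c:a aac output.mp4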
I couldn't get audio to offset properly either, and some searching suggests that -itsoffset is currently broken.
You could try and get/compile an old version of ffmpeg before it broke (which doesn't sound like much fun).
Alternately, you could pad your audio with the necessary silence using something like sox and then combine:
sox -n silence.mp3 trim 0 500 # use -r to adjust the sample rate if necessary
sox silence.mp3 input.mp3 padded_input.mp3
ffmpeg -i in.avi -i padded_input.mp3 out.avi
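On current builds you can also get the same silence-padding effect inside ffmpeg with the adelay filter; a sketch, with the delay given in milliseconds and all=1 applying it to every channel:
ffmpeg -i in.avi -i input.mp3 -filter_complex "[1:a]adelay=delays=500000:all=1[a]" -map 0:v -map "[a]" -c:v copy out.avi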
