I've been trying to make ffmpeg pick up files from a folder and merge them together.
The command I have for merging the audio and video is:
ffmpeg -i video.m2v -i audio.wav -c copy -map 0:0 -map 1:0 %original_name%.mxf
This works, but I have to change %original_name% to the name of the video file.
I'm currently using a watch folder in FFAStrans to pick up the video file and a custom ffmpeg command to run the command above. The problem I'm having is that I have to specify the video and audio file names by hand.
The folder has over 100 video and audio files, and each pair shares a name. For example, if it's a food show it would be:
category_name_episode_HighRandomVariable.m2v for video
category_name_episode_HighRandomVariableDifferentFromVideo.wav for audio
An example of this is:
food_johnsCooking_EP1_High745548.m2v and
food_johnsCooking_EP1_High8547885874.wav
I'm using $regext as well, but I don't really know how to use it in FFAStrans. The command looks like this:
$regext("%s_original_name%","(.+)_High")
Does anyone know how I can set it up so the correct audio and video files get merged, and at the same time make sure all the other video/audio pairs are processed, without me having to change the ffmpeg -i inputs to the next file names?
Any help or advice is appreciated.
Many thanks in advance.
This script might help you. I'm assuming that each episode has exactly one audio file (.wav) and one video file (.m2v).
import os
import sys
import subprocess

file_list = sorted(os.listdir(sys.argv[1]))
src_path = os.path.abspath(sys.argv[1])

# Walk the sorted list in pairs; each pair should be one episode's .m2v and .wav
for i, k in zip(file_list[0::2], file_list[1::2]):
    # Same episode field, different extension -> a valid audio/video pair
    if i.split('_')[2] == k.split('_')[2] and i.split('.')[-1] != k.split('.')[-1]:
        dst_dir = os.path.expanduser('~/Videos/FFmpegOutput/')
        os.makedirs(dst_dir, exist_ok=True)
        dst_file = os.path.join(dst_dir, i.split('.')[0] + '.mxf')
        ffmpeg_cmd = ['ffmpeg', '-i', os.path.join(src_path, i),
                      '-i', os.path.join(src_path, k),
                      '-c', 'copy', '-map', '0:0', '-map', '1:0', dst_file]
        subprocess.run(ffmpeg_cmd, check=True)
    else:
        print("Video or audio file is missing for episode {}".format(i.split('.')[0]))
        break
You can run this script as below:
python script.py "path/to/your/audio-video/folder"
Example: python script.py C:/Users/MediaFolder
It will save the multiplexed videos in your Videos folder.
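If the random "_High..." suffixes make strict positional pairing fragile, an alternative is to group files by the part of the name before "_High" (the same idea as the "(.+)_High" regex from the question). This is only a sketch; the naming scheme <prefix>_High<digits>.<ext> is an assumption based on the example filenames above.

```python
import os
import re
import subprocess
import sys

# Assumed naming scheme, based on the example filenames in the question:
#   <prefix>_High<digits>.m2v   and   <prefix>_High<digits>.wav
PAIR_RE = re.compile(r"(.+_High)\d+\.(m2v|wav)$")

def pair_files(names):
    """Group .m2v/.wav files that share the same '<prefix>_High' stem."""
    pairs = {}
    for name in names:
        m = PAIR_RE.match(name)
        if m:
            pairs.setdefault(m.group(1), {})[m.group(2)] = name
    return pairs

if __name__ == "__main__" and len(sys.argv) > 1:
    src = os.path.abspath(sys.argv[1])
    for stem, files in sorted(pair_files(os.listdir(src)).items()):
        if "m2v" in files and "wav" in files:
            out = os.path.join(src, files["m2v"].rsplit(".", 1)[0] + ".mxf")
            subprocess.run(["ffmpeg", "-i", os.path.join(src, files["m2v"]),
                            "-i", os.path.join(src, files["wav"]),
                            "-c", "copy", "-map", "0:0", "-map", "1:0", out],
                           check=True)
        else:
            print("Missing video or audio for", stem)
```

Because the pairing keys off the shared prefix, the order of the directory listing no longer matters.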
I have a folder full of WAV files with separate L and R channels. I've been using SoX for some things, like changing the sample rate of the audio files inside a specific folder with this command:
for file in *.wav; do sox "$file" -r 44100 -b 24 "converted/$(basename "$file")" -V; done
For example, I have these two files that I want to merge:
- CLOSE_1_02.L.wav
- CLOSE_1_02.R.wav
I would like to merge them in a stereo file (L in the left channel and R in the right channel) with the name: "CLOSE_1_02.wav". Can anybody help me?
Thanks.
From the link:
sox -M input.l.wav input.r.wav output.wav
will merge input.l.wav and input.r.wav into output.wav.
I'm sorry, but the answer (1) is wrong. The questioner wants a two-channel file with one sound file in the left channel, and the other in the right channel. I tried the command given, and it produces a 1-channel output.wav with both input files mixed into a single channel.
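To process every L/R pair in the folder automatically, a loop can derive the .R.wav partner and the output name from each .L.wav file. Here is one possible sketch in Python (the .L.wav/.R.wav naming is taken from the examples above, and sox is assumed to be on the PATH):

```python
import glob
import os
import subprocess

def stereo_name(left_name):
    """Derive the .R.wav partner and the stereo output name from a .L.wav file."""
    base = left_name[:-len(".L.wav")]      # e.g. "CLOSE_1_02"
    return base + ".R.wav", base + ".wav"

if __name__ == "__main__":
    for left in sorted(glob.glob("*.L.wav")):
        right, out = stereo_name(left)
        if os.path.exists(right):
            # sox -M (merge) puts input 1 in the left channel, input 2 in the right
            subprocess.run(["sox", "-M", left, right, out], check=True)
        else:
            print("No right-channel file for", left)
```

If -M really does mix down to mono on your SoX build, as the comment above reports, try the long form --combine merge, or check that you aren't accidentally using lowercase -m (mix).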
This question already has answers here:
Using ffprobe to check if file is audio or video only
I'm trying to figure out if a video has audio present in it so as to extract the mp3 using ffmpeg. When the video contains no audio channels, ffmpeg creates an empty mp3 file, which is what I'm currently using to figure out whether audio was present in the first place. I'm sure there is a better way to identify if audio is present in a video. Will avprobe help with this? Can anyone point me to a resource, or perhaps a solution?
Edit: Surprisingly, the same command on my server, running the latest build of ffprobe, doesn't run. It throws an error saying:
Unrecognized option 'select_stream'
Failed to set value 'a' for option 'select_stream'
Any ideas how to rectify this?
I would use ffprobe (it comes along with FFmpeg):
ffprobe -i INPUT -show_streams -select_streams a -loglevel error
If there's no audio it outputs nothing. If there is an audio stream then you get something like:
[STREAM]
index=0
codec_name=mp3
codec_long_name=MP3 (MPEG audio layer 3)
profile=unknown
codec_type=audio
codec_time_base=1/44100
etc
etc...
[/STREAM]
That should be easy enough to parse regardless of the language you're using to make this process automated.
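For instance, a small Python wrapper around that exact ffprobe invocation might look like this (a sketch; it assumes ffprobe is on the PATH):

```python
import subprocess

def probe_audio_streams(path):
    """Run ffprobe and return its stdout; empty output means no audio stream."""
    result = subprocess.run(
        ["ffprobe", "-i", path, "-show_streams", "-select_streams", "a",
         "-loglevel", "error"],
        capture_output=True, text=True)
    return result.stdout

def has_audio(probe_output):
    """A [STREAM] block in the ffprobe output means at least one audio stream."""
    return "[STREAM]" in probe_output

# Usage: has_audio(probe_audio_streams("video.mp4"))
```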
If it is a normal video file from a local path, you can do something like this to find out whether the video has audio or not.
You need to look into MediaMetadataRetriever.
By using METADATA_KEY_HAS_AUDIO you can check whether the video has audio or not.
private boolean isVideoHaveAudioTrack(String path) {
    MediaMetadataRetriever retriever = new MediaMetadataRetriever();
    retriever.setDataSource(path);
    // METADATA_KEY_HAS_AUDIO returns "yes" when an audio track is present,
    // and may return null, so compare against the constant first
    String hasAudioStr = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_HAS_AUDIO);
    retriever.release();
    return "yes".equals(hasAudioStr);
}
Here path is your video file path.
PS: Since this is an old question, I am writing this answer in the hope that it helps other folks.
Found a workaround for this problem. This seems to answer the question I asked:
ffprobe -i input.mp4 -show_streams 2>&1 | grep 'Stream #0:1'
ffprobe -v fatal # set log level to fatal
-of default=nw=1:nk=1 # use default format and hide wrappers and keys
-show_streams # show info about media streams
-select_streams a # show only audio streams
-show_entries stream=codec_type # show only stream.codec_type entries
video.mp4 # input file
A media file containing an audio stream returns:
audio
1
0
0
0
0
0
0
0
0
0
0
0
und
SoundHandler
A media file containing no audio stream returns an empty result.
A non-media file also returns an empty result. If you want an error message for non-media files and for any other error case, use -v error instead:
ffprobe -v error # set log level to error
-of default=nw=1:nk=1 # use default format and hide wrappers and keys
-show_streams # show info about media streams
-select_streams a # show only audio streams
-show_entries stream=codec_type # show only stream.codec_type entries
video.mp4 # input file
Then you get this instead of an empty result:
non-media-file.zip: Invalid data found when processing input
If you only want to know whether there is audio and don't care about the stream details, you can run the following command, which extracts the duration of the audio stream in the input file. If the response is null/whitespace, the input file has no audio in it.
Command:
ffprobe -v error -select_streams a -show_entries stream=duration -of default=noprint_wrappers=1:nokey=1 input.mp4
In a command line, if I run:
ffmpeg -i inputVideo.mp4 -vn -f mp4 -acodec copy outputAudio.aac
everything works perfectly fine.
However if I do the same thing, except writing to standard out instead of the output file ("pipe:1" instead of "outputAudio.aac"), then I get this error:
"Could not write header for output file #0 (incorrect codec parameters ?)"
Help from anyone with ffmpeg experience is greatly appreciated
Thanks
Well, the trouble is you are asking for an mp4 file with a filename of outputAudio.aac. So if you check outputAudio.aac, it is actually an mp4 file. To write mp4 files, ffmpeg needs a seekable file descriptor, which stdout is not (this is because the mp4 moov atom, written at the end of encoding, has to go at the beginning of the file).
If you want AAC dumped to stdout, you should ask for an ADTS stream:
ffmpeg -i input.mp4 -acodec copy -vn -f adts -strict -2 -
If you need it in an mp4, mux it into a file afterwards.
mp4 is not a streaming format: see here Fix 3GP file after streaming from Android Media Recorder for my answer to a different question which explains this.
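If the consumer of the stream is another program rather than a file, the ADTS output can be read straight from ffmpeg's stdout, for example from Python. A small sketch (the extract_aac helper is illustrative, not part of any library):

```python
import subprocess

def adts_cmd(path):
    """Build the ffmpeg command that copies the AAC track to stdout as ADTS."""
    return ["ffmpeg", "-i", path, "-vn", "-acodec", "copy", "-f", "adts", "pipe:1"]

def extract_aac(path):
    # capture_output keeps the raw ADTS bytes off the terminal and in .stdout
    return subprocess.run(adts_cmd(path), capture_output=True, check=True).stdout
```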
I have a single intro video. I want to add the intro at the beginning of the user's uploaded video using ffmpeg or a similar program (and yes, I do need to merge them into one file, so it is possible to download it later).
I've been searching the internet and it suggests converting both (the intro and the other video) into .mpg format.
OK, so far so good, but now when I try to join them together I get:
[mpeg4 # 0x5547c60]Invalid and inefficient vfw-avi packed B frames detected
So I'm guessing it is because of something being different in both videos, like frame rate or size.
The worst thing is users are allowed to upload videos in almost any formats, also 240p-720p quality, so there is not one default size to convert the intro video into.
How could this be done?
Your intro video should match the resolution of the user videos; you would need as many intro videos as there are user-video resolutions. Or convert all the user videos to a single resolution to match that of the intro video. Are you combining the videos by doing intro.mpg + user.mpg? Is that what gives the above error?
Use ffmpeg:
ffmpeg -i 'concat:input1|input2' -codec copy output
or
ffmpeg -i opening.mkv -i episode.mkv -i ending.mkv -filter_complex '[0:0] [0:1] [1:0] [1:1] [2:0] [2:1] concat=n=3:v=1:a=1 [v] [a]' -map '[v]' -map '[a]' output.mkv
or
$ cat mylist.txt
file '/path/to/file1'
file '/path/to/file2'
file '/path/to/file3'
$ ffmpeg -f concat -i mylist.txt -c copy output
Source: Concatenate two mp4 files using ffmpeg
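The third variant (the concat demuxer) is the one that scales best to user uploads, since the list file can be generated per request. A possible Python sketch; note the demuxer's own quoting rule, where a single quote inside a path is written as '\'' :

```python
import subprocess

def write_concat_list(paths, list_path="mylist.txt"):
    """Write a concat-demuxer list file, escaping single quotes the way
    the demuxer expects ('\\'')."""
    with open(list_path, "w") as f:
        for p in paths:
            f.write("file '%s'\n" % p.replace("'", "'\\''"))
    return list_path

if __name__ == "__main__":
    import sys
    if len(sys.argv) > 2:
        # e.g. python concat.py intro.mpg user.mpg
        lst = write_concat_list(sys.argv[1:])
        subprocess.run(["ffmpeg", "-f", "concat", "-i", lst,
                        "-c", "copy", "output.mpg"], check=True)
```

Note that -c copy only works when all inputs share the same codecs and parameters, which is exactly why this thread converts everything to one .mpg format first; absolute paths in the list additionally need -safe 0.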
I have been trying to get lots of wav files delayed by 2 seconds at the start using ffmpeg. So far, even though I have read the manual, I was not able to get it working. Here is my command:
for %%A in (*.wav) do (
ffmpeg -i "%%A" -itsoffset 00:00:02 "%%~NA"1.wav )
Nothing is being changed; the files are simply getting copied. I also tried the same with mp3 files, and with mkv and avi (to make sure it was not a container writing issue), but it gives the same result.
The command is the same as here and here, but it does not work. Please help.
You must put -itsoffset BEFORE you specify input. So:
ffmpeg -itsoffset 00:00:02 -i "%%A" "%%~NA"1.wav
Changing the input time offset like that isn't going to do anything noticeable for a single stream; it's meant for fixing out-of-sync issues between audio and video streams.
Do you want to tack on two seconds of silence at the start? If so, one way that works (although it may feel a bit hackish) is to concatenate a 2-second silent WAV before the actual input. Simply adding another -i option is not enough, because ffmpeg would just pick one of the two audio streams; you have to join them with the concat filter:
ffmpeg -i 2secsilence.wav -i "%%A" -filter_complex "[0:a][1:a]concat=n=2:v=0:a=1" "%%~NA"1.wav
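Another option that avoids keeping a silence file around is ffmpeg's adelay audio filter, which prepends the given delay in milliseconds. A batch sketch in Python (the all=1 shorthand needs a reasonably recent ffmpeg; on older builds list one delay per channel instead, e.g. adelay=2000|2000):

```python
import subprocess

def delay_cmd(src, dst, ms=2000):
    """ffmpeg command that prepends `ms` milliseconds of silence via adelay.
    all=1 applies the same delay to every channel of the stream."""
    return ["ffmpeg", "-i", src, "-af", "adelay=%d:all=1" % ms, dst]

if __name__ == "__main__":
    import sys
    # e.g. python delay.py *.wav  (the shell expands the glob)
    for wav in sys.argv[1:]:
        subprocess.run(delay_cmd(wav, wav[:-4] + "1.wav"), check=True)
```

Unlike -c copy, a filter forces a re-encode, which is harmless for WAV/PCM output.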
I know this question is over 9 months old, but I came across it and wanted to add some more information about '-itsoffset'. From the ffmpeg trouble ticket pages (https://ffmpeg.org/trac/ffmpeg/ticket/1349):
This command should display file1 content one second earlier than file2 content:
ffmpeg -itsoffset -1 -i file1.ts -i file2.ts -vcodec copy -acodec copy -map 0:0 -map 1:1 out.ts
1) What I see is that -itsoffset adds or subtracts from all the timestamps (both the video and audio streams) in a file. So this option is only going to be useful when remuxing from separate input files.
2) outfile has expected playback behavior with .ts and .mkv containers.
3) It does not work with .avi (no timestamps, so not a surprise)
4) It does not work with .mp4 container (a bug?)
And that is where this issue stands as of today.