I have two MP3 files: one track is 10 minutes long and the other is 1 second long. I would like to merge them into a new file that plays the 1-second track at random intervals over the longer one.
processA ... split the longer file into several segments, cutting at your random intervals ... for details see https://unix.stackexchange.com/a/1675/10949
processB ... for each segment from the above split, append your shorter file ... repeat until every processA segment has the shorter file appended ... for details see https://superuser.com/a/1164761/81282
then stitch together all of the above files from processB
I have not tried this; however, it might be easier if you first converted both original source MP3 files into WAV before doing anything ... then, once it is done and working as WAV, convert the final WAV back to MP3
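A rough sketch of that pipeline as a shell script (untested; the file names, the 10-59 second interval range, and the integer-second math are all assumptions for illustration):

#!/usr/bin/env bash
# Untested sketch. Assumes long.wav and short.wav were already converted
# from the source MP3s (e.g. ffmpeg -i long.mp3 long.wav) and share the
# same sample rate and channel count.
total=$(ffprobe -v error -show_entries format=duration -of csv=p=0 long.wav)
total=${total%.*}                                 # truncate to whole seconds
pos=0
i=0
rm -f list.txt
while [ "$pos" -lt "$total" ]; do
  len=$(( (RANDOM % 50) + 10 ))                   # random interval: 10-59 s
  ffmpeg -y -i long.wav -ss "$pos" -t "$len" "seg_$i.wav"          # processA
  printf "file 'seg_%d.wav'\nfile 'short.wav'\n" "$i" >> list.txt  # processB
  pos=$(( pos + len ))
  i=$(( i + 1 ))
done
ffmpeg -y -f concat -safe 0 -i list.txt final.wav   # stitch everything together
ffmpeg -y -i final.wav final.mp3                    # convert back to MP3 last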
I want to split an audio file into several equal-length segments using FFmpeg. I want to specify the general segment duration (no overlap), and I want FFmpeg to render as many segments as it takes to go over the whole audio file (in other words, the number of segments to be rendered is unspecified).
Also, since I am not very experienced with FFmpeg (I only use it for simple file conversions with few arguments), I would like, if possible, a description of the command to use, rather than just a piece of code that I won't necessarily understand.
Thank you in advance.
P.S. Here's the context for why I'm trying to do this:
I would like to sample a song into single-bar loops automatically, instead of having to chop them manually using a DAW. All I want to do is align the first beat of the song to the beat grid in my DAW, and then export that audio file and use it to generate one-bar loops in FFmpeg.
In the future, I will try to do something like a batch command in which one can specify the tempo and key signature, and it will generate the loops using FFmpeg automatically (as long as the loop is aligned to the beat grid, as I've mentioned earlier). 😀
You can use the segment muxer. Basic example:
ffmpeg -i input.wav -f segment -segment_time 2 output_%03d.wav
-f segment indicates that the segment muxer should be used for the output.
-segment_time 2 makes each segment 2 seconds long.
output_%03d.wav is the output file name pattern, which results in output_000.wav, output_001.wav, output_002.wav, and so on.
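For the one-bar-loop use case described in the P.S., the segment duration does not have to be hard-coded: one bar of 4/4 lasts 4 * 60 / BPM seconds, so it can be computed from the tempo. A small sketch (the 120 BPM and the 4/4 time signature are example values):

bpm=120        # tempo of the track (example value)
beats=4        # beats per bar, i.e. 4/4 time
bar=$(awk "BEGIN { print $beats * 60 / $bpm }")    # 2 seconds at 120 BPM
ffmpeg -i input.wav -f segment -segment_time "$bar" output_%03d.wav

As long as the first beat is aligned to the start of the file, as described in the question, every output file should then be one bar long.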
I have multiple videos of the same resolution, each with a different length, and a fixed length for the output file, say 4 minutes. Assume there are 4 input files of 30 seconds each, though in general each input could have a different length. I want the first 30 seconds of the output to be blank, the next 30 seconds to be the 1st input file, the next 10 seconds blank, the next 30 seconds the 2nd input file, and so on. Basically, I have a predetermined start point for each input file, and the gaps in between should be a black screen. How can I achieve this? ffmpeg commands are fine, but I'm going to automate this in Node.js, so any tips on that would be great!
There doesn't seem to be a single ffmpeg command to do this, so I had to split the problem into smaller ones.
First I generated a list of the video segments that will make up the final output video. Some of these segments already exist, and some are to be black video.
So I used an ffmpeg command to generate a black video with silent audio of the desired length. Now I have all the segments I need, and it's just a matter of combining them one after another.
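The two ffmpeg commands involved might look like this (a sketch; the resolution, frame rate, and codecs are assumptions and must match the real input segments, since the final concat copies the streams without re-encoding):

ffmpeg -f lavfi -i color=c=black:s=1280x720:r=30 -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -t 10 -c:v libx264 -c:a aac gap_10s.mp4

(Run the same command with -t 30 for a 30-second gap.) A list file then names every piece in playback order:

file 'gap_30s.mp4'
file 'input1.mp4'
file 'gap_10s.mp4'
file 'input2.mp4'

ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4

From Node.js this reduces to computing the gap lengths from the predetermined start points, writing list.txt with fs.writeFileSync, and spawning ffmpeg with child_process.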
I have two short 2-3 minute .wav files that were recorded within 1 minute of each other. They could be anywhere from 0 to 60 seconds off, and I'd like to sync them. There is a sync tone that is played and present in both audio files. There is very little audio in them besides the loud sync tone; it is very obvious when viewed in Audacity.
I've tried every solution listed here: Automatically sync two audio recordings in python
and none of them work. They all share the same problem when they get to this method:
def find_freq_pairs(freqs_dict_orig, freqs_dict_sample):
    time_pairs = []
    for key in freqs_dict_sample.keys():      # iterate through freqs in sample
        if key in freqs_dict_orig:            # if same freq occurs in base (the original used dict.has_key, which is Python 2 only)
            for i in range(len(freqs_dict_sample[key])):      # determine time offset
                for j in range(len(freqs_dict_orig[key])):
                    time_pairs.append((freqs_dict_sample[key][i], freqs_dict_orig[key][j]))
    return time_pairs
Each time, the inner for loop ends up having to do (500k ^ 2) iterations for each of the 512 keys in the freqs_dict dictionary. This will take many months to run. This is with two 3-4 second audio files. With 1-2 minute audio files, it was (5m+ * 5m+) iterations. I think perhaps the library broke with python3, since everyone on that thread seemed happy with it...
Does anyone know a better way to sync two audio files with python?
Thank you
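In case it helps: a standard alternative is FFT-based cross-correlation, which is O(n log n) rather than quadratic, and a loud sync tone produces a very clear correlation peak. A minimal sketch (file names are placeholders; both recordings are assumed to be WAV at the same sample rate):

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate_a, a = wavfile.read("rec_a.wav")
rate_b, b = wavfile.read("rec_b.wav")
a = a.astype(np.float64)
b = b.astype(np.float64)
if a.ndim > 1:                  # mix stereo down to mono
    a = a.mean(axis=1)
if b.ndim > 1:
    b = b.mean(axis=1)

# cross-correlate a against b via FFT; the peak marks the best alignment
corr = fftconvolve(a, b[::-1], mode="full")
lag = int(corr.argmax()) - (len(b) - 1)   # in samples; positive means b starts later
print("offset: %.3f seconds" % (lag / rate_a))

For two 2-3 minute files at 44.1 kHz this runs in seconds rather than months.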
Using FFmpeg, I am trying to combine many audio files into one long one, with a crossfade between each of them. To keep the numbers simple, let's say I have 10 input files, each 5 minutes, and I want a 10 second crossfade between each. (Resulting duration would be 48:30.) Assume all input files have the same codec/bitrate.
I was pleasantly surprised to find how simple it is to crossfade two files:
ffmpeg -i 0.mp3 -i 1.mp3 -vn -filter_complex acrossfade=d=10:c1=tri:c2=tri out.mp3
But the acrossfade filter does not allow 3+ inputs. So my naive solution is to repeatedly run ffmpeg, crossfading the previous intermediate output with the next input file. It's not ideal. It leads me to two questions:
1. Does acrossfade losslessly copy the streams? (Except where they're actively crossfading, of course.) Or do the entire input streams get reencoded?
If the input streams are entirely reencoded, then my naive approach is very bad. In the example above (calling acrossfade 9 times), the first 4:50 of the first file would be reencoded 9 times! If I'm combining 50 files, the first file gets reencoded 49 times!
2. To avoid multiple runs and the reencoding issue, can I achieve the many-crossfade behavior in a single ffmpeg call?
I imagine I would need some long filtergraph, but I haven't figured it out yet. Does anyone have an example of crossfading just 3 input files? From that I could automate the filtergraphs for longer chains.
Thanks for any tips!
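For question 2, chaining the filter appears to work in a single call, since each acrossfade output can feed the next one. A sketch for three inputs, extending the two-file command above (the pattern generalizes to longer chains):

ffmpeg -i 0.mp3 -i 1.mp3 -i 2.mp3 -vn -filter_complex "[0:a][1:a]acrossfade=d=10:c1=tri:c2=tri[a01];[a01][2:a]acrossfade=d=10:c1=tri:c2=tri" out.mp3

On question 1: audio passing through a filtergraph is decoded and re-encoded, so acrossfade is not a lossless copy; the single-call form at least re-encodes each input exactly once instead of repeatedly.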
The problem: there are two MP3 files as input.
The first is a 24-hour MP3 recording of today's radio broadcast.
The second is a one-minute recording of the same radio station, made at some point during the day.
In abstract terms, the second file is a kind of "subsequence" of the first.
Is there any way to automatically determine at which part of the big file the little one is located?
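A sketch of one possible approach (untested at this scale; the file names and the 1 kHz rate are assumptions): decode both files to mono, low-sample-rate PCM to shrink the search space, then find the peak of an FFT-based cross-correlation, as in the sync question above.

ffmpeg -i broadcast_24h.mp3 -ac 1 -ar 1000 long.wav
ffmpeg -i fragment.mp3 -ac 1 -ar 1000 short.wav

At 1 kHz the 24-hour file is about 86 million samples, which an FFT-based correlation can handle on a machine with a few GB of RAM; the peak position divided by 1000 gives the offset in seconds.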