Handling large mp3 files (>200 MB) to cut them into small pieces? - python-3.x

I tried pydub's AudioSegment, but it can only load small files. With large files my CPU runs at max and the process is killed after a couple of minutes.
I don't have any code written yet. The file is an mp3 audiobook that was downloaded from piratebay; its duration is 8 hours and the file size is around 300 MB. I want to cut it into multiple files of 30-60 minutes each so that I can sync them with Apple Music (which doesn't allow large files).
Pydub doesn't even load it, so I haven't gotten any further.
There is software that can do this, but I'm trying to do it with Python.
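One approach that avoids loading the whole file into memory is to have ffmpeg do the splitting with its segment muxer and stream-copy the audio (no re-encoding), driven from Python. A minimal sketch, assuming ffmpeg is installed and on the PATH; the file name and chunk length are placeholders:

import subprocess

def split_mp3(path, chunk_seconds=1800, out_pattern="part_%03d.mp3"):
    """Split an mp3 into fixed-length chunks without re-encoding."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", path,                            # input file
            "-f", "segment",                       # use the segment muxer
            "-segment_time", str(chunk_seconds),   # chunk length in seconds
            "-c", "copy",                          # stream copy: fast, no quality loss
            out_pattern,                           # part_000.mp3, part_001.mp3, ...
        ],
        check=True,
    )

split_mp3("audiobook.mp3", chunk_seconds=45 * 60)  # ~45-minute pieces

Because the audio is stream-copied, cut points land on mp3 frame boundaries, so chunk lengths may differ from the target by a fraction of a second.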

Related

How do I read/write small sections of an audio file with pysoundfile?

So I'm making a program that corrects stereo imbalance in an audio file. I'm using pysoundfile to read/write the files. The code looks something like this.
import soundfile as sf

data, rate = sf.read("Input.wav")   # loads the entire file into memory
for d in data:                      # iterate over frames
    pass                            # processes audio (rebalances the channels)
sf.write("Output.wav", data, rate, 'PCM_24')
The issue is that I'm working with DJ mixes that can be a couple of hours long, so loading the entire mix into RAM causes the program to be killed.
My question is: how do I read/write the file in smaller sections instead of loading the entire thing?
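pysoundfile can read and write in chunks rather than all at once: sf.blocks() yields fixed-size blocks, and a SoundFile opened in write mode can be appended to block by block. A minimal sketch, where the file names and block size are placeholders:

import soundfile as sf

BLOCK = 1024 * 64                      # frames per block; tune to taste
info = sf.info("Input.wav")            # sample rate / channel count of the source

with sf.SoundFile("Output.wav", "w",
                  samplerate=info.samplerate,
                  channels=info.channels,
                  subtype="PCM_24") as out:
    for block in sf.blocks("Input.wav", blocksize=BLOCK):
        # block is a (frames, channels) numpy array; process it here
        out.write(block)

Only one block is in memory at a time, so the file length no longer matters. Note that per-block processing only works if the correction doesn't need global statistics; otherwise do a first cheap pass to gather them, then a second pass to apply the fix.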

Record screen with high quality and minimum size in ElectronJS (Windows)

As I said in the title, I need to record my screen from an Electron app.
My needs are:
high quality (720p or 1080p)
minimum size
record audio + screen + mic
low impact on PC hardware while recording
no long wait after the recorder stops
By minimum size I mean about 400 MB at 720p and 700 MB at 1080p for a 3-to-4-hour recording. We have already achieved this with Bandicam and OBS, so it's possible.
I already tried:
the simple MediaStreamRecorder API via RecordRTC.js; it produces huge files, around 1 GB per hour for 720p video.
compressing the output video with FFmpeg afterwards; it can take up to an hour for a 3-hour recording.
saving every chunk with the 'ondataavailable' event, immediately compressing it with FFmpeg, and then appending all the compressed files (also with FFmpeg); there are two problems: 1) the chunks have mismatched PTS values, which can be fixed by tuning the compression arguments, and 2) the main problem: the audio headers are only present in the first chunk, so the resulting video has audio for only the first few seconds.
recording the video with FFmpeg itself; end users need to change some things manually (Stereo Mix), the configuration is too complex, it slows down the whole PC while recording (FPS drops even with -threads set to 1), and in some cases it takes a long time to finalize the file after recording ends.
searching the internet for applications that can be driven from the command line; I couldn't find much. Well-known applications like Bandicam and OBS do have command-line arguments, but there aren't many to play with and I can't set many options, which leads to other problems.
I don't know what else I can do. Please tell me if you know a way or a simple tool that can be used through a CLI to achieve this, and guide me through it.
I ended up using the portable mode of a high-level third-party application (obs-studio) and adding it to our final package. I also created a JS file to control the application through its CLI.
This way I could preset my options (such as the CRF value), and now our average output size for a 3.5-hour recording at 1080p is about 700 MB, which is impressive.
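For reference, the launch step could look roughly like the following. This is a minimal sketch in Python (the answer above used a JS wrapper), assuming a portable OBS build unpacked next to the app and a pre-configured profile named "Recording"; both paths and names are placeholders:

import subprocess

OBS_DIR = r"C:\myapp\obs-studio\bin\64bit"   # hypothetical location of the portable build

proc = subprocess.Popen(
    [
        "obs64.exe",
        "--portable",               # use the portable config shipped alongside the app
        "--profile", "Recording",   # hypothetical profile with encoder/CRF options preset
        "--startrecording",         # start recording immediately on launch
        "--minimize-to-tray",       # keep OBS out of the user's way
    ],
    cwd=OBS_DIR,                    # OBS expects to be launched from its bin directory
)

# Stopping cleanly is a separate concern (e.g. via the obs-websocket plugin);
# killing the process abruptly can leave the recording unfinalized.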

Run the same script with the same file multiple times

I have an audio file, and I am running a script to decode it. Now I want to measure the latency of the system.
So I am running the script with the time command,
like $ time ./script.sh
The path to the audio file is specified inside script.sh.
Now I want to check the latency when the same audio file is processed 100, 1,000, and 10,000 times. Initially, I was thinking of actually copying the same audio file 100/1,000/10,000 times and running the script again to measure the time.
Is there another method that doesn't require copying the same audio file that many times (100 to 10,000 copies) while still measuring the time?
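One way to avoid the copies is to invoke the script in a loop against the same file and time the whole loop. A minimal sketch, assuming script.sh is executable and already contains the path to the audio file:

import subprocess
import time

RUNS = 100                      # try 100, 1000, 10000
SCRIPT = "./script.sh"          # decodes the audio file whose path it already contains

start = time.perf_counter()
for _ in range(RUNS):
    subprocess.run([SCRIPT], check=True)   # run the decode once per iteration
elapsed = time.perf_counter() - start

print(f"{RUNS} runs took {elapsed:.2f} s ({elapsed / RUNS:.4f} s per run)")

This measures the same workload as N copies of the file, minus the extra disk space; the only difference is that repeated runs may hit the OS file cache, so the first run can be slower than the rest.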

How to sync two audio files in python?

I have two short 2-3 minute .wav files that were recorded within a minute of each other; they could be anywhere from 0 to 60 seconds off, and I'd like to sync them. A sync tone is played and is present in both audio files. There is very little audio besides the loud sync tone, and it is very obvious when viewed in Audacity.
I've tried every solution listed here: Automatically sync two audio recordings in python
and none of them work. They all share the same problem when they get to this method:
def find_freq_pairs(freqs_dict_orig, freqs_dict_sample):
    time_pairs = []
    for key in freqs_dict_sample.keys():      # iterate through freqs in sample
        if freqs_dict_orig.has_key(key):      # if same freq occurs in base (has_key was removed in Python 3; use `key in freqs_dict_orig`)
            for i in range(len(freqs_dict_sample[key])):   # determine time offset
                for j in range(len(freqs_dict_orig[key])):
                    time_pairs.append((freqs_dict_sample[key][i], freqs_dict_orig[key][j]))
    return time_pairs
Each time, the inner loops end up doing about (500k)^2 iterations for each of the 512 keys in the freqs_dict dictionary, which would take months to run, and that's with two 3-4 second audio files. With 1-2 minute files it was more than 5M x 5M iterations. I think perhaps the library broke with Python 3, since everyone on that thread seemed happy with it...
Does anyone know a better way to sync two audio files with python?
Thank you
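Since both recordings contain the same loud sync tone, one simpler approach is plain cross-correlation: the lag that maximizes the correlation between the two signals is the offset between them, and the FFT-based method is fast even for minutes of audio. A minimal sketch with scipy; the file names are placeholders, and it assumes both files share the same sample rate:

import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

rate_a, a = wavfile.read("take_a.wav")
rate_b, b = wavfile.read("take_b.wav")

def mono(x):
    """Mix to mono and normalize so amplitudes are comparable."""
    x = x.astype(np.float64)
    if x.ndim > 1:
        x = x.mean(axis=1)
    return x / (np.abs(x).max() or 1.0)

a, b = mono(a), mono(b)

# FFT-based cross-correlation is O(n log n), so long files are fine
corr = correlate(a, b, mode="full", method="fft")
lag = np.argmax(corr) - (len(b) - 1)
# positive lag: the sync tone occurs `lag` samples later in take_a than in take_b,
# so trim that much from the start of take_a (or delay take_b) to align them
print(f"take_a is offset by {lag / rate_a:+.3f} s relative to take_b")

Because the sync tone dominates both recordings, the correlation peak should be sharp; if it isn't, band-pass filtering around the tone's frequency before correlating usually helps.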

Is there any way to download a file at a constant speed?

I am trying to write a script in Python or Node.js that can download a file, image, or video at a constant speed. Let's say the average download speed of my connection is 10 MB/s; I want to dedicate 3 MB/s to that script, and it must download the media at that constant rate.
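A common way to do this in Python is to stream the download in chunks and sleep whenever the measured average rate gets ahead of the target. A minimal sketch with requests; the URL, file name, and rate limit are placeholders:

import time
import requests

def download_at(url, dest, rate_limit=3 * 1024 * 1024, chunk_size=64 * 1024):
    """Stream `url` to `dest`, throttled to roughly `rate_limit` bytes/second."""
    start = time.monotonic()
    received = 0
    with requests.get(url, stream=True) as r, open(dest, "wb") as f:
        r.raise_for_status()
        for chunk in r.iter_content(chunk_size=chunk_size):
            f.write(chunk)
            received += len(chunk)
            # If we're ahead of schedule, sleep until the average rate drops back
            expected = received / rate_limit        # seconds the bytes *should* have taken
            elapsed = time.monotonic() - start
            if expected > elapsed:
                time.sleep(expected - elapsed)

download_at("https://example.com/video.mp4", "video.mp4")

This keeps the long-run average at the target rate; it doesn't smooth out short bursts within a single chunk, so smaller chunk sizes give a steadier rate at the cost of more overhead.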
