I have over 1000 WAV files and I want to extract the first second of every audio file and save them locally on my machine.
Is there any software or code library that can help me do this?
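ffmpeg in a shell loop can do this, and so can Python's built-in wave module. A minimal sketch of the latter (the directory names are placeholders):

import wave
from pathlib import Path

SRC = Path("wavs")            # folder holding the source files (placeholder)
DST = Path("first_seconds")   # output folder (placeholder)
DST.mkdir(exist_ok=True)

for path in SRC.glob("*.wav"):
    with wave.open(str(path), "rb") as src:
        params = src.getparams()
        frames = src.readframes(src.getframerate())  # one second's worth of frames
    with wave.open(str(DST / path.name), "wb") as dst:
        dst.setparams(params)      # copy sample rate, width and channel count
        dst.writeframes(frames)    # the header's frame count is fixed up on close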
Related
I have 2 MP3 files: one is 10 minutes long and the other is 1 second long. I would like to merge these tracks into a new file that plays the 1-second track at random intervals throughout the longer one.
processA: split the longer file into several segments using your random intervals; for details see https://unix.stackexchange.com/a/1675/10949
processB: for each segment from the above splitting operation, append your shorter file; repeat until every processA segment has the shorter file appended; for details see https://superuser.com/a/1164761/81282
Then stitch together all of the files from processB.
I have not tried this, but it might be easier if you first converted both original source MP3 files into WAV before doing anything else; then, once it works as WAV, convert the final WAV back to MP3.
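For example, that whole split/append/stitch process can be sketched in Python with pydub, which wraps ffmpeg and decodes the MP3s for you; the file names and the 30-90 second interval range below are placeholders:

import random
from pydub import AudioSegment   # pydub needs ffmpeg installed to read/write MP3

long_track = AudioSegment.from_mp3("long.mp3")    # the 10-minute file (placeholder name)
short_track = AudioSegment.from_mp3("short.mp3")  # the 1-second file (placeholder name)

result = AudioSegment.empty()
pos = 0
while pos < len(long_track):                # pydub lengths and indices are in milliseconds
    step = random.randint(30, 90) * 1000    # random 30-90 second segment (processA)
    result += long_track[pos:pos + step] + short_track   # append the short file (processB)
    pos += step

result.export("merged.mp3", format="mp3")   # stitch everything back out as MP3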
I have an audio streaming application that uses requests to download the audio file, which is then played using GStreamer.
I want to trim the first few seconds of all the audio files that I have. I could use ffmpeg to trim them, but that would waste CPU resources on my embedded platform and also waste network bandwidth.
(There are around 1000 songs, and they get downloaded continuously, so it does make a difference.)
I have tried downloading a partial file using the Range header in requests, but that doesn't work: I can't play the file.
Can someone please tell me how I can make this work?
The audio files are generally .m4a / .webm, but they are extracted from YouTube, so I can't say for sure.
This is not an easy task; there is no clean way to do it.
You can probably use the valve element and set it to drop by default,
then add a timer which sets drop to false.
I'm not sure how this will work; you will need to try it.
Here are some hints:
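For example, a minimal PyGObject sketch of that valve-plus-timer idea (the URI and the 5-second delay are placeholders, and this is untested; how much audio actually gets dropped depends on how fast the source decodes):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# the valve starts closed (drop=true); the URI is a placeholder
pipeline = Gst.parse_launch(
    "uridecodebin uri=https://example.com/song.m4a ! audioconvert ! "
    "valve name=gate drop=true ! autoaudiosink"
)
gate = pipeline.get_by_name("gate")

def open_gate():
    gate.set_property("drop", False)    # start letting audio through
    return False                        # run this timeout only once

pipeline.set_state(Gst.State.PLAYING)
GLib.timeout_add_seconds(5, open_gate)  # drop roughly the first 5 seconds
GLib.MainLoop().run()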
I have a system that creates audio files of automated fire dispatches as WAV files, and occasionally a file will record multiple calls separated by a multi-tone sequence. Here is a sample
I have been searching for a way to have ffmpeg, or some other Linux CLI tool, recognise the tones and then cut the file into separate WAV files named [unixtimestamp of original]+duration seconds.wav
Does anyone have any ideas?
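One possible direction (not from the thread): if you know the frequency of the separator tone, you can scan the WAV in short windows and flag the windows whose spectrum is dominated by that frequency, then cut at those points. The tone frequency, window length and tolerance below are pure guesses, and the sketch assumes 16-bit PCM:

import wave
import numpy as np

TONE_HZ = 1000.0    # frequency of the separator tone (assumed known)
WINDOW_S = 0.25     # analysis window length in seconds

with wave.open("dispatch.wav", "rb") as w:   # placeholder file name
    rate = w.getframerate()
    samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    if w.getnchannels() == 2:
        samples = samples[::2]               # keep one channel

win = int(rate * WINDOW_S)
hits = []
for i in range(0, len(samples) - win, win):
    chunk = samples[i:i + win].astype(np.float64)
    spectrum = np.abs(np.fft.rfft(chunk))
    peak_hz = np.fft.rfftfreq(win, 1.0 / rate)[np.argmax(spectrum)]
    if abs(peak_hz - TONE_HZ) < 20:          # window dominated by the tone
        hits.append(i / rate)

print("tone detected near (seconds):", hits)

The actual cutting could then be done with ffmpeg -ss/-t at the detected timestamps.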
If I have a remote mp4 file on a server that supports Byte Ranges, is it possible to retrieve a single byte range and create a new/self-contained mp4 from that range data?
If I try to write returned byte-range data directly to an mp4 file using fs.createWriteStream(remoteFilename), it doesn't get the video metadata (duration, dimensions, etc.) that it needs to be playable.
When I get a byte range that starts at 0 and ends at XX, the output mp4 is playable, but it has the duration metadata of the entire video and freezes the screen for whatever remains of that duration once the byte-range data runs out.
How else can I take a byte range and create a stand-alone .mp4 file from that stream object?
The whole point of this is to avoid downloading the entire 10-minute file before I can make a 5-second clip using ffmpeg. If I can calculate and download the right byte range, there should be a way to write it to a standalone mp4 file.
Thanks in advance for any help you can provide.
MP4 files are structured as boxes, the two main ones being moov and mdat (in the general case of a non-fragmented MP4):
moov box: contains other boxes :) - each of them holds information about the encoded data present in the mdat box (moov = metadata about the MP4 file). Typical metadata are duration, framerate, codec information, and references to the video/audio frames...
mdat box: contains the actual encoded data for the file. It can come from various codecs and includes audio and video data (or only one of them, as the case may be). For H.264, the NAL units are contained within the mdat box.
The moov box is (or should be) at the beginning of the file for MP4 web delivery, so if you make a byte-range request from 0 to XX you will likely get the whole moov box plus a certain amount of mdat data; hence the file can be played up to a certain point. If you request a byte range from YY to XX, chances are you will not get a usable moov box, only a lot of mdat data, which as such cannot be used unless it is repacked into an MP4 file with a proper moov box referencing the "cut" mdat.
It is possible to recreate a valid MP4 file from a byte-range chunk, but it requires advanced knowledge of the MP4 file format structure (you also need to retrieve the moov box to make it workable). The MP4 file format is based on the ISO base media file format, specified as ISO/IEC 14496-12 (MPEG-4 Part 12).
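To get a feel for that structure, here is a minimal sketch that walks the top-level boxes of a local file and prints their types and sizes (it only handles the common 32-bit box sizes; the file name is a placeholder):

import struct

with open("input.mp4", "rb") as f:            # placeholder file name
    offset = 0
    while True:
        header = f.read(8)
        if len(header) < 8:
            break
        size, box_type = struct.unpack(">I4s", header)
        if size < 8:                          # 64-bit / to-end-of-file sizes not handled here
            break
        print(f"{box_type.decode('latin-1')} box at offset {offset}, {size} bytes")
        offset += size
        f.seek(offset)                        # jump to the next top-level box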
I know of 2 libs that could help with what you want: one in PHP and one in Java. I do not know if such a lib exists for Node.js (I guess one could be ported). Even if you do not use them, the 2 libs above contain valuable information about the subject.
To provide an answer to your question, you could tackle the issue from a different angle. Knowing which part of the file you want in milliseconds, you could execute an ffmpeg command server-side to splice the full-length MP4 file into a smaller one, and then do what you need with this new, smaller MP4 file (that way you do not need to download unnecessary data on the client).
The ffmpeg command for that is (in this case cutting 1 minute from the beginning of the file):
ffmpeg -i input.mp4 -ss 00:00:00.000 -t 00:01:00.000 -c:a copy -c:v copy output.mp4
See this post for more info on the above command line
This is done pretty fast, as the MP4 file structure is just reorganised with no re-transcoding.
EDIT: Or can I use ffmpeg on a remote file and create the new clip locally?
ffmpeg -ss 00:01:00.000 -i "http://myfile.mp4" -t 00:02:00.000 -c:a copy -c:v copy output.mp4
Assuming you have ffmpeg on your client (app/web), if you run the above command, ffmpeg will fetch the MP4 from the input URL, seek to 1 minute, cut 2 minutes from there, and write the generated content to output.mp4 locally (without downloading the full file, of course).
ffmpeg needs to be built with support for HTTP protocol input (which you will find in most binaries). You can read here for further information on where to place the -ss parameter (pros/cons).
It depends on whether your remote MP4 file is a fragmented MP4 file or a flat MP4 file. If it is a fragmented MP4 file (which I think it is, based on your byte-range comment), you can just download the init byte range and the fragment you are interested in and concatenate them together.
If you have a flat MP4 file, then the accepted answer is the right way to go.
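As a rough illustration of that fragmented-MP4 approach, assuming you already know the byte offsets of the init segment and of the fragment you want (in practice they come from the sidx box or a manifest; the URL and offsets below are placeholders):

import requests

URL = "https://example.com/video.mp4"         # placeholder

def fetch_range(start, end):
    resp = requests.get(URL, headers={"Range": f"bytes={start}-{end}"})
    resp.raise_for_status()
    return resp.content

init = fetch_range(0, 1499)                   # ftyp + moov (the init segment)
fragment = fetch_range(250000, 499999)        # one moof + mdat pair
with open("clip.mp4", "wb") as out:
    out.write(init + fragment)                # init segment followed by the fragment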
The problem: as input there are two MP3 files.
The first is a 24-hour MP3 recording of today's radio broadcast.
The second is a one-minute recording of the same radio station, made during that day.
To put it abstractly, the second file is a kind of "subsequence" of the first one.
Is there any way to automatically determine at which 'part' of the big file the little one is located?
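One possible direction (not from the thread) is cross-correlation on heavily downsampled audio. The sketch below assumes both files have first been converted to low-rate mono WAV, e.g. with ffmpeg -i day.mp3 -ac 1 -ar 1000 day.wav; the file names and the 1 kHz rate are placeholders:

import wave
import numpy as np
from scipy.signal import correlate

def load(path):
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        data = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    return rate, data.astype(np.float64)

rate, haystack = load("day.wav")      # the 24-hour recording, downsampled
_, needle = load("minute.wav")        # the one-minute clip, at the same rate

haystack -= haystack.mean()
needle -= needle.mean()
scores = correlate(haystack, needle, mode="valid", method="fft")
offset = int(np.argmax(scores))
print(f"best match at about {offset / rate:.1f} s into the long file")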