I am looking for a toolkit or library to search the contents of audio files for an audio sample.
For example, I have 5 seconds of speech that I know exists somewhere in hundreds of hours of audio, and I want to find the exact file and position of this sub-sample.
The sample is 99% similar, but it may have been converted to a different audio format, so there may be minor differences in the waveform.
I would prefer a .NET library if such an option exists.
Thank you.
What you are trying to do is not an easy DSP problem to solve, and there is no single foolproof method. There is, however, an excellent recent article on audio fingerprinting on CodeProject which goes into some depth on an algorithm that searches for duplicate MP3s, with code in C#. You may be able to adapt the algorithm to your needs.
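If you want a feel for what such an algorithm does, here is a deliberately tiny sketch of the general spectrogram-peak idea many fingerprinting systems are built on. It is in Python/NumPy rather than .NET, and the function names, parameters, and brute-force matching are purely illustrative, not the CodeProject article's actual algorithm:

    import numpy as np

    def frame_peaks(samples, frame_size=4096, hop=2048, peaks_per_frame=3):
        # One tiny "fingerprint" per frame: the indices of its strongest FFT bins.
        window = np.hanning(frame_size)
        peaks = []
        for start in range(0, len(samples) - frame_size, hop):
            spectrum = np.abs(np.fft.rfft(samples[start:start + frame_size] * window))
            peaks.append(frozenset(np.argsort(spectrum)[-peaks_per_frame:].tolist()))
        return peaks

    def best_offset(query_peaks, track_peaks):
        # Slide the query over the track; return (frame_offset, score) of the best match.
        best = (0, -1)
        for offset in range(len(track_peaks) - len(query_peaks) + 1):
            score = sum(len(q & track_peaks[offset + i]) for i, q in enumerate(query_peaks))
            if score > best[1]:
                best = (offset, score)
        return best

Because only the strongest spectral peaks are kept, a match can survive re-encoding to another format reasonably well; real systems hash pairs of peaks and use an inverted index instead of the brute-force sliding search above.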
Let's say I have the audio file for Happy Birthday. I want to convert that audio file into an audio file that sounds like this: happy birthday.
First, I'd like to know if I have the ability to program this. Can a high schooler who's almost finished with APCS program this?
If I can:
How would I change the bpm of the song? I've searched through a bunch of websites, but they weren't very helpful.
I know that audio files can be represented in waveforms. How would I scan for each individual wave in an audio file (I need this to isolate the notes)?
This is a very ambitious project, actually. One reason is that it involves using digital signal processing tools like the FFT (Fast Fourier Transform) to analyze the sound and pick out the pitches. You might be able to find a library that can do this, but as far as coding such a tool yourself, that would involve a steep learning curve.
If you would like to look further into this, there is a good online resource called "The Scientist and Engineer's Guide to Digital Signal Processing". I was able to work through and understand the discrete Fourier transform with only high school math (lots of trig) and a bit of calculus. It was a lift, though.
Trying to analyze rhythm is also no easy task. Even with the advanced tools provided in professional notation systems such as Finale, people have trouble playing rhythms in time well enough for the best transcription tools. Algorithms that "quantize" the beats help, but they also limit the amount of detail that can be included in the playback.
My guess is that as interesting and worthwhile as this project would be, to bring it to completion before the semester ends would require putting together prebuilt pieces. A lot of programming is done that way, these days.
If you scale the project back to something like just getting your code to analyze a short sample of a single note and give its pitch, that would be both impressive and doable, with a lot of work. It could be done with a plain DFT algorithm instead of requiring an FFT, reducing the amount of background you'd have to acquire first. That way, you'd only have to work your way up to understanding and implementing the material at this link, which is about calculating the DFT. Notice that there is example code in BASIC. The code examples throughout this book are a big help.
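To make that scaled-back version concrete, here is a rough sketch of the straight O(N²) DFT from that chapter plus a peak-bin pitch estimate, in Python rather than the book's BASIC. The names are mine, it assumes you already have the note as a list of samples, and it will happily report an overtone if that bin happens to be stronger than the fundamental:

    import math

    def dft_magnitudes(samples):
        # Plain correlation DFT, as in the DSP guide: O(N^2), fine for a short clip.
        n = len(samples)
        mags = []
        for k in range(n // 2):
            re = sum(samples[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
            im = sum(samples[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
            mags.append(math.hypot(re, im))
        return mags  # bin k corresponds to frequency k * sample_rate / n

    def estimate_pitch(samples, sample_rate):
        mags = dft_magnitudes(samples)
        k = max(range(1, len(mags)), key=lambda i: mags[i])  # skip the DC bin
        return k * sample_rate / len(samples)  # estimated pitch in Hz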
I am starting a project to test audio performance on Linux.
What I need to do is play the audio on our web system and check the audio quality (or just check that there is audio output) on Linux.
I am going to record the audio on Linux with ffmpeg. Is there a better choice?
I don't know how to automatically check that what I recorded is what I played, or how to check the quality of the recorded audio.
I think what you need is PESQ (Perceptual Evaluation of Speech Quality). However, I have not found anything open source/free that works out of the box.
You can download the recommendation from here:
http://www.itu.int/rec/T-REC-P.862-200511-I!Amd2/en
Basically this is the reference implementation of PESQ.
Sevana has an audio quality analyser called AQuA, which is not an ITU standard:
http://www.sevana.fi/aqua_wiki.php
It is available for Linux, but I think you have to pay for it.
You can also check the similarity of two audio files with cross-correlation; please refer to:
https://dsp.stackexchange.com/questions/736/how-do-i-implement-cross-correlation-to-prove-two-audio-files-are-similar
I just learned that a lot of people use MATLAB or Octave to generate the necessary data, for example:
http://bagustris.blogspot.ie/2011/11/calculate-time-lag-from-cross.html
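Cross-correlation does not measure perceptual quality the way PESQ does, but for an automated "did I record roughly what I played" check it is easy to script with NumPy. A minimal sketch, assuming both clips are mono NumPy arrays at the same sample rate (the names and the exact normalization are mine):

    import numpy as np

    def similarity_and_lag(played, recorded):
        # Normalized cross-correlation: the peak value is a rough similarity score
        # (close to 1.0 for a near-exact match), and the peak position tells you
        # by how many samples the two signals are offset.
        a = (played - np.mean(played)) / (np.std(played) * len(played))
        b = (recorded - np.mean(recorded)) / np.std(recorded)
        corr = np.correlate(a, b, mode="full")
        peak = int(np.argmax(np.abs(corr)))
        return float(np.abs(corr[peak])), peak - (len(b) - 1)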
I would like to create a utility in either PHP or Perl to convert an audio file created by Nortel's CallPilot voice mail system into a wave file. The problem is that the format, which has the .vbk file extension, is unknown to virtually any audio player. To date, I have not found one that will play a .vbk file. I've looked at audio file conversion libraries in CPAN and tried many of them; they don't recognize the file. I was not successful with PHP's audio formats manipulation either. Nortel does provide a converter, however it does not suit my needs. I would like to have this run via cron on a CentOS system. I don't know how to reverse engineer this format. There seems to be just scraps of info on this format on the web. This page indicates that it is "based on the H.232 format":
https://www.odesk.com/o/jobs/job/Reverse-Engineer-Nortel-VBK-Audio-Format_~~f501f11679f3f6bb/
I know this is a very old thread, but I've recently been looking into converting Nortel's vbk format as well. I was able to import the vbk files into Audacity with the raw data option: Encoding: U-Law, Byte order: little-endian, Channels: 1 Channel (Mono), Sample rate: 8000 Hz. Not sure if they have multiple formats for their vbk files, but mine were from a BCM50 phone system.
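If your files match those settings too, the same conversion can be scripted without Audacity. Here is a minimal Python sketch under that assumption; the question asked for PHP or Perl, but the idea is identical and it runs fine from cron. It treats the whole .vbk file as headerless 8 kHz mono u-law, so if there is a header you would need to skip those bytes first, and note that the audioop module is only in the standard library up to Python 3.12:

    import audioop
    import wave

    def vbk_to_wav(vbk_path, wav_path, sample_rate=8000):
        # Assumes headerless 8 kHz mono u-law data, per the Audacity import settings above.
        with open(vbk_path, "rb") as f:
            ulaw = f.read()
        pcm = audioop.ulaw2lin(ulaw, 2)  # u-law bytes -> 16-bit linear PCM
        with wave.open(wav_path, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(sample_rate)
            w.writeframes(pcm)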
Well, this is the joy of closed proprietary systems. But there is a chance they could play nice. Try to contact Callpilot and see if they'll give you the format specs. It's worth a shot.
As for reverse engineering, you need to be able to generate known content. Like a constant tone at 60Hz for exactly 1 second. Then at 50Hz. Then one lasting 10 seconds. Compare them. Isolate the data from the metadata. There is going to be compression involved, so try a handful of common compression schemes; research into Nortel's practices will probably tell you more. If you can feed that into a player and get a tone back out, you're on your way.
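Generating the known content is the easy half. A quick Python sketch for producing reference tones (the file names, frequencies, and durations are just examples):

    import math
    import struct
    import wave

    def write_tone(path, freq_hz, seconds, sample_rate=8000, amplitude=0.8):
        # Write a mono 16-bit PCM WAV containing a single constant tone.
        n = int(seconds * sample_rate)
        frames = b"".join(
            struct.pack("<h", int(amplitude * 32767 *
                                  math.sin(2 * math.pi * freq_hz * t / sample_rate)))
            for t in range(n)
        )
        with wave.open(path, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(sample_rate)
            w.writeframes(frames)

    # e.g. write_tone("tone_60hz_1s.wav", 60, 1) and write_tone("tone_50hz_1s.wav", 50, 1),
    # then feed them through the phone system and diff the resulting .vbk files.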
There are probably more informed and structured ways to go about reverse engineering, but from my experience it's a lot of trial and error.
I am trying to make an application for listening to podcasts. Each podcast is an mp3 file, around 50MB in size. After reviewing the Using Audio chapter of the Multimedia Programming Guide, I decided to use AVPlayer, as the other options did not seem appropriate. However, the more I work with AVFoundation, the more complicated it seems, and I have a feeling that simply streaming an mp3 file should be easier. Plus, at the top of this document, there is a note stating:
Important: This document contains information that used to be in iOS Application Programming Guide. The information in this document has not been updated specifically for iOS 4.0
Does that mean that I have some other options, or that AVFoundation is maybe overkill for what I need to do? I would really appreciate it if someone could clear things up a bit and let me know if I'm doing something wrong here.
Thanks in advance!
You should explore Cocos Denshion.
http://www.cocos2d-iphone.org/wiki/doku.php/cocosdenshion:cookbook
The audio engine comes with cocos2d, and it is just 5 classes you can include with your project.
It's very simple to use, as you can see from the above link. It's basically just a wrapper for some AVFoundation classes.
The only trick will be to stream your mp3, but it looks like you can simply update the Cocos Denshion CDAudioManager to hand a URL to the AVAudioPlayer, as a start. Whether or not that satisfies your streaming requirement, I don't know.
At the very least, it will give you some AVFoundation code to study.
I just found a pdf with a nice overview of some possible options from this course blog. Together with Julian's suggestion this is all I could find so far.
I am undertaking a personal project which involves developing a system that will automatically generate audio thumbnail clips (about 30 seconds in length) from a full-length track.
In order to do this I want to look at the energy and pitch of the audio to try and correctly identify its major structural features.
Is there any open source software available that can do energy/pitch extraction? If not, I will start looking into alternative methods using MATLAB.
Thanks!
YAAFE (Yet Another Audio Feature Extractor) http://yaafe.sourceforge.net/ does audio feature extraction in MATLAB, Python and C.
You might want to look into the Echo Nest API. It has a lot of audio analysis capabilities, and I know there's a script bundled in the Remix package that can automagically turn songs into shorter or longer versions (I believe the script is called earworm).
Audacity may do it.
Try jAudio, which can extract features from an audio file.
MARSYAS contains bextract for analysis, which can find MFCCs and various other timbral and spectral features. http://marsyas.info/
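If none of those fit and you do end up rolling your own in MATLAB or Python, frame-level energy at least is straightforward to compute. A minimal NumPy sketch (the frame length and function name are arbitrary):

    import numpy as np

    def frame_rms(samples, sample_rate, frame_seconds=0.05):
        # RMS energy per non-overlapping frame; stretches of sustained high energy
        # (e.g. a chorus) are candidate regions for a 30-second thumbnail.
        frame_len = int(frame_seconds * sample_rate)
        n_frames = len(samples) // frame_len
        frames = np.reshape(samples[:n_frames * frame_len], (n_frames, frame_len))
        return np.sqrt(np.mean(frames.astype(float) ** 2, axis=1))

Pitch tracking is considerably harder, which is where the libraries listed above earn their keep.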