Methods for determining acoustical similarity (but not fingerprinting) - audio

I'm looking for methods that work in practice for determining some kind of acoustical similarity between different songs.
Most of the methods I've seen so far (MFCC etc.) actually seem to aim at finding identical songs only (i.e. fingerprinting, for music recognition rather than recommendation), while most recommendation systems seem to work on network data (co-listened songs) and tags.
Most MPEG-7 audio descriptors also seem to be along these lines. Plus, most of them are defined at the "extract this and that" level, but nobody seems to actually make use of these features for computing some song similarity, let alone an efficient search for similar items...
Tools such as http://gjay.sourceforge.net/ and http://imms.luminal.org/ seem to use some simple spectral analysis, file system location, tags, plus user input such as a "color" and rating manually assigned by the user, or how often the song was listened to and skipped.
So: which audio features are reasonably fast to compute for a common music collection, and can be used to generate interesting playlists and find similar songs? Ideally, I'd like to feed in an existing playlist, and get out a number of songs that would match this playlist.
So I'm really interested in acoustic similarity, not so much identification / fingerprinting. Actually, I'd just want to remove identical songs from the result, because I don't want them twice.
And I'm also not looking for query by humming. I don't even have a microphone attached.
Oh, and I'm not looking for an online service. First of all, I don't want to send all my data to Apple etc.; secondly, I want to get recommendations only from the songs I own (I don't want to buy additional music right now, while I haven't explored all of my music; I haven't even converted all my CDs into mp3 yet ...); and thirdly, my music taste is not mainstream, so I don't want the system to recommend Mariah Carey all the time.
Plus, of course, I'm really interested in which techniques work well and which don't... Thank you for any recommendations of relevant literature and methods.

Only one application has ever done this really well: MusicIP Mixer.
http://www.spicefly.com/article.php?page=musicip-software
It hasn't been updated for about ten years (and even then the interface was a bit clunky), it requires a very old version of Java, and it doesn't work with all file formats - but it was and still is cross-platform and free. It does everything you're asking: it generates acoustic fingerprints for every mp3/ogg/flac/m3u in your collection, saves them to a tag on the song, and, given one or more songs, generates a playlist similar to those songs. It only uses the acoustics of the songs, so it's just as likely to add an unreleased track which only you have on your own hard drive as a famous song.
I love it, but every time I update my operating system / buy a new computer it takes forever to get it working again.
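MusicIP's own analysis isn't documented here, but a rough do-it-yourself baseline in the same spirit (a per-track acoustic summary, then nearest-neighbour lookup from one or more seed songs) can be sketched with off-the-shelf tools. This is only an illustration, not MusicIP's method; it assumes librosa (with a decoder for your file formats) and scikit-learn are installed, and the file paths are placeholders.

# Per-track MFCC summary + nearest-neighbour playlist seeding (sketch).
import numpy as np
import librosa
from sklearn.neighbors import NearestNeighbors

def track_summary(path):
    # Summarize one file as the mean and standard deviation of its MFCCs.
    y, sr = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

paths = ["track1.mp3", "track2.ogg", "track3.flac"]   # your collection
X = np.vstack([track_summary(p) for p in paths])

nn = NearestNeighbors(n_neighbors=3).fit(X)
seed = track_summary("seed_song.mp3")                 # song the playlist should resemble
dist, idx = nn.kneighbors(seed[np.newaxis, :])
print([paths[i] for i in idx[0]])                     # closest matches, nearest first

Whether MFCC statistics capture the kind of similarity you care about is exactly the open question; rhythm and tempo features are common additions in the music-similarity literature.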

Related

Advice on dynamically combining mpeg-dash mpd data

I'm doing research for a project that's about to start.
We will be supplied with hundreds of 30-second video files that the end user can select (via various filters); we then want to play them as if they were one video.
It seems that Media Source Extensions with MPEG-DASH is the way to go.
I feel like it could possibly be solved in the following way, but I'd like to ask anyone who has done similar things whether this sounds right.
My theory:
Create an MPD for each video (via MP4Box or a similar tool)
The user makes selections (each of which has an MPD)
Read each MPD and get its <Period> elements (most likely only one in each)
Create a new MPD file and insert all the <Period> elements into it in order.
Caveats
I imagine this may be problematic if the videos were all different sizes, formats, etc., but in this case we can assume consistency.
So my question to anyone with MPEG-DASH / MPD experience: does this sound right, or is there a better way to achieve this?
Sounds right; multi-period is the only feasible way, in my opinion.
Ideally you would encode all the videos with the same settings to provide the end user a consistent experience. However, from a technical point of view it shouldn't be a problem if the quality or even the aspect ratio changes from one period to another. You'll need a player which supports multi-period playback, such as dash.js or Bitmovin.
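For what it's worth, the Period-merging step can be sketched with Python's standard-library ElementTree. This assumes each per-clip MPD (as produced by MP4Box or similar) plays correctly on its own and that its segment URLs remain valid from the combined manifest's location (relative BaseURLs may need rewriting); file names are placeholders.

# Merge the <Period> elements of several MPDs into one multi-period MPD (sketch).
import xml.etree.ElementTree as ET

NS = "urn:mpeg:dash:schema:mpd:2011"
ET.register_namespace("", NS)          # keep the default DASH namespace on output

def periods(path):
    # Return the <Period> elements of one MPD file.
    return ET.parse(path).getroot().findall("{%s}Period" % NS)

selected = ["clip1.mpd", "clip2.mpd", "clip3.mpd"]   # the user's selection, in order

tree = ET.parse(selected[0])                         # reuse the first MPD as a template
root = tree.getroot()
for p in root.findall("{%s}Period" % NS):
    root.remove(p)                                   # strip the template's own Periods

for mpd in selected:
    for p in periods(mpd):
        root.append(p)                               # Periods play back to back, in order

# Attributes such as mediaPresentationDuration may also need updating by hand.
tree.write("playlist.mpd", xml_declaration=True, encoding="UTF-8")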

Determining the 'amount' of speaking in a video

I'm working on a project to transcribe lecture videos. We are currently just using humans to do the transcriptions, as we believe it is easier to transcribe from scratch than to edit ASR output, especially for technical subjects (not the point of my question, though I'd love any input on this). From our experience we've found that after about 10 minutes of transcribing we get anxious or lose focus. Thus we have been splitting videos into ~5-7 minute chunks based on logical breaks in the lecture content. However, we've found that the start of a lecture (at least for the class we are piloting) often has more talking than later on, which often includes time where the students are talking among themselves about a question. I was thinking that we could do some signal processing to determine the rough amount of speaking throughout the video. The idea is to break the video into segments containing roughly the same amount of lecturing, as opposed to segments that are the same length.
I've done a little research into this, but everything seems to be a bit overkill for what I'm trying to do. The videos for this course, though we'd like to generalize, contain basically just the lecturer, with some occasional feedback and distant student voices. So can I simply look at the waveform and roughly use the spots containing audio over some threshold to determine when the lecturer is speaking? Or is an ML approach really necessary to quantify the lecturer's speaking?
Hope that made sense, I can clarify anything if necessary.
Appreciate the help as I have no experience with signal processing.
Although there are machine learning methods that are very good at discriminating voice from other sounds, you don't seem to require that sort of accuracy for your application. A simple level-based method similar to the one you proposed should be good enough to get you an estimate of speaking time.
Level-Based Sound Detection
Goal
Given an audio sample, discriminate the portions with a high amount of sounds from the portions that consist of background noise. This can then be easily used to estimate the amount of speech in a sound file.
Overview of Method
Rather than looking at raw levels in the signal, we will first convert it to a sliding-window RMS. This gives a simple measure of how much audio energy is present at any given point of the audio sample. By analyzing the RMS signal we can automatically determine a threshold for distinguishing between background noise and speech.
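For reference, the RMS over a window of N samples x(1), ..., x(N) is sqrt((x(1)^2 + ... + x(N)^2) / N), i.e. the square root of the mean squared amplitude in that window.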
Worked Example
I will be working this example in MATLAB because it makes the math easy to do and lets me create illustrations.
Source Audio
I am using President Kennedy's "We choose to go to the moon" speech. I'm using the audio file from Wikipedia, and just extracting the left channel.
% Load the audio and keep only the left channel.
imported = importdata('moon.ogg');
audio = imported.data(:,1);
% Plot the raw waveform against time in seconds.
plot((1:length(audio))/imported.fs, audio);
title('Raw Audio Signal');
xlabel('Time (s)');
Generating RMS Signal
Although you could technically implement an overlapping per-sample sliding window, it is simpler to avoid the overlap and you'll get very similar results. I broke the signal into one-second chunks and stored the RMS values in a new array with one entry per second of audio.
% One-second, non-overlapping RMS: one value per second of audio.
audioRMS = [];
for i = 1:imported.fs:(length(audio)-imported.fs)
    audioRMS = [audioRMS; rms(audio(i:(i+imported.fs-1)))];
end
plot(1:length(audioRMS), audioRMS);
title('Audio RMS Signal');
xlabel('Time (s)');
This results in a much smaller array, full of positive values representing the amount of audio energy or "loudness" per second.
Picking a Threshold
The next step is to determine how "loud" is "loud enough." You can get an idea of the distribution of noise levels with a histogram:
histogram(audioRMS, 50);
I suspect that the lower shelf is the general background noise of the crowd and recording environment. The next shelf is probably the quieter applause. The rest is speech and loud crowd reactions, which will be indistinguishable to this method. For your application, the loudest areas will almost always be speech.
The minimum value in my RMS signal is .0233, and as a rough guess I'm going to use 3 times that value as my criterion for noise. That seems like it will cut off the whole lower shelf and most of the next one.
A simple check against that threshold gives a count of 972 seconds of speech:
>> sum(audioRMS > 3*min(audioRMS))
ans =
972
To test how well it actually worked, we can listen to the audio that was eliminated.
% Threshold mask: true for each second whose RMS exceeds the criterion.
speech = audioRMS > 3*min(audioRMS);
% Concatenate the seconds classified as non-speech, to listen to what was discarded.
clippedAudio = [];
for i = 1:length(speech)
    if(~speech(i))
        clippedAudio = [clippedAudio; audio(((i-1)*imported.fs+1):i*imported.fs)];
    end
end
>> sound(clippedAudio, imported.fs);
Listening to this gives a bit over a minute of background crowd noise and sub-second clips of portions of words, due to the one-second windows used in the analysis. No significant lengths of speech are clipped. Doing the opposite gives audio that is mostly the speech, with clicks heard as portions are skipped. The louder applause breaks also make it through.
This means that for this speech, the threshold of three times the minimum RMS worked very well. You'll probably need to fiddle with that ratio to get good automatic results for your recording environment, but it seems like a good place to start.
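To turn that per-second speech mask into the roughly-equal-lecturing segments asked about, a simple greedy pass over the mask is enough. The sketch below is in Python and assumes the mask has been exported from MATLAB one value per second (e.g. with csvwrite); the file name and chunk count are placeholders.

# Split a 0/1 per-second speech mask into chunks containing roughly equal speech.
import numpy as np

def split_by_speech(mask, n_chunks):
    # Returns n_chunks+1 boundaries (in seconds); chunk k spans boundaries[k]..boundaries[k+1].
    target = np.sum(mask) / n_chunks          # speech seconds each chunk should get
    boundaries, spoken = [0], 0
    for second, is_speech in enumerate(mask):
        spoken += int(is_speech)
        if len(boundaries) < n_chunks and spoken >= target * len(boundaries):
            boundaries.append(second + 1)     # cut once this chunk has its share
    boundaries.append(len(mask))
    return boundaries

mask = np.loadtxt("speech_mask.csv", delimiter=",")   # hypothetical export of the mask
print(split_by_speech(mask, n_chunks=6))              # chunk boundaries in seconds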

Extracting user interests from social profiles

This is my first time dabbling in NLP so please excuse my ignorance. I'm looking for a method to extract interests/likes/hobbies from users' social profiles. Here is an example where all the interests/likes/hobbies are in bold:
"I consider myself a pretty diverse character... I'm a professional
wrestler, but I'd take a bullet for Wall•E. I train like a one-man genocide machine in the gym, but I cried at "Armageddon." I'll head bang to AC/DC, and I'm seriously considering getting a Legend of Zelda tattoo. I'm 420-friendly. I like to party it up with the frat crowd one night, hang out with my Burning Man friends the next, play Halo and World of Warcraft the next, and jam with friends that aren't any younger than 40 the next. My youngest friend is 16, my oldest friend is 66. I'll sing karaoke at the bars, and I'm my friends' collective psychiatrist/shoulder."
The profiles are plain text. There are no meta tags or ids associated with any of it, it's just a paragraph of text.
My naive idea was to take each noun and match it against Freebase to see if it's an activity/artist/movie/book etc. The problem is that although most entities mentioned will be things the user likes, she will also mention things she doesn't like, and I have no means of distinguishing the two.
I have 2 questions:
What sub field of NLP should I be looking at? Some googleable algorithms/techniques/authors would be greatly appreciated.
How hard is this problem?
Thanks!
First, unless using NLP to do this is a particular objective for you, check your problem domain to see if you can avoid it completely.
For instance:
do these profiles have tags (supplied either by the Site or by the user)?
what does the Site's API make available (assuming that's how you are accessing this data; if you are scraping it, then this of course doesn't apply)? A good example is Facebook: if you read a user's posts, you'll see words like "wrestler", "karaoke", etc., but if you look at what fields are exposed via the Graph API, you'll see that these activities nearly always have an associated FB ID.
I am not a specialist in this field, but I can recommend a couple of NLP resources which are accessible to the non-specialist or novice. The first is a text processing API. This simple web service uses REST and JSON I/O. It is free and seems to have a fairly large rate limit.
This API appears to rely heavily on the excellent Natural Language Toolkit (NLTK), a mature, stable Python library that includes modules directed at the problem in your question, e.g., sentiment analysis, tagging, chunk extraction, etc.
Which particular sub-domain is most relevant to solving the question in the OP? I don't know, but I suspect there's a module somewhere in NLTK that does what you need. Finding that module is hopefully just a matter of skimming the API documentation (which is organized by module) and reading the Getting Started section, which contains an excellent survey of NLTK's modules as well as demos for each of them.
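As a concrete starting point, here is a minimal sketch of the tagging/chunking route with NLTK. It assumes NLTK is installed and the relevant models (tokenizer, POS tagger, NE chunker) have been downloaded via nltk.download(); the candidate strings it yields would still need to be looked up against a knowledge base such as Freebase and filtered for like vs. dislike (e.g. with sentiment analysis on the surrounding sentence).

# Extract named-entity chunks as candidate interests from free-form profile text (sketch).
import nltk

text = ("I'm a professional wrestler, but I'd take a bullet for Wall-E. "
        "I'll head bang to AC/DC, and I'm seriously considering getting "
        "a Legend of Zelda tattoo.")

candidates = []
for sent in nltk.sent_tokenize(text):
    tagged = nltk.pos_tag(nltk.word_tokenize(sent))
    # binary=True collapses all entity types into a single "NE" chunk label.
    tree = nltk.ne_chunk(tagged, binary=True)
    for subtree in tree.subtrees(lambda t: t.label() == "NE"):
        candidates.append(" ".join(word for word, tag in subtree.leaves()))

print(candidates)   # candidate strings to look up and sentiment-filter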

How to search intelligently for something within context? Is there a larger topic involved?

I am trying to build a site that searches a database of user comments for the most often mentioned names of movies. However, with certain movie titles like Up and Warrior (2011), there are far too many irrelevant results, and I want to only search for the title in threads about movies, or else make sure it's mentioned in the right context. Is there a more generalized question that this problem is a subset of? (I'm sure there is, but Google has yielded nothing so far.)
Working out the context of a chunk of text to determine whether the word "up" is referring to a film or not is, unfortunately, something only a human can do at the moment.
Have a look at Amazon's Mechanical Turk service; you can pay people to search through the text for you. This might not be great if you are trying to offer a free service, however.

Comparing audio recordings

I have 5 recorded wav files. I want to compare the new incoming recordings with these files and determine which one it resembles most.
In the final product I need to implement it in C++ on Linux, but now I am experimenting in Matlab. I can see FFT plots very easily. But I don't know how to compare them.
How can I compute the similarity of two FFT plots?
Edit: There is only speech in the recordings. Actually, I am trying to identify the responses of the answering machines of a few telecom companies. It's enough to distinguish two messages: "this person cannot be reached at the moment" and "this number is not used anymore".
This depends a lot on your definition of "resembles most". Depending on your use case this can be a lot of things. If you just want to compare the bare spectra of the whole files, you can simply correlate the values returned by the two FFTs.
However, spectra tend to change a lot when the files are warped in time. To handle this, you need to do a windowed FFT and compare the spectra window by window. This then defines the difference function you can use in a dynamic time warping (DTW) algorithm.
If you need perceptual resemblance, an FFT probably does not get you what you need; MFCCs of the recordings are most likely much closer to this problem. Again, you might need to calculate windowed MFCCs instead of MFCCs of the whole recording.
If you have musical recordings, you again need completely different approaches. There is a blog post that describes how Shazam works, so you might be able to find it on Google. Or if you want real musical similarity, have a look at this book.
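To illustrate the windowed-MFCC-plus-DTW idea, here is a rough sketch in Python using librosa (the final target is C++, but the same pipeline could be built on an MFCC and DTW library there). The file names and sample rate are placeholders; it assumes the incoming recording and the five references are mono WAV files.

# Compare an incoming recording to reference recordings via windowed MFCCs + DTW (sketch).
import librosa

def mfcc_of(path):
    # Framewise MFCCs; 8 kHz is a reasonable rate for telephone-band speech.
    y, sr = librosa.load(path, sr=8000, mono=True)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

references = {name: mfcc_of(name)
              for name in ["msg1.wav", "msg2.wav", "msg3.wav", "msg4.wav", "msg5.wav"]}
incoming = mfcc_of("incoming.wav")

def dtw_cost(a, b):
    # Accumulated alignment cost between two MFCC sequences; lower means more similar.
    # (For references of very different lengths, consider normalizing by path length.)
    D, _ = librosa.sequence.dtw(X=a, Y=b, metric="euclidean")
    return D[-1, -1]

best = min(references, key=lambda name: dtw_cost(references[name], incoming))
print("Most similar reference:", best)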
EDIT:
The best solution for the problem specified above would be the one described here (the "Shazam algorithm" mentioned above). This is, however, a bit complicated to implement, and an easier solution might do well enough.
If you know that there are only 5 different possible incoming files, I would suggest first trying something as easy as computing the Euclidean distance between the two signals (in the time or Fourier domain). It is likely to give you good results.
Edit: So with different possible start times, try cross-correlating the incoming signal with each reference and see which one has the highest peak.
I suggest you compute a simple sound parameter like the fundamental frequency. There are several methods of getting this value; I tried autocorrelation and cepstrum, and for voice signals they worked fine. With such a function working, you can do a time analysis and compare the fundamental frequency of two signals (the base signal you compare against, and the incoming one you would like to match) over given intervals. Comparing several intervals based on such criteria can tell you which base sample matches best.
Of course, everything depends on what you mean by "resembles most". To refine the comparison you can introduce other parameters like volume, noise, clicks, pitch...
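For completeness, here is a minimal sketch of the autocorrelation-based fundamental-frequency estimate mentioned above, in Python with NumPy only; the frame length and search range are guesses suitable for telephone-band speech.

# Estimate the fundamental frequency of a short frame via autocorrelation (sketch).
import numpy as np

def f0_autocorr(frame, fs, fmin=70.0, fmax=400.0):
    # Remove DC, autocorrelate, and pick the strongest lag inside the
    # plausible pitch range; return the corresponding frequency in Hz.
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

# Quick check with a synthetic 150 Hz tone sampled at 8 kHz:
fs = 8000
t = np.arange(0, 0.04, 1.0 / fs)
print(f0_autocorr(np.sin(2 * np.pi * 150 * t), fs))   # should be close to 150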
