How do I tell the difference between an 8 kHz acoustic model and a 16 kHz model? - cmusphinx

I'm able to get a reasonable level of accuracy with 8 kHz audio files. Now I want to try a higher sample rate, if I can.
Looking at the acoustic models available on this page, they list:
en-us-8khz.tar.gz
en-us-semi-full.tar.gz
en-us-semi.tar.gz
en-us.tar.gz
The one named 8khz is obviously the one for the 8 kHz sample rate, but what about the other three? What sample rates do they match?
If I use a 16 kHz audio file, which of these acoustic models do I need to use?
And in the absence of the sample rate being in the file name, how do I figure out the sample rate of an acoustic model?

You can open the file feat.params in the model folder and look for the -upperf parameter. In an 8 kHz model, -upperf is usually 3500 or 4000. For a 16 kHz model, -upperf is more than 4000, usually 6800.
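A minimal Python sketch of that check (the model folder path is an assumption, and feat.params is assumed to contain simple "-flag value" pairs):
import os

model_dir = "en-us"  # hypothetical path to an unpacked acoustic model folder
params = {}
with open(os.path.join(model_dir, "feat.params")) as f:
    for line in f:
        parts = line.split()
        if len(parts) == 2:
            params[parts[0]] = parts[1]

upperf = float(params.get("-upperf", 0))
if upperf <= 4000:
    print("Probably an 8 kHz model (-upperf = %s)" % upperf)
else:
    print("Probably a 16 kHz model (-upperf = %s)" % upperf)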

Related

Is Speech-to-Text voice training data sampled at 48 kHz still good for improving recognition of 16 kHz speech?

We are training our Azure Cognitive Services Custom Speech model using data recorded in .wav (RIFF) format at 16-bit, 16 kHz, as per the documentation.
However, we have obtained a dataset of speech recorded at 48 kHz and encoded as MP3. Speech Studio seems to be able to train the service on this data without problems, but we would like to know whether training with the higher sample rate only helps when recognising streamed audio at that same higher rate, or whether that does not matter?
Having a higher sample rate like the one you described is desirable in terms of quality of the audio, but it generally won't influence speech recognition. As long as you meet the audio format minimum requirements, speech recognition should work just fine.
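If you do want to match the documented training format exactly, here is a minimal sketch of resampling the data yourself (file names are placeholders; decoding MP3 with librosa requires an audio backend such as ffmpeg):
import librosa
import soundfile as sf

# Resample a 48 kHz MP3 to 16 kHz mono and write it as 16-bit PCM WAV.
y, sr = librosa.load("input_48k.mp3", sr=16000, mono=True)
sf.write("output_16k.wav", y, 16000, subtype="PCM_16")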

Which Spectrogram best represents features of an audio file for CNN based model?

I am looking to understand various spectrograms for audio analysis. I want to split an audio file into 10-second chunks, generate a spectrogram for each, and train a CNN model on top of those images to classify them as good or bad.
I have looked at linear, log, mel, etc., and read somewhere that a mel-based spectrogram is best for this, but without any proper verifiable source. I have used the following simple code to generate a mel spectrogram.
import numpy as np
import librosa
import librosa.display
y, sr = librosa.core.load(r'C:\Users\Tej\Desktop\NoiseWork\NoiseOnly\song.wav')
S = librosa.feature.melspectrogram(y=y, sr=sr)
librosa.display.specshow(librosa.power_to_db(S, ref=np.max))
My question is: which spectrogram best represents the features of an audio file for training a CNN? I have used linear spectrograms, but for some audio files the linear spectrograms look almost identical.
To add to what has been stated, I recommend reading through A Comparison of Audio Signal Preprocessing Methods for Deep Neural Networks on Music Tagging by Keunwoo Choi, György Fazekas, Kyunghyun Cho, and Mark Sandler.
For their data, they achieved nearly identical classification accuracy between simple STFTs and melspectrograms. Since melspectrograms are much lower-dimensional for the same accuracy, they seem to be the clear winner for dimension reduction if you don't mind the preprocessing. The authors also found, as jonner mentions, that log-scaling (essentially converting amplitude to a dB scale) improves accuracy. You can easily do this with Librosa (using your code) like this:
y, sr = librosa.core.load(r'C:\Users\Tej\Desktop\NoiseWork\NoiseOnly\song.wav')
S = librosa.feature.melspectrogram(y=y, sr=sr)
S_db = librosa.core.power_to_db(S)  # convert the power mel spectrogram to a log (dB) scale
As for normalization after db-scaling, that seems hit or miss depending on your data. From the paper above, the authors found nearly no difference using various normalization techniques for their data.
One last thing that should be mentioned is a somewhat new method called Per-Channel Energy Normalization. I recommend reading Per-Channel Energy Normalization: Why and How by Vincent Lostanlen, Justin Salamon, Mark Cartwright, Brian McFee, Andrew Farnsworth, Steve Kelling, and Juan Pablo Bello. Unfortunately, there are some parameters that need adjusting depending on the data, but in many cases it seems to do as well as or better than log-mel spectrograms. You can implement it in Librosa like this:
y, sr = librosa.core.load(r'C:\Users\Tej\Desktop\NoiseWork\NoiseOnly\song.wav')
S = librosa.feature.melspectrogram(y=y, sr=sr)
S_pcen = librosa.pcen(S)  # per-channel energy normalization with default parameters
Although, as I mentioned, there are parameters within pcen that need adjusting! Librosa's documentation on PCEN should get you started if you are interested.
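For reference, a sketch of the main pcen knobs you would typically tune, continuing from the snippet above (the values shown are roughly librosa's defaults, not settings tuned for any particular dataset):
S_pcen = librosa.pcen(
    S,
    sr=sr,
    hop_length=512,      # must match the hop length used to compute S
    gain=0.98,           # strength of the automatic gain control
    bias=2,              # stabilizing offset before compression
    power=0.5,           # root compression exponent
    time_constant=0.4,   # smoothing time constant in seconds
)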
Log-scaled mel spectrograms are the current "standard" for use with Convolutional Neural Networks. They were the most commonly used representation in the Audio Event Detection and Audio Scene Classification literature between 2015 and 2018.
To be more invariant to amplitude changes, normalization is usually applied, either to entire clips or to the windows being classified. Mean/std normalization generally works fine.
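A minimal sketch of per-clip mean/std normalization, assuming S_db is a log-scaled (dB) mel spectrogram like the ones computed above:
import numpy as np

S_norm = (S_db - S_db.mean()) / (S_db.std() + 1e-8)  # small epsilon avoids division by zero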
But from the perspective of a CNN, there is a relatively small difference between the different spectrogram variants, so switching representations is unlikely to fix your issue if two or more spectrograms look basically the same.

Which feature, algorithm is good for Speaker Verification

I have a speaker verification task.
My task is to calculate the similarity between two speech recordings and then compare it against a threshold.
For example: if the similarity score between two recordings is 70% and the threshold is 50%, the speakers are considered to be the same person.
The speech is text-independent; it can be any conversation.
I have experimented with MFCC and GMM for the speaker recognition task, but this task is different: I just need to compare the features of two recordings to get a similarity score. I don't know which features are good for speaker verification, or which algorithm can help me calculate a similarity score between two patterns.
I hope to get your advice.
Many thanks.
State of the art these days is x-vectors:
Deep Neural Network Embeddings for Text-Independent Speaker Verification
Implementation in Kaldi is here.
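Once you have a fixed-length embedding (an x-vector or similar) for each recording, scoring can be as simple as cosine similarity against a threshold. A minimal sketch; the random vectors below stand in for real embeddings so the snippet runs, and the 512-dimensional size and 0.5 threshold are just placeholders:
import numpy as np

def cosine_similarity(a, b):
    # cosine similarity between two 1-D embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In practice emb1/emb2 would come from an x-vector extractor (e.g. the Kaldi
# recipe linked above); random vectors are used here only as stand-ins.
emb1 = np.random.randn(512)
emb2 = np.random.randn(512)

threshold = 0.5  # should be tuned on held-out verification trials, not fixed at 50%
print("same speaker:", cosine_similarity(emb1, emb2) >= threshold)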
I am also working on the TIMIT dataset for speaker verification. I have extracted MFCC features, trained a UBM on them, and adapted it for each speaker. For the adaptation I have used diagonal covariance matrices.
How are you testing the wav files? As for features, you can also use pitch and energy.

Building GMM using SIDEKIT 1.2

I have two-dimensional data in the form of a text file, and I have to build a GMM based on this data using SIDEKIT 1.2.
Which function should I use to estimate the parameters of the Gaussian mixture model (means, covariance matrices, mixture weights, etc.)?
Can you please provide a small example with your own set of (x, y) data and build a GMM from it?
Any help would be greatly appreciated.
Sidekit is a toolkit built mainly for the task of speaker recognition, and its framework (like other similar toolkits) relies on the training data consisting of audio files in .wav, .sph, or raw PCM format.
If you're just building a GMM and don't plan to use it for speaker recognition experiments, I would recommend using another toolkit for general statistical purposes (scikit-learn might be a good choice).
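If you go the scikit-learn route, a minimal sketch of fitting a GMM to two-dimensional data (the file name and the number of components are assumptions):
import numpy as np
from sklearn.mixture import GaussianMixture

X = np.loadtxt("data.txt")  # hypothetical file with one "x y" pair per line, shape (n_samples, 2)

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(X)

print(gmm.weights_)      # mixture weights
print(gmm.means_)        # component means, shape (n_components, 2)
print(gmm.covariances_)  # covariance matrices, shape (n_components, 2, 2)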
If you do plan to do speaker recognition tasks, you will have to do some initial work on your data. If your text data is some form of speaker data, you could convert it to the appropriate format. For example, if the y part is raw audio, convert it to wav files. If y is cepstral features or other features, store it in .h5 format. After doing this, you can build a GMM for speaker recognition tasks by following the tutorials on the Sidekit homepage.

What algorithm is used for audio feature extraction in google's audioset?

I am getting started with Google's AudioSet. While the dataset is extensive, I find the information with regard to the audio feature extraction very vague. The website mentions:
128-dimensional audio features extracted at 1Hz. The audio features were extracted using a VGG-inspired acoustic model described in Hershey et. al., trained on a preliminary version of YouTube-8M. The features were PCA-ed and quantized to be compatible with the audio features provided with YouTube-8M. They are stored as TensorFlow Record files.
Within the paper, the authors discuss using mel spectrograms on 960 ms chunks to get a 96x64 representation. It is then unclear to me how they get to the 1x128 representation used in AudioSet. Does anyone know more about this?
They use the 96x64 data as input to a modified VGG network. The last layer of that VGG is FC-128, so its output will be 1x128, which is where the 128-dimensional feature comes from.
The architecture of VGG can be found here: https://github.com/tensorflow/models/blob/master/research/audioset/vggish_slim.py
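To make the shape bookkeeping concrete, here is a minimal PyTorch-style sketch. This is not the actual VGGish architecture or its released weights, just an illustration of how a 96x64 log-mel patch ends up as a 1x128 embedding once the final fully connected layer has 128 units:
import torch
import torch.nn as nn

class TinyVGGEmbedder(nn.Module):
    """Toy VGG-style network mapping a 1 x 96 x 64 patch to a 128-D embedding."""
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # 96x64 -> 48x32
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 48x32 -> 24x16
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 24x16 -> 12x8
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 12 * 8, 1024), nn.ReLU(),
            nn.Linear(1024, embedding_dim),  # final FC-128 layer -> 1x128 output
        )

    def forward(self, x):
        return self.fc(self.features(x))

patch = torch.randn(1, 1, 96, 64)      # one 96x64 log-mel patch
print(TinyVGGEmbedder()(patch).shape)  # torch.Size([1, 128])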
