How to detect filler sounds like um, uh, etc. using CMUSphinx / Mozilla DeepSpeech / Google STT, etc.?

I am working on a project in speech recognition, and the task is to detect filler sounds like um, uh, eh, etc. in audio clips of children/students speaking English. Their spoken English is not that great.
How can this be done using CMUSphinx / Mozilla DeepSpeech / Google Cloud Speech / Kaldi?
Or do I need to start from scratch?
I also went through other posts and papers on how to build an ASR system, but since this is not a long-term project, I do not have the time to build one from scratch and evaluate the results. Also, I am okay with lower accuracy, which I can claim to improve later on.

Related

Detect different speakers in an audio recording

I want to make an application that counts the speaking time of each speaker in an audio recording. I don't care about doing full voice recognition and transcribing every word in the recording; I just want the speaking time of each voice.
Is there a piece of software that provides such a feature?
If possible, I would like to avoid using a third-party service (such as Google Cloud) to achieve this, and I would like the solution to be light enough to run on a modern smartphone.
Thank you for your help.
I had the same idea. Check this out: https://github.com/pyannote/pyannote-audio
I haven't tried it myself yet; I'll add an edit after I do.
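A minimal sketch of the speaking-time idea on top of pyannote.audio, assuming its pretrained diarization pipeline (the exact model name and any auth-token requirement depend on the library version, and the file name is hypothetical):

```python
from collections import defaultdict
from pyannote.audio import Pipeline

# Assumed pretrained pipeline name; recent versions may also need
# use_auth_token=... for the model download.
pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization")
diarization = pipeline("recording.wav")  # hypothetical input file

# Sum up the duration of every speech turn, per speaker label.
speaking_time = defaultdict(float)
for turn, _, speaker in diarization.itertracks(yield_label=True):
    speaking_time[speaker] += turn.end - turn.start

for speaker, seconds in sorted(speaking_time.items()):
    print(f"{speaker}: {seconds:.1f} s")
```

This runs a neural pipeline, so whether it is light enough for a modern smartphone is a separate question worth testing.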

APCS final project: Converting an audio file to a simpler MIDI file

Let's say I have the audio file for Happy Birthday. I want to convert that audio file into one that sounds like this: happy birthday.
First, I'd like to know whether I have the ability to program this. Can a high schooler who's almost finished with APCS program this?
If I can:
How would I change the bpm of the song? I've searched through a bunch of websites, but they weren't very helpful.
I know that audio files can be represented in waveforms. How would I scan for each individual wave in an audio file (I need this to isolate the notes)?
This is a very ambitious project, actually. One reason is that it involves using digital signal processing tools like the FFT (fast Fourier transform) to analyze the sound and pick out the pitches. You might be able to find a library that can do this, but coding such a tool yourself would involve a steep learning curve.
If you would like to look further into this, there is a good online resource called "The Scientist and Engineer's Guide to Digital Signal Processing". I was able to work through and understand the discrete Fourier transform with only high school math (lots of trig) and a bit of calculus. It was a lift, though.
Trying to analyze rhythm is also no easy task. Even with the advanced tools provided in professional notation systems such as Finale, people have trouble playing rhythms in time well enough for the best transcription tools. Algorithms that "quantize" the beats help, but they also limit the amount of detail that can be included in the playback.
My guess is that as interesting and worthwhile as this project would be, to bring it to completion before the semester ends would require putting together prebuilt pieces. A lot of programming is done that way, these days.
If you scale the project back to something like just getting your code to analyze a short sample of a single note and report its pitch, that would be both impressive and doable, with a lot of work. It could be done with a DFT algorithm instead of requiring an FFT, reducing the amount of background you'd have to acquire first. That way you'd only have to work your way up to understanding and implementing the material at this link, which is about calculating the DFT. Notice that there is example code in BASIC. The code examples throughout this book are a big help.
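If you go that scaled-back route, here is a rough sketch in Python of the single-note pitch estimate (NumPy/SciPy assumed; numpy's FFT is used for brevity, but a hand-rolled DFT over the same window would do the same job, and the file name is hypothetical):

```python
import numpy as np
from scipy.io import wavfile

# Load a short recording of a single sustained note.
rate, samples = wavfile.read("single_note.wav")  # hypothetical file
if samples.ndim > 1:
    samples = samples.mean(axis=1)  # mix stereo down to mono

# Analyze one short window, tapered to reduce spectral leakage.
n = 4096  # assumes the file has at least this many samples
window = samples[:n] * np.hanning(n)

# Magnitude spectrum and the frequency of its strongest bin.
spectrum = np.abs(np.fft.rfft(window))
freqs = np.fft.rfftfreq(n, d=1.0 / rate)
peak = freqs[np.argmax(spectrum)]

print(f"Strongest frequency: {peak:.1f} Hz")
```

Picking the strongest bin is the crudest possible pitch estimate (it can land on a harmonic rather than the fundamental), but it is enough to demonstrate the idea on a clean single note.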

Sound detection of cutting wood

I'm really new to machine learning. I have a project to identify a given sound (e.g., cutting wood). In the audio clip there will be several sounds; what I need to do is recognize that particular sound among them. I read some articles about machine learning, but I still don't know where to start this project, and I'm also running out of time.
Any help will be really appreciated. Can anyone please tell me how to do this?
Can I directly perform template matching (with some algorithm) on a sound?
It's a long journey ahead of you, and Stack Overflow isn't a good place for asking such a generic question. Consult the help section for more.
To get you started, here are some web sites:
Awesome Bioacoustic
Comparative Audio Analysis With Wavenet, MFCCs, UMAP, t-SNE and PCA
Here are two small repos of mine related to audio classification:
Gender classification from audio
Kiwi / not-a-kiwi bird calls detector
They might give you an idea of where to start your project. Check the libraries I am using; they will likely be of help to you.
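To make "where to start" concrete, here is a hedged sketch of one common recipe, MFCC features plus a small classifier, assuming librosa and scikit-learn are available; all file names and labels below are made up for illustration:

```python
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def mfcc_features(path):
    # Load the clip and summarize it as the mean MFCC vector over time.
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical training set: clips of wood cutting vs. everything else.
train_files = ["cutting1.wav", "cutting2.wav", "other1.wav", "other2.wav"]
labels = [1, 1, 0, 0]  # 1 = cutting wood, 0 = other sounds

X = np.array([mfcc_features(f) for f in train_files])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)

# Classify an unseen clip.
print(clf.predict([mfcc_features("unknown_clip.wav")]))
```

In practice you would want many labeled clips per class, and since the target sound occurs somewhere inside a longer recording, you would run this over short sliding windows rather than whole files.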

Tutorial tensorflow audio pitch analysis

I'm a beginner with TensorFlow and Python, and I'm trying to build an app that automatically detects key moments (yellow/red cards, goals, etc.) in a football (soccer) match.
I'm starting to understand how to do video analysis by training the program on a dataset I built myself, downloading images from the web and tagging them. To get better results, I was wondering if anyone had suggestions for tutorials on how to also train my app on audio files, so that the program can recognize when there is a pitch variation in the video's audio, and then combine video and audio analysis.
Thank you in advance.
Since you are new to Python and to TensorFlow, I recommend you focus on just audio for now, especially since it's a strong indicator of events of importance in a football match (red/yellow cards, nasty fouls, goals, strong chances, good plays, etc.).
Very simply, without using much ML at all, you can use the average volume over a time period to infer significance. If you want to get a little more sophisticated, you can consider speech-to-text libraries to look for keywords in commentator speech.
Using video to try to determine when something important is happening is much, much more challenging.
This page can help you get started with audio signal processing in Python.
https://bastibe.de/2012-11-02-real-time-signal-processing-in-python.html
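As a sketch of the average-volume idea above (the one-second window and the threshold factor are arbitrary guesses to tune on real data; the file name is hypothetical):

```python
import numpy as np
from scipy.io import wavfile

rate, audio = wavfile.read("match_audio.wav")  # hypothetical audio track
if audio.ndim > 1:
    audio = audio.mean(axis=1)  # mix down to mono
audio = audio.astype(np.float64)

# RMS energy per one-second window.
window = rate
n = len(audio) // window
rms = np.array([
    np.sqrt(np.mean(audio[i * window:(i + 1) * window] ** 2))
    for i in range(n)
])

# Flag windows much louder than the clip's typical level.
threshold = 3 * np.median(rms)  # arbitrary factor; tune on real matches
for second in np.flatnonzero(rms > threshold):
    print(f"Possible key moment around {second} s (RMS {rms[second]:.0f})")
```

Crowd noise tends to swell over several seconds around goals and cards, so smoothing the RMS curve before thresholding may work better than flagging single loud windows.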

TI-99 speech effect?

I want to make a program that takes recorded speech and transforms it so it sounds like it's coming from a Texas Instruments TI-99. Do you have any good ideas or resources for how to go about that?
Most of those old speech synthesizers were built directly on-chip. Perhaps you could find a software synthesizer that sounds like the chip, but if you really want the original sound you would have to simulate the chip itself (I don't know if that's a simple matter; the chip internals may not even be published).
I only know because I burnt out a number of the Radio Shack speech synthesizer ICs before I managed to get an SP0256-AL2 working.
If you're more of a do-it-yourself type, you need to find out which IC actually drove the speech synthesis in a TI-99, and then build the chip up on a breadboard. That's what I was trying to do back then, and I managed to get the chip to speak, but I lost patience after I fried my third chip due to a wiring mistake when I attempted to attach it to my PC's parallel port. I think this was the book I was using back then, but there's no cover art featured, so it's hard to know for sure.
If you are familiar with how to use ROM images, there is a gentleman who has managed to reverse engineer the ROM image out of an SP0256-AL2. Look here for the image and the story of how he was granted permission to do the work and distribute the results.
You could start with open source that does something similar: Adding Robotic/Vocoder effect to your song using Audacity
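If you would rather prototype the effect in code than in Audacity, here is a rough lo-fi sketch using SciPy: aggressive downsampling plus coarse quantization. To be clear, this does not emulate the TI-99's actual speech hardware; it only approximates the crunchy, band-limited character of old speech chips:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample

rate, voice = wavfile.read("speech.wav")  # hypothetical recording
if voice.ndim > 1:
    voice = voice.mean(axis=1)  # mix down to mono
voice = voice / np.max(np.abs(voice))  # normalize to [-1, 1]

# Resample down hard; old chips produced narrow-band audio.
target_rate = 8000
lofi = resample(voice, int(len(voice) * target_rate / rate))

# Coarse quantization adds the characteristic grit.
bits = 6
levels = 2 ** bits
lofi = np.round(lofi * (levels / 2)) / (levels / 2)
lofi = np.clip(lofi, -1.0, 1.0)  # resampling can overshoot slightly

wavfile.write("ti99ish.wav", target_rate, (lofi * 32767).astype(np.int16))
```

A vocoder pass on top of this, as in the Audacity link above, gets you closer to the synthetic timbre.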
