I'm a beginner with TensorFlow and Python, and I'm trying to build an app that automatically detects key moments in a football (soccer) match (yellow/red cards, goals, etc.).
I'm starting to understand how to do video analysis by training the program on a dataset I built myself, downloading images from the web and tagging them. To get better results, I was wondering if anyone has suggestions for tutorials on how to also train my app on audio files, so the program can detect pitch variations in the video's audio, and on how to combine the video and audio analysis.
Thank you in advance
Since you are new to Python and to TensorFlow, I recommend you focus on just audio for now, especially since it's a strong indicator of important events in a football match (red/yellow cards, nasty fouls, goals, strong chances, good plays, etc.).
Very simply, without much ML at all, you can use the average volume over a time period to infer significance, since crowd and commentator noise tends to spike around big moments. If you want to get a little more sophisticated, you can consider speech-to-text libraries to look for keywords in the commentary.
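To make the volume idea concrete, here is a minimal sketch, assuming you have extracted the commentary audio to a WAV file; the filename, window length, and threshold are placeholders you would tune:

    import numpy as np
    from scipy.io import wavfile

    # Hypothetical input: the match commentary extracted to a WAV file.
    rate, samples = wavfile.read("match_audio.wav")

    if samples.ndim > 1:          # mix stereo down to mono
        samples = samples.mean(axis=1)
    samples = samples.astype(np.float64)

    window = rate * 5             # 5-second analysis windows
    rms = np.array([
        np.sqrt(np.mean(samples[i:i + window] ** 2))
        for i in range(0, len(samples) - window, window)
    ])

    # Windows much louder than the median are candidate key moments.
    threshold = 2 * np.median(rms)
    for i, level in enumerate(rms):
        if level > threshold:
            print(f"possible key moment around {i * 5}s (RMS {level:.0f})")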
Using video to try to determine when something important is happening is much, much more challenging.
This page can help you get started with audio signal processing in Python.
https://bastibe.de/2012-11-02-real-time-signal-processing-in-python.html
I am using the MediaRecorder class to record sound, and I am getting the amplitude at some interval of time and converting it to decibels. But I also want to get the frequency of the audio over that interval, corresponding to that amplitude/decibel value. I searched, but I did not get a clear idea of how to do it.
If someone can guide me, please help.
The process most often used to determine pitch is the Fast Fourier Transform. Try searching for those keywords, or the common abbreviation "FFT", along with the language or platform you are working on, and that should bring up libraries you can incorporate. Coding an FFT yourself is pretty complex, so you'll probably want to use a library. But if you are curious about the math and how it works, check out The Scientist and Engineer's Guide to Digital Signal Processing.
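You mentioned MediaRecorder, so you are presumably on Android, but the idea is the same on any platform. Here is a minimal NumPy sketch (the function name is mine; it assumes you already have a chunk of raw samples and know the sample rate):

    import numpy as np

    def dominant_frequency(samples, sample_rate):
        """Return the strongest frequency (Hz) in a chunk of audio samples."""
        windowed = samples * np.hanning(len(samples))   # reduce spectral leakage
        spectrum = np.abs(np.fft.rfft(windowed))        # magnitude spectrum
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
        return freqs[np.argmax(spectrum)]

    # e.g. on a 1-second 440 Hz test tone sampled at 44100 Hz:
    sr = 44100
    t = np.arange(sr) / sr
    tone = np.sin(2 * np.pi * 440 * t)
    print(dominant_frequency(tone, sr))   # ~440.0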
Let's say I have the audio file for Happy Birthday. I want to convert that audio file into one that sounds like this: happy birthday.
First, I'd like to know if I have the ability to program this. Can a high schooler who's almost finished with APCS program this?
If I can:
How would I change the BPM of the song? I've searched through a bunch of websites, but they weren't very helpful.
I know that audio files can be represented as waveforms. How would I scan for each individual wave in an audio file? (I need this to isolate the notes.)
This is a very ambitious project, actually. One reason is that it involves using digital signal processing tools like the FFT (Fast Fourier Transform) to analyze the sound and pick out the pitches. You might be able to find a library that can do this, but coding such a tool yourself would involve a steep learning curve.
If you would like to look further into this, there is a good online resource called The Scientist and Engineer's Guide to Digital Signal Processing. I was able to work through and understand the discrete Fourier transform with only high school math (lots of trig) and a bit of calculus. It was a lift, though.
Analyzing rhythm is also no easy task. Even with the advanced tools provided in professional notation systems such as Finale, people have trouble playing rhythms in time well enough for the best transcription tools. Algorithms that "quantize" the beats help, but they also limit the amount of detail that can be included in the playback.
My guess is that, as interesting and worthwhile as this project would be, bringing it to completion before the semester ends would require putting together prebuilt pieces. A lot of programming is done that way these days.
If you scale the project back to something like getting your code to analyze a short sample of a single note and report its pitch, that would be both impressive and doable, though still a lot of work. It could be done with a plain DFT algorithm instead of an FFT, reducing the amount of background you'd have to acquire first (see the sketch below). That way, you'd only have to work your way up to understanding and implementing the material on this link, which is about calculating the DFT. Notice that there is example code in BASIC; the code examples throughout this book are a big help.
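To give a flavor of what the DFT version involves, here is a naive sketch in Python (the function name is mine). It is O(n^2), so it is only reasonable for a short clip of a single note, which is exactly the scaled-back project:

    import math

    def dft_peak_frequency(samples, sample_rate):
        """Naive DFT: find the frequency bin with the most energy.
        O(n^2), so only use it on a short clip of a single note."""
        n = len(samples)
        best_bin, best_mag = 0, 0.0
        for k in range(1, n // 2):   # skip bin 0, which is just the DC offset
            re = sum(samples[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
            im = sum(samples[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
            mag = math.hypot(re, im)
            if mag > best_mag:
                best_bin, best_mag = k, mag
        return best_bin * sample_rate / n   # convert bin index to Hz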
I'm currently researching a problem involving DOA (direction of arrival) regression for an audio source, and I need to generate training data in the form of audio signals from moving sound sources. In particular, I have the stationary sound files, and I need to simulate a source and microphone(s) whose distances change over time to reflect movement.
Is there any software online that could potentially do the trick? I've looked into pyroomacoustics and VA as well as other potential libraries, but none of them seem to deal with moving audio sources, due to the difficulties in simulating the doppler effect.
If I were to write up my own simulation code for dealing with this, how difficult would it be? My use case would be an audio source and a microphone in some 2D landscape, both moving with their own velocities, where I would want to collect the recording from the microphone as an audio file.
Some speculation here on my part, as I have only dabbled in writing some aspects of what you are asking about and am not experienced with any particular libraries. The likelihood is good that something exists and will turn up.
That said, I wonder if it would be possible to use either the Unreal or Unity game engine. Both, as far as I can remember, let you load your own cues and support 3D audio, including Doppler.
As far as writing your own, a lot depends on what you already know. With a single-point mic (as opposed to stereo), the pitch shifting involved is not that hard. There is a technique that steps through the audio file's sample data at a varying rate, using linear interpolation for steps that land between data points, and it is considered to have sufficient fidelity for most purposes (a sketch follows below). Lots of trig, too, to track the changes in relative velocity.
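A rough sketch of that interpolation idea, assuming a mono source, a known source-to-mic distance at every output sample, and a fixed speed of sound; amplitude attenuation is omitted, and all names are mine:

    import numpy as np

    def doppler_resample(samples, sample_rate, distances, speed_of_sound=343.0):
        """Re-read a mono waveform with a time-varying propagation delay.
        distances[i] is the source-to-mic distance (metres) at output sample i;
        a shrinking distance raises the pitch, a growing one lowers it."""
        out = np.zeros(len(distances))
        for i, d in enumerate(distances):
            t = i - (d / speed_of_sound) * sample_rate   # fractional source index
            if 0 <= t < len(samples) - 1:
                j = int(t)
                frac = t - j
                # linear interpolation between the two neighbouring samples
                out[i] = (1 - frac) * samples[j] + frac * samples[j + 1]
        return out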
If we are dealing with stereo, though, it does get more complicated, depending on how far you want to go with it. The head masks high frequencies, so real-time filtering would be needed. It would also be good to implement a delay to match the different arrival times at each ear. And if you start talking about pinnae, I'm way out of my league.
As of now, it seems that pyroomacoustics does not support moving sound sources. However, do check a possible workaround suggested by the developers in Issue #105, where the idea of using a time-varying convolution on a dense microphone array is proposed.
I'm really new to machine learning. I have a project to identify a given sound (e.g., cutting wood). In the audio clip there will be several sounds; what I need to do is recognize that particular sound among them. I read some articles about machine learning, but I still lack the knowledge of where to start this project, and I'm also running out of time.
Any help will be really appreciated. Can anyone please tell me how to do this?
Can I directly perform template matching (algorithms) on a sound?
It's a long journey ahead of you, and Stack Overflow isn't a good place for such a generic question. Consult the help section for more.
To get you started, here are some web sites:
Awesome Bioacoustic
Comparative Audio Analysis With Wavenet, MFCCs, UMAP, t-SNE and PCA
Here are two small repos of mine related to audio classification:
Gender classification from audio
Kiwi / not-a-kiwi bird calls detector
They might give you an idea of where to start your project. Check the libraries I am using; they will likely be of help to you.
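To make the MFCC route mentioned above concrete, here is a minimal sketch using librosa and scikit-learn. The file names and labels are made up, and a real dataset needs far more clips than this:

    import librosa
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def mfcc_features(path):
        """Summarise a clip as the mean of its MFCC frames, a common baseline."""
        y, sr = librosa.load(path, sr=None)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

    # Made-up labelled clips: 1 = wood cutting, 0 = anything else.
    paths = ["cut1.wav", "cut2.wav", "other1.wav", "other2.wav"]
    labels = [1, 1, 0, 0]

    X = np.array([mfcc_features(p) for p in paths])
    clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
    print(clf.predict([mfcc_features("unknown.wav")]))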
A little background: I'm faced with converting 3,000 IVR scripts for a new PBX and IVR.
Currently the voice scripts exist in .VOX format, and they're not written out or documented anywhere.
I humbly ask if anyone knows of a program such that I can dump a .vox file into it and have it produce a text document. If the .vox format is a problem, I could probably convert them all to .wav or whatever.
Yes, there are tons of apps/programs that do speech-to-text in real time, but I want to be able to upload/dump a recording into a program and obtain the text.
Can someone point me in the right direction?
Thank you in advance for any sort of comment/help.
SF
The problem is that you are hoping to perform generic speech recognition on low-quality audio files. Low-quality audio significantly reduces the reliability of speech-to-text software, and upsampling your files will not add information that was never captured, which means poor results even if you did have access to a good recognition engine.
Your best bet is to work with a company that performs hybrid machine/human transcription and pay them per transcription. Alternatively, you could consider Amazon Mechanical Turk and buy some general-purpose human effort to get these transcribed. In both cases, it is likely that VOX files would not work; you would first need to convert them to WAV or MP3 so the third party can use off-the-shelf software to listen to the prompts.
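For the conversion step, SoX can read Dialogic/OKI ADPCM .vox files if you tell it the type and sample rate. A batch sketch in Python, assuming SoX is installed and your prompts are the usual 8 kHz Dialogic format (verify against your PBX's documentation, since .vox is a headerless format and the rate is a guess on my part):

    import pathlib
    import subprocess

    # Convert every .vox prompt in ./prompts to a .wav beside it.
    for vox in pathlib.Path("prompts").glob("*.vox"):
        wav = vox.with_suffix(".wav")
        subprocess.run(["sox", "-t", "vox", "-r", "8000", str(vox), str(wav)],
                       check=True)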