I am trying to change the pitch of a buffer sample using a ScriptProcessorNode, but what kind of formulas do I need to do this? I am not looking for exact JS code, just a general mathematical how-to. That said, some example code would help, since the first answer has a lot of formulas that I have no idea how to implement in JS.
I know that this works in the time domain, but according to this it can also be done with the FFT, though I have no idea how one would do that.
For one method of doing time-pitch modification using an FFT, look up phase vocoder. Here's one explanation of how a phase vocoder works (but a search will turn up many others): http://www.guitarpitchshifter.com/algorithm.html
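To make that concrete, here is a rough numpy sketch of the stretch-then-resample phase vocoder idea. It is in Python rather than JS purely for readability, every name and parameter in it is my own choice, and the output gain is not normalized; treat it as a map of the math, not production code:

```python
import numpy as np

def time_stretch(x, rate, n_fft=2048, hop=512):
    """Phase-vocoder time stretch of a mono float signal x.
    rate < 1 lengthens the output, rate > 1 shortens it."""
    win = np.hanning(n_fft)
    # Expected phase advance of each FFT bin over one hop.
    expected = 2 * np.pi * hop * np.arange(n_fft // 2 + 1) / n_fft
    starts = np.arange(0, len(x) - n_fft - hop, hop * rate)
    out = np.zeros(len(starts) * hop + n_fft)
    acc_phase = np.zeros(n_fft // 2 + 1)
    for k, s in enumerate(starts):
        i = int(s)
        a = np.fft.rfft(win * x[i:i + n_fft])
        b = np.fft.rfft(win * x[i + hop:i + hop + n_fft])
        if k == 0:
            acc_phase = np.angle(b)
        else:
            # Deviation from the expected advance gives each bin's
            # true frequency; accumulate it at the synthesis hop.
            delta = np.angle(b) - np.angle(a) - expected
            delta -= 2 * np.pi * np.round(delta / (2 * np.pi))  # wrap
            acc_phase += expected + delta
        frame = np.fft.irfft(np.abs(b) * np.exp(1j * acc_phase))
        out[k * hop:k * hop + n_fft] += win * frame  # overlap-add
    return out

def pitch_shift(x, semitones, n_fft=2048, hop=512):
    """Shift pitch without changing duration: stretch, then resample."""
    factor = 2.0 ** (semitones / 12.0)
    stretched = time_stretch(x, 1.0 / factor, n_fft, hop)
    # Naive linear-interpolation resample back to the original length.
    idx = np.arange(0, len(stretched) - 1, factor)
    lo = idx.astype(int)
    frac = idx - lo
    return (1 - frac) * stretched[lo] + frac * stretched[lo + 1]
```

The two steps are independent: the phase vocoder changes duration while keeping pitch, and the resampling changes both at once, so chaining them leaves the duration fixed and moves only the pitch.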
I believe https://github.com/mikolalysenko/pitch-shift would be appropriate (the quality is not on par with other code, but this library is rather easy to understand/use). You can hear a demo at http://mikolalysenko.github.io/pitch-shift/.
I'm looking to create this project in Processing; however, I'm finding the terminology a bit hard. I'm not sure what to call the effect where the line stays permanently throughout the song to 'draw' the music data.
I would appreciate any guidance on tutorials I could look at, or an answer from someone.
My aim is to create something as close to this as possible:
https://www.youtube.com/watch?v=Bb5PTitqtlc&t=58s
Stack Overflow isn't really designed for general "how do I do this" type questions. It's for specific "I tried X, expected Y, but got Z instead" type questions. But I'll try to help in a general sense:
You need to break your problem down into smaller pieces and then take those pieces on one at a time. Write down exactly what you want to happen, in English, and that will be an algorithm that you can think about implementing with code.
Get something simple working. Can you write a simple sketch that plays a song? Then work your way forward in small steps. Can you write a simple sketch that prints out some numeric values based on the song that's playing? Separately from that, can you create a very simple visualization using hard-coded numbers? Get all of that working separately before you think about combining them into a sketch that shows a visualization based on a song that's playing.
Then if you get stuck, you can post a more specific question along with a MCVE. Good luck.
How can I get the full waveform of a sound or, even better, the amplitude (volume) of the sound at regular time intervals?
In fact I need both: the full waveform, to display the shape of the song, and the periodic measurements, for visual effects.
This is for Android (NDK) systems.
Come on, people. I'm not asking for a full code answer; I just want some advice or pointers that could help me. You could simply say that the question is hard or makes no sense, but say something.
In the end I researched a little and, while I didn't find an answer to the question itself, I did find a better solution to the problem: a free library named Superpowered. It is simple, fast, cross-platform, and has all the functions needed to analyze sound.
I hope this helps people who are new to the world of sound programming.
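For anyone landing here with the original question, the "volume at regular intervals" part is just RMS amplitude per fixed window. A minimal sketch (numpy for brevity; the same loop ports directly to C++ for the NDK, and all names here are my own):

```python
import numpy as np

def rms_envelope(samples, sample_rate, window_ms=20):
    """RMS amplitude of a mono float signal, one value per window."""
    win = int(sample_rate * window_ms / 1000)
    n_windows = len(samples) // win
    frames = samples[:n_windows * win].reshape(n_windows, win)
    return np.sqrt((frames ** 2).mean(axis=1))
```

The full waveform for display is just the raw sample array (or a min/max downsample of it); the envelope above is what drives the visual effects.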
I am new to Processing; I found it by searching for "draw with coding" and tried it. It seems that every time I modify the code, I have to stop and run it again to see the final result.
Is there any way to get an updated graph without re-running? That would be much more convenient for creating simple figures.
If not, is there any alternative to Processing that can draw a graph with code?
I've used TikZ in LaTeX, but that is just for LaTeX; I want something that lets me draw a figure by coding. I've suffered enough using software like CorelDRAW, which lacks the fundamental elegance of coding.
Thanks a lot!
Please have a look at the FluidForms libraries.
easy to set up
documentation and video tutorials
as long as you don't run into exceptions, you can live code comfortably
if you prefix public variables with param you also get sliders for free :)
Do check out the video tutorials.
Also, if using Python isn't a problem I recommend having a look at:
NodeBox
Field
Python is a brilliant scripting language which makes prototyping/'live coding' easy (it can also be compiled and plays nicely with C/C++), and it is easy to pick up and a joy to use.
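As a small illustration of drawing and tweaking a figure from code without a stop-and-re-run cycle (my own example, using matplotlib rather than the tools named above):

```python
import numpy as np
import matplotlib.pyplot as plt

plt.ion()  # interactive mode: the figure window updates live
fig, ax = plt.subplots()
x = np.linspace(0, 2 * np.pi, 200)
(line,) = ax.plot(x, np.sin(x))

# Later, from the same REPL session, change the curve in place
# without re-rendering the whole figure:
line.set_ydata(np.sin(2 * x))
fig.canvas.draw_idle()
```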
In Processing, you must re-run your program to see the changes (graphically), unless you write code that takes user input to dynamically adjust what you are drawing. For creating user interfaces there is, for example, the controlP5 library (http://www.sojamo.de/libraries/controlP5/).
It doesn't support "live coding" (at least that I know of).
You must re-run the code to see the new result.
If live coding is what you're looking for, check out Fluxus (http://www.pawfal.org/fluxus/) or Impromptu (http://en.wikipedia.org/wiki/Impromptu_(programming_environment)).
I am looking for some advice on categorizing a library of sound effects. I have a large set of random sound effects (think whistles, pops, growls, creaks, gunshots, etc.). I would like to be able to take a growl, for example, and find the growl in the set that sounds closest to the original.
Given a sound, what sound from my set sounds the closest to it.
I have done a fair amount of googling and found two avenues that I am still researching. One is using Echonest, although their "best match" support does not look promising for public users. The other option is diving into FFT and building my own matching algorithm. This is a fine option and would be a great learning experience, but I wanted to get some opinions from others who might know a little more about sound processing, especially for short clips in the 0.5-3 second range rather than full-length music.
Thanks!
I have worked in movie post-production for years and, as far as I know, there is no way to do that automatically. Every file has meta information in its header that describes what the sound is like; you then search not the file names but the metadata string.
I don't think it is trivial to sort effects programmatically, as two effects that sound similar might look totally different at the waveform level.
You would need to extract significant information about a sound that you can then compare.
I am not a DSP expert either; maybe there are established methods to do this.
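One common version of "extract significant information, then compare" uses MFCCs, a compact timbre descriptor. A hedged sketch with the librosa library (my own illustration, not something the answer above prescribes):

```python
import numpy as np
import librosa

def mfcc_signature(path, sr=22050, n_mfcc=13):
    """Mean MFCC vector: a crude timbral fingerprint of one clip."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def closest_match(query_path, library_paths):
    """Return the library clip whose signature is nearest the query."""
    q = mfcc_signature(query_path)
    dists = [np.linalg.norm(q - mfcc_signature(p)) for p in library_paths]
    return library_paths[int(np.argmin(dists))]
```

Averaging over time throws away the temporal shape of the effect, which matters for clips this short; keeping the full MFCC sequence and comparing with something like Dynamic Time Warping is the usual refinement.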
If you're interested in trying to build your own system to do this, I can suggest a few keywords that might help to refine your Google searches. In the academic research community, the task you're describing is often called "content-based audio searching". I know there's been a lot of work done on it, and though most pertains to music, sound effects have definitely been the focus of a number of studies.
You might want to start with the work of Pedro Cano.
Also, I recently heard about a company that's doing similar work. You might want to check out products from Imagine Research.
Those are just a couple of ideas off the top of my head. I'm not 100% sure they'll be helpful; if they are, please let me know!
I am looking for a way to compare a user submitted audio recording against a reference recording for comparison in order to give someone a grade or percentage for language learning.
I realize that this is a very unscientific way of doing things and is more of a gimmick than anything.
My first thoughts are some sort of audio fingerprinting, or waveform comparison.
Any ideas where I should be looking?
This is by no means a trivial problem to solve, though there is an abundance of research on the topic. Presently the most successful forms of machine learning in the speech recognition domain apply Hidden Markov Model techniques.
You may also want to take a look at existing implementations of HMM algorithms. One such library in its early stages is ghmm.
Perhaps even better and more readily applicable to your problem is HTK.
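Purely to illustrate the train-then-score idea (this sketch uses the hmmlearn library, a substitution on my part rather than the ghmm or HTK APIs, and the features are random placeholders):

```python
import numpy as np
from hmmlearn import hmm

# Placeholder features: each utterance as a (frames x 13) MFCC matrix.
rng = np.random.default_rng(0)
reference_utterances = [rng.normal(size=(80, 13)) for _ in range(5)]

X = np.concatenate(reference_utterances)
lengths = [len(u) for u in reference_utterances]

# Fit one HMM to the reference pronunciations...
model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
model.fit(X, lengths)

# ...then score a learner's utterance against it. A higher (less
# negative) log-likelihood means it is closer to the reference model.
learner = rng.normal(size=(75, 13))
print("log-likelihood:", model.score(learner))
```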
In addition to chomp's great answer, one important keyword you probably need to look up is Dynamic Time Warping (DTW). This is the wikipedia article: http://en.wikipedia.org/wiki/Dynamic_time_warping
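DTW is short enough to sketch directly. A minimal numpy version (my own illustration) that compares two feature sequences, e.g. per-frame MFCC vectors of the reference and learner recordings:

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between sequences a (n x d) and b (m x d);
    lower means the two follow a more similar trajectory."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```

Because it aligns the sequences in time before comparing them, DTW forgives a learner speaking faster or slower than the reference, which is exactly the tolerance a pronunciation grader needs.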