Creating a drawn audio-reactive visual

I'm looking to create this project in Processing; however, I'm finding the terminology a bit hard. I'm not sure what to call the effect where the line persists for the whole song, gradually 'drawing' the music data.
I would appreciate any guidance on which tutorials to look at, or an answer from someone here.
My aim is to create something as close to this as possible:
https://www.youtube.com/watch?v=Bb5PTitqtlc&t=58s

Stack Overflow isn't really designed for general "how do I do this" type questions. It's for specific "I tried X, expected Y, but got Z instead" type questions. But I'll try to help in a general sense:
You need to break your problem down into smaller pieces and then take those pieces on one at a time. Write down exactly what you want to happen, in English, and that will be an algorithm that you can think about implementing with code.
Get something simple working. Can you write a simple sketch that plays a song? Then work your way forward in small steps. Can you write a simple sketch that prints out some numeric values based on the song that's playing? Separately from that, can you create a very simple visualization using hard-coded numbers? Get all of that working separately before you think about combining them into a sketch that shows a visualization based on a song that's playing.
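To make those first steps concrete, here is a minimal sketch of the idea, assuming the Minim library that ships with Processing; "song.mp3" is a placeholder file you'd put in the sketch's data folder:

```java
import ddf.minim.*;

Minim minim;
AudioPlayer player;

void setup() {
  size(400, 400);
  minim = new Minim(this);
  // "song.mp3" is a placeholder: put your own file in the sketch's data folder
  player = minim.loadFile("song.mp3");
  player.play();
}

void draw() {
  // mix.level() is the current RMS level of the playing audio, roughly 0.0 - 1.0
  println(player.mix.level());
}
```

Once values are printing, the "permanent line" effect from the question usually comes from simply never calling background() inside draw(), so everything drawn in earlier frames stays on screen.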
Then if you get stuck, you can post a more specific question along with an MCVE. Good luck.

Related

How to get the waveform with OpenSL ES?

Or even better: how to get the amplitude or volume of the sound at regular intervals.
In fact I need both: the full waveform, and a measurement at each interval; the first to show a view of the song's wave, and the second for visual effects.
This is for Android (NDK) systems.
Come on, people, I'm not asking for a full code answer; I just want some advice or anything that can help me. You can simply say that the question is hard or makes no sense, but say something.
In the end I researched a little more and didn't find a direct answer to the question, but I did find a better solution to the problem: a free library named "Superpowered". It is simple, fast, cross-platform, and has all the functions needed for analyzing sound.
I hope this helps people new to the world of sound programming.
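For the "amplitude every certain time" half of the question, the standard measure is the RMS of fixed-size windows of PCM samples. The math is identical in C on the NDK side; here is an illustrative sketch in Java (names and window size are arbitrary):

```java
// Compute the RMS amplitude of each fixed-size window of PCM samples.
// 'samples' are normalized floats in [-1, 1]; windowSize controls how often
// you get a value (e.g. 1024 samples at 44.1 kHz is ~23 ms per value).
static float[] windowedRms(float[] samples, int windowSize) {
    int windows = samples.length / windowSize;
    float[] rms = new float[windows];
    for (int w = 0; w < windows; w++) {
        double sumSquares = 0;
        for (int i = 0; i < windowSize; i++) {
            float s = samples[w * windowSize + i];
            sumSquares += s * s;
        }
        rms[w] = (float) Math.sqrt(sumSquares / windowSize);
    }
    return rms;
}
```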

Developing a simple reactive program with Haskell

I am building a three-dimensional tic-tac-toe game. I have already built the game itself, which runs in the command prompt, and now I am building a system that runs the game on a cube of bi-color LED lamps with an input button.
As input through the button and output through the cube will happen simultaneously, this is going to be my first concurrent/reactive program in a functional language. I am completely clueless. I have read and understood the basics of these programming concepts, but can't quite apply them to my problem.
What kind of strategy and library suit this problem? (I don't know if this is a legitimate question.) I'm so lost that I need advice even about where to start.
(With the strategy/library above,) how can an event listener (or anything that reacts to certain events in the system) "update" data? How is such mutable data used (I expect it is better called a mutable reference, since the data themselves should be immutable), and how does that preserve referential transparency? (Please ignore this question if it makes no sense.)
(With the strategy/library above,) how does it deal with input that the user can unintentionally repeat? For example, it's hard to "tap" a button at exactly one point in time. I want my program to recognize such a burst of inputs as a single input; see the debounce sketch after this question.
Thank you so much for reading my imperfect English, and I would greatly appreciate any guidance or information. Thank you again!
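Regarding the repeated-input question: the usual trick, with or without an FRP library, is debouncing, i.e. ignoring edges that arrive within a short refractory window after the last accepted one. A sketch of the idea in Java (names are illustrative; the logic ports to any language):

```java
// Treat a burst of button edges as one logical press: accept an event
// only if enough time has passed since the last accepted one.
class Debouncer {
    private final long windowMillis;   // e.g. 50-200 ms for a push button
    private long lastAccepted = Long.MIN_VALUE;

    Debouncer(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    // Returns true only for the first edge of a burst.
    synchronized boolean accept(long nowMillis) {
        if (nowMillis - lastAccepted < windowMillis) {
            return false; // still inside the bounce window: swallow the edge
        }
        lastAccepted = nowMillis;
        return true;
    }
}
```

In FRP libraries the same idea typically appears as a time-based stream combinator, often named something like debounce or throttle, which keeps the event stream pure from the caller's point of view.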

Change pitch of audio buffer

I am trying to change the pitch of a buffer sample using a ScriptProcessorNode, but what kind of formulas do I need to do this? I am not looking for the exact JS code, just some general mathematical how-to. That said, I would love some code, as the first answer has a lot of formulas that I have no idea how to implement in JS.
I know that this is usually done in the time domain, but according to this it can also be done with the FFT, and I have no idea how one would do that.
For one method of doing time-pitch modification using an FFT, look up phase vocoder. Here's one explanation of how a phase vocoder works (but a search will turn up many others): http://www.guitarpitchshifter.com/algorithm.html
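As a complement to the phase-vocoder pointer: the core formula is that a shift of n semitones scales frequency by 2^(n/12), and the naive time-domain version is plain resampling by that ratio (which also changes the duration, the very artifact a phase vocoder avoids). An illustrative sketch in Java; the same loop ports directly to the sample buffers a ScriptProcessorNode hands you:

```java
// Naive pitch shift by resampling: read the input at 'ratio' times the
// normal rate, with linear interpolation between neighboring samples.
// ratio = 2^(semitones / 12); note the output duration changes too.
static float[] resamplePitchShift(float[] input, double semitones) {
    double ratio = Math.pow(2, semitones / 12.0);
    int outLength = (int) (input.length / ratio);
    float[] output = new float[outLength];
    for (int i = 0; i < outLength; i++) {
        double pos = i * ratio;            // fractional read position
        int i0 = (int) pos;
        int i1 = Math.min(i0 + 1, input.length - 1);
        double frac = pos - i0;
        output[i] = (float) ((1 - frac) * input[i0] + frac * input[i1]);
    }
    return output;
}
```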
I believe https://github.com/mikolalysenko/pitch-shift would be appropriate (the quality is not on par with other code, but this library is rather easy to understand/use). You can hear a demo at http://mikolalysenko.github.io/pitch-shift/.

Real-time refreshing in Processing

I am new to Processing; I found it by searching for "draw with coding", and I tried it. It seems that every time I modify the code, I have to stop and run it again to see the final result.
Is there any way to get an updated drawing without re-running? That would be much more convenient for creating simple figures.
If not, is there any alternative to Processing that can draw a figure with code?
I've used TikZ in LaTeX, but that is just for LaTeX. I want something that lets me draw a figure by coding; I've suffered enough using software like CorelDRAW, which lacks the fundamental elegance of coding.
Thanks a lot!
Please have a look at the FluidForms libraries:
easy to set up
documentation and video tutorials
as long as you don't run into exceptions, you can live-code comfortably
if you prefix public variables with param, you also get sliders for free :)
Do check out the video tutorials, especially this one:
Also, if using Python isn't a problem I recommend having a look at:
NodeBox
Field
Python is a brilliant scripting language, which makes prototyping/'live coding' easy (although it can be compiled, and it also plays nicely with C/C++); it is easy to pick up and a joy to use.
In Processing, you must re-run your program to see the changes (graphically), unless you write code to receive input from the user to dynamically adjust what you are drawing. For creating user interfaces there's for example the controlP5 library (http://www.sojamo.de/libraries/controlP5/).
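For instance, a minimal sketch of that pattern with controlP5 (assuming the library is installed; addSlider binds the slider to the sketch variable of the same name):

```java
import controlP5.*;

ControlP5 cp5;
int radius = 50;  // controlP5 binds the slider to this variable by name

void setup() {
  size(400, 400);
  cp5 = new ControlP5(this);
  cp5.addSlider("radius").setRange(10, 150).setPosition(20, 20);
}

void draw() {
  background(0);
  // draw() runs every frame, so dragging the slider resizes the circle live
  ellipse(width / 2, height / 2, radius * 2, radius * 2);
}
```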
Processing itself doesn't support "live coding" (at least as far as I know); you must re-run the code to see the new result.
If live coding is what you're looking for, check out Fluxus (http://www.pawfal.org/fluxus/) or Impromptu (http://en.wikipedia.org/wiki/Impromptu_(programming_environment)).

Implementing "best match" for sound effects

I am looking for some advice on categorizing a library of sound effects. I have a large set of random sound effects (think whistles, pops, growls, creaks, gunshots, etc.). I would like to be able to take a growl, for example, and find the next growl that sounds closest to the original.
In short: given a sound, which sound from my set sounds the closest to it?
I have done a fair amount of googling and found two avenues that I am still researching. One is using the Echo Nest, although their "best match" support looks unpromising for public users. The other is diving into FFT and building my own matching algorithm. That is a fine option and would be a great learning experience, but I wanted to get some opinions from others who might know a little more about sound processing, especially for short clips in the 0.5-3 second range rather than full-length music.
Thanks!
I have worked in movie post-production for years and, as far as I know, there is no way to do that automatically. Every file has meta information in its file header describing what the sound is like; you are then actually searching not the file names but that metadata string.
I don't think it is trivial to sort effects programmatically, as two effects that sound similar might look totally different if you compare their waveforms.
You would need to extract significant information about a sound that you can then compare.
I am not a DSP expert either; maybe there are methods to do this.
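To make "extract significant information and compare" concrete, here is a deliberately crude baseline sketch in Java; the two features are illustrative stand-ins (real systems use spectral features such as MFCCs):

```java
import java.util.List;

// Crude content-based matching baseline: describe each clip with a tiny
// feature vector and pick the library clip with the smallest distance.
class SoundMatcher {

    // Illustrative features: overall RMS energy and zero-crossing rate
    // (a rough brightness proxy). Samples are normalized floats in [-1, 1].
    static double[] features(float[] samples) {
        double sumSquares = 0;
        int crossings = 0;
        for (int i = 0; i < samples.length; i++) {
            sumSquares += samples[i] * samples[i];
            if (i > 0 && (samples[i - 1] < 0) != (samples[i] < 0)) {
                crossings++;
            }
        }
        double rms = Math.sqrt(sumSquares / samples.length);
        double zcr = crossings / (double) samples.length;
        return new double[] { rms, zcr };
    }

    // Returns the index of the library clip nearest to the query clip.
    static int nearest(float[] query, List<float[]> library) {
        double[] q = features(query);
        int best = -1;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int i = 0; i < library.size(); i++) {
            double[] f = features(library.get(i));
            double d = Math.hypot(q[0] - f[0], q[1] - f[1]);
            if (d < bestDist) {
                bestDist = d;
                best = i;
            }
        }
        return best;
    }
}
```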
If you're interested in trying to build your own system to do this, I can suggest a few keywords that might help to refine your Google searches. In the academic research community, the task you're describing is often called "content-based audio searching". I know there's been a lot of work done on it, and though most pertains to music, sound effects have definitely been the focus of a number of studies.
You might want to start with the work of Pedro Cano.
Also, I recently heard about a company that's doing similar work. You might want to check out products from Imagine Research.
Those are just a couple of ideas off the top of my head. I'm not 100% sure they'll be helpful. If they are, please let me know!
