Raw Sound playing - audio

I've been working for some time with image formats, and I know that an image is an array of pixels (24, maybe 32 bits each). The question is: how is a sound file represented? To be honest, I'm not even sure what I should be googling for. I would also be interested in how you use the data, i.e. actually playing the sounds in the file. For an image file you have all sorts of abstract devices to draw an image on (Graphics in Java/C#, HDC in C++ (Win32), etc.). I hope I have been clear enough.

Here's a dandy overview of how .wav is stored. I found it by typing "wave file format" into Google.
http://www.sonicspot.com/guide/wavefiles.html

WAV files can also store compressed audio, but most of the time they are not compressed; the WAV format is designed as a container with a number of options for how the audio is stored.
Here's a snippet of C# code that I found in another question here on Stack Overflow and that I like: it builds a WAV-formatted audio MemoryStream and then plays that stream directly (without saving it to a file, which many other answers rely on). Saving it to disk can easily be added with one line of code if you want, but most of the time that would be undesirable.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Windows.Forms;

public static void PlayBeep(UInt16 frequency, int msDuration, UInt16 volume = 16383)
{
    var mStrm = new MemoryStream();
    BinaryWriter writer = new BinaryWriter(mStrm);

    const double TAU = 2 * Math.PI;
    int formatChunkSize = 16;
    int headerSize = 8;
    short formatType = 1;
    short tracks = 1;
    int samplesPerSecond = 44100;
    short bitsPerSample = 16;
    short frameSize = (short)(tracks * ((bitsPerSample + 7) / 8));
    int bytesPerSecond = samplesPerSecond * frameSize;
    int waveSize = 4;
    int samples = (int)((decimal)samplesPerSecond * msDuration / 1000);
    int dataChunkSize = samples * frameSize;
    int fileSize = waveSize + headerSize + formatChunkSize + headerSize + dataChunkSize;

    // var encoding = new System.Text.UTF8Encoding();
    writer.Write(0x46464952); // = encoding.GetBytes("RIFF")
    writer.Write(fileSize);
    writer.Write(0x45564157); // = encoding.GetBytes("WAVE")
    writer.Write(0x20746D66); // = encoding.GetBytes("fmt ")
    writer.Write(formatChunkSize);
    writer.Write(formatType);
    writer.Write(tracks);
    writer.Write(samplesPerSecond);
    writer.Write(bytesPerSecond);
    writer.Write(frameSize);
    writer.Write(bitsPerSample);
    writer.Write(0x61746164); // = encoding.GetBytes("data")
    writer.Write(dataChunkSize);

    {
        double theta = frequency * TAU / (double)samplesPerSecond;
        // 'volume' is UInt16 with range 0 thru UInt16.MaxValue ( = 65 535)
        // we need 'amp' to have the range of 0 thru Int16.MaxValue ( = 32 767)
        // so we simply set amp = volume / 2
        double amp = volume >> 1; // Shifting right by 1 divides by 2
        for (int step = 0; step < samples; step++)
        {
            short s = (short)(amp * Math.Sin(theta * (double)step));
            writer.Write(s);
        }
    }

    mStrm.Seek(0, SeekOrigin.Begin);
    new System.Media.SoundPlayer(mStrm).Play();
    writer.Close();
    mStrm.Close();
} // public static void PlayBeep(UInt16 frequency, int msDuration, UInt16 volume = 16383)
But this code gives a bit of insight into the WAV format, and it even shows how to build your own WAV data from C# source code.
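To tie this back to the original question (how a sound file is represented and how you actually play it), here is a hedged sketch in Java using the standard javax.sound.sampled API. It opens a WAV file, prints its format (sample rate, bits per sample, channels), and streams the raw PCM bytes to the sound card. The class name and file path are placeholders, and error handling is kept minimal.

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;
import java.io.File;

public class PlayWavSketch {
    public static void main(String[] args) throws Exception {
        // Path is a placeholder -- point it at any uncompressed PCM .wav file.
        File file = new File("example.wav");
        AudioInputStream in = AudioSystem.getAudioInputStream(file);
        AudioFormat format = in.getFormat();

        // The "representation": a stream of PCM samples described by this format.
        System.out.println("Sample rate:     " + format.getSampleRate());
        System.out.println("Bits per sample: " + format.getSampleSizeInBits());
        System.out.println("Channels:        " + format.getChannels());

        // A SourceDataLine is the audio counterpart of a Graphics/HDC drawing surface:
        // you write raw sample bytes to it and the sound card plays them.
        SourceDataLine line = AudioSystem.getSourceDataLine(format);
        line.open(format);
        line.start();

        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) != -1) {
            line.write(buffer, 0, read);
        }
        line.drain();
        line.close();
        in.close();
    }
}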

Related

An algorithm for producing a fake audio visualizer

Does anybody know an algorithm for generating a random series of numbers (like 100 Java bytes, i.e. >= -127 and <= 127) which, when drawn as a bar chart, would look similar to a regular audio spectrum, like those on SoundCloud?
I'm trying to write one; it has multiple random and sine calculations, but the result is very ugly. It's something between a sine wave and an old toothbrush. I would be very thankful if you could direct me to one that is aesthetically convincing.
An algorithm with an explanation (and/or picture) is fine. Pseudocode would be very nice of you. Actual Java code is a bonus. :D
Edit:
This is the code I'm using right now. It's convoluted, but I'm basically adding a random deviation to a sine wave with random amplitude (which I'm not sure was a good idea).
private static final int FREQ = 7;
private static final double DEG_TO_RAD = Math.PI / 180;
private static final int MAX_AMPLITUDE = 127;
private static final float DEVIATION = 0.1f; // 10 percent is maximum deviation

private void makeSinusoidRandomBytes() {
    byte[] bytes = new byte[AUDIO_VISUALIZER_DENSITY];
    for (int i = 0; i < AUDIO_VISUALIZER_DENSITY; i++) {
        int amplitude = random.nextInt(MAX_AMPLITUDE) - MAX_AMPLITUDE / 2;
        byte dev = (byte) (random.nextInt((int) Math.max(Math.abs(2 * DEVIATION * amplitude), 1))
                - Math.abs(DEVIATION * amplitude));
        bytes[i] = (byte) (Math.sin(i * FREQ * DEG_TO_RAD) * amplitude - dev);
    }
    this.bytes = bytes;
}
A real sound wave is actually a combination of sine waves of different frequencies and amplitudes added together, not random deviations from a sine wave. The difficult part will be choosing a combination of wave amplitudes and frequencies that produces an output you subjectively like! However, most sound waves have a base frequency and then a number of overtones which "fit into" that wavelength; for example, there might be an overtone at 3/2 the base frequency with 2/3 the base amplitude. By combining these overtones and scaling the resulting waveform to the -127 to +127 range, you'll get an actual sound wave.
The following code is C#, but close enough to Java to give you an idea. It's from a game, where I needed to combine many sine waves together to create various types of oscillating effects:
/// <summary>
/// Return a value between 0 and 1 based on a sine-wave oscillating with a given combination of periods at a given point in time
/// </summary>
/// <param name="time">time to get wave value at</param>
/// <param name="periods">lengths of waves</param>
/// <returns>height of wave</returns>
public static float MultiPulse(float time, params float[] periods)
{
    float c = 0;
    foreach (float p in periods)
    {
        float cp = (MathHelper.Pi / p) * time;
        float s = ((float)Math.Sin(cp) + 1) / 2;
        c += s / periods.Length;
    }
    return c;
}
You probably want to modify that to allow you to specify different amplitudes as well as periods for the waves you are combining.
By combining many widely varying amplitudes and periods (frequencies) you should by trial and error be able to get something convincing.
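For illustration, here is a hedged sketch in Java that generalizes the same idea to arbitrary arrays of overtone ratios and amplitude factors. It assumes the fields from the question's snippet (AUDIO_VISUALIZER_DENSITY, DEG_TO_RAD, MAX_AMPLITUDE, random) exist in the surrounding class, and the specific ratios and factors are example choices, not part of the original code.

// Sum a base sine wave and its overtones, then clamp to the signed byte range.
private byte[] makeOvertoneBytes() {
    double[] freqRatios = {1.0, 1.5, 1.75};      // base, 3/2x, 7/4x -- illustrative
    double[] ampFactors = {1.0, 2.0 / 3, 4.0 / 7};

    int baseFreq = random.nextInt(7) + 7;
    int baseAmp = random.nextInt(MAX_AMPLITUDE) - MAX_AMPLITUDE / 2;

    byte[] bytes = new byte[AUDIO_VISUALIZER_DENSITY];
    for (int i = 0; i < AUDIO_VISUALIZER_DENSITY; i++) {
        double sum = 0;
        for (int k = 0; k < freqRatios.length; k++) {
            sum += Math.sin(i * baseFreq * freqRatios[k] * DEG_TO_RAD) * baseAmp * ampFactors[k];
        }
        // Clamp so the sum of partials never overflows the byte range.
        bytes[i] = (byte) Math.max(-MAX_AMPLITUDE, Math.min(MAX_AMPLITUDE, sum));
    }
    return bytes;
}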
Based on the idea see sharper gave me, this is the code I'm using right now:
int mainAmp = random.nextInt(MAX_AMPLITUDE) - MAX_AMPLITUDE / 2;
int overtoneAmp = random.nextInt(MAX_AMPLITUDE * 2 / 3) - MAX_AMPLITUDE / 3;
int overtone2Amp = random.nextInt(MAX_AMPLITUDE * 4 / 7) - MAX_AMPLITUDE * 2 / 7; // centered around 0, like the other amplitudes
int mainFreq = random.nextInt(7) + 7;
int overtoneFreq = mainFreq * 3 / 2;
int overtone2Freq = mainFreq * 7 / 4;
byte[] bytes = new byte[AUDIO_VISUALIZER_DENSITY];
for (int i = 0; i < AUDIO_VISUALIZER_DENSITY; i++) {
    bytes[i] = (byte) (Math.sin(i * mainFreq * DEG_TO_RAD) * mainAmp
            + Math.sin(i * overtoneFreq * DEG_TO_RAD) * overtoneAmp
            + Math.sin(i * overtone2Freq * DEG_TO_RAD) * overtone2Amp);
}
The main frequency is between 8 and 15 for my app. You can play with those. The other two overtones I'm using are (2 - 1/2)x and (2 - 1/4)x the main frequency. You can add more, like (2 - 1/8)x, etc., or use another series of frequencies. I also randomize the amplitude to get a unique wave each time.
These are some waves I'm drawing using this code:

Play 2 different frequencies alternately in Java

I am a newbie to Java sound. I want to play 2 different frequencies alternately, for 1 second each, in a loop for some specified time.
For example, if I have 2 frequencies, 440 Hz and 16000 Hz, and the time period is 10 seconds, then for every 'even' second 440 Hz gets played and for every 'odd' second 16000 Hz, i.e. 5 seconds each, alternately.
I have learned a few things through some examples, and with their help I have also written a program that plays a single user-specified frequency for a user-specified time.
I would really appreciate it if someone could help me out with this.
Thanks.
I am also attaching that single-frequency code for reference.
import java.nio.ByteBuffer;
import java.util.Scanner;
import javax.sound.sampled.*;

public class Audio {

    public static void main(String[] args) throws InterruptedException, LineUnavailableException {
        final int SAMPLING_RATE = 44100; // Audio sampling rate
        final int SAMPLE_SIZE = 2;       // Audio sample size in bytes

        Scanner in = new Scanner(System.in);
        int time = in.nextInt();         // Time specified by user in seconds

        SourceDataLine line;
        double fFreq = in.nextInt();     // Frequency of sine wave in hz

        // Position through the sine wave as a percentage (i.e. 0 to 1 is 0 to 2*PI)
        double fCyclePosition = 0;

        // Open up audio output, using 44100hz sampling rate, 16 bit samples, mono, and big
        // endian byte ordering
        AudioFormat format = new AudioFormat(SAMPLING_RATE, 16, 1, true, true);
        DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);

        if (!AudioSystem.isLineSupported(info)) {
            System.out.println("Line matching " + info + " is not supported.");
            throw new LineUnavailableException();
        }

        line = (SourceDataLine) AudioSystem.getLine(info);
        line.open(format);
        line.start();

        // Make our buffer size match audio system's buffer
        ByteBuffer cBuf = ByteBuffer.allocate(line.getBufferSize());

        int ctSamplesTotal = SAMPLING_RATE * time; // Output for roughly user specified time in seconds

        // On each pass main loop fills the available free space in the audio buffer
        // Main loop creates audio samples for sine wave, runs until we tell the thread to exit
        // Each sample is spaced 1/SAMPLING_RATE apart in time
        while (ctSamplesTotal > 0) {
            double fCycleInc = fFreq / SAMPLING_RATE; // Fraction of cycle between samples

            cBuf.clear();                             // Discard samples from previous pass

            // Figure out how many samples we can add
            int ctSamplesThisPass = line.available() / SAMPLE_SIZE;
            for (int i = 0; i < ctSamplesThisPass; i++) {
                cBuf.putShort((short) (Short.MAX_VALUE * Math.sin(2 * Math.PI * fCyclePosition)));
                fCyclePosition += fCycleInc;
                if (fCyclePosition > 1) {
                    fCyclePosition -= 1;
                }
            }

            // Write sine samples to the line buffer. If the audio buffer is full, this will
            // block until there is room (we never write more samples than buffer will hold)
            line.write(cBuf.array(), 0, cBuf.position());
            ctSamplesTotal -= ctSamplesThisPass; // Update total number of samples written

            // Wait until the buffer is at least half empty before we add more
            while (line.getBufferSize() / 2 < line.available()) {
                Thread.sleep(1);
            }
        }

        // Done playing the whole waveform, now wait until the queued samples finish
        // playing, then clean up and exit
        line.drain();
        line.close();
    }
}
Your best bet is probably creating Clips as shown in the sample code below.
That said, the MHz range is typically not audible, so it looks like you have a typo in your question. If it's not a typo, you will run into issues with Mr. Nyquist.
Another hint: Nobody uses Hungarian Notation in Java.
import javax.sound.sampled.*;
import java.nio.ByteBuffer;
import java.nio.ShortBuffer;

public class AlternatingTones {

    public static void main(final String[] args) throws LineUnavailableException, InterruptedException {
        final Clip clip0 = createOneSecondClip(440f);
        final Clip clip1 = createOneSecondClip(16000f);
        clip0.addLineListener(event -> {
            if (event.getType() == LineEvent.Type.STOP) {
                clip1.setFramePosition(0);
                clip1.start();
            }
        });
        clip1.addLineListener(event -> {
            if (event.getType() == LineEvent.Type.STOP) {
                clip0.setFramePosition(0);
                clip0.start();
            }
        });
        clip0.start();

        // prevent JVM from exiting
        Thread.sleep(10000000);
    }

    private static Clip createOneSecondClip(final float frequency) throws LineUnavailableException {
        final Clip clip = AudioSystem.getClip();
        final AudioFormat format = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 44100f, 16, 1, 2, 44100, true);
        final ByteBuffer buffer = ByteBuffer.allocate(44100 * format.getFrameSize());
        final ShortBuffer shortBuffer = buffer.asShortBuffer();
        final float cycleInc = frequency / format.getFrameRate();
        float cyclePosition = 0f;
        while (shortBuffer.hasRemaining()) {
            shortBuffer.put((short) (Short.MAX_VALUE * Math.sin(2 * Math.PI * cyclePosition)));
            cyclePosition += cycleInc;
            if (cyclePosition > 1) {
                cyclePosition -= 1;
            }
        }
        clip.open(format, buffer.array(), 0, buffer.capacity());
        return clip;
    }
}
The method I would use would be to count frames while outputting to a SourceDataLine. When you have written one second's worth of frames, switch frequencies. This will give much better timing accuracy than attempting to fiddle with Clips.
I'm unclear if the code you are showing is something you wrote or copied-and-pasted. If you have a question about how it doesn't work, I'm happy to help if you show what you tried and what errors or exceptions were generated.
When outputting to a SourceDataLine, there has to be a step where you convert the short value (-32768..+32767) to two bytes according to the 16-bit encoding specified in your audio format. I didn't see where this was being done in your code. [EDIT: I can see where the putShort() method does this, though it only works for big-endian, not the more common little-endian.]
Have you looked over the Java Tutorial Sound Trail?
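Here is a hedged sketch of the frame-counting approach described above: write samples to a SourceDataLine and switch frequency every time another second's worth of frames has been generated. The class name, the 10-second duration, and the two frequencies are example values, and the phase is kept continuous across switches to avoid clicks.

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;

public class FrameCountingTones {
    public static void main(String[] args) throws Exception {
        final int rate = 44100;
        final double[] freqs = {440.0, 16000.0}; // example frequencies
        final int seconds = 10;                  // example total duration

        AudioFormat format = new AudioFormat(rate, 16, 1, true, true); // 16-bit mono, big-endian
        SourceDataLine line = AudioSystem.getSourceDataLine(format);
        line.open(format);
        line.start();

        byte[] buffer = new byte[rate * 2];      // one second of 16-bit mono frames
        double phase = 0;
        for (int second = 0; second < seconds; second++) {
            double freq = freqs[second % freqs.length];  // alternate every second
            double phaseInc = freq / rate;
            for (int i = 0; i < rate; i++) {
                short s = (short) (Short.MAX_VALUE * Math.sin(2 * Math.PI * phase));
                buffer[2 * i] = (byte) (s >> 8);         // big-endian: high byte first
                buffer[2 * i + 1] = (byte) s;
                phase += phaseInc;
                if (phase > 1) {
                    phase -= 1;
                }
            }
            line.write(buffer, 0, buffer.length);        // blocks while the line buffer is full
        }
        line.drain();
        line.close();
    }
}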

Multi audio tones to sound card using portaudio

I am trying to generate a tone to the sound card (frequency: 1950 Hz, duration: 40 ms, level: -30 dB, right channel (stereo), on stream 1). Eventually, I would like to play two of these tones (one to channel 1 and one to channel 2).
Any help or direction is greatly appreciated.
Thanks,
DW
Hi Bjorn, I tried this but I am not getting the frequency I expect (and the sound doesn't seem clean). Any ideas what's wrong?
I greatly appreciate any help.
#define SAMPLE_RATE (44100)
#define TABLE_SIZE  (200)
float FREQUENCY = 422;
...
for (int i = 0; i < TABLE_SIZE; i++)
{
    data.sine[i] = (float) sin( (double)i * ((2.0 * M_PI)/(double)SAMPLE_RATE) * FREQUENCY );
}
data.left_phase = 0;
data.right_phase = 0;
...
... in callback function ...
for (unsigned long i = 0; i < framesPerBuffer; i++)
{
    // fill output buffer with sin wave
    *out++ = data->amp * data->sine[data->left_phase];  // left
    *out++ = data->amp * data->sine[data->right_phase]; // right
    data->left_phase += 1;
    if (data->left_phase >= TABLE_SIZE)
        data->left_phase -= TABLE_SIZE;
    data->right_phase += 1;
    if (data->right_phase >= TABLE_SIZE)
        data->right_phase -= TABLE_SIZE;
}
PortAudio has sample code for generating tones, you just need to figure out the frequency. See for example this answer:
[portaudio]Transmit and Detect frequency - Windows
Update:
Rather than trying to store a table of sine data, simply calculate the sine value in the callback using this formula:
amplitude[n] = sin( n * desiredFreq * 2 * pi / samplerate )
so (untested) your code will look something like this:
typedef struct
{
    long n;
} MyData;

float FREQUENCY = 422;

static int MyCallback(
    const void *inputBuffer,
    void *outputBuffer,
    unsigned long framesPerBuffer,
    const PaStreamCallbackTimeInfo* timeInfo,
    PaStreamCallbackFlags statusFlags,
    void *userData
)
{
    MyData *data = (MyData*)userData;
    float *out = (float*)outputBuffer;

    (void) timeInfo; /* Prevent unused variable warnings. */
    (void) statusFlags;
    (void) inputBuffer;

    for (unsigned long i = 0; i < framesPerBuffer; i++)
    {
        // fill output buffer with sine wave
        float v = sin( data->n * FREQUENCY * 2 * PI / (float) SAMPLERATE );
        *out++ = v; // left
        *out++ = v; // right
        data->n++;  // advance the sample counter, otherwise the output is a constant value
    }
    return paContinue;
}
This code is not without problems: e.g. eventually n will "wrap around", and I'm not sure whether sin remains accurate and efficient as its input gets larger. Nevertheless, it's a good starting point, and if you just need to generate a few seconds of a tone on modern hardware, this is really all you need. If you need something fancier, get this working first; then you can worry about making it more efficient and robust with a LUT.
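One way to sidestep the growing-n issue, independent of PortAudio, is to keep a phase accumulator in the range [0, 1) and wrap it every cycle, so the argument passed to the sine function never grows. Here is a hedged sketch of that idea in Java; the class name, sample rate, frequency, and buffer size are arbitrary example values.

// Generate one buffer of interleaved stereo samples using a wrapped phase accumulator,
// so the value fed to Math.sin() stays small no matter how long we run.
public class PhaseAccumulatorSketch {
    private double phase = 0.0;                 // always in [0, 1)

    public void fillBuffer(float[] interleavedStereo, double frequency, double sampleRate) {
        double phaseInc = frequency / sampleRate;
        for (int i = 0; i + 1 < interleavedStereo.length; i += 2) {
            float v = (float) Math.sin(2 * Math.PI * phase);
            interleavedStereo[i] = v;           // left
            interleavedStereo[i + 1] = v;       // right
            phase += phaseInc;
            if (phase >= 1.0) {
                phase -= 1.0;                   // wrap once per cycle
            }
        }
    }

    public static void main(String[] args) {
        PhaseAccumulatorSketch gen = new PhaseAccumulatorSketch();
        float[] buffer = new float[2 * 256];    // 256 stereo frames, example size
        gen.fillBuffer(buffer, 1950.0, 44100.0);
        System.out.println("first left sample: " + buffer[0]);
    }
}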

How to play audio sample buffers from AVCaptureAudioDataOutput

The main goal of the app I'm trying to make is peer-to-peer video streaming (sort of like FaceTime, but over Bluetooth/WiFi).
Using AVFoundation, I was able to capture video/audio sample buffers. Then I'm sending the video/audio sample buffer data. Now the problem is processing the sample buffer data on the receiving side.
For the video sample buffers, I was able to get a UIImage from the sample buffer. But for the audio sample buffers, I don't know how to process them so I can play the audio.
So the question is: how can I process/play the audio sample buffers?
Right now I'm just plotting the waveform, just like in Apple's Wavy sample code:
CMSampleBufferRef sampleBuffer;

CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);
NSUInteger channelIndex = 0;

CMBlockBufferRef audioBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t audioBlockBufferOffset = (channelIndex * numSamples * sizeof(SInt16));
size_t lengthAtOffset = 0;
size_t totalLength = 0;
SInt16 *samples = NULL;
CMBlockBufferGetDataPointer(audioBlockBuffer, audioBlockBufferOffset, &lengthAtOffset, &totalLength, (char **)(&samples));

int numSamplesToRead = 1;
for (int i = 0; i < numSamplesToRead; i++) {
    SInt16 subSet[numSamples / numSamplesToRead];
    for (int j = 0; j < numSamples / numSamplesToRead; j++)
        subSet[j] = samples[(i * (numSamples / numSamplesToRead)) + j];

    SInt16 audioSample = [Util maxValueInArray:subSet ofSize:(numSamples / numSamplesToRead)];
    double scaledSample = (double) ((audioSample / SINT16_MAX));

    // plot waveform using scaledSample
    [updateUI:scaledSample];
}
To show the video you can use the following (here it gets an ARGB picture and converts it to a Qt (Nokia Qt) QImage; you can replace that with another image type). Place it in your delegate class:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    SVideoSample sample;
    sample.pImage      = (char *)CVPixelBufferGetBaseAddress(imageBuffer);
    sample.bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    sample.width       = CVPixelBufferGetWidth(imageBuffer);
    sample.height      = CVPixelBufferGetHeight(imageBuffer);

    QImage img((unsigned char *)sample.pImage, sample.width, sample.height, sample.bytesPerRow, QImage::Format_ARGB32);
    self->m_receiver->eventReceived(img);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    [pool drain];
}

Analyze "whistle" sound for pitch/note

I am trying to build a system that will be able to process a recording of someone whistling and output notes.
Can anyone recommend an open-source platform which I can use as the base for note/pitch recognition and analysis of wave files?
Thanks in advance
As many others have already said, FFT is the way to go here. I've written a little example in Java using FFT code from http://www.cs.princeton.edu/introcs/97data/. In order to run it, you will need the Complex class from that page also (see the source for the exact URL).
The code reads in a file, goes window-wise over it and does an FFT on each window. For each FFT it looks for the maximum coefficient and outputs the corresponding frequency. This does work very well for clean signals like a sine wave, but for an actual whistle sound you probably have to add more. I've tested with a few files with whistling I created myself (using the integrated mic of my laptop computer), the code does get the idea of what's going on, but in order to get actual notes more needs to be done.
1) You might need a more intelligent windowing technique. What my code uses now is a simple rectangular window. Since the FFT assumes that the input signal can be periodically continued, additional frequencies are detected when the first and the last sample in the window don't match. This is known as spectral leakage ( http://en.wikipedia.org/wiki/Spectral_leakage ); usually one uses a window that down-weights samples at the beginning and the end of the window ( http://en.wikipedia.org/wiki/Window_function ). Although the leakage shouldn't cause the wrong frequency to be detected as the maximum, using a window will increase the detection quality.
2) To match the frequencies to actual notes, you could use an array containing the frequencies (like 440 Hz for a') and then look for the frequency that's closest to the one that has been identified. However, if the whistling is off standard tuning, this won't work any more. Given that the whistling is still correct but only tuned differently (like a guitar or other musical instrument can be tuned differently and still sound "good", as long as the tuning is done consistently for all strings), you could still find notes by looking at the ratios of the identified frequencies. You can read http://en.wikipedia.org/wiki/Pitch_%28music%29 as a starting point on that. This is also interesting: http://en.wikipedia.org/wiki/Piano_key_frequencies
3) Moreover it might be interesting to detect the points in time when each individual tone starts and stops. This could be added as a pre-processing step. You could do an FFT for each individual note then. However, if the whistler doesn't stop but just bends between notes, this would not be that easy.
Definitely have a look at the libraries the others suggested. I don't know any of them, but maybe they contain already functionality for doing what I've described above.
And now to the code. Please let me know what worked for you, I find this topic pretty interesting.
Edit: I updated the code to include overlapping and a simple mapper from frequencies to notes. It works only for "tuned" whistlers though, as mentioned above.
package de.ahans.playground;

import java.io.File;
import java.io.IOException;
import java.util.Arrays;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.UnsupportedAudioFileException;

public class FftMaxFrequency {

    // taken from http://www.cs.princeton.edu/introcs/97data/FFT.java.html
    // (first hit in Google for "java fft")
    // needs Complex class from http://www.cs.princeton.edu/introcs/97data/Complex.java
    public static Complex[] fft(Complex[] x) {
        int N = x.length;

        // base case
        if (N == 1) return new Complex[] { x[0] };

        // radix 2 Cooley-Tukey FFT
        if (N % 2 != 0) { throw new RuntimeException("N is not a power of 2"); }

        // fft of even terms
        Complex[] even = new Complex[N/2];
        for (int k = 0; k < N/2; k++) {
            even[k] = x[2*k];
        }
        Complex[] q = fft(even);

        // fft of odd terms
        Complex[] odd = even; // reuse the array
        for (int k = 0; k < N/2; k++) {
            odd[k] = x[2*k + 1];
        }
        Complex[] r = fft(odd);

        // combine
        Complex[] y = new Complex[N];
        for (int k = 0; k < N/2; k++) {
            double kth = -2 * k * Math.PI / N;
            Complex wk = new Complex(Math.cos(kth), Math.sin(kth));
            y[k] = q[k].plus(wk.times(r[k]));
            y[k + N/2] = q[k].minus(wk.times(r[k]));
        }
        return y;
    }

    static class AudioReader {
        private AudioFormat audioFormat;

        public AudioReader() {}

        public double[] readAudioData(File file) throws UnsupportedAudioFileException, IOException {
            AudioInputStream in = AudioSystem.getAudioInputStream(file);
            audioFormat = in.getFormat();
            int depth = audioFormat.getSampleSizeInBits();
            long length = in.getFrameLength();
            if (audioFormat.isBigEndian()) {
                throw new UnsupportedAudioFileException("big endian not supported");
            }
            if (audioFormat.getChannels() != 1) {
                throw new UnsupportedAudioFileException("only 1 channel supported");
            }

            byte[] tmp = new byte[(int) length];
            byte[] samples = null;
            int bytesPerSample = depth/8;
            int bytesRead;
            while (-1 != (bytesRead = in.read(tmp))) {
                if (samples == null) {
                    samples = Arrays.copyOf(tmp, bytesRead);
                } else {
                    int oldLen = samples.length;
                    samples = Arrays.copyOf(samples, oldLen + bytesRead);
                    for (int i = 0; i < bytesRead; i++) samples[oldLen+i] = tmp[i];
                }
            }

            double[] data = new double[samples.length/bytesPerSample];
            for (int i = 0; i < samples.length-bytesPerSample; i += bytesPerSample) {
                int sample = 0;
                for (int j = 0; j < bytesPerSample; j++) sample += samples[i+j] << j*8;
                data[i/bytesPerSample] = (double) sample / Math.pow(2, depth);
            }
            return data;
        }

        public AudioFormat getAudioFormat() {
            return audioFormat;
        }
    }

    public class FrequencyNoteMapper {
        private final String[] NOTE_NAMES = new String[] {
                "A", "Bb", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"
        };
        private final double[] FREQUENCIES;
        private final double a = 440;
        private final int TOTAL_OCTAVES = 6;
        private final int START_OCTAVE = -1; // relative to A

        public FrequencyNoteMapper() {
            FREQUENCIES = new double[TOTAL_OCTAVES*12];
            int j = 0;
            for (int octave = START_OCTAVE; octave < START_OCTAVE+TOTAL_OCTAVES; octave++) {
                for (int note = 0; note < 12; note++) {
                    int i = octave*12+note;
                    FREQUENCIES[j++] = a * Math.pow(2, (double)i / 12.0);
                }
            }
        }

        public String findMatch(double frequency) {
            if (frequency == 0)
                return "none";

            double minDistance = Double.MAX_VALUE;
            int bestIdx = -1;

            for (int i = 0; i < FREQUENCIES.length; i++) {
                if (Math.abs(FREQUENCIES[i] - frequency) < minDistance) {
                    minDistance = Math.abs(FREQUENCIES[i] - frequency);
                    bestIdx = i;
                }
            }

            int octave = bestIdx / 12;
            int note = bestIdx % 12;

            return NOTE_NAMES[note] + octave;
        }
    }

    public void run(File file) throws UnsupportedAudioFileException, IOException {
        FrequencyNoteMapper mapper = new FrequencyNoteMapper();

        // size of window for FFT
        int N = 4096;
        int overlap = 1024;
        AudioReader reader = new AudioReader();
        double[] data = reader.readAudioData(file);

        // sample rate is needed to calculate actual frequencies
        float rate = reader.getAudioFormat().getSampleRate();

        // go over the samples window-wise
        for (int offset = 0; offset < data.length-N; offset += (N-overlap)) {
            // for each window calculate the FFT
            Complex[] x = new Complex[N];
            for (int i = 0; i < N; i++) x[i] = new Complex(data[offset+i], 0);
            Complex[] result = fft(x);

            // find index of maximum coefficient
            double max = -1;
            int maxIdx = 0;
            for (int i = result.length/2; i >= 0; i--) {
                if (result[i].abs() > max) {
                    max = result[i].abs();
                    maxIdx = i;
                }
            }
            // calculate the frequency of that coefficient
            double peakFrequency = (double)maxIdx*rate/(double)N;
            // and get the time of the start and end position of the current window
            double windowBegin = offset/rate;
            double windowEnd = (offset+(N-overlap))/rate;
            System.out.printf("%f s to %f s:\t%f Hz -- %s\n", windowBegin, windowEnd, peakFrequency, mapper.findMatch(peakFrequency));
        }
    }

    public static void main(String[] args) throws UnsupportedAudioFileException, IOException {
        new FftMaxFrequency().run(new File("/home/axr/tmp/entchen.wav"));
    }
}
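As a follow-up to point 1 above (spectral leakage), here is a hedged sketch of applying a Hann window to each window of samples before the FFT. The helper name and the way it would plug into the loop are illustrative, not part of the original code.

// Multiply each sample in a window by a Hann weight before the FFT,
// which tapers the window edges and reduces spectral leakage.
static double[] applyHannWindow(double[] window) {
    int n = window.length;
    double[] out = new double[n];
    for (int i = 0; i < n; i++) {
        double w = 0.5 * (1 - Math.cos(2 * Math.PI * i / (n - 1)));
        out[i] = window[i] * w;
    }
    return out;
}

In the run() loop above, the samples copied out of data for each window would be passed through applyHannWindow() before being wrapped into Complex values and handed to fft().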
I think this open-source platform suits you:
http://code.google.com/p/musicg-sound-api/
Well, you could always use fftw to perform the Fast Fourier Transform. It's a very well respected framework. Once you've got an FFT of your signal you can analyze the resultant array for peaks. A simple histogram style analysis should give you the frequencies with the greatest volume. Then you just have to compare those frequencies to the frequencies that correspond with different pitches.
In addition to the other great options:
csound pitch detection: http://www.csounds.com/manual/html/pvspitch.html
fmod: http://www.fmod.org/ (has a free version)
aubio: http://aubio.org/doc/pitchdetection_8h.html
You might want to consider Python(x,y). It's a scientific programming framework for python in the spirit of Matlab, and it has easy functions for working in the FFT domain.
If you use Java, have a look at the TarsosDSP library. It has a pretty good ready-to-go pitch detector.
Here is an example for Android, but I think it doesn't require too many modifications to use it elsewhere.
I'm a fan of the FFT but for the monophonic and fairly pure sinusoidal tones of whistling, a zero-cross detector would do a far better job at determining the actual frequency at a much lower processing cost. Zero-cross detection is used in electronic frequency counters that measure the clock rate of whatever is being tested.
If you're going to analyze anything other than pure sine-wave tones, then FFT is definitely the way to go.
A very simple implementation of zero cross detection in Java on GitHub
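For illustration, here is a hedged sketch of the zero-crossing idea (not the linked GitHub implementation): count how often consecutive samples change sign and derive the frequency from the crossing rate. It assumes a clean, roughly sinusoidal mono signal in a double[], like the one produced by the AudioReader above.

// Estimate the frequency of a (mostly) pure tone by counting zero crossings.
// Two zero crossings correspond to one full cycle of a sine wave.
static double estimateFrequencyByZeroCrossings(double[] samples, double sampleRate) {
    int crossings = 0;
    for (int i = 1; i < samples.length; i++) {
        if ((samples[i - 1] < 0 && samples[i] >= 0) || (samples[i - 1] >= 0 && samples[i] < 0)) {
            crossings++;
        }
    }
    double durationSeconds = samples.length / sampleRate;
    return (crossings / 2.0) / durationSeconds;
}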
