AudioInputStream.skip() skips too many bytes

AudioInputStream in = AudioSystem.getAudioInputStream(file);
AudioInputStream din = AudioSystem.getAudioInputStream(decodedFormat, in); // decodedFormat is the PCM format the MP3 is decoded to
AudioFormat targetFormat = din.getFormat(); // the decoded (PCM) format
long startTime = 180; // time in seconds
long nBytesToStart = (long) (startTime * targetFormat.getFrameRate() * targetFormat.getFrameSize());
din.skip(nBytesToStart);
This code runs without errors, but it skips too many bytes, and I cannot figure out why.
The code below lands in the right place, but it takes far too long to get there.
AudioInputStream in = AudioSystem.getAudioInputStream(file);
AudioInputStream din = AudioSystem.getAudioInputStream(decodedFormat, in);
byte[] data = new byte[4096];
long nBufferFramesToStart = (long) (startTime * targetFormat.getFrameRate() * targetFormat.getFrameSize() / data.length);
for (int i = 0; i < nBufferFramesToStart; i++) { // read and discard buffers up to the start point
    din.read(data, 0, data.length);
}
What do I need to know about the AudioInputStream.skip() method? What might the problem be?
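A note on skip(): like java.io.InputStream.skip, it is only guaranteed to skip up to the requested number of bytes and returns how many it actually skipped, and the byte count should be computed from the decoded PCM format and kept frame-aligned. Below is a minimal sketch of a more robust seek, reusing the variables from the question (an illustration, not the poster's code):

// Compute a frame-aligned offset from the decoded PCM format, then loop,
// because skip() may skip fewer bytes than requested per call.
long framesToSkip = (long) (startTime * decodedFormat.getFrameRate()); // whole frames
long remaining = framesToSkip * decodedFormat.getFrameSize();          // frame-aligned byte count
while (remaining > 0) {
    long skipped = din.skip(remaining);
    if (skipped <= 0) break; // end of stream or no progress
    remaining -= skipped;
}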

Related

Sound - what is raw sound data?

I have code that decodes an MP3 and populates an array with all the 'values'.
My question is: what are those values? Are they frequencies? Are they amplitudes?
This is the code:
File file = new File(song.getFilepath());
if (file.exists()) {
    AudioInputStream in = AudioSystem.getAudioInputStream(file);
    AudioInputStream din = null;
    AudioFormat baseFormat = in.getFormat();
    AudioFormat decodedFormat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
            baseFormat.getSampleRate(),
            16,
            baseFormat.getChannels(),
            baseFormat.getChannels() * 2,
            baseFormat.getSampleRate(),
            false);
    din = AudioSystem.getAudioInputStream(decodedFormat, in);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] data = new byte[4096];
    SourceDataLine line = getLine(decodedFormat);
    int nBytesRead = 0, nBytesWritten = 0;
    while (nBytesRead != -1) {
        nBytesRead = din.read(data, 0, data.length);
        if (nBytesRead != -1) {
            nBytesWritten = line.write(data, 0, nBytesRead);
            out.write(data, 0, nBytesRead);
        }
    }
    byte[] audio = out.toByteArray();
    System.err.println(audio.length);
    for (byte b : audio) {
        System.err.println(b);
    }
}
I get about 40,000,000 numbers (the length of the byte array) for a 3-minute song, but I have no idea what they are.
They are amplitudes. Usually, each amplitude is 16 bits (2 bytes, range from -32768 to 32767), and there are two channels (left and right). In this case, one sound sample spans four bytes.
Example (16-bit little-endian samples, left channel first):
100, 0, 2, 1
means the left amplitude is 100 (out of 32767) and the right is 2 + 1*256 = 258.
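As an illustration (an addition, not part of the answer above), here is a minimal sketch that turns such a byte array into per-channel 16-bit samples, assuming the little-endian, 2-channel PCM produced by the decodedFormat in the question:

// Interleaved 16-bit little-endian stereo PCM: 4 bytes per frame (left sample, then right).
short[] left  = new short[audio.length / 4];
short[] right = new short[audio.length / 4];
for (int i = 0, frame = 0; i + 3 < audio.length; i += 4, frame++) {
    left[frame]  = (short) ((audio[i]     & 0xFF) | (audio[i + 1] << 8));
    right[frame] = (short) ((audio[i + 2] & 0xFF) | (audio[i + 3] << 8));
}
// left[frame] and right[frame] are now amplitudes in the range -32768..32767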

How to play audio sample buffers from AVCaptureAudioDataOutput

The main goal of the app I'm trying to make is peer-to-peer video streaming (sort of like FaceTime, but over Bluetooth/WiFi).
Using AVFoundation, I was able to capture video/audio sample buffers. Then I'm sending the video/audio sample buffer data. Now the problem is processing the sample buffer data on the receiving side.
As for the video sample buffers, I was able to get a UIImage from the sample buffer. But for the audio sample buffers, I don't know how to process them so I can play the audio.
So the question is: how can I process/play the audio sample buffers?
Right now I'm just plotting the waveform, just like in Apple's Wavy sample code:
CMSampleBufferRef sampleBuffer;
CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);
NSUInteger channelIndex = 0;
CMBlockBufferRef audioBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t audioBlockBufferOffset = (channelIndex * numSamples * sizeof(SInt16));
size_t lengthAtOffset = 0;
size_t totalLength = 0;
SInt16 *samples = NULL;
CMBlockBufferGetDataPointer(audioBlockBuffer, audioBlockBufferOffset, &lengthAtOffset, &totalLength, (char **)(&samples));
int numSamplesToRead = 1;
for (int i = 0; i < numSamplesToRead; i++) {
    SInt16 subSet[numSamples / numSamplesToRead];
    for (int j = 0; j < numSamples / numSamplesToRead; j++)
        subSet[j] = samples[(i * (numSamples / numSamplesToRead)) + j];
    SInt16 audioSample = [Util maxValueInArray:subSet ofSize:(numSamples / numSamplesToRead)];
    double scaledSample = (double) ((audioSample / SINT16_MAX));
    // plot waveform using scaledSample
    [updateUI:scaledSample];
}
To show the video you can use the following (this grabs an ARGB picture and converts it to a Qt (Nokia Qt) QImage; you can replace that with another image type).
Put it in the delegate class:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    SVideoSample sample;
    sample.pImage      = (char *)CVPixelBufferGetBaseAddress(imageBuffer);
    sample.bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    sample.width       = CVPixelBufferGetWidth(imageBuffer);
    sample.height      = CVPixelBufferGetHeight(imageBuffer);

    QImage img((unsigned char *)sample.pImage, sample.width, sample.height, sample.bytesPerRow, QImage::Format_ARGB32);
    self->m_receiver->eventReceived(img);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    [pool drain];
}

Raw Sound playing

I've been working for some time with image formats, and I know that an image is an array of pixels (24, maybe 32, bits each). The question is: how is a sound file represented? To be honest, I'm not even sure what I should be googling for. I would also be interested in how you use the data, I mean actually playing the sounds in the file. For an image file you have all sorts of abstract devices to draw an image on (Graphics in Java/C#, HDC in C++/Win32, etc.). I hope I have been clear enough.
Here's a dandy overview of how .wav is stored. I found it by typing "wave file format" into Google.
http://www.sonicspot.com/guide/wavefiles.html
WAV files can also store compressed audio, although I believe most of the time they are not compressed; the WAV format is designed as a container with a number of options for how the audio is stored.
Here's a snippet of C# code, found in another question here on Stack Overflow, that builds a WAV-formatted audio MemoryStream and then plays that stream without first saving it to a file (which many other answers rely on). Saving it to disk can be added with one line of code, but most of the time that would be undesirable.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Windows.Forms;

public static void PlayBeep(UInt16 frequency, int msDuration, UInt16 volume = 16383)
{
    var mStrm = new MemoryStream();
    BinaryWriter writer = new BinaryWriter(mStrm);

    const double TAU = 2 * Math.PI;
    int formatChunkSize = 16;
    int headerSize = 8;
    short formatType = 1;
    short tracks = 1;
    int samplesPerSecond = 44100;
    short bitsPerSample = 16;
    short frameSize = (short)(tracks * ((bitsPerSample + 7) / 8));
    int bytesPerSecond = samplesPerSecond * frameSize;
    int waveSize = 4;
    int samples = (int)((decimal)samplesPerSecond * msDuration / 1000);
    int dataChunkSize = samples * frameSize;
    int fileSize = waveSize + headerSize + formatChunkSize + headerSize + dataChunkSize;

    // var encoding = new System.Text.UTF8Encoding();
    writer.Write(0x46464952); // = encoding.GetBytes("RIFF")
    writer.Write(fileSize);
    writer.Write(0x45564157); // = encoding.GetBytes("WAVE")
    writer.Write(0x20746D66); // = encoding.GetBytes("fmt ")
    writer.Write(formatChunkSize);
    writer.Write(formatType);
    writer.Write(tracks);
    writer.Write(samplesPerSecond);
    writer.Write(bytesPerSecond);
    writer.Write(frameSize);
    writer.Write(bitsPerSample);
    writer.Write(0x61746164); // = encoding.GetBytes("data")
    writer.Write(dataChunkSize);

    {
        double theta = frequency * TAU / (double)samplesPerSecond;
        // 'volume' is UInt16 with range 0 thru Uint16.MaxValue ( = 65 535)
        // we need 'amp' to have the range of 0 thru Int16.MaxValue ( = 32 767)
        // so we simply set amp = volume / 2
        double amp = volume >> 1; // Shifting right by 1 divides by 2
        for (int step = 0; step < samples; step++)
        {
            short s = (short)(amp * Math.Sin(theta * (double)step));
            writer.Write(s);
        }
    }

    mStrm.Seek(0, SeekOrigin.Begin);
    new System.Media.SoundPlayer(mStrm).Play();
    writer.Close();
    mStrm.Close();
} // public static void PlayBeep(UInt16 frequency, int msDuration, UInt16 volume = 16383)
But this code gives a bit of insight into the WAV format, and it even lets you build your own WAV data from C# source code.
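For comparison (an addition, not part of the answer above), here is a minimal sketch of the same idea in Java using javax.sound.sampled, which plays raw PCM directly through a SourceDataLine without building a WAV header at all:

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;

public class RawToneDemo {
    public static void main(String[] args) throws Exception {
        float sampleRate = 44100f;
        // 16-bit, mono, signed, little-endian PCM
        AudioFormat format = new AudioFormat(sampleRate, 16, 1, true, false);

        // One second of a 440 Hz sine wave as raw little-endian sample bytes
        byte[] buffer = new byte[(int) sampleRate * 2];
        for (int i = 0; i < buffer.length / 2; i++) {
            short s = (short) (Math.sin(2 * Math.PI * 440 * i / sampleRate) * 16000);
            buffer[2 * i]     = (byte) (s & 0xFF);
            buffer[2 * i + 1] = (byte) ((s >> 8) & 0xFF);
        }

        // Feed the raw samples straight to the sound card
        SourceDataLine line = AudioSystem.getSourceDataLine(format);
        line.open(format);
        line.start();
        line.write(buffer, 0, buffer.length);
        line.drain();
        line.close();
    }
}

In other words, in both cases the "raw" data is just a stream of amplitude samples; the WAV header only describes how to interpret that stream.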

Analyze "whistle" sound for pitch/note

I am trying to build a system that will be able to process a recording of someone whistling and output notes.
Can anyone recommend an open-source platform which I can use as the basis for note/pitch recognition and analysis of wave files?
Thanks in advance
As many others have already said, FFT is the way to go here. I've written a little example in Java using FFT code from http://www.cs.princeton.edu/introcs/97data/. In order to run it, you will also need the Complex class from that page (see the source for the exact URL).
The code reads in a file, goes over it window-wise and does an FFT on each window. For each FFT it looks for the maximum coefficient and outputs the corresponding frequency. This works very well for clean signals like a sine wave, but for an actual whistle sound you probably have to add more. I've tested it with a few whistling recordings I created myself (using my laptop's built-in mic); the code gets the general idea of what's going on, but more needs to be done in order to get actual notes.
1) You might need a more intelligent windowing technique. What my code uses now is a simple rectangular window. Since the FFT assumes that the input signal can be continued periodically, additional frequencies are detected when the first and the last sample in the window don't match. This is known as spectral leakage ( http://en.wikipedia.org/wiki/Spectral_leakage ); usually one uses a window that down-weights samples at the beginning and the end of the window ( http://en.wikipedia.org/wiki/Window_function ). Although the leakage shouldn't cause the wrong frequency to be detected as the maximum, using a window will increase the detection quality (see the windowing sketch after this list).
2) To match the frequencies to actual notes, you could use an array containing the frequencies (like 440 Hz for a') and then look for the frequency that's closest to the one that has been identified. However, if the whistling is off from standard tuning, this won't work anymore. Given that the whistling is still correct but merely tuned differently (like a guitar or other musical instrument can be tuned differently and still sound "good", as long as the tuning is done consistently for all strings), you could still find notes by looking at the ratios of the identified frequencies. You can read http://en.wikipedia.org/wiki/Pitch_%28music%29 as a starting point on that. This is also interesting: http://en.wikipedia.org/wiki/Piano_key_frequencies
3) Moreover, it might be interesting to detect the points in time when each individual tone starts and stops. This could be added as a pre-processing step, and you could then do an FFT for each individual note. However, if the whistler doesn't pause but just bends between notes, this would not be that easy.
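As an illustration of point 1 (an addition, not part of the original code below), a Hann window could be applied to each block of samples before the FFT; this is a sketch under that assumption, not a drop-in patch:

// Multiply a window of samples by a Hann window before the FFT
// to reduce spectral leakage at the window edges.
static double[] applyHannWindow(double[] window) {
    int n = window.length;
    double[] out = new double[n];
    for (int i = 0; i < n; i++) {
        double w = 0.5 * (1.0 - Math.cos(2.0 * Math.PI * i / (n - 1)));
        out[i] = window[i] * w;
    }
    return out;
}

In the run() method below, this would be applied to the N samples of each window before they are wrapped in Complex objects.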
Definitely have a look at the libraries the others suggested. I don't know any of them, but maybe they already contain functionality for doing what I've described above.
And now to the code. Please let me know what worked for you; I find this topic pretty interesting.
Edit: I updated the code to include overlapping and a simple mapper from frequencies to notes. It works only for "tuned" whistlers though, as mentioned above.
package de.ahans.playground;

import java.io.File;
import java.io.IOException;
import java.util.Arrays;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.UnsupportedAudioFileException;

public class FftMaxFrequency {

    // taken from http://www.cs.princeton.edu/introcs/97data/FFT.java.html
    // (first hit in Google for "java fft")
    // needs Complex class from http://www.cs.princeton.edu/introcs/97data/Complex.java
    public static Complex[] fft(Complex[] x) {
        int N = x.length;

        // base case
        if (N == 1) return new Complex[] { x[0] };

        // radix 2 Cooley-Tukey FFT
        if (N % 2 != 0) { throw new RuntimeException("N is not a power of 2"); }

        // fft of even terms
        Complex[] even = new Complex[N/2];
        for (int k = 0; k < N/2; k++) {
            even[k] = x[2*k];
        }
        Complex[] q = fft(even);

        // fft of odd terms
        Complex[] odd = even; // reuse the array
        for (int k = 0; k < N/2; k++) {
            odd[k] = x[2*k + 1];
        }
        Complex[] r = fft(odd);

        // combine
        Complex[] y = new Complex[N];
        for (int k = 0; k < N/2; k++) {
            double kth = -2 * k * Math.PI / N;
            Complex wk = new Complex(Math.cos(kth), Math.sin(kth));
            y[k] = q[k].plus(wk.times(r[k]));
            y[k + N/2] = q[k].minus(wk.times(r[k]));
        }
        return y;
    }

    static class AudioReader {
        private AudioFormat audioFormat;

        public AudioReader() {}

        public double[] readAudioData(File file) throws UnsupportedAudioFileException, IOException {
            AudioInputStream in = AudioSystem.getAudioInputStream(file);
            audioFormat = in.getFormat();
            int depth = audioFormat.getSampleSizeInBits();
            long length = in.getFrameLength();
            if (audioFormat.isBigEndian()) {
                throw new UnsupportedAudioFileException("big endian not supported");
            }
            if (audioFormat.getChannels() != 1) {
                throw new UnsupportedAudioFileException("only 1 channel supported");
            }

            byte[] tmp = new byte[(int) length];
            byte[] samples = null;
            int bytesPerSample = depth/8;
            int bytesRead;
            while (-1 != (bytesRead = in.read(tmp))) {
                if (samples == null) {
                    samples = Arrays.copyOf(tmp, bytesRead);
                } else {
                    int oldLen = samples.length;
                    samples = Arrays.copyOf(samples, oldLen + bytesRead);
                    for (int i = 0; i < bytesRead; i++) samples[oldLen+i] = tmp[i];
                }
            }

            double[] data = new double[samples.length/bytesPerSample];
            for (int i = 0; i < samples.length-bytesPerSample; i += bytesPerSample) {
                int sample = 0;
                for (int j = 0; j < bytesPerSample; j++) sample += samples[i+j] << j*8;
                data[i/bytesPerSample] = (double) sample / Math.pow(2, depth);
            }
            return data;
        }

        public AudioFormat getAudioFormat() {
            return audioFormat;
        }
    }

    public class FrequencyNoteMapper {
        private final String[] NOTE_NAMES = new String[] {
                "A", "Bb", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"
        };
        private final double[] FREQUENCIES;
        private final double a = 440;
        private final int TOTAL_OCTAVES = 6;
        private final int START_OCTAVE = -1; // relative to A

        public FrequencyNoteMapper() {
            FREQUENCIES = new double[TOTAL_OCTAVES*12];
            int j = 0;
            for (int octave = START_OCTAVE; octave < START_OCTAVE+TOTAL_OCTAVES; octave++) {
                for (int note = 0; note < 12; note++) {
                    int i = octave*12+note;
                    FREQUENCIES[j++] = a * Math.pow(2, (double)i / 12.0);
                }
            }
        }

        public String findMatch(double frequency) {
            if (frequency == 0)
                return "none";

            double minDistance = Double.MAX_VALUE;
            int bestIdx = -1;
            for (int i = 0; i < FREQUENCIES.length; i++) {
                if (Math.abs(FREQUENCIES[i] - frequency) < minDistance) {
                    minDistance = Math.abs(FREQUENCIES[i] - frequency);
                    bestIdx = i;
                }
            }

            int octave = bestIdx / 12;
            int note = bestIdx % 12;
            return NOTE_NAMES[note] + octave;
        }
    }

    public void run(File file) throws UnsupportedAudioFileException, IOException {
        FrequencyNoteMapper mapper = new FrequencyNoteMapper();

        // size of window for FFT
        int N = 4096;
        int overlap = 1024;
        AudioReader reader = new AudioReader();
        double[] data = reader.readAudioData(file);

        // sample rate is needed to calculate actual frequencies
        float rate = reader.getAudioFormat().getSampleRate();

        // go over the samples window-wise
        for (int offset = 0; offset < data.length-N; offset += (N-overlap)) {
            // for each window calculate the FFT
            Complex[] x = new Complex[N];
            for (int i = 0; i < N; i++) x[i] = new Complex(data[offset+i], 0);
            Complex[] result = fft(x);

            // find index of maximum coefficient
            double max = -1;
            int maxIdx = 0;
            for (int i = result.length/2; i >= 0; i--) {
                if (result[i].abs() > max) {
                    max = result[i].abs();
                    maxIdx = i;
                }
            }
            // calculate the frequency of that coefficient
            double peakFrequency = (double)maxIdx*rate/(double)N;
            // and get the time of the start and end position of the current window
            double windowBegin = offset/rate;
            double windowEnd = (offset+(N-overlap))/rate;
            System.out.printf("%f s to %f s:\t%f Hz -- %s\n", windowBegin, windowEnd, peakFrequency, mapper.findMatch(peakFrequency));
        }
    }

    public static void main(String[] args) throws UnsupportedAudioFileException, IOException {
        new FftMaxFrequency().run(new File("/home/axr/tmp/entchen.wav"));
    }
}
I think this open-source platform suits you:
http://code.google.com/p/musicg-sound-api/
Well, you could always use FFTW to perform the Fast Fourier Transform. It's a very well-respected framework. Once you've got the FFT of your signal, you can analyze the resulting array for peaks. A simple histogram-style analysis should give you the frequencies with the greatest volume. Then you just have to compare those frequencies to the frequencies that correspond to different pitches.
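To make that last step concrete (an addition, not part of the answer above), mapping a detected peak frequency to the nearest equal-tempered pitch can use the standard MIDI note formula; a small Java sketch:

// Map a frequency in Hz to the nearest note name, assuming 12-tone
// equal temperament with A4 = 440 Hz (MIDI note 69).
static String nearestNote(double frequencyHz) {
    String[] names = { "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "Bb", "B" };
    int midi = (int) Math.round(69 + 12 * Math.log(frequencyHz / 440.0) / Math.log(2));
    int octave = midi / 12 - 1; // MIDI convention: note 60 = C4
    return names[midi % 12] + octave;
}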
In addition to the other great options:
csound pitch detection: http://www.csounds.com/manual/html/pvspitch.html
fmod: http://www.fmod.org/ (has a free version)
aubio: http://aubio.org/doc/pitchdetection_8h.html
You might want to consider Python(x,y). It's a scientific programming framework for Python in the spirit of Matlab, and it has easy functions for working in the FFT domain.
If you use Java, have a look at the TarsosDSP library. It has a pretty good ready-to-go pitch detector.
Here is an example for Android, but I think it doesn't require too many modifications to use elsewhere.
I'm a fan of the FFT, but for the monophonic and fairly pure sinusoidal tones of whistling, a zero-cross detector would do a far better job of determining the actual frequency, at a much lower processing cost. Zero-cross detection is used in electronic frequency counters that measure the clock rate of whatever is being tested.
If you're going to analyze anything other than pure sine wave tones, then the FFT is definitely the way to go.
There is a very simple implementation of zero-cross detection in Java on GitHub.
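As a rough illustration of the idea (an addition, not from the answer above), a zero-cross frequency estimate over a buffer of mono samples can be as simple as counting sign changes:

// Estimate the dominant frequency of a monophonic, fairly pure signal
// by counting sign changes: each full cycle crosses zero twice.
static double estimateFrequency(double[] samples, double sampleRate) {
    int crossings = 0;
    for (int i = 1; i < samples.length; i++) {
        boolean wasNegative = samples[i - 1] < 0;
        boolean isNegative = samples[i] < 0;
        if (wasNegative != isNegative) crossings++;
    }
    double seconds = samples.length / sampleRate;
    return (crossings / 2.0) / seconds;
}

In practice the signal should be low-pass filtered or at least de-noised first, since noise around zero inflates the count.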

How does one jump to a particular offset more than once?

Imagine this function:
void SoundManager::playSource(ALuint sourceID, float offset)
{
    alSourceStop(sourceID);

    ALint iTotal   = 0;
    ALint iCurrent = 0;
    ALint uiBuffer = 0;
    alGetSourcei(sourceID, AL_BUFFER, &uiBuffer);
    alGetBufferi(uiBuffer, AL_SIZE, &iTotal);

    iCurrent = iTotal * offset;
    alSourcei(sourceID, AL_BYTE_OFFSET, iCurrent);
    alSourcePlay(sourceID);
}
The idea is calling playSource(x, 0.5f) would jump to (roughly) halfway through the buffer, etc.
It works fine the first time I call it, but if I call it again on the same source (whether that source is playing or not) it begins playing as though I'd called it with offset 0.
Any ideas why?
Solved!
Even though the API claims that setting offsets works on sources in any state, the problem was that I should have been calling alSourceRewind instead of alSourceStop at the start.
It seems setting offsets only works on sources in the AL_INITIAL state.
You can play the audio file first and then call setOffset like this; there is no need to call alSourcePlay again:
- (BOOL)setOffset:(float)offset
{
    ALint iTotal   = 0;
    ALint iCurrent = 0;
    ALint uiBuffer = 0;
    alGetSourcei(_sourceID, AL_BUFFER, &uiBuffer);
    alGetBufferi(uiBuffer, AL_SIZE, &iTotal);

    iCurrent = iTotal * offset;
    alSourcei(_sourceID, AL_BYTE_OFFSET, iCurrent);

    return ((_error = alGetError()) != AL_NO_ERROR);
}
