How to specify training parameters in LibSVM C# Wrapper

I'm completely new to the world of SVMs. I'm using the LibSVM wrapper for C# from this link, but I can't figure out how to use it or how to specify the right parameters, especially since the documentation seems to be corrupted when I try to generate it with Doxygen.
Here is my attempt:
libSVM_Problem prob = new libSVM_Problem();
libSVM classifier = new libSVM();
libSVM_Parameter parameters = new libSVM_Parameter();
parameters.svm_type = libSVMWrapper.SVM_TYPE.C_SVC;
parameters.kernel_type = KERNEL_TYPE.LINEAR;
parameters.C = 1;
parameters.nu = 0;

// prepare class labels
double[] labels = new double[trainClasses.Rows];
for (int i = 0; i < trainClasses.Rows; i++)
{
    labels[i] = trainClasses[i, 0]; // trainClasses is an array of floats
}

// prepare samples (trainData is 980 training samples * 400 features)
double[][] samples = new double[trainData.Rows][];
for (int i = 0; i < samples.Length; i++)
{
    samples[i] = new double[trainData.Cols];
    for (int j = 0; j < samples[i].Length; j++)
    {
        samples[i][j] = trainData[i, j];
    }
}

// attach the data to the prob object
prob.labels = labels;
prob.samples = samples;

classifier.Train(prob, parameters);
This code throws an exception when calling the Train method, stating that the weight parameter within libSVM_Parameter is a null reference. I have no idea how to specify these weights, or the libSVM_Parameter fields in general.
So, if anyone has an example of how to specify the right parameters, it would be very helpful.

I would suggest using https://github.com/ccerhan/LibSVMsharp, an API for libsvm in C#/.NET. It also comes with examples that will help you understand SVM.
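On the original exception: in native libsvm, per-class weights are described by the fields nr_weight, weight_label and weight, and nr_weight = 0 disables class weighting. Assuming the wrapper mirrors those fields (I have not checked its source), initializing them to empty arrays should avoid the null reference:

// assumption: libSVM_Parameter mirrors libsvm's svm_parameter fields
parameters.nr_weight = 0;              // no per-class weights
parameters.weight_label = new int[0];  // non-null but empty
parameters.weight = new double[0];

With LibSVMsharp, the same training setup looks roughly like this. It is a minimal sketch based on the library's published examples; check the exact type and property names against the repository:

using LibSVMsharp;
using LibSVMsharp.Helpers;

SVMProblem problem = SVMProblemHelper.Load(@"train.txt"); // libsvm-format training file
SVMParameter parameter = new SVMParameter();
parameter.Type = SVMType.C_SVC;          // C-support vector classification
parameter.Kernel = SVMKernelType.LINEAR; // linear kernel, as in your code
parameter.C = 1;

SVMModel model = SVM.Train(problem, parameter);
double prediction = SVM.Predict(model, problem.X[0]); // classify the first sample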

Related

NodeJS get-pixels: isolate the red channel

So I'm trying to make a score calculation for my game, and have made a simple API that will take in the image and respond with the score. To optimise it, I'm only interested in the red channel of the texture, but I don't know how I would achieve this. I have this:
// Calculates the score
function calculateScore(pixels) {
    var score = 0;
    // currently this sums every channel, not just red
    for (var r = 0; r < pixels.length; r++) {
        score += pixels[r];
    }
    return score;
}
I'm unsure how I'd go about isolating just a single channel.
// Calculates the score from the red channel only
function calculateScore(pixels) {
    var score = 0;
    // RGBA data is interleaved, so red samples sit at every 4th index
    for (var r = 0; r < pixels.length; r++) {
        if (r % 4 == 0) {
            score += pixels[r];
        }
    }
    return score;
}
Not my proudest moment xD
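A slightly cleaner variant steps the index by 4 instead of testing every element; as above, this assumes pixels is get-pixels' flat RGBA buffer (e.g. pixels.data):

// Calculates the score from the red channel only
function calculateScore(pixels) {
    var score = 0;
    // jump from one red sample to the next (pixels are stored as R,G,B,A)
    for (var r = 0; r < pixels.length; r += 4) {
        score += pixels[r];
    }
    return score;
}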

How to get direct access to polygons in VTK?

I found this post online (it dates back to 2013) when I had trouble getting direct access to a specific cell in a vtkPolyData. I am using the latest version, VTK 8.1.1, and it seems the newer version of VTK still has this issue.
polys->InitTraversal();
for (int i = 0; i < polys->GetNumberOfCells(); i++)
{
    polys->GetNextCell(idList); // this sequential method gets the point IDs correctly
    int a = idList->GetId(0);
    int b = idList->GetId(1);
    int c = idList->GetId(2);
}
However, the direct access method seems to have issues
polys->InitTraversal();
for (int i = 0; i < polys->GetNumberOfCells(); i++)
{
    polys->GetCell(i, idList); // this method returns wrong IDs
    int a = idList->GetId(0);
    int b = idList->GetId(1);
    int c = idList->GetId(2);
}
How can I get the point IDs of a specific cell without looping through all the cells? Isn't polys->GetCell(i, idList) meant to give you direct access to a specific cell?
For direct access, you can use the vtkPolyData::GetCellPoints() method. (The reason vtkCellArray::GetCell() returns wrong IDs is that its first argument is a location within the internal connectivity array, not a cell index.) For example:
vtkNew<vtkIdList> idL; // or: auto idL = vtkSmartPointer<vtkIdList>::New();
poly->GetCellPoints(13, idL); // assuming you want the points of the 13th cell
for (auto i = 0; i < idL->GetNumberOfIds(); ++i)
    std::cout << idL->GetId(i) << std::endl;
For looping over all cells I prefer a while loop:
vtkNew<vtkIdList> idL;
poly->GetPolys()->InitTraversal();
while (poly->GetPolys()->GetNextCell(idL)) {
    for (auto i = 0; i < idL->GetNumberOfIds(); ++i)
        std::cout << idL->GetId(i) << std::endl;
}

How can I compute the autocorrelation of a sample using Math.NET?

Apparently the Math.NET library does not contain a function for obtaining the autocorrelation of a sample.
How can this be achieved using the same library?
The function

double ACF<T>(IEnumerable<T> series, int lag, Func<T, double> f)

in MathNet.Numerics.Statistics.Mcmc calculates an autocorrelation. An example of using it is in the unit test. A snippet from it is:
var series = new double[length];
for (int i = 0; i < length; i++)
{
    series[i] = RandomSeries();
}
double result = MCMCDiagnostics.ACF(series, lag, x => x * x);
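For something you can compile directly, here is a minimal self-contained sketch; the series values are made up for illustration, and the identity lambda x => x is passed so the raw values themselves are correlated at lag 1:

using System;
using MathNet.Numerics.Statistics.Mcmc;

class AcfExample
{
    static void Main()
    {
        // a small made-up sample series
        double[] series = { 1.0, 2.0, 1.5, 3.0, 2.5, 4.0, 3.5, 5.0 };

        // autocorrelation of the raw values at lag 1
        double acfLag1 = MCMCDiagnostics.ACF(series, 1, x => x);
        Console.WriteLine("ACF at lag 1: " + acfLag1);
    }
}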

Scale space blob detection using OpenCV

I am new to image processing and computer vision and I would like to detect blobs in an image using the Laplacian of Gaussian at different scales. The following links explain it in detail:
http://www.cs.utah.edu/~jfishbau/advimproc/project1/
http://www.cs.utah.edu/~manasi/coursework/cs7960/p1/project1.html
So far, using OpenCV 2, I have managed to load the images, apply Gaussian filters with various kernel sizes, and apply the Laplacian filter. Then I multiply the whole image by sigma squared to amplify the signal (see the description in the links) and apply a threshold. The next step is to detect local maxima and minima so I can get the blob centers and draw circles, but I am not sure how to do it, or whether the image processing I have done so far is correct. Here is my code:
#include <opencv2/opencv.hpp>

using namespace cv;

Mat image1, drawing1;

void blobDetect(Mat image, Mat drawing);

int main() {
    image1 = imread("butterfly.jpg", 0);
    drawing1 = imread("butterfly.jpg");
    blobDetect(image1, drawing1);
    waitKey();
    return 0;
}

void blobDetect(Mat image, Mat drawing) {
    int ksize = 1;
    int n = 1;
    Mat result[10];

    // Gaussian blur at increasing kernel sizes (1, 3, 5, ...)
    for (int i = 0; i < 10; i++) {
        cv::GaussianBlur(image, result[i], cv::Size(ksize, ksize), ksize / 3, 0);
        n += 1;
        ksize = 2 * n - 1;
    }

    // Laplacian at the same kernel sizes
    ksize = 1;
    n = 1;
    for (int i = 0; i < 10; i++) {
        cv::Laplacian(result[i], result[i], CV_8U, ksize, 1, 0);
        n += 1;
        ksize = 2 * n - 1;
    }

    // multiply each response by sigma^2 to amplify the signal
    ksize = 1;
    int cols = image.cols;
    int rows = image.rows;
    for (int a = 0; a < 10; a++) {
        for (int i = 0; i < rows; i++) {
            // uchar* data = result[a].ptr<uchar>(rows);
            for (int j = 0; j < cols; j++) {
                result[a].at<uchar>(i, j) *= (ksize / 3) * (ksize / 3);
            }
        }
        ksize++;
        ksize = 2 * ksize - 1;
    }

    // binary threshold on each scale
    for (int i = 0; i < 10; i++) {
        cv::threshold(result[i], result[i], 100, 255, 0);
    }
}
This is the expected result (image not included here).
Thanks
After you detect contours, you can use minEnclosingCircle(). Even better, check out this tutorial:
http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/bounding_rects_circles/bounding_rects_circles.html
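A minimal sketch of that approach, assuming mask is one of the 8-bit thresholded images produced above (the function name is mine, not from the tutorial; the constants are the OpenCV 2.x ones matching the question):

#include <opencv2/opencv.hpp>
#include <vector>

void drawBlobCircles(const cv::Mat& mask, cv::Mat& drawing)
{
    std::vector<std::vector<cv::Point> > contours;
    // findContours modifies its input, so work on a copy
    cv::findContours(mask.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    for (size_t i = 0; i < contours.size(); i++) {
        cv::Point2f center;
        float radius = 0.f;
        cv::minEnclosingCircle(contours[i], center, radius);
        // draw each blob as a circle on the output image
        cv::circle(drawing, center, (int)radius, cv::Scalar(0, 0, 255), 2);
    }
}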

Analyze "whistle" sound for pitch/note

I am trying to build a system that can process a recording of someone whistling and output the notes.
Can anyone recommend an open-source platform which I can use as the base for note/pitch recognition and analysis of wave files?
Thanks in advance.
As many others have already said, FFT is the way to go here. I've written a little example in Java using FFT code from http://www.cs.princeton.edu/introcs/97data/. In order to run it, you will also need the Complex class from that page (see the source for the exact URL).
The code reads in a file, goes over it window-wise, and does an FFT on each window. For each FFT it looks for the maximum coefficient and outputs the corresponding frequency. This works very well for clean signals like a sine wave, but for an actual whistle sound you will probably have to add more. I've tested it with a few whistling files I created myself (using the integrated mic of my laptop); the code does get the idea of what's going on, but more needs to be done to get actual notes.
1) You might need a more intelligent window technique. My code currently uses a simple rectangular window. Since the FFT assumes that the input signal can be periodically continued, additional frequencies are detected when the first and the last sample in the window don't match. This is known as spectral leakage (http://en.wikipedia.org/wiki/Spectral_leakage); usually one uses a window that down-weights samples at the beginning and the end of the window (http://en.wikipedia.org/wiki/Window_function). Although the leakage shouldn't cause the wrong frequency to be detected as the maximum, using a window will increase the detection quality.
2) To match the frequencies to actual notes, you could use an array containing the frequencies (like 440 Hz for a') and then look for the frequency closest to the one that has been identified. However, if the whistling is off standard tuning, this won't work any more. Given that the whistling is still correct but just tuned differently (like a guitar or another musical instrument can be tuned differently and still sound "good", as long as the tuning is done consistently for all strings), you could still find notes by looking at the ratios of the identified frequencies. You can read http://en.wikipedia.org/wiki/Pitch_%28music%29 as a starting point. This is also interesting: http://en.wikipedia.org/wiki/Piano_key_frequencies
3) It might also be interesting to detect the points in time when each individual tone starts and stops. This could be added as a pre-processing step, and you could then do an FFT for each individual note. However, if the whistler doesn't stop but just bends between notes, this would not be as easy.
Definitely have a look at the libraries the others suggested. I don't know any of them, but maybe they already contain functionality for doing what I've described above.
And now to the code. Please let me know what worked for you, I find this topic pretty interesting.
Edit: I updated the code to include overlapping windows and a simple mapper from frequencies to notes. As mentioned above, it works only for "tuned" whistlers though.
package de.ahans.playground;

import java.io.File;
import java.io.IOException;
import java.util.Arrays;

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.UnsupportedAudioFileException;

public class FftMaxFrequency {

    // taken from http://www.cs.princeton.edu/introcs/97data/FFT.java.html
    // (first hit in Google for "java fft")
    // needs Complex class from http://www.cs.princeton.edu/introcs/97data/Complex.java
    public static Complex[] fft(Complex[] x) {
        int N = x.length;

        // base case
        if (N == 1) return new Complex[] { x[0] };

        // radix 2 Cooley-Tukey FFT
        if (N % 2 != 0) { throw new RuntimeException("N is not a power of 2"); }

        // fft of even terms
        Complex[] even = new Complex[N/2];
        for (int k = 0; k < N/2; k++) {
            even[k] = x[2*k];
        }
        Complex[] q = fft(even);

        // fft of odd terms
        Complex[] odd = even; // reuse the array
        for (int k = 0; k < N/2; k++) {
            odd[k] = x[2*k + 1];
        }
        Complex[] r = fft(odd);

        // combine
        Complex[] y = new Complex[N];
        for (int k = 0; k < N/2; k++) {
            double kth = -2 * k * Math.PI / N;
            Complex wk = new Complex(Math.cos(kth), Math.sin(kth));
            y[k] = q[k].plus(wk.times(r[k]));
            y[k + N/2] = q[k].minus(wk.times(r[k]));
        }
        return y;
    }

    static class AudioReader {
        private AudioFormat audioFormat;

        public AudioReader() {}

        public double[] readAudioData(File file) throws UnsupportedAudioFileException, IOException {
            AudioInputStream in = AudioSystem.getAudioInputStream(file);
            audioFormat = in.getFormat();
            int depth = audioFormat.getSampleSizeInBits();
            long length = in.getFrameLength();
            if (audioFormat.isBigEndian()) {
                throw new UnsupportedAudioFileException("big endian not supported");
            }
            if (audioFormat.getChannels() != 1) {
                throw new UnsupportedAudioFileException("only 1 channel supported");
            }

            byte[] tmp = new byte[(int) length];
            byte[] samples = null;
            int bytesPerSample = depth/8;
            int bytesRead;
            while (-1 != (bytesRead = in.read(tmp))) {
                if (samples == null) {
                    samples = Arrays.copyOf(tmp, bytesRead);
                } else {
                    int oldLen = samples.length;
                    samples = Arrays.copyOf(samples, oldLen + bytesRead);
                    for (int i = 0; i < bytesRead; i++) samples[oldLen+i] = tmp[i];
                }
            }

            double[] data = new double[samples.length/bytesPerSample];
            for (int i = 0; i < samples.length-bytesPerSample; i += bytesPerSample) {
                int sample = 0;
                for (int j = 0; j < bytesPerSample; j++) sample += samples[i+j] << j*8;
                data[i/bytesPerSample] = (double) sample / Math.pow(2, depth);
            }
            return data;
        }

        public AudioFormat getAudioFormat() {
            return audioFormat;
        }
    }

    public class FrequencyNoteMapper {
        private final String[] NOTE_NAMES = new String[] {
            "A", "Bb", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"
        };
        private final double[] FREQUENCIES;
        private final double a = 440;
        private final int TOTAL_OCTAVES = 6;
        private final int START_OCTAVE = -1; // relative to A

        public FrequencyNoteMapper() {
            FREQUENCIES = new double[TOTAL_OCTAVES*12];
            int j = 0;
            for (int octave = START_OCTAVE; octave < START_OCTAVE+TOTAL_OCTAVES; octave++) {
                for (int note = 0; note < 12; note++) {
                    int i = octave*12+note;
                    FREQUENCIES[j++] = a * Math.pow(2, (double)i / 12.0);
                }
            }
        }

        public String findMatch(double frequency) {
            if (frequency == 0)
                return "none";

            double minDistance = Double.MAX_VALUE;
            int bestIdx = -1;

            for (int i = 0; i < FREQUENCIES.length; i++) {
                if (Math.abs(FREQUENCIES[i] - frequency) < minDistance) {
                    minDistance = Math.abs(FREQUENCIES[i] - frequency);
                    bestIdx = i;
                }
            }

            int octave = bestIdx / 12;
            int note = bestIdx % 12;

            return NOTE_NAMES[note] + octave;
        }
    }

    public void run(File file) throws UnsupportedAudioFileException, IOException {
        FrequencyNoteMapper mapper = new FrequencyNoteMapper();

        // size of window for FFT
        int N = 4096;
        int overlap = 1024;
        AudioReader reader = new AudioReader();
        double[] data = reader.readAudioData(file);

        // sample rate is needed to calculate actual frequencies
        float rate = reader.getAudioFormat().getSampleRate();

        // go over the samples window-wise
        for (int offset = 0; offset < data.length-N; offset += (N-overlap)) {
            // for each window calculate the FFT
            Complex[] x = new Complex[N];
            for (int i = 0; i < N; i++) x[i] = new Complex(data[offset+i], 0);
            Complex[] result = fft(x);

            // find index of maximum coefficient
            double max = -1;
            int maxIdx = 0;
            for (int i = result.length/2; i >= 0; i--) {
                if (result[i].abs() > max) {
                    max = result[i].abs();
                    maxIdx = i;
                }
            }
            // calculate the frequency of that coefficient
            double peakFrequency = (double)maxIdx*rate/(double)N;
            // and get the time of the start and end position of the current window
            double windowBegin = offset/rate;
            double windowEnd = (offset+(N-overlap))/rate;
            System.out.printf("%f s to %f s:\t%f Hz -- %s\n", windowBegin, windowEnd, peakFrequency, mapper.findMatch(peakFrequency));
        }
    }

    public static void main(String[] args) throws UnsupportedAudioFileException, IOException {
        new FftMaxFrequency().run(new File("/home/axr/tmp/entchen.wav"));
    }
}
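If you want to experiment with point 1 above, a Hann window is a common choice; here is a small sketch (my own addition, not part of the original code) that you could apply to each block of samples before building the Complex array:

// Hann window: w[i] = 0.5 * (1 - cos(2*pi*i/(N-1))),
// down-weights samples at the edges of the window to reduce spectral leakage
static double[] applyHannWindow(double[] block) {
    int n = block.length;
    double[] windowed = new double[n];
    for (int i = 0; i < n; i++) {
        windowed[i] = block[i] * 0.5 * (1.0 - Math.cos(2.0 * Math.PI * i / (n - 1)));
    }
    return windowed;
}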
I think this open-source platform suits you: http://code.google.com/p/musicg-sound-api/
Well, you could always use FFTW to perform the Fast Fourier Transform. It's a very well-respected framework. Once you've got an FFT of your signal, you can analyze the resulting array for peaks. A simple histogram-style analysis should give you the frequencies with the greatest volume. Then you just have to compare those frequencies to the ones that correspond to different pitches.
In addition to the other great options:
csound pitch detection: http://www.csounds.com/manual/html/pvspitch.html
fmod: http://www.fmod.org/ (has a free version)
aubio: http://aubio.org/doc/pitchdetection_8h.html
You might want to consider Python(x,y). It's a scientific programming framework for Python in the spirit of Matlab, and it has easy functions for working in the FFT domain.
If you use Java, have a look at the TarsosDSP library. It has a pretty good ready-to-use pitch detector.
Here is an example for Android, but I think it doesn't require too many modifications to use it elsewhere.
I'm a fan of the FFT, but for the monophonic and fairly pure sinusoidal tones of whistling, a zero-cross detector would do a far better job of determining the actual frequency, at a much lower processing cost. Zero-cross detection is used in electronic frequency counters that measure the clock rate of whatever is being tested.
If you are going to analyze anything other than pure sine wave tones, then an FFT is definitely the way to go.
There is a very simple implementation of zero-cross detection in Java on GitHub; a minimal sketch of the idea follows.
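This is a rough sketch of the idea, not any particular library's implementation: it counts upward zero crossings in a mono buffer (assumed to hold normalized samples) and derives the frequency from the time between the first and last crossing:

public class ZeroCrossPitch {

    // Estimates the dominant frequency of a roughly sinusoidal mono signal.
    // samples are assumed to be in [-1, 1]; sampleRate is in Hz.
    public static double estimateFrequency(double[] samples, double sampleRate) {
        int crossings = 0;
        int first = -1, last = -1;
        for (int i = 1; i < samples.length; i++) {
            // count upward zero crossings only: one per period
            if (samples[i - 1] < 0 && samples[i] >= 0) {
                if (first < 0) first = i;
                last = i;
                crossings++;
            }
        }
        if (crossings < 2) return 0; // not enough periods to estimate
        // (crossings - 1) full periods elapsed between first and last crossing
        double seconds = (last - first) / sampleRate;
        return (crossings - 1) / seconds;
    }
}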
