How to calculate a confidence interval for detected signals that were matched by a time threshold in a time series?

Data & the goal. There are two time series (A and B): A is the exact signal; B is the same signal, just distorted and with a slightly random shift in its phase (delta) (see the image below). Consider the duration and timing of the signals as random. I worked out an algorithm to detect signals in the B data and aim to validate this detection against the signals in the A data. The list of B signals can be larger or smaller because of noise and other technical nuances. I have extracted lists of signals from each time series with their start and end times: lists of (A1, A2) and (B1, B2). I have matched signals by looking for the lowest absolute difference in their end timestamps (A2 and B2) and then checking whether the difference is below a threshold, e.g. 10 min.
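As a minimal sketch of that matching step (assuming NumPy; the function name and arguments are illustrative, with a_end and b_end being arrays of end times in minutes):

    import numpy as np

    def match_signals(a_end, b_end, threshold=10.0):
        """Match each B signal to the A signal with the nearest end time."""
        matches = []
        for j, b2 in enumerate(b_end):
            i = int(np.argmin(np.abs(a_end - b2)))  # nearest A end time
            if abs(a_end[i] - b2) <= threshold:     # accept only within threshold
                matches.append((i, j))
        return matches

Note that this greedy version can match the same A signal to several B signals; whether to allow that is a design choice.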
Problem. Now I want to estimate a confidence interval, or a similar statistical assessment, that could be used to interpret the results when I apply the signal detection with only data B (no data A for validation). I think the fact that I am using a threshold to match the signals makes the problem a bit particular, at least in my humble perception. Thus, I would welcome opinions, insights, and especially solutions.

Related

Flipping a three-sided coin

I have two related questions on population statistics. I'm not a statistician, but would appreciate pointers to learn more.
I have a process that results from flipping a three-sided coin (results: A, B, C), and I compute the statistic t = (A-C)/(A+B+C). In my problem, I have a set that randomly divides itself into sets X and Y, maybe uniformly, maybe not. I compute t for X and Y. I want to know whether the difference I observe in those two t values is likely due to chance or not.
Now if this were a simple binomial distribution (i.e., I'm just counting who ends up in X or Y), I'd know what to do: I compute n=|X|+|Y| and σ=sqrt(np(1-p)) (and I assume my p=.5), and then I compare to the normal distribution. So, for example, if I observed |X|=45 and |Y|=55, I'd say σ=5, so I expect a deviation from the mean μ=50 at most this large by chance 68.27% of the time. Alternatively, I expect a greater deviation from the mean 31.73% of the time.
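A quick sanity check of that arithmetic (a sketch assuming SciPy):

    import math
    from scipy.stats import norm

    n, p = 100, 0.5
    sigma = math.sqrt(n * p * (1 - p))   # 5.0
    z = abs(45 - n * p) / sigma          # 1.0 standard deviations
    print(2 * norm.sf(z))                # ~0.3173, the 31.73% quoted above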
There's an intermediate problem, which also interests me and which I think may help me understand the main problem, where I measure some property of the members of A and B. Let's say 25% in A measure positive and 66% in B measure positive. (A and B aren't the same cardinality -- the selection process isn't uniform.) I would like to know whether I should expect this difference by chance.
As a first draft, I computed t as though it were measuring coin flips, but I'm pretty sure that's not actually right.
Any pointers on what the correct way to model this is?
First problem
For the three-sided coin problem, have a look at the multinomial distribution. It's the distribution to use for a "binomial"-style problem with more than 2 outcomes.
Here is the example from Wikipedia (https://en.wikipedia.org/wiki/Multinomial_distribution):
Suppose that in a three-way election for a large country, candidate A received 20% of the votes, candidate B received 30% of the votes, and candidate C received 50% of the votes. If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate C in the sample?
Note: Since we’re assuming that the voting population is large, it is reasonable and permissible to think of the probabilities as unchanging once a voter is selected for the sample. Technically speaking this is sampling without replacement, so the correct distribution is the multivariate hypergeometric distribution, but the distributions converge as the population grows large.
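For what it's worth, that example can be checked directly with scipy.stats.multinomial:

    from scipy.stats import multinomial

    # P(exactly 1 supporter of A, 2 of B, 3 of C) in a sample of 6 voters
    print(multinomial.pmf([1, 2, 3], n=6, p=[0.2, 0.3, 0.5]))  # 0.135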
Second problem
The second problem looks like a problem for cross-tabs. Use the "chi-squared test for association" to test whether there is a significant association between your variables, and use the "standardized residuals" of your cross-tab to identify which of the associations are more likely to occur and which are less likely.
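As a sketch of that workflow (assuming SciPy; the counts in the table are made up for illustration):

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical 2x2 cross-tab: rows = group, columns = (positive, negative)
    table = np.array([[25, 75],
                      [66, 34]])

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(p_value)

    # Standardized (Pearson) residuals: cells with large absolute values
    # are the ones driving the association.
    residuals = (table - expected) / np.sqrt(expected)
    print(residuals)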

Are there other than FFT ways to implement Guitar Tuner?

I want to build a precise guitar tuner. This is usually done by computing an FFT and picking the peak, but that approach is of little use here, for several reasons:
Discrete precision: the frequency resolution is insufficient for tuning a bass guitar.
High computation time and complexity when trying to increase the buffer size (and/or sampling rate); this introduces visible delay (lag).
Most of the frequency range where all of the FFT's precision is concentrated goes unused: everything above 1-2 kHz is not applicable for tuning musical instruments.
There should be a simpler way for signals that have a single-frequency sinusoidal shape. Given a small enough buffer (say, 256 samples at a 96 kHz sampling rate), how could you measure the base (lowest) frequency?
In simple words: how do I find the frequency F such that the difference between a "sine signal of frequency F" and the "actually recorded signal" gives a smaller error than for any frequency other than F? (So we can definitely conclude that a sinusoid of frequency F is the best approximation of the recorded sound buffer.)
PS. Anything but the FFT!
Here is a simple approach based on zero crossings. It relies on being able to map the instrument signal to a simple sinusoid. It may work OK when the signal-to-noise ratio is high, but it is not a very robust method.
Band-pass filter around the fundamental frequency of the tone you want to tune for, e.g. 82.41 Hz for the low E string on a guitar.
Consider a window of the last N samples. Set it to, say, 100 ms to update the pitch estimate 10 times per second.
Perform zero-crossing detection with a threshold value T. T could be set to 10% of the signal peak, for example. Count the periods between successive zero crossings and collect them in an array.
Take the median of the periods to get your pitch estimate.
You can also compute the quantiles of the periods to estimate how reliable the method is: if they differ greatly from the median, the method is not working well. A sketch of these steps follows below.
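A minimal sketch of these steps (assuming NumPy/SciPy; the filter order, band edges, and the 10% threshold are illustrative):

    import numpy as np
    from scipy.signal import butter, filtfilt

    def estimate_pitch(x, fs, f_nominal=82.41, rel_threshold=0.1):
        """Zero-crossing pitch estimate; f_nominal is the target fundamental."""
        # 1. Band-pass around the expected fundamental (low E string here).
        b, a = butter(2, [0.5 * f_nominal, 1.5 * f_nominal], btype="band", fs=fs)
        x = filtfilt(b, a, x)

        # 2. Threshold at a fraction of the peak to reject noise near zero.
        t = rel_threshold * np.max(np.abs(x))

        # 3. Upward crossings of +t (one per cycle for a clean sinusoid).
        above = x > t
        crossings = np.flatnonzero(~above[:-1] & above[1:])
        if len(crossings) < 2:
            return None                      # not enough crossings to estimate

        # 4. Median of the periods between successive crossings.
        periods = np.diff(crossings) / fs
        return 1.0 / np.median(periods)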
The approach can be extended by computing autocorrelation on the zero-crossings, as described in
https://www.cycfi.com/2018/03/fast-and-efficient-pitch-detection-bitstream-autocorrelation/

using skewness to segment volume

My knowledge of statistics is minuscule, sorry. I have a large volume of measured amplitudes. In the absence of a signal, the noise is assumed to have a normal distribution. When a signal is present with higher amplitude than the surrounding noise, the distribution gets a heavier tail on the positive side. I was thinking of using skewness to detect the signal, but the region of higher amplitude (cells in the volume) is rather small compared to the volume itself: we are talking on the order of hundreds of cells out of some thousands. If the skewness is zero for a normal distribution, how can I extract those cells in my volume that contribute to the non-zero skewness? If, say, my skewness value is 0.5, is there a way to drop all cells and keep only those which raised the skewness value? Perhaps I sound unclear, but that just shows how little I understand of the topic.
Thanks in advance.
It seems to me that the problem might best be modeled as a mixture model: we have a Gaussian background
B ~ N(0, sigma)
and a signal, about which the poster has not specified a particular model.
If we can assume that the signal also takes the form of one Gaussian (or possibly a mixture of several), then Gaussian mixture modelling with the EM algorithm may be a good way to solve it (see Wikipedia).
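A minimal sketch of that idea with scikit-learn (the file name and the two-component assumption are illustrative):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    amplitudes = np.load("volume.npy").reshape(-1, 1)  # hypothetical volume data

    # Two components: Gaussian background + Gaussian signal, fit by EM.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(amplitudes)
    labels = gmm.predict(amplitudes)

    # Keep the cells assigned to the higher-mean component (the signal).
    signal_component = int(np.argmax(gmm.means_.ravel()))
    signal_cells = np.flatnonzero(labels == signal_component)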
A good paper in the context of segmentation is this here:
http://www.fil.ion.ucl.ac.uk/~karl/Unified%20segmentation.pdf
If we cannot make such an assumption, I would use a robust regression method to estimate the parameters of the Gaussian noise, where the signal is treated as outliers, e.g. least trimmed squares (again, see Wikipedia).
The outlier cells can then be found via (Bonferroni-corrected) hypothesis testing, as described e.g. in this paper:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2900857/

Note Onset Detection using Spectral Difference

I'm fairly new to onset detection. I have read some papers about it and know that when working only in the time domain, there can be a large number of false positives/negatives, and that it is generally advisable to work either with both the time domain and the frequency domain, or with the frequency domain alone.
Regarding this, I am a bit confused because I am having trouble understanding how the spectral energy or the results from the FFT bins can be used to determine note onsets. Aren't note onsets represented by sharp peaks in amplitude?
Can someone enlighten me on this? Thank you!
This is the easiest way to think about note onsets:
Think of a music signal as a flat, constant signal. When an onset occurs, you see a large, rapid CHANGE in the signal (a positive or negative peak).
What this means in the frequency domain:
The FT of a constant signal is, well, CONSTANT! And flat.
When the onset event occurs, there is a rapid increase in spectral content.
You may think, "Well, you're actually talking about the peak of the onset, right?" Not at all. We are not interested in the peak of the onset, but rather in the rising edge of the signal. When there is a sharp increase in the signal, the high-frequency content increases.
One way to exploit this is the spectral difference function:
Take your time-domain signal and cut it up into overlapping frames (typically 50% overlap).
Apply a Hamming/Hann window to each frame (this reduces spectral smearing; remember that cutting the signal into windows is like multiplying it by a pulse, which in the frequency domain is like convolving the spectrum with a sinc function).
Apply the FFT to two successive windows.
For each DFT bin, calculate the difference between bin k of frame n and bin k of frame n-1 (using magnitudes); if it is negative, set it to zero.
Square the results and sum all the bins together.
Repeat until the end of the signal.
Look for peaks in the resulting detection function using median thresholding, and there are your onset times! (A sketch of these steps follows below.)
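A minimal sketch of these steps (assuming NumPy; the frame and hop sizes are illustrative, with hop = frame_len // 2 giving 50% overlap):

    import numpy as np

    def spectral_difference(x, frame_len=1024, hop=512):
        """Half-wave-rectified spectral difference, one value per frame."""
        window = np.hanning(frame_len)
        n_frames = max(0, (len(x) - frame_len) // hop + 1)
        flux = np.zeros(n_frames)
        prev_mag = None
        for i in range(n_frames):
            frame = x[i * hop : i * hop + frame_len] * window
            mag = np.abs(np.fft.rfft(frame))
            if prev_mag is not None:
                diff = mag - prev_mag
                diff[diff < 0] = 0.0            # negative differences -> zero
                flux[i] = np.sum(diff ** 2)     # square and sum over bins
            prev_mag = mag
        return flux   # peak-pick this (e.g. median thresholding) for onsets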
Source:
https://adamhess.github.io/Onset_Detection_Nov302011.pdf
and
http://www.elec.qmul.ac.uk/people/juan/Documents/Bello-TSAP-2005.pdf
You can look at sharp differences in amplitude at a specific frequency as suspected sound onsets. For instance if a flute switches from playing a G5 to playing a C, there will be a sharp drop in amplitude of the spectrum at around 784 Hz.
If you don't know what frequency to examine, the magnitude of an FFT vector will give you the amplitude of every frequency over some window in time (with a resolution dependent on the length of the time window). Pick your frequency, or a bunch of frequencies, and diff two FFTs of two different time windows. That might give you something that can be used as part of a likelihood estimate for a sound onset or change somewhere between the two time windows. Sliding the windows or successive approximation of their location in time might help narrow down the time of a suspected note onset or other significant change in the sound.
"Because, aren't note onsets represented by sharp peaks in amplitude?"
A: Not always. On percussive instruments (including piano) this is true, but for violin, flute, etc. notes often "slide" into each other as frequency changes without sharp amplitude increases.
If you stick to a single instrument like the piano, onset detection is doable. Generalized onset detection is a much more difficult problem. There are about a dozen primitive features that have been used for onset detection; once you code them, you still have to decide how best to use them.

Identifying common periodic waveforms (square, sine, sawtooth, ...)

Without any user interaction, how would a program identify what type of waveform is present in a recording from an ADC?
For the sake of this question: triangle, square, sine, half-sine, or sawtooth waves of constant frequency. Level and frequency are arbitrary, and they will have noise, small amounts of distortion, and other imperfections.
I'll propose a few (naive) ideas, too, and you can vote them up or down.
You definitely want to start by taking an autocorrelation to find the fundamental.
With that, take one period (approximately) of the waveform.
Now take a DFT of that signal, and immediately compensate for the phase shift of the first bin (the first bin being the fundamental, your task will be simpler if all phases are relative).
Now normalise all the bins so that the fundamental has unity gain.
Now compare and contrast the rest of the bins (representing the harmonics) against a set of pre-stored waveshapes that you're interested in testing for. Accept the closest, and reject overall if it fails to meet some accuracy threshold determined by measurements of the noise floor. (A sketch of this pipeline follows below.)
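A rough sketch of this pipeline (assuming NumPy; it compares magnitudes only, skipping the phase-compensation step, the autocorrelation peak-pick is crude and would need refinement in practice, and the templates use the ideal harmonic ratios quoted further down in this thread):

    import numpy as np

    N_HARMONICS = 8
    TEMPLATES = {
        "sine":     np.array([1.0] + [0.0] * (N_HARMONICS - 1)),
        "sawtooth": np.array([1.0 / k for k in range(1, N_HARMONICS + 1)]),
        "square":   np.array([1.0 / k if k % 2 else 0.0
                              for k in range(1, N_HARMONICS + 1)]),
        "triangle": np.array([1.0 / k ** 2 if k % 2 else 0.0
                              for k in range(1, N_HARMONICS + 1)]),
    }

    def classify(x):
        x = x - np.mean(x)                  # remove DC so the ACF dips below 0
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]
        # Skip lags until the autocorrelation first goes negative; the next
        # maximum is then at the fundamental period.
        first_neg = int(np.argmax(ac < 0))
        lag = first_neg + int(np.argmax(ac[first_neg:]))

        # DFT of one period: bin k is the k-th harmonic.
        spectrum = np.abs(np.fft.rfft(x[:lag]))
        harmonics = spectrum[1:N_HARMONICS + 1]
        harmonics = harmonics / harmonics[0]    # fundamental -> unity gain

        # Closest pre-stored waveshape wins.
        return min(TEMPLATES,
                   key=lambda n: np.sum((harmonics - TEMPLATES[n]) ** 2))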
Do an FFT, find the odd and even harmonic peaks, and compare the rate at which they decrease to a library of peak ratios for common waveforms.
Perform an autocorrelation to find the fundamental frequency, measure the RMS level, find the first zero crossing, and then try subtracting common waveforms at that frequency, phase, and level. Whichever cancels out best (and by more than some threshold) wins.
This answer presumes no noise and that this is a simple academic exercise.
In the time domain, take the sample-by-sample difference of the waveform and histogram the results. If the distribution has a sharply defined peak (mode) at zero, it is a square wave. If it has a sharply defined peak at a positive value, it is a sawtooth. If it has two sharply defined peaks, one negative and one positive, it is a triangle. If it is broad and peaked at either side, it is a sine wave.
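A sketch of that test (assuming NumPy; the bin count is arbitrary):

    import numpy as np

    def diff_histogram(x, bins=50):
        """Histogram of sample-by-sample differences; inspect its peaks:
        one sharp peak at 0 -> square; one at a positive value -> sawtooth;
        two peaks (negative and positive) -> triangle; broad and peaked at
        the edges -> sine."""
        counts, edges = np.histogram(np.diff(x), bins=bins)
        centers = 0.5 * (edges[:-1] + edges[1:])
        return counts, centers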
Arm yourself with more information...
I am assuming that you already know that a theoretically perfect sine wave has no harmonic partials (i.e. only a fundamental)... but since you are going through an ADC, you can throw the idea of a theoretically perfect sine wave out the window... you have to fight against aliasing and determine which partials are "real" and which are artifacts... good luck.
The following information comes from this link about Csound.
(*) A sawtooth wave contains (theoretically) an infinite number of harmonic partials, each in the ratio of the reciprocal of the partial number. Thus, the fundamental (1) has an amplitude of 1, the second partial 1/2, the third 1/3, and the nth 1/n.
(**) A square wave contains (theoretically) an infinite number of harmonic partials, but only odd-numbered ones (1, 3, 5, 7, ...). The amplitudes are in the ratio of the reciprocal of the partial number, just as with sawtooth waves. Thus, the fundamental (1) has an amplitude of 1, the third partial 1/3, the fifth 1/5, and the nth 1/n.
I think that all of these answers so far are quite bad (including my own previous one...).
After having thought the problem through a bit more, I would suggest the following:
1) Take a 1-second sample of the input signal (it doesn't need to be that big, but it simplifies a few things).
2) Over the entire second, count the zero crossings. Half that count is the cps (cycles per second), so you now have the frequency of the oscillator (in case that's something you wanted to know; a sketch of steps 1-2 follows after this list).
3) Now take a smaller segment of the sample to work with: take precisely 7 zero crossings' worth. (If visualized, your work buffer should now look like one of the graphical representations you posted with the original question.) Use this small work buffer to perform the following tests. (Normalizing the work buffer at this point could make life easier.)
4) Test for a square wave: zero crossings of a square wave always involve very large differences; look for a large signal delta followed by little to no movement until the next zero crossing.
5) Test for a sawtooth wave: similar to the square wave, but the large signal delta will be followed by a linear, constant signal delta.
6) Test for a triangle wave: linear, constant (small) signal deltas. Find the peaks, divide by the distance between them, and calculate what the triangle wave should (ideally) look like; now test the actual signal for deviation. Set a deviation tolerance threshold and you can determine whether you are looking at a triangle or a sine (or something parabolic).
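Steps 1-2 might look like this (a sketch assuming NumPy; the sample rate and the 440 Hz stand-in signal are illustrative):

    import numpy as np

    fs = 48000                              # assumed sample rate
    t = np.arange(fs) / fs                  # one second of samples
    x = np.sin(2 * np.pi * 440.0 * t)       # stand-in for the ADC buffer

    signs = np.signbit(x).astype(np.int8)                 # sample signs
    crossings = np.count_nonzero(np.diff(signs))          # sign changes
    print(crossings / 2.0)                  # two crossings per cycle -> ~440 Hz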
First find the base frequency and the phase; you can do that with an FFT. Normalize the sample. Then subtract each sample from the corresponding sample of the waveform you want to test (same frequency and same phase). Square the results, add them all up, and divide by the number of samples. The smallest number indicates the waveform you seek.
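A minimal sketch of this subtract-and-score approach (assuming NumPy/SciPy; the candidate set and the peak normalization are illustrative):

    import numpy as np
    from scipy.signal import sawtooth, square

    def best_match(x, fs, freq, phase):
        """Score each candidate waveform by mean squared error; lowest wins."""
        x = x / np.max(np.abs(x))                  # normalize the sample
        arg = 2 * np.pi * freq * np.arange(len(x)) / fs + phase
        candidates = {
            "sine": np.sin(arg),
            "square": square(arg),                 # scipy.signal.square
            "sawtooth": sawtooth(arg),             # scipy.signal.sawtooth
            "triangle": sawtooth(arg, width=0.5),  # symmetric sawtooth
        }
        scores = {name: np.mean((x - w) ** 2) for name, w in candidates.items()}
        return min(scores, key=scores.get)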
