Let's say I have a data set and used matplotlib to draw a histogram of said data set.
n, bins, patches = plt.hist(data, normed=1)
How do I calculate the standard deviation, using the n and bins values that hist() returns? I'm currently doing this to calculate the mean:
s = 0
for i in range(len(n)):
    s += n[i] * ((bins[i] + bins[i+1]) / 2)
mean = s / numpy.sum(n)
which seems to work fine as I get pretty accurate results. However, if I try to calculate the standard deviation like this:
t = 0
for i in range(len(n)):
    t += (bins[i] - mean)**2
std = np.sqrt(t / numpy.sum(n))
my results are way off from what numpy.std(data) returns. Replacing the left bin limits with the central point of each bin doesn't change this either. I have the feeling that the problem is that the n and bins values don't actually contain any information on how the individual data points are distributed within each bin, but the assignment I'm working on clearly demands that I use them to calculate the standard deviation.
You haven't weighted the contribution of each bin with n[i]. Change the increment of t to
t += n[i]*(bins[i] - mean)**2
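For reference, here is a minimal corrected version of the whole calculation, assuming n and bins come from the plt.hist call in the question and using the bin midpoints (as in your mean calculation):
import numpy as np
mids = 0.5 * (bins[:-1] + bins[1:])      # bin midpoints
mean = np.sum(n * mids) / np.sum(n)      # weighted mean
t = 0
for i in range(len(n)):
    t += n[i] * (mids[i] - mean)**2      # weight each squared deviation by its bin count
std = np.sqrt(t / np.sum(n))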
By the way, you can simplify (and speed up) your calculation by using numpy.average with the weights argument.
Here's an example. First, generate some data to work with. We'll compute the sample mean, variance and standard deviation of the input before computing the histogram.
In [54]: x = np.random.normal(loc=10, scale=2, size=1000)
In [55]: x.mean()
Out[55]: 9.9760798903061847
In [56]: x.var()
Out[56]: 3.7673459904902025
In [57]: x.std()
Out[57]: 1.9409652213499866
I'll use numpy.histogram to compute the histogram:
In [58]: n, bins = np.histogram(x)
mids is the midpoints of the bins; it has the same length as n:
In [59]: mids = 0.5*(bins[1:] + bins[:-1])
The estimate of the mean is the weighted average of mids:
In [60]: mean = np.average(mids, weights=n)
In [61]: mean
Out[61]: 9.9763028267760312
In this case, it is pretty close to the mean of the original data.
The estimated variance is the weighted average of the squared difference from the mean:
In [62]: var = np.average((mids - mean)**2, weights=n)
In [63]: var
Out[63]: 3.8715035807387328
In [64]: np.sqrt(var)
Out[64]: 1.9676136767004677
That estimate is within 2% of the actual sample standard deviation.
The following answer is equivalent to Warren Weckesser's, but may be more familiar to those who prefer to think of the mean as an expected value:
counts, bins = np.histogram(x)
mids = 0.5*(bins[1:] + bins[:-1])
probs = counts / np.sum(counts)
mean = np.sum(probs * mids)
sd = np.sqrt(np.sum(probs * (mids - mean)**2))
Note that in some contexts you may want the unbiased sample variance, where the sum of squared deviations is normalized by N - 1 rather than N.
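If you do want that correction, a minimal sketch (reusing counts, mids and mean from the snippet above) is:
import numpy as np
N = np.sum(counts)                                          # total number of samples
var_unbiased = np.sum(counts * (mids - mean)**2) / (N - 1)  # N-1 normalization
sd_unbiased = np.sqrt(var_unbiased)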
Related
Following is the code for skewness and kurtosis in MATLAB:
clc; clear all
% Generate "N" data points
N = 1:1:2000;
% Set sampling frequency
Fs = 1000;
% Set time step value
dt = 1/Fs;
% Frequency of the signal
f = 5;
% Generate time array
t = N*dt;
% Generate sine wave
y = 10 + 5*sin(2*pi*f*t);
% Skewness
y_skew = skewness(y);
% Kurtosis
y_kurt = kurtosis(y);
The answer acquired in MATLAB is:
y_skew = 4.468686410415491e-15
y_kurt = 1.500000000000001 (Value is positive in MATLAB)
Now, below is the code in Python:
import numpy as np
from scipy.stats import skew
from scipy.stats import kurtosis
# Generate "N" data points
N = np.linspace(1,2000,2000)
# Set sampling frequency
Fs = 1000
# Set time step value
dt = 1/Fs
# Frequency of the signal
f = 5
# Generate time array
t = N*dt
# Generate sine wave
y = 10 + 5*np.sin(2*np.pi*f*t)
# Skewness
y_skew = skew(y)
# Kurtosis
y_kurt = kurtosis(y)
The answer acquired in Python is:
y_skew = -1.8521564287013977e-16
y_kurt = -1.5 (Value has turned out to be negative in Python)
Can somebody please explain why we get different answers for skewness and kurtosis in MATLAB and Python?
Specifically, in the case of kurtosis, the value has changed from positive to negative. Can somebody please help me understand this?
This is the difference between the Fisher and Pearson measure of kurtosis.
From the MATLAB docs:
Kurtosis is a measure of how outlier-prone a distribution is. The kurtosis of the normal distribution is 3. Distributions that are more outlier-prone than the normal distribution have kurtosis greater than 3; distributions that are less outlier-prone have kurtosis less than 3. Some definitions of kurtosis subtract 3 from the computed value, so that the normal distribution has kurtosis of 0. The kurtosis function does not use this convention.
From the scipy docs:
Kurtosis is the fourth central moment divided by the square of the variance. If Fisher’s definition is used, then 3.0 is subtracted from the result to give 0.0 for a normal distribution.
Note that Fisher's definition is used by default in scipy:
scipy.stats.kurtosis(a, axis=0, fisher=True, ...)
Your results would be equivalent if you used fisher=False in Python (or manually add 3) or subtracted 3 from your MATLAB result so that they were both using the same definition.
So it looks like the sign is being flipped, but that's just by chance since +1.5 - 3 = -1.5.
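A quick way to check this in Python (assuming y is the sine wave generated above) is to ask scipy for the Pearson definition directly:
from scipy.stats import kurtosis
# With fisher=False scipy returns the Pearson kurtosis, which should match
# MATLAB's kurtosis() output (about 1.5 for this signal).
y_kurt_pearson = kurtosis(y, fisher=False)
# Equivalently: kurtosis(y) + 3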
The difference in skewness appears to be due to numerical precision, since both results are basically 0. Please see Why is 24.0000 not equal to 24.0000 in MATLAB?
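To make this concrete, both reported skewness values are within a small multiple of double-precision machine epsilon of zero, so they are numerically indistinguishable from 0:
import numpy as np
eps = np.finfo(float).eps                           # about 2.22e-16
print(abs(-1.8521564287013977e-16) < 10 * eps)      # True
print(abs(4.468686410415491e-15) < 100 * eps)       # True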
I'm running a physics simulation related to visible light, and the resulting wave function has a very, very high frequency -- cyclic frequency is on the order of 1.0e15, and the spatial frequency k is on the order of 1.0e7. Thankfully, I only use the spatial frequency, but when I calculate it for later usage (using either math or numpy), I get something that resembles a beat wave, unless I use N ~= k sample points, because I have to calculate it over a much greater range (on the order of 1.0e-3 - 1.0e-1). It produces a beat wave so consistently I spent a few hours to make sure I'm not actually calculating one. I'll also have to use fft() on the resulting wave and I'm afraid it won't work properly with a misrepresented wave.
I've tried using various amounts of sample points, but unless it's extraordinarily high (takes a good minute or two to calculate), only the prominence of beating changes. Just in case I'm misusing numpy, I tried the same thing with appending wave.value calculated by math.sin to a float array, but it had the same result.
import numpy as np
import matplotlib.pyplot as plt
mmScale = 1.0e-3
nmScale = 1.0e-9
c = 3.0e8
N = 1000
class Wave:
    def __init__(self, amplitude, wavelength):
        self.wavelength = wavelength*nmScale
        self.amplitude = amplitude
        self.omega = 2*np.pi*c/self.wavelength
        self.k = 2*np.pi/self.wavelength

    def value(self, time, travel):
        return self.amplitude*np.sin(self.omega*time - self.k*travel)
x = np.linspace(50, 250, N)*mmScale
wave = Wave(1, 400)
y = wave.value(0.1, x)
plt.plot(x,y)
plt.show()
The code above produces a graph of the function, and you can put in different values for N to see how it gives different waveforms.
Your sampling spatial frequency is:
1/Ts = N / ((250 - 50)*mmScale) = 1000 / 0.2 = 5000 [samples/meter]
Your wave's spatial frequency is:
1/Tw = 1 / wavelength = 1 / (400e-9) = 2500000 [wavelengths/meter]
You fail to satisfy the Nyquist criterion by a factor of (2 * 2500000) / 5000 = 1000.
Thus you must expect serious aliasing effects. See https://en.wikipedia.org/wiki/Aliasing.
Not much can be done to battle it, but there are some tricks that may help you depending on the application. One is to represent the wave as a complex envelope around the carrier (here the 400e-9 m wavelength). Please provide more detail on what you do with the wave.
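As a rough sketch of the sampling requirement (using the numbers above), you can compute the minimum number of samples needed to satisfy Nyquist over the 50-250 mm range:
import numpy as np
mmScale = 1.0e-3
wavelength = 400e-9                      # 400 nm, as in the question
span = (250 - 50) * mmScale              # spatial extent being sampled, in meters
spatial_freq = 1.0 / wavelength          # wavelengths per meter
# Nyquist: need more than 2 samples per wavelength across the whole span
N_min = int(np.ceil(2 * spatial_freq * span))
print(N_min)                             # 1000000 samples for this range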
I have a histogram of sorted random numbers and a Gaussian overlay. The histogram represents observed values per bin (applying this base case to a much larger dataset) and the Gaussian is an attempt to fit the data. Clearly, this Gaussian does not represent the best fit to the histogram. The code below is the formula for a Gaussian.
normc, mu, sigma = 30.845, 50.5, 7 # normalization constant, avg, stdev
gauss = lambda x: normc * exp( (-1) * (x - mu)**2 / ( 2 * (sigma **2) ) )
I calculated the expectation values per bin (area under the curve) and calculated the number of observed values per bin. There are several methods to find the 'best' fit. I am concerned with the best fit possible by minimizing Chi-Squared. In this formula for Chi-Squared, the expectation value is the area under the curve per bin and the observed value is the number of occurrences of sorted data values per bin. So I want to fluctuate normc, mu, and sigma near their given values to find the right combination of normc, mu, and sigma that produce the smallest Chi-Square, as these will be the parameters I can plug into the code above to overlay the best fit Gaussian on my histogram. I am trying to use the scipy module to minimize my Chi-Square as done in this example. Since I need to fluctuate parameters, I will use the function gauss (defined above) to plot the Gaussian overlay, and will define a new function to find the minimum Chi-Squared.
def gaussmin(var, data):
    # var[0] = normc
    # var[1] = mu
    # var[2] = sigma
    # data is the sorted random numbers, represents unbinned observed values
    for index in range(len(data)):
        return var[0] * exp( (-1) * (data[index] - var[1])**2 / ( 2 * (var[2]**2) ) )
    # I realize this will return a new value for each index of data, any guidelines to fix?
After this, I am stuck. How can I fluctuate the parameters to find the normc, mu, sigma that produced the best fit? My last attempt at a solution is below:
var = [normc, mu, sigma]
result = opt.minimize(chi2, [normc,mu,sigma])
# chi2 is the chisquare value obtained via scipy
# chisquare input (a,b)
# where a is number of occurences per bin, b is expected value per bin
# b is dependent upon normc, mu, sigma
print(result)
# data is a list, can I keep it as a constant and only fluctuate parameters in var?
There are plenty of examples online for scalar functions, but I cannot find any for functions of several parameters.
PS - I can post my full code so far, but it's a bit lengthy. If you would like to see it, just ask and I can post it here or provide a Google Drive link.
A Gaussian distribution is completely characterized by its mean and variance (or std deviation). Under the hypothesis that your data are normally distributed, the best fit will be obtained by using x-bar as the mean and s-squared as the variance. But before doing so, I'd check whether normality is plausible using, e.g., a q-q plot.
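For what it's worth, a minimal sketch of that approach (assuming data holds the unbinned values) could look like:
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# Fit under the normality hypothesis: use the sample mean and std directly
mu_hat = np.mean(data)
sigma_hat = np.std(data, ddof=1)   # unbiased estimate of the standard deviation
# Q-Q plot against a normal distribution to check whether normality is plausible
stats.probplot(data, dist="norm", plot=plt)
plt.show()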
I've generated a plot of the attenuation seen in an electrical trace up to a frequency of 14e10 rad/s. The ydata ranges from approximately 1-10 Np/m. I'm trying to generate a fit of the form
y = A*sqrt(x) + B*x + C*x^2.
I expect A to be around 10^-6, B to be around 10^-11, and C to be around 10^-23. However, the smallest coefficient lsqcurvefit will return is 10^-7. Also, it will only return a nonzero coefficient for A, while returning 0 for B and C. The fit actually looks really good, but the physics indicates that B and C should not be 0.
Here is how I'm calling the function
% measurement estimate
x_alpha = [1e-6 1e-11 1e-23];
lb = [1e-7, 1e-13, 1e-25];
ub = [1e-3, 1e-6, 1e-15];
x_alpha = lsqcurvefit(@modelfun, x_alpha, omega, alpha_t, lb, ub)
Here is the model function
function [ yhat ] = modelfun( x, xdata )
yhat = x(1)*xdata.^.5 + x(2)*xdata + x(3)*xdata.^2;
end
Is it possible to get lsqcurvefit to return such small coefficients? Is the error in rounding or is it something else? Any ways I can change the tolerance to see a fit closer to what I expect?
Found a stackoverflow page that seems to address this issue!
fit using lsqcurvefit
When we train a CTR (click-through rate) model, we sometimes need to calculate the real ctr from historical data, like this:
ctr = #(clicks) / #(impressions)
We know that if the number of impressions is too small, the calculated ctr is not reliable, so we usually set a threshold and only keep entries with enough impressions.
But we also know that the more impressions, the higher the confidence in the ctr. So my question is: is there an impressions-normalized statistical method to calculate the ctr?
Thanks!
You probably need a representation of confidence interval for your estimated ctr. Wilson score interval is a good one to try.
You need below stats to calculate the confidence score:
p̂ (phat in the code) is the observed ctr (the fraction #clicks / #impressions)
n is the total number of impressions
z_{α/2} is the (1 − α/2) quantile of the standard normal distribution
A simple implementation in Python is shown below; I use z_{α/2} = 1.96, which corresponds to a 95% confidence interval. I attached 3 test results at the end of the code.
# clicks    # impressions    # conf interval
       2               10    (0.07, 0.45)
      20              100    (0.14, 0.27)
     200             1000    (0.18, 0.22)
Now you can set up some threshold to use the calculated confidence interval.
from math import sqrt

def confidence(clicks, impressions):
    n = impressions
    if n == 0:
        return 0
    z = 1.96  # 1.96 -> 95% confidence
    phat = float(clicks) / n
    denom = 1. + (z*z/n)
    enum1 = phat + z*z/(2*n)
    enum2 = z * sqrt(phat*(1-phat)/n + z*z/(4*n*n))
    return (enum1 - enum2) / denom, (enum1 + enum2) / denom

def wilson(clicks, impressions):
    if impressions == 0:
        return 0
    else:
        return confidence(clicks, impressions)

if __name__ == '__main__':
    print(wilson(2, 10))
    print(wilson(20, 100))
    print(wilson(200, 1000))

"""
--------------------
results:
(0.07048879557839793, 0.4518041980521754)
(0.14384999046998084, 0.27112660859398174)
(0.1805388068716823, 0.22099327100894336)
"""
If you treat this as a binomial parameter, you can do Bayesian estimation. If your prior on ctr is uniform (a Beta distribution with parameters (1, 1)), then your posterior is Beta(1 + #clicks, 1 + #impressions − #clicks). Your posterior mean is (#clicks + 1) / (#impressions + 2) if you want a single summary statistic of this posterior, but you probably don't, and here's why:
I don't know what your method is for determining whether the ctr is high enough, but let's say you're interested in everything with ctr > 0.9. You can then use the cumulative distribution function of the Beta distribution to look at what proportion of probability mass is over the 0.9 threshold (this will just be 1 minus the CDF at 0.9). In this way, your threshold will naturally incorporate uncertainty about the estimate due to the limited sample size.
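A minimal sketch of this idea with scipy (the counts here are just placeholders):
from scipy.stats import beta
clicks, impressions = 20, 100                          # example counts
# Posterior under a uniform Beta(1, 1) prior
posterior = beta(1 + clicks, 1 + impressions - clicks)
posterior_mean = posterior.mean()                      # (clicks + 1) / (impressions + 2)
prob_above_threshold = 1 - posterior.cdf(0.9)          # P(ctr > 0.9 | data)
print(posterior_mean, prob_above_threshold)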
There are many ways to calculate this confidence interval. An alternative to the Wilson score is the Clopper-Pearson interval, which I found useful in spreadsheets.
Upper bound: B(1 − α/2; x + 1, n − x)
Lower bound: B(α/2; x, n − x + 1)
where
B() is the inverse of the Beta cumulative distribution function (the Beta quantile function)
alpha is the confidence-level error (e.g. for a 95% confidence level, alpha is 5%)
n is the number of samples (e.g. impressions)
x is the number of successes (e.g. clicks)
In Excel an implementation for B() is provided by the BETA.INV formula.
There is no equivalent formula for B() in Google Sheets, but a Google Apps Script custom function can be adapted from a JavaScript statistical library (e.g. search GitHub for jstat).
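If you are working in Python rather than a spreadsheet, a minimal sketch of the same bounds using scipy's inverse Beta CDF (with x clicks out of n impressions) would be:
from scipy.stats import beta
def clopper_pearson(x, n, alpha=0.05):
    # Exact (Clopper-Pearson) confidence interval for a binomial proportion
    lower = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lower, upper
print(clopper_pearson(200, 1000))   # roughly (0.18, 0.23) at 95% confidence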