Check if numbers form a bell curve (Gaussian distribution) in Python 3

I've got files with irradiance data measured every minute, 24 hours a day.
On a day without any clouds in the sky, the data shows a nice continuous bell curve.
When looking for a cloudless day in the data, I have always plotted month after month with gnuplot and checked for nice bell curves by eye.
I was wondering if there's a Python way to check whether the irradiance measurements form a continuous bell curve.
I don't know if the question is too vague, but I'm simply looking for some ideas on that quest :-)

For a normal distribution, there are normality tests.
In short, we abuse some knowledge we have of what normal distributions look like to identify them.
The kurtosis of any normal distribution is 3. Compute the kurtosis of your data and it should be close to 3.
The skewness of a normal distribution is zero, so your data should have a skewness close to zero.
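A minimal sketch of both checks with SciPy (note that scipy.stats.kurtosis returns excess kurtosis by default, i.e. roughly 0 for normal data; pass fisher=False to get the value that should be near 3):
from scipy import stats
import numpy as np

data = np.random.normal(size=1000)           # synthetic stand-in for one day of measurements
print(stats.skew(data))                      # should be close to 0 for normal data
print(stats.kurtosis(data, fisher=False))    # should be close to 3 (Pearson definition)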
More generally, you could compute a reference distribution and use a Bregman divergence to assess the difference (divergence) between the distributions. Bin your data, create a histogram, and start with the Jensen-Shannon divergence.
With the divergence approach, you can compare to an arbitrary distribution. You might record a thousand sunny days and check whether the divergence between a reference sunny day and your measured day is below some threshold.
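As a hedged sketch of that idea, treating each day's normalized irradiance profile as a discrete distribution over the minutes of the day (the reference profile below is a synthetic stand-in, not real clear-sky data):
import numpy as np
from scipy.spatial.distance import jensenshannon

minutes = np.arange(1440)                                   # one reading per minute
reference = np.exp(-0.5 * ((minutes - 720) / 180.0) ** 2)   # idealized clear-sky bell curve (placeholder)
measured = reference * np.random.uniform(0.7, 1.0, 1440)    # hypothetical partly-cloudy day

# jensenshannon normalizes its inputs and returns the square root of the
# Jensen-Shannon divergence: 0 means identical shapes, larger means more deviation.
distance = jensenshannon(reference, measured)
print(distance, distance < 0.05)                            # example threshold; tune it on known clear days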

Just to complement the given answer with a code example: one can use a Kolmogorov-Smirnov test to obtain a measure for the "distance" between two distributions. SciPy offers a neat interface for this, called kstest:
from scipy import stats
import numpy as np
data = np.random.normal(size=100) # Our (synthetic) dataset
D, p = stats.kstest(data, "norm") # Perform a (two-sided) Kolmogorov-Smirnov test against the standard normal
In the above example, D denotes the distance between our data and a standard normal (norm) distribution (smaller is better), and p denotes the corresponding p-value. Other distributions can be tested similarly by substituting norm with any of those implemented in scipy.stats.
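One caveat worth adding: kstest(data, "norm") compares against the standard normal with mean 0 and standard deviation 1, so real-world measurements should be standardized (or the fitted parameters passed via args) first. A minimal sketch with made-up values:
from scipy import stats
import numpy as np

raw = np.random.normal(loc=450.0, scale=120.0, size=100)   # hypothetical irradiance-like values
standardized = (raw - raw.mean()) / raw.std(ddof=1)
D, p = stats.kstest(standardized, "norm")
# Equivalent alternative: stats.kstest(raw, "norm", args=(raw.mean(), raw.std(ddof=1)))
print(D, p)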

Related

Poisson distribution transformation

I'm quite new to biostatistics so I apologize if my question is too dumb.
I'm studying data transformation in biostatistics to fit my data to the normal distribution.
I started with the Poisson distribution (which is quite common in biostatistics: daily admissions, prevalence of rare diseases, etc.). The square-root transformation is commonly recommended to bring such data closer to a normal distribution.
I used Stata and this free dataset (https://www.kaggle.com/datasets/martj42/international-football-results-from-1872-to-2017?resource=download) with the results of a huge number of football matches.
I created a new variable for this dataset: the total number of goals scored by both teams in each match. Its distribution looks as follows:
We can see that the distribution approximates a Poisson distribution quite well, as suggested by the values of the mean and standard deviation.
Then I created a new variable with the square root of this total, and its distribution is the following (the blue line shows how a normal distribution with the same mean and standard deviation looks):
As you can see, it's still quite far from a normal distribution, as shown by normality tests but also easily visible in the Q-Q plot:
So, my question is: why didn't the square root work? What can I do to transform my dataset so that it fits a normal distribution?
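A minimal sketch in Python of what is going on (lam=2.6 is a made-up stand-in for the average total goals per match, not a fitted value): the square-root transform stabilizes the variance of Poisson counts at roughly 1/4, but it cannot turn a discrete, low-mean count variable into a truly normal one, which is why normality tests still reject.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
goals = rng.poisson(lam=2.6, size=40000)    # hypothetical stand-in for total goals per match
transformed = np.sqrt(goals)                # variance-stabilizing square-root transform

print(goals.mean(), goals.var())            # mean ≈ variance, as expected for Poisson data
print(transformed.var())                    # ≈ 0.25: the variance is stabilized...
print(stats.normaltest(transformed))        # ...but the transformed counts are still clearly non-normal
Variance-stabilizing variants such as sqrt(x + 3/8) help a little with low counts, but no transform will make a heavily tied, discrete variable exactly normal.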

How to generate a random number from a weird distribution

I study a problem of a random walk with drift and an absorbing boundary. The system is well understood theoretically. My task is to simulate it numerically, in particular to generate random numbers from this distribution, see the formula. It is the distribution of the coordinate x at time t given the starting point x_0, the noise intensity \sigma and the drift \mu. The question is how to generate random numbers from this distribution. I can of course use inverse transform sampling, but it is slow. Maybe I can make use of the fact that the probability density function is the difference of two Gaussian functions? Can I somehow relate my distribution to the normal distribution?
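Since the formula itself is not reproduced here, the sketch below uses an explicitly made-up target_pdf (a clipped difference of two Gaussian terms) just to illustrate one generic option, rejection sampling, which avoids inverting the CDF: if one of the Gaussians bounds the density from above, it can serve as the proposal/envelope. Substitute the real density and a matching envelope.
import numpy as np
from scipy import stats

def target_pdf(x):
    # Hypothetical placeholder: a difference of two Gaussian terms, clipped at zero.
    # Replace this body with the actual density from the problem.
    g1 = stats.norm.pdf(x, loc=2.0, scale=1.0)
    g2 = 0.5 * stats.norm.pdf(x, loc=-2.0, scale=1.0)
    return np.clip(g1 - g2, 0.0, None)

def sample(n, rng=np.random.default_rng()):
    # Rejection sampling: here target_pdf(x) <= N(x; 2, 1) pointwise, so the first
    # Gaussian is a valid envelope with constant M = 1.
    out = []
    while len(out) < n:
        x = rng.normal(loc=2.0, scale=1.0, size=n)
        u = rng.uniform(size=n)
        keep = u * stats.norm.pdf(x, loc=2.0, scale=1.0) <= target_pdf(x)
        out.extend(x[keep])
    return np.array(out[:n])

print(sample(5))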

Scale before PCA

I'm using PCA from scikit-learn and I'm getting some results which I'm trying to interpret, so I ran into a question: should I subtract the mean (or perform standardization) before using PCA, or is this somehow embedded into the sklearn implementation?
Moreover, which of the two should I perform, if so, and why is this step needed?
I will try to explain it with an example. Suppose you have a dataset that includes a lot of features about housing, and your goal is to classify whether a purchase is good or bad (a binary classification). The dataset includes some categorical variables (e.g. location of the house, condition, access to public transportation, etc.) and some float or integer numbers (e.g. market price, number of bedrooms, etc.). The first thing you may do is encode the categorical variables. For instance, if you have 100 locations in your dataset, the common way is to encode them from 0 to 99. You may even end up encoding these variables in a one-hot fashion (i.e. a column of 1s and 0s for each location), depending on the classifier you are planning to use.

Now, if you use the price in millions of dollars, the price feature would have a much higher variance and thus a higher standard deviation. Remember that we use the squared difference from the mean to calculate the variance: a bigger scale creates bigger values, and the square of a big value grows faster. But that does not mean the price carries significantly more information than, for instance, the location. In this example, however, PCA would give a very high weight to the price feature, and the weights of the categorical features would almost drop to 0. If you normalize your features, it provides a fair comparison of the explained variance in the dataset. So it is good practice to normalize the mean and scale the features before using PCA.
Before PCA, you should:
1. Mean normalize (ALWAYS)
2. Scale the features (if required)
Note: please remember that steps 1 and 2 are not technically the same.
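A minimal sketch of both steps with scikit-learn (note that sklearn's PCA already subtracts the per-feature mean internally, so the explicit scaler mainly matters for the unit-variance step; the data here is made up):
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.random.rand(200, 5) * [1, 10, 100, 1000, 10000]         # made-up features on wildly different scales

X_std = StandardScaler().fit_transform(X)                       # steps 1 + 2: center and scale to unit variance
X_centered = StandardScaler(with_std=False).fit_transform(X)    # step 1 only: center without scaling

print(PCA(n_components=2).fit(X_std).explained_variance_ratio_)
print(PCA(n_components=2).fit(X_centered).explained_variance_ratio_)   # dominated by the largest-scale feature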
This is a really non-technical answer, but my method is to try both and then see which one accounts for more variation on PC1 and PC2. However, if the attributes are on different scales (e.g. cm vs. feet vs. inches), then you should definitely scale to unit variance. In every case, you should center the data.
Here's the iris dataset with centering only and with centering + scaling. In this case, centering alone led to higher explained variance, so I would go with that one. The data comes from sklearn.datasets.load_iris. Then again, with centering only, PC1 carries most of the weight, so I wouldn't consider patterns found in PC2 significant. On the other hand, with centering + scaling the weight is split between PC1 and PC2, so both axes should be considered.
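For reference, a short sketch of that iris comparison (the ratios in the comments are approximate):
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale

X = load_iris().data
pca = PCA(n_components=2)
print(pca.fit(X).explained_variance_ratio_)          # centering only (PCA centers internally): PC1 ≈ 0.92
print(pca.fit(scale(X)).explained_variance_ratio_)   # centering + scaling: weight split ≈ 0.73 / 0.23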

Statistical test for samples that follow a normal distribution, with each sample having multiple measurements?

I have a set of samples (i = 1 : n), each measured 10 times for a specific metric.
The mean of the 10 measurements for sample i is mu(i).
I've run DBSCAN clustering on all the mu(i) to find the outlier samples. Now I want to test whether a given outlier is statistically different from the core samples.
The samples appear to follow a normal distribution. For each sample, the 10 measurements also appear to follow a normal distribution.
If I just use mu(i) as the metric for each sample, I can easily calculate a Z-score and p-value based on the normal distribution. My question is: how do I make use of the 10 measurements for each sample to add to my statistical power (is that possible)?
I'm not very good at statistics, so anything would help. Thanks in advance...
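One possible direction, sketched here purely as an assumption about what might fit this setup (the data below is simulated): keep the Z-score on the mu(i) as the between-sample test, and bring the 10 replicates in by testing the outlier's raw measurements against the core-sample means, e.g. with Welch's t-test. Whether that is appropriate depends on how the within-sample variation compares to the between-sample variation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
core = rng.normal(10.0, 1.0, size=(50, 10))      # hypothetical: 50 core samples x 10 measurements each
outlier = rng.normal(13.0, 1.0, size=10)         # hypothetical outlier sample's 10 measurements

core_means = core.mean(axis=1)                   # the mu(i) values

# Z-score of the outlier's mean against the spread of core means (what the question already does).
z = (outlier.mean() - core_means.mean()) / core_means.std(ddof=1)
p_z = 2 * stats.norm.sf(abs(z))

# One way to use the 10 replicates: Welch's t-test of the outlier's raw
# measurements against the core-sample means.
t, p_t = stats.ttest_ind(outlier, core_means, equal_var=False)
print(z, p_z, t, p_t)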

Statistical Significance in terms of Gaussian Sigma

I'm working on an issue in my research where I would like to express the statistical significance of a correlation peak in terms of sigma of a normal distribution. For example, if my peak were at 95% significance, it would be at 2 sigma. Essentially what I'm asking is: say I have an arbitrary peak significance (e.g. 92%), how would I express this in terms of sigma of a normal distribution? I realize this is a more general statistics question, so any reading/background is encouraged. Or if Python has a straightforward function to convert/compute this, that works too.
Thanks!
I'm not sure what you mean by "statistical significance of a correlation peak," so I can't comment on whether the statistics you're talking about make sense. However, it sounds like you'd like to calculate the following: how many standard deviations from the mean (say 1.96 sigma) cover a given fraction (in this case, 0.95) of the normal distribution? If this is what you're asking, you can use the SciPy statistics library to solve it easily. If you don't have SciPy already, you'll need to install it first.
Once you have SciPy installed, you'll want to use the inverse survival function (ISF) of the normal distribution. The ISF is the inverse of the survival function, which itself is 1 - CDF. Here's how you do it in Python:
In [1]: import scipy.stats as st
In [2]: yourArea = 0.95
In [3]: st.norm.isf((1-yourArea)/2.)
Out[3]: 1.959963984540054
So that's how you calculate the number I believe you want. The (1-A)/2 business just accounts for the fact that the CDF integrates from -infinity, whereas you're interested in values measured from the center of the distribution.
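As a quick usage note for an arbitrary two-sided significance level such as 92% (and the reverse conversion back from sigma):
import scipy.stats as st

area = 0.92
sigma = st.norm.isf((1 - area) / 2.0)    # ≈ 1.75 sigma for a two-sided 92% region
back = 1 - 2 * st.norm.sf(sigma)         # reverse: fraction of the distribution within +/- sigma
print(sigma, back)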
