Which confidence interval is better? - statistics

I have two confidence intervals for a study, one based on the gamma distribution and the other based on the normal distribution. I wish to know which CI is used more often, and in which cases.
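The question doesn't say what quantity is being estimated, but as one hedged illustration of a setting where both intervals arise (the Poisson setup below is an assumption, not from the question): for an observed Poisson count, the exact CI is built from gamma quantiles, while the large-sample CI uses the normal approximation. A minimal Python sketch:

```python
import numpy as np
from scipy import stats

def poisson_cis(x, alpha=0.05):
    """Exact (gamma-based) and normal-approximation CIs for a Poisson
    mean, given an observed count x.  Illustrative only: the Poisson
    setting is an assumption about the unstated study design."""
    # Exact (Garwood) interval via gamma quantiles.
    lower_g = stats.gamma.ppf(alpha / 2, x) if x > 0 else 0.0
    upper_g = stats.gamma.ppf(1 - alpha / 2, x + 1)
    # Large-sample normal approximation: x +/- z * sqrt(x).
    z = stats.norm.ppf(1 - alpha / 2)
    lower_n = x - z * np.sqrt(x)
    upper_n = x + z * np.sqrt(x)
    return (lower_g, upper_g), (lower_n, upper_n)

print(poisson_cis(12))  # the gamma CI is asymmetric; the normal CI is symmetric
```

The gamma interval respects the positivity and skew of a count; the normal interval is the common choice when counts are large enough for the approximation to hold.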

Related

Predictive Distribution of Time series with Uncertain Future Values

In machine learning, and especially in the turning point detection problem, it is important to have the best estimate of the probability density function (PDF) of future samples. Let's say that we have $\{x_1, \cdots, x_n\}$ as a time series, probably a Gaussian one, with $f_{X_1, \cdots, X_n}(x_1, \cdots, x_n)$ as its joint distribution function. We want to estimate the predictive distribution of the next $k$ uncertain samples given the previous $n$ samples, with or without the ARMA/ARIMA assumptions, i.e., we are looking for $f_{X_{n+1}, \cdots, X_{n+k}}(x_{n+1}, \cdots, x_{n+k} \mid x_1, \cdots, x_n)$. Under the normal distribution assumption, what would be the mean and variance of the predictive distribution, and how could we estimate them from the known $n$ samples?
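A hedged sketch for the simplest case, assuming an AR(1) model (the question leaves the model open): the $h$-step predictive distribution is Gaussian with mean $\mu + \phi^h (x_n - \mu)$ and variance $\sigma_e^2 (1 - \phi^{2h}) / (1 - \phi^2)$, and both parameters can be estimated from the observed samples:

```python
import numpy as np

def ar1_predictive(x, k):
    """Gaussian predictive mean/variance for the next k steps of an
    AR(1) process, with parameters estimated by method of moments.
    A sketch under the AR(1) assumption; a real ARMA/ARIMA fit
    (e.g. via statsmodels) would generalize this."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    xc = x - mu
    phi = np.dot(xc[:-1], xc[1:]) / np.dot(xc[:-1], xc[:-1])  # lag-1 regression slope
    sigma2_e = (1 - phi**2) * xc.var()                        # innovation variance
    h = np.arange(1, k + 1)
    mean = mu + phi**h * (x[-1] - mu)                   # E[x_{n+h} | x_1..x_n]
    var = sigma2_e * (1 - phi**(2 * h)) / (1 - phi**2)  # Var[x_{n+h} | x_1..x_n]
    return mean, var

# Example: simulate an AR(1) series and forecast 5 steps ahead.
rng = np.random.default_rng(0)
x = [0.0]
for _ in range(499):
    x.append(0.7 * x[-1] + rng.normal())
mean, var = ar1_predictive(x, k=5)
print(mean, var)
```

Note how the predictive mean decays toward $\mu$ and the predictive variance grows toward the stationary variance as $h$ increases.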

Statistical test for samples that follow a normal distribution, with each sample having multiple measurements?

I have a set of samples (i = 1, ..., n), each measured for a specific metric 10 times.
The 10 measurements for each sample have a mean mu(i).
I've done DBSCAN clustering on all the mu values to find the outlier samples. Now I want to test whether a given outlier is statistically different from the core samples.
The sample means appear to follow a normal distribution. For each sample, the 10 measurements also appear to follow a normal distribution.
If I just use mu(i) as the metric for each sample, I can easily calculate a Z-score and p-value based on the normal distribution. My question is: how do I make use of the 10 measurements for each sample to add to my statistical power (is it possible?)
Not very good at statistics, anything would help, thanks in advance...
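A minimal sketch of the Z-score computation the question describes, plus one hedged way to use the replicates (the data and the Welch-test idea are illustrative, not from the question):

```python
import numpy as np
from scipy import stats

# meas[i] holds the 10 replicate measurements of sample i (made-up data).
rng = np.random.default_rng(1)
core_meas = rng.normal(loc=5.0, scale=1.0, size=(50, 10))
outlier_meas = rng.normal(loc=9.0, scale=1.0, size=10)

# (a) The Z-score / p-value the asker describes: the outlier's mean
#     against the distribution of core means.
core_mu = core_meas.mean(axis=1)
z = (outlier_meas.mean() - core_mu.mean()) / core_mu.std(ddof=1)
p = 2 * stats.norm.sf(abs(z))

# (b) One way to use the 10 replicates: a Welch t-test of the outlier's
#     measurements against the pooled core measurements.  Whether this is
#     appropriate depends on whether within- and between-sample variance
#     should be separated (a mixed model would do that properly).
t, p_t = stats.ttest_ind(outlier_meas, core_meas.ravel(), equal_var=False)
print(z, p, t, p_t)
```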

Determine the distribution for a number list

I have a list of numbers. Below are some basic statistics:
N > 1000
Max: 9.24
Min: 0.00955
Mean: 1.84932
Median: 0.97696
It seems that the data is right skewed, i.e. many small numbers and a few very large numbers.
I want to find a distribution that generalizes these numbers. The normal, gamma, and Laplace distributions all look possible. How do I determine which distribution fits best?
I have to say that I usually do it the same way you did, by plotting the data and looking at its shape.
To be more rigorous, and only for the normal distribution, I perform the Shapiro-Wilk test for normality. At best this tells me that the null hypothesis was not rejected, which means it was not possible to show that the data do not follow a normal distribution. Usually, this is more than acceptable in scientific settings.
I know equivalent tests exist for the Laplace and gamma distributions, although they are still the subject of fairly recent research like this. There are also many sites that offer the Shapiro-Wilk test online, like this one.
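A minimal sketch of the Shapiro-Wilk test in Python (the data here are a made-up stand-in for the skewed numbers in the question):

```python
import numpy as np
from scipy import stats

# Stand-in for the right-skewed data described in the question.
rng = np.random.default_rng(2)
data = rng.gamma(shape=1.2, scale=1.5, size=1000)

stat, p = stats.shapiro(data)
# A small p-value rejects normality; a large one only means normality
# could not be ruled out, as the answer notes.
print(f"W = {stat:.4f}, p = {p:.2e}")
```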
With all positive values and the mean being about double the median, your data are definitely skewed right. You can rule out both normal and Laplace because both are symmetric and can go negative.
Scope out some of the many fine alternatives at the Wikipedia distributions page. Make a histogram of your data and check it for similarities in shape to those distributions. Exponentials, log normals, chi-squares, and the gamma family could all give numeric results such as the ones you described, but without knowing anything about the variance/std deviation, whether your data are unimodal or multimodal, or where the mode(s) are, we can only make guesses about a very large pool of possibilities.
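One hedged way to act on this advice: fit several candidate distributions by maximum likelihood and compare them, e.g. by AIC (a sketch; the candidate set and the use of AIC are choices of this example, not part of the answer):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.gamma(shape=1.2, scale=1.5, size=1000)  # stand-in data

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm,
              "expon": stats.expon}
for name, dist in candidates.items():
    params = dist.fit(data)                  # maximum-likelihood fit
    loglik = dist.logpdf(data, *params).sum()
    aic = 2 * len(params) - 2 * loglik       # lower AIC = better fit
    print(f"{name:8s} AIC = {aic:.1f}")
```

A histogram or Q-Q plot against the best-scoring candidate is still worth checking, since AIC alone won't reveal multimodality.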

Statistical mode -- how precise should it be?

When you're sampling numbers whose precision is much higher than what is practical for your purposes, a naive mode implementation is useless (each sample might very well be unique).
For instance, sampling round-trip time across networked machines. The potential precision of a CPU clock is pretty high. If you only cared about precision down to 1ms or so, and you sampled across a range of pings from Pmax to Pmin, what would be a robust way of measuring the "most common" ping among them?
A couple of possible solutions.
(1) construct a histogram, perhaps using automatically-chosen bins. Then report the bin which contains the most data.
(2) fit a parametric distribution to the data and report the mode of that distribution. The simplest example is to fit a Gaussian distribution and report the mean (which equals the mode for a Gaussian distribution). But there are probably other reasonable choices of distributions, which have other parameters to report. E.g. fit a gamma distribution, and report the mode of that.
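A minimal sketch of both suggestions (the histogram bin choice and the gamma-mode formula are standard; the data and parameter values are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
pings = rng.gamma(shape=9.0, scale=5.0, size=2000)  # stand-in RTTs in ms

# (1) Histogram with automatically chosen bins; report the fullest bin.
counts, edges = np.histogram(pings, bins="auto")
i = counts.argmax()
hist_mode = 0.5 * (edges[i] + edges[i + 1])

# (2) Fit a gamma distribution; its mode is (a - 1) * scale + loc for a > 1.
a, loc, scale = stats.gamma.fit(pings)
gamma_mode = (a - 1) * scale + loc if a > 1 else loc

print(f"histogram mode ~ {hist_mode:.1f} ms, gamma-fit mode ~ {gamma_mode:.1f} ms")
```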

A method to find the inconsistency or variation in the data

I am running an experiment (it's an image processing experiment) in which I have a set of paper samples, and each sample has a set of lines. For each line in the paper sample, its strength is calculated, denoted by, say, 's'. For a given paper sample I have to find the variation among the strength values 's'. If the variation is above a certain limit, we have to discard that paper.
1) I started with the standard deviation of the values, but the problem I am facing is that the order of magnitude of s can differ between samples (because of various properties of a line, like its length, sharpness, and darkness), and consequently the calculated standard deviations also differ a lot in magnitude. So I can't really use this method across different samples.
Is there any way I can find a suitable limit that is applicable to all samples?
I am thinking that since I don't have any history of how the strength value should behave (for a sample where the strength values have a larger order of magnitude, more variation could be tolerated, whereas for a sample where the magnitude is smaller, less variation should be tolerated), I first need to find a way of baselining the variation across different samples. I don't know what approaches I could try to get started.
Please note that I have to measure variation between lines within a sample, while the limit should be applicable to any good sample.
Please help me out.
You seem to have a set of samples. Then, for each sample you want to do two things: 1) compute a descriptive metric and 2) perform outlier detection. Both of these are vast subjects that require some knowledge of the phenomenology and statistics of the underlying problem. However, below are some ideas to get you going.
Compute a metric
Median Absolute Deviation. If your sample strength s has values that can jump by an order of magnitude across a sample, then it is understandable that the standard deviation was not a good metric: the standard deviation is notoriously sensitive to outliers. So, try a more robust estimate of the dispersion in your data. For example, the MAD estimate uses the median in the underlying computations, which is more robust to a large spread in the numbers.
Robust measures of scale. Read up on other robust measures, like the interquartile range (both estimates are sketched below).
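A minimal sketch of both robust scale estimates, in plain NumPy (the example strengths are made up):

```python
import numpy as np

def mad(s):
    """Median absolute deviation: median of |s - median(s)|."""
    s = np.asarray(s, dtype=float)
    return np.median(np.abs(s - np.median(s)))

def iqr(s):
    """Interquartile range: 75th minus 25th percentile."""
    return np.subtract(*np.percentile(s, [75, 25]))

strengths = [3.1, 2.9, 3.3, 3.0, 31.0]  # one wild line strength
print(np.std(strengths), mad(strengths), iqr(strengths))
# The standard deviation is blown up by the outlier; MAD and IQR are not.
```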
Perform outlier detection
Thresholding. This is similar to what you are already doing. However, you have to choose a suitable threshold for the metric computed above, and the threshold itself can be derived robustly: compute a robust estimate of the metrics' center (e.g., the median) and a robust estimate of their standard deviation (e.g., 1.4826 * MAD), then flag as outliers any metric values more than some number of robust standard deviations above the robust center (see the sketch after this list).
Histogram. Another simple method is to histogram your computed metrics from step #1. This is non-parametric, so it doesn't require you to model your data. You can histogram your metric values and then use the top 1% (or some other value) as your threshold limit.
Triangle Method. A neat and simple heuristic for thresholding is the triangle method, used to perform binary classification of a skewed distribution.
Anomaly detection. Read up on other outlier detection methods.
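A minimal sketch of the robust thresholding idea above (the factor 1.4826 makes the MAD consistent with the standard deviation under normality; the cutoff of 3 is a common but arbitrary choice, and the input values are made up):

```python
import numpy as np

def robust_outliers(metrics, cutoff=3.0):
    """Flag metric values more than `cutoff` robust SDs above the median."""
    m = np.asarray(metrics, dtype=float)
    center = np.median(m)
    robust_sd = 1.4826 * np.median(np.abs(m - center))
    return (m - center) / robust_sd > cutoff

per_sample_mad = [0.12, 0.10, 0.15, 0.11, 0.95]  # per-sample MADs from step #1
print(robust_outliers(per_sample_mad))           # -> only the last sample flagged
```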
