I have a dataset of county-level data with N = 3119 and 93 variables. I am trying to do a PCA, EFA, and/or CFA. The data were given to me already min/max normalized, so each variable ranges over [0, 1]. Theory states that the data should be normally distributed for CFA/SEM, but my understanding is that min/max normalization does not change the distribution of the data, only its scale.
It is clear to me that I do not have multivariate or univariate normality, given the skewness of the data. What confuses me is that people seem to use the term normalization interchangeably with normal distribution.
So can I go forward with my analysis since min/max normalization has been performed, or do I need to apply log/Box-Cox transformations to adjust the distribution before running my analysis? Is it okay to log-transform data that has already been min/max normalized?
my understanding is that min/max normalization does not change the distribution of the data, only its scale.
Correct. If you plot a hist()ogram of the original and transformed data, they should look identical; only the x-axis scale will change.
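A quick way to convince yourself of this is a small simulation. Here is a minimal Python sketch (numpy and scipy assumed; the exponential draws are just a stand-in for skewed county-level variables):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=3119)   # skewed raw data, N as in the question

x_mm = (x - x.min()) / (x.max() - x.min())  # min/max normalization onto [0, 1]

# Skewness and excess kurtosis are location- and scale-invariant,
# so they are identical (up to floating-point error) before and after.
print(stats.skew(x), stats.skew(x_mm))
print(stats.kurtosis(x), stats.kurtosis(x_mm))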
the term normalization interchangeably with the meaning of normal distribution
Indeed, these are completely separate issues.
Is it okay to log transform data that has already been min/max normalized?
Taking the log() would affect 0–1 data differently than data further up the real-number line. But I don't see why you need to transform the data when nonnormality corrections are available for SEs (in EFA or CFA) and model-fit test statistics (relevant for CFA). Independent-components analysis might be an alternative to PCA if your data are not normal.
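If you do explore independent-components analysis, here is a minimal scikit-learn sketch (the 10-component choice and the simulated input are arbitrary, for illustration only):

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X = rng.exponential(size=(3119, 93))       # stand-in for 93 min/max-normalized variables

ica = FastICA(n_components=10, random_state=0)
S = ica.fit_transform(X)                   # estimated independent components
print(S.shape)                             # (3119, 10)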
I'm quite new to biostatistics so I apologize if my question is too dumb.
I'm studying data transformation in biostatistics to fit my data to the normal distribution.
I started with the Poisson distribution, which is quite common in biostatistics (daily admissions, prevalence of rare diseases, etc.). The square-root transformation is commonly recommended to bring Poisson data closer to a normal distribution.
I used Stata and this free dataset ( https://www.kaggle.com/datasets/martj42/international-football-results-from-1872-to-2017?resource=download ) with the results of a huge number of football matches.
I created a new variable for this dataset: the total number of goals scored by both teams in each match. Its distribution is shown in a histogram (not reproduced here).
We can see that the distribution approximates a Poisson quite closely, as suggested by the values of the mean and standard deviation (for a Poisson, the variance equals the mean).
Then I created a new variable holding the square root of the total goals; its distribution is shown in another histogram (not reproduced here), with a blue line marking a normal distribution with the same mean and standard deviation.
As you can see, it is still quite far from a normal distribution, as confirmed by normality tests and easily visible in the Q-Q plot (also not reproduced here).
So, my question is: why didn't the square root work? What can I do to transform my data so that they are approximately normal?
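To make the setup reproducible, here is a minimal Python sketch of the same procedure (numpy/scipy assumed; the Poisson mean of 2.9 is a rough stand-in for the average total goals per match):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
goals = rng.poisson(lam=2.9, size=40000)  # simulated total goals per match

root = np.sqrt(goals)                     # the recommended square-root transform

# The transform stabilizes the variance, but the discreteness of counts and
# the spike at zero keep the result visibly non-normal for small means.
print("skewness before:", stats.skew(goals))
print("skewness after: ", stats.skew(root))
stat, p = stats.shapiro(root[:5000])      # Shapiro-Wilk is reliable only up to n = 5000
print("Shapiro-Wilk p-value:", p)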
We plan to store our sensor time series data in Cassandra and use Spark/spark-ts to apply machine learning algorithms to it.
Unlike the examples in the documentation, our time series are irregular (unevenly spaced), because the sensors emit data in an event-driven fashion.
But most algorithms and models require regular time series.
Does spark-ts provide any function to transform the irregular time series to regular ones (using interpolation or time-weighted-average, etc.)?
If not, what would be a recommended approach to solving that problem?
spark-ts does not provide any function to transform irregular time series to regular ones.
How you handle irregularly-spaced time series depends on the goals you are trying to achieve through your analysis. Use cases for time series include prediction/forecasting, anomaly detection, or trying to understand/analyze past behaviour.
If you wish to use the algorithms available in spark-ts (as opposed to modeling your data through other statistical processes designed for event streams), one option is to divide the time axis into equally sized bins and then compute a summary of your data within each bin (e.g., the total, the mean, etc.). As you make your bins more fine-grained, the information lost by quantizing the time dimension shrinks, but your data may become harder to model, so the bin size controls the tradeoff. The binned data then form an evenly spaced time series, which you can analyze using typical time series techniques; see the sketch below.
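For example, a minimal sketch of that binning with pandas (the 5-minute bin width, timestamps, and values are made up for illustration):

import pandas as pd

# Hypothetical event-based sensor readings with irregular timestamps.
ts = pd.to_datetime(["2023-01-01 00:00:03", "2023-01-01 00:00:41",
                     "2023-01-01 00:07:15", "2023-01-01 00:11:02"])
readings = pd.Series([20.1, 20.4, 21.0, 20.8], index=ts)

# Bin into regular 5-minute intervals, summarizing each bin by its mean,
# and interpolate any bins that received no events.
regular = readings.resample("5min").mean().interpolate(method="time")
print(regular)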
I have a time series of weekly usage data and I'm going to attempt to use some statistics to segment the population. Skewness and kurtosis may allow me to describe the time series and group the people in different ways. But I also notice that some series appear to have sawtooth or bimodal patterns, and I don't think those two statistics will describe them well. Distance from the mean would tell me who has steady usage versus unpredictable usage.
What descriptive statistics are commonly used for time-series data?
Thanks,
The periodogram and the autocorrelation function are two common sources of information used to analyse and model time series. You can use this information to compare the series.
In the periodogram you can detect the frequencies at which the estimated spectral density is the highest. This will tell you which series are dominated by cycles of the same frequency.
The autocorrelation function (the time domain counterpart of the periodogram) and the partial autocorrelation function can similarly be used to compare and group the series. Those series with significant autocorrelations at the same lag orders could be grouped together.
You may need to transform the series in order to discern some of this information, for example taking differences to render the data stationary. Alternatively you can select an ARIMA model for each series and compare the characteristics of each model (those characteristics will be pretty much the same as those observed in the autocorrelation functions).
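A minimal Python sketch of both diagnostics (scipy and statsmodels assumed; the simulated weekly series with a period-7 cycle is only for illustration):

import numpy as np
from scipy.signal import periodogram
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(1)
t = np.arange(200)
usage = 10 + 3 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 1, t.size)

# Periodogram: the frequency with the highest power reveals the dominant cycle.
freqs, power = periodogram(usage)
dominant = freqs[1:][np.argmax(power[1:])]   # skip the zero frequency
print("dominant period:", 1 / dominant)      # ~7

# Autocorrelation function: a spike near lag 7 flags the same cycle.
print(np.round(acf(usage, nlags=10), 2))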
I am running an experiment (an image processing experiment) in which I have a set of paper samples, and each sample has a set of lines. For each line in a paper sample, a strength value, denoted s, is calculated. For a given paper sample I have to quantify the variation among the strength values s. If the variation is above a certain limit, we have to discard that paper.
I started with the standard deviation of the values, but the problem I am facing is that the order of magnitude of s can differ between samples (because of line properties such as length, sharpness, and darkness), so the calculated standard deviations also differ a lot in magnitude. I therefore can't use this method across different samples.
Is there any way to find a suitable limit that is applicable to all samples?
Since I don't have any history of how the strength values should behave (a sample whose strength values have a larger order of magnitude could tolerate more variation, whereas a sample with smaller magnitudes should tolerate less), I think I first need to find a way of baselining the variation across samples. I don't know what approaches I could try to get started.
Please note that I have to measure variation between lines within a sample, whereas the limit should be applicable to any good sample.
Please help me out.
You seem to have a set of samples. Then, for each sample you want to do two things: 1) compute a descriptive metric and 2) perform outlier detection. Both of these are vast subjects that require some knowledge of the phenomenology and statistics of the underlying problem. However, below are some ideas to get you going.
Compute a metric
Median Absolute Deviation. If your sample strength s has values that can jump by an order of magnitude across a sample, then it is understandable that the standard deviation was not a good metric: the standard deviation is notoriously sensitive to outliers. So try a more robust estimate of dispersion in your data. For example, the MAD estimate uses the median in the underlying computations, which makes it much more robust to a large spread in the numbers (see the sketch after this list).
Robust measures of scale. Read up on other robust measures, such as the interquartile range (IQR).
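A minimal Python sketch of these robust dispersion measures (numpy/scipy assumed; the strength values are made up, with one outlier):

import numpy as np
from scipy import stats

s = np.array([4.1, 3.8, 4.4, 3.9, 4.0, 41.0])        # hypothetical line strengths

sd = np.std(s, ddof=1)                               # inflated by the single outlier
mad = stats.median_abs_deviation(s, scale="normal")  # ~SD under normality, but robust
iqr = stats.iqr(s)                                   # another robust measure of scale

print(sd, mad, iqr)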
Perform outlier detection
Thresholding. This is similar to what you are already doing, but you have to choose a suitable threshold for the metric computed above, and you can make the threshold itself robust. Across samples, compute a robust estimate of the metrics' center (e.g., the median) and a robust estimate of their standard deviation (e.g., 1.4826 * MAD), then flag as outliers any metric values more than some number of robust standard deviations above the robust center (see the sketch after this list).
Histogram. Another simple method is to histogram the metrics computed in step 1. This is non-parametric, so it doesn't require you to model your data: histogram your metric values and use the top 1% (or some other cutoff) as your threshold limit.
Triangle method. A neat and simple heuristic for thresholding a skewed distribution is the triangle method, which performs a binary split of the histogram.
Anomaly detection. Read up on other outlier detection methods.
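A minimal sketch of the robust thresholding and the histogram cutoff in Python (numpy/scipy assumed; the metric values and the choice k = 3 are illustrative):

import numpy as np
from scipy import stats

# Hypothetical per-sample dispersion metrics (e.g., MAD of s within each sample).
metrics = np.array([0.8, 1.1, 0.9, 1.0, 1.2, 0.95, 6.5])

center = np.median(metrics)                                   # robust mean
spread = stats.median_abs_deviation(metrics, scale="normal")  # 1.4826 * MAD

k = 3                                   # number of robust standard deviations
robust_z = (metrics - center) / spread
print("discard (robust z):", metrics[robust_z > k])

# Histogram alternative: flag the top 1% of metric values.
print("discard (top 1%):", metrics[metrics > np.quantile(metrics, 0.99)])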
I am using Octave and I would like to use anderson_darling_test from the Octave Forge Statistics package to test whether two vectors of data are drawn from the same statistical distribution. The reference distribution is unlikely to be normal; it will be the known distribution, as described in the help for the function: "If you are selecting from a known distribution, convert your values into CDF values for the distribution and use 'uniform'."
My question therefore is: how would I convert my data values into CDF values for the reference distribution?
Some background on the problem: I have a vector of raw data values from which I extract the cyclic component (this will be the reference distribution); I then wish to compare this cyclic component with the raw data itself to see whether the raw data are essentially cyclic in nature. If the null hypothesis that the two are the same can be rejected, I will know that most of the movement in the raw data is not due to cyclic influences but is due to either trend or noise.
If your data have a specific distribution, for instance Beta(3, 3), then
p = betacdf(x, 3, 3)
will be uniform by the definition of a CDF. If you want to transform it to a normal, you can just call the inverse CDF function
x = norminv(p, 0, 1)
on the uniform p. Once transformed, use your favorite test. I'm not sure I understand your data, but you might consider using a Kolmogorov-Smirnov test instead, which is a nonparametric test of distributional equality.
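The same probability-integral-transform idea in Python, for comparison (scipy assumed; the Beta(3, 3) reference is just the example above):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.beta(3, 3, size=500)       # data hypothesized to follow Beta(3, 3)

p = stats.beta.cdf(x, 3, 3)        # PIT: uniform on [0, 1] if the model is right
z = stats.norm.ppf(p)              # optional: map the uniforms to standard normals

print(stats.kstest(p, "uniform"))  # one-sample test against the reference
y = rng.beta(3, 3, size=500)
print(stats.ks_2samp(x, y))        # nonparametric two-sample comparison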
Your approach is misguided in multiple ways. Several points:
The Anderson-Darling test implemented in Octave Forge is a one-sample test: it requires one vector of data and a reference distribution. The distribution should be known, not estimated from the data. While you quote the help file correctly about using a CDF and the "uniform" option for a distribution that is not built in, you are ignoring the next sentence of the same help file:
Do not use "uniform" if the distribution parameters are estimated from the data itself, as this sharply biases the A^2 statistic toward smaller values.
So, don't do it.
Even if you found or wrote a function implementing a proper two-sample Anderson-Darling or Kolmogorov-Smirnov test, you would still be left with a couple of problems:
Your samples (the data and the cyclic part estimated from the data) are not independent, and these tests assume independence.
Given your description, I assume there is some sort of time predictor involved. So even if the distributions coincided, that would not mean they coincide at the same time points, because comparing distributions collapses over time.
The distribution of cyclic trend + error would not be expected to be the same as the distribution of the cyclic trend alone. Suppose the trend is sin(t); then it never goes above 1. Now add a normally distributed random error term with standard deviation 0.1 (small, so that the trend is dominant): obviously you could get values well above 1.
We do not have enough information to figure out the proper thing to do, and it is not really a programming question anyway. Look up time series theory; separating cyclic components is a major topic there. Many reasonable analyses will probably be based on the residuals (observed value minus the value predicted from the cyclic component). You will still have to be careful about autocorrelation and other complexities, but at least it will be a move in the right direction; a sketch of the idea follows.
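A minimal numpy sketch of that residual idea (the sine cycle and noise level loosely follow the example above; everything else is made up):

import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(3)
t = np.arange(300)
cyclic = np.sin(2 * np.pi * t / 50)         # hypothetical estimated cyclic component
raw = cyclic + rng.normal(0, 0.1, t.size)   # observed series

resid = raw - cyclic                        # what remains after removing the cycle

# If the cycle explains most of the movement, the residuals should look like
# noise: small variance relative to the raw series, little autocorrelation.
print("variance ratio:", resid.var() / raw.var())
print("residual ACF, lags 1-5:", np.round(acf(resid, nlags=5)[1:], 2))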