What descriptive statistics are commonly used for time-series data? - statistics

I have a time series of weekly usage data and I'm going to attempt to use some statistics to segment the population. Skewness and kurtosis may allow me to describe the time series and group the people in different ways. But I also notice that some series appear to have saw-tooth or bimodal patterns, and I don't think those two statistics will describe them well. Distance from the mean would tell me who has continual steady usage vs. unpredictable usage.
What descriptive statistics are commonly used for time-series data?
Thanks,

The periodogram and the autocorrelation function are two common sources of information
used to analyse and model time series. You can use this information to compare the series.
In the periodogram you can detect the frequencies at which the estimated spectral density is the highest. This will tell you which series are dominated by cycles of the same frequency.
The autocorrelation function (the time domain counterpart of the periodogram) and the partial autocorrelation function can similarly be used to compare and group the series. Those series with significant autocorrelations at the same lag orders could be grouped together.
You may need to transform the series in order to discern some of this information, for example taking differences to render the data stationary. Alternatively you can select an ARIMA model for each series and compare the characteristics of each model (those characteristics will be pretty much the same as those observed in the autocorrelation functions).
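To make this concrete, here is a minimal Python sketch (not part of the original answer; the weekly series is synthetic and the 4-week cycle is just an assumption) of reading off a dominant frequency from the periodogram and the strongest autocorrelation lags:

```python
import numpy as np
from scipy.signal import periodogram
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(0)
weeks = np.arange(104)
# Synthetic stand-in for one person's weekly usage: a 4-week cycle plus noise
usage = 10 + 3 * np.sin(2 * np.pi * weeks / 4) + rng.normal(0, 1, weeks.size)

# Periodogram: which frequencies dominate the estimated spectral density
freqs, power = periodogram(usage)
dominant = freqs[np.argmax(power[1:]) + 1]        # skip the zero frequency
print("dominant cycle length (weeks):", 1 / dominant)

# Autocorrelation / partial autocorrelation up to half a year of lags
r = acf(usage, nlags=26)
pr = pacf(usage, nlags=26)
print("largest |autocorrelation| at lag:", np.argmax(np.abs(r[1:])) + 1)
print("largest |partial autocorrelation| at lag:", np.argmax(np.abs(pr[1:])) + 1)
```

Series whose dominant frequencies (or significant lags) coincide could then be grouped together, as described above.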

Related

Min Max Normalization/Normal Distribution

I have a dataset with county-level data where N = 3119 with 93 variables. I am trying to do a PCA, EFA, and/or CFA. The data has been given to me already min/max normalized, ranging from 0 to 1. Theory states that the data should be normally distributed for CFA/SEM, but my understanding is that min/max normalization does not change the distribution of the data, only its scale.
It is clear to me that I do not have multivariate or univariate normality due to the skewness of the data. I guess what's confusing me is that people seemingly throw around the term normalization interchangeably with the meaning of normal distribution.
So can I go forward with my analysis since min/max normalization has been performed, or do I need to look more towards other log/box cox transformations to adjust the distribution prior to running my analysis? Is it okay to log transform data that has already been min/max normalized?
my understanding is that min/max normalization does not change the distribution of the data, only its scale.
Correct. If you print a hist()ogram of original and transformed data, they should look identical. Only the x-axis scale will change.
the term normalization interchangeably with the meaning of normal distribution
Indeed, these are completely separate issues.
Is it okay to log transform data that has already been min/max normalized?
Taking the log() would affect 0–1 data differently than data further up the real-number line. But I don't see why you need to transform the data when nonnormality corrections are available for SEs (in EFA or CFA) and model-fit test statistics (relevant for CFA). Independent-components analysis might be an alternative to PCA if your data are not normal.
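As a quick check of both points (a synthetic sketch, not from the original exchange):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=3119)          # skewed raw variable

# Min-max normalization: shape is unchanged, only the scale
x_mm = (x - x.min()) / (x.max() - x.min())
print(skew(x), skew(x_mm))                          # essentially identical

# Log after min-max: the minimum maps exactly to 0, so log(0) = -inf;
# a shifted log (or transforming before normalizing) avoids that
x_log = np.log(x_mm + 1e-6)
print(skew(x_log))
```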

Maximum log-likelihood from data histogram not data directly

I have a complicated theoretical probability density function (PDF) that I define in Mathematica and that depends on some parameters that I need to estimate by comparison with real data. From a big simulation done on a cluster (not my laptop), I have acquired a lot of events (over 10^9).
The way I understand things, given that I know what the PDF is, I 'just' need to sum the log-probability of each of those events for a given set of parameters and maximise this quantity by adjusting the parameters.
However, given the number of events, I would rather work with something less computationally expensive, for example something easily generated like a histogram of my data. But then how would my log-likelihood estimator work?
Thanks a lot for your answers!
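For what it's worth, a rough sketch of the standard binned approach (a generic illustration, not taken from the post; pdf here is a hypothetical stand-in for the theoretical PDF, and the Gaussian form, bin count, and starting values are all assumptions): histogram the events once, then maximise sum_i n_i * log p_i(theta), where n_i is the count in bin i and p_i is the probability the model assigns to that bin.

```python
import numpy as np
from scipy.optimize import minimize

def pdf(x, mu, sigma):
    """Hypothetical stand-in for the theoretical PDF (here a Gaussian)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

events = np.random.default_rng(0).normal(1.5, 0.7, size=1_000_000)

# Histogram the events once; afterwards each likelihood evaluation costs
# one PDF call per bin, not per event.
counts, edges = np.histogram(events, bins=200)
centers = 0.5 * (edges[:-1] + edges[1:])
widths = np.diff(edges)

def neg_binned_loglike(params):
    mu, sigma = params
    p = pdf(centers, mu, sigma) * widths   # bin probability ~ pdf(center) * width
    p = np.clip(p, 1e-300, None)           # guard against log(0)
    return -np.sum(counts * np.log(p))     # binned log-likelihood: sum_i n_i * log p_i

result = minimize(neg_binned_loglike, x0=[1.0, 1.0], method="Nelder-Mead")
print(result.x)   # should land close to (1.5, 0.7)
```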

Regularize unevenly spaced time series with spark-ts

We plan to store our sensor time series data in Cassandra and use Spark/spark-ts to apply machine learning algorithms to it.
Unlike in the documentation, our time series data is irregular (unevenly spaced), as the sensors send data event-based.
But most algorithms and models require regular time series.
Does spark-ts provide any function to transform the irregular time series to regular ones (using interpolation or time-weighted-average, etc.)?
If not, what would be a recommended approach to solve that problem?
spark-ts does not provide any function to transform irregular time series to regular ones.
How you handle irregularly-spaced time series depends on the goals you are trying to achieve through your analysis. Use cases for time series include prediction/forecasting, anomaly detection, or trying to understand/analyze past behaviour.
If you wish to use the algorithms available in spark-ts (as opposed to modeling your data through other statistical processes designed for event streams), one option is to divide the time axis into equally sized bins and then compute a summary of your data within each bin (e.g., the total, the mean, etc.). As you make your bins more fine-grained, less information is lost to quantizing the time dimension, but your data may become harder to model, so the bin size controls the tradeoff. The binned data then forms an evenly spaced time series, which you can analyze using typical time series techniques.
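As a local illustration of the binning idea (this uses pandas rather than spark-ts, and the 5-minute bin width and sensor values are made up):

```python
import numpy as np
import pandas as pd

# Event-based sensor readings at irregular timestamps (synthetic example)
rng = np.random.default_rng(0)
times = pd.to_datetime("2024-01-01") + pd.to_timedelta(
    np.sort(rng.uniform(0, 24 * 3600, size=500)), unit="s")
readings = pd.Series(rng.normal(20, 2, size=500), index=times)

# Divide the time axis into equal bins and summarize each bin
regular = readings.resample("5min").mean()

# Bins with no events become NaN; interpolate (or forward-fill) if the
# downstream model cannot handle gaps
regular = regular.interpolate()
print(regular.head())
```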

A method to find the inconsistency or variation in the data

I am running an experiment (an image processing experiment) in which I have a set of paper samples, and each sample has a set of lines. For each line in a paper sample, its strength is calculated, denoted by, say, s. For a given paper sample, I have to find the variation among the strength values s. If the variation is above a certain limit, we have to discard that paper.
1) I started with the standard deviation of the values, but the problem I am facing is that the order of magnitude of s can differ from sample to sample (because of various properties of a line, like its length, sharpness, darkness, etc.), and the calculated standard deviations also differ a lot in magnitude. So I can't really use this method across different samples.
Is there any way I can find a suitable limit that is applicable to all samples?
I am thinking that, since I don't have any history of how the strength value should behave (for a sample where the strength values have a larger order of magnitude, more variation could be tolerated, whereas for a sample where the magnitude is smaller, less variation should be tolerated), I first need to find a way of baselining the variation in different samples. I don't know what approaches I could try to get started.
Please note that I have to measure the variation between lines within a sample, whereas the limit should be applicable to any good sample.
Please help me out.
You seem to have a set of samples. Then, for each sample you want to do two things: 1) compute a descriptive metric and 2) perform outlier detection. Both of these are vast subjects that require some knowledge of the phenomenology and statistics of the underlying problem. However, below are some ideas to get you going.
Compute a metric
Median Absolute Deviation. If your sample strength s has values that can jump by an order of magnitude across a sample then it is understandable that the standard deviation was not a good metric. The standard deviation is notoriously sensitive to outliers. So, try a more robust estimate of dispersion in your data. For example, the MAD estimate uses the median in the underlying computations which is more robust to a large spread in the numbers.
Robust measures of scale. Read up on other robust measures like the Interquartile range.
Perform outlier detection
Thresholding. This is similar to what you are already doing. However, you have to choose a suitable threshold for the metric computed above, and you might consider using another robust metric to do so. You can compute a robust estimate of their mean (e.g., the median) and a robust estimate of their standard deviation (e.g., 1.4826 * MAD), then identify as outliers any metric values more than some number of robust standard deviations above the robust mean (see the sketch after this list).
Histogram. Another simple method is to histogram your computed metrics from step #1. This is non-parametric, so it doesn't require you to model your data. You can histogram your metric values and then use the top 1% (or some other value) as your threshold limit.
Triangle method. A neat and simple heuristic for thresholding is the triangle method, which performs binary classification of a skewed distribution.
Anomaly detection. Read up on other outlier detection methods.
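A minimal sketch of the MAD-based thresholding described above (synthetic numbers; the cutoff of 3 robust standard deviations is just an example):

```python
import numpy as np

def mad_outliers(strengths, cutoff=3.0):
    """Flag values more than `cutoff` robust standard deviations from the median."""
    strengths = np.asarray(strengths, dtype=float)
    center = np.median(strengths)
    mad = np.median(np.abs(strengths - center))
    robust_sd = 1.4826 * mad                 # consistent with the SD for normal data
    robust_z = (strengths - center) / robust_sd
    return np.abs(robust_z) > cutoff

# Example: one sample's line strengths with a single aberrant line
s = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 30.0, 10.3])
print(mad_outliers(s))          # only the 30.0 line is flagged
```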

Obtaining the Standard Error of Weighted Data in SPSS

I'm trying to find confidence intervals for the means of various variables in a database using SPSS, and I've run into a spot of trouble.
The data is weighted, because each of the people who were surveyed represents a different portion of the overall population. For example, one young man in our sample might represent 28000 young men in the general population. The problem is that SPSS seems to think that the young man's database entries each represent 28000 measurements when they actually just represent one, and this makes SPSS think we have much more data than we actually do. As a result, SPSS is giving very, very low standard error estimates and very, very narrow confidence intervals.
I've tried fixing this by dividing every weight value by the mean weight. This gives plausible figures and an average weight of 1, but I'm not sure the resulting numbers are actually correct.
Is my approach sound? If not, what should I try?
I've been using the Explore command to find mean and standard error (among other things), in case it matters.
You do need to scale weights to the actual sample size, but only the procedures in the Complex Samples option are designed to account for sampling weights properly. The regular weight variable in Statistics is treated as a frequency weight.
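For intuition about why the scaling matters (a small numeric sketch outside SPSS, with made-up weights), compare the naive standard error when weights are treated as frequencies versus rescaled to the actual sample size:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                               # actual number of respondents
y = rng.normal(50, 10, size=n)        # some survey measurement
w = rng.uniform(5_000, 30_000, n)     # population weights (persons represented)

def weighted_mean_se(y, w):
    """Naive frequency-weighted mean and its standard error (what a
    frequency-weight treatment effectively assumes)."""
    wm = np.average(y, weights=w)
    var = np.average((y - wm) ** 2, weights=w)
    n_eff = w.sum()                   # treated as the number of observations
    return wm, np.sqrt(var / n_eff)

# Raw weights: the "sample size" looks like the whole population -> tiny SE
print(weighted_mean_se(y, w))

# Weights rescaled to sum to the real sample size -> plausible SE, but this
# still ignores the design effect, which is why Complex Samples is recommended
w_scaled = w * n / w.sum()
print(weighted_mean_se(y, w_scaled))
```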
