I am struggling to find methods that can be used to detect the periodicity of a binary time series (the series look like 0,0,1,0,0,1,0,0,1... or 1,0,0,0,1,1,0,1,0,0,0,0...).
You could try solving this with a Python library, I think. I have never used it myself, but when I googled "python how to detect a periodicity", one of the first search results was this.
According to the package description:
Useful tools for analysis of periodicities in time series data.
Documentation: https://periodicity.readthedocs.io
Currently includes:
- Auto-Correlation Function
- Spectral methods:
  - Lomb-Scargle periodogram
  - Wavelet Transform
  - Hilbert-Huang Transform (WIP)
- Phase-folding methods:
  - String Length
  - Phase Dispersion Minimization
  - Analysis of Variance (soon™)
- Gaussian Processes:
  - george implementation
  - celerite implementation
  - pymc3 implementation (soon™)
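If you just want a quick sanity check before reaching for the package, the Auto-Correlation Function idea is easy to sketch in plain NumPy. This is a minimal sketch, not the package's API, and the simple argmax peak-picking over lags is my own simplification:

```python
import numpy as np

def estimate_period(x, max_lag=None):
    """Estimate the dominant period of a binary series via the autocorrelation function."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                   # centre the series so the ACF reflects structure, not offset
    max_lag = max_lag or len(x) // 2
    acf = [np.dot(x[:-lag], x[lag:]) for lag in range(1, max_lag + 1)]
    return int(np.argmax(acf)) + 1     # lag with the strongest self-similarity

series = [0, 0, 1] * 20                # period-3 example from the question
print(estimate_period(series))         # expected: 3
```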
Hope this helps.
I am working on a new model which is very sensitive to the interpolation/fit used to describe a certain dataset. I have had some success with linear splines and logarithmic fits, but I think there is still significant room for improvement. My supervisor suggested I take a look at exponential splines. I have found some books on the theory of exponential splines, but no reference to a library or code example to follow.
Is there a library that I am unaware of that supports this feature?
I'm working on a 3D reconstruction system and want to generate a triangular mesh from the registered point cloud data using Python 3. My objects are not convex, so the marching cubes algorithm seems to be the solution.
I would prefer to use an existing implementation of such a method, so I tried scikit-image and Open3D, but neither API accepts raw point clouds as input (note that I'm not an expert in those libraries). My attempts to convert my data have failed, and I'm running out of ideas since the documentation does not clarify the input format of the functions.
These are my desired snippets, where pcd_to_volume is what I need:
scikit-image
import numpy as np
from skimage.measure import marching_cubes_lewiner

N = 10000
pcd = np.random.rand(N, 3)

def pcd_to_volume(pcd, voxel_size):
    pass  # TODO

volume = pcd_to_volume(pcd, voxel_size=0.05)
verts, faces, normals, values = marching_cubes_lewiner(volume, 0)
open3d
import numpy as np
import open3d

N = 10000
pcd = np.random.rand(N, 3)

def pcd_to_volume(pcd, voxel_size):
    pass  # TODO

volume = pcd_to_volume(pcd, voxel_size=0.05)
mesh = volume.extract_triangle_mesh()
I'm not able to find a way to properly write the pcd_to_volume function. I do not prefer one library over the other, so solutions for either are fine with me.
Do you have any suggestions for properly converting my data? A point cloud is an Nx3 matrix with dtype=float.
Do you know of another implementation [of the marching cubes algorithm] that works on raw point cloud data? I would prefer libraries like scikit-image and Open3D, but I will also consider GitHub projects.
Do you know of another implementation [of the marching cubes algorithm] that works on raw point cloud data?
Hoppe's paper, Surface Reconstruction from Unorganized Points, might contain the information you need, and an implementation is open source.
And the latest Open3D seems to contain surface reconstruction algorithms such as alpha shapes, ball pivoting, and Poisson reconstruction.
From what I know, marching cubes is usually used for extracting a polygonal mesh of an isosurface from a three-dimensional discrete scalar field (that's what you mean by volume). The algorithm does not work on raw point cloud data.
Hoppe's algorithm works by first generating a signed distance function field (an SDF volume) and then passing it to marching cubes. This can be seen as one implementation of your pcd_to_volume, and it is not the only way!
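To show how crude a pcd_to_volume can be and still feed marching cubes, here is a binary occupancy-grid sketch for the scikit-image snippet above. The border padding, the 0.5 level, and the voxel handling are my own choices, and recent scikit-image versions expose the function as marching_cubes rather than marching_cubes_lewiner:

```python
import numpy as np
from skimage import measure

def pcd_to_volume(pcd, voxel_size):
    """Rasterize an Nx3 point cloud into a binary occupancy grid (a very crude scalar field)."""
    mins = pcd.min(axis=0)
    idx = np.floor((pcd - mins) / voxel_size).astype(int)
    shape = tuple(idx.max(axis=0) + 3)              # +3 leaves an empty border around the data
    volume = np.zeros(shape, dtype=float)
    volume[idx[:, 0] + 1, idx[:, 1] + 1, idx[:, 2] + 1] = 1.0
    return volume

pcd = np.random.rand(10000, 3)
volume = pcd_to_volume(pcd, voxel_size=0.05)
# level=0.5 extracts the boundary between occupied and empty voxels
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
```

The result is blocky, which is exactly why SDF-style volumes such as Hoppe's are preferred for real reconstructions.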
If the raw point cloud is all you have, then the situation is a bit more constrained. As you might see, the Poisson reconstruction and screened Poisson reconstruction algorithms both implement pcd_to_volume in their own way (they are closely related). However, they need additional point normal information, and the normals have to be consistently oriented (for consistent orientation you can read this question).
Some Delaunay-based algorithms (which do not use marching cubes), like alpha shapes and this one, may not need point normals as input, but for surfaces with complex topology it is hard to get a satisfactory result because of the orientation problem. Graph-cut methods can use visibility information to solve that.
Having said that, if your data comes from depth images, you will usually have visibility information, and you can use a TSDF to build a good surface mesh. Open3D has already implemented that.
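For the raw-point-cloud route, a minimal Open3D sketch of the normals-plus-Poisson approach might look like the following. This is a sketch, not a drop-in pcd_to_volume: the radius, max_nn, k, depth, and alpha values are arbitrary, and the exact function names follow recent Open3D releases and may differ in older versions:

```python
import numpy as np
import open3d as o3d

points = np.random.rand(10000, 3)              # stand-in for your Nx3 point cloud
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

# Poisson reconstruction needs consistently oriented normals
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(k=15)

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)

# Delaunay-style alternative that skips the normal requirement:
# mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(pcd, alpha=0.05)
```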
For example, how can I simply find the minimum of (x-1)^2 via ortools in Python?
I read the OR-Tools documentation, but I cannot find it. I know it does not belong to linear optimization, but I cannot find a suitable problem type in the documentation.
Google OR-Tools does not support quadratic programming. This page contains a list of what it supports:
Google Optimization Tools (OR-Tools) is a fast and portable software suite for solving combinatorial optimization problems. The suite contains:
- A constraint programming solver.
- A simple and unified interface to several linear programming and mixed integer programming solvers, including CBC, CLP, GLOP, GLPK, Gurobi, CPLEX, and SCIP.
- Graph algorithms (shortest paths, min cost flow, max flow, linear sum assignment).
- Algorithms for the Traveling Salesman Problem and Vehicle Routing Problem.
- Bin packing and knapsack algorithms.
The following link clarifies that the mixed integer programming (MIP) support does not include quadratic MIP (MIQP):
https://github.com/google/or-tools/issues/598
You might check out this resource for ideas of how to do QP in Python:
https://scaron.info/blog/quadratic-programming-in-python.html
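For the toy problem in the question, any general-purpose solver outside OR-Tools will do. For instance, a minimal sketch with SciPy (not OR-Tools, and unconstrained, so this is plain nonlinear minimization rather than a full QP formulation):

```python
from scipy.optimize import minimize

# minimize (x - 1)^2, the example from the question
result = minimize(lambda x: (x[0] - 1.0) ** 2, x0=[0.0])
print(result.x)  # approximately [1.0]
```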
In Maple, there is a feature that allows you to calculate the PDF of a function of a random variable. For example, if X is exponentially distributed and you want to know the distribution of X^2, there is a function that will do that for you.
My question is: is there functionality in MATLAB that allows you to do this? I have looked through MATLAB's documentation, but I didn't see it.
The Statistics Toolbox includes many probability distributions to choose from, both parametric and non-parametric. For each, it provides functions for the PDF, CDF, fitting, random number generation, etc.
I suggest you start with the "Distribution Fitting app": dfittool.
EDIT:
In addition, MuPAD has support for a number of distributions, which you can manipulate symbolically. For example, the function intlib::changevar might be of interest here, though it seems intended for integrals...
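As a cross-check outside MATLAB, the same kind of symbolic manipulation can be done in Python with SymPy's stats module (an illustration only, not MuPAD syntax):

```python
from sympy import symbols, simplify
from sympy.stats import Exponential, density

y = symbols('y', positive=True)
X = Exponential('X', 1)             # X ~ Exp(rate = 1)
pdf_Y = simplify(density(X**2)(y))  # PDF of X^2
print(pdf_Y)                        # exp(-sqrt(y))/(2*sqrt(y))
```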
Also, if you're interested in getting the values of the PMF (the discrete PDF), then, given a sample x drawn from some distribution,
my_pmf = hist(x) / numel(x);   % normalize the bin counts by the number of samples
So try,
doc hist
I am using Octave and I would like to use the anderson_darling_test from the Octave Forge Statistics package to test if two vectors of data are drawn from the same statistical distribution. Furthermore, the reference distribution is unlikely to be normal. This reference distribution will be the known distribution, and, quoting the help for the above function: "If you are selecting from a known distribution, convert your values into CDF values for the distribution and use 'uniform'."
My question therefore is: how would I convert my data values into CDF values for the reference distribution?
Some background information on the problem: I have a vector of raw data values from which I extract the cyclic component (this will be the reference distribution); I then wish to compare this cyclic component with the raw data itself to see if the raw data is essentially cyclic in nature. If the null hypothesis that the two are the same can be rejected, I will then know that most of the movement in the raw data is not due to cyclic influences but is due to either trend or just noise.
If your data has a specific distribution, for instance beta(3,3), then
p = betacdf(x, 3, 3)
will be uniform by the definition of a CDF. If you want to transform it to a normal, you can just call the inverse CDF function
x = norminv(p, 0, 1)
on the uniform p. Once transformed, use your favorite test. I'm not sure I understand your data, but you might consider using a Kolmogorov-Smirnov test instead, which is a nonparametric test of distributional equality.
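If it is easier to experiment outside Octave, the same probability-integral-transform pipeline looks like this in Python with SciPy (an illustration only, not Octave code; the beta(3,3) choice simply mirrors the example above):

```python
from scipy import stats

x = stats.beta.rvs(3, 3, size=500, random_state=0)  # stand-in for your data vector
p = stats.beta.cdf(x, 3, 3)   # CDF transform: uniform on [0, 1] if the model is correct
z = stats.norm.ppf(p)         # optional further transform to a standard normal

# nonparametric check of distributional equality, as suggested above
print(stats.kstest(p, "uniform"))
```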
Your approach is misguided in multiple ways. Several points:
The Anderson-Darling test implemented in Octave Forge is a one-sample test: it requires one vector of data and a reference distribution. The distribution should be known, not derived from the data. While you quote the help file correctly about using a CDF and the "uniform" option for a distribution that is not built in, you are ignoring the next sentence of the same help file:
Do not use "uniform" if the distribution parameters are estimated from the data itself, as this sharply biases the A^2 statistic toward smaller values.
So, don't do it.
Even if you found or wrote a function implementing a proper two-sample Anderson-Darling or Kolmogorov-Smirnov test, you would still be left with a couple of problems:
Your samples (the data and the cyclic part estimated from the data) are not independent, and these tests assume independence.
Given your description, I assume there is some sort of time predictor involved. So even if the distributions did coincide, that does not mean they coincide at the same time points, because comparing distributions collapses over time.
The distribution of cyclic trend + error would not be expected to be the same as the distribution of the cyclic trend alone. Suppose the trend is sin(t); then it will never go above 1. Now add a normally distributed error term with standard deviation 0.1 (small, so that the trend is dominant). Obviously you could now get values above 1.
We do not have enough information to figure out the proper thing to do, and it is not really a programming question anyway. Look up time series theory; separating cyclic components is a major topic there. But many reasonable analyses will probably be based on the residuals (observed value minus the value predicted from the cyclic component). You will still have to be careful about autocorrelation and other complexities, but at least it will be a move in the right direction.
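As a rough sketch of that residual-based direction, with made-up data (the sinusoidal cyclic component and its period are assumptions, not your model):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
cyclic = np.sin(2 * np.pi * t / 12)                 # hypothetical fitted cyclic component
observed = cyclic + rng.normal(scale=0.1, size=t.size)

residuals = observed - cyclic                       # what the cycle fails to explain
explained = 1 - residuals.var() / observed.var()    # share of variance captured by the cycle
lag1 = np.corrcoef(residuals[:-1], residuals[1:])[0, 1]  # leftover serial correlation

print(explained, lag1)
```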