I need to add many big 3D arrays (shape 500x500x500) together and want to speed up the process by using multiplication in Fourier space. The problem is that I don't get the same answer when multiplying in Fourier space as when simply adding the matrices.
To test it out, I wrote a minimal example trying to make it work but the answer is not what I expected. Either my math knowledge is wrong or I am not using the function correctly.
Below is the simplest code showing what I am trying to do:
import numpy as np
c = np.asarray(((1,2),(2,3)))
d = np.asarray(((1,4),(1,5)))
print("Transform")
Nc = np.fft.rfft2(c)
Nd = np.fft.rfft2(d)
print("Inverse")
Nnc = np.fft.irfft2(Nc)
Nnd = np.fft.irfft2(Nd)
print("Somme")
S = np.dot(Nc, Nd)
print(np.fft.irfft2(S))
When I print the inverse transform np.fft.irfft2(S) (the last line above), I get the result:
[[6, 28],[10,46]]
But from what I understood about Fourier space, multiplication there should correspond to addition outside of it, so shouldn't I get S = c + d?
Am I doing something wrong using the FFT function or is my assumption that S should equal c plus d wrong?
There is a little misunderstanding here:
Multiplication in Fourier space corresponds to convolution in the spatial domain and not to addition.
There is no way to speed up addition in that way.
If you want to compute c+d through the Fourier domain, you'd have to add the two spectra, not multiply them:
np.fft.irfft2(Nc+Nd) == c+d # (up to numerical precision)
Of course, this is much slower than simply adding the matrices in the spatial domain.
As @Florian said, it is convolution that can be sped up by multiplying in the Fourier domain.
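For what it's worth, here is a minimal NumPy sketch (reusing the arrays from the question) contrasting the two operations: adding the spectra just recovers c + d because the transform is linear, while element-wise multiplication of the spectra gives the circular convolution of c and d.
import numpy as np

c = np.asarray(((1, 2), (2, 3)), dtype=float)
d = np.asarray(((1, 4), (1, 5)), dtype=float)
Nc = np.fft.rfft2(c)
Nd = np.fft.rfft2(d)

# Linearity: adding the spectra and transforming back gives the plain sum
print(np.fft.irfft2(Nc + Nd))   # [[2. 6.] [3. 8.]]  == c + d

# Convolution theorem: the element-wise product of the spectra corresponds to
# the circular convolution of c and d, not to their sum
print(np.fft.irfft2(Nc * Nd))   # [[26. 19.] [25. 18.]]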
I am developing my own Architecture Search algorithm using Python's NumPy. Currently I am trying to figure out how to develop a cost function that can measure the distance between two matrices, X and Y.
I'd like to reduce the difference between the two, to a meaningful scalar value.
Ideally between 0 and 1, so that if both sets of elements within the matrices are the same numerically and positionally, a 0 is returned.
In the example below, I have the output of my algorithm X. Both X and Y are the same shape. I tried to sum the difference between the two matrices; however I'm not sure that using summation will work in all conditions. I also tried returning the mean. I don't think that either approach will work though. Aside from looping through both matrices and comparing elements directly, is there a way to capture the degree of difference in a scalar?
Y = np.arange(25).reshape(5, 5)
for i in range(1000):
    X = algorithm(Y)
    # I try to reduce the difference between the two matrices to a scalar value
    cost = np.sum(X - Y)
There are many ways to calculate a scalar "difference" between two matrices. Here are just two examples.
The root mean square error:
((m1 - m2) ** 2).mean() ** 0.5
The max absolute error:
np.abs(m1 - m2).max()
The choice of the metric depends on your problem.
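A small sketch of how these could be used as a cost; the squashing into [0, 1) at the end is just one arbitrary choice (not from the question), since the metrics themselves are unbounded:
import numpy as np

m1 = np.arange(25).reshape(5, 5).astype(float)
m2 = m1 + np.random.default_rng(0).normal(scale=0.1, size=m1.shape)

# Root mean square error: same units as the data, penalizes large deviations
rmse = np.sqrt(((m1 - m2) ** 2).mean())

# Max absolute error: the single worst element-wise deviation
max_err = np.abs(m1 - m2).max()

# One possible way to map an unbounded error onto [0, 1):
# 0 for identical matrices, approaching 1 as the error grows
cost = rmse / (1.0 + rmse)
print(rmse, max_err, cost)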
I have a series of points in two 3D systems. With them, I use np.linalg.lstsq to calculate the affine transformation matrix (4x4) between both. However, due to my project, I have to "disable" the shear in the transform. Is there a way to decompose the matrix into the base transformations? I have found out how to do so for Translation and Scaling but I don't know how to separate Rotation and Shear.
If not, is there a way to calculate a transformation matrix from the points that doesn't include shear?
I can only use numpy or tensorflow to solve this problem btw.
I'm not sure I understand what you're asking.
Anyway, if you have two sets of 3D points P and Q, you can use the Kabsch algorithm to find a rotation matrix R and a translation vector T such that the sum of squared distances between (RP+T) and Q is minimized.
You can of course combine R and T into a 4x4 matrix (rotation and translation only, without shear or scale).
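Here is a minimal NumPy sketch of the Kabsch algorithm; the function name and interface are mine, not from any library, and it assumes the point sets are stored as N x 3 arrays of corresponding rows.
import numpy as np

def kabsch(P, Q):
    # Find R (proper rotation) and T (translation) minimizing ||P @ R.T + T - Q||
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)   # center both point sets
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)               # SVD of the covariance matrix
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # fix a possible reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = Q.mean(axis=0) - P.mean(axis=0) @ R.T
    return R, T

# Quick check with a known rotation + translation (no shear, no scale)
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
P = np.random.rand(10, 3)
Q = P @ R_true.T + np.array([1.0, 2.0, 3.0])
R, T = kabsch(P, Q)
M = np.eye(4)                     # combine into a 4x4 homogeneous matrix
M[:3, :3], M[:3, 3] = R, T
print(np.allclose(P @ R.T + T, Q))   # True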
In a Python 3 application I'm using NumPy to calculate eigenvalues and eigenvectors of a symmetric real matrix.
Here's my demo code:
import numpy as np
a = np.random.rand(3,3) # generate a random array shaped (3,3)
a = (a + a.T)/2 # a becomes a random symmetric matrix
evalues1, evectors1 = np.linalg.eig(a)
evalues2, evectors2 = np.linalg.eigh(a)
Except for the signs, I got the same eigenvectors and eigenvalues using np.linalg.eig and np.linalg.eigh. So, what's the difference between the two methods?
Thanks
EDIT: I've read the docs here https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html
and here https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html
but I still cannot understand why I should use eigh() when I have a symmetric array.
eigh guarantees you that the eigenvalues are sorted and uses a faster algorithm that takes advantage of the fact that the matrix is symmetric. If you know that your matrix is symmetric, use this function.
Note that eigh doesn't check whether your matrix is indeed symmetric; by default it just takes the lower triangular part of the matrix and assumes that the upper triangular part is defined by the symmetry of the matrix.
eig works for general matrices and therefore uses a slower algorithm; you can check that for example with IPython's magic command %timeit. If you test with larger matrices, you will also see that in general the eigenvalues are not sorted here.
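A quick sketch you could run to see both effects (the timing itself is best done with %timeit as mentioned above):
import numpy as np

a = np.random.rand(500, 500)
a = (a + a.T) / 2                      # random symmetric matrix

w_h, v_h = np.linalg.eigh(a)           # exploits symmetry, eigenvalues sorted ascending
w_g, v_g = np.linalg.eig(a)            # general algorithm, no ordering guarantee

print(np.all(np.diff(w_h) >= 0))              # True: eigh output is sorted
print(np.allclose(np.sort(w_g.real), w_h))    # same spectrum up to numerical precision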
Should the input to sklearn.cluster.DBSCAN be pre-processed?
In the example http://scikit-learn.org/stable/auto_examples/cluster/plot_dbscan.html#example-cluster-plot-dbscan-py the distances between the input samples X are calculated and normalized:
D = distance.squareform(distance.pdist(X))
S = 1 - (D / np.max(D))
db = DBSCAN(eps=0.95, min_samples=10).fit(S)
In another example for v0.14 (http://jaquesgrobler.github.io/online-sklearn-build/auto_examples/cluster/plot_dbscan.html) some scaling is done:
X = StandardScaler().fit_transform(X)
db = DBSCAN(eps=0.3, min_samples=10).fit(X)
I base my code on the latter example and have the impression that clustering works better with this scaling. However, this scaling "Standardizes features by removing the mean and scaling to unit variance". I try to find 2D clusters. If I have my clusters distributed in a square area - let's say 100x100 - I see no problem with the scaling. However, if they are distributed in a rectangular area, e.g. 800x200, the scaling 'squeezes' my samples and changes the relative distances between them in one dimension. This deteriorates the clustering, doesn't it? Or am I misunderstanding something?
Do I need to apply some preprocessing at all, or can I simply input my 'raw' data?
It depends on what you are trying to do.
If you run DBSCAN on geographic data, and distances are in meters, you probably don't want to normalize anything, but set your epsilon threshold in meters, too.
And yes, a non-uniform scaling in particular does distort distances, whereas a uniform (non-distorting) scaling is equivalent to just using a different epsilon value!
Note that in the first example, apparently a similarity and not a distance matrix is processed. S = 1 - (D / np.max(D)) is a heuristic to convert the distance matrix D into a similarity matrix. Epsilon 0.95 then effectively means at most "0.05 of the maximum dissimilarity observed". An alternate version that should yield the same result is:
D = distance.squareform(distance.pdist(X))
S = np.max(D) - D
db = DBSCAN(eps=0.95 * np.max(D), min_samples=10).fit(S)
Whereas in the second example, fit(X) actually processes the raw input data, and not a distance matrix. IMHO that is an ugly hack, to overload the method this way. It's convenient, but it leads to misunderstandings and maybe even incorrect usage sometimes.
Overall, I would not take sklearn's DBSCAN as a reference. The whole API seems to be heavily driven by classification, not by clustering. Usually, you don't fit a clustering; you do that for supervised methods only. Plus, sklearn currently does not use indexes for acceleration, and needs O(n^2) memory (which DBSCAN usually would not).
In general, you need to make sure that your distance works. If your distance function doesn't work, no distance-based algorithm will produce the desired results. On some data sets, naive distances such as Euclidean work better when you first normalize your data. On other data sets, you have a good understanding of what distance is (e.g. geographic data; doing a standardization on this obviously does not make sense, nor does Euclidean distance!).
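As a rough sketch of the "keep the raw units" approach (the coordinates and eps value below are made up for illustration): if your points live in, say, an 800x200 area measured in meters, you can skip StandardScaler and choose eps directly in meters.
import numpy as np
from sklearn.cluster import DBSCAN

# Two hypothetical clusters in an 800 x 200 area, coordinates in meters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=(100.0, 50.0), scale=5.0, size=(100, 2)),
               rng.normal(loc=(700.0, 150.0), scale=5.0, size=(100, 2))])

# No standardization: eps is expressed in the same units as the data (meters)
db = DBSCAN(eps=15.0, min_samples=10).fit(X)
print(np.unique(db.labels_))   # cluster labels; -1 would mark noise points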
In the J programming language,
-: i. 5
the above expression computes the halves of all integers in [0,4]. Now let's say I'd like to re-write the -: function, just for the fun of it. My best guess so far was
]&%.2
but that doesn't seem to cut it. How do you do it?
%&2 NB. divide by two
0.5&* NB. multiply by one half
Note that ] % 2: would also work, but to ensure proper grammar you would either want to use that as the definition of a name, or you would want to put the expression in parentheses.
I saw you were using %. probably because you were dividing a matrix and thought you needed to do a "matrix divide".
The matrix divide and matrix inverse they are talking about there are for matrix algebra, where you have a list of, well, essentially polynomials, and you want to do transformations on the polynomials all at once, so as to solve the equations. One of the things you can do really easily in J is matrix algebra: there are builtins for matrix divide and for inverting a matrix (as you have seen), and in the phrases section there are short phrases for doing all of the typical matrix transformations - taking the determinant, for example.
But when you are simply dividing a vector by a scalar to get a vector, or you are dividing a matrix by the corresponding elements of another matrix, well, that is just the % division symbol.
If you want to try to understand this, look at Euler problem 101 (http://projecteuler.net/problem=101) and then google curve fitting on the Jsoftware.com site. Creating the matrices from the observations, and the basic matrices as shown, allows you to solve ax^2+bx+c = y where you have x and y and you want to determine a, b, and c. Just remember to use extended arithmetic for everything, as the resulting equations are very good but not perfect unless you do, and to solve the equation you need perfect equations.
Just a thought, unless you want to play with Matrix Algebra, you might not care.