DFT of 2D double array using fftw3 r2c comparing to c2c - gaussian

I am trying to understand how to use "fftw_plan_dft_r2c_2d". First, I generate a matrix containing a non-symmetric gaussian (sigma_x != sigma_y) and take its FFT with the complex-to-complex "fftw_plan_dft_2d". But that requires a temporary "fftw_complex" array into which I copy the gaussian matrix, setting the imaginary part to 0.0.
To save some memory, I am trying to avoid the temporary "fftw_complex" array and use "fftw_plan_dft_r2c_2d" directly, expecting the same "half" result.
It looks like I'm missing something... can anyone help, please?
thanks,

You need to shift the zero-frequency component to the center for the inverse transformation, as described here for numpy:
https://numpy.org/doc/stable/reference/generated/numpy.fft.fftshift.html
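The "half" layout you are expecting from r2c can be sketched with numpy, which follows the same convention as FFTW: the real-to-complex transform stores only the non-redundant half of the spectrum along the last axis (a minimal sketch; the grid size and gaussian widths here are arbitrary):

```python
import numpy as np

# Non-symmetric gaussian on an 8x8 grid (illustrative values only).
x, y = np.meshgrid(np.linspace(-3, 3, 8), np.linspace(-3, 3, 8))
gauss = np.exp(-(x**2 / 0.5 + y**2 / 2.0))

full = np.fft.fft2(gauss)   # c2c analogue: 8x8 complex output
half = np.fft.rfft2(gauss)  # r2c analogue: 8x5 complex output (n//2 + 1 columns)

# The r2c result matches the first n//2 + 1 columns of the c2c result;
# the remaining columns are redundant by Hermitian symmetry.
print(np.allclose(half, full[:, :5]))  # True
```

FFTW's r2c output is laid out the same way: an n0 x (n1/2 + 1) array, so when comparing against the c2c result you should only compare against that left half.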

Related

Eigenvectors in Julia vs Numpy

I'm currently working to diagonalize a 5000x5000 Hermitian matrix, and I find that when I use Julia's eigen function in the LinearAlgebra module, which produces both the eigenvalues and eigenvectors, I get different results for the eigenvectors compared to when I solve the problem using numpy's np.linalg.eigh function. I believe both of them use BLAS, but I'm not sure what else they may be using that is different.
Has anyone else experienced this/knows what is going on?
numpy.linalg.eigh(a, UPLO='L') is a different algorithm. It assumes the matrix is symmetric and takes the lower triangular matrix (as a default) to more efficiently compute the decomposition.
The equivalent of Julia's LinearAlgebra.eigen() is numpy.linalg.eig. You should get the same result as eigh if you wrap your matrix in Julia in Hermitian(A, :L) (or Symmetric(A, :L) for a real matrix) before feeding it into LinearAlgebra.eigen().
Check out numpy's docs on eig and eigh, and Julia's standard LinearAlgebra documentation. If you go down to the special matrices section, it details which specialized methods are used for each special matrix type thanks to multiple dispatch.
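The discrepancy can be reproduced in numpy alone (a minimal sketch; the 4x4 matrix is arbitrary, only its Hermitian structure matters):

```python
import numpy as np

# Build a small Hermitian matrix and diagonalize it with both routines.
rng = np.random.default_rng(1)
a = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
h = (a + a.conj().T) / 2  # force Hermitian

w_eig, v_eig = np.linalg.eig(h)     # generic solver, eigenvalues in no particular order
w_eigh, v_eigh = np.linalg.eigh(h)  # Hermitian solver, eigenvalues in ascending order

# Eigenvalues agree once sorted...
print(np.allclose(np.sort(w_eig.real), w_eigh))  # True

# ...but each eigenvector is only defined up to a complex phase (and ordering),
# so the columns of v_eig and v_eigh need not match element-wise.
```

The same phase/ordering freedom is why Julia and numpy can both be "correct" while returning different eigenvector matrices.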

How to classify arrays using svm?

I want to make a model which can differentiate between general functions, e.g. tell whether a given set of points falls on a line, a parabola, etc.
I am not able to train an SVC directly on the arrays, as it expects a 2D input array of shape (n_samples, n_features).
Any suggestions?
Note: eventually I want to extend this to classifying periodic functions given a set of data points.
Okay, so your input is an array of points, each point has coordinates (x, y), and your label is the type of function.
In math, the task of recovering a function from a set of points is called interpolation: you are given the points and return a function that passes through them.
What you are describing seems more like non-linear regression (curve fitting) than classification: you would have too many classes to cover, and it doesn't really make sense to frame it that way anyway.
Here is a tutorial in python about non-linear regression that would be more useful. https://scipy-cookbook.readthedocs.io/items/robust_regression.html
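For the specific line-vs-parabola case, the fitting approach can be sketched without any classifier at all: fit each candidate model and accept the lowest-degree one whose residual is small. (This is an illustrative sketch, not a fixed API; the function name, tolerance, and test data are all made up.)

```python
import numpy as np

def classify_curve(x, y, tol=1e-6):
    """Label a point set by fitting candidate polynomial models.

    Tries models from lowest to highest degree and accepts the first
    whose mean squared residual falls below the tolerance.
    """
    for label, degree in (("line", 1), ("parabola", 2)):
        coeffs = np.polyfit(x, y, degree)
        mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
        if mse < tol:
            return label
    return "other"

x = np.linspace(-2, 2, 50)
print(classify_curve(x, 3 * x + 1))     # line
print(classify_curve(x, x**2 - x + 2))  # parabola
```

The same idea extends to the periodic case by adding, say, a sinusoidal model fitted with scipy.optimize.curve_fit and comparing residuals.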

How to use Fast Fourier Transform to execute convolution of matrix?

I need to add many big 3D arrays (with a shape of 500x500x500) together and want to speed up the process by using multiplication in the Fourier space. The problem is that I don't get the same answer when multiplying in the Fourier space compared to simply adding the matrix.
To test it out, I wrote a minimal example trying to make it work but the answer is not what I expected. Either my math knowledge is wrong or I am not using the function correctly.
Below is the simplest code showing what I am trying to do:
import numpy as np
c = np.asarray(((1,2),(2,3)))
d = np.asarray(((1,4),(1,5)))
print("Transform")
Nc = np.fft.rfft2(c)
Nd = np.fft.rfft2(d)
print("Inverse")
Nnc = np.fft.irfft2(Nc)
Nnd = np.fft.irfft2(Nd)
print("Sum")
S = np.dot(Nc, Nd)
print(np.fft.irfft2(S))
When I print the inverse transform of S, I get the result:
[[6, 28],[10,46]]
But from what I understood about the Fourier space, multiplication would mean addition outside of the Fourier space so I should get S = c + d?
Am I doing something wrong using the FFT function or is my assumption that S should equal c plus d wrong?
There is a little misunderstanding here:
Multiplication in Fourier space corresponds to convolution in the spatial domain and not to addition.
There is no way to speed up addition in that way.
If you want to compute c+d through the Fourier domain, you'd have to add the two spectra, not multiply them:
np.fft.irfft2(Nc+Nd) == c+d # (up to numerical precision)
Of course, this is much slower than simply adding the matrices in the spatial domain.
As @Florian said, it is convolution that can be sped up by multiplying in the Fourier domain.
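For completeness, here is what the element-wise product of the spectra (note: element-wise, not np.dot) does compute: the circular convolution of the two arrays. A small self-checking sketch using the 2x2 matrices from the question:

```python
import numpy as np

c = np.asarray(((1, 2), (2, 3)))
d = np.asarray(((1, 4), (1, 5)))

# Element-wise product of the spectra, then inverse transform.
conv_fft = np.fft.irfft2(np.fft.rfft2(c) * np.fft.rfft2(d), s=c.shape)

# Direct circular convolution for comparison.
n, m = c.shape
conv_direct = np.zeros((n, m))
for i in range(n):
    for j in range(m):
        for k in range(n):
            for l in range(m):
                conv_direct[i, j] += c[k, l] * d[(i - k) % n, (j - l) % m]

print(np.allclose(conv_fft, conv_direct))  # True
```

For linear (non-circular) convolution of large arrays, the inputs must be zero-padded first; scipy.signal.fftconvolve handles that bookkeeping for you.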

Can I avoid using `Theano.scan`?

I have a 3-dimensional tensor ("tensor3" -- an array of matrices), and I'd like to compute the determinant (theano.sandbox.linalg.det) of each matrix. Is there a way to compute each determinant without using theano.scan? When I try calling det directly on the tensor I get the error
3-dimensional array given. Array must be two-dimensional.
But I read that scan is slow and doesn't parallelize well, and that one should use only tensor operations if possible. Is that so? Can I avoid using scan in this case?
I see 3 possibilities:
1. If you know, before compiling the Theano function, the number of matrices in the tensor3 variable, you could use the split() op or just call det() on each matrix in the tensor3.
2. If you don't know the shape, you can make your own op that loops over the input and calls the NumPy function. See the Theano documentation for an example of how to make an op.
3. Use scan. It is easy to use for this case. See this example, just changing the call from tensordot to det().
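As an aside on option 2: the NumPy call inside such a custom op doesn't even need an explicit loop, since np.linalg.det broadcasts over the leading axes of a stacked array (a small sketch; the shapes and values here are arbitrary):

```python
import numpy as np

# A stack of two 3x3 matrices; perturb one entry so the determinants
# are non-zero (an unperturbed arange matrix is singular).
stack = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)
stack[:, 0, 0] += 1.0

batched = np.linalg.det(stack)                        # one call, shape (2,)
looped = np.array([np.linalg.det(m) for m in stack])  # equivalent loop

print(np.allclose(batched, looped))  # True
```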

Rotate Text Cairo, Transformation matrix

http://www.cairographics.org/manual/cairo-Transformations.html
I have been using the Cairo vector graphics library for some work, and I don't quite understand some parts:
What is the default value of the transformation matrix?
When do I need the transformation matrix anyway?
Suppose I don't want to rotate text -- will I still need to set it? Will it still be set?
I know these are very noob-like questions, and I should investigate on my own, but I can't quite understand it.
The default transformation is the identity matrix. This matrix doesn't change values, so (x, y) stays the same when transformed by the identity matrix.
Rotating text is one reason you might need this. If you don't rotate text, then you likely don't need the matrix; most drawing doesn't need a transformation.
Whether you need the matrix depends on what else you do. For example, if you call other code and want to scale the drawing up by a factor of two, you could do that with a transformation matrix.
So the short version: If you don't know what to do with the transformation matrix, you can most likely leave it alone.
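The identity-matrix behaviour can be sketched with plain numpy, no Cairo required (Cairo's matrix additionally carries translation terms; this shows only the 2x2 rotation/scale part):

```python
import numpy as np

def rotation(theta):
    """2x2 rotation matrix; rotation(0) is the identity."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

point = np.array([1.0, 0.0])

# The identity leaves the point unchanged...
print(np.allclose(rotation(0.0) @ point, point))             # True
# ...while a 90-degree rotation maps (1, 0) to (0, 1).
print(np.allclose(rotation(np.pi / 2) @ point, [0.0, 1.0]))  # True
```

In Cairo, cairo_rotate() composes such a rotation into the current transformation matrix, which starts out as the identity.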
