I would like to create a square matrix with the eigenvalues on the diagonal:
eigen_values, eigen_vectors = theano.tensor.nlinalg.eig(covariance_matrix)
D = T.nlinalg.AllocDiag(eigen_values)
However, Theano apparently does not treat the D matrix I created as a standard matrix, so I cannot use it in the succeeding computations.
theano.tensor.var.AsTensorError: ('Cannot convert <theano.tensor.nlinalg.AllocDiag object at 0x7face5708450> to TensorType', <class 'theano.tensor.nlinalg.AllocDiag'>)
You're using an operation class as if it were an operation function.
Instead of
D = T.nlinalg.AllocDiag(eigen_values)
try
D = T.nlinalg.AllocDiag()(eigen_values)
or
D = T.nlinalg.alloc_diag(eigen_values)
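As a minimal sketch of the corrected snippet (assuming the variable names from the question, and a Theano version where alloc_diag and matrix_inverse are available in theano.tensor.nlinalg), D then behaves like any other tensor in later computations:

import theano
import theano.tensor as T
import theano.tensor.nlinalg

covariance_matrix = T.dmatrix('covariance_matrix')
eigen_values, eigen_vectors = T.nlinalg.eig(covariance_matrix)
D = T.nlinalg.alloc_diag(eigen_values)   # a regular TensorVariable, usable downstream

# e.g. reconstruct the matrix as V D V^-1
reconstructed = eigen_vectors.dot(D).dot(T.nlinalg.matrix_inverse(eigen_vectors))
f = theano.function([covariance_matrix], reconstructed)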
Is there a built-in function to efficiently calculate all pairwise dot products of two tensors in PyTorch?
e.g.
input - tensor A (shape NxD)
tensor B (shape NxD)
output - tensor C (shape NxN) such that C_i,j = torch.dot(A_i, B_j) ?
Isn't it simply
C = torch.mm(A, B.T)  # same as C = A @ B.T
BTW,
A very flexible tool for matrix/vector/tensor dot products is torch.einsum:
C = torch.einsum('id,jd->ij', A, B)
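As a quick sanity check, a minimal sketch (the shapes here are illustrative) showing that the matmul and einsum versions agree:

import torch

N, D = 5, 3
A = torch.randn(N, D)
B = torch.randn(N, D)

C_mm = torch.mm(A, B.T)                 # (N, N), C[i, j] = dot(A_i, B_j)
C_es = torch.einsum('id,jd->ij', A, B)  # same result via einsum
print(torch.allclose(C_mm, C_es))       # True, up to floating-point rounding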
I need help in speeding up the following block of code:
import numpy as np
x = 100
pp = np.zeros((x, x))
M = np.ones((x,x))
arrayA = np.random.uniform(0,5,2000)
arrayB = np.random.uniform(0,5,2000)
for i in range(x):
    for j in range(x):
        y = np.multiply(arrayA, np.exp(-1j*(M[j,i])*arrayB))
        p = np.trapz(y, arrayB)  # numerical integration of y over arrayB
        pp[j,i] = abs(p**2)
Is there a function in numpy, or another method, that would let me rewrite this code without the nested for-loops? My idea is a function that multiplies every element of M with the vector arrayB, giving a 100 x 100 array in which each element is itself a vector. Each of those vectors would then be multiplied element-wise by arrayA, again producing a 100 x 100 array of vectors. Finally, each vector would be integrated numerically with np.trapz() to obtain a 100 x 100 array of scalars.
My problem is that I don't know which functions would perform this.
Thanks in advance for your help!
Edit:
Using broadcasting with
M = np.asarray(M)[..., None]
y = 1000*arrayA*np.exp(-1j*M*arrayB)
return np.trapz(y, arrayB)
works and I can omit the for-loops. However, this is not faster but a little slower in my case, which might be a memory issue.
y = np.multiply(arrayA, np.exp(-1j*(M[j,i])*arrayB))
can be written as
y = arrayA * np.exp(-1j*M[:,:,None]*arrayB)
producing an (x, x, 2000) array.
But the next step may need adjustment. I'm not familiar with np.trapz.
np.trapz(y, arrayB)
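For completeness, a minimal sketch of the fully vectorized version, assuming the same shapes as in the question. np.trapz takes an axis argument, so the integration can be done along the last axis; note that this builds a 100 x 100 x 2000 complex intermediate, which may explain the memory behaviour mentioned in the edit:

import numpy as np

x = 100
M = np.ones((x, x))
arrayA = np.random.uniform(0, 5, 2000)
arrayB = np.random.uniform(0, 5, 2000)

y = arrayA * np.exp(-1j * M[:, :, None] * arrayB)  # shape (x, x, 2000)
p = np.trapz(y, arrayB, axis=-1)                   # integrate along the last axis
pp = np.abs(p**2)                                  # shape (x, x)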
Is there an efficient 'Numpy'-based solution to create a 3 (or higher) dimensional diagonal matrix?
More specifically, I am looking for a shorter (and perhaps more efficient) solution to replace the following:
N = 100
M = 4
d = np.random.randn(N) # calculated in the real use case from other parameters
A = np.zeros((M, M, N), dtype=d.dtype)
for i in range(M):
    A[i, i, :] = d
The above solution will be slow if M is large and, I think, not very memory-efficient, as d is copied M times in memory.
Here's one with np.einsum diag-view -
np.einsum('iij->ij',A)[:] = d
Looking at the string notation, this also translates well from the iterative part: A[i, i, :] = d.
Generalize to ndarray with ellipsis -
np.einsum('ii...->i...',A)[:] = d
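Putting it together, a minimal sketch using the shapes from the question; np.einsum returns a writable view here, so the assignment fills the diagonal in place without looping over M:

import numpy as np

N, M = 100, 4
d = np.random.randn(N)

A = np.zeros((M, M, N), dtype=d.dtype)
np.einsum('iij->ij', A)[:] = d      # write d along the (i, i, :) diagonal view

# generalizes to any number of trailing axes:
np.einsum('ii...->i...', A)[:] = d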
I have the problem that I need to enforce rank 2 on n 3x3 matrices, and I don't want to use loops for it.
So I computed the SVD of all the matrices and set the smallest element of each diagonal matrix to 0.
import numpy as np
from numpy import linalg as la
[u, s, vT] = la.svd(A)
s[:,2] = 0
So when I just had one matrix I would do the following:
sD = np.diag(s) # no idea how to solve this for n matrices
Ar2 = u.dot(sD).dot(vT) # doing the dot products as two dot products
# using np.einsum for n matrices
OK, so my problem is building the diagonal matrices from an (n,3) array. I tried np.diag after some reshapes, but I don't think that function can handle the two dimensions of the s array. A loop would be a possible solution but it is too slow. So, what is the cleanest and quickest way to put the rows of s into diagonal form, or alternatively to compute the two dot products directly from the given arrays?
# dimensions of arrays:
# A -> (n,3,3)
# u -> (n,3,3)
# s -> (n,3)
# vT -> (n,3,3)
As you suspected, you can use einsum:
Ar2 = np.einsum('ijk,ik,ikl->ijl', u, s, vT)
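A minimal sketch checking the einsum expression against the per-matrix dot products, with shapes as in the question:

import numpy as np

n = 5
A = np.random.randn(n, 3, 3)
u, s, vT = np.linalg.svd(A)       # u: (n,3,3), s: (n,3), vT: (n,3,3)
s[:, 2] = 0                       # enforce rank 2

Ar2 = np.einsum('ijk,ik,ikl->ijl', u, s, vT)   # batched u @ diag(s) @ vT

ref0 = u[0].dot(np.diag(s[0])).dot(vT[0])      # single-matrix reference
print(np.allclose(Ar2[0], ref0))               # True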
I am trying to test a new kernel method in kernel ridge regression by implementing the Fastfood transformation (https://arxiv.org/abs/1408.3060). I can write a function which computes this transform, but it isn't playing nicely with the kernel ridge regression function in sklearn. As a result I have gone to the source code for sklearn's kernel_ridge.py (https://insight.io/github.com/scikit-learn/scikit-learn/blob/master/sklearn/kernel_ridge.py) and kernel_approximation.py (https://insight.io/github.com/scikit-learn/scikit-learn/blob/master/sklearn/kernel_approximation.py) to try to define this new kernel as a class in kernel_approximation.py. The problem is that I have no idea how to convert my construction into something that will work with the kernel_approximation and KernelRidge code. Would anybody be able to advise how best to do this, please?
My construction for the fastfood transform is:
def fastfood_product(d):
    '''
    Constructs the fastfood matrix composition V = const*S*H*G*Pi*H*B where
    S is a scaling matrix
    H is a Hadamard transform
    G is a diagonal random Gaussian matrix
    Pi is a permutation matrix
    B is a diagonal Rademacher matrix.
    Input: d - dimensionality of the feature vectors for the kernel.
           Must be a power of two. If not, the matrix can be padded with
           zeros, but for simplicity assume this condition is always met.
    Output: V'''
    S = np.zeros(shape=(d, d))
    G = np.zeros_like(S)
    B = np.zeros_like(S)
    H = hadamard(d)
    Pi = np.eye(d)
    np.random.shuffle(Pi)  # permutation matrix
    # Construct the diagonal matrices
    np.fill_diagonal(B, 2*np.random.randint(low=0, high=2, size=(d, 1)).flatten() - 1)
    np.fill_diagonal(G, np.random.randn(G.shape[0], 1))  # may want to change standard normal to arbitrary, which will affect the scaling for V
    np.fill_diagonal(S, np.linalg.norm(G, 'fro')**(-0.5))
    # print('Shapes of B {}, S {}, G {}, H {}, Pi {}'.format(B.shape, S.shape, G.shape, H.shape, Pi.shape))
    V = d**(-0.5)*S.dot(H).dot(G).dot(Pi).dot(H).dot(B)
    return V
def fastfood_feature_map(X, n):
    '''Given a matrix X of data, compute the fastfood transformation and feature mapping.
    Input: X - data of dimension d by m; n - the number of nonlinear basis functions to choose (a power of 2)
    Output: Phi - matrix of random features for the fastfood kernel approximation.
    Usage: Phi must be transposed for computation in the kernel ridge regression,
           i.e. solve ||Phi.T * w - b|| + regulariser
    Comments: This only uses a standard normal distribution, but this could
              be altered with different hyperparameters.'''
    d, m = X.shape
    V = fastfood_product(d)
    Phi = n**(-0.5)*np.exp(1j*np.dot(V, X))
    return Phi
I think the imports import numpy as np and from scipy.linalg import hadamard will be necessary for the above.
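Rather than patching kernel_approximation.py itself, one route is to wrap the feature map as a scikit-learn style transformer and feed the explicit features to a linear ridge model. The following is only a rough sketch under that assumption; FastfoodSampler, the cos/sin real-valued feature map, and the normalisation are illustrative choices of mine, not sklearn code:

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

class FastfoodSampler(BaseEstimator, TransformerMixin):
    '''Hypothetical transformer wrapping fastfood_product from above.'''

    def fit(self, X, y=None):
        d = X.shape[1]                 # assumed to be a power of two, as above
        self.V_ = fastfood_product(d)  # draw the random Fastfood projection once
        return self

    def transform(self, X):
        Z = X.dot(self.V_.T)                     # (n_samples, d) random projections
        phi = np.hstack([np.cos(Z), np.sin(Z)])  # real-valued random features
        return phi / np.sqrt(phi.shape[1])

# Kernel ridge regression with an explicit feature map reduces to ordinary ridge
# regression on the mapped features, so the transformer can sit in front of Ridge:
model = make_pipeline(FastfoodSampler(), Ridge(alpha=1.0))

Calling model.fit(X, y) then behaves like kernel ridge regression with the approximated kernel, without modifying the sklearn source.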