How to create a multi-diagonal square matrix in Theano?

Is there a better way to create a multi-diagonal square matrix in Theano than the following,
A = theano.tensor.nlinalg.AllocDiag(offset=0)(x)
A += theano.tensor.nlinalg.AllocDiag(offset=1)(x[:-1])
A += theano.tensor.nlinalg.AllocDiag(offset=-1)(x[1:])
where x is the vector I want on the diagonals? Each time I call AllocDiag()() a new Apply node is created, which is causing memory issues and inefficiencies.
I'm hoping there is a way, similar to scipy, where a list of vectors can be passed into the function with a corresponding list of offsets; see https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.sparse.diags.html.
Any assistance is much appreciated.

One way which doesn't require AllocDiag()() is to use theano.tensor.set_subtensor() with A[range(n), range(n)] to address the diagonal indices, where A is an n×n matrix. Something like the following:
A = tt.set_subtensor(A0[range(n), range(n)], x)
A = tt.set_subtensor(A[range(n-1), range(1, n)], x[:-1])
A = tt.set_subtensor(A[range(1, n), range(n-1)], x[1:])
where A0 is the initial matrix, for example, a matrix of zeros.
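Putting that together, here is a minimal runnable sketch (assuming n is fixed when the graph is built, and using theano.tensor.zeros for the initial matrix):
import numpy as np
import theano
import theano.tensor as tt

n = 5
x = tt.vector('x')
# Start from an n-by-n zero matrix and fill the main, upper and lower diagonals.
A0 = tt.zeros((n, n))
A = tt.set_subtensor(A0[np.arange(n), np.arange(n)], x)
A = tt.set_subtensor(A[np.arange(n - 1), np.arange(1, n)], x[:-1])
A = tt.set_subtensor(A[np.arange(1, n), np.arange(n - 1)], x[1:])

f = theano.function([x], A)
print(f(np.arange(1, n + 1).astype(theano.config.floatX)))
This builds the tridiagonal matrix with three set_subtensor operations in a single compiled graph.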

Related

How to create a sparse matrix using scipy.sparse.csr_matrix in scipy/numpy

I have to create a sparse matrix in Python using a function similar to the Matlab function
S = sparse(i,j,v,m,n), where i, j, and v are such that S(i(k),j(k)) = v(k) and the size of S is specified as m-by-n.
I have chosen the function scipy.sparse.csr_matrix to do this. My code is something like the following:
arg_shape = np.array([ndof, ndof])
K = csr_matrix((arg_data, (arg_x, arg_y)), shape=arg_shape)
Here ndof = 786432, and arg_data, arg_x, arg_y are numpy arrays, all of the same shape, i.e. (150994944,).
When I run this code, I get the following error:
ValueError: row index exceeds matrix dimensions
In Matlab the code looks like this and works:
K = sparse(arg_x,arg_y,arg_data, ndof, ndof);
Could anyone please help me with the following points:
1) Is scipy.sparse.csr_matrix a good replacement for the Matlab sparse function?
2) If yes, what is the mistake I am making in the code?
Thank you very much.
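For reference, a small self-contained example of the (data, (row, col)) form of csr_matrix. Note that, unlike MATLAB's sparse, the indices are 0-based and must be strictly less than the corresponding shape dimension; an index equal to ndof produces exactly the "row index exceeds matrix dimensions" error quoted above.
import numpy as np
from scipy.sparse import csr_matrix

# Toy analogue of S = sparse(i, j, v, m, n) with m = n = 3.
row = np.array([0, 1, 2])           # 0-based row indices
col = np.array([2, 0, 1])           # 0-based column indices
val = np.array([10.0, 20.0, 30.0])  # values; duplicates at the same (i, j) are summed
K = csr_matrix((val, (row, col)), shape=(3, 3))
print(K.toarray())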

Find the closest distance between every galaxy in the data and create pairs based on closest distance between them

My task is to pair up galaxies that are closest together from a large list of galaxies. I have the RA, DEC and Z of each, and a formula to work out the distance between each one from the data given. However, I can't work out an efficient method of iterating over the whole list to find the distance between EACH galaxy and EVERY other galaxy in the list, with the intention of then matching each galaxy with its nearest neighbour.
The data has been imported in the following way:
hdulist = fits.open("documents/RADECMASSmatch.fits")
data = hdulist[1].data
CATAID = data['CATAID_1']
Xpos_DEIMOS_1 = data['Xpos_DEIMOS_1']
z = data['Z_1']
RA = data['RA']
DEC = data['DEC']
I have tried something like:
radiff = []
for i in range(0, n):
    for j in range(i + 1, n):
        radiff.append(abs(RA[i] - RA[j]))
to initially work out the difference in RA and DEC between every galaxy, which does actually work, but I feel like there must be a better way.
A friend suggested something along the lines of:
galaxy_coords = list(zip(data['RA'], data['DEC'], data['Z_1']))
separation_matrix = np.zeros((len(galaxy_coords), len(galaxy_coords)))
done = []
for i, coords1 in enumerate(galaxy_coords):
    for j, coords2 in enumerate(galaxy_coords):
        if (j, i) in done:
            separation_matrix[i, j] += separation_matrix[j, i]
            continue
        separation = your_formula(coords1, coords2)
        separation_matrix[i, j] += separation
        done.append((i, j))
But I don't really understand this, so I can't readily apply it. I've tried, but it yields nothing useful.
Any help with this would be much appreciated, thanks
Your friend's code seems to be generating a 2D array of the distances between each pair, and taking advantage of the symmetry (distance(x,y) = distance(y,x)). It would be slightly better if it used itertools to generate combinations, and assigned your_formula(coords1, coords2) to separation_matrix[i,j] and separation_matrix[j,i] within the same iteration, rather than having separate iterations for both i,j and j,i.
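A rough sketch of that suggestion (assuming galaxy_coords is a per-galaxy sequence of (RA, DEC, Z) tuples, and your_formula is the separation formula from the question):
import numpy as np
from itertools import combinations

n = len(galaxy_coords)
separation_matrix = np.zeros((n, n))
# Visit each unordered pair exactly once and fill both halves by symmetry.
for i, j in combinations(range(n), 2):
    s = your_formula(galaxy_coords[i], galaxy_coords[j])
    separation_matrix[i, j] = separation_matrix[j, i] = s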
Even better would probably be scipy's KDTree, which uses a tree-based nearest-neighbour algorithm: https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.spatial.KDTree.html. It is focused on rectilinear coordinates, but converting to those should be addressable in linear time.
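A minimal sketch of that approach, using the RA, DEC and z arrays from the question. It assumes RA/DEC are in degrees and uses a crude linear Hubble-law distance d = cz/H0 as a stand-in for a proper comoving distance:
import numpy as np
from scipy.spatial import cKDTree

c, H0 = 3.0e5, 70.0                # km/s and km/s/Mpc (assumed values)
d = c * z / H0                     # rough radial distance in Mpc
ra, dec = np.radians(RA), np.radians(DEC)
# Convert spherical coordinates to 3D Cartesian points.
xyz = np.column_stack((d * np.cos(dec) * np.cos(ra),
                       d * np.cos(dec) * np.sin(ra),
                       d * np.sin(dec)))
tree = cKDTree(xyz)
dist, idx = tree.query(xyz, k=2)   # k=2 because each point's closest hit is itself
nearest_neighbour = idx[:, 1]      # index of each galaxy's nearest partner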

TypeError: append() missing 1 required positional argument: 'values'

I have a variable 'x_data' sized 360x190, and I am trying to select particular rows of data.
x_data_train = []
x_data_train = np.append([x_data_train,
x_data[0:20,:],
x_data[46:65,:],
x_data[91:110,:],
x_data[136:155,:],
x_data[181:200,:],
x_data[226:245,:],
x_data[271:290,:],
x_data[316:335,:]],axis = 0)
I get the following error :
TypeError: append() missing 1 required positional argument: 'values'
Where did I go wrong?
If I am using
x_data_train = []
x_data_train.append(x_data[0:20,:])
x_data_train.append(x_data[46:65,:])
x_data_train.append(x_data[91:110,:])
x_data_train.append(x_data[136:155,:])
x_data_train.append(x_data[181:200,:])
x_data_train.append(x_data[226:245,:])
x_data_train.append(x_data[271:290,:])
x_data_train.append(x_data[316:335,:])
the size of the output is 8 instead of 160 rows.
Update:
In MATLAB, I would load the text file and x_data would be a variable with 360 rows and 190 columns.
If I want to select rows 1 to 20, 46 to 65, ..., I simply write
x_data_train = xdata([1:20,46:65,91:110,136:155,181:200,226:245,271:290,316:335], :);
and the resulting x_data_train is the array I want.
How can I do that in Python? The approach above results in an array of 8 subsets of 20x190 each, but I want it to be one 160x190 array.
Short version: the most idiomatic and fastest way to do what you want in Python is this (assuming x_data is a numpy array):
x_data_train = np.vstack([x_data[0:20,:],
x_data[46:65,:],
x_data[91:110,:],
x_data[136:155,:],
x_data[181:200,:],
x_data[226:245,:],
x_data[271:290,:],
x_data[316:335,:]])
This can be shortened (but made very slightly slower) by doing:
x_data[np.r_[0:20,46:65,91:110,136:155,181:200,226:245,271:290,316:335], :]
For your case where you have a lot of indices I think it helps readability, but in cases where there are fewer indices I would use the first approach.
Long version:
There are several different issues at play here.
First, in python, [] makes a list, not an array like in MATLAB. Lists are more like 1D cell arrays. They can hold any data type, including other lists, but they cannot have multiple dimensions. The equivalent of MATLAB matrices in Python are numpy arrays, which are created using np.array.
Second, [x, y] in Python always creates a list where the first element is x and the second element is y. In MATLAB, [x, y] can do one of several completely different things depending on what x and y are. In your case, you want to concatenate. In Python, you need to concatenate explicitly. For two lists, there are several ways to do that. The simplest is using x += y, which modifies x in-place by putting the contents of y at the end. You can combine multiple lists by doing something like x += y + z + w. If you want to keep x unchanged, you can assign to a new variable using something like z = x + y. Finally, you can use x.extend(y), which is roughly equivalent to x += y but works with some data types besides lists.
For numpy arrays, you need to use a slightly different approach. While Python lists can be modified in-place, strictly speaking neither MATLAB matrices nor numpy arrays can be. MATLAB pretends to allow this, but it is really creating a new matrix behind the scenes (which is why you get a warning if you try to resize a matrix in a loop). Numpy requires you to be more explicit about creating a new array. The simplest approach is to use np.hstack, which concatenates two arrays horizontally (or np.vstack or np.dstack for vertical and depth concatenation, respectively). So you could do z = np.hstack([v, w, x, y]). There is an append function in numpy, but it almost never works well in practice, so don't use it (it requires careful memory management that is more trouble than it is worth).
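A quick illustration of the stacking functions with toy arrays:
import numpy as np

a = np.ones((2, 3))
b = np.zeros((2, 3))
print(np.vstack([a, b]).shape)   # (4, 3): stacked vertically, along rows
print(np.hstack([a, b]).shape)   # (2, 6): stacked horizontally, along columns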
Third, what append does is create one new element at the end of the target list, and put whatever variable append was called with into that element. So if you do x.append([1,2,3]), it adds one new element to the end of list x containing the list [1,2,3]. In MATLAB terms it would be more like x = [x, {{1,2,3}}], where x is a cell array.
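A small demonstration of append versus extend:
x = [1, 2]
x.append([3, 4])
print(x)    # [1, 2, [3, 4]] -- the whole list becomes one new element
y = [1, 2]
y.extend([3, 4])
print(y)    # [1, 2, 3, 4]  -- the elements are concatenated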
Fourth, Python makes heavy use of "methods", which are basically functions attached to data (it is a bit more complicated than that in practice, but those complexities aren't really relevant here). Recent versions of MATLAB have added them as well, but they aren't really integrated into MATLAB data types like they are in Python. So where in MATLAB you would usually use sum(x), for numpy arrays you would use x.sum(). In this case, assuming you were doing appending (which you aren't), you wouldn't use np.append(x, y), you would use x.append(y).
Finally, in MATLAB x:y creates a matrix of values from x to y. In Python, however, it creates a "slice", which doesn't actually contain all the values and so can be processed much more quickly by lists and numpy arrays. However, you can't really work with multiple slices like you do in your example (nor does it make sense to, because slices in numpy don't make copies like they do in MATLAB, while using multiple indexes does make a copy). You can get something close to what you have in MATLAB using np.r_, which builds a numpy array of indexes out of slices. So to reproduce your example in numpy, where x_data is a numpy array, you can do x_data[np.r_[0:20,46:65,91:110,136:155,181:200,226:245,271:290,316:335], :].
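A quick check with a stand-in array. Note that, unlike MATLAB ranges, Python slices exclude their end point, so 46:65 selects 19 rows rather than 20; the start and stop values need adjusting if you want exactly the 160 rows the MATLAB code selects:
import numpy as np

x_data = np.arange(360 * 190).reshape(360, 190)   # stand-in for the real data
rows = np.r_[0:20, 46:65, 91:110, 136:155, 181:200, 226:245, 271:290, 316:335]
x_data_train = x_data[rows, :]
print(x_data_train.shape)    # (153, 190) with the slices as written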
More information on x_data and np might be needed to solve this but...
First: You're creating 2 copies of the same list: np and x_data_train
Second: Your indexes on x_data are strange
Third: You're passing 3 objects to append() when it only accepts 2.
I'm pretty sure revisiting your indexes on x_data will be where you solve the current error, but it will result in another error related to passing 2 values to append.
And I'm also sure you want
x_data_train.append(object)
not
x_data_train = np.append(object)
and you may actually want
x_data_train.extend([objects])
More on append vs extend here: append vs. extend

(Incremental)PCA's Eigenvectors are not transposed but should be?

When we posted a homework assignment about PCA, we told the course participants to pick any way of calculating the eigenvectors they liked. They found multiple ways: eig, eigh (our favorite was svd). In a later task we told them to use the PCAs from scikit-learn, and were surprised that the results differed a lot more than we expected.
I toyed around a bit and we posted an explanation to the participants that either solution was correct and probably just suffered from numerical instabilities in the algorithms. However, recently I picked that file up again during a discussion with a co-worker and we quickly figured out that there's an interesting subtle change to make to get all results to be almost equivalent: Transpose the eigenvectors obtained from the SVD (and thus from the PCAs).
A bit of code to show this:
def pca_eig(data):
    """Uses numpy.linalg.eig to calculate the PCA."""
    data = data.T @ data
    val, vec = np.linalg.eig(data)
    return val, vec
versus
def pca_svd(data):
    """Uses numpy.linalg.svd to calculate the PCA."""
    u, s, v = np.linalg.svd(data)
    return s ** 2, v
Does not yield the same result. Changing the return of pca_svd to s ** 2, v.T, however, works! It makes perfect sense following the definition on Wikipedia: the SVD of X follows X = UΣWᵀ, where
the right singular vectors W of X are equivalent to the eigenvectors of XᵀX
So to get the eigenvectors we need to transpose the output v of np.linalg.svd(...).
Unless there is something else going on? Anyway, PCA and IncrementalPCA both show wrong results (or is eig wrong? I mean, transposing that yields the same equality), and looking at the code for PCA reveals that it does it the way I did initially:
U, S, V = linalg.svd(X, full_matrices=False)
# flip eigenvectors' sign to enforce deterministic output
U, V = svd_flip(U, V)
components_ = V
I created a little gist demonstrating the differences (nbviewer), the first with PCA and IncPCA as they are (also no transposition of the SVD), the second with transposed eigenvectors:
Comparison without transposition of SVD/PCAs (normalized data)
Comparison with transposition of SVD/PCAs (normalized data)
As one can clearly see, in the upper image the results are not really great, while the lower image only differs in some signs, thus mirroring the results here and there.
Is this really wrong and a bug in scikit-learn? More likely I am using the math wrong – but what is right? Can you please help me?
If you look at the documentation, it's pretty clear from the shape that the eigenvectors are in the rows, not the columns.
The point of the sklearn PCA is that you can use the transform method to do the correct transformation.
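A minimal numpy check of the row/column convention described above (toy data; individual signs may flip, so the comparison uses absolute values):
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X -= X.mean(axis=0)                 # centre the data first

val, vec = np.linalg.eig(X.T @ X)   # eigenvectors are the *columns* vec[:, i]
u, s, v = np.linalg.svd(X)          # right singular vectors are the *rows* v[i, :]

order = np.argsort(val)[::-1]       # eig does not sort; svd returns s in descending order
print(np.allclose(np.abs(vec[:, order]), np.abs(v.T)))   # True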

MATLAB: fastest way to do a root-mean-squared error between a vector and array of vectors

I have a question regarding the fastest way to compute the RMSE between a single vector and an array of vectors. Specifically, I have a vector A representing a point and would like to find the index in a list B of points that A is closest to. Right now I am using:
tempmat = bsxfun(@minus,A,B);
tempmat1 = sqrt(sum(tempmat.^2,2));
index = find(tempmat1 == min(tempmat1));
This takes about 0.058 seconds to calculate the index. Is there a faster way in MATLAB of doing this? I am performing this calculation literally millions of times.
Many thanks for reading,
Joe
tempmat = bsxfun(@minus,A,B);
tempmat1 = sum(tempmat.^2,2);
[m,index] = min(tempmat1);
m = sqrt(m); % optional, only if you need the actual numerical value
This avoids calculating sqrt on the whole array, since the minimum of the squared differences will have the same index. It also uses the second output of min to avoid the second pass of find.
You'll probably find that
tempmat = A - B(ones(1, size(A,1)), :)
is faster than the bsxfun version, unless size(A,1) is exceptionally large.
This assumes that A is your array and B is your vector. The RSS calculation implies that you have row vectors.
Also, I presume you know that you're calculating the RSS not RMS.