Having problems using numpy linalg svd - python-3.x

I'm testing SVD decomposition with a simple matrix:
A = np.array([[1, 2, 3], [4, 5, 6]])
but when I use:
U, D, V = np.linalg.svd(A)
the outputs are U with shape (2, 2), D with shape (2,), and V with shape (3, 3).
The problem is the shape of V: the SVD algorithm should return a 2x3 matrix, since my original matrix is 2x3 and I'm getting 2 singular values, but it returns a 3x3 matrix. When I take V[:2, :] and compute the product
U.dot(np.diag(D).dot(V[:2, :]))
it returns the original matrix A. What is happening here?
Thank you for reading and for your answers, and sorry for the grammar; I'm a beginner in English.

This is explained in the docstring, but it might take a few readings to get it. The boolean parameter full_matrices determines the shape of the returned arrays. In your case, you want full_matrices=False, e.g.:
In [42]: A = np.array([[1, 2, 3], [4, 5, 6]])
In [43]: U, D, V = np.linalg.svd(A, full_matrices=False)
In [44]: U @ np.diag(D) @ V
Out[44]:
array([[1., 2., 3.],
       [4., 5., 6.]])
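For comparison, here is a minimal sketch (not part of the answer above) contrasting the shapes returned by the full and reduced SVD:
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])

# Full SVD: U is (2, 2) and V is (3, 3); D always holds min(m, n) singular values.
U_full, D, V_full = np.linalg.svd(A, full_matrices=True)
print(U_full.shape, D.shape, V_full.shape)  # (2, 2) (2,) (3, 3)

# Reduced SVD: V is trimmed to (2, 3), so the factors multiply back directly.
U, D, V = np.linalg.svd(A, full_matrices=False)
print(U.shape, D.shape, V.shape)            # (2, 2) (2,) (2, 3)
print(np.allclose(U @ np.diag(D) @ V, A))   # True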

Related

How can I apply a linear transformation on sparse matrix in PyTorch?

In PyTorch, we have nn.Linear, which applies a linear transformation to the incoming data:
y = WA + b
In this formula, W and b are our learnable parameters and A is my input data matrix. The matrix A in my case is too large to load fully into RAM, so I use it as a sparse matrix. Is it possible to perform such an operation on sparse matrices using PyTorch?
This is possible with PyTorch using sparse matrix multiply. In your case, I think you want something like:
>>> i = [[0, 1, 1],
...      [2, 0, 2]]
>>> v = [3, 4, 5]
>>> A = torch.sparse_coo_tensor(i, v, (2, 3))
>>> A.to_dense()
tensor([[0, 0, 3],
        [4, 0, 5]])
>>> # compute W @ A by computing ((A.T) @ (W.T)).T because,
>>> # at the time of writing, the sparse matrix must come first in the matmul
>>> (A.t() @ W.t()).t()
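For context, here is a minimal runnable sketch of the approach above; the dense weight W, the bias b, and their shapes are assumptions for illustration, not part of the original answer:
import torch

# Sparse input A (2 x 3), as in the answer above
i = [[0, 1, 1],
     [2, 0, 2]]
v = [3.0, 4.0, 5.0]
A = torch.sparse_coo_tensor(i, v, (2, 3))

# Hypothetical learnable parameters for y = W A + b
W = torch.randn(4, 2)   # maps the 2 rows of A to 4 output rows
b = torch.randn(4, 1)   # broadcast across the 3 columns of A

# The sparse operand must come first, so compute (A^T W^T)^T instead of W A directly
y = torch.sparse.mm(A.t(), W.t()).t() + b
print(y.shape)          # torch.Size([4, 3])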

What is the recommended way to do embeddings in jax?

So I mean something where you have a categorical feature $X$ (suppose you have turned it into ints already) and say you want to embed it in some dimension using an embedding table $A$, where $A$ is arity x n_embed.
What is the usual way to do this? Is using a for loop and vmap correct? I do not want something like jax.nn, but something more efficient, like
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding
For example, consider high arity and low embedding dimension.
Is it jnp.take as in the flax.linen implementation here? https://github.com/google/flax/blob/main/flax/linen/linear.py#L624
Indeed, the typical way to do this in pure JAX is with jnp.take. Given an array A of embeddings with shape (num_embeddings, num_features) and a categorical feature x of integers with shape (n,), the following gives you the embedding lookup.
jnp.take(A, x, axis=0) # shape: (n, num_features)
If using Flax, the recommended way would be to use the flax.linen.Embed module, which achieves the same effect:
import flax.linen as nn

class Model(nn.Module):
    @nn.compact
    def __call__(self, x):
        emb = nn.Embed(num_embeddings, num_features)(x)  # shape: (n, num_features)
        return emb
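A minimal sketch of how such a module could be initialized and applied; num_embeddings, num_features, and the example input are assumed values, not from the original answer:
import jax
import jax.numpy as jnp
import flax.linen as nn

num_embeddings, num_features = 4, 3  # assumed table size for illustration

class Model(nn.Module):
    @nn.compact
    def __call__(self, x):
        return nn.Embed(num_embeddings, num_features)(x)

x = jnp.array([3, 1, 2], dtype=jnp.int32)
model = Model()
params = model.init(jax.random.PRNGKey(0), x)  # initializes the embedding table
emb = model.apply(params, x)                   # shape: (3, num_features)
print(emb.shape)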
Suppose that A is the embedding table and x is an array of indices of any shape. There are three equivalent options:
- A[x], which is like jnp.take(A, x, axis=0) but simpler.
- vmap-ed A[x], which parallelizes along axis 0 of x.
- nested vmap-ed A[x], which parallelizes along all axes of x.
Here is the source code for your reference.
import jax
import jax.numpy as jnp

embs = jnp.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]], dtype=jnp.float32)
x = jnp.array([[3, 1], [2, 0]], dtype=jnp.int32)

print("\ntake\n", jnp.take(embs, x, axis=0))
print("\nuse []\n", embs[x])
print(
    "\nvmap\n",
    jax.vmap(lambda embs, x: embs[x], in_axes=[None, 0], out_axes=0)(embs, x),
)
print(
    "\nnested vmap\n",
    jax.vmap(
        jax.vmap(lambda embs, x: embs[x], in_axes=[None, 0], out_axes=0),
        in_axes=[None, 0],
        out_axes=0,
    )(embs, x),
)
BTW, I learned the nested-vmap trick from the IREE GPT2 model code by James Bradbury.

Multiply 2D tensor with 3D tensor in pytorch

Suppose I have a matrix such as P = [[0,1],[1,0]] and a vector v = [a,b]. If I multiply them I have:
Pv = [b,a]
The matrix P is simply a permutation matrix, which changes the order of each element.
Now suppose that I have the same P, but I have the matrices M1 = [[1,2],[3,4]] and M2 = [[5,6],[7,8]]. Now let me combine them into the 3D tensor T = [[[1,2],[3,4]], [[5,6],[7,8]]] with dimensions (2,2,2) - (C,W,H). Suppose I multiply P by T such that:
PT = [[[5,6],[7,8]], [[1,2],[3,4]]]
Note that M1 now equals [[5,6],[7,8]] and M2 equals [[1,2],[3,4]], as the values have been permuted across the C dimension of T (C,W,H).
How can I compute PT (P = 2D tensor, T = 3D tensor) in PyTorch using matmul? The following does not work:
torch.matmul(P, T)
An alternative solution to @mlucy's answer is to use torch.einsum. This has the benefit of letting you define the operation yourself, without worrying about torch.matmul's requirements:
>>> torch.einsum('ij,jkl->ikl', P, T)
tensor([[[5, 6],
         [7, 8]],

        [[1, 2],
         [3, 4]]])
Or with torch.matmul:
>>> (P @ T.flatten(1)).reshape_as(T)
tensor([[[5, 6],
         [7, 8]],

        [[1, 2],
         [3, 4]]])
You could do something like:
torch.matmul(P, T.flatten(1)).reshape(T.shape)
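As a quick sanity check, here is a minimal runnable sketch using the values from the question (the equality check is an addition):
import torch

P = torch.tensor([[0, 1],
                  [1, 0]])
T = torch.tensor([[[1, 2], [3, 4]],
                  [[5, 6], [7, 8]]])

# Permute the C dimension of T with the permutation matrix P
PT = torch.einsum('ij,jkl->ikl', P, T)
print(PT)
print(torch.equal(PT, (P @ T.flatten(1)).reshape_as(T)))  # True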

LDA covariance matrix not match calculated covariance matrix

I'm looking to better understand the covariance_ attribute returned by scikit-learn's LDA object.
I'm sure I'm missing something, but I expect it to be the covariance matrix associated with the input data. However, when I compare .covariance_ against the covariance matrix returned by numpy.cov(), I get different results.
Can anyone help me understand what I am missing? Thanks and happy to provide any additional information.
Please find a simple example illustrating the discrepancy below.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
# Sample Data
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
y = np.array([1, 1, 1, 0, 0, 0])
# Covariance matrix via np.cov
print(np.cov(X.T))
# Covariance matrix via LDA
clf = LinearDiscriminantAnalysis(store_covariance=True).fit(X, y)
print(clf.covariance_)
In sklearn.discriminant_analysis.LinearDiscriminantAnalysis, the covariance is computed as follows:
In [1]: import numpy as np
   ...: cov = np.zeros(shape=(X.shape[1], X.shape[1]))
   ...: for c in np.unique(y):
   ...:     Xg = X[y == c, :]
   ...:     cov += np.count_nonzero(y == c) / len(y) * np.cov(Xg.T, bias=1)
   ...: print(cov)
[[0.66666667 0.33333333]
 [0.33333333 0.22222222]]
So it corresponds to the sum of the covariance matrices of the individual classes, each weighted by a prior equal to the class frequency. Note that this prior is a parameter of LDA.
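In formula form, the loop above computes the pooled within-class covariance $\Sigma = \sum_c \pi_c \hat{\Sigma}_c$, where $\pi_c = n_c / n$ is the class frequency (the prior) and $\hat{\Sigma}_c$ is the biased (divide-by-$n_c$) covariance of class $c$. By contrast, np.cov(X.T) in the question computes a single covariance over all samples around the overall mean, with the default $n - 1$ denominator, which is why the two results differ.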

Python declaring a numpy matrix of lists of lists

I would like to have a numpy matrix that looks like this
[int, [[int,int]]]
I receive an error that looks like this: "ValueError: setting an array element with a sequence."
Below is the declaration:
def __init__(self):
    self.path = np.zeros((1, 2))
I attempt to assign a value to this in the line below:
routes_traveled.path[0, 1] = [loc]
loc is a list and routes_traveled is the object.
Do you want a higher-dimensional array, say 3D, or do you really want a 2D array whose elements are Python lists? Real lists, not numpy arrays?
One way to put lists into an array is to use dtype=object:
In [71]: routes=np.zeros((1,2),dtype=object)
In [72]: routes[0,1]=[1,2,3]
In [73]: routes[0,0]=[4,5]
In [74]: routes
Out[74]: array([[[4, 5], [1, 2, 3]]], dtype=object)
One element of this array is a 2-element list, the other a 3-element list.
I could have created the same thing directly:
In [76]: np.array([[[4,5],[1,2,3]]])
Out[76]: array([[[4, 5], [1, 2, 3]]], dtype=object)
But if I'd given it 2 lists of the same length, I'd get a 3d array:
In [77]: routes1 = np.array([[[4, 5, 6], [1, 2, 3]]])
In [78]: routes1
Out[78]:
array([[[4, 5, 6],
        [1, 2, 3]]])
I could index the last, routes1[0,1], and get an array: array([1, 2, 3]), whereas routes[0,1] gives [1, 2, 3].
In this case you need to be clear about when you are talking about arrays, subarrays, and Python lists.
With dtype=object, the elements can be anything - lists, dictionaries, numbers, strings.
In [84]: routes[0,0]=3
In [85]: routes
Out[85]: array([[3, [1, 2, 3]]], dtype=object)
Just be aware that such an array loses a lot of the functionality that a purely numeric array has. What the array actually contains is pointers to Python objects; it is just a slight generalization of a Python list.
Did you want to create an array of zeros with shape (1, 2)? In that case use np.zeros((1, 2)).
In [118]: np.zeros((1, 2))
Out[118]: array([[ 0., 0.]])
In contrast, np.zeros(1, 2) raises TypeError:
In [117]: np.zeros(1, 2)
TypeError: data type not understood
because the second argument to np.zeros is supposed to be the dtype, and 2 is not a valid dtype.
Or, to create a 1-dimensional array with a custom dtype consisting of an int and a pair of ints, you could use
In [120]: np.zeros((2,), dtype=[('x', 'i4'), ('y', '2i4')])
Out[120]:
array([(0, [0, 0]), (0, [0, 0])],
      dtype=[('x', '<i4'), ('y', '<i4', (2,))])
I wouldn't recommend this though. If the values are all ints, I think you would be better off with a simple ndarray with homogeneous integer dtype, perhaps of shape (nrows, 3):
In [121]: np.zeros((2, 3), dtype='<i4')
Out[121]:
array([[0, 0, 0],
       [0, 0, 0]], dtype=int32)
Generally I find that using an array with a simple dtype makes many operations, from building the array to slicing and reshaping, easier.
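For completeness, a short sketch of how the fields of such a structured array could be filled and read back; the values are made up for illustration:
import numpy as np

# one int field 'x' and a pair of ints 'y' per record
path = np.zeros((2,), dtype=[('x', 'i4'), ('y', '2i4')])

path[0] = (7, [1, 2])    # assign a whole record at once
path['x'][1] = 9         # or assign a single field
path['y'][1] = [3, 4]

print(path['x'])         # [7 9]
print(path['y'])         # [[1 2]
                         #  [3 4]]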
