Finding a vector that is orthogonal to n columns of a matrix - python-3.x

Given a matrix B with shape (M, N), where M > N, how can I find a vector v (of shape M) that is perpendicular to all columns of B?
I tried using NumPy's numpy.linalg.lstsq to solve Bx = 0, where 0 is a vector of M zeros.
It returns a vector of zeros with shape (N,).

You can use the sympy library, like
from sympy import Matrix
B = Matrix([[2, 3, 5], [-4, 2, 3], [0, 0, 0]])
V = B.nullspace()[0]
or, to find the whole nullspace,
N = B.nullspace()
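Note that nullspace() gives vectors x with B*x = 0. For a vector of shape M that is orthogonal to the columns of B, as asked, you can take the nullspace of the transpose instead; a minimal sketch with a small illustrative (3, 2) matrix:
from sympy import Matrix
B = Matrix([[2, 3], [-4, 2], [0, 0]])  # shape (M, N) with M > N
v = B.T.nullspace()[0]                 # v satisfies B.T * v = 0
print(B.T * v)                         # Matrix([[0], [0]]): v is orthogonal to every column of B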

Here is what worked for me, in case someone else needs the answer:
import numpy as np
u, s, vh = np.linalg.svd(B)  # B has shape (M, N) with M > N
v = u[:, -1]                 # last left singular vector: shape (M,), orthogonal to all columns of B
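As a quick sanity check, here is a minimal sketch with a random (M, N) matrix; the resulting v should satisfy B.T @ v ≈ 0:
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(5, 3))      # M = 5 > N = 3

u, s, vh = np.linalg.svd(B)
v = u[:, -1]                     # shape (5,)

print(np.allclose(B.T @ v, 0))   # True: v is orthogonal to every column of B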

Related

How to solve a cubic function without knowing the coefficients a b and c?

Apologies for this silly question. In the cubic function below (from a school project), I am given values for x and f(x), while the coefficients a, b, c and the constant d are unknown.
f(x)=ax^3+bx^2+cx+d
In such a case, is there a way to find a, b and c using a Python package? I found a good amount of Python tutorials on solving cubic functions, but they seem to focus mainly on solving for x while a, b and c are given.
Here is an approach via sympy, Python's symbolic math library.
As an example, we are trying to find the formula for the sum of the first n triangular numbers. The triangular numbers (formula n*(n+1)/2) are 0, 1, 3, 6, 10, 15, 21, ... The sums of the first n triangular numbers are thus 0, 1, 4, 10, 20, 35, 56, ...
from sympy import Eq, solve
from sympy.abc import a,b,c,d, x
formula = a*x**3 + b*x**2 + c*x + d # general cubic formula
xs = [0, 1, 2, 3] # some x values
fxs = [0, 1, 4, 10] # the corresponding function values
sol = solve([Eq(formula.subs(x, xi), fx) for xi, fx in zip(xs, fxs)])
print(sol) # {a: 1/6, b: 1/2, c: 1/3, d: 0}
You can use more x, fx pairs to check that a cubic formula suffices (this won't work with float values, as sympy needs exact symbolic equations).
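For instance, a small sketch reusing formula from above with a fifth, redundant point still yields the same solution, confirming that a cubic suffices:
sol5 = solve([Eq(formula.subs(x, xi), fx)
              for xi, fx in zip([0, 1, 2, 3, 4], [0, 1, 4, 10, 20])])
print(sol5)  # again {a: 1/6, b: 1/2, c: 1/3, d: 0}, so the system stays consistent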
Sympy's interpolate can also be interesting: it calculates a polynomial through the given points. Such code could look like:
from sympy import interpolate
from sympy.abc import x
xs = [0, 1, 2, 3]
fxs = [0, 1, 4, 10]
fx_dict = dict(zip(xs, fxs))
sol = interpolate(fx_dict, x)
print(sol) # x**3/6 + x**2/2 + x/3
print(sol.factor()) # x*(x + 1)*(x + 2)/6
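A quick check that the interpolated polynomial reproduces and extends the original sequence:
print([sol.subs(x, xi) for xi in range(6)])  # [0, 1, 4, 10, 20, 35]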

Multiply 2D tensor with 3D tensor in pytorch

Suppose I have a matrix such as P = [[0,1],[1,0]] and a vector v = [a,b]. If I multiply them I have:
Pv = [b,a]
The matrix P is simply a permutation matrix, which changes the order of each element.
Now suppose that I have the same P, but also the matrices M1 = [[1,2],[3,4]] and M2 = [[5,6],[7,8]]. Let me combine them into the 3D tensor T = [[[1,2],[3,4]], [[5,6],[7,8]]] with dimensions (2,2,2) - (C,W,H). Suppose I multiply P by T such that:
PT = [[[5,6],[7,8]], [[1,2],[3,4]]]
Note that M1 now equals [[5,6],[7,8]] and M2 now equals [[1,2],[3,4]], as the values have been permuted across the C dimension of T (C,W,H).
How can I compute PT (P = 2D tensor, T = 3D tensor) in pytorch using matmul? The following does not work:
torch.matmul(P, T)
An alternative solution to @mlucy's answer is to use torch.einsum. This has the benefit of letting you define the operation yourself, without worrying about torch.matmul's requirements:
>>> torch.einsum('ij,jkl->ikl', P, T)
tensor([[[5, 6],
         [7, 8]],

        [[1, 2],
         [3, 4]]])
Or with torch.matmul:
>>> (P @ T.flatten(1)).reshape_as(T)
tensor([[[5, 6],
         [7, 8]],

        [[1, 2],
         [3, 4]]])
You could do something like:
torch.matmul(P, T.flatten(1)).reshape(T.shape)
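For reference, a small self-contained sketch with the P and T from the question, checking that both approaches produce the desired permutation across the C dimension:
import torch

P = torch.tensor([[0, 1], [1, 0]])
T = torch.tensor([[[1, 2], [3, 4]],
                  [[5, 6], [7, 8]]])

out_einsum = torch.einsum('ij,jkl->ikl', P, T)
out_matmul = torch.matmul(P, T.flatten(1)).reshape(T.shape)

print(torch.equal(out_einsum, out_matmul))  # True
print(out_matmul)
# tensor([[[5, 6],
#          [7, 8]],
#
#         [[1, 2],
#          [3, 4]]])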

Returning a list of diagonals in square matrices (simplification)

I want to return the diagonals of a given matrix, i.e. from left to right and from right to left. I'm using list comprehensions to do so, but I came up with what is, in my eyes, a too complicated comprehension for the left-to-right diagonal.
I was wondering if there are simpler ways to write a comprehension that returns the right-to-left diagonal? Also, given that I'm a noob only beginning to dive into the language, I was wondering if the right_left comprehension is even conventional?
matrix = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9]]

left_right = [matrix[i][i] for i in range(len(matrix))]

right_left = [matrix[i][[-j for j in range(-len(matrix) + 1, 1)][i]]
              for i in range(-len(matrix), 0)]
right_left = [matrix[i][-(i+1)] for i in range(len(matrix))]
For an explanation of negative indices, read this: https://stackoverflow.com/a/11367936/8326775
[list(reversed(matrix[i]))[i] for i in range(len(matrix))]

# more transparent version:
for i in range(len(matrix)):
    row = list(reversed(matrix[i]))
    el = row[i]
    print("reversed row {} = {} -> extract element {} -> gives {}".format(i, row, i, el))

# reversed row 0 = [3, 2, 1] -> extract element 0 -> gives 3
# reversed row 1 = [6, 5, 4] -> extract element 1 -> gives 5
# reversed row 2 = [9, 8, 7] -> extract element 2 -> gives 7
You're probably better off learning about numpy, which has a built-in function for diagonals:
>>> import numpy as np
>>> matrix = np.array([[1, 2, 3],
...                    [4, 5, 6],
...                    [7, 8, 9]])
>>> np.diag(matrix)
array([1, 5, 9])
>>> np.diag(matrix, k=1)
array([2, 6])
>>> np.diag(matrix, k=-1)
array([4, 8])
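The right-to-left (anti-)diagonal from the question can be read the same way after flipping the matrix left to right with np.fliplr:
>>> np.diag(np.fliplr(matrix))
array([3, 5, 7])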

How to repeat tensor in a specific new dimension in PyTorch

If I have a tensor A which has shape [M, N],
I want to repeat the tensor K times so that the result B has shape [M, K, N]
and each slice B[:, k, :] has the same data as A.
What is the best practice to do this without a for loop?
K might be in another dimension.
torch.repeat_interleave() and tensor.repeat() do not seem to work, or I am using them in the wrong way.
tensor.repeat should suit your needs but you need to insert a unitary dimension first. For this we could use either tensor.unsqueeze or tensor.reshape. Since unsqueeze is specifically defined to insert a unitary dimension we will use that.
B = A.unsqueeze(1).repeat(1, K, 1)
Code description: A.unsqueeze(1) turns A from shape [M, N] into [M, 1, N], and .repeat(1, K, 1) repeats the tensor K times along the second dimension.
Einops provides a repeat function:
import einops
einops.repeat(x, 'm n -> m k n', k=K)
repeat can add an arbitrary number of axes in any order and reshuffle existing axes at the same time.
Adding to the answer provided by @Alleo, you can use the following einops function.
einops.repeat(example_tensor, 'b h w -> (repeat b) h w', repeat=b)
Where repeat is the number of times you want the tensor to be repeated along the first axis, and h, w are the remaining dimensions of the tensor.
Example -
example_tensor.shape -> torch.Size([1, 40, 50])
repeated_tensor = einops.repeat(example_tensor, 'b h w -> (repeat b) h w', repeat=8)
repeated_tensor.shape -> torch.Size([8, 40, 50])
More examples here - https://einops.rocks/api/repeat/
Repeated values are memory heavy; in most cases the best practice is to use broadcasting. So you would use A[:, None, :], which would make A.shape == (M, 1, N).
One case where I would agree to repeating the values is when in-place operations are needed in the following steps.
As numpy and torch differ in their implementations, I like the framework-agnostic (A * torch.ones(K, 1, 1)) followed by a transpose.
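A minimal sketch of the broadcasting idea; the second tensor C and its shape are made up here purely for illustration, and transpose(0, 1) is one way to read "followed by a transpose":
import torch

M, K, N = 2, 4, 3
A = torch.arange(float(M * N)).reshape(M, N)   # (M, N)
C = torch.ones(M, K, N)                        # hypothetical (M, K, N) operand

out = A[:, None, :] * C                        # A broadcasts to (M, K, N) without copying its data
print(out.shape)                               # torch.Size([2, 4, 3])

# framework-agnostic variant: materialize via multiplication, then transpose
B = (A * torch.ones(K, 1, 1)).transpose(0, 1)  # (M, K, N)
print(B.shape)                                 # torch.Size([2, 4, 3])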
tensor.expand might be a better choice than tensor.repeat because according to this: "Expanding a tensor does not allocate new memory, but only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the stride to 0."
However, be aware that: "More than one element of an expanded tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first."
M = N = K = 3
A = torch.arange(0, M * N).reshape((M, N))
B = A.unsqueeze(1).expand(M, K, N)
B
'''
tensor([[[0, 1, 2],
         [0, 1, 2],
         [0, 1, 2]],

        [[3, 4, 5],
         [3, 4, 5],
         [3, 4, 5]],

        [[6, 7, 8],
         [6, 7, 8],
         [6, 7, 8]]])
'''
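A small follow-up sketch, reusing A, M, K, N from above, that makes the memory difference visible through the strides: the expanded view has stride 0 along the repeated dimension, while repeat actually stores the copies:
B_expand = A.unsqueeze(1).expand(M, K, N)
B_repeat = A.unsqueeze(1).repeat(1, K, 1)

print(B_expand.stride())  # (3, 0, 1): the K dimension is a zero-stride view, no extra memory
print(B_repeat.stride())  # (9, 3, 1): the repeated rows are materialized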

Multiply a specific column in each row by -1

I have a pytorch tensor A of size (n, m) and a list of indices of size n, such that 0 <= indices[i] < m for each i. For each row i of A, I want to do A[i, indices[i]] *= -1 in a vectorized way. Is there an easy way to do this?
A = torch.tensor([[1,2,3],[4,5,6]])
indices = torch.tensor([1, 2])
#desired result
A = [[1,-2,3],[4,5,-6]]
Sure there is, fancy indexing is the way to go:
import torch
A = torch.tensor([[1, 2, 3], [4, 5, 6]])
indices = torch.tensor([1, 2]).long()
A[range(A.shape[0]), indices] *= -1
Remember that indices must be of torch.LongTensor type. If you have floats, you can cast them using the .long() member function.
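An equivalent small sketch that uses torch.arange for the row index; passing device=A.device keeps the index tensor on the same device as A, which matters if A lives on a GPU:
import torch

A = torch.tensor([[1, 2, 3], [4, 5, 6]])
indices = torch.tensor([1, 2])

rows = torch.arange(A.shape[0], device=A.device)
A[rows, indices] *= -1
print(A)
# tensor([[ 1, -2,  3],
#         [ 4,  5, -6]])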
