Matrix Multiplication Python - python-3.x

I am trying to multiply two matrices and came across the code below. Can someone please help me understand the logic of the second 'for' loop — why is it range(len(B[0]))? I am quite a newbie to the programming world, so I'm unable to follow the logic. Please help.
for i in range(r1):
    print("i=", i)
    for j in range(len(B[0])):
        print("j=", j)
        for k in range(r2):
            print("k=", k)
            result[i][j] += A[i][k] * B[k][j]
return result
Here r1 and r2 are the lengths (number of rows) of the two matrices.

A simple way to do matrix multiplication is to use the NumPy dot product:
import numpy as np
result = np.dot([[2, 5], [5, 8]],[[2, 1], [5, 9]])
#result = np.dot(matrix1, matrix2)
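For context, len(B[0]) is the number of columns of B, which is also the number of columns of the result, so the j loop walks across the columns of the output. Below is a minimal sketch of the same triple loop with the result matrix initialized explicitly, using the example values from above (assuming A is r1 x r2 and B is r2 x len(B[0])):

import numpy as np

A = [[2, 5], [5, 8]]
B = [[2, 1], [5, 9]]

r1, r2 = len(A), len(B)                         # rows of A, rows of B
result = [[0] * len(B[0]) for _ in range(r1)]   # r1 x len(B[0]) grid of zeros

for i in range(r1):                 # each row of A
    for j in range(len(B[0])):      # each column of B
        for k in range(r2):         # walk along the shared inner dimension
            result[i][j] += A[i][k] * B[k][j]

print(result)        # [[29, 47], [50, 77]]
print(np.dot(A, B))  # same values as the loop version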

Related

Integer Linear Programming with CVXPY in python3

I'm trying to solve an integer linear programming problem using CVXPY, but I'm struggling with some syntax and cannot figure out how to force the variable I'm solving for to take values of either 0 or 1. I thought that setting it to boolean in the Variable object was the solution, but for some reason I'm not getting what I want.
I installed the cvxpy library and tried to run it on a small example. The input for my problem is a binary matrix M of size (I, J) that only has values of 0 or 1. The variable I want to solve for is a boolean (binary) vector P of size J. The objective is to minimize the sum of the values of P (i.e. minimize the number of 1s inside that vector), subject to the constraint that the sum of each row of M times the variable vector P is greater than or equal to 1; i.e. the summation over j of Mij*Pj >= 1, for all i.
I wrote the following code to do that; however, I'm struggling to find what I did wrong in it.
import numpy as np
import cvxpy as cp

M = np.array([[1, 0, 0, 0], [1, 0, 0, 0], [0, 1, 1, 0], [1, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 0]])
variable = cp.Variable(M.shape[1], value=1, boolean=True)
one_vec = np.ones(M.shape[1])
obj = cp.Minimize(sum(np.dot(variable, one_vec)))
constraints = []
for i in range(len(M)):
    constraints.append(np.sum(np.dot(M[i], variable)) >= 1)
problem = cp.Problem(obj, constraints=constraints)
problem.solve()
For this simple example with the matrix M in my code, the answer should be that the variable vector's value is [1, 0, 1, 0], since multiplying the matrix
[[1, 0, 0, 0]
 [1, 0, 0, 0]
 [0, 1, 1, 0]
 [1, 0, 0, 0]
 [0, 0, 1, 1]
 [0, 0, 1, 0]]
by the vector [1, 0, 1, 0] gives a value of at least 1 for each row.
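(A quick NumPy check of that claim:)

import numpy as np

M = np.array([[1, 0, 0, 0], [1, 0, 0, 0], [0, 1, 1, 0],
              [1, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 0]])
print(M @ np.array([1, 0, 1, 0]))   # [1 1 1 1 1 1] -- every row is covered at least once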
But if I run the code I have written, I get a float as my answer, so I'm doing something wrong that I cannot figure out. I guess I don't know how to phrase this problem programmatically so that the solver will solve it. Any help would be much appreciated. Thanks.
UPDATE! I think I figured it out
I modified the code to this:
import numpy as np
import cvxpy as cp

M = np.array([[1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 1], [1, 0, 0, 1], [0, 0, 1, 1], [0, 0, 1, 1]])
selection = cp.Variable(M.shape[1], boolean=True)
ones_vec = np.ones(M.shape[1])
constraints = []
for i in range(len(M)):
    constraints.append(M[i] * selection >= 1)
total_genomes = ones_vec * selection
problem = cp.Problem(cp.Minimize(total_genomes), constraints)
problem.solve()
and now it's working. I used the * operator instead of the NumPy dot product; I think CVXPY has overloaded that operator to perform vector multiplications.
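For reference, a vectorized sketch of the same model without the Python loop over rows — assuming a reasonably recent CVXPY where @ is the preferred matrix-multiplication operator and a mixed-integer-capable solver is available:

import numpy as np
import cvxpy as cp

M = np.array([[1, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 0]])

selection = cp.Variable(M.shape[1], boolean=True)   # the binary vector P
constraints = [M @ selection >= 1]                  # every row of M must be covered at least once
problem = cp.Problem(cp.Minimize(cp.sum(selection)), constraints)
problem.solve()                                     # may need a MIP-capable solver, e.g. solver=cp.GLPK_MI

print(selection.value)   # expected [1. 0. 1. 0.] for this M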

Efficient way to get nearest point [duplicate]

Let's say I have the following numpy matrix (simplified):
matrix = np.array([[1, 1],
                   [2, 2],
                   [5, 5],
                   [6, 6]])
And now I want to get the vector from the matrix closest to a "search" vector:
search_vec = np.array([3, 3])
What I have done is the following:
min_dist = None
result_vec = None
for ref_vec in matrix:
    distance = np.linalg.norm(search_vec - ref_vec)
    distance = abs(distance)
    print(ref_vec, distance)
    if min_dist == None or min_dist > distance:
        min_dist = distance
        result_vec = ref_vec
The result works, but is there a native numpy solution that does this more efficiently?
My problem is that the bigger the matrix becomes, the slower the entire process gets.
Are there other solutions that handle this problem in a more elegant and efficient way?
Approach #1
We can use a Cython-powered kd-tree for quick nearest-neighbour lookup, which is very efficient both memory-wise and performance-wise -
In [276]: from scipy.spatial import cKDTree
In [277]: matrix[cKDTree(matrix).query(search_vec, k=1)[1]]
Out[277]: array([2, 2])
Approach #2
With SciPy's cdist -
In [286]: from scipy.spatial.distance import cdist
In [287]: matrix[cdist(matrix, np.atleast_2d(search_vec)).argmin()]
Out[287]: array([2, 2])
Approach #3
With Scikit-learn's Nearest Neighbors -
from sklearn.neighbors import NearestNeighbors
nbrs = NearestNeighbors(n_neighbors=1).fit(matrix)
closest_vec = matrix[nbrs.kneighbors(np.atleast_2d(search_vec))[1][0,0]]
Approach #4
With Scikit-learn's kdtree -
from sklearn.neighbors import KDTree
kdt = KDTree(matrix, metric='euclidean')
cv = matrix[kdt.query(np.atleast_2d(search_vec), k=1, return_distance=False)[0,0]]
Approach #5
From the eucl_dist package (disclaimer: I am its author) and following the wiki contents, we could leverage matrix multiplication -
M = matrix.dot(search_vec)
d = np.einsum('ij,ij->i', matrix, matrix) + np.inner(search_vec, search_vec) - 2 * M
closest_vec = matrix[d.argmin()]
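For completeness, since the question asked for a native NumPy solution: a sketch that broadcasts the subtraction over the rows and takes the argmin of the row-wise norms (no extra dependencies, though the kd-tree approaches above scale better for repeated queries):

import numpy as np

matrix = np.array([[1, 1], [2, 2], [5, 5], [6, 6]])
search_vec = np.array([3, 3])

dists = np.linalg.norm(matrix - search_vec, axis=1)  # Euclidean distance to every row
closest_vec = matrix[dists.argmin()]                 # array([2, 2])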

python3: find most k nearest vectors from a list?

Say I have a vector v1 and a list of vectors l1. I want to find the k vectors from l1 that are closest (most similar) to v1, in descending order of similarity.
I have a function sim_score(v1,v2) that will return a similarity score between 0 and 1 for any two input vectors.
Indeed, a naive way is to write a for loop over l1, calculate the distances, store them in another list, and then sort that list. But is there a Pythonic way to do the task?
Thanks
import numpy as np
np.sort([np.sqrt(np.sum((l - v1) * (l - v1))) for l in l1])[:3]
Consider using scipy.spatial.distance module for distance computations. It supports the most common metrics.
import numpy as np
from scipy.spatial import distance
v1 = [[1, 2, 3]]
l1 = [[11, 3, 5],
      [ 2, 1, 9],
      [.1, 3, 2]]
# compute distances
dists = distance.cdist(v1, l1, metric='euclidean')
# sorted distances
sd = np.sort(dists)
Note that each parameter to cdist must be two-dimensional. Hence, v1 must be a nested list, or a 2d numpy array.
You may also use your homegrown metric like:
def my_metric(a, b, **kwargs):
    # some logic that returns a single distance value
    ...

dists = distance.cdist(v1, l1, metric=my_metric)
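To get the k closest vectors themselves rather than just the sorted distances, here is a sketch using argsort on the distance row — assuming Euclidean distance stands in for the poster's sim_score:

import numpy as np
from scipy.spatial import distance

v1 = np.array([[1, 2, 3]])
l1 = np.array([[11, 3, 5],
               [ 2, 1, 9],
               [.1, 3, 2]])

k = 2
dists = distance.cdist(v1, l1, metric='euclidean')[0]  # distances from v1 to each vector in l1
nearest_idx = np.argsort(dists)[:k]                    # indices of the k smallest distances
k_nearest = l1[nearest_idx]                            # the k closest vectors, nearest first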

Spark Matrix multiplication with python

I am trying to do matrix multiplication using Apache Spark and Python.
Here is my data
from pyspark.mllib.linalg.distributed import RowMatrix
My RDDs of vectors
rows_1 = sc.parallelize([[1, 2], [4, 5], [7, 8]])
rows_2 = sc.parallelize([[1, 2], [4, 5]])
My matrices
mat1 = RowMatrix(rows_1)
mat2 = RowMatrix(rows_2)
I would like to do something like this:
mat = mat1 * mat2
I wrote a function to perform the matrix multiplication, but I'm afraid it will have a long processing time. Here is my function:
def matrix_multiply(df1, df2):
    nb_row = df1.count()
    mat = []
    for i in range(0, nb_row):
        row = list(df1.filter(df1['index'] == i).take(1)[0])
        row_out = []
        for r in range(0, len(row)):
            r_value = 0
            col = df2.select(df2[list_col[r]]).collect()
            col = [list(c)[0] for c in col]
            for c in range(0, len(col)):
                r_value += row[c] * col[c]
            row_out.append(r_value)
        mat.append(row_out)
    return mat
My function performs a lot of Spark actions (take, collect, etc.). Will the function take a lot of processing time?
If someone has another idea, it would be helpful to me.
You cannot. Since RowMatrix has no meaningful row indices, it cannot be used for multiplication. Even ignoring that, the only distributed matrix which supports multiplication with another distributed structure is BlockMatrix.
from pyspark.mllib.linalg.distributed import *

def as_block_matrix(rdd, rowsPerBlock=1024, colsPerBlock=1024):
    return IndexedRowMatrix(
        rdd.zipWithIndex().map(lambda xi: IndexedRow(xi[1], xi[0]))
    ).toBlockMatrix(rowsPerBlock, colsPerBlock)

as_block_matrix(rows_1).multiply(as_block_matrix(rows_2))
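For reference, a small usage sketch for collecting the product back to the driver — assuming the matrices are small enough to fit in driver memory, since toLocalMatrix() gathers every block:

product = as_block_matrix(rows_1).multiply(as_block_matrix(rows_2))
print(product.toLocalMatrix())   # a 3x2 DenseMatrix here, since rows_1 is 3x2 and rows_2 is 2x2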

Using Theano.scan with multidimensional arrays

To speed up my code I am converting a multidimensional sumproduct function from Python to Theano. My Theano code reaches the same result, but only calculates the result for one dimension at a time, so that I have to use a Python for-loop to get the end result. I assume that would make the code slow, because Theano cannot optimize memory usage and transfer (for the gpu) between multiple function calls. Or is this a wrong assumption?
So how can I change the Theano code, so that the sumprod is calculated in one function call?
The original Python function:
def sumprod(a1, a2):
    """Sum the element-wise products of the `a1` and `a2`."""
    result = numpy.zeros_like(a1[0])
    for i, j in zip(a1, a2):
        result += i * j
    return result
For the following input
a1 = ([1, 2, 4], [5, 6, 7])
a2 = ([1, 2, 4], [5, 6, 7])
the output would be: [ 26. 40. 65.] that is 1*1 + 5*5, 2*2 + 6*6 and 4*4 + 7*7
The Theano version of the code:
import theano
import theano.tensor as T
import numpy

a1 = ([1, 2, 4], [5, 6, 7])
a2 = ([1, 2, 4], [5, 6, 7])
# wanted result: [ 26. 40. 65.]
# that is 1*1 + 5*5, 2*2 + 6*6 and 4*4 + 7*7

Tk = T.iscalar('Tk')
Ta1_shared = theano.shared(numpy.array(a1).T)
Ta2_shared = theano.shared(numpy.array(a2).T)
outputs_info = T.as_tensor_variable(numpy.asarray(0, 'float64'))
Tsumprod_result, updates = theano.scan(
    fn=lambda Ta1_shared, Ta2_shared, prior_value: prior_value + Ta1_shared * Ta2_shared,
    outputs_info=outputs_info,
    sequences=[Ta1_shared[Tk], Ta2_shared[Tk]])
Tsumprod_result = Tsumprod_result[-1]
Tsumprod = theano.function([Tk], outputs=Tsumprod_result)

result = numpy.zeros_like(a1[0])
for i in range(len(a1[0])):
    result[i] = Tsumprod(i)
print result
First, more people will answer your Theano questions on the Theano mailing list than on Stack Overflow. But I'm here :)
First, your function isn't a good fit for the GPU. Even if everything were well optimized, the transfer of the input to the GPU just to multiply and sum the result would take more time than the Python version.
Your Python code is slow; here is a version that should be faster:
def sumprod(a1, a2):
    """Sum the element-wise products of the `a1` and `a2`."""
    a1 = numpy.asarray(a1)
    a2 = numpy.asarray(a2)
    result = (a1 * a2).sum(axis=0)
    return result
For the Theano code, here is the equivalent of this faster Python version (no need for scan):
m1 = theano.tensor.matrix()
m2 = theano.tensor.matrix()
f = theano.function([m1, m2], (m1 * m2).sum(axis=0))
The thing to remember from this is that you need to "vectorize" your code. "Vectorize" here is used in the NumPy sense: use numpy.ndarray and functions that work on the full tensor at a time. This is always faster than doing it with a loop (a Python loop or Theano scan). Also, Theano optimizes some of those cases by moving the computation outside the scan, but it doesn't always do so.
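A quick usage sketch for that vectorized Theano version, using the example input from the question (the arrays are cast to theano.config.floatX so they match the dtype of T.matrix()):

import numpy
import theano
import theano.tensor as T

m1 = T.matrix()
m2 = T.matrix()
f = theano.function([m1, m2], (m1 * m2).sum(axis=0))

a1 = numpy.array([[1, 2, 4], [5, 6, 7]], dtype=theano.config.floatX)
a2 = numpy.array([[1, 2, 4], [5, 6, 7]], dtype=theano.config.floatX)
print(f(a1, a2))   # [ 26.  40.  65.]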
