Element-wise multiplication with a specified channel in PyTorch

I have three tensors with shapes a = (B,12,512,512), b = (B,12,512,512), c = (B,2,512,512).
I want to compute a * c[:,0,:,:] + b * c[:,1,:,:].
As far as I know, gradient calculation does not support indexing operations, so I need to implement this calculation without indexing. How can I implement it in a vectorized way in PyTorch?
Thanks.
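A possible slice-free formulation (a sketch on my part, not from the post; note that slicing is in fact differentiable in PyTorch, so the einsum form below is mainly an alternative that avoids explicit indexing; B and requires_grad are placeholders):
import torch

B = 1
a = torch.randn(B, 12, 512, 512, requires_grad=True)
b = torch.randn(B, 12, 512, 512, requires_grad=True)
c = torch.randn(B, 2, 512, 512, requires_grad=True)

# stack a and b along a new "pair" dimension and contract it against c
ab = torch.stack((a, b), dim=1)                  # (B, 2, 12, 512, 512)
out = torch.einsum('bkchw,bkhw->bchw', ab, c)    # = a*c[:,0] + b*c[:,1], shape (B, 12, 512, 512)

# the sliced version for comparison (also differentiable)
ref = a * c[:, 0].unsqueeze(1) + b * c[:, 1].unsqueeze(1)
print(torch.allclose(out, ref))                  # True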

Related

How does a trained SVR model predict values?

I've been trying to understand how a model trained with support vector machines for regression predicts values. I have trained a model with sklearn.svm.SVR, and now I'm wondering how to "manually" predict the outcome of an input.
Some background: the model is a kernel SVR with an RBF kernel, using the dual formulation. So now I have arrays of the dual coefficients, the indices of the support vectors, and the support vectors themselves.
I found the function which is used to fit the hyperplane but I've been unsuccessful in applying that to "manually" predict outcomes without the function .predict.
The few things I tried all include the dot products of the input (features) array, and all the support vectors.
If anyone ever needs this, I've managed to understand the equation and code it in Python.
The following is the equation used for the dual formulation:
f(x) = \sum_{i=1}^{N} \alpha_i y_i \, x_i^T x + b
where N is the number of observations, and α_i multiplied by y_i are the dual coefficients found in the model's attribute model.dual_coef_. The x_i^T are some of the observations used for training (the support vectors), accessed through the attribute model.support_vectors_ (transposed to allow multiplication of the two matrices), x is the input vector containing a value for each feature (it's the one observation for which we want a prediction), and b is the intercept accessed by model.intercept_.
The x_i^T and x, however, are the observations transformed into a higher-dimensional space, as explained by mery in this post.
The RBF transformation can be applied either manually, step by step, or by using sklearn.metrics.pairwise.rbf_kernel.
With the latter, the code looks like this (in my case there are 589 support vectors and 40 features).
First we access the coefficients and vectors:
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

support_vectors = model.support_vectors_
dual_coefs = model.dual_coef_[0]
Then:
# rbf_kernel returns the kernel value between each support vector and the input
pred = (np.matmul(dual_coefs.reshape(1, 589),
                  rbf_kernel(support_vectors.reshape(589, 40),
                             Y=input_array.reshape(1, 40),
                             gamma=model.get_params()['gamma']))
        + model.intercept_)
If the RBF function needs to be applied manually, step by step, then:
# vrbf[i] = x_i - x; the diagonal of vrbf @ vrbf.T holds the squared distances ||x_i - x||^2
vrbf = support_vectors.reshape(589, 40) - input_array.reshape(1, 40)
pred = (np.matmul(dual_coefs.reshape(1, 589),
                  np.diag(np.exp(-model.get_params()['gamma'] *
                                 np.matmul(vrbf, vrbf.T))).reshape(589, 1))
        + model.intercept_)
I placed the .reshape() calls even where they are not strictly necessary, just to emphasize the shapes involved in the matrix operations.
Both versions give the same result as model.predict(input_array).
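As a quick sanity check (a sketch on my part, assuming the arrays above; model.predict expects a 2-D array of shape (n_samples, n_features)):
assert np.allclose(pred, model.predict(input_array.reshape(1, 40)))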

How to do constrained minimization in Pytorch?

I want to minimize a function whose elements are all tensors.
f = alpha + (vnorm / 2)  # objective to minimize
where vnorm = norm(v) * norm(v),
v is an n x 1 tensor and alpha is a 1 x 1 tensor.
Now I need to minimize f subject to a constraint:
(A @ v) + alpha <= 0  # constraint involved in the minimization
where A is a 2 x n tensor.
How should I formulate the above objective and the constraint to minimize f in PyTorch? I was successful in doing this with scipy, but I want to do it in PyTorch so that the minimization is faster with tensors.
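One possible workaround (a sketch on my part, not from the post: PyTorch optimizers are unconstrained, so the inequality is folded into the objective as a penalty term; n, A, the learning rate, and the penalty weight are placeholder assumptions):
import torch

n = 5
A = torch.randn(2, n)                       # placeholder constraint matrix

v = torch.randn(n, 1, requires_grad=True)
alpha = torch.randn(1, 1, requires_grad=True)

optimizer = torch.optim.Adam([v, alpha], lr=0.01)
penalty_weight = 10.0                       # how strongly constraint violations are punished

for step in range(2000):
    optimizer.zero_grad()
    f = (alpha + (v * v).sum() / 2).squeeze()        # objective: alpha + ||v||^2 / 2
    violation = torch.clamp(A @ v + alpha, min=0)    # positive wherever A @ v + alpha > 0
    loss = f + penalty_weight * violation.sum()
    loss.backward()
    optimizer.step()
A penalty only approximates the constraint; if strict feasibility matters, projected steps or a dedicated constrained-optimization package are alternatives.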

matrix multiplication for complex numbers in PyTorch

I am trying to multiply two complex matrices in PyTorch, and it seems the torch.matmul function has not yet been added to the PyTorch library for complex numbers.
Do you have any recommendation or is there another method to multiply complex matrices in PyTorch?
Currently torch.matmul is not supported for complex tensors such as ComplexFloatTensor, but you could do something as compact as the following code:
import torch

def matmul_complex(t1, t2):
    # real and imaginary parts of the product, stacked along a new last dim and viewed as complex
    return torch.view_as_complex(torch.stack((t1.real @ t2.real - t1.imag @ t2.imag,
                                              t1.real @ t2.imag + t1.imag @ t2.real), dim=2))
When possible, avoid using for loops, as they result in much slower implementations.
Vectorization is achieved by using built-in methods, as demonstrated in the code above.
For example, your code takes roughly 6.1 s on CPU while the vectorized version takes only 101 ms (~60 times faster) for two random complex matrices with dimensions 1000 x 1000.
Update:
Since PyTorch 1.7.0 (as @EduardoReis mentioned) you can do matrix multiplication between complex matrices similarly to real-valued matrices, as follows:
t1 @ t2
(for t1, t2 complex matrices).
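A minimal usage check (assuming PyTorch >= 1.7.0; the shapes are arbitrary):
import torch

t1 = torch.randn(3, 4, dtype=torch.cfloat)
t2 = torch.randn(4, 5, dtype=torch.cfloat)
out = t1 @ t2    # shape (3, 5), dtype torch.cfloat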
I implemented this function for torch.matmul for complex numbers using torch.mv, and it is working fine for the time being:
def matmul_complex(t1, t2):
    m = t1.size(0)
    n = t2.size(1)
    t_total = torch.empty((m, n), dtype=torch.cfloat)
    for i in range(n):
        # column i of the product is t1 @ t2[:, i]
        if i == 0:
            t_total = torch.mv(t1, t2[:, i])
        else:
            t_total = torch.cat((t_total, torch.mv(t1, t2[:, i])), 0)
    # the columns were concatenated end to end, so reshape to (n, m) and transpose
    t_final = torch.reshape(t_total, (n, m)).t()
    return t_final
I am new to PyTorch, so please correct me if I am wrong.

A vector and matrix rows cosine similarity in pytorch

In PyTorch, I have on the order of a hundred thousand 300-dimensional vectors (which I think I should load into a matrix). I want to sort them by their cosine similarity with another vector and extract the top 1000. I want to avoid a for loop as it is time consuming. I am looking for an efficient solution.
You can use the torch.nn.functional.cosine_similarity function for computing cosine similarity, and torch.argsort to extract the top 1000.
Here is an example:
import torch
import torch.nn.functional as F

x = torch.rand(10000, 300)
y = torch.rand(1, 300)
dist = F.cosine_similarity(x, y)                      # similarity of each row of x to y
index_sorted = torch.argsort(dist, descending=True)   # most similar first
top_1000 = index_sorted[:1000]
Please note the shape of y; don't forget to reshape before calling the similarity function. Also note that argsort only returns the indices of the closest vectors. To access the vectors themselves, just write x[top_1000], which returns a matrix of shape (1000, 300).

Pytorch parameter matrix from loss function of transformation

I have a k x (n+k-1) PyTorch tensor w with requires_grad=True. I want to transform it into a k x n tensor p such that p[i] = w[i][i:i+n]. How do I do this, such that by calling backward() on a loss function of p in the end, I will learn w?
Any sort of indexing operation will do, with the backward function being <CopySlices>.
A naive way of doing this is with simple Python indexing:
w_unrolled = torch.zeros(p.size())
for i in range(w.shape[0]):
    w_unrolled[i] = w[i][i:i+n]
loss = criterion(w_unrolled, p)
You can then reduce your loss via mean/sum on whichever axis. Note that while this will work, it is inefficient; the optimal way would be to use a native indexing function to speed things up.
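A slice-free sketch of that idea (my own assumption, not from the answer): torch.gather with a shifted index grid builds p in one vectorized, differentiable call.
import torch

k, n = 4, 6
w = torch.randn(k, n + k - 1, requires_grad=True)

# index[i, j] = i + j, so gathering along dim 1 yields p[i] = w[i][i:i+n]
index = torch.arange(n).unsqueeze(0) + torch.arange(k).unsqueeze(1)
p = w.gather(1, index)           # shape (k, n), gradients flow back to w

loss = p.sum()
loss.backward()                  # w.grad is populated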
