torch.einsum doesn't accept float tensors? - pytorch

I get the following error:
RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #3 'mat2' in call to _th_addmm_out
I use torch.einsum as follows:
mu = torch.einsum('ijl, akij -> akl', idxs, activation_map)
I don't understand this, since the documentation uses float tensors too (https://pytorch.org/docs/stable/generated/torch.einsum.html). Also, switching to a long tensor is not an option, as all values in activation_map are between 0 and 1.

It seems like your first argument, idxs, is of type Long.
All input tensors to torch.einsum should be Float.
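A minimal sketch of the mismatch and the fix (the shapes below are invented to match the subscripts; casting idxs assumes its values can meaningfully be treated as floats):
import torch

idxs = torch.randint(0, 4, (3, 5, 2))    # Long tensor: i=3, j=5, l=2
activation_map = torch.rand(2, 4, 3, 5)  # Float tensor: a=2, k=4, i=3, j=5

# Fails with the dtype error above, since Long and Float operands are mixed:
# mu = torch.einsum('ijl, akij -> akl', idxs, activation_map)

# Works once both operands are Float:
mu = torch.einsum('ijl, akij -> akl', idxs.float(), activation_map)  # shape (2, 4, 2)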

Related

How to make torch.distributions.MultivariateNormal accept zero values on the diagonal of covariance matrix?

I use torch.normal to generate (1,n)-dimensional samples with an std vector that includes zero values. This doesn't generate any errors (when std[k] is zero, the corresponding sample is simply mean[k]).
However, torch.distributions.MultivariateNormal doesn't accept a diagonal covariance matrix whose diagonal values are std[k]**2 when some of them are zero.
I need the MultivariateNormal object to compute the log_prob of the sample generated with torch.normal.
Code:
import numpy as np
import torch
mean = torch.Tensor([1.00000,1.00000,1.00000,1.00000,1.00000,-1.00000,-1.00000,-1.00000,-1.00000])
std = torch.Tensor([17708.14062,68.16734,0.00000,0.00000,5917.79932,15390.00488, 0.00000,10070.79395,4994.94434])
x = torch.normal(mean, std)
torch.distributions.MultivariateNormal(loc=mean, covariance_matrix=torch.Tensor(np.diag(np.array(std)**2))).log_prob(x)
Error:
ValueError: Expected parameter covariance_matrix (Tensor of shape (9, 9)) of distribution MultivariateNormal(loc: torch.Size([9]), covariance_matrix: torch.Size([9, 9])) to satisfy the constraint PositiveDefinite(), but found invalid values
I don't get why torch.normal doesn't mind zero values in the std but torch.distributions.MultivariateNormal does.
I tried to look up a constructor parameter that ignores this, like the allow_singular parameter in SciPy, but there seems to be none:
from scipy.stats import multivariate_normal
pi = multivariate_normal(mean=mean, cov=np.diag(np.array(std)**2),
                         allow_singular=True).pdf(np.array(action))
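One possible workaround (my assumption, not from the original post) is to clamp the zero variances to a tiny epsilon, so the covariance matrix becomes strictly positive definite:
import torch

eps = 1e-6  # hypothetical jitter value; any small positive number works
var = torch.clamp(std ** 2, min=eps)  # replace zero variances with eps
mvn = torch.distributions.MultivariateNormal(loc=mean, covariance_matrix=torch.diag(var))
log_p = mvn.log_prob(x)
Note that log_prob then becomes extremely sensitive in the clamped dimensions, since their true variance is zero.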

determining complex torch.dtype with precision/range equivalent to a given type

I want to write functions that support different precisions, where the floating-point precision is the data type of the real and imaginary parts.
Basically, I would like a native method to translate between the real and complex types, like
float16 <-> complex32
float32 <-> complex64
float64 <-> complex128
If I want a hardcoded type, I can write, for instance
torch.float32 # ~45ns
I know that if I have a complex type, torch.finfo gives me at least the string form of the real counterpart, from which I can get the real dtype itself:
getattr(torch, torch.finfo(torch.complex64).dtype) # ~400ns
I know I can get the complex counterpart by constructing a minimal tensor and viewing it as complex
torch.view_as_complex(torch.zeros(2, dtype=torch.float16)).dtype # ~3200ns
Or even a zero-element tensor
torch.view_as_complex(torch.zeros(0, 2, dtype=torch.float16)).dtype # ~3900ns
But this code is too ugly to retype everywhere, so I wrapped it in a helper:
def complex_type(dtype):
    '''
    Returns the dtype for a complex number with the same
    precision as the given dtype.
    '''
    if dtype.is_complex:
        return dtype
    # view_as_complex requires a last dimension of size 2
    return torch.view_as_complex(torch.zeros(2, dtype=dtype)).dtype
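A simpler alternative sketch (my own helper, not a PyTorch API): since only three real/complex pairs exist, a hardcoded lookup table avoids the tensor round-trip entirely:
import torch

_REAL_TO_COMPLEX = {
    torch.float16: torch.complex32,
    torch.float32: torch.complex64,
    torch.float64: torch.complex128,
}

def complex_type_fast(dtype):
    '''Return the complex dtype with the same precision as dtype.'''
    if dtype.is_complex:
        return dtype
    return _REAL_TO_COMPLEX[dtype]  # raises KeyError for non-float dtypes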

Efficient interpolation implementation

I'm writing an interpolation method without using library functions that do it directly. The signature of the function is:
def interpolate(self, f: callable, a: float, b: float, n: int) -> callable:
"""
Parameters
----------
f : callable
the given function to interpolate.
a : float
beginning of the interpolation range.
b : float
end of the interpolation range.
n : int
maximal number of points to use.
Returns
-------
The interpolating function.
"""
Right now my implementation is a straightforward Lagrange interpolation, as explained here: https://www.codesansar.com/numerical-methods/python-program-lagrange-interpolation-method.htm
However, this kind of implementation is O(n^2) and I'm looking for a more efficient solution that runs in O(n).
(Maybe Bézier Curves can help here somehow?)
I am doing the same thing and thought of a NumPy elementwise calculation; here are the lines that solve your question (x and y are the sample arrays, xp is the query point, and yp accumulates the interpolated value):
import numpy as np
yp = 0.0
for xi, yi in zip(x, y):
    yp += yi * np.prod((xp - x[x != xi]) / (xi - x[x != xi]))
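For reference (my addition, not part of the original answer): the barycentric form of Lagrange interpolation pays an O(n^2) weight precomputation once and then evaluates in O(n) per query point, which may be the speedup the question is after:
import numpy as np

def barycentric_weights(x):
    # w_j = 1 / prod_{k != j} (x_j - x_k); O(n^2), but computed only once
    return np.array([1.0 / np.prod(x[j] - np.delete(x, j)) for j in range(len(x))])

def barycentric_eval(xp, x, y, w):
    # O(n) evaluation of the interpolant at query point xp
    d = xp - x
    if np.any(d == 0):                 # xp coincides with a sample node
        return y[np.argmax(d == 0)]
    t = w / d
    return np.sum(t * y) / np.sum(t)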

Reshaping a particular dimension of a matrix

I am trying to reshape one dimension of a matrix into 2D in Fortran.
I have a matrix vfm[4208,5515]. Out of this, I want to convert vfm[4208,1166:5515] into vfm[4208,290].
Extracting vfm[4208,1166:5515] gives vfm[4208,4350]. Further, I want to reshape the second dimension (4350) into 2D (290x15) and then average the reshaped matrix over the dimension of size 15. The final matrix should be vfm[4208,290].
Here is a flow of code:
vfm[4208,4350] --> vfm[4208,290,15] --> vfm[4208,290]
My actual code is too long, therefore I am writing the associated part of the code here.
character*80 :: vfmf
integer, dimension(:,:), allocatable, target :: vfm
integer, dimension(:,:), pointer :: vfm1
xdim=4208
ydim=5515
open(16,file=vfmf,status='old')
allocate(vfm(xdim,ydim))
read(16,*)((vfm(ii,jj),jj=1,ydim),ii=1,xdim)
vfm1 => vfm(:,1166:5515)
vfm2=reshape((vfm1),(/xdim,4350/),(/xdim,290,15))
Stop
End
Following this, I am facing the following error:
Error: Syntax error in array constructor at (1)
I am unable to reshape vfm[4208,4350] to vfm[4208,290,15] using Fortran on Ubuntu.
Kindly help me to resolve this.
Thank you in advance.
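A minimal sketch of a possible fix (my assumption, not from the thread): RESHAPE takes the source array and a single shape array, so the slice can be reshaped to 3D in one call and then averaged with SUM over the grouping dimension:
integer, dimension(:,:,:), allocatable :: vfm3
real, dimension(:,:), allocatable :: vfmavg
! element counts must match: 4350 = 290*15
vfm3 = reshape(vfm1, (/xdim, 290, 15/))
vfmavg = real(sum(vfm3, dim=3)) / 15.0
! caveat: Fortran is column-major, so this groups columns with stride 290;
! to average consecutive blocks of 15 columns instead, use
! vfm3 = reshape(vfm1, (/xdim, 15, 290/)) and sum(vfm3, dim=2)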

Why do I get int64 when creating itensor3 in Theano?

I'm quite new to Theano.
I'm trying to create a tensor of int32 using itensor3, but for some reason I get int64 instead of int32.
Do I need to specify anything in the config file?
import numpy
import theano
from theano import tensor as T
l=T.itensor3()
k=l.shape[0]
f=theano.function([l],k)
inp=numpy.zeros((2,3,4), dtype=numpy.int32)
f(inp)
>>>array(2L, dtype=int64)
In Theano, I believe shapes are always specified as int64 values.
The result of your Theano function f is a shape element, i.e. l.shape[0], so the type of the value returned by f is going to be int64. This does not change the fact that the input is of type int32.
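A quick check (my addition, not from the original answer) confirms that the symbolic input really is int32 and only the shape scalar is int64:
print(l.dtype)           # 'int32'
print(l.shape[0].dtype)  # 'int64'
print(f(inp).dtype)      # int64, the dtype of the returned shape value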
