I'm quite new to Theano. I'm trying to create a tensor of int32 using itensor3, but for some reason I get int64 instead of int32.
Do I need to specify anything in the config file?
import numpy
import theano
from theano import tensor as T

l = T.itensor3()
k = l.shape[0]
f = theano.function([l], k)
inp = numpy.zeros((2, 3, 4), dtype=numpy.int32)
f(inp)
>>>array(2L, dtype=int64)
In Theano, I believe shapes are always given as int64 values.
The result of your Theano function f is a shape component, i.e. l.shape[0], so the value returned by f has type int64. This does not change the fact that the input is of type int32.
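If you actually need an int32 result from the function, one option is to cast the shape component inside the graph. A minimal sketch using T.cast (not part of the original question):

import numpy
import theano
from theano import tensor as T

l = T.itensor3()
# Shape components are int64 by default; cast if int32 is required.
k = T.cast(l.shape[0], 'int32')
f = theano.function([l], k)

inp = numpy.zeros((2, 3, 4), dtype=numpy.int32)
print(f(inp))        # 2
print(f(inp).dtype)  # int32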
I get the following error:
RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #3 'mat2' in call to _th_addmm_out
I use torch.einsum as follows:
mu = torch.einsum('ijl, akij -> akl', idxs, activation_map)
I don't understand this, as the documentation uses float tensors too (https://pytorch.org/docs/stable/generated/torch.einsum.html). Also, converting to a long tensor is not an option, as all values in activation_map are between 0 and 1.
It seems like your first argument, idxs, is of type Long.
All input tensors to torch.einsum here should be Float.
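If that is the case, casting idxs to a floating-point tensor before the call should resolve the error. A small sketch with made-up shapes (idxs and activation_map below are stand-ins for your tensors):

import torch

idxs = torch.randint(0, 2, (3, 4, 5))    # integer (Long) indicator tensor
activation_map = torch.rand(2, 6, 3, 4)  # float tensor

# Cast the Long tensor to float so both einsum operands share a dtype.
mu = torch.einsum('ijl, akij -> akl', idxs.float(), activation_map)
print(mu.shape)  # torch.Size([2, 6, 5])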
I am trying to generate a number from the normal distribution using:
import torch as th
from torch.distributions import Normal

noise = Normal(th.tensor([0]), th.tensor(3.20))
noise = noise.sample()
But I am getting this error: RuntimeError: _th_normal not supported on CPUType for Long
Your first tensor, th.tensor([0]), is of torch.long type due to automatic type inference from the value you passed, while a float or FloatTensor is required by the function.
You can solve it by passing 0.0 explicitly like this:
import torch
noise = torch.distributions.Normal(torch.tensor([0.0]), torch.tensor(3.20))
noise = noise.sample()
Better yet, drop torch.tensor altogether; in this case Python numbers will automatically be cast to float where possible, so this is valid as well:
import torch
noise = torch.distributions.Normal(0, 3.20)
noise = noise.sample()
And please do not alias torch as th; it is not official. Use the fully qualified name, as the alias only confuses everyone.
I am using numpy and numpy.array2string to probe effects of small changes and numerical stability. I am baffled by the following exercise:
<Get vector data through prior code>
print(data.shape)
print(data.dtype)
variance = np.var(data)
print(variance.dtype)
print(variance)
print(np.array2string(variance, suppress_small=True, formatter={'float': '{: 8.55f}'.format}))
Which gives as output:
torch.Size([1, 64]) # shape of the data
torch.float32 # dtype of the data
float32 # dtype of the variance result
37166410.0
37166408.0000000000000000000000000000000000000000000000000000000
This makes me cross-eyed: array2string isn't just reformatting the value with a (patently absurd) number of digits, it seems to be changing the value itself, well above the point where round-off errors should matter. (It doesn't seem to be the formatter, as I can remove that and still see changed values.)
A float32 can easily represent either of those integer values directly.
What am I not understanding here? Please reaffirm my faith in objective reality.
UPDATE: No, a float32 cannot exactly represent 37166410.0, and I need reading glasses.
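A quick way to confirm this (assuming standard IEEE-754 float32 rounding):

import numpy as np

# In this range consecutive float32 values are 4 apart, so 37166410
# rounds to the nearest representable neighbour, 37166408.0.
x = np.float32(37166410.0)
print(x)                            # 37166408.0
print(x == np.float32(37166408.0))  # True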
import numpy as np
import torch
a = torch.zeros(5)
b = torch.tensor((0, 1, 0, 1, 0), dtype=torch.uint8)
c = torch.tensor([7., 9.])
print(a[b].size())
a[b]=c
print(a)
torch.Size([2])
tensor([0., 7., 0., 9., 0.])
I am struggling to understand how this works. I initially thought the above code was using fancy indexing, but I realised that the values from the c tensor get copied into the positions where b is 1. Also, if I don't specify the dtype of b as uint8, the above code does not work. Can someone please explain the mechanism of the above code?
Indexing with arrays works the same as in numpy and most other vectorized math packages I am aware of. There are two cases:
When b is of type uint8 (think boolean; pytorch does not distinguish bool from uint8 here), a[b] is a 1-d array containing the subset of values of a (a[i]) for which the corresponding entry in b (b[i]) is nonzero. Reading a[b] gives you a copy of those values, but indexed assignment such as a[b] = c writes directly into the corresponding locations of a, which is what happens in your example.
The alternative type you can use for indexing is an array of int64, in which case a[b] creates an array of shape (*b.shape, *a.shape[1:]). Its structure is as if each element of b (b[i]) were replaced by a[b[i]]. In other words, you create a new array by specifying from which indexes of a the data should be fetched. As above, the indexing expression returns a copy, but assigning through it, e.g. a[b] = c, writes back into a, changing a[b[i]] for each i. An example use case is shown in this question.
These two modes are explained for numpy in integer array indexing and boolean array indexing, where for the latter you have to keep in mind that pytorch uses uint8 in place of bool.
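To make the two modes concrete, here is a small sketch (it uses torch.bool for the mask, which is what newer pytorch versions expect in place of uint8):

import torch

a = torch.tensor([10., 20., 30., 40., 50.])

# Boolean mask indexing: picks elements where the mask is True.
mask = torch.tensor([False, True, False, True, False])
print(a[mask])                    # tensor([20., 40.])
a[mask] = torch.tensor([7., 9.])  # indexed assignment writes into a
print(a)                          # tensor([10.,  7., 30.,  9., 50.])

# Integer (int64) indexing: gathers values at the given positions.
idx = torch.tensor([0, 1, 0, 1, 0])
print(a[idx])                     # tensor([10.,  7., 10.,  7., 10.])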
Also, if your goal is to copy data from one tensor to another, you have to keep in mind that an operation like a[ixs] = b[ixs] is an in-place operation (a is modified in place), which may not play well with autograd. If you want to do out-of-place masking, use torch.where. An example use case is shown in this answer.
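For example, an out-of-place version of the snippet from the question could look roughly like this (c is padded to full length here purely for illustration):

import torch

a = torch.zeros(5)
c = torch.tensor([7., 9., 7., 9., 7.])
mask = torch.tensor([False, True, False, True, False])

# torch.where builds a new tensor and leaves a untouched.
out = torch.where(mask, c, a)
print(out)  # tensor([0., 7., 0., 9., 0.])
print(a)    # tensor([0., 0., 0., 0., 0.])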
If I have a cost that is the imaginary part of a complex number, trying to obtain the gradient with theano I get the following error:
TypeError: Elemwise{imag,no_inplace}.grad illegally returned an integer-valued variable. (Input index 0, dtype complex128)
Is it not possible to use the imaginary part as cost despite it being a real-valued cost?
Edit: minimal working example:
import theano.tensor as T
from theano import function
a = T.zscalar('a')
f = function([a], T.grad(T.imag(a), a))
I would expect this to work, as T.imag(a) is a real scalar cost.