RuntimeError: "exp" not implemented for 'torch.LongTensor' - pytorch

I am following this tutorial: http://nlp.seas.harvard.edu/2018/04/03/attention.html
to implement the Transformer model from the "Attention Is All You Need" paper.
However, I am getting the following error:
RuntimeError: "exp" not implemented for 'torch.LongTensor'
This is the line, in the PositionalEncoding class, that is causing the error:
div_term = torch.exp(torch.arange(0, d_model, 2) * -(math.log(10000.0) / d_model))
When it is being constructed here:
pe = PositionalEncoding(20, 0)
Any ideas? I've already tried converting this to a float tensor, but that did not work.
I've even downloaded the whole notebook with accompanying files and the error seems to persist in the original tutorial.
Any ideas what may be causing this error?
Thanks!

I happened to follow this tutorial too.
For me, the fix was simply to make torch.arange generate a float-type tensor,
from
position = torch.arange(0, max_len).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2) * -(math.log(10000.0) / d_model))
to
position = torch.arange(0., max_len).unsqueeze(1)
div_term = torch.exp(torch.arange(0., d_model, 2) * -(math.log(10000.0) / d_model))
Just a simple fix, but now it works for me. It is possible that torch.exp and torch.sin previously supported LongTensor but no longer do (I'm not sure about that).

It seems that torch.arange returns a LongTensor when given integer arguments; try torch.arange(0.0, d_model, 2) to force torch to return a FloatTensor instead.
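A quick way to see this for yourself (a small check, assuming PyTorch 1.x dtype rules: integer arguments give int64, float arguments give the default float dtype):

import torch

print(torch.arange(0, 10, 2).dtype)   # torch.int64 (LongTensor): torch.exp() raises on this
print(torch.arange(0., 10, 2).dtype)  # torch.float32 (FloatTensor): torch.exp() works
print(torch.exp(torch.arange(0, 10, 2).float()))  # casting with .float() is an equivalent fix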

The suggestion given by @shai worked for me. I modified the __init__ method of PositionalEncoding by using 0.0 in two spots:
position = torch.arange(0.0, max_len).unsqueeze(1)
div_term = torch.exp(torch.arange(0.0, d_model, 2) * -(math.log(10000.0) / d_model))
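For reference, here is a minimal sketch of the full module with the float fix applied. The structure follows the tutorial's PositionalEncoding, but this is an illustrative reconstruction, not the tutorial's exact code:

import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, dropout, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)
        pe = torch.zeros(max_len, d_model)
        # float arguments, so arange yields FloatTensors rather than LongTensors
        position = torch.arange(0.0, max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0.0, d_model, 2) * -(math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer('pe', pe.unsqueeze(0))

    def forward(self, x):
        # add the fixed (non-trainable) positional encodings to the embeddings
        x = x + self.pe[:, :x.size(1)]
        return self.dropout(x)

pe = PositionalEncoding(20, 0)  # now constructs without the RuntimeError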

For me, installing pytorch==1.7.1 solved the problem.

As Rubens said, in higher versions of PyTorch you don't need to worry about this. I can easily run it with PyTorch 1.8.0 on my desktop, but it failed with PyTorch 1.2.0 on my server. There is an incompatibility between versions.

Related

CUDA Illegal Memory Access on PyTorch 1.3

@staticmethod
def backward(ctx, grad_output):
    grad_label = grad_output.clone()
    num_ft = grad_output.shape[0]
    # grad_label.data.resize_(num_ft, 32, 41)
    lin_indices_3d, lin_indices_2d = ctx.saved_variables
    num_ind = lin_indices_3d.data[0]
    # copy the gradient entries selected by lin_indices_3d into the
    # positions given by lin_indices_2d
    grad_label.data.view(num_ft, -1).index_copy_(
        1, lin_indices_2d.data[1:1 + num_ind],
        torch.index_select(grad_output.data.contiguous().view(num_ft, -1),
                           1, lin_indices_3d.data[1:1 + num_ind]))
    # raw_input('sdflkj')
    return grad_label, None, None, None
This is the code snippet I am trying to run on PyTorch. However, I strangely keep getting an Illegal Memory Access error. When I tried to use a debugger to find the culprit, I would see
As such, I am not certain what is wrong here. The same code ran on PyTorch 0.4; now I am trying to run it on PyTorch 1.3 and it does not work. The same error remains on versions 1.4 and 1.5, the latest versions of the framework. Any help would be highly appreciated.
It turns out this was an error in the PyTorch framework itself, to be corrected in version 1.6.
Here is the GitHub issue:
https://github.com/pytorch/pytorch/issues/34450

AttributeError: module 'tensorflow' has no attribute 'streaming_accuracy'

accuracy = tf.streaming_accuracy(y_pred, y_true, name='acc')
recall = tf.streaming_recall(y_pred, y_true, name='acc')
precision = tf.streaming_precision(y_pred, y_true, name='acc')
confusion = tf.confusion_matrix(Labels, y_pred, num_classes=10, dtype=tf.float32, name='conf')
For the above code, I have been receiving this error for the past few days.
Isn't the syntax the same as in the TensorFlow API documentation?
Try this instead (in a fresh Python file; I would suggest creating a /tmp/temp.py and running that):
from tensorflow.contrib.metrics import streaming_accuracy
If this doesn't work, then either there is an installation problem (in which case reinstall),
or you are importing the wrong tensorflow module.
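If the import works, here is a minimal sketch of the metrics themselves, assuming TensorFlow 1.x with the contrib module present; the y_true/y_pred placeholders are illustrative. Note that each streaming metric returns a (value, update_op) pair:

import tensorflow as tf

y_true = tf.placeholder(tf.int64, shape=[None])
y_pred = tf.placeholder(tf.int64, shape=[None])

accuracy, accuracy_update = tf.contrib.metrics.streaming_accuracy(y_pred, y_true, name='acc')
recall, recall_update = tf.contrib.metrics.streaming_recall(y_pred, y_true, name='rec')
# confusion_matrix lives in the top-level namespace in TF 1.x
confusion = tf.confusion_matrix(y_true, y_pred, num_classes=10, dtype=tf.float32, name='conf')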

TensorFlow 0.12 tutorials produce warning: "Rank of input Tensor should be the same as output_rank for column"

I have some experience with writing machine learning programs in Python, but I'm new to TensorFlow and am checking it out. My dev environment is a Lubuntu 14.04 64-bit virtual machine. I've created a Python 3.5 conda environment from Miniconda and installed TensorFlow 0.12 and its dependencies. I began trying to run some example code from TensorFlow's tutorials and encountered this warning when calling fit() in the boston.py example for input functions: source.
WARNING:tensorflow:Rank of input Tensor (1) should be the same as output_rank (2) for column. Will attempt to expand dims. It is highly recommended that you resize your input, as this behavior may change.
After some searching in Google, I found other people encountered this same warning:
https://github.com/tensorflow/tensorflow/issues/6184
https://github.com/tensorflow/tensorflow/issues/5098
Tensorflow - Boston Housing Data Tutorial Errors
However, they also experienced errors which prevent code execution from completing. In my case, the code executes with the above warning. Unfortunately, I couldn't find a single answer in those links regarding what caused the warning and how to fix the warning. They all focused on the error. How does one remove the warning? Or is the warning safe to ignore?
Cheers!
Extra info, I also see the following warnings when running the aforementioned boston.py example.
WARNING:tensorflow:*******************************************************
WARNING:tensorflow:TensorFlow's V1 checkpoint format has been deprecated.
WARNING:tensorflow:Consider switching to the more efficient V2 format:
WARNING:tensorflow:   'tf.train.Saver(write_version=tf.train.SaverDef.V2)'
WARNING:tensorflow:now on by default.
WARNING:tensorflow:*******************************************************
and
WARNING:tensorflow:From /home/kade/miniconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py:1053 in predict.: calling BaseEstimator.predict (from tensorflow.contrib.learn.python.learn.estimators.estimator) with x is deprecated and will be removed after 2016-12-01. Instructions for updating: Estimator is decoupled from Scikit Learn interface by moving into separate class SKCompat. Arguments x, y and batch_size are only available in the SKCompat class, Estimator will only accept input_fn. Example conversion: est = Estimator(...) -> est = SKCompat(Estimator(...))
UPDATE (2016-12-22):
I've tracked the warning to this file:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/feature_column_ops.py
and this code block:
except NotImplementedError:
  with variable_scope.variable_scope(
      None,
      default_name=column.name,
      values=columns_to_tensors.values()):
    tensor = column._to_dense_tensor(transformed_tensor)
    tensor = fc._reshape_real_valued_tensor(tensor, 2, column.name)
    variable = [
        contrib_variables.model_variable(
            name='weight',
            shape=[tensor.get_shape()[1], num_outputs],
            initializer=init_ops.zeros_initializer(),
            trainable=trainable,
            collections=weight_collections)
    ]
    predictions = math_ops.matmul(tensor, variable[0], name='matmul')
Note the line: tensor = fc._reshape_real_valued_tensor(tensor, 2, column.name)
The method signature is: _reshape_real_valued_tensor(input_tensor, output_rank, column_name=None)
The value 2 is hardcoded as the value of output_rank, but the boston.py example is passing in an input_tensor of rank 1. I will continue to investigate.
If you specify the shape of your tensor explicitly:
tf.constant(df[k].values, shape=[df[k].size, 1])
the warning should go away.
After specifying the shape of the tensor explicitly:
continuous_cols = {k: tf.constant(df[k].values, shape=[df[k].size, 1]) for k in CONTINUOUS_COLUMNS}
it works!
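For reference, a sketch of a complete input_fn with the fix applied. This is an illustrative reconstruction: the column names below are stand-ins, not the tutorial's actual columns, and df is assumed to be a pandas DataFrame as in the tutorial:

import tensorflow as tf

CONTINUOUS_COLUMNS = ['crim', 'zn', 'indus']  # example feature names
LABEL_COLUMN = 'medv'                         # example label name

def input_fn(df):
    # shape=[n, 1] makes each feature rank 2, matching the expected output_rank
    feature_cols = {k: tf.constant(df[k].values, shape=[df[k].size, 1])
                    for k in CONTINUOUS_COLUMNS}
    labels = tf.constant(df[LABEL_COLUMN].values)
    return feature_cols, labels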

ImportError: No module named 'theano.floatX'

I am following a tutorial to create a convolutional neural network with Theano. However, I ran into a problem in a piece of code:
>> x = theano.floatX.xmatrix(theano.config.floatX) # rasterized images
AttributeError: 'module' object has no attribute 'floatX'
I loaded floatX with:
>> from theano import config
and checked with:
>> print(theano.config.floatX)
float32
But I still cannot load the module xmatrix, which should be in theano.config.floatX judging from the documentation. Does somebody know where I can find it?
Thank you in advance!
This section of the convnet tutorial has a bug or is very outdated. Symbolic variables in Theano live in the theano.tensor package; the package theano.floatX doesn't even exist!
The current version in the tutorial's GitHub repository works fine. They allocate the symbolic variables the right way:
import theano.tensor as T

# allocate symbolic variables for the data
index = T.lscalar()  # index to a [mini]batch
x = T.matrix('x')    # the data is presented as rasterized images
y = T.ivector('y')   # the labels are presented as a 1D vector of [int] labels
Browsing the tutorial repository, I found the revision where this bug was corrected.
They seem to have forgotten to update the tutorial text with this fix.
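If you want a symbol whose dtype explicitly follows theano.config.floatX (presumably what the broken theano.floatX.xmatrix call was reaching for), here is a small sketch:

import theano
import theano.tensor as T

x = T.matrix('x', dtype=theano.config.floatX)  # rasterized images
print(x.dtype)  # 'float32' when floatX is float32 (also the default dtype for T.matrix)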

rpy2: compatibility issue, 2.0.6 vs 2.3.8

The following works in rpy2 2.0.6:
robjects.r('M = lm(...)')
M = robjects.r('M')
coefficients = M.r['coefficients'][0]
But after I upgraded to rpy2 2.3.8, the above fails with the message
AttributeError: 'ListVector' object has no attribute 'r'
What do I need to change to make this work in 2.3.8?
I am not certain that the code snippet you provided worked with rpy2-2.0.x.
The documentation's Introduction section shows how to extract coefficients from linear models:
http://rpy.sourceforge.net/rpy2/doc-2.1/html/introduction.html#linear-models
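That said, in 2.3.x the object returned by robjects.r['M'] is the fitted model itself (a ListVector), and named components are extracted with .rx2() rather than attribute-style access. A hedged sketch of the equivalent extraction (the lm call is just an example model):

import rpy2.robjects as robjects

robjects.r('M <- lm(mpg ~ wt, data = mtcars)')  # mtcars ships with R
M = robjects.r['M']
coefficients = M.rx2('coefficients')
print(list(coefficients))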
