ImportError: No module named 'theano.floatX' - theano

I am following a tutorial to create a convolutional neural network with Theano. However, I ran into a problem in a piece of code:
>> x = theano.floatX.xmatrix(theano.config.floatX) # rasterized images
AttributeError: 'module' object has no attribute 'floatX'
I loaded floatX with:
>> from theano import config
and checked with:
>> print(theano.config.floatX)
float32
But I still cannot load xmatrix, which should be in theano.config.floatX, judging from the documentation. Does somebody know where I can find it?
Thank you in advance!

This section of the convnet tutorial has a bug or is very outdated. Symbolic variables in Theano live in the theano.tensor package; the package theano.floatX doesn't even exist!
The current version in the tutorial's GitHub repository works fine. It allocates the symbolic variables the right way:
import theano.tensor as T

# allocate symbolic variables for the data
index = T.lscalar()  # index to a [mini]batch
x = T.matrix('x')    # the data is presented as rasterized images
y = T.ivector('y')   # the labels are presented as a 1D vector of [int] labels
Browsing the tutorial repository I found the revision where this bug was corrected.
They seem to have forgotten to update the tutorial text with this fix.
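If you specifically want the dtype controlled by your floatX setting, the theano.tensor constructors accept a dtype argument. A minimal sketch, assuming a standard Theano install:
import theano
import theano.tensor as T

# T.matrix already defaults to theano.config.floatX, but the dtype
# can also be passed explicitly:
x = T.matrix('x', dtype=theano.config.floatX)
print(x.type)  # e.g. TensorType(float32, matrix)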

Related

TensorFlow 2: detecting to which graph a certain node belongs

I just started (self-)learning TensorFlow and decided to follow the book "Learning TensorFlow", which I found in my local library.
Unfortunately, the book uses TensorFlow 1.x, while I want to use version 2.4.
I am having some trouble replicating an example from Chapter 3. The point of the code is to create a new empty computation graph, create a node (in this case a constant), and then figure out whether the node belongs to the default graph or to the newly created one.
Here is the code from the book, which should work fine with TensorFlow 1:
import tensorflow as tf
print(tf.get_default_graph())
g = tf.Graph() # This creates a new empty graph
a = tf.constant(5) # This creates a node
print(a.graph is g)
print(a.graph is tf.get_default_graph())
I realized that get_default_graph() is no longer available in TensorFlow 2 and substituted it with tf.compat.v1.get_default_graph(), but I still get the following error:
AttributeError: Tensor.graph is meaningless when eager execution is enabled.
Any help will be very much appreciated! Thanks in advance!
After importing TensorFlow you need to disable eager execution, like below:
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
print(tf.compat.v1.get_default_graph())
g = tf.Graph() # This creates a new empty graph
a = tf.constant(5) # This creates a node
print(a.graph is g)
print(a.graph is tf.compat.v1.get_default_graph())
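Note that in the snippet above, a is still created in the default graph, so the first print outputs False. To attach a node to the new graph g, wrap its creation in g.as_default(); a minimal sketch under the same compat setup:
import tensorflow as tf

tf.compat.v1.disable_eager_execution()
g = tf.Graph()
with g.as_default():  # nodes created inside this block belong to g
    b = tf.constant(5)
print(b.graph is g)                                  # True
print(b.graph is tf.compat.v1.get_default_graph())   # False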

Load trained model on another machine - fastai, torch, huggingface

I am using fastai with PyTorch to fine-tune XLMRoberta from huggingface.
I've trained the model and everything is fine on the machine where I trained it.
But when I try to load the model on another machine, I get OSError - Not Found - No such file or directory, pointing to .cache/torch/transformers/. The issue is the path of a vocab_file.
I've used fastai's Learner.export to export the model to a .pkl file, but I don't believe the issue is related to fastai, since I found the same issue in flairNLP.
It appears that the path to the cache folder, where the vocab_file is stored during the training, is embedded in the .pkl file:
The error comes from transformer's XLMRobertaTokenizer __setstate__:
def __setstate__(self, d):
    self.__dict__ = d
    self.sp_model = spm.SentencePieceProcessor()
    self.sp_model.Load(self.vocab_file)
which tries to load the vocab_file using the path from the file.
I've tried patching this method using:
from types import MethodType

import sentencepiece as spm
from transformers import XLMRobertaTokenizer

pretrained_model_name = "xlm-roberta-base"
vocab_file = XLMRobertaTokenizer.from_pretrained(pretrained_model_name).vocab_file

def _setstate(self, d):
    self.__dict__ = d
    self.sp_model = spm.SentencePieceProcessor()
    self.sp_model.Load(vocab_file)

XLMRobertaTokenizer.__setstate__ = MethodType(_setstate, XLMRobertaTokenizer(vocab_file))
And that successfully loaded the model but caused other problems like missing model attributes and other unwanted issues.
Can someone please explain why the path is embedded inside the file? Is there a way to configure it without re-exporting the model, or, if it has to be re-exported, how can it be configured dynamically using fastai, torch, and huggingface?
I faced the same error. I had fine-tuned XLMRoberta on a downstream classification task with fastai version 1.0.61. I'm loading the model inside Docker.
I'm not sure why the path is embedded, but I found a workaround. Posting for future readers who might be looking for one, as retraining is usually not possible.
I created /home/<username>/.cache/torch/transformers/ inside the Docker image:
RUN mkdir -p /home/<username>/.cache/torch/transformers
Then I copied the files (which were not found in Docker) from my local ~/.cache/torch/transformers/ into the image at /home/<username>/.cache/torch/transformers/:
COPY filename /home/<username>/.cache/torch/transformers/filename
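An alternative sketch I have not verified (an assumption, not the poster's method): pre-populate the cache inside the container by downloading the tokenizer once at build time, so the embedded path resolves at load time. This only helps if the hashed cache filenames produced by from_pretrained match the ones baked into the .pkl, which may not hold across transformers versions:
# Hypothetical alternative to COPYing files by hand: re-create the cache
# entries inside the image. Assumes the hashed filenames match those
# embedded in the exported .pkl.
from transformers import XLMRobertaTokenizer

# Downloads sentencepiece.bpe.model into the default cache
# (~/.cache/torch/transformers/ for transformers 2.x/3.x).
XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")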

Problems Converting Numpy/OpenCV Array Image into a Wand Image

I'm currently trying to perform a Polar to Cartesian coordinate image transformation, to display a raw sonar image as a 'fan display'.
Initially I have a NumPy array image of type np.float64, which can be seen below:
After doing some searching, I came across this StackOverflow post Inverse transform an image from Polar to Cartesian in OpenCV with a very similar problem, in which the poster seemed to have solved their issue by using the Python Wand library (http://docs.wand-py.org/en/0.5.9/index.html), specifically its set of Distortion functions.
However, when I tried to use Wand and read the image in, I instead ended up with Wand producing the image below, which seems smaller than the original. The weird thing, though, is that img.size still reports the same dimensions as the original image's shape.
The code for this transformation can be seen below:
print(raw_img.shape)  #=> (369, 256)
wand_img = Image.from_array(raw_img.astype(np.uint8), channel_map="I")
display(wand_img)
print("Current image size", wand_img.size)  #=> "Current image size (369, 256)"
This is quite problematic, as Wand will then automatically produce the wrong 'fan image'. Is anybody familiar with this kind of problem with the Wand library, and if so, what is the recommended solution to fix this issue?
If this issue isn't resolved soon, I have a backup alternative of using OpenCV's cv::remap function (https://docs.opencv.org/4.1.2/da/d54/group__imgproc__transform.html#ga5bb5a1fea74ea38e1a5445ca803ff121). However, the problem with this is that I'm not sure which mapping arrays (i.e. map_x and map_y) to use for the Polar->Cartesian transformation, as a mapping that implements the transformation equations below:
r = polar_distances(raw_img)
x = r * cos(theta)
y = r * sin(theta)
didn't seem to work and instead threw out errors from OpenCV as well.
Any kind of help and insight into this issue is greatly appreciated. Thank you!
- NickS
EDIT: I've tried another example image as well, and it still shows a similar problem. First, I imported the image into Python using OpenCV, with these lines of code:
import matplotlib.pyplot as plt
from wand.image import Image
from wand.display import display
import cv2
img = cv2.imread("Test_Img.jpg")
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.figure()
plt.imshow(img_rgb)
plt.show()
which showed the following display as a result:
However, as I continued and tried to open the img_rgb object with Wand, using the code below:
wand_img = Image.from_array(img_rgb)
display(wand_img)
I'm getting the following result instead.
I tried opening the file directly with wand.image.Image(), which displays the image correctly via the display() function, so I believe there isn't anything wrong with the Wand installation on the system.
Is there a step I'm missing to convert the NumPy array into a Wand Image? If so, what would it be, and what is the suggested way to do it?
Please keep in mind that the conversion from NumPy to a Wand Image is crucial: the raw sonar images are stored as binary data, which requires NumPy to convert them into proper images.
Is there a step I'm missing to convert the NumPy array into a Wand Image?
No, but there is a bug in Wand's NumPy handling in Wand 0.5.x. The shape of OpenCV's ndarray is (ROWS, COLUMNS, CHANNELS), but Wand expects (WIDTH, HEIGHT, CHANNELS). I believe this has been fixed for the future 0.6.x releases.
If so, what would it be, and what is the suggested way to do it?
Swap the values in img_rgb.shape before passing to Wand.
img_rgb.shape = (img_rgb.shape[1], img_rgb.shape[0], img_rgb.shape[2],)
with Image.from_array(img_rgb) as img:
    display(img)
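As for the cv::remap backup plan mentioned in the question: OpenCV also ships a dedicated polar transform, cv2.warpPolar, which avoids building map_x and map_y by hand. A minimal sketch with hypothetical output size and fan geometry (the actual polar layout of the sonar data is an assumption here):
import cv2
import numpy as np

# Stand-in for the raw sonar image; the real polar layout is an assumption.
raw_img = np.zeros((369, 256), dtype=np.float32)

dsize = (512, 512)                     # hypothetical output (width, height)
center = (dsize[0] / 2, dsize[1] / 2)  # fan origin in the output image
max_radius = 256.0                     # radius spanned by the polar bins

# WARP_INVERSE_MAP maps the polar source back to Cartesian coordinates.
fan = cv2.warpPolar(raw_img, dsize, center, max_radius,
                    cv2.WARP_POLAR_LINEAR | cv2.WARP_INVERSE_MAP)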

keras-vis visualize saliency issue

I am trying to replicate the results from https://github.com/raghakot/keras-vis/blob/master/examples/mnist/attention.ipynb to produce a saliency map for the digit zero. I put in the exact same code and, as suggested, used:
import matplotlib.pyplot as plt
from vis.visualization import visualize_saliency
from vis.utils import utils
from keras import activations
# Utility to search for layer index by name.
# Alternatively we can specify this as -1 since it corresponds to the last layer.
layer_idx = utils.find_layer_idx(model, 'preds')
# Swap softmax with linear
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)
grads = visualize_saliency(model, layer_idx, filter_indices=class_idx, seed_input=x_test[idx])
# Plot with 'jet' colormap to visualize as a heatmap.
plt.imshow(grads, cmap='jet')
However, I keep getting the following error:
InvalidArgumentError: conv2d_1_input_3:0 is both fed and fetched.
I looked elsewhere and have seen suggestions to upgrade keras-vis. I have done this, but the same error shows up. The error appears to be in
grads = visualize_saliency(model, layer_idx, filter_indices=class_idx, seed_input=x_test[idx])
as when I comment this line out, no error shows.
How can I solve this?
It's solved! If anyone has this problem, use:
pip install git+git://github.com/raghakot/keras-vis.git --upgrade --no-deps
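A quick sanity check that the git install actually replaced the PyPI version (assuming the vis package exposes a __version__ attribute, as recent keras-vis releases do):
import vis
print(vis.__version__)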

TensorFlow 0.12 tutorials produce warning: "Rank of input Tensor should be the same as output_rank for column"

I have some experience with writing machine learning programs in Python, but I'm new to TensorFlow and am checking it out. My dev environment is a Lubuntu 14.04 64-bit virtual machine. I've created a Python 3.5 conda environment from miniconda and installed TensorFlow 0.12 and its dependencies. I began trying to run some example code from TensorFlow's tutorials and encountered this warning when calling fit() in the boston.py example for input functions (source):
WARNING:tensorflow:Rank of input Tensor (1) should be the same as
output_rank (2) for column. Will attempt to expand dims. It is highly
recommended that you resize your input, as this behavior may change.
After some searching in Google, I found other people encountered this same warning:
https://github.com/tensorflow/tensorflow/issues/6184
https://github.com/tensorflow/tensorflow/issues/5098
Tensorflow - Boston Housing Data Tutorial Errors
However, they also experienced errors which prevented code execution from completing. In my case, the code executes with the above warning. Unfortunately, I couldn't find a single answer in those links regarding what caused the warning and how to fix it; they all focused on the error. How does one remove the warning? Or is the warning safe to ignore?
Cheers!
Extra info: I also see the following warnings when running the aforementioned boston.py example.
WARNING:tensorflow:*******************************************************
WARNING:tensorflow:TensorFlow's V1 checkpoint format has been
deprecated. WARNING:tensorflow:Consider switching to the more
efficient V2 format: WARNING:tensorflow:
'tf.train.Saver(write_version=tf.train.SaverDef.V2)'
WARNING:tensorflow:now on by default.
WARNING:tensorflow:*******************************************************
and
WARNING:tensorflow:From
/home/kade/miniconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py:1053
in predict.: calling BaseEstimator.predict (from
tensorflow.contrib.learn.python.learn.estimators.estimator) with x is
deprecated and will be removed after 2016-12-01. Instructions for
updating: Estimator is decoupled from Scikit Learn interface by moving
into separate class SKCompat. Arguments x, y and batch_size are only
available in the SKCompat class, Estimator will only accept input_fn.
Example conversion: est = Estimator(...) -> est =
SKCompat(Estimator(...))
UPDATE (2016-12-22):
I've tracked the warning to this file:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/feature_column_ops.py
and this code block:
except NotImplementedError:
    with variable_scope.variable_scope(
            None,
            default_name=column.name,
            values=columns_to_tensors.values()):
        tensor = column._to_dense_tensor(transformed_tensor)
        tensor = fc._reshape_real_valued_tensor(tensor, 2, column.name)
        variable = [
            contrib_variables.model_variable(
                name='weight',
                shape=[tensor.get_shape()[1], num_outputs],
                initializer=init_ops.zeros_initializer(),
                trainable=trainable,
                collections=weight_collections)
        ]
        predictions = math_ops.matmul(tensor, variable[0], name='matmul')
Note the line: tensor = fc._reshape_real_valued_tensor(tensor, 2, column.name)
The method signature is: _reshape_real_valued_tensor(input_tensor, output_rank, column_name=None)
The value 2 is hardcoded as the value of output_rank, but the boston.py example is passing in an input_tensor of rank 1. I will continue to investigate.
If you specify the shape of your tensor explicitly:
tf.constant(df[k].values, shape=[df[k].size, 1])
the warning should go away.
After specifying the shape of the tensor explicitly:
continuous_cols = {k: tf.constant(df[k].values, shape=[df[k].size, 1]) for k in CONTINUOUS_COLUMNS}
It works!
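For context, here is how that fix slots into the tutorial's input_fn; a minimal sketch where feature_names and label_name stand in for the FEATURES and LABEL constants from boston.py:
import tensorflow as tf

def input_fn(df, feature_names, label_name):
    # The explicit [size, 1] shape makes each feature tensor rank 2,
    # which is the output_rank the feature columns expect.
    feature_cols = {k: tf.constant(df[k].values, shape=[df[k].size, 1])
                    for k in feature_names}
    labels = tf.constant(df[label_name].values)
    return feature_cols, labels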
