TensorFlow 0.12 tutorials produce warning: "Rank of input Tensor should be the same as output_rank for column" - python-3.x

I have some experience writing machine learning programs in Python, but I'm new to TensorFlow and am checking it out. My dev environment is a lubuntu 14.04 64-bit virtual machine. I've created a Python 3.5 conda environment from miniconda and installed TensorFlow 0.12 and its dependencies. I began trying to run some example code from TensorFlow's tutorials and encountered this warning when calling fit() in the boston.py example for input functions (source):
WARNING:tensorflow:Rank of input Tensor (1) should be the same as output_rank (2) for column. Will attempt to expand dims. It is highly recommended that you resize your input, as this behavior may change.
After some searching on Google, I found that other people have encountered this same warning:
https://github.com/tensorflow/tensorflow/issues/6184
https://github.com/tensorflow/tensorflow/issues/5098
Tensorflow - Boston Housing Data Tutorial Errors
However, they also experienced errors which prevented code execution from completing. In my case, the code executes with only the above warning. Unfortunately, I couldn't find a single answer in those links regarding what caused the warning or how to fix it; they all focused on the error. How does one remove the warning? Or is the warning safe to ignore?
Cheers!
Extra info: I also see the following warnings when running the aforementioned boston.py example.
WARNING:tensorflow:*******************************************************
WARNING:tensorflow:TensorFlow's V1 checkpoint format has been deprecated.
WARNING:tensorflow:Consider switching to the more efficient V2 format:
WARNING:tensorflow:   'tf.train.Saver(write_version=tf.train.SaverDef.V2)'
WARNING:tensorflow:now on by default.
WARNING:tensorflow:*******************************************************
and
WARNING:tensorflow:From /home/kade/miniconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py:1053 in predict.: calling BaseEstimator.predict (from tensorflow.contrib.learn.python.learn.estimators.estimator) with x is deprecated and will be removed after 2016-12-01.
Instructions for updating: Estimator is decoupled from Scikit Learn interface by moving into separate class SKCompat. Arguments x, y and batch_size are only available in the SKCompat class, Estimator will only accept input_fn.
Example conversion: est = Estimator(...) -> est = SKCompat(Estimator(...))
UPDATE (2016-12-22):
I've tracked the warning to this file:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/feature_column_ops.py
and this code block:
except NotImplementedError:
  with variable_scope.variable_scope(
      None,
      default_name=column.name,
      values=columns_to_tensors.values()):
    tensor = column._to_dense_tensor(transformed_tensor)
    tensor = fc._reshape_real_valued_tensor(tensor, 2, column.name)
    variable = [
        contrib_variables.model_variable(
            name='weight',
            shape=[tensor.get_shape()[1], num_outputs],
            initializer=init_ops.zeros_initializer(),
            trainable=trainable,
            collections=weight_collections)
    ]
    predictions = math_ops.matmul(tensor, variable[0], name='matmul')
Note the line: tensor = fc._reshape_real_valued_tensor(tensor, 2, column.name)
The method signature is: _reshape_real_valued_tensor(input_tensor, output_rank, column_name=None)
The value 2 is hardcoded as the value of output_rank, but the boston.py example is passing in an input_tensor of rank 1. I will continue to investigate.
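To see the mismatch concretely, here is a minimal sketch (the values array is illustrative, not from boston.py): a constant built from a flat array has rank 1, while passing an explicit [n, 1] shape yields the rank-2 tensor the linear model expects.

import numpy as np
import tensorflow as tf

values = np.array([1.0, 2.0, 3.0])
t1 = tf.constant(values)                          # rank 1, shape (3,)
t2 = tf.constant(values, shape=[values.size, 1])  # rank 2, shape (3, 1)
print(t1.get_shape(), t2.get_shape())             # (3,) (3, 1)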

If you specify the shape of your tensor explicitly:
tf.constant(df[k].values, shape=[df[k].size, 1])
the warning should go away.

After specifying the shape of the tensor explicitly:
continuous_cols = {k: tf.constant(df[k].values, shape=[df[k].size, 1]) for k in CONTINUOUS_COLUMNS}
It works!
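For context, here is a minimal sketch of how the fix slots into a tutorial-style input function; the names df, CONTINUOUS_COLUMNS, and LABEL are assumed for illustration rather than copied verbatim from boston.py.

import tensorflow as tf

def input_fn(df):
    # Rank-2 constants (shape [n, 1]) match the output_rank that the
    # feature columns expect, so the expand-dims warning is avoided.
    feature_cols = {k: tf.constant(df[k].values, shape=[df[k].size, 1])
                    for k in CONTINUOUS_COLUMNS}
    labels = tf.constant(df[LABEL].values)
    return feature_cols, labels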

Related

module 'torch.cuda' has no attribute 'memory_summary'

I'm trying to measure the available memory on each of my GPUs using the torch.cuda module. However, it returns the following error.
module 'torch.cuda' has no attribute 'memory_summary'
My code is below
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(torch.cuda.get_device_name(i))
        a = torch.cuda.memory_summary(torch.device('cuda:{}'.format(i)))
        print(a)
Similarly, memory_stats, mem_get_info, and memory_reserved all fail.
torch.cuda.memory_summary was introduced in PyTorch 1.4.0, so if your torch install is older than that, you won't be able to use it.
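As a quick defensive check, you can guard the call so the loop still runs on older builds (a sketch; hasattr is used because these introspection functions were added across different releases):

import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(torch.cuda.get_device_name(i))
        if hasattr(torch.cuda, 'memory_summary'):
            print(torch.cuda.memory_summary(torch.device('cuda:{}'.format(i))))
        else:
            # Fallback available on older releases
            print('allocated bytes:', torch.cuda.memory_allocated(i))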

ScatterND not supported

I have a tensor with shape [1,100,34] to which two tensors of shape [1,100,17] need to be added at alternating indexes. The current implementation in PyTorch looks like this:
kps[..., ::2] += xs.view(batch, K, 1).expand(batch, K, num_joints)
kps[..., 1::2] += ys.view(batch, K, 1).expand(batch, K, num_joints)
This converts successfully to an ONNX model using opset >= 11; the conversion uses the ScatterND operation. However, this operation is not yet supported in TensorRT, and it fails with the error:
ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin ScatterND version 1
Parse Failed, Layers = 4447
In node -1 (importFallbackPluginImporter): UNSUPPORTED_NODE: Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Is there any way to avoid converting this code to a ScatterND operation in ONNX? Or any other alternative workaround?
ScatterND is now supported in TensorRT versions 8 and later.
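For older TensorRT versions, one commonly used workaround (a sketch, not from the original answer) is to build the interleaved offsets with stack/reshape and a plain out-of-place add, which exports to ONNX as Reshape/Concat/Add rather than ScatterND:

import torch

batch, K, num_joints = 1, 100, 17
kps = torch.randn(batch, K, 2 * num_joints)
xs = torch.randn(batch, K)
ys = torch.randn(batch, K)

# Expand per-keypoint offsets, interleave x/y along the last axis
# (x0, y0, x1, y1, ...), then add without sliced in-place assignment.
xs_e = xs.view(batch, K, 1).expand(batch, K, num_joints)
ys_e = ys.view(batch, K, 1).expand(batch, K, num_joints)
offset = torch.stack((xs_e, ys_e), dim=3).reshape(batch, K, 2 * num_joints)
kps = kps + offset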

Tensorflow serving: request fails with object has no attribute 'unary_unary'

I'm building a CNN text classifier using TensorFlow which I want to load in tensorflow-serving and query using the serving APIs. When I call the Predict() method on the gRPC stub, I receive this error: AttributeError: 'grpc._cython.cygrpc.Channel' object has no attribute 'unary_unary'
What I've done to date:
I have successfully trained and exported a model suitable for serving (i.e., the signatures are verified and using tf.Saver I can successfully return a prediction). I can also load the model in tensorflow_model_server without error.
Here is a snippet of the client code (simplified for readability):
with tf.Session() as sess:
    host = FLAGS.server
    channel = grpc.insecure_channel('localhost:9001')
    stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'predict_text'
    request.model_spec.signature_name = 'predict_text'
    x_text = ["space"]
    # restore vocab processor
    # then create a ndarray with transform_fit using the vocabulary
    vocab = learn.preprocessing.VocabularyProcessor.restore('/some_path/model_export/1/assets/vocab')
    x = np.array(list(vocab.fit_transform(x_text)))
    # data
    temp_data = tf.contrib.util.make_tensor_proto(x, shape=[1, 15], verify_shape=True)
    request.inputs['input'].CopyFrom(tf.contrib.util.make_tensor_proto(x, shape=[1, 15], verify_shape=True))
    # get classification prediction
    result = stub.Predict(request, 5.0)
Where I'm bending the rules: I am using tensorflow-serving-apis with Python 3.5.3, even though pip install is not officially supported. Various posts (example: https://github.com/tensorflow/serving/issues/581) have reported success using tensorflow-serving with Python 3. I have downloaded the tensorflow-serving-api package from PyPI (https://pypi.python.org/pypi/tensorflow-serving-api/1.5.0) and manually pasted it into the environment.
Versions: tensorflow: 1.5.0, tensorflow-serving-apis: 1.5.0, grpcio: 1.9.0rc3, grpcio-tools: 1.9.0rc3, protobuf: 3.5.1 (all other dependency versions have been verified but are not included for brevity -- happy to add if they have utility)
Environment: Linux Mint 17 Qiana; x64, Python 3.5.3
Investigations:
A GitHub issue (https://github.com/GoogleCloudPlatform/google-cloud-python/issues/2258) indicated that this error was historically triggered by packages built against the gRPC beta API.
What data or learning or implementation am I missing?
beta_create_PredictionService_stub() is deprecated. Try this:
from tensorflow_serving.apis import prediction_service_pb2_grpc
...
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
Try to use grpc.beta.implementations.insecure_channel instead of grpc.insecure_channel.
See example code here.
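A minimal sketch of that alternative, assuming the same host and port as in the question (note that the beta helper takes host and port as separate arguments, unlike grpc.insecure_channel):

from grpc.beta import implementations
from tensorflow_serving.apis import prediction_service_pb2

channel = implementations.insecure_channel('localhost', 9001)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)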

Keras with Theano on GPU

While trying to run my Keras code on the GPU (CUDA installed), I am unable to execute the following statement, which has been suggested in many online references.
set THEANO_FLAGS="mode=FAST_RUN,device=gpu,floatX=float32" & python theanogpu_example.py
I am getting the following error.
ValueError: Invalid value ("FAST_RUN,device=gpu,floatX=float32") for configuration variable "mode". Valid options are ('Mode', 'DebugMode', 'FAST_RUN', 'NanGuardMode', 'FAST_COMPILE', 'DEBUG_MODE')
I have also tried the other approach suggested, setting the config from inside the code:
import theano
theano.config.device = 'gpu'
theano.config.floatX = 'float32'
I get the following error.
Exception: Can't change the value of this config parameter after initialization!
Apart from knowing how to make it run, I would also take this opportunity to ask a simpler question: how do I find out, on Windows, what my device is, i.e. whether it is 'gpu', 'gpu0', or 'gpu1'? I have tried all three in my case, but none has worked.
Any suggestions will be appreciated.
The best way is to set THEANO_FLAGS before running the code, because the config variables cannot be changed after Theano is imported. (Note also that with "set" on Windows, the surrounding quotes become part of the variable's value, which is why Theano tried to parse the entire quoted string as the value of "mode", as your error message shows.) Try this:
import os
os.environ['THEANO_FLAGS'] = "device=cuda,force_device=True,floatX=float32"
import theano
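To confirm the flags took effect, a quick sanity check after the import (an illustration added here, not part of the original answer):

import os
os.environ['THEANO_FLAGS'] = "device=cuda,force_device=True,floatX=float32"
import theano
print(theano.config.device, theano.config.floatX)  # should report the cuda device and float32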

ImportError: No module named 'theano.floatX'

I am following a tutorial to create a convolutional neural network with Theano. However, I ran into a problem with a piece of code:
>> x = theano.floatX.xmatrix(theano.config.floatX) # rasterized images
AttributeError: 'module' object has no attribute 'floatX'
I loaded floatX with:
>> from theano import config
and checked with:
>> print(theano.config.floatX)
float32
But I still cannot load xmatrix, which, judging from the documentation, should be in theano.config.floatX. Does somebody know where I can find it?
Thank you in advance!
This section of the convnet tutorial has a bug or is very outdated. Symbolic variables in Theano live in the theano.tensor package; a theano.floatX package doesn't even exist!
The current version in the tutorial github repository works fine. They allocate the symbolic variable the right way:
import theano.tensor as T

# allocate symbolic variables for the data
index = T.lscalar()  # index to a [mini]batch
x = T.matrix('x')    # the data is presented as rasterized images
y = T.ivector('y')   # the labels are presented as 1D vector of [int] labels
Browsing the tutorial repository, I found the revision where this bug was corrected.
They seem to have forgotten to update the tutorial text with this fix.
