How to debug "ValueError: Error when checking input" - keras

I trained a model using Keras, and now I want to predict the value for each row in a pandas DataFrame. I don't know where the shape of the input is changing; it looks OK all the way down to the prediction step.
I want to compare the performance of my NN before and after loading the trained weights. I already have my weights saved.
PS: I loosely followed the steps in this tutorial
Create the NN:
def create_model(sample_input):  # numpy array, so we can use .shape
    model = tf.keras.Sequential([
        layers.Dense(64, activation='relu',
                     input_shape=sample_input.shape),
        layers.Dense(64, activation='relu'),
        layers.Dense(9, activation='softmax')
    ])
    model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model
Predict:
right = 0
wrong = 0
for index, row in X_test.iterrows():
    print(row.values.shape)
    prediction = model.predict(row.values)
    if prediction == y_test.values[index]:
        right += 1
    else:
        wrong += 1
print("Correct prediction = ", right)
print("Wrong prediction = ", wrong)
However, I get the following output. Note that it breaks in the first iteration, and that when I print the shape of the sample to predict, it matches the expected input shape that Keras complains about:
(11,)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-30-7b432180b212> in <module>
3 for index, row in X_test.iterrows():
4 print(row.values.shape)
----> 5 prediction = model.predict(row.values)
6 if prediction == y_test.values[index]:
7 right += 1
~/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in predict(self, x, batch_size, verbose, steps, max_queue_size, workers, use_multiprocessing)
1094 # batch size.
1095 x, _, _ = self._standardize_user_data(
-> 1096 x, check_steps=True, steps_name='steps', steps=steps)
1097
1098 if (self.run_eagerly or (isinstance(x, iterator_ops.EagerIterator) and
~/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle)
2380 feed_input_shapes,
2381 check_batch_axis=False, # Don't enforce the batch size.
-> 2382 exception_prefix='input')
2383
2384 if y is not None:
~/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
360 'Error when checking ' + exception_prefix + ': expected ' +
361 names[i] + ' to have shape ' + str(shape) +
--> 362 ' but got array with shape ' + str(data_shape))
363 return data
364
ValueError: Error when checking input: expected dense_9_input to have shape (11,) but got array with shape (1,)
I tried wrapping it like model.predict([row.values]), thinking that at some point Keras accesses the inner elements, but no luck: same problem.
I expect the model to predict something, even if it's wrong.
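For reference, the usual way around this mismatch (a minimal sketch, not from the thread; it assumes the model expects 11 features per sample and that the softmax output must be argmax-ed before comparing it to a label) is to give each row a batch axis before calling predict:
import numpy as np

for index, row in X_test.iterrows():
    sample = row.values.reshape(1, -1)   # shape (1, 11): a batch of one sample
    prediction = model.predict(sample)   # shape (1, 9): softmax over 9 classes
    predicted_class = int(np.argmax(prediction, axis=1)[0])
    if predicted_class == y_test.values[index]:
        right += 1
    else:
        wrong += 1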

Related

Object Detection With YOLOv3 in Keras - ValueError: If your data is in the form of symbolic tensors

I am practicing "Object Detection With YOLOv3 in Keras" as part of a tutorial, which you can find on this website: https://machinelearningmastery.com/how-to-perform-object-detection-with-yolov3-in-keras/. Any help will be appreciated.
In the following block of code, where I am trying to make a prediction:
# make prediction
yhat = model.predict(Image_file)
# summarize the shape of the list of arrays
print([a.shape for a in yhat])
I am receiving the following error:
--------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-35-278b18af4867> in <module>
1 # make prediction
----> 2 yhat = model.predict(Image_file)
3 # summarize the shape of the list of arrays
4 print([a.shape for a in yhat])
~/anaconda3/lib/python3.7/site-packages/keras/engine/training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
1460 verbose=verbose,
1461 steps=steps,
-> 1462 callbacks=callbacks)
1463
1464 def train_on_batch(self, x, y,
~/anaconda3/lib/python3.7/site-packages/keras/engine/training_arrays.py in predict_loop(model, f, ins, batch_size, verbose, steps, callbacks)
248 batch_size=batch_size,
249 steps=steps,
--> 250 steps_name='steps')
251
252 # Check if callbacks have not been already configured
~/anaconda3/lib/python3.7/site-packages/keras/engine/training_utils.py in check_num_samples(ins, batch_size, steps, steps_name)
569 raise ValueError(
570 'If your data is in the form of symbolic tensors, '
--> 571 'you should specify the `' + steps_name + '` argument '
572 '(instead of the `batch_size` argument, '
573 'because symbolic tensors are expected to produce '
ValueError: If your data is in the form of symbolic tensors, you should specify the `steps` argument (instead of the `batch_size` argument, because symbolic tensors are expected to produce batches of input data).
I fixed this error by adding steps=2 to the predict call, as shown below:
yhat = model.predict(Image_file,steps=2)
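The steps=2 value is arbitrary, though; the error most likely means that Image_file is a symbolic tensor rather than a NumPy array. A minimal sketch of the alternative fix (the filename and the 416x416 input size are assumptions taken from the tutorial's setup):
import numpy as np
from keras.preprocessing.image import load_img, img_to_array

# Load the image as concrete NumPy data so predict() is not handed a symbolic tensor
image = load_img('photo.jpg', target_size=(416, 416))  # hypothetical filename
image = img_to_array(image) / 255.0                     # scale pixels to [0, 1]
image = np.expand_dims(image, axis=0)                   # add the batch axis
yhat = model.predict(image)                             # no steps argument needed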

Keras: Using model output as the input for another: When feeding symbolic tensors to a model, we expect thetensors to have a static batch size

I have the following two models, where model_A is trained first, then the output of model_A is used to train the model_C:
import keras
import pandas as pd
from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(12,))
# ---------------------------------------
# model_A
x = Dense(64, activation='relu')(inputs)
x = Dense(64, activation='relu')(x)
predictions_A = Dense(3, activation='softmax')(x)
model_A = Model(inputs=inputs, outputs=predictions_A)
model_A.compile(optimizer='rmsprop',
                loss='categorical_crossentropy',
                metrics=['accuracy'])
model_A.fit(my_data_x, pd.get_dummies(my_data['target_cate'], prefix=['cate_']))
# ----------------------------------------
input_C_out_A = Input(shape=(3,))
# Concatenating the two input layers
concat = keras.layers.concatenate([inputs, input_C_out_A])
x1 = Dense(64, activation='relu')(concat)
x1 = Dense(64, activation='relu')(x1)
predictions_C = Dense(1, activation='sigmoid')(x1)
model_C = Model(inputs=[inputs, input_C_out_A], outputs=predictions_C)
model_C.compile(loss='mean_squared_error', optimizer='adam')
model_C.fit([my_data_x, predictions_A], my_data['target_numeric'])
Training model_A seems to be fine, but then I get the following error when training model_C:
Epoch 1/1
374667/374667 [==============================] - 11s 30us/step - loss: 0.3157 - acc: 0.9119
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-78-8df7b1dec93f> in <module>
28 model_C = Model(inputs=[inputs, input_C_out_A], outputs=predictions_C)
29 model_C.compile(loss='mean_squared_error', optimizer='adam')
---> 30 model_C.fit([my_data_x,predictions_A], my_data['target_numeric'])
~/workspace/git/tensorplay/venv/lib/python3.7/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
950 sample_weight=sample_weight,
951 class_weight=class_weight,
--> 952 batch_size=batch_size)
953 # Prepare validation data.
954 do_validation = False
~/workspace/git/tensorplay/venv/lib/python3.7/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
749 feed_input_shapes,
750 check_batch_axis=False, # Don't enforce the batch size.
--> 751 exception_prefix='input')
752
753 if y is not None:
~/workspace/git/tensorplay/venv/lib/python3.7/site-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
90 data = data.values if data.__class__.__name__ == 'DataFrame' else data
91 data = [data]
---> 92 data = [standardize_single_array(x) for x in data]
93
94 if len(data) != len(names):
~/workspace/git/tensorplay/venv/lib/python3.7/site-packages/keras/engine/training_utils.py in <listcomp>(.0)
90 data = data.values if data.__class__.__name__ == 'DataFrame' else data
91 data = [data]
---> 92 data = [standardize_single_array(x) for x in data]
93
94 if len(data) != len(names):
~/workspace/git/tensorplay/venv/lib/python3.7/site-packages/keras/engine/training_utils.py in standardize_single_array(x)
23 'When feeding symbolic tensors to a model, we expect the'
24 'tensors to have a static batch size. '
---> 25 'Got tensor with shape: %s' % str(shape))
26 return x
27 elif x.ndim == 1:
ValueError: When feeding symbolic tensors to a model, we expect thetensors to have a static batch size. Got tensor with shape: (None, 3)
Any idea what I missed? Thanks!
The model_C works by:
Giving the input data to model_A,
Getting output of model_A, and
Feeding that, along with the original inputs, to the first Dense layer.
So, implement just what you described (and try to keep the models separate, i.e. each with its own input/output layers):
input_C = Input(shape=(12,))
out_A = model_A(input_C) # get the output of model_A
concat = keras.layers.concatenate([input_C, out_A])
x1 = Dense(64, activation='relu')(concat)
x1 = Dense(64, activation='relu')(x1)
predictions_C = Dense(1, activation='sigmoid')(x1)
model_C = Model(inputs=input_C, outputs=predictions_C)
model_C.compile(loss='mean_squared_error', optimizer='adam')
model_C.fit(my_data_x, my_data['target_numeric'])
If you don't want model_A to be trained when training model_C (i.e. if you have already trained model_A and don't want its weights to be changed), just set model_A.trainable = False before compiling model_C, for example as sketched below.
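A minimal sketch of the freezing step (the flag must be set before compiling):
model_A.trainable = False  # keep model_A's weights fixed while training model_C
model_C.compile(loss='mean_squared_error', optimizer='adam')
model_C.fit(my_data_x, my_data['target_numeric'])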
It makes no sense to put the output of a model (a symbolic tensor) into model.fit, since there is no input data there. You should first obtain the predictions from model_A and then use them to fit model_C:
pred_a = model_A.predict(my_data_x)
model_C.fit([my_data_x, pred_a], my_data['target_numeric'])

IndexError: List Index out of range - Keras Tokenizer

I'm working with the sentiment140 dataset to try to learn sentiment analysis using RNNs. I found this tutorial online that uses the keras.imdb data source, but I want to use my own data source, so I have tried to adapt the code to my own data.
Tutorial: https://towardsdatascience.com/a-beginners-guide-on-sentiment-analysis-with-rnn-9e100627c02e
The data preprocessing involves extracting the series data and then tokenizing and padding it before sending it to the model for training. I perform these operations in my code below, but whenever I try to run the training I get if isinstance(data[0], list): IndexError: list index out of range. I did not define data, so this leads me to believe I did something that Keras or TensorFlow did not like. Any ideas as to what is causing this error?
My data is currently in a csv file format with the headers being SENTIMENT and TEXT. SENTIMENT is 0 for negative and 1 for positive. TEXT is the processed tweet that was collected. Here is a sample.
Dataset CSV (only a few lines, to save space)
SENTIMENT,TEXT
0,about to file tax
0,ahh i hate dogs
1,My paycheck came in today
1,lot to do before chi this weekend
1,lol love food
Code
import pandas as pd
import keras
import keras.preprocessing.text as kpt
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import json
import numpy as np
# Load in DS
df = pd.read_csv('./train.csv')
print(df.head())
#Create sequence
vocabulary_size = 1000
tokenizer = Tokenizer(num_words= vocabulary_size, split=' ')
tokenizer.fit_on_texts(df['TEXT'].values)
X_train = tokenizer.texts_to_sequences(df['TEXT'].values)
#Pad Sequence
X_train = pad_sequences(X_train)
print(X_train)
#Get Sentiment
y_train = df['SENTIMENT'].tolist()
#create model
max_words = 24
from keras import Sequential
from keras.layers import Embedding, LSTM, Dense, Dropout
embedding_size=32
model=Sequential()
model.add(Embedding(vocabulary_size, embedding_size, input_length=max_words))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
print(model.summary())
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
batch_size = 64
num_epochs = 3
X_valid, y_valid = X_train[:batch_size], y_train[:batch_size]
X_train2, y_train2 = X_train[batch_size:], y_train[batch_size:]
model.fit(X_train2, y_train2,
          validation_data=(X_valid, y_valid),
          batch_size=batch_size,
          epochs=num_epochs)
Output
Using TensorFlow backend.
SENTIMENT TEXT
0 0 aww that be bummer You shoulda get david carr ...
1 0 be upset that he can not update his facebook b...
2 0 I dive many time for the ball manage to save t...
3 0 my whole body feel itchy and like its on fire
4 0 no it be not behave at all be mad why be here ...
[[ 0 0 0 ... 3 10 5]
[ 0 0 0 ... 46 47 89]
[ 0 0 0 ... 29 9 96]
...
[ 0 0 0 ... 30 309 310]
[ 0 0 0 ... 0 0 72]
[ 0 0 0 ... 33 312 313]]
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 24, 32) 32000
_________________________________________________________________
lstm_1 (LSTM) (None, 100) 53200
_________________________________________________________________
dense_1 (Dense) (None, 1) 101
=================================================================
Total params: 85,301
Trainable params: 85,301
Non-trainable params: 0
_________________________________________________________________
None
Traceback (most recent call last):
File "mcve.py", line 50, in <module>
epochs=num_epochs)
File "/home/dv/tensorflow/venv/lib/python3.6/site-packages/keras/engine/training.py", line 950, in fit
batch_size=batch_size)
File "/home/dv/tensorflow/venv/lib/python3.6/site-packages/keras/engine/training.py", line 787, in _standardize_user_data
exception_prefix='target')
File "/home/dv/tensorflow/venv/lib/python3.6/site-packages/keras/engine/training_utils.py", line 79, in standardize_input_data
if isinstance(data[0], list):
IndexError: list index out of range
JUPYTER NOTEBOOK ERROR
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-25-184505b70981> in <module>()
20 model.fit(X_train2, y_train2,
21 batch_size=batch_size,
---> 22 epochs=num_epochs)
23
~/tensorflow/venv/lib/python3.6/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
948 sample_weight=sample_weight,
949 class_weight=class_weight,
--> 950 batch_size=batch_size)
951 # Prepare validation data.
952 do_validation = False
~/tensorflow/venv/lib/python3.6/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
785 feed_output_shapes,
786 check_batch_axis=False, # Don't enforce the batch size.
--> 787 exception_prefix='target')
788
789 # Generate sample-wise weight values given the `sample_weight` and
~/tensorflow/venv/lib/python3.6/site-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
77 'for each key in: ' + str(names))
78 elif isinstance(data, list):
---> 79 if isinstance(data[0], list):
80 data = [np.asarray(d) for d in data]
81 elif len(names) == 1 and isinstance(data[0], (float, int)):
IndexError: list index out of range
Edit
My former suggestion is wrong. I've checked your code and run it, and it works without errors for me.
Then I looked at the source code, at the standardize_input_data function. There's a line which checks the data argument:
def standardize_input_data(data,
                           names,
                           shapes=None,
                           check_batch_axis=True,
                           exception_prefix=''):
    """Normalizes inputs and targets provided by users.
    Users may pass data as a list of arrays, dictionary of arrays,
    or as a single array. We normalize this to an ordered list of
    arrays (same order as `names`), while checking that the provided
    arrays have shapes that match the network's expectations.
    # Arguments
        data: User-provided input data (polymorphic).
    ...
At line 79:
elif isinstance(data, list):
    if isinstance(data[0], list):
        ...
So it looks like that, in the error case, the input data is a list, but a list of zero length.
The standardize_input_data function is called inside the Model.fit(...) method through a call to Model._standardize_user_data(...). Through this chain of functions, the passed data argument gets the value of the x argument of Model.fit(x, y, ...). So my guess is that the problem is with the type or content of X_train2 or X_valid. Could you provide the contents of X_train2 and X_valid in addition to X_train?
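One thing worth trying in the meantime (an assumption on my part, not verified against your data): Keras tends to handle NumPy arrays more robustly than plain Python lists as targets, so convert y_train before splitting it:
import numpy as np

# Hypothetical fix: pass the targets as a NumPy array rather than a Python list
y_train = np.asarray(df['SENTIMENT'].values)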
Old wrong suggestion
You should increase the vocabulary size by one to deal with out-of-vocabulary tokens, I guess.
I.e, change initialization of the Embedding layer:
model.add(Embedding(vocabulary_size + 1, embedding_size, input_length=max_words))
According to the docs: "input_dim: int > 0. Size of the vocabulary, i.e. maximum integer index + 1".
You may check the maximum value of X_train.
Hope it helps!

Input Dimensions Tensorflow v1.8 ConvLSTMCell

ConvLSTMCell Official Docs
GitHub _conv where the error occurs
Issue
I'm experimenting with the ConvLSTMCell in TensorFlow r1.8. The error I keep getting occurs in the __call__ method of ConvLSTMCell. The _conv method is invoked and the error is raised.
ValueError: Conv Linear Expects 3D, 4D, 5D
The error is raised from the unstacked inputs. unstacked (in this example) has dimensions of [BATCH_SIZE, N_INPUT] = [2, 5]. I am using tf.unstack to generate the sequence that the ConvLSTMCell requires.
Why use tf.unstack?
If the input array is not unstacked, the TypeError below is raised.
TypeError: inputs must be a sequence
Question
What am I missing on the formatting? I've read through related issues but have not found anything that has guided me into a working implementation.
Are the placeholder dimensions correct?
Should I be unstacking or is there a better way?
Am I providing the proper input dimension into the ConvLSTMCell?
Code
# Parameters
TIME_STEPS = 28
N_INPUT = 5
N_HIDDEN = 128
LEARNING_RATE = 0.001
NUM_UNITS = 28
CHANNEL = 1
tf.reset_default_graph()
# Input placeholders
x = tf.placeholder(tf.float32, [BATCH_SIZE, TIME_STEPS, N_INPUT])
y = tf.placeholder(tf.float32, [None, 1])
# Format input as a sequence for LSTM Input
unstacked = tf.unstack(x, TIME_STEPS, 1) # shape=(timesteps, batch, inputs)
# Convolutional LSTM Layer
lstm_layer = tf.contrib.rnn.ConvLSTMCell(
conv_ndims=1,
input_shape=[BATCH_SIZE, N_INPUT],
output_channels=5,
kernel_shape=[7,5]
)
# Error is generated when the lstm_layer is invoked
outputs, _ = tf.contrib.rnn.static_rnn(
lstm_layer,
unstacked,
dtype=tf.float32)
Error Message
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-83-3568a097e4ea> in <module>()
10 lstm_layer,
11 unstacked,
---> 12 dtype=tf.float32)
~/miniconda3/envs/MultivariateTimeSeries/lib/python3.6/site-packages/tensorflow/python/ops/rnn.py in static_rnn(cell, inputs, initial_state, dtype, sequence_length, scope)
1322 state_size=cell.state_size)
1323 else:
-> 1324 (output, state) = call_cell()
1325
1326 outputs.append(output)
~/miniconda3/envs/MultivariateTimeSeries/lib/python3.6/site-packages/tensorflow/python/ops/rnn.py in <lambda>()
1309 varscope.reuse_variables()
1310 # pylint: disable=cell-var-from-loop
-> 1311 call_cell = lambda: cell(input_, state)
1312 # pylint: enable=cell-var-from-loop
1313 if sequence_length is not None:
~/miniconda3/envs/MultivariateTimeSeries/lib/python3.6/site-packages/tensorflow/python/ops/rnn_cell_impl.py in __call__(self, inputs, state, scope)
230 setattr(self, scope_attrname, scope)
231 with scope:
--> 232 return super(RNNCell, self).__call__(inputs, state)
233
234 def _rnn_get_variable(self, getter, *args, **kwargs):
~/miniconda3/envs/MultivariateTimeSeries/lib/python3.6/site-packages/tensorflow/python/layers/base.py in __call__(self, inputs, *args, **kwargs)
715
716 if not in_deferred_mode:
--> 717 outputs = self.call(inputs, *args, **kwargs)
718 if outputs is None:
719 raise ValueError('A layer\'s `call` method should return a Tensor '
~/miniconda3/envs/MultivariateTimeSeries/lib/python3.6/site-packages/tensorflow/contrib/rnn/python/ops/rnn_cell.py in call(self, inputs, state, scope)
2110 cell, hidden = state
2111 new_hidden = _conv([inputs, hidden], self._kernel_shape,
-> 2112 4 * self._output_channels, self._use_bias)
2113 gates = array_ops.split(
2114 value=new_hidden, num_or_size_splits=4, axis=self._conv_ndims + 1)
~/miniconda3/envs/MultivariateTimeSeries/lib/python3.6/site-packages/tensorflow/contrib/rnn/python/ops/rnn_cell.py in _conv(args, filter_size, num_features, bias, bias_start)
2184 if len(shape) not in [3, 4, 5]:
2185 raise ValueError("Conv Linear expects 3D, 4D "
-> 2186 "or 5D arguments: %s" % str(shapes))
2187 if len(shape) != len(shapes[0]):
2188 raise ValueError("Conv Linear expects all args "
ValueError: Conv Linear expects 3D, 4D or 5D arguments: [[2, 5], [2, 2, 5]]
Here's an example with a couple of tweaks, which at least passes static shape checking:
import tensorflow as tf
# Parameters
TIME_STEPS = 28
N_INPUT = 5
N_HIDDEN = 128
LEARNING_RATE = 0.001
NUM_UNITS = 28
CHANNEL = 1
BATCH_SIZE = 16
# Input placeholders
x = tf.placeholder(tf.float32, [BATCH_SIZE, TIME_STEPS, N_INPUT])
y = tf.placeholder(tf.float32, [None, 1])
# Format input as a sequence for LSTM input
unstacked = tf.unstack(x[..., None], TIME_STEPS, 1)  # shape=(timesteps, batch, inputs)
# Convolutional LSTM layer: note the 1-D kernel_shape and the
# input_shape without the batch dimension
lstm_layer = tf.contrib.rnn.ConvLSTMCell(
    conv_ndims=1,
    input_shape=[N_INPUT, 1],
    output_channels=5,
    kernel_shape=[7]
)
outputs, _ = tf.contrib.rnn.static_rnn(
    lstm_layer,
    unstacked,
    dtype=tf.float32)
Notes:
input_shape does not include the batch dimension (see docstring)
The input needs a channels dimension. Fine for it to be one in the input (that's what I've done).
Not sure what more than one dimension on kernel_shape would mean for a 1-D convolution.
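A quick smoke test of the graph above (a sketch; the random feed values are stand-ins for real data):
import numpy as np

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    feed = np.random.rand(BATCH_SIZE, TIME_STEPS, N_INPUT).astype(np.float32)
    out_vals = sess.run(outputs, feed_dict={x: feed})
    # static_rnn returns one output per time step: TIME_STEPS arrays of
    # shape (BATCH_SIZE, N_INPUT, output_channels) = (16, 5, 5)
    print(len(out_vals), out_vals[0].shape)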

DNN Linear Regression. MAE measurement error

I am trying to implement MAE as a performance measure for my DNN regression model. I am using a DNN to predict the number of comments a Facebook post will get. As I understand it, if it is a classification problem we use accuracy, and if it is a regression problem we use either RMSE or MAE. My code is the following:
with tf.name_scope("eval"):
    correct = tf.metrics.mean_absolute_error(labels=y, predictions=logits)
    mae = tf.reduce_mean(tf.cast(correct, tf.int64))
    mae_summary = tf.summary.scalar('mae', accuracy)
For some reason, I get the following error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-396-313ddf858626> in <module>()
1 with tf.name_scope("eval"):
----> 2 correct = tf.metrics.mean_absolute_error(labels = y, predictions = logits)
3 mae = tf.reduce_mean(tf.cast(correct, tf.int64))
4 mae_summary = tf.summary.scalar('mae', accuracy)
~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/metrics_impl.py in mean_absolute_error(labels, predictions, weights, metrics_collections, updates_collections, name)
736 predictions, labels, weights = _remove_squeezable_dimensions(
737 predictions=predictions, labels=labels, weights=weights)
--> 738 absolute_errors = math_ops.abs(predictions - labels)
739 return mean(absolute_errors, weights, metrics_collections,
740 updates_collections, name or 'mean_absolute_error')
~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py in binary_op_wrapper(x, y)
883 if not isinstance(y, sparse_tensor.SparseTensor):
884 try:
--> 885 y = ops.convert_to_tensor(y, dtype=x.dtype.base_dtype, name="y")
886 except TypeError:
887 # If the RHS is not a tensor, it might be a tensor aware object
~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in convert_to_tensor(value, dtype, name, preferred_dtype)
834 name=name,
835 preferred_dtype=preferred_dtype,
--> 836 as_ref=False)
837
838
~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in internal_convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, ctx)
924
925 if ret is None:
--> 926 ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
927
928 if ret is NotImplemented:
~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in _TensorTensorConversionFunction(t, dtype, name, as_ref)
772 raise ValueError(
773 "Tensor conversion requested dtype %s for Tensor with dtype %s: %r" %
--> 774 (dtype.name, t.dtype.name, str(t)))
775 return t
776
ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int64: 'Tensor("eval_9/remove_squeezable_dimensions/cond_1/Merge:0", dtype=int64)'
This line in your code:
correct = tf.metrics.mean_absolute_error(labels = y, predictions = logits)
executes in a way where TensorFlow first subtracts the labels from the predictions, as seen in the backtrace:
absolute_errors = math_ops.abs(predictions - labels)
In order to do the subtraction, the two tensors need to have the same data type. Presumably your predictions (logits) are float32, and from the error message your labels are int64. You either have to do an explicit conversion with tf.to_float, or the implicit one you suggest in your comment: define the placeholder as float32 to start with, and trust TensorFlow to do the conversion when the feed dictionary is processed.
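A minimal corrected sketch of the eval block (assuming y is the int64 labels placeholder; note that tf.metrics.mean_absolute_error already returns a (value, update_op) pair, so the extra cast/reduce_mean is unnecessary):
with tf.name_scope("eval"):
    # Cast the integer labels to the predictions' float32 dtype
    mae, mae_update_op = tf.metrics.mean_absolute_error(
        labels=tf.to_float(y), predictions=logits)
    mae_summary = tf.summary.scalar('mae', mae)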
