I have seen problems similar to mine here on Stack Overflow, but not exactly the same. I can reshape when using fully-connected NN layers, but not with Conv1D layers. Here's a minimal example. I'm using TF 1.4.0 on Python 3.6.3.
import tensorflow as tf
# fully connected
fc = tf.placeholder(tf.float32, [None,12])
fc = tf.contrib.layers.fully_connected(fc, 12)
fc = tf.contrib.layers.fully_connected(fc, 6)
fc = tf.reshape(fc, [-1,3,2])
# convolutional
con = tf.placeholder(tf.float32, [None,50,4])
con = tf.layers.Conv1D(con, 12, 3, activation=tf.nn.relu)
con = tf.layers.Conv1D(con, 6, 3, activation=tf.nn.relu)
con = tf.reshape(con, [-1,50,3,2])
Here is the output (yes, I'm aware of the RuntimeWarning. The messages I have found which discuss it suggest that it's harmless, but if you know otherwise, please share!):
/usr/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_util.py", line 468, in make_tensor_proto
str_values = [compat.as_bytes(x) for x in proto_values]
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_util.py", line 468, in <listcomp>
str_values = [compat.as_bytes(x) for x in proto_values]
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/compat.py", line 65, in as_bytes
(bytes_or_text,))
TypeError: Expected binary or unicode string, got <tensorflow.python.layers.convolutional.Conv1D object at 0x7fa67e0d1a20>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "minimal reshape example.py", line 16, in <module>
con = tf.reshape(con, [-1,width,3,2])
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 3938, in reshape
"Reshape", tensor=tensor, shape=shape, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py", line 513, in _apply_op_helper
raise err
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py", line 510, in _apply_op_helper
preferred_dtype=default_dtype)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 926, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py", line 229, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py", line 208, in constant
value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_util.py", line 472, in make_tensor_proto
"supported type." % (type(values), values))
TypeError: Failed to convert object of type <class 'tensorflow.python.layers.convolutional.Conv1D'> to Tensor. Contents: <tensorflow.python.layers.convolutional.Conv1D object at 0x7fa67e0d1a20>. Consider casting elements to a supported type.
My code fails at con = tf.reshape(con, [-1,50,3,2]). Yet the pattern is nearly identical to the pattern that I use for the fully-connected graph, fc.
I made nets very similar to these work in the higher-level API for TensorFlow called TFLearn. However, TFLearn's DNN Estimator object is having trouble managing a tf.Session correctly. After over a month, I have yet to resolve the issue with TFLearn's developers on GitHub.
I don't mind using TensorFlow's native Estimator, but I have to solve this reshape problem to achieve it.
Well, I found the error: tf.layers.Conv1D != tf.layers.conv1d. Changing the former to the latter eliminated the error. Let the TensorFlow / Python user beware!
Even though TensorFlow seems to avoid Python's object model (which is probably necessary, given the possibility of distributed, low-level computation), there are in fact a few genuine classes in the Python API. The class constructors can accept many (all?) of the same arguments as the similarly-named convenience functions.
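To make the difference concrete, here is a minimal sketch against the TF 1.x API (the Conv1D class constructor takes only the layer's hyperparameters; the resulting object must then be called on the input tensor, whereas conv1d applies the layer immediately):
import tensorflow as tf

con = tf.placeholder(tf.float32, [None, 50, 4])

# Function form: pass the input tensor directly; a tensor comes back.
out_fn = tf.layers.conv1d(con, filters=12, kernel_size=3, activation=tf.nn.relu)

# Class form: construct the layer first, then call it on the input.
layer = tf.layers.Conv1D(filters=12, kernel_size=3, activation=tf.nn.relu)
out_cls = layer(con)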
Related
Trying to use multiple TensorFlow models in parallel using pathos.multiprocessing.Pool
Error is:
multiprocess.pool.RemoteTraceback:
Traceback (most recent call last):
File "c:\users\burge\appdata\local\programs\python\python37\lib\site-packages\multiprocess\pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "c:\users\burge\appdata\local\programs\python\python37\lib\site-packages\multiprocess\pool.py", line 44, in mapstar
return list(map(*args))
File "c:\users\burge\appdata\local\programs\python\python37\lib\site-packages\pathos\helpers\mp_helper.py", line 15, in <lambda>
func = lambda args: f(*args)
File "c:\Users\Burge\Desktop\SwarmMemory\sim.py", line 38, in run
i.step()
File "c:\Users\Burge\Desktop\SwarmMemory\agent.py", line 240, in step
output = self.ai(np.array(self.internal_log).reshape(-1, 1, 9))
File "c:\users\burge\appdata\local\programs\python\python37\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1012, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "c:\users\burge\appdata\local\programs\python\python37\lib\site-packages\tensorflow\python\keras\engine\sequential.py", line 375, in call
return super(Sequential, self).call(inputs, training=training, mask=mask)
File "c:\users\burge\appdata\local\programs\python\python37\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 425, in call
inputs, training=training, mask=mask)
File "c:\users\burge\appdata\local\programs\python\python37\lib\site-packages\tensorflow\python\keras\engine\functional.py", line 569, in _run_internal_graph
assert x_id in tensor_dict, 'Could not compute output ' + str(x)
AssertionError: Could not compute output KerasTensor(type_spec=TensorSpec(shape=(None, 1, 4), dtype=tf.float32, name=None), name='dense_1/BiasAdd:0', description="created by layer 'dense_1'")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\Burge\Desktop\SwarmMemory\sim.py", line 78, in <module>
p.map(Sim.run, sims)
File "c:\users\burge\appdata\local\programs\python\python37\lib\site-packages\pathos\multiprocessing.py", line 137, in map
return _pool.map(star(f), zip(*args)) # chunksize
File "c:\users\burge\appdata\local\programs\python\python37\lib\site-packages\multiprocess\pool.py", line 268, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "c:\users\burge\appdata\local\programs\python\python37\lib\site-packages\multiprocess\pool.py", line 657, in get
raise self._value
AssertionError: Could not compute output KerasTensor(type_spec=TensorSpec(shape=(None, 1, 4), dtype=tf.float32, name=None), name='dense_1/BiasAdd:0', description="created by layer 'dense_1'")
The creation of the pool is as follows:
if __name__ == '__main__':
    freeze_support()
    model = Sequential()
    model.add(Input(shape=(1,9)))
    model.add(LSTM(10, return_sequences=True))
    model.add(Dropout(0.1))
    model.add(LSTM(5))
    model.add(Dropout(0.1))
    model.add(Dense(4))
    model.add(Dense(4))

    models = []
    sims = []
    for i in range(6):
        models.append(tensorflow.keras.models.clone_model(model))
        sims.append(Sim(models[-1]))

    p = Pool()
    p.map(Sim.run, sims)
Basically, I am running a simulation using the model provided to the Sim class. This means that after the sim has run, I can use a fitness function on the results and apply a genetic algorithm to them.
GitHub link for more information, under branch python-ver:
https://github.com/HarryBurge/SwarmMemory
EDIT:
In case anyone needs to know how to do this in the future: I used keras-pickle-wrapper to be able to pickle the Keras model, and just passed it to the run method.
models = []
sims = []
for i in range(6):
    models.append(KerasPickleWrapper(tensorflow.keras.models.clone_model(model)))
    sims.append(Sim())

p = Pool()
p.map(Sim.run, sims, models)
I'm the author of pathos. Whenever you see self._value in the error, what's generally happening is that something you tried to send to another processor failed to serialize. The error and traceback are a bit obtuse, admittedly. However, what you can do is check the serialization with dill and determine whether you need one of the serialization variants (like dill.settings['trace'] = True), or whether you need to restructure your code slightly to better accommodate serialization. If the class you are working with is something you can edit, then an easy thing to do is to add a __reduce__ method, or similar, to aid serialization.
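For example, a quick serialization check with dill (a sketch; sims[0] stands in for whatever object you are sending to the pool):
import dill

# Does the object serialize at all? dill.pickles returns True/False.
print(dill.pickles(sims[0]))

# If it fails, enable the pickling trace to see which nested object
# is responsible for the failure.
dill.detect.trace(True)
dill.dumps(sims[0])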
I'm trying to use the pretrained tf-hub elmo model by integrating it into a keras layer.
Keras Layer:
class ElmoEmbeddingLayer(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super(ElmoEmbeddingLayer, self).__init__(**kwargs)
        self.dimensions = 1024
        self.trainable = True
        self.elmo = None

    def build(self, input_shape):
        url = 'https://tfhub.dev/google/elmo/2'
        self.elmo = hub.Module(url)
        self._trainable_weights += trainable_variables(
            scope="^{}_module/.*".format(self.name))
        super(ElmoEmbeddingLayer, self).build(input_shape)

    def call(self, x, mask=None):
        result = self.elmo(
            x,
            signature="default",
            as_dict=True)["elmo"]
        return result

    def compute_output_shape(self, input_shape):
        return input_shape[0], self.dimensions
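The layer is invoked from my model-building code roughly like this (a sketch; the string input shape is an assumption, and my real model wraps this in helper functions, as the traceback shows):
# Minimal sketch of how the layer is used.
input_text = tf.keras.layers.Input(shape=(1,), dtype=tf.string)
embedding = ElmoEmbeddingLayer()(input_text)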
When I run the code I get the following error:
Traceback (most recent call last):
File "D:/Google Drive/Licenta/Gemini/Emotion Analysis/nn/trainer/model.py", line 170, in <module>
validation_steps=validation_dataset.size())
File "D:/Google Drive/Licenta/Gemini/Emotion Analysis/nn/trainer/model.py", line 79, in train_gpu
model = build_model(self.config, self.embeddings, self.sequence_len, self.out_classes, summary=True)
File "D:\Google Drive\Licenta\Gemini\Emotion Analysis\nn\architectures\models.py", line 8, in build_model
return my_model(embeddings, config, sequence_length, out_classes, summary)
File "D:\Google Drive\Licenta\Gemini\Emotion Analysis\nn\architectures\models.py", line 66, in my_model
inputs, embedding = resolve_inputs(embeddings, sequence_length, model_config, input_type)
File "D:\Google Drive\Licenta\Gemini\Emotion Analysis\nn\architectures\models.py", line 19, in resolve_inputs
return elmo_input(model_conf)
File "D:\Google Drive\Licenta\Gemini\Emotion Analysis\nn\architectures\models.py", line 58, in elmo_input
embedding = ElmoEmbeddingLayer()(input_text)
File "D:\Apps\Anaconda\envs\tf2.0\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 616, in __call__
self._maybe_build(inputs)
File "D:\Apps\Anaconda\envs\tf2.0\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1966, in _maybe_build
self.build(input_shapes)
File "D:\Google Drive\Licenta\Gemini\Emotion Analysis\nn\architectures\custom_layers.py", line 21, in build
self.elmo = hub.Module(url)
File "D:\Apps\Anaconda\envs\tf2.0\lib\site-packages\tensorflow_hub\module.py", line 156, in __init__
abs_state_scope = _try_get_state_scope(name, mark_name_scope_used=False)
File "D:\Apps\Anaconda\envs\tf2.0\lib\site-packages\tensorflow_hub\module.py", line 389, in _try_get_state_scope
"name_scope was already taken." % abs_state_scope)
RuntimeError: variable_scope module/ was unused but the corresponding name_scope was already taken.
It seems to be due to the eager execution behaviour. If I disable eager execution, I have to surround the model.fit call with a TensorFlow session and initialize the variables with sess.run(global_variables_initializer()) to avoid the following error:
Traceback (most recent call last):
File "D:/Google Drive/Licenta/Gemini/Emotion Analysis/nn/trainer/model.py", line 168, in <module>
validation_steps=validation_dataset.size().eval(session=Session()))
File "D:/Google Drive/Licenta/Gemini/Emotion Analysis/nn/trainer/model.py", line 90, in train_gpu
class_weight=weighted)
File "D:\Apps\Anaconda\envs\tf2.0\lib\site-packages\tensorflow\python\keras\engine\training.py", line 643, in fit
use_multiprocessing=use_multiprocessing)
File "D:\Apps\Anaconda\envs\tf2.0\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py", line 664, in fit
steps_name='steps_per_epoch')
File "D:\Apps\Anaconda\envs\tf2.0\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py", line 294, in model_iteration
batch_outs = f(actual_inputs)
File "D:\Apps\Anaconda\envs\tf2.0\lib\site-packages\tensorflow\python\keras\backend.py", line 3353, in __call__
run_metadata=self.run_metadata)
File "D:\Apps\Anaconda\envs\tf2.0\lib\site-packages\tensorflow\python\client\session.py", line 1458, in __call__
run_metadata_ptr)
tensorflow.python.framework.errors_impl.FailedPreconditionError: 2 root error(s) found.
(0) Failed precondition: Error while reading resource variable module/bilm/RNN_0/RNN/MultiRNNCell/Cell1/rnn/lstm_cell/bias from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/module/bilm/RNN_0/RNN/MultiRNNCell/Cell1/rnn/lstm_cell/bias/class tensorflow::Var does not exist.
[[{{node elmo_embedding_layer/module_apply_default/bilm/RNN_0/RNN/MultiRNNCell/Cell1/rnn/lstm_cell/bias/Read/ReadVariableOp}}]]
(1) Failed precondition: Error while reading resource variable module/bilm/RNN_0/RNN/MultiRNNCell/Cell1/rnn/lstm_cell/bias from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/module/bilm/RNN_0/RNN/MultiRNNCell/Cell1/rnn/lstm_cell/bias/class tensorflow::Var does not exist.
[[{{node elmo_embedding_layer/module_apply_default/bilm/RNN_0/RNN/MultiRNNCell/Cell1/rnn/lstm_cell/bias/Read/ReadVariableOp}}]]
[[metrics/f1_micro/Identity/_223]]
0 successful operations.
0 derived errors ignored.
My solution:
with Session() as sess:
    sess.run(global_variables_initializer())
    history = model.fit(self.train_data.repeat(),
                        epochs=self.config['epochs'],
                        validation_data=self.validation_data.repeat(),
                        steps_per_epoch=steps_per_epoch,
                        validation_steps=validation_steps,
                        callbacks=self.__callbacks(monitor_metric),
                        class_weight=weighted)
The main question is whether there is another way to use the ELMo tf-hub module in a custom Keras layer and train my model. Another question is whether my current solution affects training performance or causes the GPU OOM error (I get the OOM error after a few epochs with a larger batch size, which I've found to be related to sessions not being closed, or to memory leaks).
If you wrap your model in a Session() block, you will also have to wrap all other code that uses your model in Session() blocks, which takes a lot of time and effort. I have another way to deal with it:
First, create the ELMo module and register a session with Keras:
elmo_model = hub.Module("https://tfhub.dev/google/elmo/3", trainable=True,
                        name='elmo_module')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(tf.tables_initializer())
K.set_session(sess)
Then, instead of creating the ELMo module directly in your ElmoEmbeddingLayer:
self.elmo = hub.Module(url)
self._trainable_weights += trainable_variables(
    scope="^{}_module/.*".format(self.name))
you can do the following. I think it works normally:
self.elmo = elmo_model
self._trainable_weights += trainable_variables(
    scope="^elmo_module/.*")
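Putting it together, the modified build method would look roughly like this (a sketch; I assume the rest of the layer stays unchanged):
def build(self, input_shape):
    # Reuse the module created once in the main script; the session
    # registered via K.set_session has already initialized its variables.
    self.elmo = elmo_model
    self._trainable_weights += trainable_variables(
        scope="^elmo_module/.*")
    super(ElmoEmbeddingLayer, self).build(input_shape)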
Here is a simple solution that I used in my case:
This happened to me while I was using a separate Python script to create the module.
To solve it, I passed the tf.Session() from the main script to the tf.keras.backend in the other script, by creating an entry point that passes the session in before Layer.__init__ is called.
Example:
Main file:
import tensorflow.compat.v1 as tf
from ModuleFile import ModuleLayer

def __main__():
    init_args = [...]
    input = ...
    sess = tf.keras.backend.get_session()
    ModuleLayer.__init_session__(sess)
    module_layer = ModuleLayer(init_args)(input)
Module file:
import tensorflow.compat.v1 as tf

class ModuleLayer(tf.keras.layers.Layer):
    @staticmethod
    def __init_session__(session):
        tf.keras.backend.set_session(session)

    def __init__(*args):
        ...
Hope that helps :)
I am stuck on a simple-looking problem in TensorFlow.
Traceback (most recent call last):
File op_def_library.py, line 510, in _apply_op_helper
preferred_dtype=default_dtype)
File ops.py, line 1040, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File ops.py, line 883, in _TensorTensorConversionFunction
(dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype int64 for Tensor with dtype float32: 'Tensor("sequence_sparse_softmax_cross_entropy/zeros_like:0", shape=(?, ?, 10004), dtype=float32)'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "red.py", line 281, in <module>
main()
File "red.py", line 99, in main
sequence_length=lengths)
File loss.py, line 225, in sequence_sparse_softmax_cross_entropy
losses = xloss(labels=labels, logits=logits)
File loss.py, line 48, in loss
post = array_ops.where(target_tensor > zeros, target_tensor - sigmoid_p, zeros)
File gen_math_ops.py, line 2924, in greater
"Greater", x=x, y=y, name=name)
File op_def_library.py, line 546, in _apply_op_helper
inferred_from[input_arg.type_attr]))
TypeError: Input 'y' of 'Greater' Op has type float32 that does not match type int64 of argument 'x'
Using astype also does not work.
I defined another loss function and tried to use it. What should I do to make it work? I just want to define a function that takes tensors as input, just like TF's cross-entropy function. Please suggest how to do that.
In particular, how can I resolve the error?
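A hedged note on the error itself: the Greater op requires both operands to share a dtype, and the traceback shows the int64 labels (target_tensor) being compared against float32 zeros. Casting with tf.cast (rather than numpy's astype) before the comparison should resolve it. The names below come from the traceback; the surrounding code is an assumption:
import tensorflow as tf

# Assumption: labels is the int64 label tensor and sigmoid_p is the
# float32 prediction tensor named in the traceback.
target_tensor = tf.cast(labels, tf.float32)  # align dtypes for Greater
zeros = tf.zeros_like(sigmoid_p, dtype=tf.float32)
post = tf.where(target_tensor > zeros, target_tensor - sigmoid_p, zeros)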
I am extremely new to tensorflow and trying to learn how to save and load a previously trained model. I created a simple model using Estimator and trained it.
classifier = tf.estimator.Estimator(model_fn=bag_of_words_model)

# Train
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"words": x_train},  # x_train is a 2D numpy array of shape (26, 5)
    y=y_train,             # y_train is a 1D pandas Series of length 26
    batch_size=1000,
    num_epochs=None,
    shuffle=True)
classifier.train(input_fn=train_input_fn, steps=300)
I then try to save the model:
def serving_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.int64, shape=(None, 5), name='words')
    receiver_tensors = {"predictor_inputs": serialized_tf_example}
    features = {"words": tf.placeholder(tf.int64, shape=(None, 5))}
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

full_model_dir = classifier.export_savedmodel(export_dir_base="E:/models/", serving_input_receiver_fn=serving_input_receiver_fn)
I have actually copied the serving_input_receiver_fn from this similar question. I don't understand exactly what is going on in that function. But this stores my model in E:/models/<some time stamp>.
I now try to load this saved model:
from tensorflow.contrib import predictor
classifier = predictor.from_saved_model("E:\\models\\<some time stamp>")
The model loaded perfectly. However, I am stuck on how to use this classifier object to get predictions on new data. I have followed a guide here to achieve it, but couldn't get it to work :(. Here is what I did:
predictions = classifier({'predictor_inputs': x_test})["output"] # x_test is 2D numpy array same like x_train in the training part
I get error as follows:
2019-01-10 12:43:38.603506: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
INFO:tensorflow:Restoring parameters from E:\models\1547101005\variables\variables
Traceback (most recent call last):
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
return fn(*args)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype int64 and shape [?,5]
[[{{node Placeholder}} = Placeholder[dtype=DT_INT64, shape=[?,5], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:/ml_classif/tensorflow_bow_with_prob/load_model.py", line 85, in <module>
predictions = classifier({'predictor_inputs': x_test})["output"]
File "E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\predictor.py", line 77, in __call__
return self._session.run(fetches=self.fetch_tensors, feed_dict=feed_dict)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
run_metadata_ptr)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
run_metadata)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype int64 and shape [?,5]
[[node Placeholder (defined at E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\saved_model_predictor.py:153) = Placeholder[dtype=DT_INT64, shape=[?,5], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Caused by op 'Placeholder', defined at:
File "E:/ml_classif/tensorflow_bow_with_prob/load_model.py", line 82, in <module>
classifier = predictor.from_saved_model("E:\\models\\1547101005")
File "E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\predictor_factories.py", line 153, in from_saved_model
config=config)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\saved_model_predictor.py", line 153, in __init__
loader.load(self._session, tags.split(','), export_dir)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 197, in load
return loader.load(sess, tags, import_scope, **saver_kwargs)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 350, in load
**saver_kwargs)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 278, in load_graph
meta_graph_def, import_scope=import_scope, **saver_kwargs)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\training\saver.py", line 1696, in _import_meta_graph_with_return_elements
**kwargs))
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\meta_graph.py", line 806, in import_scoped_meta_graph_with_return_elements
return_elements=return_elements)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\importer.py", line 442, in import_graph_def
_ProcessNewOps(graph)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\importer.py", line 234, in _ProcessNewOps
for new_op in graph._add_new_tf_operations(compute_devices=False): # pylint: disable=protected-access
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 3440, in _add_new_tf_operations
for c_op in c_api_util.new_tf_operations(self)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 3440, in <listcomp>
for c_op in c_api_util.new_tf_operations(self)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 3299, in _create_op_from_tf_operation
ret = Operation(c_op, self)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype int64 and shape [?,5]
[[node Placeholder (defined at E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\saved_model_predictor.py:153) = Placeholder[dtype=DT_INT64, shape=[?,5], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
It says that I have to feed a value to the placeholder (I think the one defined in serving_input_receiver_fn). I have no idea how to do that without using a TensorFlow Session object.
Please feel free to ask for more information if required.
After gaining a somewhat vague understanding of serving_input_receiver_fn, I figured out that features must not be a fresh placeholder, since that creates two placeholders (one for serialized_tf_example and the other for features). I modified the function as follows (the change is just to the features variable):
def serving_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.int64, shape=(None, 5), name='words')
    receiver_tensors = {"predictor_inputs": serialized_tf_example}
    features = {"words": tf.tile(serialized_tf_example, multiples=[1, 1])}  # Changed this
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
When I try to predict the output from the loaded model, I get no error now. It works! The only thing is that the output is incorrect (for which I am posting a new question :) ).
I am using an evolutionary algorithm to find satisfactory hyper-parameters for a CNN written in Keras/Theano. The stochastic nature of this approach means that from time to time a pathological configuration will be tried, which will yield an exception. In those scenarios, I'd like to catch the exception so I can assign an appropriate low fitness. Unfortunately, when Theano throws an exception, it appears to be masked before it reaches my try/catch block. That is, at some point the exception is caught and not re-raised, which means it never propagates up the stack to reach my try/catch block.
I've asked on the Keras Slack workspace if there was some configuration I had to tickle in Keras to un-mask these exceptions, but I was told that the problem was not at the Keras level, that it was something with Theano. And, so here I am.
I have the following configuration settings at the top of the corresponding theanorc file that I had hoped would solve the problem:
[config]
on_opt_error = raise
on_shape_error = raise
numpy.seterr_all = raise
compute_test_value = raise
And, these are the exceptions I am seeing:
ERROR (theano.gof.opt): SeqOptimizer apply <theano.tensor.opt.ShapeOptimizer object at 0x2aaae03674a8>
ERROR (theano.gof.opt): Traceback:
ERROR (theano.gof.opt): Traceback (most recent call last):
File "/ccs/proj/geo121/python3.5-packages/dl4sm/theano/gof/opt.py", line 235, in apply
sub_prof = optimizer.optimize(fgraph)
File "/ccs/proj/geo121/python3.5-packages/dl4sm/theano/gof/opt.py", line 83, in optimize
self.add_requirements(fgraph)
File "/ccs/proj/geo121/python3.5-packages/dl4sm/theano/tensor/opt.py", line 1482, in add_requirements
fgraph.attach_feature(ShapeFeature())
File "/ccs/proj/geo121/python3.5-packages/dl4sm/theano/gof/fg.py", line 541, in attach_feature
attach(self)
File "/ccs/proj/geo121/python3.5-packages/dl4sm/theano/tensor/opt.py", line 1299, in on_attach
self.on_import(fgraph, node, reason='on_attach')
File "/ccs/proj/geo121/python3.5-packages/dl4sm/theano/tensor/opt.py", line 1362, in on_import
self.set_shape(r, s)
File "/ccs/proj/geo121/python3.5-packages/dl4sm/theano/tensor/opt.py", line 1151, in set_shape
shape_vars.append(self.unpack(s[i], r))
File "/ccs/proj/geo121/python3.5-packages/dl4sm/theano/tensor/opt.py", line 1073, in unpack
raise ValueError(msg)
ValueError: There is a negative shape in the graph!
Backtrace when that variable is created:
File "/ccs/proj/geo121/mcoletti/dl-4-settlement-mapping/eadl/train_cnn.py", line 218, in <module>
validation_accuracy = train_cnn(data_dir=args.data_dir, kernel_sizes=args.kernel_sizes, max_epoch=args.epoch, batch_sizes=args.batch_size)
File "/ccs/proj/geo121/mcoletti/dl-4-settlement-mapping/eadl/train_cnn.py", line 193, in train_cnn
model = create_cnn(kernel_sizes=kernel_sizes)
File "/ccs/proj/geo121/mcoletti/dl-4-settlement-mapping/eadl/train_cnn.py", line 52, in create_cnn
model.add(Conv2D(256, kernel_size=kernel_sizes[3], activation="relu", kernel_initializer="normal"))
File "/ccs/proj/geo121/python3.5-packages/dl4sm/keras/models.py", line 475, in add
output_tensor = layer(self.outputs[0])
File "/ccs/proj/geo121/python3.5-packages/dl4sm/keras/engine/topology.py", line 602, in __call__
output = self.call(inputs, **kwargs)
File "/ccs/proj/geo121/python3.5-packages/dl4sm/keras/layers/convolutional.py", line 164, in call
dilation_rate=self.dilation_rate)
File "/ccs/proj/geo121/python3.5-packages/dl4sm/keras/backend/theano_backend.py", line 1890, in conv2d
filter_dilation=dilation_rate)
And, if you're curious to see the try/catch block, it's just this:
try:
    validation_accuracy = train_cnn(data_dir=args.data_dir, kernel_sizes=args.kernel_sizes, max_epoch=args.epoch, batch_sizes=args.batch_size)
except Exception as e:
    print(socket.gethostname(), ', Caught exception while training:', str(e))
My intuition is that this is probably something very, very simple. Maybe I need to add more options to the THEANORC file?
Setting theano.config.compute_test_value = 'raise' programmatically appears to work.
Curiously, compute_test_value should already have been set from the Theano configuration file, which suggests that the file is not being properly read and parsed. I should not have to set this value programmatically when I explicitly set it in the configuration file.
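For reference, the programmatic workaround is just this minimal snippet:
import theano

# Set the flag in code before building the model, since the value in
# the .theanorc configuration file did not appear to take effect.
theano.config.compute_test_value = 'raise'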