Strange sample_weight error when evaluating a TensorFlow Keras model - python-3.x

I am getting the following error when evaluating a model that I trained in TensorFlow:
Traceback (most recent call last):
File "./mesh.py", line 89, in <module>
testing.test(config, network_path)
File "/workspace/training/testing.py", line 54, in test
model.evaluate(testing_dataset)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
return method(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 1379, in evaluate
tmp_logs = test_function(iterator)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
result = self._call(*args, **kwds)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 823, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 697, in _initialize
*args, **kwds))
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 2855, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 3213, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 3075, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py", line 986, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 600, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py", line 973, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1224 test_function *
return step_function(self, iterator)
TypeError: tf__update_state() got an unexpected keyword argument 'sample_weight'
I don't set or use sample_weight anywhere in my code, so I have no idea where this error could be coming from. Has anyone come across an error like this, or does anyone have pointers on how I can debug it?
The model is a Keras model whose weights are loaded from a checkpoint file saved by the ModelCheckpoint callback during training (with the fit function), and I am evaluating with the evaluate function.
I am using the tensorflow-gpu==2.3.0 pip package.
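The question doesn't show the compiled metrics, but one common cause of this exact message (an assumption here, since the metric code isn't shown) is a custom tf.keras.metrics.Metric whose update_state signature omits sample_weight: Keras passes that keyword internally even when you never set it yourself. A minimal sketch of a metric with the expected signature:

import tensorflow as tf

class MeanAbsError(tf.keras.metrics.Metric):
    # Hypothetical custom metric; the point is the update_state signature.
    def __init__(self, name="mean_abs_error", **kwargs):
        super().__init__(name=name, **kwargs)
        self.total = self.add_weight(name="total", initializer="zeros")
        self.count = self.add_weight(name="count", initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):  # must accept sample_weight
        err = tf.reduce_mean(tf.abs(tf.cast(y_true, y_pred.dtype) - y_pred))
        self.total.assign_add(err)
        self.count.assign_add(1.0)

    def result(self):
        return self.total / self.count

If a metric like this is attached at compile time, checking its update_state signature (or temporarily recompiling the restored model with only built-in metrics before calling evaluate) is a quick way to confirm whether a metric is the culprit.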

Related

Error using RandomizedSearchCV on a LSTM model

I want to find the optimal number of neurons and layers for an LSTM model, and I have created the following:
def build_model(n_hidden=1, n_neurons=30, learning_rate=3e-3, input_shape=[20, 11]):
    model = keras.models.Sequential()
    options = {"input_shape": input_shape}
    for layer in range(n_hidden):
        model.add(keras.layers.Dense(n_neurons, activation="swish", **options))
        options = {}
    model.add(keras.layers.Dense(10, **options))
    # optimizer = keras.optimizers.SGD(learning_rate)
    model.compile(loss="mse", optimizer='adam')
    return model

keras_reg = keras.wrappers.scikit_learn.KerasRegressor(build_model)

from scipy.stats import reciprocal
from sklearn.model_selection import RandomizedSearchCV

param_distribs = {
    "n_hidden": [0, 1, 2, 3],
    "n_neurons": np.arange(1, 100),
    # "learning_rate": reciprocal(3e-4, 3e-2),
}
rnd_search_cv = RandomizedSearchCV(keras_reg, param_distribs, n_iter=10)
rnd_search_cv.fit(x_tr, y_tr, epochs=100,
                  callbacks=[keras.callbacks.EarlyStopping(patience=10)])
The dataset is composed of 11 features; the model looks at the previous 20 time steps and aims to predict the following 10 time steps. When I run this code I get the following error:
/home/use/anaconda3/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:372: FitFailedWarning:
50 fits failed out of a total of 50.
The score on these train-test partitions for these parameters will be set to nan.
If these failures are not expected, you can try to debug them by setting error_score='raise'.
Below are more details about the failures:
--------------------------------------------------------------------------------
1 fits failed with the following error:
Traceback (most recent call last):
File "/home/use/anaconda3/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 680, in _fit_and_score
estimator.fit(X_train, y_train, **fit_params)
File "/home/use/anaconda3/lib/python3.9/site-packages/keras/wrappers/scikit_learn.py", line 175, in fit
history = self.model.fit(x, y, **fit_args)
File "/home/use/anaconda3/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/use/anaconda3/lib/python3.9/site-packages/tensorflow/python/eager/execute.py", line 52, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:
Detected at node 'gradient_tape/mean_squared_error/BroadcastGradientArgs' defined at (most recent call last):
File "/home/use/anaconda3/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/use/anaconda3/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/use/anaconda3/lib/python3.9/site-packages/ipykernel_launcher.py", line 17, in <module>
app.launch_new_instance()
File "/home/use/anaconda3/lib/python3.9/site-packages/traitlets/config/application.py", line 846, in launch_instance
app.start()
File "/home/use/anaconda3/lib/python3.9/site-packages/ipykernel/kernelapp.py", line 712, in start
self.io_loop.start()
File "/home/use/anaconda3/lib/python3.9/site-packages/tornado/platform/asyncio.py", line 199, in start
self.asyncio_loop.run_forever()
File "/home/use/anaconda3/lib/python3.9/asyncio/base_events.py", line 601, in run_forever
self._run_once()
File "/home/use/anaconda3/lib/python3.9/asyncio/base_events.py", line 1905, in _run_once
handle._run()
File "/home/use/anaconda3/lib/python3.9/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/home/use/anaconda3/lib/python3.9/site-packages/ipykernel/kernelbase.py", line 510, in dispatch_queue
await self.process_one()
File "/home/use/anaconda3/lib/python3.9/site-packages/ipykernel/kernelbase.py", line 499, in process_one
await dispatch(*args)
File "/home/use/anaconda3/lib/python3.9/site-packages/ipykernel/kernelbase.py", line 406, in dispatch_shell
await result
File "/home/use/anaconda3/lib/python3.9/site-packages/ipykernel/kernelbase.py", line 730, in execute_request
reply_content = await reply_content
File "/home/use/anaconda3/lib/python3.9/site-packages/ipykernel/ipkernel.py", line 390, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/home/use/anaconda3/lib/python3.9/site-packages/ipykernel/zmqshell.py", line 528, in run_cell
return super().run_cell(*args, **kwargs)
File "/home/use/anaconda3/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 2914, in run_cell
result = self._run_cell(
File "/home/use/anaconda3/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 2960, in _run_cell
return runner(coro)
File "/home/use/anaconda3/lib/python3.9/site-packages/IPython/core/async_helpers.py", line 78, in _pseudo_sync_runner
coro.send(None)
File "/home/use/anaconda3/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3185, in run_cell_async
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
File "/home/use/anaconda3/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3377, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "/home/use/anaconda3/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3457, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "/tmp/ipykernel_10055/1251208469.py", line 9, in <module>
rnd_search_cv.fit(x_tr, y_tr , epochs=100,
File "/home/use/anaconda3/lib/python3.9/site-packages/sklearn/model_selection/_search.py", line 891, in fit
self._run_search(evaluate_candidates)
File "/home/use/anaconda3/lib/python3.9/site-packages/sklearn/model_selection/_search.py", line 1766, in _run_search
evaluate_candidates(
File "/home/use/anaconda3/lib/python3.9/site-packages/sklearn/model_selection/_search.py", line 838, in evaluate_candidates
out = parallel(
File "/home/use/anaconda3/lib/python3.9/site-packages/joblib/parallel.py", line 1043, in __call__
if self.dispatch_one_batch(iterator):
File "/home/use/anaconda3/lib/python3.9/site-packages/joblib/parallel.py", line 861, in dispatch_one_batch
self._dispatch(tasks)
File "/home/use/anaconda3/lib/python3.9/site-packages/joblib/parallel.py", line 779, in _dispatch
job = self._backend.apply_async(batch, callback=cb)
File "/home/use/anaconda3/lib/python3.9/site-packages/joblib/_parallel_backends.py", line 208, in apply_async
result = ImmediateResult(func)
File "/home/use/anaconda3/lib/python3.9/site-packages/joblib/_parallel_backends.py", line 572, in __init__
self.results = batch()
File "/home/use/anaconda3/lib/python3.9/site-packages/joblib/parallel.py", line 262, in __call__
return [func(*args, **kwargs)
File "/home/use/anaconda3/lib/python3.9/site-packages/joblib/parallel.py", line 262, in <listcomp>
return [func(*args, **kwargs)
File "/home/use/anaconda3/lib/python3.9/site-packages/sklearn/utils/fixes.py", line 216, in __call__
return self.function(*args, **kwargs)
File "/home/use/anaconda3/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 680, in _fit_and_score
estimator.fit(X_train, y_train, **fit_params)
File "/home/use/anaconda3/lib/python3.9/site-packages/keras/wrappers/scikit_learn.py", line 175, in fit
history = self.model.fit(x, y, **fit_args)
File "/home/use/anaconda3/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler
return fn(*args, **kwargs)
File "/home/use/anaconda3/lib/python3.9/site-packages/keras/engine/training.py", line 1650, in fit
tmp_logs = self.train_function(iterator)
File "/home/use/anaconda3/lib/python3.9/site-packages/keras/engine/training.py", line 1249, in train_function
return step_function(self, iterator)
File "/home/use/anaconda3/lib/python3.9/site-packages/keras/engine/training.py", line 1233, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/home/use/anaconda3/lib/python3.9/site-packages/keras/engine/training.py", line 1222, in run_step
outputs = model.train_step(data)
File "/home/use/anaconda3/lib/python3.9/site-packages/keras/engine/training.py", line 1027, in train_step
self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
File "/home/use/anaconda3/lib/python3.9/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 526, in minimize
grads_and_vars = self.compute_gradients(loss, var_list, tape)
File "/home/use/anaconda3/lib/python3.9/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 259, in compute_gradients
grads = tape.gradient(loss, var_list)
Node: 'gradient_tape/mean_squared_error/BroadcastGradientArgs'
Incompatible shapes: [32,20,10] vs. [32,10]
[[{{node gradient_tape/mean_squared_error/BroadcastGradientArgs}}]] [Op:__inference_train_function_1033752]
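There is no answer recorded here, but the incompatible shapes [32,20,10] vs [32,10] already describe the problem: a Dense layer applied to a [batch, 20, 11] input acts on the last axis only and returns [batch, 20, 10], while y_tr has shape [batch, 10]. A minimal sketch of one way to make the shapes agree (an assumption about the intended target shape, and a Dense-based stand-in rather than the LSTM the question ultimately asks for):

from tensorflow import keras

def build_model(n_hidden=1, n_neurons=30, input_shape=[20, 11]):
    model = keras.models.Sequential()
    model.add(keras.layers.InputLayer(input_shape=input_shape))
    for _ in range(n_hidden):
        model.add(keras.layers.Dense(n_neurons, activation="swish"))
    # Collapse the time axis so the output is (batch, 10), matching y_tr.
    model.add(keras.layers.Flatten())
    model.add(keras.layers.Dense(10))
    model.compile(loss="mse", optimizer="adam")
    return model

Replacing the Dense stack with keras.layers.LSTM(n_neurons), which returns only the last time step unless return_sequences=True is set, would collapse the time axis in the same way.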

ValueError: Found two metrics with the same name: recall

I'm training a detection model where the train and test data are 3D NumPy arrays. When I start training this model I get the following error. The code links are given below:
Training_model.py
detection.py
perform_learning.py
model.fit_generator(generator=training_generator,
                    validation_data=validation_generator,
                    use_multiprocessing=True,
                    workers=6,
                    epochs=epochs,
                    callbacks=[checkpoint, tensorboard])
Traceback (most recent call last):
File "/content/SpineFinder-master/train_detection_model.py", line 25, in
shuffle=True)
File "/content/SpineFinder-master/learning_functions/perform_learning.py", line 57, in perform_learning
callbacks=[checkpoint, tensorboard])
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 1479, in fit_generator
initial_epoch=initial_epoch)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 66, in _method_wrapper
return method(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 848, in fit
tmp_logs = train_function(iterator)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 580, in call
result = self._call(*args, **kwds)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 627, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 506, in _initialize
*args, **kwds))
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 2446, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 2777, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 2667, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py", line 981, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 441, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py", line 968, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:571 train_function *
outputs = self.distribute_strategy.run(
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run **
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:543 train_step **
self.compiled_metrics.update_state(y, y_pred, sample_weight)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:391 update_state
self._build(y_pred, y_true)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:333 _build
self._set_metric_names()
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:353 _set_metric_names
m._name))
ValueError: Found two metrics with the same name: recall
The error stems from the following in detection.py:
recall_background = km.binary_recall(label=0)
recall_vertebrae = km.binary_recall(label=1)
According to [1] and [2], km.binary_recall() instantiates the keras_metrics recall class. However, without the name kwarg, both lines use the same default name, recall. Therefore, to avoid this, it's my understanding that you'd have to specify the name kwarg like so:
recall_background = km.binary_recall(name="recall_background", label=0)
recall_vertebrae = km.binary_recall(name="recall_vertebrae", label=1)
[1] - https://github.com/netrack/keras-metrics/blob/master/keras_metrics/__init__.py#L34
[2] - https://github.com/netrack/keras-metrics/blob/master/keras_metrics/metrics.py#L150
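For reference, the same naming requirement applies to the built-in tf.keras metrics: the error fires whenever two metric objects end up with the identical name string, and giving each instance an explicit, distinct name avoids it. A small sketch (using tf.keras.metrics.Recall as a stand-in for the keras-metrics classes):

import tensorflow as tf

recall_a = tf.keras.metrics.Recall(name="recall_background")
recall_b = tf.keras.metrics.Recall(name="recall_vertebrae")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(4,)),
])
# Both metrics can be compiled together because their names differ.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[recall_a, recall_b])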

output and feed_dict inside session: FailedPreconditionError (see above for traceback): Attempting to use uninitialized value

I am converting the MTCNN TensorFlow model to TensorFlow-TensorRT.
When I run camera_test.py, I get this error:
FailedPreconditionError: Attempting to use uninitialized value in TensorFlow
Traceback (most recent call last):
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value conv4_2/biases
[[{{node conv4_2/biases/read}}]]
[[{{node Squeeze_1}}]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "camera_test_trrt.py", line 48, in <module>
boxes_c,landmarks = mtcnn_detector.detect(image)
File "../Detection/MtcnnDetector.py", line 371, in detect
boxes, boxes_c, _ = self.detect_pnet(img)
File "../Detection/MtcnnDetector.py", line 221, in detect_pnet
cls_cls_map, reg = self.pnet_detector.predict(im_resized)
File "../Detection/fcn_detector_trrt.py", line 56, in predict
self.height_op: height})
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value conv4_2/biases
[[node conv4_2/biases/read (defined at ../train_models/mtcnn_model.py:208) ]]
[[node Squeeze_1 (defined at ../train_models/mtcnn_model.py:245) ]]
Caused by op 'conv4_2/biases/read', defined at:
File "camera_test_trrt.py", line 23, in <module>
PNet = FcnDetector(P_Net, '/home/jetsonnano/Downloads/MTCNN-Tensorflow-master/test/p_output_graph_FP16.pb')
File "../Detection/fcn_detector_trrt.py", line 23, in __init__
self.cls_prob, self.bbox_pred, _ = net_factory(image_reshape, training=False)
File "../train_models/mtcnn_model.py", line 208, in P_Net
bbox_pred = slim.conv2d(net,num_outputs=4,kernel_size=[1,1],stride=1,scope='conv4_2',activation_fn=None)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
return func(*args, **current_args)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 1158, in convolution2d
conv_dims=2)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
return func(*args, **current_args)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 1061, in convolution
outputs = layer.apply(inputs)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1227, in apply
return self.__call__(inputs, *args, **kwargs)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 530, in __call__
outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 538, in __call__
self._maybe_build(inputs)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1603, in _maybe_build
self.build(input_shapes)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/keras/layers/convolutional.py", line 174, in build
dtype=self.dtype)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 435, in add_weight
getter=vs.get_variable)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 349, in add_weight
aggregation=aggregation)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/training/checkpointable/base.py", line 607, in _add_variable_with_custom_getter
**kwargs_for_getter)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 1479, in get_variable
aggregation=aggregation)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 1220, in get_variable
aggregation=aggregation)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 530, in get_variable
return custom_getter(**custom_getter_kwargs)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 1753, in layer_variable_getter
return _model_variable_getter(getter, *args, **kwargs)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 1744, in _model_variable_getter
aggregation=aggregation)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
return func(*args, **current_args)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/variables.py", line 350, in model_variable
aggregation=aggregation)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
return func(*args, **current_args)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/variables.py", line 277, in variable
aggregation=aggregation)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 499, in _true_getter
aggregation=aggregation)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 911, in _get_single_variable
aggregation=aggregation)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 213, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 176, in _variable_v1_call
aggregation=aggregation)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 155, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 2495, in default_variable_creator
expected_shape=expected_shape, import_scope=import_scope)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 217, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 1395, in __init__
constraint=constraint)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 1557, in _init_from_args
self._snapshot = array_ops.identity(self._variable, name="read")
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 81, in identity
ret = gen_array_ops.identity(input, name=name)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 3890, in identity
"Identity", input=input, name=name)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
op_def=op_def)
File "/home/jetsonnano/.virtualenvs/jetsonnanotest/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1801, in __init__
self._traceback = tf_stack.extract_stack()
FailedPreconditionError (see above for traceback): Attempting to use uninitialized value conv4_2/biases
[[node conv4_2/biases/read (defined at ../train_models/mtcnn_model.py:208) ]]
[[node Squeeze_1 (defined at ../train_models/mtcnn_model.py:245) ]]
How do I run tf.global_variables_initializer with sess.run? Something like:
init_op = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init_op)
The calls where I have output tensors and a feed_dict in sess.run are:
cls_prob, bbox_pred,landmark_pred = self.sess.run([self.cls_prob, self.bbox_pred,self.landmark_pred], feed_dict={self.image_op: data})
in detector.py
and
cls_prob, bbox_pred = self.sess.run([self.cls_prob, self.bbox_pred],feed_dict={self.image_op: databatch, self.width_op: width,self.height_op: height})
in fcn_detector.py
Can anyone help out here?
Just after the following line:
self.sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True, gpu_options=tf.GPUOptions(allow_growth=True)))
declare:
init_op = tf.global_variables_initializer()
and then do:
self.sess.run(init_op)
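Putting those pieces together, the intended order of operations looks roughly like this (a sketch only; in the question's code the session lives on the detector object as self.sess, and tf.initialize_all_variables() is just the deprecated spelling of the same initializer):

import tensorflow as tf

sess = tf.Session(config=tf.ConfigProto(
    allow_soft_placement=True,
    gpu_options=tf.GPUOptions(allow_growth=True)))

# Run the initializer once, after the graph is built and before the first
# sess.run(..., feed_dict=...); otherwise ops such as conv4_2/biases/read
# see uninitialized variables.
init_op = tf.global_variables_initializer()
sess.run(init_op)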

TensorFlow @tf.function: AttributeError: in converted code

I created a class and defined the train_step function inside it, following the TF tutorial: NMT_attention.
Not using @tf.function significantly increases the training time, but when I add the decorator I get a conversion error for the private variables declared inside the class.
@tf.function
def train_step(self, input, target, encoderHidden):
    loss = 0
    with tf.GradientTape() as tape:
        encoderOutput, encoderHidden = self.__encoder(input, encoderHidden)  # throws error
Below is the traceback:
Using TensorFlow backend.
2019-09-20 12:54:32.676302: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Traceback (most recent call last):
File "/Users/Users/Library/Python/lib/python/site-packages/proj/Models/attention_model.py", line 499, in <module>
model.fit(path, epochs)
File "/Users/Users/Library/Python/lib/python/site-packages/proj/Models/attention_model.py", line 383, in fit
loss = self.train_step(input, target, encoderHidden)
File "/Users/Users/work_env/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 416, in __call__
self._initialize(args, kwds, add_initializers_to=initializer_map)
File "/Users/Users/work_env/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 359, in _initialize
*args, **kwds))
File "/Users/Users/work_env/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1360, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/Users/Users/work_env/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1648, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/Users/Users/work_env/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1541, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/Users/Users/work_env/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 716, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/Users/Users/work_env/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 309, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/Users/Users/work_env/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2155, in bound_method_wrapper
return wrapped_fn(*args, **kwargs)
File "/Users/Users/work_env/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 706, in wrapper
raise e.ag_error_metadata.to_exception(type(e))
AttributeError: in converted code:
relative to /Users/Users:
Library/Python/lib/python/site-packages/proj/Models/attention_model.py:262 train_step *
encoderOutput, encoderHidden = self.__encoder(input, encoderHidden)
work_env/lib/python3.7/site-packages/tensorflow/python/autograph/impl/api.py:329 converted_call
f = getattr(owner, f)
AttributeError: 'Model' object has no attribute '__encoder'
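The traceback shows AutoGraph resolving the method's attributes with getattr(owner, f), and '__encoder' is exactly the kind of name that fails there: Python mangles double-underscore attributes to _ClassName__encoder, so getattr(owner, '__encoder') finds nothing. A sketch of the usual workaround, renaming the attribute to a single leading underscore (class and variable names here are illustrative, not the tutorial's):

import tensorflow as tf

class Trainer:
    def __init__(self, encoder):
        self._encoder = encoder  # was self.__encoder, which gets name-mangled

    @tf.function
    def train_step(self, inputs, target, encoder_hidden):
        with tf.GradientTape() as tape:
            encoder_output, encoder_hidden = self._encoder(inputs, encoder_hidden)
            loss = 0  # loss computation omitted in this sketch
        return loss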

Do I have to restart the Colaboratory runtime every time?

I cannot run my code in Google Colaboratory twice without restarting the runtime. Is there a way to run it without restarting the runtime?
My code uses the TensorD library to compute an approximation of a random 3x4x4 tensor using the CP-ALS algorithm (this is an example taken from https://github.com/Large-Scale-Tensor-Decomposition/tensorD):
!git clone https://github.com/Large-Scale-Tensor-Decomposition/tensorD.git

import sys
import time
sys.path.append("/content/tensorD")
from tensorD.factorization.env import Environment
from tensorD.dataproc.provider import Provider
from tensorD.demo.DataGenerator import *
from tensorD.factorization.cp import CP_ALS

# generate a random tensor with shape 3x4x4
t = time.time()
X = synthetic_data_cp([3, 4, 4], 7)
data_provider = Provider()
data_provider.full_tensor = lambda: X
env = Environment(data_provider, summary_path='/tmp/cp_' + '7')
cp = CP_ALS(env)
args = CP_ALS.CP_Args(rank=7, validation_internal=1)
# build CP model with arguments
cp.build_model(args)
# train CP model with a maximum of 50 iterations
cp.train(50)
# obtain factor matrices from trained model
factor_matrices = cp.factors
# obtain scaling vector from trained model
lambdas = cp.lambdas
for matrix in factor_matrices:
    print(matrix)
elapsed = time.time() - t
print(elapsed)
When I run it the first time I have no problem. When I run it again (without restarting the runtime) I obtain:
CP model initial finish
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1333 try:
-> 1334 return fn(*args)
1335 except errors.OpError as e:
7 frames
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [3,4,4]
[[{{node Placeholder}}]]
During handling of the above exception, another exception occurred:
InvalidArgumentError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1346 pass
1347 message = error_interpolation.interpolate(message, self._graph)
-> 1348 raise type(e)(node_def, op, message)
1349
1350 def _extend_graph(self):
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [3,4,4]
[[node Placeholder (defined at /content/tensorD/tensorD/factorization/cp.py:69) ]]
Caused by op 'Placeholder', defined at:
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/usr/local/lib/python3.6/dist-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelapp.py", line 477, in start
ioloop.IOLoop.instance().start()
File "/usr/local/lib/python3.6/dist-packages/tornado/ioloop.py", line 888, in start
handler_func(fd_obj, events)
File "/usr/local/lib/python3.6/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/zmq/eventloop/zmqstream.py", line 450, in _handle_events
self._handle_recv()
File "/usr/local/lib/python3.6/dist-packages/zmq/eventloop/zmqstream.py", line 480, in _handle_recv
self._run_callback(callback, msg)
File "/usr/local/lib/python3.6/dist-packages/zmq/eventloop/zmqstream.py", line 432, in _run_callback
callback(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 235, in dispatch_shell
handler(stream, idents, msg)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/ipkernel.py", line 196, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/zmqshell.py", line 533, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2822, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-4-4d2250ec007b>", line 21, in <module>
cp.build_model(args)
File "/content/tensorD/tensorD/factorization/cp.py", line 69, in build_model
input_data = tf.placeholder(tf.float32, shape=self._env.full_shape())
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py", line 2077, in placeholder
return gen_array_ops.placeholder(dtype=dtype, shape=shape, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 5791, in placeholder
"Placeholder", dtype=dtype, shape=shape, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
op_def=op_def)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 1801, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [3,4,4]
[[node Placeholder (defined at /content/tensorD/tensorD/factorization/cp.py:69) ]]
Any help will be appreciated!
This seems to mostly be a question about TensorFlow. Do you get what you want outside of Colab? I'm not totally clear about what you are expecting, but import tensorflow as tf; tf.reset_default_graph() at the top of your snippet seems sensible and squelches the error.
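Concretely, that suggestion amounts to adding one line at the top of the snippet, before cp.build_model(args) runs again (a sketch of the commenter's suggestion, not part of the original code):

import tensorflow as tf

# Clear the graph built by the previous run of this cell; otherwise the old
# Placeholder from the first run lingers in the default graph and the next
# session run complains that it was never fed.
tf.reset_default_graph()

With the graph reset, re-running the cell rebuilds the CP model from scratch instead of mixing it with the ops left over from the previous execution.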

Resources