I am referring to this link: Keras custom loss function: Accessing current input pattern.
But I get this error: "TypeError: Cannot convert a symbolic Keras input/output to a numpy array. This error may indicate that you're trying to pass a symbolic value to a NumPy call, which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model."
This is the source code. What happened?
import numpy as np
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

def custom_loss_wrapper(input_tensor):
    def custom_loss(y_true, y_pred):
        return K.binary_crossentropy(y_true, y_pred) + K.mean(input_tensor)
    return custom_loss

input_tensor = Input(shape=(10,))
hidden = Dense(100, activation='relu')(input_tensor)
out = Dense(1, activation='sigmoid')(hidden)
model = Model(input_tensor, out)
model.compile(loss=custom_loss_wrapper(input_tensor), optimizer='adam')

X = np.random.rand(1000, 10)
y = np.random.rand(1000, 1)
model.train_on_batch(X, y)
In TF 2.0, eager mode is on by default, and it's not possible to get this functionality in eager mode as the example above is currently written. I think there are ways to do it in eager mode with some more advanced programming, but otherwise it's a simple matter to turn eager mode off and run in graph mode with:
from tensorflow.python.framework.ops import disable_eager_execution
disable_eager_execution()
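If you'd rather keep eager execution enabled, a commonly suggested alternative (a minimal sketch, assuming TF 2.x; this is not what the original example does) is to feed the targets in as a second model input and attach the input-dependent loss with model.add_loss, so no symbolic tensor is captured inside a custom loss closure:

import numpy as np
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

input_tensor = Input(shape=(10,))
hidden = Dense(100, activation='relu')(input_tensor)
out = Dense(1, activation='sigmoid')(hidden)

# Feed the labels as a second input so the loss can be built symbolically.
y_true = Input(shape=(1,))
model = Model([input_tensor, y_true], out)

# Same loss as above, attached via add_loss instead of a closure over the input.
model.add_loss(K.mean(K.binary_crossentropy(y_true, out)) + K.mean(input_tensor))
model.compile(optimizer='adam')  # no loss argument needed; add_loss supplies it

X = np.random.rand(1000, 10)
y = np.random.rand(1000, 1)
model.fit([X, y], epochs=1)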
When I received a similar error, I performed the following:
del model
Before:
model = Model(input_tensor, out)
It resolved my issue; you can give it a shot.
I am eager to know if it solves your problem :).
Related
I am creating a custom tf.keras model that uses a pretrained MobileNet layer. Model training runs fine, but saving the best model raises an error. Below is a snippet of the code I used:
pretrained_model = tf.keras.applications.MobileNetV2(
    weights='imagenet',
    include_top=False,
    input_shape=[*IMAGE_SIZE, IMG_CHANNELS])
pretrained_model.trainable = True  # fine-tuning

model = tf.keras.Sequential([
    # Convert image from int [0, 255] to the format expected by this model
    tf.keras.layers.Lambda(
        lambda data: tf.keras.applications.mobilenet.preprocess_input(
            tf.cast(data, tf.float32)),
        input_shape=[*IMAGE_SIZE, 3]),
    pretrained_model,
    tf.keras.layers.GlobalAveragePooling2D()])

model.add(tf.keras.layers.Dense(64, name='object_dense',
                                kernel_regularizer=tf.keras.regularizers.l2(l2=0.001)))
model.add(tf.keras.layers.BatchNormalization(scale=False, center=False))
model.add(tf.keras.layers.Activation('relu', name='relu_dense_64'))
model.add(tf.keras.layers.Dropout(rate=0.2, name='dropout_dense_64'))
model.add(tf.keras.layers.Dense(32, name='object_dense_2',
                                kernel_regularizer=tf.keras.regularizers.l2(l2=0.01)))
model.add(tf.keras.layers.BatchNormalization(scale=False, center=False))
model.add(tf.keras.layers.Activation('relu', name='relu_dense_32'))
model.add(tf.keras.layers.Dropout(rate=0.2, name='dropout_dense_32'))
model.add(tf.keras.layers.Dense(16, name='object_dense_16',
                                kernel_regularizer=tf.keras.regularizers.l2(l2=0.01)))
model.add(tf.keras.layers.Dense(len(CLASS_NAMES), activation='softmax', name='object_prob'))

m1 = tf.keras.metrics.CategoricalAccuracy()
m2 = tf.keras.metrics.Recall()
m3 = tf.keras.metrics.Precision()

optimizers = [
    tfa.optimizers.AdamW(learning_rate=lr * .001, weight_decay=wd),
    tfa.optimizers.AdamW(learning_rate=lr, weight_decay=wd)
]
optimizers_and_layers = [(optimizers[0], model.layers[0]),
                         (optimizers[1], model.layers[1:])]
optimizer = tfa.optimizers.MultiOptimizer(optimizers_and_layers)

model.compile(
    optimizer=optimizer,
    loss='categorical_crossentropy',
    metrics=[m1, m2, m3],
)

checkpoint_path = os.getcwd() + os.sep + 'keras_model'
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath=os.path.join(checkpoint_path),
    monitor='categorical_accuracy',
    save_best_only=True,
    save_weights_only=True)

history = model.fit(train_data, validation_data=test_data,
                    epochs=N_EPOCHS, callbacks=[checkpoint_cb])
tf.keras.callbacks.ModelCheckpoint gives me this error:
TypeError: Unable to serialize 1.0000000656873453e-05 to JSON. Unrecognized type <class 'tensorflow.python.framework.ops.EagerTensor'>.
Below is the link to the Google Colab notebook in case you want to replicate the issue
https://colab.research.google.com/drive/1wQbUFfhtDaB5Xta574UkAXJtthui7Bt9?usp=sharing
This seems to be a bug in Tensorflow or Keras. The tensor that's being serialized to JSON is from your optimizer definition.
model.optimizer.optimizer_specs[0]["optimizer"].get_config()["weight_decay"]
<tf.Tensor: shape=(), dtype=float32, numpy=1.0000001e-05>
From the implementation of tfa.optimizers.AdamW, the weight_decay is serialized using tf.keras.optimizers.Adam._serialize_hyperparameter. This function assumes that if you pass in a callable for the hyperparameter, it returns a non-tensor value when called, but in your notebook, it was implemented as
wd = lambda: 1e-02 * schedule(step)
where schedule() returns a Tensor. I tried various ways to convert the tensor to a scalar value, but I couldn't get any of them to work. As a workaround, I implemented wd as a LearningRateSchedule so it serializes properly, though the code is clunkier. Replacing the definitions of wd and lr with this code allowed model training to complete for me without any issues.
class MyExponentialDecay(tf.keras.optimizers.schedules.ExponentialDecay):
    def __call__(self, step):
        return 1e-2 * super().__call__(step)

wd = MyExponentialDecay(
    initial_learning_rate,
    decay_steps=14,
    decay_rate=0.8,
    staircase=True)
lr = 1e2 * schedule(step)
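For reference, here is a minimal sketch of the failure mode (with made-up schedule values, not the notebook's): calling the lambda yields an EagerTensor, which no JSON encoder will accept:

import json
import tensorflow as tf

schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=14, decay_rate=0.8)
wd = lambda: 1e-2 * schedule(0)  # calling it returns an EagerTensor, not a float

json.dumps({"weight_decay": wd()})  # raises TypeError: not JSON serializable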
After training completes, the model.save() call will fail. I believe this is the same issue that was reported here in the TensorFlow Addons GitHub. The summary of that issue is that the optimizers' get_config() includes a "gv" key in the config which stores Tensor objects, which aren't JSON serializable.
At the time of writing, this issue has not been resolved. If you don't need the optimizer state in the final saved model, you can pass the include_optimizer=False argument to model.save(), which worked for me. Otherwise, you may need to patch the library or the specific optimizer class implementation to get rid of the "gv" key in the config, as the OP did in that thread.
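The include_optimizer=False route looks like this (the file name is just a placeholder):

# Skip serializing the optimizer state, which holds the non-JSON-serializable tensors.
model.save('keras_model.h5', include_optimizer=False)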
I recently started learning and using automatic differentiation to determine the gradients and the Jacobian matrix of a neural network with respect to a given input. The methods suggested by TensorFlow are tape.gradient and tape.jacobian. However, I am not able to obtain the Jacobian matrix using this method, due to what appears to be a bug in TensorFlow. It works when I calculate tape.gradient(y_pred, x), but not for the Jacobian matrix, which should have a shape of (200,3). I am open to other ways to calculate the Jacobian matrix, but I am more inclined to use automatic differentiation methods within TensorFlow. The version I am using is TensorFlow 2.1.0. I greatly appreciate any advice!
import tensorflow as tf
import numpy as np

# The neural network accepts 3 inputs and produces 200 outputs. The actual values
# of the inputs and outputs are not written in the code as it is too involved.
num_inputs = 3
num_outputs = 200
num_hidden_layers = 5
num_neurons = 50
kernel = 'he_uniform'
activation = tf.keras.layers.LeakyReLU(alpha=0.3)

# Details of model (MLP)
current_model = tf.keras.models.Sequential()
current_model.add(tf.keras.Input(shape=(num_inputs,)))
for i in range(num_hidden_layers):
    current_model.add(tf.keras.layers.Dense(units=num_neurons, activation=activation,
                                            kernel_initializer=kernel))
current_model.add(tf.keras.layers.Dense(units=num_outputs, activation='linear',
                                        kernel_initializer=kernel))

# Finding the Jacobian matrix with respect to a given input of the neural network
# In this case, the inputs are [0.02, 0.4, 0.12] (i.e. 3 inputs)
x = tf.Variable([[0.02, 0.4, 0.12]], dtype=tf.float32)
with tf.GradientTape() as tape:
    y_pred = x
    for layer in current_model.layers:
        y_pred = layer(y_pred)
jacobian = tape.jacobian(y_pred, x)
print(jacobian)
Below is the error returned. I removed some parts for privacy purposes.
StagingError: in converted code:
C:\Users\...\anaconda3\envs\tf\lib\site-packages\tensorflow_core\python\ops\parallel_for\control_flow_ops.py:183 f *
return _pfor_impl(loop_fn, iters, parallel_iterations=parallel_iterations)
C:\Users\...\anaconda3\envs\tf\lib\site-packages\tensorflow_core\python\ops\parallel_for\control_flow_ops.py:256 _pfor_impl
outputs.append(converter.convert(loop_fn_output))
C:\Users\...\anaconda3\envs\tf\lib\site-packages\tensorflow_core\python\ops\parallel_for\pfor.py:1280 convert
output = self._convert_helper(y)
C:\Users\...\anaconda3\envs\tf\lib\site-packages\tensorflow_core\python\ops\parallel_for\pfor.py:1453 _convert_helper
if flags.FLAGS.op_conversion_fallback_to_while_loop:
C:\Users\...\anaconda3\envs\tf\lib\site-packages\tensorflow_core\python\platform\flags.py:84 __getattr__
wrapped(_sys.argv)
C:\Users\...\anaconda3\envs\tf\lib\site-packages\absl\flags\_flagvalues.py:633 __call__
name, value, suggestions=suggestions)
UnrecognizedFlagError: Unknown command line flag 'f'
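One possible workaround (an assumption on my part, not confirmed in the question): tape.jacobian accepts an experimental_use_pfor argument, and setting it to False falls back to a while_loop instead of the pfor converter that raises the flag error above:

with tf.GradientTape() as tape:
    y_pred = current_model(x)
# Fall back to a while_loop implementation instead of pfor.
jacobian = tape.jacobian(y_pred, x, experimental_use_pfor=False)
print(jacobian.shape)  # (1, 200, 1, 3): batched output shape + batched input shape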
I have code running a "custom" model which, it seems, was constructed with eager mode enabled. When I try to run the model.predict() function, I get the following error:
File "/home/jptalledo/.local/lib/python3.6/site-packages/tensorflow/python/keras/utils/version_utils.py", line 122, in disallow_legacy_graph
raise ValueError(error_msg)
ValueError: Calling Model.predict in graph mode is not supported when the Model instance was constructed with eager mode enabled. Please construct your Model instance in graph mode or call Model.predict with eager mode enabled.
The python code looks like this:
def nn_predict(self, img):
    """Run model prediction to classify image as EV and return its probability"""
    img = cv2.resize(cv2.cvtColor(img, cv2.COLOR_BGR2RGB),
                     self.target_image_size).astype(np.float32) / 255.0
    img = np.expand_dims(img, axis=0)
    with self.tf_graph.as_default():
        predictions = self.nn_model.predict(img)
    return predictions
The issue resides in this line: predictions = self.nn_model.predict(img)
Any advice on how to enable eager mode?
Thanks
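One common fix (a sketch, assuming the model itself was built under TF 2.x eager defaults) is to drop the legacy graph context so predict runs eagerly:

def nn_predict(self, img):
    """Run model prediction to classify image as EV and return its probability."""
    img = cv2.resize(cv2.cvtColor(img, cv2.COLOR_BGR2RGB),
                     self.target_image_size).astype(np.float32) / 255.0
    img = np.expand_dims(img, axis=0)
    # No tf_graph.as_default() context: TF 2.x models predict eagerly by default.
    return self.nn_model.predict(img)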
For some library functionality I'm trying to rename the layers (including the input layers) of a given model.
The following minimal example shows the error I run into with my current approach (using TensorFlow 2.3):
from tensorflow.keras.models import load_model

model = load_model("model.h5")
for layer in model.layers:
    layer._name = layer.name + "_renamed"

model.to_json()
ValueError: The target structure is of type `<class 'tensorflow.python.framework.ops.Tensor'>`
Tensor("input_1:0", shape=(None, 4), dtype=float32)
However the input structure is a sequence (<class 'list'>) of length 0.
The model.h5 file might have been created like this, for example:
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
inputs = Input(shape=(4,))
x = Dense(5, activation='relu', name='a')(inputs)
x = Dense(3, activation='softmax', name='b')(x)
model = Model(inputs=inputs, outputs=x)
model.compile(loss='categorical_crossentropy', optimizer='nadam')
model.save("model.h5")
Any idea on how to fix this?
Problem: Keras serializes the network by traversing layer._inbound_nodes and comparing against model._network_nodes; when you set layer._name, the latter still holds the original names.
Solution: rename the _network_nodes entries accordingly. The working function is at the bottom; an example of its use follows:
from tensorflow.keras.models import load_model
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
ipt = Input((16,))
out = Dense(16)(ipt)
model = Model(ipt, out)
model.compile('sgd', 'mse')
rename(model, model.layers[1], 'new_name')
model.save('model.h5')
loaded = load_model('model.h5')
Note: layer.name is a @property without a .setter, meaning it's not meant to be set (as is evident). Further, layer.__setattr__ is overridden and performs steps in addition to setting an attribute - likely necessary, but I can't be sure exactly what other effects it may have. I've included an alternative which bypasses these. Treat this as a temporary solution at best; I suggest opening an issue on GitHub, as API-side changes are due.
Function:
It isn't foolproof: _get_node_suffix's naming logic needs work (e.g. dense_1 can be confused with dense_11).
def rename(model, layer, new_name):
    def _get_node_suffix(name):
        for old_name in old_nodes:
            if old_name.startswith(name):
                return old_name[len(name):]

    old_name = layer.name
    old_nodes = list(model._network_nodes)
    new_nodes = []

    for l in model.layers:
        if l.name == old_name:
            l._name = new_name
            # vars(l).__setitem__('_name', new_name)  # bypasses .__setattr__
            new_nodes.append(new_name + _get_node_suffix(old_name))
        else:
            new_nodes.append(l.name + _get_node_suffix(l.name))
    model._network_nodes = set(new_nodes)
I defined a new loss function in Keras' losses.py file. I closed and relaunched the Anaconda prompt, but I got ValueError: ('Unknown loss function', ':binary_crossentropy_2'). I'm running Keras with Python 2.7 and Anaconda on Windows 10.
I temporarily solved it by adding the loss function to the Python file in which I compile my model.
In Keras we have to pass the custom functions in the load_model function:
def my_custom_func():
    # your code
    return

from keras.models import load_model

model = load_model('my_model.h5', custom_objects={'my_custom_func': my_custom_func})
None of these solutions worked for me because I had two or more nested functions for multiple output variables.
My solution was to not compile when loading the model; I compile the model later on with the list of loss functions that were used when the model was trained.
from tensorflow.keras.models import load_model

# load model weights, but do not compile
model = load_model("mymodel.h5", compile=False)

# print the model summary
model.summary()

# custom loss defined for feature 1
def function_loss_o1(weights):
    N_c = len(weights)
    def loss(y_true, y_pred):
        output_loss = ...
        return output_loss / N_c
    return loss

# custom loss defined for feature 2
def function_loss_o2(weights):
    N_c = len(weights)
    def loss(y_true, y_pred):
        output_loss = ...
        return output_loss / N_c
    return loss

# list of loss functions for each output feature
# (call the wrappers with their weights so Keras receives the inner loss functions)
losses = [function_loss_o1(weights), function_loss_o2(weights)]

# compile the model with the same losses used during training
model.compile(optimizer='adam', loss=losses, metrics=['accuracy'])

# now you can use the compiled model to predict/evaluate, etc.
eval_dict = {}
eval_dict["test_evaluate"] = model.evaluate(x_test, y_test, batch_size=batch_size, verbose=0)
I didn't have luck with the above solutions, but I was able to do this:
from keras.models import load_model
from keras.utils.generic_utils import get_custom_objects
get_custom_objects().update({'my_custom_func': my_custom_func})
model = load_model('my_model.h5')
I found the solution here: https://github.com/keras-team/keras/issues/5916#issuecomment-294373616
It looks like you're trying to reference the function via a string alias, which requires tampering with Keras' losses.py to map the string to the function (something you should not do, as it gets overridden if you update the package). Instead, just declare the function in your project and pass it to the loss parameter, for example:
from your.project import binary_crossentropy_2

# ...
# the loss is passed to compile(), not to fit()
model.compile(optimizer='adam', loss=binary_crossentropy_2)
model.fit(x_train, y_train, epochs=epochs)
As long as your function satisfies the requirements here, it will work fine.
The solution was to add the function to losses.py in the Keras installation inside the environment's folder. At first, I added it in anaconda2/pkgs/keras.../losses.py, which is why I got the error.
The path for losses.py in the environment is something like:
anaconda2/envs/envname/lib/python2.7/site-packages/keras/losses.py
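A quick way to confirm which installation your environment actually imports (and therefore which losses.py you need to edit):

import keras
print(keras.__file__)  # should point into anaconda2/envs/envname/..., not anaconda2/pkgs/...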